Comments

Comment by koning_robot on Open Problems Related to Solomonoff Induction · 2013-08-30T18:07:51.003Z · LW · GW

Whoops, thread necromancy.

Comment by koning_robot on Open Problems Related to Solomonoff Induction · 2013-08-30T17:56:27.478Z · LW · GW

I'm not sure what you're trying to say here, but if you consider this a relative weakness of Solomonoff Induction, then I think you're looking at it the wrong way. We will know it as well as we possibly could given the evidence available. Humans are subject to the constraints that Solomonoff Induction is subject to, and more.

Comment by koning_robot on Public Service Announcement Collection · 2013-06-29T21:29:45.581Z · LW · GW

Hrrm. I don't think it's that simple. Looking at that page, I imagine nonprogrammers wonder:

  • What are comments?
  • What are strings?
  • What is this "#=>" stuff?
  • "primitives"?
  • ...

This seems to be written for people who are already familiar with some other language. Better to show a couple of examples so that they recognize patterns and become curious, something like the sketch below.
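
(A minimal illustration, not from the page in question: a few lines of Python, where the "#=>" marker just shows what each line evaluates to.)

    # This is a comment: the computer ignores everything after the "#".
    greeting = "Hello, world!"    # the quoted text is a string
    len(greeting)                 #=> 13, the number of characters in the string
    2 + 3                         #=> 5
    "ha" * 3                      #=> 'hahaha'

A newcomer can spot the pattern (write an expression, see what it turns into) without knowing any of the terminology yet.
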
Comment by koning_robot on The Mere Cable Channel Addition Paradox · 2012-09-02T12:12:03.569Z · LW · GW

What is this Overall Value that you speak of, and why do the parts that you add matter? It seems to me that you're just making something up to rationalize your preconceptions.

Comment by koning_robot on Stupid Questions Open Thread Round 4 · 2012-08-31T18:21:09.777Z · LW · GW

Hm, I've been trying to get rid of one particular habit (drinking while sitting at my computer) for a long time. Recently I've considered the possibility of giving myself a reward every time I go to the kitchen to get a beer and come back with something else instead. The problem was that I couldn't think of a suitable reward (there's not much that I like). I hadn't thought of just making something up, like pieces of paper. Thanks for the inspiration!

Comment by koning_robot on [deleted post] 2012-08-28T20:56:32.708Z

Do you have specific ideas useful for resolving this question?

Fear of death doesn't mean death is bad in the same way that fear of black people doesn't mean black people are bad. (Please forgive me the loaded example.)

Fear of black people, or more generally xenophobia, evolved to facilitate kin selection and tribalism. Fear of death evolved for similar reasons, i.e., to make more of "me". We don't know what we mean by "me", or if we do then we don't know what's valuable about the existence of one "me" as opposed to another, and anyway evolution meant something different by "me" (genes rather than organisms).

> It's usually best to avoid using the word "rationality" in such contexts.

I actually meant rationality here, specifically instrumental rationality, i.e., "is it getting in the way of us achieving our goals?".

I feel like this thread has gotten derailed and my original point lost, so let me contrive a thought experiment to hopefully be more clear.

Suppose that someone named Alice dies today, but at the moment she ceases to exist, Betty is born. Betty is a lot like Alice in that she has a similar personality, will grow up in a similar environment and will end up affecting the world in similar ways. What of fundamental value was lost when Alice died that Betty's birth did not replace? (The grief for Alice's death and the joy for Betty's birth have instrumental value, as did Alice's acquired knowledge.)

If you find that I've set this up to fit my conclusions, then I don't think we disagree.

Comment by koning_robot on [deleted post] 2012-08-25T09:49:04.774Z

Because it feels good. My ongoing survival leaves me cold entirely.

Comment by koning_robot on [deleted post] 2012-08-25T09:38:09.789Z

It's different. The fact that I feel bad when confronted with my own mortality doesn't mean that mortality is bad. The fact that I feel bad when so confronted does mean that the feeling is bad.

Comment by koning_robot on [deleted post] 2012-08-24T23:06:16.815Z

> Emotions clearly support non-fungibility, in particular concerning your own life, and it's a strong argument.

I (now) understand how the existence of certain emotions in certain situations can serve as an argument for or against some proposition, but I don't think the emotions in this case form that strong an argument. There's a clear motive. It was evolution, in the big blue room, with the reproductive organs. It cares about the survival of chunks of genetic information, not about the well-being of the gene expressions.

Thanks for helping me understand the negative response. My claim here is not about the value of life in general, but about the value of some particular "person" continuing to exist. I think the terminal value of this ceasing to exist is zero. Since posting my top-level comment I have provided some arguments in favor of my case, and also hopefully clarified my position.

Comment by koning_robot on Not for the Sake of Pleasure Alone · 2012-08-24T21:28:39.115Z · LW · GW

I accept this objection; I cannot describe in physical terms what "pleasure" refers to.

Comment by koning_robot on [deleted post] 2012-08-24T20:34:06.810Z

Yes, but the question here is exactly whether this fear of death that we all share is one of those emotions that we should value, or if it is getting in the way of our rationality. Our species has a long history of wars between tribes and violence among tribe members competing for status. Death has come to be associated with defeat and humiliation.

Comment by koning_robot on Not for the Sake of Pleasure Alone · 2012-08-24T19:42:29.589Z · LW · GW

> No. I deliberately re-used a similar construct to Wireheading theories to expose more easily that many people disagree with this.

Yes, but they disagree because what they want is not the same as what they would like.

The "weak points" I spoke of is that you consider some "weaknesses" of your position, namely others' mental states, but those are not the weakest of your position, nor are you using the strongest "enemy" arguments to judge your own position, and the other pieces of data also indicate that there's mind-killing going on.

The value of others' mental states is not a weakness of my position; I just considered them irrelevant for the purposes of the experience machine thought experiment. The fact that hooking up to the machine would take away resources that could be used to help others weighs against hooking up. I am not necessarily in favor of wireheading.

I am not aware of weaknesses of my position, nor in what way I am mind-killing. Can you tell me?

> [...] it's almost an applause light.

Yes! So why is nobody applauding? Because they disagree with some part of it. However, the part they disagree with is not what the referent of "pleasure" is, or what kind of elaborate outside-world engineering is needed to bring it about (which has instrumental value on my view), but the part where I say that the only terminal value is in mental states that you cannot help but value.

The burden of proof isn't actually on my side. A priori, nothing has value. I've argued that the quality of mental states has (terminal) value. Why should we also go to any length to placate desires?

Comment by koning_robot on Not for the Sake of Pleasure Alone · 2012-08-24T15:32:31.903Z · LW · GW

I remember starting it, and putting it away because yes, I disagreed with so many things. Especially the present subject; I couldn't find any arguments for the insistence on placating wants rather than improving experience. I'll read it in full next week.

Comment by koning_robot on Not for the Sake of Pleasure Alone · 2012-08-24T14:08:59.374Z · LW · GW

> And unsupported strong claim. Dozens of implications and necessary conditions in evolutionary psychology if the claim is assumed true. No justification. No arguments. Only one or two weak points looked up by the claim's proponent.

This comment has justification. I don't see how this would affect evolutionary psychology. I'm not sure if I'm parsing your last sentence here correctly; I didn't "look up" anything, and I don't know what the weak points are.

Assuming that the scenario you paint is plausible and the optimal way to get there, then yeah, that's where we should be headed. One of the explicit truths of your scenario is that "they're all feeling the best they could possibly feel". But your scenario is a bad intuition pump. You deliberately constructed this scenario so as to manipulate me into judging what the inhabitants experience as less than that, appealing to some superstitious notion of true/pure/honest/all-natural pleasure.

You may be onto something when you say I might be confusing labels and concepts, but I am not saying that the label "pleasure" refers to something simple. I am only saying that the quality of mental states is the only thing we should care about (note the word should, I'm not saying that is currently the way things are).

Comment by koning_robot on Not for the Sake of Pleasure Alone · 2012-08-24T13:43:47.815Z · LW · GW

A priori, nothing matters. But sentient beings cannot help but make value judgements regarding some of their mental states. This is why the quality of mental states matters.

Wanting something out there in the world to be some way, regardless of whether anyone will ever actually experience it, is different. A want is a proposition about reality whose apparent falsehood makes you feel bad. Why should we care about arbitrary propositions being true or false?

Comment by koning_robot on Not for the Sake of Pleasure Alone · 2012-08-24T10:41:49.657Z · LW · GW

"Desire" denotes your utility function (things you want). "Pleasure" denotes subjectively nice-feeling experiences. These are not necessarily the same thing.

Indeed they are not necessarily the same thing, which is why my utility function should not value that which I "want" but that which I "like"! The top-level post all but concludes this. The conclusion the author draws just does not follow from what came before. The correct conclusion is that we may still be able to "just" program an AI to maximize pleasure. What we "want" may be complex, but what we "like" may be simple. In fact, that would be better than programming an AI to make the world into what we "want" but not necessarily "like".

> There's nothing superstitious about caring about stuff other than your own mental state.

If you mean that others' mental states matter equally much, then I agree (but this distracts from the point of the experience machine hypothetical). Anything else couldn't possibly matter.

Comment by koning_robot on [deleted post] 2012-08-24T10:10:01.396Z

Sorry for being snarky. I am sincere. I really do think that death is not such a big deal. It sucks, but it sucks only because of the negative sensations it causes in those left behind. All that said, I don't think you gave me anything but an appeal to emotion.

Comment by koning_robot on [deleted post] 2012-08-24T09:59:40.957Z

The emotions are irrational in the sense that they are not supported by anything - your brain generates these emotions in these situations and that's it. Emotions are valuable and we need to use rationality to optimize them. Now, there are two ways to satisfy a desire: the obvious one is to change the world to reflect the propositional content of the desire. The less obvious one is to get rid of or alter the desire. I'm not saying that to be rational is to get rid of all your desires. I'm saying that it's a tradeoff, and I am suggesting the possibility that in this case the cost of placating the desire to not die is greater than the cost of getting rid of it.

What worries me is this. It could well be that I am wrong and that the cost of immortality is actually lower than the cost to get rid of the desire for it. But I strongly suspect that this was never the reason for people here to pursue immortality. The real reason has to do with preservation of something that I doubt has value.

Comment by koning_robot on [deleted post] 2012-08-24T09:37:56.957Z

Pleasurable experiences. My life facilitates them, but it doesn't have to be "my" life. Anyone's life will do.

Comment by koning_robot on [deleted post] 2012-08-23T10:20:48.759Z

Do you think that preserving my brain after the fact makes falling from a really high place any less unpleasant? Or are you appealing to my emotions (fear of death)?

Comment by koning_robot on [deleted post] 2012-08-23T10:09:45.834Z

I consider these to be emotional reasons rather than rational ones. Specifically not wanting to die, not wanting certain others to die, and being afraid of death are irrational (or at least it is unclear that there are rational reasons for them). I think there are less roundabout ways to (dis)solve these problems than to engineer immortality. In a more rational culture (which we should be steering for anyway), we would not be so viscerally averse to death.

Comment by koning_robot on [deleted post] 2012-08-22T11:14:36.173Z

What is it exactly that's so valuable about a person that justifies spending $30,000 worth of resources to preserve it? Their "identity", whatever that means? Their personality, even though personalities like it are probably a dime a dozen? Their acquired knowledge, which will be outdated by the time they are revived? What is it that we want to preserve?

What is it that is lost when a person dies, that cannot be regained by creating a new one? I'm not in favor of creating new ones, but new ones are created all the time anyway, so why not learn to live with them? Why do we need to do everything the hard way?

Comment by koning_robot on Not for the Sake of Pleasure Alone · 2012-08-22T08:54:10.385Z · LW · GW

> In the last decade, neuroscience has confirmed what intuition could only suggest: that we desire more than pleasure. We act not for the sake of pleasure alone. We cannot solve the Friendly AI problem just by programming an AI to maximize pleasure.

Either this conclusion contradicts the whole point of the article, or I don't understand what is meant by the various terms "desire", "want", "pleasure", etc. If pleasure is "that which we like", then yes we can solve FAI by programming an AI to maximize pleasure.

The mistake you (lukeprog, but also eliezer) are apparently making worries me very much. It is irrelevant what we desire or want, as is what we act for. The only thing that is relevant is that which we like. Tell me, if the experience machine gave you that which you like ("pleasure" or "complex fun" or whatchamacallit), would you hook up to it? Surely you would have to be superstitious to refuse!

Comment by koning_robot on Meetup : Brussels meetup · 2012-03-08T09:48:14.371Z · LW · GW

I am going to try to be there. I'll be traveling from Maastricht.

Edit: I decided not to go after all.