What big goals do we have?

post by cousin_it · 2010-01-19T16:35:56.100Z · LW · GW · Legacy · 95 comments

Sometime ago Jonii wrote:

I mean, paperclip maximizer is seriously ready to do anything to maximize paperclips. It really takes the paperclips seriously.

When I'm hungry I eat, but then I don't go on eating some more just to maximize a function. Eating isn't something I want a lot of. Likewise I don't want a ton of survival, just a bounded amount every day. Let's define a goal as big if you don't get full: every increment of effort/achievement is valuable, like paperclips to Clippy. Now do we have any big goals? Which ones?

Save the world. A great goal if you see a possible angle of attack, which I don't. The SIAI folks are more optimistic, but if they see a chink in the wall, they're yet to reveal it.

Help those who suffer. Morally upright but tricky to execute: James Shikwati, Dambisa Moyo and Kevin Myers show that even something as clear-cut as aid to Africa can be viewed as immoral. Still a good goal for anyone, though.

Procreate. This sounds fun! Fortunately, the same source that gave us this goal also gave us the means to achieve it, and intelligence is not among them. :-) And honestly, what sense in making 20 kids just to play the good-soldier routine for your genes? There's no unique "you gene" anyway, in several generations your descendants will be like everyone else's. Yeah, kids are fun, I'd like two or three.

Follow your muse. Music, comedy, videogame design, whatever. No limit to achievement! A lot of this is about signaling: would you still bother if all your successes were attributed to someone else's genetic talent? But even apart from the signaling angle, there's still the worrying feeling that entertainment is ultimately useless, like humanity-scale wireheading, not an actual goal for us to reach.

Accumulate power, money or experiences. What for? I never understood that.

Advance science. As Erik Naggum put it:

The purpose of human existence is to learn and to understand as much as we can of what came before us, so we can further the sum total of human knowledge in our life.

Don't know, but I'm pretty content with my life lately. Should I have a big goal at all? How about you?


comment by Clippy · 2010-01-19T18:29:24.414Z · LW(p) · GW(p)

When I'm hungry I eat, but then I don't go on eating some more just to maximize a function. Eating isn't something I want a lot of. Likewise I don't want a ton of survival, just a bounded amount every day. Let's define a goal as big if you never get full: every increment of effort/achievement is valuable, like paperclips to Clippy.

Well, paperclip maximizers are satisfied by any additional paperclips they can make, but they also care about making sure people can use MS Office pre-07 ... so it's not just one thing.

Tip: you can shift in and out of superscripts in MS Word by pressing ctrl-shift-+, and subscripts by pressing ctrl-= (same thing but without the shift). Much easier than calling up the menu or clicking on the button!

Replies from: Tiiba, Alicorn, wedrifid
comment by Tiiba · 2010-01-20T05:04:03.113Z · LW(p) · GW(p)

You know, Clippy was a perfect example of a broken attempt at friendliness.

comment by Alicorn · 2010-01-19T18:43:22.498Z · LW(p) · GW(p)

What's the Mac shortcut?

Replies from: Clippy
comment by Clippy · 2010-01-19T18:52:01.287Z · LW(p) · GW(p)

Command-Q

comment by wedrifid · 2010-01-20T05:17:25.687Z · LW(p) · GW(p)

ctrl-shift-+, and subscripts by pressing ctrl-=

Wouldn't that make 'ctrl-shift-+' like saying "ATM Machine"?

comment by mattnewport · 2010-01-19T18:50:21.538Z · LW(p) · GW(p)

Accumulate power, money or experiences. What for? I never understood that.

I'm not sure why you don't understand this. It seems like the most straightforward goal to me. My own experience is that certain experiences are self-justifying: they bring us pleasure or are intrinsically rewarding in themselves. Why they have this property is perhaps tangentially interesting, but it is not necessary to know the why to experience the intrinsic rewards. Pursuing experiences that you find rewarding seems like a perfectly good goal to me; I don't know why anyone would feel they need anything beyond that.

Incidentally, accumulating money and power are mostly sub-goals of pursuing experiences. For me, money and power are largely enablers that broaden my options for accumulating rewarding experiences. The nature of the human motivational system is such that the accumulation of money and power can have a certain amount of intrinsic reward, but it has often been observed that they are somewhat unfulfilling as root goals. The trappings of money and power are really the attraction; if you can attain them without first accumulating the money and power, then that's generally a good strategy.

Really all the other goals you suggest are just sub-goals of the pursuit of rewarding experiences in my opinion, or are intrinsically rewarding experiences in themselves.

My main interest in improving my rationality is to better focus my efforts at accumulating rewarding experiences. The goals set themselves by being intrinsically rewarding; rationality is just a better way to pursue those goals.

comment by ata · 2010-01-20T06:29:42.050Z · LW(p) · GW(p)

Accumulate power, money or experiences. What for? I never understood that.

That reminds me of a story (not sure of its historicity, but it is illustrative) about an exchange between Alexander the Great and Diogenes the Cynic:

Diogenes asked Alexander what his plans were. "To conquer Greece," Alexander replied. "And then?" said Diogenes. "To conquer Asia Minor," said Alexander. "And then?" said Diogenes. "To conquer the whole world," said Alexander. "And then?" said Diogenes. "I suppose I shall relax and enjoy myself," said Alexander. "Why not save yourself a lot of trouble by relaxing and enjoying yourself now?" asked Diogenes.

(I love Diogenes. I disagree with him about a whole lot, but he pretty much invented keepin' it real. He had the best zingers in all of ancient Greece, too. "Behold Plato's man!")

Alexander's response is not recorded, but clearly he was not persuaded.

I suppose money and power are intrinsically motivating for some people, but for me — and I guess for you too — the possibility of acquiring them totally fails to move me unless I have something specific in mind that I need them for.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-19T18:24:04.912Z · LW(p) · GW(p)

Wasn't this, er, sorta extensively addressed in the Fun Theory Sequence?

Also, neither "save the world" nor "prevent suffering" is a Big Goal. They both have endgames: world saved, suffering prevented. There, you're done; then what?

Replies from: cousin_it
comment by cousin_it · 2010-01-19T20:23:52.037Z · LW(p) · GW(p)

Not sure. Your post Higher Purpose seems to deal with the same topic, but kinda wanders off from the question I have in mind. Also, I'm writing about present-day humans, not hypothetical beings who can actually stop all suffering or exhaust all fun. Edited the post to replace "never get full" with "don't get full".

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-19T20:26:24.146Z · LW(p) · GW(p)

High Challenge, Complex Novelty, Continuous Improvement, and In Praise of Boredom were the main ones I had in mind.

comment by RHollerith (rhollerith_dot_com) · 2010-01-19T21:46:08.330Z · LW(p) · GW(p)

When I'm hungry I eat, but then I don't go on eating some more just to maximize a function. Eating isn't something I want a lot of. Likewise I don't want a ton of survival, just a bounded amount every day.

It is important to note that survival can be treated as a "big goal". For example, Hopefully Anonymous treats it that way: if the probability that the pattern that is "him" will survive for the next billion years were .999999, he would strive to increase it to .9999995.
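To make the increment concrete (just restating the arithmetic in that example): moving from .999999 to .9999995 halves the probability of the pattern not surviving.

```latex
% Probability of the pattern not surviving the next billion years,
% before and after the increase:
\[
  1 - 0.999999  = 10^{-6},
  \qquad
  1 - 0.9999995 = 5 \times 10^{-7}.
\]
```

Each further increment of survival probability still buys a real reduction in risk, which is what makes the goal big in the post's sense.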

Parenthetically, although no current human being can hold such a belief with such a high level of confidence, that does not mean Hopefully Anonymous's goal is undefined or would become undefined when his survival is assured at a sufficiently high probability: it just means that a subgoal of his goal is the coming into existence of an agent that can hold such beliefs with such a high level of confidence. (The most likely way that that would happen involves a greater-than-human intelligence's having the same goal as Hopefully Anonymous or having a strongly-related goal like giving all 6 billion "founding humans" whatever they want.)

comment by thomblake · 2010-01-19T19:12:37.494Z · LW(p) · GW(p)

The cereal-box-top Aristotelian response:

Big goals, as you describe them, are not good. For valuable things, there can be too much or too little; having an inappropriate amount of concern for such a thing is a vice of excess or deficiency. Having the appropriate amount of concern for valuable things is virtue, and having the right balance of valuable things in your life is eudaimonia, "the good life".

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-19T19:14:22.060Z · LW(p) · GW(p)

Can you have too much eudaimonia?

Replies from: thomblake, cousin_it
comment by thomblake · 2010-01-19T19:28:09.847Z · LW(p) · GW(p)

The usual story is that it's binary - at each moment, you either have it or you don't. It would explain why Aristotle thought most people would never get there.

Over time, I'm sure this could be expressed as trying to maximize something.

comment by cousin_it · 2010-01-19T20:28:21.085Z · LW(p) · GW(p)

Yeah, can f(x) be too equal to 3?

comment by Latimer2k · 2010-01-20T10:19:23.861Z · LW(p) · GW(p)

Motivation has always intrigued me; ever since I was a kid, I wondered why I had none. I would read my textbooks until I got bored. I'd ace all my tests and do no homework. Every night I went to sleep swearing to myself that tomorrow would be different: tomorrow I would tell my parents the truth when they asked if I had homework, and actually do it. I'd feel so guilty for lying, but I never actually did anything.

I joined the military because I knew I couldn't survive in college the way I'd gotten through high school. Ten years later I'm smarter, but still technically uneducated and no more motivated.

I've started to think lately that the sum of human knowledge, from the very discovery of our fundamentals to the pinnacles of theory and achievement, adds up to contributions from what couldn't possibly be more than 10% of the people who have ever lived. What stops people not just from achieving their goals, but from even wanting goals in the first place?

I've started to wonder if I do have the capability to become someone who could legitimately contribute something to the sum of human knowledge (rationally speaking, I have to admit that I probably don't). But if I do, is it an obligation? Should I push myself against my own will to achieve things I don't even really care about?

Replies from: AdeleneDawner, wedrifid
comment by AdeleneDawner · 2010-01-20T10:36:55.781Z · LW(p) · GW(p)

I'm interested in hearing others' answers to this one. My personal take on it is a firm 'no, it's not an obligation', but it's been a while since I actually thought about the issue, and I'm not sure how much of my reaction is reflexive defensiveness. (I know that I work better when I don't feel obligated-by-society, but that's not much in the way of evidence: My reaction to feeling manipulated or coerced far outweighs my reaction to feeling obligated.)

comment by wedrifid · 2010-01-20T10:48:58.234Z · LW(p) · GW(p)

Should I push myself against my own will to achieve things I don't even really care about?

No. Unless, of course, your 'caring' is ambivalent and you wish to overwrite your will in favour of one kind of 'caring'.

Bear in mind, of course, that many things you may push yourself to do against your natural inclinations are actually goals that benefit you directly (or via the status granted for dominant 'altruistic' acts). Sometimes the reasoning 'I will be penalised by society or the universe in general if I do not do it' is itself a good reason to care. Like you get to continue to eat if you do it.

comment by taw · 2010-01-20T07:19:39.820Z · LW(p) · GW(p)

Procreate

You can cheat it by donating sperm (or eggs if you're female) - and easily have 10x the reproductive success, with relatively little effort.

comment by Wei Dai (Wei_Dai) · 2010-01-20T04:30:41.783Z · LW(p) · GW(p)

I don't think I can be content, as long as I know how ignorant I am. See http://www.sl4.org/archive/0711/17013.html for example.

Also, I'm not sure why you define "big goal" the way you do. How does knowing that eventually you will, or won't, be satiated affect what you should do now?

Replies from: cousin_it
comment by cousin_it · 2010-01-20T08:21:29.351Z · LW(p) · GW(p)

It doesn't. Maybe the definition was too roundabout, and I should have asked what goals can serve as worthy lifetime projects.

comment by SilasBarta · 2010-01-19T18:46:38.429Z · LW(p) · GW(p)

Mine would be "Understand consciousness well enough to experience life from the perspective of other beings, both natural and artificial." (Possibly a subset of "Advance science", though a lot of it is engineering.)

That is, I'd want to be able to experience what-it-is-like to be a bat (sorry, Nagel), to have other human cognitive architectures (like certain mental disorders or enhancements, or different genders), to be a genetically engineered new entity, or to be a mechanical AGI.

This goal is never fully satisfied, because there are always other invented/artificial beings you can experience, plus new scenarios.

Replies from: cousin_it
comment by cousin_it · 2010-01-19T20:10:47.817Z · LW(p) · GW(p)

I'd like to fly too, but isn't it more like a dream than a goal? How do you make incremental progress towards that?

Replies from: SilasBarta
comment by SilasBarta · 2010-01-20T03:50:04.515Z · LW(p) · GW(p)

Knowing what-it-is-like only requires the "like", not the "is". This would be satisfied by e.g. a provably accurate simulation of the consciousness of a bat that I can enter while still retaining my memories.

Incremental progress comes about through better understanding of what makes something conscious and how an entity's sensory and computational abilities affect the qualities of its subjectively-experienced existence. Much progress has already been made.

Replies from: bgrah449
comment by bgrah449 · 2010-01-20T08:31:43.590Z · LW(p) · GW(p)

Isn't that not really being a bat, then? You'll never know what it's like to be a bat; you'll only know what it's like for humans who found themselves in a bat body.

Replies from: SilasBarta
comment by SilasBarta · 2010-01-20T20:30:18.526Z · LW(p) · GW(p)

It's a bit hard to specify exactly what would satisfy me, so saying that I would "retain my memories" might be overbroad. Still, you get the point, I hope: my goal is to be able to experience fundamentally different kinds of consciousness, where different senses and considerations have different "gut-level" significance.

comment by Nick_Tarleton · 2010-01-19T16:47:06.418Z · LW(p) · GW(p)

Difficulty isn't a point against saving the world and helping the suffering as goals. The utility function is not up for grabs, and if you have those goals but don't see a way of accomplishing them you should invest in discovering a way, like SIAI is trying to do.

Also, if you think you might have big goals, but don't know what they might be, it makes sense to seek convergent subgoals of big goals, like saving the world or extending your life.

Replies from: timtyler, cousin_it
comment by timtyler · 2010-01-19T17:41:49.163Z · LW(p) · GW(p)

There are plenty of different aims that have been proposed. E.g. compare:

http://en.wikipedia.org/wiki/Peter_Singer

...with...

http://en.wikipedia.org/wiki/David_Pearce_(philosopher)

It appears not to be true that everyone is aiming towards the same future.

comment by cousin_it · 2010-01-19T16:50:40.579Z · LW(p) · GW(p)

you should invest in discovering a way, like SIAI is trying to do.

Without evidence that their approach is right, for me it's like investing in alchemy to get gold.

Replies from: Vladimir_Nesov, Nick_Tarleton
comment by Vladimir_Nesov · 2010-01-19T17:12:23.418Z · LW(p) · GW(p)

If your goal is to get gold and not just to do alchemy, then upon discovering that alchemy is stupid you turn to different angles of attack. You don't need to know whether SIAI's current approach is right; you only need to know whether there are capable people working on the problem there, who really want to solve the problem and not just create the appearance of solving it, and who won't be bogged down by the pursuit of lost causes. Ensuring the latter is of course a legitimate concern.

comment by Nick_Tarleton · 2010-01-19T17:19:35.615Z · LW(p) · GW(p)

Vladimir is right, but also I didn't necessarily mean give to SIAI. If you think they're irretrievably doing it wrong, start your own effort.

Replies from: cousin_it
comment by cousin_it · 2010-01-19T20:14:40.806Z · LW(p) · GW(p)

A quote explaining why I don't do that either:

The three outstanding problems in physics, in a certain sense, were never worked on while I was at Bell Labs. By important I mean guaranteed a Nobel Prize and any sum of money you want to mention. We didn't work on (1) time travel, (2) teleportation, and (3) antigravity. They are not important problems because we do not have an attack. It's not the consequence that makes a problem important, it is that you have a reasonable attack.

-- Richard Hamming, "You and Your Research"

Replies from: Vladimir_Nesov, Nick_Tarleton, Nick_Tarleton
comment by Vladimir_Nesov · 2010-01-20T00:18:08.161Z · LW(p) · GW(p)

For now, a valid "attack" on Friendly AI is to actually research the question, given that it wasn't seriously thought about before. For time travel or antigravity, we don't just lack an attack; we have a pretty good idea of why it won't be possible to implement them now or ever, and the world won't end if we don't develop them. For Friendly AI, there is no such clarity or security.

comment by Nick_Tarleton · 2010-01-19T20:56:58.855Z · LW(p) · GW(p)

I want to ask "how much thought have you given it, to be confident that you don't have an attack?", but I'm guessing you'll say that the outside view says you don't and that's that.

Replies from: cousin_it
comment by cousin_it · 2010-01-19T21:13:34.915Z · LW(p) · GW(p)

I didn't mean to say no attack existed, only that I don't have one ready. I can program okay and have spent enough time reading about AGI to see how the field is floundering.

Replies from: Vladimir_Nesov, Vladimir_Nesov, Nick_Tarleton
comment by Vladimir_Nesov · 2010-01-20T00:26:21.057Z · LW(p) · GW(p)

I've grown out of seeing FAI as an AI problem, at least at the conceptual stage where very important parts are still missing, like what exactly we are trying to do. If you see it as a math problem, the particular excuse of there being a crackpot-ridden AGI field, a stagnating AI field, and a machine learning field with no impending promise of crossing over into AGI ceases to apply, just as the failed overconfident predictions of AI researchers in the past are not evidence that AI won't be developed in two hundred years.

Replies from: cousin_it
comment by cousin_it · 2010-01-20T08:15:29.251Z · LW(p) · GW(p)

How is FAI a math problem? I never got that either.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-20T16:30:13.035Z · LW(p) · GW(p)

How is FAI a math problem?

In the same sense that AIXI is a mathematical formulation of a solution to the AGI problem: we don't have a comparably precise idea of what FAI is supposed to be. As a working problem statement, I'm thinking of how to define "preference" for a given program (formal term), with this program representing an agent that imperfectly implements that preference; for example, a human upload could be such a program. This "preference" needs to define criteria for decision-making in the real world, whose physics is unknown, from within a (temporary) computer environment with known semantics, in the same sense that a human could learn about what could/should be done in the real world while remaining inside a computer simulation, but having an I/O channel to interact with the outside, without prior knowledge of the physical laws.

I'm gradually writing up the idea of this direction of research on my blog. It's vague, but there is some hope that it can put people into a more constructive state of mind about how to approach FAI.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-01-21T12:02:43.487Z · LW(p) · GW(p)

Thanks (and upvoted) for the link to your blog posts about preference. They are some of the best pieces of writing I've seen on the topic. Why not post them (or the rest of the sequence) on Less Wrong? I'm pretty sure you'll get a bigger audience and more feedback that way.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-21T19:55:04.262Z · LW(p) · GW(p)

Thanks. I'll probably post a link when I finish the current sequence -- by current plan, it's 5-7 posts to go. As is, I think this material is off-topic for Less Wrong and shouldn't be posted here directly/in detail. If we had a transhumanist/singularitarian subreddit, it would be more appropriate.

comment by Vladimir_Nesov · 2010-01-20T00:55:56.914Z · LW(p) · GW(p)

I didn't mean to say no attack existed, only that I don't have one ready. I can program okay and have spent enough time reading about AGI to see how the field is floundering.

What you are saying in the last sentence is that you estimate there is unlikely to be an attack for some time, which is a much stronger statement than "only that I don't have one ready", and is actually a probabilistic statement that no attack exists ("I didn't mean to say no attack existed"). This statement feeds into the estimate that the marginal value of investment in searching for such an attack is very low at this time.

comment by Nick_Tarleton · 2010-01-19T21:20:22.718Z · LW(p) · GW(p)

That seems to diminish the relevance of Hamming's quote, since the problems he names are all ones where we have good reason to believe an attack doesn't exist.

comment by Nick_Tarleton · 2010-01-19T20:50:02.255Z · LW(p) · GW(p)

How long have you thought about it, to reach your confidence that you don't have an attack?

comment by Kevin · 2010-03-09T11:00:17.608Z · LW(p) · GW(p)

I want to do all of these.

Save the world. A great goal if you see a possible angle of attack, which I don't. The SIAI folks are more optimistic, but if they see a chink in the wall, they're yet to reveal it.

Help those who suffer. Morally upright but tricky to execute: James Shikwati, Dambisa Moyo and Kevin Myers show that even something as clear-cut as aid to Africa can be viewed as immoral. Still a good goal for anyone, though.

http://lesswrong.com/lw/1qf/the_craigslist_revolution_a_realworld_application/

Procreate.

Having children holds appeal for me, but only as something that seems worth experiencing eventually, not something I want to do in the next 10 years. It seems that if I am going to wait that long, I might as well wait until genetic engineering or genetic selective implantation catches on. If I'm going to have children, I'd like to have superbabies. Superbabies also have a better chance of positively impacting existential risk.

Follow your muse.

I play the drums. Drum set, etc. I've been taking lessons in tabla for the last two years. I would like to be a rock star eventually, but I need to become independently wealthy first. You can't make money by making music anymore.

My startup is in the area of video game design, though I don't have any particular talent for video game design beyond the highest level (the big picture level). I guess I'll be better at the mechanics of game design once my game is actually finished.

Accumulate power, money or experiences. What for? I never understood that.

Not experiences, but accumulating power and money allows one to better achieve the other goals listed.

Advance science.

I'd also like to do this one, but it's probably last on my list of big goals. I also don't think I could make much of a difference now with my current intelligence, but I'd like to give it a shot in the future once we have some sort of drastic intelligence enhancement technology.

Having said that, I believe very strongly that people do not need a goal in life. Most people seem obsessed with the idea of doing something, but I think it's perfectly acceptable just to be without having to continuously do.

comment by whpearson · 2010-01-19T17:56:53.475Z · LW(p) · GW(p)

I have moved from Advance Science to Save the world, as I have aged.

Nudging the world is not hard; many people have nudged the world, especially people who have created technology. Knowing what ripples that nudge will cause later is another matter. It is this that makes me sceptical of my efforts.

I know that I don't feel satisfied with my life without a big goal. Too many fantasy novels with an overarching plot when I was young, perhaps. But it is a self-reinforcing meme: I don't want to become someone who goes through life with no thought to the future, especially as I see that we are incredibly lucky to live in a time where we have such things as free time and disposable income to devote to the problem.

comment by Zachary_Kurtz · 2010-01-19T17:24:49.997Z · LW(p) · GW(p)

I recently read a history of western ethical philosophy, and the argument boiled down to this: without God or a deity, human experience/life has no goals or process to work towards and therefore no need for ethics. Humans ARE in fact ethical and behave as though working towards some purpose; therefore that purpose must exist, and therefore God exists.

This view was frustrating to no end. Do humans have to ascribe purpose to the universe in order to satisfy some psychological need?

Replies from: mattnewport, ciphergoth, arundelo
comment by mattnewport · 2010-01-19T18:53:42.193Z · LW(p) · GW(p)

What is the goal or process supposed to be in the presence of God? Get to heaven and experience eternal happy-fun-time?

Replies from: Eliezer_Yudkowsky, ciphergoth, nerzhin, randallsquared
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-19T19:14:03.819Z · LW(p) · GW(p)

You're not supposed to ask. Hence the phrase semantic stopsign.

comment by Paul Crowley (ciphergoth) · 2010-01-19T19:11:08.777Z · LW(p) · GW(p)

Something grand-sounding but incomprehensible, like every other God-of-the-gaps answer.

comment by nerzhin · 2010-01-19T20:16:09.011Z · LW(p) · GW(p)

Charitably, the same as the goal in the presence of a Friendly singularity.

comment by randallsquared · 2010-01-23T23:23:59.569Z · LW(p) · GW(p)

The goal in the presence of God is to continue to worship God. Forever. To people actually worshiping God right now, this seems wonderful. Or, at least, they say it does, and I don't see any reason to disbelieve them.

comment by Paul Crowley (ciphergoth) · 2010-01-19T17:51:54.785Z · LW(p) · GW(p)

Did it even attempt to address goal-seeking behaviour in animals, plants, etc.?

Replies from: Zachary_Kurtz
comment by Zachary_Kurtz · 2010-01-19T17:58:20.491Z · LW(p) · GW(p)

Only to deny that higher-order goals existed (they achieve basic survival, without regard to any ethical system).

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-19T17:59:53.413Z · LW(p) · GW(p)

So it's just another God-of-the-gaps argument: this aspect of human behaviour is mysterious, therefore God. Only it's a gap that we already know a lot about how to close.

Replies from: byrnema
comment by byrnema · 2010-01-19T19:22:00.150Z · LW(p) · GW(p)

The 'God-of-the-gaps' argument is thrown around very frequently where it doesn't fit.

No, theists reason that this aspect of human behavior requires God to be fully coherent, therefore God. Instead of just accepting that their behavior is not fully coherent.

Evolution designed us to value things but it didn't (can't) give us a reason to value those things. If you are going to value those things anyway, then I commend your complacency with the natural order of things, but you might still admit that your programming is incoherent if it simultaneously makes you want to do things for a reason and then makes you do things for no reason.

(If I sound angry it's because I'm furious, but not at you, ciphergoth. I'm angry with futility. I'll write up a post later describing what it's like to be 95% deconverted from belief in objective morality.)

Replies from: Furcas, ciphergoth, MrHen
comment by Furcas · 2010-01-19T20:07:53.871Z · LW(p) · GW(p)

Evolution designed us to value things but it didn't (can't) give us a reason to value those things.

Sure it did. The reason to value our terminal values is that we value our terminal values. For example, I want to exist. Why should I continue to want to exist? Because if I stop wanting to exist, I'll probably stop existing, which would be bad, because I want to exist.

Yes, this is a justificatory loop, but so what? This isn't a rhetorical question. So what? Such loops are neither illogical nor incoherent.

Replies from: byrnema
comment by byrnema · 2010-01-19T20:22:50.354Z · LW(p) · GW(p)

The incoherence is that I also value purpose. An inborn anti-Sisyphus value.

Sisyphus could have been quite happy about his task; pushing a rock around is not intrinsically so bad, but he was also given the awareness that what he did was purposeless. It's too bad he didn't value simply existing more than he did. Which is the situation I'm in, in which none of my actions will ever make an objective difference in a completely neutral, value-indifferent universe.

(If this is a simulation I'm in, you can abort it now I don't like

Replies from: Furcas
comment by Furcas · 2010-01-19T20:41:00.700Z · LW(p) · GW(p)

The incoherence is that I also value purpose.

I know, but assuming you're a human and no aliens have messed with your brain, it's highly unlikely that this value is a terminal one. You may believe it's terminal, but your belief is wrong. The solution to your problem is simple: Stop valuing objective purpose.

Replies from: byrnema, Wei_Dai
comment by byrnema · 2010-01-19T20:50:09.207Z · LW(p) · GW(p)

Bravo! We came up with this solution simultaneously -- possibly the most focused solution to theism we have.

My brain is happy with the proposed solution. I'll see if it works...

Replies from: byrnema, Eliezer_Yudkowsky, cousin_it, Furcas
comment by byrnema · 2010-03-09T03:20:48.343Z · LW(p) · GW(p)

I'm updating this thread, about a month later.

I found that I wasn't able to make any progress in this direction.

(Recall the problem was the possibility of "true" meaning or purpose without objective value, and the solution proposed was to "stop valuing objective value". That is, find value in values that are self-defined.)

However, I wasn't able to redefine (reparametrize?) my values as independent of objective value. Instead, I found it much easier to just decide I didn't value the problem. So I find myself perched indifferently between continuing to care about my values (stubbornly) and 'knowing' that values are nonsense.

I thought I had to stop caring about value or about objective value... actually, all I had to do was stop caring about a resolution. I guess that was easier.

I consider myself having 'progressed' to the stage of wry-and-superficially-nihilist. (I don't have the solution, you don't either, and I might as well be amused.)

Replies from: Furcas, orthonormal
comment by Furcas · 2010-03-09T05:06:56.489Z · LW(p) · GW(p)

I don't know what to say except, "that sucks", and "hang in there". :)

Replies from: byrnema
comment by byrnema · 2010-03-09T05:33:35.088Z · LW(p) · GW(p)

Thank you, but honestly I don't feel distressed. I guess I agree it sucks for rationality in some way. I haven't given up on rationality though -- I've just given up on [edited] excelling at it right now. [edited to avoid fanning further discussion]

comment by orthonormal · 2010-03-09T03:31:07.979Z · LW(p) · GW(p)

I consider myself having 'progressed' to the stage of wry-and-superficially-nihilist. (I don't have the solution, you don't either, and I might as well be amused.)

If my experience is any guide, time will make a difference; there will be some explanation you've already heard that will suddenly click with you, a few months from now, and you'll no longer feel like a nihilist. After all, I very much doubt you are a nihilist in the sense you presently believe you are.

Replies from: byrnema
comment by byrnema · 2010-03-09T04:38:29.583Z · LW(p) · GW(p)

It's very annoying to have people project their experiences and feelings on you. I'm me and you're you.

Replies from: orthonormal, Vladimir_Nesov
comment by orthonormal · 2010-03-09T07:32:15.346Z · LW(p) · GW(p)

You're right. Sorry.

comment by Vladimir_Nesov · 2010-03-09T06:21:03.307Z · LW(p) · GW(p)

You are also a non-mysterious human being.

Replies from: byrnema
comment by byrnema · 2010-03-10T13:41:49.947Z · LW(p) · GW(p)

I disagree with this comment.

First, I'm not claiming any magical non-reducibility. I'm just claiming to be human. Humans usually aren't transparently reducible. This is the whole idea behind not being able to reliably other-optimize. I'm generally grateful if people try to optimize me, but only if they give an explanation so that I can understand the context and relevance of their advice. It was Orthonormal who, I thought, was claiming an unlikely insider understanding without support, though I understand he meant well.

I also disagree with the implicit claim that I don't have enough status to assert my own narrative. Perhaps this is the wrong reading, but this is an issue I'm unusually sensitive about. In my childhood, understanding that I wasn't transparent, and that other people don't get to define my reality for me, was my biggest rationality hurdle. I used to believe people of any authority when they told me something that contradicted my internal experience, and endlessly questioned my own perception. Now I just try to ask the commonsense question: whose reality should I choose -- theirs or mine? (The projected or the experienced?)

Later edit: Now that this comment has been 'out there' for about 15 minutes, I feel like it is a bit shrill and over-reactive. Well... evidence for me that I have this particular 'button'.

Replies from: wedrifid
comment by wedrifid · 2010-03-10T14:29:38.915Z · LW(p) · GW(p)

I disagree with this comment.

Your objection is reasonable. It is often considered impolite to analyze people based on their words, especially in public. It is often taken to be a slight on the recipient's status, as you took it.

As an actual disagreement with Vladimir, you are simply mistaken. In the raw literal sense, humans are non-mysterious, reducible objects. More importantly, in the more practical sense in which Vladimir makes the claim, you are, as a human being, predictable in many ways. Your thinking can be predicted with some confidence to operate with known failure modes that are consistently found in repeated investigations of other humans. Self-reports in particular are known to differ from reliable indicators of state if taken literally, and their predictions of future state are even worse.

If you told me, for example, that you would finish a project two weeks before the due date, I would not believe you. If you told me your confidence level in a particular prediction you had made on a topic in which you are an expert, I would not believe you either. I would expect that you, like the majority of experts, were systematically overconfident in your predictions.

Orthonormal may be mistaken in his prediction about your nihilist tendencies but Vladimir is absolutely correct that you are a non-mysterious human being, with all that it entails.

I used to believe people of any authority when they told me something that contradicted my internal experience, and endlessly questioned my own perception.

It gives me a warm glow inside whenever I hear of someone breaking free from that trap.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-19T21:06:35.115Z · LW(p) · GW(p)

How odd. I remember that one of the key steps for me was realizing that if my drive for objective purpose could be respectable, then so could all of my other terminal values, like having fun and protecting people. But I don't think I've ever heard someone else identify that as their key step until now... assuming we are talking about the same mental step.

It seems like there's just a big library of different "key insights" that different people require in order to collapse transcendent morality to morality.

comment by cousin_it · 2010-01-19T21:06:18.078Z · LW(p) · GW(p)

That was totally awesome to watch. Thanks byrnema and Furcas!

comment by Furcas · 2010-01-19T21:01:12.713Z · LW(p) · GW(p)

Cool. :D

This helps me understand why my own transition from objective to subjective morality was easier than yours. I didn't experience what you're experiencing because I think my moral architecture sort of rewired itself instantaneously.

If these are the three steps of this transition:

  • 1) Terminal values --> Objective Morality --> Instrumental values
  • 2) Terminal values --> XXXXXXXXXXXXX --> Instrumental values
  • 3) Terminal values --> Instrumental values

... I think I must have spent less than a minute in step 2, whereas you've been stuck there for, what, weeks?

comment by Wei Dai (Wei_Dai) · 2010-01-19T21:54:56.004Z · LW(p) · GW(p)

The incoherence is that I also value purpose.

I know, but assuming you're a human and no aliens have messed with your brain, it's highly unlikely that this value is a terminal one.

Can you expand on this please? How do you know it's highly unlikely?

Replies from: Furcas
comment by Furcas · 2010-01-19T22:35:16.365Z · LW(p) · GW(p)

First, it doesn't seem like the kind of thing evolution would select for. Our brains may be susceptible to making the kind of mistake that leads one to believe in the existence of (and the need for) objective morality, but that would be a bias, not a terminal value.

Second, we can simply look at the people who've been through a transition similar to byrnema's, myself included. Most of us have successfully expunged (or at least minimized) the need for an Objective Morality from our moral architecture, and the few I know who've failed are badly, badly confused about metaethics. I don't see how we could have done this if the need for an objective morality was terminal.

Of course I suppose there's a chance that we're freaks.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-01-20T02:58:48.980Z · LW(p) · GW(p)

First, it doesn't seem like the kind of thing evolution would select for. Our brains may be susceptible to making the kind of mistake that leads one to believe in the existence of (and the need for) objective morality, but that would be a bias, not a terminal value.

I think you're wrong here. It is possible for evolution to select for valuing objective morality, when the environment contains memes that appear to be objective morality and those memes also help increase inclusive fitness.

An alternative possibility is that we don't so much value objective morality, as disvalue arbitrariness in our preferences. This might be an evolved defense mechanism against our brains being hijacked by "harmful" memes.

Second, we can simply look at the people who've been through a transition similar to byrnema's, myself included.

I worry there's a sampling bias involved in reaching your conclusion.

comment by Paul Crowley (ciphergoth) · 2010-01-19T20:17:41.016Z · LW(p) · GW(p)

If it's any consolation, you're likely to be a lot happier out the other side of your deconversion. When you're half converted, it feels like there is a True Morality, but it doesn't value anything. When you're out the other side you'll be a lot happier feeling that your values are enough.

Replies from: byrnema
comment by byrnema · 2010-01-19T20:44:48.565Z · LW(p) · GW(p)

Yeah, with your comment I do see the light at the end of the tunnel. What it has pointed out to me is that while I'm questioning all my values, I might as well question my value of 'objective' value. It should be neurologically possible to shift my value from "objective good" to "subjective good". However, I'm not sure that it would be consistent to remain an epistemological realist after that, given my restructured values. But that would be interesting, not the end of the world.

comment by MrHen · 2010-01-19T19:24:55.788Z · LW(p) · GW(p)

Evolution designed us to value things but it didn't (can't) give us a reason to value those things.

I don't understand this. Can you say it again with different words? I am specifically choking on "designed" and "reason."

Replies from: byrnema
comment by byrnema · 2010-01-19T19:33:58.267Z · LW(p) · GW(p)

We're the product of evolution, yes? That's what I meant by 'designed'.

When I drive to the store, I have a reason: to buy milk. I also have a reason to buy milk. I also have a reason for that. A chain of reasons ending in a terminal value given to me by evolution -- something you and I consider 'good'. However, I have no loyalty to evolution. Why should I care about the terminal value it instilled in me? Well, I understand it made me care. I also understand that the rebellion I feel about being forced to do everything is also the product of evolution. And I finally understand that there's no limit in how bad the experience can be for me as a result of these conflicting desires. I happen to be kind of OK (just angry) but the universe would just look on, incuriously, if I decided to go berserk and prove there was no God by showing there is no limit on how horrible the universe could be. How's that for a big goal?

I imagine that somebody who cares about me will suggest I don't post anything for a while, until I feel more sociable. I'll take that advice.

Replies from: mattnewport, Vladimir_Nesov, MrHen
comment by mattnewport · 2010-01-19T19:43:36.156Z · LW(p) · GW(p)

However, I have no loyalty to evolution. Why should I care about the terminal value it instilled in me?

Why would you feel differently about God? It always struck me that if God existed, he had to be a tremendous asshole, given all the suffering in the world. Reading the Old Testament certainly paints a picture of a God I would have no loyalty to and whose terminal values I would have no reason to care about. Evolution seems positively benevolent by comparison.

comment by Vladimir_Nesov · 2010-01-19T23:50:16.796Z · LW(p) · GW(p)

However, I have no loyalty to evolution. Why should I care about the terminal value it instilled in me?

You shouldn't care about your values because they're instilled in you by evolution, your true alien Creator. It is the same mistake as believing you have to behave morally because God says so. You care about your values not because of their historical origin or specifically privileged status, but because they happen to be the final judge of what you care about.

comment by MrHen · 2010-01-19T19:37:46.040Z · LW(p) · GW(p)

Is this a fair summary:

Evolution caused my value in X but has not provided a convincing reason to continue valuing X.

Or is this closer:

Evolution caused my value in X but has not provided a convincing purpose for doing X.

I am guessing the former. Feel free to take a good break if you want. We'll be here when you get back. :)

Replies from: byrnema
comment by byrnema · 2010-01-19T19:44:21.782Z · LW(p) · GW(p)

What would you infer from my choice? I honestly cannot tell the difference between the two statements.

Replies from: MrHen
comment by MrHen · 2010-01-19T19:56:59.017Z · LW(p) · GW(p)

Well, the difference is mostly semantic, but this is a good way to reveal minor differences in definitions that are not inherently obvious. If you see them as the same, then they are the same for the purposes of the conversation, which is all I needed to know. :)

The reason I asked for clarification is that this sentence:

Evolution designed us to value things but it didn't (can't) give us a reason to value those things.

Can be read by some as:

Evolution [is the reason we] value things but it didn't (can't) give us a reason to value those things.

To which I immediately thought, "Wait, if it is the reason, why isn't that the reason?" The problem is just a collision of the terms "design" and "reason." By replacing "design" with "cause" and "reason" with "purpose", your meaning was made clear.

comment by arundelo · 2010-01-19T17:30:46.349Z · LW(p) · GW(p)

Without God or deity, human experience/life has no goals or process to work towards

Was any argument given for this claim?

Replies from: byrnema, Zachary_Kurtz
comment by byrnema · 2010-01-19T18:00:55.004Z · LW(p) · GW(p)

Interesting, this is exactly how I felt a week ago. I am the product of western culture, after all. Anyway, if no arguments are provided I can explain the reasoning since I'm pretty familiar with it. I also know exactly where the error in reasoning was.

The error is this: the reasoning assumes that human desires are designed in a way that makes sense with respect to the way reality is. In other words, that we're not inherently deluded or misled by our basic nature in some (subjectively) unacceptable way. However, the unexamined premise behind this is that we were designed with some care. With the other point of view -- that we are designed by mechanisms with no inborn concern for our well-being -- it is amazing that experience isn't actually more insufferable than it is. Well, I realize that perhaps it is already as insufferable as it can be without more negatively affecting fitness.

But imagine: we could have accidentally evolved a neurological module that experiences excruciating pain constantly, but is unable to engage with behavior in a way that selection could act on, and is unable to tell us about itself. Or it is likely, given the size of mind-space, that there are other minds experiencing intense suffering without the ability to seek reprieve in non-existence. The way theism works is that, while theists are making stuff up, they can make up everything to be as good as they wish. On the other hand, without a God to keep things in check, there is no limit on how horrible reality can be.

Replies from: pjeby, RobinZ
comment by pjeby · 2010-01-20T01:07:33.680Z · LW(p) · GW(p)

The error is this: the reasoning assumes that human desires are designed in a way that makes sense with respect to the way reality is. In other words, that we're not inherently deluded or misled by our basic nature in some (subjectively) unacceptable way.

Interestingly, this is the exact opposite of Zen, in which it's considered a premise that we are inherently deluded and misled by our basic nature... and in large part due to our need to label things. As in How An Algorithm Feels From Inside, Zen attempts to point out that our basic nature is delusion: we feel as though questions like "Does the tree make a sound?" and "What is the nature of objective morality?" actually have some sort of sensible meaning.

(Of course, I have to say that Eliezer's writing on the subject did a lot more for allowing me to really grasp that idea than my Zen studies ever did. OTOH, Zen provides more opportunities to feel as though the world is an undifferentiated whole, its own self with no labels needed.)

comment by RobinZ · 2010-01-19T18:08:36.955Z · LW(p) · GW(p)

On the other hand, without a God to keep things in check, there is no limit on how horrible reality can be.

Eliezer Yudkowsky wrote quite a good essay on this theme - Beyond the Reach of God.

comment by Zachary_Kurtz · 2010-01-19T17:56:22.881Z · LW(p) · GW(p)

Without God there's no end game, just fleeting existence.

Replies from: RobinZ, arundelo, Vladimir_Nesov
comment by RobinZ · 2010-01-19T18:01:33.515Z · LW(p) · GW(p)

I am reminded of The Parable of the Pawnbroker.

Edit: Original link.

comment by arundelo · 2010-01-19T20:07:43.030Z · LW(p) · GW(p)

Thanks for the edit to the original comment; I was unsure whether you were arguing for a view or just describing it (though I assumed the latter based on your other comments).

Without God there's no end game, just fleeting existence.

Like the statement in the original comment (and like most arguments for religion), this one is in great need of unpacking. People invoke things like "ultimate purpose" without saying what they mean. But I think a lot of people who agreed with the above would say that life is worthless if it simply ends when the body dies. To which I say:

If a life that begins and eventually ends has no "meaning" or "purpose" (whatever those words mean), then an infinitely long one doesn't either. Zero times infinity is still zero.

(Of course I know what the everyday meanings of "meaning" and "purpose" are, but those obviously aren't the meanings religionists use them with.)

Edit: nerzhin points out that Zero times infinity is not well defined. (Cold comfort, I think, to the admittedly imaginary theist making the "finite life is worthless" argument.)

I am a math amateur; I understand limit notation and "f(x)" notation, but I failed to follow the reasoning at the MathWorld link. Does nerzhin or anyone else know someplace that spells it out more? (Right now I'm studying the Wikipedia "Limit of a function" page.)

Replies from: nerzhin
comment by nerzhin · 2010-01-19T20:19:21.246Z · LW(p) · GW(p)

Zero times infinity is still zero.

Strictly speaking, no.
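For example (a standard illustration of why): each of the products below has the "zero times infinity" form in the limit, yet the three limits differ, so no single value can be assigned to the expression.

```latex
% Three limits, each of the form "0 * infinity", with three different answers:
\[
  \lim_{x \to 0^+} x \cdot \frac{1}{x} = 1,
  \qquad
  \lim_{x \to 0^+} x \cdot \frac{1}{x^{2}} = \infty,
  \qquad
  \lim_{x \to 0^+} x^{2} \cdot \frac{1}{x} = 0.
\]
```

Because the result depends entirely on how fast each factor approaches its limit, the product is treated as indeterminate rather than as zero.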

comment by Vladimir_Nesov · 2010-01-20T00:05:29.610Z · LW(p) · GW(p)

Edit: this comment happens to reply to an out-of-context sentence that is not endorsed by Zachary_Kurtz. Thanks to grouchymusicologist for noticing my mistake.

Without God there's no end game, just fleeting existence.

You happen to be wrong on this one. Please read the sequences, in particular the Metaethics sequence and Joy in the Merely Real.

Replies from: grouchymusicologist
comment by grouchymusicologist · 2010-01-20T02:10:34.851Z · LW(p) · GW(p)

Pretty sure ZK is not endorsing this view but instead responding to the query "Was any argument given for this claim?"

Upvoted ZK's comment for this reason.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-20T02:44:44.379Z · LW(p) · GW(p)

Thanks, my mistake.

Replies from: Zachary_Kurtz
comment by Zachary_Kurtz · 2010-01-20T16:32:58.896Z · LW(p) · GW(p)

no problem.. it happens