Should humanity give birth to a galactic civilization?

post by XiXiDu · 2010-08-17T13:07:10.407Z · LW · GW · Legacy · 122 comments


Followup to: Should I believe what the SIAI claims? (Point 4: Is it worth it?)

"It were much better that a sentient being should never have existed, than that it should have existed only to endure unmitigated misery." -- Percy Bysshe Shelley

Imagine that humanity succeeds: that we spread out into the galaxy and beyond. Trillions of entities...

Then, I wonder, what happens in the end? Imagine that our dreams of a galactic civilization come true. Will we face unimaginable wars over resources, and torture, as all this beauty meets its inevitable annihilation while the universe approaches absolute zero temperature?

What does this mean? Imagine how many more entities, of so much greater consciousness and intellect, will be alive in 10^20 years. If they are doomed to face that end or commit suicide, how much better would it be to face extinction now? That is, would the amount of happiness until then balance the amount of suffering to be expected at the beginning of the end? If we succeed in pollinating the universe, is the overall result ethically justifiable? Or might it be ethical to abandon the idea of reaching out to the stars?

The question is, is it worth it? Is it ethical? Should we worry about the possibility that we'll never make it to the stars? Or should we rather worry about the prospect that trillions of our distant descendants may face, namely unimaginable misery? 

And while pondering the question of overall happiness, all things considered, how sure are we that on balance there won't be much more suffering in the endless years to come? Galaxy-spanning wars, real and simulated torture? Things we cannot even imagine now.

One should also consider that it is more likely than not that we'll see the rise of rogue intelligences. It might also be possible that humanity succeeds in creating something close to a friendly AI which nevertheless fails to completely follow CEV (Coherent Extrapolated Volition). Ultimately this might not lead to our inevitable extinction, but to even more suffering, on our side or on that of other entities out there.

Further, although less dramatic: what if we succeed in transcending ourselves, in becoming posthuman, and find out that the universe does not contain enough fun for entities with mental attributes far exceeding those of baseline humanity? What if there isn't even enough fun for normal human beings to live to the age of 1,000 and still have fun? What if, soon after the singularity, we discover that all that is left is endless repetition? If we've learnt all there is to learn and done all there is to do, all games played, all dreams dreamed, what if nothing new under the sky is to be found anymore? And don't we all experience this problem already these days? Have you never thought and felt that you've already seen that movie, read that book or heard that song before, because they all featured the same plot, the same rhythm?

If it is our responsibility to die so that our children may live, for the greater public good; if we are in charge of the upcoming galactic civilization; if we bear a moral responsibility for those entities who will be alive, why don't we face the same responsibility for the many more entities who will be alive but suffering? Is it the right thing to do, to live at any cost, to give birth at any price?

What if it is not about "winning" and "not winning" but about losing, or about gaining one possibility among millions that could go horribly wrong?

Isn't even the prospect of a slow torture to death enough to consider ending our journey here, a torture that spans a possible period from 10^20 years up to the Dark Era at 10^100 years and beyond? This might be a period of war, suffering and suicide. It might be the Era of Death, and it might be the lion's share of the future. I personally know a few people who suffer from severe disabilities and who do not enjoy life. But this is nothing compared to the time from 10^20 to 10^100 years, when possibly trillions of God-like entities will be slowly disabled due to an increasing lack of resources. This is comparable to suffering from Alzheimer's, just much worse, much longer and without any hope.

To exemplify this, let's assume there were 100 entities. At a certain point the universe will cease to provide enough resources to sustain 100 entities. So either the ruling FAI (friendly AI) is going to kill one entity or reduce the mental capabilities of all 100. This will continue until all of them are either killed or reduced to a shadow of their former selves. This is a horrible process that will take a long time. I think you could call this torture until the end of the universe.
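A minimal toy sketch of that decline (purely illustrative: the shrinking budget, the decay rate and the "degrade everyone equally" policy are all assumptions of mine, not anything the post or CEV specifies) might look like this:

    # Toy model of the paragraph above: a fixed population of minds sharing a
    # resource budget that shrinks each epoch. The hypothetical "FAI policy"
    # here degrades every mind equally instead of killing any; all numbers
    # (population, decay rate, minimum viable capacity) are illustrative guesses.

    def simulate(entities=100, budget=100.0, decay=0.99,
                 min_capacity=0.01, epochs=10000):
        capacities = [1.0] * entities          # every mind starts at full capacity
        for epoch in range(epochs):
            budget *= decay                    # the universe provides a little less each epoch
            share = budget / len(capacities)   # equal share of what remains
            capacities = [min(c, share) for c in capacities]
            # Minds below some minimal capacity count as effectively gone.
            capacities = [c for c in capacities if c >= min_capacity]
            if not capacities:
                return epoch                   # epoch at which the last mind fades out
        return None

    print(simulate())  # with these toy numbers, the decline drags on for ~460 epochs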

So what if it is more likely that maximizing utility not only fails, but that overall utility actually ends up minimized, i.e. the relative amount of suffering increases? What if the ultimate payoff is notably negative? If it is our moral responsibility to minimize suffering, and if we are unable to minimize suffering by actively shaping the universe but rather risk increasing it, what should we do about it? Might it be better to believe that winning is impossible than that it is likely, if the actual probability is very low?

I hereby ask the Less Wrong community to help me resolve potential fallacies and biases in my framing of the above ideas.



See also

The Fun Theory Sequence

"Should This Be the Last Generation?" By PETER SINGER (thanks timtyler)

122 comments

Comments sorted by top scores.

comment by thomblake · 2010-08-17T15:21:49.343Z · LW(p) · GW(p)

Do not attempt a literal interpretation; rather, try to consider the gist of the matter, if possible.

I have a better idea. Please write posts such that they can be interpreted literally, so the gist follows naturally from the literal reading.

Replies from: XiXiDu, XiXiDu
comment by XiXiDu · 2010-08-17T17:55:56.945Z · LW(p) · GW(p)

You are right, I was being an idiot there.

comment by XiXiDu · 2010-08-17T15:39:31.322Z · LW(p) · GW(p)

First of all, I'm not a native speaker. Second, I think this writing style is much more appealing, and I thought I would warn those who can't cope with it.

I'd like to give a little example (of many):

I wrote:

I'm sorry, but people like Wei force me to do this, as they make this whole movement look completely down-to-earth, when in fact most people, if they knew about the full complexity of beliefs within this community, would laugh out loud.

katydee answered:

The "laugh test" is not rational.

I'm not going to accept this at any cost. If I say that people would laugh out loud, that is a way of saying that they would find matters implausible. It does not imply that I'm suggesting some kind of laugh test, which I furthermore claim to be rational.

If people here are unable to interpret colloquial language, then they might want to learn it. If you want to downvote me for that, go on. Or ban me, that's laughable.

Replies from: thomblake
comment by thomblake · 2010-08-17T15:48:20.067Z · LW(p) · GW(p)

First of all, I'm not a native speaker. Second, I think this writing style is much more appealing, and I thought I would warn those who can't cope with it.

If people here are unable to interpret colloquial language, then they might want to learn it.

Interesting tension in that you seem to attribute your lack of clarity to being a non-native speaker, and then defend yourself as speaking colloquially. Other non-native speakers here do not seem to have these problems.

Perhaps we should start non-English versions of Less Wrong, so that people who are incapable of communicating clearly and effectively in English have somewhere else to post.

Replies from: komponisto, None, XiXiDu
comment by komponisto · 2010-08-17T16:02:23.808Z · LW(p) · GW(p)

Perhaps we should start non-English versions of Less Wrong,

I have seriously considered posting to propose this.

comment by [deleted] · 2010-08-17T21:29:07.010Z · LW(p) · GW(p)

I haven't seen the original argument since the comment has been deleted.

But I've decided to comment since I sympathize; I have had problems with English spelling in the past, to the point of having people volunteer to proofread my posts before I post them. Language barriers are something I can relate to.

I would love to participate in a non-English Less Wrong, however my native language is so small I would be a non-native speaker anyway.

Perhaps German? I'm better at it than English. French could also be doable, but any other languages are off the table for me.

Translating the main sequences into various world languages would be a good way to raise the global sanity waterline a bit more. LW is very, very Anglocentric in its demographics (understandable considering the recent history of computer science). It's also a way for people who don't feel they know enough to write top-level posts to contribute. I know we have the people here to translate all the articles into Russian, French, German and Spanish. I have no idea about Hindi, but there are probably a few notable posters who are very fluent and have a large enough specialized vocabulary. Arabic, Chinese and perhaps Japanese would be more tricky, I think.

On second thought... Is anyone here interested in starting a learning group for Lojban?

Perhaps translating LW articles and occasionally debating in the language would be a good way to learn it, and perhaps (considering the nature of the language) it would help to eventually have all articles translated into Lojban, with Lojban being the "source code" version of the article and English the obligatory translation and default option. Unambiguity can be very useful.

Replies from: Emile, DSimon
comment by Emile · 2010-08-17T21:38:28.323Z · LW(p) · GW(p)

Thinking of this... Is anyone here interested in starting a learning group for Lojban?

Kial ne lerni Esperanto? (Why not learn Esperanto?)

Replies from: arundelo
comment by arundelo · 2010-08-17T22:40:02.329Z · LW(p) · GW(p)

komponisto and I both speak it. I figured Konkvistador did too, or was at least familiar with it, based on the username.

In fact, the last letter in my username is in Esperanto! (Seriously.)


komponisto kaj mi ambaŭ scipovas ĝin. Mi supozis, ke ankaŭ Konkvistador scipovas ĝin, aŭ almenaŭ konas ĝin, pro la uzulnomo.

Efektive la fina litero de mia uzulnomo estas en Esperanto! (Serioze.)

Replies from: MartinB
comment by MartinB · 2010-08-18T01:10:10.912Z · LW(p) · GW(p)

Ŝajnas ke estas jam rimarkebla minoritato :-) (It seems there is already a noticeable minority :-))

comment by DSimon · 2010-08-18T01:14:52.481Z · LW(p) · GW(p)

i ie go'a i ji'a xu do se bangu la lojban

I agree that translating LW articles into Lojban would be worthwhile, though I see its benefit more as promoting Lojban than promoting LW. Unlike other Lojban translation projects (e.g. Alice in Wonderland, The Legend of Zelda), the LW articles are likely to be subject to a lot of examiners getting helpfully nit-picky about the translation.

Also, longer term, it may help add to Lojban's practical usage as a rational language to have, early on, a set of high-quality Lojban documents that are explicitly and thoroughly rational. It could provide something analogous to a set of "kata" for future users of the language, a target ideal and example form.

comment by XiXiDu · 2010-08-17T16:06:00.652Z · LW(p) · GW(p)

I'd guess that the other non-native speakers here have a higher educational background. I failed secondary school and am classified as handicapped by the federal employment office of Germany. If I'm not good enough for this community, I won't be offended if someone writes a posting and commenting rule disqualifying anyone below a certain educational background or IQ from posting and commenting here.

What I meant by colloquial is that I'm not intending to use math but natural language. A little bit of ambiguity and the use of idioms is a tool for transcribing something that is rather diffuse and elusive. If it were that clear to me, I would make assertions and not ask questions.

As my last sentence said, I asked the Less Wrong community to help me understand where I am wrong. If this is the wrong thing to do in a community blog devoted to refining the art of rationality, I'm sorry.

And I don't think your comment was helpful. If you had any questions about what I meant to say you could simply ask rather than telling me, "Hey, you can't get things across, either make yourself intelligible to all in the first place or go to hell...".

Replies from: thomblake
comment by thomblake · 2010-08-17T16:13:23.818Z · LW(p) · GW(p)

And I don't think your comment was helpful. If you had any questions about what I meant to say you could simply ask

I was not trying to be helpful to you, nor did I care about the specifics of what you wrote.

I was expressing disapproval at the general strategy of posting things that are unclear and then asking the readers to do interpretation for you. Rather, you should do the work of making your writing clear instead of pushing that work onto the readers. For any sort of utilitarian, at least, the benefit should be obvious - the writer is doing the work rather than hundreds of readers doing analogous work.

Replies from: erratio, Jonathan_Graehl, XiXiDu, XiXiDu
comment by erratio · 2010-08-17T21:25:38.550Z · LW(p) · GW(p)

Academic communication style is different in Europe than it is in the US/Aust/UK. My understanding of the situation is that there's an expectation that it's the reader's responsibility to understand, not the writer's to be clear. In practice this means that writers in Europe are penalised for being too clear. (I can provide citations if need be)

Which I guess means that there should be some writing guidelines for people from non-English backgrounds, emphasising the importance of clarity. Or that there should be a workshop area on the site where non-natives can get advice on how to make their article clearer before they post it.

Replies from: Tyrrell_McAllister, JoshuaZ
comment by Tyrrell_McAllister · 2010-08-18T01:21:28.967Z · LW(p) · GW(p)

In practice this means that writers in Europe are penalised for being too clear. (I can provide citations if need be)

I don't doubt you, but I would be interested in seeing specific examples of this.

comment by JoshuaZ · 2010-08-18T01:40:43.214Z · LW(p) · GW(p)

I suspect that this varies more by discipline than by physical area. In math, for example, not making things reasonably easy to understand is considered bad, although there is a tension with the desire for succinctness. Even then, ambiguity that requires a reader to use context to resolve is considered very poor writing.

comment by Jonathan_Graehl · 2010-08-17T18:06:01.898Z · LW(p) · GW(p)

the writer is doing the work rather than hundreds of readers doing analogous work

I agree, but sometimes a person does the best they can and it's just not enough. I think it's appropriate to downvote for poor writing, unless the content is compelling. The incompetent writer should ask for help pre-posting if they really care about being understood.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-19T13:02:26.980Z · LW(p) · GW(p)

I think it's appropriate to downvote for poor writing, unless the content is compelling.

This post got upvoted 32 times and mine got downvoted 10 times. Is the difference that drastic? I don't see it, but OK.

Replies from: WrongBot, wedrifid, Jonathan_Graehl
comment by WrongBot · 2010-08-19T16:44:02.492Z · LW(p) · GW(p)

Is the difference that drastic?

Short version: Yes.

Long version: Writing quality can be meaningfully compared along many axes. There are mechanical axes, like correct grammar usage, clarity of expression, precision, succinctness, and readability, all of which I found to be problems (to varying degrees) with this post. These are all (relatively) easy to improve by proof-reading, making multiple drafts, and/or asking others for editing help. Wei Dai's post performs well by all of those measurements.

There are also content axes, like originality, rigor, cleverness, evidentiary support, and usefulness. Hacking the CEV for Fun and Profit does pretty well by these measures, too. This post is a little better with content than it is with mechanics, but poor mechanics obscure content and dilute its weight, so I suspect that the points you were trying to make were undervalued, though not drastically so. Fixing up content is harder than fixing up mechanics; for some ideas, it is impossible. After all, some ideas are just wrong or useless (though this is usually far from obvious).

One writing technique I like and don't use enough: come up with lots of ideas and only explore the most promising ones. Or, as it is written in the Book of Yudkowsky, hold off on proposing solutions.

comment by wedrifid · 2010-08-19T13:10:45.382Z · LW(p) · GW(p)

Err... 33 now. But that is because the content is very compelling. Posts pointing out why CEV is quite possibly a bad thing would have to be quite poor to get a downvote from me. It is a subject that is obvious but avoided.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-19T13:24:43.956Z · LW(p) · GW(p)

I see that too (I upvoted it before), yet the argument was that my post was poorly written. Further, the argument was that my post lacked references and detail. Also, my post mentions CEV. Further, an AI based on an extrapolated volition of humanity might very well conclude that, given that the better part of the future is unable to sustain our volition, it should abandon humanity. If CEV tries to optimize volition, which includes suffering, negative utilitarianism might well be a factor leaning the result towards non-existence. This idea is widely explored in Stephen Baxter's Manifold trilogy, where the far future decides to destroy the universe acausally. This is fictional evidence, but do you want to argue that superhuman AI and CEV aren't? It's the exploration of an idea.

Again, how do I get downvoted this drastically in comparison to three paragraphs which basically say that a superhuman AI (premise) using CEV (an idea based on that premise) would base its extrapolation on uploaded copies (an idea based on an idea based on a shaky premise)? Compare this to my post, which is based on evidence from economics and physics.

comment by Jonathan_Graehl · 2010-08-19T23:23:32.566Z · LW(p) · GW(p)

I voted up your post even in its earlier revisions.

However, Wei Dai's is far more novel and entertaining. I would have voted it up 3 times if I could :)

These are all questions I (and most thinking people) have considered before: "Would it be better not to exist at all, if existence is mostly suffering?" ("To be, or not to be?"). "If a deist-type god (not intervening after creation) created this universe and all its rules that imply the suffering we observe, was that a moral act?" "How much pleasure (and for how long) does it take to make it worth some amount of suffering?"

If there was much beyond that in your post, I may have missed it.

comment by XiXiDu · 2010-08-17T16:20:24.142Z · LW(p) · GW(p)

Ok, let's be honest. Have you seriously considered that the disclaimer was anything but mockery? I've been clear enough in the post. If you show me what you don't understand, I'll try to clarify.

I was not trying to be helpful to you, nor did I care about the specifics of what you wrote.

I never said that you were trying to be helpful. I stated that your comment wasn't helpful. This lack of basic understanding, or deliberate misinterpretation, is what I denounce.

I think I'm going to delete the disclaimer now.

comment by XiXiDu · 2010-08-17T16:25:55.836Z · LW(p) · GW(p)

Deleted the disclaimer. It was just mockery. I maintain that the post is clear.

comment by ata · 2010-08-17T17:08:25.321Z · LW(p) · GW(p)

Let's reach the stars first and worry later about how many zillions of years of fun remain. If we eventually run out, then we can abort or wirehead. For now, it seems like the expected awesome of creating an intergalactic posthuman civilization is pretty high.

Even if we create unimaginably many posthumans having unimaginable posthuman fun, and then get into some bitter resource struggles as we approach the heat death of the universe, I think it will have been worth it.

comment by Psychohistorian · 2010-08-17T17:28:08.145Z · LW(p) · GW(p)

This post is centered on a false dichotomy - to address its biggest flaw in reasoning. If we're at time t=0, and widespread misery occurs at time t=10^10, then solutions other than "discontinue reproducing at t=0" exist. Practical concerns aside - and without setting practical concerns aside, there is no point in even talking about this - the appropriate solution would be to end reproduction at, say, t=10^9.6. This post arbitrarily says "act now, or never" when, practically, we can't really act now, so any later time is equally feasible and otherwise simply better.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-17T18:53:20.410Z · LW(p) · GW(p)

It is not a matter of reproduction but of the fact that there will be trillions of entities at the point of fatal decay. That is, let's assume there were 100 entities. At a certain point the universe will cease to provide enough resources to sustain 100 entities. So either the ruling FAI is going to kill one entity or reduce the mental capabilities of all 100. This will continue until all of them are either killed or reduced to a shadow of their former selves. This is a horrible process that will take a long time. I think you could call this torture until the end of the universe.

But I think practical considerations are also rather important. For one, no entity, not even an FAI, may be able to influence parts of the universe that are no longer causally connected, due to the accelerating expansion of the universe. There will be many island universes.

Replies from: Psychohistorian
comment by Psychohistorian · 2010-08-17T21:39:44.575Z · LW(p) · GW(p)

The false dichotomy is when to do something about it. The solution to the above problem would be that those last 100 entities were never created. That does not require us to stop creating entities right now. If the entity is never created, its utility is undefined. That's why this is a false dichotomy: you say do something now or never do something, when we could wait until very near the ultimate point of badness to remedy the problem.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-18T08:57:49.433Z · LW(p) · GW(p)

Look, I'm using the same argumentation as EY and others, that the existence of those beings depends on us, just in reverse. Why not their suffering too? I never said this is sound; I don't think it is. I argued before that all those problems only arise if you try to please imaginary entities.

comment by Mitchell_Porter · 2010-08-18T07:07:36.164Z · LW(p) · GW(p)

Most of these questions are beyond our current ability to answer. We can speculate and counter-speculate, but we don't know. The immediate barrier to understanding is that we do not know what pleasure and pain, happiness and suffering are, the way that we think we know what a star or a galaxy is.

We have a concept of matter. We have a concept of computation. We have a concept of goal-directed computation. So we can imagine a galaxy of machines, acting according to shared or conflicting utility functions, and constrained by competition and the death of the universe. But we do not know how that would or could feel; we don't even know that it needs to feel like anything at all. If we imagine the galaxy populated with people, that raises another problem - the possibility of the known range of human experience, including its worst dimensions, being realized many times over. That is a conundrum in itself. But the biggest unknown concerns the forms of experience, and the quality of life, of "godlike AIs" and other such hypothetical entities.

The present reality of the world is that humanity is reaching out for technological power in a thousand ways and in a thousand places. That is the reality that will issue either in catastrophe or in superintelligence. The idea of simply halting that process through cautionary persuasion is futile. To actually stop it, and not just slow it down, would require force. So I think the most constructive attitude towards these doubts about the further future is to see them as input to the process which will create superintelligence. If this superintelligence acts with even an approximation of humaneness, it will be sensitive to such issues, and if it really does embody something like the extrapolated volition of humanity, it will resolve them as we would wish to see them resolved.

Therefore, I propose that your title question - "Should humanity give birth to a galactic civilization?" - should be regarded as a benchmark of progress towards an exact concept of friendliness. A friendly AI should be able to answer that question, and explain its answer; and a formal strategy for friendly AI should be able to explain how its end product - the AI itself - would be capable of answering the question.

comment by XiXiDu · 2010-08-17T17:48:06.686Z · LW(p) · GW(p)

Updated the post to address the criticism.

Let's see: if I can't write a good post, maybe I can tweak one into becoming good based on feedback.

Replies from: WrongBot, thomblake
comment by WrongBot · 2010-08-17T18:23:42.155Z · LW(p) · GW(p)

I applaud this approach (and upvoted this comment), but I think any future posts would be better received if you did more tweaking prior to publishing them.

Replies from: John_Maxwell_IV, XiXiDu
comment by John_Maxwell (John_Maxwell_IV) · 2010-08-20T03:17:51.044Z · LW(p) · GW(p)

How about a Less Wrong peer review system? This could be especially good for Less Wrongers who are non-native speakers. I'll volunteer to review a few posts--dreamalgebra on google's email service. (Or private message, but I somewhat prefer the structure of email since it's easier for me to see the messages I've written.)

comment by XiXiDu · 2010-08-17T18:34:02.764Z · LW(p) · GW(p)

I'll post open thread comments from now on. This was just something that has been on my mind for so long that it became too familiar to be identified as imprudent.

I watched Star Trek: The Next Generation as a kid and still remember a subset of this problem being faced by the Q Continuum. I think Q said that most of its kind committed suicide because there was nothing new to be discovered out there.

But the main point came to my mind when skimming over what some utilitarians had to say, and what people on LW say when considering the amount of happiness that a future galactic civilization may bear. Now, if the universe were infinite, that would be absolutely true. But if indeed most of the time is a time of decay, especially for a once-thriving civilization, is the overall payoff still positive?

Replies from: MartinB
comment by MartinB · 2010-08-18T01:17:17.177Z · LW(p) · GW(p)

Fictional evidence: Q is allegedly the last born - but where are his parents? And what about the 'true Q' from her own episode? They fight a freaking war over Q's wish to reproduce, but do not allow one guy to commit suicide. Yet handing out powers or taking them away is easy as pie. Not particularly consistent.

If the time comes to build a universe-wide civilization, then there will be many minds to ponder all these questions. We do not have to get that right now. (Current physics only allows for colonization of the Local Group anyhow.) If we put in enough effort to solve the GUT, there might be some way around the limitations of the universe, or we will find another way to deal with them - as has been the case many times before. Now is a great time to build an amazing future, but not yet the time to end reproduction.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-18T09:16:47.470Z · LW(p) · GW(p)

Oh my god! Are you really telling me about fictional evidence here? That is exactly what I criticize this whole community for. Why am I not allowed to use it?

Anyway, my post is not based on fictional evidence but physics and basic economics.

It's a matter of not creating so many minds in the first place, minds that the universe is able to sustain only in the short run. Yes, most of the future will be unable to support all those minds.

comment by thomblake · 2010-08-17T17:56:04.213Z · LW(p) · GW(p)

That's often a good strategy.

comment by komponisto · 2010-08-17T13:23:45.312Z · LW(p) · GW(p)

These are very important questions which deserve to be addressed, and I hope this post isn't downvoted severely. However, at least one subset of them has been addressed already:

Further, although less dramatic: what if we succeed in transcending ourselves, in becoming posthuman, and find out that the universe does not contain enough fun for entities with mental attributes far exceeding those of baseline humanity? What if there isn't even enough fun for normal human beings to live to the age of 150 and still have fun?

See the Fun Theory Sequence for discussion.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-17T13:45:20.197Z · LW(p) · GW(p)

I knew about the sequence but forgot to mention it. It is an issue to be integrated into the overall question, and that is why I included it. Thanks for reminding me though.

comment by prase · 2010-08-17T14:36:20.307Z · LW(p) · GW(p)

It is probably premature to ask such questions now. We have no idea what the world will look like in 10^20 years. And when I write "no idea", I don't mean that we have several theories from which we have to choose the right one but still can't do so. I mean that (if the human race doesn't go extinct soon and the future doesn't turn out to be boringly similar to the present or the recent past) we can't possibly imagine how the world would function, and even if told, we wouldn't understand. If there are intelligent creatures in 10^20 years, they will certainly have emotions we don't possess, thoughts we can't fathom, values we would call perverse, if a description in the language of emotions and values even makes sense in that world.

Why should we care about the world we don't understand one bit? Trying to answer questions about such a distant future puts us in the situation of a Homo erectus evaluating the risks of inventing fire. Do we imagine that any of the ideas a Homo erectus could invent would be even marginally valuable for us today? And given that we are no more than several hundred thousand years younger, a flatworm would perhaps be a more fitting analogy than Homo erectus.

Replies from: timtyler
comment by timtyler · 2010-08-17T16:28:44.062Z · LW(p) · GW(p)

Why should we care about the world we don't understand one bit?

Darwin answered the question of why we care.

Replies from: Baughn
comment by Baughn · 2010-08-18T15:52:37.824Z · LW(p) · GW(p)

No, Darwin explained what actually happens. There is no should there; we invent those ourselves. Unless you meant that the consequences of evolution give us a better reason to care; but that would in itself be a personal judgement.

I care, too, but there's no law of nature stating that all other humans must also care.

Replies from: timtyler
comment by timtyler · 2010-08-18T16:46:09.853Z · LW(p) · GW(p)

Darwin answered the question of: "why do we care...".

Replies from: Baughn
comment by Baughn · 2010-08-18T17:18:42.488Z · LW(p) · GW(p)

Ah. Point taken; though of course he didn't literally do so for humans, evolution definitely has a lot to do with it.

comment by Richard_Kennaway · 2010-08-18T06:51:06.782Z · LW(p) · GW(p)

Caterpillars discussing the wisdom of flight.

comment by PaulAlmond · 2010-08-17T19:03:24.624Z · LW(p) · GW(p)

We are looking at very long time scales here, so how wide should our scope be? If we use a very wide scope like this, we get issues, but if we widen it still further we might get even more. Suppose the extent of reality were unlimited, and that the scope of effect of an individual action were unlimited, so that if you do something it affects something, which affects something else, which affects something else, and so on, without limit. This doesn't necessarily need infinite time: We might imagine various cosmologies where the scope could be widened in other ways. Where would that leave the ethical value of any action we commit?

I will give an analogy, which we can call "Almond's Puppies" (That's a terrible name really, but it is too late now.)

Suppose we are standing at the end of two lines of boxes. Each line continues without end, and each box contains a puppy - so each line contains an infinity of puppies. You can choose to press a button to blow up the first box or another button to spare it. After you press the button, some mechanism, that you can't predict, will decide to blow up the second box or spare it, based on your decision, and then it will decide to blow up the third box or spare it, based on your decision, and so on. So you press that button, and either the first box is blown up or spared, and then boxes get blown up or spared right along the line, with no end to it.

You have to press a button to start one line off. You choose to press the button to spare the first puppy. Someone else chooses to press the button to blow up the first puppy. The issue now is: Did the other person do a bad thing? If so, why? Did he kill more puppies than you? Does the fact that he was nicer to the nearby puppies matter? Does it matter that the progress of the wave of puppy explosions along the line of boxes will take time, and at any instant of time, only a finite number of puppies will have been blown up, even though there is no end to it in the future?

If we are looking at distant future scenarios, we might ask if we are sure that reality is limited.

Replies from: Emile
comment by Emile · 2010-08-17T19:50:04.228Z · LW(p) · GW(p)

I don't understand your Puppies question. When you say:

You can choose to press a button to blow up the first box or another button to spare it. After you press the button, some mechanism, that you can't predict, will decide to blow up the second box or spare it, based on your decision, and then it will decide to blow up the third box or spare it, based on your decision, and so on.

.... what do you mean by "based on your decision"? They decide the same as you did? The opposite? There's a relationship to your decision but you don't know which one.

I am really quite confused, and don't see what moral dilemma there is supposed to be beyond "should I kill a puppy or not?" - which on the grand scale of things isn't a very hard Moral Dilemma :P

Replies from: PaulAlmond
comment by PaulAlmond · 2010-08-17T20:04:03.426Z · LW(p) · GW(p)

"There's a relationship to your decision but you don't know which one". You won't see all the puppies being spared or all the puppies being blown up. You will see some of the puppies being spared and some of them being blown up, with no obvious pattern - however you know that your decision ultimately caused whatever sequence of sparing/blowing up the machine produced.

comment by Armok_GoB · 2010-08-19T11:46:45.365Z · LW(p) · GW(p)

If this were the case, a true FAI would just kill us more painlessly than we could kill ourselves. And then go out and stop life that evolved in other places from causing such suffering.

comment by knb · 2010-08-17T19:47:40.704Z · LW(p) · GW(p)

But this is nothing compared to the time from 10^20 to 10^100 years, when possibly trillions of God-like entities will be slowly disabled due to an increasing lack of resources. This is comparable to suffering from Alzheimer's, just much worse, much longer and without any hope.

A different (more likely?) scenario is that the god-like entities will not gradually reduce their resource usage--they'll store up energy reserves, then burn through them as efficiently as possible, then shut down. It will be really sad each time a god-like entity dies, but not necessarily painful.

Actually, if evolutionary pressures continue (i.e. no singleton), it seems fairly likely that usable resources will collapse suddenly, and that resource starvation will be relatively brief. Right now, we have an energy diet from the sun--it only releases so much energy at once. But future entities may try to break up stars to use their energy more efficiently (solar fusion is highly inefficient compared to possible levels).

comment by Perplexed · 2010-08-17T16:04:47.203Z · LW(p) · GW(p)

Thanks for posting. Upvoted.

I have always had an uncomfortable feeling whenever I have been asked to include distant-future generations in my utilitarian moral considerations. Intuitively, I draw on my background in economics, and tell myself that the far-distant future should be discounted toward zero weight. But how do I justify the discounting morally? Let me try to sketch an argument.

I will claim that my primary moral responsibility is to the people around me. I also have a lesser responsibility to the next generation, and a responsibility lesser yet to the generation after that, and so on. A steep discount rate - 30% per generation or so. I will do my duty to the next generation, but in turn I expect the next generation to do its duty to the generation after that. After all, the next generation is in a far better position than I am to foresee what problems the generation after that will really face. Their efforts will be much less likely than mine to be counterproductive.

If I were to spread my concern over too many generations, I would be shortchanging the next generation of their fair share of my concern. Far-future generations have plenty of predecessor generations to worry about their welfare. The next generation has only us. We mustn't shortchange them!

This argument is just a sketch, of course. I just invented it today. Feedback is welcome.
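As a rough numerical illustration of what such a discount implies (the 30% rate is just the sketch's figure above, not anything established), the weight given to generation n falls off geometrically:

    # Geometric discounting: with a 30% discount per generation, generation n
    # receives weight 0.7**n. The 30% rate is the parent comment's sketch figure.
    rate = 0.30
    weights = [(1 - rate) ** n for n in range(11)]
    for n, w in enumerate(weights):
        print(f"generation {n:2d}: weight {w:.3f}")

    # Total concern over all future generations is finite: the sum of 0.7**n is
    # 1/0.3 ≈ 3.33, so the current generation alone gets roughly 30% of all concern.
    print("total weight over all generations:", 1 / rate)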

Replies from: timtyler, Kingreaper, Dagon
comment by timtyler · 2010-08-17T16:24:30.204Z · LW(p) · GW(p)

In nature, the best way you can help your great grand-kids is to help your children. If there was a way to help your grandchildren at the expense of your children that ultimately benefitted the grandchildren, nature might favour it - but usually there is simply no easy way to do that.

Grandparents do sometimes favour more distant offspring in their wills - if they think the direct offspring are compromised or irresponsible, for example. Such behaviour is right and natural.

Temporal discounting is a reflection of your ignorance and impotence when it comes to the distant future. It is not really that you fundamentally care less about the far future - it is more that you don't know and can't help - so investing mental resources would be rather pointless.

Replies from: Unknowns
comment by Unknowns · 2010-08-19T12:36:28.391Z · LW(p) · GW(p)

According to Robin Hanson, our behavior proves that we don't care about the far future.

Replies from: timtyler
comment by timtyler · 2010-08-19T16:58:17.880Z · LW(p) · GW(p)

Robin argues that few are prepared to invest now to prevent future destruction of the planet. The conclusion there seems to be that humans are not utilitarian agents.

Robin seems to claim that humans do not invest in order to pass things on to future generations - whereas in fact they do just that whenever they invest in their own offspring.

Obviously you don't invest in your great-grandchildren directly. You invest in your offspring - they can manage your funds better than you can do so from your wheelchair or grave.

Temporal discounting makes sense. Organisms do it because they can't see or control the far future as well as their direct descendants can. In those rare cases where that is not true, direct descendants can sometimes be bypassed.

However, you wouldn't want to build temporal discounting into the utility function of a machine intelligence. It knows its own prediction capabilities better than you do - and can figure out such things for itself.

Since that exact point was made in the Eliezer essay Robin's post was a reply to, it isn't clear that Robin understands that.

comment by Kingreaper · 2010-08-17T23:51:51.522Z · LW(p) · GW(p)

I don't think you need any discounting. Your effect on the year 2012 is somewhat predictable. It is possible to choose a course of action based on known effects on the year 2012.

Your effect on the year 3000 is unpredictable. You can't even begin to predict what effect your actions will have on the human race in the year 3000.

Thus, there is an automatic discounting effect. An act is only as valuable as its expected outcome. The expected outcome for the year 1,000,000 is almost always ~zero, unless there is some near-future extinction possibility, because the probability of your having a desired impact is essentially zero.

comment by Dagon · 2010-08-17T18:11:20.724Z · LW(p) · GW(p)

I tend to agree, in that I also have a steep discount across time and distance (though I tend to think of it as "empathetic distance", more about perceived self-similarity than measurable time or distance, and I tend to think of weightings in my utility function rather than using the term "moral responsibility").

That said, it's worth asking just how steep a discount is justifiable - WHY do you think you're more responsible to a neighbor than to four of her great-grandchildren, and do you think this is the correct discount to apply?

And even if you do think it's correct, remember to shut up and multiply. It's quite possible for there to be more than 35x as much sentience in 10 generations as there is today.
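For what it's worth, 35x is roughly the break-even factor under the 30%-per-generation discount sketched above (a quick check, assuming that rate):

    # At a 30% per-generation discount, generation 10 carries weight 0.7**10 ≈ 0.028,
    # so it needs roughly 1/0.028 ≈ 35x today's sentience to weigh equally with today.
    w10 = 0.7 ** 10
    print(w10)      # ≈ 0.0282
    print(1 / w10)  # ≈ 35.4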

comment by PaulAlmond · 2010-08-17T14:27:43.978Z · LW(p) · GW(p)

Is anyone going to propose this as an answer to (what some say is) the Fermi paradox?

Replies from: None
comment by [deleted] · 2010-08-17T14:42:04.185Z · LW(p) · GW(p)

I thought people would be too bored of it to mention. I've heard it proposed dozens of times as a possible explanation. I should probably spend less time with philosophy majors.

Anyway, the strong version of the statement is much more interesting. Not only would naturally evolved intelligences all have values that, for one reason or another, lead them to rather let it all end than endure existence; it would also mean that they never spawn AIs with values radical enough to disagree. The mind space that encompasses is mind-boggling.

Either it's hard for a civilization to build an AI with truly alien values, or they go extinct before they can build AIs (a different argument), or they decide to kill themselves before doing so (odd), or nearly all possible minds agree that nonexistence is good.

We may have very, very weird minds if the last option is the answer.

comment by [deleted] · 2010-08-17T17:18:29.945Z · LW(p) · GW(p)

what if it is more likely that maximizing utility not only fails, but that overall utility actually ends up minimized

If it does turn out that the overwhelmingly likely future is one of extreme negative utility, voluntary extinction (given some set of assumptions) IS maximizing utility.

Also, if the example really is as tangential as you're implying, it should probably not account for 95% of the text (and the title, and the links) in your post.

comment by jimrandomh · 2010-08-17T13:31:44.183Z · LW(p) · GW(p)

I cannot fathom the confusion that would lead to this question. Of course it's better for humanity to survive than to not survive. Of course it's better to go extinct in a million years than to go extinct now. The future is more wondrous and less scary than you imagine.

Replies from: komponisto
comment by komponisto · 2010-08-17T13:38:42.427Z · LW(p) · GW(p)

Of course it's better for humanity to survive than to not survive

That only makes sense if you think life is always better than death. But that certainly isn't my view -- I think some possible futures are so bad that extinction would be preferable. In that case, the answer to the title question depends on the probabilities of such futures.

EDIT: For the record, I don't think we need to resort to pulling the plug on ourselves anytime soon.

Replies from: None, XiXiDu
comment by [deleted] · 2010-08-17T14:12:07.138Z · LW(p) · GW(p)

I don't think life is always better than death according to my utility function.

I do, however, think that the most likely outcome, considering the priorities of the blind idiot god or perhaps even of self-described benevolent minds, is that the inhabitants of such spaces in the very long term are minds who are quite OK with being there.

On "Benevolent" minds: If I knew beyond a doubt that something which I would consider hell exists and that everyone goes there after being resurrected on judgment day, and I also knew that it was very unlikely that I could stop everyone from ever being born or being resurrected I would opt for trying to change or create people that would enjoy living in that hell.

comment by XiXiDu · 2010-08-17T13:48:15.136Z · LW(p) · GW(p)

My question was not meant to be interpreted literally; rather, it was instrumental in highlighting this idea: what if it is more likely that maximizing utility not only fails, but that overall utility actually ends up minimized, i.e. the amount of suffering increases? Instrumentally, isn't it better to believe that winning is impossible than that it's likely, if the actual probability is very low?

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-08-17T18:00:22.793Z · LW(p) · GW(p)

To decide to lose intentionally, I need to know how much it costs to try to win, what the odds of success are, and what the difference in utility is if I win.

I feel like people weigh those factors unconsciously and automatically (using bounded resources and rarely with perfect knowledge or accuracy).

comment by cata · 2010-08-17T13:26:28.182Z · LW(p) · GW(p)

I think freedom should win when contemplating how to ethically shape the future. I don't have any direct evidence that posthumans in a post-Singularity universe will be "happy" throughout their lives in a way that we value, but you certainly don't have evidence to the contrary, either.

As long as neither of us know the exact outcome, I think the sensible thing to do is to maximize freedom, by trying to change technology and culture to unburden us and make us more capable. Then the future can decide for itself, instead of relying on you or I to worry about these things.

Also, how many people do you know who could honestly claim, "I wish I had never been born?" Although there are certainly some, I don't think there are very many, and life down here on Earth isn't even that great.

Replies from: None, XiXiDu
comment by [deleted] · 2010-08-17T14:04:05.458Z · LW(p) · GW(p)

And there is a nearly infinite space of minds that would look at all life today and consider it better to have never existed.

The minds most likely to live for long periods in a situation in which we would judge them to be better off never having been born at all are either extremely unfree (no suicide) or already adapted to consider it perfectly tolerable or perhaps even enjoyable.

comment by XiXiDu · 2010-08-17T13:57:34.949Z · LW(p) · GW(p)

...how many people do you know who could honestly claim, "I wish I had never been born?"

I personally know a few who suffer from severe disabilities and who do not enjoy life. But this is nothing compared to the time between 10^20 and 10^100 years, when possibly trillions of God-like entities will be slowly disabled due to an increasing lack of resources. This is comparable to suffering from Alzheimer's, just much worse and longer, without any hope.

Replies from: cata
comment by cata · 2010-08-17T14:20:25.920Z · LW(p) · GW(p)

I agree, that sounds very depressing. However, I don't understand the minds, emotions, or culture of the entities that will exist then, and as such, I don't think it's ethical for me to decide in advance how bad it is. We don't kill seniors with Alzheimer's, because it's not up to us to judge whether their life is worth living or not.

Plus, I just don't see the point in making a binding decision now about potential suffering in the far future, when we could make it N years from now. I don't see how suicide would be harder later, if it turns out to be actually rational (as long as we aim to maintain freedom.)

Replies from: XiXiDu
comment by XiXiDu · 2010-08-17T14:26:21.321Z · LW(p) · GW(p)

To pull the plug later could (1) be impossible, (2) result in more death than it would now.

However, I agree with you. It was not my intention to suggest we should abort humanity, but rather to inquire about the similarities to the abortion of a fetus that is predicted to suffer from severe disabilities in its possible future life.

Further, my intention was to inquire about the perception that it is our moral responsibility to minimize suffering. If we cannot minimize it by actively shaping the universe, but rather risk increasing it, what should we do about it?

Replies from: cata
comment by cata · 2010-08-17T15:37:19.833Z · LW(p) · GW(p)

I don't really understand your greater argument. Inaction (e.g. sitting on Earth, not pursuing AI, not pursuing growth) is not morally neutral. By failing to act, we're risking suffering in various ways: insufficiency of resources on the planet, political and social problems, or a Singularity perpetrated by actors who are not acting in the interest of humanity's values. All of these could potentially result in the non-existence of all the future actors we're discussing. That's got to be first and foremost in any discussion of our moral responsibility toward them.

We can't opt out of shaping the universe, so we ought to do as good a job as we can, as per our values. The more powerful humanity is, the more options are open to us, and the better for our descendants to re-evaluate our choices and further steer our future.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-17T15:54:20.546Z · LW(p) · GW(p)

The argument is about action. We forbid inbreeding because it causes suffering in future generations. Now, if there is no way that the larger future could be desirable, i.e. if suffering prevails, then I ask: how many entities would have to suffer before we forbid humanity from seeding the universe? What is your expected number of entities born after 10^20 years who will face an increasing lack of resources until the end at around 10^100 years? All of them are doomed to face a future that might be shocking and undesirable. This is not a small part of the future but most of it.

The more powerful humanity is, the more options are open to us, and the better for our descendants to re-evaluate our choices and further steer our future.

But what speaks for our future ability to stop entropy?

Replies from: cata
comment by cata · 2010-08-17T17:21:42.119Z · LW(p) · GW(p)

If we can't stop entropy, then we can't stop entropy, but I still don't see why our descendants should be less able to deal with this fact than we are. We appreciate living regardless, and so may they.

Surely posthuman entities living at the 10^20 year mark can figure out much more accurately than us whether it's ethical to continue to grow and/or have children at that point.

As far as I can tell, the single real doomsday scenario here is this: posthumans are no longer free to commit suicide, but they nevertheless continue to breed; heat death is inevitable; and life in a world with ever-decreasing resources is a fate worse than death. That would be pretty bad, but the first and last conditions seem to me unlikely enough, and all four are inscrutable enough from our limited perspective, that I don't see a present concern.

comment by XiXiDu · 2010-08-17T18:58:49.241Z · LW(p) · GW(p)

I added to the post:

To exemplify this, let's assume there were 100 entities. At a certain point the universe will cease to provide enough resources to sustain 100 entities. So either the ruling FAI (friendly AI) is going to kill one entity or reduce the mental capabilities of all 100. This will continue until all of them are either killed or reduced to a shadow of their former selves. This is a horrible process that will take a long time. I think you could call this torture until the end of the universe.

Replies from: Kingreaper
comment by Kingreaper · 2010-08-18T00:06:56.355Z · LW(p) · GW(p)

Gradually reducing mental processing speed as the universe's heat death approaches (i.e. at a point where nothing else of interest is occurring) and dying painlessly are analogous.

Neither of those options is, in any sense, torture. They're just death.

So I'm really not sure what you're getting at.

comment by Jonathan_Graehl · 2010-08-17T17:51:16.296Z · LW(p) · GW(p)

What if there isn't even enough fun for normal human beings to live to the age of 150 and still have fun?

Really?

my intention is to inquire about the perception that it is our moral responsibility to minimize suffering

It's okay for us to cause more entities to exist, for a greater sum of suffering, provided that it's one of the better possible outcomes.

While intervening in a way that (with or without consent) inflicts a certain amount of additional net suffering on others (such as causing them to be created) is to be avoided, all other things being equal, it's justifiable if the net fun is increased by some multiple of the suffering (the requisite multiple depends on consent and on who gets the fun; i.e., if you're gaining fun by torturing another, the multiple may have to be huge).

I agree that we should consider the possibility of suffering. Suicide (by radical modification into something that is not suffering, or actual termination) seems like an easy solution.

I imagine some "artist" eventually creating a creature that is sentient, feels great pain, and eloquently insists that it does not want to be changed, or ended. Sick bastard. Or perhaps it would merely be programmed to elaborately fake great pain, to others' discomfort, while secretly reveling in it. I imagine technology would be able to tell the difference.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-17T18:25:32.549Z · LW(p) · GW(p)

I updated the unreasonable age of 150 to 1,000 in the OP. I was thinking about myself and how movies seem to become less interesting the more of them I watch, as the number of unique plots and the amount of general content they expose continues to decrease.

Thanks for your insightful comment.

Replies from: None
comment by [deleted] · 2010-08-17T21:53:32.243Z · LW(p) · GW(p)

I have at least a century of interesting math waiting for me that I will never get to. I feel really bad every time I think about that.

Replies from: cousin_it
comment by cousin_it · 2010-08-17T21:55:32.359Z · LW(p) · GW(p)

Seconded.

And more new interesting math seems to get created all the time. It's like drinking from the firehose.

comment by XiXiDu · 2010-08-17T14:38:35.318Z · LW(p) · GW(p)

I added to the post. Please read the "To clarify" addendum. Thank you.

comment by PhilGoetz · 2010-08-23T18:42:54.319Z · LW(p) · GW(p)

This is why Buddhism is dangerous.

comment by Summerspeaker · 2010-08-18T14:27:23.819Z · LW(p) · GW(p)

What do y'all think about John Smart's thesis that an inward turn is more likely than the traditional script of galactic colonization?

http://www.accelerating.org/articles/answeringfermiparadox.html

Rather wild read, but perhaps worth a thought. Would that alternative trajectory affect your opinion of the prospect, XiXiDu?

comment by [deleted] · 2010-08-17T17:00:18.676Z · LW(p) · GW(p)

Interesting thoughts. I also haven't finished the fun sequence, so this may be malformed. The way I see it is this: You can explore and modify your environment for fun and profit (socializing counts here too), and you can modify your goals to get more fun and profit without changing your knowledge.

Future minds may simply have a "wirehead suicide contingency" they choose to abide by, by which, upon very, very strong evidence that they can have no more fun with their current goals, they could simply wirehead themselves. Plan it so that the value of just being alive and experiencing the slow end of the world goes up as other sources of fun diminish. (And leave in there a huge reward for discovering that you are wrong, just not motive to seek it out irrationally).

You would need a threshold of probability that life is going to suck forever from here on out, only after which the contingency would be initiated.

comment by timtyler · 2010-08-17T16:27:07.888Z · LW(p) · GW(p)

Relevant literature: "Should This Be the Last Generation?" by Peter Singer

Replies from: XiXiDu
comment by XiXiDu · 2010-08-17T16:30:38.379Z · LW(p) · GW(p)

Great, thank you.

Replies from: timtyler
comment by timtyler · 2010-08-17T16:40:33.225Z · LW(p) · GW(p)

's OK. That lists a whole book about the topic - and there is also:

"The Voluntary Human Extinction Movement"

Replies from: None
comment by [deleted] · 2010-08-17T21:47:51.536Z · LW(p) · GW(p)

If any movement is dysgenic, that surely must be it.

Let's see: people who are altruistic and in control of their instincts and emotions enough not to have children in order to alleviate very distant future suffering (which, to top it all off, is a very, very abstract argument to begin with), yeah, those are the kind of people who should stop having children first. Great plan.

I first wanted to write "self-defeating", but I soon realized they may actually get their wish, though only if they convince enough of the people whose kids should be working on friendly AI in 20-something years to rather not have the second or even the first one.

But it won't leave the Earth to "nature" as they seem to be hoping.

comment by XiXiDu · 2010-08-17T15:29:10.278Z · LW(p) · GW(p)

I'd like to ask those people who downvote this post for their reasons. I thought this was a reasonable antiprediction to the claims made regarding the value of a future galactic civilisation. Based on economic and scientific evidence, it is reasonable to assume that the better part of the future, namely the time from 10^20 to 10^100 years (and beyond), will be undesirable.

If you spend money and resources on the altruistic effort of trying to give birth to this imagined galactic civilisation, why don't you take into account the more distant and much larger part of the future that lacks the resources to sustain such a civilisation? You are deliberately causing suffering here by putting short-term interests over those of the bigger part of the future.

Replies from: Emile, Vladimir_Nesov, Dagon, Kingreaper, neq1
comment by Emile · 2010-08-17T15:55:44.592Z · LW(p) · GW(p)

I didn't downvote the post - it is thought-provoking, though I don't agree with it.

But I had a negative reaction to the title (which seems borderline deliberately provocative to attract attention), and the disclaimer - as thomblake said, "Please write posts such that they can be interpreted literally, so the gist follows naturally from the literal reading."

Replies from: XiXiDu
comment by XiXiDu · 2010-08-17T16:12:02.726Z · LW(p) · GW(p)

It is the disclaimer. I was rather annoyed at all the comments on my other post. People claimed things that, to my understanding, I never said. And if what I said were analyzed, I'm sure nobody could show me how to arrive at such conclusions. As was obvious, not even EY read my post; he simply took something out of context and ran with it.

comment by Vladimir_Nesov · 2010-08-18T00:41:38.239Z · LW(p) · GW(p)

Future is the stuff you build goodness out of. The properties of stuff don't matter, what matters is the quality and direction of decisions made about arranging it properly. If you suggest a plan with obvious catastrophic problems, chances are it's not what will be actually chosen by rational agents (that or your analysis is incorrect).

Replies from: XiXiDu
comment by XiXiDu · 2010-08-18T09:12:22.365Z · LW(p) · GW(p)

The analysis is incorrect? Well, ask the physicists.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-18T09:30:22.839Z · LW(p) · GW(p)

Moral analysis.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-18T09:51:09.429Z · LW(p) · GW(p)

Yes, I think so too. But I haven't seen any good arguments against negative utilitarianism in the comments yet. (More here)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-18T10:05:33.637Z · LW(p) · GW(p)

You lost the context. Try not to drift.

Replies from: Wei_Dai, XiXiDu
comment by Wei Dai (Wei_Dai) · 2010-08-18T10:14:41.385Z · LW(p) · GW(p)

Is this really worth your time (or Carl Shulman's)? Surely you guys have better things to do?

Replies from: XiXiDu, CarlShulman
comment by XiXiDu · 2010-08-18T10:23:05.197Z · LW(p) · GW(p)

If you tell me where my argumentation differs from arguments like this, I'll know if it is a waste or not. I can't figure it out.

comment by CarlShulman · 2010-08-18T10:39:10.105Z · LW(p) · GW(p)

Since XiXiDu's and multifoliaterose's posts have all been made during the Singularity Summit, when everyone at SIAI is otherwise occupied and so cannot respond, I thought someone familiar with the issues should engage rather than leave a misleading appearance of silence. And giving a bit of advice that I think has a good chance of improving XiXiDu's contributions seemed reasonable and not too costly.

comment by XiXiDu · 2010-08-18T10:17:52.147Z · LW(p) · GW(p)

Future is the stuff you build goodness out of. The properties of stuff don't matter, what matters is the quality and direction of decisions made about arranging it properly.

There is not enough stuff to sustain a galactic civilization for very long (relative to the time the universe is expected to be able to sustain intelligence). There is no way to alter the quality or direction of the fundamental outcome so as to overcome this problem (given what we know right now).

If you suggest a plan with obvious catastrophic problems, chances are it's not what will be actually chosen by rational agents (that or your analysis is incorrect).

That's what I am inquiring about: is it rational, given that we adopt a strategy of minimizing suffering? Or are we going to create trillions of beings to have fun for a relatively short period and then leave them to suffer or commit suicide for a much longer one?

comment by Dagon · 2010-08-17T18:21:00.146Z · LW(p) · GW(p)

It's a worthwhile question, but probably fits better on an open thread for the first round or two of comments, so you can refine the question to a specific proposal or core disagreement/question.

My first response to what I think you're asking is that this question applies to you as an individual just as much as it does to humans (or human-like intelligences) as a group. There is a risk of sadness and torture in your future. Why keep living?

comment by Kingreaper · 2010-08-18T00:13:05.092Z · LW(p) · GW(p)

I thought this was a reasonable antiprediction to the claims made regarding the value of a future galactic civilisation. Based on economic and scientific evidence, it is reasonable to assume that the better part of the future, namely the time from 10^20 to 10^100 years (and beyond), will be undesirable.

I don't believe that is a reasonable prediction. You're dealing with timescales so far beyond human lifespans that assuming they will never think of the things you think of is entirely implausible.

In this horrendous future of yours, why do people keep reproducing? Why don't the last viable generation (knowing they're the last viable generation) cease reproduction?

If you think that this future civilisation will be incapable of understanding the concepts you're trying to convey, what makes you think we will understand them?

Replies from: XiXiDu
comment by XiXiDu · 2010-08-18T09:00:58.809Z · LW(p) · GW(p)

It is not about reproduction, but that by that time there will already be many more entities alive than ever before. And they will all have to die. Now only a few have to die or suffer.

And it is not my future. It's much more grounded in evidence than the near-term future talked about on LW.

Replies from: Kingreaper
comment by Kingreaper · 2010-08-18T11:15:10.084Z · LW(p) · GW(p)

Ah, I get it now, you believe that all life is necessarily a net negative. That existing is less of a good than dying is of a bad.

I disagree, and I suspect almost everyone else here does too. You'll have to provide some justification for that belief if you wish us to adopt it.

Replies from: Baughn
comment by Baughn · 2010-08-18T15:48:12.654Z · LW(p) · GW(p)

I'm not sure I disagree, but I'm also not sure that dying is a necessity. We don't understand physics yet, much less consciousness; it's too early to assume death is a certainty, which means I have a significantly nonzero confidence of life being an infinite good.

Replies from: ata, XiXiDu
comment by ata · 2010-08-18T15:52:15.215Z · LW(p) · GW(p)

I have a significantly nonzero confidence of life being an infinite good.

Doesn't that make most expected utility calculations meaningless?

Replies from: Baughn
comment by Baughn · 2010-08-18T16:02:45.338Z · LW(p) · GW(p)

A problem with the math, not with reality.

There are all kinds of mathematical tricks for dealing with infinite quantities. Renormalization is something you'd be familiar with from physics; from my own CS background, I've got asymptotic analysis (which can't see the fine details, but can easily handle large ones). Even something as simple as taking the derivative of your utility function would often be enough to tell which alternative is best.

I've also got a significantly nonzero confidence of infinite negative utility, mind you. Life isn't all roses.
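
To make the "derivative of your utility function" idea concrete, here is a minimal worked sketch; the utility streams u_A and u_B are illustrative assumptions, not anything claimed in the comment:

```latex
% Minimal sketch: comparing two divergent utility streams by growth rate.
% The streams u_A, u_B are illustrative assumptions.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Suppose two policies yield instantaneous utilities $u_A(t) = 2$ and
$u_B(t) = 1$ for all $t \ge 0$. Both totals diverge,
\[
  U_A(T) = \int_0^T u_A(t)\,dt = 2T \to \infty, \qquad
  U_B(T) = \int_0^T u_B(t)\,dt = T \to \infty,
\]
yet the comparison is still well defined through derivatives or limits:
\[
  \frac{dU_A}{dT} = 2 > 1 = \frac{dU_B}{dT},
  \qquad
  \lim_{T \to \infty} \frac{U_A(T)}{U_B(T)} = 2,
\]
so $A$ eventually overtakes $B$ and stays ahead (the ``overtaking
criterion'' used in infinite-horizon economics).
\end{document}
```

This only covers cases where the comparison has a well-behaved limit; pathological streams still break it, which is presumably part of ata's worry.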

comment by XiXiDu · 2010-08-18T15:53:47.508Z · LW(p) · GW(p)

We already donate based on the assumption that superhuman AI is possible and that it is right to base our decisions on its extrapolated utility and that of a possible galactic civilisation. Why are we not able to make decisions based on a more evidence-based economic and physical assumption, namely a universe that is unable to sustain a galactic civilisation for most of its lifespan, and on the extrapolated suffering that follows from this prediction?

Replies from: Baughn
comment by Baughn · 2010-08-18T16:06:32.599Z · LW(p) · GW(p)

Well, first off...

What kind of decisions were you planning to take? You surely wouldn't want to make a "friendly AI" that's hardcoded to wipe out humanity; you'd expect it to come to the conclusion that that's the best option by itself, based on CEV. I'd want it to explain its reasoning in detail, but I might even go along with that.

My argument is that it's too early to take any decisions at all. We're still in the data collection phase, and the state of reality is such that I wouldn't trust anything but a superintelligence to be right about the consequences of our various options anyway.

We can decide that such a superintelligence is right to create, yes. But having decided that, it makes an awful lot of sense to punt most other decisions over to it.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-18T16:11:38.095Z · LW(p) · GW(p)

We can decide that such a superintelligence is right to create, yes. But having decided that, it makes an awful lot of sense to punt most other decisions over to it.

True, I have to read up on CEV and see whether there is a possibility that a friendly AI could decide to kill us all to reduce suffering in the long term.

The whole idea in the OP stems from the kind of negative utilitarianism that suggests it is not worth torturing 100 people infinitely to make billions happy. So I thought to extrapolate this and ask: what if we figure out that in the long run most entities will be suffering?

Replies from: Baughn
comment by Baughn · 2010-08-18T16:17:28.152Z · LW(p) · GW(p)

Negative utilitarianism is... interesting, but I'm pretty sure it entails an immediate requirement to collectively commit suicide no matter what (short of continued existence, inevitably(?) ended by death, possibly being less bad than suicide, which seems unlikely) - am I wrong?

That's not at all similar to your scenario, which holds the much more reasonable assumption that the future might be a net negative even while counting the positives.

comment by neq1 · 2010-08-18T00:32:00.144Z · LW(p) · GW(p)

In my opinion, the post doesn't warrant -90 karma points. That's pretty harsh. I think you have plenty to contribute to this site -- I hope the negative karma doesn't discourage you from participating, but rather, encourages you to refine your arguments (perhaps get feedback in the open thread first?)

Replies from: XiXiDu
comment by XiXiDu · 2010-08-18T09:08:14.688Z · LW(p) · GW(p)

That I get bad karma here is, in my opinion, entirely due to bias. People just don't realize that I'm basing extrapolated conclusions on some shaky premises, just like LW does all the time when talking about the future galactic civilization and risks from AI. The difference is, my predictions are much more grounded in evidence.

The post is a parody of all that is wrong with this community. I already thought I'd get bad karma for my other post but was surprised not to. I'll probably get really bad karma now that I've said this. Oh well :-)

To be clear, this is a thought experiment asking what we can and should do if we are ultimately prone to cause more suffering than happiness. It's nothing more than that. People suspect that I'm making strong arguments, that this is my opinion, that I'm asking for action. All of that is wrong; I'm not the SIAI. I can argue for things I don't support and don't even think are sound.

Replies from: CarlShulman, Kevin
comment by CarlShulman · 2010-08-18T09:34:39.228Z · LW(p) · GW(p)

Note that multifoliaterose's recent posts and comments have been highly upvoted: he's gained over 500 karma in a few days for criticizing SIAI. I think that the reason is that they were well-written, well-informed, and polite while making strong criticisms using careful argument. If you raise the quality of your posts I expect you will find the situation changing.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-18T09:47:19.760Z · LW(p) · GW(p)

You are one of the few people here whose opinion I'm actually taking seriously, after many insightful and polite comments. What is the bone of contention in the OP? I took a few different ingredients: Robin Hanson's argumentation about resource problems in the far future (the economic argument); questions based on negative utilitarianism (the ethical argument); the most probable fate of the universe given current data (the basic premise). Then I extrapolated from there and created an antiprediction. That is, I said that it is too unlikely that the outcome will be good to believe that it is possible. Our responsibility is to prevent a lot of suffering over 10^100 years.

I never said I support this conclusion or think that it is sound. But I think it is very similar to other arguments within this community.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-18T10:27:25.120Z · LW(p) · GW(p)

On a thematic/presentation level I think the biggest problem was an impression that the post was careless, attempting to throw as many criticisms as possible at its target without giving a good account of any one. This impression was bolstered by the disclaimer and the aggressive rhetorical style (which "reads" angry, and doesn't fit with norms of politeness and discourse here).

Substantively, I'll consider the major pieces individually.

The point that increasing populations would result in more beings that would quite probably die is not a persuasive argument to most people, who are glad to exist and who do not believe that creating someone to live a life which is mostly happy but then ends is necessarily a harm. You could have presented Benatar's arguments and made your points more explicit, but instead simply stated the conclusion.

The empirical claim that superhuman entities awaiting the end of the universe would suffer terribly with resource decline was lacking in supporting arguments. Most humans today expect to die within no more than a hundred years, and yet consider their lives rather good. Superintelligent beings capable of directly regulating their own emotions would seem well-positioned to manage or eliminate stress and suffering related to resource decline. David Pearce's Hedonistic Imperative is relevant here: with access to self-modification capacities entities could remain at steadily high levels of happiness, while remaining motivated to improve their situations and realize their goals.

For example, it would be trivial to ensure that accepting agreed upon procedures for dealing with the "lifeboat ethics" scenarios you describe at the end would not be subjectively torturous, even while the entities would prefer to live longer. And the comparison with Alzheimer's doesn't work: carefully husbanded resources could be used at the rate preferred by their holders, and there is little reason to think that quality (as opposed to speed or quantity) of cognition would be much worsened.

In several places throughout the post you use "what if" language without taking the time to present sufficient arguments in favor of plausibility, which is a rationalist faux-pas.

Edit: I misread the "likely" in this sentence and mistakenly objected to it.

Might it be better to believe that winning is impossible, than that it's likely, if the actual probability is very low?

Replies from: XiXiDu
comment by XiXiDu · 2010-08-18T10:41:31.513Z · LW(p) · GW(p)

I think that spending more time reading the sequences, and the posts of highly upvoted Less Wrongers such as Yvain and Kaj Sotala, will help you to improve your sense of the norms of discourse around here.

I copied that sentence from here (last sentence).

Thanks, I'll quit making top-level posts, as I doubt I'll ever be able to exhibit the attitude required for the level of thought and elaboration you demand. That was actually my opinion before making the last and the first post. But all this (in my opinion laughable) attitude around Roko's post made me sufficiently annoyed to signal my incredulity.

ETA

In several places throughout the post you use "what if" language without taking the time to present sufficient arguments in favor of plausibility, which is a rationalist faux-pas.

The SIAI = What If?

Replies from: CarlShulman
comment by CarlShulman · 2010-08-18T10:57:02.562Z · LW(p) · GW(p)

I copied that sentence from here (last sentence).

I completely misread that sentence, taking "likely" as 0 < p < 1. My apologies.

comment by Kevin · 2010-08-18T09:15:56.624Z · LW(p) · GW(p)

I think you should probably read more of the Less Wrong sequences before you make more top-level posts. Most of the highly upvoted posts are by people who have the knowledge background from the sequences.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-18T09:26:16.745Z · LW(p) · GW(p)

I'm talking about this kind of statement: http://www.vimeo.com/8586168 (5:45)

"If you confront it rationally full on then you can't really justify trading off any part of galactic civilization for anything that you could get now days."

So why, I ask you directly, am I not to argue that we can't really justify balancing the happiness and utility of a galactic civilization against the MUCH longer time of decay? There is this whole argument about how we have to give rise to the galactic civilization and have to survive now. But I predict that suffering will prevail. That it is too unlikely that the outcome will be positive. What is wrong with that?

comment by [deleted] · 2010-08-17T13:34:30.131Z · LW(p) · GW(p)

First a few minor things I would like to get out there:

We are, according to a consensus which I do not dispute (since it's well founded), slowly approaching heat death. If I recall correctly, we are supposed to approach maximum entropy asymptotically. Can we, with our current knowledge, completely rule out the possibility of some kind of computational machinery existing and waking up every now and then (at longer and longer intervals) in the wasteland universe to churn a few cycles of a simulated universe?

I don't quite see the difference between real and simulated torture in the context of a civilization as advanced as the one you are arguing we should not let develop. So I'm not sure what you are getting at by mentioning them as separate things.

You need to read up on fun theory. And if you disregard it, let me just point out that worrying about people not having fun is a different concern from assuming they will experience mental anguish at the prospect of suicide or an inevitable death. Actually, not having fun can be neatly solved by suicide once you exhaust all other options, as long as you aren't built to find committing to it stressful.

Now, assuming your overall argument has merit: my value function says it's better to have loved and lost than not to have loved at all.

Humans may have radically different values once they are blown up to scale. Unless you get your finger into the first AI's values first, there will always be a nonzero fraction of agents who would wish to carry on even knowing it will increase total suffering, because they feel their values are worth suffering for. I am basically talking about practicality now: so what if you are right? The only way to do anything about it is to make sure your AI eliminates anything, be it human or alien AI, that can paperclip anything like beings capable of suffering. To do this properly in the long run (not just kill or sterilize all humans, which is easy), you need to understand friendliness much better than we do now.

If you want to learn about friendliness, you'd better try and learn to deceive the agents with whom you might be able to work together to figure out more about it, especially concerning your motives. ;)

Replies from: humpolec, XiXiDu
comment by humpolec · 2010-08-17T13:53:06.364Z · LW(p) · GW(p)

We are, according to a consensus which I do not dispute (since it's well founded), slowly approaching heat death. If I recall correctly, we are supposed to approach maximum entropy asymptotically. Can we, with our current knowledge, completely rule out the possibility of some kind of computational machinery existing and waking up every now and then (at longer and longer intervals) in the wasteland universe to churn a few cycles of a simulated universe?

Dyson's eternal intelligence. Unfortunately, I know next to nothing about physics, so I have no idea how this relates to what we know about the universe.

Replies from: Baughn
comment by Baughn · 2010-08-18T15:55:18.645Z · LW(p) · GW(p)

It runs into edge conditions we know little about; like, are protons stable or not. (The answer appears to be no, by the way.)

At this point in time I would not expect to be able to do infinite computation in the future. The future has a way of surprising us, though; I'd prefer to wait and see.
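
For reference, a back-of-the-envelope version of Dyson's scheme linked above; the geometric ratios are illustrative assumptions, and the whole argument presumes that energy per operation scales with temperature while ignoring exactly the edge conditions (proton decay, a minimum de Sitter temperature) raised here:

```latex
% Back-of-the-envelope sketch of Dyson's eternal-intelligence argument.
% The ratios a and q are illustrative assumptions.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Suppose that in epoch $k$ the machinery runs at temperature
$T_k = T_0 q^k$ (with $0 < q < 1$) and performs $N_k = N_0 a^k$
operations (with $a > 1$), each costing on the order of
$k_B T_k \ln 2$ of free energy. Total subjective experience diverges,
\[
  \sum_{k=0}^{\infty} N_k = N_0 \sum_{k=0}^{\infty} a^k = \infty,
\]
while the total energy spent stays finite whenever $aq < 1$:
\[
  E = \sum_{k=0}^{\infty} N_k \, k_B T_k \ln 2
    = N_0 k_B T_0 \ln 2 \sum_{k=0}^{\infty} (aq)^k
    = \frac{N_0 k_B T_0 \ln 2}{1 - aq} < \infty .
\]
For example, $a = 2$ and $q = 1/4$ give unbounded computation on a
finite energy budget, which is the loophole the question above asks about.
\end{document}
```

Whether that loophole survives proton decay or a positive cosmological constant is precisely what remains unknown here.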

comment by XiXiDu · 2010-08-17T14:19:15.659Z · LW(p) · GW(p)

I don't quite see the difference between real and simulated torture...

I tried to highlight the increased period of time you have to take into account. This allows for even more suffering than the already huge time span would suggest from a human perspective.

You need to read up on fun theory.

Indeed, but I felt this additional post was required, as many people were questioning this point in the other post. Also, I came across a post by a physicist which triggered this post. I simply have my doubts that the sequence you mention has resolved this issue. But I will read it, of course.

My value function says it's better to have loved and lost than not to have loved at all.

Mine too. I would never recommend giving up. I want to see the last light shine. But I perceive many people here to be focused on the amount of possible suffering, so I thought to inquire what they would recommend if it is more likely that the overall suffering will increase. Would they rather pull the plug?

comment by Summerspeaker · 2010-08-17T21:54:44.576Z · LW(p) · GW(p)

On balance I'm not too happy with the history of existence. As Douglas Adams wrote, "In the beginning the Universe was created. This has made a lot of people very angry and has been widely regarded as a bad move." I'd rather not be here myself, so I find the creation of other sentients a morally questionable act. On the other hand, artificial intelligence offers a theoretical way out of this mess. Worries about ennui strike me as deeply misguided. Oppression, frailty, and stupidity make hanging out in this world unpleasant, not any lack of worthwhile pursuits. Believe me, I could kill a few millennia no problem. If Kurzweil's dreams of abundance (in every sense) come true, I won't be complaining.

Now, the notion of a negative but nonfatal Singularity deserves consideration. The way I typically see things, there's either death or Singularity in the long run and both are good. Indefinite life extension without revolutionary economic and social change would be a nightmare, though perhaps better at every individual point than the pain of aging.

Your concerns about the ultimate fate of the universe are intriguing but too distant to arouse much emotion from me. Who knows what will happen then? Such entities might travel to other universes or forge their own. I'll just say that judging by the present record, intelligence and suffering go together. Whether we can escape this remains to be seen.