Living in the shadow of superintelligence
post by Mitchell_Porter · 2013-06-24T12:06:18.614Z · LW · GW · Legacy · 17 comments
Although Less Wrong regularly discusses the possibility of superintelligences with the power to transform the universe in the service of some value system - whether that value system is paperclip maximization or some elusive extrapolation of human values - it seems the site has never systematically discussed the possibility that we are already within the domain of some superintelligence, and what that would imply. So how about it? What are the possibilities, what are the probabilities, and how should they affect our choices?
17 comments
Comments sorted by top scores.
comment by DSherron · 2013-06-24T12:47:23.158Z · LW(p) · GW(p)
Endless, negligible, and not at all. Reference every atheism argument ever.
Replies from: loup-vaillant
↑ comment by loup-vaillant · 2013-06-25T16:23:55.525Z · LW(p) · GW(p)
I disagree with "not at all", to the extent that the Matrix probably has much less computing power than the universe it runs on. Plus, it could have exploitable bugs.
For us mere mortals this is not a question worth asking, but a wannabe super-intelligence should probably think about it for at least a nanosecond.
Replies from: DSherron
↑ comment by DSherron · 2013-06-26T00:54:16.050Z · LW(p) · GW(p)
Hell, it's definitely worth us thinking about it for at least half a second. Probably a lot more than that. It could have huge implications if we discovered evidence of any kind of powerful agent affecting the world, Matrix-esque or not. Maybe we could get into heaven by praying to it, or maybe it would reward us based on the number of paperclips we created per day. Maybe it wouldn't care about us; maybe it would actively want to cause us pain. Maybe we could use it; maybe it poses an existential risk. All sorts of possible scenarios there, and the only way to tell what actions are appropriate is to examine... the... evidence... oh right. There is none, because in reality we don't live in the Matrix and there isn't any superintelligence out there in our universe. So we file away the thought, with a note that if we ever do run into evidence of such a thing (improbable events with no apparent likely cause), we should pull it back out and check. But that's not the same as thinking about it. In reality, we don't live in that world, and to the extent that is true, the answer to "what do we do about it" is "exactly what we've always done."
comment by Shmi (shminux) · 2013-06-24T14:46:18.891Z · LW(p) · GW(p)
Matrixy scenarios have been discussed here and elsewhere (including xkcd and smbc) quite a bit, actually. Given that a superintelligence would be incomprehensible to mere mortals, it's kind of pointless to discuss this eventuality seriously. Plus you risk joining the dark world of anthropics. However, we can understand a similar situation from the other end: the life of animals in the shadow of humans. Some of these humans are paperclip maximizers (they displace animals to make room for crops), and others have an animal-CEV value system (various animal rights groups). This has also been discussed a lot, of course. Unfortunately, the only benefit of the latter discussion that I can see is as a low-inferential-distance analogy in the fight to raise awareness of the potential dangers of "living in the shadow of superintelligence" to humans.
Replies from: RolfAndreassen
↑ comment by RolfAndreassen · 2013-06-24T16:27:18.517Z · LW(p) · GW(p)
I observe that even animal rights groups are somewhat unlikely to assert the right of lions to eat humans, which an anthropomorphized lion would likely consider a fundamental value. So there's a limit to our Friendliness. :)
Replies from: DanielLC, shminux
↑ comment by Shmi (shminux) · 2013-06-24T16:38:45.892Z · LW(p) · GW(p)
the right of lions to eat humans
I'm pretty sure that lions prefer gazelles to humans, and generally don't care too much as long as there is enough food. Maybe a decent zoo is what the lion CEV amounts to :)
Replies from: Jayson_Virissimo
↑ comment by Jayson_Virissimo · 2013-06-24T19:10:46.837Z · LW(p) · GW(p)
Maybe a decent zoo is what the human CEV amounts to.
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-06-24T19:42:03.135Z · LW(p) · GW(p)
Yes, this has been suggested repeatedly, too, because the analogy is so obvious.
comment by [deleted] · 2013-06-24T15:19:17.878Z · LW(p) · GW(p)
You've never read anything Will_Newsome has posted?
comment by JoshuaFox · 2013-06-24T12:50:21.109Z · LW(p) · GW(p)
What are the SI's goals? We'd expect to see some conditions that are (almost) always achieved -- that's what an SI is. Maybe: the laws of physics.
But that's not a human-relevant goal except in the broadest sense that we must make our way through a physics-constrained world. It would be far more interesting if there were an SI whose goals closely related to humans' goals. Is there?
comment by [deleted] · 2013-06-24T13:27:33.279Z · LW(p) · GW(p)
We definitely do live in a world where there are representatives all along the intelligence spectrum. We also live in a world where the difference between the mental life of an infant and that of an adult is so great that we adults can't remember or convey what it was like to be infants. Then there are humans and the other primates. Plenty of intelligence-disparity relations are already in play. If there is a level up, it will have the same wonderful and terrible relation to us that we already have with each other.
comment by Quantumental · 2013-06-27T22:14:40.897Z · LW(p) · GW(p)
I find it highly unlikely that a superintelligence would care to create a medieval simulation with tons of suffering.
comment by Will_Newsome · 2013-06-25T18:47:10.372Z · LW(p) · GW(p)
Portions for foxes.
comment by TheOtherDave · 2013-06-24T16:14:34.015Z · LW(p) · GW(p)
Some earlier discussion along these lines here.