↑ comment by Mitchell_Porter ·
2012-10-17T09:03:34.182Z · LW(p) · GW(p)
On these three issues - I'll call them post-singularity concentration of power, size and duration of post-singularity civilization, and length of post-singularity lifespan - my preference is to make the low estimate the default, but to regard the high estimates as defensible.
In each case, there are arguments (increasing returns on self-enhancement, no sign of alien civilizations, fun theory / basic physics) in favor of the high estimate (one AI rules the world, the winner on Earth gets this Hubble volume as prize, no psychological or physical limits on lifespan shorter than that of the cosmos). And it's not just an either-or choice between low estimate and high estimate; the low estimates could be off by just a few orders of magnitude, and that would still mean a genuinely transcendent future.
However, the real situation is that we just don't know how much power could accrue to a developing AI, how big a patch of space and time humanity's successors might get to claim as their own, and how much lifespan is achievable or bearable. Science and technology have undermined the intellectual basis for traditional ideas of what the limits are, but they haven't ratified the transcendent scenarios either. So the "sober" thing to do is to say that AIs can become at least as smart as human beings, that we could live throughout the solar system, and that death by old age is probably unnecessary; and to allow that even more than that may be possible, with the extremes being represented by what I've just called the transcendent scenarios.
I think FAI is still completely relevant and necessary, even in the minimal scenario. UFAI can still wipe out the human race in that scenario; it would be just another genocide, just another extinction, in which one group of sapients displaces another, and history is full of such events. Similarly, the usual concerns of FAI - identifying the correct value system, stability under self-enhancement - remain completely relevant even if we're just talking about AIs of roughly humanesque capabilities, as do concepts like CEV and reflective decision theory.
This has all become clear to me when I think what it would take for the concept of Friendly AI (whether under that name, or some other) to really gain traction, culturally and politically. I can make the case for the minimal future scenario; I can make the case that something beyond the minimal scenario is possible, and that we don't know how much beyond. I can live with the existence of people who dream about ruling their own galaxy one day. But it would be very unwise to tie serious FAI advocacy to that sort of dreaming.
Maybe one day there will be mass support, even for that sort of transhumanism; but that will be the extremist wing of the movement, a bit like the role of fundamentalists in conservative American politics. They are part of the conservative coalition, so they matter politically, but when they actually take over, you get craziness.
Replies from: loup-vaillant
↑ comment by loup-vaillant ·
2012-10-19T18:31:48.202Z · LW(p) · GW(p)
AIs can become at least as smart as human beings, that we could live throughout the solar system, and that death by old age is probably unnecessary
That is a much better starting point than the original post. Let's explore those lower bounds:
Even if we can't make something smarter than us, there's no reason to think we can't make something faster than us. Plus, it's only a program, so it can duplicate itself, say, over the internet.
Even if it's no smarter than us, it can still be pretty much unstoppable.
The Sun is due to die some billions of years in the future. Hominoidea have only existed for about 28 million years, so our potential future is at least 2 orders of magnitude longer than our past. Add population size into the mix (say, 10 billion and stabilizing, versus fewer than 1 million for most of our past), and we add 4 more orders of magnitude to the value of the future.
So, the potential of our future is at least a million times greater than our entire past. That's one hell of a potential, for a conservative estimate.
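The arithmetic above can be sketched as a quick back-of-the-envelope calculation. The specific figures (28 million years for Hominoidea, a few billion years of remaining solar lifetime, and the two population estimates) are the round numbers assumed in the comment, not precise values:

```python
import math

# Assumed round figures from the estimate above
past_duration_years = 28e6     # rough age of Hominoidea
future_duration_years = 3e9    # rough remaining lifetime of the Sun
past_population = 1e6          # typical human population for most of our past
future_population = 10e9       # assumed stabilized future population

duration_ratio = future_duration_years / past_duration_years    # ~100x
population_ratio = future_population / past_population          # ~10,000x
total_ratio = duration_ratio * population_ratio

print(f"duration advantage:   ~10^{round(math.log10(duration_ratio))}")
print(f"population advantage: ~10^{round(math.log10(population_ratio))}")
print(f"combined:             ~10^{round(math.log10(total_ratio))}")
# combined is ~10^6, i.e. about a million times our entire past
```

Multiplying the two ratios is what adds the orders of magnitude: 2 from duration plus 4 from population gives the "at least a million times" figure.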
I don't have good "mainstream" arguments for the feasibility of long lifespans. Like wedrifid, I'd just avoid the subject if possible.
Now we can speculate about reasonable-sounding worst-case and best-case scenarios:
Best case: Machines take over the chores that make us unhappy. Lifespan is about 100 years; healthy lifespan is about 60 to 80 years. This goes on for the few billion years the Sun lets us have. Not a paradise, but still much better than the current state of affairs.
Worst case: Skynet wants to maximize paperclips, and we all die very soon. Or we don't do AI at all, and we kill ourselves some other way, very soon.
There. Does that sound mainstream-friendly enough?