Are the effects of UFAI likely to be seen at astronomical distances?
post by NancyLebovitz · 2010-11-05T13:05:43.970Z · LW · GW · Legacy · 12 comments
My comment to a discussion of great filters/existential risk:
How likely is it that a UFAI disaster would produce effects we can see from here? I think "people can't suffer if they're dead" disasters (failed attempts at FAI) are possibly more likely than paperclip maximizers.
Not sure what a money-maximizing UFAI disaster would look like, but I can't think of any reason it would be likely to go far off-planet.
National dominance-maximizing UFAI is a hard call, but possibly wouldn't go off-planet. It would depend on whether it's looking for absolute dominance of all possible territory or dominance/elimination of existing enemies.
12 comments
Comments sorted by top scores.
comment by Vladimir_Nesov · 2010-11-05T14:19:47.897Z · LW(p) · GW(p)
An AI never stops by default. It stops only if it estimates "stopping" to be the optimal decision, and it would need to be specifically programmed to have that strange goal.
(If you try to unpack the concept of "stopping", you'll see just how strange it is. The AI just sitting in one place exerts gravitational attraction on all the galaxies in its light cone, so what makes dismantling all the stars different? Which of the two is preferable? If the AI is indifferent between the two, it can just toss a coin.)
In any other case, something else will be better than "stopping". If it estimates that taking over the universe has a tiny chance of making the outcome a tiny bit better than if it stops, it'll do it.
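A minimal sketch of the expected-utility comparison being gestured at here, with entirely made-up numbers: unless "stop" is explicitly rewarded by the goal, even a tiny chance of a tiny improvement makes continuing beat halting.

```python
# Toy expected-utility comparison (illustrative only; all numbers are invented).
# A maximizer picks whichever action has the higher expected utility, so even a
# sliver of possible upside from expanding is enough to keep it going.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

u_stop = expected_utility([(1.0, 100.0)])            # value secured if the AI halts
u_expand = expected_utility([
    (0.999, 100.0),      # almost always: expanding changes nothing
    (0.001, 100.0001),   # tiny chance of a tiny improvement
])

print(u_expand > u_stop)  # True: "stop" loses unless the goal specifically values it
```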
comment by James_Miller · 2010-11-05T13:16:39.693Z · LW(p) · GW(p)
Money = Free Energy.
Dyson sphere = huge profit center.
Replies from: Nick_Tarleton
↑ comment by Nick_Tarleton · 2010-11-05T19:16:43.672Z · LW(p) · GW(p)
You don't know how an AI given a vague concept of "money" would wind up cashing it out on reflection. Really big numbers in memory is another possibility.
Replies from: James_Miller
↑ comment by James_Miller · 2010-11-05T20:22:59.884Z · LW(p) · GW(p)
The limitation on how large a number the AI could store in memory would likely be free energy.
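A hedged back-of-the-envelope illustration of that limit, assuming the Landauer bound (each irreversible bit operation costs at least kT ln 2) and an arbitrary, made-up free-energy budget:

```python
# Sketch of why free energy caps "really big numbers in memory".
# Assumption: each bit costs at least k*T*ln(2) (Landauer limit); the energy and
# temperature figures below are invented, purely for illustration.
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 3.0                 # assumed operating temperature, kelvin
free_energy = 1.0e40    # joules available (made-up figure)

max_bits = free_energy / (k_B * T * math.log(2))
# With n bits, the largest unsigned integer is 2**n - 1, so the stored "balance"
# is bounded by roughly 2**max_bits -- enormous, but still set by free energy.
print(f"~{max_bits:.3e} bits, i.e. a number with ~{max_bits * math.log10(2):.3e} digits")
```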
Replies from: Manfred
↑ comment by Manfred · 2010-11-09T06:16:16.601Z · LW(p) · GW(p)
Or it could be programmed to recognize only the currency of some central bank, in which case it would force the mint to make literally astronomical amounts of money.
Or the bank could just tell the AI that it had an infinite amount of money, which might make it stop.
comment by DanArmak · 2010-11-07T17:47:35.656Z · LW(p) · GW(p)
Not sure what a money-maximizing UFAI disaster would look like, but I can't think of any reason it would be likely to go far off-planet.
Tile the universe with banknotes? Convert all matter into RAM to represent the greatest possible bank account balance in binary code?
National dominance-maximizing UFAI is a hard call, but possibly wouldn't go off-planet. It would depend on whether it's looking for absolute dominance of all possible territory or dominance/elimination of existing enemies.
Elimination of all enemies implies exploring the entire universe to find all existing and potential enemies and eliminating them.
My point: whether it's easy or hard for us to think of a contrived scenario where UFAI spreads out, or where it doesn't, is pure imagination. Neither case tells us much about the actual probability of it happening.
comment by magfrump · 2010-11-05T17:31:50.188Z · LW(p) · GW(p)
If you'll pardon updating off of fictional evidence: the malignant AI in "A Fire Upon the Deep" stays hidden until it has the capability to explode across space--it might be the case that a UFAI which was in conflict with its creators would expect more conflict and therefore quiet down.
Also, I think the failed-FAI concept seems somewhat reasonable--if the AI had some basic friendliness that made it go looking for morality, but in the meantime its moral instincts involved turning people into paperclips rather than pulling babies from in front of trains, it might eventually "catch on" and feel really terrible about everything, then decide that it couldn't be confident in its metaethics and that it would be better to commit suicide.
Of course I haven't got much expertise in the subject so I feel like I may have just created a more complicated and therefore less likely scenario than I anticipated. I do still think that various forms of failed FAI (is this a term worth canonizing? An AI with some incomplete friendliness architecture is a very small subset of UFAI) would be relatively populous in the design space of "minds that humans would design," even if they are rare in the space of all possible minds.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2010-11-05T17:50:37.548Z · LW(p) · GW(p)
The notion I was thinking of was a program which is tasked to increase dominance for real Americans.
Unfortunately, the specs for real Americans weren't adequately thought out, so the human race is destroyed.
I don't think such a program is likely to spread further than the human race did.
More fictional evidence, from John Brunner's The Jagged Orbit (ROT13'd for spoilers):
N cebtenz vf frg gb znkvzvmvat cebsvgf (be cbffvoyl ergheaf) sbe n jrncbaf pbzcnal. Ntnvafg gur nqivpr bs grpuf, znantrzrag vafvfgf ba gheavat hc gur vapragvirf gbb uvtu (be znxvat gur gvzr senzr gbb fubeg-- vg'f orra n juvyr fvapr V'ir ernq vg).
Gur pbzcnal nqiregvfrf cnenabvn-- gur bayl jnl gb or fnsr vf gb unir zber naq zber cbjreshy crefbany jrncbaf. Vg oernxf pvivyvmngvba. Ab zber cebsvgf.
VVEP, gur pbzchgre cebtenz vairagf gvzr geniry gb jvcr vgfrys bhg naq erfbyir gur qvyrzan.
Replies from: DanArmak, magfrump
↑ comment by magfrump · 2010-11-05T22:07:00.924Z · LW(p) · GW(p)
That makes sense. My thoughts were basically along the lines that the space of AIs with goals centered around their creators which later peter out after their creators are destroyed is probably bigger than I gave it credit for.
I'm sad that I don't get to read the second half of your comment, because I haven't read that book and intend to eventually read as much of the science fiction recommended here as possible.