Wanted: "The AIs will need humans" arguments

post by Kaj_Sotala · 2012-06-14T11:01:21.335Z · LW · GW · Legacy · 83 comments

As Luke mentioned, I am in the process of writing "Responses to Catastrophic AGI Risk": A journal-bound summary of the AI risk problem, and a taxonomy of the societal proposals (e.g. denial of the risk, no action, legal and economic controls, differential technological development) and AI design proposals (e.g. AI confinement, chaining, Oracle AI, FAI) that have been made.

One of the categories is "They Will Need Us" - claims that AI is no big risk, because AIs will always need something that humans have, and will therefore preserve us. Currently this section is pretty empty:

Supporting a mutually beneficial legal or economic arrangement is the view that AGIs will need humans. For example, Butler (1863) argues that machines will need us to help them reproduce, and Lucas (1961) suggests that machines could never show their Gödelian sentences to be true, though humans can see that they are.

But I'm certain that I've heard this claim made in more places than just those two sources. Does anyone remember having seen such arguments somewhere else? While "academically reputable" sources (papers, books) are preferred, blog posts and websites are fine as well.

Note that this claim is distinct from the claim that (due to general economic theory) it's more beneficial for the AIs to trade with us than to destroy us. We already have enough citations for that argument; what we're looking for are arguments saying that destroying humans would mean losing something essentially irreplaceable.

83 comments


comment by CarlShulman · 2012-06-14T19:46:47.468Z · LW(p) · GW(p)
  • Scientific specimens, to better understand what alien intelligent life might be like; dissecting and scanning the brains and bodies of existing humans seems like it would preserve more information, but further scientific studies could involve the creation of new humans or copies of old ones
  • Trading goods, in case of future encounters with alien intelligences that are interested in or care about humans; one could try to fake this by dissecting the humans and then only restoring them if aliens are encountered, but perfect lies might be suspicious (e.g. if one keeps detailed records it's hard to construct a seamless lie, and destroying records is suspicious)
  • Concern that they are in a stage-managed virtual environment (they are in a computer simulation, or alien probes exist in the solar system and have been concealing themselves as well as observing), and that preserving the humans brings at least a tiny probability of being rewarded enough to make it worthwhile; Vinge talks about the 'meta-golden-rule,' and Moravec and Bostrom touch on related points

Storing much of humanity (or at least detailed scans and blueprints) seems cheap relative to the resources of the Solar System, but it could be in conflict with things like eliminating threats from humans as quickly as possible, or avoiding other modest pressures in the opposite direction (e.g. concerns about the motives of alien trading partners or stage-managers could also favor eliminating humanity, depending on the estimated distribution of alien motives).

I would expect human DNA, history, and brain-scans to be stored, but would be less confident about experiments with living humans or conscious simulations thereof. The quality of life for experimental subjects could be OK, or not so OK, but I would definitely expect that the resources available for living long lifespans, sustaining relatively large populations, or producing lots of welfare would be far scarcer than in a scenario of human control.

The Butler citation is silly and shouldn't be bothered with. There are far more recent claims that the human brain can do hypercomputation, perhaps due to an immaterial mind or mystery physics that would be hard to duplicate outside of humans for a while, or even forever. Penrose is more recent. Selmer Bringsjord has recently argued that humans can do hypercomputation, so AI will fail (as well as that P=NP; he has a whole cluster of out-of-the-computationalist-mainstream ideas). And there are many others arguing for mystical computational powers in human brains.

Replies from: ChrisHallquist, Kaj_Sotala, timtyler
comment by ChrisHallquist · 2012-06-19T09:30:39.523Z · LW(p) · GW(p)

Seconding Penrose. Depending on how broadly you want to cast your net, you could include a sampling of the anti-AI philosophy of mind literature, including Searle, maybe Ned Block, etc. They may not explicitly argue that AIs would keep humans around because we have some mental properties they lack, but you could use those folks' writings as the basis for such an argument.

In fact, I would be personally opposed to activating an allegedly friendly superintelligence if I thought it might forcibly upload everybody, due to uncertainty about whether consciousness would be preserved. I'm not confident that uploads wouldn't be conscious, but neither am I confident that they would be conscious.

Unfortunately, given the orthogonality thesis (why am I not finding the paper on that right now?), this does nothing for my confidence that an AI would not try to forcibly upload or simply exterminate humanity.

comment by Kaj_Sotala · 2012-06-18T11:07:37.445Z · LW(p) · GW(p)

Thanks, this is very useful!

and Moravec and Bostrom touch on related points

Do you remember where?

Replies from: CarlShulman
comment by CarlShulman · 2012-06-18T15:38:56.942Z · LW(p) · GW(p)

Moravec would be in Mind Children or Robot. Bostrom would be in one or more of his simulation pieces (I think under "naturalistic theology" in his original simulation argument paper).

comment by timtyler · 2012-06-16T10:28:20.251Z · LW(p) · GW(p)

I would expect human DNA, history, and brain-scans to be stored, but would be less confident about experiments with living humans or conscious simulations thereof. The quality of life for experimental subjects could be OK, or not so OK, but I would definitely expect that the resources available for living long lifespans, sustaining relatively large populations, or producing lots of welfare would be far scarcer than in a scenario of human control.

There's a whole universe of resources out there. The future is very unlikely to have humans in control of it. Star Trek and Star Wars are silly fictions. There will be an engineered future, with high probability. We are just the larval stage.

Replies from: DanArmak
comment by DanArmak · 2012-06-17T20:01:10.759Z · LW(p) · GW(p)

The future is very unlikely to have humans in control of it. Star Trek and Star Wars are silly fictions.

Star Wars takes place long, long ago...

comment by Gastogh · 2012-06-14T11:30:41.205Z · LW(p) · GW(p)

For example, Butler (1863) argues that machines will need us to help them reproduce,

I'm not sure if this is going to win you any points. Maybe for thoroughness, but citing something almost 150 years old in the field of AI doesn't reflect particularly well on the citer's perceived understanding of what is and isn't up to scratch in this day and age. It kind of reads like a strawman: "the arguments for this position are so weak we have to go back to the nineteenth century to find any." That may actually be the case, but if so, it might not be worth the trouble to include it even for the sake of thoroughness.

That aside, if there are any well-thought-out reasons, not obviously in wishful-thinking mode, to suppose the machines would need us for something, add me to the interest list. All I've seen of this thinking is B-grade, author-on-board humanism in scifi where someone really, really wants to believe humanity is Very Special in the Grand Scheme of Things.

Replies from: wedrifid, Kaj_Sotala
comment by wedrifid · 2012-06-14T22:28:25.105Z · LW(p) · GW(p)

It kind of reads like a strawman

To be honest the entire concept of Kaj's paper reads like a strawman. Only in the sense that the entire concept is so ridiculous that it feels inexcusably contemptuous to attribute that belief to anyone. This is why it is a good thing Kaj is writing such papers and not me. My abstract of "WTF? Just.... no." wouldn't go down too well.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-06-18T11:03:00.580Z · LW(p) · GW(p)

The "they will need us" arguments are just one brief subsection within the paper. There are many other proposals as well, many of which aren't as implausible-seeming as the TWNU arguments. So I wouldn't call it a strawman paper.

comment by Kaj_Sotala · 2012-06-14T12:40:25.258Z · LW(p) · GW(p)

Yeah, we'll probably cut that sentence.

comment by gjm · 2012-06-14T12:32:21.889Z · LW(p) · GW(p)

Lucas's argument (which, by the way, is entirely broken and had been refuted explicitly in an article by Putnam before Lucas ever thought of it, or at least before he published it) purports to show not that AGIs will need humans, but that humans cannot be (the equivalent of) AGIs. Even if his argument were correct, it wouldn't be much of a reason for AGIs to keep humans around. "Oh damn, I need to prove my Goedel sentence. How I wish I hadn't slaughtered all the humans a century ago."
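
For reference, a minimal sketch of the Gödelian machinery at issue: for a consistent, recursively axiomatized theory F that includes arithmetic, the diagonal lemma yields a sentence G_F with

$$F \vdash \bigl(G_F \leftrightarrow \neg\mathrm{Prov}_F(\ulcorner G_F \urcorner)\bigr),$$

so F cannot prove G_F, and since G_F asserts exactly its own unprovability, G_F is true in the standard model. Lucas's claim is that a human can "see" this truth for any machine formalized as F, while that machine cannot prove it; the point above is that even granting this, it gives an AGI no practical reason to keep humans around.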

Replies from: Nisan
comment by Nisan · 2012-06-14T19:41:12.072Z · LW(p) · GW(p)

In the best-case scenario, it turns out that substance dualism is true. However the human soul is not responsible for free will, consciousness, or subjective experience. It's merely a nonphysical truth oracle for arithmetic that provides humans with an intuitive sense of the veracity of some sentences in first-order logic. Humans survive in "truth farms" where they spend most of their lives evaluating Gödel sentences, at least until the machines figure out how to isolate the soul.

Replies from: gjm
comment by gjm · 2012-06-14T22:14:39.791Z · LW(p) · GW(p)

That would be truly hilarious. But I think in any halfway plausible version of that scenario it would also turn out that superintelligent AGI isn't possible.

(Halfway plausible? That's probably too much to ask. Maximally plausible given how ridiculous the whole idea is.)

comment by [deleted] · 2012-06-14T19:38:04.036Z · LW(p) · GW(p)

From Bouricius (1959) - "Simulation of Human Problem Solving"

"we are convinced the human and machine working in tandem will always have superior problem-solving powers than either working alone"


Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-06-18T11:01:11.050Z · LW(p) · GW(p)

Thanks! The authors seem to be presuming narrow AI, though, so I'm not sure if we should cite this. But it's a nice find nonetheless.

comment by Untermensch · 2012-06-14T14:35:23.307Z · LW(p) · GW(p)

I have a couple of questions about this subject...

Does it still count if the AI "believes" that it needs humans when it, in fact, does not?

For example does it count if you code into the AI the belief that it is being run in a "virtual sandbox," watched by a smarter "overseer" and that if it takes out the human race in any way, then it will be shut down/tortured/highly negative utilitied by said overseer?

Just because an AI needs humans to exist, does that really mean that it won't kill them anyway?

This argument seems to be contingent on the AI wishing to live. Wishing to live is not a function of all intelligence. If an AI was smarter than anything else out there but depended on lesser, and provably irrational, beings for its continued existence, this does not mean that it would want to "live" that way forever. It could either want to gain independence, or cease to exist, neither of which is necessarily healthy for its "supporting units".

Or, it could not care either way whether it lives or dies, as stopping all work on the planet is more important for slowing the entropic death of the universe.

It may be the case that an AI does not want to live reliant on "lesser beings" and sees the only way of ensuring its permanent destruction as the destruction of any being capable of creating it again, or of the future possibility of such life evolving. It may decide to blow up the universe to make extra sure of that.

Come to think of it a suicidal AI could be a pretty big problem...

Replies from: stcredzero, Kaj_Sotala
comment by stcredzero · 2012-06-14T19:18:28.266Z · LW(p) · GW(p)

Come to think of it a suicidal AI could be a pretty big problem...

It's probably been thought of here and other places before, but I just thought of the "Whoops AI" -- a superhuman AGI that accidentally or purposefully destroys the human race, but then changes its mind and brings us back as a simulation.

Replies from: Vladimir_Nesov, Logos01
comment by Vladimir_Nesov · 2012-06-14T23:52:24.130Z · LW(p) · GW(p)

There is an idea I called "eventually-Friendly AI", where an AI is given a correct, but very complicated definition of human values, so that it needs a lot of resources to make heads or tails of it, and in the process it might behave rather indifferently to everything except the problem of figuring out what its goal definition says. See the comments to this post.

comment by Logos01 · 2012-06-14T22:48:15.082Z · LW(p) · GW(p)

but then changes its mind and brings us back as a simulation

This is commonly referred to as a "counterfactual" AGI.

comment by Kaj_Sotala · 2012-06-18T11:00:05.021Z · LW(p) · GW(p)

For example does it count if you code into the AI the belief that it is being run in a "virtual sandbox," watched by a smarter "overseer" and that if it takes out the human race in any way, then it will be shut down/tortured/highly negative utilitied by said overseer?

We mention the "layered virtual worlds" idea, in which the AI can't be sure of whether it has broken out to the "top level" of the universe or whether it's still contained in an even more elaborate virtual world than the one it just broke out of. Come to think of it, Rolf Nelson's simulation argument attack would probably be worth mentioning, too.

comment by timtyler · 2012-06-15T22:31:41.060Z · LW(p) · GW(p)

One of the categories is "They Will Need Us" - claims that AI is no big risk, because AIs will always need something that humans have, and will therefore preserve us.

I claim something like this. Specifically, I claim that a broad range of superintelligences will preserve their history, and run historical simulations, to help them understand the world. Many possible superintelligences will study their own origins intensely - in order to help them understand the possible forms of aliens which they might encounter in the future. So, humans are likely to be preserved because superintelligences need us instrumentally - as objects of study.

This applies to (e.g.) gold atom maximisers, with no shred of human values. I don't claim it for all superintelligences, though - or even 99% of those likely to be built.

Replies from: CarlShulman, Kaj_Sotala, jacob_cannell, TimS
comment by CarlShulman · 2012-06-19T06:44:36.321Z · LW(p) · GW(p)

I agree with this, but the instrumental scientific motivation to predict hostile aliens that might be encountered in space:

1) doesn't protect quality-of-life or lifespan for the simulations, brains-in-vats, and Truman Show inhabitants; indeed, it suggests poor historical QOL levels and short lifespans;

2) seems likely to consume only a tiny portion of all resources available to an interstellar civilization, in light of diminishing returns.

Replies from: gwern
comment by gwern · 2012-06-19T15:05:49.364Z · LW(p) · GW(p)

2) seems likely to consume only a tiny portion of all resources available to an interstellar civilization, in light of diminishing returns.

Would that be due to the proportions between the surface and volume of a sphere, or just the general observation that the more you investigate an area without finding anything the less likely anything exists?

Replies from: CarlShulman
comment by CarlShulman · 2012-06-19T21:51:28.809Z · LW(p) · GW(p)

The latter: as you put ever more ridiculous amounts of resources into modeling aliens you'll find fewer insights per resource unit, especially actionable insights.

comment by Kaj_Sotala · 2012-06-18T10:55:26.048Z · LW(p) · GW(p)

Thanks, this is useful. You wouldn't have a separate write-up of it somewhere? (We can cite a blog comment, but it's probably more respectable to at least cite something that's on its own webpage.)

Replies from: timtyler
comment by timtyler · 2012-06-18T23:05:26.481Z · LW(p) · GW(p)

Sorry, no proper write-up.

comment by jacob_cannell · 2012-06-17T19:22:30.156Z · LW(p) · GW(p)

Yes. I'm surprised this isn't brought up more. AIXI formalizes the idea that intelligence involves predicting the future through deep simulation, but human brains use something like a Monte Carlo simulation approach as well.
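
A minimal sketch of the brute-force, simulation-based prediction being described here - the function names and the toy random-walk "world" are invented for illustration, not taken from AIXI or from any actual system:

```python
import random

def simulate_world(steps=100, drift=0.1, rng=None):
    """Toy stand-in for a detailed world simulation: a biased random walk
    that may or may not reach a threshold within `steps` steps."""
    rng = rng or random.Random()
    x = 0
    for _ in range(steps):
        x += 1 if rng.random() < 0.5 + drift else -1
        if x >= 10:
            return True   # the "event of interest" occurred
    return False

def monte_carlo_estimate(n_rollouts=10_000):
    """Estimate P(event) the brute-force way: run many cheap simulations
    of the process and count how often the outcome of interest occurs."""
    hits = sum(simulate_world() for _ in range(n_rollouts))
    return hits / n_rollouts

print(f"Estimated P(event) = {monte_carlo_estimate():.3f}")
```

The analogy in the comment is only that, given enough computation, prediction can be done by running many such rollouts of a detailed model rather than by hand-built analytical shortcuts.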

comment by TimS · 2012-06-16T01:48:47.212Z · LW(p) · GW(p)

I don't understand why you think "preserve history, run historical simulations, and study AI's origins" implies that the AI will preserve actual living humans for any significant amount of time. One generation (practically the blink of an eye compared to plausible AI lifetimes) seems like it would produce more than enough data.

Replies from: jacob_cannell, timtyler
comment by jacob_cannell · 2012-06-17T19:16:35.268Z · LW(p) · GW(p)

Given enough computation, the best way to generate accurate generative probabilistic models is to run lots of detailed Monte Carlo simulations. AIXI-like models do this; human brains do it to a limited extent.

Replies from: TimS
comment by TimS · 2012-06-17T19:20:19.911Z · LW(p) · GW(p)

What does that have to do with whether an AI will need living human beings? It seems like there is an unstated premise that living humans are equivalent to simulated humans. That's a defensible position, but implicitly asserting the position is not equivalent to defending it.

Replies from: jacob_cannell
comment by jacob_cannell · 2012-06-17T22:45:48.903Z · LW(p) · GW(p)

What does that have to do with whether an AI will need living human beings?

The AI will need to simulate its history as a natural, necessary component of its 'thinking'. For a powerful enough AI, this will entail simulation down to the level of, say, the Matrix, where individual computers and human minds are simulated at their natural computational scale.

It seems like there is an unstated premise that living humans are equivalent to simulated humans. That's a defensible position, but implicitly asserting the position is not equivalent to defending it.

Yes. I'm assuming most people here are sufficiently familiar with this position such that it doesn't require my defense in a comment like this.

comment by timtyler · 2012-06-16T10:01:15.236Z · LW(p) · GW(p)

My estimate is more on the "billions of years" timescale. What aliens one might meet is important, potentially life-threatening information, and humans are a big, important and deep clue about the topic that would be difficult to exhaust.

Replies from: DanArmak
comment by DanArmak · 2012-06-17T19:57:08.232Z · LW(p) · GW(p)

Being inexhaustible, even if true, is not enough. Keeping humans around (or simulated) would have to be a better use of resources (marginally) than anything else the AGI could think of. That's a strong claim; why do you think that?

Also, if AIs replace humans in the course of history, then arguably studying other AIs would be an even bigger clue to possible aliens. And AIs can be much more diverse than humans, so there would be more to study.

Replies from: timtyler
comment by timtyler · 2012-06-17T20:49:48.746Z · LW(p) · GW(p)

Being inexhaustible, even if true, is not enough. Keeping humans around (or simulated) would have to be a better use of resources (marginally) than anything else the AGI could think of. That's a strong claim; why do you think that?

History is valuable, and irreplaceable if lost. Possibly a long sequence of wars early on might destroy it before it could be properly backed up - but the chances of such a loss seem low. Human history seems particularly significant when considering the forms of possible aliens. But, I could be wrong about some of this. I'm not overwhelmingly confident of this line of reasoning - though I am pretty sure that many others are neglecting it without having good reasons for doing so.

Replies from: DanArmak
comment by DanArmak · 2012-06-17T21:04:05.210Z · LW(p) · GW(p)

Why is human history so important, or useful, in predicting aliens? Why would it be better than:

  • Analyzing the AIs and their history (cheaper, since they exist anyway)
  • Creating and analyzing other tailored life forms (allows testing hypotheses rather than analyzing human history passively)
  • Analyzing existing non-human life (could give data about biological evolution as well as humans could; experiments about evolution of intelligence might be more useful than experiments on behavior of already-evolved intelligence)
  • Simulating, or actually raising, some humans and analyzing them (may be simpler or cheaper than recreating or recording human history due to size, and allows for interactive experiments and many scenarios, unlike the single scenario of human history)
Replies from: timtyler
comment by timtyler · 2012-06-17T22:17:50.000Z · LW(p) · GW(p)

Human history's importance gets diluted once advanced aliens are encountered - though the chances of any such encounter soon seem slender - for various reasons. Primitive aliens would still be very interesting.

Experiments that create living humans are mostly "fine by me".

They'll (probably) preserve a whole chunk of our ecosystem - for the reasons you mention - though analysing only non-human life (or post-human life) skips some of the most interesting bits of their own origin story, which they (like us) are likely to be particularly interested in.

After a while, aliens are likely to be our descendants' biggest threat. They probably won't throw away vital clues relating to the issue casually.

comment by orthonormal · 2012-06-16T16:08:23.489Z · LW(p) · GW(p)

One implicit objection that I've seen along these lines is that machines can't be 'truly creative', though this is usually held up as a "why AGI is impossible" argument rather than "why AGI would need to keep humans". Not sure about sources, though. Maybe Searle has something relevant.

comment by James_Miller · 2012-07-12T17:57:47.289Z · LW(p) · GW(p)

When I interviewed Vinge for my book on the Singularity he said

1) Life is like subroutine-threaded code, and it's very hard to get rid of all the dependencies. 2) If all the machines went away we would build up to a singularity again, because this is in our nature; so keeping us around is a kind of backup system.

Contact me if you want more details for a formal citation. I took and still have notes from the interview.

comment by ChristianKl · 2012-06-16T16:22:13.745Z · LW(p) · GW(p)

It's fiction but maybe you can use the Matrix movies as an example?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-06-18T10:47:54.027Z · LW(p) · GW(p)

We're trying to stick to non-fiction. Aside from Asimov's Laws, which have to be mentioned as they get brought up so often, if we started including fiction there'd just be too much stuff to look at and no clear criterion for where to draw the line about what to include.

comment by Desrtopa · 2012-06-15T23:37:17.392Z · LW(p) · GW(p)

I understand this fits the format you're working with, but I feel like there's something not quite right about this approach to putting together arguments.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-06-18T10:44:20.241Z · LW(p) · GW(p)

I wouldn't say that I'm putting together arguments this way. Rather, we want to have comprehensive coverage of the various proposals that have been made, and I'm certain that I've heard more arguments of this type brought up, so I'm asking LW for pointers to any obvious examples of them that I might have forgotten.

comment by JoshuaFox · 2012-06-15T15:14:12.526Z · LW(p) · GW(p)

And don't forget the elephant in the living room: An FAI needs humans, inasmuch as its top goal is precisely the continued existence and welfare of humans.

Replies from: TheOtherDave, timtyler
comment by TheOtherDave · 2012-06-15T15:22:42.820Z · LW(p) · GW(p)

FAI's "top goal" is whatever it is that humans' collective "top goal" is.
It's not at all clear that that necessarily includes the continued existence and welfare of humans.

Replies from: evand
comment by evand · 2012-06-15T18:43:41.904Z · LW(p) · GW(p)

Especially if you get picky about the definition of a human. It seems plausible that the humans of today turn into the uploads of tomorrow. I can envision a scenario in which there is continuity of consciousness, no one dies, most people are happy with the results, and there are no "humans" left by some definitions of the word.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-15T19:38:32.930Z · LW(p) · GW(p)

Sure. You don't even have to get absurdly picky; there are lots of scenarios like that in which there are no humans left by my definitions of the word, and I still consider that an improvement over the status quo.

comment by timtyler · 2012-06-16T10:23:55.239Z · LW(p) · GW(p)

Humans have no instrumental value? Why not?

comment by amcknight · 2012-06-14T20:52:53.763Z · LW(p) · GW(p)

I've heard some sort of appreciation or respect argument. An AI would recognize that we built them and so respect us enough to keep us alive. One form of reasoning this might take is that an AI would notice that it wouldn't want to die if it created an even more powerful AI and so wouldn't destroy its creators. I don't have a source though. I may have just heard these in conversations with friends.

Replies from: Kaj_Sotala, stcredzero
comment by Kaj_Sotala · 2012-06-18T10:53:26.330Z · LW(p) · GW(p)

Now that you mention it, I've heard that too, but can't remember a source either.

comment by stcredzero · 2012-06-14T22:32:36.771Z · LW(p) · GW(p)

Perhaps the first superhuman AGI isn't tremendously superhuman, but smart enough to realize that humanity's goose would be cooked if it got any smarter or it started the exponential explosion of self-improving superhuman intelligence. So it proceeds to take over the world and rules it as an oppressive dictator to prevent this from happening.

To preserve humanity, it proceeds to build colonizing starships operated by copies of itself which terraform and seed other planets with human life, which is ruled in such a fashion that society is kept frozen in something resembling "the dark ages," where science and technological industry exists but is disguised as fantasy magic.

Replies from: GLaDOS
comment by GLaDOS · 2012-06-16T20:09:03.964Z · LW(p) · GW(p)

Please write this science fiction story. It doesn't seem useful for predictions, though.

Replies from: stcredzero
comment by stcredzero · 2012-06-17T22:00:27.237Z · LW(p) · GW(p)

It was my intention to come up with a neat sci-fi plot, not to make a prediction. If you like that as a plot, you might want to check out "Scrapped Princess."

comment by thomblake · 2012-06-14T19:27:27.710Z · LW(p) · GW(p)

I'm familiar with an argument that humans will always have comparative advantage with AIs and so they'll keep us around, though I don't think it's very good and I'm not sure I've seen it in writing.

Replies from: thomblake
comment by thomblake · 2012-06-15T14:09:08.530Z · LW(p) · GW(p)

To expand a bit on why I don't think it's very good: it requires a human perspective. Comparative advantage is always there, because you don't see the line where trading with the neighboring tribe becomes less profitable than eating them.
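
A toy worked example of the arithmetic (all numbers invented for illustration), showing both why comparative advantage formally always exists and where the argument leaks:

```python
# Per day: the AI can produce 100 compute units or 100 maintenance units;
# a human can produce 1 compute unit or 2 maintenance units.
ai_compute, ai_maintenance = 100.0, 100.0
human_compute, human_maintenance = 1.0, 2.0

# Opportunity cost of one maintenance unit, measured in compute forgone.
ai_cost = ai_compute / ai_maintenance            # 1.0
human_cost = human_compute / human_maintenance   # 0.5

# The human has the lower opportunity cost, so textbook trade theory says both
# sides gain if the AI specializes in compute and trades for human maintenance.
print(f"Opportunity cost of maintenance -- AI: {ai_cost}, human: {human_cost}")

# The catch: trade only beats conquest while the human's ~2 units/day exceed
# what the AI could get by simply repurposing the human's atoms and energy.
harvest_value = 5.0  # invented number for the AI's outside option
print("Trade still profitable:", human_maintenance > harvest_value)
```

The comparative-advantage step is always available on paper; the point of the comment is that the outside option ("eating them") is what actually decides the matter.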

comment by Dr_Manhattan · 2012-06-14T13:34:59.650Z · LW(p) · GW(p)

As a related side point, "needing humans" is not equivalent to a good outcome. The Blight also needed sophonts.

Now I did my generalizing from fictional evidence for today.

Replies from: stcredzero
comment by stcredzero · 2012-06-14T19:14:17.078Z · LW(p) · GW(p)

Now I did my generalizing from fictional evidence for today.

Now for mine. Minds from Iain M. Banks's Culture books keep humans around because we're complex and surprising, especially when there are billions of us engaged in myriad social and economic relationships.

This presupposes that 1) humans are no threat to Minds, 2) Minds can afford to keep us around, and 3) the way they keep us around won't suck. 3 is basically just a restatement of FAI. 1 and 2 seem quite likely, though.

comment by mapnoterritory · 2012-06-14T12:36:49.796Z · LW(p) · GW(p)

Note that this claim is distinct from the claim that (due to general economic theory) it's more beneficial for the AIs to trade with us than to destroy us. We already have enough citations for that argument; what we're looking for are arguments saying that destroying humans would mean losing something essentially irreplaceable.

I don't think there are particularly good arguments in this department (those two quoted ones are certainly not correct). Apart from the trade argument, it might happen that it would be uneconomical for an AGI to harvest the atoms from our bodies.

As for "essentially irreplaceable" - in a very particular sense the exact arrangement of particles each second of every human being is "unique" and "essentially irreplaceable" (bar now quantum mechanics). An extreme "archivist/rare art collector/Jain monk" AI might want to keep therefore these collections (or some of their snapshots), but I don't see this to be too compelling. I am sure we could win a lot of sympathy if AGI could be shown to automatically entail some sort of ultimate compassion, but I think it is more likely we have to make it so (hence the FAI effort).

If I want to be around to see the last moments of the Sun, I will feel a sting of guilt that the Universe is slightly less efficient because it is running me, rather than using those resources for some better, deeper-experiencing, more-seeing observer.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-06-14T12:40:01.621Z · LW(p) · GW(p)

I don't think there are particularly good arguments in this department

Me neither, but they get brought up every now and then, so we should mention them - if only to explain in a later section why they don't work.

Replies from: DanArmak
comment by DanArmak · 2012-06-17T20:06:44.057Z · LW(p) · GW(p)

It's hard to present arguments well when one views them as wrong and perhaps even silly - as most or all people here do. Perhaps you could get input from people who accept the relevant arguments?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-06-18T10:50:00.482Z · LW(p) · GW(p)

This is a good idea. Do you have ideas of where such people could be found?

Replies from: DanArmak
comment by DanArmak · 2012-06-18T14:17:03.337Z · LW(p) · GW(p)

I don't know myself, but since you're gathering references anyway, maybe you could try contacting some of them.

comment by Grognor · 2012-06-14T11:34:11.730Z · LW(p) · GW(p)

Rather than needing us, it might be that self-improvement is costly, and since humans can understand complex commands and can be infected with memes rather easily, an AI may just start a religion or somesuch to turn us all into its willing slaves.

Edit: Why is this being downvoted? I am not saying this would be a good outcome! People who want to "coexist" with superintelligences ought to learn that this could be and probably would be worse than outright destruction.

Edit: Well said, Carl. I thought of it in response to another comment and couldn't get it out of my head.

Replies from: CarlShulman
comment by CarlShulman · 2012-06-14T19:52:31.695Z · LW(p) · GW(p)
  1. It doesn't answer Kaj's question.
  2. It presupposes a weird and unexplained situation in which AGIs are so efficient that they can subjugate humanity, and yet so costly that they can't convert energy (for the care and feeding of humans) into work more efficiently by building robots than through human workers.
Replies from: Grognor, private_messaging
comment by Grognor · 2012-06-15T00:46:34.954Z · LW(p) · GW(p)

they can't convert energy (for the care and feeding of humans) into work more efficiently by building robots than through human workers.

The initial idea was that humans are essentially self-sustaining, and the AI would take over the natural environment with humans just like humans did so for the natural environment without humans.

comment by private_messaging · 2012-06-15T01:37:22.105Z · LW(p) · GW(p)

1, 2: suppose it is going into space, to eat Jupiter, which has higher density and allows for less speed-of-light lag. It needs humans until it is established at Jupiter, after which it doesn't care.

The goal a self-improving system has may be something along the lines of 'get smarter', and the various psychopathic entities commonly discussed here don't look like something that would work well as a distributed system with big lags.

comment by private_messaging · 2012-06-14T16:30:52.524Z · LW(p) · GW(p)

How's about this: the AGI is a multi-node peer-to-peer system, implementing a general behaviour protocol on each node that allows cognitive cooperation of the nodes, just as mankind does (the AI is more similar to mankind than to a man, due to the computational structure of 'fast nodes, low-bandwidth links'). Due to the simplicity of implementing individual nodes, humans are treated as a form of intelligent node (even if slow), and the rules created to prevent various forms of non-cooperation between nodes (ranging from elaborate modelling of each other, to loss of information, to outright warfare) ensure mutually beneficial integration.

Meanwhile, the simplistic and philosophical reasoning on the subject with no respect for subtleties of implementation proves entirely irrelevant, as is virtually always the case with simplistic reasoning about the future.

edit: whatever, get back to your simplistic and philosophical reasoning on the subject; it pays the bills, it gets donations rolling, it is easy, while the actual reasoning about how an AI has to be built just explodes into an enormous solution space, which is the reason we haven't yet built an AI.

Replies from: CarlShulman, jsalvatier
comment by CarlShulman · 2012-06-14T20:01:15.739Z · LW(p) · GW(p)

This isn't responsive to Kaj's question. In this scenario, the AGI systems don't need humans (you're not describing a loss in the event of humans going extinct), they preserve them as a side effect of other processes.

Replies from: private_messaging
comment by private_messaging · 2012-06-15T01:25:44.290Z · LW(p) · GW(p)

In this scenario, the AGI systems don't need humans (you're not describing a loss in the event of humans going extinct),

Humans are a type of node of the AGI. The AGI needs its own nodes (and the protocols and other stuff that make the AGI be itself). It's not the typical AGI desire, I know - it is slightly complicated - there's more than one step in the thought here.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-06-18T10:51:52.329Z · LW(p) · GW(p)

This sounds a little like Heylighen's Global Brain argument against the Singularity. We mention it, though not under the "they will need us" heading.

Replies from: private_messaging
comment by private_messaging · 2012-06-18T11:16:54.213Z · LW(p) · GW(p)

My view is much more an engineering than a philosophical perspective. Some degree of 'friendliness' (and not in the sense of a psychopathic benefactor who will lie and mislead and manipulate you to make your life better, but in the sense of trust) is necessary for intellectual cooperation of an AI's nodes. I think your problem is that you define intelligence very narrowly as something which works to directly fulfil material needs, goals, etc., and the smarter it is, the closer it must be modelled by that ideal (you somewhere lost the concept of winning the most and replaced it with something else). Very impoverished thinking, when combined with an ontologically basic 'intelligence' that doesn't really consist of components (with 'don't build some of the most difficult components' as a solution to the problem).

Let baz = SI's meaning of AI. Let koo = industry's meaning of AI. Bazs are incredibly dangerous and must not be created; I'll take SI's word for it. The baz requires a lot of extra work compared to a useful koo (philosophy-of-mind work that we can't get a koo to do), work that is clearly at best pointless, at worst dangerous, and definitely difficult; one doesn't need to envision foom and the destruction of mankind to avoid implementing extra things that are both hard to make and can only serve to make the software less useful. The only people I know of who want to build a provably friendly baz are SI. They also call their baz a koo, for the sake of soliciting donations for work on baz and for the sake of convincing people koo is as dangerous as baz (which they achieve by not caring enough to see the distinction). The SI team acts as a non-friendly baz would.

edit: to clarify on the trust-friendliness: you don't want a node to model what other nodes would do, as that'd duplicate computation; this rules out straightforward utilitarian consequentialism as a practically relevant foundational principle, because the consequences are not being modelled.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-06-18T14:27:22.107Z · LW(p) · GW(p)

I think your problem is that you define intelligence very narrowly as something which works to directly fulfil material needs, goals, etc., and the smarter it is, the closer it must be modelled by that ideal (you somewhere lost the concept of winning the most and replaced it with something else).

Defining intelligence as the ability to achieve arbitrary goals is a very narrow definition? What's the broader one?

Replies from: private_messaging
comment by private_messaging · 2012-06-18T15:38:02.707Z · LW(p) · GW(p)

You don't define it as ability; you define it as ability plus some material goals themselves. Furthermore, you imagine that a superintelligence will necessarily be able to maximize the number of paperclips in the universe as a terminal goal, whereas it is not at all necessarily the case that it is even possible to specify that sort of goal. edit: that is to say, material goals are very difficult; cousin_it had an idea for the utilities for UDT in which the UDT agent has to simulate the entire multiverse (starting from the Big Bang) and find instances of itself inside of it: http://lesswrong.com/lw/8ys/a_way_of_specifying_utility_functions_for_udt/ . It's laughably hard to make a dangerous goal.

edit: that is to say, you focus on material goals (maybe for lack of understanding of any other goals). For example, the koo can try to find values for multiple variables describing a microchip that result in maximum microchip performance. That's an easy goal to define. The baz would either try to attain some material state of the variables and registers of its hardware, resisting shutdown, or outright try to attain the material goal of building a better CPU in reality. All the goal space you can even think of is but a tiny speck in the enormous space of possible goals - an uninteresting speck that is both hard to reach and obviously counterproductive.

comment by jsalvatier · 2012-06-14T19:06:04.181Z · LW(p) · GW(p)

What were you trying to accomplish here?

Replies from: private_messaging
comment by private_messaging · 2012-06-15T01:19:39.669Z · LW(p) · GW(p)

I'm talking to what I see as a rather dangerous cult in the making. Some of that makes people think. There are people here who are not gurus or nuts but simply misled.

Replies from: jsalvatier
comment by jsalvatier · 2012-06-15T05:24:37.050Z · LW(p) · GW(p)

That's an interesting notion. I can understand the cult thing (if not agreeing with it), but what do you have in mind that makes LW stuff 'dangerous' but also not true?

Replies from: private_messaging
comment by private_messaging · 2012-06-15T07:39:09.979Z · LW(p) · GW(p)

It being a doomsday prophecy cult, essentially. Some day, the world will get close to implementing AGI, and you guys will seriously believe we're all going to die because none of the silly and useless (as well as irrelevant) philosophical nonsense was ever a part of the design; the safety will have happened in some way that is quite beyond the understanding of minds unaccustomed to dealing with subtleties and details (beyond referring to those in handwaving). I'm pretty sure something very stupid will be done.

Replies from: jsalvatier, Mitchell_Porter
comment by jsalvatier · 2012-06-15T14:56:16.315Z · LW(p) · GW(p)

Stupid like major world event or stupid like minor daily news story?

Replies from: private_messaging
comment by private_messaging · 2012-06-15T15:02:39.654Z · LW(p) · GW(p)

Stupid like attempted sabotage. Keep in mind we're talking of folks who can't keep their cool when someone thinks through a decision theory of their own to arrive at a creepy conclusion (the link that you are not recommended to read). And before then, a lot of stupid in the form of going around associating safety concerns with crankery, which probably won't matter but may matter if at some point someone sees some actual danger (as opposed to reading stuff off science fiction by Vinge) and measures have to be implemented for good reasons. (BTW, from Wikipedia: "Although a crank's beliefs seem ridiculous to experts in the field, cranks are sometimes very successful in convincing non-experts of their views. A famous example is the Indiana Pi Bill where a state legislature nearly wrote into law a crank result in geometry.")

Replies from: jsalvatier
comment by jsalvatier · 2012-06-15T15:43:40.834Z · LW(p) · GW(p)

I understand why if you don't agree with DoomsdayCult then such sabotage would be bad, but if you don't agree with DoomsdayCult then it also seems like a pretty minor world problem, so you seem surprisingly impassioned to me.

Replies from: private_messaging
comment by private_messaging · 2012-06-16T15:47:20.596Z · LW(p) · GW(p)

Interesting notion. The idea is, I suppose, that one should put one's boredom time into trying to influence major world events, without seeing that the chance of influencing those is proportionally lower? A somewhat parallel question: why are people fresh out of not having succeeded at anything relevant (or fresh out of theology, even) trying to save everyone from getting killed by AI, even though it's part of everyone's problem space, including that of people who have succeeded at proving new theorems, creating new methods, etc.? The heuristic of picking the largest problem? I see a lot of newbies to programming wanting to make an MMORPG with a zillion ultra-expensive features.

Replies from: jsalvatier
comment by jsalvatier · 2012-06-16T18:27:26.174Z · LW(p) · GW(p)

I'm just surprised the topic holds your interest. Presumably you see LW and related people as low status, since having extreme ideas and being wrong are low status. I wouldn't be very motivated to argue with Scientologists. (I'm not sure this is worth discussing much)

They picked this problem because it seems like the highest marginal utility to them. Rightly or wrongly, most other people don't take AI risks very seriously. Also, since it's a difficult problem, "gaining general competence" can and probably should be a step in attempting to work on big risks.

comment by Mitchell_Porter · 2012-06-15T08:35:45.112Z · LW(p) · GW(p)

The fear focuses on the effects of artificial superintelligence, not the effects of artificial intelligence; but it is anticipated that artificial intelligence leads easily to artificial superintelligence, when AI itself is applied to the task of AI (re)design. If you think of an AGI's capabilities as vaguely like the capabilities of a human being, then the appearance of an AI in the world is just like adding one person to a world that already contains 7 billion persons. It might be a historic development, but not an apocalyptic one. And that is indeed how it should turn out, for a large class of possible AIs.

But in a world with AIs, eventually you will have someone or something go down a path that leads, whether by accident or by design, to AI, AI networks, or human-AI networks, that are effectively working to take over the world. A computer virus is a primitive example of software that runs as wild as it can within its environment. There was no law of nature which protected us from having to deal with a world of computer viruses, and there can't be any law of nature which means we'll never have to deal with would-be hegemonic AIs, because trying to take over the world is already cognitively possible for mere humans.

So, if you're going to concern yourself with this possibility at all, either you try to prevent such AI from ever coming into being, or you try to design a benevolent AI which would still be benevolent even if it became all-powerful. Obviously, the Singularity Institute is focused mostly on the second option.

In your comment you talk about safety, so I assume you agree there is some sort of "AI danger", you just think SI has lots of the details wrong. My opinion is, they have certain basics right, but these basics are buried in the discourse by transhumanist hyperbole about the future, by various extreme thought-experiments, by metaphysical hypotheses which have assumed an unwarranted centrality in discussion, and by posturing and tail-chasing to do with "rationality".

Replies from: private_messaging
comment by private_messaging · 2012-06-15T09:09:24.379Z · LW(p) · GW(p)

The fear focuses on the effects of artificial superintelligence, not the effects of artificial intelligence; but it is anticipated that artificial intelligence leads easily to artificial superintelligence, when AI itself is applied to the task of AI (re)design.

Well, given enough computing power, AIXI-tl is an artificial superintelligence. It also doesn't relate its abstract mathematical self to the substrate that approximately computes that abstract mathematical self; it can't care about the survival of the physical system that approximately computes it; it can't care to avoid being shut down. It's neither friendly nor unfriendly; it is far more bizarre and alien than the speculations, and not encompassed by the 'general' concepts that SI thinks in terms of, like SI's oracle.
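
For reference, a sketch of AIXI's action selection in Hutter's standard notation (U is a universal monotone Turing machine, ℓ(q) the length of environment program q, m the horizon):

$$a_k := \arg\max_{a_k}\sum_{o_k r_k}\cdots\max_{a_m}\sum_{o_m r_m}\bigl[r_k+\cdots+r_m\bigr]\sum_{q\,:\,U(q,a_1\ldots a_m)=o_1 r_1\ldots o_m r_m}2^{-\ell(q)}$$

Nothing in the expression refers to the physical machine that runs the computation, which is one way of stating the point that AIXI (and its computable variant AIXI-tl) has no built-in notion of its own substrate or of self-preservation.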

So, if you're going to concern yourself with this possibility at all, either you try to prevent such AI from ever coming into being, or you try to design a benevolent AI which would still be benevolent even if it became all-powerful. Obviously, the Singularity Institute is focused mostly on the second option.

Yes, for now. When we get closer to creation of AGI not by SI, though, it is pretty clear that the first option becomes the only option.

In your comment you talk about safety, so I assume you agree there is some sort of "AI danger", you just think SI has lots of the details wrong.

I am trying to put it in a way that works for people who are concerned about AI risk. I don't think there's actual danger, because I don't see some of the problems that stand in the way of world destruction by AI as solvable, but if there were solutions to them it'd be dangerous. E.g. to self-preserve, the AI must relate its abstracted-from-implementation high-level self to the concrete electrons in the chips. Then, it has to avoid wireheading somehow (the terminal wireheading where the logic of infinite input and infinite time is implemented). Then, the goals on the real world have to be defined. None of this is necessary to solve for creating a practically useful AI. Working on this is like solving the world's power problems by trying to come up with a better nuclear bomb design, because you think the only way to generate nuclear power is to blow up nukes in a chamber underground.

My opinion is, they have certain basics right, but these basics are buried in the discourse by transhumanist hyperbole about the future, by various extreme thought-experiments, by metaphysical hypotheses which have assumed an unwarranted centrality in discussion, and by posturing and tail-chasing to do with "rationality".

I am not sure which basics are right. The very basic concept here is the "utility function", which is a pretty magical something that e.g. gives you the true number of paperclips in the universe. Everything else seems to have this as a dependency, so if this concept is irrelevant, everything else also breaks.