2012 Robin Hanson comment on “Intelligence Explosion: Evidence and Import”

post by Rob Bensinger (RobbBB) · 2021-04-02T16:26:51.725Z · LW · GW · 4 comments

The 2013 anthology Singularity Hypotheses: A Scientific and Philosophical Assessment (edited by Eden, Moor, Søraker, and Steinhart) included short responses to some articles. The response that most stuck with me was Robin Hanson's reply to Luke Muehlhauser and Anna Salamon’s "Intelligence Explosion: Evidence and Import."

Unfortunately, when I try to link to this reply, people don't tend to realize they can scroll down to find it. So I've copied the full thing here.

(Note that I'm not endorsing Robin's argument, just sharing it as a possible puzzle piece for identifying cruxes of disagreement between people with Hansonian versus Yudkowskian intuitions about AGI.)


Muehlhauser and Salamon [M&S] talk as if their concerns are particular to an unprecedented new situation: the imminent prospect of “artificial intelligence” (AI). But in fact their concerns depend little on how artificial will be our descendants, nor on how intelligent they will be. Rather, Muehlhauser and Salamon’s concerns follow from the general fact that accelerating rates of change increase intergenerational conflicts. Let me explain.

Here are three very long term historical trends:

  1. Our total power and capacity has consistently increased. Long ago this enabled increasing population, and lately it also enables increasing individual income.
  2. The rate of change in this capacity increase has also increased. This acceleration has been lumpy, concentrated in big transitions: from primates to humans to farmers to industry.
  3. Our values, as expressed in words and deeds, have changed, and changed faster when capacity changed faster. Genes embodied many earlier changes, while culture embodies most today.

Increasing rates of change, together with constant or increasing lifespans, generically imply that individual lifetimes now see more change in capacity and in values. This creates more scope for conflict, wherein older generations dislike the values of younger more-powerful generations with whom their lives overlap.

As rates of change increase, these differences in capacity and values between overlapping generations increase. For example, Muehlhauser and Salamon fear that their lives might overlap with

> "[descendants] superior to us in manufacturing, harvesting resources, scientific discovery, social charisma, and strategic action, among other capacities. We would not be in a position to negotiate with them, for [we] could not offer anything of value [they] could not produce more effectively themselves. … This brings us to the central feature of [descendant] risk: Unless a [descendant] is specifically programmed to preserve what [we] value, it may destroy those valued structures (including [us]) incidentally."

The quote actually used the words “humans”, “machines” and “AI”, and Muehlhauser and Salamon spend much of their chapter discussing the timing and likelihood of future AI. But those details are mostly irrelevant to the concerns expressed above. It doesn’t matter much if our descendants are machines or biological meat, or if their increased capacities come from intelligence or raw physical power. What matters is that descendants could have more capacity and differing values.

Such intergenerational concerns are ancient, and in response parents have long sought to imprint their values onto their children, with modest success.

Muehlhauser and Salamon find this approach completely unsatisfactory. They even seem wary of descendants who are cell-by-cell emulations of prior human brains, “brain-inspired AIs running on human-derived ‘spaghetti code’, or ‘opaque’ AI designs … produced by evolutionary algorithms.” Why? Because such descendants “may not have a clear ‘slot’ in which to specify desirable goals.”

Instead Muehlhauser and Salamon prefer descendants that have “a transparent design with a clearly definable utility function,” and they want the world to slow down its progress in making more capable descendants, so that they can first “solve the problem of how to build [descendants] with a stable, desirable utility function.”

If “political totalitarians” are central powers trying to prevent unwanted political change using thorough and detailed control of social institutions, then “value totalitarians” are central powers trying to prevent unwanted value change using thorough and detailed control of everything value-related. And like political totalitarians willing to sacrifice economic growth to maintain political control, value totalitarians want us to sacrifice capacity growth until they can be assured of total value control.

While the basic problem of faster change increasing intergenerational conflict depends little on change being caused by AI, the feasibility of this value totalitarian solution does seem to require AI. In addition, it requires transparent-design AI to be an early and efficient form of AI. Furthermore, either all the teams designing AIs must agree to use good values, or the first successful team must use good values and then stop the progress of all other teams.

Personally, I’m skeptical that this approach is even feasible, and if feasible, I’m wary of the concentration of power required to even attempt it. Yes we teach values to kids, but we are also often revolted by extreme brainwashing scenarios, of kids so committed to certain teachings that they can no longer question them. And we are rightly wary of the global control required to prevent any team from creating descendants who lack officially approved values.

Even so, I must admit that value totalitarianism deserves to be among the range of responses considered to future intergenerational conflicts.

4 comments


comment by Rob Bensinger (RobbBB) · 2021-04-02T16:48:10.998Z · LW(p) · GW(p)

General note: I'm confident Luke and Anna wouldn't endorse Robin's characterization of their position here. (Nor do I think Robin's trying to summarize their view in a way they'd endorse. Rather, he's deliberately reframing their view in other terms to try to encourage original seeing.)

A quick response on Luke and Anna's behalf (though I'm sure their own response would look very different):

You can call a nuclear weapon design or a computer virus our 'child' or 'descendant', but this wouldn't imply that we should have similar policies for nukes or viruses as the ones we have for our actual descendants. There needs to be a specific argument for why we should expect AGI systems to be like descendants on the dimensions that matter.

(Or, if we have the choice of building AGI systems that are more descendant-like versus less descendant-like, there needs to be some argument for why we ought to choose to build highly descendant-like AGI systems. E.g., if we have the option of building sentient AGI vs. nonsentient AGI, is there a reason we should need to choose "sentient"?)

It's true we shouldn't mistreat sentient AGI systems any more than we should mistreat humans; but we're in the position of having to decide what kind of AGI systems to build, with finite resources.

If approach U would produce human-unfriendly AGI and approach F would produce human-friendly AGI, you can object that choosing F over U is "brainwashing" AGI or cruelly preventing U's existence; but if you instead chose U over F, you could equally object that you're brainwashing the AGI to be human-unfriendly, or cruelly preventing F's existence. It's true that we should expect U to be a lot easier than F, but I deny that this or putting on a blindfold creates a morally relevant asymmetry. In both cases, you're just making a series of choices that determine what AGI systems look like.

comment by [deleted] · 2021-04-02T20:55:29.283Z · LW(p) · GW(p)

> It's true we shouldn't mistreat sentient AGI systems any more than we should mistreat humans; but we're in the position of having to decide what kind of AGI systems to build, with finite resources.

That's not how R&D works. With the early versions of something, you need the freedom to experiment, and early versions of an idea need to be both simple and well instrumented.

One reason your first AI might be a 'paperclip maximizer' is simply that it's less code to fight with. Certainly, judging by OpenAI's papers, that's basically what all their systems are. (They don't have the ability or capacity to allocate additional resources, which seems to be the key step that makes a paperclip maximizer dangerous.)

comment by gwillen · 2021-04-02T17:33:47.668Z · LW(p) · GW(p)

I think my immediate objection to Robin's take can be summarized with a classic "shit rationalists say" quote: "You have no idea how BIG mindspace is." Sure, we quibble over how much it's okay to impose our values on our descendants, but those value differences are about (warning: tongue firmly in cheek) relatively trivial things like "what fictional deity to believe in" or "whether it's okay to kill our enemies, and who they are". Granted, Robin talks about acceleration of value change, but value differences like "whether to convert the Earth and its occupants into computronium" seem like a significant discontinuity with previous value differences among generations of humans.

Humans may have different values from their ancestors, but it is not typical for them to have both the capability and the desire (or lack of inhibition) to exterminate those ancestors. If it was, presumably "value totalitarianism" would be a lot more popular. Perhaps Robin doesn't believe that AGI would have such a large value difference with humans; but in that case we've come back into the realm of a factual dispute, rather than a philosophical one.

(It does seem like a very Robin Hanson take to end by saying: Although my opponents advocate committing what I clearly suggest should be considered an atrocity against their descendants, "[e]ven so, I must admit that [it] deserves to be among the range of responses considered to future intergenerational conflicts.")

comment by Rob Bensinger (RobbBB) · 2021-04-02T16:31:39.133Z · LW(p) · GW(p)

> They even seem wary of descendants who are cell-by-cell emulations of prior human brains, “brain-inspired AIs running on human-derived ‘spaghetti code’, or ‘opaque’ AI designs … produced by evolutionary algorithms.” Why? Because such descendants “may not have a clear ‘slot’ in which to specify desirable goals.”

I think Robin is misunderstanding Anna and Luke here; they're talking about vaguely human-brain-inspired AI, not about literal human brains run on computer hardware. In general, I think Robin's critique here makes sense as a response to someone saying 'we should be terrified of how strong and fast-changing ems will be, and potentially be crazily heavy-handed about controlling ems'. I don't think AGI systems are relevantly analogous, because AGI systems have a value loading problem and ems just don't.