An onion strategy for AGI discussion
post by lukeprog · 2014-05-31T19:08:24.784Z · LW · GW · Legacy · 12 comments
Cross-posted from my blog.
"The stabilization of environments" is a paper about AIs that reshape their environments to make it easier to achieve their goals. This is typically called enforcement, but they prefer the term stabilization because it "sounds less hostile."
"I'll open the pod bay doors, Dave, but then I'm going to stabilize the ship..."
Sparrow (2013) takes the opposite approach to plain vs. dramatic language. Rather than using a modest term like iterated embryo selection, Sparrow prefers the phrase in vitro eugenics. Jeepers.
I suppose that's more likely to provoke public discussion, but... will much good come of that public discussion? The public had a needless freak-out about in vitro fertilization back in the 60s and 70s and then, as soon as the first IVF baby was born in 1978, decided they were in favor of it.
Someone recently suggested I use an "onion strategy" for the discussion of novel technological risks. The outermost layer of the communication onion would be aimed at the general public, and focus on benefits rather than risks, so as not to provoke an unproductive panic. A second layer for a specialist audience could include a more detailed elaboration of the risks. The most complete discussion of risks and mitigation options would be reserved for technical publications that are read only by professionals.
Eric Drexler seems to wish he had more successfully used an onion strategy when writing about nanotechnology. Engines of Creation included frank discussions of both the benefits and risks of nanotechnology, including the "grey goo" scenario that was discussed widely in the media and used as the premise for the bestselling novel Prey.
Ray Kurzweil may be using an onion strategy, or at least keeping his writing in the outermost layer. If you look carefully, chapter 8 of The Singularity is Near takes technological risks pretty seriously, and yet it's written in such a way that most people who read the book seem to come away with an overwhelmingly optimistic perspective on technological change.
George Church may be following an onion strategy. Regenesis also contains a chapter on the risks of advanced bioengineering, but it's presented as an "epilogue" that many readers will skip.
Perhaps those of us writing about AGI for the general public should try to discuss:
- astronomical stakes rather than existential risk
- Friendly AI rather than AGI risk or the superintelligence control problem
- the orthogonality thesis and convergent instrumental values and complexity of values rather than "doom by default"
- etc.
MIRI doesn't have any official recommendations on the matter, but these days I find myself leaning toward an onion strategy.
12 comments
comment by James_Miller · 2014-06-01T18:35:16.207Z · LW(p) · GW(p)
The outermost layer should concern issues that the people you are trying to influence care about. Alas, aside from global warming, this means ignoring things that won't happen in the next 50 years. The outermost layer should also take confirmation bias into account: don't claim in this layer that the possibility of developing a super-intelligence reduces the optimal amount of money we should spend today on slowing global warming, because this will cause much of the elite media to lower their opinion of you. The outermost layer should also completely exclude anything that will trigger disgust reactions in the elite media, even if this disgust is totally irrational. Finally, your goal in this layer should be to embed an elevator pitch that overcomes the absurdity heuristic you will inevitably encounter when you claim your goal is to develop a super-intelligence.
Although this speaks very poorly for our species, the danger of discussing existential risk in the outermost layer is that it will cause most people to find you uninteresting.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-06-02T17:24:45.441Z · LW(p) · GW(p)
This doesn't mean the outer layer is about the dangers of robotic cars or military drones. Other people are already talking about this; they are more expert than you and have more interesting things to say that include Powerpoint slides, and you will rapidly be lost in the crowd.
↑ comment by James_Miller · 2014-06-02T18:20:59.484Z · LW(p) · GW(p)
If the outer layer is for communicating with the "general public" won't you be doing this mostly through reporters who are writing <1000 word articles or doing short audio or video segments? Don't they just want someone to say interesting things for 10 minutes, and in return for helping them you could get your name out there and perhaps attract new donors? Or have you found little benefit from appearing in such articles?
comment by David_Gerard · 2014-06-03T08:13:13.789Z · LW(p) · GW(p)
This appears to be a reinvention of milk before meat, one of those bad ideas that just keeps coming back. This sometimes works out not so well; people do understand the concept of bait and switch quite readily.
Basically, if you have bits that you know perfectly well look weird or stupid, then pretending you don't just looks fraudulent.
comment by Sean_o_h · 2014-06-05T09:48:58.675Z · LW(p) · GW(p)
In my experience, in communicating on these matters to the public or generalists, it's definitely good to highlight benefits as well as risks - and that style of onion strategy sounds about right and is roughly the type of approach I take (unless in public/general discussion I'm e.g. specifically asked to comment on a particular risk concern).
In speaking to public and policymakers here (outer layer 1 to layer 1.5, if you will), I've found a "responsible innovation" type framing to be effective. I'm pro-progress, the world has a lot of problems that advances in technology will need to play a role in solving, and some of the benefits of synthetic biology, artificial intelligence etc will be wonderful. However, we can make progress most confidently if the scientific community (along with other key players) devotes some resources towards identifying, evaluating, and if necessary taking proactive steps to prevent extreme negative scenarios. In such presentations/discussions, I present CSER and FHI as aiming to lead and coordinate such work. I sometimes make the analogy to an insurance policy: we hope that the risks we work on would never come to pass, but if the risk is plausible and the impact would be big enough, then we can only progress with confidence if we take steps ahead of time to protect our interests. This seems to be effective particularly with UK policymakers and industry folk - risk concerns are received better if I signal that I'm pro-progress, not irrationally risk-averse or fear-mongering, and can hint at a reasonably sophisticated understanding of what these technologies entail and what benefits they can be expected to bring.
I would add a small caution on "astronomical stakes". It works very well in some rhetoric-friendly public speaking/writing settings (and I've used it), but for certain individuals and audiences it can produce a bit of a knee-jerk negative reaction as being a grandiose, slightly self-important perspective (perhaps this applies more in Europe than in the US though, where the level of public rhetoric is a notch or two lower).
comment by torekp · 2014-05-31T23:48:15.541Z · LW(p) · GW(p)
I don't think it's wise to make the outer layer "focus on benefits rather than risks." That sort of spin control will probably be detected and lower your credibility in the eyes of the public. But more importantly, panic isn't always a bad thing. (I'm assuming we're using "panic" broadly here: I don't remember any riots over IVF.) I'd like to see a little more panic about AGI.
Your concrete suggestions look good. For example, "astronomical stakes" rather than "existential risk" is a good move, because it's balanced. And it's not a good idea to push too hard on doom and gloom: that will lower your credibility just as much as positive spin. The outer layer doesn't need to be positive, just suitably non-technical and representative of the overall layout of the issues.
↑ comment by Vulture · 2014-06-01T22:38:18.821Z · LW(p) · GW(p)
I don't think it's wise to make the outer layer "focus on benefits rather than risks." That sort of spin control will probably be detected and lower your credibility in the eyes of the public.
As I understand it, this is usually not the case if the spin is done professionally.
The outer layer doesn't need to be positive, just suitably non-technical and representative of the overall layout of the issues.
This pretty much negates the entire premise as suggested in the post, and just reduces to "explain things to laymen".
comment by Lumifer · 2014-06-02T17:59:58.981Z · LW(p) · GW(p)
The outermost layer of the communication onion would ... focus on benefits ... A second layer for a specialist audience could include a more detailed elaboration of the risks.
Since the specialist audience also reads the general-public materials, this would open you to accusations of being two-faced (or even hypocritical).
I am also not clear on the specific goals of this exercise. Do you aim to manipulate public opinion (that's a craft with its own set of skills and many tricks of the trade)?
comment by Douglas_Reay · 2014-07-18T10:08:59.890Z · LW(p) · GW(p)
To paraphrase "Why Flip a Coin: The Art and Science of Good Decisions" by H. W. Lewis:
Good decisions are made when the person making the decision shares in both the benefits and the consequences of that decision. Shield a person from either, and you shift the decision making process.
However, we know there are various cognitive biases which make people's estimates of evidence depend upon the order in which the evidence is presented. If we want to inform people, rather than manipulate them, then we should present information to them in the order that will minimise the impact of such biases, even if doing so isn't the tactic most likely to manipulate them into agreeing with the conclusion that we ourselves have come to.
↑ comment by Douglas_Reay · 2014-07-18T10:13:22.258Z · LW(p) · GW(p)
Having said that, there is research suggesting that some groups are more prone than others to the particular cognitive biases that unduly prejudice people against an option when they hear about the scary bits first.
comment by danieldewey · 2014-06-01T20:46:06.148Z · LW(p) · GW(p)
This is an interesting consideration. One related thing I think about is that potential technical contributors will usually need to pass through the outermost layer of the onion before they get to the inner ones, so it's important not to bounce them off with something that seems too non-technical. This should be adjustable independently of the "alarm level" of the language, though.