Inferential credit history
post by RyanCarey
Here’s an interview with Seth Baum. Seth is an expert in risk analysis and a founder of the Global Catastrophic Risk Institute. As expected, Bill O’Reilly caricatured Seth as extreme, and intercut his interview with dramatic clips from alien-invasion films. As a professional provocateur, it is O’Reilly’s job to throw down the gauntlet to his guests. Also as expected, Seth put on a calm and confident performance. Was the interview net-positive or net-negative? It’s hard to say, even in retrospect. Getting any publicity for catastrophic risk reduction is good, and difficult. Still, I’m not sure just how bad publicity has to be before it really is bad publicity…
Explaining catastrophic risks to the audience of Fox News is perhaps as difficult as explaining the risk of artificial intelligence to anyone. The latter is a task that frustrated Eliezer Yudkowsky so deeply that he was driven to write the epic LessWrong sequences. In his view, the inferential distance was too large to be bridged in a single conversation. There were too many things he knew that were prerequisites to understanding his plan. So he wrote a sequence of online posts setting out everything he knew about cognitive science and probability theory, applied to help readers think more clearly and live out their scientific values. He had to write a thousand words per day for about two years before talking about AI explicitly. Perhaps surprisingly, and as an enormous credit to Eliezer’s brain, these sequences became the founding manifesto of the quickly growing rationality movement, many of whose members now share his concerns about AI. Since he wrote them, his Machine Intelligence Research Institute (formerly the Singularity Institute) has grown rapidly and spun off the Center for Applied Rationality, a teaching facility devoted to promoting public rationality.
Why have Seth and Eliezer had such a hard time? Inferential distance explains a lot, but I have a second explanation: Seth and Eliezer had to build an inferential credit history. By the time you get to the end of the sequences, you have seen Eliezer bridge many an inferential distance, and you trust him to span another! If, each time I loan Eliezer some attention and suspend my disbelief, he pays me back (in the currency of interesting and useful insight), then I will keep listening to him say things I don’t yet believe for a long time.
When I watch Seth on The Factor, his interview is coloured by his triple-A credit rating. We have talked before, and I have read his papers. The rest of the audience gave him no time to build intellectual rapport. It’s not just that the inferential distance was large; it’s that he didn’t have a credit rating of sufficient quality to take out a loan of that magnitude!
I contend that if you want to explain something abstract and unfamiliar, you first have to give a bunch of small and challenging chunks of insight, some of which must be practically applicable, and ideally you will lead your audience on a trek across a series of inferential distances, each slightly bigger than the last. It will certainly help if these chunks fill in some of the steps toward understanding the bigger picture, but that isn’t necessary.
This proposal could explain why historical explanations are often effective. Explanations that go like:
Initially I wanted to help people. And then I read The Life You Can Save. And then I realised I had been neglecting to think about large numbers of people. And then I read about scope insensitivity, which made me think this, and then I read Bostrom’s Fable of the Dragon Tyrant, which made me think that, and so on…
This kind of explanation is often disorganised, with frequent detours and false turns – steps in your ideological history that turned out to be wrong or unhelpful. The good thing about historical explanations is that they are stories, with a main character – you – which makes them more compelling. A further advantage, I would argue, is that they give you the opportunity to borrow lots of small amounts of your audience’s attention, and so accrete the good credit rating you will need to make your boldest claims.
Lastly, let me present an alternative philosophy to overcoming inferential distances. It will seem to contradict what I have said so far, although I find it also useful.
If you say that idea X is crazy, then this can often become a self-fulfilling prophecy.
On this view, those who publicise AI risk should never complain about, and rarely talk about, the large inferential distance before them, least of all publicly. They should normalise their proposal by treating it as normal. I still think it’s important for them to acknowledge any intuitive reluctance on the part of their audience to entertain an idea. It’s like how, if you don’t appear embarrassed after committing a faux pas, you’re seen as untrustworthy. But after acknowledging this challenge, they had best get back to their subject material, as any normal person would!
So if you believe in inferential distance, inferential credit history (building trust), and acting normal, then explain hard things by beginning with lots of easy things, building larger and larger bridges, and acknowledging – but being wary of overemphasising – any difficulties.
[also posted on my blog]
Comments sorted by top scores.
comment by JoshuaZ ·
2013-07-24T14:31:11.517Z
Explaining catastrophic risks to the audience of Fox News is perhaps equally difficult to explaining the risk of artificial intelligence to anyone.
This seems like an unnecessary Blue/Green dig, and frankly isn't even obviously true even if one buys into certain stereotypes. The standard accusation against Fox viewers emphasizes their level of worry and paranoia. In that context, existential risk would be something they'd be more likely to listen to for irrational reasons. But to be blunt, I suspect that explaining catastrophic risk will be about as difficult for CNN or MSNBC viewers also.
Replies from: ChristianKl, RyanCarey, Fhyve, buybuydandavis
↑ comment by Fhyve ·
2013-07-28T23:57:40.974Z
Okay, that's reasonable. But can we talk about the content of the post itself? I don't think this really is the most important part of the post, or that the top comment should be about it.
Replies from: JoshuaZ
↑ comment by buybuydandavis ·
2013-07-24T19:25:16.559Z
Probably because Fox is the only major news outlet with a significant libertarian presence, Fox has been giving more coverage than other major news outlets on these kinds of issues for a while. O'Reilly had the interview. I believe Beck has talked about just about everything AI-related: existential risk, uploading, transhumanism. Even Stossel has touched on these topics.
Replies from: JoshuaZ
↑ comment by JoshuaZ ·
2013-07-24T20:26:51.507Z
Probably because Fox is the only major news outlet with a significant libertarian presence, Fox has been giving more coverage than other major news outlets on these kinds of issues for a while.
I don't quite see why one would expect that to be associated with libertarianism, and insofar as some aspects of these ideas connect to it (transhumanism definitely has a "let me do what I want" aspect), other things associated with Fox, such as social conservatism, would militate against that. And given standard ideological biases, my naive guess would be that libertarians would be less accepting of notions of existential risk rather than more.
comment by NancyLebovitz ·
2013-07-24T15:44:56.918Z
For what it's worth, I've never found it difficult to get people to understand the concept of UFAI. I have two possible explanations. One is that I'm kidding myself about how well they understand it, but the one I find more plausible is that I'm framing it as an interesting idea, and Eliezer frames it as something people ought to do something about.
Replies from: bramflakes, cousin_it, Alexei, RomeoStevens, Lumifer
↑ comment by cousin_it ·
2013-07-25T13:39:40.098Z
I've also explained the idea to many people. It's quite easy to explain in a single conversation, but difficult to explain in such a way that they can re-explain it to someone else and answer possible questions. When I explain it to someone really smart, they usually start asking many questions, which I can quickly answer only because I have read most of LW and many related texts.
↑ comment by Alexei ·
2013-07-27T01:04:31.382Z
Likewise. Most people get it, at least on some level. It's a lot more difficult to convince them of: 1) the timeline and 2) that they ought to do something about it.
↑ comment by Lumifer ·
2013-07-24T15:51:48.265Z
I've never found it difficult to get people to understand the concept of UFAI
It's the staple of sci-fi.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz ·
2013-07-24T16:59:37.612Z
Not exactly-- science fiction is usually set up so that humanity has a chance and that chance leads to victory, and part of Eliezer's point is that victory is not at all guaranteed.
Replies from: RolfAndreassen
↑ comment by RolfAndreassen ·
2013-07-24T17:57:28.328Z
For a true UFAI, victory is guaranteed. For the AI. Otherwise it's not actually "intelligent", merely "somewhat smarter than a rock" - also an adequate description of humans.
Replies from: twanvl
↑ comment by twanvl ·
2013-07-24T22:23:13.329Z
For a true UFAI, victory is guaranteed.
No it is not. An AI does not have to be all-powerful. It might get beaten by a coordinated effort of humanity, or by another AI, or by an unlikely accident.
Replies from: ThisSpaceAvailable
↑ comment by ThisSpaceAvailable ·
2013-07-27T04:50:09.947Z
It's theoretically possible to have an isolated AI that has no ability to affect the physical world, but that's rather boring. Hollywood AI has enough power to produce dramatic tension, but is still weak enough to be defeated by plucky humans. In reality, once a UFAI gets to post-Judgement Day Skynet-level power, it's over. Even putting aside the fact that it has a freaking time machine, the idea of fighting a UFAI that has wiped out most of humanity in a nuclear holocaust and controls self-replicating, nigh-indestructible robots is absurd.
comment by Jack ·
2013-07-24T20:16:38.787Z
If you're on a cable-news talk show, your message is being mediated by the host and the producers. If they don't like you, the audience won't like you. If O'Reilly had treated Seth like a serious person, his audience would have taken Seth's ideas seriously. O'Reilly didn't, so they won't. This is especially true for guests who are likely to be on only once. With politicians or notable figures on the Left (or the Right if we're talking about MSNBC, exactly the same thing), the show wants the guest to feel they at least had a fair shake so that they'll come back again. But Seth is basically a replacement-level guest for the show, and there is zero reason not to mock him.
If you're not a professional debater you probably should never go on tv with someone who doesn't like you.
Replies from: ChristianKl
↑ comment by ChristianKl ·
2013-07-25T11:17:39.582Z
If you're not a professional debater you probably should never go on tv with someone who doesn't like you.
That depends on your goals. Sometimes it can be worthwhile to draw attention to the issue you want to promote, even if you are faced with hostile TV people and a majority of the audience won't take you seriously.
comment by chaosmage ·
2013-07-24T16:57:52.586Z
Why the hell didn't he mention Stephen Hawking's comments about danger from extraterrestrials? Endorsements from trusted parties are a highly effective shortcut to gaining credibility, especially at the first look, and he didn't use it.
Peter Thiel's support of MIRI, for example, is huge because he has evidently been good at predicting things before. In laypeople such as the O'Reilly audience, I'll wager that builds more trust in Eliezer's arguments than discussions of paperclip maximizers.
The endorsers don't even have to be experts for the endorsement to help. (Think of Tom Cruise.) I propose that if a nonexpert voice with a huge audience, such as The Big Bang Theory, mentioned HPMOR even in passing, that'd do more to reduce existential risk than three more sequences.
Replies from: RomeoStevens, ChristianKl
comment by RomeoStevens ·
2013-07-24T22:22:17.316Z
Most people don't believe in ideas; they want to associate with high status. Saying lots of clever contrarian things is a way of building status with contrarians. Almost no one is qualified to judge the truth value of far claims.
Replies from: RyanCarey
comment by ThisSpaceAvailable ·
2013-07-27T05:24:31.720Z
As a meta-example, I found this post title rather uninformative as to what the post is about, which made me reluctant to take the time to read something by someone who appeared not to be taking the time to tell me what the post was about. I figured, though, that if this was a worthless post, it would get downvoted, so the Karma system has shown some use for providing attentional capital. As we move more to an information society, having good attentional capital systems will become more important.
Replies from: noahpocalypse
↑ comment by noahpocalypse ·
2013-08-02T15:48:32.772Z
I had that same thought. Perhaps "How to Build a High-level Argument"? Imagine every post on LW having "Hear Me Out" as a title. It would actually be an apt if unnecessary plea for most threads, but it would be such a pain.
comment by ESRogs ·
2013-07-24T15:33:04.481Z
I like the inferential credit history idea!
- it's 'faux pas' (or was that a self-referential joke? ;))
- your post isn't showing up in the standard font, which many readers here have a strong preference for (it can be jarring for the font to switch between articles); here's how to fix it
comment by Kawoomba ·
2013-07-24T20:33:25.294Z
don’t overemphasise any difficulties
Tautologically true, a truism. The "over" in "overemphasise" reduces your advice to "don't emphasise more than you should emphasise", which is a non-statement. The crux of the argument is, of course, to elicit how much is too much, and to find that right balance: "Do the correct action!" isn't helpful advice in my opinion.
The rest of your advice, building up some authority first so that the argument you're most interested in will be taken seriously, is instrumentally useful, but epistemically fragile: "Become an authority so that people tend to believe you by default" has a bit of a dark arts ring to it.
While in generality it is a valid Bayesian inference to predict that someone who has turned out to be correct a bunch of times will continue to do so, if that history of being correct was built up mainly to lend credence to that final and crucial argument, the argumentum ad auctoritatem fails: as with many a snake oil salesman, building rep to make the crucial sale is effective, but it shouldn't work; it builds a "yes, yes, yes" loop when your final argument should stand on its own merits.
Replies from: polarix, ThisSpaceAvailable
↑ comment by polarix ·
2013-07-27T16:32:17.523Z
While in generality it is a valid Bayesian inference to predict that someone who has turned out to be correct a bunch of times will continue to do so, if that history of being correct was built up mainly to lend credence to that final and crucial argument, the argumentum ad auctoritatem fails
You're right that the argument should stand on its own merit if heard to completion.
The point here is that heuristics can kick in early, and the listener, whether through irrationality or time constraints, might not give the argument the attention needed to finish it. This is about how to craft an argument so that it is more likely to be followed to completion.
↑ comment by ThisSpaceAvailable ·
2013-07-27T05:09:56.330Z
Well, if we're being pedantic, it's an imperative, not a declarative, sentence, so "tautological" doesn't apply. And even if it were a tautology, communication theory says that words convey information through their meta-meaning, not just their explicit meaning, so their literal meanings, in themselves, are not decisive. That a statement's literal meaning is non-informative is not important if the meta-meaning is informative. In fact, given the implicit assumption that every statement is meaningful, the lack of information in the literal meaning simply makes the recipient look hard at the meta-meaning.
Also, that phrase doesn't appear in the article, but that's likely because the article was edited.