Help us name a short primer on AI risk!

post by lukeprog · 2013-09-17T20:35:34.895Z · LW · GW · Legacy · 75 comments

MIRI will soon publish a short book by Stuart Armstrong on the topic of AI risk. The book is currently titled “AI-Risk Primer” by default, but we’re looking for something a little more catchy (just as we did for the upcoming Sequences ebook).

The book is meant to be accessible and avoids technical jargon. Here is the table of contents and a few snippets from the book, to give you an idea of the content and style:

  1. Terminator versus the AI
  2. Strength versus Intelligence
  3. What Is Intelligence? Can We Achieve It Artificially?
  4. How Powerful Could AIs Become?
  5. Talking to an Alien Mind
  6. Our Values Are Complex and Fragile
  7. What, Precisely, Do We Really (Really) Want?
  8. We Need to Get It All Exactly Right
  9. Listen to the Sound of Absent Experts
  10. A Summary
  11. That’s Where You Come In …

The Terminator is a creature from our primordial nightmares: tall, strong, aggressive, and nearly indestructible. We’re strongly primed to fear such a being—it resembles the lions, tigers, and bears that our ancestors so feared when they wandered alone on the savanna and tundra.

As a species, we humans haven’t achieved success through our natural armor plating, our claws, our razor-sharp teeth, or our poison-filled stingers. Though we have reasonably efficient bodies, it’s our brains that have made the difference. It’s through our social, cultural, and technological intelligence that we have raised ourselves to our current position.

Consider what would happen if an AI ever achieved the ability to function socially—to hold conversations with a reasonable facsimile of human fluency. For humans to increase their social skills, they need to go through painful trial and error processes, scrounge hints from more articulate individuals or from television, or try to hone their instincts by having dozens of conversations. An AI could go through a similar process, undeterred by social embarrassment, and with perfect memory. But it could also sift through vast databases of previous human conversations, analyze thousands of publications on human psychology, anticipate where conversations are leading many steps in advance, and always pick the right tone and pace to respond with. Imagine a human who, every time they opened their mouth, had spent a solid year to ponder and research whether their response was going to be maximally effective. That is what a social AI would be like.

So, title suggestions?

75 comments

comment by MichaelAnissimov · 2013-09-18T22:10:45.993Z · LW(p) · GW(p)

I like "Smarter than Us: an overview of AI Risk". The first three words should knock the reader out of their comfort zone.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-09-21T21:36:21.266Z · LW(p) · GW(p)

I concur on the main title, but, in accordance with cousin_it's comment below, we might go with AI as a Danger to Mankind as a subtitle or something like that. Maybe AI's Promise and Peril for Humanity to avoid (a) giving people the impression we think AI should never be built and (b) the charge of sexism.

Note that "promise and peril" is Kurzweil's turn of phrase; it sounds much better in my head than "promise and danger," which I also thought of.

Replies from: MichaelAnissimov
comment by MichaelAnissimov · 2013-09-21T23:55:44.650Z · LW(p) · GW(p)

Sexism..?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-09-22T15:30:27.550Z · LW(p) · GW(p)

Yes, sexism. "Mankind" is male-tilted in a way that "humanity" isn't.

comment by So8res · 2013-09-17T21:58:30.077Z · LW(p) · GW(p)

These suggestions lean towards sensationalism:

  • Losing the Future: The Potential of AI
  • The Power of Intelligence: an overview of AI Risk
  • Smarter than Us: an overview of AI Risk
  • The Fragile Future: an overview of AI Risk
  • An introduction to superhuman intelligence and the risks it poses

Replies from: lukeprog, lincolnquirk
comment by lukeprog · 2013-09-18T04:09:09.163Z · LW(p) · GW(p)

The Power of Intelligence: A.I. as a Danger to Mankind might be good, too...

comment by lincolnquirk · 2013-09-18T23:57:41.183Z · LW(p) · GW(p)

Along the lines of "Fragile Future" - I like alliteration:

  • The Common Cause: how artificial intelligence will save the world -- or destroy it. (neat double meaning, maybe a bit too abstracted)
  • The Digital Demon (uhm... a bit too personified)
  • The Silicon Satan (okay, this is getting ridiculous)

Honestly I really like Fragile Future though.

comment by cousin_it · 2013-09-17T20:49:22.259Z · LW(p) · GW(p)

My model of people who are unaware of AI risk says that they will understand a title like "Artificial intelligence as a danger to mankind".

Replies from: lukeprog, Randaly
comment by lukeprog · 2013-09-18T04:07:30.067Z · LW(p) · GW(p)

Artificial Intelligence as a Danger to Mankind seems pretty good, if we think it's good to emphasize the risk angle in the title. Though unlike many publishers, I'll also be getting the author's approval before choosing a title.

Replies from: jmmcd, Stuart_Armstrong
comment by jmmcd · 2013-09-18T13:10:57.734Z · LW(p) · GW(p)

"X as a Y" is an academic idiom. Sounds wrong for the target audience.

comment by Stuart_Armstrong · 2013-09-18T13:15:36.943Z · LW(p) · GW(p)

Don't have "robot" in the title, or anything that pattern matches to the Terminator (unless it's specifically to draw a contrast).

comment by Randaly · 2013-09-17T22:05:34.758Z · LW(p) · GW(p)

Possibly emphasize 'risk' as opposed to 'danger'? "The Risks of Artificial Intelligence Development"? "Risks from the Development of Superhuman AI"?

Replies from: loup-vaillant
comment by loup-vaillant · 2013-09-18T16:41:29.069Z · LW(p) · GW(p)

Or, "Artificial intelligence as a risk to mankind". (Without the emphasis.)

comment by palladias · 2013-09-17T21:41:33.437Z · LW(p) · GW(p)

I don't have anything good, but I think the sweet spot is something that kinda draws in people who'd be excited about mainstream worries about AI but implies there's a twist.

  • Blue Screen of Death... Forever: A Guide to AI Risk
  • Life or Death Programming: The Future of AI Risk
  • Life or Death Philosophy: The Future of AI Risk
  • Decision Theory Xor Death
  • Cogito Ergo Doom: The Unexpected Risks of AI
  • Worse than Laser Eyes: The Real Risks of AI

Replies from: Richard_Kennaway, Stuart_Armstrong, John_Maxwell_IV
comment by Richard_Kennaway · 2013-09-17T22:32:09.455Z · LW(p) · GW(p)

Cogito Ergo Doom

Nice.

comment by Stuart_Armstrong · 2013-09-18T13:11:53.953Z · LW(p) · GW(p)

Sigh... this makes me realise how untalented I am at finding titles!

Replies from: palladias
comment by palladias · 2013-09-18T18:42:20.610Z · LW(p) · GW(p)

Practice practice practice! I've had to find titles for daily blog posts for three years.

comment by John_Maxwell (John_Maxwell_IV) · 2013-09-20T02:07:07.554Z · LW(p) · GW(p)

Blue Screen of Death... Forever: A Guide to AI Risk

I like this one as "Blue Screen of Death: A Primer on AI Risk". "Have you read Blue Screen of Death?" There's something appealing about a book that doesn't take itself too seriously, IMO.

comment by gjm · 2013-09-18T08:13:24.070Z · LW(p) · GW(p)

I don't like all the clever-clever titles being proposed because (1) they probably restrict the audience and (2) one of the difficulties MIRI faces is persuading people to take the risk seriously in the first place -- which will not be helped by a title that's flippant, or science-fiction-y, or overblown, or just plain confusing.

You don't need "primer" or anything like it in the title; if the book has a fairly general title, and is short, and has a preface that begins "This book is an introduction to the risks posed by artificial intelligence" or something, you're done. (No harm in having something like "primer" or "introduction" in the title, if that turns out to make a good title.)

Spell out "artificial intelligence". (Or use some other broadly equivalent term.)

I would suggest simply "Risks of artificial intelligence" or maybe "Risks of machine intelligence" (matching MIRI's name).

Replies from: ciphergoth, palladias, ChrisHallquist
comment by Paul Crowley (ciphergoth) · 2013-09-18T11:34:36.069Z · LW(p) · GW(p)

I take your point, but it looks like the book they've decided to write is one that's at least a little flippant and science-fiction-y, and that being so the title should reflect that.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-09-18T13:07:18.338Z · LW(p) · GW(p)

The Terminator section is there to counter that issue immediately, rather than being sci-fi-ish.

comment by palladias · 2013-09-20T05:58:55.454Z · LW(p) · GW(p)

I think titles also follow the "the only goal of the first sentence is to make the reader want to read the second sentence" rule. If MIRI is pitching this book at bright laypeople, I think it's good to be a bit jazzy and then dismantle the Skynet assumptions early on (as it looks like this does).

If the goal is for it to be a technical manual for people in math and CS, I'd agree that anything that sounds like pop sci or Gladwell is probably a turn-off.

Of course, you could always have two editions, with two titles (and differing amounts of LaTeX).

comment by ChrisHallquist · 2013-09-21T21:29:16.966Z · LW(p) · GW(p)

These are reasonable concerns, but a boring title will restrict the audience in its own way. Michael's "Smarter than Us" suggestion avoids both risks, though, I think.

Edit: Wait, that wasn't Michael's idea originally; he was just endorsing it. But I agree with his endorsement and his reasoning. Definitely sends shivers down my spine.

comment by James_Miller · 2013-09-17T23:18:49.159Z · LW(p) · GW(p)

To Serve Man: an overview of AI Risk

comment by Stuart_Armstrong · 2013-09-18T13:17:39.034Z · LW(p) · GW(p)

Maybe:

I'm sorry Dave, I'm doing exactly what you asked me

(followed by a dull but informative "risks of artificial intelligence"-style subtitle)

comment by Richard_Kennaway · 2013-09-17T22:30:35.908Z · LW(p) · GW(p)

"The Last Machine: Why Artificial Intelligence Might Just Wipe Us All Out"

It could include a few cartoons of robots destroying us all while saying things like:

"I do not hate you, but you are made of atoms I can use for something else."
"I am built to maximise human happiness, so unhappy people must die."
"Must...make...paperclips!"
"Muahahahaha! I will grant ALL your wishes!!!"

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-20T00:19:40.004Z · LW(p) · GW(p)

I strongly advocate eliminating the word 'risk' from the title. I have never spoken of 'AI risk'.

It is a defensive word, and in a future-of-technology context it communicates to people that you are about to talk about possible threats that no amount of argument will talk you out of. Only people who like the 'risk' dogwhistle will read it, and they probably won't like the content.

  • What We Can Know About Powerful Artificial Intelligence
  • Powerful Artificial Intelligence: Why Its Friendliness or Hostility is Knowably Design-Dependent
  • Foreseeable Difficulties of Having AI Be A Good Thing
  • Friendly AI: Possible But Difficult

Replies from: Stuart_Armstrong, ESRogs, John_Maxwell_IV, somervta
comment by Stuart_Armstrong · 2013-09-20T15:20:25.690Z · LW(p) · GW(p)

None of these titles seem likely to grip people...

Replies from: lukeprog
comment by lukeprog · 2013-09-22T14:12:14.064Z · LW(p) · GW(p)

I like Friendly AI: Possible But Difficult best, but given your text, it might need to be Good Artificial Intelligence: Possible But Difficult.

But I agree these are unlikely to grip people.

Maybe just The Rise of Superintelligence?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-09-22T21:18:00.279Z · LW(p) · GW(p)

Apt to be confused with Bostrom's forthcoming book?

comment by ESRogs · 2013-09-20T00:39:10.403Z · LW(p) · GW(p)

I notice that I am confused.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-20T01:31:55.167Z · LW(p) · GW(p)

"AI as a positive and negative factor in global risk", in a book called "Global Catastrophic Risks". The phrase 'AI risk' does not appear in the text. If I'd known then what I know now, I would have left the word 'risk' out of the title entirely.

Replies from: ESRogs
comment by ESRogs · 2013-09-20T10:23:16.239Z · LW(p) · GW(p)

Confusion cleared :)

comment by John_Maxwell (John_Maxwell_IV) · 2013-09-20T02:20:29.693Z · LW(p) · GW(p)

I'd assume that anyone who hears about the book is going to learn that it's about risks from AI. Do you really think it comes down to the word "risk"? Borrowing Mike Anissimov's title, how about "Smarter than Us: On the Safety of Artificial Intelligence Research"?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-20T06:26:49.566Z · LW(p) · GW(p)

'Safety' has much of the same problem, though not as much as 'risk'.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-09-20T07:14:16.663Z · LW(p) · GW(p)

Makes sense. Here are a few more ideas, tending towards a pop-sci feel.

  • Ethics for Robots: AI, Morality, and the Future of Humankind

  • Big Servant, Little Master: Anticipating Superhuman Artificial Intelligence

  • Friendly AI and Unfriendly AI

  • AI Morality: Why We Need It and Why It's Tough

  • AI Morality: A Hard Problem

  • The Mindspace of Artificial Intelligences

  • Strong AI: Danger and Opportunity

  • Software Minds: Perils and Possibilities of Human-Level AI

  • Like Bugs to Them: The Coming Rise of Super-Intelligent AI

  • From Cavemen to Google and Beyond: The Future of Intelligence on Earth

  • Super-Intelligent AI: Opportunities, Dangers, and Why It Could Come Sooner Than You Think

Replies from: daniel-1
comment by daniel-1 · 2013-09-22T17:46:19.973Z · LW(p) · GW(p)

I think Ethics for Robots catches your attention (or at least it caught mine), but I think some of the other subtitles you suggested go better with it:

Ethics for Robots: Perils and Possibilities of Super-Intelligent AI

Ethics for Robots: A Hard Problem

Although maybe you wouldn't want to associate AI and robots.

Replies from: John_Maxwell_IV, TheOtherDave
comment by John_Maxwell (John_Maxwell_IV) · 2013-09-22T19:23:10.233Z · LW(p) · GW(p)

Yep, absolutely feel free to mix/match/modify my suggested titles.

comment by TheOtherDave · 2013-09-22T19:17:38.574Z · LW(p) · GW(p)

"Artificial Ethics"?

comment by somervta · 2013-09-20T01:46:12.757Z · LW(p) · GW(p)

Of these the first is the best, by a long shot.

comment by luminosity · 2013-09-18T11:23:37.416Z · LW(p) · GW(p)

An Alien Mind: The Risks of AI

comment by shminux · 2013-09-18T01:36:46.297Z · LW(p) · GW(p)

Needle in the AIstack :)

comment by [deleted] · 2013-09-18T01:25:47.762Z · LW(p) · GW(p)

  • Preventing the Redundancy of the Human Race
  • What will you do when your smartphone doesn't need you anymore?
  • Humans vs Machines - a Battle we could not win

comment by somervta · 2013-09-18T01:23:05.522Z · LW(p) · GW(p)

Important question - is this going to be a broad overview of AI risk in that it will cover different viewpoints (other than just MIRI's), a little like Responses to Catastrophic AGI Risk was, or is it to be more focused on the MIRI-esque view of things?

Replies from: lukeprog
comment by lukeprog · 2013-09-18T04:07:47.016Z · LW(p) · GW(p)

Focused on the MIRI-esque view.

comment by Zaine · 2013-09-20T00:00:13.274Z · LW(p) · GW(p)

Risks of Artificial Intelligence

Or, adding a wee bit of flair:

Parricide: Risks of Artificial Intelligence

Conceding the point to Eliezer:

Parricide and the Quest for Machine Intelligence

comment by Stuart_Armstrong · 2013-09-18T13:25:42.201Z · LW(p) · GW(p)

Can market research be done?

Replies from: gwern
comment by gwern · 2013-09-19T17:51:39.119Z · LW(p) · GW(p)

Sure; you could compile a list from the comments and throw them into Google AdWords to see what maximizes clicks (the landing page would be something on intelligence.org). Anyone could do this - heck, I have $40 of AdWords credit I didn't realize I had, and could do it. But would this really be worthwhile, especially if people keep suggesting titles?
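
For what it's worth, here is a minimal sketch of what "see what maximizes clicks" could look like once the campaign numbers are in. The titles, impression counts, and click counts below are invented for illustration; they are not real AdWords output, and the comparison is just observed click-through rates with rough confidence intervals.

    import math

    # Hypothetical per-title results, as they might be exported from an
    # AdWords campaign; impressions and clicks here are invented numbers.
    results = {
        "Smarter than Us": (1000, 38),
        "Risks of Machine Intelligence": (1000, 22),
        "Cogito Ergo Doom": (1000, 30),
    }

    def wilson_interval(clicks, impressions, z=1.96):
        """95% Wilson score interval for a click-through rate."""
        p = clicks / impressions
        denom = 1 + z**2 / impressions
        centre = (p + z**2 / (2 * impressions)) / denom
        margin = z * math.sqrt(
            p * (1 - p) / impressions + z**2 / (4 * impressions**2)) / denom
        return centre - margin, centre + margin

    # Rank titles by observed click-through rate, with a rough interval
    # to show whether the differences are more than noise.
    for title, (imp, clk) in sorted(
            results.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True):
        lo, hi = wilson_interval(clk, imp)
        print(f"{title}: CTR {clk / imp:.1%} (95% CI {lo:.1%}-{hi:.1%})")

If the intervals overlap heavily, the campaign simply hasn't run long enough to tell the candidate titles apart.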

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-09-20T02:10:39.897Z · LW(p) · GW(p)

But would this really be worthwhile, especially if people keep suggesting titles?

Stuart could wait until activity in the thread dies out.

If there's going to be a decent-sized push behind this book, I'd advocate doing market research.

Replies from: gwern
comment by gwern · 2013-09-20T15:29:44.071Z · LW(p) · GW(p)

Stuart could wait until activity in the thread dies out.

That resolves the second question, but not the big original one: if someone were to do an AdWords campaign as I've suggested, would Luke or the person in charge actually change the name of the book based on the results? What's the VoI here?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-09-21T08:12:38.356Z · LW(p) · GW(p)

would Luke or the person in charge actually change the name of the book based on the results?

I'd be surprised if they didn't read the results if you sent them, and I'd also be surprised if they didn't do Bayesian updates about the optimal book title based on the results. But you could always contact them.

comment by spuckblase · 2013-09-18T08:18:11.724Z · LW(p) · GW(p)

Risky Machines: Artificial Intelligence as a Danger to Mankind

comment by BaconServ · 2013-09-18T02:05:05.346Z · LW(p) · GW(p)

What is the target audience we are aiming to attract here?

  • AI: The Most Dangerous Game
  • What if God Were Imperfect?
  • Unlimited Power: The Threat of Superintelligence
  • The Threat of Our Future Selves
  • A True Identity/Existential Crisis
  • Refining Identity: The Dangers of AI
  • Maximum Possible Risk: Intelligence Beyond Recognition

All I have for now.

comment by Alexei · 2013-09-20T03:28:50.876Z · LW(p) · GW(p)

  • Finding perfect future through AI
  • Getting everything you want with AI
  • Good Future
  • The Perfect Servant
  • Programming a God

comment by kgalias · 2013-09-18T13:59:00.450Z · LW(p) · GW(p)

I think I'd like "machine intelligence" instead of "artificial intelligence" in the title; the latter pattern-matches to too many non-serious things.

So, after cousin_it or gjm: "Machine Intelligence as a Danger to Mankind" or, for a less doomsayer-ish vibe, "Risks of Machine Intelligence".

comment by newerspeak · 2013-09-18T10:17:49.128Z · LW(p) · GW(p)

Safe at any Speed: Fundamental Challenges in the Development of Self-Improving Artificial Intelligence

Replies from: TheOtherDave
comment by TheOtherDave · 2013-09-18T16:01:20.090Z · LW(p) · GW(p)

Be aware that "Safe at any Speed," while a marvelous summary of the correct attitude towards risk management to take here, is also the title of a moderately well-known Larry Niven short story.

comment by Dr_Manhattan · 2013-09-17T21:11:51.351Z · LW(p) · GW(p)

"Primer" feels wrong. "A short introduction" would be more inviting, though there might be copyright issues with that. "AI-risk" is probably too much of an insider term.

I like cousin_it's direction http://lesswrong.com/r/discussion/lw/io3/help_us_name_a_short_primer_on_ai_risk/9rl6 - though I would avoid anything that sounds like fear-mongering.

Replies from: somervta
comment by somervta · 2013-09-18T01:13:40.684Z · LW(p) · GW(p)

"AI-risk" is probably too much of an insider term.

Something like "Risks from Artificial Intelligence" or "Risks from Advanced Artificial Intelligence" might help with this.

comment by casperdog · 2013-09-24T22:46:58.355Z · LW(p) · GW(p)

Deus ex Machina: The dangers of AI

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-09-24T23:07:44.608Z · LW(p) · GW(p)

Deus est Machina?

... nah, too many religious overtones.

Replies from: Desrtopa
comment by Desrtopa · 2013-09-24T23:30:56.451Z · LW(p) · GW(p)

I'd been thinking it's been done, but apparently it's not already used by anything published. The top result is for a trope page.

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-09-25T01:39:30.352Z · LW(p) · GW(p)

Right, that's where I got it from.

comment by ChrisHallquist · 2013-09-21T21:26:48.952Z · LW(p) · GW(p)

"Preventing Skynet"

(First thing that popped into my mind after I saw "Terminator versus the AI," before reading thread. May or may not be a good idea.)

comment by Alexei · 2013-09-20T03:32:38.469Z · LW(p) · GW(p)

Making the future: A Guide to AI Development

comment by John_Maxwell (John_Maxwell_IV) · 2013-09-20T02:08:05.973Z · LW(p) · GW(p)

Where is this book supposed to fit in with Facing the Intelligence Explosion? I have a friend who I was thinking of sending Facing the Intelligence Explosion to; should I wait for this new book to come out?

comment by Brillyant · 2013-09-19T16:04:29.004Z · LW(p) · GW(p)

AI: More than just a Creepy Spielberg Movie

comment by Halfwitz · 2013-09-19T14:58:17.911Z · LW(p) · GW(p)

  • The Indifferent God: The Promise and Peril of Machine Intelligence
  • The Arbitrary Mind: ^
  • The Parable of the Paperclip Maximizer

comment by blogospheroid · 2013-09-19T04:34:12.387Z · LW(p) · GW(p)

Flash Crash of the Universe: The Perils of designed general intelligence

The flash crash is a computer-triggered event. The knowledgeable amongst us know about it. It indicates the kind of risks expected. Just my 2 cents.

My second thought is way more LW specific. Maybe it could be a chapter title.

You are made of atoms: The risks of not seeing the world from the viewpoint of an AI

comment by loup-vaillant · 2013-09-18T17:12:42.220Z · LW(p) · GW(p)

It just occurred to me that we may be able to avoid the word "intelligence" entirely in the title. I was thinking of Cory Doctorow on the coming war on general computation, where he explains that unwanted behaviour on general-purpose computers is basically impossible to stop. So:

Current computers are fully general hardware. An AI would be fully general software. We could also talk about general-purpose computers vs. general-purpose programs.

The idea is that many people already understand some risks associated with general-purpose computers (if only from the various malware). Maybe we could use that to draw attention to the risks of general-purpose programs.

That may avoid drawing unwanted associations with the word "intelligence". Many people believe that machines cannot be intelligent "by definition". Many believe there is something "magic" between the laws of physics and the high-level functioning of a human nervous system. They would be hard-pressed to admit it outright, but it is at the root of a fundamental disbelief in the possibility of AI.

As for actual titles…

  • The Risks of General Purpose Software.
  • General Purpose Computers can do anything. General Purpose Programs, will. (Sounds better as a subtitle, that one.)

(Small inconvenience: phrasing the title this way may require touching the content of the book itself.)

comment by [deleted] · 2013-09-18T02:13:37.112Z · LW(p) · GW(p)

  • Artificial Intelligence or Sincere Stupidity: Tomorrow's Choice.

  • You Can't Spell Fail Without AI.

  • AI-Yi-Yi! Peligro!

  • Better. Stronger. Faster.

  • Deus ex machina.

comment by polymathwannabe · 2013-09-18T17:37:16.547Z · LW(p) · GW(p)

"How do we outsmart something designed to outsmart us?"

comment by Paul Crowley (ciphergoth) · 2013-09-18T11:33:44.480Z · LW(p) · GW(p)

How Not To Be Killed By A Robot: Why superhuman intelligence poses a danger to humanity, and what to do about it.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-09-18T13:08:09.978Z · LW(p) · GW(p)

Anything with "robot" brings up the Terminator and suggests entirely the wrong idea.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-09-18T13:59:58.345Z · LW(p) · GW(p)

Your reply to my other comment clarifies. OK scratch that :)