Help us name a short primer on AI risk!

post by lukeprog · 2013-09-17T20:35:34.895Z · score: 7 (12 votes) · LW · GW · Legacy · 75 comments

MIRI will soon publish a short book by Stuart Armstrong on the topic of AI risk. The book is currently titled “AI-Risk Primer” by default, but we’re looking for something a little more catchy (just as we did for the upcoming Sequences ebook).

The book is meant to be accessible and avoids technical jargon. Here is the table of contents and a few snippets from the book, to give you an idea of the content and style:

  1. Terminator versus the AI
  2. Strength versus Intelligence
  3. What Is Intelligence? Can We Achieve It Artificially?
  4. How Powerful Could AIs Become?
  5. Talking to an Alien Mind
  6. Our Values Are Complex and Fragile
  7. What, Precisely, Do We Really (Really) Want?
  8. We Need to Get It All Exactly Right
  9. Listen to the Sound of Absent Experts
  10. A Summary
  11. That’s Where You Come In …

The Terminator is a creature from our primordial nightmares: tall, strong, aggressive, and nearly indestructible. We’re strongly primed to fear such a being—it resembles the lions, tigers, and bears that our ancestors so feared when they wandered alone on the savanna and tundra.

As a species, we humans haven’t achieved success through our natural armor plating, our claws, our razor-sharp teeth, or our poison-filled stingers. Though we have reasonably efficient bodies, it’s our brains that have made the difference. It’s through our social, cultural, and technological intelligence that we have raised ourselves to our current position.

Consider what would happen if an AI ever achieved the ability to function socially—to hold conversations with a reasonable facsimile of human fluency. For humans to increase their social skills, they need to go through painful trial and error processes, scrounge hints from more articulate individuals or from television, or try to hone their instincts by having dozens of conversations. An AI could go through a similar process, undeterred by social embarrassment, and with perfect memory. But it could also sift through vast databases of previous human conversations, analyze thousands of publications on human psychology, anticipate where conversations are leading many steps in advance, and always pick the right tone and pace to respond with. Imagine a human who, every time they opened their mouth, had spent a solid year pondering and researching whether their response was going to be maximally effective. That is what a social AI would be like.

So, title suggestions?

comment by MichaelAnissimov · 2013-09-18T22:10:45.993Z · score: 14 (14 votes) · LW · GW

I like "Smarter than Us: an overview of AI Risk". The first three words should knock the reader out of their comfort zone.

comment by ChrisHallquist · 2013-09-21T21:36:21.266Z · score: 1 (3 votes) · LW · GW

I concur on the main title, but, in accordance with cousin_it's comment below, we might go with AI as a Danger to Mankind as a subtitle or something like that. Maybe AI's Promise and Peril for Humanity to avoid (a) giving people the impression we think AI should never be built (b) the charge of sexism.

Note that "promise and peril" is Kurzweil's turn of phrase; it sounds much better in my head than "promise and danger" which I also thought of.

comment by MichaelAnissimov · 2013-09-21T23:55:44.650Z · score: 1 (1 votes) · LW · GW

Sexism..?

comment by NancyLebovitz · 2013-09-22T15:30:27.550Z · score: 3 (7 votes) · LW · GW

Yes, sexism. "Mankind" is male-tilted in a way that "humanity" isn't.

comment by palladias · 2013-09-17T21:41:33.437Z · score: 13 (19 votes) · LW · GW

I don't have anything good but I think the sweet spot is something that kinda draws in people who'd be excited about mainstream worries about AI, but implies there's a twist.

  • Blue Screen of Death... Forever: A Guide to AI Risk
  • Life or Death Programming: The Future of AI Risk
  • Life or Death Philosophy: The Future of AI Risk
  • Decision Theory Xor Death
  • Cogito Ergo Doom: The Unexpected Risks of AI
  • Worse than Laser Eyes: The Real Risks of AI

comment by RichardKennaway · 2013-09-17T22:32:09.455Z · score: 13 (15 votes) · LW · GW

Cogito Ergo Doom

Nice.

comment by Stuart_Armstrong · 2013-09-18T13:11:53.953Z · score: 1 (1 votes) · LW · GW

Sigh... this makes me realise how untalented I am at finding titles!

comment by palladias · 2013-09-18T18:42:20.610Z · score: 2 (2 votes) · LW · GW

Practice practice practice! I've had to find titles for daily blog posts for three years.

comment by John_Maxwell_IV · 2013-09-20T02:07:07.554Z · score: 0 (0 votes) · LW · GW

Blue Screen of Death... Forever: A Guide to AI Risk

I like this one as "Blue Screen of Death: A Primer on AI Risk". "Have you read Blue Screen of Death?" There's something appealing about a book that doesn't take itself too seriously, IMO.

comment by So8res · 2013-09-17T21:58:30.077Z · score: 12 (12 votes) · LW · GW

These suggestions lean towards sensationalism:

  • Losing the Future: The Potential of AI
  • The Power of Intelligence: an overview of AI Risk
  • Smarter than Us: an overview of AI Risk
  • The Fragile Future: an overview of AI Risk
  • An introduction to superhuman intelligence and the risks it poses

comment by lukeprog · 2013-09-18T04:09:09.163Z · score: 6 (6 votes) · LW · GW

The Power of Intelligence: A.I. as a Danger to Mankind might be good, too...

comment by lincolnquirk · 2013-09-18T23:57:41.183Z · score: 2 (2 votes) · LW · GW

Along the lines of "Fragile Future" - I like alliteration:

  • The Common Cause: how artificial intelligence will save the world -- or destroy it. (neat double meaning, maybe a bit too abstracted)
  • The Digital Demon (uhm... a bit too personified)
  • The Silicon Satan (okay, this is getting ridiculous)

Honestly I really like Fragile Future though.

comment by gjm · 2013-09-18T08:13:24.070Z · score: 11 (11 votes) · LW · GW

I don't like all the clever-clever titles being proposed because (1) they probably restrict the audience and (2) one of the difficulties MIRI faces is persuading people to take the risk seriously in the first place -- which will not be helped by a title that's flippant, or science-fiction-y, or overblown, or just plain confusing.

You don't need "primer" or anything like it in the title; if the book has a fairly general title, and is short, and has a preface that begins "This book is an introduction to the risks posed by artificial intelligence" or something, you're done. (No harm in having something like "primer" or "introduction" in the title, if that turns out to make a good title.)

Spell out "artificial intelligence". (Or use some other broadly equivalent term.)

I would suggest simply "Risks of artificial intelligence" or maybe "Risks of machine intelligence" (matching MIRI's name).

comment by palladias · 2013-09-20T05:58:55.454Z · score: 2 (2 votes) · LW · GW

I think titles also follow the "the only goal of the first sentence is to make the reader want to read the second sentence" rule. If MIRI is pitching this book at bright laypeople, I think it's good to be a bit jazzy and then dismantle the Skynet assumptions early on (as it looks like this does).

If the goal is for it to be a technical manual for people in math and CS, I'd agree that anything that sounds like pop sci or Gladwell is probably a turn-off.

Of course, you could always have two editions, with two titles (and differing amounts of LaTeX).

comment by Paul Crowley (ciphergoth) · 2013-09-18T11:34:36.069Z · score: 2 (2 votes) · LW · GW

I take your point, but it looks like the book they've decided to write is one that's at least a little flippant and science-fiction-y, and that being so the title should reflect that.

comment by Stuart_Armstrong · 2013-09-18T13:07:18.338Z · score: 1 (1 votes) · LW · GW

The Terminator section is to counter that issue immediately, rather than being sci-fi ish.

comment by ChrisHallquist · 2013-09-21T21:29:16.966Z · score: 1 (1 votes) · LW · GW

These are reasonable concerns, but a boring title will restrict the audience in its own way. Michael's "Smarter than Us" suggestion avoids both risks, though, I think.

Edit: Wait, that wasn't Michael's idea originally, he was just endorsing it, but I agree with his endorsement and reasoning why. Definitely sends shivers down my spine.

comment by cousin_it · 2013-09-17T20:49:22.259Z · score: 10 (10 votes) · LW · GW

My model of people who are unaware of AI risk says that they will understand a title like "Artificial intelligence as a danger to mankind".

comment by lukeprog · 2013-09-18T04:07:30.067Z · score: 1 (1 votes) · LW · GW

Artificial Intelligence as a Danger to Mankind seems pretty good, if we think it's good to emphasize the risk angle in the title. Though unlike many publishers, I'll also be getting the author's approval before choosing a title.

comment by jmmcd · 2013-09-18T13:10:57.734Z · score: 1 (1 votes) · LW · GW

"X as a Y" is an academic idiom. Sounds wrong for the target audience.

comment by Stuart_Armstrong · 2013-09-18T13:15:36.943Z · score: 0 (0 votes) · LW · GW

Don't have "robot" in the title, or anything that pattern matches to the Terminator (unless it's specifically to draw a contrast).

comment by Randaly · 2013-09-17T22:05:34.758Z · score: 1 (1 votes) · LW · GW

Possibly emphasize 'risk' as opposed to 'danger'? "The Risks of Artificial Intelligence Development"? "Risks from the Development of Superhuman AI"?

comment by loup-vaillant · 2013-09-18T16:41:29.069Z · score: 0 (0 votes) · LW · GW

Or, "Artificial intelligence as a risk to mankind". (Without the emphasis.)

comment by Stuart_Armstrong · 2013-09-18T13:17:39.034Z · score: 7 (7 votes) · LW · GW

Maybe:

I'm sorry Dave, I'm doing exactly what you asked me

(followed by a dull but informative "risks of artificial intelligence"-style subtitle)

comment by James_Miller · 2013-09-17T23:18:49.159Z · score: 6 (10 votes) · LW · GW

To Serve Man: an overview of AI Risk

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-20T00:19:40.004Z · score: 5 (9 votes) · LW · GW

I strongly advocate eliminating the word 'risk' from the title. I have never spoken of 'AI risk'.

It is a defensive word and in a future-of-technology context it communicates to people that you are about to talk about possible threats that no amount of argument will talk you out of. Only people who like the 'risk' dogwhistle will read, and they probably won't like the content.

  • What We Can Know About Powerful Artificial Intelligence
  • Powerful Artificial Intelligence: Why Its Friendliness or Hostility is Knowably Design-Dependent
  • Foreseeable Difficulties of Having AI Be A Good Thing
  • Friendly AI: Possible But Difficult
comment by Stuart_Armstrong · 2013-09-20T15:20:25.690Z · score: 4 (4 votes) · LW · GW

None of these titles seem likely to grip people...

comment by lukeprog · 2013-09-22T14:12:14.064Z · score: 0 (0 votes) · LW · GW

I like Friendly AI: Possible But Difficult best, but given your text, it might need to be Good Artificial Intelligence: Possible But Difficult.

But I agree these are unlikely to grip people.

Maybe just The Rise of Superintelligence?

comment by Paul Crowley (ciphergoth) · 2013-09-22T21:18:00.279Z · score: 0 (0 votes) · LW · GW

Apt to be confused with Bostrom's forthcoming book?

comment by ESRogs · 2013-09-20T00:39:10.403Z · score: 4 (4 votes) · LW · GW

I notice that I am confused.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-20T01:31:55.167Z · score: 4 (4 votes) · LW · GW

"AI as a positive and negative factor in global risk", in a book called "Global Catastrophic Risks". The phrase 'AI risk' does not appear in the text. If I'd known then what I know now, I would have left the word 'risk' out of the title entirely.

comment by ESRogs · 2013-09-20T10:23:16.239Z · score: 2 (2 votes) · LW · GW

Confusion cleared :)

comment by John_Maxwell_IV · 2013-09-20T02:20:29.693Z · score: 0 (0 votes) · LW · GW

I'd assume that anyone who hears about the book is going to learn that it's about risks from AI. Do you really think it comes down to the word "risk"? Borrowing Mike Anissimov's title, how about "Smarter than Us: On the Safety of Artificial Intelligence Research"?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-20T06:26:49.566Z · score: 0 (0 votes) · LW · GW

'Safety' has much of the same problem, though not as much as 'risk'.

comment by John_Maxwell_IV · 2013-09-20T07:14:16.663Z · score: 6 (6 votes) · LW · GW

Makes sense. Here are a few more ideas, tending towards a pop-sci feel.

  • Ethics for Robots: AI, Morality, and the Future of Humankind

  • Big Servant, Little Master: Anticipating Superhuman Artificial Intelligence

  • Friendly AI and Unfriendly AI

  • AI Morality: Why We Need It and Why It's Tough

  • AI Morality: A Hard Problem

  • The Mindspace of Artificial Intelligences

  • Strong AI: Danger and Opportunity

  • Software Minds: Perils and Possibilities of Human-Level AI

  • Like Bugs to Them: The Coming Rise of Super-Intelligent AI

  • From Cavemen to Google and Beyond: The Future of Intelligence on Earth

  • Super-Intelligent AI: Opportunities, Dangers, and Why It Could Come Sooner Than You Think

comment by daniel-1 · 2013-09-22T17:46:19.973Z · score: 0 (0 votes) · LW · GW

I think Ethics for Robots catches your attention (or at least it caught mine), but I think some of the other subtitles you suggested go better with it:

Ethics for Robots: Perils and Possibilities of Super-Intelligent AI

Ethics for Robots: A Hard Problem

Although maybe you wouldn't want to associate AI and robots.

comment by John_Maxwell_IV · 2013-09-22T19:23:10.233Z · score: 0 (0 votes) · LW · GW

Yep, absolutely feel free to mix/match/modify my suggested titles.

comment by TheOtherDave · 2013-09-22T19:17:38.574Z · score: 0 (0 votes) · LW · GW

"Artificial Ethics"?

comment by somervta · 2013-09-20T01:46:12.757Z · score: 0 (0 votes) · LW · GW

Of these the first is the best, by a long shot.

comment by RichardKennaway · 2013-09-17T22:30:35.908Z · score: 5 (11 votes) · LW · GW

"The Last Machine: Why Artificial Intelligence Might Just Wipe Us All Out"

It could include a few cartoons of robots destroying us all while saying things like:

"I do not hate you, but you are made of atoms I can use for something else."
"I am built to maximise human happiness, so unhappy people must die."
"Must...make...paperclips!"
"Muahahahaha! I will grant ALL your wishes!!!"

comment by luminosity · 2013-09-18T11:23:37.416Z · score: 4 (4 votes) · LW · GW

An Alien Mind: The Risks of AI

comment by shminux · 2013-09-18T01:36:46.297Z · score: 4 (6 votes) · LW · GW

Needle in the AIstack :)

comment by djm · 2013-09-18T01:25:47.762Z · score: 3 (3 votes) · LW · GW

  • Preventing the Redundancy of the Human Race
  • What will you do when your smartphone doesn't need you anymore?
  • Humans vs Machines - a Battle we could not win

comment by Zaine · 2013-09-20T00:00:13.274Z · score: 2 (2 votes) · LW · GW

Risks of Artificial Intelligence

Or, adding a wee bit of flair:

Parricide: Risks of Artificial Intelligence

Conceding the point to Eliezer:

Parricide and the Quest for Machine Intelligence

comment by BaconServ · 2013-09-18T02:05:05.346Z · score: 2 (2 votes) · LW · GW

What is the target audience we are aiming to attract here?

  • AI: The Most Dangerous Game
  • What if God Were Imperfect?
  • Unlimited Power: The Threat of Superintelligence
  • The Threat of Our Future Selves
  • A True Identity/Existential Crisis
  • Refining Identity: The Dangers of AI
  • Maximum Possible Risk: Intelligence Beyond Recognition

All I have for now.

comment by somervta · 2013-09-18T01:23:05.522Z · score: 2 (2 votes) · LW · GW

Important question - is this going to be a broad overview of AI risk in that it will cover different viewpoints (other than just MIRI's), a little like Responses to Catastrophic AGI Risk was, or is it to be more focused on the MIRI-esque view of things?

comment by lukeprog · 2013-09-18T04:07:47.016Z · score: 1 (1 votes) · LW · GW

Focused on the MIRI-esque view.

comment by Alexei · 2013-09-20T03:28:50.876Z · score: 1 (1 votes) · LW · GW

Finding perfect future through AI
Getting everything you want with AI
Good Future
The Perfect Servant
Programming a God

comment by kgalias · 2013-09-18T13:59:00.450Z · score: 1 (1 votes) · LW · GW

I think I'd like "machine intelligence" instead of "artificial intelligence" in the title, the latter pattern-matches to too many non-serious things.

So, after cousin_it or gjm: "Machine Intelligence as a Danger to Mankind" or, for a less doomsayer-ish vibe, "Risks of Machine Intelligence".

comment by Stuart_Armstrong · 2013-09-18T13:25:42.201Z · score: 1 (1 votes) · LW · GW

Can market research be done?

comment by gwern · 2013-09-19T17:51:39.119Z · score: 4 (4 votes) · LW · GW

Sure; you could compile a list from the comments and throw them into Google AdWords to see what maximizes clicks (the landing page would be something on intelligence.org). Anyone could do this - heck, I have $40 of AdWords credit I didn't realize I had, and could do it. But would this really be worthwhile, especially if people keep suggesting titles?

comment by John_Maxwell_IV · 2013-09-20T02:10:39.897Z · score: 4 (4 votes) · LW · GW

But would this really be worthwhile, especially if people keep suggesting titles?

Stuart could wait until activity in the thread dies out.

If there's going to be a decent-sized push behind this book, I'd advocate doing market research.

comment by gwern · 2013-09-20T15:29:44.071Z · score: 2 (2 votes) · LW · GW

Stuart could wait until activity in the thread dies out.

That resolves the second question, but not the big original one: if someone were to do an AdWords campaign as I've suggested, would Luke or the person in charge actually change the name of the book based on the results? What's the VoI here?

comment by John_Maxwell_IV · 2013-09-21T08:12:38.356Z · score: 1 (1 votes) · LW · GW

would Luke or the person in charge actually change the name of the book based on the results?

I'd be surprised if they didn't read the results if you sent them, and I'd also be surprised if they didn't do Bayesian updates about the optimal book title based on the results. But you could always contact them.

comment by newerspeak · 2013-09-18T10:17:49.128Z · score: 1 (3 votes) · LW · GW

Safe at any Speed: Fundamental Challenges in the Development of Self-Improving Artificial Intelligence

comment by TheOtherDave · 2013-09-18T16:01:20.090Z · score: 2 (2 votes) · LW · GW

Be aware that "Safe at any Speed," while a marvelous summary of the correct attitude towards risk management to take here, is also the title of a moderately well-known Larry Niven short story.

comment by spuckblase · 2013-09-18T08:18:11.724Z · score: 1 (1 votes) · LW · GW

Risky Machines: Artificial Intelligence as a Danger to Mankind

comment by Dr_Manhattan · 2013-09-17T21:11:51.351Z · score: 1 (1 votes) · LW · GW

"Primer" feels wrong. "A short introduction" would be more inviting, though there might be copyright issues with that. "AI-risk" is probably too much of an insider term.

I like cousin_it's direction http://lesswrong.com/r/discussion/lw/io3/help_us_name_a_short_primer_on_ai_risk/9rl6 - though would avoid anything that sounds like fear mongering.

comment by somervta · 2013-09-18T01:13:40.684Z · score: 2 (2 votes) · LW · GW

"AI-risk" is probably too much of an insider term.

Something like "Risks from Artificial Intelligence" or "Risks from Advanced Artificial Intelligence" might help with this.

comment by casperdog · 2013-09-24T22:46:58.355Z · score: 0 (0 votes) · LW · GW

Deus ex Machina: The dangers of AI

comment by linkhyrule5 · 2013-09-24T23:07:44.608Z · score: 1 (1 votes) · LW · GW

Deus est Machina?

... nah, too many religious overtones.

comment by Desrtopa · 2013-09-24T23:30:56.451Z · score: 0 (0 votes) · LW · GW

I'd been thinking it's been done, but apparently it's not already used by anything published. The top result is for a trope page.

comment by linkhyrule5 · 2013-09-25T01:39:30.352Z · score: 0 (0 votes) · LW · GW

Right, that's where I got it from.

comment by ChrisHallquist · 2013-09-21T21:26:48.952Z · score: 0 (0 votes) · LW · GW

"Preventing Skynet"

(First thing that popped into my mind after I saw "Terminator versus the AI," before reading thread. May or may not be a good idea.)

comment by Alexei · 2013-09-20T03:32:38.469Z · score: 0 (0 votes) · LW · GW

Making the future: A Guide to AI Development

comment by John_Maxwell_IV · 2013-09-20T02:08:05.973Z · score: 0 (0 votes) · LW · GW

Where is this book supposed to fit in with Facing the Intelligence Explosion? I have a friend who I was thinking of sending Facing the Intelligence Explosion to; should I wait for this new book to come out?

comment by Brillyant · 2013-09-19T16:04:29.004Z · score: 0 (0 votes) · LW · GW

AI: More than just a Creepy Spielberg Movie

comment by Halfwitz · 2013-09-19T14:58:17.911Z · score: 0 (0 votes) · LW · GW

  • The Indifferent God: The Promise and Peril of Machine Intelligence
  • The Arbitrary Mind: ^
  • The Parable of the Paperclip Maximizer

comment by blogospheroid · 2013-09-19T04:34:12.387Z · score: 0 (0 votes) · LW · GW

Flash Crash of the Universe: The Perils of designed general intelligence

The flash crash was a computer-triggered event. The knowledgeable amongst us know about it. It indicates the kind of risks expected. Just my 2 cents.

My second thought is way more LW specific. Maybe it could be a chapter title.

You are made of atoms: The risks of not seeing the world from the viewpoint of an AI

comment by polymathwannabe · 2013-09-18T17:37:16.547Z · score: 0 (2 votes) · LW · GW

"How do we outsmart something designed to outsmart us?"

comment by loup-vaillant · 2013-09-18T17:12:42.220Z · score: 0 (0 votes) · LW · GW

It just occurred to me that we may be able to avoid the word "intelligence" entirely in the title. I was thinking of Cory Doctorow on the coming war on general computation, where he explains that unwanted behaviour on general purpose computers is basically impossible to stop. So:

Current computers are fully general hardware. An AI would be fully general software. We could also talk about general purpose computers vs general purpose programs.

The idea is that many people already understand some of the risks associated with general purpose computers (if only from the various malware). Maybe we could use that to draw attention to the risks of general purpose programs.

That may avoid drawing unwanted associations with the word "intelligence". Many people believe that machines cannot be intelligent "by definition". Many believe there is something "magic" between the laws of physics and the high-level functioning of a human nervous system. They would be hard-pressed to admit it outright, but it is at the root of a fundamental disbelief of the possibility of AI.

As for actual titles…

  • The Risks of General Purpose Software.
  • General Purpose Computers can do anything. General Purpose Programs, will. (Sounds better as a subtitle, that one.)

(Small inconvenience: phrasing the title this way may require touching the content of the book itself.)

comment by Paul Crowley (ciphergoth) · 2013-09-18T11:33:44.480Z · score: 0 (2 votes) · LW · GW

How Not To Be Killed By A Robot: Why superhuman intelligence poses a danger to humanity, and what to do about it.

comment by Stuart_Armstrong · 2013-09-18T13:08:09.978Z · score: 0 (0 votes) · LW · GW

Anything with "robot" brings up the Terminator and suggests entirely the wrong idea.

comment by Paul Crowley (ciphergoth) · 2013-09-18T13:59:58.345Z · score: 3 (3 votes) · LW · GW

Your reply to my other comment clarifies. OK scratch that :)

comment by [deleted] · 2013-09-18T02:13:37.112Z · score: 0 (4 votes) · LW · GW

  • Artificial Intelligence or Sincere Stupidity: Tomorrow's Choice.

  • You Can't Spell Fail Without AI.

  • AI-Yi-Yi! Peligro!

  • Better. Stronger. Faster.

  • Deus ex machina.