An AI risk argument that resonates with NYTimes readers
post by Julian Bradshaw · 2023-03-12T23:09:20.458Z · LW · GW · 14 comments
Ezra Klein of the NYT published a surprisingly sympathetic column on AI risk in the Sunday edition. It even quotes Paul Christiano and links back to LessWrong!
But what I'm actually here to talk about is the top reader-recommended comment on the article as of Sunday 11pm UTC:
Dwarf Planet
I wonder how many of these AI researchers have children. What Ezra describes here is what I see every day with my teenager. Of course, no one understands teenagers, but that's not what I mean. I taught my daughter to play chess when she was very young. I consider myself a reasonably good player, and for many years (as I was teaching her), I had to hold myself back to let her win enough to gain confidence. But now that she is thirteen, I suddenly discovered that within a span of weeks, I no longer needed to handicap myself. The playing field was level. And then, gradually and then very suddenly, she leapt past my abilities. As with AI, I could understand the broad outlines of what she was doing--moving this knight or that rook to gain an advantage--but I had no clue how to defend against these attacks. And worse (for my game, at least), I would fall into traps where I thought I was pursuing a winning hand but was led into ambush after ambush. It was very humbling: I had had the upper hand for so long that it became second nature, and then suddenly, I went to losing every game.
As parents, we all want our children to surpass us. But with AI, these "summoners" are creating entities whose motives are not human. We seem to be at the cusp of where I was before my daughter overtook me: confident and complacent, not knowing what lay ahead. But, what we don't realize is that very soon we'll begin to lose every game against these AIs. Then, our turn in the sun will be over.
Generally, NYT comments on AI risk are either dismissive or laden with general anxiety about tech. (Indeed, the second-most-recommended comment is deeply dismissive, and the third is generic anxiety/frustration.) There's hopefully something to learn from commenter "Dwarf Planet" in terms of messaging.
14 comments
Comments sorted by top scores.
comment by Peter Wildeford (peter_hurford) · 2023-03-13T15:28:53.391Z · LW(p) · GW(p)
If we want to know what arguments resonate with New York Times readers, we can actually use surveys, message testing, and focus groups to check; we don't need to guess! (Disclaimer: My company sells these services.)
↑ comment by JakubK (jskatt) · 2023-03-15T21:23:09.921Z · LW(p) · GW(p)
Does RP have any results to share from these studies? What arguments seem to resonate with various groups?
comment by JakubK (jskatt) · 2023-03-14T08:54:24.980Z · LW(p) · GW(p)
At the time of my writing, this comment is still the most recommended, with 910 recommendations. 2nd place has 877 recommendations:
Never has a technology been potentially more transformative and less desired or asked for by the public.
3rd place has 790 recommendations:
“A.I. is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.”
Sundar Pichai’s comment beautifully sums up the arrogance and grandiosity pervasive in the entire tech industry—the notion that building machines that can mimic and replace actual humans, and providing wildly expensive and environmentally destructive toys for those who can pay for them, is “the most important” project ever undertaken by humanity, rather than a frivolous indulgence of a few overindulged rich kids with an inflated sense of themselves.
Off the top of my head, I am sure most of us can think of more than a few other human projects—both ongoing and never initiated—more important than the development of A.I., like developing technologies that will save our planet from burning, ending poverty, or mapping the human genome in order to cure genetic disorders. Sorry, Mr. Pichai, but only someone who has lived in a bubble of privilege would make such a comment and actually believe it.
4th place has 682 recommendations:
“If you think calamity so possible, why do this at all?”
Having lived and worked in the Bay Area and around many of these individuals, the answer is often none of those Ezra cites. More often than not, the answer is: money.
Tech workers come to the Bay Area to get early stock grants and the prospect of riches. It’s not AI that will destroy humanity. It’s capitalism.
After that, 5th place has 529, 6th place has 390, and the rest have 350 or fewer.
My thoughts:
- 2nd place reminds me of Let's think about slowing down AI [LW · GW]. But I somewhat disagree with the comment, because I do sense that many people have a desire for cool new AI tech.
- 3rd place sounds silly since advanced AI could help with reducing climate change, poverty, and genetic disorders. I also wonder if this commenter knows about AlphaFold.
- 4th place seems important. But I think that even if AGI jobs offered lower compensation, there would still be a considerable number of workers interested in pursuing them.
comment by memeticimagery · 2023-03-13T20:51:46.514Z · LW(p) · GW(p)
I think we may need to accept that at first there will be a stage of general AI wariness in public opinion before AI safety and specific facets of the topic can be explored. In a sense, the public has not yet fully digested 'AI is a serious risk,' or perhaps even 'AI will be transformative to human life in the relatively near future.' I don't think that phase can simply be skipped, and it will probably be useful to get as many people as possible broadly on topic before the more specific messaging; if they are not, they will reject your messaging immediately, perhaps becoming further entrenched in the process.
If this is the case, then sentiments along the lines of general anxiety about AI are not too bad right now, or at least they are better than dismissive sentiment.
comment by Evan R. Murphy · 2023-03-13T17:42:51.018Z · LW(p) · GW(p)
It even quotes Paul Christiano and links back to LessWrong!
The article also references Katja Grace and an AI Impacts survey. Ezra seems pretty plugged into this scene.
comment by Michael Soareverix (michael-soareverix) · 2023-03-13T01:11:36.757Z · LW(p) · GW(p)
Great post. This type of genuine comment (human-centered rather than logically abstract) seems like the best way to communicate the threat to non-technical people. I've tried talking about the problem with friends in the social sciences and haven't found a good way to convey how seriously I take it, or that there is currently no known way to prevent the problem.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2023-03-13T02:16:43.057Z · LW(p) · GW(p)
One thing I notice is that it doesn’t link to a plan of action, or tell you how you should feel. It just describes the scenario. Perhaps that’s what’s needed - less of the complete argument, and more just breaking it down into digestible morsels.
comment by trevor (TrevorWiesinger) · 2023-03-13T04:24:45.797Z · LW(p) · GW(p)
I can see an undertone of concern about large-scale job loss due to cognitive automation. It's a catastrophe to personally prepare for, and it's also a very important dynamic for forecasting and world modelling. But blurring the line between job automation and AI safety will cause some serious problems down the line, and that line was likely blurred for a sizeable proportion of the people who upvoted that comment (hopefully not a majority).
I still think it did a great job, but I was much more impressed with the twelve-page introduction that Toby Ord wrote about AI safety in The Precipice [LW · GW].
comment by Ninad Patil (ninad-patil) · 2023-03-13T02:26:27.118Z · LW(p) · GW(p)
That comment is well-constructed. However, some may object to the analogy on the grounds that AI may not reach human-level intelligence outside of constrained tasks such as playing chess. While storytelling and emotional appeals can be effective forms of persuasion, what I appreciate about the analogy is that it is relatable, and it highlights the exponential development of AI.
comment by sithlord · 2023-03-13T14:08:26.590Z · LW(p) · GW(p)
A nice read; however, it does not present a valid argument. Is his time over because his daughter is better at chess than he is? This is the beginning of something, not an end.
↑ comment by JakubK (jskatt) · 2023-03-14T08:27:52.867Z · LW(p) · GW(p)
I didn't read it as an argument so much as an emotionally compelling anecdote that excellently conveys this realization:
I had had the upper hand for so long that it became second nature, and then suddenly, I went to losing every game.
↑ comment by M. Y. Zuo · 2023-03-15T18:30:09.630Z · LW(p) · GW(p)
As parents, we all want our children to surpass us. But with AI, these "summoners" are creating entities whose motives are not human. We seem to be at the cusp of where I was before my daughter overtook me: confident and complacent, not knowing what lay ahead. But, what we don't realize is that very soon we'll begin to lose every game against these AIs. Then, our turn in the sun will be over.
The final paragraph does seem to be making several arguments, or at least presuming multiple things that are not universally accepted as axioms.
↑ comment by JakubK (jskatt) · 2023-03-15T21:20:05.831Z · LW(p) · GW(p)
Yeah, the author is definitely making some specific claims. I'm not sure if the comment's popularity stems primarily from its particular arguments or from its emotional sentiment. I was just pointing out what I personally appreciated about the comment.
comment by Review Bot · 2024-03-12T16:26:27.342Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?