Why so little AI risk on rationalist-adjacent blogs?
post by Grant Demaree (grant-demaree) · 2022-06-13T06:31:40.288Z · LW · GW · 23 comments
I read a lot of rationalist-adjacents. Outside of LessWrong and ACX, I hardly ever see posts on AI risk. Tyler Cowen of Marginal Revolution writes that "it makes my head hurt" but hasn't engaged with the issue. Even Zvi spends very few posts on AI risk.
This is surprising, and I wonder what to make of it. Why do the folks most exposed to MIRI-style arguments have so little to say about them?
Here are a few possibilities:
- Some of the writers disagree that AGI is a major near-term threat
- It's unusually hard to think and write about AI risk
- The best rationalist-adjacent writers don't feel like they have a deep enough understanding to write about AI risk
- There's not much demand for these posts, and LessWrong/Alignment Forum/ACX are already filling it. Even a great essay wouldn't be that popular
- Folks engaged in AI risk are a challenging audience. Eliezer might get mad at you
- When you write about AGI for a mainstream audience, you look weird. I don't think this is as true as it used to be, since Ezra Klein did it in the New York Times and Kelsey Piper in Vox
- Some of these writers are heavily specialized. The mathematicians want to write about pure math. The pharmacologists want to write about drug development. The historians want to argue that WWII strategic bombing was based on a false theory of popular support for the enemy regime, and present-day sanctions are making the same mistake
- Some of the writers are worried that they'll present the arguments badly, inoculating their readers against a better future argument
What they wrote
I'll treat Scott Alexander's blogroll as the canonical list of rationalist-adjacent writers. I've grouped them by their stance on the following statement:
Misaligned AGI is among the most important existential risks to humanity[1]
Explicitly agrees and provides original gears-level analysis (2)
Zvi Mowshowitz
“The default outcome, if we do not work hard and carefully now on AGI safety, is for AGI to wipe out all value in the universe.” (Dec 2017)
Zvi gives a detailed analysis here followed by his own model in response to the 2021 MIRI conversations.
Holden Karnofsky of OpenPhil and GiveWell
In his Most Important Century series (Jul 2021 to present), Holden explains AGI risk to mainstream audiences. Ezra Klein featured Holden’s work in the New York Times.
This series had a high impact on me, because Holden used to have specific and detailed objections [LW · GW] to MIRI’s arguments (2012). Ten years later, he’s changed his mind.
Explicitly agrees (4)
Jacob Falkovich of Putanumonit
“Misaligned AI is an existential threat to humanity, and I will match $5,000 of your donations to prevent it.” (Dec 2017)
Jacob doesn’t make the case himself, but he links to external sources.
Kelsey Piper of Vox
“the big overarching challenge is making sure that as our systems get more powerful, we design them so their goals are aligned with those of humans — that is, humanity doesn’t construct scaled-up superintelligent AI that overwhelms human intentions and leads to catastrophe.” (Apr 2021)
Steve Hsu of Information Processing
“I do think that the top 3 risks to humanity, in my view, are AI, bio, and unknown unknowns.” (Apr 2020)
Applied Divinity Studies
“AI risk seems to be about half of all possible existential risk.”
The above quote is from a May 2021 PDF, rather than a direct post. I can’t find a frontpage post that makes the AI risk case directly.
EDIT: Previous version gave an incorrect name for the author of Applied Divinity Studies. The author also clarified their position in the comments:
In the piece you link I'm just taking Toby Ord's estimates at face value to use them as a parameter, I haven't given this a ton of thought.
But basically I do think AI Risk is important. I don't write about it because I don't have anything particularly smart to say. As you note, it's a complex topic, and I don't really feel like there's any value in me contributing unless I were to really invest in learning much more.
Explicitly ambivalent (2)
Tyler Cowen of Marginal Revolution
“As for Rogue AI... For now I will just say that it makes my head hurt. It makes my head hurt because the topic is so complicated… I see nuclear war as the much greater large-scale risk, by far” (Feb 2022).
Julia Galef of Rationally Speaking
Julia interviews Toby Ord at FHI and Kelsey Piper at Vox (AI risk discussions start on pages 9 and 6, respectively). These aren’t dated, but they appear to be 2021-2022.
“a sticking point for me, the whole time that I've been engaging with the AI risk argument and community -- is just feeling caught between, well, the abstract argument is too abstract for me to really feel like I can get a handle on, or know how to take seriously. And any specific scenario feels too implausible. And so I don't really know how to engage with this.”
No opinion stated, and I was surprised (3)
Good Optics writes mainly about history and philosophy. They take existential risks seriously, quote Nick Bostrom, and one of their blog topics is “avant-garde effective altruism.” So they’re definitely exposed to the idea.
Rohit Krishnan is a venture capitalist who writes on rationalist-adjacent topics. He mentions AGI risk in passing as a place to allocate grant money (Apr 2022). It’s not an endorsement or a criticism – it’s a discussion of how hard it is to evaluate grant effectiveness.
Razib Khan writes about history, genetics, IQ, and general science. He never mentions AGI risk.
No opinion stated, and AGI risk is off-topic (5)
Zeynep of The Insight, Derek Lowe of In the Pipeline, and Slime Mold Time Mold write about biology. They don’t mention AGI risk, but you wouldn’t expect them to.
Freddie deBoer is a self-described old-school Marxist, who writes about US politics. He mentions that AI will disrupt the economy, but I can’t find any discussion of x-risks.
Greg Cochran of West Hunter writes about anthropology, biology, history, and evolution. He doesn’t mention AGI risk, but you wouldn’t expect him to.
Partially disagrees (3)
Scott Aaronson (Mathematician)
“my views on AI risk have evolved… when I talk to the AI researchers whose expertise I trust the most, many, though not all, have updated in the direction of ‘maybe we should start worrying’” (Dec 2017).
A few paragraphs later Scott says:
“But one more point: given the way civilization seems to be headed, I’m actually mildly in favor of superintelligences coming into being sooner rather than later. Like, given the choice between a hypothetical paperclip maximizer destroying the galaxy, versus a delusional autocrat burning civilization to the ground while his supporters cheer him on and his opponents fight amongst themselves, I’m just about ready to take my chances with the AI.”
Peter of Bayesian Investor
He takes AI risk seriously but suggests the problems could be fairly easy to solve:
“I’m around 50% confident that CAIS plus a normal degree of vigilance by AI developers will be sufficient to avoid global catastrophe from AI” (Jul 2017).
Dynomight
In the context of a Universal Basic Income argument, Dynomight suggests creating new jobs as deliberate gaps in a future AI’s workflow. My best guess is that they take AI risk seriously but would not endorse Eliezer’s view. I could be wrong, though.
“How to create jobs that look useful? In a future where AI is so powerful that normal human jobs are gone, shouldn’t we be worried about AI risk? Can we create bottlenecks in processes that have to be filled by humans? How to do this in a way that actually reduces AI risk is a hard problem, but surely we can at least make it look plausible?” (Nov 2021)
Explicitly disagrees (2)
Nintil
Nintil is the only writer on Scott’s blogroll to disagree recently and state a specific reason. He links to Curtis Yarvin’s argument.
My interpretation of his view: Diminishing returns to intelligence mean superintelligent AI won’t be powerful enough to destroy the world. Likewise, the world is so unpredictable that superintelligent strategies don’t have a decisive advantage over human ones.
“Curtis Yarvin on AI risk skepticism (Coincidentally, also my argument: diminishing returns to intelligence + inherent unpredictability of the world; though afaik I've never written about this.)” (June 2021)
Update: Nintil no longer endorses this. His new view:
I think some overall points in Yarvin's essay are valid (the world is indeed uncertain and there are diminishing returns to intelligence), but AGIs would still have the advantage of speed and parallelism (Imagine the entirety of Google but no need for meetings, and where workweeks are ran at 100x speed). Even in the absence of superior intelligence, that alone leads to capacities beyond what a human or group thereof can accomplish. I don't know exactly what I was endorsing, but definitely as of today _I do not think Curtis Yarvin's post shows there is no reason to worry about AI risk_. I might write about AI risk at some point. After all I recently compiled [a reading list](https://nintil.com/links-57) on the topic!
And answering the question, why haven't I written about it, other topics come to mind where I have something that I think is worth saying, I think AGI is still somewhat into the future. I am somewhat specialized these days. Usually when I write I like to read all that has been said about the topic, or at least enough to see if something new deserves to be said and then I say it. I don't like being repetitive. I like writing summaries and critical summaries, but even for that there seems to be decent sources around in the internet. If I spent more time reading about it I still think I could write the best primer to the subject :-) .There's still an argument for why someone like me should write one post on this, which is to add my endorsement to the "this is a serious problem", which marginally could increase the odds of someone doing something about it.
The Scholar’s Stage
Their history writing is excellent, but I was disappointed with their take on AGI risk. They ridiculed the idea without making a specific objection. The context is a Feb 2021 essay criticizing the NYT article on Scott Alexander. In their defense, the quote is from a footnote, not the main essay.
“were I the one trying to be “critical” of the rationalists, I would write much less about how their comment threads are hostile to feminism–which isn’t true in any case, there were plenty of feminists in those threads, more of those commenting than there ever were neoreactionaries–and more about how they continually agitate for people to give part of their income to stopping Skynet. I kid you not. If this is not a grift, what is? How did that not make it into the article while all this nonsense about the Silicon Valley psyche did?
Well, we know the answer to that. The absurdity of the AI risk project is so far outside of the NYT‘s existing narrative frame that this detail did not even register.”
Not included (1)
I didn’t include Eli Dourado, because his last AGI-related post is from 2011. For the last few years, he’s only posted a few times per year and never on AGI. I don’t know his current views.
My explanation
First, Holden Karnofsky is an exception to the pattern. 28 of his 88 posts on Cold Takes are mainly about AI risk. That's more than 10x the rate of anyone else. Even those who mentioned AI risk did so in 1-2 out of hundreds of posts.
Theory 1 (Some of the writers disagree that AGI is a major near-term threat) explains at least 5 (and up to 9) of the 20 blogs. 5 writers explicitly disagreed that AGI is a threat or took positions that imply deprioritizing it.
I bet it's at least part of the reason for Tyler Cowen's relative silence. He made clear that it's a lower priority than nuclear safety, and that reduces the amount of effort we should expect him to dedicate to the issue.
Good Optics, Rohit, and Razib didn't write anything, so it's hard to know why. But I bet they've been exposed to MIRI-style arguments and, if they found them true and important, would have written about them.
Theory 7 (Some of the writers are heavily specialized) explains another 5.
Julia Galef and Tyler Cowen endorsed Theory 2, that it's unusually hard to think about AI risk.
That leaves the 5 who explicitly agree yet don't write about it very often: Zvi, Kelsey Piper, Jacob Falkovich, Steve Hsu, and the author of Applied Divinity Studies. That could be any number of theories, but I think it's a combination of 2 (unusually hard to think and write about AI risk), 3 (not enough expertise), 5 (challenging audience), and 8 (presenting the arguments badly does harm).
Recommendation
I think it's a good idea for popular rationalist-adjacents to write about AI risk more often, especially high-quality essays for mainstream readers who don't visit LessWrong.
Rationalist-adjacent writers are a major path for LessWrong ideas to influence elite and mainstream opinion. This can lead to good policies, like avoiding a race with China and discouraging certain types of capabilities research.
Finally, I read quite a few of the folks named above. I pay for several of their Substacks, think you should too, and feel like I'm getting a good deal. I'll continue to be a happy reader whether or not they write about AI risk.
[1] I wanted to include a stronger statement that specified short or medium timelines (5 to 40 years), defined existential risk as "at least as bad as 7 billion deaths", and identified AGI as the single most important risk. But almost none of the writers specified their position in that much detail.
23 comments
comment by Davidmanheim · 2022-06-13T09:05:48.025Z · LW(p) · GW(p)
I think it's a good idea for popular rationalist-adjacents to write about AI risk more often, especially high-quality essays for mainstream readers who don't visit LessWrong.
I'm confused by this, because you didn't actually address any of the objections you yourself raised. Is there a reason you don't think, for example, that #8 is true? Or #2, #3, and #4? (And #6 seems like a stronger argument if you think that looking weird is a way of losing your audience and having less influence over things they might actually listen to you about - so I think not weighing in until you have something specific to offer can be a good strategy even if your highest priority is AI risk.)
↑ comment by Grant Demaree (grant-demaree) · 2022-06-13T16:54:32.839Z · LW(p) · GW(p)
I think 2, 3, and 8 are true but pretty easy to overcome. Just get someone knowledgeable to help you
4 (low demand for these essays) seems like a calibration question. Most writers probably would lose their audience if they wrote about it as often as Holden. But more than zero is probably ok. Scott Alexander seems to be following that rule, when he said that he was summarizing the 2021 MIRI conversations at a steady drip so as not to alienate the part of his audience that doesn’t want to see that.
I think 6 (look weird) used to be true, but it’s not any more. It’s hard to know for sure without talking to Kelsey Piper or Ezra Klein, but I suspect they didn’t lose any status for their Vox/NYT statements
↑ comment by Davidmanheim · 2022-06-17T13:31:00.672Z · LW(p) · GW(p)
I think that you're grossly underestimating the difficulty of developing and communicating a useful understanding, and the value and scarcity of expert time. I'm sure Kelsey or someone similar can get a couple of hours of time from one of the leading researchers to ensure they understand and aren't miscommunicating, if they really wanted to call in a favor - but they can't do it often, and most bloggers can't do it at all.
Holden has the advantage of deep engagement in the issues as part of his job, working directly with tons of people who are involved in the research, and getting to have conversations as a funder - none of which are true for most writers.
comment by trevor (TrevorWiesinger) · 2022-06-13T06:56:35.150Z · LW(p) · GW(p)
Regarding the "Explicitly Disagrees" section, I'd worry more about people like Nintil than Scholar's Stage.
Scholar's Stage clearly took a face-value approach, based on the classic Bayesian approach that if someone is trying to get your money, it doesn't matter how complicated or convoluted their strategy seems; you're more likely to encounter winning strategies than losing ones because winning strategies are adopted by more grifters. That problem is actually less solvable than it sounds, but nonetheless was solved by the widespread draw-down of earning to give. Fortunately, those dark days are over.
Nintil, on the other hand, worries me greatly. There are massive vested interests with public support/ambivalence of the AI industry, and they are theoretically capable of flicking a switch and stomping on AI risk via counterargument DDOS-ing (or gradually ratcheting up those systems whenever they need to keep AI-risk below some acceptable threshold).
Counterarguments that are refutable, but not quickly or conveniently refutable, are something that can become a much more prevalent concern out of nowhere; much more than their persuasiveness would ordinarily warrant if those counterarguments are considered on their own merit, rather than being artificially propped up in very sophisticated and deliberate ways.
comment by AppliedDivinityStudies (kohaku-none) · 2022-06-15T17:52:09.770Z · LW(p) · GW(p)
Hi, I write AppliedDivinityStudies.com which you link to. A couple quick clarifications:
- The blog is not written by Alexey Guzey.
- In the piece you link I'm just taking Toby Ord's estimates at face value to use them as a parameter, I haven't given this a ton of thought.
But basically I do think AI Risk is important. I don't write about it because I don't have anything particularly smart to say. As you note, it's a complex topic, and I don't really feel like there's any value in me contributing unless I were to really invest in learning much more.
Once every couple years or so, I feel bad about this and try to spend a few days learning much more. Given those experiences, I think it's reasonable for me to believe that I'm bad enough at thinking about AI Risk that I can justify not working on it full-time.
My contributions to the effort, if I have any, will mostly be in more abstract philosophical discourse. The post you link for example is about whether or not trying to accelerate scientific progress would be good for x-risk. I have more work coming up on whether or not we should expect optimized dystopia to be worse than optimized utopia is good.
↑ comment by Grant Demaree (grant-demaree) · 2022-06-16T14:02:20.060Z · LW(p) · GW(p)
My mistake! Fixed
comment by Unnamed · 2022-06-13T20:43:47.432Z · LW(p) · GW(p)
Matthew Yglesias has written a couple things about AI risk & existential risk more broadly, and he has also talked a few times about why he doesn't write more about AI, e.g.:
I don’t write takes about how we should all be more worried about an out-of-control AI situation, but that’s because I know several smart people who do write those takes, and unfortunately they do not have much in the way of smart, tractable policy ideas to actually address it.
This seems different than your 8 possibilities. It sounds like his main issue is that he doesn't see the path that you think you see where "Rationalist-adjacent writers are a major path for LessWrong ideas to influence elite and mainstream opinion. This can lead to good policies, like avoiding a race with China and discouraging certain types of capabilities research."
↑ comment by Grant Demaree (grant-demaree) · 2022-06-14T01:03:14.345Z · LW(p) · GW(p)
I bet you're right that a perceived lack of policy options is a key reason people don't write about this to mainstream audiences
Still, I think policy options exist
The easiest one is adding the right types of AI capabilities research to the US Munitions List, so they're covered under ITAR laws. These are mind-bogglingly burdensome to comply with (so it's effectively a tax on capabilities research). They also make it illegal to share certain parts of your research publicly.
It's not quite the secrecy regime that Eliezer is looking for, but it's a big step in that direction
comment by Artir · 2022-06-15T00:42:01.455Z · LW(p) · GW(p)
Hi, I'm the author of Nintil.com. As of today I think the endorsement I gave to Yarvin's argument was too strong, and I have just amended the post to make that clear. I added the following:
[Edit 2022-06-14]: I think some overall points in Yarvin's essay are valid (the world is indeed uncertain and there are diminishing returns to intelligence), but AGIs would still have the advantage of speed and parallelism (Imagine the entirety of Google but no need for meetings, and where workweeks are ran at 100x speed). Even in the absence of superior intelligence, that alone leads to capacities beyond what a human or group thereof can accomplish. I don't know exactly what I was endorsing, but definitely as of today _I do not think Curtis Yarvin's post shows there is no reason to worry about AI risk_. I might write about AI risk at some point. After all I recently compiled [a reading list](https://nintil.com/links-57) on the topic!
And answering the question, why haven't I written about it, other topics come to mind where I have something that I think is worth saying, I think AGI is still somewhat into the future. I am somewhat specialized these days. Usually when I write I like to read all that has been said about the topic, or at least enough to see if something new deserves to be said and then I say it. I don't like being repetitive. I like writing summaries and critical summaries, but even for that there seems to be decent sources around in the internet. If I spent more time reading about it I still think I could write the best primer to the subject :-) .There's still an argument for why someone like me should write one post on this, which is to add my endorsement to the "this is a serious problem", which marginally could increase the odds of someone doing something about it.
Perhaps an additional reason for why we do not see more: less caring? Take something like an asteroid hitting the Earth in a year, with 80% probability. How bad would I feel, or how much would I do to prevent it? Not much. Of course if success relied solely on me then I would do a lot :). You can observe something similar with Covid: there is no covidposting at Nintil, and there was lots of it in canonically rationalist spheres.
↑ comment by Grant Demaree (grant-demaree) · 2022-06-15T10:49:19.886Z · LW(p) · GW(p)
Many thanks for the update… and if it’s true that you could write the very best primer, that sounds like a high value activity
I don’t understand the asteroid analogy though. Does this assume the impact is inevitable? If so, I agree with taking no action. But in any other case, doing everything you can to prevent it seems like the single most important way to spend your days.
↑ comment by Artir · 2022-06-15T15:44:56.514Z · LW(p) · GW(p)
The asteroid case - it wouldn't be inevitable; it's just the knowledge that there are people out there substantially more motivated than me (and better positioned) to deal with it. For some activities where I'm really good (like... writing blogposts) and where I expect my actions to make more of an impact relative to what others would be doing, I could end up writing a blogpost about 'what you guys should do' and emailing it to some other relevant people.
Also, you can edit your post accordingly to reflect my update!
↑ comment by Grant Demaree (grant-demaree) · 2022-06-22T19:13:21.688Z · LW(p) · GW(p)
Updated! Excuse the delay
comment by Chris_Leong · 2022-06-13T19:47:33.951Z · LW(p) · GW(p)
Great effort post.
comment by MSRayne · 2022-06-22T22:38:22.409Z · LW(p) · GW(p)
I think a lot of this is people feeling like they're not qualified to speak on the topic. I've lurked on LessWrong for years but mostly haven't posted or commented until lately because I don't think I could possibly have independently produced most of the reasoning on AI alignment topics. I'm just not smart enough, and so it's painful to try to interact here.
comment by Bezzi · 2022-06-14T13:20:19.930Z · LW(p) · GW(p)
Judging from his recent post on AlphaCode, I would say that Scott Aaronson is probably more concerned about AI risk now.
↑ comment by JakubK (jskatt) · 2022-10-25T03:17:30.009Z · LW(p) · GW(p)
comment by Bill Benzon (bill-benzon) · 2022-06-22T12:50:28.386Z · LW(p) · GW(p)
On Tyler Cowen: "For now I will just say that it makes my head hurt. It makes my head hurt because the topic is so complicated."
Yes, I read that when he posted it.
You may know that he's fond of giving 'Straussian' readings of documents and movies. Has it occurred to you that he may also engage in Straussian writing? So, when he says "it makes my head hurt..." he's being polite. He doesn't want to offend anyone. But maybe, just maybe, he thinks the problem of a rogue AI isn't likely enough to warrant giving it any thought.
↑ comment by Grant Demaree (grant-demaree) · 2022-06-22T18:06:27.536Z · LW(p) · GW(p)
I buy that… so many of the folks funded by Emergent Ventures are EAs, so directly arguing against AI risk might alienate his audience
Still, this Straussian approach is a terrible way to have a productive argument
↑ comment by Bill Benzon (bill-benzon) · 2022-06-24T10:14:48.448Z · LW(p) · GW(p)
FWIW, Cowen rarely has arguments. He'll state strong positions on any number of things in MR, but he (almost) never engages with comments at MR. If you want an actual back and forth discussion, the most likely way to get it is in a conversation in some forum.
comment by romeostevensit · 2022-06-13T21:00:26.491Z · LW(p) · GW(p)
If you do AI research competently you'll start quickly noticing that a lot of research is dual use, with uncertainty about how much your work would contribute to safety vs capability gain. Thus, the virtue of silence.
↑ comment by JakubK (jskatt) · 2022-12-13T17:57:11.209Z · LW(p) · GW(p)
This is one downside to be careful of with outreach, but on net I think it's quite good to have more high-quality analyses of AI risk. The goal should be to get people to take the problem seriously, not to get people to blindly accept the first safety-related research opportunity they can find.
comment by burmesetheater (burmesetheaterwide) · 2022-06-13T16:43:02.661Z · LW(p) · GW(p)
Most probably just haven't identified it as salient / don't understand it / don't take it seriously, and besides there tend to be severely negative social / audience ramifications associated with doomsday forecasting.