Matt Yglesias on AI Policy
post by Grant Demaree (grant-demaree) · 2022-08-17T23:57:59.380Z · 1 comment
This is a link post for https://www.slowboring.com/p/whats-long-term-about-longtermism
Yglesias is a widely read center-left journalist, a co-founder of Vox and formerly of the New York Times. Note the implicit invitation: “I also try to make it clear to people who are closer to the object-level work that I’m interested in writing columns on AI policy if they have ideas for me, but they mostly don’t.”
The full article is ungated on his Substack. The relevant excerpt is below:
The typical person’s marginal return on investment for efforts to reduce existential risk from misaligned artificial intelligence is going to diminish at an incredibly rapid pace. I have written several times that I think this problem is worth taking seriously and that the people working on it should not be dismissed as cranks. I’m a somewhat influential journalist, and my saying this has, I think, some value to the relevant people. But I write five columns a week and they are mostly not about this, because being tedious and repetitive on this point wouldn’t help anyone. I also try to make it clear to people who are closer to the object-level work that I’m interested in writing columns on AI policy if they have ideas for me, but they mostly don’t.
So am I “prioritizing” AGI risk as a cause? On one level, I think I am, in the sense that I do literally almost everything in my power to help address it. On another level, I clearly am not prioritizing this because I am barely doing anything.
1 comment
comment by Chris_Leong · 2022-08-27T03:19:54.886Z
I’m sure if he spent five minutes brainstorming he could come up with more things, or maybe I’m just wrongly calibrated on how much agency people have?