Questionable Narratives of "Situational Awareness"

post by fergusq · 2024-06-17T21:01:37.930Z · LW · GW · 1 comment

This is a link post for https://forum.effectivealtruism.org/posts/WuPs6diJQnznmS4bo/questionable-narratives-of-situational-awareness


This is my analysis of the narratives present in Leopold Aschenbrenner's "Situational Awareness" essay series. In the post, I argue that Aschenbrenner relies on dubious, propaganda-esque, and nationalistic narratives, and on flawed argumentation more generally, which weakens the series' credibility. I don't believe there is necessarily any malicious intent behind this, but I think it is still right to point out these issues, since they make it easier for people to simply dismiss what he is saying.

(This was posted on the EA forum, and I was asked to crosspost it here by someone who reads both forums. I'm not sure how crossposts work since I didn't have an account here already, so I made a linkpost. I hope it's appropriate.)

1 comment

comment by Seth Herd · 2024-06-17T23:59:29.873Z · LW(p) · GW(p)

I commented in more detail [EA(p) · GW(p)] and with more vehemence over on the EA forum.

In my opinion, this post does not engage with Aschenbrenner's narrative at the object level; it merely objects to the "vibes" and notes that his predictions are questionable. Of course they are: they are predictions about something we've never done before. I don't like the conclusions either, but that does not stop me from taking the arguments seriously.

The post's object-level claim is that AGI is not imminent, so we shouldn't freak out about safety and world-dominating power, since that would deprive a lot of people of the benefits of AGI. However, there are exactly zero arguments made for the object-level claim that AGI is still far away.

We know that timelines are difficult to predict. That shouldn't stop us from taking short timelines seriously and analyzing the arguments as best we can, even when doing so is difficult.