aog's Shortform

post by aog (Aidan O'Gara) · 2025-04-19T22:07:15.559Z · LW · GW · 3 comments

3 comments


comment by aog (Aidan O'Gara) · 2025-04-19T22:07:15.558Z · LW(p) · GW(p)

Shoutout to Epoch for having its own intellectual culture. 

Views on AGI seem suspiciously correlated to me, as if many people's views are determined more by diffusion through social networks and popular writing than by independent reasoning. This isn't unique to AGI. Most people are not capable of coming up with useful worldviews on their own. Often, the development of interesting, coherent, novel worldviews benefits from an intellectual scene.

What's an intellectual scene? It's not just an idea. Usually it has a set of complementary ideas, each of which makes more sense with the others in place. Often there's a small number of key thinkers who come up with many new ideas, and a broader group of people who agree with the ideas, further develop them, and follow their implied call to action. Scenes benefit from shared physical and online spaces, though they can also exist in social networks without a central hub. Sometimes they professionalize, offering full-time opportunities to develop the ideas or act on them. Members of a scene are shielded from pressure to defer to others who do not share their background assumptions, and therefore feel freer to come up with new ideas that would seem unusual to outsiders but make sense within the scene's shared intellectual framework. These conditions seem to raise the likelihood of novel intellectual progress.

There are many examples of intellectual scenes within AI risk, at varying levels of granularity and cohesion. I've been impressed with Davidad recently for putting forth a set of complementary ideas around Safeguarded AI and FlexHEGs, and for creating opportunities for people who agree with his ideas to work on them. Perhaps the most influential scenes within AI risk are the MIRI / LessWrong / Conjecture / Control AI / Pause AI cluster, united by high p(doom) and a focus on pausing or stopping AI development, and the Constellation / Redwood / METR / Anthropic cluster, focused on prosaic technical safety techniques and working with AI labs to make the best of the current default trajectory. (Though by saying these clusters have some shared ideas, influences, and spaces, I don't mean to deny that most people within them disagree on many important questions.) Rationalism and effective altruism are their own scenes, as are the conservative legal movement, social justice, new atheism, progress studies, neoreaction, and neoliberalism.

Epoch has its own scene, with a distinct set of thinkers, beliefs, and implied calls to action. Matthew Barnett has written the most about these ideas publicly, so I'd encourage you to read his posts on these topics, though my understanding is that many of these ideas were developed with Tamay, Ege, Jaime, and others. Key ideas include long timelines, slow takeoff, eventual explosive growth, optimism about alignment, concerns about overregulation, concerns about hawkishness towards China, arguments for the likelihood of AI sentience and the desirability of AI rights, debates about the desirability of different futures, and so on. These ideas motivate much of Epoch's work, as well as Mechanize. Importantly, the people in this scene don't seem to mind much that many others (including me) disagree with them.

I'd like to see more intellectual scenes that seriously think about AGI and its implications. There are surely holes in our existing frameworks, and they can be hard for people operating within those frameworks to spot. Creating new spaces with different sets of shared assumptions seems like it could help.

Replies from: Chris_Leong, Tenoke
comment by Chris_Leong · 2025-04-20T04:34:40.369Z · LW(p) · GW(p)

I used to really like Matthew Barnett's posts for their contrarian but interesting takes.

However, over the last few years, I've started to feel more negatively about them. I guess I feel that his posts tend to be framed in a really strange way such that, even though there's often some really good research there, they're more likely to confuse the average reader than anything else, and even if you can untangle the frames, I usually don't find it worth the time.

I should mention, though, that as I've started to feel more negative about them, I've started to read less of them and to engage less deeply with the ones I do look at, so there's a chance my view would be different if I read more.

I'd probably feel more positive about any posts he writes that stay closer to presenting data and further from interpretation.

That said, Epoch overall has produced some really high-quality content and I'd definitely like to see more independent scenes.

comment by Tenoke · 2025-04-20T06:37:28.853Z · LW(p) · GW(p)

It's hard for me to respect a safety-ish org that's so obviously wrong about the most important factors of its chosen topic.

I won't judge a random celebrity for expecting, e.g., very long timelines, but an AI research center? I'm sure they are very cool people, but come on.