[Cross-post] Welcome to the Essay Meta
post by davekasten · 2025-01-16T23:36:49.152Z · LW · GW
[Cross-posted from my Substack, davekasten.substack.com. (I never said that I was creative at naming things.) The core claim is probably obvious to most LessWrong readers, but the rationale and descriptive elements may be entertaining and illuminating to read nonetheless.]
Hi,
So here are some things I’ve been thinking about this week.
Welcome to the Essay Meta
One of my bigger achievements in 2024 was co-authoring an 80-ish-page essay on how we can manage the risks of powerful AI systems1. We did that for several reasons:
- There wasn’t, at the time, a comprehensive report laying out a relatively strict approach to mitigating AI risk.
- We felt that far too many policy ideas on AI specifically were buried in lots of inconvenient spaces — random Google Docs and think tank reports and LessWrong forum posts — and that not enough work had been done to pull the ideas together into a comprehensive whole. We thought an essay would be an ideal format to share the ideas in.
- We wanted humankind to have a proposal “on the shelf” for how to control the extinction risks posed by powerful AIs, and buy time to figure out how to manage them.
And then — well after we had already done many drafts internally, but well before we published the final version — somebody else did it first.
Wherein we consider an exceptional sample of the genre
As we discussed previously, the most influential essay of 2024, seen through the long lens of history, was very plausibly Leopold Aschenbrenner’s “Situational Awareness.” In the essay, Aschenbrenner argues that if current AI trends just keep going, we very quickly end up in an insane world where humanity generally, and America specifically, has to fight to keep control — an arms race to build superintelligent AIs that far exceed human capabilities, fueled by building automated AI researchers that rapidly and recursively improve themselves. If we’re lucky, the outcome is paradise; if we’re mildly unlucky, nuclear war; if we’re actually unlucky in the ways many of the most distinguished AI researchers fear, we build uncaring alien gods that take control and likely wipe out humanity.
(I disagree with many of his proposed solutions, but generally agree with his description of the problem.)
The essay not only made a splash in the AI policy world; it has also been cited by Ivanka Trump, and its themes have found their way into Trump’s speeches and off-the-cuff statements. You can see its fingerprints all over various statements from the incoming administration.
And, let’s be honest: Aschenbrenner is a great writer, and almost certainly a better one than me. This is how his essay begins:
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the willful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
Did you catch that? Aschenbrenner isn’t just writing a report — he’s writing an opinionated narrative, where he explores and tests out an idea that stands outside the consensus, and tries to get you to see the world his way.
In other words, he’s writing an essay, in the truest historical sense. But to explain why that’s true, I have to tell you a hard truth: your English teacher lied to you.
Essays are not about using five paragraphs (or more, as an adult) to prove a point. Rather, they began as an effort to try to find the truth. Michel de Montaigne’s Essays weren’t an effort to argue a prebaked thesis. Rather, they were an act of sensemaking, growing out of his commonplace book — a way of trying to extract meaning from different quotes, ideas, and stories that he had encountered. Some of the Essays are coherent, others less so. But they were far less certain than the approach that gets you a 5 on the AP English exam.2
Some of this was probably a desired outcome — Montaigne was playing with his favorite ideas, and trying to derive new value from the works he’d tapped from others. Some of it was probably defensive — Montaigne wrote at the height of the wars of religion, where aggressive pamphleteering and adversarial quotation between Catholics and Protestants were a risk not only to one’s career but one’s head.
So, essays historically and structurally:
- Live in the context of a dense web of citation and referencing;
- Are designed to help test out ideas not fully believed in or provable; and
- Are demonstrably a risk-reducing strategy for highly uncertain times with adversarial politics.
Together, it means that the essay format isn’t just an argument.
It’s a guess at what might be true.
Wherein we explain what a meta is, and why we are in an Essay Meta now
So why is Aschenbrenner’s essay so well-suited to the moment? Why did I co-author a similar effort? Why have others, such as AI company CEOs Dario Amodei and Sam Altman, written their own?
It’s because it fits the current meta. (No, not the Mark Zuckerberg company.)
In video games, a “meta” is the subset of strategies that are most favorable, given the current design of the game, the skills of the players on each side, and their common knowledge about what others might do. For example, the NBA transitioned from a game focused on big men under the basket to a corner-seeking, three-pointer-centric game due to a combination of players’ changing skillsets, the analytics revolution informing optimal strategy, and (frankly) Steph Curry just being so damn popular. The key point here is: the meta changes all the time, in response to player behavior, rules changes, and simple trend-following — and you can return to an earlier meta that stopped being prevalent if and when similar conditions apply again.
This is not the first time we have been here: the early Cold War was full of long essays shaping American opinion about how to win it, such as George Kennan’s “X Article,” aka “The Sources of Soviet Conduct,” in Foreign Affairs, expanding on his famous “Long Telegram.”3 In an era of ludicrous magazine budgets and deep geopolitical uncertainty and institutional change, an essay that explained how to make sense of the world, and manage its complexity, could catch a lot of eyeballs. The same happened, to a lesser degree, after the end of the Cold War, when Francis Fukuyama ended history until Samuel Huntington triggered a Clash of Civilizations to bring it back.
So why are we here in the Essay Meta again? It’s because some of the same conditions apply.
The world has gotten very uncertain: America faces challenges ranging from wildfires to skyrocketing housing costs to the potential of World War III in the Taiwan Strait — challenges the professionals seem more interested in passing the buck on than fixing — and people are just plain miserable about the economy and their futures, even though the data says they “should be” happy.
And when you ask folks — over beers, or while canvassing doorsteps, or while watching kids at the playground — who they turn to for advice, the answer is pretty consistent: they just don’t trust the traditional experts in government, the media, business, etc. They can easily recall too many examples of times when the experts4 said “trust me, do what I say and it will all be all right,” and then it wasn’t — and the COVID years pushed that feeling into overdrive.
Some of those examples are bullshit; far too many of them are documented misdeeds that even a skeptical defender of an institution would have to admit are true.5 Our society’s traditional pillars of certainty and expertise went reputationally bankrupt in the usual way: first slowly, then suddenly.
And as those institutions went broke, they opened up new space for new ways of making sense of problems and proposing solutions. And thanks to Substack and earlier players like Patreon, it’s now once again possible to make lots of money writing essay-length piles of words, directly from your fans. (These days, you don’t need to rely on the late-1940s General Motors ad budget to subsidize authoring and distribution.)
To give you just one recent example that crossed my email inbox: my dear friend and cryptocurrency skeptic Patrick “patio11” McKenzie reports, in a letter to his subscribers, that he has been reliably informed that his most recent 25,000-word (excellent!) essay on crypto and debanking, for his paid newsletter Bits About Money, is walking the seniormost halls of Washington at the moment. I do not doubt this.
The essay earns its length because it pulls off a magic trick: it engages seriously with the confusion its pro-crypto readers might have about things that have happened to their bank accounts, even when Patrick bluntly disagrees with those readers’ policy views. As a result, it gives pro-crypto folks a conceptual framework for understanding a series of dimly seen, opaque governmental actions, even when Patrick thinks some of those actions were good and necessary. But it earns its spread throughout Washington not only because it’s a good set of ideas, but because Patrick’s opinionated, personal expertise (rather than any institutional perspective) shines through. He’s fair because he tells you how he’s unfair. You can trust him because he doesn’t hide his biases on a topic where everyone has biases — and, having followed the issue for a decade, he has the citations to back it up.
I imagine that if you think back over your past year, you can think of an idiosyncratic essay like this, that you encountered somewhere unusual on the internet that didn’t exist a decade before, that changed how you saw some issue. And I expect you’ll see more essays like them, from folks as deeply knowledgeable as Leopold Aschenbrenner or Patrick McKenzie, in the years to come.6 The so-called “posting to policy pipeline” is real and influential, and will only grow in the uncertain years to come.
So, where are we to go now? I’m not sure, but I’m trying to make sense of things, and I’m willing to put some guesses down — after all, that’s the point of an essay.
- The traditional Washington advice is to get very good at short-form 1-2 page memos. I think this advice is still useful for some audiences, but far less right than it used to be. 1-2 pagers are stuck in an awkward evolutionary dead end — too short to be truly compelling or opinionated, too long to be tweeted out.
- “The institutional view” will lose prestige in comparison to the individual expert view; this will reduce the prestige of organizations that are committed to processes that only output institution-wide views.
- Do you think policymakers (who have the same phone as you) are more excited to read something with flair on X aka Twitter or Substack, or a view-from-nowhere, death-by-boredom bureaucratic memo? Heck, we know that the incoming administration is unusually online.
- Relatedly, where do the rising policy wonks of today spend their time, and where do you think they _will_ spend their time as they get more senior? They read lengthy Substacks, listen to multi-hour podcasts, and they relax with tiny tweets and TikToks. Solve for the equilibrium.
- Finally, AI is probably going to be the most important issue of the 21st century. Many of the people who work in AI technology and policy are extremely online compared to the average professional, and live in a Bay Area culture where reading long Substack articles, blog posts, LessWrong Sequences, and similar writing at length is culturally normative. How would you expect to influence them (and the AI models those folks are training), given that preference?7
I’m excited to see how these predictions unfold. But here’s the thing, dear reader. In the Essay Meta, if I’ve done my job, you scarcely need these predictions written down. After all, if I’ve explored these ideas well, if my try was successful, then I can just ask you this: do you see the world a little more like I do, now?
If you do, then the predictions should flow naturally from there.
Disclosures:
Views are my own and do not represent those of current or former clients, employers, friends, George Kennan, Patrick “patio11” McKenzie, Leopold’s Large Language Model, or you.
2 comments
comment by Rasool · 2025-01-17T09:11:00.325Z · LW(p) · GW(p)
This is another one that was doing the rounds in the UK progress / NIMBY / growth space:
comment by davekasten · 2025-01-17T16:01:43.945Z · LW(p) · GW(p)
Ooh, interesting, thank you!