Speaking to Congressional staffers about AI risk
post by Akash (akash-wasil), hath · 2023-12-04T23:08:52.055Z · LW · GW · 23 comments
Contents: Context · Arriving in DC & initial observations · Hierarchy of a Congressional office · Outreach to offices · A typical meeting · Meeting Logistics · How meetings went · Following-up · Staffer attitudes toward AI risk · Lessons Learned · Final Takes · 23 comments
In May and June of 2023, I (Akash) had about 50-70 meetings about AI risks with congressional staffers. I had been meaning to write a post reflecting on the experience and some of my takeaways, and I figured it could be a good topic for a LessWrong dialogue. I saw that hath had offered to do LW dialogues with folks [LW(p) · GW(p)], and I reached out.
In this dialogue, we discuss how I decided to chat with staffers, my initial observations in DC, some context about how Congressional offices work, what my meetings looked like, lessons I learned, and some miscellaneous takes about my experience.
Context
Arriving in DC & initial observations
Hierarchy of a Congressional office
Outreach to offices
A typical meeting
Staffer attitudes toward AI risk
Lessons Learned
Final Takes
23 comments
Comments sorted by top scores.
comment by trevor (TrevorWiesinger) · 2023-12-05T06:38:41.127Z · LW(p) · GW(p)
Before I start criticizing, I want to make it clear that I'm grateful for your work and could not do better myself. I certainly did try; in fact, I was one of the first in DC in 2018, but I could not do well, since I was one of the many "socially inept" people who are in fact a serious problem in DC. (For the record: if you want to do AI policy, do not move to DC; first visit and have the people there judge your personality/charisma. The standards might be far higher or far lower than you might expect, and they are now much better at testing people for fit than when I started 6 years ago.)
I'm also grateful to see you put your work out there for review on Lesswrong, rather than staying quiet. I think the decision to attempt to be open vs. closed about AI policy work is vastly more complicated than most people in AI policy believe.
Your post is fantastic, especially the reflections.
- You never mentioned the words "committee" or "chair" in this post. ?????????? Everything in Congress, other than elections and constituent calls, revolves around the congressional committees and particularly their chairs. Is your model that congressional committees just aren't important relative to party leadership in each chamber? If the balance of power has shifted that far by now, I wouldn't know. Either way, Congress is very much the kind of place where 20% of the members hold >80% of the power. The members in the bottom 50% are the easiest to talk to: their staffers exist to look important, to talk to as many people as possible per day, and to make them feel heard, and their offices are focused on maintaining the appearance of being capable of substantially influencing legislation, in order to mitigate the risk that their voters and their professional network find out that they are in the bottom 50%. Over the centuries, Congress has become incredibly sophisticated at constructing mazes and leading people around. The committee system is the first step to cutting through that and getting to where the bills are actually getting negotiated and written (primarily by lobbyists with de facto personal ties to the offices of the chairs of the relevant committees, and maybe the deputy chairs). Unless I'm wrong about this: e.g., maybe a way larger share of policymaking power has accrued to the party leadership, who are even harder to meet with, or maybe the lobbyists from the big 5 tech companies are the main hotspot for tech-related policymaking in general, including AI, and they meet with whoever they want, making the committee structure not very relevant to AI policy. It would have been great to hear more about the people you met at think tanks and in the executive branch.
- When it comes to foreign policy, which might be pretty important, a helpful way to look at it is that Congress and other parliamentary bodies act as a wall between domestic elites and the foreign-policymaking institutions, with intelligence agencies as the main holders of real power [LW · GW]. Obviously, these are people, and backdoor deals and revolving-door employment are everywhere, so even this wall is fuzzy. But it is much more robust than, say, domestic policy (e.g., farm bills), where Congress basically acts as the conduit between elites and policy (e.g., most of the actual lawmaking work on Capitol Hill is done by lobbyists, not staffers). Intelligence agencies can easily bribe or infiltrate parliaments; parliaments cannot easily bribe or infiltrate intelligence agencies. Authoritarian countries like China, on the other hand, don't have real parliaments, and the strongman leader must mitigate the creep of rich domestic elites seeking policymaking influence. (In reality it's much more complex, e.g., hybrid regimes, redirecting domestic elites to focus on local/provincial governments instead of the central/national government, etc. Your book might help with this, but it's important to note that books about intelligence agencies are products that need to optimize for entertainment in order to sell copies; books must be recommended by personal connections, and even then you never know. I might read and trust a book recommended to me by someone like Jason Matheny.)
Lots of people asked me if I had draft legislation. Apparently, if you have regulatory ideas, people want to see that you have a (short) version of it written up like a bill.
They want you to propose solutions; they get annoyed when people come to them with a new issue they know nothing about and expect them to be the ones to think of solutions. They want you to do the work of writing up the final product and then hand it to them. If they have any issue with it, they'll rewrite parts of it or throw it in the recycle bin.
In terms of my effect: I think I mostly just got them to think about it more and raised it on their internal "AI policy priorities" list. I think people forget that staffers have like 100 things on their priority lists, so merely exposing and re-exposing them to these ideas can be helpful.
I've heard this characterized as "goldfish memory". It's important to note that many of the other 100 things on their priority list also have people trying to "expose and re-expose" them to ideas, and many staffers are hired for skill at pretending that they're listening. I think you were correct to evaluate your work building relationships as more useful than this.
My experience in DC made me think that the Overton Window is extremely wide. Congress does not have cached takes on AI policy, and it seems like a lot of people genuinely want to learn. It's unclear how long this will last (e.g., maybe AI risk ends up getting polarized), but we seem to be in a period of unusually high open-mindedness & curiosity.
I disagree that the Overton window in DC, or even Congress, is as wide as your impression. This is both for the reasons stated above, and because it seems very likely (>95%) that military-adjacent people in both the US and China are actively pursuing AI for things like economic growth/stabilization, military applications like EW and nuclear-armed cruise missiles, or for the data processing required for modern information warfare [LW · GW]. I agree that we seem to be in a period of unusually high open-mindedness and curiosity.
With that said, I think coordination would be easier if people ended up being more explicit about what they believe, more explicit about specific policy goals they are hoping to achieve, and more explicit about their legible wins (and losses). In the absence of this, we run the risk of giving too much power and too many resources to people who "play the game", develop influence, but don't end up using their influence to achieve meaningful change.
I think that DC is a very Moloch-infested place, resulting in an intense and pervasive culture of nihilism: a near-universal belief that Moloch is inevitable. Prolonged exposure to that environment (several years), where everyone around you thinks this way and will permanently mark you as low-social-status if you ever reveal that you are one of those people with hope for the world, likely (>90%) has intense psychological effects on the AI safety people in DC.
Likewise, the best people will know the risks associated with having important conversations near smartphones in a world where people use AI for data science [? · GW], but they don't know you well enough to know whether you yourself will proceed to have important conversations about them near smartphones. They can't have a heart-to-heart with you about the problem, because that would turn the conversation into an important one, and it would be near a smartphone.
I think I would've written up a doc that explained my reasoning, documented the people I consulted with, documented the upside and downside risks I was aware of, and sent it out to some EAs.
internally screaming
I would've come with a printed-out 1-pager that explained what CAIS is & summarized the regulatory ideas in the NTIA response. I ended up doing this halfway through, and I would've done this sooner.
If you ever decide to write a doc properly explaining the situation with AI safety to policymakers, Scott Alexander's Superintelligence FAQ [LW(p) · GW(p)] is held in high esteem; you could probably read it, think about how/why it was good at giving laymen a fair chance to understand the situation, and write a much shorter 1-pager yourself that's optimized for your particular audience. I convinced both of my ~60-year-old parents to take AI safety seriously by asking them to read the AI chapter in Toby Ord's The Precipice [LW · GW], so you might consider that instead.
Replies from: akash-wasil, Josephm
↑ comment by Akash (akash-wasil) · 2023-12-05T21:06:30.952Z · LW(p) · GW(p)
Thanks for all of this! Here's a response to your point about committees.
I agree that the committee process is extremely important. It's especially important if you're trying to push forward specific legislation.
For people who aren't familiar with committees or why they're important, here's a quick summary of my current understanding (there may be a few mistakes):
- When a bill gets introduced in the House or the Senate, it gets sent to a committee. The decision is made by the Speaker of the House or the presiding officer in the Senate. In practice, however, they often defer to a non-partisan "parliamentarian" who specializes in figuring out which committee would be most appropriate. My impression is that this process is actually pretty legitimate and non-partisan in most cases(?).
- It takes some degree of skill to be able to predict which committee(s) a bill is most likely to be referred to. Some bills are obvious (e.g., an agriculture bill will go to an agriculture committee). In my opinion, artificial intelligence bills are often harder to predict. There is obviously no "AI committee", and AI stuff can be argued to affect multiple areas. With all that in mind, I think it's not too hard to narrow things down to ~1-3 likely committees in the House and ~1-3 likely committees in the Senate.
- The most influential person in the committee is the committee chair. The committee chair is the highest-ranking member from the majority party (so in the House, all the committee chairs are currently Republicans; in the Senate, all the committee chairs are currently Democrats).
- A bill cannot be brought to the House floor or the Senate floor (cannot be properly debated or voted on) until it has gone through committee. The committee is responsible for finalizing the text of the bill and then voting on whether or not they want the bill to advance to the chamber (House or Senate).
- The committee chair typically has a lot of influence over the committee. The committee chair determines which bills get discussed in committee, for how long, etc. Also, committee chairs usually have a lot of "soft power"– members of Congress want to be in good standing with committee chairs. This means that committee chairs often have the ability to prevent certain legislation from getting out of committee.
- If you're trying to get legislation passed, it's ideal to have the committee chair think favorably of that piece of legislation.
- It's also important to have at least one person on the committee who is willing to "champion" the bill. This means they view the bill as a priority & are willing to say "hey, committee, I really think we should be talking about bill X." A lot of bills die in committee because they were simply never prioritized.
- If the committee chair brings the bill to a vote, and the majority of committee members vote in favor of the bill moving to the chamber, the bill can be discussed in the full chamber. Party leadership (the Speaker of the House, the Senate Majority Leader, etc.) typically plays the most influential role in deciding which bills get discussed or voted on in the chambers.
- Sometimes, bills get referred to multiple committees. This generally seems like "bad news" from the perspective of getting the bill passed, because it means that the bill has to get out of multiple committees. (Any single committee could essentially prevent the bill from being discussed in the chamber).
(If any readers are familiar with the committee process, please feel free to add more info or correct me if I've said anything inaccurate.)
↑ comment by Joseph Miller (Josephm) · 2024-02-29T16:39:17.212Z · LW(p) · GW(p)
> I think I would've written up a doc that explained my reasoning, documented the people I consulted with, documented the upside and downside risks I was aware of, and sent it out to some EAs.
internally screaming
Can you please explain what this means?
comment by johnswentworth · 2023-12-05T09:53:04.652Z · LW(p) · GW(p)
I started asking other folks in AI Governance. The vast majority had not talked to congressional staffers (at all).
??? WTF do people "in AI governance" do?
Replies from: akash-wasil, TrevorWiesinger, Zach Stein-Perlman
↑ comment by Akash (akash-wasil) · 2023-12-05T20:31:25.928Z · LW(p) · GW(p)
WTF do people "in AI governance" do?
Quick answer:
- A lot of AI governance folks primarily do research. They rarely engage with policymakers directly, and they spend much of their time reading and writing papers.
- This was even more true before the release of GPT-4 and the recent wave of interest in AI policy. Before GPT-4, many people believed "you will look weird/crazy if you talk to policymakers about AI extinction risk." It's unclear to me how true this was (in a genuine "I am confused about this & don't think I have good models of this" way). Regardless, there has been an update toward talking to policymakers about AI risk now that AI risk is a bit more mainstream.
- My own opinion is that, even after this update toward policymaker engagement, the community as a whole is still probably overinvested in research and underinvested in policymaker engagement/outreach. (Of course, the two can be complementary, and the best outreach will often be done by people who have good models of what needs to be done & can present high-quality answers to the questions that policymakers have.)
- Among the people who do outreach/policymaker engagement, my impression is that there has been more focus on the executive branch (and less on Congress/congressional staffers). The main advantage is that the executive branch can get things done more quickly than Congress. The main disadvantage is that Congress is often required (or highly desired) to make "big things" happen (e.g., setting up a new agency or a licensing regime).
↑ comment by trevor (TrevorWiesinger) · 2023-12-06T16:08:27.655Z · LW(p) · GW(p)
the community as a whole is still probably overinvested in research and underinvested in policymaker engagement/outreach.
My prediction is that the AI safety community will overestimate the difficulty of policymaker engagement/outreach.
I think that the AI safety community has quickly and accurately taken social awkwardness and nerdiness into account, and factored that out of the equation. However, they will still overestimate the difficulty of policymaker outreach, on the basis that policymaker outreach requires substantially above-average sociability and personal charisma.
Even among the many non-nerd extroverts in the AI safety community, who have above-average or well-above-average social skills (e.g. ~80th or 90th percentile), doing well in policy requires an extreme combination of traits that produce intense charismatic competence, such as the traits required for a sense of humor near the level of a successful professional comedian's (e.g. ~99th or 99.9th percentile). This is because the policy environment, like the world of corporate executives, selects for charismatic extremity.
Because people who are introspective or think about science at all are very rarely far above the 90th percentile for charisma, even if only the obvious natural extroverts are taken into account, the AI safety community will overestimate the difficulty of policymaker outreach.
I don't think they will underestimate the value of policymaker outreach (in fact I predict they are overestimating the value, due to American interests in using AI for information warfare [LW · GW] pushing AI decisionmaking towards inaccessible and inflexible parts of natsec agencies [LW · GW]). But I do anticipate them underestimating the feasibility of policymaker outreach.
Replies from: Rana Dexsin
↑ comment by Rana Dexsin · 2023-12-07T08:31:46.878Z · LW(p) · GW(p)
I'm not sure I understand the direction of reasoning here. Overestimating the difficulty would mean that it will actually be easier than they think, which would be true if they expected a requirement of high charisma but the requirement were actually absent, or would be true if the people who ended up doing it were of higher charisma than the ones making the estimate. Or did you mean underestimating the difficulty?
Replies from: TrevorWiesinger
↑ comment by trevor (TrevorWiesinger) · 2023-12-07T20:15:28.299Z · LW(p) · GW(p)
I should have made it more clear at the beginning.
- AI governance successfully filters out the nerdy people
- They see that they're still having a hard time finding their way to the policymakers with influence (e.g. what Akash was doing, meeting people in order to meet more people through them).
- They conclude that the odds of success are something like ~30% or any other number.
- I think that they would be off by something like 10 percentage points, so it would actually be ~40%, because factoring out the nerds still leaves you with the people at the 90th percentile of charisma when you need people at the 99th percentile. They might be able to procure those people.
- This is because I predict that people at the 99th percentile of Charisma are underrepresented in AI safety, even if you only look at the non-nerds.
↑ comment by johnswentworth · 2023-12-05T20:42:46.295Z · LW(p) · GW(p)
Among the people who do outreach/policymaker engagement, my impression is that there has been more focus on the executive branch (and less on Congress/congressional staffers).
That makes sense and sounds sensible, at least pre-ChatGPT.
↑ comment by trevor (TrevorWiesinger) · 2023-12-05T11:16:36.798Z · LW(p) · GW(p)
Modern congressional staffers are the product of Goodhart's law: ~50-100 years ago, they were the ones who ran Congress de facto, so all the businessmen and voters wanted to talk to them, and so the policymaking ended up moving elsewhere. Just like what happened with congressmen themselves ~100-150 years ago. Congressional staffers today primarily take constituent calls from voters and make interest groups think they're being listened to. Akash's accomplishments came from wading through that bullshit, meeting people through people until he managed to find some gems.
Most policymaking today is called in from outside, with lobbyists having the domain-expertise needed to write the bills, and senior congressional staffers (like the legislative directors and legislative assistants here) overseeing the process, usually without getting very picky about the details.
It's not like congressmembers have no power, but they're just one part of what's called an "iron triangle": the congressional lawmakers, the executive-branch bureaucracies (e.g. FDA, CDC, DoD, NSA), and the private-sector companies (e.g. Walmart, Lockheed, Microsoft, Comcast), with lobbyists circulating among the three, negotiating and cutting deals between them. It's incredibly corrupt and always has been, but not all-crushingly corrupt like African governments. It's like the Military-Industrial Complex, except that's actually a bad example because Congress is increasingly out of the loop de facto on foreign policy (most structures are idiosyncratic, because the fundamental building block is people thinking of ways to negotiate backdoor deals).
People in the executive branch/bureaucracies like the DoD have more power on interesting things like foreign policy; Congress is more powerful on things that have been entrenched for decades, like farming policy. Think-tank people have no power, but they're much less stupid, have domain expertise, and are often called up to help write bills instead of lobbyists.
I don't know how AI policy is made in Congress; I jumped ship from domestic AI policy to foreign AI policy 3.5 years ago in order to focus more on the incentives from the US-China angle [LW · GW]. Akash is the one to ask about where AI policymaking happens in Congress, as he was the one actually there deep in the maze (maybe via DM, since he didn't describe it in this post).
I strongly recommend people talk to John Wentworth about AI policy, even if he doesn't know much at first; after looking at Wentworth's OpenAI dialogue [LW · GW], he's currently my top predicted candidate for "person who starts spending 2 hours a week thinking about AI policy instead of technical alignment, and thinks up galaxy-brained solutions that break the stalemates that have vexed the AI policy people for years".
↑ comment by Zach Stein-Perlman · 2023-12-06T03:49:20.819Z · LW(p) · GW(p)
Most don't do policy at all. Many do research. Since you're incredulous, here are some examples of great AI governance research (which don't synergize much with talking to policymakers):
- Towards best practices in AGI safety and governance
- Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring
- Survey on intermediate goals in AI governance
↑ comment by johnswentworth · 2023-12-06T08:24:23.948Z · LW(p) · GW(p)
I mean, those are all decent projects, but I would call zero of them "great". Like, the whole appeal of governance as an approach to AI safety is that it's (supposed to be) bottlenecked mainly on execution, not on research. None of the projects you list sound like they're addressing an actual rate-limiting step to useful AI governance.
Replies from: Zach Stein-Perlman
↑ comment by Zach Stein-Perlman · 2023-12-06T18:24:29.357Z · LW(p) · GW(p)
Like, the whole appeal of governance as an approach to AI safety is that it's (supposed to be) bottlenecked mainly on execution, not on research.
(I disagree. Indeed, until recently governance people had very few policy asks for government.)
(Also note that lots of "governance" research is ultimately aimed at helping labs improve their own safety. Central example: Structured access.)
Replies from: Lblack
↑ comment by Lucius Bushnaq (Lblack) · 2023-12-06T23:07:01.508Z · LW(p) · GW(p)
(I disagree. Indeed, until recently governance people had very few policy asks for government.)
Did that change because people finally finished doing enough basic strategy research to know what policies to ask for?
It didn't seem like that to me. Instead, my impression was that it was largely triggered by ChatGPT and GPT4 making the topic more salient, and AI safety feeling more inside the Overton window. So there were suddenly a bunch of government people asking for concrete policy suggestions.
↑ comment by Zach Stein-Perlman · 2023-12-06T23:10:24.873Z · LW(p) · GW(p)
(I disagree. Indeed, until recently governance people had very few policy asks for government.)
Did that change because people finally finished doing enough basic strategy research to know what policies to ask for?
Yeah, that's Luke Muehlhauser's claim; see the first paragraph of the linked piece.
I mostly agree with him. I wasn't doing AI governance years ago but my impression is they didn't have many/good policy asks. I'd be interested in counterevidence — like pre-2022 (collections of) good policy asks.
Anecdotally, I think I know one AI safety person who was doing influence-seeking-in-government and was on a good track but quit (to do research) because they weren't able to leverage their influence because the AI governance community didn't really have asks for (the US federal) government.
Replies from: akash-wasil
↑ comment by Akash (akash-wasil) · 2023-12-07T00:01:33.817Z · LW(p) · GW(p)
My own model differs a bit from Zach's. It seems to me like most of the publicly-available policy proposals have not gotten much more concrete. It feels a lot more like people were motivated to share existing thoughts, as opposed to people having new thoughts or having more concrete thoughts.
Luke's list, for example, is more of a "list of high-level ideas" than a "list of concrete policy proposals." It has things like "licensing" and "information security requirements"– it's not an actual bill or set of requirements. (And to be clear, I still like Luke's post and it's clear that he wasn't trying to be super concrete).
I'd be excited for people to take policy ideas and concretize them further.
Aside: When I say "concrete" in this context, I don't quite mean "people on LW would think this is specific." I mean "this is closer to bill text, text of a section of an executive order, text of an amendment to a bill, text of an international treaty, etc."
I think there are a lot of reasons why we haven't seen much "concrete policy stuff". Here are a few:
- This work is just very difficult: it's much easier to hide behind vagueness when you're writing an academic-style paper than when you're writing a concrete policy proposal.
- This work requires people to express themselves with more certainty/concreteness than academic-style research. In a paper, you can avoid giving concrete recommendations, or you can give a recommendation and then immediately mention 3-5 crucial considerations that could change the calculus. In bills, you basically just say "here is what's going to happen" and do much less "and here are the assumptions that go into this and a bunch of ways this could be wrong."
- This work forces people to engage with questions that are less "intellectually interesting" to many people (e.g., which government agency should be tasked with X, how exactly are we going to operationalize Y?)
- This work just has a different "vibe" from the more LW-style research and the more academic-style research. Insofar as LW readers are selected for (and reinforced for) liking a certain "kind" of thinking/writing, this "kind" of thinking/writing differs from the concrete policy vibe in a bunch of hard-to-articulate ways.
- This work often has the potential to be more consequential than academic-style research. There are clear downsides of developing [and advocating for] concrete policies that are bad. Without any gatekeeping, you might have a bunch of newbies writing flawed bills. With excessive gatekeeping, you might create a culture that disincentivizes intelligent people from writing good bills. (And my own subjective impression is that the community erred too far on the latter side, but I think reasonable people could disagree here).
For people interested in developing the kinds of proposals I'm talking about, I'd be happy to chat. I'm aware of a couple of groups doing the kind of policy thinking that I would consider "concrete", and it's quite plausible that we'll see more groups shift toward this over time.
comment by RobertM (T3t) · 2024-02-28T19:28:25.671Z · LW(p) · GW(p)
Curated. I liked that this post had a lot of object-level detail about a process that is usually opaque to outsiders, and that the "Lessons Learned" section was also grounded enough that someone reading this post might actually be able to skip "learning from experience", at least for a few possible issues that might come up if one tried to do this sort of thing.
comment by Mikhail Samin (mikhail-samin) · 2023-12-05T23:18:37.560Z · LW(p) · GW(p)
It's great to see this being publicly posted!
comment by wassname · 2024-01-13T00:23:54.638Z · LW(p) · GW(p)
Read books. I found Master of the Senate and Act of Congress to be especially helpful. I'm currently reading The Devil's Chessboard to better understand the CIA & intelligence agencies, and I'm finding it informative so far.
Would you recommend "The Devil's Chessboard"? It seems intriguing, yet it makes substantial claims with scant evidence.
In my opinion, intelligence information often leads to exaggerated stories unless it is anchored in public information, leaked documents, and numerous high-quality sources.
comment by Aryeh Englander (alenglander) · 2024-02-28T21:10:30.133Z · LW(p) · GW(p)
One final thing is that I typically didn't emphasize loss of control//superintelligence//recursive self-improvement. I didn't hide it, but I included it in a longer list of threat models
I'd be very interested to see that longer threat model list!
Replies from: akash-wasil
↑ comment by Akash (akash-wasil) · 2024-02-28T21:28:08.435Z · LW(p) · GW(p)
If memory serves me well, I was informed by Hendrycks' overview of catastrophic risks. I don't think it's a perfect categorization, but I think it does a good job laying out some risks that feel "less speculative" (e.g., malicious use, race dynamics as a risk factor that could cause all sorts of threats) while including those that have been painted as "more speculative" (e.g., rogue AIs).
I've updated toward the importance of explaining & emphasizing risks from sudden improvements in AI capabilities, AIs that can automate AI research, and intelligence explosions. I also think there's more appetite for that now than there used to be.
comment by Baometrus (worlds-arise) · 2024-02-29T05:42:04.683Z · LW(p) · GW(p)
There are a lot of antibodies and subtle cultural pressures that can prevent me from thinking about certain ideas and can atrophy my ability to take directed action in the world.
This hit me like a breath of fresh air. "Antibodies," yes. It makes me feel less alone in my world-space.
comment by Review Bot · 2024-02-14T06:48:10.643Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?