Why We Need More Shovel-Ready AI Notkilleveryoneism Megaproject Proposals

post by Peter Berggren (peter-berggren) · 2025-01-20T22:38:26.593Z · LW · GW · 1 comments

Contents

  Reasons to anticipate a massive funding influx
  Why notkilleveryoneism research is likely to be proposal-limited in a high-funding world
  A model of "shovel-ready proposals"
  The immediate benefits of shovel-ready proposals
  Personal challenges, and some questions for the community
  Conclusion

A lot of people within AI safety [LW · GW] and adjacent ecosystems [? · GW] have discussed AI notkilleveryoneism megaproject ideas, but these previous discussions share a key problem: they presuppose that the funding landscape will not change much in the near future. Assumptions like this have proven inaccurate before (e.g. the assumption that FTX funding would last forever). The current assumption that there will not be enough funding for true megaprojects in the AI notkilleveryoneism space puts a damper on serious discussion of megaprojects. However, there is tremendous uncertainty surrounding funding and public awareness in the AI notkilleveryoneism space, even in the very near future. For this reason, it would be prudent to plan for a wide range of funding environments, including serious planning for a massive increase in funding.

If one thinks a massive increase in funding is likely in the near future, one should focus a large amount of effort on writing shovel-ready AI notkilleveryoneism megaproject proposals. These proposals could not only organize research in a high-funding world, but also serve as specific requests within the AI policy and governance ecosystem, making a high-funding world more likely to come to pass.

Note: throughout this post I will use "AI notkilleveryoneism" to refer to research aimed at preventing AI from killing everyone, and "AI safety" more broadly to refer to any research aimed at making AI safer in the broadest sense of the word.

Reasons to anticipate a massive funding influx

Too often I see people conflating normative arguments about funding ("massive additional funding wouldn't do much good in AI notkilleveryoneism") with descriptive arguments ("AI notkilleveryoneism will not be receiving massive additional funding"). This conflation seems deeply myopic to me, since it assumes that most AI notkilleveryoneism funding will always come from the Effective Altruism funding ecosystem.

The vast majority of money in the world is not controlled by the EA funding ecosystem. This remains true even if we count only money that could in principle be allocated to AI notkilleveryoneism. For that reason, the EA community's view on funding constraints within AI is unlikely to be the primary determinant of how much funding flows into the field.

It is therefore distinctly possible for the field to be funded to the tune of hundreds of billions or even trillions of dollars without anyone in the existing AI notkilleveryoneism community having a good idea of how to spend it. This could happen, for instance, if politicians were to reason as follows:

  1. Something must be done about the risk of uncontrollable superintelligent AI
  2. Spending a trillion dollars on a vaguely defined "AI safety moonshot" is something
  3. We must spend a trillion dollars on a vaguely defined "AI safety moonshot"

Note that this is an exaggeration of actual political reasoning processes, but only a modest one. It is distinctly possible for large amounts of money to be appropriated to AI safety broadly, in a way that could include substantial AI notkilleveryoneism funding if we play our cards right. Crucially, this could occur regardless of one's opinion on whether it is a good use of money.

Anyone who disagrees with this is cordially invited to bet against this market on Manifold and make their case in the comments.

Why notkilleveryoneism research is likely to be proposal-limited in a high-funding world

The US government has a lot of money. Like, a lot of money. Even smaller countries' governments, state governments, and the world's largest charitable foundations have enough money to easily drown out the EA funding ecosystem. This means that a large funding increase could massively fund just about every AI notkilleveryoneism megaproject idea that has been fully developed. I admit that ideas such as those suggested here [LW · GW] could plausibly absorb massive amounts of funding if fully developed. However, they have so far not been fully developed, and so are unlikely to receive funding from any research bureaucracy built similarly to current research bureaucracies. This is especially true if they are competing against more polished proposals, which they will be.

While wealthy, the groups listed above do not have strong value alignment with the current AI notkilleveryoneism community. This means that numerous other groups with only a tenuous connection to AI notkilleveryoneism could beat us to the punch in writing proposals, getting hired by these large organizations, and so on, leaving AI notkilleveryoneism unable to make maximally effective use of this additional funding. I admit it is possible that, in the process of appropriating this funding, politicians or other decision-makers will recognize your view of AI safety, or mine, as the correct one. But this is not the default case. The default case is that there is no clearly defined view and that money gets thrown around seemingly at random.

The fundamental limit in such a world won't be funding, because there will be plenty of funding. It also won't be talent, because plenty of talented people flock to whatever the current big, sexy project is. It also won't be political will, because political will is what got AI safety the funding in the first place. Instead, the fundamental limit will be two things.

First is the number and apparent quality of good AI notkilleveryoneism project proposals (however you happen to define "good") that have been written in advance of this funding being allocated. Second is the ability of our community to write a large number of high-apparent-quality, high-expected-value AI notkilleveryoneism proposals once this funding is allocated.

A model of "shovel-ready proposals"

As I said before, the key constraints on AI notkilleveryoneism research in high-funding worlds will be:

  1. The number of pre-existing high-quality AI notkilleveryoneism proposals that look good to decision-makers (e.g. highly legible, well-written, pragmatic-looking).
  2. The ability of the AI notkilleveryoneism community to write these sorts of proposals on short notice.

While both are important, I expect that (2) is mostly a function of the size of the research community, which many people are already working to grow, making it comparatively non-neglected when compared to (1). To that end, it seems advantageous to start writing these proposals now. This would give us a head start over other AI safety research proposals that are less focused on preventing AI from killing everyone (or that are potentially even actively dangerous on that front). And if those groups are already writing proposals in preparation for a funding influx, then even though we cannot gain a head start over them, there is still a significant advantage in denying them one.

Much has been written about how to write good research proposals, including by many writers far better than I am. However, I would like to briefly add something that I think many other writers have missed: the advantages of a proposal being "shovel-ready."

In this context, "shovel-ready" means that a proposal can begin to be put into practice in a matter of weeks, if not sooner. To that end, it will likely name specific "point people" who can get started on at least the initial stages of the proposal on very short notice. These point people should have specialist skills useful for the proposal while also being very adaptable.

The speed advantage of shovel-ready proposals is obvious, but there is also an advantage in terms of political will. A shovel-ready proposal would likely be perceived as especially pragmatic, as well as easy to implement. This is true even in comparison to other AI safety proposals, since very few of the ones I have seen so far are truly shovel-ready.

The immediate benefits of shovel-ready proposals

I admit that the past three sections of this post have assumed that there is very little we can do to influence the amount of non-EA funding that goes into AI notkilleveryoneism, and I admit that this assumption is inaccurate. However, to the extent that it is inaccurate, it functions as a further call for us to write shovel-ready AI notkilleveryoneism proposals, because writing and publishing these proposals would serve a few distinct functions that could themselves increase funding.

First, writing these proposals would put them on the political table as actions worthy of consideration. Bold ideas are a dime a dozen and politicians don't have time to consider all of them. On the other hand, detailed proposals are rare enough that a large share of them will be under serious political consideration whenever the broader topic is under consideration. This is even more true for shovel-ready proposals, which have the added advantage of both being and appearing extremely serious and pragmatic. While the power of the Overton window has been greatly exaggerated in recent years, it's certainly valuable to let politicians know that certain options are options.

Second, writing these proposals would give the AI notkilleveryoneism community a better sense that ambitious projects are possible. The community of late has acquired a sense of fatalism, defeatism, and general unwillingness to take on projects. I hope that this post is enough to show you that this reaction is deeply unwarranted. If it isn't, then I expect the best antidote would be a large supply of shovel-ready megaproject proposals that address the real problems of AI notkilleveryoneism and are likely to be politically feasible to fund.

Personal challenges, and some questions for the community

While I can argue all day about the advantages of writing shovel-ready proposals for AI notkilleveryoneism megaprojects, I personally do not feel equipped to write these proposals, given that I have no prior experience writing up detailed shovel-ready proposals for projects, mega or otherwise. As such, I have a few questions I would like to ask the community here.

  1. How would I determine whether my lack of confidence here is impostor syndrome or whether it's indicative of a genuine lack of experience?
  2. How would I get experience here?
  3. If it turns out I need collaborators for one of my proposals, how would I find them?

Conclusion

As you can see, there is ample opportunity for shovel-ready AI notkilleveryoneism megaproject proposals. While they will be quite difficult to write, there is substantial upside risk to writing them, both because they can take advantage of future funding and because they are likely to be on the table politically once written.

The next steps of this project, in addition to writing shovel-ready proposals, would be to compile and organize a set of resources on proposal-writing. After this point, starting an organization or group of organizations dedicated to writing these proposals would be extremely valuable.

1 comment


comment by Nathan Helm-Burger (nathan-helm-burger) · 2025-01-21T01:59:40.824Z · LW(p) · GW(p)

Proposal in search of collaborators: privacy-preserving safety inspections run by temporary AI instances. This would allow competing parties to agree to abstain from dangerous techniques, by giving each party assurance that its competitors are also abstaining.

The AI has to be able to search through the computer system it gets connected to and give a report, according to strictly defined rules, that leaks no unintended information.

This could be used, for instance, to enable a global treaty under which parties agree to mutually inspect each other, ensuring compliance with a plan like davidad's.
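
To make the shape of such an inspection concrete, here is a minimal, purely illustrative sketch (my addition, not part of the comment above) of what a "strictly defined report" interface might look like: the temporary inspector instance may only return a fixed set of coarse, pre-agreed findings, and a validator rejects anything outside that schema before it leaves the sandbox. The field names and treaty questions are hypothetical.

    from dataclasses import dataclass, asdict

    # Hypothetical fixed report schema: the only channel out of the inspection
    # sandbox. Every field is a coarse yes/no answer to a pre-agreed treaty
    # question, so the report cannot carry arbitrary details about the
    # inspected system.
    @dataclass(frozen=True)
    class InspectionReport:
        training_runs_exceed_compute_cap: bool
        undeclared_model_weights_found: bool
        prohibited_techniques_detected: bool

    ALLOWED_FIELDS = {
        "training_runs_exceed_compute_cap",
        "undeclared_model_weights_found",
        "prohibited_techniques_detected",
    }

    def validate_report(report: InspectionReport) -> dict:
        # Check the outgoing report against the agreed schema before release.
        # Rejecting anything beyond the pre-agreed boolean fields is the
        # (crude) mechanism here for "leaks no unintended information."
        payload = asdict(report)
        if set(payload) != ALLOWED_FIELDS:
            raise ValueError("Report contains fields outside the agreed schema")
        if not all(isinstance(value, bool) for value in payload.values()):
            raise ValueError("Report fields must be coarse boolean findings")
        return payload

    if __name__ == "__main__":
        # Inside the sandbox, the temporary AI instance would search the
        # connected system and fill in the schema; only the validated payload
        # leaves, and the instance (with everything else it saw) is deleted.
        report = InspectionReport(
            training_runs_exceed_compute_cap=False,
            undeclared_model_weights_found=False,
            prohibited_techniques_detected=False,
        )
        print(validate_report(report))

In a real system the hard problems would lie elsewhere (verifying the sandbox, trusting the inspector model, agreeing on the treaty questions); the sketch only illustrates that the outgoing report schema can be made narrow and auditable.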