Should you increase AI alignment funding, or increase AI regulation?
post by Knight Lee (Max Lee) · 2024-11-26T09:17:01.809Z · LW · GW
We recently wrote A better “Statement on AI Risk?” [? · GW], an open letter we hope AI experts can sign. One commenter objected [LW · GW], saying that stopping the development of apocalyptic AI is a better focus than asking for AI alignment funding.
Our boring answer was that we think there is little conflict between these goals, and the community can afford to focus on both.
This answer won't convince everyone, though: some people may think AI regulation/pausing is so much more important that a focus on AI alignment funding distracts from it, and is therefore counterproductive.
The Question
So how should we weigh the relative importance of AI alignment funding against AI regulation/pausing?
For humanity to survive, we either need to survive ASI by making it aligned/controlled, or avoid building ASI forever (millions of years).
Surviving ASI
To make ASI aligned/controlled, we either need to be lucky, or we need to get alignment/control right before we build ASI. Getting alignment/control right requires many trained experts working on alignment, multiplied by a long enough time spent working on it.
Which is more important? In terms of raw numbers, we believe that a longer time is more important than the number of trained experts:
No matter how great the talent or efforts, some things just take time. You can't produce a baby in one month by getting nine women pregnant.
- Warren Buffett
Alignment work is a bit more forgiving than having babies, and more people might mean faster work. There is an innovative element to it, and sometimes twice as many innovative people are twice as likely to stumble across a new idea. Our very rough estimate is this:
A Spherical Cow Approximation
- If we have twice as much time, we can make twice as much progress (by our definition of progress).
- If we have twice as many trained experts working on alignment, we can make $\sqrt{2}$ times as much progress.
The total alignment progress can be very roughly approximated as
$$P = \int_0^T \sqrt{E_t}\,k_t\,dt$$
where $T$ is the duration, $E_t$ is the number of trained experts working on alignment at time $t$, and $k_t$ is how productive alignment work is, given the level of AI capabilities at time $t$.
If you don't like integrals, we can further approximate it as
$$P \approx T \times \sqrt{E} \times k.$$
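To make the spherical cow concrete, here is a minimal sketch in Python of the simplified (non-integral) form. Every number in it is a made-up assumption for illustration, not an estimate from this post:

```python
from math import sqrt

def progress(T, E, k):
    """Spherical-cow approximation of total alignment progress: P ~ T * sqrt(E) * k.

    T -- years of alignment work before ASI
    E -- number of trained alignment experts (held constant here)
    k -- average productivity of alignment work (arbitrary units)
    """
    return T * sqrt(E) * k

# Illustrative baseline (made-up numbers).
baseline = progress(T=10, E=400, k=1.0)

# Regulation/pausing: T and E both double; k may drop a little because
# early capabilities progress is slowed more than late progress.
regulation = progress(T=20, E=800, k=0.9)

# Funding: E doubles, while T and k stay the same.
funding = progress(T=10, E=800, k=1.0)

print(f"regulation: {regulation / baseline:.2f}x")  # ~ 2*sqrt(2)*0.9 ~ 2.55x
print(f"funding:    {funding / baseline:.2f}x")     # ~ sqrt(2) ~ 1.41x
```

The square root on $E$ is what makes buying time (which also grows the expert pool) beat adding experts alone.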
Regulation
Regulating and pausing AI increases $T$, and will also increase $E_t$, because new people working on alignment can become trained experts. If regulating and pausing AI manages to delay ASI so that it takes twice as long to arrive, both $T$ and $E$ might double, making alignment progress $2\sqrt{2}$ times higher. Regulation and pausing AI may slow down capabilities progress more near the beginning than the end.[1] This means $k_t$ might be lower on average, and $P$ might increase by less than $2\sqrt{2}$.
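Spelled out under the simplified approximation (just restating the arithmetic above):
$$\frac{P_{\text{regulation}}}{P} = \frac{(2T)\,\sqrt{2E}\,k}{T\,\sqrt{E}\,k} = 2\sqrt{2} \approx 2.83.$$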
Funding
If asking for funding manages to double AI alignment funding, we might have twice as many trained experts working on alignment, making $P$ only $\sqrt{2} \approx 1.41$ times higher, and maybe a bit less.
That sounds like we should focus more on AI regulation/pausing, right? Not necessarily! Current AI safety spending is between $0.1 and $0.2 billion/year [? · GW]. Current AI capabilities spending is far larger: four big tech companies alone are spending $235 billion/year on infrastructure that's mostly for AI.[2] Our rough guess is that the US spends $300 billion/year in total on AI, which would put safety spending at roughly 0.05% of the total. The spending is increasing rapidly.[3]
Regulating/pausing AI to give us twice as much time may require delaying the progress of these companies by 10 years, costing them $5000 billion in expected value. Of course the survival of humanity is worth far more than that, but these companies do not believe in AI risk enough to accept this level of sacrifice. They are fighting regulation, and so far they are winning. Getting this $2\sqrt{2}$ increase in $P$ (alignment progress) by regulating/pausing AI is not easy: it requires yanking $5000 billion away from some very powerful stakeholders. It further requires both the US and China to let go of the AI race. Americans who cannot tolerate the other party winning the election might never be convinced to tolerate the other country winning the race to ASI. China's handling of territorial disputes and protests does not paint a picture of compromise and wistful acceptance any better than the US election does.
What about getting a $2\sqrt{2}$ increase in $P$ by increasing AI alignment spending instead? This requires increasing the current $0.2 billion/year by $(2\sqrt{2})^2 = 8$ times, to $1.6 billion/year. Given that the US military budget is $800 billion/year, we feel this isn't an impossibly big ask. This is what our open letter was about.
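To check the arithmetic: matching the $2\sqrt{2}$ increase through experts alone means solving
$$\sqrt{\frac{E'}{E}} = 2\sqrt{2} \;\Longrightarrow\; \frac{E'}{E} = \left(2\sqrt{2}\right)^2 = 8,$$
and $8 \times \$0.2$ billion/year $= \$1.6$ billion/year.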
One might argue that AI alignment spending will be higher anyway near the end, when $k_t$ is highest. However, increasing it now may raise the Overton window for AI alignment spending, such that near the end it will be higher still. It also builds expertise now which will still be available near the end.
Avoid building ASI forever
Surviving without AI alignment requires luck, or the indefinite prevention of ASI.
To truly avoid ASI forever, we'll need a lot more progress on world peace. As technology develops over time, even impoverished countries like North Korea become capable of building things that only the most technologically and economically powerful countries could build a century ago. Many of the cheap electronics in a thrift store's dumpster are more powerful than the largest supercomputers in the world were not too long ago. Preventing ASI forever may require all world leaders, even the ones in theocracies, to believe the risk of building ASI is greater than the risk of not building ASI (which depends on their individual circumstances). It seems very hard to convince all world leaders of this, given that we have not convinced even one world leader to make serious sacrifices over AI risk.
It may be possible, but we should not focus all our efforts on this outcome.
Conclusion
Of course the AI alignment community can afford to argue for both funding and time.
The AI alignment community hasn't yet tried open letters like our Statement on AI Inconsistency [? · GW], which argue for nontrivial amounts of funding relative to the military budget. It doesn't hurt to try this approach at the same time.
- ^
We speculate that when AI race pressures heat up near the end, there may be some speed-up. “Springy” AI regulations might theoretically break and unleash sudden capability jumps.
- ^
https://io-fund.com/artificial-intelligence/ai-platforms/big-tech-battles-on-ai-heres-the-winner forecasts $235 billion and $240 billion for 2024.
- ^
1 comment
comment by Seth Herd · 2024-11-27T05:08:44.415Z · LW(p) · GW(p)
The reason this is a difficult question is that we don't know how hard alignment will be. Opinions from different people with best-in-class expertise and time-on-task disagree wildly.
Therefore I'd argue that we should throw effort and funding into resolving that question by putting the reasoning processes of the relevant experts to wider scrutiny, and do a more systematic job of evaluating them.
Funding comes from a different resource pool than regulation, so perhaps you mean: which one should get your advocacy efforts? The same arguments apply to both of them, and to the meta-alignment question.