How bad would AI progress need to be for us to think general technological progress is also bad?
post by Jim Buhler (jim-buhler) · 2024-07-09T10:43:45.506Z · LW · GW · 2 comments
This is a question post.
It is widely believed in the EA community that AI progress is acutely harmful because it substantially increases X-risks. This has led to a growing prioritization of pushing back against work that advances AI capabilities.[1]
On the other hand, economic growth, scientific advancements, and (non-AI) technological progress are generally viewed as highly beneficial, improving the quality of the future provided there are no existential catastrophes.[2]
But here’s the problem: contributing to this general civilizational progress that benefits humanity also substantially benefits AI researchers and their work.
My intuitive reaction here (and that of most people, I assume) is something like: “yeah, ok, but surely this doesn’t outweigh the benefits. We can’t tell the overwhelming majority of humans that we’re going to slow down science, economic growth, and the improvements these bring to their lives (and their descendants’ lives) until AI is safe, just because that progress would also benefit the tiny minority making AI less safe.”
**However, there has to be some threshold of harm (from AI development) beyond which we would think slowing down technological progress in general (and not only AI progress) would be worth it.**
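One toy way to see why (a deliberately crude expected-value sketch; the symbols below are illustrative placeholders, not estimates from any source): let $V$ be the value of the long-run future absent an existential catastrophe, $B$ the extra value added by one more unit of general progress (again absent catastrophe), $p$ the baseline probability of AI catastrophe, and $k$ the amount by which that unit of progress raises $p$ by also speeding up AI capabilities. Then the unit of progress changes expected value by

$$\Delta \mathbb{E}[U] = (1 - p - k)(V + B) - (1 - p)\,V = (1 - p - k)\,B - k\,V,$$

which is negative exactly when $k > \frac{(1-p)\,B}{V + B}$. So as long as general progress raises AI risk at all ($k > 0$), such a threshold exists, and it is very low whenever $V \gg B$.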
So what makes us believe that we’re not beyond this threshold?
- ^
For example, on his 80,000 Hours podcast appearance, Zvi Mowshowitz claims that working on advancing AI capabilities is "the most destructive job per unit of effort that you could possibly have". See also the recent growth of the Pause AI movement.
- ^
For recent research and opinions that go in that direction, see Clancy 2023; Clancy and Rodriguez 2024.
Answers
I don't think your bolded conclusion holds. Why does there have to be such a threshold? There are reasonable world-models that have no such thing.
For example: suppose that we agreed not to research AI, and could enforce that if necessary. Then no matter how great our technological progress becomes, the risk from AI catastrophe remains at zero.
We can even suppose that increasing technological progress more generally raises the sanity waterline, and so makes such coordination more likely to occur. Maybe we're near the bottom of a technology-vs-AI-risk curve, where we're civilizationally smart enough to make destructive AI but not smart enough to coordinate on doing something that is not that. In that model, increasing risk from AI would be a case for accelerating technology that isn't AI.
A few minutes thought reveals other models where no such threshold exists.
So there is a case where there may exist such a threshold, and perhaps we are beyond it if so. I don't see evidence that there must exist such a threshold.
↑ comment by Jim Buhler (jim-buhler) · 2024-07-10T14:59:42.927Z · LW(p) · GW(p)
Thanks, that's fair! Such a threshold exists if and only if you assume:
- non-zero AI research (which is the scenario we're interested in here I guess),
- technological progress correlates with AI progress (which as you say is not guaranteed but that still seems very likely to me),
- maybe a few other crucial things I implicitly assume without realizing.
Some relevant resources I found:
- On the Value of Advancing Progress [EA · GW]
- How useful is "progress"?
- On Progress and Prosperity [EA · GW]
2 comments
Comments sorted by top scores.
comment by RHollerith (rhollerith_dot_com) · 2024-07-09T23:20:13.574Z · LW(p) · GW(p)
The problem is that the public correctly perceives that economic growth and technological progress make the average life better, so it is hard to get political support for any measures to slow them down. I can think of two policy proposals that already have a lot of support that we could throw our weight behind. Most supporters of these proposals are unaware of their significant progress-slowing effects, which is the only reason the proposals are as popular as they are. I don't want to say more in public because it makes us AI decelerationists look bad to casual readers, but I welcome PMs on the subject.
To directly answer your question: yes, if you value the survival of our species rather than just the experiences of the current generation of humans, a general slowdown of the economy and of human technology would be a good thing given the current situation around AI research. Unless you have some plan for actually effecting a slowdown that is a lot more effective (and less cynical and almost-dishonorable) than what I suggested in the previous paragraph, though, there are better courses of action for us to focus our attention on.
I'm assuming that the only effective way of slowing down 'progress' is to get laws passed.
Replies from: jim-buhler
↑ comment by Jim Buhler (jim-buhler) · 2024-07-10T15:06:41.008Z · LW(p) · GW(p)
Interesting points, thanks!
> The problem is that the public correctly perceives that economic growth and technological progress make the average life better, so it is hard to get political support for any measures to slow them down.
I mean, if we think these things are actually bad overall (which I'm not convinced of, but maybe), we could at least avoid doing things that directly or indirectly promote or generate more economic growth, for example. There is some very low-hanging fruit there.