The Broader Fossil Fuel Community
post by Jeffrey Heninger (jeffrey-heninger) · 2023-05-06T14:49:38.633Z
In his blog post, Why Not Slow AI Progress?,[1] Scott Alexander notes that AI Safety researchers and AI Capabilities researchers seem to be part of the same community. This is kind of weird: fossil fuel companies and environmentalists seem to be much more socially distinct. Scott gives several reasons why this is the case and why we might want it to be.
There is another reason that these are the same community: AI Safety researchers and AI Capabilities researchers are philosophically much closer to each other than fossil fuel companies and environmentalists. In particular, both parts of the Broader AI Community share the following belief:
The long-term goal of humanity should be a technological utopia, and the most useful tool for building Our Glorious Future[2] is AGI.
The main disagreement between the two parts of the community is about how hard it will be to align the AGI with human values.
This statement is not at all obvious to people outside of the Broader AI Community. Reasonable people can and do disagree with all parts of it:[3]
- Should humanity have long-term goals?
- Should our goals be utopian?
- Are the problems that we will have to solve to get to Our Glorious Future mostly technological or mostly social?
- Is the best way to achieve Our Glorious Future through AGI?
AI Safety researchers and AI Capabilities researchers tend to answer these questions the same way, while other intellectual communities and the general public have more varied views. If anything, it seems as though the futurism of AI Safety researchers is more extreme.
The comparison to fossil fuels is telling. A similar sentence for these communities would be:
A short- to medium-term goal of humanity should be a society with abundant energy, and the most useful tools for achieving this are fossil fuels.
This belief is not shared by both fossil fuel companies and environmentalists.
I will place environmentalists into three categories, based on how they relate to this belief:
- De-growth environmentalists reject that society should have abundant energy. Instead, they want a Return to Nature with less economic activity overall. I am skeptical of this position because I doubt that most of these environmentalists recognize how much more human suffering it implies, but this does seem to be a major force within the environmental movement.
- Alternative energy environmentalists accept that humanity should have abundant energy, but believe that that energy should not come from fossil fuels. Renewable energy advocates and nuclear power advocates both fall into this category, along with people working on new technologies like fusion and deep geothermal. This seems to me to be the most influential part of the environmental movement.
- Clean fossil fuel environmentalists accept that fossil fuels will be an important energy source for the future and try to make them less damaging to the environment. They advocate for scrubbing coal to reduce sulfur emissions, oil pipelines that are less likely to spill, transitions from coal to natural gas, and on-site carbon capture and storage.
I think that it would be fair to call clean fossil fuel environmentalists part of the Broader Fossil Fuel Community. I would not be surprised if they went to the same parties, or if someone who had worked on oil pipeline safety went on to found a fracking company.
But most environmentalists are not part of the Broader Fossil Fuel Community. They do not accept that fossil fuels will be an important part of the future.
AI Safety researchers are like clean fossil fuel environmentalists. They believe that AI is the most important thing about the future - perhaps even more strongly than AI Capabilities researchers do. It is unsurprising that they end up in the same community, that AI Safety work often advances AI Capabilities, and that AI Safety researchers regularly go work for - or create - AI Capabilities companies.
I would like to see more emphasis on the equivalent of alternative energy environmentalists for AI: people who care about long-term progress and try to work towards it, while refusing to accept that building AGI is inevitable, regardless of the existential risk.
What does Our Glorious Future entail? Space colonization?[4] Abundant energy? Eradicating disease? Human longevity? For each (or all) of these things, we could ask: Is the best[5] way to achieve this goal to build AGI and ask it to solve this problem, or is it better to attack this problem directly? It is far from obvious to me that building AGI is the best way to reach Our Glorious Future. How much compute do you need to colonize the stars?
- ^
Scott Alexander. Why Not Slow AI Progress? Astral Codex Ten. (2022) https://astralcodexten.substack.com/p/why-not-slow-ai-progress.
- ^
A short, explicit description of Our Glorious Future is:
Riva-Melissa Tez. The Grandest Vision for Humanity (Light). LessWrong. (2021) https://www.lesswrong.com/posts/rzruCSWMXja6x9BdN/the-grandest-vision-for-humanity-light.
See also: Nate Soares & Rob Bensinger. Superintelligent AI is necessary for an amazing future, but far from sufficient. LessWrong. (2022) https://www.lesswrong.com/posts/HoQ5Rp7Gs6rebusNP/superintelligent-ai-is-necessary-for-an-amazing-future-but-1.
- ^
My personal answers to these questions are: Yes, Yes (with caution), Mixed (both seem important), and No.
I do think that there exist good arguments in favor of No for each question, and that a reasonable person could answer either way for each of them. If you think that no good argument in favor of one of these positions exists, ask, and I will provide one in the comments.
- ^
Nick Beckstead. Will we eventually be able to colonize other stars? EA Forum. (2014) https://forum.effectivealtruism.org/posts/5dgFWods87kkE9TpZ/will-we-eventually-be-able-to-colonize-other-stars-notes.
- ^
‘Best’ here could mean ‘the easiest strategy for making progress’ or ‘the strategy with lower catastrophic risk’.
Comments
comment by dr_s · 2023-05-07T07:31:36.508Z
I agree with this point, and I am among those who think that AGI is essentially a trap that never goes well and we shouldn't aspire to it. I think there's another interesting point to make: when it comes to climate change, I don't support degrowth because we're already way too deep into our dependency on energy, and without it billions would die. But if, back when the industrial revolution started, we had begun with an understanding that growing too dependent on fossil fuels was dangerous and we should use them only as a jumping pad to switch to other sources of energy ASAP, that would have produced slower growth and some immediate opportunity costs, but odds are the overall path would have included a lot less suffering. When it comes to AI, we're still in a position much closer to that, so even the equivalent "degrowth" position would be sustainable. Though I'm not sure what that would be (no AI at all, I guess?).