Muddling Along Is More Likely Than Dystopia

post by Jeffrey Heninger (jeffrey-heninger) · 2023-10-20T21:25:15.459Z · 10 comments

Contents

  Introduction: An Intuition Pump
  Consequences of Stopping AGI?
  Specific Concerns
  Other Historical Examples
  Maybe AI Will Be Different
  How Long of a Pause?
  Conclusion
10 comments

Summary: There are historical precedents where bans or crushing regulations stop the progress of technology in one industry, while progress in the rest of society continues. This is a plausible future for AI.

Epistemic Status: My intuition strongly disagrees with other people here. I hope to explain my intuition, and provide enough historical evidence to make this intuition at least plausible.

 

Introduction: An Intuition Pump

Suppose you told someone in 1978 that no new nuclear power plants would be built in the US until 2023.[1] This would probably be very surprising. Nuclear power was supposed to be the power of the future.[2] The Nuclear Regulatory Commission had only been created 3 years earlier.

Given this information, someone in 1978 might predict that something terrible was about to happen. Maybe a nuclear war between the USA and USSR that destroys America’s industrial capacity. Maybe economic collapse due to overpopulation or global warming. Maybe an Orwellian police state in time for 1984, or a World Authority designed to regulate nuclear weapons that got out of hand.[3]

None of this happened. Instead, the Nuclear Regulatory Commission increased the regulatory ratchet[4] until building new nuclear power plants became uneconomical. These regulations only applied to the USA, but they seem to have significantly impacted nuclear power research globally. Countries that are building new nuclear power plants are still using designs that were developed before 1970.[5]

Regulation on nuclear power probably did slow US economic growth over the next 45 years compared to the counterfactual.[6] But the past 45 years have hardly been catastrophic. Economic growth and innovation did continue, driven by other industries.

 

Consequences of Stopping AGI?

Some people involved in the debate about slowing or pausing AI seem to think that successfully stopping AI progress over the long term would likely lead to death or dystopia:

Either we figure out how to make AGI go well or we wait for the asteroid to hit.

 - Sam Altman[7]

 

If we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela.

 - Scott Alexander[8]

 

I think we should be quite worried that the global government needed to enforce such a ban would greatly increase the risk of permanent tyranny, itself an existential catastrophe.

 - Nora Belrose[9]

 

It seems likely that we would need to create a worldwide police state, as otherwise [an indefinite AI pause] would fail in the long run.

 - Matthew Barnett[10]

It feels to me like this is the same sort of mistake that our hypothetical person from 1978 made. It might seem like AI will be an extremely important thing in the future, and so something dramatic would have to happen in order to prevent it. I think that we should put more probability on the boring future where regulation stifles this one field, while the rest of society continues as it had before.

This seems like an important disagreement. If you think that our descendants’ lives will be pretty good, and getting better if not unimaginably quickly, then stopping AI progress might be worth it for them. If you think that our descendants’ future will be “short and grim,”[8] then they might be less of a consideration when deciding whether to take this risk now.

 

Specific Concerns

Scott Alexander mentions several specific concerns that cause him to be pessimistic about a future without AI progress. Each of them seems like a real problem that people now and in the future should be trying to solve. We should be improving biosecurity,[11] promoting economic growth,[12] spreading democracy & freedom,[13] and giving people hope in future generations. But none of these things seem even close to a 50% chance of causing death or dystopia in the next 100 years. It would not be worth accepting existential risk from AI, which Scott Alexander estimates as having a ~20% chance of causing human extinction, to avoid these. 
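
To make the shape of this disagreement concrete, here is a minimal back-of-envelope sketch in Python. Scott Alexander's ~20% and 50%+ figures come from the quote in footnote 8; every other number, and the helper function itself, is hypothetical, chosen only to illustrate how the two views differ in both the probability and the severity they assign to a no-AGI future.

```python
# A back-of-envelope sketch of the disagreement, not a rigorous model.
# Alexander's stated numbers (footnote 8): ~20% chance of AI destroying
# the world, and 50%+ chance of "dead or careening towards Venezuela"
# in the next 100 years without AI.

p_ai_doom = 0.20  # Alexander's estimate of AI destroying the world


def non_ai_catastrophe_risk(p_bad: float, extinction_share: float) -> float:
    """Chance of an extinction-level outcome in a no-AGI future.

    p_bad: probability of "death or dystopia" over the next 100 years.
    extinction_share: fraction of those bad outcomes that are truly
        extinction-level rather than bad but recoverable.
    Both inputs are illustrative, not estimates taken from this post.
    """
    return p_bad * extinction_share


# Alexander-style pessimism about the no-AGI future:
print(f"{non_ai_catastrophe_risk(0.5, 0.5):.2f}")  # 0.25, comparable to p_ai_doom
# The "muddling along" view: a much lower p_bad, and mostly recoverable outcomes:
print(f"{non_ai_catastrophe_risk(0.1, 0.1):.2f}")  # 0.01, far below p_ai_doom
```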

Both Nora Belrose and Matthew Barnett are concerned that a global police state would be needed to enforce a long term ban on AI progress. This position does not seem uncommon in the AI safety community. The concerns are that research might shift to locations with fewer regulations, and that algorithmic progress will make AGI possible on a personal computer. The only way to avoid AGI then is a massive expansion of global government power. 

 

Other Historical Examples

I do not think that these concerns have been realized with other technologies. 

Regulations in one industry do not stop progress in all other industries. People in the Bay Area likely underestimate the importance of emerging technologies other than AI, or software more generally, because information technology is disproportionately important in the local economy.[14] I would similarly expect that people living in Detroit in 1950 would have underestimated the importance of emerging technologies other than cars. Lots of progress is still possible without AI. Two emerging technologies I am particularly excited about are fusion and space colonization.

Regulations in one country can stop progress in a single industry. Progress stopping in a particular industry is not that uncommon.[15] Most innovation in a particular industry is done in one or a few cities. These clusters of innovation are difficult to build and maintain, so if one is crushed by regulation, it typically does not just move to another country. On a broader scale, some countries are much more innovative than others. In most industries, including heavily regulated ones, the USA is clearly more innovative than (most of) Europe or East Asia, which are much more innovative than the rest of the world. A lot has to go right: a high standard of living, an educated populace, the rule of law, the possibility of future profit, available capital, and a culture that encourages innovation. Countries which flout international regulations or norms typically do not attract innovation. Once a technology exists, it is much easier for other countries to copy it: the designs and skills needed already exist, and the benefits of the technology are clear. Regulation to prevent innovation is much easier than regulation to prevent proliferation.

I have previously investigated some Resisted Technological Temptations,[16] or technologies where a long-term pause has been achieved through our current institutions.

Most technologies are not banned, nor have they had their progress stifled by regulation. Most technologies are also not as scary as AI: I have a hard time imagining how solar panels or ballpoint pens could constitute an x-risk. Scary-sounding technologies, like weapons of mass destruction or some kinds of medical research, often do face bans or regulations that make their development no longer worth it, and the bans sometimes work.

I don’t want to say that effective bans on scary-sounding technologies happen by default. When they do work, they are the result of a concerted effort. But enacting a ban on a new, potentially dangerous technology seems very doable without disrupting the rest of society.

 

Maybe AI Will Be Different

While this post is mostly about historical precedents of other technologies being stopped, it seems worth saying a few words on AI in particular. There are several reasons why AI might be different from other technologies: 

  1. AI research is easier to do remotely than other emerging technologies. 
  2. Once an AI system is created, it can be transmitted easily, as software.
  3. Simple economic models suggest that powerful AI would be extremely economically advantageous to whoever adopts it.

All technologies are different. Some differences make regulations easier or harder, but none of these feel so different that they make regulation impossible:

  1. Laws can be enforced based on where the research is done or where the researchers live. The USA in particular has an expansive view of where its law can apply.[23]
  2. Most proposed regulations focus on the hardware required to train powerful AI.
  3. Policy makers do not know this. They know that someone is telling them this. They definitely do not know that they will get the economic promises of AGI on the timescales they care about, if they support this particular project. These promises are not that distinguishable from other technologies’ hype.[24]

There are also ways in which regulating AI is easier.

At multiple stages of the supply chain, only one or a few companies in the world are capable of cutting-edge work. This means that only a few actors need to coordinate in order for regulation to be effective.

Current leading AI models require a lot of compute, which is capital intensive and easy to keep track of. This might change with enough improvement in algorithmic efficiency. But we should expect algorithmic progress to slow down dramatically in response to a long-term pause on AI, as capital and talent move to other industries.

Lots of substances and items are regulated, and the details of this regulation vary widely based on what it is and what the government is trying to avoid.[25] Regulating GPUs will have some unique challenges, but does not seem impossible under our current institutions.

 

How Long of a Pause?

Most of the historical evidence is for global pauses that have lasted about 50 years. This is useful evidence for discussing a 100 year pause. If “long term” means 1,000 years, then there is much less historical evidence. Matthew Barnett has argued that a regulatory ratchet within existing institutions might accomplish a 50 year pause in AI research, but something more dramatic would be needed for a 1,000 year pause.

I am skeptical that a global police state would be easier to maintain than more normal regulations for 1,000 years. My model for how to sustain institutions on this time scale is:

  1. Build an institution that lasts for a generation.
  2. Convince the rising generation that this institution is a good thing to maintain.

If you fail at (2), then it does not matter what institution was built. If not even the elite believe that the police state is a good thing, then it will not maintain itself.[26] An institution which has less hard power, but is better at getting people to believe in it, is more likely to last 1,000 years.

 

Conclusion

Building AGI is an extremely uncertain endeavor. It might lead to Our Glorious Future. It might lead to human extinction. It might not even be possible. If we decide to not try to build AGI, the future seems much less uncertain. Society will continue to be clearly not optimal, but also far from dystopian. Making scientific, technological, economic, social, and political progress will continue to be hard, but people will continue to do it. We can continue to hope for at least marginal improvements for our children, and they for their children, long into the future.

It should not be surprising if a scary-sounding technology faces a regulatory ratchet that slows and then stops all progress in that field. This is not death or dystopia - it’s normal.
 

Thanks to Aaron Scher, Matthew Barnett, Rose Hadshar, Harlan Stewart, and Rick Korzekwa for useful discussion on this topic.

Preview image by Theen Moy: https://www.flickr.com/photos/theenmoy/8003177753.

  1. ^

    This is not quite fair because the date range extends from the start of construction of one plant (Shearon Harris) to the end of construction of a different plant (Vogtle Unit 3). Vogtle Unit 3 started construction in 2013. There is also a nuclear power plant (Watts Bar Unit 2) that started construction in 1973 and was completed in 2016.

  2. ^

    In 1973, the Atomic Energy Commission projected that 55.8% of the USA’s electricity would come from nuclear power by 2000, which was lower than it had previously projected. This did not happen: nuclear power has accounted for about 20% of the USA’s electricity since the late 1980s.

    Anthony Ripley. A.E.C. Lowers Estimate Of Atom Power Growth. New York Times. (1973) https://www.nytimes.com/1973/03/08/archives/aec-lowers-estimate-of-atom-power-growth.html.

  3. ^

    Some prominent people, including Bertrand Russell, were advocating the creation of a World Authority to prevent the existential risk from nuclear weapons:

    A much more desirable way of securing world peace would be by a voluntary agreement among nations to pool their armed forces and submit to an agreed International Authority. This may seem, at present, a distant and Utopian prospect, but there are practical politicians who think otherwise. A World Authority, if it is to fulfill its function, must have a legislature and an executive and irresistible military power. All nations would have to agree to reduce national armed forces to the level necessary for internal police action. No nation should be allowed to retain nuclear weapons or any other means of wholesale destruction. … In a world where separate nations were disarmed, the military forces of the World Authority would not need to be very large and would not constitute an onerous burden upon the various constituent nations.

    Bertrand Russell. Has Man A Future? (1961) Quoted from Global Governance Forum. (Accessed October 17, 2023) https://globalgovernanceforum.org/visionary/bertrand-russell/.

  4. ^

    Mark R. Lee. The Regulatory Ratchet: Why Regulation Begets Regulation. University of Cincinnati Law Review 87.3. (2019) https://scholarship.law.uc.edu/cgi/viewcontent.cgi?article=1286&context=uclr.

  5. ^

    For example, one “new” design for a nuclear power plant is a molten salt reactor. One currently exists: TMSR-LF1, an experimental reactor producing 2 MW of thermal power in northwestern China. The design is based on the molten salt reactor experiment (MSRE) which produced 7 MW of thermal power at Oak Ridge National Lab in the USA from 1965-1969. 

    Similarly, China has a small modular reactor which began power production in 2021, HTR-PM. It is a pebble-bed reactor, based on a demonstration reactor in Germany (AVR), which ran from 1967-1988. 

    All other nuclear power plants use reactor types that are even older.

  6. ^

    I have previously estimated the direct value foregone by the prohibitively high costs of nuclear power in the USA. I also expect there to have been additional indirect value as a result of having less expensive electricity.

    Resisted Technological Temptation: Nuclear Power. AI Impacts Wiki. (Accessed October 18, 2023) https://wiki.aiimpacts.org/responses_to_ai/technological_inevitability/incentivized_technologies_not_pursued/nuclear_power.

  7. ^
  8. ^

    The entire quote is:

    Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism + mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality. I don’t spend much time worrying about any of these, because I think they’ll take a few generations to reach crisis level, and I expect technology to flip the gameboard well before then. But if we ban all gameboard-flipping technologies (the only other one I know is genetic enhancement, which is even more bannable), then we do end up with bioweapon catastrophe or social collapse. I’ve said before I think there’s a ~20% chance of AI destroying the world. But if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela. That doesn’t mean I have to support AI accelerationism because 20% is smaller than 50%. Short, carefully-tailored pauses could improve the chance of AI going well by a lot, without increasing the risk of social collapse too much. But it’s something on my mind.

    Scott Alexander. Pause for Thought: The AI Pause Debate. Astral Codex Ten. (2023) https://www.astralcodexten.com/p/pause-for-thought-the-ai-pause-debate.

  9. ^

    Nora Belrose. AI Pause Will Likely Backfire. EA Forum. (2023) https://forum.effectivealtruism.org/s/vw6tX5SyvTwMeSxJk/p/JYEAL8g7ArqGoTaX6.

  10. ^

    The entire quote is:

    Note that I am not saying AI pause advocates necessarily directly advocate for a global police state. Instead, I am arguing that in order to sustain an indefinite pause for sufficiently long, it seems likely that we would need to create a worldwide police state, as otherwise the pause would fail in the long run. One can choose to “bite the bullet” and advocate a global police state in response to these arguments, but I’m not implying that’s the only option for AI pause advocates.

    One reason to bite the bullet and advocate a global police state to pause AI indefinitely is that even if you think a global police state is bad, you could think that a global AI catastrophe is worse. I actually agree with this assessment in the case where an AI catastrophe is clearly imminent.

    However, while I am not dogmatically opposed to the creation of a global police state, I still have a heuristic against pushing for one, and think that strong evidence is generally required to override this heuristic. I do not think the arguments for an AI catastrophe have so far met this threshold. The primary existing arguments for the catastrophe thesis appear abstract and divorced from any firm empirical evidence about the behavior of real AI systems.

    Matthew Barnett. The possibility of an indefinite AI pause. EA Forum. (2023) https://forum.effectivealtruism.org/s/vw6tX5SyvTwMeSxJk/p/k6K3iktCLCTHRMJsY.

  11. ^

    Toby Ord estimates the biosecurity x-risk over the next century to be about 1/30 in The Precipice. The biosecurity community seems to have been more successful at fighting x-risk than the AI safety community. There are already extensive regulations in the countries where most research is done and major international treaties against developing biological weapons. If you think that AI is more dangerous than synthetic biology, then it does not make sense to advance AI in order to improve biosecurity. It is not even clear whether increasingly powerful AI would make biosecurity better or worse.

    For comparison, Toby Ord estimates the x-risk from asteroid impacts over the next century to be about 1/1,000,000. I interpret Sam Altman’s stated concern about asteroids as a proxy for all other existential risk. Otherwise, his risk estimates seem off by many orders of magnitude.

  12. ^

    I do not think that we have run out of human-achievable economic, technological, or scientific progress. The median person will likely be much wealthier in 100 years than today, even without AGI.

  13. ^

    Political and social trends in most countries over the last decade don’t seem good. Political and social trends in most countries over the last century seem wonderful. We should look at both when predicting the next century.

  14. ^

    What fraction of US GDP would you predict is in the information sector? The information sector includes both information technology and traditional media.

    Answer: 5.5%.

    https://www.bls.gov/emp/tables/output-by-major-industry-sector.htm.

  15. ^

    Examples of Progress for a Particular Technology Stopping. AI Impacts Wiki. (Accessed October 19, 2023) https://wiki.aiimpacts.org/ai_timelines/examples_of_progress_for_a_particular_technology_stopping.

  16. ^
  17. ^

    Resisted Technological Temptation: Nuclear Power. AI Impacts Wiki. (Accessed October 18, 2023) https://wiki.aiimpacts.org/responses_to_ai/technological_inevitability/incentivized_technologies_not_pursued/nuclear_power.

  18. ^

    Resisted Technological Temptation: Geoengineering. AI Impacts Wiki. (Accessed October 18, 2023) https://wiki.aiimpacts.org/responses_to_ai/technological_inevitability/incentivized_technologies_not_pursued/geoengineering.

  19. ^

    Resisted Technological Temptation: Vaccine Challenge Trials. AI Impacts Wiki. (Accessed October 18, 2023) https://wiki.aiimpacts.org/responses_to_ai/technological_inevitability/incentivized_technologies_not_pursued/vaccine_challenge_trials.

  20. ^

    I do not know what Israel’s nuclear program is like, or how much of it is the result of technology transfer from the US as opposed to indigenous innovation.

  21. ^

    Offensive biological weapons use is banned by the Geneva Protocol (1925) and development, production, acquisition, transfer, stockpiling & use of biological weapons is banned by the Biological Weapons Convention (1972). In addition to the treaties, biological weapons seem to have a significant taboo against their use.

    Michelle Bentley. The Biological Weapons Taboo. War on the Rocks. (2023) https://warontherocks.com/2023/10/the-biological-weapons-taboo/.

  22. ^

    Iulia Georgescu. Bringing back the golden days of Bell Labs. Nature Reviews Physics 4. (2022) p. 76-78. https://www.nature.com/articles/s42254-022-00426-6.

  23. ^

    For example, Sam Bankman-Fried is being tried in a US federal court, despite having moved himself and his business to The Bahamas.

    Another example involves the US Justice Department having FIFA officials from various countries arrested in Switzerland for corruption. “United States law allows for extradition and prosecution of foreign nationals under a number of statutes … In this case, she said, FIFA officials used the American banking system as part of their scheme.”

    Stephanie Clifford and Matt Apuzzo. After Indicting 14 Soccer Officials, U.S. Vows to End Graft in FIFA. New York Times. (2015) https://www.nytimes.com/2015/05/28/sports/soccer/fifa-officials-arrested-on-corruption-charges-blatter-isnt-among-them.html.

  24. ^

     For example, Project Excalibur promised to neutralize the threat of Soviet nuclear weapons by destroying dozens of ICBMs (with hundreds of warheads) as they launched. It ended up being infeasible.

  25. ^

    Examples of Regulated Things. AI Impacts Wiki. (Accessed October 19, 2023) https://wiki.aiimpacts.org/responses_to_ai/examples_of_regulated_things.

  26. ^

    This is my oversimplified model of what happened to the USSR.

10 comments


comment by Roman Leventov · 2023-10-22T08:12:31.386Z

Thanks for pointing this out. I find it particularly epistemically suspicious that all the quoted people (who currently predict a grim future for humanity in the next 100 years if AGI isn't built and deployed massively) could hardly have had median AGI timelines shorter than 30 years five years ago, with significant probability weight on 70+ year timelines, yet they didn't voice this as a significant concern, let alone an existential risk, at that time. And I don't think anything that has happened in the last 5 years should have made them so much more pessimistic. Trump had already won the US election, Brexit had already been voted for, the erosion of epistemic and civic commons and the democratic backsliding were already happening, and the climate situation was basically as obviously grave as it is today. Civilisational unpreparedness for a major pandemic was also obvious, as was the fact that nothing was being done to fix it.

This suggests to me that the view these people have currently converged on, that "x-risk from building AGI is high, but the risk of civilisational collapse in the next 100 years without AGI is even higher", is a trick of the mind that would probably not have happened if AGI timelines hadn't shrunk so dramatically for everyone.

I'm also aware that Will MacAskill argued for a 30% chance of stagnation and civilisational collapse this century in his book "What We Owe The Future" last year, which could have prompted various people to update. But I was mostly unconvinced by that argument, and I wonder if the argument itself was an example of the same psychological reaction (or a sort of counter-reaction) to massively shrinking timelines.

Replies from: Roman Leventov
comment by Roman Leventov · 2023-10-22T08:44:07.569Z

I would further hypothesize that these inferences are the result of the brain's attempt to make the fear and excitement about AGI coherent. If the person is not a longtermist, they typically reach for the idea that AGI will be a massive upside for the people currently living (and I think Altman is in this camp, despite being quoted here). But for longtermists, such as Alexander, this "doesn't work" as an explanation of their intuitive excitement about AGI, so they reach for the idea of "massive risk without AGI".

I should say that I don't imagine these hypotheticals in a void: I feel something like "sub-excitement" (or proper excitement, which I deliberately suppress) about AGI myself, and I was also close to being convinced by the arguments of MacAskill and Alexander.

comment by mako yass (MakoYass) · 2023-10-22T03:18:06.105Z

This article is mostly historical analogy. Historical analogy is blind to unprecedented change, and unprecedented change is routine today. In this case the author failed to address a pretty important impending and predictable change in the cost of surveillance, and consequently an unprecedented stabilization of the state's monopoly on force and a raising of the limits on the sophistication and thoroughness of policies that can be enforced at scale.

In 10 to 20 years, when tensor processors are cheap and power-efficient, it will be common for networks of self-replenishing autonomous drones to surveil and police vast areas of land. It's obvious that this will be deployed as soon as it's cost effective in Gaza (so, potentially even before analog tpus), and it's probable that Israel will try to legitimize it by presenting it as a law enforcement tool to be placed (mostly) under the control of their chosen Palestinian authorities. Hamas will cease to exist, Palestine will appear peaceful and it will rebuild. China will use that as a pretense to start using their own police swarms at home. I can't see further ahead than that. But it's entirely possible the practice just keeps spreading due to the obvious social benefits of simply not having violent crime any more.

And at that point it becomes possible for an increasingly correlated elite consensus to ban, utterly, more things. And with the threat of uprising totally banished, we may lose a moderating force that we didn't know we had. We may see a lot more restrictions. I don't know if that makes an indefinite ban on AGI a genuine risk, but it does invalidate an argument by historical analogy!

Matthew Barnett has argued that a regulatory ratchet within existing institutions might accomplish a 50 year pause in AI research

I'm also not sure that a 50 year ban isn't a dystopia, given that 50 years (plus 10) is long enough for most of the people I love to die of old age, and me also. I think I'm not alone in considering it very conceivably arguable that another cycle of the churning of mortality would be almost as bad an outcome as extinction and misalignment. I'm still not sure; it depends on sub-questions about the evolutionary tendencies of misaligned AGI ecosystems (i.e., how kind they would be to each other and how beautiful a world they would build), and on questions about the preferences of humanity that a few of us are wise enough or human enough to answer. But it's definitely not something I'd hope to see.

Replies from: stochastic_parrot
comment by stochastic_parrot · 2023-10-22T12:29:27.412Z

In 10 to 20 years, when tensor processors are cheap and power-efficient, it will be common for networks of self-replenishing autonomous drones to surveil and police vast areas of land.

Is there a betting market for this?

Replies from: MakoYass
comment by mako yass (MakoYass) · 2023-10-22T20:43:12.678Z

The thought of making one crossed my mind, but 10 year bets about things that seem obvious to me are unappealing. To bet in them is to stake my reputation not so much on the event, but on me being able to convince the market, soon enough before the resolution date for me to exit, of something that they're currently — for reasons I don't understand — denying (or, if they are not in denial about it, I won't make much by betting). It's not a bet on reality, it's a bet on the consensus reality.

I'm not used to that yet.

Replies from: MakoYass
comment by mako yass (MakoYass) · 2023-10-22T21:02:55.985Z

It feels like the game is: I make the market, and this is the first time they've ever heard this take. If I present it well, they bet the same way as me and I make no mana. If I present it poorly, they narcissize and bet badly, but there's no guarantee they'll reverse their bets long enough before the resolution date to make it worth it to me.

This is an odd game.

So I guess masterful play would be to present the issue in a way that convinces people that I'm wrong about it, but in a way that's unstable and will reverse within a year.

A very odd game.

But not a meritless one. There's probably a lot of social good to be produced by learning to clown people like that.

comment by IKumar · 2023-10-23T09:59:08.240Z

Policy makers do not know this. They know that someone is telling them this. They definitely do not know that they will get the economic promises of AGI on the timescales they care about, if they support this particular project. 

I feel differently here. It seems that a lot of governments have woken up to AI in the past few years, and are putting it at the forefront of national strategies, e.g. see the headline here. In the past year there has been a lot of movement in the regulatory space, but I’m still getting undertones of ‘we realise that AI is going to be huge, and we want to establish global leadership in this technology’.

So going back to your nuclear example, I think the relevant question is: ‘What allowed policymakers to gain the necessary support to push stringent nuclear regulation through, even though it offered huge economic benefits?’. I think there are two things:

  1. It takes a significant amount of time, ~6-8 years, for a nuclear power plant to be built and begin operating (and even longer for it to break even). So whilst they are economically practical in the long term, it can be hard to garner the support for the huge initial investment. To make this clearer, imagine if it took ~1 year to build a nuclear power plant and 2 years for it to break even. If that were the case, I think it would have been harder to push stringent regulation through.
  2. There was a lot of irrational public fear about anything nuclear, due to powerplant accidents, the Cold War and memories of nuclear weapon use during WWII.

 

With respect to AI, I don’t think (1) holds. That is, the economic benefits of AI will be far easier to realise than those of nuclear (you can train and deploy an AI system within a year, and likely break even a few years after that), meaning that policymaker support for regulation will be harder to secure.

(2) might hold, this really depends on the nature of AI accidents over the next few years, and their impacts on public perception. I’d be interested in your thoughts here.

Replies from: jeffrey-heninger
comment by Jeffrey Heninger (jeffrey-heninger) · 2023-10-23T18:07:23.085Z

It seems to me that governments now believe that AI will be significant, but not extremely advantageous. 

I don't think that many policy makers believe that AI could cause GDP growth of 20+% within 10 years. Maybe they think that powerful AI would add 1% to GDP growth rates, which is definitely worth caring about. It wouldn't be enough for any country which developed it to become the most powerful country in the world within a few decades, and would be an incentive in line with some other technologies that have been rejected.

The UK has AI as one of their "priority areas of focus", along with quantum technologies, engineering biology, semiconductors and future telecoms in their International Technology Strategy. In the UK's overall strategy document, 'AI' is mentioned 15 times, compared to 'cyber' (45 times), 'nuclear' (43), 'energy' (37), 'climate' (30), 'space' (17), 'health' (15), 'food' (8), 'quantum' (7), 'green' (6), and 'biology' (5). AI is becoming part of countries' strategies, but I don't think it's at the forefront. The UK government is more involved in AI policy than most governments.

comment by Kaj_Sotala · 2024-03-25T10:58:31.709Z

I'd note that while this has a nice list of technologies that were successfully stopped, it's missing examples where this failed. Some examples:

  • The prohibition of alcohol failed
  • Various illegal drugs continue to be sold and used in enormous amounts despite massive efforts to stop this
  • Online piracy of music, movies, games, etc. seems to have declined from its peak due to better online marketplaces, but illegal file-sharing is still a thing
  • China failed to keep silk production just to itself

I think a common factor in these is that they are much easier to produce/do undetected and with limited resources than e.g. nuclear power.

comment by Donald Hobson (donald-hobson) · 2023-10-28T21:14:26.909Z

I think two concepts are being confused here: the conditional probabilities, and the marginal changes due to actions.

It's possible that

  1. Regulating AI inevitably causes a "muddling along" world.
  2. Across potential futures, regulation is so rare that there are far more "asteroid knocks us back to the Stone Age" worlds than "muddling along without AI" worlds.

Don't confuse the counterfactual future without AI with the conditional future without it.