Ten counter-arguments that AI is (not) an existential risk (for now)

post by Ariel Kwiatkowski (ariel-kwiatkowski) · 2024-08-13T22:35:15.341Z · LW · GW · 5 comments

Contents

  Competent non-aligned agents
  Second species argument
  Loss of control via inferiority
  Loss of control via speed
  Human non-alignment
  Catastrophic tools
  Powerful black boxes
  Multi-agent dynamics
  Large impacts
  Expert opinion
  Closing thoughts

This is a polemic responding to the ten arguments [LW · GW] post. I'm not a regular LW poster, but I'm an AI researcher and a mild AI worrier.

I believe that AI progress, and the risks associated with it, is one of the most important things for humanity to figure out in the current year. And yet, in most discussions about x-risk, I find myself unaligned with either side.

My overall thesis about AI x-risk is that it's absolutely real, but also far enough into the future that at this moment, we should simply continue progress on both capabilities and safety. I'm not trying to argue that sufficiently powerful AI could never pose an x-risk; that belief seems rather silly.

Disclaimers:

  1. This is largely thinking out loud, describing why I personally disagree (or agree) with the listed arguments. In the best case scenario, maybe I'll convince someone, or someone will convince me - I'd hate to be wrong on this and actively contribute to our doom.
  2. In some cases, I could go a few steps ahead in the discussion, provide a rebuttal to my arguments and a re-rebuttal to that. I'm consciously not doing that for several reasons which are hopefully intuitive and not important enough to list here.
  3. I'm deliberately only commenting on the summaries, and not the entire body of work behind each summary, mainly to keep things tractable. If some piece of text convinces humanity about the seriousness of x-risk, it won't be an enormous Harry Potter fanfiction (no offense). I like brevity.

Competent non-aligned agents

  1. Humans will build AI systems that are 'agents', i.e. they will autonomously pursue goals
  2. Humans won’t figure out how to make systems with goals that are compatible with human welfare and realizing human values
  3. Such systems will be built or selected to be highly competent, and so gain the power to achieve their goals
  4. Thus the future will be primarily controlled by AIs, who will direct it in ways that are at odds with long-run human welfare or the realization of human values
  1. I agree that we will build agents - we already try to do that.
  2. We only need them to be as aligned as they are powerful. A chatbot that doesn't understand actions or function calling won't be an x-risk, no matter how violently misaligned it is.
  3. It does seem natural that we will favor competent systems. We will also favor aligned systems, or at least surface-aligned ones - in fact, the usefulness of a system is directly correlated with both its capabilities and its alignment.
  4. There's a huge leap from the previous point to stating that "the future will be primarily controlled by AIs". I don't even necessarily disagree in principle - but we're nowhere near the level of capabilities that can lead to future-controlling AI.

Overall this is the core x-risk argument that I completely agree with - but I think it's unlikely we get there in the foreseeable future, with the current paradigms.

Second species argument

  1. Human dominance over other animal species is primarily due to humans having superior cognitive and coordination abilities
  2. Therefore if another 'species' appears with abilities superior to those of humans, that species will become dominant over humans in the same way
  3. AI will essentially be a 'species' with superior abilities to humans
  4. Therefore AI will dominate humans
  1. I fundamentally agree that intelligence is the main factor giving humanity its dominance over other species, and over (some of) our planet in general.
  2. Possibly, but not necessarily. Through natural selection, humanity is wired to optimize for survival and reproduction. This gives us certain incentives (world domination). On the other hand, AIs that we create won't necessarily have the same incentives.
  3. & 4. This feels like a very vibe-based argument. It relies on the intuition that AI is like a species, and that a superior species would dominate us, to conclude that AI will dominate us.

Not a fan of this argument. Might be effective as an intuition pump if someone can't even conceive of how a powerful AI could lead to x-risk, but I don't take it too seriously.

Loss of control via inferiority

  1. AI systems will become much more competent than humans at decision-making
  2. Thus most decisions will probably be allocated to AI systems
  3. If AI systems make most decisions, humans will lose control of the future
  4. If humans have no control of the future, the future will probably be bad for humans
  1. Sure - at some point in the future, maybe. 
  2. Maybe, maybe not. Humans tend to have a bit of an ego when it comes to letting a filthy machine make decisions for them. But I'll bite the bullet.
  3. There are several levels on which I disagree here. Firstly, we're assuming that "humans" have control of the future in the first place. It's hard to assign coherent agency to humanity as a whole - it's more of a weird mess of conflicting incentives, and nobody really controls it. Secondly, if those AI systems are designed in the right way, they might just become tools for humanity to sorta steer the future the way we want it.
  4. This is largely irrelevant after my previous response - we don't really control the future, plus I don't buy the implication contained in this point.

This largely sounds like a rehash of the previous argument. AI will become more powerful, we can't control it, we're screwed. The argument has a different coat of paint, so my response is different, but ultimately the point is that an AI will take over the world with us as an under-species.

Loss of control via speed

  1. Advances in AI will produce very rapid changes, in available AI technology, other technologies, and society
  2. Faster changes reduce the ability for humans to exert meaningful control over events, because they need time to make non-random choices
  3. The pace of relevant events could become so fast as to allow for negligible relevant human choice
  4. If humans are not ongoingly involved in choosing the future, the future is likely to be bad by human lights
  1. "Very rapid" is very vague. So far, there are many barriers to mass adoption of the existing AI technologies. Could it accelerate? Sure. But pretty much all of technological progress is exponential anyways.
  2. This seems very general, but I'll take it - if things happen faster, they're harder to affect or control.
  3. I'm not sure what time scale we're talking about here. We could imagine a hypothetical superintelligent AGI that transcends within minutes of deployment and takes over the world, but this is extraordinarily unlikely. We could also see "a few months" as very fast (e.g. for legislation), but then many things in our own decision making can be accelerated as well.
  4. We are always involved in choosing the future, if nothing else, then by designing the AI that would go zoom.

In one sense, this argument is obviously true - if we get an AI that's superintelligent, super-quickly, and misaligned, then we're probably screwed because we won't react in time. But it's a spectrum, and the real x-risk is only on the extreme end of the spectrum.

Human non-alignment

  1. People who broadly agree on good outcomes within the current world may, given much more power, choose outcomes that others would consider catastrophic
  2. AI may empower some humans or human groups to bring about futures closer to what they would choose
  3. From 1, that may be catastrophic according to the values of most other humans
  1. Agreed - pretty much any moral system, taken to the extreme, will end up in some sort of totalitarian absurdity.
  2. Agreed - if it's entirely in the hands of one individual/group that doesn't have restraint in their desire for control.
  3. Agreed.

Here we go, full agreement. This really is an issue with any sufficiently powerful technology. If only one person/country had nukes, we'd probably be worse off than in the current multipolar situation. Can the same multipolar approach help in the specific case of AI? Maybe - that's why I tend to favor open-source approaches, at least as of 2024 with the current state of capabilities. So far, for other technologies, we're somehow handling things through governance, so we should keep doing this with AI - and everything else.

Catastrophic tools

  1. There appear to be non-AI technologies that would pose a risk to humanity if developed
  2. AI will markedly increase the speed of development of harmful non-AI technologies
  3. AI will markedly increase the breadth of access to harmful non-AI technologies
  4. Therefore AI development poses an existential risk to humanity
  1. Nukes, MassiveCO2EmitterMachine9000, FalseVacuumDecayer, sure
  2. It will increase the speed of development of all technologies
  3. It will increase the breadth of access to all technologies
  4. It can pose a risk, or it can save us. Same as most technologies.

Most technologies have good and bad uses. AI will be a force multiplier, but if we can't handle Nukes 2.0 obtained via AI, we probably can't handle Nukes 2.0 obtained via good ol' human effort. This is fundamentally an argument against technological progress in general, which could be a significantly larger argument. My overall stance is that technological progress is generally good.

Powerful black boxes

  1. So far, humans have developed technology largely through understanding relevant mechanisms
  2. AI systems developed in 2024 are created via repeatedly modifying random systems in the direction of desired behaviors, rather than being manually built, so the mechanisms the systems themselves ultimately use are not understood by human developers
  3. Systems whose mechanisms are not understood are more likely to produce undesired consequences than well-understood systems
  4. If such systems are powerful, then the scale of undesired consequences may be catastrophic
  1. Sure, that's often the case.
  2. Yes and no. I personally hate the "do we understand NNs" debacle, because it entirely depends on what we mean by "understand". We can't convert an LLM into a decision tree (much to the annoyance of people who still insist that it's all "if" statements). At the same time, there is a lot of research into interpreting transformers and NNs in general. It's not inherently impossible for these systems to be interpretable.
  3. Maybe a bit, but I suspect this is largely orthogonal. We don't need to understand how each atom in a gas behaves to put it in an engine and produce work. If we understand it at a high level, that's probably enough - and we have a lot of high-level interpretability research.
  4. Take a small problem with a small model, multiply it by a lot for a powerful model, and you'll get an x-risk. Or at least a higher likelihood of x-risk. 

I was always a little bit anti-interpretability. Sure, a more interpretable model is better than a less interpretable one, but at the same time, a model doesn't need to be fully interpretable to be powerful and aligned.

The core argument here seems to be that if the black boxes remain forever pitch black, and we multiply their potential side effects by a gazillion (in the limit of a "powerful" AI), then the consequences will be terrible. Which... sure, I guess. If it actually remains entirely inscrutable, and it becomes super powerful, then bad outcomes are more likely. But not by much in my opinion.

Multi-agent dynamics

  1. Competition can produce outcomes undesirable to all parties, through selection pressure for the success of any behavior that survives well, or through high stakes situations where well-meaning actors' best strategies are risky to all (as with nuclear weapons in the 20th Century)
  2. AI will increase the intensity of relevant competitions
  1. Tragedy of the commons, no disagreements here.
  2. Sure.

This feels like a fairly generic "force multiplier" argument. AI, just like any technology, will amplify everything that humans do. So if you take any human-caused risk, you can amplify it in the "powerful AI" limit to infinity and get an x-risk.

This goes back to technological progress in general. The same argument can be made for electricity, so while I agree in principle that it's a risk, it's not an extraordinary risk.

Large impacts

  1. AI development will have very large impacts, relative to the scale of human society
  2. Large impacts generally raise the chance of large risks
  1. As an AI optimist, I certainly hope so.
  2. Generally, sure.

Once again, take AI as a mysterious multiplier for everything that we do. Take something bad that we (may) do, multiply it by AI, and you get an x-risk.

Expert opinion

  • The people best placed to judge the extent of existential risk from AI are AI researchers, forecasting experts, experts on AI risk, relevant social scientists, and some others
  • Median members of these groups frequently put substantial credence (e.g. 0.4% to 5%) on human extinction or similar disempowerment from AI
  1. Hey, that's me!
  2. Laypeople should definitely consider expert opinion on things that they themselves are not that familiar with. So I agree that people generally should be aware of the risks, maybe even a bit worried. That being said, it's not an argument that should significantly convince people who know enough to form their own informed opinions - something something appeal to authority.

Closing thoughts

I liked that post. It's a coherent summary of the main AI x-risks that I can address. I largely agree with them in principle, but I'm still not convinced that any threat is imminent. Most discussions that I tried to have in the past usually started from step zero ("Let me explain why AI could even be a risk"), which is just boring and unproductive. Perhaps this will lead to something beyond that.

5 comments

Comments sorted by top scores.

comment by Steven Byrnes (steve2152) · 2024-08-14T01:05:43.435Z · LW(p) · GW(p)

I think it's unlikely we get there in the foreseeable future, with the current paradigms

It would be nice if you could define “foreseeable future”. 3 years? 10 years? 30? 100? 1000? What?

And I’m not sure why “with the current paradigms” is in that sentence. The post you’re responding to is “Ten arguments that AI is an existential risk”, not “Ten arguments that Multimodal Large Language Models are an existential risk”, right?

If your assumption is that “the current paradigms” will remain the current paradigms for the “foreseeable future”, then you should say that, and explain why you think so. It seems to me that the paradigm in AI has had quite a bit of change in the last 6 years (i.e. since 2018, before GPT-2, i.e. a time when few had heard of LLMs), and has had complete wrenching change in the last 20 years (i.e. since 2004, many years before AlexNet, and a time when deep learning as a whole was still an obscure backwater, if I understand correctly). So by the same token, it’s plausible that the field of AI might have quite a bit of change in the next 6 years, and complete wrenching change in the next 20 years, right?

comment by arisAlexis (arisalexis) · 2024-08-20T12:44:33.263Z · LW(p) · GW(p)

why would something that you admit that is temporary (for now) matter in an exponential curve? It's like saying it's ok to go out for drinks on March 5 2020. Ok sure but 10 days after it wasn't. The argument must stand for a very long period of time or it's better not said. And that is the best argument for why we should be cautious, because a) we don't know for sure and b) things change extremely fast.

Replies from: ariel-kwiatkowski
comment by Ariel Kwiatkowski (ariel-kwiatkowski) · 2024-08-20T23:25:37.625Z · LW(p) · GW(p)

Because the same argument could be made earlier in the "exponential curve". I don't think we should have paused AI (or more broadly CS) in the 50's, and I don't think we should do it now.

Replies from: arisalexis
comment by arisAlexis (arisalexis) · 2024-08-22T10:27:07.182Z · LW(p) · GW(p)

but you are comparing epochs before and after the Turing test passed. Isn't that relevant? The Turing test unanimously was/is an inflection point, and arguably most experts think we have already passed it in 2023.

comment by Benjamin Sturgeon (benjamin-sturgeon) · 2024-08-20T08:52:43.777Z · LW(p) · GW(p)
  1. Thus most decisions will probably be allocated to AI systems
  2. If AI systems make most decisions, humans will lose control of the future
  3. If humans have no control of the future, the future will probably be bad for humans
  1. Sure - at some point in the future, maybe. 
  2. Maybe, maybe not. Humans tend to have a bit of an ego when it comes to letting a filthy machine make decisions for them. But I'll bite the bullet.
  3. There are several levels on which I disagree here. Firstly, we're assuming that "humans" have control of the future in the first place. It's hard to assign coherent agency to humanity as a whole - it's more of a weird mess of conflicting incentives, and nobody really controls it. Secondly, if those AI systems are designed in the right way, they might just become tools for humanity to sorta steer the future the way we want it.

I agree with your framing here that systems made up of rules + humans + various technological infrastructure are the actual things that control the future. But I think the key is that the systems themselves would begin to favour more non-human decision making because of incentive structures. 

Eg, corporate entities have a profit incentive to have the most efficient decision maker in charge of the company, and maybe that includes a CEO but the board might insist on the use of an AI assistant for that CEO, and if the CEO makes a decision that goes against the AI and it turns out to be wrong shareholders in that company will come to trust the AI system more and more of the time. They don't necessarily care about the ego of the CEO they just care about the outcomes, within the competitive market.

In this way, more and more decision making gets turned over to non-human systems because of the competitive structures which are very difficult to escape from. As this transition continues it becomes very hard to control the unseen externalities from these decisions. 

I suppose this doesn't seem too catastrophic in its fundamental form, but I think the outcomes of playing it forward essentially seem to be a significant potential for harm from these externalities, without much of a mechanism for recourse.

comment by cousin_it · 2024-08-14T11:02:08.725Z · LW(p) · GW(p)