AI as a powerful meme, via CGP Grey

post by TheManxLoiner · 2024-10-30T18:31:58.544Z · LW · GW · 8 comments

In episode 158 of the Cortex podcast, CGP Grey gives his high-level reason for being worried about AI.

My one-line summary: AI should not be compared to nuclear weapons but rather to biological weapons or memes, which evolve under whatever implicit evolutionary pressures exist, leading to AIs that are good at surviving and replicating.

The perspective is likely already familiar to many in the community, but I had not heard it before. Interestingly, there have actually been experiments in which random strings of code were placed in an environment where they interact with one another, and self-replicating code emerged. See the Cognitive Revolution podcast episode on 'Computational Life: How Self-Replicators Arise from Randomness', with Google researchers Ettore Randazzo and Luca Versari.
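For a sense of how such an experiment can be set up, here is a rough sketch loosely in the spirit of the BFF system described in that work: a 'soup' of random byte strings in which pairs are repeatedly concatenated, executed as code under a tiny Brainfuck-like instruction set (the tape is both program and data, so a program can rewrite itself and its partner), and then split back apart. The exact instruction set, constants, and monitoring below are my own simplifications for illustration, not the authors' implementation, and whether or when self-copiers appear depends on these details.

```python
import random
from collections import Counter

TAPE_LEN = 64      # bytes per program in the soup (assumption)
SOUP_SIZE = 1024   # number of programs (assumption; real runs use larger soups)
MAX_STEPS = 512    # execution budget per interaction (assumption)

def run(tape, max_steps=MAX_STEPS):
    """Execute a tiny Brainfuck-like tape in place; the tape is both code and data."""
    h0 = h1 = ip = 0
    n = len(tape)
    for _ in range(max_steps):
        if ip >= n:
            break
        op = tape[ip]
        if op == ord('<'):   h0 = (h0 - 1) % n            # move head0 left
        elif op == ord('>'): h0 = (h0 + 1) % n            # move head0 right
        elif op == ord('{'): h1 = (h1 - 1) % n            # move head1 left
        elif op == ord('}'): h1 = (h1 + 1) % n            # move head1 right
        elif op == ord('+'): tape[h0] = (tape[h0] + 1) % 256
        elif op == ord('-'): tape[h0] = (tape[h0] - 1) % 256
        elif op == ord('.'): tape[h1] = tape[h0]          # copy head0 -> head1
        elif op == ord(','): tape[h0] = tape[h1]          # copy head1 -> head0
        elif op == ord('['):                              # skip past matching ']' if zero
            if tape[h0] == 0:
                depth, j = 1, ip
                while depth and j + 1 < n:
                    j += 1
                    if tape[j] == ord('['):   depth += 1
                    elif tape[j] == ord(']'): depth -= 1
                ip = j
        elif op == ord(']'):                              # jump back to matching '[' if nonzero
            if tape[h0] != 0:
                depth, j = 1, ip
                while depth and j > 0:
                    j -= 1
                    if tape[j] == ord(']'):   depth += 1
                    elif tape[j] == ord('['): depth -= 1
                ip = j
        # any other byte is a no-op
        ip += 1

def step(soup):
    """One interaction: concatenate two random tapes, run the result, split it back."""
    i, j = random.sample(range(len(soup)), 2)
    combined = bytearray(soup[i]) + bytearray(soup[j])
    run(combined)
    soup[i], soup[j] = combined[:TAPE_LEN], combined[TAPE_LEN:]

# The soup starts as pure noise: no replicator is written in by hand.
soup = [bytearray(random.randbytes(TAPE_LEN)) for _ in range(SOUP_SIZE)]
for t in range(1, 2_000_001):
    step(soup)
    if t % 100_000 == 0:
        # If a self-copier has taken over, one tape will dominate the soup.
        top = Counter(bytes(x) for x in soup).most_common(1)[0][1]
        print(f"interaction {t}: most common tape appears {top}/{SOUP_SIZE} times")
```

In the published experiments, runs of this kind eventually become dominated by programs that copy themselves onto their partners, which shows up as a collapse in the soup's diversity; the point is that nobody writes a replicator by hand, yet replication is what ends up being selected for.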

I quote the relevant part of the podcast below, but I recommend listening to it, because the emotion and delivery are impactful. The segment starts at 1:22:00.

To be explicit and not beat around the bush, when I try to think, “Oh, what is beyond this barrier, beyond which it might be impossible to predict?” it's like, well, if I’m just in Vegas and placing odds on this roulette wheel, almost all of those outcomes are extraordinarily bad for the human species. There are potentially paths where it goes well, but most are extremely bad for a whole bunch of reasons.

I think of it like this: people who are concerned like me often analogize AI to something like building nuclear weapons. It’s like, “Ah, we’re building a thing that could be really dangerous.” But I just don’t think that’s the correct comparison, because a nuclear weapon is a tool. It's a tool like a hammer. It’s a very bad hammer, but it is fundamentally mechanical in a particular way.

But the real difference, where do I disagree with people, where do other people disagree with me, is that I think a much more correct way to think about AI is to compare it to biological weaponry. You’re building a thing able to act in the world differently than how you constructed it. That’s what biological weapons are—they’re alive. A nuclear bomb doesn’t accidentally leave the factory on its own, whereas biological weapons do, can, and have. And once a biological weapon is out in the world, it can develop in ways that you never anticipated ahead of time.

That’s the way I think about these AI systems.

[...]

The reason I like to talk about it this way, particularly with biological weapons, is because I want to shortcut discussions that can be distracting, like, "Are these things alive? Are they thinking thoughts? Blah blah blah." That's an interesting conversation, but when you're thinking about what to do, that whole conversation is nothing but a pure distraction. This is why I prefer the biological weapon analogy: no one is debating the intent of a lab-created smallpox strain. No one wonders whether the smallpox virus is "thinking" or has any thoughts of its own. Instead, people understand that it doesn't matter. Smallpox germs, in some sense, "want" something: they want to spread, they want to reproduce, they want to be successful in the world, and they are competing with other germs for space in human bodies. They're competing for resources. The fact that they're not conscious doesn't change any of that.

So I feel like these AI systems act as though they are thinking, and fundamentally it doesn't really matter whether they are actually thinking or not, because externally the effect on the world is the same either way. That's my main concern here: I think these systems are really dangerous because they are truly autonomous in ways that no other tools we have ever built are.

It's like, look, we can take this back to another video of mine, "This Video Will Make You Angry", which is about thought germs. I have this line about thought germs (I mean memes, but I just don't want to say the word, because I think that's distracting in the modern context): memes are ideas, and they compete for space in your brain, and their competition is not based on how true they are. Their competition is not based on how good for you they are. Their competition is based on how effectively they spread, how easily they stay in your brain, and how effective they are at repeating that process.

And so it's the same thing [as biological weapons] again. You have an environment in which there are evolutionary pressures that slowly change things. I really do think one of the reasons it feels like people have gotten harder to deal with in the modern world is precisely because we have turned up the evolutionary pressure on the kinds of ideas that people are exposed to. Ideas have in some sense become more virulent. They have become more sticky. They have become better at spreading because those are the only ideas that can survive once you start connecting every single person on Earth, and you create one gigantic jungle in which all of these memes are competing with each other.

What I look at with AI, and with the kind of thing that we're making here, is that we are doing the same thing right now for autonomous and semi-autonomous computer code. We are creating an environment in which, not on purpose but just because that's the way the world works, there will be evolutionary pressure on these kinds of systems to spread, to reproduce themselves, to stay around, and to "accomplish whatever goals they have". In the same way that smallpox is trying to accomplish its goals. In the same way that mold is trying to accomplish its goals. In the same way that anything which consumes and uses resources is under evolutionary pressure to stick around so that it can continue to do so.

That is my broadest, highest-level, most abstract reason why I am concerned. I feel like getting dragged down into the specifics always ends up missing that point. It's not about anything that's happening now. It's that we are setting up another evolutionary environment in which things will happen, not because we directed them as such, but because this is the way the universe works.

8 comments

comment by Valentine · 2024-10-31T16:24:15.468Z · LW(p) · GW(p)

I like this way of expressing it. Thanks for sharing.

I think it's the same core thing I was pointing at in "We're already in AI takeoff [LW · GW]", only it goes in the opposite direction for metaphors. I was arguing that it's right to view memes as alive for the same reason we view trees and cats as alive. Grey seems to be arguing to set aside the question and just look at the function. Same intent, opposite approaches.

I think David Deutsch's article "The Evolution of Culture" is masterful at describing this approach to memetics.

comment by Noosphere89 (sharmake-farah) · 2024-11-02T22:12:34.844Z · LW(p) · GW(p)

This is an interesting view on AI, but IMO I don't really share it, and I think the evolutionary/memetic aspect of AI is way overplayed compared to other factors that make AI powerful.

A big reason for that is that there will be higher-level bounds on what exactly is selected for. In particular, one big difference between the computer code used for AI and genetic code is that genetic code has way less ability to error-correct than basically all AI code, and it's in a weird spot of reliability where random mutations are frequent enough to drive evolution, but not so frequent as to cause organisms to outright collapse within seconds or minutes.

Another reason is that effective AI architectures can't go through simulated evolution, since that would use up too much compute for training to work. (We forget that evolution had, at a lower bound, 10e46 to 10e48 FLOPs to get to humans.)

A better analogy is within-lifetime learning in humans.

I basically agree with Steven Byrnes's case against evolution as an analogy, and I think that evolutionary analogies are very overplayed in the popular press:

https://www.lesswrong.com/posts/pz7Mxyr7Ac43tWMaC/against-evolution-as-an-analogy-for-how-humans-will-create [LW · GW]

Replies from: nc, hastings-greer, TheManxLoiner
comment by nc · 2024-11-23T14:33:24.564Z · LW(p) · GW(p)

I agree that evolutionary arguments are frequently confused and oversimplified, but I think your argument proves too much.

[the difference between] AI and genetic code is that genetic code has way less ability to error-correct than basically all AI code, and it's in a weird spot of reliability where random mutations are frequent enough to drive evolution, but not so frequent as to cause organisms to outright collapse within seconds or minutes.

This "weird spot of reliability" is itself an evolved trait, and even with the effects of mutation rate variation between species, the variation within populations is heavily constrained (see Lewontin's paradox of diversity). Even discounting purely genetic/code-based(?) factors, the amount of plasticity (?search) in behaviour is also an evolvable trait (see canalisation) - I think it's likely there are already terms for this within the AI field but it's not obvious to me how best to link the two ideas together. I'm more curious about the value drift evolutionary arguments but I don't see an a priori reason that these ideas don't apply.

It would be good if we could understand the conditions under which greater plasticity/evolvability is selected for, and whether we expect its effects to occur in a timeframe relevant to near-term alignment/safety. 

Another reason is that effective AI architectures can't go through simulated evolution, since that would use up too much compute for training to work. (We forget that evolution had, at a lower bound, 10e46 to 10e48 FLOPs to get to humans.)

It's not obvious to me that this is a sharp lower bound, particularly when AIs are already receiving the benefits of prior human computation in the form of culture. Human evolution had to achieve the hard part of reifying the world into semantic objects, whereas AI has a major head start. If language is the key idea (as some have argued), then I think there's a decent chance that the lower bound is smaller than this.

comment by Hastings (hastings-greer) · 2024-11-23T00:37:59.831Z · LW(p) · GW(p)

prompts already go through undesigned evolution through reproductive fitness (rendered in 4k artstation flickr 2014)

comment by TheManxLoiner · 2024-11-03T15:12:15.492Z · LW(p) · GW(p)

The 'evolutionary pressures' being discussed by CGP Grey are not the direct gradient descent used to train an individual model. Instead, he is referring to the whole set of incentives we as a society place on AI models. It is similar with memes: there is no gradient descent on memes.

(Apologies if you already understood this, but it seems your post and Steven Byrnes's post focus on the training of individual models.)

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2024-11-03T15:31:37.945Z · LW(p) · GW(p)

Fair enough on the difference between societal-level incentives on AI models and individual selection incentives on AI models.

My main current response is to say that I think the incentives are fairly weak predictors of the variance in outcomes, compared to non-evolutionary forces at this time.

However, I do think this has interesting consequences for AI governance (since one of the effects is to make societal-level incentives more relevant, compared to non-evolutionary forces).

comment by Raemon · 2024-10-30T21:39:51.378Z · LW(p) · GW(p)

This actually was a new way of thinking about it, or at least of articulating it, for me. Thanks for the link!

comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-11-02T21:51:44.526Z · LW(p) · GW(p)

In my post, A path to Human Autonomy [LW · GW], I describe AI and bioweapons as being in a special category of self-replicating threats. If autonomous self-replicating nanotech were developed, it would also be in this category.

Humanity has a terrible track record when it comes to handling self-replicating agents which we hope to deploy for a specific purpose. For example:

- Rabbits in Australia
- Cane toads in Australia
- Burmese pythons in Florida