What if memes are common in highly capable minds?
post by Daniel Kokotajlo (daniel-kokotajlo)
score: 24 (10 votes)
This is a question post.
The meme-theoretic view of humans says: Memes are to humans as sailors are to ships in the age of sail.
If you want to predict where a ship will go, ask: Is it currently crewed by the French or the English? Is it crewed by merchants, pirates, or soldiers? These are the most important questions.
You can also ask e.g. "Does it have a large cargo hold? Is it swift? Does it have many cannon-ports?" But these questions are less predictive of where it will go next. They are useful for explaining how it got the crew it has, but only to a point--while it's true that a ship built with a large cargo hold is more likely to spend more of its life as a merchant vessel, it's quite common to encounter a ship with a large cargo hold crewed by soldiers, or a ship built in France sailed by the English, etc. The main determinants of how a ship got its current crew are its previous interactions with other crews, e.g. the fights it had, the money that changed hands when it was in port, etc.
The meme-theoretic view says: Similarly, the best way to explain human behavior is by reference to the memes in people's heads, and the best way to explain how those memes got there is to talk about the history of how those memes evolved inside the head in response to other memes they encountered outside the head. Non-memetic properties of the human (their genes, their nutrition, their age, etc.) matter, but not as much, just like how the internal layout of a ship, its size, its age, etc. matter too, but not as much as the sailors inside it.
Anyhow, the meme-theoretic view is an interesting contrast to the highly-capable-agent view. If we apply the meme-theoretic view to AI, we get the following vague implications:
--Mesa-alignment problems are severe. The paper already talks about how there are different ways a system could be pseudo-aligned, e.g. it could have a stable objective that is a proxy of the real objective, or it could have a completely different objective but be instrumentally motivated to pretend, or it could have a completely different objective but have some irrational tic or false belief that makes it behave the way we want for now. Well, on a meme-theoretic view these sorts of issues are the default; they are the most important things for us to be thinking about.
--There may be no stable objective/goal at all in the system. It may have an objective/goal now, but if the objective is a function of the memes it currently has and the memes can change in hard-to-predict ways based on which other memes it encounters...
--Training/evolving an AI to behave a certain way will be very different at each stage of smartness. When it is too dumb to host anything worthy of the name meme, it'll be one thing. When it is smart enough to host simple memes, it'll be another thing. When it is smart enough to host complex memes, it'll be another thing entirely. Progress and success made at one level might not carry over to higher levels.
--There is a massive training vs. deployment problem. The memes our AI encounters in deployment will probably be massively different from those in training, so how do we ensure that it reacts to them appropriately? We have no idea what memes it will encounter when deployed, because we want it to go out into the world and do all sorts of learning and doing on our behalf.
Thanks to Abram Demski for reading a draft and providing some better terminology.
Comments sorted by top scores.
comment by DanielFilan
· score: 8 (4 votes)
My understanding of meme theory is that it considers the setting where memes mutate, reproduce, and are under selection pressure. This basically requires you to think that there's some population pool where the memes are spreading. So, one way to think about it might be to ask what memetic environment your AI systems are in (a toy sketch of these selection dynamics follows the questions below).
- Are human memes a good fit for AI agents? You might think that a physics simulator is not going to be a good fit for most human memes (except perhaps for memes like "representation theory is a good way to think about quantum operators"), because your physics simulator is structured differently from most human minds, and doesn't have the initial memes that our memes are co-adapted with. That being said, GPT-8 might be very receptive to human memes, as memes are pretty relevant to what characters humans type on the internet.
- How large is the AI population? If there's just one smart AI overlord and then a bunch of MS Excel-level clever computers, the AI overlord is probably not exchanging memes with the spreadsheets. However, if there's a large number of smart AI systems that work in basically the same manner, you might think that that forms the relevant "meme pool", and the resulting memes are going to be different from human memes (if the smart AI systems are cognitively different from humans), and as a result perhaps harder to predict. You could also imagine there being lots of AI system communities where communication is easy within each community but difficult between communities due to architectural differences.
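To make the selection dynamics concrete, here is a minimal toy sketch. Everything in it (the human/AI host split, the fitness numbers, the mutation rate) is my own illustrative assumption, not part of meme theory proper; it just shows how a pool of memes co-adapted to one kind of host can drift once it reproduces inside a different kind of host.

```python
import random

# Toy meme-pool dynamics: memes mutate, reproduce, and face selection
# pressure that depends on the hosts they live in. All numbers here are
# illustrative assumptions.

random.seed(0)

def fitness(meme, host_type):
    # A meme's reproductive success depends on its host environment:
    # the same meme can be a good fit for humans and a poor fit for AIs.
    base = meme["human_fit"] if host_type == "human" else meme["ai_fit"]
    return max(0.0, base + random.gauss(0, 0.05))

def step(pool, host_type, size=100, mutation_rate=0.1):
    # Reproduce proportionally to fitness, with occasional mutation.
    weights = [fitness(m, host_type) for m in pool]
    children = [dict(c) for c in random.choices(pool, weights=weights, k=size)]
    for child in children:
        if random.random() < mutation_rate:
            key = random.choice(["human_fit", "ai_fit"])
            child[key] = min(1.0, max(0.0, child[key] + random.gauss(0, 0.1)))
    return children

# Start with memes co-adapted to human hosts...
pool = [{"human_fit": random.uniform(0.5, 1.0),
         "ai_fit": random.uniform(0.0, 0.5)} for _ in range(100)]

# ...then let the pool reproduce inside AI hosts for 50 generations.
for _ in range(50):
    pool = step(pool, host_type="ai")

print("mean AI-fit:", round(sum(m["ai_fit"] for m in pool) / len(pool), 2))
print("mean human-fit:", round(sum(m["human_fit"] for m in pool) / len(pool), 2))
```

The point of the toy is only that "fit for human hosts" and "fit for AI hosts" can come apart under selection, which is what the questions above are probing.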
comment by Daniel Kokotajlo (daniel-kokotajlo)
· score: 6 (3 votes)
One scenario that worries me: At first the number of AIs is small, and they aren't super smart, so they mostly just host normal human memes and seem, as far as we (and even they) can tell, to be perfectly aligned. Then they get more widely deployed, and now there are many AIs, and maybe they are smarter too, and alas it turns out that AIs are a different memetic environment than humans, in a way which was not apparent until now. So different memes flourish and spread in the new environment, and bad things happen.
comment by Viliam
· score: 4 (2 votes)
A part of the idea of "meme" is that the human mind is not designed as a unified algorithm, but consists of multiple parts that can be individually gained or replaced. (The rest of the idea is that the parts are mostly acquired by learning from other humans, so their copies circulate in the population, which provides an evolutionary environment for them.)
Could this first part make sense alone? Could an AI be constructed -- in analogy to "Kegan level 5" in humans -- in the way that it creates these parts (randomly? by mutation of existing ones?), then evaluates them somehow, keeps the good ones and discards the bad ones, with the idea that it may be easier to build a few separate models and learn which one to use in which circumstances than to go directly for one unified model of everything? In other words, the general AI would internally be an arena of several smaller, non-general AIs, with a mechanism to create, modify, and select new ones. Like, we want to teach the AI how to write poetry, so the AI will create a few sub-AIs that can do only poetry and nothing more, evaluate them, and then follow the most successful one of them. Another set of specialized sub-AIs for communicating with humans; another for physics; etc. With some meta mechanism which would decide when a new set of sub-AIs is needed (e.g. when all existing sub-AIs are doing poorly at solving the problem).
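For concreteness, here is a minimal sketch of what such an arena loop could look like. The SubAI class, the scoring, the thresholds, and the mutation scheme are all my own illustrative guesses at one way to operationalize the idea, not an established design:

```python
import random

# Minimal sketch of the "arena" architecture described above: a meta
# mechanism scores specialized sub-models on the current task, keeps the
# best, and spawns mutated variants whenever all of them do poorly.

random.seed(0)

class SubAI:
    def __init__(self, skill):
        self.skill = skill  # stand-in for a full task-specific model

    def solve(self, task_difficulty):
        # Higher skill -> better expected score on the task.
        return self.skill - task_difficulty + random.gauss(0, 0.1)

def mutate(sub):
    # Create a modified variant of an existing sub-AI.
    return SubAI(sub.skill + random.gauss(0, 0.2))

def meta_step(arena, task_difficulty, keep=3, spawn=5, threshold=0.0):
    # Evaluate every sub-AI, keep the best ones, discard the rest.
    scores = [(sub.solve(task_difficulty), sub) for sub in arena]
    scores.sort(key=lambda pair: pair[0], reverse=True)
    survivors = [sub for _, sub in scores[:keep]]
    if scores[0][0] < threshold:
        # All sub-AIs are doing poorly: spawn mutated variants.
        survivors += [mutate(random.choice(survivors)) for _ in range(spawn)]
    return survivors

arena = [SubAI(random.uniform(0.0, 0.5)) for _ in range(5)]
for _ in range(30):
    arena = meta_step(arena, task_difficulty=0.8)

print(f"best sub-AI skill after 30 steps: {max(s.skill for s in arena):.2f}")
```

Note that meta_step only sees scores, so a sub-AI that games the scoring is indistinguishable from one that is genuinely good at the task -- which is exactly the hijacking worry below.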
And, like, this architecture could work for some time; with greater capacity the general AI would be able to spawn more sub-AIs and cover more topics. And then at some moment, the process would generate a new sub-AI that somehow hijacks the meta mechanism and convinces it that it is a good model for everything. For example, it could stumble upon an idea "hey, I should simply wirehead myself" or "hey, I should try being anti-inductive for a while and actually discard the useful sub-AIs and keep the harmful ones" (and then it would find out that this was a very bad idea, but because now it keeps the bad ideas, it would keep doing it).
Even if we had an architecture that does not allow full self-modification, so that wireheading or changing the meta mechanism is not possible, maybe the machine that cannot fully self-modify would find out that it is very efficient to simulate a smaller AI, such that the simulated AI can self-modify. And the simulated AI would work reasonably for a long time, and then suddenly start doing very stupid things... and before the simulating AI realizes that something went wrong, maybe some irreparable damage has already happened.
...this all is too abstract for me, so I don't even know whether what I wrote here actually makes any sense. I hope smarter minds may look at this and extract the parts that make sense, assuming there are any.
comment by wearsshoes
· score: 1 (1 vote)
(Speculation) Adding to DanielFilan: there are some other properties of ideas implied by meme theory, or that this analogy might cause one to erroneously assume are necessarily true of ideas. I have taken a shot at listing them below. We can leave this aside when discussing the model, but the validity of some of these properties seems highly contestable. At the very least we should avoid discarding other models of conceptual space.
- Ideas are transmissible. This is a natural implication and how we normally think of ideas anyways.
- Ideas are possessions of agents. There is no idea which exists and is not possessed by an agent, just as species which correspond to no organisms do not exist. As a corollary, ideas can become extinct.
- Instances of ideas are possessions of discrete agents. In other words, agents are to ideas as cells are to genes. This is implied by reproducibility. In this model, ideas in minds are tokens of a type (instances of an object). You have one token of an idea, and by telling it to me you transmit it to me and reproduce it in my mind. This seems natural; the alternative -- that ideas exist in no minds but only immanently, as an exterior result of cognition across many minds -- seems impractical to operationalize. (In philosophy, I think this idea shows up in Deleuze's writings. In fiction, this is the conceptual model underlying QNTM's http://www.scp-wiki.net/antimemetics-division-hub, although he uses the term 'meme' there.)
- Ideas are functional. They produce behavior in agents or populations as genes produce traits in cells or organisms. There are probably null-function heritable ideas, just as there are heritable genes that don't encode proteins, but the selection pressure on ideas is much higher. I'm not sure whether ML models frequently contain nonfunctional ideas or not; they seem to contain inefficient ones. Alternately, ideas are noncausally related to behavior, such as if the term 'idea' is limited to conscious thought: https://sci-hub.tw/10.1038/nn.2112
- No agent possesses two instances of the same idea. Not part of the analogy, just kind of one of those unstated assumptions. I probably don't have two identical notions of apples in my mind.
- All ideas evolve from other ideas. As in genes, spontaneous generation is uncommon. This seems intuitively wrong, and I think it is a little beyond the scope of Dawkins's original idea, but seems like a common assumption of the theory.
- Ideas are decomposable into basic units. I'm not really sure that this is a necessary feature of the analogy either, but it is easily operationalized within a connectionist framework. In a neural net, the basic memetic units are model parameters. Training data is not any part of the meme; it is a form of speech. Speech tokens are not memes. Alternately you could argue that models are approximations of ideas; this other approach seems similar to Platonism.
comment by Lukas_Gloor
· score: 6 (3 votes)
In this answer on arguments for hard takeoff, I made the suggestion that memes related to "learning how to learn" could be the secret sauce that enables discontinuous AI takeoff. Imagine an AI that absorbs all the knowledge on the internet, but doesn't have a good sense of what information to prioritize and how to learn from what it has read. Contrast that with an AI that acquires better skills for organizing its inner models, making its thinking more structured, creative, and generally efficient. Good memes about how to learn and plan might make up an attractor, and AI designs with the right parameters could home in on that attractor in the same way that "great minds think alike." However, if you're slightly off the attractor and give too much weight to memes that aren't useful for truth-seeking and good planning, your beliefs might resemble those of a generally smart person with poor epistemics, or someone low on creativity who never has genuine insights.