Liron Shapira vs Ken Stanley on Doom Debates. A review
post by TheManxLoiner · 2025-01-24T18:01:56.646Z
I summarize my learnings and thoughts on Liron Shapira's discussion with Ken Stanley on the Doom Debates podcast. I refer to them as LS and KS respectively.
High level summary
Key beliefs of KS:
- Future superintelligence will be 'open-ended'. Hence, thinking of it as an optimizer will lead to incomplete analysis and inadequate risk mitigations.
- P(doom) is non-zero, but not a fixed number; it changes from day to day.
- Superintelligence is a risk, but open-endedness is the root of the problem, not optimization.
- KS' main desire is to increase awareness that superintelligence will be open-ended, because most people (regardless of their p(doom)) do not discuss or believe this, and hence the strategies to reduce risk will not be appropriate.
- KS believes that superintelligence will not have significantly more foresight than humans into the development of future technologies, because the task is fundamentally too complex and unpredictable. The only way to make progress is by doing incremental research and real-world experiments, motivated by what is 'interesting'. The result is a gradual accumulation of 'stepping stones' that allow you to go further and further along the tech tree.
- KS is uncertain whether superintelligence will have significantly more optimizing ability than we do, essentially because we over-rate optimization and open-ended divergent processes are the key ingredient of intelligence.
- KS does not have strong opinions on policy proposals. He does tentatively suggest enforcing humans-in-the-loop (somehow) as one possibility for mitigating the risk. E.g. if the superintelligence wanted to use 50% of the available resources to create a humongous particle accelerator, it would need to get our permission first.
- KS takes the risks from AI seriously, and this was a big factor in why he is currently not working on AI.
In contrast, LS does believe that superintelligence will be a super-optimizer and that it will be capable of feats of foresight and ingenuity that allow it to skip many of the stepping stones that we humans would have to take. E.g. LS believes that a superintelligence in the early 1800s could have skipped vacuum tubes and developed modern electronic computers on its first try.
Unfortunately, most of KS' claims are not justified in the conversation. This is partly because KS did not explain himself clearly, so it took effort to pin down his beliefs, and partly because LS chose to drill down into details that - in my opinion - were not the key cruxes.
Musings
I do not have fixed views on the nature of superintelligence, and this is a big source of uncertainty for me. I am persuaded by the notion of instrumental convergence: presumably even a non-optimizing, open-ended superintelligence would avoid getting turned off, or would accumulate resources to carry out its open-ended explorations. My general sense is that things are extremely unpredictable and that superintelligence vastly increases the variance of future outcomes. I do not have intuition for how to weigh the odds of positive outcomes versus negative.
Based only on this interview, LS has a simplistic view of intelligence, believing that raw intelligence can provide god-like foresight into everything, allowing one to skip doing experiments and interacting with the universe.
Here are some relevant ideas for this discussion:
- Joseph Henrich's work on the limitations of human intelligence and, instead, the power of culture and the accumulation of knowledge over time. See his book The Secret of Our Success (or find summaries, e.g. his talk at Google).
- Observing digital self-replicators in a system consisting of simple interactions and mutations. Importantly, there is no objective function being maximized, unlike in most evolutionary algorithms. See here for the paper or this interview on the Cognitive Revolution podcast.
- Complexity theory. I know almost nothing here, but it seems highly relevant.
- The notion of computational irreducibility, which Stephen Wolfram focusses on a lot in his research and in his computational theories of everything.
If you want to learn more about open-endedness:
- KS' book Why Greatness Cannot Be Planned: The Myth of the Objective.
- KS' interviews on ML Street Talk podcast. Interview 1 and Interview 2.
- Interview of Tim Rocktäschel on the ML Street Talk podcast.
(I have not yet listened to any of these, but the first few minutes of each sound promising for getting a more concrete understanding of open-endedness and what it means to be interesting.)
Chronological highlights and thoughts
I present these highlights and thoughts chronologically as they appear in the podcast.
A potential application of open-endedness is to generate high-quality data for LLM training. KS does say later that he believes it is unlikely to be a big part of OpenAI's pipeline.
00:03:26 LS. So your open endedness research, did that end up merging with the LLM program or kind of where did it end up?
KS. Yeah, to some extent. If you think about it from their big picture, not necessarily mine, it's aligned with this issue of what happens when the data runs out. [...] It's always good to be able to generate more really high quality data. And if you think about it, that's what open-endedness does in spades.
KS tries to explain how open-endedness is not just a different kind of objective. This is not clear at all. I wish LS had pushed KS to be more formal about it, because the vague intuitive description is not convincing.
00:07:35. KS. Open-endedness is a process that doesn't have an explicit objective. Some people will say open-endedness is its own objective, and this can muddy the waters. But first, just want to make this kind of clear what the distinction is, which is that in open-endedness, you don't know where you're going by intent. And the way you decide things is by deciding what would be interesting. And so open-ended processes make decisions without actually a destination in mind. And open-ended processes that exist in the real world are absolutely grandiose. They're the most incredible processes that exist in nature.
And there's really only two. One is natural evolution. So you start from a single cell and then you wait like a billion years, and all of living nature is invented in - what would be from a computer science perspective - a single run.
There's nothing like that algorithmically that we do. It invented photosynthesis, flight, the human mind itself, the inspiration for AI, all in one run. We don't do stuff like that with objective-driven processes. It's a very special divergent process.
And the second one is civilization. Civilization does not have a final destination in mind. [...]
What does it mean for evolution to 'decide' something? And what does it mean for something to be interesting? Does evolution find things interesting?
And a bit later:
00:09:32. KS: You could say, for example, that evolution has an objective of survive and reproduce. And this gets a little bit hair-splitting, but I like to make a distinction there, because I don't think of survive as formally the same kind of objective. I prefer not to call it an objective, because it's not a thing that you haven't achieved yet. When I think of objectives, I'm thinking of a target that I want to get to that I've not yet achieved. With survive and reproduce, the first cell did that. So I think of it more as a constraint: it's already been achieved, and everybody in this lineage needs to satisfy it.
LS argues that random mutations - some of which help and some of which hinder - are doing a kind of gradient descent in the loss landscape. E.g. if you have a patch of cells that is light-sensitive, then some mutation might make it more light-sensitive and hence more likely to survive. KS believes this is only valid for micro-scale evolution, not macro-scale evolution.
00:11:47 KS. Yeah, I don't deny that optimization does happen in evolution. But it's important to highlight that the diversity of life on Earth is not well accounted for by just that observation. It's an astounding diversity of amazing inventions. To account for that requires other explanations.
On a macro scale, it is difficult to explain what we mean by better. How are we humans better than single-celled bacteria? We have less biomass, fewer offspring per generation, a lower population. There's nothing objective to point to why we're better in terms of optimization. What's better about us is that we're more interesting.
A lot of evolution has to do with escaping competition - like finding a new niche - which is not an optimization problem.
This is doing something different and that's the divergent component. I argue that the convergent optimization subprocesses are less interesting and they don't account for the global macro process of evolution.
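To make the distinction concrete for myself, here is a minimal sketch (mine, not from the podcast) of the convergent 'micro-scale' subprocess that both of them seem to agree exists: random mutations plus differential survival behave like noisy hill climbing on a fitness landscape. The light-sensitivity trait, the toy fitness function, and all parameters are invented purely for illustration.

```python
# A minimal sketch, assuming a made-up light-sensitivity trait and a toy fitness
# function: random mutations plus differential survival behave like noisy
# hill climbing on a fitness landscape (the convergent "micro-scale" picture).
import random

def fitness(light_sensitivity: float) -> float:
    # Toy landscape: survival improves as the trait approaches an optimum at 1.0.
    return -(light_sensitivity - 1.0) ** 2

def evolve(generations: int = 500, mutation_scale: float = 0.05) -> float:
    trait = 0.0  # e.g. a barely light-sensitive patch of cells
    for _ in range(generations):
        mutant = trait + random.gauss(0.0, mutation_scale)  # random mutation
        if fitness(mutant) > fitness(trait):                # selection keeps the fitter variant
            trait = mutant
    return trait

print(f"trait after selection: {evolve():.3f}")  # converges near the optimum at 1.0
```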
KS makes the interesting observation that evolutionary algorithms are not divergent, and so are not a good metaphor for understanding the full scope of evolution.
00:14:57. KS. Those kinds of algorithms work the way you describe. You do set an objective and fitness is measured with respect to the objective and you explicitly follow that gradient just as you would in another optimization algorithm. But think about what genetic algorithms do. They do converge. They have almost nothing to do with what's actually happening in nature. The intuition is off.
And so it's unfortunate that this has become a kind of misleading metaphor that a lot of people key into. These are highly convergent algorithms that always converge to a single point or get stuck, just like conventional optimization. That's not what we see in nature.
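For contrast, here is a second sketch (again mine, loosely modelled on KS' own novelty-search work rather than on anything in the podcast) of what a divergent selection rule looks like: instead of keeping whoever is closest to a target, it keeps whoever is most different from everything seen so far and adds it to an archive of 'stepping stones'. The one-dimensional 'behaviour' and the distance measure are arbitrary illustrative choices.

```python
# A minimal sketch contrasting an objective-driven selection rule with a
# novelty-driven one. Everything here (the 1-D "behaviour", the distance
# measure, the parameters) is an illustrative assumption, not KS's algorithm.
import random

def objective_step(population, target=1.0):
    # Convergent: keep whoever is closest to the target, then mutate copies of it.
    best = min(population, key=lambda x: abs(x - target))
    return [best + random.gauss(0.0, 0.1) for _ in population]

def novelty_step(population, archive):
    # Divergent: keep whoever is farthest from everything seen so far,
    # add it to the archive of "stepping stones", then mutate copies of it.
    def novelty(x):
        others = [y for y in archive + population if y is not x]
        return min(abs(x - y) for y in others)
    most_novel = max(population, key=novelty)
    archive.append(most_novel)
    return [most_novel + random.gauss(0.0, 0.1) for _ in population]

pop_obj = [random.gauss(0.0, 1.0) for _ in range(10)]
pop_nov = list(pop_obj)
archive = []
for _ in range(200):
    pop_obj = objective_step(pop_obj)
    pop_nov = novelty_step(pop_nov, archive)

print(f"objective-driven population spread: {max(pop_obj) - min(pop_obj):.2f}")  # stays small
print(f"novelty-driven archive spread:      {max(archive) - min(archive):.2f}")  # keeps growing
```

The point of the toy comparison is only the qualitative difference KS describes: the objective-driven population collapses around one point, while the novelty-driven archive keeps spreading into new territory.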
LS is still generally confused (as am I) and asks a good question: what is the ultimate claim being made?
00:16:48 KS. Evolution is incredibly prolifically creative. And we don't have algorithms like that in computer science. So there's something to account for here that we have not abstracted properly. And yes, this is related to intelligence because civilization also has this property which is built on top of human intelligence.
And it's related to the superintelligence problem because my real deeper claim here is that superintelligence will be open-ended. It must be because the most distinctive characteristic of human intelligence is our prolific creativity. We will not get a super intelligence that's not open-ended and therefore we need to understand divergent processes. All of our optimization metaphors don't account for that property which can mislead us and lead us astray in analyzing what's in store for us in the future.
Finally something concrete to hold on to! KS believes that future superintelligence will be open-ended, and thinking of it as an optimizer will lead to incomplete analysis and predictions.
But then, oddly, LS goes back to the question of evolution and how evolution is or is not explained by following an objective function. Some interesting points come up, but they are not central in my view. For example:
- 00:20:01. That environmental niches co-evolve with the adaptations to those niches
- 00:20:19. Idea of 'collection' as an additional mechanism in divergent processes. Collection means collecting 'stepping stones'. Stepping stones are things that can lead to other interesting things. Evolution is an amazing collector because every species that exists is now added to the collection, creating new opportunities for other new species to appear.
- 00:24:16. There are periods of 'slack' where there is not cut-throat survival-of-the-fittest, allowing for exploration and non-convergent processes.
- 00:27:20. Something called 'bilateral symmetry' appeared early in the evolutionary tree and is crucial to intelligence. Importantly, within a purely optimizing framework it is hard to explain how intelligence evolved, because bilateral symmetry was a prerequisite, but the causal link from bilateral symmetry to intelligence was not 'known' when bilateral symmetry appeared. "If you give IQ tests to the flatworms [with bilateral symmetry], it doesn't look like there's any progress along the intelligence dimension."
At around 30 minutes, they discuss a thought experiment of what would happen if you went back to 1844 (100 years before ENIAC was created) and tried directly optimizing for the creation of computers. KS says it would fail, because you would miss the open-ended, curiosity-driven explorations that led to vacuum tubes, which were a crucial component of computers. LS (~00:32:00) says this is just a matter of lacking intelligence and foresight. With enough intelligence, you could create a computer with direct optimization.
KS responds:
00:32:32. KS. A fundamental aspect of my argument is that we cannot foresee the tech tree. It's just complete futility. Omnipotence is impossible. We cannot understand how the universe works without experimentation. We have to try things. But it's important for me to highlight that trying things is not random. They [scientists] were highly informed because they understood that there's very interesting things about the properties of these technologies. And they wanted to see where that might lead, even though they don't know ultimately where it leads. And this is why the tech tree is expanding, because people follow these interesting stepping stones.
But we will not be able to anticipate what will lead to what in the future. Only when you're very close can you do that.
LS says this is a limitation of humans. What about superintelligence?
00:35:36 KS: The AI's hypotheses will be better, but it still needs to make hypotheses and it still needs to test those hypotheses. And it will still need to follow that tech tree and gradually discover things over time. Omnipotence is not on the table, even for AGI or superintelligence.
Great once again! Another concrete claim from KS about superintelligence, and something I would like to see the two discuss, to find out why their intuitions disagree and what evidence could change either of their minds. But like last time, LS changes topic...
00:35:51. LS: One thing that you say in your book is that as soon as you create an objective, you ruin your ability to reach it. Unpack that.
KS says that using an objective works if you have the necessary stepping stones to carry out the optimization and reach your objective. Such objectives are 'modest objectives'. However, for ambitious objectives, direct optimization will not work, because the necessary stepping stones will be things you simply would not even consider researching if you were directly optimizing for the goal.
The discussion moves to explore this in the context of evolution. LS asks whether in KS's framework, there even is a modest objective.
00:40:09. KS. I prefer not to put it that way. Your argument caused me to contort myself in a way that I don't prefer, to describe this as an objective process once you're near something. There never is an explicit objective in evolution. My argument is this is why it's so prolifically creative. It's essential for creative processes not to be objectively driven. And evolution is the greatest creative process that we know in the universe.
LS asks whether genetic fitness is an implicit objective that evolution optimizes for, even if that is not the explicit goal that evolution has. KS gives a strong and bizarre claim in response, about how all that matters is 'what is interesting':
00:42:10. KS. I don't think of inclusive genetic fitness as an objective. There's nothing wrong with something that has lower fitness. What we care about in evolution is what's interesting, ultimately, not necessarily what has higher fitness. Our fitness is probably on an objective basis lower than lots of species that we would consider less intelligent than us. It's an orthogonal issue. Fitness is not a target. Like I said earlier, it's a constraint.
I do not buy the idea that what we care about is what is interesting. I suspect this is KS just not phrasing himself very well, because it seems odd to claim that evolution does not care about fitness but does care about 'being interesting'. Interesting to whom?
KS tries a metaphor with Rube Goldberg machines:
00:43:47. KS. I was watching a TV show about this guy who is making Rube Goldberg machines to open a newspaper. And it was really funny because he invented the most complex things you could possibly imagine to do something completely trivial.
And this is effectively what's happening. [Evolution] is a Rube Goldberg machine generating system. [...] the complexity of those machines itself becomes what's interesting. It's not the fact that they're getting better. In some ways they're getting worse because it's crazy to go through all this complexity just to get another cell.
So we live in a Rube Goldberg machine generating universe, and this is just going on forever. It's a different story than this deathmatch-convergence type of view of evolution.
Discussion veers for several minutes in an unhelpful direction, in my opinion. LS doesn't look good here, saying things that just make KS repeat himself, without progressing the discussion, or digging into any of the previous key cruxes.
At 57 minutes in, LS moves to discussion of P(doom). Key points from KS:
- Not willing to give a concrete number
- His intuition changes significantly from day to day
- But it is definitely non-zero. He does take the possibility seriously, and that is a big reason he stopped doing ML research. 1:02:30 "I was actually feeling bad at some level, because I wasn't sure that what I'm doing is actually something I can be proud of. I need to step back and reconcile what I really believe here, so that I can feel proud of myself that I'm actually contributing to a better future."
- He does think there are good arguments on both sides of the P(doom) debate, and is currently grappling with those to learn where he stands.
- The main reason he wants to avoid giving a P(doom) is that his main point and contribution to the discussion is that people on both sides of the debate are ignoring open-endedness - the fact that future intelligence will be open-ended - and that by focussing on the optimization framework the discussions are all flawed.
- Not claiming that open-endedness makes things safe. "1:00:34. Open-endedness is a different kind of danger and we need to grapple with that. My agenda is I want us to be grappling with the dangers of open-endedness."
- Big trade-off for him with AI development is balancing the potential upsides (e.g. greatly reducing suffering all over the world) with the potential downsides (e.g. human extinction). "Both sides of this are just totally unacceptable."
At around 1:11:00, some discussion on what 'interesting' means. KS basically says it's subjective, but believes there is commonality amongst all humans. Nothing concrete or insightful.
At 1:15:46, LS asks what KS' ideal scenario is. KS is unsure how to trade off between well-being and interestingness/curiosity. There will be things that are interesting but dangerous.
At 1:20:14, LS asks what the headroom is above human intelligence. This leads to revealing more cruxes, and LS does a valiant job pinning down what KS thinks about whether future AI can be better optimizers than humans.
- KS starts by saying there is headroom, but he is not sure how much, and he agrees there is a form of intelligence that would dominate over us, just like we dominate over monkeys.
- LS mentions the idea of an intelligence being able to optimize better than us. KS repeats that this is his big disagreement. At 1:24:23: "I think that intelligence will be divergent. It's not an optimization process. It will have mastered divergence, which is creative exploration. And so it's not going to have those properties that you're describing."
- LS pushes back, asking if an intelligence could be more capable at optimization than us, in addition to being open-ended. KS does not reply clearly, saying its intelligence won't arise out of better optimization, but out of a better ability to 'explore everything that is possible in the universe'.
- LS asks if we optimize better than chimps. KS says yes, but that is not important/interesting. What is important is our divergent open-ended thinking.
- LS asks if some AI will dominate us on optimization power (separate from whether this is interesting / important). KS says no! This is at 1:26:23.
- LS then asks if humans are at or near the peak of optimization power. KS: "We might be, but I don't know. I don't think it [optimization power] is like this infinite ladder. There are limits to what you can do because of the fact there is no way to pierce through the fog of the stepping stones. You have to discover them. [...] That's ultimately a process of futility. And then superintelligence would recognize that right away. It will understand how the world works. To do amazing things requires divergence. It's not going to organize itself to just set goals and achieve them."
- LS asks for clarification on whether a super AI's optimization ability will be only slightly higher than humans'. KS basically replies saying optimization is not the relevant axis, and that "It will have interests. Because if it has no interests, then it doesn't have the ability to be super intelligent in my view."
- Some further discussion, but I do not see further clarification. Essentially, KS believes there is a strong limit to how much value there is in more optimization power, as the key thing is deciding what to explore next to help you accumulate more and more stepping stones, and setting some goal will just make one worse at this process.
At 1:38:41, LS asks if KS thinks instrumental convergence is a thing in superintelligence. KS: "I think my answer here is no. There seems to be a widely shared assumption that they're going to converge to goal-oriented behaviour." And then KS repeats himself that superintelligence will be open-ended, and that this brings its own dangers.
I would have liked at this stage for LS to ask about other examples of instrumental convergence, e.g. would the AI avoid being turned off, or would the AI want to accumulate resources to allow it to carry out its explorations.
There is then some discussion about the example of what would happen if a superintelligence found the idea of 'going to Pluto' interesting. The conversation gets confusing here, with little progress. One insight is that KS says the intelligence will not take 'extreme' actions to satisfy its interests (e.g. enslaving all of humanity, which is what LS suggests might happen), because it will have many, many interests, and taking extreme actions would have a large opportunity cost. This is one way interests are different from goals.
At 1:55:53, LS asks KS about ideas for mitigating risks from open-ended intelligences. KS once again emphasises that his main point is that we should think about open-endedness. He then tentatively suggests looking at how open-endedness has been controlled in the past - what institutions we have set up. LS pushes KS for a policy that KS would recommend for this year [2024]. KS (weakly) suggests having humans sign off on big decisions, and that we likely need humans in the loop forever.
The discussion continues for another 30 minutes, but I do not think further insights are uncovered; it is mostly repetition of ideas already mentioned. I think LS' attempt at summarizing KS' perspective reveals that the conversation did not bring a lot of clarity:
LS. You have a P(doom). It's significant. You don't know exactly what it is.
Your definition of what's good and bad might be different from mine, because you see the future universe as being so likely to have so much good divergence in it. This gives you kind of a default sense that things are probably going to go okay, even though it could also backfire.
Then there's this other point we disagree on. You think that optimizing toward some goal doesn't work that well. You just have to explore in the present until a solution reveals itself later.
The second point surprised me. KS says on multiple occasions that he thinks AI will be dangerous and that open-endedness is dangerous, and his top concern is that, by focussing on optimization, people are misunderstanding the main issues and will not think of appropriate measures.
It is a shame that the discussion ended up being confusing and less fruitful than other Doom Debates interviews, because there is potentially a lot to learn from understanding KS' perspective.