Autopoietic systems and difficulty of AGI alignment
post by jessicata (jessica.liu.taylor) · 2017-08-20T01:05:10.000Z · LW · GW · 14 comments
I have recently come to the opinion that AGI alignment is probably extremely hard. But it's not clear exactly what AGI or AGI alignment are. And there are some forms of alignment of "AI" systems that are easy. Here I operationalize "AGI" and "AGI alignment" in some different ways and evaluate their difficulties.
Autopoietic cognitive systems
From Wikipedia:
The term "autopoiesis" refers to a system capable of reproducing and maintaining itself.
This isn't entirely technically crisp. I'll elaborate on my usage of the term:
- An autopoietic system expands, perhaps indefinitely. It will feed on other resources and through its activity gain the ability to feed on more things. It can generate complexity that was not present in the original system through e.g. mutation and selection. In some sense, an autopoietic system is like an independent self-sustaining economy.
- An autopoietic system, in principle, doesn't need an external source of autopoiesis. It can maintain itself and expand regardless of whether the world contains other autopoietic systems.
- An autopoietic cognitive system contains intelligent thinking.
Some examples:
- A group of people on an island that can survive for a long time and develop technology is an autopoietic cognitive system.
- Evolution is an autopoietic cognitive system (cognitive because it contains animals).
- An economy made of robots that can repair themselves, create new robots, gather resources, develop new technology, etc. is an autopoietic cognitive system.
- A moon base that necessarily depends on Earth for resources is not autopoietic.
- A car is not autopoietic.
- A computer with limited memory not connected to the external world can't be autopoietic.
Fully automated autopoietic cognitive systems
A fully automated autopoietic cognitive system is an autopoietic cognitive system that began from a particular computer program running on a computing substrate such as a bunch of silicon computers. It may require humans as actuators, but doesn't need humans for cognitive work, and could in principle use robots as actuators.
Some might use the term "recursively self-improving AGI" to mean something similar to "fully automated autopoietic cognitive system".
The concept seems pretty similar to "strong AI", though not identical.
Difficulty of aligning a fully automated autopoietic cognitive system
Creating a good and extremely-useful fully automated autopoietic cognitive system requires solving extremely difficult philosophical and mathematical problems. In some sense, it requires answering the question of "what is good" with a particular computer program. The system can't rely on humans for its cognitive work, so in an important sense it has to figure out the world and what is good by itself. This requires "wrapping up" large parts of philosophy.
For some intuitions about this, it might help to imagine a particular autopoietic system: an alien civilization. Imagine an artificial planet running evolution at an extremely fast speed, eventually producing intelligent aliens that form a civilization. The result of this process would be extremely unpredictable, and there is not much reason to think it would be particularly good for humans (other than the decision-theoretic argument of "perhaps smart agents cooperate with less-smart agents that spawned them because they want this cooperation to happen in general", which is poorly understood and only somewhat decision-relevant).
Almost-fully-automated autopoietic cognitive systems
An almost-fully-automated autopoietic cognitive system is an autopoietic cognitive system that receives some input from humans, but a quite-limited amount (say, less than 1,000,000 total hours from humans). After receiving this much data, it is autopoietic in the sense that it doesn't require humans for doing its cognitive work. It does a very large amount of expansion and cognition after receiving this data.
Some examples:
- Any "raise the AGI like you would raise a child" proposal falls in this category.
- An AGI that thinks on its own but sometimes gives queries to humans would fall in this category.
- ALBA doesn't use the ontology of "autopoietic systems", but if Paul Christiano's research agenda succeeded, it would eventually produce an aligned almost-fully-automated autopoietic cognitive system (in order to be competitive with an unaligned almost-fully-automated autopoietic cognitive system).
Difficulty of aligning an almost-fully-automated autopoietic cognitive system
My sense is that creating a good and extremely-useful almost-fully-automated autopoietic cognitive system also requires solving extremely difficult philosophical and mathematical problems. Although getting data from humans will help in guiding the system, there is only a limited amount of guidance available (the system does a bunch of cognitive work on its own). One can imagine an artificial planet running at an extremely fast speed that occasionally pauses to ask you a question. This does not require "wrapping up" large parts of philosophy immediately, but it does require "wrapping up" large parts of philosophy in the course of the execution of the system.
(Of course artificial planets running evolution aren't the only autopoietic cognitive systems, but it seems useful to imagine life-based autopoietic cognitive systems in the absence of a clear alternative.)
Like with unaligned fully automated autopoietic cognitive systems, unaligned almost-fully-automated autopoietic cognitive systems would be extremely dangerous to humanity: the future of the universe would be outside of humanity's hands.
My impression is that the main "MIRI plan" is to create an almost-fully-automated autopoietic cognitive system that expands to a high level, stops, and then assists humans in accomplishing some task. (See: executable philosophy; task-directed AGI).
Non-autopoietic cognitive systems that extend human autopoiesis
An important category of cognitive systems are ones that extend human autopoiesis without being autopoietic themselves. The Internet is one example of such a system: it can't produce or maintain itself, but it extends human activity and automates parts of it.
This is similar to, but more expansive than, the concept of "narrow AI", since such systems could in principle be domain-general (e.g. a neural net policy trained to generalize across different types of tasks). The concept of "weak AI" is similar.
Non-autopoietic automated cognitive systems can present existential risks, for the same reason other technologies and social organizations (nuclear weapons, surveillance technology, global dictatorship) present existential risk. But in an important sense, non-autopoietic cognitive systems are "just another technology" contiguous with other automation technology, and managing them doesn't require doing anything like wrapping up large parts of philosophy.
Where does Paul's agenda fit in?
[edit: see this comment thread]
As far as I can tell, Paul's proposal is to create an almost-fully-automated autopoietic system that is "seeded with" human autopoiesis in such a way that, though afterwards it grows without human oversight, it eventually does things that humans would find to be good. In an important sense, it extends human autopoiesis, though without many humans in the system to ensure stability over time. It avoids value drift over time through some "basin of attraction" as in Paul's post on corrigibility. (Paul can correct me if I got any of this wrong)
In this comment, Paul says he is not convinced that lack of philosophical understanding is a main driver of risk, with the implication that humans can perhaps create aligned AI systems without understanding philosophy; this makes sense to the extent that AI systems are extending human autopoiesis and avoiding value drift rather than having their own original autopoiesis.
I wrote up some thoughts on Paul Christiano's agenda already. Roughly, my take is that getting corrigibility right (i.e. getting an autopoietic system to extend human autopoiesis without much human oversight and without having value drift) requires solving very difficult philosophical problems, and it's not clear whether these are easier or harder than those required for the "MIRI plan" of creating an almost-fully-automated autopoietic cognitive system that does not extend human autopoiesis but does assist humans in some task. Of course, I don't have all of Paul's intuitions on how to do corrigibility.
I would agree with Paul that, conditioned on the AGI alignment problem not being very hard, it's probably because of corrigibility.
My position
I would summarize my position on AGI alignment as:
- Aligning a fully automated autopoietic cognitive system and aligning an almost-fully-automated one both seem extremely difficult. My snap judgment is to assign about 1% probability to humanity solving this problem in the next 20 years. (My impression is that "the MIRI position" thinks the probability of this working is pretty low, too, but doesn't see a good alternative.)
- Consistent with this expectation, I hope that humans do not develop almost-fully-automated autopoietic cognitive systems in the near term. I hope that they instead continue to develop and use non-autopoietic cognitive systems that extend human autopoiesis. I also hope that, if necessary, humans can coordinate to prevent the creation of unaligned fully-automated or almost-fully-automated autopoietic cognitive systems, possibly using non-autopoietic cognitive systems to help them coordinate.
- I expect that thinking about how to align almost-fully-automated autopoietic cognitive systems with human values has some direct usefulness and some indirect usefulness (for increasing some forms of philosophical/mathematical competence), though actually solving the problem is very difficult.
- I expect that non-autopoietic cognitive systems will continue to get better over time, and that their use will substantially change society in important ways.
14 comments
Comments sorted by top scores.
comment by Wei Dai (Wei_Dai) · 2017-08-18T16:28:08.000Z · LW(p) · GW(p)
I have recently come to the opinion that AGI alignment is probably extremely hard.
I'm curious what initially triggered this.
My snap judgment is to assign about 1% probability to humanity solving this problem in the next 20 years.
This seems a bit low, given that there's a number of disjunctive ways that it could happen. Besides MIRI and Paul's approaches, there's IRL (and related ideas), and using ML to directly imitate humans (including long-term behaviors and thought processes). The last one doesn't seem to necessarily require solving many philosophical problems. Oh, there's also whole brain emulation.
But in an important sense, non-autopoietic cognitive systems are “just another technology” contiguous with other automation technology, and managing them doesn’t require doing anything like wrapping up large parts of philosophy.
I'm pretty worried that such technology will accelerate value drift within the current autopoietic system. Considering that we already have things like automation-mediated social media addiction / echo chambers and automation-enhanced propaganda/disinformation, the situation seems likely to get worse as technology keeps improving. The underlying problem here appears to be that it's easier to apply automation technology when the goal can be clearly defined and measured. We know how to define and measure things like engagement and making someone believe something; we don't know how to define and measure normative correctness. We seem to need that to help defend against those offensive technologies and prevent value drift.
↑ comment by jessicata (jessica.liu.taylor) · 2017-08-18T18:35:36.000Z · LW(p) · GW(p)
I’m curious what initially triggered this.
I tried to solve the problem and found that I thought it was very hard to make the sort of substantial progress that would meaningfully bridge the gap from our current epistemic/philosophical state to the state where the problem is largely solved. I did make incremental progress, but not the sort of incremental progress I saw as attacking the really hard problems. Towards the later parts of my work at MIRI, I was doing research that seemed to be largely overlapping with complex systems theory (in order to reason about how to align autopoietic systems similar to evolution) in a way that made it hard to imagine that I'd come up with useful crisp formal definitions/proofs/etc.
This seems a bit low, given that there’s a number of disjunctive ways that it could happen.
I feel like saying 2% now. Not sure what caused the update.
I’m pretty worried that such technology will accelerate value drift within the current autopoietic system.
I'm also worried about something like this, though I would state the risk as "mass insanity" rather than "value drift". ("Value drift" brings to mind an individual or group trying to preserve their current object-level values, rather than trying to preserve somewhat-universal human values and sane reflection processes)
↑ comment by Wei Dai (Wei_Dai) · 2017-08-19T01:48:38.000Z · LW(p) · GW(p)
I hope you stay engaged with the AI risk discussions and maintain your credibility. I'm really worried about the self-selection effect where people who think AI alignment is really hard end up quitting or not working in the field in the first place, and then it appears to outsiders that all of the AI safety experts don't think the problem is that hard.
I’m also worried about something like this, though I would state the risk as “mass insanity” rather than “value drift”. (“Value drift” brings to mind an individual or group trying to preserve their current object-level values, rather than trying to preserve somewhat-universal human values and sane reflection processes)
I'm envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won't be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked.
You didn't respond to my point that defending against this type of technology does seem to require solving hard philosophical problems. What are your thoughts on this?
↑ comment by jessicata (jessica.liu.taylor) · 2017-08-19T08:57:20.000Z · LW(p) · GW(p)
I agree that selection bias is a problem. I plan on discussing and writing about AI alignment somewhat in the future. Also note that Eliezer and Nate think the problem is pretty hard and unlikely to be solved.
You didn’t respond to my point that defending against this type of technology does seem to require solving hard philosophical problems. What are your thoughts on this?
Automation technology (in an adversarial context) is kind of like a very big gun. It projects a lot of force. It can destroy lots of things if you point it wrong. It might be hard to point at the right target. And you might kill or incapacitate yourself if you do something wrong. But it's inherently stupid, and has no agency by itself. You don't have to solve philosophy to deal with large guns, you just have to do some combination of (a) figure out how to wield them to do good with them, (b) get people to stop using them, (c) find strategies for fighting against them, or (d) defend against them. (Certainly, some of these things involve philosophy, but they don't necessarily require fully formalizing anything). The threat is different in kind from that of a fully-automated autopoietic cognitive system, which is more like a big gun possessed by an alien soul.
↑ comment by Wei Dai (Wei_Dai) · 2017-08-20T19:19:06.000Z · LW(p) · GW(p)
You don’t have to solve philosophy to deal with large guns, you just have to do some combination of (a) figure out how to wield them to do good with them, (b) get people to stop using them, (c) find strategies for fighting against them, or (d) defend against them.
Do you have ideas for how to do these things, for the specific "big gun" that I described earlier?
The threat is different in kind from that of a fully-automated autopoietic cognitive system, which is more like a big gun possessed by an alien soul.
If the big gun is being wielded by humans whose values and thought processes have been corrupted (by others using that big gun, or through some other way like being indoctrinated in bad ideas from birth), that doesn't seem very different from a big gun possessed by an alien soul.
↑ comment by jessicata (jessica.liu.taylor) · 2017-08-23T06:55:07.000Z · LW(p) · GW(p)
Do you have ideas for how to do these things, for the specific “big gun” that I described earlier?
Roughly, minimize direct contact with things that cause insanity, be the sanest people around, and as a result be generally more competent than the rest of the world at doing real things. At some point use this capacity to oppose things that cause insanity. I haven't totally worked this out.
If the big gun is being wielded by humans whose values and thought processes have been corrupted (by others using that big gun, or through some other way like being indoctrinated in bad ideas from birth), that doesn’t seem very different from a big gun possessed by an alien soul.
It's hard to corrupt human values without corrupting other forms of human sanity, such as epistemics and general ability to do things.
↑ comment by paulfchristiano · 2017-08-19T04:03:10.000Z · LW(p) · GW(p)
defending against this type of technology does seem to require solving hard philosophical problems
Why is this?
The case you describe seems clearly contrary to my preferences about how I should reflect. So a system which helped me implement my preferences would help me avoid this situation (in the same way that it would help me avoid being shot, or giving malware access to valuable computing resources).
It seems quite plausible that we'll live to see a world where it's considered dicey for your browser to uncritically display sentences written by an untrusted party.
↑ comment by Wei Dai (Wei_Dai) · 2017-08-19T05:21:56.000Z · LW(p) · GW(p)
It seems quite plausible that we’ll live to see a world where it’s considered dicey for your browser to uncritically display sentences written by an untrusted party.
How would your browser know who can be trusted, if any of your friends and advisers could be corrupted at any given moment (or just their accounts taken over by malware and used to spread optimized disinformation)?
The case you describe seems clearly contrary to my preferences about how I should reflect.
How would an automated system help you avoid it, aside from blocking off all outside contact? (I doubt I'd be able to ever figure out what my values actually are / should be, if I had to do it without talking to other humans.) If you're thinking of some sort of meta-execution-style system to help you analyze arguments and distinguish between correct arguments and merely convincing ones, I think that involves solving hard philosophical problems. My understanding is that Jessica agrees with me on that, so I was asking why she doesn't think the same problem applies in the non-autopoietic automation scenario.
↑ comment by cousin_it · 2017-08-19T18:46:13.000Z · LW(p) · GW(p)
figure out what my values actually are / should be
I think human ideas are like low resolution pictures. Sometimes they show simple things, like circles, so we can make a high resolution picture of the same circle. That's known as formalizing an idea. But if the thing in the picture looks complicated, figuring out a high resolution picture of it is an underspecified problem. I fear that figuring out my values might be that kind of problem.
So apart from hoping to define a "full resolution picture" of human values, either by ourselves or with the help of some AI or AI-human hybrid, it might be useful to come up with approaches that simply don't require it at any stage. That was my motivation for this post, which relies on using our "low resolution picture" to describe some particular nice future without considering all possible ones. It's certainly flawed, but there might be other similar ideas.
Does that make sense?
↑ comment by Wei Dai (Wei_Dai) · 2017-08-22T08:33:29.000Z · LW(p) · GW(p)
I think I understand what you're saying, but my state of uncertainty is such that I put a lot of probability mass on possibilities that wouldn't be well served by what you're suggesting. For example, the possibility that we can achieve most value not through the consequences of our actions in this universe, but through their consequences in much larger (computationally richer) universes simulating this one. Or that spreading hedonium is actually the right thing to do and produces orders of magnitude more value than spreading anything that resembles human civilization. Or that value scales non-linearly with brain size so we should go for either very large or very small brains.
While discussing the VR utopia post, you wrote "I know you want to use philosophy to extend the domain, but I don't trust our philosophical abilities to do that, because whatever mechanism created them could only test them on normal situations." I have some hope that there is a minimal set of philosophical abilities that would allow us to eventually solve arbitrary philosophical problems, and we already have this. Otherwise it seems hard to explain the kinds of philosophical progress we've made, like realizing that other universes probably exist, and figuring out some ideas about how to make decisions when there are multiple copies of us in this universe and others.
Of course it's also possible that's not the case, and we can't do better than to optimize the future using our current "low resolution" values, but until we're a lot more certain of this, any attempt to do this seems to constitute a strong existential risk.
comment by IAFF-User-228 (Imported-IAFF-User-228) · 2017-08-20T09:43:55.000Z · LW(p) · GW(p)
I'm thinking about semi-evolutionary systems that fit in the almost-fully-automated category. So this discussion is relevant to my interests.
Firstly, it is worth noting that most computational evolutionary systems haven't produced much interesting stuff compared to real evolution. These systems tend to settle into stable configurations because simple stable strategies can emerge and dominate. Unless the fitness landscape is changing, there is no need for the strategies to change. Complexity needs to be forced. See the evolution of complexity section of this paper: http://people.reed.edu/~mab/publications/papers/BedauTICS03.pdf. I will try and find a better link.
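As a rough illustration of that convergence point, here is a minimal toy sketch (my own hypothetical example, not taken from the Bedau paper): a one-dimensional hill-climbing population quickly settles on a single strategy when the fitness peak is fixed, and only keeps changing when the peak moves.

```python
import random


def total_late_movement(generations=200, pop_size=50, shifting=False, seed=0):
    """Return how much the population's mean strategy keeps moving late in the run."""
    rng = random.Random(seed)
    population = [rng.uniform(0, 10) for _ in range(pop_size)]
    peak = 5.0  # location of the rewarded "strategy"
    means = []
    for gen in range(generations):
        if shifting and gen % 20 == 0:
            peak = rng.uniform(0, 10)  # occasionally move the fitness peak
        # Truncation selection: keep the half closest to the current peak,
        # then refill the population with slightly mutated copies.
        population.sort(key=lambda x: abs(x - peak))
        survivors = population[: pop_size // 2]
        population = survivors + [x + rng.gauss(0, 0.1) for x in survivors]
        means.append(sum(population) / pop_size)
    late = means[generations // 2:]
    return sum(abs(b - a) for a, b in zip(late, late[1:]))


print("static landscape:  ", round(total_late_movement(shifting=False), 2))
print("shifting landscape:", round(total_late_movement(shifting=True), 2))
```

In this toy setup the "shifting" run keeps reorganizing the population long after the "static" run has effectively frozen, which is roughly the stagnation problem described above.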
Earth's history forced organisms to be intelligent/adaptive over time.
So in my system a human's purpose is to control the evolutionary landscape and force the programs to become more complex. This by itself would not mean the system was aligned, but I think culture/verbal transmission of information is a lot more important than other people in the AI community think it is. So I see it not as transmitting information, but as transmitting parts of our own programming.
So the analogy would be one world (the human) interacting with another, alien world (the computer) by sending individual members of the human world to it, and also by shaping the evolutionary landscape of the alien world to favour the humans sent.
Eventually it would be great if the worlds were combined into one, so that there could be free flow of people in both directions (assuming everything was seeded from a human).
comment by paulfchristiano · 2017-08-18T17:41:56.000Z · LW(p) · GW(p)
- A competitive system can use a very large number of human hours in the future, as long as it uses relatively few human hours today.
- By "lack of philosophical understanding isn't a big risk" I meant: "getting object-level philosophy questions wrong in the immediate future, like how to trade off speed vs. safety or how to compromise amongst different values, doesn't seem to destroy too much value in expectation." We may or may not need to solve philosophical problems to build aligned AGI. (I think Wei Dai believes that object-level philosophical errors destroy a lot of value in expectation.)
- I think autopoietic is a useful category and captures half of what is interesting about "recursively self-improving AGI." There is a slightly different economic concept, of automation that can be scaled up using fixed human inputs, without strongly diminishing returns. This would be relevant because it changes the character and pace of economic growth. It's not clear whether this is equivalent to autopoiesis. For example, Elon Musk seems to hope for technology which is non-autopoietic but has nearly the same transformative economic impact. (Your view in this post is similar to my best guess at Elon Musk's view, though more clearly articulated / philosophically crisp.)
↑ comment by jessicata (jessica.liu.taylor) · 2017-08-18T18:26:39.000Z · LW(p) · GW(p)
- That makes sense.
- OK, it seems like I misinterpreted your comment on philosophy. But in this post you seem to be saying that we might not need to solve philosophical problems related to epistemology and agency?
- That concept also seems useful and different from autopoiesis as I understand it (since it requires continual human cognitive work to run, though not very much).
↑ comment by paulfchristiano · 2017-08-19T04:05:04.000Z · LW(p) · GW(p)
- I think that we can avoid coming up with a good decision theory or priors or so on---there are particular reasons that we might have had to solve philosophical problems, which I think we can dodge. But I agree that we need or want to solve some philosophical problems to align AGI (e.g. defining corrigibility precisely is a philosophical problem).