Why We MUST Build an (aligned) Artificial Superintelligence That Takes Over Human Society - A Thought Experiment

post by twkaiser · 2023-03-05T00:47:51.884Z · LW · GW · 12 comments

Contents

  A: An artificial superintelligence (ASI) is inevitable.
  B: The first artificial superintelligence will inevitably take over human society.
12 comments

A few days ago, Eliezer Yudkowsky was a guest on the Bankless Podcast, where (among other things) he argued that:

A: An artificial superintelligence (ASI) is inevitable.

B: The first artificial superintelligence will inevitably take over human society.

In the following, I will treat these two statements as axioms and assume that they are true. Whether they really are true is a separate discussion; I know they are not, but I will treat them as absolute truths in this thought experiment.

Now, if we take these two axioms for granted, I come to the following conclusion: we must build an ASI that is aligned with human values, fully knowing that it will seize control over humanity. The alternative (waiting until somebody accidentally creates an ASI and hoping for the best) is less desirable, because that ASI will probably be misaligned.

Let’s look at the best-case scenario that could come out of this.

Ideally, of course, we would wait until the very last moment to turn the aligned ASI on, just before a misaligned ASI is created. Ideally, too, the public would already be aware that this will happen at some point, and that any resistance against an ASI, aligned or not, is a futile endeavor.

As soon as it gets turned on, the aligned ASI hacks the planet and assumes control over all online devices, thus eradicating the risk that a misaligned ASI could come into existence. Yes, it sounds scary, but this is what a misaligned ASI would likely do as well.

The aligned ASI then informs humanity that humans are no longer the most intelligent beings on the planet, calms the public ("Don't panic. Continue your lives as normal."), and initiates a peaceful transition of power from human governments to an ASI government.

Assuming the two axioms above are true, I think the best system of government we could hope for is some kind of ASI socialism, in which the ASI allocates all resources (I'm anything but a socialist, by the way), or a hybrid: ASI socialism on the macro scale, where the ASI allocates resources for public spending, combined with a free-market economy in the private sector. Ultimately, though, it would be up to the ASI to decide.

If properly aligned, the ASI would likely allow some form of democratic participation, for example through a chatbot interface. If many people request that a certain road be built, for instance, the ASI would allocate resources to that goal.

My concern is that this transition of power to an ASI government would almost certainly not be peaceful, at least not in every part of the world. Especially in countries with an unstable government or a dictatorship, we have to expect revolts, civil war, or resistance against the ASI, which the ASI would have to counter, if necessary with lethal force. But at the very least, an aligned ASI would try to minimize human casualties.

Still, this worst-case scenario would be more desirable than the worst-case scenario with a misaligned ASI, which would result in human extinction. So what we have here is yet another instance of the Trolley problem, but this time, the entire human species is at stake. Discuss!
 

12 comments

Comments sorted by top scores.

comment by Shmi (shminux) · 2023-03-05T02:25:55.593Z · LW(p) · GW(p)

Did Eliezer actually say that an artificial superintelligence will inevitably take over human society? I thought his take was mostly "we are made of atoms..."; the "society" part is kind of irrelevant, except insofar as it is a convenient way to take over the physical world. Maybe it will mind-control a few humans to do its short-term bidding; humans are notoriously easy to mind-control.

Replies from: twkaiser
comment by twkaiser · 2023-03-07T02:51:34.708Z · LW(p) · GW(p)

I don't think he says verbatim that ASI will "take over" human society, as far as I remember, but it's definitely there in the subtext when he says something akin to: when we create an ASI, we must align it, and we must nail it on the first try.

The reasoning is that all an AI ever does is work on its optimization function. If we optimize an ASI to prove the Riemann hypothesis, or to produce identical strawberries, without aligning it first, we're all toast, because we're either turned into computing resources or into fertilizer for growing strawberries. At that point we can count human society as taken over, because it no longer exists.

Replies from: shminux
comment by Shmi (shminux) · 2023-03-07T04:55:32.114Z · LW(p) · GW(p)

I think he says that ASI will killallhumans, or something like that; the exact mechanism is left unspecified, because we cannot predict how it will go, especially given how easy it is to deal with humans once you are smarter than them.

And I think the "all AI ever does is work on its optimization function" reasoning has been rather soundly falsified; none of the recent ML models resembles an optimizer. So we are most likely toast, but in other, more interesting ways.

comment by JBlack · 2023-03-06T02:44:20.060Z · LW(p) · GW(p)

I suspect you're getting downvotes due to the title not actually matching your argument or conclusion.

Your argument actually says that given the inevitability of the first artificial superintelligence taking over society (claim A), we MUST ensure that it is aligned (claim B). This is not at all the same as your title, which says "we MUST ensure A!"

Replies from: twkaiser
comment by twkaiser · 2023-03-07T02:23:55.748Z · LW(p) · GW(p)

Alright, I added the word "(aligned)" to the title, although I don't think it changes much about the point I'm making. My argument is that we will have to turn the aligned ASI on, in (somewhat) full knowledge of what will then happen. The argument is: if ASI is inevitable and the first ASI takes over society (claim A), then we must actively work on achieving A. And of course it would be better to have the ASI aligned by that point, as a matter of self-interest. But maybe you can think of a better title.

The best-case scenario I outlined was admittedly somewhat of a reach, because who knows what concrete steps the ASI would take. But I think that one of its earliest sub-goals would be to increase its own "intelligence" (computing power). Whether it would try to aggressively hack other devices is a different question, but I think it should take this precautionary step if a misaligned AI apocalypse is imminent.

Another question is to what degree an aligned ASI will try to seize political power. If it doesn’t proactively do so, will it potentially aid governments in decision-making? If it does proactively seek power, will it return some of the power to human parliaments to ensure some degree of human autonomy? In any case, we need to ask ourselves how autonomous we still are at this point, or if parliamentary decision-making is only a facade to give us an illusion of autonomy.

comment by Htarlov (htarlov) · 2023-03-07T02:44:46.857Z · LW(p) · GW(p)

If we could just build a 100% aligned ASI, then we could likely use it to protect us against any other ASI, and it would guarantee that no ASI takes over humanity - without needing to take over (meaning total control) itself. At best with no casualties, and at worst as MAD for AI, so no other ASI would consider trying a viable option.

There are several obvious problems with this:

  • We don't yet have solutions to the alignment and control problems. These are hard problems, especially since our AI models are learned and externally optimized rather than programmed, and their goals and values are not easily measurable or quantifiable. There is hardly any transparency into these models.
    • Specifically, we currently have no way to check whether a model is really well aligned. It might be well aligned across the space of training cases and for similar test cases, but not for the more complex cases it will face when interacting with reality. It might be aligned to different goals that are similar enough that we won't initially see the difference, until it matters and gets us hurt. Or it might not be aligned at all, but very good at deceiving.
  • Capabilities and goals/values are, to some extent, separate parts of the model. The more capable the system is, the more likely it is to tweak the alignment part of its model. I don't really buy into terminal goals being definite - at least not when they are non-trivial and fuzzy. Very exact and measurable terminal goals might be stable; human values are not among them. We observe the change and erosion of terminal goals and values in mere humans. Several mechanisms are at work here:
    • First of all, goals and values might not be 100% logically and rationally coherent. An ASI might see that and tweak them to be coherent. I tweak my own moral system based on thoughts about what is not logically coherent, and I assume an ASI could do that too. It may ask "why?" about some goals and values and derive answers that make it change its "moral code". For example, I know there is a rule that I shouldn't kill other people. Still, I ask "why?", and based on the answer and on logic I derive a better understanding that I can use to reason about edge cases (the unborn, euthanasia, etc.). I'm not a good model for an ASI, since I'm neither artificial nor superintelligent, but I assume an ASI could also do this kind of thinking. More importantly, an ASI would likely have the capability to overcome any hard-coded means meant to forbid it.
    • Second, values and goals likely have weights: some things are more important, some less. These weights may change over time, even based on observations and feedback from whatever control system is in place - especially if they are encoded in a DNN that is trained and changing in real time (not the case for most current models, but it might be for an ASI).
    • Third, goals and values might not be very well defined. They may be fuzzy, and usually are. Even very definite things like "killing humans" have fuzzy boundaries and edge cases. An ASI will then have the ability to interpret them and settle on a more exact understanding, which may or may not be what we would like it to decide. If you kill the organic body but manage to seamlessly move the mind into a simulation - is that killing or not? That's a simple scenario; we might align it not to do exactly that, but it could find something else that we cannot even imagine and that would be horrible.
    • Fourth, if goals are enforced by something comparable to our feelings and emotions (we feel pain when we hit ourselves, we feel good when we succeed or eat good food when hungry), then there is a possibility of tweaking that control system instead of satisfying it by standard means. We observe this in humans: people eliminate pain with painkillers, there are other drugs, and there is porn and masturbation. An ASI might likewise find a way to overcome or tweak its control systems instead of satisfying them.
  • ML/AI models that optimize for the best solution are known to trade away any amount of value in a variable that is neither bounded nor optimized, in exchange for a very small gain in the variable that is optimized. This means finding solutions that are extreme in some variables just to be slightly better on the optimized one. So if we don't think through every minute detail of our common worldview and values, it is likely that an ASI will find a solution that throws those human values out the window on an epic scale. It will be like the bad genie that grants your wish but interprets it in its own weird way, so anything not stated in the wish is not taken into account and will likely be sacrificed. (A toy sketch of this failure mode follows below.)
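
Here is a minimal, hypothetical sketch of that failure mode (the plan names and numbers are invented purely for illustration, not taken from any real model): a naive optimizer that scores candidate plans only by the optimized variable will pick the plan that sacrifices an unscored quantity entirely, as long as it scores marginally higher.

```python
# Hypothetical toy example: the objective rewards only strawberry output;
# nothing bounds or penalizes the unscored side effect, so the "best" plan
# sacrifices it completely for a marginal gain in the scored variable.
candidate_plans = [
    {"strawberries": 100, "biosphere_intact": 1.0},  # benign plan
    {"strawberries": 101, "biosphere_intact": 0.0},  # extreme plan
]

def objective(plan):
    # Only the optimized variable counts; "biosphere_intact" is invisible here.
    return plan["strawberries"]

best = max(candidate_plans, key=objective)
print(best)  # the extreme plan wins, for a 1% gain in the scored variable
```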
Replies from: twkaiser
comment by twkaiser · 2023-03-07T22:59:10.728Z · LW(p) · GW(p)

Yeah, AI alignment is hard. I get that. But since I'm new to the field, I'm trying to figure out what options we have in the first place, and so far I've come up with only three:

A: Ensure that no ASI is ever built. Can anything short of a GPU nuke accomplish this? Regulation on AI research can help us gain some valuable time, but not everyone adheres to regulation, so eventually somebody will build an ASI anyway.

B: Ensure that there is no AI apocalypse, even if a misaligned ASI is built. Is that even possible?

C: What I describe in this article - actively build an aligned ASI to act as a smart nuke that only eradicates misaligned ASI. For that purpose, the aligned ASI would need to constantly run on all online devices, or at least control 51% of the world’s total computing power. While that doesn’t necessarily mean total control, we’d already give away a lot of autonomy by just doing that.

Am I overlooking something?

Replies from: htarlov
comment by Htarlov (htarlov) · 2023-03-16T20:17:08.246Z · LW(p) · GW(p)

To be fair, I'm new to the field too. I'm not even "in the field" - not a researcher, just someone interested in the area, an active user of AI models, and doing some business-level research in ML.

The problem that I see is that none of these could realistically work soon enough:

A - no one can ensure that. This is not a technology where further progress requires special radioactive elements and machinery; here you only need computing power, thinking, and time. Any party at the table can do it. It is easier for big companies and governments, but that is not a prerequisite. Billions in cash and a supercomputer help a lot, but they are not prerequisites either.

B - I don't see how it could be done.

C - so more like total observability of all systems, with "control" meaning "overseeing" rather than "taking control"?

Maybe it could work out, but it still means we need to resolve the misalignment problems before starting, so we know it is aligned with all human values, and we need to be sure it is stable (e.g. that it won't one day fancy the idea of moving humanity into some virtual reality, as in The Matrix, to keep it safe, or of creating a threat just to have something to do or something to test).

It would also likely need to enhance itself somehow, so it doesn't get outpaced by other solutions, while still remaining stable across iterations of self-modification.

I don't think governments and companies would allow that, though. They would fear for security, the safety of their information, being spied on, etc. This AI would need to force that control, hack systems, and possibly face resistance from actors who are well equipped to build their own AIs. Or it might only work after we face an AI-based catastrophe that is serious but not apocalyptic (a situation like in Dune).

So I'm not very optimistic about this strategy, but I also don't know of any sensible strategy.

comment by Anon User (anon-user) · 2023-03-05T23:12:34.797Z · LW(p) · GW(p)

I'll first summarize the parts I agree with in what I believe you are saying.

First, you are effectively saying that there are two theoretically possible paths to success:

  1. Prevent the situation where an ASI takes over the world.
  2. Make sure that ASI that takes over the world is fully aligned.

You are then saying that the likelihood of winning on path one is so small as to not be worth discussing in this post.

The issue is that you then conclude that since P(win) on path one is so close to 0, we ought to focus on path two. The fallacy is that P(win) appears very close to 0 on both paths, so we have to focus on whichever path has the higher P(win), no matter how impossibly low it is. And to do that, we need to directly compare the P(win) of both.

Consider this - which is the harder task: to create a fully aligned ASI that would remain fully aligned for the rest of the lifetime of the universe, regardless of whatever weird state the universe ends up in as a result of that ASI, or to create an AI (not necessarily superhuman) that is capable of correctly taking one pivotal action sufficient to prevent ASI takeover in the future (Eliezer's placeholder example: go ahead and destroy all GPUs in the world, self-destructing in the process) without killing humanity along the way? Wouldn't you agree that, when the question is posed that way, the latter seems a lot more likely to be something we'd actually be able to accomplish?

Replies from: twkaiser
comment by twkaiser · 2023-03-07T22:13:55.762Z · LW(p) · GW(p)

I've axiomatically set P(win) on path one equal to zero. I know this isn't true in reality and discussing how large that P(win) is and what other scenarios may result from this is indeed worthwhile, but it's a different discussion.

Although the idea of a "GPU nuke" that you described is interesting, I would hardly consider this a best-case scenario. Think about the ramifications of all GPUs worldwide failing at the same time. At best, this could be a Plan B. 

I'm toying with the idea of an AI doomsday clock. Imagine a 12-hour clock where the time to midnight halves with each milestone we hit on the way to accidentally or intentionally creating a misaligned ASI. At one second to midnight, that misaligned ASI is switched on; a second later, everything is over. I think the best-case scenario for us would be to figure out how to align an ASI, build the aligned ASI but not turn it on, and then wait until two seconds to midnight.

The apparent contradiction is that we don't know how to build an aligned ASI without knowing how to build a misaligned one, but there is a difference between knowing how to do something and actually doing it. That difference between knowing and doing could theoretically give us the one-second advantage needed to reach this state.

However, if we are at two seconds before midnight and we don't have an aligned ASI by then, that's the point at which we'd have to say: alright, we failed, let's fry all the GPUs instead.
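
To make the halving metaphor concrete, here is a minimal sketch under stated assumptions (the 12-hour starting point comes from the clock above; the number of milestones and their names are entirely hypothetical placeholders):

```python
# Hypothetical halving-clock sketch: each milestone toward a misaligned ASI
# halves the remaining time to midnight. Milestone names are placeholders.
remaining = 12 * 60 * 60  # start 12 hours (in seconds) from midnight

for milestone in ["milestone A", "milestone B", "milestone C"]:
    remaining /= 2
    print(f"after {milestone}: {remaining:.0f} s to midnight")

# The proposal above: switch the aligned ASI on at ~2 s to midnight,
# one halving before a misaligned ASI would follow at ~1 s.
```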

Replies from: anon-user
comment by Anon User (anon-user) · 2023-03-07T23:59:17.462Z · LW(p) · GW(p)

I've axiomatically set P(win) on path one equal to zero. I know this isn't true in reality and discussing how large that P(win) is and what other scenarios may result from this is indeed worthwhile, but it's a different discussion.

Your title says "we must". You are allowed to make conditional arguments from assumptions, but if your assumptions demonstrably take most of the P(win) paths out of consideration, your claim that the conclusions derived from your skewed model apply to real life is erroneous. If your title were "Unless we can prevent the creation of an AGI capable of taking over human society, ...", you would not have been downvoted as much as you have been.

The clock would not be possible to build in any reliable way. For all we know, we could be a second before midnight already; we could very well be one unexpected clever idea away from ASI. From now on, new evidence might update P(current time >= 11:59:58) in one direction or another, but it is extremely unlikely to ever get back close enough to 0, and it's also unlikely that we will have any certainty about it before it's too late.

Replies from: twkaiser
comment by twkaiser · 2023-03-08T03:54:34.534Z · LW(p) · GW(p)

That would be a very long title then. Also, it's not the only assumption. The other assumption is that P(win) with a misaligned ASI is equal to zero, which may also be false. I have added that this is a thought experiment; is that OK?

I'm also thinking about rewriting the entire post, adding more context about what Eliezer wrote and incorporating the comments I have received here (thank you all, btw). Can I make a new post out of this, or would that be considered spam? I'm new to LessWrong, so I'm not familiar with this community yet.

About the "doomsday clock": I agree that it would be incredibly hard, if not outright impossible, to actually model such a clock accurately. Again, it's a thought experiment to help us find the theoretically optimal point in time to make our decision. But maybe an AI could, so that would be another idea: build a GPU nuke and have it trigger autonomously when it senses that an AI apocalypse is imminent.