O(“AGI Safety”)>O(“Stop Tyrants”)
post by AnthonyRepetto · 2023-02-04T18:38:55.133Z · LW · GW · 11 comments
~ AGI safety is at LEAST as hard as a protocol which prevents tyranny ~
When we want to keep ourselves safe from AGI, one of the key criteria is "can we turn it off if it tries to go berserk?" That is the same requirement we face whenever we put some person in charge of an entire government: "can we depose this person, without them using the military to stop us?"
AGI has extra risks and problems, being super-intelligent, while most dictators are morons. Yet, if someone told me “we solved AGI safety!” then I would happily announce “then you’ve also solved protecting-governments-from-dictators!” You might be able to avoid all dictators in a way which does NOT cure us of AGI risks… though, if you reliably prevent AGI-pocalypse, then you’ve definitely handled dictators, using the same method.
So, does that mean we’ll soon find a panacea for AGI-death threats? Considering that we haven’t stopped dictators in the last few… centuries? Millennia? Yeah, we might be screwed. Considering that dictators have nukes, and can engineer super-viruses… Oh, and that would imply: “Dictators are the existential risk of ‘a berserk machine we can’t turn off’… meaning that we need to fight those AGI overlords today.”
11 comments
Comments sorted by top scores.
comment by tailcalled · 2023-02-04T20:35:02.126Z · LW(p) · GW(p)
AGI safety has the benefit that people get to decide the code for the AGI (or for the system that makes the AGI), whereas tyranny has the problem that the "code" for a tyrant or dictator was created by a combo of evolution and self-promotion, which is relatively outside of deliberate control.
comment by Viliam · 2023-02-05T18:58:46.359Z · LW(p) · GW(p)
Sometimes, solving a more general problem is easier than solving a partial problem (1, 2). If you build a Friendly superhuman AI, I would expect that some time later all dictators will be removed from power... for example by an army of robots that will infiltrate the country unseen, and at the same moment destroy its biggest weapons and arrest the dictator and the other important people of the regime.
(What exactly does "solving" AGI safety mean: a general theory published in a scientific paper, an exact design that solves all the technical issues, or having actually built the machine? Mere theory will not depose dictators.)
comment by JBlack · 2023-02-05T01:50:11.746Z · LW(p) · GW(p)
I'm not sure that the implication holds.
Dictators gain their power by leverage over human agents. A dictator that kills all other humans has no power, and then lives the remainder of their shortened life in squalor. A superintelligent AI that merely has the power of a human dictator for eternity and relies on humans to do 99% of what it wants is probably in the best few percent of outcomes from powerful AI. Succeeding in limiting it to that would be an enormous success in AGI safety even if it wasn't the best possible success.
This is probably another example of the over-broadness of the term "AGI safety", where one person can use it to mean mostly "we get lots of good things and few bad things" and another to mean mostly "AGI doesn't literally kill everyone and everything".
comment by JBlack · 2023-02-05T01:42:45.588Z · LW(p) · GW(p)
What does the "O(x)" notation in the title mean?
The only thing I'm familiar with is the mathematical meaning of a class of functions defined in terms of being bounded by the argument, but that clearly isn't intended here. Other things that came to mind were P(x) for probability and U(x) for utility, but it doesn't mean either of these either.
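(For reference, the standard definition in question is:

\[ f(n) = O(g(n)) \iff \exists\, c > 0,\; n_0 \;\text{such that}\; |f(n)| \le c\,|g(n)| \;\text{for all}\; n \ge n_0, \]

a statement about the growth rate of one function relative to another, which is why it clearly isn't what the title intends.)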
Replies from: lahwran
↑ comment by the gears to ascension (lahwran) · 2023-02-05T07:26:42.042Z · LW(p) · GW(p)
big O of generating a solution, I think
comment by Donald Hobson (donald-hobson) · 2023-02-04T22:01:26.705Z · LW(p) · GW(p)
There are solutions to the AI safety problem that don't help with dictators, beyond the obvious "friendly superintelligence retiring all dictators".
Suppose RLHF or something just worked wonderfully. You just program a particular algorithm, and everything is fine, the AI does exactly what you wanted. The existence and human findability of such an algorithm wouldn't stop dictators (until a friendly AI does). So we gain no evidence that such an algorithm doesn't exist by observing dictators.
There are various reasons dictators aren't as much of an X-risk: they don't make stuff themselves, they hire people to do it, and very few people are capable and willing enough to make the super-plague.
Replies from: None
↑ comment by [deleted] · 2023-02-04T22:12:57.300Z · LW(p) · GW(p)
Enslavement happens mostly to people who aren't capable of getting out of their dire situations. If people aren't doing what they want, their emotions will be misaligned with their livelihoods, costing much of their mental capacity on soothing and related activities. Subsistence becomes the bottom line, and people are no longer thriving as they would be without the extra mental baggage. We live on a little planet. It's really not hard to see what other people are doing, and it is pretty difficult to deviate from the universal societal model without at least spending hundreds of years researching and experimenting with alternatives just so you can try something else out. In the meantime, you will miss all the progress made in the universal model. No civilization is stupid enough to choose this path.
Propaganda is very different from reality though. It may seem like there are so many differences between us, but we are more or less the same. Technology gets shared quickly.
Replies from: lahwran, donald-hobson
↑ comment by the gears to ascension (lahwran) · 2023-02-04T23:51:31.055Z · LW(p) · GW(p)
parse error, can you use 4x more words to make this point?
Replies from: None
↑ comment by [deleted] · 2023-02-05T00:35:53.343Z · LW(p) · GW(p)
I'll try. To analyze the structural properties that make up modern society, we need not only to look at what exists in the present but also to give the same importance to historical instances that contribute to our structural analysis.
The overarching theme of this post seems to be connecting two different existential threats and comparing which one is worse, as if they can somehow be placed on the same axis and compared that way. Tyrants have been the norm throughout most of history. I think, though I am not confident at all, that it is only at the current size of civilization that the tyrannical paradigm has been challenged. Tyranny has a single point of failure, and that has long been the argument against it, for very good reasons. In the historical past, the rule of tyrants was small in scale: a little village here, a little town there, that's it. With many tyrants, if one tyrant fails, we still have other good tyrants to rely on. People can migrate, but migration is limited by the available technologies: animals, wheels, and ships. We are at 8 billion right now.
When we analyze a society, we can't just look at its government structure. It says very little about the society; it only tells us about the processes of the enforcement institutions that govern it. Those can be removed and replaced, as has happened before. Enslavement happens not directly because of a choice in the design of the governing structure but because of how the power struggle inside the government gets abused. Of course, any design can also implement fail-safes against such systemic failure. These don't really make their way into public discourse, unfortunately, but are crucial if you want to align your own understanding with reality. Whether those fail-safes actually work is a different story. There are many levels of interaction, but we aren't really equipped to talk about them because we lack the background education and the type of research think tanks can provide. They do this for a living.
I brought up aboriginal tribes as a form of enslavement. They enslave their members through fear of the unknown, just as most types of tribal groups or cults control their members. Members fear the unknown, so no agency is lost, and thus there is no need for soothing except when they encounter unknowns in real life. The soothing scenario happens when the majority of your population knows about the outside world, so you can only enforce their enslavement through fear of violence. That's what North Korea has done. Their society is too big to self-contain. A lot of human potential has been sacrificed to make things more easily manageable. Governments don't really do much; government is just a small aspect of society as a whole. So the human potential lost covers a much larger area than the gains in ease of management. I'm sure I'm missing information regarding North Korea, since we hear very little about it, so I am definitely not very confident in my statements. Estimating the actual human agency and potential lost should probably be left to the people who do this for a living. They don't seem to publicize this type of information for geopolitical reasons. Think tanks don't think people need to know. People just need to know that such regimes are bad. End of story.
Public discourse focuses on topics based on popularity, a purely statistical metric, more aligned with geopolitical agendas than with anything practical. This doesn't mean we have enough information regarding these topics to discuss them productively. Most people already know it's pointless to argue with others about religion solely because we know so little. Why doesn't this consensus apply to other areas of interest?
↑ comment by Donald Hobson (donald-hobson) · 2023-02-04T23:03:22.802Z · LW(p) · GW(p)
It isn't at all clear what you are trying to say.
Are you arguing that dictators aren't an X risk, or AI won't enslave humans, or AI tech will be widely distributed or what?
Replies from: None
↑ comment by [deleted] · 2023-02-04T23:42:27.230Z · LW(p) · GW(p)
None of these that you have stated. These reductions don't really align with what I wrote. How did you arrive at these conclusions? I can't answer your question if I don't know how you got there.
Enslavement refers to an extreme power structure where complexity is sacrificed for simplicity and the reduction of people to more easily manageable human resources. It's not hard to see how it detracts from maximizing human potential.
Nobody currently really lives like people did even just 100 years ago. Enslavement on a large scale requires isolation from other large societies, thus we don't really see large groups of people living as they did 100 years ago. We might be able to find certain aboriginal tribes and such, but that isn't large scale, as it is easy to isolate on a pretty much abandoned piece of land in the middle of an ocean. I guess it isn't very politically correct to call aboriginal tribes a form of enslavement. When you have some established government like in North Korea, it is easy to apply that label in modern rhetoric.
I did not mention anything regarding AI safety. I don't really understand how "stop tyrants" has anything to do with "AGI safety." These two things don't really have anything to do with each other whatsoever. Comparing them seems to be an intellectual disservice.
The popularity of topics in public discourse doesn't necessarily mean they add any meaningful value to the discussion, as propaganda affects what people occupy their minds with every day.