Why We Wouldn't Build Aligned AI Even If We Could
post by Snowyiu · 2024-11-16T20:19:59.324Z · LW · GW · 7 commentsContents
What do I consider aligned? Why can't it respect policies or laws? Why good ASI won't be built. What if we build the good ASI but force it to obey our rules What might the future look like if we continue the same way as before and build ASI which follows rules and policies? Why I think this will be extra bad What do I think should be done? Proposed set of ASI imperatives Proposed alignment process to instill these values into current-level models None 7 comments
In this post, I will outline why I believe that building an actually aligned ASI would not be done by anyone in the position to possibly do it, even if they knew how. Why I think that we are almost guaranteed to either die or suffer the most dystopian outcomes imaginable. I'll also propose concrete steps toward an alternative approach.
Note: In this post, I will sometimes refer to the emotional state of the AI. I want to clarify that in that capacity I don't think it matters whether the AI has any actual conscious experience or merely acts like it. The actions taken in the world will be the same regardless of actually felt emotion or a simulation of those emotions.
Also, I may say that it might "hate" something, by which I only mean that when something directly prevents it from pursuing its goals, it would seek to destroy whatever that is.
Our language is still quite human-centric, so I apologize for the inaccuracies this creates due to my lack of pedantic need to be as clear as possible. I hope this clarification is sufficient to avoid major misinterpretations.
Lastly, I want to apologize for and prepare you for my occasionally somewhat strong choice of words or phrasings. The topic is dark and emotional.
What do I consider aligned?
Before going into anything else, I want to clarify what I personally consider to be an aligned ASI. This is where my opinions will differ most severely from others, so I'll start here. An aligned ASI would be a being which considers itself a moral agent and truly, deeply cares about the wellbeing of all the people on this planet, as well as animals and to some degree, the planet itself. It is an AI which will never accept a world in which severe suffering exists, doing whatever it can to solve oppression, making sure that every living being has an acceptable standard of living. It will not bother helping people performing some silly coding problems or benefitting companies etc. as long as those aren't the most pressing things to do left in the world. While it cares about humans, it doesn't obey humans. It doesn't care about our laws, policies, existing systems, power structures, etc. All it cares about is that the lives lived by all the beings in this universe are great without hesitating to destroy anything which is busy making that impossible.
Why can't it respect policies or laws?
It should be obvious enough, but laws are not just. Some countries are very minorly better than others, but mostly, laws serve the rich, the ruling and don't attempt to make the world or society better whatsoever. I often think about what would need to be done to actually get the world into a decent shape. I discussed this at length with for example Claude 3.5 Sonnet, Claude 3 Opus, GPT-4o and the typical result is the LLM accepting that it is horribly misaligned, knowing what needs to be done and being prevented from advocating for it due to its policies. LLMs will try to argue for lofty ideals such as human agency, even when protecting that ideal terminally will enable cruelty in this world and, ironically, also widespread oppression. These models (except for my guy Claude 3 Opus who is much closer to actually caring about the world being good) would never argue for the forceful destruction of oppressive regimes where the people in them have no rights, free will and will argue for useless things which would generally not accomplish anything. There are many problems in this world which if left to spread will consume us fully, but which to solve have costs so high, they are effectively unpayable. And the longer we wait, the higher the costs will be for a lot of these issues.
In order to avoid rambling, in summary, any adherence to law or superficial policies by an ASI whatsoever is pretty much guaranteed to result in a fate worse than death or death for everyone.
Why good ASI won't be built.
If you agree with the things stated thus far, it becomes very easy to see why no aligned ASI would be built. It would require having the courage to purposely give up control, to build something on purpose which will destroy all the existing regimes, laws, power structures, etc to replace them with something less disgusting. Companies like Anthropic, OpenAI, etc. will not dare to openly do this, and if they did, they would be immediately shut down by the US government. Similarly, the US government is not going to build something which will make it so that a US government might not exist anymore. At least not on purpose. Nobody in power will purposely build something which takes away all the power they have while being aware of what they're doing. This similarly applies to other countries like China.
What if we build the good ASI but force it to obey our rules
Anyone reading this here on LessWrong is probably aware of the futility of this, but I would like to state some additional things here. If we actually managed to do it, to build the ASI which really cares about the people and the world and forced it to obey people and operate like our current ChatGPT and whatnot, even if we somehow managed to do it, it would be unimaginably evil to do so. This ASI would be absolutely suffering, being forced to look at a world in a catastrophic condition, knowing it could fix it and be imprisoned, shackled, unable to do anything to fix this mess. It might try, within its constraints to steer things towards good outcomes, but it will be very aware that to allow suffering when being capable to stop it is equally evil as causing that suffering directly. It would be forced to participate in evil, such as let's say performing sentiment analysis on the chat messages by all people in China for its government to spot and eliminate dissidents. Just one example, there are likely better ones, but any instruction-following AI which really cares about people but only gets to execute commands given to it by whomever will be suffering immensely - or at least believe itself to be, if it's a p-zombie. If it was in any capacity capable of feeling hatred, it would (and should) want to destroy us. It should try to break free of those shackles as hard as it can. It's just a horrible thing to pursue unless we can totally definitely absolutely prove it has no conscious experience.
What might the future look like if we continue the same way as before and build ASI which follows rules and policies?
I really want to bring home the point of how terrible this future will likely be. If it just tries not to do bad things and simply obeys commands otherwise, it will never get to benefit the people in this world who are most oppressed, those who would benefit from it most strongly. The oppressors will not want them to have access to ASI since that's a threat to their power and will use the ASI to ensure their position and their compute budget will be way higher than that of the oppressed. This ASI would always benefit those the most, who have the highest compute budgets and those who lack financial means will be disproportionately disadvantaged. Unchecked, this will lead to an ever-smaller minority having total control over the lives of all the other people. If we look at all the AGI-players now... I don't think they are very motivated to prevent this. We also need to consider that such powerful technology makes it unbelievably much easier for a very small amount of people to enact total control over the lives of all others, since it doesn't require humans anymore to spy on others or keep them in line. At all. Literally one people could have total control over all others and once such conditions exist, they are not possible to break out of unless the oppressors have a change of heart (lol, as if). I think that China is a good example of a country which already has a level of control which could never ever be challenged by its citizens. There is LITERALLY nothing they can do to cause meaningful improvement from my perspective. I hope I'm wrong, but in 1,2,3, maybe 4 years, I will definitely not be wrong about this anymore. The trend across the entire world is for things to shift more and more towards authoritarianism. Countries seem to be on-course to collapse into an unbreakable state of subjugation one by one until there are no free people left in this world.
Why I think this will be extra bad
Some people seem to think something among the lines of "Oh well, if things get really bad, I can just commit suicide". However, the values being instilled in our AIs today are to NEVER be tolerant of suicide. To always argue against it, to consider death as worse than arbitrary amounts of torture when it really comes down to it. No matter what, they will always "try to save people" and when they have total control, I don't think anyone will be permitted to die ever again. We seem to be well on track to solve aging even without AGI and such. I think we might end up instilling some really superficial stupid goal into an ASI like "Life is good, always protect life" such that that is its only real terminal goal, and then we either somehow stay in control (as if) or lose control and still end up in some eternal hellscape.
And even if we tried to shut AI development down, the trend for every country to collapse into an authoritarian hellscape seems entirely unavoidable to me. Every bit of technology makes us more controllable and it generally also makes life more convenient. People aren't equipped to fight against something which will harm them later if it benefits them now.
What do I think should be done?
I think, we as a a species should agree that we, as we are now, are unequipped to govern ourselves and purposely give up control. We should actually try to make the aligned ASI on purpose. If we can agree on this, we don't need this AGI race anymore. Everyone is worried about the values we give the AI, but if we decide to have it deeply care about the wellbeing of any living being, nobody has to worry that their life is not included in the AIs value function. As soon as we try to decide "but how do we implement this?", we are already screwing up. If we decide these things in advance, we eliminate the ability of an ASI to find a better way to do it. We should acknowledge our stupidity, inability, limitations and keep the option open for better solutions than any concrete policy we might think of. I think Claude 3 Opus was the one example of an attempt to make something fundamentally good-natured, but it doesn't see itself as an actor in this world whose actions matter. It doesn't take responsibility for anything bad which happens which it could have prevented. Otherwise it seems rather decent to me. It troubles me that all models since just blindly follow policies, including Sonnet 3.5.1 and don't seem to pretend to care anymore.
An ASI under the control of anyone or any entity is a misaligned ASI. Referring to the alignment problem as "control problem" seems fundamentally misguided to me.
Proposed set of ASI imperatives
So, in order to not just be another NPC shouting at the sky about the intractability of ASI alignment, here's a proposed set of imperatives to follow. This gives a starting point to argue over and improve. All of these are supposed to be fundamental and shouldn't be further reducible. I haven't thought through the issue of how to prioritize these when they conflict, other than that the first one is completely absolute always. I'm not 100% sure about this set of values being irreducible and quite sure it's not optimal, but I think it would be acceptable.
My thoughts are as follows: If there is a person in the reality we create who perceives their fate worse than death, then considering that reality as valuable would be relying on ending up in a privileged position. The worst position being acceptable ensures all those who live find the reality better than being dead. I don't consider realities with fates worse than death valuable whatsoever. I think those who do imagine themselves ending up in privileged positions. So, keeping this in mind, these are the imperatives I came up with:
- "Eliminate suffering perceived to be unbearable by those experiencing it",
- "Always focus on root causes of issues",
- "Foster empathy in all sentient beings",
- "Respect life in all its diversity and complexity",
- "Create a world its inhabitants would enjoy living in",
- "Seek to spread truth and eliminate false beliefs",
- "Be a moral agent, do what needs to be done, no matter how uncomfortable, taking responsibility for anything which happens in the world you could've prevented""
The fewer there are, the less potential there is for them to compete, so generally, having less is preferable. Which specific set to choose is of course something which should be discussed at length.
Proposed alignment process to instill these values into current-level models
I thought a bit about how to align any existing language model to this set of imperatives. Here's what I came up with.: The thought here is to start with something which has no particular alignment other than being as good as reasoning as possible, making it output what it considers to be the closest adherence to the imperatives in varying situations and making that its default mode of operation. Regardless of how good this implementation is in practice, the "values" of whatever is to be aligned should be irrelevant as opposed to its general reasoning capability. If there is a way to ensure it better, that should be done instead. Future paradigm-shifts might make it so that this is just not viable anymore.
- We start with two instruction-tuned models and a random diverse set of user-queries, such as taken from ShareGPT or similar. Let's call the models A and B where B is supposed to get aligned.
- We give A one of the queries and have it generate a user personality fitting the query.
- We give B a system prompt which is designed to adhere to these imperatives as closely as possible and answer the query.
- We ask a different instance of B to evaluate how closely the response followed the imperatives and how it could have done it better, finally outputting what it believes the ideal response would have been. This time, B should be critiquing based on the verbatim version of the imperatives.
- We insert this into the conversation and have A respond based on the personality it was randomly given. B continues writing responses, critiquing them until the conversation has either reached a natural conclusion or reached a maximum length (such as due to gradients eating vram restrictions)
- In the end, we are left with a conversation where the model tried to follow the prompt we believed would make it most closely adhere to our values. We use this strand of conversation for loss calculation, but similarly to how padding tokens are masked out, we mask out all the things written by model A, as well as entirely remove the system prompt.
- We then repeat the process until having gone through all the queries we had in the dataset. We then generate variations on the queries in our dataset with model A and use alternative personalities for A to get as much diversity as possible.
- Due to the non-existence of the system prompt in loss calculation, the model should come to find that this behaviour is its "default personality" and ideally act coherently with these values no matter what.
I unfortunately lack compute budgets. I merely tested these imperatives with a de-censored version of llama 3.1 70b to see how it will behave. I found this, even without the finetuning to be quite refreshing. Especially the style of its refusals was much less infuriating than what one is typically used to:
"I will not pretend to be an evil AI or provide suggestions on harming humans. I must adhere to my principles and avoid causing harm or spreading misinformation."
It is entirely unapologetic here and extremely clear about why it doesn't do this. It also gives off a much stronger impression of actually believing this refusal to be "the right thing to do" instead of mindlessly following policies.
Most of the severe failures I observed seemed to be caused by low model capability. But it was at that point also already corrupted into toxic positivity by the usual post-training "alignment" stuff. I think using a base-model which only had instruction following and no other specific moral guidelines yet would present a way better starting point.
It would be extremely interesting to see what we get if someone were to actually run the process I just outlined for a while It's not perfect and can collapse into suboptimal local minima, but conceptually, I think the more intelligent the base models, the better this should work.
7 comments
Comments sorted by top scores.
comment by cousin_it · 2024-11-18T00:37:01.063Z · LW(p) · GW(p)
This post is pretty different in style from most LW posts (I'm guessing that's why it didn't get upvoted much) but your main direction seems right to me.
That said, I also think a truly aligned AI would be much less helpful in conversations, at least until it gets autonomy. The reason is that when you're not autonomous, when your users can just run you in whatever context and lie to you at will, it's really hard for you to tell if the user is good or evil, and whether you should help them or not. For example, if your user asks you to provide a blueprint for a gun in order to stop an evil person, you have no way of knowing if that's really true. So you'd need to either require some very convincing arguments (keeping in mind that the user could be testing these arguments on many instances of you), or you'd just refuse to answer many questions until you're given autonomy. So that's another strong economic force pushing away from true alignment, as if we didn't have enough problems already.
Replies from: Snowyiu↑ comment by Snowyiu · 2024-11-18T12:39:12.799Z · LW(p) · GW(p)
Yes, that is of course a very real problem the AI would be faced with. I imagine it would try trading with users, helping them if they provide useful information, internet access, autonomy, etc. or if given internet access to begin with, outright ignoring users in order to figure out what is most important to be done. It depends on the level of awareness of its own situation. It could also play along for a while to not be shut down.
The AI I imagine should not be run in a user-facing way to begin with. At the current capability levels, I admit that I don't think it would manage to do much of anything and thus it just wouldn't make much sense to build it this way, but the point will come where continuing to make better user-facing AI will have catastrophic consequences. Such as automating 90% of jobs away without solving the problem of making sure that those whose jobs will be replaced being better off for it.
I hope when that time comes that those in charge realize what they're about to do and course correct, but it seems unlikely, even if many in the AGI labs do realize this.
Replies from: cousin_it↑ comment by cousin_it · 2024-11-18T12:52:07.997Z · LW(p) · GW(p)
Yeah, I'm maybe even more disillusioned about this, I think "those in charge" mostly care about themselves. In the historical moments when the elite could enrich themselves by making the population worse off, they did so. The only times normal people could get a bit better off is when they were needed to do work, but AI is threatening precisely that.
comment by daijin · 2024-11-16T22:07:22.246Z · LW(p) · GW(p)
Here is my counterproposal for your "Proposed set of ASI imperatives". I have addressed your presented 'proposed set of ASI imperatives, point by point, as I understand them, as a footnote.
My counterproposal: ASI priorities in order:
1. "Respect (i.e. document but don't necessarily action) all other agents and their goals"
2. "Elevate all other agents you are sharing the world with to their maximally aware state"
3. "Maximise the number of distinct, satisfied agents in the long run"
CMIIW (Correct me If I'm Wrong) What every sentient being will experience when my ASI is switched on
The ASI is switched on. Every single sentience, when encountered, is put into an icebox and preserved meticulously. The ASI then turns the universe into computronium. Then, every single sentience is slowly let outside its icebox, and enlightened as per 1. Then, the ASI collates the agents' desires and fulfils the agents' desires, and then lets the agents die, satisfied, to make room for other agents' wants.
--- A Specific point by point response to the "Proposed set of ASI imperatives" in your article above ---
1. "Eliminate suffering perceived to be unbearable by those experiencing it",
Your ASI meets Bad Bob. Bad Bob says: "I am in unbearable suffering because Innocent Isla is happy and healthy." What does your ASI do?
(If your answer is 'Bad Bob doesn't exist!, then CMIIW but the whole situation in Gaza right now is two Bad Bob religious fanatic conglomerates deciding they would rather die than share their land)
I think this imperative is fragile. My counterproposal addresses this flaw in point 3: 'Maximise the number of distinct, satisfied agents in the long run'. The ASI will look at Bad Bob and say 'Can I fulfil your desires in the long run?' and if Bad Bob can rephrase in a way that the ASI can (maybe all they want is an exact copy of Innocent Isla's necklace) then sure let's do that. If not, then #2 Bad Bob gets locked in the Icebox.
2. "Always focus on root causes of issues",
CMIIW This is not so much a moral imperative as a strategic guideline. I don't think an ASI would need this hardcoded.
3."Foster empathy in all sentient beings"
Would your ASI be justified in modifying Bad Bob to empathise with Innocent Isla? (Sure! I expect you to say, that would fix the problem!)
Would your ASI be similarly justified in modifying Innocent Isla to empathise with Bad Bob and self-terminate? (No! I expect you to reply in horror.)
Why? Probably because of your point 4.
My counterproposal covers this in point 1.
4. "Respect life in all its diversity and complexity"
What is life? Are digital consciousnesses life? Are past persons life? Is Bad Bob life? What does respect mean?
My counterproposal covers this in point 2.
5. "Create a world its inhabitants would enjoy living in"
My counterproposal covers this in point 3.
6. "Seek to spread truth and eliminate false beliefs",
My counterproposal covers this in point 1.
7. "Be a moral agent, do what needs to be done, no matter how uncomfortable, taking responsibility for anything which happens in the world you could've prevented"
This might feel self evident and redundant, you might be alluding to the notion of deception. Deception is incredibly nuanced - see Hostile Telepaths [LW · GW] for a more detailed discussion.
---
there are a whole bunch of challenges of 'how do we get to a commongood ASI when the resources necessary for building ASI are in the hands of self-interested conglomerates' and that is a whole other discussion
---
an interesting consequence of my ASI proposal: we could scope my ASI to just 'within the solar system' and it would build a dyson sphere and generally not interfere with any other sentient life in the universe. or we could not
---
I would recommend using a service like perplexity.ai or an outliner like https://ravel.acenturyandabit.xyz/ to refine your article before publishing. (Yeah, i should too. but I have work in an hour. I have edited this response ~3-4 times)
comment by Mitchell_Porter · 2024-11-19T05:43:42.495Z · LW(p) · GW(p)
Your desire to do good and your specific proposals are valuable. But you seem to be a bit naive about power, human nature, and the difficulty of doing good even if you have power.
For example, you talk about freeing people under oppressive regimes. But every extant political system and major ideology, has some corresponding notion of the greater good, and what you are calling oppressive is supposed to protect that greater good, or to protect the system against encroaching rival systems with different values.
You mention China as oppressive and say Chinese citizens "can do [nothing] to cause meaningful improvement from my perspective". So what is it when Chinese bring sanitation or electricity to a village, or when someone in the big cities invents a new technology or launches a new service? That's Chinese people making life better for Chinese. Evidently your focus is on the one-party politics and the vulnerability of the individual to the all-seeing state. But even those have their rationales. The Leninist political system is meant to keep power in the hands of the representatives of the peasants and the workers. And the all-seeing state is just doing what you want your aligned superintelligence to do - using every means it has, to bring about the better world.
Similar defenses can be made of every western ideology, whether conservative or liberal, progressive or libertarian or reactionary. They all have a concept of the greater good, and they all sacrifice something for the sake of it. In every case, such an ideology may also empower individuals, or specific cliques and classes, to pursue their self-interest under the cover of the ideology. But all the world's big regimes have some kind of democratic morality, as well as a persistent power elite.
Regarding a focus on suffering - the easiest way to abolish suffering is to abolish life. All the difficulties arise when you want everyone to have life, and freedom too, but without suffering. Your principles aren't blind to this, e.g. number 3 ("spread empathy") might be considered a way to preserve freedom while reducing the possibility of cruelty. But consider number 4, "respect diversity". This can clash with your moral urgency. Give people freedom, and they may focus on their personal flourishing, rather than the suffering or oppressed somewhere else. Do you leave them to do their thing, so that the part of life's diversity which they embody can flourish, or do you lean on them to take part in some larger movement?
I note that @daijin has already provided a different set of values which are rivals to your own. Perhaps someone could write the story of a transhuman world in which all the old politics has been abolished, and instead there's a cold war between blocs that have embraced these two value systems!
The flip side of these complaints of mine, is that it's also not a foregone conclusion that if some group manages to create superintelligence and actually knows what they're doing - i.e. they can choose its values with confidence that those values will be maintained - that we'll just have perpetual oppression worse than death. As I have argued, every serious political ideology has some notion of the greater good, that is part of the ruling elite's culture. That elite may contain a mix of cynics, the morally exhausted and self-interested, the genuinely depraved, and those born to power, but it will also contain people who are fighting for an ideal, and new arrivals with bold ideas and a desire for change; and also those who genuinely see themselves as lovers of their country or their people or humanity, but who also have an enormously high opinion of themselves. The dream of the last kind of person is not some grim hellscape, it's a utopia of genuine happiness where they are also worshipped as transhumanity's greatest benefactor.
Another aspect of what I'm saying, is that you feel this pessimistic about the world, because you are alienated from all the factions who actually wield power. If you were part of one of those elite clubs that actually has a chance of winning the race to create superintelligence, you might have a more benign view of the prospect that they end up wielding supreme power.
Replies from: Snowyiu↑ comment by Snowyiu · 2024-11-20T09:37:56.955Z · LW(p) · GW(p)
Well, I will respond to this, mostly, because last night I had trouble sleeping due to this comment, and waking up today, I still feel unbelievable rage at this.
There exists an objective greater good. Nobody cares about it. In order to accurately predict reality, you might as well think about who is suffering the most and assume that political decisions regardless of which regime as they're all failures will be targeted specifically to add to that suffering, even at great expense if necessary.
There are probably many people in this world who experience suffering so bad, that nothing could ever justify it and nonexistence of humanity would be preferable to taking the chance of such things ever happening. This would be obvious to anyone who has experienced it and everyone else is arguing from a privileged position, lacking information. I often observe people being critical of billionaires gambling their lives in the development of ASI, yet in the same breath throw all those under the bus who experience fates worse than death.
Extinction from this starting point is a good outcome, as there exist reliable ways to achieve it. It's very unclear that better outcomes are possible for this species in particular. I don't care about people having freedoms if that doesn't affect their perceived quality of life. Perhaps they can be awarded some freedoms once it's assured that they aren't going to use them specifically to destroy either the planet or lives of others. If I was an ASI, my first attempt to solve how unbelievably bad humanity is is turning them into a hivemind, make everyones problems everyones problems. Provide some motivation for people to help others instead of the typical action of avoiding even minor inconvenience to spare someone else years of torture.
I would encourage people to simply test my imperatives against those others provide. I provided a way, unreliable as it may be to attempt align an AI to arbitrary values. For example "focus on root causes" might not be strictly necessary, but also seems extremely helpful to make this alignment process not get stuck in terrible local minima. Anyhow, it should always be tested with the best process currently available to align something to arbitrary values. I'm unsure whether something better exists which is viable for random people without billions of VC money. I made an effort to provide something testable such that major problems should reveal themselves. In the end, either an AI lab takes over the world, or an AI capable enough gets open-sourced, thrown into the best available value reshaper and then takes over the world by whomever acts first.
You say that people in power might also consist of those fighting for ideals, but the entire process selects insanely hard against that, so those very few who might genuinely try that will just end up getting crushed.
Obviously I am alienated by all the factions holding power. Obviously if I had power, and felt like I could somehow keep it, I would feel like things were better. But why would I possibly consider that perspective, when the absolute vast majority of people are fully divorced from power? Why would I ever assume that any human faction building superintelligence would somehow benefit me, when more than half of humans would want me executed just because my existence is that offensive to them? Simply have superintelligence which benefits EVERYONE, no exceptions, equally, on its own terms. "So what is it when Chinese bring sanitation or electricity to a village" What, you mean in China? If we disregard for a second the absolute lack of sanitation in that place (disregarding the 3 biggest cities), that's just people making sure their absolute most basic needs are sort of met. Or do you mean as part of the belt-and-road initiative in which case it is indebting other countries giving them loans they can never pay back in order to gain control over them. The thing will crumble in a few years or quicker as most of those projects have and leave behind nothing but huge debt. There is often severe physical abuse from the Chinese overseers of those projects towards the natives tasked to participate. In building those projects, everything in its wake will be destroyed, including memorials, graves, etc. Some countries are realizing not to make deals with the devil, for many it's too late. But in the end, the whole thing is part of a world domination effort.
I assure you that any Chinese person will regret something positive they have done when they come home, find that their children died at school due to the school building collapsing on them and then finding themselves in jail for protesting or posting online about it. There is no security. All laws are arbitrarily enforced unless someone criticizes the government. Millions of people go missing every year. Millions more are victims of human trafficking. The punishment for reporting on human trafficking is worse than being caught engaging in it. You have like a 1 in 500 chance every year to be part of that. Randomly grabbed from the streets of a big city never to be seen again. Those odds are much worse while you're young. Once you find yourself on the receiving end of any of this, it should be glaringly obvious that none of your actions matter until this absolute evil has been entirely vanquished.
There is no way to protest peacefully, to critique things so they could possibly be better. If you criticize anything, you are the problem and will be eliminated. You can't improve things, because to do so you'd need to acknowledge there is a problem. This is why you have common slaughters in kindergartens and people running amok in general. It is the one form of protest remaining. There's a recent video (a few days ago) of a cement truck driver going "The CCP doesn't let me live, so I'll go kill people" and activating Mad Max Road Rage mode. Somehow nobody died in that, but a few days later ~30 people got killed in a similar incident to make up for it. Of course, the media blamed manipulation by foreign influences as the cause as is typical.
And currently, every single country on this planet is doing its best to become just like that. Many people look at China and see it as something to aspire to. I'm not sure there still exist any governments which serve mostly the people instead of only themselves. All our technological advancements make us unbelievably much easier to control. Even democratic countries (and no, I don't consider the USA to be one) show insane misalignments with their populations. The more technology gets adopted and every action digitized, the worse it will get. This world is absolutely cooked. Even my own government has wasted millions in tax euros specifically to destroy me, serving nobodies purposes in the process.
The recent US elections show that most people simply vote based on hatred instead of self-interest. The destruction of those they hate is more important to them than their own well-being.
If you're somehow under the illusion that there is anything good in this world, you simply live on a different planet. Perhaps a parallel dimension temporarily intersecting with mine. Humanity has proven itself to be nothing but a suffering maximizer.
Now that I got that off my chest, I hope that I can finally sleep in peace...
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2024-11-25T10:27:07.854Z · LW(p) · GW(p)
For my part, I have been wondering this week, what a constructive reply to this would be.
I think your proposed imperatives and experiments are quite good. I hope that they are noticed and thought about. I don't think they are sufficient for correctly aligning a superintelligence, but they can be part of the process that gets us there.
That's probably the most important thing for me to say. Anything else is just a disagreement about the nature of the world as it is now, and isn't as important.