Forcing Freedom
post by vlad.proex · 2020-10-06T18:15:13.596Z · LW · GW · 12 comments
Wretched Worlds
You are a freshly recruited cadet in the Intergalactic Liberation Corps (ILC). With a handful of valiant companions, you will travel across the galaxy, helping to free oppressed Beta Species from the claws of their Alpha masters, and fighting slavery whenever it dares to rear its ugly head.
The Intergalactic Council is very serious about slavery. The practice is condemned in the Galactic Bill of Rights, which guarantees the equality and autonomy of all sentient species. But the government of the Republic doesn't have enough leverage to abolish it everywhere. In assembling the ILC, the Council hopes to start an intergalactic movement of liberation that will bring war to the enslavers’ backyards.
The basic strategy is simple: infiltrate a planet where slavery is practiced, establish contact with the oppressed species, and help them spark an insurgency. As discretion is paramount, you will infiltrate in small groups. Your task will be to provide the insurgents with motivation, intelligence and specialized knowledge. As you’re about to find out, this is more difficult than it seems.
First
As you descend on the first planet, you are greeted by a spectacle of appalling suffering: the Alphas torture and humiliate the Betas liberally, sparing them no cruelty. You immediately start working to set up a resistance movement. To your dismay, not one of the Betas is interested in your proposals. After a few weeks, you understand why: the Betas, once a free species, have been genetically re-engineered to serve as willing slaves. When separated from their masters, they become hopelessly depressed and starve themselves to death. You consider three options: a) attempt to reverse the genetic modifications (but this is forbidden by the Galactic Bill of Rights); b) exterminate the Betas to spare them their fate; c) do nothing and leave. Eventually you settle on the third option.
Second
The Alphas have populated the atmosphere with a parasite. When inhaled by a Beta, the parasite acts on their brain, turning them into a willing slave. As long as Betas are on the planet's surface, their mind is altered and they feel a compelling desire to serve their masters. If they stop breathing the planet's atmosphere, the parasite dies and they regain their senses.
Since staging a rebellion is impossible, you begin hatching a plot to get as many Betas away from the planet as possible. But the Alphas find out about your plan and capture you. They expose you in the presence of a crowd of Betas. Then they ask them:
"Do you want to leave this planet?"
"No! We want to serve you!" the Betas answer unanimously.
"Very well. We totally respect your choice to stay here and worship us," the Alphas say. "These infiltrators, on the other hand, have been planning to kidnap you and deport you to another planet against your will. And they have the nerve to accuse us of slavery!"
The crowd breaks into a rage and attempts to kill you.
Third
Shortly after landing, you make contact with a Beta representative. You propose an uprising, but she shakes her head: "Every week, our masters hold a freedom lottery. Everybody must participate; whoever wins is set free. The probability of staging a successful uprising, if we all cooperate, is one in a thousand. The probability of winning the lottery is also one in a thousand. Though it would be nice to rebel and free everybody, the reality is that everyone prefers to play the lottery rather than risk a horrible death. Furthermore, our masters are cunning. If the probability of overthrowing them increases - say, because the troops are away on some war - they improve the odds of the lottery accordingly, so that it is never in anyone's interest to join an uprising."
Fourth
Upon making contact with the Beta leader, he courteously asks you to leave. "My counselors and I have already devised a plan, one that would probably win us freedom. But there are two problems. The first is that the plan takes three generations to be completed, so that only the children of our children will taste the fruits of freedom. The second is that the mere idea that slavery is evil and should be defeated fills most of our people with fear and righteous rage. You see, the main force that has kept our people sane until now is our religion. It teaches that slavery is a mark of the Gods’ favor; whoever toils as a slave in this life will enjoy an eternal reward in the next one. Our people derive great strength and consolation from this doctrine; they will not give it up to work at our plan. If the benefits of freedom were within their reach, perhaps they would feel differently. But the thought of depriving themselves and their children of the only consolation they have found in this dreary existence, so that their grandchildren may get a chance at freedom, will not stir them to action. They cannot live with the knowledge that slavery is evil as well as inevitable.”
Fifth
“Your galactic charters label us as ‘slaves’, so you come here with your guns and your spaceships to try and save us” the Beta leaders say to you. “But we don’t want to be liberated, thank you very much. It is true, our masters tend to overwork us, and the mortality rate is still high. But do you know how we lived before we were enslaved? Our home planet was ravaged by floods, meteorites and earthquakes. Food was scarce, fuel was scarcer still. The winters lasted years and left many of us dead. Our present condition is not ideal, but it is better than our original one. What do you offer us instead? A rebellion which, if it fails, will result in our collective death; and if successful, will lead us – where? Back to the state of nature. Besides, our masters have already begun to share some of their riches with us (well, at least with us leaders). In a few generations, we shall learn to serve them better, and they will take better care of us. It is not entirely impossible that eventually we might be endowed with equal rights and absorbed in their society altogether, another one of the myriad species who make up their diverse empire. I doubt you have anything better to offer us.”
Sixth
The Beta leaders capture you, hook you up to a machine and scan your brain. “See this?” one of them says, showing you a picture. “This is a chip that was implanted in your brain. It bears the mark of the Lords of Arlak. You came to this planet thinking to liberate us. In fact, you are merely the unwitting slave of an evil species, far more powerful than our masters. The Lords of Arlak dwell in the shadows, programming the thoughts and behavior of their servants to suit their will. Your memories are false, as are your beliefs. Your goal is not to liberate us, but to steal us for the benefit of your masters. Because we pity you, we will not kill you, provided you leave immediately. But take this chance to reflect upon your condition. The chip cannot be removed without killing its host; soon, you will be reprogrammed to forget this conversation. It is up to you to decide whether you prefer to go on living as a puppet and an unwitting enslaver, or whether you should kill yourself now and die in the presence of the truth.”
Thoughts
These parables deal with the problems involved in trying to liberate someone who doesn’t want to be liberated.
On the first planet, Betas have been genetically engineered to be willing slaves. I would argue that the Betas are not the same species they were before enslavement. They are more like an army of robots built for the purpose of serving. While it would be unethical to kill them, perhaps there is no way to save them; in a sense, they are already extinct.
The problem with the second planet is that freeing the slaves requires committing an act of overt violence and ignoring their revealed preferences. I suppose the question is: do the ends justify the means? From a utilitarian point of view, the liberators might decide that the suffering they will inflict is offset by the benefits of liberation, but see the last planet for a challenge to this view.
For the third planet, I was inspired by some Marxists’ claim that the proletariat was not revolting because of the probability of upward mobility. Granted, it is a highly simplified model: each slave acts as a maximizer, choosing the action that is most likely to bring them freedom; when the payoffs are identical, they choose the action that carries less risk. There is no solidarity towards other slaves, and no moral value attached to being free. The point is to suggest that a collective can be kept in submission if each individual has a non-trivial hope of receiving a large prize (or avoiding a large punishment) in exchange for their submission, so that each individual will decide to cooperate with the authority rather than with their peers. This can only work when there is no cohesion within the group; if the slaves cared about each other, the payoff of freeing everyone would be higher than the payoff of being freed through the lottery [1].
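The maximizer model above can be made concrete with a few lines of arithmetic. A minimal sketch, where the utility of freedom, the disutility of a horrible death, and the solidarity weight are all illustrative assumptions of mine, not values from the parable:

```python
# Toy model of the third planet's freedom lottery.
# Utility values are illustrative assumptions, not from the post.

def expected_utility(p_freedom, p_death, u_freedom=1.0, u_death=-10.0):
    """Expected utility of one risky choice: freedom vs. horrible death."""
    return p_freedom * u_freedom + p_death * u_death

P = 1 / 1000  # both the uprising and the lottery succeed with p = 1/1000

# A selfish maximizer: the chance of freedom is identical either way,
# but a failed uprising means death, so the lottery strictly dominates.
lottery = expected_utility(p_freedom=P, p_death=0.0)
uprising = expected_utility(p_freedom=P, p_death=1 - P)
assert lottery > uprising

# With group cohesion: if each slave assigns even a small weight w to the
# freedom of each of N fellow slaves, a successful uprising frees everyone
# at once, and with enough peers the comparison flips.
N, w = 100_000, 0.1
uprising_solidarity = P * (1.0 + w * N) + (1 - P) * -10.0
lottery_solidarity = P * 1.0  # the lottery frees only the winner
assert uprising_solidarity > lottery_solidarity
```

This is why the masters only need to prevent cohesion: the lottery dominates for any isolated maximizer regardless of how they rig the odds, as long as the two probabilities stay matched.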
The fourth planet was inspired by anti-deathism and its critics. Anti-deathists say that death is bad and we should immediately start working on the cure. Although it’s likely that we could remove death eventually (conditional on humanity not going extinct), anti-deathists cannot promise that it can be done within this generation’s lifetime. A great proportion of humans, arguably, have developed pro-death beliefs - religious doctrines and philosophical arguments - in order to cope with the fear of death [2]. The anti-deathist wants them to abandon these crutches, but cannot promise anything concrete in return, except the empowerment that comes with embracing truth and the satisfaction of working for the generations to come. Since to many this trade-off is unacceptable, the 'real work' never begins, and progress is pushed ever further back. More generally, this parable is about the dangers of adaptive preference formation, a topic which requires its own post.
The fifth planet highlights the problem of trying to free someone who doesn’t have better options. The liberators are focusing on negative freedom: freedom from enslavement, from the masters. The Betas point out that they are not being offered any positive freedoms. At least their condition gives them hope of some future benefits.
The final planet highlights the thorniest problem of all. I see an individual or a group that, in my view, is unfree. They appear to be engaging in self-deceit, i.e. their beliefs are not in line with their interests. Moreover, their preferences might have reversed under the influence of adaptive preference formation, as in the case of the Betas' slavery-praising religion.
Perhaps I have a chance to free them, but to do so I must act against their beliefs and revealed preferences. I may conclude that the expected benefits will outweigh the negative impact of my actions, but the bottom line is, who am I to decide? What gives me the right to accuse other people of false consciousness? To ignore people’s overt preferences, and inflict pain on them because I assume I am acting in their best interest?
What about my own beliefs and preferences: how well do I understand them? What if I am the one who is deceived, and whose preferences are perverted, as in the case of the liberator with a chip in their brain? I would have no way of knowing that I am being deceived. I would act on what I think is right, and I would be wrong.
Perhaps I should shrug, remind myself that I am running on corrupted hardware, embrace a theory of universal non-interference and go on with my life.
Then again... You see people who are oppressed, whose way to freedom is barred by delusions, perverted preferences and false values, and you feel it is your moral duty to intervene.
There might be a third way between righteous action and elevated inaction. But what if there isn't?
Freedom is problematic.
[1] This reminds me of what totalitarian societies such as Nazi Germany and Stalin’s USSR were trying to do: build a system where everyone would always choose to cooperate with the authorities at the expense of the people around them (even friends and family). And this principle was mirrored in the gulags and concentration camps, where some prisoners would collaborate with the camp authorities in the hope of being raised to a somewhat higher standard of living.
[2] This would be a process of adaptive preference formation. I am not claiming that everyone who professes deathist views is deceiving themselves. I am claiming that if deathists were presented with an actual choice – if indefinite life extension was within their reach – at least some professed anti-deathists would experience a preference reversal. Others, I suppose, would be steadfast in their decision.
12 comments
comment by Stuart Anderson (stuart-anderson) · 2020-10-07T18:53:34.716Z · LW(p) · GW(p)
-
Replies from: vlad.proex
↑ comment by vlad.proex · 2020-10-08T17:44:30.111Z · LW(p) · GW(p)
Personally, I am strongly inclined towards non-interference. I have little trouble accepting that people choose wrong, knowing how fallible I am myself. I also think that, given how complex the universe is for us, it will always be easier to find arguments for inaction than for action.
And this is precisely why I am interested in arguments for interference. Most of the time, the option of non-interference is the easiest for me; which makes me at least a bit suspicious. It makes me wonder: have I carefully considered all the opposing arguments?
'Moralising' implies that I am considering interventionism in defense of my own values. I was thinking more of situations when the other is in danger or experiencing evident suffering, which evoke an empathy response.
If you saw someone fallen on the train tracks, you wouldn't shrug and say: "It's a feature of agency, let evolution work". You would try and save them. This is the kind of experience I was trying to convey.
Replies from: stuart-anderson
↑ comment by Stuart Anderson (stuart-anderson) · 2020-10-09T02:58:49.435Z · LW(p) · GW(p)
-
comment by Pattern · 2020-10-07T17:54:18.825Z · LW(p) · GW(p)
Eventually you decide for the latter.
What does this mean in the context of 3 options?
Replies from: vlad.proex
↑ comment by vlad.proex · 2020-10-07T18:22:10.846Z · LW(p) · GW(p)
TIL 'former' and 'latter' are used to distinguish between two things. Corrected.
comment by seed · 2020-10-07T11:54:43.399Z · LW(p) · GW(p)
A great proportion of humans, arguably, have developed pro-death beliefs in order to cope with the fear of death.
According to this survey, only 23% of Americans don't want to live longer than a normal lifespan. They can't really be holding the rest of you down.
Replies from: vlad.proex
↑ comment by vlad.proex · 2020-10-07T13:59:03.555Z · LW(p) · GW(p)
The survey is quite simplistic. 19% said "I want to live forever", while 42% said "I want to live longer than a normal lifespan, but not forever". The problem is in the ambiguity. What does 'forever' mean? A million years? Until the heat death of the universe?
And what is 'longer than a normal lifespan'? Ten years longer? A million years longer?
My guess is that most people who chose the second option want to live until they're 100 or so, and that is in fact "longer than a normal lifespan", which is 79 in the US.
This is confirmed by the age effect. The proportion of people who want to live forever drops from 24% to 13% from the youngest to the oldest age group. And the proportion of people who want the normal lifespan increases from 19% to 29%. But the proportion of people who want to live longer than a normal lifespan stays unchanged.
So if there is an age effect that makes anti-deathism attractive to the young but not to the old (here is a first-person account of the phenomenon) the fact that it doesn't show in the "longer than a normal lifespan" category suggests that the people in this bucket (who are the majority) are not anti-deathists. They just want to live one or two decades more than the average, and this preference stays constant throughout the life course.
Hence, I suspect that at most, only the 19% who said "I want to live forever" qualify as anti-deathists. Although some anti-deathists may have been lost to the second category because they were put off by the fact that it's impossible to literally live forever.
However, "I want to live forever" does not automatically translate into "I would support scientific research into fighting death." It might be an idle statement that you don't intend to act upon, like when my dad says "I want a Ferrari". And what about religions or magical systems that promise immortality?
It's hard to disentangle all this from a single question. But I would argue that the proportion of genuine anti-deathists is probably lower than 19%. While the proportion of deathists is at least 60%.
Replies from: AnthonyC
↑ comment by AnthonyC · 2020-10-07T14:56:22.343Z · LW(p) · GW(p)
That linked account seems to assume that people who want to live forever expect to "get old" along the way, in the same way they do now, and I don't think that's accurate. I wouldn't want to live even for centuries, let alone forever, in a 90 year old's body, in a world where most of the people I know and love are gone forever. But many of those same 90 year olds will gladly profess to believe, or at least hope, that they will be reunited with loved ones in death and remain with them forever. If you offer me the chance to stay in a 25 or 30 year old's body/level of health, and everyone else I love would get the same, I'd at least like the chance to see what it's like and (Iain Banks' Culture-style) get to choose my lifespan, not all at once but each and every day, based on how well it works out. I have no idea if I would actually want to live for TREE(3) years, but I'd much rather have the choice, and not have to make it within the next 50 years.
it's impossible to literally live forever.
Are you sure? That seems like a question of physics, and the accessible energy reserves and computational capacity of our light cone (the latter of which may be infinite even if the former is not).
Any survey of this type runs into, not just the nuances of the questions and how they're asked, but how little most people have really thought about the question, or what the different answers would actually imply.
Replies from: vlad.proex
↑ comment by vlad.proex · 2020-10-08T17:52:59.119Z · LW(p) · GW(p)
I agree with you, though I don't think the linked account expects an "eternal old age"; what made you think that? As I see it, it's actually an argument about the inner experience of humans and how the author thinks we wouldn't be happy with a very long lifespan. I don't agree with the author, but I linked the post as anecdotal evidence that some people who are no longer young may reject the idea of a very long lifespan out of a general feeling of life-weariness (to what extent this feeling is connected to the biological phenomenon of aging remains to be ascertained).
Are you sure? That seems like a question of physics, and the accessible energy reserves and computational capacity of our light cone (the latter of which may be infinite even if the former is not).
How would computational capacity be infinite in the presence of finite energy?
Replies from: AnthonyC
↑ comment by AnthonyC · 2020-10-14T18:14:42.027Z · LW(p) · GW(p)
You're right, nothing explicitly stated anything about old age, but the study itself has "burials" right up in the headline. IDK if respondents knew those questions were coming when they answered the "lifespan" question, but if they did, I doubt most people would automatically assume an increased lifespan meant they'd start being younger than they currently are. That's all conjecture on my part, but I think it's at least as plausible as psychological life-weariness as an explanation.
How would computational capacity be infinite in the presence of finite energy?
As I understand it, the theoretical limits on energy efficiency of irreversible computing are a function of ambient temperature (because they involve dumping heat/entropy into the environment). That means if the future universe keeps getting colder as it expands, the amount of computing you can do with a fixed supply of stored energy goes up without bound, as long as you use it slowly enough. That's basically Dyson's Eternal Intelligence, though I don't think anyone knows what the computing architecture would look like. Things like the Omega Point spacetime in a collapsing universe seem more speculative to me but still might be possible.
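The cooling-universe argument above follows from Landauer's principle: irreversibly erasing one bit costs at least k_B · T · ln 2 joules at ambient temperature T, so a fixed energy budget buys more and more bit operations as T falls. A minimal sketch of the arithmetic (the one-joule budget and the sample temperatures are arbitrary illustrative choices):

```python
# Landauer bound: erasing one bit irreversibly dissipates at least
# k_B * T * ln(2) joules at ambient temperature T. As the universe cools,
# the number of erasures a fixed energy budget can buy grows without bound.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def max_bit_erasures(energy_joules, temperature_kelvin):
    """Upper bound on irreversible bit operations from a fixed energy budget."""
    return energy_joules / (K_B * temperature_kelvin * math.log(2))

E = 1.0  # one joule of stored energy, an arbitrary illustrative budget
for T in [300.0, 2.7, 1e-6, 1e-12]:
    print(f"T = {T:g} K -> at most {max_bit_erasures(E, T):.3e} bit erasures")

# The bound diverges as T -> 0: the colder the environment,
# the more computation the same joule can fund.
assert max_bit_erasures(E, 1e-12) > max_bit_erasures(E, 2.7)
```

Note this only bounds irreversible operations; reversible computing can in principle do better still, which is one reason the architecture question Dyson left open is hard.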
comment by [deleted] · 2020-10-07T14:02:24.125Z · LW(p) · GW(p)
.
Replies from: vlad.proex
↑ comment by vlad.proex · 2020-10-08T18:08:43.121Z · LW(p) · GW(p)
Your position is consistent, though to me somewhat troubling.
I wouldn't equate "unable to have different preferences or to envision a better situation" with "happy". Perhaps Plato's cave applies here. Or consider a child who is born in an underground prison, Bane-like, and never sees the light of the sun, and who, when offered the chance of freedom on the surface, refuses out of fear or ignorance. Would you say they are "happy"? Perhaps, but they could be happier. Or at least they could experience a richer level of existence, given that humans evolved to enjoy fresh air and natural landscapes and the feeling of the sun on their skin, something they can't even imagine at the moment.
Imagine writing a sort of will for altered-mind situations. If you fell under a hypnotism that turned you into a slave, would you want to be liberated? Or would you want people to always stop at your currently expressed preferences?
Doesn't this mean that you would plug yourself into Nozick's experience machine, since it would be easier to be "happy" in a state of brainwashed slavery than in the complex life of a free agent?