M. Y. Zuo's Shortform
post by M. Y. Zuo · 2021-11-07T01:42:46.261Z · LW · GW · 45 comments
Comments sorted by top scores.
comment by M. Y. Zuo · 2022-10-03T19:32:25.651Z · LW(p) · GW(p)
Is there a case for corporal punishment instead of prison time?
There's some talk online of the possibility of reintroducing corporal punishment to lower prison costs, reduce the size of the prison system, break the school-to-prison pipeline, and improve the efficiency of justice.
Eliding all the detailed arguments, the most compelling argument to me is also the simplest: five lashes on the butt seems a lot kinder than a year in prison, especially if the criminal is young.
comment by M. Y. Zuo · 2022-02-07T04:16:08.552Z · LW(p) · GW(p)
George Orwell’s review of Mein Kampf by Adolf Hitler, excerpt:
“
….Also he has grasped the falsity of the hedonistic attitude to life. Nearly all western thought since the last war, certainly all “progressive” thought, has assumed tacitly that human beings desire nothing beyond ease, security and avoidance of pain. In such a view of life there is no room, for instance, for patriotism and the military virtues. The Socialist who finds his children playing with soldiers is usually upset, but he is never able to think of a substitute for the tin soldiers; tin pacifists somehow won’t do. Hitler, because in his own joyless mind he feels it with exceptional strength, knows that human beings don’t only want comfort, safety, short working-hours, hygiene, birth-control and, in general, common sense; they also, at least intermittently, want struggle and self-sacrifice, not to mention drums, flags and loyalty-parades. However they may be as economic theories, Fascism and Nazism are psychologically far sounder than any hedonistic conception of life. The same is probably true of Stalin’s militarised version of Socialism. All three of the great dictators have enhanced their power by imposing intolerable burdens on their peoples. Whereas Socialism, and even capitalism in a more grudging way, have said to people “I offer you a good time,” Hitler has said to them “I offer you struggle, danger and death,” and as a result a whole nation flings itself at his feet. Perhaps later on they will get sick of it and change their minds, as at the end of the last war. After a few years of slaughter and starvation “Greatest happiness of the greatest number” is a good slogan, but at this moment “Better an end with horror than a horror without end” is a winner. Now that we are fighting against the man who coined it, we ought not to underrate its emotional appeal.
”
Orwell suggests that minimizing suffering is not a good axiom for maximizing utility, or even for human happiness, along with quite a few other implications. Thoughts?
↑ comment by Viliam · 2022-02-13T21:52:34.409Z · LW(p) · GW(p)
Humans seem to need some meaning in their lives. Some of us are able to find or define that meaning for ourselves. (For example, you may decide to spend your life studying some science, or getting good at a certain art, or you may notice some problem and decide to fix it.) However, it seems that many people lack this skill -- they either get some meaning from outside (usually some form of "follow the herd"), or they fall into some form of consumption and being angry at the world in general and the people around them specifically.
People who are good at finding meaning, often imagine a perfect society like: "remove the big problems, and allow people to find their meaning and follow it". Life, Liberty and the pursuit of Happiness!
The problem is, many people suck at the "pursuit of Happiness" part. Some of them would gladly trade their Liberty, sometimes even their Life, in return for someone giving them a simple recipe for Happiness. Unfortunately, people like Hitler or Stalin, who offer them such trade, are usually unwilling to leave other people alone, so as a result everyone loses their Liberty and many people lose their Life.
Is there a solution that would satisfy everyone -- provide some benevolent "herd regime" for the people who need it, and leave alone those who don't? I am not sure how to design it. Because, on one hand, I am afraid that people suck at guessing what would be best for them; on the other hand, deciding for other people because "I know what is best for them better than they do" sounds like an obvious villain speech.
Like, imagine that everyone is by default brought up to be a member of a herd, but people can freely opt out, and those who do are left alone. But, imagine that some of those people who opted out become popular, and suddenly many people will opt out of the herd just because it is fashionable to do... and now we have the same problem again.
To a small degree, this is already achieved by unproductive activities such as sport. But for most people, sport does not provide a meaning for life. Also, people can opt out of sport, without having any alternative way of spending time.
Individualism is the official ideology of the West. But the fact is, most people do not want to be individuals. They may try it, because it is cool and sometimes convenient, but then they get disappointed and organize some revolt against individualism. For starters, perhaps we should not push them so far. Maybe we should make individualism possible but uncool, so that only people who really desire it will choose it?
↑ comment by M. Y. Zuo · 2022-02-14T00:14:00.383Z · LW(p) · GW(p)
It’s an interesting point you raise, that balancing the varying preferences to mutual satisfaction may not even be possible. There possibly is no solution for a single society in isolation. Though in a world with multiple competing societies, and some amount of movement between them, at the whole society level, there will be competitive pressure to maximize human potential. Perhaps through this dynamic the techniques that are most effective will, eventually, rise to the top. Though considering societies of hundreds of millions of people this process will likely take many centuries.
Additionally, the possibilities of inheritance of epigenetic and genetic factors that induce docility or rebelliousness, etc., could possibly speed up or retard this process, depending on how such knowledge is applied.
↑ comment by Gunnar_Zarncke · 2022-02-07T10:40:44.248Z · LW(p) · GW(p)
Depends on what you mean by utility here, or suffering.
↑ comment by M. Y. Zuo · 2022-02-08T00:16:22.123Z · LW(p) · GW(p)
I go by the OED standard definitions.
↑ comment by Gunnar_Zarncke · 2022-02-08T00:25:31.934Z · LW(p) · GW(p)
OK.
the quality of being useful
Useful for whom? All and with equal weight, I guess.
Useful in an industrial production and material consumption sense? Or in a physical health, mental wellbeing, and happiness sense? A mix of both, I guess. The relative weighting is probably what makes the difference.
↑ comment by M. Y. Zuo · 2022-02-08T02:42:33.837Z · LW(p) · GW(p)
Thankfully the dictionary further elaborates, in addition to the everyday usage, for utility:
c. Philosophy. The ability, capacity, or power of a person, action, or thing to satisfy the needs or gratify the desires of the majority, or of the human race as a whole.
d. The intrinsic property of anything that leads an individual to choose it rather than something else; in game theory, that which a player seeks to maximize in any situation where there is a choice; the value of this, as (actually or notionally) estimated numerically.
Which in the context of Orwell’s quote would likely mean useful for the majority of German citizens circa 1933-1939, in all senses that pertain to satisfying their needs, i.e. all aspects of an industrial society that could further improve its ability, capacity, and power.
↑ comment by Gunnar_Zarncke · 2022-02-08T03:23:24.198Z · LW(p) · GW(p)
The German population overall was indeed well organized, cooperating well, well cared for, well trained, and united in shared purpose. The symbol of the fascists - a fasces - is a bundle for a reason. The problem is not that. The things that worked well still motivate some organizations in Germany, mostly smaller and covert but also larger ones, and explain why this ideology has no difficulty appealing to youth looking for a shared purpose. Maybe something beneficial could be learned from it, but how to untangle this from the connotations in practice? It seems better to reinvent from scratch the good parts in contexts and systems that have suitable checks and balances. Like Germany has right now.
comment by M. Y. Zuo · 2021-11-07T01:42:46.686Z · LW(p) · GW(p)
Hi everyone, I’m fairly new to the community, though I’ve been lurking on and off for a few years, and I would like to hear the opinions on a key question I am unsure about.
What is the ultimate goal of the rationalist enterprise?
I understand there are clear goals to establish ‘Friendly AI’, to realize intelligence ‘upgrading’, if achievable, life extension, and so on. But what is unclear to me is what comes next in the ideal case where all these goals have been achieved, and to what ultimate end.
For example,
I’ve encountered discussions about eudaimonia scenarios (private galaxies, etc.), though I’m not sure how seriously to take those, as surely the possibilities of the co-moving light cone that is within our capacity to inhabit are exhaustible in finite time, especially if all these designs reach their ultimate fruition?
↑ comment by ChristianKl · 2021-11-07T08:49:02.453Z · LW(p) · GW(p)
There are no shared ultimate goals of the rationalist enterprise. Different rationalists have different goals.
↑ comment by Viliam · 2021-11-07T23:50:48.832Z · LW(p) · GW(p)
I think the idea is to have as much fun as possible [LW · GW], and keep doing science (which might expand our opportunities to have fun).
In the very long term, if the universe runs out of energy and nothing in the new science allows us to overcome this issue, then, sadly, we die.
↑ comment by M. Y. Zuo · 2021-11-08T13:31:26.384Z · LW(p) · GW(p)
Well, that is an article that, although interesting, seems to miss a key factor in presenting the eudaimonia scenario ('maximizing fun'): it does not define 'fun'. E.g. a paperclip maximizer would consider more paperclips brought into existence as more 'fun'.
And we know from game theory that when there is more than 1 player in any game... the inter-player dynamics ultimately decide their actions as rational agents.
So I cannot see how an individual's aspirations ('fun') are relevant to determining a future state without considering the total sum of all aspirations (sum of all 'funs') as well. Unless there is only 1 conscious entity remaining, which to be fair is not out of the realm of possibility in some very distant future.
Also, this section of the article:
Fun Theory is also the fully general reply to religious theodicy (attempts to justify why God permits evil). Our present world has flaws even from the standpoint of such eudaimonic considerations as freedom, personal responsibility, and self-reliance. Fun Theory tries to describe the dimensions along which a benevolently designed world can and should be optimized, and our present world is clearly not the result of such optimization.
Is not convincing, because it does not actually refute Leibniz's old argument: that only an omniscient and omnipresent being could 'clearly' see whether the world is benevolently designed or not, whether it has been optimized along all dimensions to the greatest extent possible or not, or even whether it has any flaws on a total net basis at all.
And I've not yet seen a convincing disproof of those arguments.
Now of course I personally am leery of believing those claims to be true, but then I also cannot prove with 100% certainty that they are false. And the 'Fun Theory' article is certainly presented as if there was such proof.
↑ comment by [deleted] · 2021-11-16T15:32:50.087Z · LW(p) · GW(p)
↑ comment by M. Y. Zuo · 2021-11-16T16:46:35.305Z · LW(p) · GW(p)
So why must we prevent paperclip optimizers from bringing about their own ‘fun’?
↑ comment by [deleted] · 2021-11-16T19:13:29.545Z · LW(p) · GW(p)
↑ comment by M. Y. Zuo · 2021-11-18T05:35:08.665Z · LW(p) · GW(p)
What’s the rational basis for preferring all mass-energy consuming grey goo created by humans over all mass-energy consuming grey goo created by a paperclip optimizer? The only possible ultimate end in both scenarios is heat death anyways.
↑ comment by [deleted] · 2021-11-18T06:50:09.430Z · LW(p) · GW(p)
↑ comment by M. Y. Zuo · 2021-11-20T03:07:04.237Z · LW(p) · GW(p)
If no one’s goals can be definitively proven to be better than anyone else’s goals, then it doesn’t seem like we can automatically conclude that the majority of present or future humans, or our descendants, will prioritize maximizing fun, happiness, etc.
If some want to pursue that then fine, if others want to pursue different goals, even ones that are deleterious to overall fun, happiness, etc., then there doesn’t seem to be a credible argument to dissuade them?
↑ comment by [deleted] · 2021-11-20T03:22:39.972Z · LW(p) · GW(p)
↑ comment by M. Y. Zuo · 2021-11-20T17:16:08.007Z · LW(p) · GW(p)
Those appear to be examples of arguments from consequences, a logical fallacy. How could similar reasoning be derived from axioms, if at all?
↑ comment by [deleted] · 2021-11-20T17:53:15.754Z · LW(p) · GW(p)
↑ comment by M. Y. Zuo · 2021-11-28T17:28:19.751Z · LW(p) · GW(p)
Let’s think about it another way. Consider the thought experiment where a single normal cell is removed from the body of any randomly selected human. Clearly they would still be human.
If you keep on removing normal cells, though, eventually they would die. And if you keep on plucking away cells, eventually the entire body would be gone and only cancerous cells would be left, i.e. only a ‘paperclip optimizer’ would remain from the original human, albeit inefficient and parasitic ‘paperclips’ that need an organic host.
(Due to the fact that everyone has some small number of cancerous cells at any given time that are taken care of by regular processes)
At what point does the human stop being ‘human’ and starts being a lump of flesh? And at what point does the lump of flesh become a latent ‘paperclip optimizer’?
Without a sharp cutoff, which I don’t think there is, there will inevitably be in-between cases where your proposed methods cannot be applied consistently.
The trouble is, if we, or the decision makers of the future, accept even one idea that is not internally consistent, then it hardly seems like anyone will be able to refrain from accepting other ideas that are internally contradictory too. Nor will everyone err in the same way. There is no rational basis to accept one or another, as a contradiction can imply anything at all, as we know from basic logic.
Then the end result will appear quite like monkey tribes fighting each other, agitating against each and all based on which inconsistencies they accept or not. Regardless of what they call each other, humans, aliens, AI, machines, organism, etc…
↑ comment by [deleted] · 2021-11-07T13:38:50.719Z · LW(p) · GW(p)
But what is unclear to me is what comes next in the ideal case where all these goals have been achieved
You live happily ever after.
I’ve encountered discussions about eudaimonia scenarios (private galaxies, etc.), though I’m not sure how seriously to take those, as surely the possibilities of the co-moving light cone that is within our capacity to inhabit are exhaustible in finite time, especially if all these designs reach their ultimate fruition?
Where is the contradiction here?
comment by M. Y. Zuo · 2022-08-23T13:58:20.943Z · LW(p) · GW(p)
"The slave begins by demanding justice and ends by wanting to wear a crown." - Albert Camus. Thoughts?
↑ comment by Said Achmiz (SaidAchmiz) · 2022-08-23T14:21:51.032Z · LW(p) · GW(p)
And always there will be kings, more or less cruel, barons, more or less savage, and always there will be an ignorant people, who harbor admiration toward their oppressors and hatred toward their liberators. And all of it because the slave far better understands his master, even the most cruel one, than he does his liberator, for each slave perfectly well imagines himself in the master’s place, but there are few who imagine themselves in the place of the selfless liberator.
↑ comment by M. Y. Zuo · 2022-08-23T21:02:13.728Z · LW(p) · GW(p)
As a followup, the likely consequences of such a state of affairs were expressed by George Orwell:
Progress is not an illusion, it happens, but it is slow and invariably disappointing. There is always a new tyrant waiting to take over from the old - generally not quite so bad, but still a tyrant. Consequently two viewpoints are always tenable. The one, how can you improve human nature until you have changed the system? The other, what is the use of changing the system before you have improved human nature? They appeal to different individuals, and they probably show a tendency to alternate in point of time. The moralist and the revolutionary are constantly undermining one another.
comment by M. Y. Zuo · 2022-08-23T17:39:03.719Z · LW(p) · GW(p)
"Thus, a people may prefer a free government, but if, from indolence, or carelessness, or cowardice, or want of public spirit, they are unequal to the exertions necessary for preserving it; if they will not fight for it when it is directly attacked; if they can be deluded by the artifices used to cheat them out of it; if by momentary discouragement, or temporary panic, or a fit of enthusiasm for an individual, they can be induced to lay their liberties at the feet even of a great man, or trust him with powers which enable him to subvert their institutions; in all these cases they are more or less unfit for liberty: and though it may be for their good to have had it even for a short time, they are unlikely long to enjoy it."
- John Stuart Mill
Mill enumerates some common scenarios in the political development of nation-states. Thoughts?
comment by M. Y. Zuo · 2022-08-22T16:33:59.363Z · LW(p) · GW(p)
Why are there so many cargo cults embraced by one segment of the population or another?
Is it something to do with the need to feel special and unique? To sublimate away the fear of being forgotten after death?
To compete for social status? An outlet for unrefined creative energies?
In some cases it's even more extreme, and seemingly even more maladaptive, than the Pacific Islanders spending their energy on building wooden radar dishes and control towers in hopes of attracting cargo planes.
Such as the unironic collectors of anime girl pillows. Or those single mothers who spend all their money buying luxury handbags, shoes, etc. Or young men going into debt to afford expensive upbadged versions of normal cars (in some cases the cars aren't even faster or more comfortable, just more expensive with flashy trim).
This leads to a scary thought, is the human reward system that easy to hijack?
↑ comment by lc · 2022-08-25T10:58:14.832Z · LW(p) · GW(p)
I really don't think it's that deep. People cargo cult because they want cargo and have an improper model of how cargo is lifted onto the island. Sometimes they get that improper model because of motivated reasoning but just as often it's because people are stupid.
comment by M. Y. Zuo · 2022-07-02T17:41:05.666Z · LW(p) · GW(p)
Hayek preferred a 'liberal dictator' over a 'democratic government lacking liberalism' if given the choice of systems for a transitional period. Thoughts?
Well, I would say that, as long-term institutions, I am totally against dictatorships. But a dictatorship may be a necessary system for a transitional period. At times it is necessary for a country to have, for a time, some form or other of dictatorial power. As you will understand, it is possible for a dictator to govern in a liberal way. And it is also possible for a democracy to govern with a total lack of liberalism. Personally I prefer a liberal dictator to democratic government lacking liberalism.
Friedrich Hayek, Interview in El Mercurio (1981)
Personally, I can understand the appeal of a dictatorship that defends minority groups, even for purely self-interested purposes, as well as the odiousness of a fair 'democratic government' that would curtail those same protections, even for the most well-reasoned and widely agreed-upon purposes.
And if given an either-or choice I can see why Hayek would side with the dictatorship.
↑ comment by Dagon · 2022-07-02T22:28:43.213Z · LW(p) · GW(p)
"Would you like a good process that gives bad outcomes, or a bad process that gives good outcomes?" is a VERY weird thing to ask a consequentialist. A process that gives good outcomes IS a good process. The hidden part of Hayek's example is the timeframes and the exit from a "transitional period", and the difficult question of where to find this dictator and make the populace accept him well enough that the dictator feels safe being liberal in policy.
IMO, dictatorships are less likely to be or stay liberal than democratic governments. And this gives democratic governments a pretty big edge, especially over timeframes that span generations. But we don't actually have many examples of dictatorships that last that long, or are liberal enough to qualify, nor of pure democracies, so I can't say whether I'd prefer any specific dictatorship over some specific democracy-like example.
↑ comment by M. Y. Zuo · 2022-07-03T00:06:10.081Z · LW(p) · GW(p)
A process that gives good outcomes IS a good process.
This seems to be some variant of the ends justifying the means?
↑ comment by gwern · 2022-07-03T00:25:22.967Z · LW(p) · GW(p)
Most people here are consequentialists, and so would ask: what else could justify the means?
↑ comment by M. Y. Zuo · 2022-07-03T01:36:06.414Z · LW(p) · GW(p)
The means could be self-justifying, the initial conditions could justify the means, the environment could justify the means, pure self-interest could, etc. Having the ends, and only the ends, justify the means seems like a very unlikely position for the majority of the human population to hold, given the huge array of possibilities.
EDIT: And some may even say there are no justifications at all, the very idea itself fallacious, also for a variety of reasons such as:
- Free will doesn't truly exist, usually expressed technically as "humans are like every other thermodynamic process in the universe"; determinists, super-determinists, predestination theologians (with suitable religious phrasing), etc., belong to this category.
- Justifications are always relative to some reference frame; moral relativists, cultural relativists, etc., belong to this category.
- Words themselves lack meaning, lack sufficient rigour, or lack some metaphysical quality needed to express this kind of relationship of 'justifying means'; Wittgenstein, Heidegger, Popper, et al. belong to this category.
- and so on
↑ comment by Dagon · 2022-07-03T02:05:56.309Z · LW(p) · GW(p)
Kind of. The means (and their consequences) are part of the ends. Most people trying to justify good ends through bad means forget that, and are actually pursuing bad ends. But if the sum of the results along the way is good, that's good.
↑ comment by JBlack · 2022-07-03T12:46:23.578Z · LW(p) · GW(p)
The phrase "ends justify the means" originally came from a context of ruling a state where it meant more like "beneficial longer term outcomes may matter more than whatever condemnation comes in the short term". It was never about whether such acts are good or bad, just that from a wider point of view they might be judged worthwhile.
↑ comment by M. Y. Zuo · 2022-07-03T13:12:02.502Z · LW(p) · GW(p)
I don't think the phrase originally came from any singular context or source, as it was a common enough view in all the major ancient civilizations: Ancient Egypt, Mesopotamia, the Indus Valley, and the North China Plain.
It also seems unlikely to have originated in ruling a state generally since recent historiography is confident that certain professions, such as prostitution, likely predate any recorded organized state.
comment by M. Y. Zuo · 2021-12-12T05:15:50.207Z · LW(p) · GW(p)
"Some of the people on death row today might not be there if the courts had not been so lenient on them when they were first offenders." - Thomas Sowell
The problem of ‘justice’. What time scales do we refer to when we say ‘justice’? A near ‘justice’ can in reality be a far tyranny. And vice versa. Thoughts?
↑ comment by ChristianKl · 2021-12-12T12:38:18.881Z · LW(p) · GW(p)
Some of them are also on death row because the courts were too harsh, put them together with other criminals, and got them to join a gang to survive in a harsh environment.
↑ comment by frontier64 · 2021-12-13T17:52:56.805Z · LW(p) · GW(p)
This happens exceedingly rarely. The thing you should understand about the American court system before judging it is that non-murderers rarely get sent to prison for more than a year for a first-time offense. If it's truly non-violent drug possession or dealing then in all likelihood a first-time offender won't get more than probation. The prison system is not creating felons by punishing people too harshly.
Look at the kid sent to prison for 7 years for threatening to shoot up his school on RuneScape. That's one of those examples of a way over-harsh punishment only given to send a message. The kid didn't fall in with a bad crowd because he was never a bad person. Prison doesn't force good people to become murderers when they get out. This kid did his unjust sentence, got out, and he's still a nice dude.
It's fairly easy to picture violent criminals as similar to you but unlucky enough to have grown up in a worse situation or been screwed by the system. It's hard to understand them as they are: people who are incredibly different from you and have a value system that is not aligned with society's.
↑ comment by M. Y. Zuo · 2021-12-12T15:02:58.931Z · LW(p) · GW(p)
Sounds like a possible scenario as well. Are they both just, both unjust, one or the other, variable?
And what period of time should we use as a standard?
The same for both scenarios, lenient punishments encouraging more severe crimes later on and onerous punishments also encouraging more severe crimes, since both lead to the same outcome?
Or different?
↑ comment by ChristianKl · 2021-12-12T21:26:52.595Z · LW(p) · GW(p)
When it comes to interacting with complex systems, expecting them to work according to your own preconceptions is generally a bad idea. You want policy to be driven by evidence about the effects of interventions and not just based on thought experiments. You want to build feedback systems into your system to optimize its actions.
You want to produce institutions that assume they can't know the answer to questions like this just by thinking, but that think about how to gather the evidence to make informed policy choices.
↑ comment by M. Y. Zuo · 2021-12-12T23:00:49.283Z · LW(p) · GW(p)
Well, that’s all well and good, but all organizations, including all conceivable institutions, will eventually seek to optimize towards goals that we, present-day people, cannot completely control.
i.e. they will carry out their affairs using whatever is at hand through their own preconceptions, regardless of how perfect our initial designs are or what we want their behaviours to be or how much we wish for them to lack preconceptions. They will seek answers to similar questions, perhaps with the same or different motivations.
So then if we proceed along such a path the same problem appears at the meta level. How do we take into account what future actors will consider what period of time they should use as a standard? (In order to build the ‘feedback system’ for them to operate in)