Policy Debates Should Not Appear One-Sided
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-03-03T18:53:08.000Z
Robin Hanson proposed stores where banned products could be sold.1 There are a number of excellent arguments for such a policy—an inherent right of individual liberty, the career incentive of bureaucrats to prohibit everything, legislators being just as biased as individuals. But even so (I replied), some poor, honest, not overwhelmingly educated mother of five children is going to go into these stores and buy a “Dr. Snakeoil’s Sulfuric Acid Drink” for her arthritis and die, leaving her orphans to weep on national television.
I was just making a factual observation. Why did some people think it was an argument in favor of regulation?
On questions of simple fact (for example, whether Earthly life arose by natural selection) there’s a legitimate expectation that the argument should be a one-sided battle; the facts themselves are either one way or another, and the so-called “balance of evidence” should reflect this. Indeed, under the Bayesian definition of evidence, “strong evidence” is just that sort of evidence which we only expect to find on one side of an argument.
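To make "strong evidence" concrete, here is the odds form of Bayes's theorem (a standard textbook identity, supplied for illustration; it is not spelled out in the original post):

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(H)}{P(\neg H)} \times \frac{P(E \mid H)}{P(E \mid \neg H)}
\]

Observing E is strong evidence exactly when the likelihood ratio P(E|H)/P(E|¬H) is far from 1, meaning E is far more probable under one hypothesis than under the other. On a question of simple fact, the true hypothesis keeps generating observations with lopsided likelihood ratios, which is why the evidence should pile up on one side.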
But there is no reason for complex actions with many consequences to exhibit this one-sidedness property. Why do people seem to want their policy debates to be one-sided?
Politics is the mind-killer. Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back. If you abide within that pattern, policy debates will also appear one-sided to you—the costs and drawbacks of your favored policy are enemy soldiers, to be attacked by any means necessary.
One should also be aware of a related failure pattern: thinking that the course of Deep Wisdom is to compromise with perfect evenness between whichever two policy positions receive the most airtime. A policy may legitimately have lopsided costs or benefits. If policy questions were not tilted one way or the other, we would be unable to make decisions about them. But there is also a human tendency to deny all costs of a favored policy, or deny all benefits of a disfavored policy; and people will therefore tend to think policy tradeoffs are tilted much further than they actually are.
If you allow shops that sell otherwise banned products, some poor, honest, poorly educated mother of five kids is going to buy something that kills her. This is a prediction about a factual consequence, and as a factual question it appears rather straightforward—a sane person should readily confess this to be true regardless of which stance they take on the policy issue. You may also think that making things illegal just makes them more expensive, that regulators will abuse their power, or that her individual freedom trumps your desire to meddle with her life. But, as a matter of simple fact, she’s still going to die.
We live in an unfair universe. Like all primates, humans have strong negative reactions to perceived unfairness; thus we find this fact stressful. There are two popular methods of dealing with the resulting cognitive dissonance. First, one may change one’s view of the facts—deny that the unfair events took place, or edit the history to make it appear fair.2 Second, one may change one’s morality—deny that the events are unfair.
Some libertarians might say that if you go into a “banned products shop,” passing clear warning labels that say THINGS IN THIS STORE MAY KILL YOU, and buy something that kills you, then it’s your own fault and you deserve it. If that were a moral truth, there would be no downside to having shops that sell banned products. It wouldn’t just be a net benefit, it would be a one-sided tradeoff with no drawbacks.
Others argue that regulators can be trained to choose rationally and in harmony with consumer interests; if those were the facts of the matter then (in their moral view) there would be no downside to regulation.
Like it or not, there’s a birth lottery for intelligence—though this is one of the cases where the universe’s unfairness is so extreme that many people choose to deny the facts. The experimental evidence for a purely genetic component of 0.6–0.8 is overwhelming, but even if this were to be denied, you don’t choose your parental upbringing or your early schools either.
I was raised to believe that denying reality is a moral wrong. If I were to engage in wishful optimism about how Sulfuric Acid Drink was likely to benefit me, I would be doing something that I was warned against and raised to regard as unacceptable. Some people are born into environments—we won’t discuss their genes, because that part is too unfair—where the local witch doctor tells them that it is right to have faith and wrong to be skeptical. In all goodwill, they follow this advice and die. Unlike you, they weren’t raised to believe that people are responsible for their individual choices to follow society’s lead. Do you really think you’re so smart that you would have been a proper scientific skeptic even if you’d been born in 500 CE? Yes, there is a birth lottery, no matter what you believe about genes.
Saying “People who buy dangerous products deserve to get hurt!” is not tough-minded. It is a way of refusing to live in an unfair universe. Real tough-mindedness is saying, “Yes, sulfuric acid is a horrible painful death, and no, that mother of five children didn’t deserve it, but we’re going to keep the shops open anyway because we did this cost-benefit calculation.” Can you imagine a politician saying that? Neither can I. But insofar as economists have the power to influence policy, it might help if they could think it privately—maybe even say it in journal articles, suitably dressed up in polysyllabismic obfuscationalization so the media can’t quote it.
I don’t think that when someone makes a stupid choice and dies, this is a cause for celebration. I count it as a tragedy. It is not always helping people, to save them from the consequences of their own actions; but I draw a moral line at capital punishment. If you’re dead, you can’t learn from your mistakes.
Unfortunately the universe doesn’t agree with me. We’ll see which one of us is still standing when this is over.
1Robin Hanson et al., “The Hanson-Hughes Debate on ‘The Crack of a Future Dawn,’” Journal of Evolution and Technology 16, no. 1 (2007): 99–126, http://jetpress.org/v16/hanson.pdf.
2This is mediated by the affect heuristic and the just-world fallacy.
187 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by HalFinney · 2007-03-03T21:41:14.000Z
Like much of Eliezer's writings, this is dense and full of interesting ideas, so I'll just focus on one aspect. I agree that people advocating positions should fully recognize even (or especially) facts that are detrimental to their side. People advocating deregulation need to accept that exactly the kinds of things Eliezer describes will happen.
I'm not 100% sure that, in a public forum where policy is being debated, people should feel obligated to advance arguments that work to their side's detriment. It depends on what the ground rules are (possibly implicit ones). If everyone is making a good faith attempt to provide this kind of balance in their statements, it could work well in theory. But if one side does this and the other does not, it will lead to an unbalanced presentation of the issues. Since in practice it seems that most people aren't so even-handed in their arguments, that would explain why, when someone does point out a fact that benefits one side, the audience will assume he favors that side, as happened to Eliezer.
Reading the above, I get the impression that Eliezer does in fact favor regulation in this context, and if so, then the audience conclusion was correct. He was not pointing out a fact that worked to oppose his conclusion, but rather he was providing a factual point that supports and leads to his position. So this would not be the best example of this somewhat idealized view of how policy debates should be conducted.
I note that thousands of people die every year in motorcycle accidents, a death rate far higher than in most modes of transportation. However I do not support banning motorcycles, for various reasons I won't go into at this time.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-03-03T22:51:58.000Z
Hal, I don't favor regulation in this context - nor would I say that I really oppose it. I started my career as a libertarian, and gradually became less political as I realized that (a) my opinions would end up making no difference to policy and (b) I had other fish to fry. My current concern is simply with the rationality of the disputants, not with their issues - I think I have something new to say about rationality.
I do believe that people with IQ 120+ tend to forget about their conjugates with IQ 80- when it comes to estimating the real-world effects of policy - either by pretending they won't get hurt, or by pretending that they deserve it. But so long as their consequential predictions seem reasonable, and so long as I don't think they're changing their morality to try to pretend the universe is fair, I won't argue with them whether they support or oppose regulation.
Replies from: bio_logical, HungryHobo
↑ comment by bio_logical · 2013-10-17T19:07:00.666Z
I favor the thesis statement here ("Policy debates should not appear one-sided"), but I don't favor the very flawed "argument" that supports it. One-sided policy debates should, in fact, appear one-sided, GIVEN one participant with a superior intelligence. Two idiots from two branches of the same political party arguing over which way to brutalize a giant body of otherwise uninvolved people (what typically amounts to "policy debate") should not appear "one sided" ...except to the people who know that there's only one side being represented (typically, the side that assumes that coercion is a good solution to the problem).
Hal, I don't favor regulation in this context - nor would I say that I really oppose it.
This is a life or death issue, and you don't have a moral opinion? What purpose could you possibly have for calling yourself a "libertarian" then? If the libertarian philosophy isn't consistent, or doesn't work, then shouldn't it be thrown out? Or, if it doesn't pertain to the new circumstances, then shouldn't it be known as something different than "libertarianism"? (Maybe you'd be a "socialist utopian" post-singularity, but pre-singularity when lots of people have IQs of <2,000, you're a libertarian. In this case, it might make more sense to call yourself a Hayekian "liberal," because then you're simply identifying with a historical trend that leads to a certain predicted outcome.)
I started my career as a libertarian, and gradually became less political as I realized that (a) my opinions would end up making no difference to policy
Gosh, I'm glad that Timothy Murphy, Lysander Spooner, and Frederick Douglass didn't feel that way. I'm also glad that they didn't feel that way because they knew something about how influential they could be, they understood the issues at a deep level, and they were highly-motivated. Just because the Libertarian Party is ineffectual and as infiltrated as the major parties are corrupt doesn't mean it has to be. Moreover, there are far more ways to influence politics than by getting involved with a less-corrupt third party. This site itself could be immensely influential, and actually could obtain a more rational, laissez-faire outcome from politics (although it couldn't do that by acting within the confines established by the current political system). Smart people should act in a smart way to get smart results: even in the domain of politics. If politics is totally corrupted (as I believe it is) then such smart people should act in a manner that is philosophically external to the system, and morally superior to it.
and (b) I had other fish to fry.
This is a legitimate concern. We all have priorities. That's actually the purpose of philosophy itself. If I didn't think you had chosen wisely, I probably wouldn't be on this site. That said, nothing stops you from at least passively being as right as Thoreau was, over 100 years ago.
My current concern is simply with the rationality of the disputants, not with their issues - I think I have something new to say about rationality.
Rationality has something to say about every issue, and political issues are especially important, because that's where one mostly-primate MOSH has a gun, and a stated intention of using it.
I do believe that people with IQ 120+ tend to forget about their conjugates with IQ 80- when it comes to estimating the real-world effects of policy - either by pretending they won't get hurt, or by pretending that they deserve it.
As if these were the only two options. (And as if regulation helped the poor! LOL!) This makes a "straw man" of libertarianism. Walter Block points out that the law of unintended consequences indicates that the abuse of force, such as minimum wage "regulation" allegedly intended to help the poor, actually hurts them. He also points out that the people making and enforcing the policies know this, because they have the evidence to know it, but that they often don't care, or are beholden to perverse interests, such as unions who wish to put less-skilled labor out of business. Occasionally such regulation hurts the poor only in combination with a set of other regulations and "corrections" to those regulations, so one mustn't narrow one's criticism to just one political intervention. It's a good idea to think about this until you comprehend it at a deep level.
But so long as their consequential predictions seem reasonable, and so long as I don't think they're changing their morality to try to pretend the universe is fair, I won't argue with them whether they support or oppose regulation.
This is a moral failing on your part, if you think that your argument could possibly lead to a better outcome, and if you WOULD argue with them about the right to contract with cryonics companies, in the case where a person you love will either die for good, or have the chance of life. This is not a criticism of you, because I go through my day in a continual stream of moral failures, as does everyone who lacks the ability to solve a really large moral problem. If my IQ were 2,000 and I allowed the prison industrial complex to continue to operate, and even paid taxes to support it, that would be an immense moral failing. If, with my far lower IQ, I pay taxes because I'm stupid and coerced, that's a lesser moral failing, but a moral failing nonetheless. Unless I stop complying with evil, as Thoreau did, I'm guilty of a moral failing (Thoreau was only guilty of a mental and physical failing). Lysander Spooner and Frederick Douglass were both guilty of physical and mental failings as well, but to a lesser extent. They were fairly effective, even if they were far less effective than an artilect with an IQ of 2,000 would have been.
p( overall fairness of law | unfairness) is probably a bad way to look at this, because it's using a suitcase word "unfairness" that means something different to different people. Even given this context, I could point out that the universe trends toward fairness, given intelligence (but that the world is very unintelligent now, because it's only at human-level intelligence, which is USUALLY scarcely more philosophical than animal intelligence). The concept of individual rights requires emotional distance, given occasional "unavoidable under any system" bad outcomes. The bad outcomes are often too difficult to analyze for "unfairness" or "fairness" but bad outcomes that seem cruel are always useful to the politician, because every law they make definitely enhances their illegitimate power. If we're smart enough to recognize that this is a universal, and that this has caused the complete degradation of our once-life-saving-but-now-life-destroying system of property and law, then why shouldn't we always point it out? The abolitionists only gained ground in defeating chattel slavery when they refused to be silent.
Moreover, since the common law has been thrown out, the politicians and their agents will predictably have free rein to enforce the new laws in whatever manner they choose. This is also a known fact of reality that can and should factor into every argument we make.
You can see the dead mother holding the unlabeled bottle of "sulfuric acid," but you can't see the society that refrained from ever going down the path of a regulatory big brother, where the courts and the media had functioned properly in their information-sending capacity for 100 years. You can't see the carefulness of a society that reads labels because it might really matter, since the government can't and won't protect you; and you also don't see the benevolence of a society that hasn't been trained to mindlessly trust everything that carries an FDA-label. You can see the bad result, but can't imagine the good alternative. If you compound this error by appealing to force to solve the problem, you mark yourself as a low-intelligence human. This is clear to anyone who has been paying attention. The fact that you say that you neither favor nor oppose regulation indicates that, on this issue, you had better things to do than pay attention.
But let's consider the admittedly "unfair" but vastly "fairer" universe as it looks with far less regulation, and let's not make the stupid (unwittingly self-destructive) blunder of assuming that regulation saved the life of an idiot. In actuality, in the unregulated universe, there are orders of magnitude fewer dead mothers, from all causes, not just mindless mistakes of their own causing. Additionally, there is then a pressure against "moral hazard" in that universe. Without this moral hazard, the universe is far more intelligent, and thus far fairer.
You'll also never see the 100 years of unregulated progression in the direction of the laissez-faire "fairer" universe. You only see the alleged "fast track" to justice (where the legislators have drowned us all in unenforceable laws with perverse outcomes for over 100 years), and you begin arguing in that muddy environment. Of course you'll lose any argument unless you argue based on a deep philosophical conviction, because the subject will remain narrow, and the political side of the argument will be able to keep the focus of the discussion sufficiently narrow. If you attempt to reference the larger picture, you'll be accused of being "impractical" or "off topic."
Would you have "argued" with a slavery advocate in the time of abolitionism? (Or at least "stated your opposition to them.") Would you have argued with a Hitler supporter in 1930 Germany? If you'd like to think that you would, then next time someone defends the truly indefensible (not what is considered to be indefensible by the sociopath-directed conformist majority), then you should point out that their ideas are stupid and murderous. It's the least you can do for the mother whose only choice of medicine is an FDA-approved version of sulfuric acid.
Replies from: frankybegs
↑ comment by frankybegs · 2020-04-03T06:15:05.533Z
I think you need to read more of the writings here re: scepticism of one's own beliefs.
↑ comment by HungryHobo · 2015-03-17T18:12:29.513Z
I think you may be conflating two common meanings of the word "deserve", which may be magnifying some of the conflict over your statement.
There's deserve [moral], i.e. that someone deserves it the way someone might deserve prison for a terrible crime, and deserve as in [events that happen to them as a result of their own actions], which need not have any moral element.
Someone who goes walking into the Sahara without water, shelter or navigation equipment has done no moral wrong. They don't deserve [moral] bad things to happen to them but bad things may happen to them as a result of their unwise choices and may be entirely their own doing. In that sense they get what they have earned or "deserve" [non-moral, as in "have a claim to" or outcome they have brought about]. It's not something malicious that's been forced upon them by others.
Someone who steps in a hidden bear trap has been unfairly maimed by a cruel uncaring universe and does not deserve [moral] or deserve[reaping earned results of their own actions] it.
Someone who, against the advice of others, ignoring all safety warnings, against even their own better judgement uses a clearly marked bear trap as a sex toy has done no moral wrong. They don't deserve[moral] bad things to happen to them but bad things may happen to them as a result of their unwise choices.
comment by TGGP3 · 2007-03-03T23:03:38.000Z
Nobody chooses their genes or their early environment. The choices they make are determined by those things (and some quantum coin flips). Given what we know of neuroscience, how can anyone deserve anything?
Replies from: BlueAjah, smk, EngineerofScience
↑ comment by BlueAjah · 2013-01-12T15:01:54.646Z
"Nobody chooses their genes or their early environment. The choices they make are determined by those things (and some quantum coin flips)."
All true so far... but here comes the huge logical leap...
"Given what we know of neuroscience how can anyone deserve anything?"
What does neuroscience showing the cause of why bad people choose to do bad things have to do with whether or not bad people deserve bad things to happen to them?
The idea that bad people who choose to do bad things to others deserve bad things to happen to them has never been based on an incorrect view of neuroscience, and neuroscience doesn't change that even slightly.
Replies from: Chrysophylax
↑ comment by Chrysophylax · 2013-01-16T10:43:51.674Z
The point TGGP3 is making is that they didn't choose to do bad things, and so are not bad people - they're exactly like you would be if you had lived their lives. Always remember that you are not special - nobody is perfectly rational, and nobody is the main character in a story. To quote Eliezer, "You grew up in a post-World-War-Two society where 'I vas only followink orders' is something everyone knows the bad guys said. In the fifteenth century they would've called it honourable fealty." Remember that some Nazis committed atrocities, but some Nazis were ten years old in 1945. It is very difficult to be a "good person" (by your standards) when you have a completely different idea of what being good is. You are displaying a version of the fundamental attribution error - that is, you don't think of other people as being just like you and doing things for reasons you don't know about, so you can use the words "bad person" comfortably. The idea "bad people deserve bad things to happen to them" is fundamentally flawed because it assumes that there is such a thing as a bad person, which is unproven at best - even the existence of free will is debatable.
There are people who consider themselves to be bad people, but they tend to be either mentally ill or people who have not yet resolved the conflict between "I have done X" and "I think that it is wrong to do X" - that is, they have not adjusted to having become new people with different morals since they did X (which is what criminal-justice systems are meant to achieve).
Replies from: twanvl
↑ comment by twanvl · 2013-01-16T11:10:29.428Z
The point TGGP3 is making is that they didn't choose to do bad things, and so are not bad people - they're exactly like you would be if you had lived their lives.
I can only interpret a statement like this as "they are exactly like you would be if you were exactly like them", which is of course a tautology.
The idea "bad people deserve bad things to happen to them" is fundamentally flawed because it assumes that there is such a thing as a bad person
If you first accept a definition of what is good and what is bad, then certainly there are bad people. A bad person is someone who does bad things. This is still relative to some morality, presumably that of the speaker.
Replies from: MugaSofer, Chrysophylax
↑ comment by MugaSofer · 2013-01-16T13:55:58.272Z
I can only interpret a statement like this as "they are exactly like you would be if you were exactly like them", which is of course a tautology.
No. If they were, say, psychopaths, or babyeater aliens in human skins, then living their life - holding the same beliefs, experiencing the same problems - would not make you act the same way. It's a question of terminal value differences and instrumental value differences. The former must be fought (or at most bargained with), but the latter can be persuaded.
If you first accept a definition of what is good and what is bad, then certainly there are bad people. A bad person is someone who does bad things. This is still relative to some morality, presumably that of the speaker.
So anyone whose actions have negative consequences "deserves" Bad Things to happen to them?
Replies from: twanvl
↑ comment by twanvl · 2013-01-16T14:54:36.498Z
So anyone whose actions have negative consequences "deserves" Bad Things to happen to them?
I am not saying that. I was only replying to the part "... is fundamentally flawed because it assumes that there is such a thing as a bad person".
Replies from: MugaSofer
↑ comment by MugaSofer · 2013-01-18T12:30:52.617Z
My point is that the distinction between "Bad Person" and "Good Person" seems ... well, arbitrary. Anyone's actions can have Bad Consequences. I guess that didn't come across so well, huh?
Replies from: Peterdjones, army1987
↑ comment by Peterdjones · 2013-01-18T13:08:48.610Z
This is a flaw with (ETA: simpler versions of) consequentialism: no one can accurately predict the long-range consequences of their actions. But it is unreasonable to hold someone culpable, to blame them, for what they cannot predict. So the consequentialist notion of good and bad actions doesn't translate directly into what we want from a practical moral theory: guidance as to how to apportion blame and praise. This line of thinking can lead to a kind of fusion of deontology and consequentialism: we praise someone for following the rules ("as a rule, try to save a life where you can") even if the consequences were unwelcome ("The person you saved was a mass murderer").
Replies from: TheOtherDave, ArisKatsaris, JGWeissman, None
↑ comment by TheOtherDave · 2013-01-18T15:55:28.414Z
I agree that if what I want is a framework for assigning blame in a socially useful fashion, consequentialism violates many of our intuitions about reasonableness of such a framework.
So, sure, if the purpose of morality is to guide the apportionment of praise and blame, and we endorse those intuitions, then it follows that consequentialism is flawed relative to other models.
It's not clear to me that either of those premises is necessary.
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2013-01-18T16:20:54.578Z
There's a confusion here between consequentialistically good acts (ones that have good consequences) and consequentialistically good behaviour (acting according to your beliefs of what acts have good consequences).
People can only act according to their model of the consequences, not according to the consequences themselves.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-01-18T23:43:02.341Z
I find your terms confusing, but yes, I agree that classifying acts is one thing and making decisions is something else, and that a consequentialist does the latter based on their expectations about the consequences, and these often get confused.
↑ comment by ArisKatsaris · 2013-01-18T16:17:10.496Z
A consequentialist considers the moral action to be the one that has good consequences.
But that means moral behaviour is to perform the acts that we anticipate to have good consequences.
And moral blame or praise is likewise assigned to people based on the consequences of their actions as they anticipated them...
So the consequentialist assigns moral blame if it was anticipated that the person saved was a mass murderer and was likely to kill multiple times again....
Replies from: Peterdjones
↑ comment by Peterdjones · 2013-01-18T16:22:00.345Z
And how do we anticipate or project, save on the basis of relatively tractable rules?
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2013-01-18T16:49:33.308Z
We must indeed use rules as a matter of practical necessity, but it's just that: a matter of practical necessity. We can't model the entirety of our future lightcone in sufficient detail, so we make generic rules like "do not lie", "do not murder", and "don't violate the rights of others", which seem to be more likely to have good consequences than the opposite.
But the good consequences are still the thing we're striving for -- obeying rules is just a means to that end, and therefore can be replaced or overridden in particular contexts where the best consequences are known to be achievable differently...
A consequentialist is perhaps a bit scarier in the sense that you don't know if they'll stupidly break some significant rule by using bad judgment. But a deontologist that follows rules can likewise be scary in blindly obeying a rule which you were hoping they would break.
In the case of super-intelligent agents that shared my values, I'd hope them to be consequentialists. As the intelligence of the agent decreases, there's assurance in some limited type of deontology... "For the good of the tribe, do not murder even for the good of the tribe..."
Replies from: Peterdjones
↑ comment by Peterdjones · 2013-01-18T17:42:30.450Z
That's the kind of Combination approach I was arguing for.
Replies from: DaFranker
↑ comment by DaFranker · 2013-01-18T18:14:07.371Z
My understanding of pure Consequentialism is that this is exactly the approach it promotes.
Am I to understand that you're arguing for consequentialism by rejecting "consequentialism" and calling it a "combination approach"?
Replies from: MugaSofer
↑ comment by MugaSofer · 2013-01-20T15:39:36.735Z
That would be why he specified "simpler versions", yes?
Replies from: Peterdjones
↑ comment by Peterdjones · 2013-01-20T17:15:33.066Z
Yes
↑ comment by JGWeissman · 2013-01-18T16:38:52.488Z
So the consequentialist notion of good and bad actions doesn't translate directly into what we want from a practical moral theory: guidance as to how to apportion blame and praise.
What I want out of a moral theory is to know what I ought to do.
As far as blame and praise go, consequentialism with game theory tells you how to use a system of blame and praise to provide good incentives for desired behavior.
Replies from: Peterdjones, fubarobfusco
↑ comment by Peterdjones · 2013-01-18T17:40:43.149Z
What I want out of a moral theory is to know what I ought to do.
So you don't want to be able to understand how punishments and rewards are morally justified--why someone ought, or ought not, to be sent to jail?
Replies from: None
↑ comment by [deleted] · 2013-01-18T17:53:00.485Z
It seems to me that judging people and sending them to jail is on the level of actions, like whether you should donate to charity. Whether someone ought to be jailed should be judged like other moral questions; does it produce good consequences or follow good rules or whatever.
I don't think a moral theory has to have special cases built in for judging other people's actions, and then prescribing rewards/punishments. It should describe constraints on what is right, and then let you derive individual cases, like the righteousness of jail, from what is right in general.
Replies from: Peterdjones
↑ comment by Peterdjones · 2013-01-18T18:03:08.324Z
Whether someone ought to be jailed should be judged like other moral questions; does it produce good consequences or follow good rules or whatever.
But, unless JGWeissman is a judge, the question of whether someone should go to jail is a moral question (as you seem to accept) that is not concerned with what JGWeissman ought to do.
I don't think a moral theory has to have special cases built in for judging other people's actions, and then prescribing rewards/punishments
Universalisability rides again.
Replies from: JGWeissman, None
↑ comment by JGWeissman · 2013-01-18T18:25:23.832Z
But, unless JGWeissman is a judge, the question of whether someone should go to jail is a moral question (as you seem to accept) that is not concerned with what JGWeissman ought to do.
The question of whether or not someone ought to go to jail, independent of whether or not any agent ought to put them in jail, doesn't seem very meaningful. In general, I don't want people to go to jail because jail is unpleasant, it prevents people from doing many useful things, and its dehumanizing nature can lead to people becoming more criminal. I want specific people to go to jail because it prevents them from repeating their bad actions, and having jail as a predictable consequence for a well-defined set of bad behaviors is an incentive for people not to execute those bad behaviors. (And I want our criminal justice system to be more efficient about this.) I don't see why it has to be more complicated, or more fundamental, than that. Nyan is exactly right: judging other people's actions is just another sort of action you can choose; it is not fundamentally a special case.
Replies from: Peterdjones
↑ comment by Peterdjones · 2013-01-18T19:11:30.875Z
The question of whether or not someone ought to go to jail, independent of whether or not any agent ought to put them in jail, doesn't seem very meaningful.
So when you said morality was about what you ought to do, you meant it was about what people in general ought to do. ETA: And what if agent A would jail them, and agent B would free them? They're either in jail or they are not.
Nyan is exactly right: judging other people's actions is just another sort of action you can choose; it is not fundamentally a special case.
But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways. Morality is not just decision theory. Morality is about what people ought to do. What people ought to do is the good. When something is judged good, praise and reward are given; when something is judged wrong, blame and punishment are given.
Replies from: DaFranker
↑ comment by DaFranker · 2013-01-18T19:35:40.135Z
So when you said morality was about what you ought to do, you meant it was about what people in general ought to do.
No. It's about what JGWeissman in general ought to do, including "JGWeissman encourages and/or forces everyone else to do X, and convinces everyone to be consequentialist and follow the same principles as JGWeissman".
Does that make it clearer? Prescription is just an action to take like any other. Take another step back into meta and higher-order. These discussions we're having, convincing people, thinking in certain ways that promote certain general behaviors, are all things we individually are doing, actions that one individual consequentialist agent will evaluate in the same manner as they would evaluate "Give fish or not?"
But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways.
This is technically unknown, unverifiable, and seems very dubious and unlikely and irrelevant to me. Unless you completely exclude transitivity and instrumentality from your entire model of the world.
Basically, most actions I can think of will either increase or decrease the probability of a ton of possible-futures at the same time, so one would want to take actions which increase the odds of the more desirable possible futures at the expense of less desirable ones. Even if the action doesn't directly impact or impacts it in a non-obvious way.
For example, a policy of not lying, even if in this case it would save some pain, could be much more useful for increasing the odds of possible futures where you and the people you care about lie to each other a lot less, and since lying is much more likely to be hurtful than beneficial and economies of scale apply, it might be consequentially better to prescribe yourself the no-lying policy even in this particular instance where it will be immediately negative.
Also note that "judging something good" and "giving praise and rewards", as well as "judging something bad" and "attributing blame and giving punishment", are also actions to decide upon. So deciding whether to blame or to praise is a set of actions where, yes, morality is about deciding which one to do.
Your mental judgments are actions, in the useful sense when discussing metaethics.
Replies from: Peterdjones
↑ comment by Peterdjones · 2013-01-18T19:53:08.099Z
No. It's about what JGWeissman in general ought to do, including "JGWeissman encourages and/or forces everyone else to do X, and convinces everyone to be consequentialist and follow the same principles as JGWeissman".
Is it? That isn't relevant to me. It isn't relevant to interaction between people, it isn't relevant to society as a whole, and it isn't relevant to criminal justice. I don't see why I should call anything so jejune "morality".
Does that make it clearer? Prescription is just an action to take like any other. Take another step back into meta and higher-order. These discussions we're having, convincing people, thinking in certain ways that promote certain general behaviors, are all things we individually are doing, actions that one individual consequentialist agent will evaluate in the same manner as they would evaluate "Give fish or not?"
Standard consequentialists can and do judge the actions of others to be right or wrong according to their consequences. I don't know what you think is blocking that off.
But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways.
This is technically unknown, unverifiable, and seems very dubious and unlikely and irrelevant to me. Unless you completely exclude transitivity and instrumentality from your entire model of the world.
Discussions of metaethics are typically pinned to sets of common-sense intuitions. It is a common-sense intuition that choosing vanilla instead of chocolate is morally neutral. It is common sense that I should not steal someone's wallet although the money is morally neutral.
Basically, most actions I can think of will either increase or decrease the probability of a ton of possible-futures at the same time, so one would want to take actions which increase the odds of the more desirable possible futures at the expense of less desirable ones.
That is not a fact about morality; it is an implication of the naive consequentialist theory of morality -- and one that is often used as an objection against it.
For example, a policy of not lying, even if in this case it would save some pain, could be much more useful for increasing the odds of possible futures where you and the people you care about lie to each other a lot less, and since lying is much more likely to be hurtful than beneficial and economies of scale apply, it might be consequentially better to prescribe yourself the no-lying policy even in this particular instance where it will be immediately negative.
Or I might be able to prudently predate. Although you are using the language of consequentialism, your theory is actually egoism: you are saying that there is no sense in which I should care about people unknown to me, but instead I should just maximise the values I happen to have (thereby collapsing ethics into instrumental rationality).
Also note that "judging something good" and "giving praise and rewards", as well as "judging something bad" and "attributing blame and giving punishment", are also actions to decide upon. So deciding whether to blame or to praise is a set of actions where, yes, morality is about deciding which one to do.
Morality is a particular kind of deciding and acting. You cannot eliminate the difference between ethics and instrumental decision theory by noting that they are both to do with acts and decisions. There is still the distinction between instrumental and moral acts, instrumental and moral decisions.
Replies from: DaFranker
↑ comment by DaFranker · 2013-01-18T20:16:38.199Z
(...)
Is it? That isn't relevant to me. It isn't relevant to interaction between people, it isn't relevant to society as a whole, and it isn't relevant to criminal justice. I don't see why I should call anything so jejune "morality".
(...)
Standard consequentialists can and do judge the actions of others to be right or wrong according to their consequences. I don't know what you think is blocking that off.
Indeed. "Judge actions of Person X" leads to better consequences than not doing it as far as they can predict. "Judging past actions of others" is an action that can be taken. "Judging actions of empirical cluster Y" is also an action, and using past examples of actions within this cluster that were done by others as a reference for judging the overall value of actions of this cluster is an extremely useful method of determining what to do in the future (which may include "punish the idiot who did that" and "blame the person" and whatever other moral judgments are appropriate).
Did I somehow communicate that something was blocking that off? If you hadn't said "I don't know what you think is blocking that off.", I'd have assumed you were perfectly agreeing with me on those points.
(...)
Or I might be able to prudently predate. Although you are using the language of consequentialism, your theory is actually egoism: you are saying that there is no sense in which I should care about people unknown to me, but instead I should just maximise the values I happen to have (thereby collapsing ethics into instrumental rationality).
If you want to put your own labels on everything, then yes, that's exactly what my theory is and that's exactly how it works.
It just also happens to coincide that the values I happen to have include a strong component for what other people value, and the expected consequences of my actions whether I will know the consequences or not, and for the well-being of others whether I will be aware of it or not.
So yes, by your words, I'm being extremely egoist and just trying to maximize my own utility function alone by evaluating and calculating the consequences of my actions. It just so happens, by some incredible coincidence, that maximizing my own utility function mostly correlates with maximizing some virtual utility function that maximizes the well-being of all humans.
How incredibly coincidental and curious!
Morality is a particular kind of deciding and acting. You cannot eliminate the difference between ethics and instrumental decision theory by noting that they are both to do with acts and decisions. There is still the distinction between instrumental and moral acts, instrumental and moral decisions.
Your mental judgments are actions, in the useful sense when discussing metaethics
Indeed. And when you take a step back, it is more moral to act instrumentally than to act as if the instrumental value of actions were irrelevant. To return to your previous words, I believe you'll agree that someone who acts in a manner that instrumentally encourages others to take morally good actions is doing something that attracts praise, and I think this also means it's more moral.
I would extend this such that all instrumentally-useful-towards-moral-things actions (that are also expected to give this result and done for this reason) be called "morally good" themselves.
Replies from: Peterdjones
↑ comment by Peterdjones · 2013-01-18T20:30:17.030Z
Indeed. "Judge actions of Person X" leads to better consequences than not doing it as far as they can predict. "Judging past actions of others" is an action that can be taken. "Judging actions of empirical cluster Y" is also an action, and using past examples of actions within this cluster that were done by others as a reference for judging the overall value of actions of this cluster is an extremely useful method of determining what to do in the future (which may include "punish the idiot who did that" and "blame the person" and whatever other moral judgments are appropriate).
The point being what? That moral judgments have an instrumental value? That they don't have a moral value? That morality collapses into instrumentality?
It just also happens to coincide that the values I happen to have include a strong component for what other people value, and the expected consequences of my actions whether I will know the consequences or not, and for the well-being of others whether I will be aware of it or not.
Yes, but the idiosyncratic disposition of your values doesn't make egoism into standard c-ism.
How incredibly coincidental and curious!
That was meant sarcastically: so it isn't coincidence. So something makes egoism systematically coincide with c-ism. What? I really have no idea.
Your mental judgments are actions, in the useful sense when discussing metaethics
What is the point of that comment?
Indeed. And when you take a step back, it is more moral to act instrumentally than to act as if the instrumental value of actions were irrelevant.
That is not obvious.
To return to your previous words, I believe you'll agree that someone who
That is incomplete.
Replies from: DaFranker
↑ comment by DaFranker · 2013-01-18T20:40:07.541Z
To return to your previous words, I believe you'll agree that someone who
That is incomplete.
Oh, sorry. I was jumping from place to place. I've edited the comment, what I meant to say was:
"To return to your previous words, I believe you'll agree that someone who acts in a manner that instrumentally encourages others to take morally good actions is something that attracts praise, and I think this also means it's more moral.
I would extend this such that all instrumentally-useful-towards-moral-things actions (that are also expected to give this result and done for this reason) be called "morally good" themselves."
Your mental judgments are actions, in the useful sense when discussing metaethics
What is the point of that comment?
For me, it's a good heuristic that judgments and thoughts also count as actions when I'm thinking of metaethics, because thinking that something is good or judging an action as bad will influence how I act in the future indirectly.
So a good metaethics has to also be able to tell which kinds of thoughts and judgments are good or bad, and what methods and algorithms of making judgments are better, and what / who they're better for.
The point being what? That moral judgments have an instrumental value? That they don't have a moral value? That morality collapses into instrumentality?
Mu, yes, no, yes.
Moral judgments are instrumentally valuable for bringing about more morally-good behavior. Therefore they have moral value in that they bring about more expected moral good. Moral good can be reduced to instrumental things that bring about worldstates that are considered better, and the "considered better" is a function executed by human brains, a function that is how it is because it was more instrumental than other functions (i.e. by selection effects).
(...)
Yes, but the idiosyncratic disposition of your values doesn't make egoism into standard c-ism.
I suppose. The wikipedia page for Consequentialism seems to suggest that a significant portion of consequentialism takes a view very similar to this.
Replies from: Peterdjones
↑ comment by Peterdjones · 2013-01-18T21:12:55.416Z
Moral good can be reduced to instrumental things that bring about worldstates that are considered better, and the "considered better" is a function executed by human brains, a function that is how it is because it was more instrumental than other functions (i.e. by selection effects).
That isn't a reduction that can be performed by real-world agents. You are using "reduction" in the peculiar LW sense of "ultimately composed of" rather than the more usual "understandable in terms of". For real-world agents, morality does not reduce (in the second sense) to instrumentality: they may be obliged to override their instrumental concerns in order to be moral.
Replies from: DaFranker
↑ comment by DaFranker · 2013-01-18T21:34:10.694Z
they may be obliged to override their instrumental concerns in order to be moral.
Errh, could you reduce/taboo/refactor "instrumental concerns" here?
If I act in an instrumentally-moral manner, I bring about more total moral good than if I act in a manner that is just "considered moral now" but would result in lots of moral bad later.
One weird example here is making computer programs. Isn't it rather a moral good to make computer programs that are useful to at least some people? Should this override the instrumental part where the computer program in question is an unsafe paperclip-maximizing AGI?
I'm not sure I understand your line of reasoning for that last part of your comment.
On another note, I agree that I was using "reduction" in the sense of describing a system according to its ultimate elements and rules, rather than...
"understandable in terms of"? What do you even mean? How is this substantially different? The wikipedia article's "an approach to understanding the nature of complex things by reducing them to the interactions of their parts" definition seems close to the sense LW uses.
In the real world, my only algorithm for evaluating morality is the instrumentality of something towards bringing about more desirable world-states. The desirability of a world-state is a black-box process that compares the world-state to "ideal" world-states in an abstract manner, where the ideal worldstates are those most instrumental towards having more instrumental worldstates, the recursive stack being most easily described as "worldstates that these genetics prefer, given that these genetics prefer worldstates where more of these genetics exist, given that these genetics have (historically) caused worldstates that these genetics preferred", etc. etc. and then you get the standard Evolution Theory statements.
Replies from: Peterdjones
↑ comment by Peterdjones · 2013-01-20T18:47:21.769Z
Errh, could you reduce/taboo/refactor "instrumental concerns" here?
If I am morally prompted to put money in the collecting tin, I lose its instrumental value. As before, I am thinking in "near" (or "real") mode.
If I act in an instrumentally-moral manner, I bring about more total moral good than if I act in a manner that is just "considered moral now" but would result in lots of moral bad later.
Huh? I don't think "instrumental" means "actually will work from an omniscient PoV". What we think of as instrumental is just an approximation, and so is what we think of as moral. Given our limitations, "don't kill unless there are serious extenuating circumstances" is both "what is considered moral now" and as instrumental as we can achieve.
One weird example here is making computer programs. Isn't it rather a moral good to make computer programs that are useful to at least some people?
I don't see why. Is it moral for trees to grow fruit that people can eat? Morality involves choices, and it involves ends. You can choose to drive a nail in with a hammer, or to kill someone with it. Likewise software.
I'm not sure I understand your line of reasoning for that last part of your comment.
It's what I say at the top: if I am morally prompted to put money in the collecting tin, I lose its instrumental value.
On another note, I agree that I was using "reduction" in the sense of describing a system according to its ultimate elements and rules, rather than...
You may have been "using" it in the sense of connoting or intending that, but you cannot have been using it in the sense of denoting or referencing that, since no such reduction exists (in the sense that a reduction of heat to molecular motion exists as a theory).
"understandable in terms of"? What do you even mean?
Eg:"All the phenomena associated with heat are understandable in terms of the disorganised motion of the molecules making up a substance".
How is this substantially different? The wikipedia article's "an approach to understanding the nature of complex things by reducing them to the interactions of their parts" definition seems close to the sense LW uses.
That needs tabooing. It explains "reduction" in terms of "reducing".
"In the real world, my only algorithm for evaluating morality is the instrumentality of something towards bringing about more desirable world-states."
Says who? If the non-cognitivists are right, you have an inaccessible black-box source of moral insights. If the opponents of hedonism are right, morality cannot be conceptually equated with desirability. (What a world of heroin addicts desire is not necessarily what is good.)
The desirability of a world-state is a black-box process
Or an algorithm that can be understood and written down, like the "description" you mention above? That is a rather important distinction.
that compares the world-state to "ideal" world-states in an abstract manner, where the ideal worldstates are those most instrumental towards having more instrumental worldstates,
How does that ground out? The whole point of instrumental values is that they are instrumental for something.
the recursive stack being most easily described as "worldstates that these genetics prefer, given that these genetics prefer worldstates where more of these genetics exist, given that these genetics have (historically) caused worldstates that these genetics preferred", etc. etc. and then you get the standard Evolution Theory statements.
There's no strong reason to think that something actually is good just because our genes say so. It's a form of Euthyphro, as EY has noted.
↑ comment by [deleted] · 2013-01-18T18:26:05.294Z
Universalisability rides again.
If I'm parsing that right, you misunderstood my point. Sorry.
I am not trying to lose information by applying a universalizing instinct. It is fully OK, on the level of a particular moral theory, to make such judgements and prescriptions. I'm saying, though, that this is a matter of normative ethics, not metaethics.
As a matter of metaethics, I don't think moral theories are about judging the actions of other people, or even yourself. I think they are about what you ought to do, with double emphasis on "you". As a matter of normative ethics, I think it is terminally good to punish the evil and reward the just (though it is also instrumentally a good idea for game theory reasons), but this should not leak into metaethics.
Do you understand what I'm getting at better now?
Replies from: Peterdjones
↑ comment by Peterdjones · 2013-01-18T19:17:33.847Z
I don't think moral theories are about judging the actions of other people, or even yourself. I think they are about what you ought to do, with double emphasis on "you"
What I ought to do is the kind of actions that attract praise. The kind of actions that attract praise are the kind that ought to be done. Those are surely different ways of saying the same thing.
Why would you differ? Maybe it's the "double emphasis on you". The situations in which I morally ought not do something to my advantage are where it would affect someone else. Maybe you are an ethical egoist.
Replies from: DaFranker, None
↑ comment by DaFranker · 2013-01-18T19:43:57.326Z
Soooo...
Suppose I hypnotize all humans. All of them! And I give them all the inviolable command to always praise murder and genocide. I'm so good at hypnosis that it overrides everything else and this Law becomes a tightly-entangled part of their entire consciousnesses. However, they still hate murder and genocide, are still unhappy about their effects, etc. They just praise it, both vocally and internally and mentally. Somewhat like how many used to praise Zeus, despite most of his interactions with the world being "Rape people" and "Kill people".
By the argument you're giving, this would effectively hack and reprogram morality itself (gasp!) such that you should always do murder and genocide as much as possible (since they "always" praise it, without diminishing returns or habituation effects or desensitization).
Clearly this is not the same as what you ought to do.
(In this case, my first guess would be that you should revert my hypnosis and prevent me and anyone else from ever doing that again.)
For more exploration into this, suppose I'm always optimally good. Always. A perfectly optimally-morally-good human. What praise do I get? Well, some for that, some once in a while when I do something particularly heroic. Otherwise, various effects make the praise rather rare.
On the other hand, if I'm a super-sucky bad human that kills people by accident all the time (say, ten every hour on average), then each time I manage to prevent one such accident I get praise. I could optimize this and generate a much larger amount of praise with this strategy. Clearly this set of actions attracts more praise. Ought I to do this, and seek to do it more than the previous one?
Replies from: Peterdjones, shminux↑ comment by Peterdjones · 2013-01-18T19:58:00.573Z · LW(p) · GW(p)
By the argument you're giving, this would effectively hack and reprogram morality itself (gasp!) such that you should always do murder and genocide as much as possible (since they "always" praise it, without diminishing returns or habituation effects or desensitization).
No. Good acts are acts that should be praised, not acts that happen to be. I said the relationship between ought/good/praise was analytical, i.e. semantic. You don't change that kind of relationship by rearranging atoms.
Replies from: DaFranker↑ comment by DaFranker · 2013-01-18T20:20:37.202Z · LW(p) · GW(p)
And what's the rule, the algorithm, then, for deciding which acts should be praised?
The only such algorithm I know of is by looking at their (expected) consequences, and checking whether the resulting possible-futures are more desirable for some set of human minds (preferably all of them) - which is a very complicated function that we don't yet have access to, and which we try to estimate using our intuitions (a stubbed-out sketch follows below).
Which seems, to me, isomorphic to praiseworthiness being an irrelevant intermediary step that just helps you form your intuitions, and points towards some form of something-close-to-what-I-would-call-"consequentialism" as the best method of judging Good and Bad, whether of past actions of oneself, or others, or of possible actions to take for oneself, or others.
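Something like this minimal sketch, in which every name (world_model, sample_outcome, desirability) is hypothetical and the hard part is deliberately stubbed out - it shows the shape of the algorithm, not the algorithm:

```python
def desirability(world_state):
    """Stand-in for the complicated human-preference function we can't write down yet."""
    raise NotImplementedError("this is the hard part")

def expected_desirability(action, world_model, n_samples=1000):
    """Estimate an action's value by averaging desirability over sampled outcomes."""
    outcomes = [world_model.sample_outcome(action) for _ in range(n_samples)]
    return sum(desirability(o) for o in outcomes) / n_samples

def best_action(actions, world_model):
    """Consequentialist choice: pick the action whose expected consequences score highest."""
    return max(actions, key=lambda a: expected_desirability(a, world_model))
```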
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-18T20:38:54.319Z · LW(p) · GW(p)
Moral acts and decisions are a special category of acts and decisions, and what makes them special is the way they conceptually relate to praise, blame, and obligation.
Which seems, to me, isomorphic to praiseworthiness being an irrelevant intermediary step that just helps you form your intuitions,
Where did I differ? I said there was a tautology-style relationship between Good and Praiseworthy, not a step-in-an-algorithm-style relationship.
and points towards some form of something-close-to-what-I-would-call-"consequentialism" as the best method of judging Good and Bad, whether of past actions of oneself, or others, or of possible actions to take for oneself, or others
But that wasn't what you were saying before. Before you were saying it was all about JGWeissman.
Replies from: DaFranker↑ comment by DaFranker · 2013-01-18T20:51:52.011Z · LW(p) · GW(p)
Where did I differ? I said there was a tautology-style relationship between Good and Praiseworthy, not a step-in-an-algorithm-style relationship.
Yes. There's a tautology-style relationship between Good and Praiseworthy. That's almost tautological. If it's good, it's "worthy of praise", because we want what's good.
Now that we agree, how do you determine, exactly, with detailed instructions I could feed into my computer, what is "praiseworthy"?
I notice that when I ask myself this, I return to consequentialism and my own intuitions as to what I would prefer the world to be like. When I replace "praiseworthy" with "good", I get the same output. Unfortunately, the output is rather incomplete and not fully transparent to me, so I can't implement it in a computer program yet.
But that wasn't what you were saying before. Before you were saying it was all about JGWeissman.
I might have let some of that bleed through from other subthreads.
Replies from: Peterdjones, fubarobfusco↑ comment by Peterdjones · 2013-01-18T21:04:18.693Z · LW(p) · GW(p)
Now that we agree, how do you determine, exactly, with detailed instructions I could feed into my computer, what is "praiseworthy"?
No one can do that, whatever theory they have. I don't see how it is relevant.
I notice that when I ask myself this, I return to consequentialism
Which isn't actually computable.
Replies from: None, DaFranker↑ comment by [deleted] · 2013-01-18T21:10:51.545Z · LW(p) · GW(p)
Which isn't actually computable.
Neither is half of math. Many differential equations are uncomputable, and yet they are very useful. Why should a moral theory be computable?
(and "maximize expected utility" can be approximated computably, like most of those uncomputable differential equations)
↑ comment by DaFranker · 2013-01-18T21:18:44.301Z · LW(p) · GW(p)
Which isn't actually computable.
I've never seen any proof of this. It's also rather easy to approximate to acceptable levels of certainty:
I've loaded a pistol, read a manual on pistol operation that I purchased in a big bookstore that lots of people recommend, made sure myself that the pistol was in working order according to what I learned in that manual, and now I'm pointing that pistol at a glass bottle according to the instructions in the manual, and I start pulling the trigger. I expect that soon I will have to use this pistol to defend the lives of many people.
I'm rather confident that it is, in the above scenario, instrumentally useful towards bringing about worldstates where I successfully protect lives to practice rather than not practice, since the result will depend on my skills. However, you'd call this "morally neutral", since there's no moral good being made by the shooting of glass bottles in itself, and it isn't exactly praiseworthy.
However, its expected consequence is that once I later decide to take an action to save lives, I will be more likely to succeed. Whether this practice is praiseworthy or not is irrelevant to me. It increases the chances of saving lives, therefore it is morally good, for me. This is according to a model whose accuracy can be evaluated, or at least estimated. And given the probability of the model's accuracy, there is a tractable probability of lives saved (toy arithmetic below).
I'm having a hard time seeing what else could be missing.
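(The toy arithmetic, with numbers invented purely for illustration - none of these figures come from anywhere:)

```python
# All figures are made up for the sake of the example.
p_model_ok = 0.9        # credence that "practice improves outcomes" is the right model
p_save_trained = 0.6    # chance of saving the bystanders, given practice
p_save_untrained = 0.3  # chance of saving them, given no practice
lives_at_stake = 5

expected_gain = p_model_ok * (p_save_trained - p_save_untrained) * lives_at_stake
print(round(expected_gain, 2))  # 1.35 expected lives saved by choosing to practice
```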
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-18T21:24:51.825Z · LW(p) · GW(p)
Which isn't actually computable.
I mean there is no runnable algorithm. I can't see how "approximations" could work, because of divergences. Any life you save could be the future killer of 10 people, one of whom is the future saviour of 100 people, one of whom is the future killer of 1000 people. Well, I do see how approximations could work: deontologically.
↑ comment by fubarobfusco · 2013-01-18T21:24:08.302Z · LW(p) · GW(p)
Yes. There's a tautology-style relationship between Good and Praiseworthy. That's almost tautological. If it's good, it's "worthy of praise", because we want what's good.
Doesn't that depend on whether praise actually accomplishes getting more of the good?
Praising someone is an action, just as giving someone chocolate or money is. It would be silly to say that dieting is "chocolateworthy", if chocolate breaks your diet.
↑ comment by Shmi (shminux) · 2013-01-18T20:25:17.199Z · LW(p) · GW(p)
However, they still hate murder and genocide, are still unhappy about their effects, etc. They just praise it, both vocally and internally and mentally.
How can you hate something yet praise it internally? I'm having trouble coming up with an example.
Replies from: DaFranker↑ comment by [deleted] · 2013-01-18T20:35:26.495Z · LW(p) · GW(p)
I don't see what you're getting at. I'll lay out my full position to see if that helps.
First of all, there are separate concepts for metaethics and normative ethics. They are a meta-level apart, and mixing them up is like telling me that 2+2=4 when I'm asking about whether 4 is an integer.
So, given those rigidly separated mental buckets, I claim, as a matter of metaethics, that moral theories solve the problem of what ought to be done. Then, as a practical concern, the only question interesting to me is "what should I do?", because it's the only one I can act on. I don't think this makes me an egoist, or in fact is any evidence at all about what I think ought to be done, because what ought to be done is a question for moral theories, not metaethics.
Then, on the level of normative ethics, i.e. looking from within a moral theory (which I've decided answers the question "what ought to be done"), I claim that I ought to act in such a way as to achieve the "best" outcome, and if outcomes are morally identical, then the oughtness of them is identical, and I don't care which is done. You can call this "consequentialism" if you like. Then, unpacking "best" a bit, we find all the good things like fun, happiness, freedom, life, etc.
Among the good things, we may or may not find punishing the unjust and rewarding the just. I suspect we do find it. I claim that this punishableness is not the same as the rightness that the actions of moral agents have, because it includes things like "he didn't know any better" and "can we really expect people to...", which I claim are not included in what makes an action right or wrong. This terminal punishableness thing is also mixed up with the instrumental concerns of incentives and game theory, which I claim are a separate problem to be solved once you've worked out what is terminally valuable.
So, anyways, this is all a long-winded way of saying that when deciding what to do, I hold myself to a much more demanding standard than I use when judging the actions of others.
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-18T20:48:55.714Z · LW(p) · GW(p)
What's wrong with sticking with "what ought to be done" as formulation?
I claim that I ought to act in such a way as to achieve the "best" outcome,
Meaning others shouldn't? Your use of the "I" formulation is making your theory unclear.
I claim that this punishableness is not the same as the rightness that the actions of moral agents have, because it includes things like "he didn't know any better" and "can we really expect people to...",
They seem different to you because you are a consequentialist. Consequentialist good and bad outcomes can't be directly translated into praiseworthiness and blameworthiness, because they are too hard to predict.
So, anyways, this is all a long-winded way of saying that when deciding what to do, I hold myself to a much more demanding standard than I use when judging the actions of others.
I don't see why. Do you think you are much better at making predictions?
↑ comment by fubarobfusco · 2013-01-18T17:47:18.923Z · LW(p) · GW(p)
What I want out of a moral theory is to know what I ought to do.
Knowledge without motivation may lend itself to akrasia. It would also be useful for a moral theory to motivate us to do what we ought to do.
↑ comment by [deleted] · 2013-01-18T16:50:39.423Z · LW(p) · GW(p)
That's not a flaw in consequentialism. It's a flaw in judging other people's morality.
Consequentialists (should) generally reject the idea that anyone but themselves has moral responsibility.
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-18T17:35:22.600Z · LW(p) · GW(p)
It's a flaw in judging other people's morality
Judging the moral worth of others' actions is something a moral theory should enable one to do. It's not something you can just give up on.
Consequentialists (should) generally reject the idea that anyone but themselves has moral responsibility.
So two consequentialists would decide that each of them has moral responsibility and the other doesn't? Does that make sense? Is it intended as a reductio ad absurdum of consequentialism, or as a bullet to be bitten?
Replies from: None↑ comment by [deleted] · 2013-01-18T18:14:56.874Z · LW(p) · GW(p)
Judging the moral worth of others' actions is something a moral theory should enable one to do.
What for? It doesn't help me achieve good things to know whether you are morally good, except to the extent that "you are morally good" makes useful predictions about your behaviour that I can use to achieve more good. And that's a question for epistemology, not morality.
So two consequentialists would decide that each of them has moral responsibility and the other doesn't? Does that make sense?
They would see it as a two-place concept instead of a one-place concept. Call them A and B. For A, A is morally responsible for everything that goes on in the world. Likewise for B. For A, the question "what is B morally responsible for" does not answer the question "what should A do", which is the only question A is interested in.
A would agree that for B, B is morally responsible for everything, but would comment that that's not very interesting (to A) as a moral question.
So another way of looking at it is that for this sort of consequentialist, morality is purely personal.
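(The one-place/two-place distinction can be sketched in a few lines - every name here is purely illustrative:)

```python
def morally_responsible(perspective: str, agent: str) -> bool:
    """Two-place reading: responsibility is indexed to whoever is asking.
    From A's standpoint, A answers for everything; B's responsibility
    doesn't bear on "what should A do?" and so doesn't arise for A."""
    return agent == perspective

assert morally_responsible("A", "A")      # A: "I am responsible for it all"
assert morally_responsible("B", "B")      # B says the same, of B
assert not morally_responsible("A", "B")  # and the two claims never collide
```

The apparent contradiction in the one-place question ("is B responsible?") dissolves once the hidden argument is written in.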
Replies from: DaFranker, Peterdjones↑ comment by DaFranker · 2013-01-18T18:23:55.578Z · LW(p) · GW(p)
By extension, however, in case this corollary was lost in inferential distance:
For A, "What should A do?" may include making moral evaluations of B's possible actions within A's model of the world and attempting to influence them, such that A-actions that affect the actions of B can become very important.
Thus, by instrumental utility, A often should make a model of B in order to influence B's actions on the world as much as possible, since this influence is one possible action A can take that influences A's own moral responsibility towards the world.
Replies from: None↑ comment by Peterdjones · 2013-01-18T19:22:19.576Z · LW(p) · GW(p)
What for? It doesn't help me achieve good things to know whether you are morally good, except to the extent that "you are morally good" makes useful predictions about your behaviour that I can use to achieve more good. And that's a question for epistemology, not morality.
Because then you apportion reward and punishment where they are deserved. That is itself a Good, called "justice".
"what should A do", which is the only question A is interested in.
I don't see how that follows from consequentialism or anything else.
So another way of looking at it is that for this sort of consequentialist, morality is purely personal.
Then it is limited.
Replies from: None↑ comment by [deleted] · 2013-01-18T20:47:59.717Z · LW(p) · GW(p)
Because then you apportion reward and punishment where they are deserved. That is itself a Good, called "justice".
I get it now. I think I ought to hold myself to a higher standard than I hold other people, because it would be ridiculous to judge everyone in the world for failing to try as hard as they can to improve it, and ridiculous to let myself off with anything less than that full effort. And I take it you don't see things this way.
I don't see how that follows from consequentialism or anything else.
It follows from the practical concern that A only gets to control the actions of A, so any question not in some way useful for determining A's actions isn't interesting to A.
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-18T21:01:01.173Z · LW(p) · GW(p)
I think I ought to hold myself to a higher standard than I hold other people, because it would be ridiculous to judge everyone in the world for failing to try as hard as they can to improve it, and ridiculous to let myself off with anything less than that full effort.
It doesn't follow from that that you have no interest in praise and blame.
It follows from the practical concern that A only gets to control the actions of A, so any question not in some way useful for determining A's actions isn't interesting to A.
Isn't A interested in the actions of B and C that impinge on A?
Replies from: None, DaFranker↑ comment by [deleted] · 2013-01-18T21:06:03.532Z · LW(p) · GW(p)
It doesn't follow from that that you have no interest in praise and blame.
Yes, and it doesn't follow that because I am interested in praise and blame, I must hold other people to the same standard I hold myself. I said right there in the passage you quoted that I do in fact hold other people to some standard; it's just not the same one I use for myself.
Isn't A interested in the actions of B and C that impinge on A?
Yes as a matter of epistemology and normative ethics, but not as a matter of metaethics.
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-18T21:18:35.354Z · LW(p) · GW(p)
Yes as a matter of epistemology and normative ethics, but not as a matter of metaethics.
Your metaethics treats everyone as acting but not acted on?
↑ comment by DaFranker · 2013-01-18T21:08:38.277Z · LW(p) · GW(p)
Isn't A interested in the actions of B and C that impinge on A?
A is interested in:
1) The state of the world. This is important information for deciding anything.
2) A's possible actions, and their consequences. "Their consequences" == expected future state of the world for each action.
"actions of B and C that impinge on A" is a subset of 1) and "giving praise and blame" is a subset of 2). "Influencing the actions of B and C" is also a subset of 2).
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-18T21:15:48.341Z · LW(p) · GW(p)
A is interested in:
1) The state of the world. This is important information for deciding anything.
2) A's possible actions, and their consequences. "Their consequences" == expected future state of the world for each action.
Or, briefly "The Union of A and not-A"
or, more briefly still:
"Everything".
↑ comment by A1987dM (army1987) · 2013-01-18T16:22:42.649Z · LW(p) · GW(p)
But some people take more actions that have Bad Consequences than others, don't they?
Replies from: DaFranker, MugaSofer↑ comment by DaFranker · 2013-01-18T17:08:17.149Z · LW(p) · GW(p)
Yes, but even that is subject to counter-arguments and further debate, so I think the point is in trying to find something that more appropriately describes exactly what we're looking for.
After all, proportionality and other factors have to be taken into account. If Einstein takes more actions with Good Consequences and fewer actions with Bad Consequences than John Q. Eggfart, I don't anticipate this to be solely because John Q. Eggfart is a Bad Person with a broken morality system. I suspect Mr. Eggfart's IQ of 75 to have something to do with it.
Replies from: bio_logical↑ comment by bio_logical · 2013-10-17T17:59:50.968Z · LW(p) · GW(p)
I wonder if 1,000 people upvoted this comment, in series with 1,000 people voting it down. I'd like to know 1/(# of reads) or 1/(# of votes). Can we use network theory to test whether people here conform to the first-mover theory? (i.e.: "If a post starts getting upvoted, it then continues to be upvoted, whereas if a post starts getting downvoted or ignored, it continues to get downvoted or ignored, or at least has a greater probability of being so.") A toy model of this is sketched below.
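(A toy model of that conjecture, in the Pólya-urn family - the dynamics are invented, purely to show the shape of the effect: each new voter upvotes with probability equal to the current upvote fraction:)

```python
import random

def simulate_score(first_vote_up, n_voters=1000):
    # One pseudo-count each way, plus the first real vote
    up, down = (2, 1) if first_vote_up else (1, 2)
    for _ in range(n_voters):
        if random.random() < up / (up + down):
            up += 1
        else:
            down += 1
    return up / (up + down)

# Average final upvote fraction over many simulated threads, by first vote:
print(sum(simulate_score(True) for _ in range(200)) / 200)   # ~2/3
print(sum(simulate_score(False) for _ in range(200)) / 200)  # ~1/3
```

Under those (strong) assumptions, the first vote really does move the long-run score, which is at least the shape of the first-mover effect being asked about.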
I suspect Mr. Eggfart's IQ of 75 to have something to do with it.
He also might be a sociopath with an IQ superior to Einstein's. He also might be a John von Neumann, (successfully?) arguing in favor of nuking Russia, because he thinks that Russia is evil (correct) and that Russia is full of scientists who are almost as smart as himself (maybe correct), and because it's logical to do so (possibly correct, but seemingly not, based on the outcome), or he might think that everyone is as logical as possible (incorrect), or he might not have empathy for those who don't take the opportunities they're given (who's to say if he's right?). In hindsight, I'm really glad the USA didn't nuke Russia. In hindsight, I'm very glad that Von Neumann wasn't killed in order to minimize his destructiveness, but that democracy managed to mitigate his (and Goldwater's) destructiveness. (Goldwater was the better candidate overall, on all subjects, but his willingness to use the bomb was a fatal, grotesque, and unacceptable flaw in that otherwise "better overall." Goldwater's attitude towards the bomb was similar to, and seemingly informed by, von Neumann.)
I do support punishing sociopaths legally, even if they didn't think it was wrong when they raped and murdered your wife. What the sociopath thinks doesn't diminish the harm they knowingly caused. The legal system should be a disincentive toward actual wrong. When the legal system operates properly, it is a blessing that allows the emergence of market-based civilization. The idea of a "right" is not necessarily a deontological philosophical claim, but a legal one.
As a consequentialist, I don't necessarily hate sociopaths. I understand why they exist, from an evolutionary perspective. ...But I might still kill one if I had to, in order to serve what I anticipated to be the optimal good. I might also kill one in retaliation, because they had taken something valuable from me (such as the life of a loved one), and I wished to make it clear to them that their choice to steal from me rightfully enraged me (vengeance, punishment).
While I don't think that (even righteous) punishment is the grandest motive, I also don't deny others their (rightful) desires for punishment. There is a "right" and a "wrong" external to outcomes, based on philosophy that is mutually-compatible with consequentialism. If we were all submissive slaves, there would be a lot of "peace," but I still wouldn't likely choose such an existence over a violent but possibly more free existence.
↑ comment by MugaSofer · 2013-01-19T14:53:20.983Z · LW(p) · GW(p)
If you mean that some people choose poorly or are simply unlucky, yes.
If you mean that some people are Evil and so take Evil actions, then ... well, yes, I suppose, psychopaths. But most Bad Consequences do not reflect some inherent deformity of the soul, which is all I'm saying.
Classifying people as Bad is not helpful. Classifying people as Dangerous ... is. My only objection is turning people into Evil Mutants - which the comment I originally replied to was doing. ("Bad Things are done by Bad People who deserve to be punished.")
Replies from: bio_logical↑ comment by bio_logical · 2013-10-17T18:07:50.092Z · LW(p) · GW(p)
If you mean that some people are Evil and so take Evil actions, then ... well, yes, I suppose, psychopaths. But most Bad Consequences do not reflect some inherent deformity of the soul, which is all I'm saying.
I'd prefer to leave "the soul" out of this.
How do you know that most bad consequences don't involve sociopaths or their influence? It seems unlikely to me that that's not the case.
Also, don't forget conformists who obey sociopaths. Franz Stangl said he felt "weak in the knees" when he was pushing gas chamber doors shut on a group of women and kids. ...But he did it anyway.
Wagner gleefully killed women and kids.
Yet, we also rightfully call Stangl an evil person, and rightfully punish him, even though he was "Just following orders." In hindsight, even his claims that the democide of over 6 million Jews and 10 million German dissidents and dissenters was solely for theft and without racist motivations don't make me want to punish him less.
Replies from: MugaSofer, MugaSofer↑ comment by MugaSofer · 2013-11-23T15:53:09.448Z · LW(p) · GW(p)
In before this is downvoted to the point where discussion is curtailed.
I'd prefer to leave "the soul" out of this.
And yet here you are arguing for Evil Mutants.
I'm aware many people who believe this don't literally think of it in terms of the soul - if only because they don't think about it all - but I think it's a good shorthand for the ideas involved.
How do you know that most bad consequences don't involve sociopaths or their influence?
Observing simple incompetence in the environment.
Franz Stangl [...] Wagner
I should probably note I'm not familiar with these individuals, although the names do ring a faint bell.
Franz Stangl said he felt "weak in the knees" when he was pushing gas chamber doors shut on a group of women and kids. ...But he did it anyway.
Seems like evidence for my previous statements. No?
Wagner gleefully killed women and kids.
These are Nazis, yes? I wouldn't be that surprised if some of them were "gleeful" even if they had literally no psychopaths among their ranks - unlikely from a purely statistical standpoint.
Yet, we also rightfully call Stangl an evil person, and rightfully punish him, even though he was "Just following orders."
While my contrarian tendencies are screaming at me to argue this was, in fact, completely unjust ... I can see some neat arguments for that ...
We punished Nazis who were "just obeying orders" - and now nobody can use that excuse. Seems like a pretty classic example of punishment setting an example for others. No "they're monsters and must suffer" required.
In hindsight, even his claims that the democide of over 6 million Jews and 10 million German dissidents and dissenters was solely for theft and without racist motivations, doesn't make me want to punish him less.
I'm probably more practiced at empathising with racists, and specifically Nazis - just based on your being drawn from our culture - but surely racist beliefs are a more sympathetic motivation than greed?
(At least, if we ignore the idea of bias possibly leading to racist beliefs that justify benefiting ourselves at their expense, which you are, right?)
Replies from: More_Right↑ comment by More_Right · 2014-04-24T08:56:30.502Z · LW(p) · GW(p)
There are a lot of people who really don't understand the structure of reality, or how prevalent and how destructive sociopaths (and the conformists that they influence) are.
In fact, there is a blind spot in most people's realities that's filled by their evolutionarily-determined blindness to sociopaths. This makes them easy prey for sociopaths, especially intelligent, extreme sociopaths (total sociopathy, lack of mirror neurons, total lack of empathy, as described by Robert Hare in "Without Conscience") with modern technology and a support network of other sociopaths.
In fact, virtually everyone who hasn't read Stanley Milgram's book about it, and put in a lot of thought about its implications is in this category. I'm not suggesting that you or anyone else in this conversation is "bad" or "ignorant," but just that you might not be referencing an accurate picture of political thought, political reality, political networks.
The world still doesn't have much of a problem with the "initiation of force" or "aggression." (Minus a minority of enlightened libertarian dissenters.) ...Especially not when it's labeled as "majoritarian government." ie: "Legitimized by a vote." However, a large and growing number of people who see reality accurately (small-L libertarians) consistently denounce the initiated use of force as grossly sub-optimal, immoral, and wrong. It is immoral because it causes suffering to innocent people.
Stangl could have recognized that the murder of women and children was "too wrong to tolerate." In fact, he did recognize this, by his comment that he felt "weak in the knees" while pushing women and children into the gas chamber. That he chose to follow "the path of compliance" "the path of obedience" and "the path of nonresistance" (all those prior paths are different ways of saying the same thing, with different emphasis on personal onus, and on the extent to which fear plays a defensible part in his decision-making).
The reason I still judge the Nazis (and their modern equivalents) harshly is that they faced significant opposition, but it was almost as wrong as they were. The Levellers innovated proper jury trials in the 1600s, and restored them by 1670, in the trial of William Penn. It wasn't as if Austria was without its "Golden Bull" either. Instead, they chose a mindless interpretation of "the will to power."
The rest of the world viewed Hitler as a raving madman. There were plenty of criticisms of Nazism in existence at the time of Hitler's rise to power. Adam Smith had written "The Wealth of Nations" over a century earlier. The Federalist and Anti-Federalists were right in incredible detail again, over a century earlier.
Talk about the prison industrial complex with anyone, and talk with someone who has family members imprisoned for a victimless crime offense. Talk with someone who knows Schaeffer Cox, (one of the many political prisoners in the USA). Most people will choose not to talk to these people (to remain ignorant) because knowledge imparts onus to act morally, and stop supporting immoral systems. To meet the Jews is to activate your mirror neurons, is to empathize with them, ...a dangerous thing to do when you're meeting them standing outside of a cattle car. Your statistical likelihood of being murdered by your own government, during peacetime, worldwide.
Replies from: MugaSofer, TheAncientGeek, soreff, hairyfigment, MugaSofer↑ comment by MugaSofer · 2014-04-25T21:35:39.284Z · LW(p) · GW(p)
I'm on a mobile device right now - I'll go over your arguments, links, and videos in more detail later, so here are my immediate responses, nothing more.
In fact, there is a blind spot in most people's realities that's filled by their evolutionarily-determined blindness to sociopaths.
Wait, why would evolution make us vulnerable to sociopaths? Wouldn't patching such a weakness be an evolutionary advantage?
This makes them easy prey for sociopaths, especially intelligent, extreme sociopaths (total sociopathy, lack of mirror neurons...
Wouldn't a total lack of mirror neurons make people much harder to predict, crippling social skills?
I'm not suggesting that you or anyone else in this conversation is "bad" or "ignorant," but just that you might not be referencing an accurate picture of political thought, political reality, political networks.
"Ignorant" is not, and should not be, a synonym for "bad". If you have valuable information for me, I'll own up to it.
The world still doesn't have much of a problem with the "initiation of force" or "aggression."
Those strike me as near-meaningless terms, with connotations chosen specifically so people will have a problem with them despite their vagueness.
That he chose to follow "the path of compliance" "the path of obedience" and "the path of nonresistance" (all those prior paths are different ways of saying the same thing, with different emphasis on personal onus, and on the extent to which fear plays a defensible part in his decision-making).
Did you accidentally a word there? I don't follow your point.
The reason I still judge the Nazis ... they chose a mindless interpretation of "the will to power." The rest of the world viewed Hitler as a raving madman. There were plenty of criticisms of Nazism in existence at the time of Hitler's rise to power.
And clearly, they all deliberately chose the suboptimal choice, in full knowledge of their mistake.
Your statistical likelihood of being murdered by your own government, during peacetime, worldwide.
You're joking, right?
Statistical likelihood of being murdered by your own government, during peacetime, worldwide.
i.e. not my statistical likelihood, i.e. nice try, but no-one is going to have a visceral fear reaction and skip past their well-practiced justification (or much reaction at all, unless you can do better than that skeevy-looking graph.)
Replies from: More_Right↑ comment by More_Right · 2014-04-26T09:00:13.013Z · LW(p) · GW(p)
i.e. not my statistical likelihood, i.e. nice try, but no-one is going to have a visceral fear reaction and skip past their well-practiced justification (or much reaction at all, unless you can do better than that skeevy-looking graph.)
I suggest asking yourself whether the math that created that graph was correctly calculated. A bias against badly illustrated truths may be pushing you toward the embrace of falsehood.
If sociopath-driven collectivism was easy for social systems to detect and neutralize, we probably wouldn't give so much of our wealth to it. Yet, social systems repeatedly and cyclically fail for this reason, just as the USA is now, once again, proceeding down this well-worn path (to the greatest extent allowed by the nation's many "law students" who become "licensed lawyers." What if all those law students had become STEM majors, and built better machines and technologies?) I dare say that that simple desire for an easier paycheck might be the cause of sociopathy on a grand scale. I have my own theories about this, but for a moment, never mind why.
If societies typically fall to over-parasitism (too many looters, too few producers), we should ask ourselves what part we're playing in that fall. If societies don't fall entirely to over-parasitism, then what forces ameliorate parasitism?
And, how would you know how likely you are to be killed by a system in transition? You may be right: maybe the graph doesn't take into account changes in the future that make societies less violent and more democratic. It just averages the past results over time.
But I think R. J. Rummel's graph makes a good point: we should look at the potential harm caused by near-existential (extreme) threats, and ask ourselves if we're not on the same course. Have we truly eliminated the variables of over-legislation, destruction or elimination of legal protections, and consolidation of political power? ...Because those things have killed a lot of people in the past, and where those things have been prevented, a lot of wealth and relative peace has been generated.
But sure, the graph doesn't mean anything if technology makes us smart enough to break free from past cycles. In that case, the warning didn't need to be sounded as loudly as Rummel has sounded it.
...And I don't care if the graph looks "skeevy." That's an ad-hominem attack that ignores the substance of the warning. I encourage you to familiarize yourself with his entire site. It contains a lot of valuable information. The more you rebel against the look and feel of the site, the more I encourage you to investigate it, and consider that you might be rebelling against the inconsequential and ignoring the substance.
Truth can come from a poorly-dressed source, and lies can (and often do) come in slick packages.
Replies from: TheAncientGeek, MugaSofer↑ comment by TheAncientGeek · 2014-04-26T12:23:55.796Z · LW(p) · GW(p)
Getting maths right is useless when youmhave got concpets wrong. Your graph throws Liberal democracies in with authoritarian and totalitarianism regimes. From which you derive that mugasofer is AA likely to be killed by Michael Higgins as he is by Pol Pot.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-04-26T19:54:17.929Z · LW(p) · GW(p)
You're making lots of typos these days; is there something wrong with your keyboard or something?
↑ comment by MugaSofer · 2014-05-04T13:39:10.594Z · LW(p) · GW(p)
There are a lot of people who really don't understand the structure of reality, or how prevalent and how destructive sociopaths (and the conformists that they influence) are.
You know, this raises an interesting question: what would actually motivate a clinical psychopath in a position of power? Well, self-interest, right? I can see how there might be a lot of environmental disasters, defective products, poor working conditions as a result ... probably also a certain amount of skullduggery would be related to this as well.
Of course, this is an example of society/economics leading a psychopath astray, rather than the other way around. Still, it might be worth pushing to have politicians etc. tested and found unfit if they're psychopathic.
In fact, there is a blind spot in most people's realities that's filled by their evolutionarily-determined blindness to sociopaths.
I remain deeply suspicious of this sentence.
In fact, virtually everyone who hasn't read Stanley Milgram's book about it, and put in a lot of thought about its implications is in this category [...] you might not be referencing an accurate picture of political thought, political reality, political networks.
This seems reasonable, actually. I'm unclear why I should believe you know better, but we are on LessWrong.
The world still doesn't have much of a problem with the "initiation of force" or "aggression." (Minus a minority of enlightened libertarian dissenters.) ...Especially not when it's labeled as "majoritarian government." ie: "Legitimized by a vote." However, a large and growing number of people who see reality accurately (small-L libertarians) consistently denounce the initiated use of force as grossly sub-optimal, immoral, and wrong. It is immoral because it causes suffering to innocent people.
I ... words fail me. I seriously cannot respond to this. Please, explain yourself, with actual reference to this supposed reality you perceive, and with the term "initiation of force" tabooed.
Talk about the prison industrial complex with anyone, and talk with someone who has family members imprisoned for a victimless crime offense.
And this is the result of ... psychopaths? Human psychological blindspots evolved in response to psychopaths?
Talk with someone who knows Schaeffer Cox, (one of the many political prisoners in the USA).
Well, that's ... legitimately disturbing. Of course, it may be inaccurate, or even accurate but justified ... still cause for concern.
Your statistical likelihood of being murdered by your own government, during peacetime, worldwide.
You know, my government could be taken down with a few months' terrorism, and has been. There are actual murderers in power here, from the, ahem, glorious revolution. I actually think someone who faced this sort of thing here might have a real chance of winning that fight, if they were smart.
This contributes to my vague liking of American-style maintenance-of-a-well-organized-militia gun ownership, despite the immediate downsides.
And, of course, no other government is operating such attacks in Ireland, to my knowledge. I think I have a lot more to fear from organized crime than organized law, and I have a lot more unpopular political opinions than money.
I suggest asking yourself whether the math that created that graph was correctly calculated. A bias against badly illustrated truths may be pushing you toward the embrace of falsehood.
The site appears to be explicitly talking about genocide etc. in third-world countries.
If sociopath-driven collectivism was easy for social systems to detect and neutralize, we probably wouldn't give so much of our wealth to it. Yet, social systems repeatedly and cyclically fail for this reason, just as the USA is now, once again, proceeding down this well-worn path [...] societies typically fall to over-parasitism (too many looters, too few producers), we should ask ourselves what part we're playing in that fall.
Citation very much needed, I'm afraid. You are skirting the edge of assuming your own conclusion, which suggests it's a large part of your worldview; am I right?
What if all those law students had become STEM majors, and built better machines and technologies?
I'm going to say "surprisingly little". Eh, it's worth a shot in at least a state-level trial.
If societies don't fall entirely to over-parasitism, then what forces ameliorate parasitism?
And, how would you know how likely you are to be killed by a system in transition? You may be right: maybe the graph doesn't take into account changes in the future that make societies less violent and more democratic. It just averages the past results over time.
Assuming "past" and "future" here are metaphorically referring to more/less advanced societies, absolutely.
But I think R. J. Rummel's graph makes a good point: we should look at the potential harm caused by near-existential (extreme) threats, and ask ourselves if we're not on the same course.
This doesn't seem likely to fall into even the same order of magnitude as X-risks. In fact, I think the main effect would be the possible impact on reducing existential threats.
Have we truly eliminated the variables of over-legislation, destruction or elimination of legal protections, and consolidation of political power? ...Because those things have killed a lot of people in the past, and where those things have been prevented, a lot of wealth and relative peace has been generated.
And you blame these on ... psychopaths?
Truth can come from a poorly-dressed source, and lies can (and often do) come in slick packages.
Hmm. Have you considered dressing better? Because those YouTube documentaries are borderline unwatchable, and I am right now only barely motivated enough to watch them because I would feel bad at potentially neglecting a source of info. (If they continue to consist of facts I already know and raw, unsupported declarations I will, in fact, stop watching them.)
↑ comment by soreff · 2014-04-26T19:39:25.446Z · LW(p) · GW(p)
Concern about sociopaths applies to both business and government:
http://thinkprogress.org/justice/2014/01/09/3140081/bridge-sociopathy/
One paper examining a sizable sample of business folk found that percentage of sociopaths in the corporate world is 3.5 times higher than in the general population. Another study of 346 white-collar workers found that the percentage of corporate sociopaths increased as you go up the corporate ladder. That’s consistent with the reasons why politicians tend to be sociopaths: corporate leaders have lots of power over others and arguably even less need for empathy and conscience than politicians.
↑ comment by hairyfigment · 2014-04-28T23:00:56.713Z · LW(p) · GW(p)
So, is this trolling? You cite the Milgram experiment, in which the authorities did not pretend to represent the government. The prevalence and importance of non-governmental authority in real life is one of the main objections to libertarianism, especially the version you seem to promote here (right-wing libertarianism as moral principle).
↑ comment by MugaSofer · 2014-05-06T20:25:00.019Z · LW(p) · GW(p)
Having reviewed your links:
Your first link (https://www.youtube.com/watch?v=MgGyvxqYSbE) both appears to be, and is, a fairly typical YouTube conspiracy-theory documentary that merely happens to focus on psychopaths. It was so bad I seriously considered giving up on reviewing your stuff. I strongly recommend that, whatever you do, you cease using this as your introductory point.
"The Psychology of Evil" was mildly interesting; although it didn't contain much in the way of new data for me, it contained much that is relatively obscure. I did notice, however, that he appears to be not only anthropomorphizing but demonizing formless things. Not only are most bad things accomplished by large social forces, most things period are. It is easier for a "freethinker" to do damage than good, although obviously, considering we are on LW, I consider this a relatively minor point.
I find the identification of "people who see reality accurately" with "small-l libertarians" extremely dubious, especially when it goes completely unsupported, as if this were a background feature of reality barely worth remarking on.
Prison industrial complex link is meh; this, on the other hand, is excellent, and I may use it myself.
Schaeffer Cox is a fraud, although I can't blame him for trying and I remain concerned about the general problem even if he is not an instance of it.
The chart remains utterly unrelated to anything you mentioned or seem particularly concerned about here.
↑ comment by Chrysophylax · 2013-01-30T17:41:31.154Z · LW(p) · GW(p)
A bad person is someone who does bad things.
If doing "bad" things (choose your own definition) makes you a Bad Person, then everyone who has ever acted immorally is a Bad Person. Personally, I have done quite a lot of immoral things (by my own standards), as has everyone else ever. Does this make me a Bad Person? I hope not.
You are making precisely the mistake that the Politics is the Mind-Killer sequence warns against - you are seeing actions you disagree with and deciding that the actors are inherently wicked. This is a combination of correspondence bias, or the fundamental attribution error (explaining actions in terms of enduring traits rather than situations), and assuming that any reasonable person would agree to whatever moral standard you pick. A person is moral if they desire to follow a moral standard, irrespective of whether anyone else agrees with that standard.
Replies from: Vaniver↑ comment by Vaniver · 2013-01-30T17:55:35.836Z · LW(p) · GW(p)
If a broken machine is a machine that doesn't work, does that mean that all machines are broken, because there was a time for each machine when it did not work?
More clearly: reading "someone who does bad things" as "someone who has ever done a bad thing" requires additional assumptions.
↑ comment by smk · 2013-10-14T05:02:36.415Z · LW(p) · GW(p)
how can anyone deserve anything?
They can't. The whole idea of "deserving" is... icky. I try not to use it in figuring out my own morals, although I do sometimes use the word "deserve" in casual speech/thought. When I'm trying to be more conscientious and less casual, I don't use it.
↑ comment by EngineerofScience · 2015-07-21T20:32:34.594Z · LW(p) · GW(p)
This article might answer that question: Diseased thinking: dissolving questions about disease
comment by pdf23ds · 2007-03-04T04:58:28.000Z · LW(p) · GW(p)
TGGP, I think we have to define "deserve" relative to social consensus--a person deserves something if we aren't outraged when they get it for one reason or another. (Most people define this based on the consensus of a subset of society--people who share certain values, for instance.) Differences in the concept of "deserve" are one of the fundamental differences (if not the primary difference) between conservatism and liberalism.
Replies from: JJ10DMAN, ericn↑ comment by JJ10DMAN · 2010-08-10T16:01:44.005Z · LW(p) · GW(p)
I agree strongly with everything in the above paragraph, especially the end. And so should you. Greens 4 life!
Replies from: rela, handoflixue↑ comment by handoflixue · 2011-05-21T00:06:46.926Z · LW(p) · GW(p)
Voted up due to political phrasings (and assumed effort goal of humor :))
↑ comment by ericn · 2010-12-26T05:05:49.849Z · LW(p) · GW(p)
Do we need a definition of "deserve"? Perhaps it does not correspond to anything in reality. I would certainly argue that it doesn't correspond to anything in politics.
For instance, should we have a council that doles out things people deserve? It just seems silly.
Politics is ideally a giant cost/benefit satisficing operation. Practically, it is an agglomeration of power plays. I don't see where "deserve" fits in.
Replies from: CWG
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-03-04T05:25:25.000Z · LW(p) · GW(p)
TGGP, if the mind were not embodied in the brain, it would be embodied in something else. You don't need neuroscience to see the problem with the naive conception of free will.
The reason I don't think idiots deserve to die is not because their genes played a role in making them idiots. Suppose it were not the genes. So what? The point is that being stupid is not the same as being malicious, or dishonest. It is simply being stupid, no more and no less. Drinking Sulfuric Acid Drink because you wishfully think it will cure your arthritis, is simply not on a moral par with deliberately burning out someone's eyes with hot pokers. No matter what you believe about the moral implications of determinism for sadistic torturers, in no fair universe would mere sloppy thinking be a capital crime. As it has always been, in this our real world.
Replies from: DSimon, Ender, bio_logical↑ comment by DSimon · 2010-09-10T20:55:12.864Z · LW(p) · GW(p)
In no fair universe would mere sloppy thinking be a capital crime.
What about when sloppy thinking leads a person to hurt other people, i.e. a driver who accidentally kills a pedestrian while distracted by a call they thoughtlessly answered in motion?
↑ comment by bio_logical · 2013-10-17T19:14:20.286Z · LW(p) · GW(p)
in no fair universe would mere sloppy thinking be a capital crime. As it has always been, in this our real world.
And, in no fair universe would the results of sloppy thinking be used as an excuse to create coercive policies that victimize thousands of sloppy thinkers for every sloppy thinker that is (allegedly) benefited by them. Yet, because even the philosophers and rationality blog-posters of our universe are sloppy thinkers (in relation to artilects with 2000 IQs), some of us continue to accept the idea that the one-sided making of coercive laws (by self-interested, under-educated sociopaths) constitutes a legitimate attempt at a political solution. Nothing could be further from the truth.
comment by David_Brayton · 2007-03-04T16:03:39.000Z · LW(p) · GW(p)
I am not normally a nitpicker (well, maybe I am) but this jumped out at me: an example of a fact--"whether Earthly life arose by natural selection." Because natural selection is one of the cornerstones of modern biology, I thought I'd take a few seconds to enter this comment.
Natural selection is a biological process by which favorable traits that can be genetically inherited become more common in successive generations of a population of reproducing organisms, and unfavorable traits that can be inherited become less common. The driving force is the need to survive. So, for example, cheetahs that can run faster because of inheritable traits catch more food and tend to survive to pass on those traits.
So, natural selection doesn't say anything about how life arose. As a factual matter, the example is a non sequitur.
You might have been thinking of "common descent". From Wikipedia: "A group of organisms is said to have common descent if they have a common ancestor. In biology, the theory of universal common descent proposes that all organisms on Earth are descended from a common ancestor or ancestral gene pool."
But, common descent doesn't say how life arose. It says that all life on Earth can be traced back to one initial set of genes/DNA. How that initial pool of chemicals became what we call life is not addressed by common descent.
Replies from: AndyCossyleon↑ comment by AndyCossyleon · 2010-11-03T22:00:59.809Z · LW(p) · GW(p)
"whether Earthly life arose by natural selection" was a bad example of Eliezer's.
Natural selection does not account for how life arose, and dubitably accounts for how even the diversity of life arose*. Natural selection accounts, and only accounts, for how specified (esp. complex & specified) biological artifacts arose and are maintained.
An infinitely better example would have been "whether terrestrial life shares a common ancestor," because that is a demonstrable fact.
*This has probably mostly to do with plate tectonics carting around life forms from place to place and with genetic drift.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-03-04T18:29:23.000Z · LW(p) · GW(p)
Sorry, Brayton. I do know better; it was simply an accident of phrasing. I hadn't meant to imply that abiogenesis itself occurred by selective processes - "arose" was meant to refer to life's ascent rather than its sparking.
Though, in my opinion, the very first replicator (or chemical catalytic hypercycle) should not really count as "life", because it merely happens to have the accidental property of self-replication and was not selectively optimized to this function. Thus, it properly belongs to the regime of accidental events rather than the regime of (natural) optimization.
comment by Alex3 · 2007-03-05T10:27:14.000Z · LW(p) · GW(p)
The problem here is bias to one's own biases, I think. After all, we're all stupid some of the time, and realising this is surely a core component of the Overcoming Bias project. Robin Hanson may not think he'd ever be stupid enough to walk into the Banned Shop, but we all tend to assume we're the rational one.
You also need to consider the real-world conditions of your policy. Yes, this might be a good idea in its Platonic ideal form, but in practice, that actually doesn't tell us very much. As an argument against "regulation", I think, with a confidence value of 80, that it's worse than useless.
Why? In practice, you're not going to have "Banned Shops" with big signs on them. If enough people want to buy the banned products, and we know they do want them because their manufacturers are profitable, the rest of the retail trade will instantly start lobbying for the right to sell them, maybe on a Banned Shelf next to the eggs. That's an unrealistic example, but then it's an unrealistic proposal.
What's more likely is a case of Pareto inefficiency - if you relax, say, medicines control on the grounds that it's a step towards the ideal, the growth in ineffective, dangerous, or resistance-causing quackery is probably going to be a significant disbenefit.
Replies from: cypher197↑ comment by cypher197 · 2013-03-16T23:48:00.914Z · LW(p) · GW(p)
I, for one, imagine that I could easily walk into the Banned Shop, given the right circumstances. All it takes is one slip-up - fatigue, drunkenness, or woozy medication would be sufficient - to lead to permanent death.
With that in mind, I don't think we should be planting more minefields than this reality currently has, on purpose. I like the idea of making things idiot-proof, not because I think idiots are the best thing ever, but because we're all idiots at least some of the time.
Replies from: Nornagest↑ comment by Nornagest · 2013-03-17T01:19:55.714Z · LW(p) · GW(p)
Certain types of content labeling might work a lot like Hanson's Banned Shop, minus the trivial inconvenience of going to a different shop: the more obvious and dire the label, the closer the approximation. Cigarettes are probably the most advanced example I can think of.
Now, cigarettes have also been extensively regulated in other ways, so we can't infer from this too well, but I think we can tentatively describe the results as mixed: it's widely understood that cigarettes stand a good chance of killing you, and smoking rates have indeed gone down since labeling laws went into effect, but it's still common. Whether or not we count this as a win probably depends on whether, and how much, we believe smokers' reasons for smoking -- or dismiss them as the dribble of a hijacked habit-formation system.
comment by HalFinney · 2007-03-05T18:44:31.000Z · LW(p) · GW(p)
Alex raises an interesting point: do most of us in fact assume that we would never walk into a Banned Shop? I don't necessarily assume that. I could envision going there for a medical drug which was widely available in Europe, but not yet approved by the U.S. FDA, for example. Or how about drugs that are supposed to only be available by prescription, might Banned Shops provide them to anyone who will pay? I might well choose to skip the time and money of a doctor visit to get a drug I've taken before without problems (accepting the risk that unknown to me, some subtle medical condition has arisen that now makes the drug unsafe, and a doctor would have caught it). Or for that matter, what about recreational drugs? If Banned Shops sold marijuana to anyone with a 100 IQ, I'm sure there are many list members who would partake.
comment by Alex3 · 2007-03-06T11:41:39.000Z · LW(p) · GW(p)
It's a similar argument to my proposal of Rational Airways, an airline that asks you to sign a release when buying a ticket to the effect that you realise how tiny the risk of a terrorist attack is, and therefore are willing to travel with Rational, who do not apply any annoying security procedures.
Replies from: Jiro↑ comment by Jiro · 2015-04-30T17:45:24.788Z · LW(p) · GW(p)
(Responding to old post)
This has another problem that other people haven't mentioned so far: it's not really possible to trace a terrorist attack to a specific cause such as lack of a particular security procedure. This means that Rational Airways will cut out their annoying security procedures, but the release they will make you sign will release them from liability to all terrorist attacks, not just to terrorist attacks related to them cutting down those security procedures. That's a bad deal for the consumer--the consumer wants to avoid intrusive searches, finds an airline which lets them avoid the searches by signing a release, but the release also lets the airline hire known serial killers as stewardesses as well as not search the passengers, and you can't sue them for it because the release is all-encompassing and is not just limited to terrorism that would have been caught by searches.
Furthermore, all the other airlines then see how Rational Airways works and decide to improve on it. They get together and decide that all passengers must either submit to being stripped fully naked or sign a release absolving the airline of responsibility for terrorists. The passengers, of course, sign the releases, and the result is that the airlines never have to worry about hiring serial killers or any other form of negligence either. (Because not screening the stewardesses for serial killers saves them money, any airline that decides not to do this cannot compete on price.)
Later, some smart airlines decide they don't actually need the excuse and just say "there's an unavoidable base rate of terrorism and we don't want to get sued for that" and make everyone, period, sign a release acknowledging that before getting on the plane (and therefore absolving the airline of all responsibility for terrorism whether it is part of the base rate or not.)
Even later, another airline decides to just make its customers promise not to sue them for anything at all (whether terrorism, mechanical failure, or other) before getting on the plane.
Similar things happen in real life, like insurance companies that won't pay if you have a preexisting condition (regardless of whether the preexisting condition is related to the condition you want them to pay for).
Replies from: Jiro↑ comment by Jiro · 2016-08-03T19:48:01.971Z · LW(p) · GW(p)
In fact, let me add a comment to this. Someone may be willing to assume some risk but not a higher level of risk. But there's no way to say "I'm willing to accept a 0.5% chance of something bad but not a 5% chance" by signing a disclaimer -- the effect of the disclaimer is that when something bad happens, you can't sue, which is an all-or-nothing thing. And a disaster that results from a 0.5% chance looks pretty much like a disaster that results from a 5% chance, so you can't disclaim only one such type of disaster.
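A minimal sketch of why the all-or-nothing waiver can't price risk, with made-up numbers (the harm figure, the ticket discount, and both risk levels are purely illustrative):

```python
# Illustrative numbers only: harm, discount, and risk levels are invented.
HARM = 10_000_000   # dollar-equivalent cost of the disaster to the passenger
DISCOUNT = 50       # ticket discount offered in exchange for signing

for p in (0.005, 0.05):  # a 0.5% versus a 5% chance of "something bad"
    expected_loss = p * HARM
    print(f"risk {p:.1%}: expected loss ${expected_loss:,.0f}, "
          f"net value of signing ${DISCOUNT - expected_loss:,.0f}")

# The waiver itself is identical in both cases: after a disaster you
# cannot show which risk level produced it, so you cannot sign away
# only the 0.5% world while keeping your right to sue in the 5% world.
```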
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-03-08T02:26:20.000Z · LW(p) · GW(p)
Alex, a possible problem is that Rational would then attract all the terrorists who would otherwise have attacked different airlines.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-03-08T02:27:32.000Z · LW(p) · GW(p)
PS: And, the risk might not be tiny if you took off all the safety precautions. But, yes, you could dispense with quite a few costly, pointless, ostentatious displays of effort without changing the security risk in any significant sense.
comment by james_Wilson · 2007-03-08T02:41:11.000Z · LW(p) · GW(p)
One little thing, Mr. Yudkowsky, a bit off subject. But then again, capital punishment was a bit off subject. No, the convict cannot learn from his mistakes when we kill him. The fact is, he never learned from his mistakes, and that is why we are killing him. He likes the way he is, and wants more of it. Not all killers, and not even most killers. But then, few killers are executed.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-03-08T03:05:38.000Z · LW(p) · GW(p)
James, my comment on drawing the moral line at capital punishment was addressed to the universe in general. Judicial executions count for a very small proportion of all death penalties - for example, the death penalty that you get for just being alive for longer than a century or so.
Replies from: mat33↑ comment by mat33 · 2011-10-04T08:54:19.899Z · LW(p) · GW(p)
"...the death penalty that you get for just being alive for longer than a century or so."
The "ethics of gods" most probably is the ethics of evolution. "Good" (in this particular sence) Universe have to be "bad" enough to allow the evolution of live, mind and [probabbly] technology. The shaw is natural selection - and the shaw must go on. Even as it includes aforementioned death penalty...
comment by Robin_Powell · 2007-04-23T22:48:36.000Z · LW(p) · GW(p)
"The experimental evidence for a purely genetic component of 0.6-0.8 is overwhelming"
Erm. 0.6-0.8 what?
-Robin
Replies from: tut, Celer
comment by Michael_Bishop · 2007-12-14T05:15:00.000Z · LW(p) · GW(p)
I realize it has little to do with the main argument of the post, but I also have issues with Eliezer's claim:
"The experimental evidence for a purely genetic component of 0.6-0.8 is overwhelming..."
Genes matter a lot. But there are a number of problems with the calculation you allude to. See Richard Nisbett's work.
Replies from: kremlin
comment by Jamesofengland · 2008-06-27T08:20:00.000Z · LW(p) · GW(p)
"Yes, sulfuric acid is a horrible painful death, and no, that mother of 5 children didn't deserve it, but we're going to keep the shops open anyway because we did this cost-benefit calculation." Can you imagine a politician saying that? Neither can I.
--60 Minutes (5/12/96) Lesley Stahl on U.S. sanctions against Iraq: We have heard that a half million children have died. I mean, that's more children than died in Hiroshima. And, you know, is the price worth it?
Secretary of State Madeleine Albright: I think this is a very hard choice, but the price--we think the price is worth it.
She later expressed regret for it, after taking an awful lot of flak at the time, but this does sometimes happen.
Replies from: JDM↑ comment by JDM · 2013-06-04T23:33:28.642Z · LW(p) · GW(p)
I think your point that she took a lot of flak for it is evidence for the original point. The only other reasonable responses to that could have been changing her mind on the spot, or disputing the data, and neither of those responses would have brought similar backlash on her. Conceding weak points to your arguments in politics is often looked upon as a weakness when it shouldn't be.
comment by vroman2 · 2008-12-26T03:45:07.000Z · LW(p) · GW(p)
It's unfair to caricature libertarians as ultra-social-darwinists saying "stupid people who accidentally kill themselves DESERVED it". If that quote was ever literally uttered, I would tend to think it was out of exasperation at the opposing viewpoint that government has a paramount responsibility to save its citizens from themselves, to the point of ludicrous pandering.
Replies from: caiuscamargarus↑ comment by caiuscamargarus · 2010-05-02T21:26:48.263Z · LW(p) · GW(p)
"Everyone gets what they deserve" is the unironic (and secular) motto of a close family friend who is wealthy in Brazil, one of the countries with the greatest levels of economic inequality in the world. I have heard the sentiment echoed widely among the upper and upper middle class. Maybe it's not as extreme as that, but it is a clear expression of the idea that unfortunate people deserve their misfortune to the point that those who have the resources to help them should not bother. This sentiment also characterizes Objectivism, which is commonly (though not always) associated with libertarianism.
Replies from: cupholder, SRStarin↑ comment by cupholder · 2010-05-02T21:39:06.890Z · LW(p) · GW(p)
Sounds like our good friend the just-world fallacy.
↑ comment by SRStarin · 2011-02-02T01:49:21.647Z · LW(p) · GW(p)
You misunderstand Rand's Objectivism. It's not that people who bad-luck into a bad situation deserve that situation. Nor do people who good-luck into a good situation deserve that reward. You only deserve what you work for. That is Objectivism, in a nutshell. If I make myself a useful person, I don't owe my usefulness to anyone, no matter how desperate their need. That may look like you're saying the desperate deserve their circumstances, but that is just the sort of fallacy Eliezer was writing about in the OP.
Where libertarian political theory relates to Objectivism is in the way the government often oversteps its bounds in expecting successful people to do extra work to help others out. Many libertarians are quite charitable--they just don't want the government forcing them to be so.
Replies from: shokwave↑ comment by shokwave · 2011-02-02T02:41:14.368Z · LW(p) · GW(p)
You misunderstand Rand's Objectivism. It's not that people who bad-luck into a bad situation deserve that situation. Nor do people who good-luck into a good situation deserve that reward. You only deserve what you work for. That is Objectivism, in a nutshell.
You only deserve what you work for -- do you get what you deserve? If you don't, then what purpose does the word "deserve" serve? If you do get what you deserve, how come the world looks like it's full of people who work for something, deserve it, and don't get it?
Replies from: SRStarin↑ comment by SRStarin · 2011-02-02T14:03:54.522Z · LW(p) · GW(p)
I'm only trying to correct the comment's incorrect assertions about objectivism and libertarianism. To address your comment, I'll start by pointing out Objectivism is a system of ethics, a set of rules for deciding how to treat other people and their stuff. It's not a religion, so it can't answer questions like "Why do some people who work hard and live right have bad luck?"
So, I will assume you are saying that people who work hard in our society seem to you to systematically fail to get what they work for. To clarify my comment, objectivism says you only deserve to get what you work for from other people. That is, you don't in any way deserve to receive from others what they didn't already agree to pay you in exchange for your work.
But, some people can't find anyone to pay them to work. Some can't work at all. Some can sell their work, but can't get enough to make a living. Because of the size and complexity of our society, there are huge numbers of people who have these problems. Sometimes it's their fault--maybe they goofed off in high school or college--but often it's not. If we were cavemen, we'd kick them out of the cave and let them starve, but we're not. We have multiple safety mechanisms, also because of the size and complexity of our society, through neighbors, schools, churches, and local, state and national governments, that help most people through hard times. The fact that I'm OK with governments being in that sentence is a major reason I can't call myself a strict Objectivist, but I'm still more a libertarian than anything else, politically. I think the ideal is that no one should fall through our safety nets, but there will always be people who do, just like the mother of five in the OP.
And when everyone is having a harder time than usual, more people will fall through the safety nets.
And if your problem is with whole nations of people who seem to work hard for very little, well, I probably agree with you, and our beef is with the history of colonialism.
Replies from: jbay↑ comment by jbay · 2013-03-15T19:42:41.659Z · LW(p) · GW(p)
"To clarify my comment, objectivism says you only deserve to get what you work for from other people. That is, you don't in any way deserve to receive from others what they didn't already agree to pay you in exchange for your work."
Although it might work as a system of ethics (or not, depending on your ethics), this definitely doesn't function as a system of economics. First of all, it makes the question of wealth creation a chicken-and-egg problem: If every individual A only deserves to receive what individual B agrees to pay them for work X, how did individual B obtain the wealth to pay A in the first place?
The answer is probably that you can also work for yourself, creating wealth that did not exist without anyone paying you. So your equation, as you've expressed it, does not quite balance. You're missing a term.
Wealth creation is very much a physical thing, which makes it hard to tie to an abstract system of ethics. The wealth created by work X is the value of X; whether it's the food grown from the earth, or the watch that has been assembled from precisely cut steel, glass, and silicon. That is the wealth that is added to the pool by labour and ingenuity, regardless of how it gets distributed or who deserves to get paid for it. And that wealth remains in the system, until the watch breaks or the food spoils (or gets eaten; it's harder to calculate the value of consumed food). It might lose its value quickly, or it might remain a treasure for centuries after the death of every individual involved in the creation of that wealth, like a work of art. It might also be destroyed by random chance well before its predicted value has been exploited.
Who deserves to benefit from the wealth that was created by the work of, and paid for by, people who have been dead for generations? The question of who deserves to benefit from the labour X, and how much, becomes very tricky when the real world is taken into account...
One might argue that that is what Wills are for, but a Will is usually a transfer of wealth in exchange for no work at all. Does an individual morally deserve their inheritance, even if they didn't work at all for it?
It also gets tricky when the nature of humans as real people, not abstract entities, is taken into account. People are born helpless, have finite lifespans, and their lifespans are in some way a function of their material possessions. A child is not physically capable of executing much labour, and will die without access to food and water. If children are treated as individuals, then no child deserves to live, because no child can perform the work to pay for their upbringing -- unless they are signed into a loan, but this would need to be done before they have the decision-making capacity to enter a contract.
But the mortality of people is still an issue. A human cannot physically survive zero wealth for more than a few days. So a human on the edge of poverty cannot realistically negotiate a contract either, because the party that offers them pay has infinite bargaining power. One might argue that they don't need bargaining power if there is competition between multiple individuals offering contracts, which will drive the contract toward something reasonable. But again that abstraction ignores reality -- this individual will die after a few days of no food, and even the process of competitive bidding for contracts takes time.
In this case, a person with little wealth will do work X in exchange for very little pay, much less than the value of X, and in practice just enough to keep them alive enough to continue to do X the following day. But simply because that is what they agreed to receive (due to their inability to reject the deal), does that mean that is what they morally deserve to receive?
Finally, some goods are just too difficult (computationally) to manage as contracts between individuals. The value of the resource might not even be presently known by science, although it exists (for example, the economic value of an intact ecosystem). The trespasses and exchanges might be so frequent and poorly documented that the consumption of the resource cannot be managed by legal contracts between owners and licensees (for example, the air we breathe).
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-28T05:36:53.116Z · LW(p) · GW(p)
Turns out this has a name: http://en.wikipedia.org/wiki/Just-world_phenomenon
comment by PhilGoetz · 2010-02-18T16:56:28.067Z · LW(p) · GW(p)
I recently spoke with someone who was in favor of legalizing all drugs, who would not admit that criminalizing something reduces the frequency at which people do it.
Replies from: mattnewport, AndyCossyleon↑ comment by mattnewport · 2010-02-18T18:42:35.396Z · LW(p) · GW(p)
Was that actually his claim or was he saying that it doesn't necessarily reduce the frequency at which people do it? Clearly the frequency of drug use has gone up since they were made illegal. Now perhaps it would have gone up faster if drug use had not been made illegal but that's rather hard to demonstrate. It's at least plausible that some of the popularity of drugs stems from their illegality as it makes them a more effective symbol of rebellion against authority for teenagers seeking to signal rebelliousness.
Claiming that criminalizing can't possibly reduce the frequency at which people do something would be a pretty ridiculous claim. Claiming that it hasn't in fact done so for drugs is quite defensible.
Replies from: AlexSchell↑ comment by AlexSchell · 2012-09-25T23:20:52.487Z · LW(p) · GW(p)
In the real world, PhilGoetz's interlocutor was almost certainly not making the sophisticated point that in some scenarios making X illegal makes it more desirable in a way that outweighs the (perhaps low) extra costs of doing X. If the person had been making this point, it would be very hard to mistake them for the kind of person PhilGoetz describes.
↑ comment by AndyCossyleon · 2010-11-03T22:08:57.839Z · LW(p) · GW(p)
Portugal, anyone? There is a point when arguments need to be abandoned and experimental results embraced. The decriminalization of drugs in Portugal has seen a scant increase in drug use. QED
The same goes for policies like Don't Ask, Don't Tell. Many countries around the world have run the experiment of letting gays serve openly and there have been no ill effects.
Abandon rationalization, embrace reality.
Replies from: AlexSchell, None↑ comment by AlexSchell · 2012-09-25T23:21:32.796Z · LW(p) · GW(p)
The decriminalization of drugs in Portugal has seen a scant increase in drug use. QED
So you think an increase in drug use following decriminalization supports your view? And you were upvoted?
Replies from: gwern, thomblake, AndyCossyleon↑ comment by gwern · 2012-09-25T23:57:17.419Z · LW(p) · GW(p)
The claim of sensible consequentialist (as opposed to moralizing) advocates of drug control who favor the War on Drugs is that the War on Drugs, however disastrous, expensive, destructive of liberties, and perverting of justice (to whatever degree they will accept such claims -- can't make an omelette without breaking eggs, etc.), is a lesser evil than the consequences of unbridled drug use. This claim is most obviously falsified by a net decrease in drug use, yes, but it is also falsified by a small increase which is not obviously worse than the War on Drugs, since now the anti-War person can use the same argument the pro-War person was using: legalization is the lesser of two evils.
The benefits and small costs in Portugal are, at least at face value, not worse than a War. Hence, the second branch goes through: the predicted magnitude of consequences did not materialize.
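To make the two-branch structure of this argument concrete, here is a toy comparison with invented units of harm (none of these figures are real estimates; only the shape of the lesser-evil comparison matters):

```python
# Invented units of harm; only the comparison's structure is the point.
war_on_drugs_costs = {"enforcement": 40, "lost liberties": 25,
                      "black-market violence": 30}
use_harm_under_prohibition = 20
use_harm_after_legalization = 24   # the "small increase" branch

total_with_war = sum(war_on_drugs_costs.values()) + use_harm_under_prohibition
total_without_war = use_harm_after_legalization

print(total_with_war, total_without_war)  # 115 vs 24
# Even granting a modest rise in use, legalization comes out as the
# lesser evil once the War's own costs are put on the same scale.
```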
Replies from: AlexSchell↑ comment by AlexSchell · 2012-09-27T14:47:37.124Z · LW(p) · GW(p)
I agree completely.
Note that PhilGoetz, following the subject of the thread, pointed out a good consequence of drug control (that is, good on its own terms) that an opponent of drug control refused to acknowledge. AndyCossyleon apparently thought that the Portugal example is a counterpoint to what PhilGoetz said, which it isn't (though as you point out it is evidence against some views held by drug control advocates). In retrospect, I should have said "rebuts PhilGoetz's point" instead of "supports your view" in the grandparent.
↑ comment by AndyCossyleon · 2012-09-27T22:19:12.660Z · LW(p) · GW(p)
AlexSchell, "scant" is essentially a negative, much like "scarce(ly)" or "hardly" or "negligible/y". Rewriting: "The decriminalization of drugs in Portugal has scarcely seen an increase in drug use." I'd argue that these sentences mean the same thing, and that together, they mean something different from "The decriminalization ... has seen a small increase ..." which is what you seem to have interpreted my statement as, though not completely illegitimately.
Replies from: Solarian↑ comment by Solarian · 2012-09-28T00:20:00.047Z · LW(p) · GW(p)
I would still read that as an increase. "Scant," "scarcely," etc., all mean "an amount so small it is negligible." But that's still an increase. 1 + 99^99 isn't 99^99. I understand what the argument concerning decriminalization is trying to say, but strictly speaking, that is an increase in drug use.
↑ comment by [deleted] · 2012-09-26T00:20:08.597Z · LW(p) · GW(p)
There is something fishy about the words "legalize" and "decriminalize." Buying, selling, making and consuming wine are legal activities in Portugal. Not marijuana.
comment by JJ10DMAN · 2010-08-10T21:56:17.111Z · LW(p) · GW(p)
fixed:
Real tough-mindedness is saying, "Yes, sulfuric acid is a horrible painful death, but it ought to have happened to her because a world without consequence - without cause and effect - is meaningless to either idealize or pursue... and as far as we can peer into a hypothetical, objective, pragmatic view of what ought to be, she totally deserved it."
Replies from: ata↑ comment by ata · 2010-08-10T22:07:54.913Z · LW(p) · GW(p)
"World where stupid, gullible, or desperate people are punished for bad decisions by death" versus "world without consequence, without cause and effect" is a pretty huge false dilemma.
Edit: What is an "objective, pragmatic view of what ought to be", in your view? Specifically, what makes it objective, and what is the "pragmatic" criterion for determining what people deserve?
comment by ErnstMuller · 2011-04-15T19:57:40.367Z · LW(p) · GW(p)
The point is not that goods are banned only because of the hazards to the people who buy them; they are banned because of the hazards to everyone else as well. Sulphuric acid, for example, is easily usable as a weapon, especially in concentrated form. (It grows very hot if it touches water. And it is very acidic. So, with a simple acid-proof squirt gun, one can do serious damage.)
And that's not really all. Suppose I could go into such a shop, prove that I'm sufficiently intelligent to handle dangerous stuff without being a danger to myself, and buy a) a PCR machine, b) a flu virus genome sequence, c) an HIV genome sequence, d) some assorted chemicals, e) some literature about virology, f) lung tissue cell cultures, and g) some pigs/monkeys to test on
and wipe out Japan's population just for the sake of it? (Contagious like flu and deadly as AIDS. It would take some months or even years to clone it, but it would be manageable.)
Well, Japan wouldn't actually be a sensible target for that. Too much risk of travelers spreading the virus worldwide. Choose your own isolated country to test.
I sleep better in my bed each night because I know it is not that easy to get really dangerous stuff in shops.
Replies from: guineapig, JoshuaZ
comment by buybuydandavis · 2011-09-26T09:26:41.659Z · LW(p) · GW(p)
Real tough-mindedness is saying, "Yes, sulfuric acid is a horrible painful death, and no, that mother of 5 children didn't deserve it, but we're going to keep the shops open anyway because we did this cost-benefit calculation." Can you imagine a politician saying that? Neither can I.
I can imagine it, but I can't say that I can remember it in a similar case. The "if it saves just one life..." arguments have always struck me as idiotic, but apparently there is a large market for them. Is it really the case that so many people think that way? If so, we're screwed.
Identifying and acknowledging tradeoffs is step one of intellectual honesty. It's fairly rare. The more telling moment in an argument is when people simply ignore the costs after they are pointed out, and refuse to address the tradeoff.
Replies from: wedrifid↑ comment by wedrifid · 2011-09-26T09:32:58.229Z · LW(p) · GW(p)
Alicorn already told you about how to do quotations.
comment by taelor · 2011-09-30T09:39:44.198Z · LW(p) · GW(p)
Real tough-mindedness is saying, "Yes, sulfuric acid is a horrible painful death, and no, that mother of 5 children didn't deserve it, but we're going to keep the shops open anyway because we did this cost-benefit calculation."
Interestingly, I independently came to a similar conclusion regarding drug legalization a few days ago, which I expressed during a class discussion on the topic. Out of about forty people in the class, one person other than me seemed to respond positively to this; everyone else (including people who were in favor of legalization) seemed to ignore it.
comment by mat33 · 2011-10-04T09:35:10.488Z · LW(p) · GW(p)
"But there is no reason for complex actions with many consequences to exhibit this onesidedness property. Why do people seem to want their policy debates to be one-sided?"
We do like to vote, you know. We do like to see other people vote. We do expect to see some kind of propaganda, some kind of pitch to cast our votes in some certain way. We tend to feel fooled when we don't see what we expect to see in the right place. No, this isn't reserved exclusively for political issues.
"I don't think that when someone makes a stupid choice and dies, this is a cause for celebration. I count it as a tragedy."
These tragedies are the way of evolution -- the greatest cost of evolution, probably. And, yes, any sentient being would like to take the progress of its species into its own hands, paws, tentacles, whatever. And, no, we aren't really "there". We are very, very close. But not there, yet.
comment by A1987dM (army1987) · 2011-12-27T18:02:17.580Z · LW(p) · GW(p)
I don't think that when someone makes a stupid choice and dies, this is a cause for celebration.
Replies from: thomblake
comment by CornellEngr2008 · 2012-01-02T23:34:31.685Z · LW(p) · GW(p)
I was just making a simple factual observation. Why did some people think it was an argument in favor of regulation?
I've noticed that Argument by Innuendo is unfortunately common, at least in in-person discussions. Basically, the arguer makes statements that seem to point to some conclusion or another, but stops a few steps short of actually drawing a conclusion, leaving the listener to draw the conclusion themselves. When I've caught myself doing this and ask myself why, there are a few reasons that come up, including:
- I'm testing my audience's intelligence in a somewhat subtle and mean way.
- I'm throwing ideas out there that I know are more than one or two inferential steps away, and seeing if my audience has heard of them, is curious enough to ask about them, or neither and just proceeds as if I didn't say anything.
- I want to escape the criticism of the conclusion I'm suggesting, and by making someone else connect the last few dots, I can redirect the criticism towards them instead, or at least deflect it from myself by denying that that was the conclusion that I was suggesting (even if it was).
Needless to say, this is pretty manipulative, and a generally Bad Thing. But people have sort of been conditioned to fall into the trap of Argument by Innuendo - to not look stupid (or "slow"), they want to try to figure out what you're getting at as quickly as possible instead of asking you, and then argue against it (possibly by innuendo themselves so they can make you look stupid if you don't get it right away). Of course, this makes it extremely easy to argue past each other without realizing it, and might leave one side bewildered at the reaction that their innocent-seeming statement of fact has provoked. I think that this has simply become part of how we reason in real-time in-person discussions.
(To test this claim, try asking "so what?" or "what's the conclusion you're getting at?" when you notice this happening. Note the facial expressions and tone you get in response. In my experience, either the arguer treats you as stupid for asking clarification on such an "obvious" point, or they squirm in discomfort as they're forced to state explicitly the conclusion they were trying to avoid criticism for proposing, and may weasel into an entirely different position altogether that isn't at all supported by their statements.)
So, I'd venture to say that that's what's going on here - your audience heard your factual observation, interpreted it as laden with a point to be made, and projected that conclusion back onto you, all in the blink of an eye.
Replies from: NickRetallack, Document↑ comment by NickRetallack · 2013-07-20T18:42:38.959Z · LW(p) · GW(p)
I think it's a good thing to do this. It is analogous to science.
If you're a good reasoner and you encounter evidence that conflicts with one of your beliefs, you update that belief.
Likewise, if you want to update someone else's belief, you can present evidence that conflicts with it in hopes they will be a good reasoner and update their belief.
This would not be so effective if you just told them your conclusion flat out, because that would look like just another belief you are trying to force upon them.
↑ comment by Document · 2013-08-03T05:48:15.042Z · LW(p) · GW(p)
Possibly related: When Truth Isn't Enough.
comment by keddaw · 2012-04-18T14:47:23.922Z · LW(p) · GW(p)
Do you really think you're so smart that you would have been a proper scientific skeptic even if you'd been born in 500 C.E.?
Yes. "But your genes would be different." Then it wouldn't be me. "Okay, same genes, but no scientific education." Then it wouldn't be me.
As much as such a thing as 'me' exists then it comes with all the knowledge and skills I have gained either through genetics, training or learning. Otherwise it isn't 'me'.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-04-18T15:08:36.867Z · LW(p) · GW(p)
So who was that person who started learning the skills that you now have?
Replies from: keddaw↑ comment by keddaw · 2012-04-19T13:03:47.967Z · LW(p) · GW(p)
Well, the person who started typing this reply was someone incredibly similar, but not identical, to the person who finished (neither of whom is the present me). It was a person who shared my genes, who had an almost identical memory of childhood and education, who shares virtually all my goals, interests and dreams, and who is more like me than any other person that has ever lived. However, that person was not the me who exists now.
Extrapolate that backwards, becoming less and less like current me over time and you get an idea of who started learning the skills I currently have.
It's not my fault if people have a broken view of what/who they actually are.
Replies from: asparisi, TheOtherDave↑ comment by asparisi · 2012-04-19T13:38:00.825Z · LW(p) · GW(p)
Shouldn't that answer then result in an "Invalid Question" response to the original "Would you be a proper scientific skeptic if you were born in 500 C.E.?" question?
I mean, what you are saying here is that it isn't possible for you to have been born in 500 C.E., that you are a product of your genetics and environment and cannot be separated from those conditions that resulted in you. So the answer isn't "Yes" it is "That isn't a valid question."
I'm not saying I agree, especially since I think the initial question can be rephrased as "Given the population of humans born in 500 C.E. and the historical realities of the era, do you believe that any person born in this era could have been a proper scientific skeptic and given that, do you believe that you would have developed into one had your initial conditions been otherwise identical, or at least highly similar?" Making it personal (Would you be...) is just a way of conferring the weight of the statement, as it is assumed that the readers of LW all have brains capable of modelling hypothetical scenarios, even if those scenarios don't (or can't even in principle) match reality.
The question isn't asking if it is ACTUALLY possible for you to have been born in 500 CE, it is asking you to model the reality of someone in the first person as born in 500 CE and, taking into account what you know of the era, ask if you really think that someone with otherwise equivalent initial starting conditions would have grown into a proper scientific skeptic.
It's also shorter to just bring in the personal hypothetical, which helps.
Replies from: keddaw↑ comment by keddaw · 2012-04-19T17:11:59.992Z · LW(p) · GW(p)
Correct. I made the jump of me appearing as-is in 500 C.E., as opposed to "baby me", since I do not in any logical sense think that baby me is me. So yes, the question is invalid (in my view), but I tried to make it valid by altering the question without explicitly saying I was doing so (i.e. "If you were to pop into existence in 500 C.E., would you be a scientific skeptic?").
↑ comment by TheOtherDave · 2012-04-19T14:09:35.307Z · LW(p) · GW(p)
Nor, by your reasoning, could it possibly ever be your fault, since my current view of what I am has causes in the past, and you didn't exist in the past. By the same reasoning, nothing else could possibly ever be your fault, except possibly for what you are doing in the instant that I blame you for it... not that it matters for practical purposes, since by the time I got around to implementing consequences of that, you would no longer exist.
That strikes me as even more broken a view than the one you wish for it to replace... it destroys one of the major functions we use the notion of "a person" to perform.
comment by Ezra · 2012-06-23T09:10:17.110Z · LW(p) · GW(p)
I was surprised and pleased to discover that the rock band Switchfoot have a song about the terrible cost to oneself of treating one's arguments as soldiers. It's called "The Sound in My Mouth". (Youtube link, with incorrect lyrics below it; better ones can be found at the bottom of this fansite page)
It focuses on the social costs rather than the truth-finding costs, but it's still well ahead of where I usually expect to find music.
Replies from: TheNuszAbides↑ comment by TheNuszAbides · 2013-07-04T17:52:11.614Z · LW(p) · GW(p)
To save those who would bother to trouble themselves, as I just did, the trouble: the second link is for the album Oh! Gravity, but "The Sound in My Mouth" is on the Oh! EP.
comment by roryokane · 2012-09-21T06:17:14.921Z · LW(p) · GW(p)
Alternate title: “debates should acknowledge tradeoffs”. I think that mnemonic is more helpful.
Longer summary: “Debates should acknowledge tradeoffs. Don’t rationalize away apparent good points for the other side; it’s okay and normal for the other side to have some good points. Presumably, those points just won’t be strong enough in total to overwhelm yours in total. (Also, acknowledging tradeoffs is easier if you don’t think of the debate in terms of ‘your side’ and ‘their side’.)”
comment by Robert Miles (robert-miles) · 2012-10-15T18:09:12.814Z · LW(p) · GW(p)
An implicit assumption of this article which deserves to be made explicit:
"All negative effects of buying things from the banned store accrue to the individual who chose to purchase from the banned store"
In practical terms this would not be the case. If I buy Sulphuric Acid Drink from the store and discover acid is unhealthy and die, that's one thing. If I buy Homoeopathic Brake Pads for my car and discover they do not cause a level of deceleration greater than placebo, and in the course of this discovery run over a random pedestrian, that's morally a different thing.
The goal of regulation is not just to protect us from ourselves, but to protect us from each other.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2012-10-15T19:06:38.050Z · LW(p) · GW(p)
"All negative effects of buying things from the banned store accrue to the individual who chose to purchase from the banned store"
Or, the individual who chooses to purchase from the banned store is able to compensate others for any negative effects.
Replies from: cypher197↑ comment by cypher197 · 2013-03-17T00:05:47.779Z · LW(p) · GW(p)
Unfortunately we have not yet discovered a remedy by which court systems can sacrifice the life of a guilty party to bring back a victim party from the dead.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2013-03-17T06:30:14.442Z · LW(p) · GW(p)
No, but several historical cultures and a few current ones legitimize the notion of blood money as restitution to a victim's kin.
Replies from: cypher197↑ comment by cypher197 · 2013-03-23T23:01:39.977Z · LW(p) · GW(p)
No amount of money can raise the dead. It's still more efficient to prevent people from dying in the first place.
All people are idiots at least some of the time. I don't accept the usage of Homeopathic Brake Pads as a legitimate decision, even if the person using them has $1 billion USD with which to compensate the innocent pedestrians killed by a speeding car. I'll accept the risk of occasional accident, but my life is worth more to me than the satisfaction some "alternative vehicle control systems" nut gets from doing something stupid.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2013-03-24T19:01:48.969Z · LW(p) · GW(p)
"Homeopathic brake pads" are a reductio-ad-absurdum of the actual proposal, though — which has to do with products that are not certified, tested, or guaranteed in the manner that you're used to.
There are lots of levels of (un)reliability between Homeopathic (works 0% of the time) and NHTSA-Certified (works 99.99% of the time). For instance, there might be Cheap-Ass Brake Pads, which work 99.95% of the time at 10% of the cost of NHTSA-Certified; or Kitchen Sponge Brake Pads, which work 90% of the time at 0.05% of the cost.
We do not have the option of requiring everyone to only do things that impose no danger to others. So if someone chooses to use a product that is incrementally more dangerous to others — whether because this lets them save money by buying Cheap-Ass Brake Pads; or because it's just more exciting to drive a Hummer than a Dodge minivan — how do we respond?
Replies from: cypher197↑ comment by cypher197 · 2013-04-12T09:24:49.667Z · LW(p) · GW(p)
how do we respond?
Well, as a society, at some point we set a cut-off and make a law about it. Thus some items are banned while others are not, and some items are taxed and have warnings on them instead of an outright ban.
And it's not just low intelligence that's a risk. People can be influenced by advertising, social pressure, information saturation, et cetera. Let's suppose we do open this banned goods shop. Are we going to make each and every customer fill out an essay question detailing exactly how they understand these items to be dangerous? I don't mean check a box or sign a paper, because that's like clicking "I Agree" on a EULA or a security warning, and we've all seen how well that's worked out for casual users in the computer realm, even though we constantly bombard them with messages not to do exactly the things that get them in trouble.
Is it Paternalist arrogance when the system administrator makes it impossible to download and open .exe attachments in Microsoft Outlook? Clearly, there are cases where system administrators are paternalist and arrogant; on the other hand, there are a great many cases where users trash their machines. The system administrator has a much better knowledge about safely operating the computer; the user knows more about what work they need to get done. These things are issues of balance, but I'm not ready to throw out top-down bans on dangerous-to-self products.
comment by Huma · 2012-11-04T00:38:59.634Z · LW(p) · GW(p)
I think it is useful here to distinguish politics as a consequence of morality from politics as an agreed set of methods of public decision-making. With the first kind of politics, or politics(A), yes, one has to present all facts as they are, regardless of whether they favor one's stance, IF one believes there is a moral duty to be rational. In a world where humans all share that particular view of morality, there is no need for the second kind of politics, or politics(B), because in that world the methods of rational decision-making suffice as the method of public decision-making.
But what if some of us do not share that particular view? I, for example, could believe one's utmost moral duty is to conserve all forms of life, and, regardless of whether I am rational, I would present a view biased towards the outcome favored by my moral code. In that case, my bias is not the result of a lack of rationality but of my morality.
I agree with Eliezer that politics(B) is not an ideal place for rationality, but I think it was never meant as such. I think (meaning: in my opinion) the democratic political system is envisioned as an arena not of rationality but of morality. As such, it shouldn't really matter how an issue is presented. Rational arguments appeal to rational voters. It is not a flaw of the system that some voters are irrational and that someone presents an irrational argument to appeal to them.
comment by NickRetallack · 2013-07-20T18:57:30.075Z · LW(p) · GW(p)
Debates can easily appear one-sided, for each side. For example, some people believe that if you follow a particular conduct in life, you will go to heaven. To these people, any policy decision that results in sending fewer people to heaven is a tragedy. But to people who don't believe in heaven, this downside does not exist.
This is not just an arbitrary example. This shows up all the time in US politics. Until people can agree on whether or not heaven exists, how can any of these debates not seem one-sided?
comment by NickRetallack · 2013-07-20T23:33:35.329Z · LW(p) · GW(p)
There is so much wrong with this example that I don't know where to start.
You make up a hypothetical person who dies because she doesn't heed an explicit warning that says "if you do this, you will die". Then you make several ridiculous claims about this hypothetical person:
1) You claim this event will happen, with absolute certainty.
2) You claim this event occurs because this individual has low intelligence, and that it is unfair because a person does not choose to be born intelligent.
3) You claim this event is a tragedy.
I disagree with all of these, and I will challenge them individually. But first, the meta-claim of this argument is that I am supposed to consider compromises that I don't even believe in. Why would I ever do that? Suppose that the downside of a policy decision is "fewer people will go to heaven". If you are not religious, this sounds like a ridiculous, nonsensical downside, and thus no downside at all. And where do you draw the line on perceived downsides anyway? Do you allow people to just make up metaphysical, superstitious downsides, and then proceed to weigh those as well? That seems like a waste of time to me. Perhaps you do weigh those possibilities but assign them so low a probability that they effectively disappear -- yet clearly your opponent doesn't assign the same probabilities to them that you do. So you have to take the argument to the place where the real disagreements occur. Which leads me to these three claims.
1) You claim this event will happen, with absolute certainty.
1 is not a probability. Besides, the original article mentions safeguards that should reduce the probability that this event ever happens. The type of safeguards depends on your hypothetical person, of course. Let's say your hypothetical person is drunk. The clerk could give a breathalyzer test. Maybe your hypothetical person isn't aware of the warnings. The clerk could read them off at the checkout. Maybe the person doesn't listen or understand. The clerk could quiz them on the content he just read to ensure it sinks in.
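A rough sketch of how such stacked safeguards behave, assuming (optimistically) that they fail independently; the failure rates below are hypothetical:

```python
# Hypothetical, independent failure rates for each safeguard.
safeguards = {
    "warning label ignored": 0.20,
    "breathalyzer fooled or skipped": 0.05,
    "checkout quiz passed without understanding": 0.10,
}

p_all_fail = 1.0
for check, p_fail in safeguards.items():
    p_all_fail *= p_fail

print(f"P(all safeguards fail) = {p_all_fail:.4f}")  # 0.0010
# Stacking safeguards drives the probability down fast, but it never
# reaches zero -- consistent with the point that 1 (and 0) are not
# probabilities you can actually buy.
```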
But then, I guess the real point of the article is that the hypothetical person doesn't believe the warnings, which brings us to:
2) You claim this event occurs because this individual has low intelligence, and that it is unfair because a person does not choose to be born intelligent.
Receiving a warning explicitly stating "if you do this, you will die" is hardly a mental puzzle. Is this really even a measure of intelligence? This seems like a stretch.
Bleach is sold at normal stores, without any restrictions. If you drink it, you could die. Many people have heard this warning. Do people disbelieve it? Do they risk testing the hypothesis on themselves? Why would anyone risk death like this? I am genuinely curious as to how this can be related to intelligence. Someone please explain this to me.
Generally if someone drinks bleach, it is because they believed the warning and wanted to die. Is this a tragedy? Should we ban bleach? This brings me to:
3) You claim this event is a tragedy.
Is it really?
People are hardly a valuable resource right now. In fact, there are either too many of us, or there will be soon. If one person dies, everyone else gets more space and resources. It's kind of like your article on dust specks vs. torture, except that a suicidal person selects themselves, rather than being randomly selected. Unless you apply some argument about determinism and say that a person doesn't choose to be born suicidal (or to lead a life whose circumstances would lead anyone to be suicidal, etc.).
Should a person be allowed to commit suicide? If we prevent them from doing so, are we infringing on their rights? Or are they infringing on their own rights? I don't really know. I do know and love some amazing people who have committed suicide, and I wish I could have prevented them. This is a real complication to this issue for me, because I value different people differently: I'd gladly allow many people I've never met to die if it would save one person I love. But I understand that other people don't value the same people I do, so this feeling is not easy to transfer into general policies.
Is evolution not fair? If we decide to prop up every unfit individual and prevent every suicide, genetic evolution becomes severely neutered. We can't really adapt to our environment if we don't let it select from us. Thus it would be to our genetic benefit to allow people to die, as it would eventually select out whatever genes caused them to do this. But then, some safety nets seem reasonable. We wouldn't consider banning glasses in order to select for better vision. We need to strike some sort of balance here though, and not waste too many resources propping up individuals who will only multiply their cost to everyone with future generations of their genes and memes. I think that, currently, the point at which this balance is set is when it simply costs too much cash to keep someone alive, though we will gladly provide all people with a certain amount of food and shelter. The specific amount provided is under constant debate.
So, are we obligated to protect every random individual ever born? Is it a tragedy if anyone dies? I think that's debatable. It isn't a definite downside. In fact, it could even be an upside.
comment by Become_Stronger · 2013-10-05T00:23:35.059Z · LW(p) · GW(p)
I'd like to point out that the value of a statistical life is used by economists for calculations such as the one Eliezer mentions, so at some point someone has managed to do the math.
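For the curious, a toy version of that math, with placeholder figures (VSL values around $10M are in the range commonly cited for the US, but every other input here is invented):

```python
# Placeholder inputs: only the VSL figure is in a commonly cited range.
VSL = 10_000_000                        # value of a statistical life, dollars
expected_deaths_per_year = 3            # hypothetical deaths from banned-shop sales
consumer_benefit_per_year = 50_000_000  # hypothetical value of access to buyers

net = consumer_benefit_per_year - expected_deaths_per_year * VSL
print(f"net annual benefit: ${net:,.0f}")  # $20,000,000

# A positive number here is exactly the cost-benefit statement the post
# says no politician could utter aloud: "we keep the shops open anyway."
```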
comment by AshwinV · 2014-02-07T05:39:59.328Z · LW(p) · GW(p)
"I was just making a simple factual observation. Why did some people think it was an argument in favor of regulation?"
A (tiny) note of dissonance here. As noted earlier, any knowledge/understanding naturally constrains anticipation. Won't it naturally follow that a factual observation will concentrate the probability density in favour of one side of the debate (assuming, of course, that the debate is viewed as having only two possible outcomes, even if each outcome is very broad and contains many variants)?
In this particular example, if the object of the debate is to decide whether maximum gain (or benefit, or however else it is to be called) can be had from regulation, then the point about Dr. Snakeoil's sulphuric acid being harmful to a (very real) section of the population certainly implies an argument in favour of one side, even if not made with that intention.
I realise of course that this is an honest attempt to understand the problem and discuss it thoroughly before proposing a solution or coming to a decision, but is there truly a way to be 100% neutral? Especially when, in reality, most facts have consequences that point to one side or the other (even if the debate looks evenly balanced in the eyes of the public).
What (if any) can be the "litmus test" to distinguish between a factual consideration and a clearly formed opinion? And are there shades of grey in between?
comment by EngineerofScience · 2015-07-21T20:33:38.382Z · LW(p) · GW(p)
There are two problems with stores that can sell banned things: harm to the public, and harm to people who are uneducated. I could go into one of these stores, buy poison, and fill my brother's glass with it. That would be a drawback, because it would affect my brother, who did not go into a store, ignore a safety warning, pick up a bottle of poison, and drink it. An uneducated mother of five children who drinks poison doesn't deserve to die, and her children don't deserve to be orphans -- and that is assuming she drinks it herself and doesn't give it to her children. Libertarians who say that stores selling banned goods would be completely good, with no bad effects at all, are simply wrong. Very little, if anything, is gained by banned goods being available to the public, while the reasons above show there is a drawback: someone who buys and drinks a can of poison does not deserve to die -- the person could have been bullied, or driven insane by a disease or by a drug that someone else tricked him into drinking or forced down his throat. In fact, such a drug might only be purchasable at such a store.
Replies from: Wes_W↑ comment by Wes_W · 2015-07-21T20:59:59.404Z · LW(p) · GW(p)
But... you can already buy many items that are lethal if forcefully shoved down someone's throat. Knives, for example. It's not obvious to me that a lack of lethal drugs is currently preventing anyone from hurting people, especially since many already-legal substances are very dangerous to pour down someone's throat.
From the Overcoming Bias link, "risky buildings" seem to me the clearest example of endangering people other than the buyer.
Replies from: EngineerofScience↑ comment by EngineerofScience · 2015-07-22T20:14:14.367Z · LW(p) · GW(p)
I can see that, and I realize that there are advantages to having a store that can sell illegal things. I would now say that such a store would be beneficial. There would have to be some restrictions on what that type of store could sell: explosives like fireworks could still be restricted to licensed buyers, and nukes would not be sold at all.
comment by karl-friedrich · 2016-03-17T23:22:55.615Z · LW(p) · GW(p)
I found this post particularly ironic. The statement that a mother of five would drink sulfuric acid but for government regulation is not "a simple factual observation." How could it be? Since we are imagining an alternative world, and the statement is not based on any universal law of human action (nor even on historical precedent, in which case it would be a probabilistic statement, not a statement of fact), it is speculation. And a very debatable speculation at that. Why would anyone bother to market such a product? Surely it would not be profitable, as there is no demand for it (indeed, someone intent on swallowing poison is not going to be stopped by product safety regulation). Furthermore, the sellers would practically be begging for a gigantic tort judgment against them. It is curious that the author does not mention tort law, which has a much greater effectiveness in regulating commercial behavior than codified regulations enforced by bureaucrats. Such a judgment would most likely follow in this speculative hypothetical, since it sounds as if the mother of five is totally unaware that she is drinking poison. But even if the author's hypothetical were likely (which I contend it is not), it is not in any way a "factual observation."
Replies from: gjm, ChristianKl↑ comment by gjm · 2016-03-17T23:57:40.073Z · LW(p) · GW(p)
The statement [...] is not a "simple factual observation"
No, but I'm pretty sure it's shorthand for something like this:
Experience has shown many, many instances where (in the absence of paternalist regulation with real teeth) charlatans and cranks persuade people to do things that are very much not in their best interests, and those people end up harmed.
which is a simple factual observation, plus this:
If there were "banned products stores" within which the rules against selling dangerous products were suspended, there's no reason to suppose that they would be a unique exception to that pattern; so, almost certainly, sooner or later (and probably sooner) someone would buy something dangerous in one of those stores and come to serious harm.
which, while in principle it's "speculation", seems about as speculative as "if we set up a stall in the street offering free cake, some people would eat it".
(I take it it's obvious that "Sulfuric Acid Drink" was intended as hyperbole, to indicate something not quite so transparently harmful, masquerading as a cure. If it isn't, you might want to consider why Eliezer called it "Dr Snakeoil's".)
Apparently you disagree on the grounds that actually no one would be selling such things even if such shops existed. I think they very decidedly might.
Surely it would not be profitable
Selling fake cures for real diseases (or in some cases fake diseases) has historically been very profitable for some people, and some of those fake cures have been poisonous.
begging for a gigantic tort judgment
That's a stronger argument. I think Robin may have been envisaging -- and, whether or not he was, Eliezer may have taken him to be envisaging -- that selling in the Banned Products Store exempts you from more than just standard-issue regulatory red tape. I am not an expert on US tort law, so I'll take your word for it that Dr Snakeoil would not be able to get out of trouble just by protesting that he honestly thought his Sulfuric Acid Drink was good against arthritis; if so, then indeed the Banned Products store might be substantially less dangerous than Eliezer suggests.
Replies from: tdb↑ comment by tdb · 2016-11-04T01:14:46.183Z · LW(p) · GW(p)
Maybe we need a banned products store and a tort-proof banned products store, both.
Some libertarians might say that if you go into a "banned products shop", passing clear warning labels that say "THINGS IN THIS STORE MAY KILL YOU", and buy something that kills you, then it's your own fault and you deserve it. If that were a moral truth, there would be no downside to having shops that sell banned products. It wouldn't just be a net benefit, it would be a one-sided tradeoff with no drawbacks.
I don't quite follow. Even when people "deserve" what they get, if what they "deserve" is death, their loved ones see that as a negative. Does this mean there are no moral truths, since every choice has a downside? Or am I overgeneralizing when I interpret it as "moral truths have no downside."
Replies from: gjm↑ comment by gjm · 2016-11-06T14:58:20.126Z · LW(p) · GW(p)
I'm not certain I understand Eliezer's argument there, but I think he simply made a mistake: I agree with you that if you do something that deserves a bad outcome and the bad outcome happens, it can still be bad that that happened and that can be a downside to whatever may have made it easier for you to do the bad thing.
↑ comment by ChristianKl · 2016-11-04T17:10:00.669Z · LW(p) · GW(p)
Furthermore, they would practically be begging for a gigantic tort judgment against them. It is curious that the author does not mention tort law, which has a much greater effectiveness in regulating commercial behavior than codified regulations enforced by bureaucrats.
Tort law means that the decision about which products are dangerous enough to warrant being effectively banned isn't made by scientifically literate experts but by laypeople on a jury.
Uncertainty about what is and isn't allowed is also bad for business.
comment by DavidA · 2019-06-07T19:10:47.679Z · LW(p) · GW(p)
“Yes, sulfuric acid is a horrible painful death, and no, that mother of five children didn’t deserve it, but we’re going to keep the shops open anyway because we did this cost-benefit calculation.” Can you imagine a politician saying that? Neither can I. But insofar as economists have the power to influence policy, it might help if they could think it privately—maybe even say it in journal articles, suitably dressed up in polysyllabismic obfuscationalization so the media can’t quote it.
This speaks to a very significant issue we face today. Vast swathes of public policy appear to be predicated upon the belief that we can create utopia. So the response to any apparent cost is always to attempt to eliminate it, usually through regulation.
I strongly suspect that not only is utopia an impossible dream, but attempts to regulate it into existence are counter-productive and the end result of such efforts is dystopia.
So I think it is incumbent upon economists not just to think this privately (I'm sure many do), but to say it loudly, publicly and frequently.
comment by EniScien · 2021-11-09T10:02:47.252Z · LW(p) · GW(p)
This is similar to the chain of reasoning behind "the end justifies the means" -- as if a supporter of vaccination had to say that it is good that injections hurt children. That is obviously logically unsound. But in the case of "fools who deserve it", the error does not seem obvious, because stupidity is something bad, and therefore stupid people are bad people, and so they deserve punishment for their badness.
comment by tmercer · 2022-07-06T19:09:10.527Z · LW(p) · GW(p)
A few ideas:
You can't save a life. Every living thing is doomed to die. You can only postpone deaths.
Morality ought to be based on the expected values of the decisions people make or the actions they take, not the actual outcomes. Morality includes the responsibility to correctly evaluate EV by gathering sufficient evidence.
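A minimal sketch of the distinction being drawn here, judging a single hypothetical gamble by its expected value rather than by the outcome luck happens to deliver:

```python
import random

def expected_value(lottery):
    """lottery: list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in lottery)

# A hypothetical decision: 90% chance of gaining 10, 10% chance of losing 50.
decision = [(0.9, 10.0), (0.1, -50.0)]

print("EV verdict:", expected_value(decision))  # +4.0: a good decision

random.seed(0)
outcome = random.choices([10.0, -50.0], weights=[0.9, 0.1])[0]
print("outcome verdict:", outcome)  # sometimes -50: same decision, bad luck

# Judging by EV praises the decision either way; judging by outcome
# punishes the decider for a coin flip they couldn't control.
```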
comment by Caperu_Wesperizzon · 2022-08-21T07:06:33.049Z · LW(p) · GW(p)
Saying “People who buy dangerous products deserve to get hurt!” is not tough-minded. It is a way of refusing to live in an unfair universe. Real tough-mindedness is saying, “Yes, sulfuric acid is a horrible painful death, and no, that mother of five children didn’t deserve it, but we’re going to keep the shops open anyway because we did this cost-benefit calculation.” Can you imagine a politician saying that? Neither can I.
There's another reason to say the former rather than the latter. Most people will hear the latter this way:
"Yes, sulfuric acid is a horrible painful death ..."
"TLDR. Okay, you're for regulation."