Diseased thinking: dissolving questions about disease

post by Scott Alexander (Yvain) · 2010-05-30T21:16:19.449Z · LW · GW · Legacy · 357 comments

Contents

  What is Disease?
  Hidden Inferences From Disease Concept
  Sympathy or Condemnation?
  The Ethics of Treating Marginal Conditions
  Summary

Related to: Disguised Queries, Words as Hidden Inferences, Dissolving the Question, Eight Short Studies on Excuses

Today's therapeutic ethos, which celebrates curing and disparages judging, expresses the liberal disposition to assume that crime and other problematic behaviors reflect social or biological causation. While this absolves the individual of responsibility, it also strips the individual of personhood and moral dignity.

-- George Will, townhall.com

Sandy is a morbidly obese woman looking for advice.

Her husband has no sympathy for her, and tells her she obviously needs to stop eating like a pig, and would it kill her to go to the gym once in a while?

Her doctor tells her that obesity is primarily genetic, and recommends the diet pill orlistat and a consultation with a surgeon about gastric bypass.

Her sister tells her that obesity is a perfectly valid lifestyle choice, and that fat-ism, equivalent to racism, is society's way of keeping her down.

When she tells each of them about the opinions of the others, things really start to heat up.

Her husband accuses her doctor and sister of absolving her of personal responsibility with feel-good platitudes that in the end will only prevent her from getting the willpower she needs to start a real diet.

Her doctor accuses her husband of ignorance of the real causes of obesity and of the most effective treatments, and accuses her sister of legitimizing a dangerous health risk that could end with Sandy in hospital or even dead.

Her sister accuses her husband of being a jerk, and her doctor of trying to medicalize her behavior in order to turn it into a "condition" that will keep her on pills for life and make lots of money for Big Pharma.

Sandy is fictional, but similar conversations happen every day, not only about obesity but about a host of other marginal conditions that some consider character flaws, others diseases, and still others normal variation in the human condition. Attention deficit disorder, internet addiction, social anxiety disorder (as one skeptic said, didn't we used to call this "shyness"?), alcoholism, chronic fatigue, oppositional defiant disorder ("didn't we used to call this being a teenager?"), compulsive gambling, homosexuality, Asperger's syndrome, antisocial personality, even depression have all been placed in two or more of these categories by different people.

Sandy's sister may have a point, but this post will concentrate on the debate between her husband and her doctor, with the understanding that the same techniques will apply to evaluating her sister's opinion. The disagreement between Sandy's husband and doctor centers around the idea of "disease". If obesity, depression, alcoholism, and the like are diseases, most people default to the doctor's point of view; if they are not diseases, they tend to agree with the husband.

The debate over such marginal conditions is in many ways a debate over whether or not they are "real" diseases. The usual surface level arguments trotted out in favor of or against the proposition are generally inconclusive, but this post will apply a host of techniques previously discussed on Less Wrong to illuminate the issue.

What is Disease?

In Disguised Queries, Eliezer demonstrates how a word refers to a cluster of objects related along multiple axes. For example, in a company that sorts red smooth translucent cubes full of vanadium from blue furry opaque eggs full of palladium, you might invent the word "rube" to designate the red cubes, and another, "blegg," to designate the blue eggs. Both words are useful because they "carve reality at the joints": they refer to two completely separate classes of things which it is practically useful to keep in separate categories. Calling something a "blegg" is a quick and easy way to describe its color, shape, opacity, texture, and chemical composition. The odd blegg might be purple rather than blue, but in general the characteristics of a blegg remain sufficiently correlated that "blegg" is a useful word. If they weren't so correlated - if blue objects were as likely to be palladium-containing cubes as vanadium-containing eggs - then the word "blegg" would be a waste of breath; the characteristics of the object would remain just as mysterious to your listener after you said "blegg" as they were before.
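The correlated-cluster idea can be sketched as a toy classifier. The feature names and the scoring rule below are invented for illustration; the point is only that a category word pays rent when its features predict each other:

```python
# Toy sketch of the blegg/rube cluster: a word is useful when the features
# it bundles are correlated. Feature names and the scoring rule are
# invented for illustration.

BLEGG_TYPICAL = {"color": "blue", "shape": "egg", "opacity": "opaque",
                 "texture": "furry", "metal": "palladium"}
RUBE_TYPICAL = {"color": "red", "shape": "cube", "opacity": "translucent",
                "texture": "smooth", "metal": "vanadium"}

def cluster_match(obj, prototype):
    """Fraction of the object's observed features matching the prototype."""
    shared = [k for k in obj if k in prototype]
    if not shared:
        return 0.0
    return sum(obj[k] == prototype[k] for k in shared) / len(shared)

def classify(obj):
    b = cluster_match(obj, BLEGG_TYPICAL)
    r = cluster_match(obj, RUBE_TYPICAL)
    if b > r:
        return "blegg"
    if r > b:
        return "rube"
    return "ambiguous"  # the word stops paying rent here

# A purple egg still counts as a blegg: most of its observed features
# fall in the blegg cluster even though one is atypical.
assert classify({"color": "purple", "shape": "egg", "texture": "furry"}) == "blegg"
```

An object that matches the two prototypes equally well comes out "ambiguous" - which, as the next section argues, is exactly the situation of the marginal conditions.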

"Disease", like "blegg", suggests that certain characteristics always come together. A rough sketch of some of the characteristics we expect in a disease might include:

1. Something caused by the sorts of things you study in biology: proteins, bacteria, ions, viruses, genes.

2. Something involuntary, completely immune to the operations of free will.

3. Something rare; the vast majority of people don't have it.

4. Something unpleasant; when you have it, you want to get rid of it.

5. Something discrete; a graph would show two widely separated populations, one with the disease and one without, not a normal distribution.

6. Something commonly treated with science-y interventions like chemicals and radiation.

Cancer satisfies every one of these criteria, and so we have no qualms whatsoever about classifying it as a disease. It's a type specimen, the sparrow as opposed to the ostrich. The same is true of heart attack, the flu, diabetes, and many more.

Some conditions satisfy a few of the criteria, but not others. Dwarfism seems to fail (5), and it might get its status as a disease only after studies show that the supposed dwarf falls way out of normal human height variation. Despite the best efforts of transhumanists, it's hard to convince people that aging is a disease, partly because it fails (3). Calling homosexuality a disease is a poor choice for many reasons, but one of them is certainly (4): it's not necessarily unpleasant.

The marginal conditions mentioned above are also in this category. Obesity arguably sort-of-satisfies criteria (1), (4), and (6), but it would be pretty hard to make a case for (2), (3), and (5).
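The partial-satisfaction point can be made concrete with a toy score against the six criteria above. The particular judgments encoded are only the post's rough ones; the point is that a fractional score leaves no further fact of the matter hiding behind it:

```python
# Toy score of a condition against the six criteria above. The judgments
# encoded below are only the post's rough ones.

CRITERIA = {1: "biological cause", 2: "involuntary", 3: "rare",
            4: "unpleasant", 5: "discrete", 6: "science-y treatment"}

def disease_score(satisfied):
    """satisfied: the set of criterion numbers the condition meets."""
    return len(satisfied & set(CRITERIA)) / len(CRITERIA)

cancer_score = disease_score({1, 2, 3, 4, 5, 6})  # 1.0: a type specimen
obesity_score = disease_score({1, 4, 6})          # 0.5: the marginal zone
assert cancer_score == 1.0 and obesity_score == 0.5
```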

So, is obesity really a disease? Well, is Pluto really a planet? Once we state that obesity satisfies some of the criteria but not others, it is meaningless to talk about an additional fact of whether it "really deserves to be a disease" or not.

If it weren't for those pesky hidden inferences...

Hidden Inferences From Disease Concept

The state of the disease node, meaningless in itself, is used to predict several other nodes with non-empirical content. In English: we make value decisions based on whether we call something a "disease" or not.

If something is a real disease, the patient deserves our sympathy and support; for example, cancer sufferers must universally be described as "brave". If it is not a real disease, people are more likely to get our condemnation; for example, Sandy's husband calls her a "pig" for her inability to control her eating habits. The difference between "shyness" and "social anxiety disorder" is that people with the first get called "weird" and told to man up, and people with the second get special privileges and the sympathy of those around them.

And if something is a real disease, it is socially acceptable (maybe even mandated) to seek medical treatment for it. If it's not a disease, medical treatment gets derided as a "quick fix" or an "abdication of personal responsibility". I have talked to several doctors who are uncomfortable suggesting gastric bypass surgery, even in people for whom it is medically indicated, because they believe it is morally wrong to turn to medicine to solve a character issue.

While a condition's status as a "real disease" ought to be meaningless as a "hanging node" after the status of all other nodes have been determined, it has acquired political and philosophical implications because of its role in determining whether patients receive sympathy and whether they are permitted to seek medical treatment.

If we can determine whether a person should get sympathy, and whether they should be allowed to seek medical treatment, independently of the central node "disease" or of the criteria that feed into it, we will have successfully unasked the question "are these marginal conditions real diseases" and cleared up the confusion.
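What "unasking" looks like can be sketched by comparing the two inference patterns. The predicates below are simplified stand-ins for the post's criteria, and the example values are invented for illustration:

```python
# Sketch: "disease" as a hanging node. Once the substantive facts are fixed,
# the label adds nothing, yet the usual debate routes both value judgments
# (sympathy and access to treatment) through it.

def via_disease_label(facts):
    """How the debate usually goes: one binary label decides everything."""
    is_disease = facts["biological"] and facts["involuntary"]
    return {"sympathy": is_disease, "treatment_ok": is_disease}

def directly(facts):
    """The post's proposal: answer each value question from the facts."""
    return {"sympathy": not facts["condemnation_reduces_incidence"],
            "treatment_ok": facts["effective_treatment_exists"]}

# An invented marginal case: partly voluntary, condemnation ineffective,
# effective treatment available.
marginal = {"biological": True, "involuntary": False,
            "condemnation_reduces_incidence": False,
            "effective_treatment_exists": True}
assert via_disease_label(marginal) == {"sympathy": False, "treatment_ok": False}
assert directly(marginal) == {"sympathy": True, "treatment_ok": True}
```

The two procedures disagree exactly on the marginal cases, which is why arguing over the label feels so consequential even though it carries no extra information.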

Sympathy or Condemnation?

Our attitudes toward people with marginal conditions mainly reflect a deontologist libertarian (libertarian as in "free will", not as in "against government") model of blame. In this conception, people make decisions using their free will, a spiritual entity operating free from biology or circumstance. People who make good decisions are intrinsically good people and deserve good treatment; people who make bad decisions are intrinsically bad people and deserve bad treatment. But people who make bad decisions for reasons outside their free will may not be intrinsically bad people, and may therefore be absolved from deserving bad treatment. For example, if a normally peaceful person with a brain tumor affecting areas involved in fear and aggression goes on a killing spree, and then becomes peaceful again once the tumor is removed, many people would be willing to accept that the killing spree does not reflect negatively on them or open them up to deserving bad treatment, since it had biological and not spiritual causes.

Under this model, deciding whether a condition is biological or spiritual becomes very important, and the rationale for worrying over whether something "is a real disease" or not is plain to see. Without figuring out this extremely difficult question, we are at risk of either blaming people for things they don't deserve, or else letting them off the hook when they commit a sin, both of which, to libertarian deontologists, would be terrible things. But determining whether marginal conditions like depression have a spiritual or biological cause is difficult, and no one knows how to do it reliably.

Determinist consequentialists can do better. We believe it's biology all the way down. Separating spiritual from biological illnesses is impossible and unnecessary. Every condition, from brain tumors to poor taste in music, is "biological" insofar as it is encoded in things like cells and proteins and follows laws based on their structure.

But determinists don't just ignore the very important differences between brain tumors and poor taste in music. Some biological phenomena, like poor taste in music, are encoded in such a way that they are extremely vulnerable to what we can call social influences: praise, condemnation, introspection, and the like. Other biological phenomena, like brain tumors, are completely immune to such influences. This allows us to develop a more useful model of blame.

The consequentialist model of blame is very different from the deontological model. Because all actions are biologically determined, none are more or less metaphysically blameworthy than others, and none can mark anyone with the metaphysical status of "bad person" and make them "deserve" bad treatment. Consequentialists don't on a primary level want anyone to be treated badly, full stop; thus is it written: "Saddam Hussein doesn't deserve so much as a stubbed toe." But if consequentialists don't believe in punishment for its own sake, they do believe in punishment for the sake of, well, consequences. Hurting bank robbers may not be a good in and of itself, but it will prevent banks from being robbed in the future. And, one might infer, although alcoholics may not deserve condemnation, societal condemnation of alcoholics makes alcoholism a less attractive option.

So here, at last, is a rule for which diseases we offer sympathy, and which we offer condemnation: if giving condemnation instead of sympathy decreases the incidence of the disease enough to be worth the hurt feelings, condemn; otherwise, sympathize. Though the rule is based on philosophy that the majority of the human race would disavow, it leads to intuitively correct consequences. Yelling at a cancer patient, shouting "How dare you allow your cells to divide in an uncontrolled manner like this; is that the way your mother raised you??!" will probably make the patient feel pretty awful, but it's not going to cure the cancer. Telling a lazy person "Get up and do some work, you worthless bum," very well might cure the laziness. The cancer is a biological condition immune to social influences; the laziness is a biological condition susceptible to social influences, so we try to socially influence the laziness and not the cancer.
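The rule can be written as a one-line decision procedure. The utility numbers below are invented; only the comparison matters:

```python
# Sketch of the post's consequentialist rule for sympathy vs. condemnation.
# Utility values are invented for illustration.

def respond(incidence_reduction_utility, hurt_feelings_disutility):
    """Condemn only if condemnation 'treats' the condition well enough
    to outweigh the harm of the condemnation itself."""
    if incidence_reduction_utility > hurt_feelings_disutility:
        return "condemn"
    return "sympathize"

# Yelling at a cancer patient: no effect on cell division, lots of hurt.
assert respond(incidence_reduction_utility=0.0,
               hurt_feelings_disutility=5.0) == "sympathize"
# Scolding laziness: social influence can actually work here.
assert respond(incidence_reduction_utility=8.0,
               hurt_feelings_disutility=3.0) == "condemn"
```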

The question "Do the obese deserve our sympathy or our condemnation?", then, is asking whether condemnation is such a useful treatment for obesity that its utility outweighs the disutility of hurting obese people's feelings. This question may have different answers depending on the particular obese person involved, the particular person doing the condemning, and the availability of other methods for treating the obesity, which brings us to...

The Ethics of Treating Marginal Conditions

If a condition is susceptible to social intervention, but an effective biological therapy for it also exists, is it okay for people to use the biological therapy instead of figuring out a social solution? My gut answer is "Of course, why wouldn't it be?", but apparently lots of people find this controversial for some reason.

In a libertarian deontological system, throwing biological solutions at spiritual problems might be disrespectful or dehumanizing, or a band-aid that doesn't affect the deeper problem. To someone who believes it's biology all the way down, this is much less of a concern.

Others complain that the existence of an easy medical solution prevents people from learning personal responsibility. But here we see the status-quo bias at work, and so can apply a preference reversal test. If people really believe learning personal responsibility is more important than being not addicted to heroin, we would expect these people to support deliberately addicting schoolchildren to heroin so they can develop personal responsibility by coming off of it. Anyone who disagrees with this somewhat shocking proposal must believe, on some level, that having people who are not addicted to heroin is more important than having people develop whatever measure of personal responsibility comes from kicking their heroin habit the old-fashioned way.

But the most convincing explanation I have read for why so many people are opposed to medical solutions for social conditions is a signaling explanation by Robin Hans...wait! no!...by Katja Grace. On her blog, she says:

...the situation reminds me of a pattern in similar cases I have noticed before. It goes like this. Some people make personal sacrifices, supposedly toward solving problems that don’t threaten them personally. They sort recycling, buy free range eggs, buy fair trade, campaign for wealth redistribution etc. Their actions are seen as virtuous. They see those who don’t join them as uncaring and immoral. A more efficient solution to the problem is suggested. It does not require personal sacrifice. People who have not previously sacrificed support it. Those who have previously sacrificed object on grounds that it is an excuse for people to get out of making the sacrifice. The supposed instrumental action, as the visible sign of caring, has become virtuous in its own right. Solving the problem effectively is an attack on the moral people.

A case in which some people eat less enjoyable foods and exercise hard to avoid becoming obese, and then campaign against a pill that makes avoiding obesity easy, demonstrates some of the same principles.

There are several very reasonable objections to treating any condition with drugs, whether it be a classical disease like cancer or a marginal condition like alcoholism. The drugs can have side effects. They can be expensive. They can build dependence. They may later be found to be placebos whose efficacy was overhyped by dishonest pharmaceutical advertising. They may raise ethical issues with children, the mentally incapacitated, and other people who cannot decide for themselves whether or not to take them. But these issues do not magically become more dangerous in conditions typically regarded as "character flaws" rather than "diseases", and the same good-enough solutions that work for cancer or heart disease will work for alcoholism and other such conditions (but see here).

I see no reason why people who want effective treatment for a condition should be denied it or stigmatized for seeking it, whether it is traditionally considered "medical" or not.

Summary

People commonly debate whether social and mental conditions are real diseases. This masquerades as a medical question, but its implications are mainly social and ethical. We use the concept of disease to decide who gets sympathy, who gets blame, and who gets treatment.

Instead of continuing the fruitless "disease" argument, we should address these questions directly. Taking a determinist consequentialist position allows us to do so more effectively. We should blame and stigmatize people for conditions where blame and stigma are the most useful methods for curing or preventing the condition, and we should allow patients to seek treatment whenever it is available and effective.

357 comments

Comments sorted by top scores.

comment by Vladimir_M · 2010-05-31T00:08:01.222Z · LW(p) · GW(p)

Yvain:

The consequentialist model of blame is very different from the deontological model. Because all actions are biologically determined, none are more or less metaphysically blameworthy than others, and none can mark anyone with the metaphysical status of "bad person" and make them "deserve" bad treatment. [...] But if consequentialists don't believe in punishment for its own sake, they do believe in punishment for the sake of, well, consequences. Hurting bank robbers may not be a good in and of itself, but it will prevent banks from being robbed in the future.

Or as Oliver Wendell Holmes put it more poignantly:

If I were having a philosophical talk with a man I was going to have hanged or electrocuted, I should say, "I don't doubt that your act was inevitable for you, but to make it more avoidable by others we propose to sacrifice you to the common good. You may regard yourself as a soldier dying for your country if you like. But the law must keep its promises."

(I am not a consequentialist, much less a big fan of Holmes, but he sure had a way with words.)

Replies from: torekp
comment by torekp · 2010-05-31T18:10:59.815Z · LW(p) · GW(p)

The law must keep its promises? That doesn't sound particularly Utilitarian, or even particularly consequentialist. Deontologists could endorse the focus on the distinction between behaviors that are responsive to praise/blame and those, like the development of cancer, that are not. Or to put it another way, on the distinction between behaviors that are responsive to talk and those that are not. Here, "talk" includes self-talk, which includes much reasoning.

Replies from: Kaj_Sotala, None
comment by Kaj_Sotala · 2010-05-31T23:14:44.314Z · LW(p) · GW(p)

The law must keep its promises? That doesn't sound particularly Utilitarian, or even particularly consequentialist.

In this case, the law must "keep its promises" because of what would follow if it turned out that the law didn't actually matter. That's a very consequentialist notion.

Replies from: torekp, Psychohistorian
comment by torekp · 2010-06-02T02:09:04.734Z · LW(p) · GW(p)

I'm just trying to point out that we can agree with a central point of Yvain's post without endorsing consequentialism. For example, Anthony Ellis offers a deontological deterrence-based justification of punishment.

The same goes for Holmes's quip, even if in his case it was motivated by consequentialist reasoning. Especially if we take "your act was inevitable for you" to be an (overblown) restatement of the simple fact of causal determination of action.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-06-02T02:45:32.242Z · LW(p) · GW(p)

I'm just trying to point out that we can agree with a central point of Yvain's post without endorsing consequentialism.

Oh, right. Yeah, sure - I agree with that.

comment by Psychohistorian · 2010-06-02T05:59:03.122Z · LW(p) · GW(p)

The whole thing is a bit of a half-measure. Even if people's actions are predetermined, and they are not morally accountable for them, we need to hang them anyways because without such an incentive, even more people may be billiard-balled into doing the same things.

Of course, this is all quite beside the point. If you swallow billiard-ball determinism hook, line, and sinker, there's really no point in talking about why we do anything, though, as it works out, there's also little point in objecting to people discussing why we do anything, since it's all going to happen just as it happens no matter what.

ETA: I'm referring to the commonly misconceived notion of determinism that thinks free will cannot exist because the universe is "merely" physical. I mean "billiard-ball determinism" as something of a pejorative, not as an accurate model of how the universe really works. I'm not claiming that a deterministic universe is incompatible with free will; indeed, I believe the opposite.

Replies from: cousin_it, torekp
comment by cousin_it · 2010-06-03T00:03:35.395Z · LW(p) · GW(p)

I'm pretty sure this is wrong. A billiard-ball world would still contain reasons and morals.

Imagine a perfectly deterministic AI whose sole purpose in life is to push a button that increments a counter. The AI might reason as you did, notice its own determinism, and conclude that pushing the button is pointless because "it's all going to happen just as it happens no matter what". But this is the wrong conclusion to make. Wrong in a precisely definable sense: if we want that button pushed and are building an AI to do it, we don't want the AI to consider such reasoning correct.

Therefore, if you care about your own utility function (which you presumably do), this sort of reasoning is wrong for you too.

Replies from: Psychohistorian, Psychohistorian
comment by Psychohistorian · 2010-06-03T05:51:57.885Z · LW(p) · GW(p)

I was evidently unclear. When I say "billiard-ball determinism" I mean the caricature of determinism many people think of, the one in which free will is impossible because everything is "merely" physical. If no decision were "free," any evaluative statement is pointless. It would be like water deciding whether or not it is "right" to flow downhill - it doesn't matter what it thinks, it's going to happen.

I agree that this is not an accurate rendition of reality. I just find it amusing that people who do think it's an accurate rendition of reality still find the free-will debate relevant. If there is no free will in that sense, there is no point whatsoever to debating it, nor to discussing morality, because it's a done deal.

comment by Psychohistorian · 2010-06-03T05:55:20.476Z · LW(p) · GW(p)

I would also add that this example assumes free will - if the AI can't stop pushing the button, it doesn't really matter what it thinks about its merits. If it can, then free will is not meaningless, because it just used it.

Replies from: cousin_it
comment by cousin_it · 2010-06-03T08:45:07.742Z · LW(p) · GW(p)

I'm not sure what exactly you mean by "can't". Imagine a program that searches for the maximum element of an array. From our perspective there's only one value the program "can" return. But from the program's perspective, before it's scanned the whole array, it "can" return any value. Purely deterministic worlds can still contain agents that search for the best thing to do by using counterfactuals ("I could", "I should"), if these agents don't have complete knowledge of the world and of themselves. The concept of "free will" was pretty well-covered in the sequences.

Replies from: Psychohistorian
comment by Psychohistorian · 2010-06-03T15:00:55.496Z · LW(p) · GW(p)

You're right, but you're not disagreeing with me. My original statement assumed an incorrect model of free will. You are pointing out that a correct model of free will would yield different results. This is not a disputed point.

Imagine you have an AI that is capable of "thinking," but incapable of actually controlling its actions. Its attitude towards its actions is immaterial, so its beliefs about the nature of morality are immaterial. This is essentially compatible with the common misconception of no-free-will-determinism.

My point was that using an incorrect model that decides "there is no free will" is a practical contradiction. Pointing out that a correct model contains free-will-like elements is not at odds with this claim.

Replies from: cousin_it, adamisom
comment by cousin_it · 2010-06-03T22:51:35.073Z · LW(p) · GW(p)

Yes, I misunderstood your original point. It seems to be correct. Sorry.

comment by adamisom · 2012-04-26T07:27:24.429Z · LW(p) · GW(p)

Psychohistorian disagrees that cousin_it was disagreeing with him.

Very cute ;)

comment by torekp · 2010-06-02T23:57:47.923Z · LW(p) · GW(p)

Is billiard-ball determinism a particular variant? If so, what does the billiard-ball part add?

comment by [deleted] · 2019-12-20T22:48:12.948Z · LW(p) · GW(p)
The law must keep its promises? That doesn't sound particularly Utilitarian

Granting a bit of poetic license and interpretive wiggle-room to Holmes: if we are *rule* utilitarians and the (then-current) laws are the utility-maximizing rules and the legal system is tasked with enforcing those rules, enforcing them—keeping the promise—is the rule-utilitarian'ly morally required thing to do.

I think that's a utilitarian interpretation that does excessive violence neither to Holmes' quote nor to the concept of utilitarianism.

comment by pjeby · 2010-06-01T01:52:23.672Z · LW(p) · GW(p)

Telling a lazy person "Get up and do some work, you worthless bum," very well might cure the laziness.

That depends a lot on whether or not the reason they're not working is because they already feel they're worthless... in which case the result isn't likely to be an improvement.

Replies from: Ivan_Tishchenko
comment by Ivan_Tishchenko · 2010-06-01T17:55:27.960Z · LW(p) · GW(p)

yes, but this still does not classify their laziness as a disease, does it?

Replies from: pjeby, None
comment by pjeby · 2010-06-01T18:06:03.911Z · LW(p) · GW(p)

yes, but this still does not classify their laziness as a disease, does it?

Maybe you should read the article again, or the previous articles on definitions and question-dissolving, because you seem to have missed the part where "is it a disease?" isn't a real question.

"Disease" is just a node in your classification graph - it doesn't have any real existence in the outside world. It's a bit like an entry in a compression algorithm's lookup table. It might contain an entry for 'Th' when compressing text, because a capital T is often followed by a lower-case 'h'. But this doesn't mean anything - it's just an artifact of the compression process.

And so is the idea of a "disease".
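A minimal sketch of that lookup-table analogy, with an invented two-entry table (real compressors build such tables from the data's own statistics):

```python
# Toy dictionary compressor for the 'Th' analogy. The two-entry table is
# invented for illustration.

TABLE = {"Th": "\x01", "e ": "\x02"}  # frequent digraphs -> 1-byte codes

def compress(text):
    for pattern, code in TABLE.items():
        text = text.replace(pattern, code)
    return text

def decompress(blob):
    for pattern, code in TABLE.items():
        blob = blob.replace(code, pattern)
    return blob

s = "The Thing"
assert decompress(compress(s)) == s
assert len(compress(s)) < len(s)
# The entry for 'Th' earns its keep because T is usually followed by h;
# nothing in the world *is* a 'Th', just as no extra fact in the world
# settles whether a condition *is* a "disease".
```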

comment by [deleted] · 2011-10-24T13:33:01.841Z · LW(p) · GW(p)

Well, I won't much sympathize with them, but I would offer them a medical treatment, if it existed and they asked for it.

comment by cousin_it · 2010-06-06T19:43:15.886Z · LW(p) · GW(p)

Here's a perfect illustration: Halfbakery discusses the idea of a drug for alleviating unrequited love. Many people speak out against the idea, eloquently defending the status quo for no particular reason other than it's the status quo. I must be a consequentialist, because I'd love to have such a drug available to everyone.

Replies from: NancyLebovitz, TropicalFruit
comment by NancyLebovitz · 2010-06-07T10:17:45.050Z · LW(p) · GW(p)

Thanks for the link-- very entertaining discussion.

I don't think anyone came out explicitly with the idea that unrequited love works well in some people's lives and badly in others, and people would have their own judgement about whether to take a drug for it.

Instead, at least the anti-drug contingent reacted as though the existence of the drug meant that unrequited love would go away completely.

For another example, see The End of My Addiction, a book by a cardiologist who became an alcoholic and eventually found that baclofen, a muscle relaxant, eliminated the craving and also caused him to quit being a shopaholic. He's been trying to get a study funded to see whether there's solid evidence that high doses of the drug undo addictions, but there isn't sufficient interest. It isn't just that the drug is off patent, it's that most people don't see alcohol craving as a problem in itself.

Replies from: cupholder
comment by cupholder · 2010-06-07T14:53:08.201Z · LW(p) · GW(p)

He's been trying to get a study funded to see whether there's solid evidence that high doses of the drug undo addictions, but there isn't sufficient interest.

There are a few randomized trials of baclofen, if those count:

  • Addolorato et al. 2006. 18 drinkers got baclofen, 19 got diazepam (the 'gold standard' treatment, apparently). Baclofen performed about as well as diazepam.

  • Addolorato et al. 2007. 42 drinkers got baclofen, 42 got a placebo. More baclofen patients remained abstinent than placebo patients, and the baclofen takers stayed abstinent longer (both results were statistically significant).

  • Assadi et al. 2003. 20 opiate addicts got baclofen, 20 got a placebo. (Statistically) significantly more of the baclofen patients stayed on the treatment, and lessened depressive & withdrawal symptoms. The baclofen patients also did insignificantly better on 'opioid craving and self-reported opioid and alcohol use.'

  • Shoptaw et al. 2003. 35 cokeheads got baclofen, 35 a placebo. 'Univariate analyses of aggregates of urine drug screening showed generally favorable outcomes for baclofen, but not at statistically significant levels. There was no statistical significance observed for retention, cocaine craving, or incidence of reported adverse events by treatment condition.'

  • Heinzerling et al. 2006. Just found this one: 25 meth addicts got baclofen, 26 got gabapentin, and 37 got a placebo. Going by the abstract, across the whole sample, neither baclofen nor gabapentin beat the placebo, but an after-the-fact statistical analysis suggested that baclofen had a significantly stronger effect than placebo among the patients who were stricter about taking the baclofen.

  • Franklin et al. 2009. Editing in this one too: 30 smokers who were thinking of quitting took baclofen, 30 took a placebo. Both groups smoked progressively fewer cigarettes a day during the trial, but the baclofen users had a significantly steeper decline than the placebo users. However, they did not report significantly less craving feelings.

  • Kahn et al. 2009. Last one, I promise: 80 cocaine addicts from around the USA got baclofen and 80 got a placebo. There were no statistically significant differences in treatment retention, cocaine use, measures of craving and withdrawal, or any of the other things the researchers tested for, except on a couple of post hoc tests. The researchers hint that the dose used (60mg) might have been too small.

Most of these studies are a few years old now, and there are also case reports, uncontrolled trials like this one and studies done on rodents. I'm kind of surprised no one's tried doing a larger scale trial of baclofen for alcohol. I haven't looked at these in detail - maybe the effect is only statistically significant and not clinically significant, or there's some subtle methodological issue I'm missing.

(Edited this comment a few times because Chrome helpfully posted it prematurely for me.)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-07T15:15:23.642Z · LW(p) · GW(p)

Thanks for looking this up.

Unless I've missed something, only the third study might have used baclofen in high enough dosages to test Ameisen's hypothesis.

From his FAQ:

Q How much baclofen does a patient need?

A It varies from patient to patient, depending probably on physical size, extent of dependency, and other factors. Studies have shown that animals lose all motivation to consume addictive substances when they are given baclofen in the range of 1 to 5 milligrams per kilogram (2.2 pounds) of body weight.

The evidence from my case and other patients is that the threshold dose needed to break the cycle of addictive craving, preoccupation, and obsessive thoughts is higher than the maintenance dose needed to keep a patient completely free from addiction.

Replies from: cupholder
comment by cupholder · 2010-06-08T14:38:45.950Z · LW(p) · GW(p)

Good point. I hadn't read Ameisen's FAQ; I just went running off to Google Scholar as soon as I read your comment. I might come back later and see what doses were used in each of those studies and see whether the studies with higher doses had more positive results.

comment by TropicalFruit · 2021-09-02T01:19:21.236Z · LW(p) · GW(p)

I think this is actually a case where the status quo bias shines, and the natural pull towards conservatism may be warranted.

Do we have any way of reasonably predicting the social effects of curing unrequited love? What if it’s serving some behavioral or mating function that’s critical to the social order?

I don’t think it’s appropriate to casually mess with complex systems like human mating, getting rid of something as widespread as unrequited love because it’s uncomfortable. That seems dangerous, and it seems like caution is the correct approach with systems that broad.

comment by sketerpot · 2010-05-31T03:14:03.361Z · LW(p) · GW(p)

Someone once quipped about a Haskell library that "You know it's a good library when just reading the manual removes the problem it solves from your life forever." I feel the same way about this article. That's a compliment, in case you were wondering.

The one criticism I would make is that it's long, and I think you could spread this to other sites and enlighten a lot of people if you wrote an abridged version and perhaps illustrated it with silly pictures of cats.

Replies from: Yvain, rwallace
comment by Scott Alexander (Yvain) · 2010-05-31T21:34:39.685Z · LW(p) · GW(p)

Thank you very much. That's exactly the feeling I hoped people would have if this dissolved the question, and it's great to hear.

I can't think of how to make this shorter without removing content (especially since this is already pitched at an advanced audience - anything short of LW and I'd have to explain status quo biases, preference reversal tests, and actually justify determinism).

I can, however, give you a lolcat if you want one.

comment by rwallace · 2010-05-31T04:40:20.328Z · LW(p) · GW(p)

I ran a Google search for the line you quoted, but got no results; I'd be interested to know what the original author meant by it. I don't suppose you have any links handy?

Replies from: sketerpot
comment by sketerpot · 2010-05-31T20:02:01.334Z · LW(p) · GW(p)

It was on a wiki page that was lost in the shuffle years ago. HOWEVER! I managed to track down a copy of the page, and hosted it myself. Here's the one I was paraphrasing:

stepcut: You know a library is good when just reading about it removes the particular task it performs from your life altogether.

It's a pretty funny quotes page, if you like Haskell. And I wouldn't feel right if I didn't include my favorite thing from that page, concerning the proper indentation of C code:

"In My Egotistical Opinion, most people's C programs should be indented six feet downward and covered with dirt." -- Blair P. Houghton

Having fought far too many segfaults, and been irritated by the lack of common data structures in libc, I can only agree.

Replies from: phob, rwallace
comment by phob · 2010-06-02T15:47:05.571Z · LW(p) · GW(p)

Thank you for this.

comment by rwallace · 2010-06-01T02:24:41.141Z · LW(p) · GW(p)

Thanks! Excellent reading.

comment by clarissethorn · 2010-06-07T13:36:36.714Z · LW(p) · GW(p)

This is a really interesting post and I will most likely respond on my own blog sometime. In the meantime, I haven't read the whole comment thread, but I don't think this article has been linked yet (I did search for the title): http://www.nytimes.com/2010/01/10/magazine/10psyche-t.html?pagewanted=all

It's called "The Americanization of Mental Illness". Definitely worth a read; in particular, here is an excellent quotation:

It turns out that those who adopted biomedical/genetic beliefs about mental disorders were the same people who wanted less contact with the mentally ill and thought of them as more dangerous and unpredictable. This unfortunate relationship has popped up in numerous studies around the world. In a study conducted in Turkey, for example, those who labeled schizophrenic behavior as akil hastaligi (illness of the brain or reasoning abilities) were more inclined to assert that schizophrenics were aggressive and should not live freely in the community than those who saw the disorder as ruhsal hastagi (a disorder of the spiritual or inner self). Another study, which looked at populations in Germany, Russia and Mongolia, found that “irrespective of place . . . endorsing biological factors as the cause of schizophrenia was associated with a greater desire for social distance.”

Even as we have congratulated ourselves for becoming more “benevolent and supportive” of the mentally ill, we have steadily backed away from the sufferers themselves. It appears, in short, that the impact of our worldwide antistigma campaign may have been the exact opposite of what we intended.

Replies from: clarissethorn
comment by clarissethorn · 2010-06-07T13:43:26.606Z · LW(p) · GW(p)

Also: I recently saw a list of diseases ranked by doctors from most to least stigmatized; the list was accompanied by analysis that claimed that more respected doctors work on less stigmatized illnesses. I saw it on the Internet but alas, I can't find it now. I did find this, though: http://healthpolicy.stanford.edu/news/internet_use_can_help_patients_with_stigmatized_illness_study_finds_2006127/

comment by colah · 2010-05-31T17:58:51.328Z · LW(p) · GW(p)

Perhaps I'm misunderstanding, but

There are several very reasonable objections to treating any condition with drugs, whether it be a classical disease like cancer or a marginal condition like alcoholism. The drugs can have side effects. They can be expensive. They can build dependence. They may later be found to be placebos whose efficacy was overhyped by dishonest pharmaceutical advertising. They may raise ethical issues with children, the mentally incapacitated, and other people who cannot decide for themselves whether or not to take them. But these issues do not magically become more dangerous in conditions typically regarded as "character flaws" rather than "diseases", and the same good-enough solutions that work for cancer or heart disease will work for alcoholism and other such conditions.

seems to summarise to:

(1) Medical treatments (drugs, surgery, et cetera) for conditions that can be treated in other ways can have negative consequences. (2) But so do those for conditions without other treatments and we use those. (3) Therefore: we should not object to these treatments on the grounds of risks.

I'd question the validity of this argument. Consider a scenario where there are two treatments for a condition: A and B. A has lower risks than B. Where is the flaw in the following argument:

(1) Treating the condition with B has risks. (2) But the treatments used for other conditions have similar risks. (3) Therefore: we should not object to B on the grounds of risks.

The problem with the argument is that it draws a false analogy between this condition (where there is a lower and higher risk treatment) and others where the only treatment is of similar risk to the high risk treatment for this condition.

I'm not saying that people with conditions like obesity shouldn't get medical treatment: there are compelling advantages to it, such as the decreased amount of effort involved and faster progress... But I think that this argument isn't valid.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2010-05-31T21:58:41.724Z · LW(p) · GW(p)

If I understand you right, you're saying that allowing drugs might discourage people from even trying the willpower-based treatments, which provides a cost of allowing drugs that isn't present in diseases without a willpower-based option.

It's a good point and I'm adding it to the article.

comment by Nick_Tarleton · 2010-05-31T02:30:48.734Z · LW(p) · GW(p)

Sort-of nitpick:

The consequentialist model of blame is very different from the deontological model. Because all actions are biologically determined, none are more or less metaphysically blameworthy than others, and none can mark anyone with the metaphysical status of "bad person" and make them "deserve" bad treatment. Consequentialists don't on a primary level want anyone to be treated badly, full stop; thus is it written: "Saddam Hussein doesn't deserve so much as a stubbed toe." But if consequentialists don't believe in punishment for its own sake, they do believe in punishment for the sake of, well, consequences.

I would say "utilitarians" rather than "consequentialists" here; while both terms are vague, consequentialism is generally more about the structure of your values, and there's no structural reason a consequentialist (/ determinist) couldn't consider it desirable for blameworthy people to be punished. (Or, with regard to preventative imprisonment of innocents, undesirable for innocents to be punished, over and above the undesirability of the harm that the punishment constitutes.)

Replies from: Yvain, utilitymonster
comment by Scott Alexander (Yvain) · 2010-05-31T21:42:51.675Z · LW(p) · GW(p)

I installed a mental filter that does a find and replace from "utilitarian" to "consequentialist" every time I use it outside very technical discussion, simply because the sort of people who don't read Less Wrong already have weird and negative associations with "utilitarian" that I can completely avoid by saying "consequentialist" and usually keep the meaning of whatever I'm saying intact.

Less Wrong does deserve better than me mindlessly applying that filter. But you'd need a pretty convoluted consequentialist system to promote blame (and if you were willing to go that far, you could call a deontologist someone who wants to promote states of the world in which rules are followed and bad people are punished, and therefore a consequentialist at heart). Likewise, you could imagine a preference utilitarian who wants people to be punished just because e or a sufficient number of other people prefer it. I'm not sufficiently convinced to edit the article, though I'll try to be more careful about those terms in the future.

Replies from: thomblake, utilitymonster
comment by thomblake · 2010-06-01T20:48:05.133Z · LW(p) · GW(p)

I installed a mental filter that does a find and replace from "utilitarian" to "consequentialist" every time I use it outside very technical discussion,

I, for what it's worth, think this is a good heuristic.

comment by utilitymonster · 2010-06-01T02:02:39.832Z · LW(p) · GW(p)

you'd need a pretty convoluted consequentialist system to promote blame (and if you were willing to go that far, you could call a deontologist someone who wants to promote states of the world in which rules are followed and bad people are punished, and therefore a consequentialist at heart). Likewise, you could imagine a preference utilitarian who wants people to be punished just because e or a sufficient number of other people prefer it.

I'm not sure how complicated it would have to be. You might have some standard of benevolence (how disposed you are to do things that make people happy) and hold that other things being equal, it is better for benevolent people to be happy. True, you'd have to specify a number of parameters here, but it isn't clear that you'd need enough to make it egregiously complex. (Or, on a variant, you could say how malevolent various past actions are and hold that outcomes are better when malevolent actions are punished to a certain extent.)

Also, I don't think you can do a great job representing deontological views as trying to minimize the extent to which rules are broken by people in general. The reason has to do with the fact that deontological duties are usually thought to be agent-relative (and time-relative, probably). Deontologists think that I have a special duty to see to it that I don't break promises in a way that I don't have a duty to see to it that you don't break promises. They wouldn't be happy, for instance, if I broke a promise to see to it that you kept two promises of roughly equal importance. Now, if you think of the deontologists as trying to satisfy some agent-relative and time-relative goal, you might be able to think of them as just trying to maximize the satisfaction of that goal. (I think this is right.) If you find this issue interesting (I don't think it is all that interesting personally), googling "Consequentializing Moral Theories" should get you in touch with some of the relevant philosophy.

comment by utilitymonster · 2010-05-31T10:43:43.558Z · LW(p) · GW(p)

Agreed. (Though I agree with the general structure of your post.)

A better name for your position might be "basic desert skepticism". On this view, no one is intrinsically deserving of blame. One reason is that I don't think the determinism/indeterminism business really settles whether it is OK to blame people for certain things. As I'm sure you've heard, and I'd imagine people have pointed out on this blog, the prospects of certain people intrinsically deserving blame, independently of benefits to anyone, are not much more cheering if everything they do is a function of the outcome of indeterministic dynamical laws.

Another reason is that you can have very similar opinions if you're not a consequentialist. Someone might believe that it is quite appropriate, in itself, to be extra concerned about his own welfare, yet agree with you about when it is a good idea to blame folks.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-06-01T02:40:33.709Z · LW(p) · GW(p)

Another reason is that you can have very similar opinions if you're not a consequentialist. Someone might believe that it is quite appropriate, in itself, to be extra concerned about his own welfare, yet agree with you about when it is a good idea to blame folks.

Hmm? There's no reason a consequentialist can't be extra concerned about his own welfare. (Did I misunderstand this?)

Replies from: utilitymonster
comment by utilitymonster · 2010-06-01T10:57:55.770Z · LW(p) · GW(p)

Well, you clearly could be extra concerned about your own welfare because it is instrumentally more valuable than the welfare of others (if you're happy you do more good than your neighbor, perhaps). Or, you could be a really great guy and hold the view that it's good for great guys to be extra happy. But I was thinking that if you thought that your welfare was extra important just because it was yours you wouldn't count as a consequentialist.

As I was mentioning in the last post, there's some controversy about exactly how to spell out the consequentialist/non-consequentialist distinction. But probably the most popular way is to say that consequentialists favor promoting agent-neutral value. And thinking your welfare is special, as such, doesn't fit that mold.

Still, there are some folks who say that anyone who thinks you should maximize the promotion of some value or other counts as a consequentialist. I think this doesn't correspond as well to the way the term is used and what people naturally associate with it, but this is a terminological point, and not all that interesting.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-01T11:17:04.302Z · LW(p) · GW(p)

Consequentialism doesn't work by reversing the hedonic/deontological error of focusing on the agent, by refusing to consider the agent at all. A consequentialist cares about what happens with the whole world, the agent included. I'd say it's the only correct understanding for human consequentialists to care especially about themselves, though of course not exclusively.

Replies from: utilitymonster
comment by utilitymonster · 2010-06-01T14:16:27.383Z · LW(p) · GW(p)

Consequentialism doesn't work by reversing the hedonic/deontological error of focusing on the agent, by refusing to consider the agent at all. A consequentialist cares about what happens with the whole world, the agent included.

I hope I didn't say anything to make you think I disagree with this.

I'd say it's the only correct understanding for human consequentialists to care especially about themselves, though of course not exclusively.

I noted that there might be instrumental reasons to care about yourself extra if you're a consequentialist. But how could one outcome be better than another, just because you, rather than someone else, received a greater benefit?

Example: you and another person are going to die very soon and in the same kind of way. There is only enough morphine for one of you. Apart from making one of your deaths less painful, nothing relevant hangs on who gets the morphine.

I take it that it isn't open to the consequentialist to say, "I should get the morphine. It would be better if I got it, and the only reason it would be better is because I was me, rather than him, who received it."

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-01T14:40:23.261Z · LW(p) · GW(p)

Your preference is not identical with the other person's preference. You prefer to help yourself more than the other person, and the other person similarly. There is no universal moral. (You might want to try the metaethics sequence.)

Replies from: utilitymonster
comment by utilitymonster · 2010-06-01T15:26:32.405Z · LW(p) · GW(p)

Our question is this: is there a consequentialist view according to which it is right for someone to care more about his own welfare, as such? I said there is no such view, because consequentialist theories are agent-neutral (i.e., a consequentialist value function is indifferent between outcomes that are permutations of each other with respect to individuals and nothing else; switching Todd and Steve can't make an outcome better, if Steve ends up with all of the same properties as Todd and vice versa).

I agree that a preference utilitarian could believe that in a version of the example I described, it could better to help yourself. But that is not the case I described, and doesn't show that consequentialists can care extra about themselves, as such. My “consequentialist” said:

"I should get the morphine. It would be better if I got it, and the only reason it would be better is because I was me, rather than him, who received it."

Yours identifies a different reason. He says, "I should get the morphine. This is because there would be more total preference satisfaction if I did this." This is a purely agent-neutral view.

My “consequentialist” is different from your consequentialist. Mine doesn't think he should do what maximizes preference satisfaction. He maximizes weighted preference satisfaction, where his own preference satisfaction is weighted by a real number greater than 1. He also doesn't think his preferences are more important in some agent-neutral sense. He thinks that other agents should use a similar procedure, weighing their own preferences more than the preferences of others.

You can bring out the difference between the two views by considering a case where all that matters to the agents is having a minimally painful death. My “consequentialist” holds that even in this case, he should save himself (and likewise for the other guy). I take it that on the view you're describing, saving yourself and saving the other person are equally good options in this new case. Therefore, as I understand it, the view you described is not a consequentialist view according to which agents should always care more about themselves, as such. Perhaps we are engaged in a terminological dispute about what counts as caring about your own welfare more than the welfare of others, just because it is yours?

Replies from: jimrandomh
comment by jimrandomh · 2010-06-01T16:10:58.379Z · LW(p) · GW(p)

I said there is no such view, because consequentialist theories are agent-neutral (i.e., a consequentialist value function is indifferent between outcomes that are permutations of each other with respect to individuals and nothing else; switching Todd and Steve can't make an outcome better, if Steve ends up with all of the same properties as Todd and vice versa)

I don't think this is a necessary property for a value system to be called consequentialist. Value systems can differ in which properties of agents they care about, and a lot of value systems single the agent that implements them out as a special case.

Replies from: utilitymonster
comment by utilitymonster · 2010-06-01T16:36:29.996Z · LW(p) · GW(p)

This is where things get murky. The traditional definition is this:

Consequentialism: an act is right if no other option has better consequences

You can say that it is consistent with consequentialism (in this definition) to favor yourself, as such, only if you think that situations in which you are better off are better than situations in which a relevantly similar other is better off. Unless you think you're really special, you end up thinking that the relevant sense of "better" is relative to an agent. So some people defend a view like this:

Agent-relative consequentialism: For each agent S, there is a value function Vs such that it is right for S to A iff A-ing maximizes value relative to Vs.

When a view like this is on the table, consequentialism starts to look pretty empty. (Just take the value function that ranks outcomes solely based on how many lies you personally tell.) So some folks think, myself included, that we'd do better to stick with this definition:

Agent-neutral consequentialism: There is an agent-neutral value function v such that an act is right iff it maximizes value relative to v.

I don't think there is a lot more to say about this, other than that paradigm historical consequentialists rejected all versions of agent-relative consequentialism that allowed the value function to vary from person to person. Given the confusion, it would probably be best to stick to the latter definition or always disambiguate.
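The permutation-invariance idea behind these definitions can be made concrete with a toy sketch (my own illustration, not anything from the thread; the outcome representation and function names are assumptions). An agent-neutral value function is indifferent to swapping which individual gets which welfare level, while an agent-relative function that weights the evaluator's own welfare is not:

```python
# Outcomes are modeled as maps from person -> welfare level.

def agent_neutral_value(outcome):
    """Agent-neutral: total welfare. Permutation-invariant by construction."""
    return sum(outcome.values())

def agent_relative_value(outcome, me, self_weight=2.0):
    """Agent-relative: weights the evaluator's own welfare by self_weight > 1."""
    return sum(w * (self_weight if person == me else 1.0)
               for person, w in outcome.items())

outcome = {"todd": 3, "steve": 5}
swapped = {"todd": 5, "steve": 3}  # same welfare levels, individuals permuted

# The agent-neutral function is indifferent to the swap...
assert agent_neutral_value(outcome) == agent_neutral_value(swapped)
# ...but Todd's agent-relative function ranks the two outcomes differently.
assert agent_relative_value(outcome, "todd") != agent_relative_value(swapped, "todd")
```

On this toy model, the morphine case above is exactly the permuted pair: the agent-neutral evaluator is indifferent, while each agent-relative evaluator prefers the outcome where he gets the morphine.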

Replies from: jimrandomh
comment by jimrandomh · 2010-06-01T16:55:51.796Z · LW(p) · GW(p)

When a view like this is on the table, consequentialism starts to look pretty empty. (Just take the value function that ranks outcomes solely based on how many lies you personally tell.)

Consequentialist value systems are a huge class; of course not all consequentialist value systems are praiseworthy! But there are terrible agent-neutral value systems, too, including conventional value systems with an extra minus sign, Clippy values, and plenty of others.

Here's a non-agent-neutral consequentialist value system that you might find more praiseworthy: prefer the well-being of friends and family over strangers.

Replies from: utilitymonster
comment by utilitymonster · 2010-06-01T17:31:54.081Z · LW(p) · GW(p)

Consequentialist value systems are a huge class; of course not all consequentialist value systems are praiseworthy! But there are terrible agent-neutral value systems, too, including conventional value systems with an extra minus sign, Clippy values, and plenty of others.

Yeah, the objection wasn't supposed to be that because there was an implausible consequentialist view on that definition of "consequentialism", it was a bad definition. The objection was that pretty much any maximizing view could count as consequentialist, so the distinction isn't really worth making.

comment by gwern · 2010-05-30T23:31:09.495Z · LW(p) · GW(p)

Others complain that the existence of an easy medical solution prevents people from learning personal responsibility. But here we see the status-quo bias at work, and so can apply a preference reversal test. If people really believe learning personal responsibility is more important than being not addicted to heroin, we would expect these people to support deliberately addicting schoolchildren to heroin so they can develop personal responsibility by coming off of it. Anyone who disagrees with this somewhat shocking proposal must believe, on some level, that having people who are not addicted to heroin is more important than having people develop whatever measure of personal responsibility comes from kicking their heroin habit the old-fashioned way.

Now that's a good use of the reversal test!

Replies from: SilasBarta, mindviews
comment by SilasBarta · 2010-05-31T18:05:42.957Z · LW(p) · GW(p)

I remember being in a similar argument myself. I was talking with someone about how I had (long ago!) deliberately started smoking to see if quitting would be hard [1], and I found that, though there were periods where I'd had cravings, it wasn't hard to distract myself, and eventually they went away and I was able to easily quit.

The other person (who was not a smoker and so probably didn't take anything personally) said, "Well, sure, in that case it's easy to quit smoking, because you went in with the intent to prove it's easy to quit. Anyone would find it easy to stay away from cigarettes in that case!"

So I said, "Then shouldn't that be the anti-smoking tactic that schools use? Make all students take up smoking, just to prove they can quit. Then, everyone will grow up with the ability to quit smoking without much effort."

[1] and many, many people have told me this is insane, so no need to remind me

Replies from: Yvain, NancyLebovitz, gwern
comment by Scott Alexander (Yvain) · 2010-05-31T21:44:56.192Z · LW(p) · GW(p)

I once met someone who started smoking for the same reason you did and is still addicted, so you couldn't have been at that much of an advantage.

I am torn between telling you you're insane and suggesting you take up crack on a sort of least convenient possible world principle.

Replies from: SilasBarta
comment by SilasBarta · 2010-05-31T23:26:21.843Z · LW(p) · GW(p)

Eh, I don't claim to be immune from addiction and addiction-like cravings. It's just that, AFAICT, I can only get addicted (in the broader sense of the term) to legal stuff. See this blog post for further information. I still struggle with e.g. diet and excessive internet/computer usage.

And, in fairness, maybe I needed to smoke more to make it a meaningful test, though I did get to the point where I had cravings.

comment by NancyLebovitz · 2010-06-02T05:44:04.270Z · LW(p) · GW(p)

Your experiment seems to me to prove less than you'd hope about people in general-- afaik there's metabolic variation in how people react to nicotine withdrawal.

comment by gwern · 2010-05-31T20:13:11.414Z · LW(p) · GW(p)

I'm afraid I don't have anywhere near as awesome a personal story as that; I can say that my family seems to have a tradition of making kids drink some beer or alcohol a few times, though, and it seems to work.

Replies from: SilasBarta
comment by SilasBarta · 2010-05-31T20:30:51.572Z · LW(p) · GW(p)

Right, because no one actually likes the taste of alcohol, nor the inhalation of smoke; and then eventually they decide to take up drinking, or smoking, because of the psychoactive effects such as relaxation, loss of inhibitions, or getting high.

Just kidding, I'm not starting that debate again! ;-)

comment by mindviews · 2010-05-31T02:40:02.436Z · LW(p) · GW(p)

I don't think that's a good example. For the status-quo bias to be at work, it would have to be the case that we think it's worse for people to have either less or more personal responsibility than they do now (i.e., the status-quo is a local optimum). I'm not sure anyone would argue that having more personal responsibility is bad, so the status-quo bias wouldn't be in play and the preference reversal test wouldn't apply. (A similar argument works for the current rate of heroin addiction not being a local optimum.)

I think the problem in the example is that it mixes the axes for our preferences for people to have personal responsibility and our preferences for people not to be addicted to heroin. So we have a space with at least these two dimensions. But I'll claim that personal responsibility and heroin use are not orthogonal.

I think the real argument is in the coupling between personal responsibility and heroin addiction. Should we have more coupling or less coupling? The drug in this example would make for less coupling. So let's do a preference reversal test and see if we had a drug that made your chances of heroin addiction more coupled to your personal responsibility, would you take that? I think that would be a valid preference reversal test in this case if you think the current coupling is a local optimum.

comment by Aurini · 2010-06-02T04:43:06.623Z · LW(p) · GW(p)

"But the most convincing explanation I have read for why so many people are opposed to medical solutions for social conditions is a signaling explanation by Robin Hans...wait! no!...by Katja Grace."

Yeah! The hell with that Robin Hanson guy! He's nothing but a signaller trying to signal that he's better than signalling by talking about signals!

I am so TOTALLY not like that.

;)

Great article, by the way; I just can't resist metahumour though.

I recently wrote a blog article arguing that 95% of psychology and psychiatry is snake-oil and pseudoscience; primarily I was directing my ire at the incoherency of much of it, but I had the implicit premise of dismissing the types of 'conditions' you wrote about as pathologizing the mundane.

While on the one hand I object to much of this classifying of conditions (if the government ever manages to mindprobe me, I know they'll classify me as an alcoholic paranoid with schizoid tendencies, something I see nothing wrong with), on the other hand you present a powerful argument of "Hey, if it works, what's wrong with that?" (The day they invent a workout pill is the day I stop going for bloody stupid jogs.)

I'd wager that most people here are contrarian thinkers to some degree - that they're distrustful of over-diagnosis, over-medication, etc - but I'd also guess that your stance is something most of us do agree with, and it's important to segregate the "Psychology isn't founded upon empirical data" argument from the "Brain Pills violate the nobility of the human condition" argument.

I plan to amend my blog post with your excellent distillation.

comment by neq1 · 2010-06-02T04:44:58.144Z · LW(p) · GW(p)

We should blame and stigmatize people for conditions where blame and stigma are the most useful methods for curing or preventing the condition, and we should allow patients to seek treatment whenever it is available and effective.

I think you said it better earlier when you talked about whether the reduction in incidence outweighs the pain caused by the tactic. For some conditions, if it wasn't for the stigma there would be little-to-nothing unpleasant about it (and we wouldn't need to talk about reducing incidence).

I agree with your general principle, but think it's unlikely that blame and stigma are ever the most useful methods. We should be careful to avoid the false dichotomy between the "stop eating like a pig" tactic and fat acceptance.

Sandy's husband is an asshole, who probably defends his asshole behavior by rationalizing that he's trying to help her. He's not really trying to help her (or if he is, he knows little about psychology (or women)).

Blame and judgment are such strong signaling devices that I think people rarely use them for the benefit of the one being judged. If it happens to be the best tactic for dealing with the problem, well, that would be quite a coincidence.

--

I liked your post a lot, in case that wasn't clear. I think you are focusing on the right kinds of questions.

comment by spencerth · 2010-05-31T19:18:43.683Z · LW(p) · GW(p)

Very good article. One thing I'd like to see covered is conditions that are "treatable" with good lifestyle choices, but whose burden is so onerous that no one would consider them acceptable. Let's say you have a genetic condition which causes you to gain much more weight (5x, 10x - the number is up to the reader) than a comparable non-affected person. So much that the only way you can prevent yourself from becoming obese is to strenuously exercise 8 hours a day. If a person chooses not to do this, are they really making a "bad" choice? Is it still their fault? In this scenario, 1/3 of your day/life has become about treating this condition. I doubt too many people would honestly choose to do the "virtuous" thing in this situation.

Second thing I'd like covered: things that were inflicted on you without your consent. How much blame can you take for, let's say, your poor job prospects if your parents beat you severely every day (giving you slight brain damage of some kind, but not enough for it to be casually noticeable), fed you dog food and dirt sandwiches until you were 18, or forced you to live in an area where bullets flew into your room while you slept, making you wake up in terror? There's plenty of evidence for the potentially devastating and permanent effects of trauma, poor childhood nutrition, and stress. Sure, some people manage to live like that and come out of it OK, but can everyone? Is it still right to hold someone so treated /morally/ responsible for doing poorly in their life?

Replies from: Yvain, None, soreff, dhill
comment by Scott Alexander (Yvain) · 2010-05-31T21:27:20.050Z · LW(p) · GW(p)

If there's some cure for the genetic condition, naturally I'd support that. Otherwise, I think it would fall under the category of "the cost of the blame is higher than the benefits would be." It's not part of this person's, or my, or society's, or anyone's preferences that this person exercise eight hours a day to keep up ideal weight, so there's no benefit to blaming them until they do.

As for the second example, regarding "is it still right to hold someone so treated /morally/ responsible for doing poorly in their life", this post could be summarized as "there's no such thing as moral responsibility as a primitive object". These people aren't responsible if they're poor, just like a person with a wonderful childhood isn't responsible if they're poor, but if we have evidence that holding them responsible helps them build a better life, we might as well treat them as responsible anyway.

(the difference, I think, is that we have much more incentive to help the person with the terrible childhood, because one could imagine that this person would respond well to help; the person with the great childhood has already had a lot of help and we have no reason to think that giving more will be of any benefit)

Replies from: JenniferRM
comment by JenniferRM · 2010-06-01T19:16:35.613Z · LW(p) · GW(p)

I agree on the case of genetic obesity, but my answer may be different for the case of an extremely impoverished childhood. Part of my response is reflected in the fact that neither I nor anyone I personally know grew up in that level of poverty, so in imagining the poverty situation I have to counterfactually modify the world, and I'm not sure how to do it.

In one imaginary scenario I would find someone facing malnutrition, violently abusive parents, and mental retardation, in an environment with no effective police services, and imagine myself, in the actual world, helping them from a distance as a stranger. This is basically "how to help the comprehensively poor as an external intervention". There are a lot of people like this on the planet, and helping them is a really hard problem that is not very imaginary at all. I don't think I have any kind of useful answer that fits in this space and meshes with the themes in the OP.

A second imaginary scenario would be that I am also in the same general situation but only slightly better off. Perhaps there is rampant crime and poverty but my parents gave me minimally adequate nutrition and they weren't abusive (yet I magically have the same planning capacities that grow from really having been well fed and then spending decades in personal learning).

In this case there are no substantial resources with which to help, and my resources probably will mostly be devoted to my own survival and marginal improvement. However, talk is cheap, so I could probably follow some sort of "talk strategy". Within that scope, I would try to avoid a "blaming strategy" because those are generally counterproductive. Instead I would probably do my best to help my neighbor engage in mindfully pro-social conscientiousness, because that's something that predicts positive life outcomes even assuming low IQ, and it hooks you into social processes that generate and distribute positive externalities, which are in desperately short supply in this scenario.

I think I might try to find a "large raft" Buddhist temple or a Christian church that was focused on acts rather than faith... or really basically any philosophically organized pragmatic self-help community that implements what Nietzsche might criticize as a "slave morality". Picking the pragmatically best church from among those available would probably be the largest opportunity for a value add by myself (I'd be looking, pretty much, for long-term members who started poor but became rich and generous due to the community, its doctrines, and its practices).

In any case, it would be relatively cheap to be part of this community for my own benefit and I could invite the tragic young man along every week as a way of exposing him to useful memes and opportunities to be socially reprocessed into a relatively moral and productive person and perhaps be given a slightly challenging job by someone in the church as an act of partial charity.

On the other hand, imagine that the young man had learned hostility and violence as a life coping strategy (from being surrounded by it). I could be subject to this violence. If I had body guards, or a mech suit, or a magic ring, I might still be able to safely help him without exposing myself to costs larger than the benefits I was trying to bring into his life... but in that case I could probably do more good for other people and in the meantime we were stipulating general poverty, so I wouldn't have any "self protecting wealth" in the first place.

Thus if his personality was so broken that he was dangerous to try to help, and I was so poor that I couldn't change contexts to avoid him, I'd probably follow just enough of a "social blaming" strategy to drive him away from me and recruit allies in protection in case he starts engaging in predatory rent-seeking as a survival strategy. If he tried to do this to me, and I didn't have enough allies to drive him off, and he didn't have too many allies to seek revenge, I might kill him as a way of avoiding victimization for myself and others (prison would lead to less aggregate harm, but it's way more expensive). Hopefully I would be able to do that without malice or making excuses or otherwise damaging my ability to see the world clearly. Perhaps I could self-medicate with a "talking cure" like apologizing to his dead body or "confessing my sins" to a priest, and maybe trying to do something good for his mother as an act of contrition?

In practice, for most of the evolutionary history of humanity, it appears that a substantial portion of the female population has been in precisely the nightmare scenario I've ignored so far where there are basically no options. For much of evolutionary history a substantial fraction of women were a variation of chattel slave called a "wife", held in bondage by an "abused and abusive" man who grew up in enormous material poverty but had the strong loyalty of his male relatives, with alliances to other slave holding groups of men, where the women lacked the ability to leave or to find any kind of better context. Remembering this helps me understand why early libertarians and early radical feminists were in such strong agreement. It also helps to explain why people seem evolved to get such pleasure from blaming common enemies (and are biased towards being systematically insane when it comes to sex differences and romantic relationships).

Talking about this kind of stuff can be really gut churning and I imagine it triggers all kinds of (currently) obsolete instincts in a way that the abstractions of meta-morality do not... but to not talk about it seems likely to ignore some pretty major causal factors when it comes to understanding and debugging human craziness in our present state of enormous wealth.

comment by [deleted] · 2010-05-31T23:36:22.018Z · LW(p) · GW(p)

I think that's the best model for semi-voluntary problems -- it's usually not the case that no amount of effort would solve them, but that they would need much more effort than the average person. People with poor parents can become rich, but they have to work much harder than people with rich parents for the same result. If you're doing the same amount of work as an average person, I'd say you deserve as much credit as an average person.

comment by soreff · 2010-05-31T19:49:19.036Z · LW(p) · GW(p)

One thing I'd like to see covered are conditions that are "treatable" with good lifestyle choices, but whose burden is so onerous that no one would consider them acceptable.

Excellent point. This can even be made considerably stronger: The whole health care debate was about ~15% of our economy (I'm writing from the U.S.). For any given individual, working a 40 hour week, the equivalent cost would be to burden them with ~15% of their working hours with some lifestyle choice (whether 6 hours per week of exercise or some other comparably time consuming action). Lifestyle changes can be damned expensive in terms of opportunity costs.

comment by dhill · 2010-06-01T10:09:49.840Z · LW(p) · GW(p)

Let's say you have a genetic condition which causes you to gain much more weight (5x, 10x - the number is up to the reader) than a comparable non-affected person.

...also, everything may be both a problem and an opportunity. You could consider yourself lucky if you wanted to become a body-builder. Some quirks can actually become an advantage. I would say a real solution (when available) is more robust than hiding from a problem (of wrong perception).

Replies from: AspiringKnitter
comment by AspiringKnitter · 2012-03-13T01:01:33.182Z · LW(p) · GW(p)

The OP probably meant adipose tissue, not muscle.

Replies from: army1987
comment by A1987dM (army1987) · 2012-03-13T11:22:27.050Z · LW(p) · GW(p)

A sumo wrestler?

comment by Kutta · 2010-05-30T22:03:58.122Z · LW(p) · GW(p)

Great post.

We should blame and stigmatize people for conditions where blame and stigma are the most useful methods for curing or preventing the condition, and we should allow patients to seek treatment whenever it is available and effective.

I think that this rule contains the sub-rule "condemn conditions such that people are aware of the actions that lead to them" almost all the time, because our condemnation cannot possibly create positive externalities otherwise. It's similar to how jails provide no deterrence if you don't know what actions get you in jail.

Other thoughts:

  • What is condemned and not condemned should change over time as people acquire information. Maybe several years from now there'll be a positive payoff to condemning people who don't take vitamin D. In the long run, all conditions caused in great part by personal choice should be condemned, and only victims of meteor impacts and vacuum metastability events should be sympathized with.

  • The chance that a certain condition is actually fully independent of socially pressured personal decision-making has to be considered. For example, if the vast majority of people are genetically immune to a disease we think is triggered by certain personal actions, then our condemnation would merely hurt the diseased people and generate no positive externalities.

Replies from: JenniferRM
comment by JenniferRM · 2010-06-01T02:49:41.946Z · LW(p) · GW(p)

You're homing in on the one fuzzy spot in this essay that jumped out at me, but I don't think you're addressing it head on because you (as well as Yvain) seem to be assuming that there are, in point of fact, many situations where condemnation and lack of sympathy will have net positive outcomes.

Yvain wrote:

Yelling at a cancer patient, shouting "How dare you allow your cells to divide in an uncontrolled manner like this; is that the way your mother raised you??!" will probably make the patient feel pretty awful, but it's not going to cure the cancer. Telling a lazy person "Get up and do some work, you worthless bum," very well might cure the laziness. The cancer is a biological condition immune to social influences; the laziness is a biological condition susceptible to social influences, so we try to socially influence the laziness and not the cancer.

It seems to me that there are a minuscule number of circumstances where yelling insults that fall afoul of the fundamental attribution error is going to have positive consequences taking everything into account.

  1. In general, people do things that are logical reactions to their environments, given their limited time and neurons for observation and analysis. In asserting that someone has a character flaw as the basis for their behavior, you're ignoring external factors (which are probably much more amenable to change) that might make the behavior locally rational. Instead of saying "Get up and do some work, you worthless bum," you might end up saying "Good job at finding a situation where you can survive and be reasonably happy with almost no personal effort! You're clearly very clever! I wonder however, if you've considered what is likely to happen when your cushy niche evaporates (when the relevant banks of personal good will or institutional slack have been fully exploited) and you have to support yourself like most other people - while you've learned very few useful skills in the meantime?"

  2. Supposing they don't have a character flaw but are falsely convinced by your harangue that they have one, the logical thing to do is probably (1) to give up on fixing it but then (2) find contexts where the hypothetical character flaw isn't seriously debilitating. Character flaws can require serious work to fix - like years of self-debugging. Generally it seems cheaper to just find something you're "naturally good at" instead of struggling in an area that you're "naturally bad at".

  3. In many cases, character flaws are caused precisely by people internalizing such critiques, avoiding situations where they could learn to practice better behaviors, and so their skillset and world model become stunted in that area. To top it off they feel guilty about this, and tend to be defensive and incapable of reacting to opportunities to fix it with the joyous enthusiasm that might seem more rational. Your insults thus have the ability to cause the thing you claim to be trying to fix when you engage in socially coercive manipulation tactics.

  4. The human brain mostly implements rationality as a method to detect flaws in the arguments of enemies, and criticism automatically puts people on the defensive. If you criticize someone with insufficient practice at rationality, they're likely to confabulate arguments against your criticism and dig themselves in deeper.

  5. To the degree that they really do have a character flaw, it's probably associated with a rather large number of ugh fields that are more likely to get bigger if you get judgmental with them. Taking a "bad cop" approach with them is going to get obedience while you're around, but what you'd ideally like to do is expedite their personal debugging process, which works much better when you actively try to help them. This, however, requires real effort, which means you can't just "cheap out" and accuse them of a personal flaw whose repair might require you to spend some time listening, sympathizing, brainstorming, researching, and generally being a skilled and effective life coach. In the absence of the skills, time, or financial resources to provide such support, some people resort to accusations of fault - not realizing that it implies something unflattering about their own material and intellectual poverty.

My impression is that Sandy's sister was probably trying to implement a relatively cheap, effective, and non-coercive "cure" for Sandy's obesity in line with nearly all of Yvain's discussion of solutions that involve "talk vs drugs" except taking note of the fact that blame and lack of sympathy are pretty much the worst variation on a talk solution, from a practical perspective of helping people actually succeed.

Just as violence is the last refuge of political incompetence, so is blame the last refuge of psychological incompetence.

Sandy's sister was starting from the place Yvain's article left off - having dissolved the kind of shallow disagreement between the men, she had probably moved into her personal toolbox for actually helping her sister process an emotionally complex situation that was likely to pose serious problems in finding and executing the right strategy in the face of hostile epistemic influences and possible akrasia if she started feeling really guilty for enjoying food and carrying a few extra pounds. Politicizing the issues and "blaming society" isn't without costs or failure modes, but it can help some people get out of "guilt mode" and start using their brains.

Note that Sandy's sister started with an examination of the personal choices available to Sandy, the information sources available to her, and the incentives and goals of the people offering the various theories. I assume that after Sandy got into an emotionally safe context to talk about her issues, there's a chance she would decide to do something in her power to change course and decrease her weight. Or she might decide that she actually was perfectly fine in her current state. (In practice, being a little bit "overweight" may actually be optimal in terms of life expectancy.)

(One thing to mention is that unless Sandy's husband is good at other parts of the relationship that Yvain didn't mention, I would guess that they are headed for a divorce within 7 years.)

In some contexts, for some "diseases", some people might be "beyond saving" as a tragic fact of who they are, what flaws they have, the aggregate/average wealth of the community, the sanity waterline of the community, and one's pre-existing personal loyalties within the community. Mostly what I'm trying to express is that blame tactics are mostly only relevant when no one can afford to actually help but they want to try something while absolving themselves from needing to do any more. In the meantime, the tactic is probably going to lower the average sanity of the community just that little bit more for the person being blamed, the person doing the blaming, and everyone around them who will be epistemically influenced by their post-blame mental states.

Replies from: Kutta, Desrtopa
comment by Kutta · 2010-06-01T08:48:13.980Z · LW(p) · GW(p)

It seems to me that there are a minuscule number of circumstances where yelling insults that fall afoul of the fundamental attribution error is going to have positive consequences taking everything into account.

I got the impression from OP that the "condemned condition vs. disease" dichotomy primarily manifests itself as society's general attitudes, a categorization that determines people's modes of reasoning about a condition. I think the Sandy example was exaggerated for the purpose of illustration and Yvain probably does not advocate yelling insults in real life.

If someone is already in a woeful condition it is unlikely that harsh treatment does any good, for all the reasons you wonderfully wrapped up. But nonetheless an alcoholic has to expect a great deal of silent and implied condemnation and a greatly altered disposition towards him from society - a predictable deterrence. Another very important factor is the makeup of the memepool about alcoholism. If the notion that drinking leads to "wrecking one's life" and "losing human dignity" thoroughly permeates society, an alcoholic candidate may be more likely to attempt overcoming their addiction or to seek help.

The OP only presented a model that tells us what factors could make condemnation net positive. The personal negative effects were actually presented as something to be weighed together with social positive effects; you expanded on the personal effects side of the equation.

UPDATE: After some further thinking I have to say that "just be nice to everyone" is better than Yvain's model, in real life. There are just too many possible failure modes. You have to be simultaneously right about

  • Whether the condition is good or bad (I'm using Eliezer's framework of morality). Today's condemned condition might be tomorrow's valued condition.

  • Whether there are any relevant actions that cause the condition at all. It's been a prevalent idea that personal actions and/or peer pressure cause homosexuality, an idea that caused great harm (even if homosexuality really were a moral wrong, if we knew its cause was independent of personal actions we wouldn't ideally condemn homosexuals).

  • What actions really cause the condition. Currently the majority of people are utterly wrong about what causes obesity and what cures it. You condemn obese people and expect them to do some actions in order to lose weight, so the obese person proceeds to do those actions, only to find out they don't work, which makes them internalize their "character flaw" and makes you condemn them even more, because "they just haven't the willpower".

And besides all of this, you have to correctly weigh positives (which are enormously difficult to estimate) against negatives (which are enormously difficult to estimate, as we've seen in JenniferRM's great comment).

comment by Desrtopa · 2013-05-24T01:04:51.365Z · LW(p) · GW(p)

Very late reply here, but

Sandy's sister was starting from the place Yvain's article left off - having dissolved the kind of shallow disagreement between the men, she had probably moved into her personal toolbox for actually helping her sister process an emotionally complex situation that was likely to pose serious problems in finding and executing the right strategy in the face of hostile epistemic influences and possible akrasia if she started feeling really guilty for enjoying food and carrying a few extra pounds. Politicizing the issues and "blaming society" isn't without costs or failure modes, but it can help some people get out of "guilt mode" and start using their brains.

Note that Sandy's sister started with an examination of the personal choices available to Sandy, the information sources available to her, and the incentives and goals of the people offering the various theories. I assume that after Sandy got into an emotionally safe context to talk about her issues, there's a chance she would decide to do something in her power to change course and decrease her weight.

It is not my experience that people who support obesity as a valid life choice and decry "fat-ism" as akin to sexism and racism tend to take this next step.

comment by Nanani · 2010-06-01T01:03:04.086Z · LW(p) · GW(p)

Excellent article, though there is a point I'd like to see addressed on the topic.

One salient feature of these marginal, lifestyle-related conditions is the large number of false positives that come with diagnosis. How many alcoholics, chronic gamblers, and so on are really incapable of helping themselves, as opposed to just being people who enjoy drinking or gambling and claim to be unable to help themselves in order to diminish social disapproval? Similarly, how many are diagnosed by their peers ("He's so mopey, he must be depressed") and possibly come to believe it themselves?

The existence of these false positives is probably a big reason for the sympathy/treatment difference between these conditions and more typical diseases.
The diagnosis for cancer is fairly straightforward (you have a cancerous tumor -> you have cancer); the diagnosis for gambling addiction is much less so (maybe you are neurologically normal and just really like gambling, maybe there's something deeply wrong with your neurochemistry).

The lower lethality also makes it so that a person can not only self-diagnose a marginal condition but also justify never seeking treatment. If you don't seek treatment for cancer, you die. If you don't seek treatment for TB, you also put a lot of people at risk. If you don't seek treatment for obesity... you stay fat. Barring a certain extreme, that isn't going to kill you or anyone else. Neither will chronic gambling or any of the other examples, though they might correlate with things that do kill you with high probability, say alcoholism and drunk driving.

This is pretty much the opposite concern as the one stated in the conclusion of the main post: If a biological fix exists, is there a moral obligation to use it?

Replies from: Vulture
comment by Vulture · 2012-04-27T04:07:04.439Z · LW(p) · GW(p)

"How many alcoholics, chronic gamblers, and so on, are really incapable of helping themselves, as opposed to just being people who enjoy drinking or gambling and claim to be unable to help themselves to diminish social disapproval?"

But by self-diagnosing as an alcoholic, a person would thereby be much more likely to become the focus of deliberate social interventions, like being taken to Alcoholics Anonymous (a shining example, by the way, of well-organized and effective social treatment of a disease) or some such. This sort of focused attention, essentially being treated as if one had a disease, would, I think, be the opposite of what a hedonistic boozer would want. Would they really consider possible medical intervention a fair price to pay for slightly less disapproval from friends?

comment by Eneasz · 2016-12-16T17:58:37.150Z · LW(p) · GW(p)

The graph image is broken. Does anyone have a copy of the image file? I remember what it looked like, and it was super-useful for demonstrating the concept.

Replies from: Vaniver, btrettel
comment by Vaniver · 2016-12-16T19:43:09.528Z · LW(p) · GW(p)

I predict Yvain still has one, since I think Raikoth was his personal site. Odds are high the site is temporarily down and it'll be fixed, but I'll ping him.

Replies from: arundelo
comment by arundelo · 2016-12-18T18:06:41.847Z · LW(p) · GW(p)

If he's let the raikoth.net domain lapse intentionally (maybe to minimize the amount of old stuff by him on the internet) I hope he'll consider renewing it just so he can host a permissive robots.txt. This way the rest of raikoth.net will no longer be visible to casual internet searchers but will still be available on the Internet Archive's Wayback Machine (which it will not if someone else buys the domain and puts up a restrictive robots.txt).

Replies from: btrettel
comment by btrettel · 2016-12-19T03:43:42.662Z · LW(p) · GW(p)

I spidered his site with wget at one point. I'd be happy to provide a copy to anyone who wants it, but I'm afraid wget did not get everything, e.g., the image in question here would probably not have been found by wget.

comment by Psychohistorian · 2010-06-02T06:10:40.816Z · LW(p) · GW(p)

Rarity does not appear to be a necessary condition for a disease. If 90% of the population had AIDS, AIDS would still be a disease. Or the flu, or gonorrhea. Perhaps "it needs to be something where, if everyone had it, it would still be called a disease" is the point you're aiming for. Plenty of psychiatric "problems" are problems principally because they go against current social norms - this is why homosexuality was previously classified as a disease - and it seems like that's what you're going for. I think this issue may already be covered in your non-normal distribution condition, which is brilliant.

Replies from: prase
comment by prase · 2010-06-02T12:17:50.091Z · LW(p) · GW(p)

None of the conditions are absolutely necessary. On the other hand, the rarity condition is at least as important as the others. If all people had a third functional hand, nobody would think it was a disease. But now virtually all people have two hands, and most of the hypothetical three-handers would opt to surgically remove the superfluous limb, even though a third hand could be useful for performing several jobs.

Or, more realistically, almost all people have an appendix, which is of no use except that it can host appendicitis. If only 1% of people had an appendix, I am pretty sure that having one would be classified as a potentially life-threatening congenital disease.

As for your example, if 99% of people had had AIDS since time immemorial, are you sure it would be classified as a disease? People would have weaker immunity and die younger than they do today - that's the only difference. Now we die at 75, with a few long-livers who manage to remain healthy up to 90 and die at 100. In an AIDS-permeated society we would die at 25, and the few without AIDS who managed to make it to 50 or 70 would be viewed as anomalies.

comment by Sly · 2010-05-31T21:47:19.847Z · LW(p) · GW(p)

A very interesting article that made me think. I am not sure exactly where my thoughts line up with yours, so this will be primarily a means of clarifying what I think.

It seems to me that the entire purpose of framing obesity as a disease is to deflect the "blame" for obesity elsewhere. The disease-ness alone may not be the entire issue.

For example:

Person A bothers morbidly obese person B about trying to lose weight.

Person B says that obesity is a disease and not her fault.

Person A objects to obesity being a disease; in their mind, person B is very much responsible for her obesity.

I do not think their dispute is about whether obesity is a disease but whether person B has obesity as a result of her own choices and actions.

To clarify:

Cancer patients are not responsible for cancer, as the cause of cancer is separate from the decisions they make.

Yet we would say that someone with several STDs would be responsible if they were going around having frequent unprotected sex with several partners. The disease is a result of their own actions.

Obesity seems to fit the second example more than the first: even if it is a disease, society holds the person responsible for it.

Replies from: Blueberry, ocr-fork
comment by Blueberry · 2010-06-01T11:13:17.898Z · LW(p) · GW(p)

Cancer patients are not responsible for cancer as the cause of cancer is separate from decisions that you make.

Actually, many human behaviors like smoking and exposure to sunlight can cause cancer.

Replies from: Sly
comment by Sly · 2010-06-01T19:03:29.500Z · LW(p) · GW(p)

Ah true. I had something like brain cancer in mind when I wrote this. But yes, lung cancer in smokers would also fall into the second category.

comment by ocr-fork · 2010-05-31T22:02:14.805Z · LW(p) · GW(p)

Regret doesn't cure STDs.

Replies from: JenniferRM
comment by JenniferRM · 2010-06-01T16:27:10.497Z · LW(p) · GW(p)

I don't understand why this is downvoted to (as of my writing) -2. This actually seemed like a pithy response that raised a fascinating twist on the general model. It was a response to Sly saying:

Yet we would say that someone with several STDs would be responsible if they were going around having frequent unprotected sex with several partners. The disease is a result of their own actions.

The interesting part is that a given state can have different "choice and willpower" requirements for getting in versus getting out. This gets you back into the situation described by Holmes of punishing people in order to discourage other people from following their initial behavior, even (in the case of STDs) in the face of the inability of the punished person to "regret their way to a cure" once they've already made the mistake because they actually are infected with an "external agent" that meets basically all the criteria for disease that Yvain pointed out in the OP.

Replies from: JGWeissman
comment by JGWeissman · 2010-06-01T17:02:32.241Z · LW(p) · GW(p)

I don't understand why this is downvoted to (as of my writing) -2.

I downvoted it because I saw it as an irrelevant response to a claim nobody made or implicitly relied on. I was already interpreting Sly's comment in terms of "the situation described by Holmes of punishing people in order to discourage other people from following their initial behavior".

comment by malthrin · 2011-06-19T12:55:17.587Z · LW(p) · GW(p)

I think someone read your article: http://www.theatlantic.com/magazine/print/2011/07/the-brain-on-trial/8520/

He comes at it from a slightly different angle - the criminal justice system - but approaches it the same way, dissolving the question down to blameworthiness and free will. He also reaches the same conclusion; our reaction as a society should be based on influencing future outcomes, not punishing past actions.

Replies from: antz56, orbenn
comment by antz56 · 2012-03-13T00:03:23.543Z · LW(p) · GW(p)

Many observant writers have addressed this topic: to be sympathized with vs. to be condemned. To me, human-rights violations come in two forms: curbing personal freedom by portraying it as criminal behavior (condemned) or as sickness (sympathized with); neither is acceptable. "A die for not to be obedience might be the only choice, aligned with 'that half naked man.'" My takeaway: humans' vendibility - "When the going gets tough, the tough get going."

comment by orbenn · 2011-11-14T17:10:51.384Z · LW(p) · GW(p)

There's a book to this effect: http://www.amazon.com/gp/product/0691142084/ref=oh_o03_s01_i01_details

A little googling will bring up some very convincing lectures on the subject by the author. Unfortunately he hasn't made many headlines or much headway in actually implementing these ideas.

comment by Psychohistorian · 2010-06-02T06:05:13.504Z · LW(p) · GW(p)

This is, quite obviously, a terrific article. One major quibble: your conclusion is rather circular. You assume a consequentialist utilitarian ethics, and then conclude, "Therefore, the optimal solution is to maximize the outcome under consequentialist utilitarian ethics!" I'm not sure it's actually possible to avoid such circularity here, but it does feel a little unsatisfying to me.

On top of this, your dismissal of the "personal development" issue is a bit hand-wavy. That is, it's one thing if I make a decision to go smoke crack - then the personal development required to get better is essentially penance for my evil decision. Likewise with obesity: I've sinned by getting fat, therefore, I must absolve my sins by getting thin. Just taking a pill would be like getting absolution without saying any Hail Marys, or some such. While I'm actually on your side on this point, this position does not logically bind us to go around getting kids hooked on crack so they can develop as people - they haven't sinned, so they have no penance to do.

Replies from: ChrisHibbert
comment by ChrisHibbert · 2010-06-05T23:43:20.337Z · LW(p) · GW(p)

I don't believe much in penance. (The dictionary I checked said "self punishment as a sign of repentance". I don't think either aspect is valuable.) It's not related to the question of how we should treat people when they have conditions that are often under voluntary control.

We should convince them that (assuming they agree it would be better not to have the condition) their best approach is to accept that the condition is at least partially under voluntary control, that such control always appears hard, and therefore to change their lifestyle so as to address the problem. If they agree that the condition is a problem, and they find a magic bullet to solve it, then no penance is required. If there's no magic bullet, they can try to change their lifestyle, but there is no need for them to punish themselves for not understanding the situation before.

comment by MichaelVassar · 2010-05-31T16:07:31.785Z · LW(p) · GW(p)

Great Post!

Anyway, on to the obligatory quibble. "throwing biological solutions at spiritual problems might be disrespectful or dehumanizing, or a band-aid that doesn't affect the deeper problem" The 6 criteria for disease, including 'biological' in so far as that means caused by biological processes simple enough to understand relatively easily and confidently, do seem to me to each provide weak evidential support for any given treatment not being disrespectful, dehumanizing, or superficial. They also seem to provide weak evidence against the listed "reasonable objections to treating any condition with drugs". None of the criteria are individually or collectively decisive on those points, but it does seem to me like you could find correlations if you looked.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2010-05-31T22:01:22.822Z · LW(p) · GW(p)

You'll have to explain that more. I would have said that "dehumanizing" and "disrespectful" are meaningless weasel words in the context of someone freely choosing to take a drug. Disrespect needs a victim, and I'm wary of the idea of being disrespectful to yourself.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-06-02T08:34:14.710Z · LW(p) · GW(p)

I see fewer 'selves' and more 'agents' than most people here probably do. In particular, I see all sorts of complex cognitive sub-systems with interests and with the ability to act in the service of those interests within each human. I also see verbal expressions of alleged interests which ignore that complexity and verbalizing sub-systems which attempt to thwart as illegitimate the interests of those other non-verbalizing sub-systems, only to find out that without the cooperation of those other systems they can't actually get anything done.

More generally, I think that when you look for ostensible definitions - for instance, for the causes of people claiming that something is 'dehumanizing' or 'disrespectful' - and try to understand the causes of those claims, it's not uncommon that you find some legitimate reasons for concern.

comment by [deleted] · 2010-05-31T12:35:44.272Z · LW(p) · GW(p)

I like this because it dissolves the question quite effectively. I'm not sure the question should be dissolved, though ... what about the sister?

This is why I'm not a consequentialist all the way. We may regard it as obvious that cancer is undesirable, but there really may be some who disagree. There are some who disagree that obesity is undesirable. There are some who disagree that depression is undesirable. Health is one issue where most people (in our society) are particularly unlikely to take account of differences in opinion.

Praise and blame are not just alternate possible ways to treat a disease. Example: I personally think obesity is undesirable. If I know an obese person who's happy that way, though, I wouldn't dream of trying to "treat" her, because it's none of my business. Yet I'm still curious to what extent she's "blameworthy" or personally responsible. Judging someone's blameworthiness or praiseworthiness doesn't necessarily result in trying to improve her behavior; it has to do with what opinion I hold of her.

That's a libertarian deontologist view, yeah, but it's close enough to ordinary behavior that I think we should consider whether it's completely unreasonable.

Replies from: stcredzero, Yvain
comment by stcredzero · 2010-05-31T15:00:51.519Z · LW(p) · GW(p)

Praise and blame are not just alternate possible ways to treat a disease.

Eating and survival are fundamental functions of life. Someone whose regulatory systems are so out of whack that they are eating/fasting themselves into an early grave is probably subject to control dysfunctions which have an inbuilt advantage over intellectual or social control.

Also, punishment is the trickiest of all behavioral modification techniques. It is very likely to backfire, which makes perfect sense. If punishment was very effective on a given individual, he/she would be a perfect slave. Being a perfect slave isn't so great from the perspective of the slave, though it is good for the master. Since human biology doesn't make it easy for a large population of slaves to be related to a master, it makes perfect sense that we'd evolve defenses against punishment.

For what it's worth, a member of my band is morbidly obese. He has taken extraordinary measures in terms of effort to lose weight. (Eschewing use of a car in Houston and walking everywhere instead.) His condition is not voluntary.

Replies from: Sly, None
comment by Sly · 2010-05-31T21:33:06.664Z · LW(p) · GW(p)

What do you mean by "his condition is not voluntary"? Because he recently made the decision to walk everywhere yet still remains obese, his condition is not voluntary?

I am not sure that follows.

Replies from: marks, stcredzero
comment by marks · 2010-06-01T14:52:57.949Z · LW(p) · GW(p)

Bear in mind that having more fat means that the brain gets starved of [glucose](http://www.loni.ucla.edu/~thompson/ObesityBrain2009.pdf), and that blood sugar levels have [impacts on the brain generally](http://ajpregu.physiology.org/cgi/content/abstract/276/5/R1223). Some research has indicated that the amount of sugar available to the brain has a relationship with self-control. A moderately obese person may have fat cells that steal so much glucose from their brain that it is incapable of mustering the will to get them to stop eating poorly. Additionally, the marginal fat person is likely fat because of increased sugar consumption (the main kind of food whose intake has increased since the origins of the obesity epidemic in the 1970s). In particular, there has been a great increase in the consumption of fructose, which can raise insulin levels (which signal the body to start storing energy as fat) while not activating leptin (which makes you feel full). Thus, people are consuming a substance that may be kicking their bodies into full gear to produce more fat, leaving them with no energy or will to exercise.

The individuals most affected by the obesity epidemic are the poor, and recall that some of the cheapest sources of calories on the market are foods like fructose and processed meats. While there is a component of volition regardless, if the body works as the evidence suggests, they may have a diet that is pushing them quite hard towards being obese, sedentary, and unable to do anything about it.

Think about it this way: if you constantly whack me over the head, you can probably get me to do all sorts of things that I wouldn't normally do, but it wouldn't be right to call my behavior in that situation "voluntary". Fat people may be in a similar situation.

comment by stcredzero · 2010-06-01T13:51:11.779Z · LW(p) · GW(p)

He doesn't want to be morbidly obese. He wasn't always this way. He doesn't want to die early and has tried to mitigate his trajectory into an early grave.

How about someone driving a car, skidding on a patch of oil and colliding with the guard rail? Was the collision voluntary? I don't think so, even if the driver in question habitually speeds and lets themselves get distracted. Add in a broken speedometer, and the analogy is complete. (And note that you can't take a human body out of commission like you can refuse an inspection sticker on a car.)

Replies from: Sly
comment by Sly · 2010-06-01T19:10:07.363Z · LW(p) · GW(p)

I think I see what you are saying here.

So non-obvious side effects of the decision are non-voluntary. Colliding from speeding, and obesity from overeating/lack of exercise, would arguably be non-obvious as well.

I would say however that the metaphor with the car may be more accurate if the driver was repeatedly skidding into mailboxes and other small things (apparently the ground has many oil patches), so that when he later on collided with the guard rail it was a rather obvious end result.

Replies from: stcredzero
comment by stcredzero · 2010-06-07T02:13:59.368Z · LW(p) · GW(p)

I notice you say "overeating/lack of exercise." I hope one of those two doesn't indicate careless reading.

I wouldn't be so glib about adjusting food intake, unless you've done it and kept weight off for some time. Usually, people who have done this know it isn't trivially easy. It's far from easy. Simply fasting for a set period of time is much easier by comparison.

Replies from: Sly
comment by Sly · 2010-06-07T04:08:37.227Z · LW(p) · GW(p)

The overeating/lack of exercise had to do with causes of morbid obesity in general.

I understand that this person has started to walk as a means of counteracting the lack of exercise, or are you referring to something else I may be misreading?

And yes, I understand that adjusting food intake is non-trivial. How am I being glib? And how is that relevant to the metaphor?

Morbid obesity does not just spring up on you, your weight gradually changes and your eating patterns likely get worse. It is not at all like a sudden patch of oil.

It would be accurate to describe the situation in terms of a car driver not putting any maintenance into their car. Eventually the car starts to make strange noises. Later on still, the engine light comes on. As years go by, the car is driving slower and slower. Are we really surprised when the engine stops working altogether?

comment by [deleted] · 2010-05-31T18:43:25.900Z · LW(p) · GW(p)

My point was not that obesity is voluntary, but that it's worth asking whether or not it's voluntary. I don't think you and I disagree, because you made the point that your band friend's condition isn't voluntary.

Yvain's post argues that such questions are not important. I think they may be.

comment by Scott Alexander (Yvain) · 2010-05-31T21:33:05.503Z · LW(p) · GW(p)

I sort of agree. I didn't treat this issue because the post was already getting too long.

We have various incentives to want obese people to become thin: paternalistic concern for their health, negative externalities, selfish reasons if we're their friend or relative and want to continue to enjoy their company without them dying early, aesthetic reasons, the emotional drain of offering them sympathy if we don't think they deserve it. One of the most important reasons is helping them overcome akrasia - if they want to become thinner, us being seen to condemn obesity might help them.

If they don't want to become thinner, that incentive goes away. The other incentives might or might not be enough to move us on their own.

(usually, though, these things only become issues at the societal level. I can't think of the last time I personally was mean to an obese person, despite having ample opportunities. In that context, I think the feelings of particular obese people on the issue becomes less important)

comment by gjm · 2010-05-30T22:48:05.962Z · LW(p) · GW(p)

Yvain, you have a couple of instances of "(LINK)" in your text. I expect you intended to replace them with links :-).

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2010-05-30T22:56:09.414Z · LW(p) · GW(p)

I can't imagine what would possibly have given you that idea. (@$!%. Fixed.)

comment by Houshalter · 2010-05-30T22:16:50.154Z · LW(p) · GW(p)

The consequentialist model of blame is very different from the deontological model. Because all actions are biologically determined, none are more or less metaphysically blameworthy than others, and none can mark anyone with the metaphysical status of "bad person" and make them "deserve" bad treatment. Consequentialists don't on a primary level want anyone to be treated badly, full stop; thus is it written: "Saddam Hussein doesn't deserve so much as a stubbed toe." But if consequentialists don't believe in punishment for its own sake, they do believe in punishment for the sake of, well, consequences. Hurting bank robbers may not be a good in and of itself, but it will prevent banks from being robbed in the future. And, one might infer, although alcoholics may not deserve condemnation, societal condemnation of alcoholics makes alcoholism a less attractive option.

This reminds me of my personal philosophy of crime. The only reason to punish people for a crime would be if it a) set an example (to society and to the person) or b) kept them from committing the crime or a similar one again, as they can't while they're in jail or dead. The only problem with this is that it works in reverse. We could put people who haven't committed a crime in jail on the grounds that they are likely to, or that it helps society when they're in jail.

Replies from: Matt_Simpson, Vladimir_Nesov, stcredzero, AShepard
comment by Matt_Simpson · 2010-05-30T23:41:19.227Z · LW(p) · GW(p)

The only problem with this is that it works in reverse. We could put people who haven't committed a crime in jail on the grounds that they are likely to, or that it helps society when they're in jail.

Once you factor in the dangers of giving humans that sort of power, I think that "problem" goes away for the most part.

Replies from: SilasBarta, Vladimir_M
comment by SilasBarta · 2010-05-31T18:18:19.360Z · LW(p) · GW(p)

The only problem with this is that it works in reverse. We could put people who haven't committed a crime in jail on the grounds that they are likely to, or that it helps society when they're in jail.

Once you factor in the dangers of giving humans that sort of power, I think that "problem" goes away for the most part.

I think a lot of you are missing that (a version of) this is already happening, and the connotations of the words "jail" and "imprison" may be misleading you.

Typically, jail is a place that sucks to be in. But would your opinion change if someone were preventatively "imprisoned" in a place that's actually nice to live in, with great amenities, like a gated community? What if the gated community were, say, the size of a country?

And there, you see the similarity. Everybody is, in a relevant sense, "imprisoned" in their own country (or international union, etc.). To go to another country, you typically must be vetted for whether you would be dangerous to the others, and if you're regarded as a danger, you're left in your own country. With respect to the rest of the world, then, you have been preventatively imprisoned in your own country, on the possibility (until proven otherwise) that you will be a danger to the rest of the world.

(A common reason given for this general restriction on immigration, though not stated in these terms, is that fully-open borders would induce a memetic overload on the good countries, destroying what makes them worthy targets of immigration. So indeed, a utilitarian justification is given for such preventative imprisonment.)

Again, the problem is recognizing what counts as a "prison" and what connotations you attach to the term.

Replies from: GreenRoot, Peterdjones
comment by GreenRoot · 2010-06-01T19:37:18.211Z · LW(p) · GW(p)

This is an interesting way of thinking about citizenship and immigration, one which I think is useful. I don't think I've ever thought about the way other countries' immigration rules regard me. Thanks for the new thought.

comment by Peterdjones · 2011-05-16T22:40:41.613Z · LW(p) · GW(p)

(A common reason given for this general restriction on immigration, though not stated in these terms, is that fully-open borders would induce a memetic overload on the good countries, destroying what makes them worthy targets of immigration

I'd call that arbitrage. I don't see what memetics has got to do with it.

Replies from: SilasBarta
comment by SilasBarta · 2011-05-16T22:51:21.939Z · LW(p) · GW(p)

The relevant metaphor here is "killing the goose that lays the golden eggs". A country with pro-prosperity policies is a goose. Filling it with people who haven't assimilated the memes of the people who pass such policies will arguably lead to the end of this wealth production so sought after by immigrants.

Arbitrage doesn't kill metaphorical geese like that: it simply allows people to get existing golden eggs more efficiently. It might destroy one particular seller's source of profit, but it does not destroy the wealth-production ability that an immigrant-based memetic overload would.

Replies from: Peterdjones
comment by Peterdjones · 2011-05-16T23:06:00.891Z · LW(p) · GW(p)

It's very naive to suppose that prosperity is down only to know-how, and not also to things like natural resource wealth, history (e.g. using colonisation to grab resources from other countries), etc.

Arbitrage has a number of effects, including evening out costs and prices. There are hefty "trade barriers" against the movement of workers almost everywhere, leaving wide disparities in wages un-arbitraged. We regard this as normal, although it is the opposite of what is regarded as desirable for the free movement of goods.

comment by Vladimir_M · 2010-05-31T01:07:18.819Z · LW(p) · GW(p)

So if there existed a hypothetical institution with the power to mete out preventive imprisonment, and which would reliably base its decisions on mathematically sound consequentialist arguments, would you be OK with it? I'm really curious how many consequentialists here would bite that bullet. (It's also an interesting question whether, and to what extent, some elements of the modern criminal justice system already operate that way in practice.)

[EDIT: To clarify a possible misunderstanding: I don't have in mind an institution that would make accurate predictions about the future behavior of individuals, but an institution that would preventively imprison large groups of people, including many who are by no means guaranteed to be future offenders, according to criteria that are accurate only statistically. (But we assume that they are accurate statistically, so that its aggregate effect is still evaluated as positive by your favored consequentialist calculus.)]

This seems to be the largest lapse of logic in the (otherwise very good) above post. Only a few paragraphs above an argument involving the reversal test, the author apparently fails to apply it in a situation where it's strikingly applicable.

Replies from: Yvain, SilasBarta, JGWeissman, ShardPhoenix, Matt_Simpson, LauraABJ, orthonormal, khafra
comment by Scott Alexander (Yvain) · 2010-05-31T21:49:02.064Z · LW(p) · GW(p)

I'll bite that bullet. I already have in the case of insane people, and arguably in the case of terrorists who belong to a terrorist cell and are hatching plots but haven't committed any attacks yet.

But it would have to be pretty darned accurate, and there would have to be a very low margin of error.

comment by SilasBarta · 2010-05-31T19:24:52.519Z · LW(p) · GW(p)

Why would this institution necessarily imprison them? Why not just require the different risk classes to buy liability insurance for future damages they'll cause, with the riskier ones paying higher rates? Then they'd only have to imprison the ones that can't pay for their risk. (And prohibition of something for which the person can't bear the risk cost is actually pretty common today; it's just not applied to mere existence in society, at least in your own country.)

comment by JGWeissman · 2010-05-31T01:55:19.876Z · LW(p) · GW(p)

So if there existed a hypothetical institution with the power to mete out preventive imprisonment, and which would reliably base its decisions on mathematically sound consequentialist arguments, would you be OK with it? I'm really curious how many consequentialists here would bite that bullet.

If this institution is totally honest, and extremely accurate in making predictions, so that obeying the laws it enforces is like one-boxing in Newcomb's problem, and somehow an institution with this predictive power has no better option than imprisonment, then yes I would be OK with it.

I don't trust any human institution to satisfy the first two criteria (honesty and accuracy), and I expect anything that does satisfy the first two would not satisfy the third (not better option).

This seems to be the largest lapse of logic in the (otherwise very good) above post. Only a few paragraphs above an argument involving the reversal test, the author apparently fails to apply it in a situation where it's strikingly applicable.

The topic of preemptive imprisonment was not under discussion, so it seems strange to consider it an error not to apply a reversal test to it.

Replies from: Vladimir_M, Houshalter
comment by Vladimir_M · 2010-05-31T02:05:54.669Z · LW(p) · GW(p)

If this institution is totally honest, and extremely accurate in making predictions, so that obeying the laws it enforces is like one-boxing in Newcomb's problem, and somehow an institution with this predictive power has no better option than imprisonment, then yes I would be OK with it.

Please see the edit I just added to the post; it seems like my wording wasn't precise enough. I had in mind statistical treatment of large groups, not prediction of behavior on an individual basis (which I assume is the point of your analogy with Newcomb's problem).

The topic of preemptive imprisonment was not under discussion, so it seems strange to consider it an error not to apply a reversal test to it.

I agree that it's not critical to the main point of the post, but I would say that it's a question that deserves at least a passing mention in any discussion of a consequentialist model of blame, even a tangential one.

Replies from: ocr-fork
comment by ocr-fork · 2010-05-31T06:01:55.970Z · LW(p) · GW(p)

If this institution is totally honest, and extremely accurate in making predictions, so that obeying the laws it enforces is like one-boxing in Newcomb's problem, and somehow an institution with this predictive power has no better option than imprisonment, then yes I would be OK with it.

Please see the edit I just added to the post; it seems like my wording wasn't precise enough. I had in mind statistical treatment of large groups, not prediction of behavior on an individual basis (which I assume is the point of your analogy with Newcomb's problem).

I would also be ok with this... however by your own definition it would never happen in practice, except for extreme cases like cults or a rage virus that only infects redheads.

Replies from: babblefrog
comment by babblefrog · 2010-05-31T16:23:52.417Z · LW(p) · GW(p)

How much of a statistical correlation would you require? Anything over 50%? 90%? 99%? I'd still have a problem with this. "It is better [one hundred] guilty Persons should escape than that one innocent Person should suffer." - Ben Franklin
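Franklin's ratio can itself be read as an implicit answer to the threshold question: if punishing an innocent person is treated as a hundred times as bad as letting a guilty one escape, punishment breaks even only when the probability of guilt exceeds 100/101, about 99%. A minimal sketch of that reading (the ratio is Franklin's; the decision-theoretic framing is my own, purely illustrative):

```python
# Reading Franklin's "hundred guilty Persons ... one innocent" ratio as a
# probability threshold for punishment. Hypothetical framing, not a claim
# about what Franklin meant.

def required_probability(ratio):
    """If punishing an innocent person is `ratio` times as bad as letting
    a guilty person escape, punishing at probability-of-guilt p breaks
    even when p * 1 = (1 - p) * ratio, i.e. p = ratio / (ratio + 1)."""
    return ratio / (ratio + 1)

# Franklin's 100:1 ratio implies a threshold of 100/101, roughly 99%:
assert abs(required_probability(100) - 100 / 101) < 1e-12

# A 1:1 ratio (an innocent suffering is exactly as bad as a guilty
# person escaping) gives the naive 50% threshold:
assert required_probability(1) == 0.5
```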

Replies from: dclayh, ocr-fork
comment by dclayh · 2010-05-31T21:12:04.393Z · LW(p) · GW(p)

An article by Steve Landsburg on a similar quote.

And a historical overview of related quotes.

comment by ocr-fork · 2010-05-31T16:38:45.828Z · LW(p) · GW(p)

How much of a statistical correlation would you require?

Enough to justify imprisoning everyone. It depends on how long they'd stay in jail, the magnitude of the crime, etc.

I really don't care what Ben Franklin thinks.

Replies from: babblefrog
comment by babblefrog · 2010-05-31T17:00:56.146Z · LW(p) · GW(p)

Sorry, not arguing from authority, the quote is a declaration of my values (or maybe just a heuristic :-), I just wanted to attribute it accurately.

My problem may just be lack of imagination. How could this work in reality? If we are talking about groups that are statistically more likely to commit crimes, we already have those. How is what is proposed above different from imprisoning these groups? Is it just a matter of doing a cost-benefit analysis?

Replies from: ocr-fork
comment by ocr-fork · 2010-05-31T22:26:02.997Z · LW(p) · GW(p)

How is what is proposed above different from imprisoning these groups?

It's not different. Vladmir is arguing that if you agree with the article, you should also support preemptive imprisonment.

comment by Houshalter · 2010-05-31T03:10:02.270Z · LW(p) · GW(p)

Preemptive imprisonment (if that's what they're calling it) is just wrong, on the grounds that our most sacred rights would be violated by it. One could argue that our current system does this by making attempted murder, death threats, etc., a crime, but that's a lot more practical than grouping potential criminals by statistics. How far do you go? The only possible conclusion of such a system would be mass extermination (I'm serious). Eliminate all but the people least likely to commit a crime, those that have genes that make them extremely non-aggressive (or easily controlled). Hell, why not just exterminate EVERYONE? No crimes EVER. Human values are complex, and if you reduce them to "do what's best for everyone", you basically agree to abolish the ones we do have.

EDIT: This was a long time ago and I have absolutely no idea what I meant by this. I won't delete it, but note even I think this is stupid as hell.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2010-05-31T21:52:31.483Z · LW(p) · GW(p)

Your first sentence is a classic summary of the deontological position. There's nothing on Less Wrong I can think of explaining why most of us wouldn't agree with it, which is a darned shame in my opinion.

The part about mass extermination I can talk about more confidently. Consequentialists only do things if the benefits are greater than the cost. Preemptive imprisonment would work if the benefits in lower crime were greater than the very real cost to the imprisoned individual. Mass extermination doesn't leave anyone better off, cause they're all dead, so there's no benefit and a huge cost.
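The cost-benefit comparison here can be written as a toy expected-value calculation (a minimal sketch; all welfare magnitudes are hypothetical, chosen only to illustrate the structure of the argument):

```python
# Toy consequentialist accounting for preemptive imprisonment.
# All numbers are hypothetical illustrations, not real estimates.

def net_benefit(p_offend, harm_prevented, cost_to_imprisoned):
    """Expected net benefit of preemptively imprisoning one person who
    would otherwise offend with probability p_offend."""
    return p_offend * harm_prevented - cost_to_imprisoned

# Imprisonment passes the test only when expected prevented harm
# exceeds the certain cost to the imprisoned person:
assert net_benefit(0.9, harm_prevented=100, cost_to_imprisoned=50) > 0
assert net_benefit(0.1, harm_prevented=100, cost_to_imprisoned=50) < 0

# Mass extermination imposes the maximum cost on everyone while leaving
# no one alive to receive the benefit, so it always comes out negative:
assert net_benefit(1.0, harm_prevented=0, cost_to_imprisoned=10**6) < 0
```

The design choice worth noticing is that the cost term is certain while the benefit term is probability-weighted, which is why the calculus can endorse imprisoning a high-risk person yet can never endorse exterminating everyone.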

Replies from: Houshalter
comment by Houshalter · 2010-05-31T22:33:07.520Z · LW(p) · GW(p)

Your first sentence is a classic summary of the deontological position. There's nothing on Less Wrong I can think of explaining why most of us wouldn't agree with it, which is a darned shame in my opinion.

Err, maybe "most sacred rights" was the wrong wording. How about "moral values". Same thing, don't get technical.

The part about mass extermination I can talk about more confidently. Consequentialists only do things if the benefits are greater than the cost. Preemptive imprisonment would work if the benefits in lower crime were greater than the very real cost to the imprisoned individual. Mass extermination doesn't leave anyone better off, cause they're all dead, so there's no benefit and a huge cost.

But you're assuming that "Mass extermination doesn't leave anyone better off, cause they're all dead". How do you define "better off"? Once you can do this, maybe that will make more sense. Oh, by the way, exterminating groups of individuals could, in certain situations, make things "better off". So maybe mass extermination would have no advantage, but slaughtering that entire mafia family could save us a lot of trouble. Then you get back to the "eye for an eye" scenario. Harsher punishments create a greater deterrent for the individual and the rest of society. Not to mention that amputations and executions are by far cheaper and easier than prisons.

Replies from: orthonormal, Kaj_Sotala
comment by orthonormal · 2010-06-01T01:32:37.840Z · LW(p) · GW(p)

Err, maybe "most sacred rights" was the wrong wording. How about "moral values".

This goes deeper than you think. The position we're advocating, in essence, is that

  1. There are no inalienable rights or ontologically basic moral values. Everything we're talking about when we use normative language is a part of us, not a property of the universe as a whole.
  2. This doesn't force us to be nihilists. Even if it's just me that cares about not executing innocent people, I still care about it.
  3. It's really easy to get confused thinking about ethics; it's a slippery problem.
  4. The best way to make sure that more of what we value happens, generally speaking, is some form of consequentialist calculus. (I personally hesitate to call this utilitarianism because that's often thought of as concerned only with whether people are happy, and I care about some other things as well.)
  5. This doesn't mean we should throw out all general rules; some absolute ethical injunctions should be followed even when it "seems like they shouldn't", because of the risk of one's own thought processes being corrupted in typical human ways.
  6. This may sound strange, but in typical situations it all adds up to normality: you won't see a rationalist consequentialist running around offing people because they've calculated them to be net negatives for human values. It can change the usual answers in extreme hypotheticals, in dealing with uncertainty, and in dealing with large numbers; but that's because "common-sense" thinking ends up being practically incoherent in recognizable ways when those variables are added.

I don't expect you to agree with all of this, but I hope you'll give it the benefit of the doubt as something new, which might make sense when discussed further...

comment by Kaj_Sotala · 2010-06-01T00:10:46.246Z · LW(p) · GW(p)

So maybe mass exterminations would have no advantage, but slaughtering that entire mafia family could save us alot of trouble.

In theory, sure. In practice, there's a large number of social dynamics, involving things such as people's tendency to abuse power, that would make this option non-worthwhile.

Similar considerations apply to a lot of other things, including many of the ones you mention, such as creating an "eye for eye" society. Yes, you could get overall bad results if you just single-mindedly optimized for one or two variables, but that's why we try to look at the whole picture.

Replies from: Houshalter
comment by Houshalter · 2010-06-01T01:36:27.835Z · LW(p) · GW(p)

In theory, sure. In practice, there's a large number of social dynamics, involving things such as people's tendency to abuse power, that would make this option non-worthwhile.

Alright, so what if it was done by a hypothetical super-intelligent AI or an omniscient being of some sort? Would you be OK with it then?

Similar considerations apply to a lot of other things, including many of the ones you mention, such as creating an "eye for eye" society. Yes, you could get overall bad results if you just single-mindedly optimized for one or two variables, but that's why we try to look at the whole picture.

This is exactly what I mean. What are we trying to "optimize" for?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-06-01T01:43:03.678Z · LW(p) · GW(p)

Alright, so what if it was done by a hypothetical super-intelligent AI or an omniscient being of some sort? Would you be OK with it then?

Probably not, because if it really was a super-intelligent AI, it could solve the problem without needing to kill anyone.

This is exactly what I mean. What are we trying to "optimize" for?

For general well-being. Something along the lines of "the amount of happiness minus the amount of suffering", or "the successful implementation of preferences", would probably be a decent first approximation, but even those have plenty of caveats (we probably wouldn't want to just turn everyone into wireheads, for instance). Human values are too complex to really be summed up in any brief description. Or book-length ones, for that matter.

Replies from: Houshalter
comment by Houshalter · 2010-06-01T02:28:36.112Z · LW(p) · GW(p)

Probably not, because if it really was a super-intelligent AI, it could solve the problem without needing to kill anyone.

They could possibly come up with an alternative, but we must consider that it may very well be that the most efficient thing to do is to kill them, unless we implement goals that make killing the least efficient option. If you're going with AI, then there is another thing to consider: time. How much time would the AI spend considering its options and judging the person in question? The shortest amount of time possible? The longest? There is no such thing as an ultimate trade-off.

For general well-being. Something along the lines of "the amount of happiness minus the amount of suffering", or "the successful implementation of preferences", would probably be a decent first approximation, but even those have plenty of caveats (we probably wouldn't want to just turn everyone into wireheads, for instance). Human values are too complex to really be summed up in any brief description. Or book-length ones, for that matter.

In other words, we have to set its goal as the ability to predict our values, which is a problem since you can't write AI goals in English.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-06-01T03:17:10.854Z · LW(p) · GW(p)

They could possibly come up with an alternative, but we must consider that it may very well be that the most efficient thing to do is to kill them, unless we implement goals that make killing the least efficient option. If you're going with AI, then there is another thing to consider: time. How much time would the AI spend considering its options and judging the person in question? The shortest amount of time possible? The longest? There is no such thing as an ultimate trade-off.

I'm not sure of what exactly you're trying to say here.

In other words, we have to set its goal as the ability to predict our values, which is a problem since you can't write AI goals in English.

Yup.

comment by ShardPhoenix · 2010-05-31T09:10:40.490Z · LW(p) · GW(p)

Yes, this is obviously (to me) the right thing to do if possible. For example, we put down rabid dogs before they bite anyone (as far as I know). I can't think of any real-world human-applicable examples off the top of my head, though - although some groups are statistically more liable to crime than others, the utility saved would be far more than outweighed by the disutility of the mass imprisonment.

comment by Matt_Simpson · 2010-05-31T15:58:34.121Z · LW(p) · GW(p)

My only reservation is that I might actually intrinsically value "innocent until proven guilty." Drawing the line between intrinsic values and extremely useful but only instrumental values is a difficult problem when faced with the sort of value uncertainty that we [humans] have.

So assuming that this isn't an intrinsic value, sure, I'll bite that bullet. If it is, I would still bite the bullet, assuming that the gains from preemptive imprisonment outweigh the losses associated with preemptive imprisonment being an intrinsic bad.

comment by LauraABJ · 2010-06-01T15:09:23.579Z · LW(p) · GW(p)

It seems that one way society tries to avoid the issue of 'preemptive imprisonment' is by making correlated behaviors crimes. For example, a major reason marijuana was made illegal was to give authorities an excuse to check the immigration status of laborers.

comment by orthonormal · 2010-06-01T01:40:37.723Z · LW(p) · GW(p)

I bite this bullet as well, given JGWeissman's caveat about the probity and reliability of the institution, and Matt Simpson's caveat about taking into account the extra anguish humans feel when suffering for something.

comment by khafra · 2010-06-01T00:38:43.410Z · LW(p) · GW(p)

Sexual offenders have a high rate of recidivism. Some states keep them locked up indefinitely, past the end of their sentences. Any of the various state laws that allow for involuntary commitment as an inpatient, like Florida's Baker Act, also match your description.

Replies from: savageorange
comment by savageorange · 2010-06-07T01:46:54.222Z · LW(p) · GW(p)

Correction: sexual offenders have an unusually low rate of recidivism (about 7%, IIRC); there is certainly a strong false perception that they have a high rate of recidivism, though.

Replies from: JoshuaZ, Alicorn
comment by JoshuaZ · 2010-06-07T02:11:19.177Z · LW(p) · GW(p)

Correct, the recidivism rate for sexual offenses is generally lower than for the general criminal population in the United States, although the calculated rate varies a lot based on the metric and type of offense. See here. Quoting from that page:

"Marshall and Barbaree (1990) found in their review of studies that the recidivism rate for specific types of offenders varied:

* Incest offenders ranged between 4 and 10 percent.
* Rapists ranged between 7 and 35 percent.
* Child molesters with female victims ranged between 10 and 29 percent.
* Child molesters with male victims ranged between 13 and 40 percent.
* Exhibitionists ranged between 41 and 71 percent."

This is in contrast to base rates for reoffense in the US for general crimes, which range from around 40% to 60% depending on the metric; see here.

This isn't the only example where recidivism rates for specific types of people have been poorly described. A big deal has been made by certain political groups of the claim that about 20% of people released from Gitmo went on to fight the US.

Note also that in Western Europe recidivism for the general criminal population is lower. I believe the recidivism rate for sexual offenses does not drop correspondingly, but I don't have a citation for that.

Edit: Last claim may be wrong, this article suggests that at least in the UK recidivism rates are close to those in the US for the general criminal population.

Replies from: cupholder
comment by cupholder · 2010-06-07T02:20:45.813Z · LW(p) · GW(p)

Note also that in Western Europe recidivism for the general criminal population is lower. I believe the recidivism rate for sexual offenses does not drop correspondingly, but I don't have a citation for that.

Edit: Last claim may be wrong, this article suggests that at least in the UK recidivism rates are close to those in the US for the general criminal population.

You might still be mostly correct about Western Europe - the UK could be an outlier relative to the rest of Western Europe.

comment by Alicorn · 2010-06-07T02:00:54.622Z · LW(p) · GW(p)

Citation, please?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-07T02:22:32.762Z · LW(p) · GW(p)

See my reply to savageorange, where I gave the statistics and citations, here. Savage is correct, although the phenomenon isn't as strong as Savage makes it out to be.

comment by Vladimir_Nesov · 2010-05-31T14:56:34.422Z · LW(p) · GW(p)

The only problem with this is that it works in reverse. We could put people who haven't committed a crime in jail on the grounds that they are likely to, or that it helps society when they're in jail.

If it really does help the society, it's by definition not a problem, but a useful thing to do.

Replies from: Houshalter
comment by Houshalter · 2010-05-31T18:41:31.526Z · LW(p) · GW(p)

If it really does help the society, it's by definition not a problem, but a useful thing to do.

I suppose so, under this point of view, but does that make it right? Also note that "helping society" isn't an exact definition. We will have to draw the line between helping and hurting, and we have already done that with the Constitution. We have decided that it is best for society if we don't put innocent people in jail.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-31T18:52:10.707Z · LW(p) · GW(p)

We do put innocent people in prison. If not putting innocent people in prison was the most important thing, we'd have to live without prisons. The tradeoff is there, but it's easier to be hypocritical about it when it's not made explicit.

Replies from: Houshalter, zero_call
comment by Houshalter · 2010-05-31T18:57:41.233Z · LW(p) · GW(p)

We do our best not to put innocent people in prison. Actually, I should have been more clear: We try to put all criminals in jail, but not innocent people. And there's something called reasonable doubt.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-02T05:50:13.180Z · LW(p) · GW(p)

I don't think we do our best not to put innocent people in prison. I think we make some efforts to avoid it, but they're rather half-hearted.

For example, consider government resistance to DNA testing for prisoners. Admittedly, this is about keeping people in prison rather than putting them there in the first place, but I think it's an equivalent issue, and I assume the major reason for resisting DNA testing is not wanting to find out that the initial reasons for imprisoning people were inadequate.

Also, there's plea bargaining, which I think adds up to saying that we'd rather put people into prison without making the effort to find out whether they're guilty.

Replies from: Houshalter
comment by Houshalter · 2010-06-02T13:09:50.204Z · LW(p) · GW(p)

What do you mean? They did do DNA testing and discovered that dozens of people in prisons actually were innocent.

Also, there's plea bargaining, which I think adds up to saying that we'd rather put people into prison without making the effort to find out whether they're guilty.

That's to make sure that if someone actually is innocent and more evidence comes up later, they can get out rather than rot away for the rest of their lives. It's a good thing.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-02T22:46:16.096Z · LW(p) · GW(p)

Everything I've read about DNA testing for prisoners has said that it was difficult for them to get the testing done. In some cases, they had to pay for it themselves.

Plea bargaining isn't just for life sentences.

I'm not sure you understand what plea bargaining is-- it means that a suspect accepts a shorter sentence for a lesser accusation in exchange for not taking the risk of getting convicted of a more serious crime at a trial.

comment by zero_call · 2010-06-01T00:33:55.425Z · LW(p) · GW(p)

That's a flagrant misinterpretation. The OP's intention was to say that innocent people don't get put in prison intentionally.

comment by stcredzero · 2010-05-31T14:49:22.173Z · LW(p) · GW(p)

The only problem with this is that it works in reverse. We could put people who haven't committed a crime in jail on the grounds that they are likely to, or that it helps society when they're in jail.

Before things go that far, shouldn't a society set up voluntary programs for treatment? Exactly how does one draw the line between punishment and treatment? Our society has blurred the two notions. (Plea bargaining involving attendance of a driving course.)

Replies from: SilasBarta, MichaelVassar, Houshalter
comment by SilasBarta · 2010-05-31T18:48:30.611Z · LW(p) · GW(p)

Exactly how does one draw the line between punishment and treatment? Our society has blurred the two notions.

Very true. As I noted in my other comment, jails necessarily suck to be in, above and beyond the loss of freedom of movement.

We just don't have a common, accepted protocol to handle people who are "dangerous to others, though they haven't (yet) done anything wrong, and maybe did good by turning themselves in". Such people would deserve to be detained, but not in a way intended to be unpleasant.

The closest examples I can think of for this kind of treatment (other than the international border system I described in the other comment) are halfway houses, quarantining, jury sequestration, and insane asylums (in those cases where the inmate has just gone nuts but not committed violent crimes yet). There needs to be a more standard protocol for these intermediate cases, which would look similar to minimum security prisons, but not require you to have committed a crime, and be focused on making you less dangerous so you can be released.

comment by MichaelVassar · 2010-05-31T16:14:57.228Z · LW(p) · GW(p)

Great point. In real life one should usually look for the best available option when considering a potentially costly change, rather than just choosing one hard contrarian choice on a multiple-choice test. The fact that we have conflicting intuitions on a point is probably evidence that better "third way" options exist.

comment by Houshalter · 2010-05-31T18:37:11.628Z · LW(p) · GW(p)

Before things go that far, shouldn't a society set up voluntary programs for treatment?

Who would volunteer to go to jail? Seriously, if the cops came to your door and told you that because your statistics suggested you were likely to commit a crime, you had to go to a "rehabilitation program", would you want to go, or resist (if possible)?

Exactly how does one draw the line between punishment and treatment?

From this hypothetical point of view, there is no difference. There is no real punishment, but you can hardly call sending someone to jail, or worse, execution, "treatment".

"consequentialists don't believe in punishment for its own sake, they do believe in punishment for the sake of, well, consequences."

Replies from: Patashu
comment by Patashu · 2010-10-02T10:20:56.601Z · LW(p) · GW(p)

Jails don't HAVE to be places of cruel and unusual punishment, as they are currently in the US. The prisons in Norway, for instance, are humane - they almost look like real homes. The purpose of a jail is served (ensuring people can't harm those in society) while diminishing side effects as much as possible and encouraging rehabilitation. Example: http://www.globalpost.com/dispatch/europe/091017/norway-open-prison

Replies from: Houshalter
comment by Houshalter · 2010-10-02T15:28:56.861Z · LW(p) · GW(p)

That's the problem: where do you draw the line between rehabilitation and punishment? Getting criminals out of society is one benefit of prisons, but so is creating a deterrent against committing crimes. If I were a poor person and prison was this nice awesome place full of luxuries, I might actually want to go to prison. Obviously that's an extreme example, but how much of a cost getting caught is certainly plays a role when you ponder committing a crime.

In ancient societies, they had barbaric punishments for criminals. The crime rate was high and criminals were rarely caught. And when resources are limited, providing someone free food and shelter is too costly, and starving people might actually try to get in. Not to mention they didn't have any way of rehabilitating people.

Personally I am in favor of more rehabilitation. There are a lot of repeat offenders in jail, and most criminals are irrational and affected by bias anyway, so treating them like rational agents doesn't work.

Replies from: Patashu
comment by Patashu · 2010-10-04T05:33:48.751Z · LW(p) · GW(p)

In the case where someone wishes to commit a crime so they can spend time in jail, they'll probably do something petty, which isn't TOO bad, especially if they can confess and the goods be returned (or an equivalent). If social planning can lower the poverty rate and provide ample safety nets and re-education for people in a bad spot in their lives in the first place, this is also less likely to be a problem (conversely, if more people become poor, prisons will be pressured to become worse to keep conditions below the perceived bottom line). Finally, prison can be made nice, but it isolates you from friends, family, and all places outside the prison, and imposes routine on you, so if you desire control over your life you'll be discouraged from going there.

comment by AShepard · 2010-05-31T20:45:54.143Z · LW(p) · GW(p)

You might check out Gary Becker's writings on crime, most famously Crime and Punishment: An Economic Approach. He starts from the notion that potential criminals engage in cost-benefit analysis and comes to many of the same conclusions you do.

comment by MadHatter · 2023-11-19T18:38:42.118Z · LW(p) · GW(p)

Now do schizophrenia?

comment by Yoav Ravid · 2023-01-26T05:51:10.084Z · LW(p) · GW(p)

...the situation reminds me of a pattern in similar cases I have noticed before. It goes like this. Some people make personal sacrifices, supposedly toward solving problems that don’t threaten them personally. They sort recycling, buy free range eggs, buy fair trade, campaign for wealth redistribution etc. Their actions are seen as virtuous. They see those who don’t join them as uncaring and immoral. A more efficient solution to the problem is suggested. It does not require personal sacrifice. People who have not previously sacrificed support it. Those who have previously sacrificed object on grounds that it is an excuse for people to get out of making the sacrifice. The supposed instrumental action, as the visible sign of caring, has become virtuous in its own right. Solving the problem effectively is an attack on the moral people.

I think this is what's happening with vegans and cultured meat. In some way it feels to them like it snatches the moral victory from their hands. I think most vegan activists aren't against it, but they also don't support it nearly as much as they should.

comment by jeronimo196 · 2021-07-28T22:53:01.758Z · LW(p) · GW(p)

Great post!

However, I have the following problem with the scenario: I have a hard time trusting a doctor who prescribes a diet pill and a consultation with a surgeon, but omits healthy diet and exercise. (Genetic predisposition does not trump the laws of thermodynamics!)

In general, I don't know of any existing medicine that can effectively replace willpower when treating addiction - which is why treatment is so difficult in the first place.

Psychology tells us that, on the individual level, encouragement works better than blame. Although both have far less impact than one would hope.

comment by Dmytry · 2012-03-06T14:57:31.000Z · LW(p) · GW(p)

The way I see it, we blame the "intelligence" process for the things that this process caused or had the power to prevent, and we don't blame it for other things where it was powerless. A bad outcome (like obesity) implies a character flaw if a less flawed character would not end up with this outcome. And this is perfectly consistent with the notion that the process itself was shaped by things outside its control. A bad AI is a bad AI even though it's the programmer's fault; a badly designed bridge is a bad bridge even though it's the architect's fault as well; more than one person can be equally to blame for something.

If someone is put into a position where they truly believe they will get away with murder, gaining $1000, and they murder someone, that is indicative of a character flaw, even though such conditions are extremely rare and this character flaw might well be relatively common.

Oh, and something else: if two people's actions result in a murder such that the murder could only have occurred if (person 1 is a nasty murderer) & (person 2 is a nasty murderer), they both deserve full blame. One common fallacy is to assume conservation of blame: when you deal with, e.g., a bad parent raising a bad child who grows up and becomes a criminal, the fact that the parent is to blame shouldn't diminish the blame on the criminal; blame is not a conserved quantity, and just because there's someone else whom you can blame as well doesn't mean that the proximate person deserves less blame.

comment by soreff · 2010-05-31T19:01:16.131Z · LW(p) · GW(p)

Very good article!

A couple of comments:

So here, at last, is a rule for which diseases we offer sympathy, and which we offer condemnation: if giving condemnation instead of sympathy decreases the incidence of the disease enough to be worth the hurt feelings, condemn; otherwise, sympathize.

Almost agreed: It is also important to recheck criterion 4:

Something unpleasant; when you have it, you want to get rid of it

to see if reducing the incidence of the disease is actually a worthwhile goal.

On another note:

Cancer satisfies every one of these criteria, and so we have no qualms whatsoever about classifying it as a disease.

Criterion 3:

Something rare; the vast majority of people don't have it

is somewhat arguable, at least for some types. Quoth Wikipedia:

Autopsy studies of Chinese, German, Israeli, Jamaican, Swedish, and Ugandan men who died of other causes have found prostate cancer in thirty percent of men in their 50s, and in eighty percent of men in their 70s

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2010-05-31T21:54:55.968Z · LW(p) · GW(p)

Good points. But prostate cancer might be an "ostrich" version of cancer (see the link on "ostrich" above) and something like breast cancer might be considered more like a type specimen.

comment by billswift · 2010-05-31T12:46:26.665Z · LW(p) · GW(p)

Slightly off-topic, but I was reading in Bernard Williams's Ethics and the Limits of Philosophy last night and this quote about a difference between deontological and consequentialist ethics caught my attention:

Obligation and duty look backwards, or at least sideways. The acts they require, supposing one is deliberating about what to do, lie in the future, but the reasons for those acts lie in the fact that I have already promised, the job I have undertaken, the position I am already in. Another kind of ethical consideration looks forward, to the outcomes of the acts open to me.

comment by xamdam · 2010-05-30T21:52:58.842Z · LW(p) · GW(p)

Great article

Taking a determinist consequentialist position allows us to do so more effectively

This sounds a little timid; being a determinist consequentialist is not an instrument that allows us to reach some goal (an accidental implication, I am sure), it is an honest outlook in itself.

comment by Mikael Ogannisian (mikael-ogannisian) · 2023-12-28T18:05:21.885Z · LW(p) · GW(p)

I would argue that the utility of a treatment also depends on the particular proximate genetic and/or environmental causes of the disease/illness/problem at hand.

Let's imagine two obese individuals, person A and B.

Person A's obesity can be attributed to some sort of genetic propensity to eat more than the average person, e.g., having lower-than-average impulse control, being rewarded by high-calorie foods more than average, suffering more than average from exercising, etc.

Person B uses highly rewarding, high-calorie foods as a way to regulate negative emotions, for lack of a better way to cope. This might be a kind of behavior that he learned as a child, either directly or by observing a parent, say.

Giving person A a hunger-reducing drug seems like a good idea, given that it will address some (not necessarily all) of the proximate causes of his obesity.

However, one could imagine that in the case of person B, while giving him the drug would solve his obesity problem, it would not solve his underlying problem of being unable to cope with negative emotions in a healthy way. Although it is an empirical question, one could worry that person B's coping mechanism would lose its viability. Could one imagine that he would instead resort to other unhealthy ways of coping with negative emotion, e.g., cutting, abuse of alcohol/drugs, etc. (would love to see research done on this)?

I am aware that even if this is the case for person B, losing weight by the drug could still be a net positive due to the health benefits of not being obese as well as making it less strenuous to exercise for further health benefits, and perhaps having a better perceived body image/other mental health benefits. The point I am trying to make is simply that, even if we grant that all mental and somatic problems can be seen as determined by genes and environments, there might be grave unexpected consequences of seemingly benign treatments, depending on the proximate causes of the particular problem, e.g., obesity.

comment by peterm · 2014-02-17T17:42:05.495Z · LW(p) · GW(p)

If people really believe learning personal responsibility is more important than being not addicted to heroin, [...]

Please feel free to correct me in case I misunderstood your point here, but I think that's an unfair point you raise, because originally it's about the choice between applying two different approaches (help on a biological vs. help on a social level) in case they both produce the same outcome. In your example, however, you adjust the outcome according to your desired conclusion (and it's fairly obvious to choose the one that actually helps).

Edit: I'm new to this site and just realized I'm a little bit late for this discussion, sorry about that.

comment by GreenRoot · 2010-06-01T19:02:32.114Z · LW(p) · GW(p)

This is a very well written post which I enjoyed reading quite a bit. The writing is clear, the (well cited!) application of ideas developed on LW to the problem is great to support further building on them, and your analysis of the conventional wisdom regarding disease and blameworthiness as a consequence of a deontologist libertarian ethics rang true for me and helped me to understand my own thinking on the issue better.

Thanks for the care you put into this post.

comment by Matt_Simpson · 2010-05-30T23:43:14.934Z · LW(p) · GW(p)

Great post. I try to give the nutshell version of this type of reasoning every time I get dragged into an abortion debate or the debate addressed in this post. People are much more receptive to this sort of thinking for diseases than they are for abortion.

Replies from: Blueberry
comment by Blueberry · 2010-06-01T11:14:57.062Z · LW(p) · GW(p)

How does it apply to abortion? I'm not sure what you mean.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-06-02T05:26:24.063Z · LW(p) · GW(p)

Much of the abortion debate is over whether a fetus counts as a "person."

Replies from: Blueberry
comment by Blueberry · 2010-06-02T20:10:05.640Z · LW(p) · GW(p)

I'm still not sure I understand. So you're saying to taboo the term 'person' (a being with moral rights)? That still doesn't address the main point, which is balancing the value of the fetus against the rights of the mother.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-06-02T23:22:10.632Z · LW(p) · GW(p)

So you're saying to taboo the term 'person' (a being with moral rights)?

Not exactly. "Is a fetus a person?" is a disguised query. When you ask that question, you are really asking, "should we allow women to abort fetuses?" Which is, as you said, the main point. But that doesn't stop some people from arguing semantics.

Replies from: Blueberry
comment by Blueberry · 2010-06-03T00:05:41.711Z · LW(p) · GW(p)

No, that's not the same question at all. Suppose we agree that a fetus is a person: that is, that a fetus should have the same moral rights as an adult. It's still not at all clear whether abortion should be legal. One of J. J. Thomson's thought experiments addresses this point: suppose you wake up and find yourself being used as a life support machine for a famous violinist. Do you have the right to disconnect the violinist? Thomson argued that you did, and thus people should have the right to an abortion, even if a fetus is a person.

Alternatively, consider something like the endangered species act: no one thinks that a spotted owl or other endangered species is a person, but there are many people who think that we shouldn't be allowed to kill them freely.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-06-03T17:28:06.779Z · LW(p) · GW(p)

No, that's not the same question at all.

You're missing my point. I'm not saying that it's the same question. Many times when people get into the abortion debate, they start arguing over whether a fetus is a person. The pro-choice side will point out the dissimilarities between a fetus and a human. The pro-life side will counter with the similarities. All of this is in an effort to show that a fetus is a "person." But that isn't really the relevant question. Say they finally settle the issue and come up with a suitable definition of "person" which includes fetuses of a certain age. Should abortion be allowed? Well, they don't really know. But they will try to use the definition to answer that question.

This is what I mean when I say that "is a fetus a person?" is a disguised query. The real question at issue is "should abortion be allowed?" They aren't the same question at all, but in most debates, once you have the answer to the first you have the answer to the second, and it shouldn't be that way because the first question is mostly irrelevant.

Replies from: Blueberry
comment by Blueberry · 2010-06-05T17:32:38.878Z · LW(p) · GW(p)

This is what I mean when I say that "is a fetus a person?" is a disguised query. The real question at issue is "should abortion be allowed?"

Ah, I see! Yes, I agree completely.

ETA: And most people in abortion debates don't seem to realize this. There are also the questions of whether it should be legal even if it's unethical (to avoid unsafe abortions that kill the mother), and whether abortion law should be decided at the state or federal level, which also get confused with the other questions. You can oppose Roe on federalism grounds even if you support abortion.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-05T18:29:59.546Z · LW(p) · GW(p)

"Is a fetus a person?" isn't just about abortion, but about other rights for fetuses as well. If a fetus is a person, is the woman carrying it legally obligated to not endanger it?

Replies from: Blueberry
comment by Blueberry · 2010-06-06T18:15:55.531Z · LW(p) · GW(p)

I still think that's a disguised query. Whether a fetus is a person is a separate question from whether a woman is obligated to not endanger it. For instance, protected species of animals are not people, but we are legally obligated to not endanger them in certain ways. Convicted murderers on death row, enemy soldiers at war, and people trying to kill you are considered people, but in some situations involving such people, there is no legal obligation to not endanger them.

I can consistently think a fetus is a person, but that there should be no requirement to not endanger it, and vice versa.

comment by MaxNanasy · 2015-07-30T18:59:28.068Z · LW(p) · GW(p)

Our attitudes toward people with marginal conditions mainly reflect a deontologist libertarian (libertarian as in "free will", not as in "against government") model of blame. In this concept, people make decisions using their free will, a spiritual entity operating free from biology or circumstance. People who make good decisions are intrinsically good people and deserve good treatment; people who make bad decisions are intrinsically bad people and deserve bad treatment. But people who make bad decisions for reasons that are outside of their free will may not be intrinsically bad people, and may therefore be absolved from deserving bad treatment. For example, if a normally peaceful person has a brain tumor that affects areas involved in fear and aggression, they go on a crazy killing spree, and then they have their brain tumor removed and become a peaceful person again, many people would be willing to accept that the killing spree does not reflect negatively on them or open them up to deserving bad treatment, since it had biological and not spiritual causes.

Assuming souls exist, what's the difference between a brain tumor and an evil soul in terms of who "deserves" suffering (disregarding the argument that they deserve suffering because God said so)? At the moment of birth, neither one is chosen by the agent. If anyone was born with the same genetics, environment, and soul, they would make the same decisions throughout life.

Therefore, even if souls exist, that doesn't change any conclusions about consequentialist versus retributive justice IMO.

comment by alexg · 2013-11-13T10:50:29.294Z · LW(p) · GW(p)

Test for Consequentialism:

Suppose you are a judge deciding whether person X or Y committed a murder. Let's also assume your society has the death penalty. A supermajority of society (say, encouraged by the popular media) has come to think that X committed the crime, which would decrease their confidence in the justice system if he is set free, but you know (e.g. because you know Bayes) that Y was responsible. We also assume you know that Y won't reoffend if set free because (say) they have been too spooked by this episode. Will you condemn X or Y? (Before you quibble your way out of this, read The Least Convenient Possible World)

If you said X, you pass.

Just a response to "Saddam Hussein doesn't deserve so much as a stubbed toe."

N.B. This does not mean I'm against consequentialism.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-11-13T11:25:36.237Z · LW(p) · GW(p)

... which would decrease their confidence in the justice system if he is set free...

By condemning X, I uphold the people's trust in the justice system, while making it unworthy of that trust. By condemning Y, I reduce the people's trust in the justice system, while making the system worthy of their trust. But what is their trust worth, without the reality that they trust in?

If I intend the justice system to be worthy of confidence, I desire to act to make it worthy of confidence. If I intend it to be unworthy of confidence, I desire to act to make it unworthy of confidence. Let me not become unattached to my desires, nor attached to what I do not desire.

Also, there is no Least Convenient Possible World. The Least Convenient Possible World for your interlocutors is the Most Convenient Possible World for yourself, the one where you get to just say "Suppose that such and such, which you think is Bad, were actually Good. Then it would be Good, wouldn't it?"

Replies from: Roxolan, alexg
comment by Roxolan · 2013-11-13T11:56:36.796Z · LW(p) · GW(p)

In the least convenient possible world, condemning an innocent in this one case will not make the system generally less worthy of confidence. Maybe you know it will never happen again.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-11-13T12:23:26.602Z · LW(p) · GW(p)

Maybe

Maybe everyone would have a pony.

ETA: It is not for the proponent of an argument to fabricate a Least Convenient Possible World -- that is, a Most Convenient Possible World for themselves -- and insist that their interlocutors address it, brushing aside every argument they make by inventing more and more Conveniences. The more you add to the scenario, the smaller the sliver of potential reality you are talking about. The endpoint of this is the world in which the desired conclusion has been made true by definition, at which point the claim no longer refers to anything at all.

The discipline of the Least Convenient Possible World is a discipline for oneself, not a weapon to point at others.

If I, this hypothetical judge, am willing to have the innocent punished and the guilty set free, to preserve confidence that the guilty are punished and the innocent are set free, I must be willing that I and my fellow judges do the same in every such case. Call this the Categorical Imperative, call it TDT, that is where it leads, at the speed of thought, not the speed of time: to take one step is to have travelled the whole way. I would have decided to blow with the mob and call it justice. It cannot be done.

Replies from: Jiro, Roxolan
comment by Jiro · 2013-11-13T17:19:03.366Z · LW(p) · GW(p)

The categorical imperative ignores the possibility of mixed strategies--it may be that doing X all the time is bad, doing Y all the time is bad, but doing a mixture of X and Y is not. For instance, if everyone only had sex with someone of the same sex, that would destroy society by lack of children. (And if everyone only had sex with someone of the opposite sex, gays would be unsatisfied, of course.) The appropriate thing to do is to allow everyone to have sex with the type of partner that fits their preferences. Or to put it another way, "doing the same thing" and "in the same kind of case" depend on exactly what you count as the same--is the "same" thing "having only gay sex" or "having either type of sex depending on one's preference"?

In the punishment case, it may be that we're better off with a mixed strategy of sometimes killing innocent people and sometimes not; if you always kill innocent people, the justice system is worthless, but if you never kill innocent people, people have no confidence in the justice system and it also ends up being worthless. The optimal thing to do may be to kill innocent people a certain percentage of the time, or only in high profile public cases, or whatever. Asking "would you be willing to kill innocent people all the time" would be as inappropriate as asking "would you be willing to be in a society where people (when having sex) have gay sex all the time". You might be willing to do the "same thing" all the time where the "same thing" means "follow the public's preference, which sometimes leads to killing the innocent" (not "always kill the innocent ") just like in the gay sex example it means "follow someone's sexual preference, which sometimes leads to gay sex" (not "always have gay sex").

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-11-15T08:39:32.045Z · LW(p) · GW(p)

Yes, the categorical imperative has the problem of deciding on the reference class, as do TDT, the outside view, and every attempt to decide what precedent will be set by some action, or what precedent the past has set for some decision. Eliezer coined the phrase "reference class tennis" to refer to the broken sort of argumentation that consists of choosing competing reference classes in order to reach desired conclusions.

So how do you decide on the right reference class, rather than the one that lets you conclude what you already wanted to for other reasons? TDT, being more formalised (or intended to be, if MIRI and others ever work out exactly what it is) suggests a computational answer to this question. The class that your decision sets a precedent for is the class that shares the attributes that you actually used in making your decision -- the class that you would, in fact, make the same decision for.
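This idea can be made concrete with a toy sketch (all names and attributes here are hypothetical illustrations of the informal idea above, not part of any formal TDT specification): the precedent class of a decision procedure is just the set of cases for which the procedure returns the same output, because only the attributes it actually consulted can distinguish cases.

```python
# Toy illustration: the "precedent class" of a decision procedure is
# the set of cases the procedure treats identically, i.e. the cases
# sharing the attributes it actually used.

def decide(case):
    # Hypothetical procedure that consults only the "guilty" attribute.
    return "punish" if case["guilty"] else "acquit"

cases = [
    {"name": "X", "guilty": False, "famous": True},
    {"name": "Y", "guilty": True, "famous": False},
    {"name": "Z", "guilty": False, "famous": False},
]

# X's precedent class is every case the procedure decides the same way.
precedent_class = [c["name"] for c in cases if decide(c) == decide(cases[0])]
print(precedent_class)  # ['X', 'Z'] -- fame never entered the computation
```

Since `decide` never reads `famous`, famous and obscure defendants fall into the same class; gerrymandering the class would require actually branching on the irrelevant attribute, which is visible in the procedure itself.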

This is not a solution to the reference class problem, or even an outline of a solution; it is only a pointer in a direction where a solution might be found. And even if TDT is formalised and gives a mathematical solution to the reference class problem, we may be in the same situation as we are with Bayesian reasoning: we can, and statisticians do, actually apply Bayes theorem in cases where the actual numbers are available to us, but "deep" Bayesianism can only be practiced by heuristic approximation.

Replies from: Jiro
comment by Jiro · 2013-11-15T15:26:31.531Z · LW(p) · GW(p)

"Would you like it if everyone did X" is just a bad idea, because there are some things whose prevalences I would prefer to be neither 0% nor 100%, but somewhere inbetween. That's really an objection to the categorical imperative, period. I can always say that I'm not really objecting to the categorical imperative in such a situation by rephrasing it in terms of a reference class "would you like it if everyone performed some algorithm that produced X some of the time", but that gets far away from what most people mean when they use the categorical imperative, even if technically it still fits.

An average person not from this site would not even comprehend "would you like it if everyone performed some algorithm with varying results" as a case of the golden rule, categorical imperative, or whatever, and certainly wouldn't think of it as an example of everyone doing the "same thing". In most people's minds, doing the same thing means to perform a simple action, not an algorithm.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-11-15T15:54:45.401Z · LW(p) · GW(p)

"Would you like it if everyone did X" is just a bad idea, because there are some things whose prevalences I would prefer to be neither 0% nor 100%, but somewhere inbetween. That's really an objection to the categorical imperative, period.

In that case, the appropriate X is to perform the action with whatever probability you would wish to be the case. It still fits the CI.

but that gets far away from what most people mean when they use the categorical imperative, even if technically it still fits.

Or more briefly, it still fits. But you have to actually make the die roll. What "an average person not from this site" would or would not comprehend by a thing is not relevant to discussions of the thing itself.
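The "make the die roll" reading can be sketched as a stochastic rule that everyone runs identically (a hedged toy example; the probability `p` and the action names are hypothetical):

```python
import random

# Toy sketch: a universalizable rule may itself be probabilistic.
# Every agent executes the same procedure; only the die roll varies.

def universal_mixed_rule(p, rng=random.random):
    # Perform the action with probability p, otherwise do nothing.
    return "do X" if rng() < p else "do nothing"

# Injecting a fixed "die roll" shows the two possible outcomes:
print(universal_mixed_rule(0.3, rng=lambda: 0.1))  # do X
print(universal_mixed_rule(0.3, rng=lambda: 0.9))  # do nothing
```

The rule, not its outcome on a given occasion, is what gets universalized; a population all running `universal_mixed_rule` performs the action at roughly the rate `p` one would wish to be the case.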

Replies from: Jiro
comment by Jiro · 2013-11-15T16:38:39.501Z · LW(p) · GW(p)

In that case, the appropriate X is to perform the action with whatever probability you would wish to be the case. It still fits the CI.

In that case, you can fit anything whatsoever into the categorical imperative by defining an appropriate reference class and action. For instance, I could justify robbery with "How would I like it, if everyone were to execute 'if (person is Jiro) then rob else do nothing'". The categorical imperative ceases to have meaning unless some actions and some reference classes are unacceptable.

Or more briefly, it still fits

That's too brief. Because "what do most people mean when they say this" actually matters. They clearly don't mean for it to include "if (person is Jiro) then rob else do nothing" as a single action that can be universalized by the rule.

Replies from: army1987, Richard_Kennaway
comment by A1987dM (army1987) · 2013-11-16T03:00:00.118Z · LW(p) · GW(p)

For instance, I could justify robbery with "How would I like it, if everyone were to execute 'if (person is Jiro) then rob else do nothing'".

The reason that doesn't work is that people who are not Jiro would not like it if everyone were to execute 'if (person is Jiro) then rob else do nothing', so they couldn't justify you robbing that way. The fact that the rule contains a gerrymandered reference class isn't by itself a problem.

Replies from: nshepperd
comment by nshepperd · 2013-11-16T15:21:09.061Z · LW(p) · GW(p)

Does the categorical imperative require everyone to agree on what they would like or dislike? That seems brittle.

Replies from: Jiro, army1987
comment by Jiro · 2013-11-18T18:48:47.664Z · LW(p) · GW(p)

I've always heard it, the Golden Rule, and other variations phrased as some form of "would you like it if everyone were to do that?" I've never heard of it as "would everyone like it if everyone were to do that?". I don't know where army1987 is getting the second version from.

comment by A1987dM (army1987) · 2013-11-16T19:30:35.523Z · LW(p) · GW(p)

This post discusses the possibility of people “not in moral communion” with us, with the example of a future society of wireheads.

comment by Richard_Kennaway · 2013-11-16T19:35:28.168Z · LW(p) · GW(p)

In that case, you can fit anything whatsoever into the categorical imperative by defining an appropriate reference class and action.

Doing which is reference class tennis, as I said. The solution is to not do that, to not write the bottom line of your argument and then invent whatever dishonest string of reasoning will end there.

The categorical imperative ceases to have meaning unless some actions and some reference classes are unacceptable.

No kidding. And indeed some are not, as you clearly understand, from your ability to make up an example of one. So what's the problem?

Replies from: nshepperd
comment by nshepperd · 2013-11-18T21:22:32.181Z · LW(p) · GW(p)

What principle determines what actions are unacceptable apart from "they lead to a bottom line I don't like"? That's the problem. Without any prescription for that, the CI fails to constrain your actions, and you're reduced to simply doing whatever you want anyway.

Replies from: Richard_Kennaway, TheAncientGeek
comment by Richard_Kennaway · 2013-11-19T16:38:20.450Z · LW(p) · GW(p)

This asserts a meta-meta-ethical proposition that you must have explicit principles to prescribe all your actions, without which you are lost in a moral void. Yet observably there are good and decent people in the world who do not reflect on such things much, or at all.

If to begin to think about ethics immediately casts you into a moral void where for lack of yet worked out principles you can no longer discern good from evil, you're doing it wrong.

Replies from: nshepperd
comment by nshepperd · 2013-11-19T17:28:24.731Z · LW(p) · GW(p)

Look, I have no problem with basing ethics on moral intuitions, and what we actually want. References to right and wrong are after all stored only in our heads.

But in the specific context of a discussion of the Categorical Imperative—which is supposed to be a principle forbidding "categorically" certain decisions—there needs to be some rule explaining what "universalizable" actions are not permitted, for the CI to make meaningful prescriptions. If you simply decide what actions are permitted based on whether you (intuitively) approve of the outcome, then the Imperative is doing no real work whatsoever.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-19T17:33:38.849Z · LW(p) · GW(p)

If, like most people, you don't want to be murdered, the CI will tell you not to murder. If you don't want to be robbed, it will tell you not to rob. Etc. It does work for the normal majority, and the abnormal minority are probably going to be a problem under any system.

Replies from: nshepperd
comment by nshepperd · 2013-11-19T17:41:54.521Z · LW(p) · GW(p)

Please read the above thread and understand the problem before replying.

But for your benefit, I'll repeat it: explain to me, in step-by-step reasoning, how the categorical imperative forbids me from taking the action "if (I am nshepperd) then rob else do nothing". It certainly seems like it would be very favourable to me if everyone did "if (I am nshepperd) then rob else do nothing".

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-19T17:57:42.260Z · LW(p) · GW(p)

That's a blatant cheat. How can you have a universal law that includes a specific exception for a named individual?

Replies from: Desrtopa
comment by Desrtopa · 2013-11-19T19:08:46.735Z · LW(p) · GW(p)

The way nshepperd just described. It is, after all, a universal law, applied in every situation. It just returns different results for a specific individual. We can call a situation-sensitive law like this a piecewise law.

Most people would probably not want to live in a society with a universal law not to steal unless you are a particular person, if they didn't know in advance whether or not the person would be them, so it's a law one is unlikely to support from behind a veil of ignorance.

However, some piecewise laws do better behind veils of ignorance than non-piecewise universal laws. For instance, laws which distinguish our treatment of introverts from extroverts stand to outperform ones which treat both according to the same standard.

You can rescue non piecewise categorical imperatives by raising them to a higher level of abstraction, but in order to keep them from being outperformed by piecewise imperatives, you need levels of abstraction higher than, for example "Don't steal." At a sufficient level of abstraction, categorical imperatives stop being actionable guides, and become something more like descriptions of our fundamental values.

Replies from: TheAncientGeek, Lumifer
comment by TheAncientGeek · 2013-11-19T20:38:53.022Z · LW(p) · GW(p)

I'm all in favour of going to higher levels of abstraction. It's a much better approach than coding in kittens-are-nice and slugs-are-nasty.

comment by Lumifer · 2013-11-19T19:54:15.303Z · LW(p) · GW(p)

It is, after all, a universal law, applied in every situation. It just returns different results for a specific individual.

Is there anything that makes it qualitatively different from

if (subject == A) { return X }
elsif (subject == B) { return Y }
elsif (subject == C) { return Z } ... etc. etc.?

Replies from: Jiro, Desrtopa, army1987
comment by Jiro · 2013-11-19T21:26:21.113Z · LW(p) · GW(p)

No, there isn't any real difference from that, which is why the example demonstrates a flaw in the Categorical Imperative. Any non-universal law can be expressed as a universal law. "The law is 'you can rob', but the law should only be applied to Jiro" is a non-universal law, but "The law is 'if (I am Jiro) then rob else do nothing' and this law is applied to everyone" is a universal law that has the same effect. Because of this ability to express one in terms of the other, saying "you should only do things if you would like for them to be universally applied" fails to provide any constraints at all, and is useless.
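The equivalence described above, where any agent-specific rule is rewritten as a "universal" one by branching on identity, can be shown directly (a toy sketch reusing the thread's hypothetical "Jiro may rob" example):

```python
# Toy sketch: an agent-specific rule and its "universalized" form
# produce identical behavior, so universalizability alone adds no
# constraint unless some reference classes are ruled out.

def non_universal_rule(agent):
    # Stated to apply to one agent only: "Jiro may rob."
    if agent == "Jiro":
        return "rob"
    raise ValueError("rule does not apply to " + agent)

def universalized_rule(agent):
    # The same content, phrased as a rule everyone applies:
    # "if you are Jiro, rob; otherwise do nothing."
    return "rob" if agent == "Jiro" else "do nothing"

print(universalized_rule("Jiro"))    # rob
print(universalized_rule("anyone"))  # do nothing
```

Everyone can run `universalized_rule`, yet its effect on the world is exactly that of the agent-specific rule, which is the flaw being pointed out.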

Of course, most people don't consider such universal laws to be universal laws, but on the other hand I'm not convinced that they are consistent when they say so--for instance "if (I am convicted of robbery) then put me in jail else nothing" is a law that is of similar form but which most people would consider a legitimate universalizable law.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-19T21:35:37.853Z · LW(p) · GW(p)

If the law gives different results for different people doing the same thing, it isn't universal in the intended sense, which is pretty much the same as fairness.

Replies from: Jiro
comment by Jiro · 2013-11-20T15:37:53.100Z · LW(p) · GW(p)

"In the intended sense" is not a useful description compared to actually writing down a description. It also may not necessarily even be consistent.

Furthermore, it's clear that most people consider "if (I am convicted of robbery) then put me in jail else nothing" to be a universal law in the intended sense, yet that gives different results for different people (one result for robbers, another result for non-robbers) doing the same thing (nothing, in either case).

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-20T17:22:36.323Z · LW(p) · GW(p)

"In the intended sense" is not a useful description compared to actually writing down a description. It also may not necessarily even be consistent.

It is possible to arrive at the intended sense by assuming that the people you are commenting on are not idiots who can be disproven with one-line comments.

Furthermore, it's clear that most people consider "if (I am convicted of robbery) then put me in jail else nothing" to be a universal law in the intended sense, yet that gives different results for different people (one result for robbers, another result for non-robbers) doing the same thing (nothing, in either case).

Another facile point.

Replies from: Desrtopa
comment by Desrtopa · 2013-11-20T17:34:36.877Z · LW(p) · GW(p)

It's also possible to completely fail to explain things to intelligent people by assuming that their intelligence ought to be a sufficient asset to make your explanations comprehensible to them. If people are consistently telling you that your explanations are unclear or don't make sense, you should take very, very seriously the likelihood that, at the least in your efforts to explain yourself, you are doing something wrong.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-20T17:57:53.684Z · LW(p) · GW(p)

Which bit of "pretty much the same as fairness" were you having trouble with?

Do you think "all robbers should be jailed except TheAncientGeek" is a fair rule?

What rule would count as non universal for you?

Replies from: Desrtopa
comment by Desrtopa · 2013-11-20T18:11:22.023Z · LW(p) · GW(p)

The "fairness" part. Falling back on another insufficiently specified intuitive concept doesn't help explain this one. Is it fair to jail a man who steals a loaf of bread from a rich man so his nephew won't starve? A simple yes or no isn't enough here, we don't all have identical intuitive senses of fairness, so what we need isn't the output for any particular question, but the process that generates the outputs.

I don't think "all robbers should be jailed except TheAncientGeek" is a fair rule, but that doesn't advance the discussion from where we were already.

Whereas a universal rule would be one that anyone could check at any time for relevant output (both "never steal" and "if nshepperd, steal, else do nothing" would be examples), a specific-case rule only produces output for a specific individual or in a specific instance (for example, "nshepperd can steal," or "on January 3rd, 2014, it is okay to steal").

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-20T18:37:23.863Z · LW(p) · GW(p)

The "fairness" part. Falling back on another insufficiently specified intuitive concept doesn't help explain this one.

It is not an intuition about what is true; it is a concept that helps to explain another concept, if you let it.

I don't think "all robbers should be jailed except TheAncientGeek" is a fair rule, but that doesn't advance the discussion from where we were already.

Then why do you think you can build explicit exceptions into rules and still deem them universal? I think you can't because I think, roughly speaking, universal=fair.

Where a universal rule would be one that anyone could check any time for relevant output (both "never steal" and "if nsheppard, steal, if else, do nothing) would be examples, one which only produces output for a specific individual or in a specific instance (for example "nsheppard can steal," or "on January 3rd, 2014, it is okay to steal.") These would be specific case rules.

Such a rule is useless for moral guidance. But intelligent people think the CI is useful for moral guidance. That should have told you that your guess about what "universal" means, in this context, is wrong. You should have discarded that interpretation and sought one that does not make the CI obviously foolish.

Replies from: Jiro
comment by Jiro · 2013-11-20T19:43:17.077Z · LW(p) · GW(p)

Such a rule is useless for moral guidance. But intelligent people think the CI is useful for moral guidance.

"Intelligent people" also think you shouldn't switch in the common version of the Monty Hall problem. The whole point of this argument is to point out that the CI doesn't make sense as given and therefore, that "intelligent people" are wrong about it.

That should have told you that your guess about what "universal" means, in this context, is wrong.

No, it tells me that people who think the CI is useful have not thought through the implications. It's easy to say that rules like the ones given above can't be made "universal", but the same people who wouldn't think such rules can be made universal are willing to make other rules of similar form universal (why is a rule that says that only Jiro can rob not "universal", but one which says that only non-minors can drink alcohol is?)

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-20T20:00:00.321Z · LW(p) · GW(p)

None of the comments have come anywhere near the CI as given. Kant did not define the CI as an accessible function.

I have already answered your second point.

comment by Desrtopa · 2013-11-19T20:02:40.135Z · LW(p) · GW(p)

I don't think there is, but then, I don't think that classifying things as universal law or not is usually very useful in terms of moral guidelines anyway. I consider the Categorical Imperative to be a failed model.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-19T20:49:08.373Z · LW(p) · GW(p)

Why is it failed? A counterexample was put forward that isn't a universal law. That doesn't prove the CI to be wrong. So what does?

We already adjust rules by reference classes, since we have different rules for minors and the insane. Maybe we just need rules that are apt to the reference class and impartial within it.

Replies from: Desrtopa, Jiro
comment by Desrtopa · 2013-11-19T23:09:07.682Z · LW(p) · GW(p)

When you raise it to high enough levels of abstraction that the Categorical Imperative stops giving worse advice than other models behind a veil of ignorance, it effectively stops giving advice at all due to being too abstract to apply to any particular situation with human intelligence.

You can fragment the Categorical Imperative into vast numbers of different reference classes, but when you do it enough to make it ideally favorable from behind a veil of ignorance, you've essentially defeated any purpose of treating actions as if they were generalizable to universal law.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-19T23:23:06.090Z · LW(p) · GW(p)

I'd love to know the meta-model you are using to judge between models.

Universal isn't really universal, since you can't prove mathematical theorems to stones.

Fairness within a reference class counts.

Replies from: Desrtopa
comment by Desrtopa · 2013-11-19T23:27:58.851Z · LW(p) · GW(p)

I think I've already made that implicit in my earlier comments; I'm judging based on the ability of a society run on such a model to appeal to people from behind a veil of ignorance.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-20T10:15:51.104Z · LW(p) · GW(p)

You can fragment the Categorical Imperative into vast numbers of different reference classes, but when you do it enough to make it ideally favorable from behind a veil of ignorance, you've essentially defeated any purpose of treating actions as if they were generalizable to universal law.

I think that is a false dichotomy. One rule for everybody may well fail; everybody having their own rule may well fail. However, there is still the tertium datur of N>1 rules for M>1 people. Which is kind of how legal systems work in the real world.

Replies from: Desrtopa
comment by Desrtopa · 2013-11-20T16:46:58.070Z · LW(p) · GW(p)

Legal systems that were in place before any sort of Categorical Imperative formulation, and did not particularly change in response to it.

I think our own legal systems could be substantially improved upon, but that's a discussion of its own. Do you think that the Categorical Imperative formulation has helped us, morally speaking, and if so how?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-20T17:11:05.100Z · LW(p) · GW(p)

Legal systems that were in place before any sort of Categorical Imperative formulation, and did not particularly change in response to it.

The planets managed to stay in their orbits before Newton, as well.

Do you think that the Categorical Imperative formulation has helped us, morally speaking, and if so how?

So far I have only been pointing out that the arguments against it barely scratch the surface.

Replies from: Desrtopa
comment by Desrtopa · 2013-11-20T17:21:06.087Z · LW(p) · GW(p)

So do you think that it either improves or accurately describes our morality, and if so, can you provide any argument for this?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-20T17:33:00.127Z · LW(p) · GW(p)

I think it is a feasible approach which is part of a family of arguments which have never been properly considered on LW.

Replies from: Desrtopa
comment by Desrtopa · 2013-11-20T17:38:12.663Z · LW(p) · GW(p)

That doesn't answer my question.

I would suggest that the Categorical Imperative has been considered at some length by many, if not all members of Less Wrong, but doesn't have much currency because in general nobody here is particularly impressed with it. That is, they don't think that it either improves upon or accurately describes our native morality.

If you think that people on Less Wrong ought to take it seriously, demonstrating that it does one of those would be the way to go.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-20T18:08:39.772Z · LW(p) · GW(p)

That doesn't answer my question.

I was deliberately not playing along with your framing that the CI is wrong by default unless elaborately defended.

I would suggest that the Categorical Imperative has been considered at some length by many, if not all members of Less Wrong, but doesn't have much currency because in general nobody here is particularly impressed with it.

I see no evidence of that. If it had been considered at length: if it had been people would be able to understand it (you keep complaining that you do not), and they would be able to write relevant critiques that address what it is actually about.

If you think that people on Less Wrong ought to take it seriously, demonstrating that it does one of those would be the way to go.

Again, I don't have to put forward a steelmanned version of a theory to demonstrate that it should not be lightly dismissed. That is a false dichotomy.

Replies from: Desrtopa
comment by Desrtopa · 2013-11-20T18:22:27.183Z · LW(p) · GW(p)

I'm not complaining that I don't understand it, I'm complaining that your explanations do not make sense to me. Your formulation seems to differ substantially from Kant's (for instance, the blanket impermissibility of stealing was a case he was sufficiently confident in to use as an example, whereas you do not seem attached to that principle.)

You haven't explained anything solid enough to make a substantial case that it should not be lightly dismissed; continuing to engage at all is more a bad habit of mine than a sign that you're presenting something of sufficient use to merit feedback. If you're not going to bother explaining anything with sufficient clarity to demonstrate both crucially that you have a genuinely coherent idea of what you yourself are talking about, and that it is something that we should take seriously, I am going to resolve not to engage any further as I should have done well before now.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-20T18:26:55.532Z · LW(p) · GW(p)

I'm not complaining that I don't understand it, I'm complaining that your explanations do not make sense to me.

If you understand, why do you need me to explain?

for instance, the blanket impermissibility of stealing was a case he was sufficiently confident in to use as an example, whereas you do not seem attached to that principle

I have no idea what you are referring to.

You haven't explained anything solid enough to make a substantial case that it should not be lightly dismissed;

Again: that is not the default.

Replies from: Desrtopa
comment by Desrtopa · 2013-11-20T18:34:19.527Z · LW(p) · GW(p)

If you understand, why do you need me to explain?

Because I think you don't have a coherent idea of what you're talking about, and if you tried to formulate it rigorously you'd either have to develop one, or realize that you don't know how to express what you're proposing as a workable system. Explaining things to others is how we solidify or confirm our own understanding, and if you resist taking that step, you should not be assured of your own understanding.

Now you know why I was bothering to participate in the first place, and it is time, unless you're prepared to actually take that step, for me to stop.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-20T18:48:14.480Z · LW(p) · GW(p)

Why should I repeat what is in the literature on the CI, instead of you reading it? It is clear from your other comments that you don't in fact understand it. It is not as if you had read some encyclopedia article and said "I don't get this bit" -- a perfectly ordinary kind and level of misunderstanding. Instead, you have tried to shoe-horn it into some weird computer-programming metaphor which is entirely inappropriate. It is that layer of "let's translate this into some entirely different discipline" that is causing the problem for you and others.

Replies from: Desrtopa
comment by Desrtopa · 2013-11-20T19:16:36.004Z · LW(p) · GW(p)

Okay, I'm being really bad here, and I encourage anyone who's following along to downvote me for my failure to disengage, but I might as well explain myself here to a point where you actually know what you're arguing with.

I have already read Kant, and I wasn't impressed; some intelligent people take the CI seriously, most, including most philosophers, do not. I think Kant was trying too hard to find ways he could get his formulation to seem like it worked, and not looking hard enough for ways he could get it to break down, and failed to grasp that he had insufficiently specified his core concepts in order to create a useful system (and also that he failed to prove that objective morality enters into the system on any level, but more or less took it for granted.)

I don't particularly expect you to agree that piecewise rules like the ones I described qualify as "universal," but I don't think you or Kant have sufficiently specified the concept of "universal" such that one can rigorously state what does or does not qualify, and I think that trying to so specify, for an audience prepared to point out failures of rigor in the formulation, would lead you to the conclusion that it's much, much harder to develop a moral framework which is rigorous and satisfying and coherent than you or Kant have made it out to be.

I think that the Categorical Imperative fails to describe our intuitive sense of morality (I can offer explanations as to why if you wish, but I would be much more amenable to doing so if you would actually offer explanations for your positions when asked, rather than claiming it's not your responsibility to do so,) fails to offer improvements of desirability over our intuitive morality on a society that runs on it from behind a veil of ignorance, and that there is not sound reason to think that it is somehow, in spite of these things, a True Objective Description of Morality, and absent such reason we should assume, as with any other hypothetical framework lacking such reason, that it's not.

You may try to change my mind, but hopefully you will now have a better understanding of what it would take to do so, and why admonishments to go read the original literature are not going to further engage my interest.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-20T19:37:58.214Z · LW(p) · GW(p)

I have already read Kant, and I wasn't impressed; some intelligent people take the CI seriously, most, including most philosophers, do not.

Could that have been based on misunderstanding on your part?

he failed to prove that objective morality enters into the system on any level, but more or less took it for granted.

Was he supposed to prove that? Some think he is a constructivist.

I don't think you or Kant have sufficiently specified the concept of "universal," such that one can rigorously state what does or does not qualify,

I don't think he did either, and I don't think that's a good reason to give such trivial counterexamples. All the stuff you like started out non-rigorous as well.

I think that the Categorical Imperative fails to describe our intuitive sense of morality

And physics fails to describe folk-physics.

The problem is that you are rejecting one theory for being non-rigorous whilst tacitly accepting others that are also non-rigorous. Your intuitions being an extreme example.

Replies from: Desrtopa, JoshuaZ
comment by Desrtopa · 2013-11-20T19:58:00.132Z · LW(p) · GW(p)

Could that have been based on misunderstanding on your part?

Yes, but I don't think I have more reason to believe so now than I did when this conversation began; I would need input of a rather different sort to start taking it more seriously.

Was he supposed to prove that? Some think he is a constructivist.

He made it rather clear that he intended to, although if you wish to offer your own explanation as to why I should believe otherwise, you are free to do so; referring me back to the original text is naturally not going to help here.

If you're planning to refer me to some other philosopher offering a critique on him, I'd appreciate an explanation of why I should take this philosopher's position seriously; as I've already said, I was unimpressed with Kant, and for that matter, with most philosophers whose work I've read (in college, I started out with a double major in philosophy, but eventually dropped it because it required me to spend so much time on philosophers whose work I felt didn't deserve it, so I'm very much not predisposed to spring into more philosophers' work without good information to narrow down someone I'm likely to find worth taking seriously.)

I don't think he did either, and I don't think that's a good reason to give such trivial counterexamples. All the stuff you like started out non-rigorous as well.

What stuff do you think I like? The reason I was giving "trivial counterexamples" was to try and encourage you to offer a formulation that would make it clear what should or should not qualify as a counterexample. I don't think the problem with the Categorical Imperative is that there are clear examples where it's wrong, so much as I think that it's not formulated clearly enough that one could even say whether something qualifies as a counterexample or not.

And physics fails to describe folk-physics.

The problem is that you are rejecting one theory for being non-rigorous whilst tacitly accepting others that are also non-rigorous. Your intuitions being an extreme example.

I don't accept my moral intuitions as an acceptable moral framework. What do you think it is that I tacitly accept which is not rigorous?

If the distinction between physics and folk physics is that the former is an objective description of reality while the latter is a rough intuitive approximation of it, what reason do we have to suspect that the distinction between the Categorical Imperative and intuitive morality is in any way analogous to this?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-21T17:57:11.538Z · LW(p) · GW(p)

What stuff do you think I like?

Everyone likes something.

The reason I was giving "trivial counterexamples" was to try and encourage you to offer a formulation that would make it clear what should or should not qualify as a counterexample.

Makes it clear to whom? The points you are missing are so basic, it can only be that you don't want to understand.

I don't think the problem with the Categorical Imperative is that there are clear examples where it's wrong, so much as I think that it's not formulated clearly enough that one could even say whether something qualifies as a counterexample or not.

Would you accept a law -- an actual legal law -- that exempts a named individual for no particular reason, as being a fair and just law? Come on, this is just common-sense reasoning.

Replies from: Desrtopa
comment by Desrtopa · 2013-11-21T18:58:17.076Z · LW(p) · GW(p)

Would you accept a law -- an actual legal law--that exempts a named individual for no particular reason, as being a fair and just law? Come on, this is just common-sense reasoning.

If it's "just common sense reasoning," then your common sense is doing all the work, which is awfully unhelpful when you run into an agent whose common sense says differently.

Let's say I think it would be a good law. Can you explain to me why I should think otherwise, while tabooing "fair" and "common sense?"

People have been falling back on "common sense" for thousands of years, and it made for lousy science and lousy philosophy. It's when we can deconstruct our intuitions that we start to make progress.

ETA: Since you've not been inclined to actually follow along and offer arguments for your positions so far, I'll make it clear that this is not a position I'm putting forward out of sheer contrarianism, I have an actual moral philosophy in mind which has been propounded by real people, under which I think that such a law could be a positive good.

Replies from: ialdabaoth, TheAncientGeek
comment by ialdabaoth · 2013-11-21T19:33:07.471Z · LW(p) · GW(p)

Let's say I think it would be a good law. Can you explain to me why I should think otherwise, while tabooing "fair" and "common sense?"

I'll take a crack at this.

Laws are essentially code that gets executed by an enforcement and judicial system. Each particular law/statute is a module or subroutine within that code; its implementation will have consequences for the implementation of other modules / subroutines within that system.

So, let's say we insert a specific exception into our legal system for a particular person. Which person? Why that person, rather than another? Why only one person?

Projecting myself into the mindset of someone who wants a specific exception for themselves, let's go with the simplest answers first:

"Me. Because I'm that person. Because I don't want competition."

Now, remember that laws are just code; they still have to be executed by the people who make up the enforcement and judicial systems of the society they're passed for. What's in it for those people, to enforce your law?

If you can provide an incentive for people to make a privileged exception for you, then you de facto have your own law, even if it isn't on the books. If you CAN'T provide such an incentive, then you de facto don't have your own law, even if you DO get it written into the law books.

Now, without any "particular reason", why would people adopt and execute such a law?

If there IS such a reason - say, the privileged entity has a private army, or mind-control lasers, or wild popular support - then the actual law isn't "Such-and-such entity is privileged", even if that's what's written in the law books. The ACTUAL law is "Any entity with a private army larger than the state can comfortably disarm is privileged", or "any entity with mind-control lasers is privileged", or "any entity with too much popular support is privileged", all of which are circumstances that might change. And the moment they do, the reality will change, regardless of what laws might be on the books.

It's really the same with personal ethics. When you say, "I should steal and people shouldn't punish me for it, even though most people should be punished for stealing", you're actually (at least partially) encoding "I think I can get away with stealing". Most primate psychology has rather specific conditions for when that belief is true or not.

If I want to increase the chance that "I can get away with stealing" is true, setting a categorical law of "If Brent Dill, then cheat, otherwise don't cheat" won't actually help me Win nearly as much as wild popular support, or a personal army, or mind control lasers would.

And no, I am not bypassing the original question of "should I have such a law?" - I'm distilling it down, while tabooing "fair" and "common sense", to the only thing that's left - "can I get away with having such a law?"
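
The laws-as-code metaphor can be made concrete with a minimal Python sketch (the function names, verdict strings, and age-18 cutoff are illustrative assumptions; only the "Brent Dill" exemption comes from the comment itself):

```python
# A "law" is modeled as a function from a person to a verdict.

# Named-individual exemption: the rule hard-codes one person by name.
def named_exemption_law(person):
    if person["name"] == "Brent Dill":  # privileged by name, not by any property
        return "may steal"
    return "punished for stealing"

# Property-based rule: verdicts depend only on shared, checkable properties.
def class_based_law(person):
    if person["age"] < 18:  # a reference class defined by a property
        return "juvenile court"
    return "adult court"

alice = {"name": "Alice", "age": 30}
brent = {"name": "Brent Dill", "age": 30}

# The named exemption gives different verdicts to property-identical people.
print(named_exemption_law(alice))  # punished for stealing
print(named_exemption_law(brent))  # may steal

# The class-based law treats everyone with the same properties alike.
print(class_based_law(alice) == class_based_law(brent))  # True
```

The sketch mirrors the comment's point: the first law keys on a name, so two people identical in every property get different verdicts; the second keys only on shared properties.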

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-25T13:57:46.541Z · LW(p) · GW(p)

The ACTUAL law is "Any entity with a private army larger than the state can comfortably disarm is privileged",

Which explains, albeit in a weird and disturbing way, the principle at work. There is a difference between having universal (fair, impartial) laws for multiple reference classes, and laws that apply to a reference class, but make exceptions. There is a difference between "minors should have different laws" and "the law shouldn't apply to me". The difference is that reference classes are defined by shared properties -- which can rationally justify the use of different rules -- but individuals aren't. What is it about me that means I can be allowed to steal?

This is a familiar idea. For instance, in physics, we expect different laws to apply to, eg, charged and uncharged particles. But we don't expect electron #34568239 to follow some special laws of its own.

Replies from: Jiro
comment by Jiro · 2013-11-25T15:26:53.572Z · LW(p) · GW(p)

The difference is that reference classes are defined by shared properties -- which can rationally justify the use of different rules -- but individuals aren't.

I'm pretty sure I can define a set of properties which specifies a particular individual.

What is it about [me] that means I can be allowed to steal?

That you're in a class and the class is a class for which the rule spits out "is allowed to steal".

It may not be a rule that you expect the CI to apply to, but it's certainly a rule.

What you're doing is adding extra qualifications which define good rules and bad rules. The "shared property" one doesn't work well, but I'm sure that eventually you could come up with something which adequately describes what rules we should accept and what rules we shouldn't.

The trouble with doing this is that your qualifications would be doing all the work of the Categorical Imperative--you're not using the CI to distinguish between good and bad rules, you have a separate list that essentially does the same thing independently and the CI is just tacked on. The CI is about as useful as a store sign which says "Prices up to 50% off or more!"
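
Jiro's objection can be sketched the same way: a predicate that happens to pick out exactly one person still defines a class, so a rule over that class is formally "universal." A toy illustration (the names and verdicts are hypothetical):

```python
# A "universal" rule applies one verdict to everyone satisfying a predicate
# and another to everyone who doesn't.
def make_rule(predicate, verdict_if_true, verdict_if_false):
    return lambda person: verdict_if_true if predicate(person) else verdict_if_false

# This predicate defines a class -- one that happens to contain only Jiro.
is_jiro = lambda person: person == "Jiro"

# The resulting rule applies impartially to all members of the class
# {people who are Jiro} and to all non-members.
rule = make_rule(is_jiro, "may steal", "may not steal")

print(rule("Jiro"))    # may steal
print(rule("anyone"))  # may not steal
```

Nothing in the form of the rule distinguishes it from one keyed to "minors"; any work of excluding it has to be done by extra qualifications, which is exactly Jiro's point.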

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-25T17:19:41.312Z · LW(p) · GW(p)

I'm pretty sure I can define a set of properties which specifies a particular individual.

I think you will find that defining a set of properties that picks out only one individual, and always defines the same individual under any circumstances, is extremely difficult.

What is it about [me] that means I can be allowed to steal?

That you're in a class and the class is a class for which the rule spits out "is allowed to steal".

And if I stop being in that class, or other people join it, there is nothing (relevantly) special about me. But that is not what you are supposed to be defending. You are supposed to be defending the claim that:

"[a named individual] is allowed to steal"

is equivalent to

"[people with certain properties] are allowed to steal".

I say they are not because there is no rigid relationship between names and properties (and, therefore, class membership).

The trouble with doing this is that your qualifications would be doing all the work of the Categorical Imperative--you're not using the CI to distinguish between good and bad rules,

No, I can still say that rules that do not apply impartially to all members of a class are bad.

Replies from: Jiro
comment by Jiro · 2013-11-25T18:56:58.870Z · LW(p) · GW(p)

I say they are not because there is no rigid relationship between names and properties (and, therefore, class membership).

Being "the person named by ___" is itself a property.

I can still say that rules that do not apply impartially to all members of a class are bad.

Then you're shoving all the nuance into your definitions of "impartially" or "class" (depending on what grounds you exclude the examples you want to exclude) and the CI itself still does nothing meaningful. Otherwise I could say that "people who are Jiro" is a class or that applying an algorithm that spits out a different result for different people is impartial.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-25T19:10:20.057Z · LW(p) · GW(p)

Being "the person named by ___" is itself a property.

What instrument do you use to detect it? Do an entity's properties change when you rename it?

Then you're shoving all the nuance into your definitions of "impartially" or "class" (depending on what grounds you exclude the examples you want to exclude) and the CI itself still does nothing meaningful.

If I expand out the CI in terms of "impartiality" and "class" it is doing something meaningful.

Replies from: Jiro
comment by Jiro · 2013-11-25T19:17:16.843Z · LW(p) · GW(p)

What instrument do you use to detect it?

A property does not mean something that is (nontrivially) detectable by an instrument.

If I expand out the CI in terms of "impartiality" and "class" it is doing something meaningful.

No it's not. It's like saying you shouldn't do bad things and claiming that that's a useful moral principle. It isn't one unless you define "bad things", and then all the meaningful content is really in that, not in the original principle. Likewise for the CI. All its useful meaning is in the clarifications, not in the principle.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-25T19:20:57.200Z · LW(p) · GW(p)

A property does not mean something that is (nontrivially) detectable by an instrument.

That's a matter of opinion. IMO, the usual alternative, treating any predicate as a property, is a source of map-territory confusions.

No it's not. It's like saying you shouldn't do bad things and claiming that that's a useful moral principle. It isn't one unless you define "bad things", and then all the meaningful content is really in that, not in the original principle. Likewise for the CI.

Clearly that could apply to any other abstract term ... so much for reductionism, physicalism, etc.

comment by TheAncientGeek · 2013-11-21T20:39:16.438Z · LW(p) · GW(p)

I can't see how my appeals to common sense are worse than your appeals to intuition. And it is not a case of my defending the CI, but of my explaining to you how to understand it. You can understand it by assuming it is saying something commonsensical. You keep trying to read it as though it is a rigorous specification of something arbitrary and unguessable, like an acontextual line of program code. It's not rigorous, and that doesn't matter, because it's non-arbitrary and it is understandable in terms of non-rigorous notions you already have.

comment by JoshuaZ · 2013-11-20T19:41:49.584Z · LW(p) · GW(p)

There's some chance that Desrtopa is mistaken about absolutely anything. What evidence do you have that Desrtopa is misunderstanding the categorical imperative?

comment by Jiro · 2013-11-19T21:33:00.144Z · LW(p) · GW(p)

If we have different rules for minors and the insane, why can't we have different rules for Jiro? "Jiro" is certainly as good a reference class as "minors".

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-19T21:46:35.560Z · LW(p) · GW(p)

Remember the "apt". You would need to explain why you need those particular rules.

Replies from: Vaniver
comment by Vaniver · 2013-11-19T21:52:58.136Z · LW(p) · GW(p)

Explain to who? And do I just have to explain it, or do they have to agree?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-19T21:57:59.049Z · LW(p) · GW(p)

In rationality land, one rational agent is as good as another

comment by A1987dM (army1987) · 2013-11-19T20:46:31.165Z · LW(p) · GW(p)

A qualitative difference is a quantitative difference that is large enough.

Replies from: Lumifer
comment by Lumifer · 2013-11-19T21:01:54.092Z · LW(p) · GW(p)

Sometimes. Not always.

comment by TheAncientGeek · 2013-11-19T16:41:09.616Z · LW(p) · GW(p)

It's not like the issue has never been noticed or addressed:

"Hypothetical imperatives apply to someone dependent on them having certain ends:

if I wish to quench my thirst, I must drink something; if I wish to acquire knowledge, I must learn.

A categorical imperative, on the other hand, denotes an absolute, unconditional requirement that asserts its authority in all circumstances, both required and justified as an end in itself. It is best known in its first formulation:

Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.[1] "--WP

comment by Roxolan · 2013-11-13T12:40:16.386Z · LW(p) · GW(p)

If that's what makes the world least convenient, sure. You're trying for a reductio ad absurdum, but the LCPW is allowed to be pretty absurd. It exists only to push philosophies to their extremes and to prevent evasions.

Your tone is getting unpleasant.

EDIT: yes, this was before the ETA.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-11-13T12:47:19.586Z · LW(p) · GW(p)

I think you replied before my ETA. The LCPW is, in fact, not allowed to be pretty absurd. When pushed on one's interlocutors, it does not prevent evasions, it is an evasion.

comment by alexg · 2013-11-13T11:51:27.373Z · LW(p) · GW(p)

You're kind of missing the point here. I probably should have clarified my position more. The reason I want people to trust the justice system is so that people will not be inclined to commit crimes, because it would then be more likely (from their point of view) that, if they did, they would get caught. I suppose there is the issue of precedent to worry about, but the ultimate purpose of the justice system, from the consequentialist viewpoint, is to deter crimes (by either the offender it is dealing with or potential others), not to punish criminals. As the offender is, by assumption, unlikely to reoffend, everyone else's criminal behaviors are the main factor here, and these are minimised through the justice system's reputation. (I also should have added the assumption that attempts to convince people of the truth have failed.) By prosecuting X you are achieving this purpose. The Least Convenient Possible World is the one where there's no third way or additional factor (that I hadn't thought of) that lets you get out of this.

Rationality is not about maximising the accuracy of your beliefs, nor the accuracy of others. It is about winning!

EDIT: Grammar. EDIT: The point is, if you would punish a guilty person for a stabler society, you ought to do the same to an innocent person, for the same benefit.

Replies from: Richard_Kennaway, Richard_Kennaway
comment by Richard_Kennaway · 2013-11-13T13:02:32.972Z · LW(p) · GW(p)

The point is, if you would punish a guilty person for a stabler society, you ought to do the same to an innocent person, for the same benefit.

This ignores the causal relationships. How is punishing the innocent supposed to create a stabler society? Because, in your scenario, it's just this once and no-one will ever know. But it's never just this once, and people (the judge, X, and Y at least) will know. As one might observe from a glance at the news from time to time. All you're doing is saying, "But what if it really was just this once and no-one would ever know?" To which the answer is, "How will you know?" To which the LCPW replies "But what if you did know?", engulfing the objection and Borgifying it into an extra hypothesis of your own.

You might as well jump straight to your desired conclusion and say "But what if it really was Good, not Bad?" and you are no longer talking about anything in reality. Reality itself is the Least Convenient Possible World.

comment by Richard_Kennaway · 2013-11-13T12:27:34.190Z · LW(p) · GW(p)

Rationality is not about maximising the accuracy of your beliefs, nor the accuracy of others. It is about winning!

I don't think you understand what "rationality is about winning" means. It is explained here, here, and here.

Replies from: alexg
comment by alexg · 2013-11-13T12:41:29.585Z · LW(p) · GW(p)

Possibly I used it out of context. What I mean is that utility(less crime) > utility(society has inaccurate view of justice system) when the latter has few other consequences, and rationality is about maximising utility. Also, in the Least Convenient World, overall this trial will not affect any others, hence negating the point about the accuracy of the justice system. Here knowledge is not an end, it is a means to an end.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-11-13T12:45:43.041Z · LW(p) · GW(p)

Also, in the Least Convenient World, overall this trial will not affect any others, hence negating the point about the accuracy of the justice system.

See my reply to Roxolan.

comment by ACuriousMan · 2012-09-25T22:52:08.828Z · LW(p) · GW(p)

The disease characteristics are where this essay breaks down. Those don't really line up with any medical definition of disease. It seems like he redefines disease in order to deconstruct it a bit.

Replies from: beoShaffer, fubarobfusco
comment by beoShaffer · 2012-09-27T03:17:26.801Z · LW(p) · GW(p)

Those don't really line up with any medical definition of disease.

It does, however, fit with (my impressions of) the way people use the word in real life, which is far more relevant to the point of this article.

comment by fubarobfusco · 2012-09-26T01:37:18.996Z · LW(p) · GW(p)

Could you be more specific? Which characteristics do you dispute, and which other ones would you propose?

Replies from: ACuriousMan
comment by ACuriousMan · 2012-09-26T13:24:32.463Z · LW(p) · GW(p)

Most, if not all, of them have nothing to do with what disease is. He is creating a definition whole cloth through his characteristics.

disease /dis·ease/ (dĭ-zēz´) any deviation from or interruption of the normal structure or function of any body part, organ, or system that is manifested by a characteristic set of symptoms and signs and whose etiology, pathology, and prognosis may be known or unknown.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-09-26T16:44:48.048Z · LW(p) · GW(p)

Ah. I think you are looking for something different in definitions than Yvain is getting at here. Have you read the linked posts "Disguised Queries", "The Cluster Structure of Thingspace", and "Words as Hidden Inferences"? These might explain some of the difference.

Replies from: ACuriousMan
comment by ACuriousMan · 2012-09-26T23:58:34.004Z · LW(p) · GW(p)

And?

Replies from: fubarobfusco
comment by fubarobfusco · 2012-09-27T00:13:04.077Z · LW(p) · GW(p)

Ah. I had assumed you were expressing curiosity, not merely contradiction. My mistake. Sorry about that.

Replies from: ACuriousMan
comment by ACuriousMan · 2012-09-28T13:39:21.726Z · LW(p) · GW(p)

It isn't "mere contradiction". It is looking at what the writer is doing rhetorically and questioning the root of his argument. Again, his characteristics of disease have nothing to do with our medical understanding of disease. Disease means something rather specific in the medical profession, and just throwing up a bunch of characteristics based on nothing more than the writer's intuition (and with no supporting evidence) is a horrible foundation for an argument.

Replies from: Kindly
comment by Kindly · 2012-09-28T13:56:54.619Z · LW(p) · GW(p)

Disease does mean something specific to doctors, but doctors aren't the only ones asking questions like "Is obesity really a disease?"

And when people ask that question, what matters to them isn't really whether obesity matches the dictionary definition. In practice, it does boil down to trying to figure out whether the obesity should be treated medically, and whether obese people deserve sympathy. (On occasion, another question that is asked is "Does the condition need to be 'fixed' at all?")

You can't answer these questions by checking the dictionary to see if obesity is a disease. In general, thinking of "disease" as a basic concept results in confusion. If you're not certain whether obesity is a disease, and what you really want to know is whether it should be treated medically, then the right thing to do is to first figure out "What about diseases makes medical intervention a good idea?" And then you figure out whether obesity satisfies the criteria you come up with.
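
Kindly's suggested procedure can be sketched as code: drop the opaque boolean "is it a disease?" and evaluate the component criteria directly. A toy illustration (the criteria and the values assigned to them are hypothetical placeholders, not medical claims):

```python
# Instead of asking one opaque question ("is it a disease?"),
# ask the component questions it was a proxy for, separately.
def medical_treatment_warranted(condition):
    # Illustrative criteria one might actually care about:
    return (condition["causes_harm"]
            and condition["treatment_exists"]
            and condition["treatment_beats_willpower"])

obesity = {
    "causes_harm": True,              # an empirical question, assumed here
    "treatment_exists": True,         # assumed here
    "treatment_beats_willpower": True # assumed here
}

# Each sub-question is answered on its own evidence;
# the word "disease" never needs to settle anything.
print(medical_treatment_warranted(obesity))  # True
```

Each criterion can be debated and revised separately, which is the whole point of the dissolution: disagreement about the label becomes disagreement about specific, checkable claims.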

Replies from: ACuriousMan
comment by ACuriousMan · 2012-09-29T12:55:03.703Z · LW(p) · GW(p)

No. That is not how things work. All you are doing is conflating several different questions into one. The response to people's misunderstanding of what disease means isn't to embrace their understanding as a new definition of disease and take each component of that new definition bit by bit. It is to clarify what disease is (and disease means something). Then, once you establish the medical definition of disease, you can ask: does obesity qualify as a disease? Then you can ask those other questions in light of our answer (do sufferers from obesity deserve our sympathy, is obesity a good or bad thing, should it be treated, etc.).

That doctors label it a disease doesn't give us an ought, it gives us an is. Just because doctors determine something is a disease, that doesn't mean it has to be treated. We need to establish what the individual wants and give him the best info to make that choice himself (how long does he want to live, what kind of lifestyle and diet is acceptable to him, how important is it to him to be perceived as fit, etc.). We can also establish general shoulds for the population if we assume most people want to live long and healthy lives; therefore doctors encourage patients to avoid being obese (with the understanding that individual goals and desires vary).

Then there is the question of whether obesity is a matter of self-control or not. Simply being a disease wouldn't provide an answer to this. Some diseases are outside the patient's control, others aren't. Again, you are responding to an incorrect understanding of what disease is by offering up a bad new definition of disease and then confounding the definition with a bunch of questions that are largely separate from the definition itself. 'Tis more wrong, not less....

Replies from: Kindly
comment by Kindly · 2012-09-29T14:26:29.603Z · LW(p) · GW(p)

I'm not offering up a new definition of disease! I'm doing precisely the opposite.

Look, maybe you're a perfect rational thinker already, but most people aren't like that. They do conflate a bunch of different questions into the "disease" label.

If you impose a single fixed definition on everyone and make sure they are all talking about the same thing... well, I don't actually know what would happen if that worked, but it won't work. You'll just be arguing about the definition of disease all day.

The important point to make is that the people asking "is obesity a disease" don't actually want to know that. They want to answer some other question, and most of the time, whether or not obesity satisfies the medical definition of disease is simply irrelevant to answering it.

So why waste time establishing that your official definition of disease is better than someone else's intuitive one? This just seems like an effort to frame the debate, so that people will address the real question "in light of your answer".

comment by Bugmaster · 2012-03-13T03:23:42.766Z · LW(p) · GW(p)

I generally agree with your article, but it has at least one false premise:

Something discrete; a graph would show two widely separate populations, one with the disease and one without, and not a normal distribution.

But many undesirable conditions caused by genetic or environmental factors are continuous. Cancer is actually one of them, as far as I understand: there are many different kinds of cancer, and the symptoms can vary in severity (though all are fatal if left untreated). The common cold is another example, though of course it is rarely fatal.

comment by ohwilleke · 2011-03-31T02:16:10.985Z · LW(p) · GW(p)

In the mental health area the polar extreme from the pathology model is the "neurodiversity" model. The point about allowing treatment when it is available and effective, whether the treatment is an "enhancement" or a "cure" is also worthwhile.

In the area of obesity, I think we are pretty open, as a society, to letting the evidence guide us. In the area of mental health, we are probably less so, although I do think that empirical evidence about the nature of homosexuality has been decisive in driving a dramatic change in public opinion about it.

A key concept that sums up your analysis, which you call "determinist consequentialist", is the revelation that you should know why you want to use a word before you define it, and that a word may have different definitions that are appropriate for different contexts. A definition of disease designed to draw a line limiting covered medical expenditures may find an enhancement/pathology distinction useful, while a definition of disease designed to decide whether treatment should be available to those for whom ability to pay is not the issue might not.

comment by MartinB · 2011-03-08T16:12:08.105Z · LW(p) · GW(p)

And then someone points out how bacteria might be involved in creating obesity.

comment by Dr_Manhattan · 2010-12-24T06:08:29.046Z · LW(p) · GW(p)

the sorts of thing you study in biology: proteins, bacteria, ions, viruses, genes.

Ions->prions

Replies from: wizzwizz4
comment by wizzwizz4 · 2019-05-03T21:18:07.881Z · LW(p) · GW(p)

You also study ions, though. You study ethene!

comment by Ganapati · 2010-06-05T16:58:44.553Z · LW(p) · GW(p)

It was an interesting read. I am a little confused about one aspect, though, that is determinist consequentialism.

From what I read, it appears a determinist consequentialist believes it is 'biology all the way down', meaning all actions are completely determined biologically. So where does choice enter the equation, including the optimising function for the choice, the consequences?

Or are there some things that are not biologically determined, like whether to approve of someone else's actions or not, while actions physically impacting others are themselves completely determined biologically? That doesn't appear to be the case, since the article states that even something like taste in music, not an action physically impacting others, is completely determined biologically.

Replies from: RobinZ, RobinZ
comment by RobinZ · 2010-06-07T02:15:49.905Z · LW(p) · GW(p)

From what I read, it appears a determinist consequentialist believes it is 'biology all the way down' meaning all actions are completely determined biologically. So where does choice enter the equation, including the optimising function for the choice, the consequences?

I think you might be confused on the matter of free will - it's not obvious that there is any conflict between determinism and choice.

Replies from: Ganapati
comment by Ganapati · 2010-06-07T07:31:13.505Z · LW(p) · GW(p)

I used the word choice, but 'free will' will do as well.

Was your response to my question biologically determined or was it a matter of conscious choice?

Whether or not there is going to be another response to this comment of mine, will it have been completely determined biologically, or will it be a matter of conscious choice by someone?

If all human actions are determined biologically, the 'choice' is only an apparent one, like a tossed coin having a 'choice' of turning up heads or tails. Whether someone is a determinist or not should itself have been determined biologically, including all discussions of this nature!

Replies from: Morendil, Mitchell_Porter, cousin_it
comment by Morendil · 2010-06-13T18:01:38.178Z · LW(p) · GW(p)

Was your response to my question biologically determined or was it a matter of conscious choice?

The correct answer to this is "both" (and it is a false dichotomy). My consciousness is a property of a certain collection of matter which can be most compactly described by reference to the regularities we call "biology". Choosing to answer (or not to answer) is the result of a decision procedure arising out of the matter residing (to a rough approximation) in my braincase.

The difference between me and a coin is that a coin is a largely homogeneous lump of metal and does not contain anything like a "choice mechanism", whereas among the regularities we call "biology" we find some patterns that reliably allow organisms (and even machines) to steer the future toward preferred directions, and which we call "choosing" or "deciding".

comment by Mitchell_Porter · 2010-06-07T09:12:10.020Z · LW(p) · GW(p)

Do your choices have causes? Do those causes have causes?

Determinism doesn't have to mean epiphenomenalism. Metaphysically, epiphenomenalism - the belief that consciousness has no causal power - is a lot like belief in true free will - consciousness as an uncaused cause - in that it places consciousness half outside the chain of cause and effect, rather than wholly within it. (But subjectively they can be very different.)

Increase in consciousness increases the extent to which the causes of one's choices and actions are themselves conscious in origin rather than unconscious. This may be experienced as liberation from cause and effect, but really it's just liberation from unconscious causes. Choices do have causes, whether or not you're aware of them.

Whether someone is a determinist or not should itself have been determined biologically, including all discussions of this nature!

This is a point which throws many people, but again, it comes from an insufficiently broad concept of causality. Reason itself has causes and operates as a cause. We can agree, surely, that absurdly wrong beliefs have a cause; we can understand why a person raised in a cult may believe its dogmas. Correct beliefs also have a cause. Simple Darwinian survival ensures that any conscious species that has been around for hundreds of thousands of years must have at least some capacity for correct cognition, however that is achieved.

Nonetheless, despite this limited evolutionary gift, it may be true that we are deterministically doomed to fundamental error or ignorance in certain matters. Since the relationship of consciousness, knowledge, and reality is not exactly clear, it's hard to be sure.

Replies from: Ganapati
comment by Ganapati · 2010-06-08T08:10:34.976Z · LW(p) · GW(p)

Do your choices have causes? Do those causes have causes?

Determinism doesn't have to mean epiphenomenalism. Metaphysically, epiphenomenalism - the belief that consciousness has no causal power - is a lot like belief in true free will - consciousness as an uncaused cause - in that it places consciousness half outside the chain of cause and effect, rather than wholly within it. (But subjectively they can be very different.)

I don't equate determinism with epiphenomenalism; rather, I hold that even when consciousness acts as a cause, it is completely determined, meaning the apparent choice is simply our inability, at the current level of knowledge, to predict exactly what choice will be made.

Simple Darwinian survival ensures that any conscious species that has been around for hundreds of thousands of years must have at least some capacity for correct cognition, however that is achieved.

Not sure how that follows. Evolutionary survival can say nothing about the emergence of sentient species, let alone some capacity for correct cognition in that species. That the popular beliefs and models of the universe until a few centuries ago were incorrect seems to point in the exact opposite direction of your claim.

It appears that the problem is one of 'generalisation from one example'. There exist beings with a consciousness that is not biologically determined, and there exist those whose consciousness is completely biologically determined. The former may choose determinism as a 'belief in belief' while the latter will see it as a fact, much like a self-aware AI.

Replies from: prase
comment by prase · 2010-06-08T09:24:38.633Z · LW(p) · GW(p)

... the apparent choice is simply the inability, at current level of knowledge, of being able to predict exactly what choice will be made.

That's true. And there is no problem with it.

Evolutionary survival can say nothing about emergence of sentient species, let alone some capacity for correct cognition in that species.

If the cognition were totally incorrect, leading to beliefs unrelated to the outside world, it would only be a waste of energy to maintain such cognitive capacity. Correct beliefs about certain things (like the locations of food and predators) are without doubt a great evolutionary advantage.

If the popular beliefs and models of the universe until a few centuries ago are incorrect, that seems to point in the exact opposite direction of your claim.

Yes, but it is very weak evidence (more so if current models are correct). The claim stated that there was at least some capacity for correct cognition, not that the cognition is perfect.

There exist beings with a consciousness that is not biologically determined, and there exist those whose consciousness is completely biologically determined.

Can you explain the meaning? What are the former and what are the latter beings?

Replies from: Ganapati
comment by Ganapati · 2010-06-09T12:08:08.638Z · LW(p) · GW(p)

If the cognition was totally incorrect, leading to beliefs unrelated to the outside world, it would be only a waste of energy to maintain such cognitive capacity. Correct beliefs about certain things (like locations of food and predators) are without doubt great evolutionary advantage.

Not sure what kind of cognitive capacity the dinosaurs held, but that they roamed around for millions of years and then became extinct seems to indicate that evolution itself doesn't care much about cognitive capacity beyond a point (that you already mentioned)

Can you explain the meaning? What are the former and what are the latter beings?

You are already familiar with the latter, those whose consciousness is biologically determined. How do you expect to recognise the former, those whose consciousness is not biologically determined?

Replies from: prase, Jack, Thomas
comment by prase · 2010-06-09T14:29:14.185Z · LW(p) · GW(p)

Not sure what kind of cognitive capacity the dinosaurs held...

At least they probably didn't have a deceptive cognitive capacity. That is, they had few beliefs, but those few were more or less correct. I am not saying that an intelligent species is universally better at survival than a dumb species. I said that of two almost identical species with the same quantity of cognition (measured by brain size or, better, its energy consumption or number of distinct beliefs held) which differ only in quality of cognition (i.e. correspondence of beliefs and reality), the one which is easily deluded is at a clear disadvantage.

How do you expect to recognise the former, those whose consciousness is not biologically determined?

Well, what I know about nature indicates that any physical system evolves in time respecting rigid deterministic physical laws. There is no strong evidence that living creatures form an exception. Therefore I conclude that consciousness must be physically, and therefore biologically, determined. I don't expect to distinguish "deterministic creatures" from "non-deterministic creatures"; I simply expect the latter can't exist in this world. Or maybe I can't even imagine what it could possibly mean for consciousness to be not biologically determined. From my point of view, it could mean either a very bizarre form of dualism (consciousness is separated from the material world, but by chance it reflects correctly what happens in the material world), or it could mean that the natural laws aren't entirely deterministic. But I don't call the latter possibility "free will"; I call it "randomness".

Your line of thought reminds me of a class of apologetics which claim that if we have evolved by random chance, then there is no guarantee that our cognition is correct, and if our cognition is flawed, we are not able to recognise that we have evolved by random chance; therefore, holding a position that we have evolved by random chance is incoherent and God must have been involved in the process. I think this class of arguments is called "presuppositionalist", but I may be wrong.

Whatever the name, the argument is a fallacy. That our cognition is correct is an assumption we must make; otherwise we had better not argue about anything. Although a carefully designed cognitive algorithm may have better chances of working correctly than a cognitive algorithm evolved by chance, i.e. it is acceptable that p(correct|evolved) < p(correct|designed), it doesn't necessarily mean that p(evolved|correct) < p(designed|correct), which is the conclusion the presuppositionalists essentially make.
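
To make the fallacy concrete, here is a toy numerical sketch in Python. The probabilities are invented purely for illustration, not taken from any actual argument:

```python
# Toy illustration: a likelihood ordering p(correct|evolved) < p(correct|designed)
# does not force the posterior ordering p(evolved|correct) < p(designed|correct).
# All numbers below are made up for the sake of example.

p_evolved = 0.9                   # prior probability of "evolved by chance"
p_designed = 0.1                  # prior probability of "designed"
p_correct_given_evolved = 0.5     # chance of correct cognition if evolved
p_correct_given_designed = 0.99   # chance of correct cognition if designed

# Total probability of having correct cognition:
p_correct = (p_correct_given_evolved * p_evolved
             + p_correct_given_designed * p_designed)

# Bayes' theorem for each hypothesis:
p_evolved_given_correct = p_correct_given_evolved * p_evolved / p_correct
p_designed_given_correct = p_correct_given_designed * p_designed / p_correct

print(round(p_evolved_given_correct, 2))   # 0.82
print(round(p_designed_given_correct, 2))  # 0.18
```

Even though correct cognition is likelier under design in this made-up example, the large prior in favour of evolution means the posterior still favours evolution; inverting the conditional without accounting for the priors is exactly the mistake.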

Back to your argument, you seem to implicitly hold about cognition that p(correct|deterministic)<p(correct|indeterministic), for which I can't see any reason, but even if that is valid, it isn't automatically a strong argument for indeterminism.

Replies from: Ganapati
comment by Ganapati · 2010-06-11T07:20:02.632Z · LW(p) · GW(p)

I said that of two almost identical species with the same quantity of cognition (measured by brain size or, better, its energy consumption or number of distinct beliefs held) which differ only in quality of cognition (i.e. correspondence of beliefs and reality), the one which is easily deluded is at a clear disadvantage.

Unless the delusions are related to survival and procreation, I don't see how they would present any evolutionary disadvantage.

Well, what I know about nature indicates that any physical system evolves in time respecting rigid deterministic physical laws. There is no strong evidence that living creatures form an exception.

Actually, there is plenty of evidence to show that living creatures require additional laws to be predicted. Darwinian evolution itself is not required to describe the physical world. However, what you probably meant was that there is no evidence that living creatures violate any physical laws, meaning the laws governing the living are potentially reducible to physical laws. Someone else looking at the exact same evidence can come to an entirely different conclusion: that we are actually on the verge of demonstrating what we always felt, that the living are more than physics. Both positions are based on something that has not yet been demonstrated, the only "evidence" for either lying with the individual, a case of generalisation from one example.

Back to your argument, you seem to implicitly hold about cognition that p(correct|deterministic)<p(correct|indeterministic),...

Not at all. I was only questioning the logical consistency of an approach called 'determinist consequentialism'. Determinism implies a future that is predetermined and potentially predictable. Consequentialism would require a future that is not predetermined and dependent on choices that we make now either because of a 'free will' or 'randomness'.

Replies from: prase, CarlShulman
comment by prase · 2010-06-11T11:58:58.768Z · LW(p) · GW(p)

Unless the delusions are related to survival and procreation, don't see how they would present any evolutionary disadvantage.

Forming and holding any belief is costly. The time and energy you spend forming delusions can be used elsewhere.

Actually there is plenty of evidence to show that living creatures require additional laws to be predicted.

An example would be helpful. I don't know what evidence you are speaking about.

However what you probably meant was that there is no evidence that living creatures violate any physical laws, meaning laws governing the living are potentially reducible to physical laws.

What is the difference between respecting physical laws and not violating them? Physical laws (and I am speaking mainly about the microscopic ones) determine the time evolution uniquely. Once you know the initial state in all detail, the future is logically fixed; there is no freedom for additional laws. That of course doesn't mean that predictions of the future are practically feasible, or even easy.

Consequentialism would require a future that is not predetermined and dependent on choices that we make now either because of a 'free will' or 'randomness'.

Consequentialism doesn't require either. The choices needn't be unpredictable in principle to be meaningful.

Replies from: Ganapati
comment by Ganapati · 2010-06-12T07:13:08.775Z · LW(p) · GW(p)

Forming and holding any belief is costly. The time and energy you spend forming delusions can be used elsewhere.

Perhaps. But I do not see why that should present an evolutionary disadvantage if they do not impact survival and procreation. On the contrary, it could present an evolutionary advantage: a species that deluded itself into believing that it has been the chosen species might actually work energetically towards establishing its hegemony and gain an evolutionary advantage.

An example would be helpful. I don't know what evidence you are speaking about.

The evidence was stated in the very next line: Darwinian evolution, something that is not required to describe the evolution of non-biological systems.

What is the difference between respecting physical laws and not violating them?

Of course, none. The distinction I wanted to make was one between respecting/not-violating and being completely determined by.

Physical laws (and I am speaking mainly about the microscopic ones) determine the time evolution uniquely. Once you know the initial state in all detail, the future is logically fixed; there is no freedom for additional laws. That of course doesn't mean that predictions of the future are practically feasible, or even easy.

No disagreement there as a definition of determinism; it was exactly the point I was making too. If biological systems like us are completely determined by physical laws, the apparent choice of making a decision by considering consequences is itself an illusion.

Consequentialism doesn't require either. The choices needn't be unpredictable in principle to be meaningful.

In which case every choice every entity makes, regardless of how it arrives at it, is meaningful. In other words, there are no meaningless choices in the real world.

Replies from: prase
comment by prase · 2010-06-12T09:00:33.350Z · LW(p) · GW(p)

But I do not see why that should present an evolutionary disadvantage if they do not impact survival and procreation.

A large useless brain consumes a lot of energy, which means more dangerous hunting and faster consumption of supplies when food is insufficient. The relation to survival is straightforward.

A species that deluded itself into believing that it has been the chosen species might actually work energetically towards establishing its hegemony and gain an evolutionary advantage.

Sounds like group selection to me. And not much in accordance with observation. Although I don't believe the Jews believe in their chosenness on genetic grounds, even if they did, they haven't been all that successful after all.

the Darwinian evolution, something that is not required to describe the evolution of non-biological systems.

Depends on interpretation of "required". If it means that practically one cannot derive useful statements about trilobites from Schrödinger equation, then yes, I agree. If it means that laws of evolution are logically independent laws which we would need to keep even if we overcome all computational and data-storage difficulties, then I disagree. I expect you meant the first interpretation, given your last paragraph.

Replies from: Ganapati
comment by Ganapati · 2010-06-13T06:43:53.410Z · LW(p) · GW(p)

A large useless brain consumes a lot of energy, which means more dangerous hunting and faster consumption of supplies when food is insufficient. The relation to survival is straightforward.

Peacock tails reduce their survival chances. Even so, peacocks are around. As long as the organism survives until it is capable of procreation, any survival disadvantages don't pose an evolutionary disadvantage.

Sounds like a group selection to me. And not much in accordance with observation.

I am more inclined towards the gene selection theory, not group selection. About the only species whose delusions we can observe is ourselves, so it is difficult to come up with any significant objective observational data.

Although I don't believe the Jews believe in their chosenness on genetical grounds, even if they did, they aren't much sucessful after all.

I didn't mean the Jews, I meant the human species. If delusions are not genetically determined, what would their source be, from a deterministic point of view?

Replies from: prase
comment by prase · 2010-06-13T12:57:05.634Z · LW(p) · GW(p)

Peacock tails reduce their survival chances. Even so, peacocks are around. As long as the organism survives until it is capable of procreation, any survival disadvantages don't pose an evolutionary disadvantage.

The peacock tail's survival disadvantage isn't limited to the post-reproduction period. In order to explain the existence of the tails, it must be shown that their positive effect is greater than the negative.

I don't dispute that a (probably large) part of the human brain's capacity is used in the peacock-tail manner, as a signal of fitness. What I am saying is only that of two brains with the same energetic demands, the one with more correct cognition is at an advantage; their signalling value is the same, so no peacock mechanism should favour the deluded one.

This doesn't constitute a proof of the correctness of human cognition; perhaps (almost certainly) some parts of our brain's design are wrong in a way that no single mutation can repair, like the blind spot on the human retina. But the evolutionary argument for correctness can't be dismissed as irrelevant.

Replies from: Ganapati
comment by Ganapati · 2010-06-13T14:09:17.317Z · LW(p) · GW(p)

If delusions presented only survival disadvantages and no advantages, you would be right. However, that need not be the case.

The delusion of an afterlife can co-exist with correct cognition in matters affecting immediate survival, and when it does, it can enhance survival chances. So evolution doesn't automatically lead to or enhance correct cognition. I am not saying correctness plays no role, but it isn't the sole deciding factor, at least not in the case of evolutionary selection.

comment by CarlShulman · 2010-06-11T08:27:56.512Z · LW(p) · GW(p)

Consequentialism would require a future that is not predetermined and dependent on choices that we make now either because of a 'free will' or 'randomness'.

This post is relevant.

comment by Jack · 2010-06-09T12:25:39.712Z · LW(p) · GW(p)

Not sure what kind of cognitive capacity the dinosaurs held, but that they roamed around for millions of years and then became extinct seems to indicate that evolution itself doesn't care much about cognitive capacity beyond a point (that you already mentioned)

Huh? Presumably if the dinosaurs had the cognitive capacity and the opposable thumbs to develop rocket ships and divert incoming asteroids they would have survived. They died out because they weren't smart enough.

Replies from: cousin_it, Ganapati
comment by cousin_it · 2010-06-09T13:13:46.938Z · LW(p) · GW(p)

I will side with Ganapati on this particular point. We humans are spending much more cognitive capacity, with much more success, on inventing new ways to make ourselves extinct than we do on asteroid defense. And dinosaurs stayed around much longer than we have anyway. So the jury is still out on whether intelligence helps a species avoid extinction.

prase's original argument still stands, though. Having a big brain may or may not give you a survival advantage, but having a big non-working brain is certainly a waste that evolution would have erased in mere tens of generations, so if you have a big brain at all, chances are that it's working mostly correctly.

ETA: disregard that last paragraph. It's blatantly wrong. Evolution didn't erase peacock tails.

Replies from: Jack, prase
comment by Jack · 2010-06-09T13:42:02.265Z · LW(p) · GW(p)

The asteroid argument aside, it seems to me bordering on obvious that general intelligence is adaptive, even if taken to an extreme it can get a species into trouble. (1) Unless you think general intelligence is only helpful for sexual selection, it has to be adaptive or we wouldn't have it (since it is clearly the product of more than one mutation). (2) Intelligence appears to use a lot of energy, such that if it weren't beneficial it would be a tremendous waste. (3) There are many obvious causal connections between general intelligence and survival. It enabled us to construct axes and spears, harness fire, communicate hunting strategies, pass down hunting and gathering techniques to the next generation, navigate status hierarchies, etc. All technologies with fairly straightforward relations to increased survival.

And the fact that we're doing more to invent new ways to kill ourselves than to protect ourselves can be traced pretty directly to collective action problems and a whole slew of evolved features other than intelligence that were once adaptive but have ceased to be: tribalism most obviously.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-09T13:53:48.027Z · LW(p) · GW(p)

The fact that only a handful of species have high intelligence suggests that there are very few niches that actually support it. There's also evidence that human intelligence is due in large part to runaway sexual selection (like a peacock's tail); see Norretranders's "The Generous Man", for example. A number of biologists, such as Dawkins, take this hypothesis very seriously.

Replies from: Jack
comment by Jack · 2010-06-09T14:35:34.900Z · LW(p) · GW(p)

There's also evidence that human intelligence is due in large part to runaway sexual selection (like a peacock's tail).

That's an explanation of the increase in intelligence from apes to humans, and my comment was largely about that, but the original disputed claim was

Simple Darwinian survival ensures that any conscious species that has been around for hundreds of thousands of years must have at least some capacity for correct cognition, however that is achieved.

And there are less complex adaptive behaviors that require correct cognition: identifying prey, identifying predators, identifying food, identifying cliffs, path-finding, etc. I guess there is an argument to be had about what counts as a 'conscious species', but that doesn't seem worthwhile. Also, there is a subtle difference between what human intelligence is due to and what its survival benefits are. It may have taken sexual selection to jump-start it, but our intelligence has made us far less vulnerable than we once were (with the exception of the problems we created for ourselves). Humans are rarely eaten by giant cats, for one thing.

The fact that only a handful of species have high intelligence suggests that there are very few niches that actually support it.

No species has intelligence as high as humans', but lots of species have high intelligence relative to, say, clams. Okay, that's a little facetious, but tool use has arisen independently throughout the animal kingdom again and again, not to mention the less complex behaviors mentioned above.

Are people really disputing whether or not accurate beliefs about the world are adaptive? Or that intelligence increases the likelihood of having accurate beliefs about the world?

Replies from: JoshuaZ, thomblake
comment by JoshuaZ · 2010-06-09T14:54:04.873Z · LW(p) · GW(p)

Are people really disputing whether or not accurate beliefs about the world are adaptive? Or that intelligence increases the likelihood of having accurate beliefs about the world?

Well, having more accurate beliefs only matters if you are an entity intelligent enough to act on those beliefs. To take an extreme case, consider the hypothetical of, say, an African Grey Parrot able to do calculus problems. Is that actually going to help it? I would suspect generally not. Or consider a member of a species that gains the accurate belief that it can sexually self-stimulate and then engages in that rather than mating. Here we have a non-adaptive trait (masturbation is a very complicated trait and so isn't non-adaptive in all cases, but one can easily see situations where it seems to be). Or consider a pair of married humans, Alice and Bob, who have kids that Bob believes are his. Then Bob finds out that his wife had an affair with Bob's brother Charlie and the kids are all really Charlie's. If Bob responds by cutting off support for the kids, this is likely non-adaptive. Indeed, one can take it a step further and suppose that Bob and Charlie are identical twins, so that Bob's actions are completely anti-adaptive.

Your second point seems more reasonable. However, I'd suggest that intelligence increases the total number of beliefs one has about the world but may not increase the likelihood of those beliefs being accurate. Even if it does, the number of incorrect beliefs is likely to increase as well. It isn't clear that the average ratio of correct beliefs to total beliefs is actually increasing (I'm being deliberately vague here in that it would likely be very difficult to measure how many beliefs one has without a lot more thought). A common ape may have no incorrect beliefs even as the common human has many incorrect beliefs. So it isn't clear that intelligence leads to more accurate beliefs.

Edit: I agree that overall intelligence has been a helpful trait for human survival over the long haul.

comment by thomblake · 2010-06-09T14:43:58.018Z · LW(p) · GW(p)

Are people really disputing whether or not accurate beliefs about the world are adaptive?

That seems a likely area of dispute. Having accurate beliefs seems, ceteris paribus, to be better for you than having inaccurate beliefs (though I can make up as many counterexamples as you'd like). But that still leaves open the question of whether it's better than having no beliefs at all.

comment by prase · 2010-06-09T13:40:13.307Z · LW(p) · GW(p)

And dinosaurs stayed around much longer than us anyway.

Dinosaurs weren't a single species, though. Maybe better compare dinosaurs to mammals than to humans.

Replies from: Ganapati, cousin_it
comment by Ganapati · 2010-06-11T07:23:01.370Z · LW(p) · GW(p)

Or we could pick a particular species of dinosaur that survived for a few million years and compare it to humans.

Do you expect any changes to the analysis if we did that?

comment by cousin_it · 2010-06-09T13:55:51.782Z · LW(p) · GW(p)

Nitpicking huh? Two can play at that game!

  1. Maybe better compare mammals to reptiles than to dinosaurs.

  2. Many individual species of dinosaurs have existed for longer than humans have.

  3. Dinosaurs as a whole probably didn't go extinct; we see their descendants every day, as birds.

Okay, this isn't much to argue about :-)

Replies from: prase
comment by prase · 2010-06-09T15:05:46.579Z · LW(p) · GW(p)

I love nitpicking!

  1. Mammals are a clade while reptiles are paraphyletic. Well, dinosaurs are too when birds are excluded, but I would gladly leave the birds in. In any case, dinosaurs win over mammals, so it probably wasn't a good nitpick after all.

  2. No dinosaur species lived alongside humans, so direct competition never took place.

  3. I can't find a nit to pick here.

comment by Ganapati · 2010-06-11T07:28:46.654Z · LW(p) · GW(p)

Are you claiming that the human species will last a million years or more and not become extinct before then? What are the grounds for such a claim?

comment by Thomas · 2010-06-11T08:12:25.134Z · LW(p) · GW(p)

they roamed around for millions of years and then became extinct

I don't think one should compare humans and dinos. Maybe mammals and dinos, or something like that. Many dinosaurs went extinct during that era; our ancestors were many different "species". Successful enough that we are still around. As were some dinos, which gave Earth the birds.

Just a side note.

comment by cousin_it · 2010-06-07T08:27:43.899Z · LW(p) · GW(p)

Yep, your view is confused.

So where does choice enter the equation, including the optimising function for the choice, the consequences?

The optimizing function is implemented in your biology, which is implemented in physics.

Replies from: Ganapati
comment by Ganapati · 2010-06-08T07:48:05.174Z · LW(p) · GW(p)

In other words, the 'choices' you make are not really choices, but are already predetermined. You didn't really choose to be a determinist; you were programmed to select it once you encountered it.

Replies from: cousin_it, Vladimir_Nesov
comment by cousin_it · 2010-06-08T11:53:42.185Z · LW(p) · GW(p)

Yep, kind of. But your view of determinism is too depressing :-)

My program didn't know in advance what options it would be presented with, but it was programmed to select the option that makes the most sense, e.g. the determinist worldview rather than the mystical one. Like a program that receives an array as input and finds the maximum element in it, the output is "predetermined", but it's still useful. Likewise, the worldview I chose was "predetermined", but that doesn't mean my choice is somehow "wrong" or "invalid", as long as my inner program actually implements valid common sense.
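The max-finding analogy can be made concrete (a minimal sketch; the function name and sample input are mine):

```python
def find_max(xs):
    """Deterministically 'chooses' the largest element of a non-empty list."""
    best = xs[0]
    for x in xs[1:]:
        if x > best:  # the selection rule is fixed in advance
            best = x
    return best

# The program couldn't know its input in advance, and its output is
# fully determined by that input -- yet the output is still useful.
print(find_max([3, 1, 4, 1, 5, 9, 2, 6]))  # prints 9
```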

Replies from: Ganapati
comment by Ganapati · 2010-06-09T08:54:41.607Z · LW(p) · GW(p)

My program didn't know in advance what options it would be presented with, but it was programmed to select the option that makes the most sense, e.g. the determinist worldview rather than the mystical one.

You couldn't possibly know that! Someone programmed to pick the mystical worldview would feel exactly the same and would have been programmed not to recognise his/her own programming too :-)

Like a program that receives an array as input and finds the maximum element in it, the output is "predetermined", but it's still useful.

Of course the output is useful, for the programmer, if any :-)

Likewise, the worldview I chose was "predetermined", but that doesn't mean my choice is somehow "wrong" or "invalid", as long as my inner program actually implements valid common sense.

It appears that regardless of what someone has been programmed to pick, the 'feelings' are no different.

Replies from: cousin_it
comment by cousin_it · 2010-06-09T09:51:12.754Z · LW(p) · GW(p)

If my common sense is invalid and just my imagination, then how in the world do I manage to program computers successfully? That seems to be the most objective test there is, unless you believe all computers are in a conspiracy to deceive humans.

Replies from: Ganapati, Ganapati
comment by Ganapati · 2010-06-12T06:24:59.565Z · LW(p) · GW(p)

I program computers successfully too :-)

comment by Ganapati · 2010-06-13T07:53:43.491Z · LW(p) · GW(p)

Just to clarify, in a deterministic universe, there are no "invalid" or "wrong" things. Everything just is. Every belief and action is just as valid as any other because that is exactly how each of them has been determined to be.

Replies from: cousin_it
comment by cousin_it · 2010-06-13T09:46:35.668Z · LW(p) · GW(p)

No, this belief of yours is wrong. A deterministic universe can contain a correct implementation of a calculator that returns 2+2=4 or an incorrect one that returns 2+2=5.
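A trivial sketch of the point (the function names are mine): both programs below are equally deterministic, yet one is correct and one is not.

```python
def add_correct(a, b):
    return a + b      # deterministic and correct

def add_broken(a, b):
    return a + b + 1  # just as deterministic, but wrong

print(add_correct(2, 2))  # prints 4
print(add_broken(2, 2))   # prints 5
```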

Replies from: Ganapati
comment by Ganapati · 2010-06-13T14:26:14.269Z · LW(p) · GW(p)

A deterministic universe can contain a correct implementation of a calculator that returns 2+2=4 or an incorrect one that returns 2+2=5.

Sure it can. But it is possible to declare one of them as valid only because you are outside of both and you have a notion of what the result should be.

But to avoid the confusion over the use of words I will restate what I said earlier slightly differently.

In a deterministic universe, neither of a pair of opposites like valid/invalid, right/wrong, true/false etc has more significance than the other. Everything just is. Every belief and action is just as significant as any other because that is exactly how each of them has been determined to be.

Replies from: cousin_it, cousin_it, cousin_it
comment by cousin_it · 2010-06-14T13:30:25.243Z · LW(p) · GW(p)

I thought about your argument a bit and I think I understand it better now. Let's unpack it.

First off, if a deterministic world contains a (deterministic) agent that believes the world is deterministic, that agent's belief is correct. So no need to be outside the world to define "correctness".

Another matter is verifying the correctness of beliefs if you're within the world. You seem to argue that a verifier can't trust its own conclusion if it knows itself to be a deterministic program. This is debatable - it depends on how you define "trust" - but let's provisionally accept this. From this you somehow conclude that the world and your mind must be in fact non-deterministic. To me this doesn't follow. Could you explain?

comment by cousin_it · 2010-06-13T15:16:08.220Z · LW(p) · GW(p)

So your argument against determinism is that certain things in your brain appear to have "significance" to you, but in a deterministic world that would be impossible? Does this restatement suffice as a reductio ad absurdum, or do I need to dismantle it further?

comment by cousin_it · 2010-06-14T13:07:56.092Z · LW(p) · GW(p)

I'm kind of confused about your argument. Sometimes I get a glimpse of sense in it, but then I notice some corollary that looks just ridiculously wrong and snap back out. Are you saying that the validity of the statement 2+2=4 depends on whether we live in a deterministic universe? That's a rather extreme form of belief relativism; how in the world can anyone hope to convince you that anything is true?

comment by Vladimir_Nesov · 2010-06-08T15:15:15.143Z · LW(p) · GW(p)

the 'choices' you make are not really choices, but already predetermined

The only way that choices can be made is by being predetermined (by your decision-making algorithm). Paraphrasing the familiar wordplay, choices that are not predetermined refer to decisions that cannot be made, while the real choices, that can actually be made, are predetermined.

Replies from: Blueberry, Ganapati
comment by Blueberry · 2010-06-12T17:00:52.408Z · LW(p) · GW(p)

I like this phrasing; it makes things very clear. Are you alluding to this quote, or something else?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-12T17:33:00.535Z · LW(p) · GW(p)

Yes.

comment by Ganapati · 2010-06-09T08:43:51.516Z · LW(p) · GW(p)

Of course! Since all the choices of all the actors are predetermined, so is the future. So what exactly would be the "purpose" of acting as if the future were not already determined and we can choose an optimising function based on the possible consequences of different actions?

Replies from: Vladimir_Nesov, RobinZ
comment by Vladimir_Nesov · 2010-06-09T10:49:50.846Z · LW(p) · GW(p)

Since the consequences are determined by your algorithm, whatever your algorithm will do, will actually happen. Thus, the algorithm can contemplate what would be the consequences of alternative choices and make the choice it likes most. The consideration of alternatives is part of the decision-making algorithm, which gives it the property of consistently picking goal-optimizing decisions. Only these goal-optimizing decisions actually get made, but the process of considering alternatives is how they get computed.
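A sketch of that shape of decision-making (the function names and the toy consequence model are my own illustration, not Nesov's):

```python
def decide(options, consequences, utility):
    """Deterministic decision-making: simulate the consequence of each
    alternative, then pick the option whose predicted outcome scores highest."""
    return max(options, key=lambda o: utility(consequences(o)))

# Toy example: deciding whether to carry an umbrella when rain is forecast.
consequences = {"umbrella": "dry", "no umbrella": "wet"}.get
utility = {"dry": 1, "wet": 0}.get
print(decide(["umbrella", "no umbrella"], consequences, utility))  # prints umbrella
```

The consideration of the non-chosen alternative happens inside the algorithm; only the goal-optimizing decision actually gets made.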

Replies from: Ganapati
comment by Ganapati · 2010-06-12T06:14:21.590Z · LW(p) · GW(p)

Sure. So consequentialism is the name for the process that happens in every programmed entity, making it useless to distinguish between two different approaches.

comment by RobinZ · 2010-06-09T12:15:22.669Z · LW(p) · GW(p)

In a deterministic universe, the future is logically implied by the present - but you're in the present. The future isn't fated - if, counterfactually, you did something else, then the laws of physics would imply very different events as a consequence - and it isn't predictable - even ignoring computational limits, if you make any error, even on an unmeasurable level, in guessing the current state, your prediction will quickly diverge from reality - it's just logically consistent.

Replies from: Ganapati
comment by Ganapati · 2010-06-12T05:52:00.596Z · LW(p) · GW(p)

if, counterfactually, you did something else, ...

How could it happen? Each component of the system is programmed to react in a predetermined way to the inputs it receives from the rest of the system. The inputs are predetermined, as is the processing algorithm. How can you or I do anything that we have not been preprogrammed to do?

Consider an isolated system with no biological agents involved. It may contain preprogrammed computers. Would you or would you not expect the future evolution of the system to be completely determined? If you would expect its future to be completely determined, why would things change when the system, such as ours, contains biological agents? If you do not expect the future of the system to be completely determined, why not?

Replies from: RobinZ
comment by RobinZ · 2010-06-12T13:49:53.293Z · LW(p) · GW(p)

I said "counterfactual". Let me use an archetypal example of a free-will hypothetical and query your response:

Suppose that there are two worlds, A and A', which are at a certain time indistinguishable in every measurable way. They differ, however, and differ most strongly in the nature of a particular person, Alice, who lives in A versus the nature of her analogue in A', whom we shall call Alice' for convenience.

In the two worlds at the time at which A and A' are indistinguishable, Alice and Alice' are entering a restaurant. They are greeted by a server, seated, and given menus, and the attention of both Alice and Alice' rapidly settles upon two items: the fettucini alfredo and the eggplant parmesan. As it happens, the previously-indistinguishable differences between Alice and Alice' are such that Alice orders fettucini alfredo and Alice' orders eggplant parmesan.

What dishes will Alice and Alice' receive?

I'm off to the market, now - I'll post the followup in a moment.

Replies from: RobinZ
comment by RobinZ · 2010-06-12T15:13:06.939Z · LW(p) · GW(p)

Now: I imagine most people would say that Alice would receive the fettucini and Alice' the eggplant. I will proceed on this assumption.

Now suppose that Alice and Alice' are switched at the moment they entered the restaurant. Neither Alice nor Alice' notice any change. Nobody else notices any change, either. In fact, insofar as anyone in universe A (now containing Alice') and universe A' (now containing Alice) can tell, nothing has happened.

After the switch, Alice' and Alice are seated, open their menus, and pick their orders. What dishes will Alice' and Alice receive?

Replies from: Blueberry
comment by Blueberry · 2010-06-12T16:47:42.674Z · LW(p) · GW(p)

I'm missing the point of this hypothetical. The situation you described is impossible in a deterministic universe. Since we're assuming A and A' are identical at the beginning, what Alice and Alice' order is determined from that initial state. The divergence has already occurred once the two Alices order different things: why does it matter what the waiter brings them?

I'm not sure exactly how these universes would work: it seems to be a dualistic one. Before the Alices order, A and A' are physically identical, but the Alices have different "souls" that can somehow magically change the physical makeup of the universe in strangely predictable ways. The different nature of Alice and Alice' has changed the way two identical sets of atoms move around.

If this applies to the waiter as well, we can't predict what he'll decide to bring Alice: for all we know he may turn into a leopard, because that's his nature.

Replies from: RobinZ
comment by RobinZ · 2010-06-12T17:02:30.092Z · LW(p) · GW(p)

The requirement is not that there is no divergence, but that the divergence is small enough that no-one could notice the difference. Sure, if a superintelligent AI did a molecular-level scan five minutes before the hypothetical started it would be able to tell that there was a switch, but no such being was there.

And the point of the hypothetical is that the question "what if, counterfactually, Alice ordered the eggplant?" is meaningful - it corresponds to physically switching the molecular formation of Alice with that of Alice' at the appropriate moment.

Replies from: Blueberry
comment by Blueberry · 2010-06-12T17:15:34.371Z · LW(p) · GW(p)

I understand now. Sorry; that wasn't clear from the earlier post.

This seems like an intuition pump. You're assuming there is a way to switch the molecular formation of Alice's brain to make her order one dish, instead of another, but not cause any other changes in her. This seems unlikely to me. Messing with her brain like that may cause all kinds of changes we don't know about, to the point where the new person seems totally different (after all, the kind of person Alice was didn't order eggplant). While it's intuitively pleasing to think that there's a switch in her brain we can flip to change just that one thing, the hypothetical is begging the question by assuming so.

Also, suppose I ask "what if Alice ordered the linguine?" Since there are many ways to switch her brain with another brain such that the resulting entity will order the linguine, how do you decide which one to use in determining the meaning of the question?

Replies from: RobinZ
comment by RobinZ · 2010-06-12T19:30:17.197Z · LW(p) · GW(p)

I understand now. Sorry; that wasn't clear from the earlier post.

I know - I didn't phrase it very well.

Messing with her brain like that may cause all kinds of changes we don't know about, to the point where the new person seems totally different (after all, the kind of person Alice was didn't order eggplant). While it's intuitively pleasing to think that there's a switch in her brain we can flip to change just that one thing, the hypothetical is begging the question by assuming so.

Yes, yes it is.

Also, suppose I ask "what if Alice ordered the linguine?" Since there are many ways to switch her brain with another brain such that the resulting entity will order the linguine, how do you decide which one to use in determining the meaning of the question?

I'm not sure. My instinct is to try to minimize the amount the universes differ (maybe taking some sort of sample weighted by a decreasing function of the magnitude of the change), but I don't have a coherent philosophy built around the construction of counterfactuals. My only point is that determinism doesn't make counterfactuals automatically meaningless.

Replies from: Ganapati
comment by Ganapati · 2010-06-13T06:28:09.340Z · LW(p) · GW(p)

The elaborate hypothetical is the equivalent of saying: what if the programming of Alice had been altered, in a minor way that nobody notices, to order eggplant parmesan instead of the fettucini alfredo which her earlier programming would have made her order? Since there is no agent external to the world that can do this, there is no possibility of it happening. Or it could mean that minor departures from the predetermined program are possible in a deterministic universe as long as nobody notices them, which would imply an incompletely determined universe.

Replies from: RobinZ
comment by RobinZ · 2010-06-13T11:46:50.909Z · LW(p) · GW(p)

...

Ganapati, the counterfactual does not happen. That's what "counterfactual" means - something which is contrary to fact.

However, the laws of nature in a deterministic universe are specified well enough to calculate the future from the present, and therefore should be specified well enough to calculate the future* from some modified present*, even if no such present* occurs. The answer to "what would happen if I added a glider here to this frame of a Conway's Life game?" has a defined answer, even though no such glider will be present in the original world.
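The Life example can be run directly (a minimal sketch; the cell coordinates and function name are mine): a still-life "world" is stable, but the counterfactual world with a glider added computes to a different future, even though no glider exists in the original world.

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Life; `cells` is a set of live (x, y) pairs."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next generation if it has exactly 3 live neighbours,
    # or 2 live neighbours and is currently alive.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

world = {(0, 0), (0, 1), (1, 0), (1, 1)}           # a still-life block
glider = {(5, 4), (6, 5), (4, 6), (5, 6), (6, 6)}  # the counterfactual addition

print(life_step(world) == world)                    # prints True
print(life_step(world | glider) == world | glider)  # prints False
```

Both futures are equally well-defined by the rules; only one of the presents is factual.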

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-13T11:55:59.530Z · LW(p) · GW(p)

"what would happen if I added a glider here to this frame of a Conway's Life game?" has a defined answer, even though no such glider will be present in the original world.

Why would you be interested in something that can't occur in the real world?

Replies from: RobinZ
comment by RobinZ · 2010-06-13T12:07:57.921Z · LW(p) · GW(p)

In the "free will" case? Because I want the most favorable option to be factual, and in order to prove that, I need to be able to deduce the consequences of the unfavorable options.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-13T12:19:42.251Z · LW(p) · GW(p)

In the "free will" case?

What?

Because I want the most favorable option to be factual, and in order to prove that, I need to be able to deduce the consequences of the unfavorable options.

Not prove, implement. You are not rationalizing the best option as being the actual one, you are making it so. When you consider all those options, you don't know which ones of them are contrary to fact, and which ones are not. You never consider something you know to be counter-factual.

Replies from: RobinZ
comment by RobinZ · 2010-06-13T12:25:27.661Z · LW(p) · GW(p)

Yes, that's a much better phrasing than mine.

(p.s. you realize that I am having an argument with Ganapati about the compatibility of determinism and free will in this thread, right?)

Replies from: Ganapati
comment by Ganapati · 2010-06-13T14:49:34.229Z · LW(p) · GW(p)

Actually you brought in the counterfactual argument to attempt to explain the significance (or "purpose") of an approach called consequentialism (as opposed to others) in a determined universe.

Replies from: RobinZ
comment by RobinZ · 2010-06-13T15:44:16.514Z · LW(p) · GW(p)

Allow me the privilege of stating my own intentions.

Replies from: Ganapati
comment by Ganapati · 2010-06-13T16:42:01.134Z · LW(p) · GW(p)

You brought up the counterfactualism example right here, so I assumed it was in response to that post.

Replies from: RobinZ
comment by RobinZ · 2010-06-13T17:29:21.933Z · LW(p) · GW(p)

I'm sorry, do you have an objection to the reading of "counterfactual" elaborated in this thread?

Replies from: Ganapati
comment by Ganapati · 2010-06-17T06:35:31.863Z · LW(p) · GW(p)

Sorry for the delay in replying. No, I don't have any objection to the reading of the counterfactual. However I fail to connect it to the question I posed.

In a determined universe, the future is completely determined whether any conscious entity in it can predict it or not. No actions, considerations, beliefs of any entity have any more significance on the future than those of another simply because they cannot alter it.

Determinism, like solipsism, is a logically consistent system of belief. It cannot be proven wrong any more than solipsism can be, since the only "evidence" disproving it, if any, lies with the entity believing it, not outside.

Do you feel that you are a purposeless entity whose actions and beliefs have no significance whatsoever on the future? If so, your feelings are very much consistent with your belief in determinism. If not, it may be time to take into consideration the evidence in the form of your feelings.

Thank you all for your time!

Replies from: RobinZ
comment by RobinZ · 2010-06-18T17:30:52.775Z · LW(p) · GW(p)

In a determined universe, the future is completely determined whether any conscious entity in it can predict it or not. No actions, considerations, beliefs of any entity have any more significance on the future than those of another simply because they cannot alter it. [emphasis added]

Wrong. If Alice orders the fettucini in world A, she gets fettucini, but if Alice' orders eggplant in world A, she gets eggplant. The future is not fixed in advance - it is a function of the present, and your acts in the present create the future.

There's an old Nozick quote that I found in Daniel Dennett's Elbow Room: "No one has ever announced that because determinism is true thermostats do not control temperature." Our actions and beliefs have exactly the same ontological significance as the switching and setting of the thermostat. Tell me in what sense a thermostat does not control the temperature.

Replies from: red75
comment by red75 · 2010-06-18T21:17:03.350Z · LW(p) · GW(p)

Correction.

Ganapati is partially right. In a deterministic universe (DU), initial conditions define all of history, from beginning to end, by definition. If it is predetermined that Alice will order fettucini, she will order fettucini. But that doesn't mean Alice must order fettucini. I'll elaborate on that further.

  1. No one inside a DU can precisely predict the future. Proof: Suppose we can exactly predict the future. Then either (A) we can change it, thus proving that the prediction was incorrect, or (B) we can't change it a bit. How can case B hold? It can't. A prediction brings information about the future, and so it changes our actions. Let p be a prediction, and F(p) be the prediction given that we know prediction p. For case B to be possible, the function F must have a fixed point p' = F(p'), but information from the future brings entropy, which causes future entropy to increase, thereby increasing the prediction's entropy, and so on. Thus there can be no fixed point. QED.

No actions, considerations, beliefs of any entity have any more significance on the future than those of another simply because they cannot alter it.

  2. Given 1, no one can be sure that his/her actions are predetermined to vanish. On the other hand, if one decides to abstain from acting, then it is more likely that he/she is predetermined to fail. Thus his/her actions (if any) have less probability of affecting the future. On the third hand, if one stands up and wins, only then will one know that one was predetermined to win, and not a second earlier.

  3. If Alice cannot decide what she likes more, she cannot just say "Oh! I must eat fettucini. It is my fate." She hasn't and cannot have such information in principle. She must decide for herself, determination or not. And if an external observer (let's call him god) comes down and says to Alice "It's your fate to eat fettucini" (thus effectively making the deterministic universe non-deterministic), no physical law will force Alice to do it.

Replies from: RobinZ
comment by RobinZ · 2010-06-18T22:37:48.728Z · LW(p) · GW(p)

I'd like to dispute your usage of "predetermined" there: like "fated", it implies an establishment in advance, rather than by events. A game of Agricola is predetermined to last 14 turns, even in a nondeterministic universe, because no change to gameplay at any point during the game will cause it to terminate before or after the 14th turn. The rules say 14, and that's fixed in advance. (Factors outside the game may cause mistakes to be made or the game not to finish, but those are both different from the game lasting 13 or 15 turns.) On the opposite side, an arbitrary game of chess is not predetermined to last (as that one did) 24 turns, even in a deterministic universe, because a (counterfactual) change to gameplay could easily cause it to last fewer or more.

If one may determine without knowing Alice's actions what dish she will be served (e.g. if the eggplant is spoiled), then she may be doomed to get that dish, but in that case the (deterministic or nondeterministic) causal chain leading to her dish does not pass through her decision. And that makes the difference.

Replies from: red75
comment by red75 · 2010-06-19T06:04:47.535Z · LW(p) · GW(p)

I'm not sure that I sufficiently understand you. "Fated" implies that no matter what one does, one will end up as fate dictates, right? In other words: in all counterfactual universes one's fate is the same. The predetermination I speak of is different. It is a property of a deterministic universe: all events are determined by initial conditions only.

When Alice decides what she will order, she can construct in her mind a bunch of different universes. Predetermination doesn't mean that in all those constructed universes she will get fettucini; predetermination means that only one constructed universe will be factual. As I proved in the previous post, Alice cannot know in advance which constructed universe is factual. Alice cannot know that she's in universe A, where she's predetermined to eat fettucini, or in universe B, where she's to eat eggplant. And her decision process is an integral part of each of these universes.

Without her decision universe A cannot be universe A.

So her decision is crucial part of causal chain.

Did I answer your question?

Edit: spellcheck.

Replies from: RobinZ
comment by RobinZ · 2010-06-19T14:20:58.067Z · LW(p) · GW(p)

I don't like the connotations, but sure - that's a mathematically consistent definition.

comment by RobinZ · 2010-06-07T12:22:59.269Z · LW(p) · GW(p)

P.S. Welcome to Less Wrong! Besides posts linked from the "free will" Wiki page - particularly How An Algorithm Feels From Inside - you may be interested in browsing the various Sequences. The introductory sequence on Map and Territory is a good place to start.

Edit: You may also try browsing the backlinks from posts you like - that's how I originally read through EY's archive.

Replies from: Ganapati
comment by Ganapati · 2010-06-08T07:41:32.676Z · LW(p) · GW(p)

Thanks! I read the links and sequences.

Replies from: Jack
comment by Jack · 2010-06-09T12:31:20.518Z · LW(p) · GW(p)

Not in one day you didn't.

Replies from: Ganapati
comment by Ganapati · 2010-06-10T05:37:18.945Z · LW(p) · GW(p)

I didn't read them in one day and not all of them either.

I 'stumbled upon' this article on the night of June 1 (GMT + 5.30) and did a bit of research on the site, looking to check if my question had been previously raised and answered. In the process I did end up reading a few articles and sequences.

comment by soreff · 2010-05-31T18:59:48.443Z · LW(p) · GW(p)

Very good article!

A couple of comments:

So here, at last, is a rule for which diseases we offer sympathy, and which we offer condemnation: if giving condemnation instead of sympathy decreases the incidence of the disease enough to be worth the hurt feelings, condemn; otherwise, sympathize.

Almost agreed: it is also important to recheck criterion 3 ("Something unpleasant; when you have it, you want to get rid of it") to see if reducing the incidence of the disease is actually a worthwhile goal.

On another note:

Cancer satisfies every one of these criteria, and so we have no qualms whatsoever about classifying it as a disease.

Criteria 1 ("Something rare; the vast majority of people don't have it") and perhaps 2 ("Something discrete; a graph would show two widely separate populations, one with the disease and one without, and not a normal distribution") are somewhat arguable, at least for some types. Quoth Wikipedia:

Autopsy studies of Chinese, German, Israeli, Jamaican, Swedish, and Ugandan men who died of other causes have found prostate cancer in thirty percent of men in their 50s, and in eighty percent of men in their 70s

comment by RobinZ · 2010-05-31T03:38:01.302Z · LW(p) · GW(p)

Nicely done. (If I had anything else to add, I would add it.)

comment by alexs · 2010-05-31T16:22:50.533Z · LW(p) · GW(p)

PracticalEthicsNews.com has a few recent posts, a talk, and an interview about whether addiction is a disease. It becomes quite obvious that there is always more at stake in these debates than just the appropriate definition of a medical concept.

comment by Peterdjones · 2011-05-16T19:14:45.710Z · LW(p) · GW(p)

In this concept, people make decisions using their free will, a spiritual entity operating free from biology or circumstance

Does that mean naturalistic theories of free will, like Robert Kane's, are false by definition?