Morality and relativistic vertigo
post by Academian · 2010-10-12T02:00:43.474Z · LW · GW · Legacy
tl;dr: Relativism bottoms out in realism by objectifying relations between subjective notions. This should be communicated using concrete examples that show its practical importance. In particular, it implies that morality should think about science, and science should think about morality.
Sam Harris attacks moral uber-relativism when he asserts that "Science can answer moral questions". Countering the counterargument that morality is too imprecise to be treated by science, he makes an excellent comparison: "healthy" is not a precisely defined concept, but no one is crazy enough to claim that medicine cannot answer questions of health.
What needs adding to his presentation (which is worth seeing, though I don't entirely agree with it) is what I consider the strongest concise argument for science's moral relevance: if morality is relative, then the task of science is simply to examine the absolute relations between morals. For example, suppose you uphold the following two moral claims:
- "Teachers should be allowed to physically punish their students."
- "Children should be raised not to commit violence against others."
First of all, note that questions of causality are significantly more accessible to science than was thought possible before 2000. Now suppose a cleverly designed, non-invasive causal analysis found that physically punishing children, frequently or infrequently, causes them to be more likely to commit criminal violence as adults. Would you find this discovery irrelevant to your adherence to these morals? Absolutely not. You would reflect and realize that you needed to prioritize them in some way. Most would prioritize the second one, but in any case, science would have made a genuine impact.
So although either of these two morals is purely subjective on its own, how they interrelate is a question of objective fact. Though perhaps obvious, this idea has some seriously persuasive consequences and is not to be taken lightly. Why?
First of all, you might change your morals in response to them not relating to each other in the way you expected. Ideas parse differently when they relate differently. "Teachers should be allowed to physically punish their students" might never feel the same to you after you find out it causes adult violence. Even if it originally felt like a terminal (fundamental) value, your prioritization of (2) might make (1) slowly fade out of your mind over time. In hindsight, you might just see it as an old, misinformed instrumental value that was never in fact terminal.
Second, as we increase the number of morals under consideration, the number of relations for science to consider grows rapidly, as (n²−n)/2: we have many more moral relations than morals themselves. Suddenly the old disjointed list of untouchable maxims called "morals" fades into the background, and we see a throbbing circulatory system of moral relations, objective questions and answers without which no person can competently reflect on her own morality. A highly prevalent moral like "human suffering is undesirable" looks like a major organ: important on its own to a lot of people, with lots of connections in and out for science to examine.
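As a quick illustration of that growth rate, the count of unordered pairs among n morals is the standard binomial coefficient (the specific values of n below are arbitrary examples of mine):

$$\binom{n}{2} = \frac{n^2 - n}{2}: \qquad n = 5 \Rightarrow 10 \text{ relations}, \qquad n = 20 \Rightarrow 190 \text{ relations}.$$

So a modest stock of twenty morals already generates nearly two hundred pairwise relations for science to examine.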
Treating relativistic vertigo
To the best of my recollection, I have never heard the phrase "it's all relative" used to an effect that didn't involve stopping people from thinking. When the topic of conversation — morality, belief, success, rationality, or what have you — is suddenly revealed or claimed to depend on a context, people find it disorienting, often to the point of feeling that the entire discourse has been and will continue to be "meaningless" or "arbitrary". Once this happens, it can be very difficult to persuade them to keep thinking, let alone to think productively…
To rebut this sort of conceptual nihilism, it's natural to respond with analogies to other relative concepts that are clearly useful to think about:
"Position, momentum, and energy are only relatively defined as numbers, but we don't abandon scientific study of those, do we?"
While an important observation, this inevitably evokes the "But that's different" analogy-immune response. The real cure is in understanding explicitly what to do with relative notions:
If belief is subjective, let us examine objective relations between beliefs.
If morality is relative, let us examine absolute relations between morals.
If beauty is in the eye of the beholder, let us examine the eyes of the beholders.
…
To use one of these lines of argument effectively — and it can be very effective — one should follow up immediately with a specific example from the case at hand. Don't let the conversation drift into abstraction. If you're talking about morality, there is no shortage of objective moral relations that science can handle, so you can pick one at random to show how easy and common it is:
- "Birth control should be discouraged."
"Teen pregnancy / the spread of STDs is undesirable."
Question: Does promoting the use of condoms increase or decrease teen pregnancy rates / the spread of STDs? - "Masturbation should be frowned upon."
"Married couples should do their best not to cheat on each other."
Question: Does masturbation increase or decrease adulterous impulses over time? - "Gay couples should not be allowed to adopt children."
"Children should not be raised in psychologically damaging environments."
Question: What are the psychological effects of being raised by gay parents?
I'm not advocating any of these particular moral claims here, nor any particular resolution between them; I'm simply saying that the answer to the given question — and to many other relevant ones — puts you in a much better position to reflect on these issues. Your opinion after you know the answer is more valuable than it was before.
"But of course science can answer some moral questions... the point is that it can't answer all of them. It can't tell us ultimately what is good or evil."
No. That is not the point. The point is whether you want teachers to beat their students. Do you? Well, science can help you decide. And more importantly, once you have decided, it should help you lead others to the same conclusion.
A lesson from history: What happens when you examine objective relations between subjective beliefs? You get probability theory… Bayesian updating… we know this story; it started around 200 years ago, and it ends well.
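The objective relation at the heart of that story is Bayes' theorem: however subjective your prior degree of belief P(H) may be, the constraint relating it to your posterior after seeing evidence E is the same for everyone (this is just the standard formula, stated here for concreteness):

$$P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}$$

Two people may start from very different priors, yet the relation between prior and posterior is an objective fact about both of them.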
Now it's morality's turn.
Between the subjective and the subjective lies the objective.
Relative does not mean structureless.
It does not mean arbitrary.
It does not mean meaningless.
Let us not discard the compass along with the map.
80 comments
comment by Vladimir_M · 2010-10-12T06:23:08.868Z · LW(p) · GW(p)
Also, this line of argument struck me as a sneaky piece of Dark Arts, though in all likelihood unintentional:
Countering the counterargument that morality is too imprecise to be treated by science, he [Sam Harris] makes an excellent comparison: "healthy" is not a precisely defined concept, but no one is crazy enough to claim that medicine cannot answer questions of health.
Actually, in the overwhelming majority of cases, "healthy" is a very precisely and uncontroversially defined concept. Nobody would claim that I became healthier if I started coughing blood, lost control of a limb, or developed chronic headaches.
However, observe one area where the concept of "health" is actually imprecise and controversial, namely mental health. And guess what: there are many smart and eminently sane people questioning whether, to what extent, and in what situations medicine can legitimately answer questions of health in this area. (I recommend this recent interview with Gary Greenberg as an excellent example.) Moreover, in this area, there are plenty of questions where both ideological and venal interests interfere with the discussion, and as a result, it's undeniable that at least some corruption of science has taken place, and that supposedly scientific documents like the DSM are laden with judgments that reflect these influences rather than any real scientific knowledge.
So, it seems to me that properly considered, this example actually undermines the case it was supposed to support.
↑ comment by prase · 2010-10-12T07:03:40.978Z · LW(p) · GW(p)
Nobody would claim that I became healthier if I started coughing blood, lost control of a limb, or developed chronic headaches.
Nobody would claim that I became more moral if I started stealing, killed two people for money, or turned into a notorious liar. That there are conditions uncontroversially classified as disease doesn't mean that the boundary is strict and precise.
↑ comment by Vladimir_M · 2010-10-12T20:07:33.811Z · LW(p) · GW(p)
I don't see how this answers my objection. I'll try to restate my main point in a more clear form.
The claim that "'healthy' is not a precisely defined concept, but no one is crazy enough to utter that medicine cannot answer questions of health" is, while superficially plausible, in fact false under the interpretation relevant for this discussion. Namely, the claim is true only for those issues where the concept of "health" is precise and uncontroversial. In situations where the concept of "health" is imprecise and a matter of dispute, there are sane and knowledgeable people who plausibly dispute that medicine can legitimately answer questions of health in those particular situations. Thus, what superficially looks like a lucid analogy is in fact a rhetorical sleight of hand.
(Also, I'd say that by any reasonable measure, questions of health vs. disease are typically much more clear-cut than moral questions. The appearance of coughing or headaches, ceteris paribus, represents an unambiguous reduction of health; on the other hand, even killing requires significant qualifications to be universally recognized as evil. But my main objection stands regardless of whether you agree with this.)
↑ comment by atucker · 2010-10-13T01:44:58.333Z · LW(p) · GW(p)
It's easier to tell that something is unhealthy than whether it's optimally healthy. Coughing up blood is worse than not doing so, but is good stamina better than increased alertness?
(I'd posit that) most moral arguments are over whether something is immoral or not, and I think that a lot of the time those can be related to facts.
↑ comment by Mass_Driver · 2010-10-14T13:43:13.327Z · LW(p) · GW(p)
You're right that people often wonder whether something is moral as if it were a binary question, but they should be concerned about precisely how good or bad various actions or policies are, because all actions have opportunity costs.
It makes little sense to say "it is immoral for teachers to beat schoolchildren" without considering the effects of not beating schoolchildren.
↑ comment by prase · 2010-10-13T09:35:25.998Z · LW(p) · GW(p)
I am not sure whether I can fully agree, although I see your point more clearly now. To give one example, we had a discussion about deafness recently. One of the disputed questions was whether the deaf are "sick" or "a linguistic minority". If deafness could be easily cured in all instances (and whether it can is purely a question of medicine), then the "linguistic minority" stance would be hardly defensible. Anyway, there are questions which medicine certainly can answer (typically: what are the causes, can the condition be cured, what are the side effects of the treatment) pertaining to conditions whose qualification as disease is disputed by reasonable people.
↑ comment by torekp · 2010-10-18T01:03:47.675Z · LW(p) · GW(p)
I'm confused. It looks like the original post is arguing that science can answer some moral questions, and using the health analogy to advance this claim. In that case, pointing out that science can't answer all questions of health but only some, even if true, does not undercut the original argument.
↑ comment by Zetetic · 2010-10-13T21:48:13.042Z · LW(p) · GW(p)
Perhaps the fields of psychology and ethics both exhibit a continuum of objectivity of a similar nature. If this is the case, then just as psychology is helpful, so could be a well-constructed formal theory of ethical action. Certainly moral solutions are not clear-cut, and many factors can play into choosing how to act.
An ethical system qua normative claims is effectively a system of heuristics for effecting an outcome. The normative claims represent our physical (neurological) response to external consequences, and there is definite interplay between situational parameters that weight the decision to act in one way or another. Many people, for instance, claim it is wrong to murder one person to save another, but various factors can come into play that alter the weight of that conviction. For instance, it is generally considered acceptable to kill an attacker when it is necessary to prevent him/her from killing you.
I am not convinced that it is not possible to effectively model average (or any augmentation of) human morality, and I think it is also likely that if we could do this we might be able to more effectively sort out which actions to take given certain parameters. However, like a healthy psyche, a healthy morality is defined via social standards. Because of that, it will not be absolute, but rather goal-relative. As far as I can tell, a healthy psyche is most generally one that allows for adherence to the most commonly held social conventions for what is of value and how that which is valuable is acceptably obtained.
As long as certain basic reactions to certain consequences of one's actions are nearly universally accepted (and this seems to be the case when it comes to very basic questions of morality), I think it is reasonable in theory (though I am fuzzy about how one might work out the details) to think that we could model moral decision making in such a way that it could effectively help us make practical decisions that yield optimal results.
↑ comment by taw · 2010-10-27T03:14:28.980Z · LW(p) · GW(p)
Nobody would claim that I became more moral if I started stealing, killed two people for money, or turned into a notorious liar.
Nobody except every single person that ever lived. I call bullshit on this entire thread. Pretty much every culture supports these in some contexts and condones them in others, with the contexts differing between cultures.
Even today we have ridiculously many contexts in which killing another human being is considered morally acceptable, and consequentially equivalent inaction resulting in someone's death hardly triggers any moral reaction at all.
Could moral absolutists name even a single action that is universally condoned by every culture? (Just don't try to answer with implicit moral context: "intentional killing" would be an acceptable answer if it were true; "murder" would not.)
↑ comment by prase · 2010-10-27T11:53:03.167Z · LW(p) · GW(p)
Did you deliberately choose the least favourable interpretation of what I have written? I specifically included "for money" as a qualifier for universally immoral killing, which you have ignored. My point wasn't that killing is universally immoral, but that there are patterns of behaviour whose immorality isn't disputed by reasonable people. But fine, I think I can take your reply literally too. Do you really claim that every single person would condone my killing two strangers and taking their money? That's just ridiculous.
I also don't accept the cultural relativist argument, especially in the context of this debate. I tried to support the idea that morality is more or less as uniquely and precisely defined as health. Of course there are cultures which have unusual moralities, but there are cultures with unusual notions of health too (e.g. Hindus celebrating various physical deformities). But if you are steering the argument in this direction, please tell me in what culture I am morally entitled to kill my neighbour just because I want to occupy his house.
↑ comment by taw · 2010-10-27T18:46:23.082Z · LW(p) · GW(p)
There are at least 20 million human beings paid to kill other human beings right now.
The moral context in which this happens is hardly unusual.
↑ comment by Perplexed · 2010-10-27T03:19:20.931Z · LW(p) · GW(p)
Could moral absolutists name even a single action that is universally condoned by every culture?
Uh, ... breathing?
Or am I taking the question too literally here? :)
↑ comment by taw · 2010-10-27T03:41:38.748Z · LW(p) · GW(p)
Uh, ... breathing?
Without looking far: Socrates considered it morally obligatory to stop breathing, and living for that matter, by drinking hemlock. There seems to be agreement that he could just as easily have fled from Athens, so it was a moral choice, not something he was forced to do.
It's all relative.
↑ comment by HughRistik · 2010-10-12T21:27:31.097Z · LW(p) · GW(p)
Yes. Morals are made of a completely different substance from anything else, including concepts about the empirical world like "health." Fuzzy concepts about "morality" and fuzzy concepts about how to classify things based on their empirical features are not even the same type of fuzziness. This is philosophy 101.
↑ comment by wedrifid · 2010-10-15T06:00:28.026Z · LW(p) · GW(p)
Fuzzy concepts about "morality" and fuzzy concepts about how to classify things based on their empirical features are not even the same type of fuzziness. This is philosophy 101.
It probably is Philosophy 101. But in Philosophy 202 you go back and review the overlap and interaction of the two.
↑ comment by [deleted] · 2012-09-14T10:32:43.247Z · LW(p) · GW(p)
May I just remark that we are not libertarian deontologists, but rather determinist consequentialists; mental illness can be bad in many ways: the patient zerself can express that it is undesirable (many developmentally handicapped people are aware of their disability), the patient's peers and loved ones can express that it is undesirable (my uncle is manic depressive and only admitted so to himself in his early forties), the mental illness can have negative repercussions for society (treatment costs, damages caused by the patient), a prospective mother can express that having a child with a disorder is undesirable, etc.
Mental illness is illness; the first clue is right there in the name. Most patients, upon realising they have a disorder, will want it gone, if for no other reason than to fit in. Classifying which things are disorders and which aren't is just a matter of looking at the consequences and making a cost-benefit analysis.
comment by [deleted] · 2010-10-12T04:54:28.437Z · LW(p) · GW(p)
Bad tactics: mentioning Sam Harris (who got a pretty bad reception here) and choosing somewhat political examples.
Your point seems so true as to be obvious. Even a deontologist cares about the state of the world; if you have a duty to, say, not kill people, it is relevant to know what kills people. That may involve only common-sense knowledge, but it may sometimes require the kind of science done by specialists.
(Even if your duty is to have a "good will," your good will must be somehow connected to the state of the real world; how can you be said to have a "good will" if you feed your wife a liquid without being remotely concerned as to whether it's poison or not?)
↑ comment by Academian · 2010-10-12T13:19:49.113Z · LW(p) · GW(p)
Bad tactics: mentioning Sam Harris (who got a pretty bad reception here) and choosing somewhat political examples.
I didn't want to choose issues people already agreed upon or ignored, including Harris himself.
Your point seems so true as to be obvious. ...
Have you not had a conversation that was ended or degraded with "Well, morality is subjective anyway, so this is all a pointless question"? The goal of the post is to respond as effectively as possible to this disorientation, and unsurprisingly, the most convincing response is an obviously true one... what I'm offering is which obviously true response is most effective. That's what I was getting at when I wrote
Though perhaps obvious, this idea has some seriously persuasive consequences
though maybe I should expand on that in the OP?
↑ comment by [deleted] · 2010-10-12T15:23:54.289Z · LW(p) · GW(p)
I didn't mean it as a criticism of you -- I meant more that I was shocked that people in the comments disagreed with your argument. I mean, no matter how you form your moral values, they're going to be affected by factual claims, and people will change their opinions on moral issues based on learning new objective facts. Actually, that's probably the predominant way that people change their minds on moral issues.
"X is good."
"What's that? You say X kills vast numbers of people? You have strong evidence for that? Oops, X is bad."
comment by Perplexed · 2010-10-12T03:42:47.453Z · LW(p) · GW(p)
All of your examples dealing with morality take a consequentialist stance with regard to ethics. I don't think that anyone has ever doubted that science might be relevant in computing the expected consequences of actions. So, I don't think you are saying anything fundamentally new here by applying science to pairs of ethical maxims rather than to one at a time.
But a lot of people are not consequentialists - they are deontologists (i.e. believers in moral duties). That duties may be in conflict on occasion has also been known for a long time - I'm told this theme was common in Greek tragedy. I'm curious as to whether and how your methodology can find a toehold for science in a duty-based account of morality.
For example:
- Everyone has a duty not to masturbate.
- Every married person has a duty not to commit adultery.
Where is the conflict, even if science is brought in?
↑ comment by Vladimir_M · 2010-10-12T04:13:17.901Z · LW(p) · GW(p)
Perplexed:
But a lot of people are not consequentialists - they are deontologists (i.e. believers in moral duties).
Actually, my impression is that the overwhelming majority of people are practitioners of folk virtue ethics in their own personal lives. (This typically applies to the self-professed consequentialists and deontologists too, including those who have made whole academic careers out of advocating these ideas in the abstract.) I expanded on this thesis once in a long and somewhat rambling comment, which I should rewrite in a more systematic way sometime.
It mostly boils down to maintaining and enforcing an elaborate system of tacit-agreement focal points in one's interactions with other people, and priding oneself on being the sort of person who does this with consistent high skill, which is one of the basic elements of what the ancients called "virtue." (Of course, when it comes to views that don't have practical relevance for one's personal life, it's mostly about signaling games instead.)
↑ comment by fortyeridania · 2010-10-12T15:03:35.057Z · LW(p) · GW(p)
I don't think that anyone has ever doubted that science might be relevant in computing the expected consequences of actions.
Indeed. Put differently, science bears upon instrumental issues but not terminal ones. What would falsify this idea would be an example of new factual knowledge changing someone's perception of the moral value of some action, with this change persisting even after adjusting for the effect the knowledge has on the instrumental value of the action.
Neither Harris nor Academian seems to have provided such an example, and I'm not sure one exists. Following are two examples of a slightly different type that also seem to fail.
Alice thinks homosexuality is immoral because it's unnatural. Bob tells her that there are cases of animal homosexuality. Alice decides that it's not unnatural and that it isn't wrong. (But isn't being natural the end, with sexuality being merely a means, such that what we see here is still just a revaluation of instruments?)
Alice thinks it's wrong to X until Bob tells her about an evopsych theory under which condemning X was adaptive before people invented farming. Condemning X is not obviously adaptive or maladaptive today. Alice stops condemning X because she thinks her disapproval of it was just a mind trick and she'd rather not expend effort condemning things that aren't "really wrong." (Again, the end here is some sort of mental energy economy, while the instrument is her moral belief set?)
That said, I'm not too comfortable with the idea that new knowledge has no effect on terminal values. This is because the other contenders for influence on terminal values (e.g. ancient instinct) seem decidedly less open to my control.
P.S. I'm rather new here, and have not finished the sequences. If I've missed something that's already been covered, I'd love a point in the correct direction.
↑ comment by Academian · 2010-10-15T02:04:29.292Z · LW(p) · GW(p)
...science bears upon instrumental issues but not terminal ones.
For what I consider non-obvious reasons, I disagree. As you say (and thanks for pointing this out explicitly),
What would falsify this idea would be an example of new factual knowledge changing someone's perception of the moral value of some action, with this change persisting even after adjusting for the effect the knowledge has on the instrumental value of the action.
I have undergone changes in values that I would describe in this way. Namely, I had something I considered a terminal value that I stopped considering terminal upon realizing something factual about it. I'm guessing LucasSloan and Jayson_Virissimo are referring to similar experiences in these comments.
You could argue that its changing means "it wasn't really terminal to begin with". However, the separation of a given utility function into values and balancing operations is non-unique, so my current opinion is that the terminal/instrumental distinction is at best somewhat nominal. In other words, its ceasing to feel terminal may be the only sort of change worth calling "not being terminal anymore".
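(A minimal toy sketch of that non-uniqueness, with components A and B chosen arbitrarily for illustration: the same utility function can be decomposed into different "values" with different weights, e.g.

$$U(x) = 2A(x) + B(x) = A(x) + \bigl(A(x) + B(x)\bigr),$$

so whether the terminal values are "A and B" or "A and A+B" is a choice of description, not something fixed by U itself.)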
So I think you should more precisely demand an example of a person's utility function changing in response to knowledge. On the day of the factual realization I mentioned above, while it's clear that my description of my utility function to myself and others changed, it's not clear to me that the function itself changed much right away. But it does seem to me that over time, expressing it differently has gradually changed the function, though I can't be sure.
I only hinted at all this when I added
First of all, you might change your morals in response to them not relating to each other in the way you expected. Ideas parse differently when they relate differently. "Teachers should be allowed to physically punish their students" might never feel the same to you after you find out it causes adult violence.
When I first made the utility function/description distinction, it was for abstract reasons (I was making a toy model of human morality for another purpose), and I didn't quite notice the implications it would have for how people think of moral progress. Now in response to your demand for explicit examples, I'm a lot more motivated to sort this out. Thanks!
↑ comment by torekp · 2010-10-18T01:47:52.048Z · LW(p) · GW(p)
I have undergone changes in values that I would describe in this way. Namely, I had something I considered a terminal value that I stopped considering terminal upon realizing something factual about it.
Changing terminal values in response to learning is not only possible, but downright normal. We pursue one goal or another and find the life thus lived to be good or bad in our experience. We learn more about the goal-state or goal object, and it deepens or loses its attraction.
This needn't mean that "the true terminal value" is pleasure or other positive emotion, even though happiness does play a role in such learning. Most people reject wire-heading: clearly pleasure is not their overarching "true terminal value."
↑ comment by fortyeridania · 2010-10-18T04:11:53.913Z · LW(p) · GW(p)
This needn't mean that "the true terminal value" is pleasure or other positive emotion
True, it wouldn't mean that pleasure was the actual terminal value, and the fact that many people reject wire-heading is evidence that pleasure is indeed not a terminal value for those people.
However, what role could "happiness" or feelings of well-being play, if not as true terminal values, if it's in response to those feelings that people change (what they thought were) their terminal values?
comment by Vladimir_M · 2010-10-12T03:29:56.662Z · LW(p) · GW(p)
Trouble is, when moral conclusions (and thus also the political and ideological positions that follow from them) depend on the conclusions of science, what force is going to keep scientists objective when they're faced with the resulting biases and perverse incentives? It's not like we have an oracle that would be guaranteed to provide objective and accurate scientific answers regardless of the moral, ideological, and political controversies for which the questions are relevant.
The evidence from the history of science, both past and current, clearly shows that organized science reliably produces accurate, well-substantiated, and unbiased results only when its practitioners are not subject to incentives, either venal or ideological, to reach some predetermined conclusions. In contrast, whenever some powerful political or ideological forces have needed a fig leaf of scientific legitimacy, they have had no problem finding allies and stooges in the academic world willing to produce junk science suitable for their purposes. For an example, just look at the history of 20th century economics, or any other field that was ever involved in ideological controversies, for that matter.
Considering this, I disagree with this post radically. Involving science in normative controversies is a sure way to corrupt and debase the former, not to improve the epistemological standards in the latter.
↑ comment by [deleted] · 2010-10-12T04:42:27.930Z · LW(p) · GW(p)
Science is involved in moral controversies even if the scientists aren't aware they're participating in a moral debate. Any moral question refers to a state of the real world, and so whenever scientists discover something about the state of the world, their knowledge could be used for a moral question. For instance, the discovery that fish can feel pain has implications for bioethics, but I'm not sure the scientists involved were thinking about bioethics.
Science is involved in moral questions necessarily, in exactly the same way that ordinary perception and knowledge is involved in moral questions. The question "Is it moral for me to shoot this gun at you?" has something to do with the state of the world: is the gun loaded? are you shooting at me? Obviously in making a moral decision you would use your knowledge of such matters, right? You would not prefer to remain agnostic on factual questions?
So likewise, "Should I vaccinate my child?" is a moral question that depends on the scientific questions "Does the vaccine prevent disease?" and "Does it have side effects?" Would you prefer science to remain agnostic on those questions because they are related to a moral issue? Would you prefer never to use scientific evidence in making this decision?
↑ comment by Vladimir_M · 2010-10-12T05:44:18.683Z · LW(p) · GW(p)
What you write is true, but these facts should be seen as imposing practical limitations on science. Sometimes, scientific inquiry will stumble onto ideologically charged questions, and the less aware the scientists are of the ideological implications, the greater the chance that their work will be sound. If the ideological implications are clear, the partisan opinions impassioned, and the consequences for practical power politics undeniable, we can't realistically expect that the results will not be influenced by these considerations, whether consciously or not. And if the scientific work is specifically motivated by the fact that the question is interesting for reasons of ideology or policy, the confidence we can have in its quality is very low indeed.
For all practical purposes, this imposes limitations on the efficacy of institutional science of the sort we have today, and this must be recognized by anyone whose interest is finding truth rather than ideological ammunition. There are already many research areas where the ideological influences are so strong that their output can be trusted only after a very careful examination, and there are those whose output is almost pure bullshit, yet nevertheless gets to be adorned with the most prestigious academic affiliations. Therefore, it seems pretty clear to me that in the present situation, science is already excessively engaged in ideologically sensitive areas, and encouraging further such engagement will result only in additional corruption of science, not bringing clarity and rigor to the discussions of these areas.
Take your example of vaccination. In a situation where researchers consider it a moral imperative to dispel the crackpot conspiracy theories and pseudoscientific claims about vaccination, I have very little confidence that their research will provide an accurate picture of the risks and negative consequences of vaccination if their magnitude actually is non-negligible, for fear of providing ammunition to the anti-vaccination side. Now, in this case, it does seem like the situation is simple enough that all evidence overwhelmingly points to the pro-vaccination side, and assuming agreement on the facts, there is no significant additional disagreement on values and preferences, so there isn't much concern overall. But often neither is the case, and insisting that science should be involved in the controversies more heavily will ultimately just corrupt and debase science, not bring any clarity to the situation.
↑ comment by [deleted] · 2010-10-12T11:59:00.500Z · LW(p) · GW(p)
That's fair, insofar as science doesn't give you correct answers when it isn't working properly. When science isn't working properly, the results of science are no better, or barely better, than random.
A few questions: One, are you saying that scientists should strive to be ignorant of the existence of widely discussed ideological and moral issues? Is this one of the cases where less knowledge is better than more knowledge?
Two, what is an ideology? (Of course, I know how to use the word in a sentence, but you use it so often on LW that I wonder if you have a precise definition.) For example, would you describe yourself as having any ideology?
Three, of the possible means one could use to achieve one's desires, would you say that writing biased scientific papers is an immoral means? What about persuasive essays?
↑ comment by Vladimir_M · 2010-10-13T01:29:32.218Z · LW(p) · GW(p)
SarahC:
A few questions: One, are you saying that scientists should strive to be ignorant of the existence of widely discussed ideological and moral issues? Is this one of the cases where less knowledge is better than more knowledge?
Well, first, it depends on what they're working on. Many things are remote enough from any conceivable issues of ideology and power politics that this is not a problem; for example, Albert Einstein's very silly ideology didn't seem to interfere with his physics. However, topics that have bearing on such issues would indeed be best done by space aliens who'd feel complete disconnect from all human concerns. This seems to me like an entirely obvious corollary of the general principle that in the interest of objectivity, a judge should have no personal stakes in the case he presides over.
If scientists could somehow remain ignorant of the ideological implications of their work, this would indeed have a positive effect on their objectivity. But of course this is impossible in practice, so it would make no sense to strive for it. This is a deep problem without a solution in sight. (Except for palliative measures like increasing public awareness that in ideologically sensitive areas, one should be skeptical even towards work with highly prestigious affiliations.)
Two, what is an ideology? (Of course, I know how to use the word in a sentence, but you use it so often on LW that I wonder if you have a precise definition.) For example, would you describe yourself as having any ideology?
My favorite characterization was given by James Burnham: “An ‘ideology’ is similar in the social sphere to what is sometimes called ‘rationalization’ in the sphere of individual psychology. [...] It is the expression of hopes, wishes, fears, ideals, not a hypothesis about events -- though ideologies are often thought by those who hold them to be scientific theories.” (From The Managerial Revolution.)
Taken in the broadest possible sense, therefore, every person has an ideology, which encompasses all their beliefs, ideas, and attitudes that are not a matter of exact scientific or practical knowledge, and which are at least partly concerned with the public matters of social order (with the implications this has on the practical relations of power and status, although these are rarely stated and discussed openly and explicitly).
In a more narrow sense, however, ideology refers to such beliefs, ideas, and attitudes that are held with an extraordinary level of commitment and passion, which pushes one towards constant conflict -- verbal, propagandistic, political, perhaps even physical -- with those who don't share the same ideological affiliation, and which renders one fatally biased and incapable of rational argument in ideologically charged matters. (In particular, when I call someone an “ideologue,” I refer to such people, especially those who are at the forefront of developing and propagandizing their favored ideological systems.)
Whether I belong to this latter category, well, you be the judge.
Three, of the possible means one could use to achieve one's desires, would you say that writing biased scientific papers is an immoral means? What about persuasive essays?
That depends on your value judgment: how bad is it when someone contributes to the corruption of science? Science is not a natural and resilient mode of human intellectual work. It is something that critically depends on the quality of the institutions pursuing it, and these institutions are easy to corrupt, but almost impossible to fix. That our culture has them at all is, by all historical standards, a lucky accident.
Of course, one biased paper won’t cause much harm by itself, but only in the same sense that perfect forgery of a moderate amount of money harms nobody in particular very much. (On the other hand, I would say that even a single prominent biased career can cause a great deal of damage.) In both cases, if this activity is permitted and becomes widespread, the consequences will be disastrous.
↑ comment by HughRistik · 2010-10-13T04:38:19.981Z · LW(p) · GW(p)
Vladimir, have you read Spreading Misandry and Legalizing Misandry by Nathanson & Young? They've done some of the best work I've read on the subject of ideology. Here is their description of ideological feminism:
Ideological feminism is the direct heir of both the Enlightenment and Romanticism. From the former it takes the theory of class conflict, merely substituting "gender" for "class" and "patriarchy" for "bourgeoisie." From the latter it takes the notion of nation or even race, focusing ultimately on the innate biological differences between women and men. The worldview of ideological feminism, like that of both Marxism and National Socialism—our analogies are between ways of thinking, not between specific ideas—is profoundly dualistic. In effect, "we" (women) are good, "they" (men) are evil. Or, to use the prevalent lingo, "we" are victims, "they" are oppressors.
Most of their criticism is aimed at feminism, but if you think about their description of ideology, it's not difficult to see the same problems in any political movement. Here are the features they relate to ideologies:
- Dualism (see above)
- Essentialism ("calling attention to the unique qualities of women")
- Hierarchy ("alleging directly or indirectly that women are superior to men")
- Collectivism ("asserting that the rights of individual men are less important than the communal goals of women")
- Utopianism ("establishing an ideal social order within history")
- Selective cynicism ("directing systematic suspicion only toward men")
- Revolutionism ("adopting a political program that goes beyond reform")
- Consequentialism ("asserting the beliefs that ends can justify means")
- Quasi-religiousity ("creating what amounts to a secular religion")
I would be interested to know how these features relate to your experiences with ideologies.
Other notable sections in Spreading Misandry:
- Making the World Safe for Ideology
- The use of deconstructionism by ideologies
- Film Theory and Ideological Feminism
I recommend these books to anyone who is interested in biases, group psychology, and ideologies; their books give excellent philosophical discussions of these subjects that go beyond the particular examples of feminism and misandry. They also attempt a philosophical exploration of what "political correctness" is, and what's wrong with it, and they examine deconstructionism.
↑ comment by Vladimir_M · 2010-10-13T06:30:50.092Z · LW(p) · GW(p)
I haven't read the books by Nathanson & Young, but looking at their tables of contents, I can say that I am well familiar with these topics. However, it's important to immediately note that the notion of ideology that you (and presumably N&Y) have in mind is narrower than what I was writing about. This might sound like nitpicking about meanings of words, and clearly neither usage can claim to be exclusively correct, but it is important to be clear about this to avoid confusion.
Ideology in the broader sense also includes the well-established and uncontroversial views and attitudes that enable social cohesion in any human society. (This follows the usage in Burnham's text I cited; for example, in that same text, shortly after the cited passage, Burnham goes on to discuss individualism and belief in property rights as key elements of the established ideologies of capitalist societies.) In contrast, your meaning is narrower, covering a specific sort of more or less radical ideologies that have played a prominent role in modern history, which all display the traits you listed to at least some extent.
One book you might find interesting, which discusses ideology in this latter sense, is Alien Powers: The Pure Theory of Ideology by the LSE political theorist Kenneth Minogue. I only skimmed through a few parts of the book, but I would recommend it based on what I've seen. Minogue is upfront about his own position (i.e. ideology, in Burnham's sense, but not his), which might be described as intellectual and moderate libertarianism; in my opinion, this is the kind of topic where authors of this sort usually shine at their brightest. You can find an excerpt presenting the basic ideas from the book here.
I'll check out these books by Nathanson & Young in more detail, and perhaps post some more comments later.
↑ comment by [deleted] · 2010-10-13T03:07:43.986Z · LW(p) · GW(p)
Thanks for replying.
So, I guess you'd say that true statements of scientific fact are different in kind from statements of wishes, dreams, beliefs, attitudes and so on. And, additionally, that it's in the interest of human beings to have true statements of scientific fact, which are not contaminated with wishes, dreams, beliefs, and attitudes, or falsified by bias or forgery.
Hmm. That seems plausible but I'm not certain of it. It's close enough, of course, that I don't intend to practice or condone scientific fraud in real life.
And ideology is, for you, basically about conflict and incapacity to be rational. By that definition, you're probably not an ideologue. I'm probably not either, but I know I have points where I cannot continue a rational discussion (in particular, if someone makes an unkind personal remark.)
But sometimes a person can care more about one of the things he or she values than about being patient and tolerant with everyone. Sometimes, some value takes precedence over peacemaking and discussion. Then conflict will happen, and rational discussion will not. I can think of situations where I would sympathize with the "ideologue" in that case. I am not sure that it's a good person who believes that nothing is more important than rational inquiry and the absence of conflict.
Would I patiently entertain the notion that, say, it might be better for society for someone to kill my sister? (Imagine that there was some argument in favor of it.) Would I strive to be evenhanded about it? Or would I be in "conflict ... perhaps even physical" and "fatally biased" and "incapable of rational argument"?
↑ comment by Vladimir_M · 2010-10-13T21:38:12.371Z · LW(p) · GW(p)
I agree with all this. In all sorts of human conflicts, even if all the relevant questions of fact and logic have been addressed to the maximum extent achievable by rational inquiry, there is still the inevitable clash of power and interest, which can be resolved only by finding a modus vivendi, or with the victory of one side, which then gets to impose its will on the other.
Among the available tactics in various types of conflicts, it is ultimately a judgment of value and taste which ones you'll see as legitimate, and which ones depraved. This is especially true when it comes to propaganda aimed at securing the coherence of one's own side in a conflict, and swaying the neutrals (and potential converts from the enemy camp) in one's favor. It so happens that I have a particularly strong loathing for propaganda based on claims that one side's pretensions to power are somehow supported by "science." I see this as the most debased sort of ideological warfare, the propagandistic equivalent of a war crime, especially if the effort is successful in attracting people with official institutional scientific affiliations to actively join and drag their own "science" into it. (It is also my factual belief that this phenomenon tends to make ideological conflicts more intense, more destructive, and less likely to end in a tolerable compromise, but let's not get sidetracked there.)
Yet, while the intensity of my dislike is a matter of my own values and tastes, the question of whether such corruption of science has taken place in some particular instance is still an objective question of fact and logic, because it is a special (even if difficult) case of the objective question of discerning valid science from invalid. Therefore, people can be objectively and demonstrably wrong in seeing themselves on the side of science and truth against superstition and falsity, where they are in fact just engaged in a pure contest for power, whether in their own interest or as someone else's useful idiots.
Now that I've written all this, you might perhaps understand better my antipathy towards these "let science help us resolve moral questions!" proposals. People behind them, whether consciously or not, strive to recruit and debase science into a propaganda weapon in an ongoing struggle for power, not to resolve and end this struggle by reducing it to a rational argument. The latter is impossible even in principle, since the ultimate question is who gets to impose his values and preferences on others.
↑ comment by [deleted] · 2010-10-13T22:37:24.510Z · LW(p) · GW(p)
Thanks.
I do understand better now, and I think the world probably needs people like you who are vigilant about keeping science unbiased -- it would be a much worse world, from my perspective, if we ceased to have science at all.
I also appreciate your courtesy over the past few days. I sometimes have trouble accepting and listening to skeptical perspectives; I'm learning to accept that ideas with a skeptical/critical/realist tone can be very valuable, but it does run against the grain for me, and I think I didn't handle myself very well. On any other web forum, this would have been a feud between us. So I appreciate your patience and your explanations.
↑ comment by multifoliaterose · 2010-10-13T22:39:15.915Z · LW(p) · GW(p)
If you're inclined to write about it, I would be interested in reading more about what your personal values/tastes are. This would help me place your comments in context.
↑ comment by Vladimir_M · 2010-10-14T07:20:45.530Z · LW(p) · GW(p)
You probably understand that a full answer to this question would require an enormous amount of space (and time), and that it would involve all kinds of diversions into controversial topics. But since you're curious, I will try to provide a cursory outline of my views that are relevant in this context.
About a century ago -- and perhaps even earlier -- one could notice two trends in the public perception of science, caused by its immense practical success in providing all sorts of world-changing technological marvels. First, this success had given great prestige to scientists; second, it had opened hopes that in the future science would be able to provide us with foolproof guidance in many areas of human concern that had theretofore been outside the realm of scientific investigation. The trouble with these trends was that around this time, the dreams and hopes fueled by them started to seriously drift away from reality, and as might be expected, a host of pseudo-scientific bullshit-artists, as well as political and bureaucratic players with ready use for their services, quickly arose to exploit the opportunities opened by this situation.
This has led to a gradually worsening situation that I described in an earlier LW comment:
The trouble nowadays is not that governments are not listening to scientists (in the sense of people officially and publicly recognized as such), but that the increased prominence of science in public affairs has subjected the very notion of "science" to a severe case of Goodhart's law. In other words, the fact that if something officially passes for "science," governments listen to it and are willing to pay for it has led to an awful debasement of the very concept of science in modern times.
Once governments started listening to scientists, it was only a matter of time before talented charlatans and bullshit-artists would figure out that they can sell their ideas to governments by presenting them in the form of plausible-looking pseudoscience. It seems to me that many areas have been completely overtaken by this sort of thing, and the fact that their output is being labeled as "scientific" and used to drive government policy is a major problem that poses frightful threats for the future.
This, in my view, is one of the worst problems with the entire modern system of government, and by far the greatest source of dangerous falsity and nonsense in today's world. I find it tragicomic when I see people worrying about supposedly dangerous anti-scientific trends like creationism or postmodernism, without realizing that these are entirely marginal phenomena compared to the corruption that happens within even the most prestigious academic institutions due to the fatal entanglement of science with ideology and power politics, to which they are completely oblivious, and in which they might even be blindly taking part. Just the thought of the disasters that our governments might wreak on us by pushing policies guided by this pseudo-scientific input should be enough to make one shiver -- especially when we consider that these processes typically operate on bureaucratic auto-pilot, completely outside of the scope of politics that gets public attention.
Whether or not you agree with this, I hope it clarifies the reasons why I have such strong interest in topics of this sort.
↑ comment by multifoliaterose · 2010-10-14T12:38:00.898Z · LW(p) · GW(p)
Thanks for your response.
Actually, my question was broader in intent - I was expressing curiosity about your personal values/tastes in general rather than about the matter at hand in particular. But from the way you took my question I imagine that the matter at hand figures in prominently :-).
Concerning
You probably understand that a full answer to this question would require an enormous amount of space (and time), and that it would involve all kinds of diversions into controversial topics.
I understand that doing so would require a lot of time and energy, and I wouldn't want to divert your attention from things that are more important to you, but I will express interest in reading a carefully argued, well-referenced top-level post from you on a relatively uncontroversial topic, expressing some small fraction of your views on science and government, so that I can have a more detailed idea of what you're talking about.
Most of what you've said so far has been allusive in nature and while I can guess at some of what you might have in mind, I strain to think of examples that would provoke such a strong reaction. Of course, this may be rooted in a personality difference rather than an epistemological difference, but you've piqued my curiosity and I wonder whether there might be something that I'm missing.
At present: I think that various sectors of science have in fact become debased by politicization. This may have made the situation in certain kinds of science worse than it has been in the past, but I don't think that this has made the political situation worse than it has been in the past. As far as I know, there have always been issues of people putting manipulative spin on the truth for political advantage and I suspect that manipulative appeals to the authority of science are no more problematic than other sorts of manipulative appeals to authority were hundreds of years ago.
Incidentally, I was drawn toward math in high school by the fact that the truth seemed to me to be much more highly valued there than in most other subjects. I soon came to appreciate Beauty in Mathematics, but a large part of my initial attraction was simply grounded in the fact that exposure to a subject grounded in reason was so refreshing relative to most of what I had seen before (both in and out of school). I perceived an almost spiritual purity attached to justifying each step systematically.
↑ comment by Vladimir_M · 2010-10-15T05:03:34.800Z · LW(p) · GW(p)
multifoliaterose:
Most of what you've said so far has been allusive in nature and while I can guess at some of what you might have in mind, I strain to think of examples that would provoke such a strong reaction.
Well, to fully explain my opinions on the role of institutional science and pseudoscience in modern governments, I would first have to explain my overall view of the modern state, which, come to think of it, I did sketch recently in a reply to an earlier question from you. So I'll try to build my answer from there (and ask other readers to read that other comment first if they're confused by this one).
The permanent bureaucracies that in fact run our modern governments, in almost complete independence from the entire political circus we see on TV, are intimately connected with many other, nominally private or "independent" institutions. These entities are formally not a part of the government bureaucracy, but their structure is, for all practical purposes, not separable from it, due to both formal and informal connections, mutual influences, and membership overlaps. (The workings of this whole system are completely outside the awareness of the typical citizen, who instead imagines something surely imperfect but still essentially similar to what the civics textbooks describe -- although they are subject to no secrecy at all and thus hidden in plain sight.) There are all kinds of such institutions, each with its peculiar Siamese-twin connections with some parts of the government: the mainstream media, too-big-to-fail businesses, "non-governmental organizations" (boy, are some people protesting too much!), public sector unions, etc., etc. -- and last but definitely not least, the academia and its purveyors of official science.
Now, in theory, the connection between the bureaucracy and official science is supposed to mean that we have a professional civil service using the best knowledge and expertise to implement the will of the people as legislated by its elected representatives, with our great institutions of scholarship supplying this expertise, forged by their tireless seekers after truth and magnificent institutions such as peer review. In reality, well, it's not hard to imagine how this situation can lead to all sorts of perverse incentives that compromise various elements of this idealized picture, about which I've already written in my earlier comments.
To take the most blatant example, just observe the way economic "science" is involved in our government system. The government does lots of things that you may support or oppose in the ultimate analysis, but which would have been clearly recognized a century ago for what they are: wealth transfers, currency debasement, nationalization, patronage, amassing debt, raising and lowering of trade and migration barriers, etc., etc. Yet nowadays, we have a whole profession of pseudoscientists who weave webs of abstruse and vapid theory around such things, until neither their essence nor their likely consequences can be discussed with any reference to actual reality. The present economic crisis might be only a mild preview of the disasters that may befall us in the not-so-distant future thanks to the utterly irresponsible and reality-ignoring policies that this pseudoscience has been rationalizing and excusing for decades already. It's far from certain, but far from implausible either.
While this is admittedly an exceptionally bad example (though bad enough by itself!), the same pathologies can be found to a greater or lesser degree in almost any branch of the Kafkaesque bureaucracies that rule over us. In some cases it's not easy to discern how bad the corruption really is, as e.g. in the case of climate science, where I'm still not quite sure what to think. But it's clear that many fields of official science nowadays operate solely for the purposes of their symbiosis with the government, and any actual advances in knowledge that result from them are merely a by-product, hard to distinguish from the accompanying bullshit. (For example, any field that has "public" in its name is almost certain to be in this category.)
Replies from: multifoliaterose↑ comment by multifoliaterose · 2010-10-20T01:54:37.424Z · LW(p) · GW(p)
Thanks for writing this; upvoted.
I'm not in a position to assess your comment's accuracy, as I don't know very much about either the workings of government or the state of the field of macroeconomics, but you've offered me some food for thought.
If I find Carl's subsequent postings to be potentially convincing grounds for political involvement, I'll look more closely into the aforementioned topics and may ask you some more questions. Up until now I haven't had reason to carefully research and think about these things.
Replies from: Vladimir_M↑ comment by Vladimir_M · 2010-10-20T05:46:33.726Z · LW(p) · GW(p)
multifoliaterose:
I'm not in a position to assess your comment's accuracy, as I don't know very much about either the workings of government or the state of the field of macroeconomics, but you've offered me some food for thought.
If you're interested in these topics, as an accompaniment to my fervent philippics, you should check out some more mainstream materials on the issues of administrative rulemaking and the Chevron doctrine. Googling these topics will uncover some fascinating discussions and examples of the things I've been writing about, all from unimpeachable official and respectable sources.
(I'm sticking to U.S. law and institutions because they're by far the easiest to find good online materials about. However, if you live anywhere else in the developed world, you can be pretty sure that you have close local equivalents of all the things I've been talking about.)
Replies from: multifoliaterose↑ comment by multifoliaterose · 2010-10-20T05:51:37.816Z · LW(p) · GW(p)
Thank you for the references. I live in the U.S. so these should be relevant.
Replies from: Vladimir_M↑ comment by Vladimir_M · 2010-10-20T06:39:45.005Z · LW(p) · GW(p)
Oh, and here's one more fascinating link. Before you click on it, think about the average citizen's idea of how the laws of the land come into being. And then behold the majesty of this chart:
http://www.reginfo.gov/public/reginfo/Regmap/index.jsp
(Though it should be noted that there are still visible vestigial influences of traditions from the old times, when the de facto constitution of the U.S. resembled the capital-C one much more closely. Notice how the process is described as rulemaking, and by no means as legislation. It would still be unacceptable to use the latter name for something that doesn't come directly from the formally designated legislative branch, even though its practical control over the law has long since disappeared in favor of the bureaucracies and courts.)
↑ comment by NancyLebovitz · 2010-10-14T13:29:09.689Z · LW(p) · GW(p)
In some ways, things have gotten better, not worse. Both communism and Nazism claimed scientific backing. I don't see anything like that on the horizon.
On the other hand, people became disenchanted with them because of disastrous results -- I don't think there's any public recognition of the poor quality of the science they used.
Replies from: Vladimir_M↑ comment by Vladimir_M · 2010-10-15T18:25:46.688Z · LW(p) · GW(p)
NancyLebovitz:
In some ways, things have gotten better, not worse. Both communism and Nazism claimed scientific backing. I don't see anything like that on the horizon.
These political systems, however, are now distant in both time and space, and their faults can be comfortably analyzed from the outside. The really important question is in what ways, and to what degree, our present body of official respectable knowledge and doctrine deviates from reality, which is far more difficult to answer with any degree of accuracy. This is both because for us it's like water for fish, and because challenging it is apt to provoke accusations of crackpottery (and perhaps even extremism), with all their status-lowering implications.
↑ comment by prase · 2010-10-12T06:55:32.664Z · LW(p) · GW(p)
The vaccination controversy isn't a particularly good example of the damage science takes from discussing morals. Although I agree that the rigour of research and the objectivity of publications suffer from the controversy, it isn't about morality. The anti-vaccination crackpots don't claim that vaccination is somehow ethically unjustifiable; they simply claim that it doesn't work and furthermore causes autism.
Replies from: Vladimir_M↑ comment by Vladimir_M · 2010-10-12T19:35:35.763Z · LW(p) · GW(p)
That's not entirely true. In recent years, at least in North America, HPV vaccines have become a significant ideological issue, mostly for purely moral reasons. (Though the media exposure of this controversy seems to have died down somewhat recently.) I haven't followed this issue in much detail; however, I've noticed that it has involved not only moral disputes but also disputes about factual questions that are in principle amenable to scientific resolution, yet whose discussion is hopelessly poisoned by ideological passions.
What you write is true about the majority of the historical vaccination controversies, though.
Replies from: prase
comment by knb · 2010-10-12T02:26:58.197Z · LW(p) · GW(p)
The problem with this whole line of reasoning is that people really don't change their beliefs even when their reasons for those beliefs are shown to contradict other values or to be internally incoherent. So even if you prove to someone, with a huge longitudinal study with random assignment and causal controls, that gay parents are not bad for kids, a lot of people will simply say it is still inherently immoral for kids to be raised by gays. You can't say they're wrong.
People aren't optimizing for some coherent set of values; we just have a set of purely non-rational feelings about moral issues.
Replies from: Relsqui, LucasSloan, Jordan↑ comment by Relsqui · 2010-10-12T03:00:21.406Z · LW(p) · GW(p)
I think your point here is correct. However, the people who believe it's inherently morally wrong for gays to raise kids put a lot of money into convincing other people that it's wrong, and some of the convincees may then share the belief but not the moral. Relevant studies can then change the belief back.
↑ comment by LucasSloan · 2010-10-12T03:22:30.623Z · LW(p) · GW(p)
I have changed my mind about my values due to noticing that my values were inconsistent.
Replies from: Jayson_Virissimo, MBlume, knb↑ comment by Jayson_Virissimo · 2010-10-12T04:36:05.138Z · LW(p) · GW(p)
Same here (at least twice).
↑ comment by MBlume · 2010-10-13T02:55:53.887Z · LW(p) · GW(p)
Yeah, but that makes you really really weird.
Replies from: LucasSloan↑ comment by LucasSloan · 2010-10-13T03:29:13.306Z · LW(p) · GW(p)
For which I am truly grateful.
↑ comment by knb · 2010-10-12T06:17:56.756Z · LW(p) · GW(p)
First of all, that was intended as a general statement, not an absolute description of every case. Experiments have been done on people to see if, for example, they stop being opposed to incest in fictional scenarios where the incest is stated outright to be harmless.
Before the scenario was presented, people offered utilitarian justifications for the incest taboo, but even when those were stripped away, they insisted that incest is still "just wrong". My point is that this is what generally happens when someone points out incoherency in a moral system. People generally switch to offering an axiomatic rationalization for their moral sentiments instead of a utilitarian one.
Also, I have to say:
I have changed my mind about my values due to noticing that my values were inconsistent.
Do you mean that you made a judgement elevating one value above another you had in cases where they conflict? Or do you mean you actually gained a new value? It seems like you must have used some sort of higher level value preference to make that meta-level moral judgement.
Replies from: LucasSloan, Tyrrell_McAllister↑ comment by LucasSloan · 2010-10-12T07:04:31.808Z · LW(p) · GW(p)
Do you mean that you made a judgment elevating one value above another you had in cases where they conflict? Or do you mean you actually gained a new value? It seems like you must have used some sort of higher level value preference to make that meta-level moral judgment.
I noticed that my values were inconsistent, and I decided that one of them needed to be expunged. I removed a "value" that had been created at too high a level of abstraction, one which conflicted with the rest of my values and whose actual, important content could be derived from lower level moral concepts.
↑ comment by Tyrrell_McAllister · 2010-10-12T19:01:30.891Z · LW(p) · GW(p)
First of all, that was intended as a general statement, not an absolute description of every case. Experiments have been done on people to see if, for example, they stop being opposed to incest in fictional scenarios where the incest is stated outright to be harmless.
Before the scenario was presented, people offered utilitarian justifications for the incest taboo, but even when those were stripped away, they insisted that incest is still "just wrong". My point is that this is what generally happens when someone points out incoherency in a moral system. People generally switch to offering an axiomatic rationalization for their moral sentiments instead of a utilitarian one.
Such a person isn't going against Academian's advice. They've been led through the correct procedure of analysis, though they've only gone part of the way. They've found evidence that, all else being equal, it's better not to give in to a desire to commit incest. The incest itself is what they find bad, not some consequence of it. You haven't identified an incoherence in their final position.
To continue the analysis, they should see what bad consequences would follow from not doing the incest. They should check whether this badness outweighs the badness of doing the incest. They should be able to identify the hypothetical scenarios where it's worse not to commit the incest than to do it.
In the end, they may decide that people shouldn't commit incest in most typical situations, even when there are no distinct bad consequences of the incest. Whether or not you agree with them, they would still be vastly more reflective about their morality than most people are. It would be great if more people were so reflective, even if they ended up disagreeing with you about which things are harms-in-themselves.
↑ comment by Jordan · 2010-10-12T03:06:46.845Z · LW(p) · GW(p)
The gay parents example jumped out at me as a bad example as well (the two morals stated aren't contradictory in light of a study showing gays to be good parents). The first two examples illustrate Academian's point well though.
Contradictions actually do change people's minds, I think. Look at birth control in Catholicism. Despite the pope himself saying it is wrong, many Catholics use it and support it (because to do otherwise would contradict other morals/desires they have).
Replies from: Academian↑ comment by Academian · 2010-10-12T06:16:17.781Z · LW(p) · GW(p)
What if being raised by gay parents improved a lot of cognitive functions, and had no significant effect on other personality traits? Or had some other unambiguously positive effect?
I don't know how that would happen, but I don't really know much about having gay parents anyway. The point is that science would help me have a better opinion than whatever I have so far.
comment by jasticE · 2010-10-12T18:10:35.030Z · LW(p) · GW(p)
Is not being hypocritical a moral value in itself, or is it above morality? Either way, why?
If my values contradict, but I don't care about hypocrisy, should it matter to me?
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2010-10-12T19:57:33.587Z · LW(p) · GW(p)
If your values contradict, then what're you going to do, lie on the floor flopping around trying to do multiple contradictory things at once? You want to sort out exactly how much you value each relative to the other, and to what extent they contradict each other, so that, well, you can act in accordance with your values. Hypocrisy is more about giving lip service to one set of values while acting on others.
Replies from: jasticE↑ comment by jasticE · 2010-10-12T20:36:46.918Z · LW(p) · GW(p)
I may act in accordance with different values without ending up in undirected floppiness.
For instance, I could value both animal life and wearing traditional Bavarian lederhosn, and act on these values by producing, buying and wearing lederhosn while donating money to a save-the-cows fund. But I guess I could just donate an amount relative to how much I value the cows over/under lederhosn. Hm. Okay.
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2010-10-12T22:13:58.953Z · LW(p) · GW(p)
Yeah. When, at any specific moment, your values produce different suggestions as to what action to take, you have to balance or alter them somehow.
Replies from: wedrifid
comment by DanArmak · 2010-10-12T13:00:58.373Z · LW(p) · GW(p)
"Teachers should be allowed to physically punish their students." "Children should be raised not to commit violence against others."
These two are contradictory. If a child is taught not to commit violence, they won't be able to become teachers who commit violence against children.
Replies from: Pavitra, DSimon, Kingreaper↑ comment by Kingreaper · 2010-10-16T15:26:03.830Z · LW(p) · GW(p)
No contradiction. Allowing a teacher to physically punish children =/= requiring a teacher to do the same.
If the next generation of teachers all choose not to physically punish children, but have the option, both morals are conserved.
comment by teageegeepea · 2010-10-13T07:11:03.194Z · LW(p) · GW(p)
I'm not sure which thread is best for linking this, but a blog here purports to be written by a sociopath. Hat-tip to Chip Smith.
Replies from: Document, DanielLC↑ comment by DanielLC · 2010-10-23T01:33:19.727Z · LW(p) · GW(p)
I think it should have been linked in one of the Open Threads they add periodically.
Too late now, I guess, unless someone can move it.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-10-23T01:40:07.702Z · LW(p) · GW(p)
At this point, the only new open threads are in the discussion section. On the other hand, there's nothing stopping people from starting a top-level open thread to see what happens.
comment by knb · 2010-10-12T02:13:12.344Z · LW(p) · GW(p)
"Birth control should be discouraged." "Teen pregnancy / the spread of STDs is undesirable." Question: Does promoting the use of condoms increase or decrease teen pregnancy rates / the spread of STDs?
"Masturbation should be frowned upon." "Married couples should do their best not to cheat on each other." Question: Does masturbation increase or decrease adulterous impulses over time?
"Gay couples should not be allowed to adopt children." "Children should not be raised in psychologically damaging environments." Question: What are the psychological effects of being raised by gay parents?