Antisocial personality traits predict utilitarian responses to moral dilemmas
post by Vladimir_M · 2011-08-23T09:13:13.807Z · LW · GW · Legacy · 34 comments
So says the title of an interesting recent paper I stumbled on yesterday (ungated link; h/t Chris Bertram). Here's the abstract:
Researchers have recently argued that utilitarianism is the appropriate framework by which to evaluate moral judgment, and that individuals who endorse non-utilitarian solutions to moral dilemmas (involving active vs. passive harm) are committing an error. We report a study in which participants responded to a battery of personality assessments and a set of dilemmas that pit utilitarian and non-utilitarian options against each other. Participants who indicated greater endorsement of utilitarian solutions had higher scores on measures of psychopathy, Machiavellianism, and life meaninglessness. These results question the widely-used methods by which lay moral judgments are evaluated, as these approaches lead to the counterintuitive conclusion that those individuals who are least prone to moral errors also possess a set of psychological characteristics that many would consider prototypically immoral.
This conclusion is very much along the lines of some of my recent LW comments (for example, those I left in this thread). To me it seems quite obvious that in the space of possible human minds, those that produce on the whole reasonably cooperative and reliably non-threatening behavior are overwhelmingly unlikely to produce utilitarian decisions in trolley-footbridge and similar "sacrificial" problems.
Of course, what people say they would do in situations of this sort is usually determined by signaling rather than a realistic appraisal. Kind and philosophical utilitarians of the sort one meets on LW would be extremely unlikely to act in practice according to the implications of their favored theories in real-life "sacrificial" situations, so their views are by themselves not strong evidence of antisocial personality traits. However, actually acting in such ways would be, in my opinion, very strong evidence for such traits, which is correctly reflected in the typical person's fear of and revulsion toward someone who is known to have acted like that. I would venture to guess that it is in fact the signaling-driven disconnect between people's endorsement of utilitarian actions and the actual decisions they would make that makes the found correlations fairly low. (Assuming also that these tests really are strong indicators of antisocial personalities, of course, which I lack the knowledge to judge.)
(Also, endorsement of utilitarianism even just for signaling value causes its own problems, since it leads to political and ideological support for all sorts of crazy ideas backed by plausible-sounding utilitarian arguments, but that's a whole different issue.)
Here is also a full citation for reference: “The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas”, by Daniel M. Bartels and David A. Pizarro, Cognition 121 (2011), pp. 154-161.
Edit: As Wei Dai points out in a comment, I should also add that some of the previous literature cited by Bartels and Pizarro has concluded that, in their words, "individuals with higher working memory capacity and those who are more deliberative thinkers are... more likely to approve of utilitarian solutions." On the face of it, taken together with the conclusions of this paper, this would mean that propensity for utilitarian responses may stem from different causes in different individuals (i.e. deliberative thinking versus antisocial traits).
My own hypothesis, however, is that deliberative thinking leads to verbal utilitarian responses that are likely due to signaling, and that propensity for actual utilitarian "sacrificial" acts would have a much weaker link to deliberative thinking and a much stronger link to antisocial traits than mere utilitarian statements. Unfortunately, I don't know how this could be tested empirically in an ethical manner.
34 comments
Comments sorted by top scores.
comment by Wei Dai (Wei_Dai) · 2011-08-23T18:52:08.037Z · LW(p) · GW(p)
The authors do not claim that only people with antisocial personality traits tend to choose utilitarian responses:
What do those 10% of people who are comfortable with the utilitarian solution to the footbridge dilemma look like? Might these utilitarians have other psychological characteristics in common? Recently, consistent with the view that rational individuals are more likely to endorse utilitarianism (e.g., Greene et al., 2001), a variety of researchers have shown that individuals with higher working memory capacity and those who are more deliberative thinkers are, indeed, more likely to approve of utilitarian solutions (Bartels, 2008; Feltz & Cokely, 2008; Moore, Clark, & Kane, 2008). [...]
Yet in addition to the link between deliberative thinkers and utilitarian judgments, there is another possible psychological route to utilitarian preferences—the ability to inhibit emotional reactions to harm (or the inability to experience such emotions in the first place).
I was not previously aware of the evidence linking deliberative thinking to utilitarianism, and many other LWers probably aren't. By presenting only the evidence about antisocial personality traits in the post, I think you're giving people a biased impression of the actual state of knowledge in this area of moral psychology.
↑ comment by Vladimir_M · 2011-08-23T20:49:37.356Z · LW(p) · GW(p)
That's a fair point; I just added to the post.
comment by Scott Alexander (Yvain) · 2011-08-23T21:53:22.135Z · LW(p) · GW(p)
Luke quoted Joshua Greene as saying:
...deontological judgments tend to be driven by emotional responses, and... deontological philosophy, rather than being grounded in moral reasoning, is to a large extent an exercise in moral rationalization. This is in contrast to consequentialism, which, I will argue, arises from rather different psychological processes, ones that are more 'cognitive,' and more likely to involve genuine moral reasoning...
If this is true, then it makes sense that people who don't have emotional responses to moral questions won't be deontologists.
Think of it as a conflict between a special moral module and general purpose reasoning. General purpose reasoning that you'd use in eg economics tells you that if you lose $100 to gain $200, you come out ahead - it's utilitarian. The special moral module is what makes most people naturally deontologists instead.
You can end up utilitarian either because you're a psychopath and don't have the special moral module - in which case you default to general purpose reasoning - or because you're very philosophical and have a specific preference for determining moral questions by the same logic with which you determine everything else, thus deliberately overruling the special moral module.
↑ comment by Vladimir_M · 2011-08-23T23:10:51.212Z · LW(p) · GW(p)
Think of it as a conflict between a special moral module and general purpose reasoning. [...] The special moral module is what makes most people naturally deontologists instead.
I think that utilitarianism vs. deontology is a false dichotomy. People's natural folk ethics is by no means deontological -- refusing to break deontological rules in some situations where this is normally expected will also make you look weird, creepy, or even monstrous in the eyes of a typical person. As far as I see, virtue ethics is the only approach that captures the actual human moral thinking with any accuracy.
However, I agree with your remark if we replace deontology with virtue ethics. Where we might have a deeper disagreement is when the output of these special modules should be seen as baggage we'd better get rid of, and when it has non-obvious but vitally important functions.
You can end up utilitarian either because you're a psychopath and don't have the special moral module - in which case you default to general purpose reasoning - or because you're very philosophical and have a specific preference for determining moral questions by the same logic with which you determine everything else, thus deliberately overruling the special moral module.
My own hypothesis is that being very philosophical tends to produce primarily utilitarian signaling in the form of words and relatively cheap symbolic actions, and very little or no serious utilitarian behavior. And while some small number of people are persuaded by philosophical utilitarian arguments to undertake great self-sacrifice for (what they believe to be) the greater good, I doubt that anyone can be persuaded by such arguments to commit the utilitarian act in those "sacrificial" trolley-like scenarios. Therefore, if someone is observed to have acted in such a way, this would be strong evidence that it's due to antisocial traits, not philosophical inclinations.
↑ comment by Scott Alexander (Yvain) · 2011-08-24T09:51:47.412Z · LW(p) · GW(p)
I suppose I'd agree with you that folk ethics aren't exactly deontological, though I'd have trouble calling them virtue ethics since I don't understand virtue ethics well enough to draw any predictive power out of it (and I'm not sure it's supposed to have predictive power in moral dilemmas). Maybe you're right about the distinction between folk moral actions and folk moral justifications - in the latter, people seem much more supportive of deontological justifications than utilitarian justifications, but I don't know how much effect that has on actual actions.
My own hypothesis is that being very philosophical tends to produce primarily utilitarian signaling in the form of words and relatively cheap symbolic actions, and very little or no serious utilitarian behavior.
Do you think this is specific to utilitarianism or more of a general issue with philosophy? David Hume didn't seriously stock up on candles in case the sun didn't rise the next morning, Objectivists probably do as many nice things for other people as anyone else, and economists don't convert en masse even though most don't have a good argument against stronger forms of Pascal's Wager. I don't really expect thoughts to influence ingrained behaviors that much, so it doesn't seem to require any special properties of utilitarianism to explain this.
Where we might have a deeper disagreement is when the output of these special modules should be seen as baggage we'd better get rid of, and when it has non-obvious but vitally important functions.
I'm not sure to what degree we disagree on that.
I would agree that the special modules have "important functions", but I would cash out "important" in a utilitarian way: it would require an argument like "If we didn't have those modules people would do crazy things and society would collapse, which would be bad". This seems representative of a more general sense in which, to resolve conflicts in our special moral reasoning, we've got to apply general reasoning to them, and utilitarianism is sort of the "common currency" that allows us to do that. Once we've done that we can link special moral reasoning to our more general reasoning and ground a lot of our intuitive moral rules. This is in the same sense that our visual processing modules are a heck of a lot better than trying to sort out luminance data from the environment by hand, but we still sometimes subject them to general-purpose reasoning when eg we're not sure if something is an optical illusion.
↑ comment by Vladimir_M · 2011-08-24T21:39:25.755Z · LW(p) · GW(p)
I suppose I'd agree with you that folk ethics aren't exactly deontological, though I'd have trouble calling them virtue ethics since I don't understand virtue ethics well enough to draw any predictive power out of it (and I'm not sure it's supposed to have predictive power in moral dilemmas).
My understanding is that you can look at virtue ethics as consequentialism that incorporates some important insights from game theory and Newcomb-like problems in decision theory (i.e. those where agents have some ability to predict each other's decisions). These concepts aren't incorporated via explicit understanding, which is still far from complete, but by observing people's actual intuitions and behaviors that were shaped by evolutionary processes (both biological and cultural), in which these game- and decision-theoretic issues have played a crucial role.
(Of course, such reduction to consequentialism is an arbitrary convention. You can reduce either consequentialism or deontology to the other just by defining the objective function or the deontological rules suitably. I'm framing it that way just because you like consequentialism.)
Do you think this is specific to utilitarianism or more of a general issue with philosophy? David Hume didn't seriously stock up on candles in case the sun didn't rise the next morning, Objectivists probably do as many nice things for other people as anyone else, and economists don't convert en masse even though most don't have a good argument against stronger forms of Pascal's Wager.
Of course it's not specific to utilitarianism. It happens whenever some belief is fashionable and high-status but has seriously costly or inconvenient implications.
I generally agree with the rest of your comment. Ultimately, as long as we're talking about what happens in the real physical world rather than metaphysics, our reasoning is in some reasonable sense consequentialist. (Though I wouldn't go so far as to say "utilitarian," since this gets us into the problem of interpersonal utility comparison.)
I think the essence of our disagreements voiced in previous discussions is that I'm much more pessimistic about our present ability to subject our moral intuitions (as well as the existing social customs, norms, and institutions that follow from them) to general-purpose reasoning. Even many fairly simple problems in game and decision theory are still open, and the issues (most of which are deeply non-obvious) that come into play with human social interactions, let alone large-scale social organization, are hopelessly beyond our current understanding. At the same time, it's hard to resist the siren call of plausible-sounding rationalizations for ideology and theories that are remote from reality but signal smarts and sophistication.
↑ comment by private_messaging · 2012-06-19T07:30:03.618Z · LW(p) · GW(p)
But when you necessarily do not possess the computational power to track all the consequences of different strategies, or do not think strategically at all, then believing yourself to be a utilitarian (without being one, due to those computational constraints) will leave you either not changing your behaviour or philosophizing yourself into psychopathy, whereby you'll rationalize virtually any form of immoral (net negative global utility) conduct. I do think that believing oneself to be a utilitarian while not having hardware enough to track consequences is functionally equivalent to psychopathy whenever the belief does not work like a dragon-in-the-garage belief: you can virtually always alter the action a little bit and set up a partial sum that comes out positive (if you want to murder a co-worker, you can sell the organs and donate the proceeds to charity, for example). The belief that one is capable of accurately tracking consequences may also be a product of narcissism, which is a very antisocial trait.
↑ comment by jhuffman · 2011-08-24T16:41:18.868Z · LW(p) · GW(p)
You can end up utilitarian either because you're a psychopath and don't have the special moral module - in which case you default to general purpose reasoning - or because you're very philosophical and have a specific preference for determining moral questions by the same logic with which you determine everything else, thus deliberately overruling the special moral module.
This is an interesting interpretation. The study's authors seemed to suggest that the psychopaths et al. were getting to their answer via a very different route than the thoughtful utilitarians. Your suggestion is more intuitively appealing to me: we should expect a larger set of common answers between psychopaths and utilitarians if both are using reason to answer these questions.
comment by shokwave · 2011-08-23T11:07:02.961Z · LW(p) · GW(p)
Kind and philosophical utilitarians of the sort one meets on LW would be extremely unlikely to act in practice according to the implications of their favored theories in real-life "sacrificial" situations
I don't think you should be quite so certain of this.
comment by atucker · 2011-08-23T13:32:21.766Z · LW(p) · GW(p)
The differences between the tasks on which the different groups choose the utilitarian answer seem interesting. It seems like Machiavellians take advantage of weakness more than Psychopaths, while Psychopaths are more okay with arbitrarily choosing who to kill.
Psychopaths choose the utilitarian answer on Trespassers, Hostages, Plane Crash, Prisoners of War, Surgery, and Footbridge the most out of all the groups.
Trespassers, Hostages, Plane Crash, and Prisoners of War all involve killing one member of a group that you're a part of in order to save the rest of the group from external forces. Plane Crash has the added difference that the member killed is injured, and it's suggested that he's eaten afterwards.
Surgery (the one with the patient you can kill to donate their organs to 5 other patients) and Footbridge (aka Trolley) involve killing individuals in order to save larger numbers of people.
Machiavellians choose the utilitarian answer on Submarine, Bystander, Liferaft, Fumes, Spelunkers, and Baby the most out of all the groups.
Submarine, Liferaft, and Spelunkers involve killing injured individuals in order to save the rest of a group that you're part of.
Baby involves smothering a crying baby so that the rest of the group does not get heard.
Bystander and Fumes both involve flipping a switch to kill an individual rather than a group.
No-meaningers choose the utilitarian answer on Surgery and Footbridge almost as much as Psychopaths do. Maybe they're more familiar with those questions, and as a result became no-meaningers?
comment by Dallas · 2011-08-23T11:56:52.357Z · LW(p) · GW(p)
The actual participants in the study were responding to fictional dilemmas as well; why should we assume that LW readers would "actually not do it" and the participants would?
↑ comment by Vladimir_M · 2011-08-23T20:33:16.659Z · LW(p) · GW(p)
I don't think most of the study participants would do it either; I'm sure most of their statements are also driven by signaling, not a realistic appraisal of what they would really do. I merely hypothesize that in a more representative sample of the general population (and not just an undergraduate student population) this proportion, while still large, is somewhat lower than among LW participants.
comment by [deleted] · 2011-08-23T18:25:23.697Z · LW(p) · GW(p)
One implication of adopting a utilitarian framework as a normative standard in the psychological study of morality is the inevitable conclusion that the vast majority of people are often morally wrong. For instance, when presented with Thomson’s footbridge dilemma, as many as 90% of people reject the utilitarian response (Mikhail, 2007). Many philosophers have also rejected utilitarianism, arguing that it is inadequate in important, morally meaningful ways, and that it presents an especially impoverished view of humans as ‘‘locations of utilities [and nothing more]. . .’’ and that ‘‘persons do not count as individuals. . . any more than individual petrol tanks do in the analysis of the national consumption of petroleum’’ (Sen & Williams, 1982, p. 4). For those who endorse utilitarianism, the ubiquitous discomfort toward its conclusions points to the pessimistic possibility that human moral judgment is even more prone to error than many other forms of judgment, and that attempting to improve the quality of moral judgment will be a steep uphill battle. Before drawing those conclusions, it might prove useful to investigate individuals who are more likely to endorse utilitarian solutions and perhaps use them as a psychological prototype of the ‘‘optimal’’ moral judge. What do those 10% of people who are comfortable with the utilitarian solution to the footbridge dilemma look like? Might these utilitarians have other psychological characteristics in common? Recently, consistent with the view that rational individuals are more likely to endorse utilitarianism (e.g., Greene et al., 2001), a variety of researchers have shown that individuals with higher working memory capacity and those who are more deliberative thinkers are, indeed, more likely to approve of utilitarian solutions (Bartels, 2008; Feltz & Cokely, 2008; Moore, Clark, & Kane, 2008). In fact, one well-defined group of utilitarians likely shares these characteristics as well—the subset of philosophers and behavioral scientists who have concluded that utilitarianism is the proper normative ethical theory.
This seems a reasonable cause for further investigation. And it leads me to wonder: what is more likely, that 90% of people are wrong about the interpretation of their own morality in a very "don't believe your lying eyes" way, or that 10% of people actually have genuinely different moral intuitions on a particular set of issues? What if philosophers, cognitive scientists, and psychopaths just have values that on reflection drift in different directions than those of other groups, or of each other (just because they agree on some utilitarian actions doesn't mean their systematized ethical frameworks are similar on other dimensions)? Of course, as Vladimir_M points out, how one chooses to signal about moral issues and how one actually responds are two different things.
But could it be that society, rather than experiencing something fitting our accepted grand tale of moral progress (hastened by enlightened elites throughout history), is instead just rationalizing moral change that reflects raw demographic shifts, economic conditions, and the fickle fashions of those in positions of authority on matters of moral arbitration and/or power? Ah, but that robs me of a comforting future with values that are just my own values extrapolated and "fixed"; best not think of this too much, then.
↑ comment by [deleted] · 2012-02-06T22:37:32.023Z · LW(p) · GW(p)
Moral Intuitions: Are Philosophers Experts?
Recently psychologists and experimental philosophers have reported findings showing that in some cases ordinary people’s moral intuitions are affected by factors of dubious relevance to the truth of the content of the intuition. Some defend the use of intuition as evidence in ethics by arguing that philosophers are the experts in this area, and philosophers’ moral intuitions are both different from those of ordinary people and more reliable. We conducted two experiments indicating that philosophers and non-philosophers do indeed sometimes have different moral intuitions, but challenging the notion that philosophers have better or more reliable intuitions.
↑ comment by [deleted] · 2014-06-13T07:27:39.833Z · LW(p) · GW(p)
Moral reasoning can have a specific psychometric meaning that is inconsistent with lay interpretations of moral reasoning.
"Professor Simon Baron-Cohen suggests that, unlike the combination of both reduced cognitive and affective empathy often seen in those with classic autism, psychopaths are associated with intact cognitive empathy, implying non-diminished awareness of another’s feelings when they hurt someone.[57]
Moral judgment
Psychopaths have been considered notoriously amoral – an absence of, indifference towards, or disregard for moral beliefs. There are few firm data on patterns of moral judgment, however. Studies of developmental level (sophistication) of moral reasoning found all possible results – lower, higher or the same as non-psychopaths. Studies that compared judgments of personal moral transgressions versus judgments of breaking conventional rules or laws, found that psychopaths rated them as equally severe, whereas non-psychopaths rated the rule-breaking as less severe.[58]
A study comparing judgments of whether personal or impersonal harm would be endorsed in order to achieve the rationally maximum (utilitarian) amount of welfare, found no significant differences between psychopaths and non-psychopaths. However, a further study using the same tests found that prisoners scoring high on the PCL were more likely to endorse impersonal harm or rule violations than non-psychopaths were. Psychopaths who scored low in anxiety were also more willing to endorse personal harm on average.[58]
Assessing accidents, where one person harmed another unintentionally, psychopaths judged such actions to be more morally permissible. This result is perhaps a reflection of psychopaths’ failure to appreciate the emotional aspect of the victim’s harmful experience, and furnishes direct evidence of abnormal moral judgment in psychopathy.[59]" - Wikipedia
comment by rehoot · 2011-09-04T03:21:53.057Z · LW(p) · GW(p)
Yvain said:
You can end up utilitarian either because you're a psychopath and don't have the special moral module - in which case you default to general purpose reasoning - or because you're very philosophical and have a specific preference for determining moral questions by the same logic with which you determine everything else, thus deliberately overruling the special moral module.
I participate in a utilitarian forum, and from that experience I would add to the quote above that there are some people who encountered emotional arguments about "painism" or "speciesism" (e.g., arguments from Singer, Ryder, and the like) and followed those arguments to utilitarianism. I would expect that there are few people in this category as a percentage of the total population (in part because few people seriously study ethics of any kind, and fewer still find their way to that end).
comment by Jack · 2011-08-23T22:28:57.408Z · LW(p) · GW(p)
Maybe this was accounted for successfully, but I didn't see it: We don't know that anyone who took part in the study is actually a psychopath. We only know that some participants scored relatively high on this measure of psychopathy. But using a sliding scale measure means that lots of psychological conditions will come up as a higher than average psychopathy score. Autism would score higher than average on a sliding scale measure of psychopathy.
comment by [deleted] · 2011-08-23T18:36:50.628Z · LW(p) · GW(p)
A possible way to look at this is that it's just one consequence of neurodiversity.
Most people share a set of biases towards the answer to an ethical question according to a certain moral system; thus only the most rational, the most intelligent, or those who devote the most time to pondering such questions are likely to overcome this bias.
Psychopaths have a different set of biases, so it doesn't take above-average rationality for them to see the "right" answers according to a moral framework on some issues where "normal" people have a very hard time considering things dispassionately.
Note: Seeing the "right" answer according to a moral framework/ethical system, and choosing to answer in a manner consistent with it on a poll or when questioned, doesn't necessarily mean one embraces it as one's own. Since a variant of utilitarianism is perhaps the default implicit framework of modern society, respondents may just be trying to model it so as to mimic proper responses in order to fit in or signal the right affiliations. This seems especially plausible for highly functional psychopaths. In which case they may be failing because they don't really understand what it is like to think with those biases in place, and don't really know in which direction those biases are likely to shift typical answers, something a rationalist psychopath would strive pretty hard to take into account.
comment by halcyon · 2014-07-03T19:44:47.140Z · LW(p) · GW(p)
The cynical economist's position would be that if utilitarianism leads to good results, and being antisocial leads to utilitarianism, then that is a positive side to being antisocial. For example, English social theories, which have led to the most progressive societies, are intuitively utilitarian. Only valid to a certain extent, of course, but you might say that if you want to live in a progressive society, then you should be slightly antisocial.
I would also like to know whether this definition of "antisocial" covers the Buddha as well. Moreover, might not having non-mainstream tastes or opinions also be correlated with "antisocial" behavior?
comment by lessdazed · 2011-08-23T23:38:32.882Z · LW(p) · GW(p)
Of course, what people say they would do in situations of this sort is usually determined by signaling rather than a realistic appraisal. Kind and philosophical utilitarians of the sort one meets on LW would be extremely unlikely to act in practice according to the implications of their favored theories in real-life "sacrificial" situations, so their views are by themselves not strong evidence of antisocial personality traits.
The study seems to target the category of people who answer like utilitarians do, rather than "real utilitarians" defined in any way, so the study seems to directly challenge this statement. I thought one conclusion was that the advocates of utilitarianism are more likely to be antisocial in certain ways, regardless of how they would act when faced with a trolley problem, which was something not measured here.
comment by iii · 2011-08-23T21:40:52.806Z · LW(p) · GW(p)
Pardon me, but this seems to have little to nothing to do with whether utilitarianism should be considered a superior moral framework or not (if that has never been the point, I apologize). If anything, the article seems to lend evidence to the claim that, given certain circumstances, psychopaths tend to be more moral than the average individual. Why stigmatizing a mental disorder, rather than its consequences, is still tolerated in a society that has ostensibly developed neural imaging is also up for debate.
comment by lessdazed · 2011-08-24T00:14:50.528Z · LW(p) · GW(p)
these approaches lead to the counterintuitive conclusion
Counter-intuitiveness is not a property of the conclusion alone. Surprise does have an important role in learning, because otherwise one may rationalize and modify a theory being tested to fit the facts discovered, so a good way to test a theory's informativeness is to set predictions beforehand and notice one's surprise whenever they don't pan out.
However, experiments are never tests of a single hypothesis but of complexes of hypotheses. A problem with testing a theory's predictions to evaluate the theory is that one may make terrible predictions; the strength of the predictions is not a test of the theory alone but of the tester's ability at extrapolating from a theory.
E.g.: I have several times encountered the argument, "If evolution is true, then why do scientists wear clothes?!" The idea is that by wearing clothes scientists admit to weaknesses of the bare skin design, and evolutionary theory would never predict less furry creatures, creatures who would sometimes be well served by clothing, to evolve from furry creatures. That's a really stupid thing to think, and the conclusion one may draw from noticing surprise that scientists believe in evolution and wear clothes is not that there is a problem with the theory, but with the predictor's extrapolation from the theory.
Evolution is not disproved by scientists wearing clothes. That scientists wear clothes is not even the most infinitesimally small theoretical evidence against evolution.
Obviously, I post in hindsight. But "Participants who indicated greater endorsement of utilitarian solutions had higher scores on measures of psychopathy, Machiavellianism, and life meaninglessness...a set of psychological characteristics that many would consider prototypically immoral," i.e. "Participants who were abnormally likely to endorse unpopular moral calculations were aneurotypical in ways society labels as bad, and they were more willing to defy societal conventions and trust their own moral judgement rather than society's, and/or ignore society's mores for personal gain." Surprise?
I vaguely recall reading about a study showing that most people were better at estimating their grocery bill when they abstractly estimated as they went along than when they tried to keep track of dollars and cents in their heads. Suppose this is true; pretend also that it was well known and that the workings of cash registers were a total mystery.
One might find that those more adept at calculating would be less likely to take approximations as representing objective truths, and less likely to regard $2.95, $2.99, and $3.00 as qualitatively the same, even though all three numbers are of the same type - amounts that are "threeish dollars".
Would the usually superior results of the approximaters' models show that the true operation of the registers most resembles converting prices into approximations and arriving at a larger number via a formula no one understood but which was approximately addition, then adding a bit for tax and translating the result into a dollar amount? Would the counters, who laboriously try to add the prices and carry the numbers and then calculate the tax by multiplying (rather than "adding a bit" as the good approximating models do - they never accidentally forget to move a decimal point), be obviously wrong, by having the only models that fail catastrophically, in addition to their doing worse on average?
The counters might claim that counting best represents the type of thing cash registers do, and present contrived thought experiments like a person buying two items, one for $1.60 and the other for $2.80, with a 7.5% tax rate. Obviously, no experiment can be run with a cash register as such prices are never found in practice; counters say ($1.60+$2.80)*1.075=$4.73, while approximaters say $1.60 is the type of thing that is one-and-a-halfish-dollars (a fringe position, still noteworthy, is that it is the type of thing that is twoish dollars), $2.80 is the type of thing that is threeish dollars, add them together to get four-and-a-halfish dollars, and a little bit more than four-and-a-halfish dollars is the price of that basket of goods: "$4.75".
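For concreteness, here is a minimal sketch (in Python) of the two methods in this thought experiment; the exact rounding bucket and the size of the tax pad used by the approximaters are my own guesses at what the "ish" amounts and "a little bit more" mean:

```python
# Toy rendering of the thought experiment: "counters" compute the bill
# exactly; "approximaters" round each price to an "ish" bucket and pad for tax.

def counter_total(prices, tax_rate=0.075):
    # Exact method: sum the prices, then apply the tax multiplier.
    return round(sum(prices) * (1 + tax_rate), 2)

def approximater_total(prices):
    # Heuristic method: round each price to the nearest half dollar,
    # sum the "ish" amounts, then add a little bit more for the tax.
    ish = sum(round(p * 2) / 2 for p in prices)
    return ish + 0.25  # "a little bit more" is assumed to be a quarter

prices = [1.60, 2.80]
print(counter_total(prices))       # 4.73
print(approximater_total(prices))  # 4.75
```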
The consonance of "$4.75" and not "$4.73" to apes counting in base ten is not relevant to unraveling the secrets of the cash register, or much else, as interesting as it is in its own right. Likewise a strong emotional aversion to pushing the fat man tells us about the psychology of someone, not the rightness of pushing the fat man (which is dependent on all facts, including of course the psychology of each person).
comment by sam0345 · 2011-08-23T19:46:54.791Z · LW(p) · GW(p)
To see what utilitarians would actually do, observe what happens when a group with a utilitarian ideology gets unchecked political power.
I pretty regularly hear that Stalin or Ho or Tito's "authoritarianism" was regrettably necessary in order to achieve economic development, which is just the same argument that "socialism needs killing fields", only made by a socialist instead of a supporter of capitalism.
↑ comment by lessdazed · 2011-08-23T21:25:45.220Z · LW(p) · GW(p)
a utilitarian ideology
To see what deontologists would actually do, observe what happens when a group with a deontological ideology gets unchecked political power.
Actually, that's probably not an informative question.
↑ comment by sam0345 · 2011-08-23T21:56:09.887Z · LW(p) · GW(p)
So what have these wicked deontologists been up to?
It looks to me that deontological ideologies have a markedly better record than utilitarian ideologies.
↑ comment by lessdazed · 2011-08-23T22:27:20.904Z · LW(p) · GW(p)
Have you ever seen a typical chair? I haven't, so if you have, tell me its measurements.
Likewise, there is no typical deontology or utilitarianism. Comparison across societies - none of which were purely either - in which no variable is controlled for, to see which of the two generally does better or worse, is not how to evaluate morality.
To calculate the cost of a purchase, one multiplies by a fraction to figure out the tax to pay. If doing it with decimal points and standard notation, there is a chance of a huge mistake if one forgets to carry a one, move a decimal point, or add the tax to the subtotal. The hand-waving "add a bit to the subtotal" may have less error, on average, as a way to determine how much one will spend on a purchase plus tax, and it will also never have a major error. Nonetheless, anything other than computation is an approximation of the real process; some situations are so simple most everyone should calculate, some people are so good at math they should calculate when others shouldn't try, etc.
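A toy sketch of that claim; the dollar amounts, the particular decimal-point slip, and the size of the "bit" are illustrative assumptions, not anything from a real comparison:

```python
# Illustrative only: compare an exact tax calculation, the same calculation
# with a misplaced decimal point, and the hand-waving "add a bit" heuristic.

subtotal = 38.40
tax_rate = 0.075

exact = subtotal * (1 + tax_rate)         # correct if done carefully
slipped = subtotal * (1 + tax_rate * 10)  # decimal point moved one place
approx = subtotal + 3.00                  # "add a bit to the subtotal"

print(f"{exact:.2f} {slipped:.2f} {approx:.2f}")  # 41.28 67.20 41.40
```

The slip is catastrophic; the heuristic is always off by a little but never by much.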
↑ comment by satt · 2011-08-24T10:05:09.713Z · LW(p) · GW(p)
I pretty regularly hear that Stalin or Ho or Tito's "authoritarianism" was regrettably necessary in order to achieve economic development, which is just the same argument that "socialism needs killing fields"
That's not how I'd interpret it (at least not when put like that). To me that argument reads more like "economic development in the USSR/Vietnam/Yugoslavia required killing fields", which is very different to "socialism needs killing fields".
comment by Voldemort · 2011-08-23T18:42:50.740Z · LW(p) · GW(p)
Well this is splendid! I should have a far easier time concealing myself and my intentions here than on nearly any other venue.
↑ comment by [deleted] · 2011-08-23T21:09:29.957Z · LW(p) · GW(p)
What's wrong with Voldemort inferring from the study that some regular strategies for identifying sociopaths may be less effective on LessWrong? Though I may be putting words in his mouth at this point.
One could say that people on LW are more clever and rational overall, so they would find new strategies for identifying them, but I'm not so sure. We are rather tribal about the "rationalist community" (this is in fact endorsed as a feature, not a bug); are we sure we won't be biased against someone who proposes a novel or efficient way to detect their likely presence?