[SEQ RERUN] Is Morality Preference?

post by MinibearRex · 2012-06-24T04:46:58.978Z · LW · GW · Legacy · 66 comments

Today's post, Is Morality Preference?, was originally published on 05 July 2008. A summary (taken from the LW wiki):

A dialogue on the idea that morality is a subset of our desires.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Moral Complexities, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

66 comments

Comments sorted by top scores.

comment by Gastogh · 2012-06-24T14:29:58.488Z · LW(p) · GW(p)

Am I the only one who has serious trouble following presentations in a fictitious dialogue format such as this? The sum of my experience of the whole Obert/Subhan exchange, and almost every intermediate step therein, boils down to the line:

Subhan: "Fair enough. Where were we?"

Replies from: TheOtherDave, TimS
comment by TheOtherDave · 2012-06-24T15:15:24.707Z · LW(p) · GW(p)

Nope, you're not the only one. That said, I also know people who react well to this sort of presentation. Different strokes, and all. (That said, this particular example of presentation-through-dialogue is IMHO relatively poor.)

comment by TimS · 2012-06-24T14:49:39.195Z · LW(p) · GW(p)

You are not alone.

comment by buybuydandavis · 2012-06-24T22:10:05.206Z · LW(p) · GW(p)

Maybe it's better to say that morality is a certain species of valuation.

It is a valuation of behavior, associated with different behaviors and emotions depending on the result of the valuation. When the valuation is positive, you get various flavors of increased liking and you reward; when the valuation is negative, you get various flavors of disliking and you punish.

Note that the valuations are multiordinal - you may punish the person exhibiting the behavior, and punish those who don't punish the behavior, and punish those who don't say that they would punish the behavior, ...

We're social creatures. We have evolved mechanisms for interacting with others. You have moral pattern recognition algorithms in your head which fire for some types of behavior, eliciting the moral emotional and behavioral reactions in turn. Since we are adaptive, learning creatures, those mechanisms are adaptable.

Obert tries to attack this in a number of ways.

First, by the language we use. We use "I want" in different contexts than "that's right", with different associated emotions and actions. He's correct, but it doesn't really make his point. "I want" is also different from "that's yummy". Each is used for different kinds of valuations, resulting in different kinds of emotions and actions depending on how the valuation turns out. Does that make "yummy" not a preference, not a valuation?

Same thing for Obert's subsequent questioning of Subhan's psychological account of "I want" versus "It is right". I agree with Obert's comment that Subhan's account is unrealistic. I agree that when someone says "It is moral", they are usually pinging different valuations than when they say "I want". Do we want people to do what is moral? Usually. But we also use "want" when we want to distinguish plain preferences from moral preferences, to distinguish the general from the specific.

Time and time again, Obert tries to make his point by showing that we don't use "moral" in the same way as "want".

It's useful to also distinguish between want and yummy, and similarly useful to distinguish between want and moral. A wise fellow once said:

Well, it may turn out that the moral thing to do was not the right thing to do.

Old Jean-Luc makes a good point. "Moral", as pinging our behavioral pattern matchers, pings only a subset of our values. When taken in total, we may not prefer the moral behavior over all alternatives. The yummy food may not be the right food, and the moral action may not be the right action. Life is full of trade-offs.

Obert goes on to attack Subhan's contention that morality is what "society wants". I have an aversion to this view, but I'd note that recent work on analyzing the dimensions of morality has shown that for some people, obedience to the tribe is a very strong component of their morality. They've come up with six moral dimensions (or so), and found that people have fairly consistent preferences between the different dimensions. My morality algorithm rates social conformity low, and autonomy high.

I'd expect we could find the same kinds of patterns with yummy. You have different types of taste buds, and everyone's yummy detector will show different relative weightings for activation of the different types of taste buds. (Different weightings for different combinations too. I wonder if they've done that kind of combinatoric analysis for morality yet.)

The final attack on morality as preference concerned the advancement of morality.

Notice how this works. We evaluate all of time, us versus all the different thems, and find that by our evaluation, we've advanced. Hmmm. Do you think that people from a thousand years ago would necessarily agree with that evaluation? No? Oh, they're wrong. I see. But they'd say that we're wrong, right? "Yeah yeah, but they're really the ones who are wrong." Got it.

But I think advancement is actually possible, and even to be expected, on the preference account. People aren't all knowing. Over the years, you gain experience. It shouldn't be surprising that people get better at optimizing some valuation over time. Nor should it be surprising that the valuations change over time as our situation changes.

One thing I disagree with, which is likely empirically verifiable, if it hasn't already been verified:

"I am not so sure; human cognitive psychology has not had time to change evolutionarily over that period.

I think it has had plenty of time. Not just our psychology, which one could partially attribute to societal changes, but the statistics of the genetic pool. I saw a recent study on the differential birth and child survival rates by income in the 18th century (19th?). We should expect income to have some correlates with genetics. Large differential birth rates in populations with genetic differences mean large shifts in population genetics.

And considering the moral dimension in particular, since punishment and reward are large parts of the moral response, that gives a directed component to genetic evolution. Kill off everyone you see displaying a certain phenotype and breed only the phenotypes you like, and you can make a lot of progress very quickly.

But this gets rather tiresome. I don't think Subhan is a worthy champion for "morality as preference", so cleaning up his mess doesn't really amount to much. Likely Subhan would feel the same about me. That's the limitation of these kinds of Socratic dialogues - they're only as convincing as the champion that loses.

I've tried instead to give some positive account of my views of morality as preference, how it works, and how it answers the usual objections against it, along with the commentary on Obert and Subhan. I really don't think it's that complicated once you treat human beings as evolved social creatures.

comment by mwengler · 2012-06-24T17:56:47.974Z · LW(p) · GW(p)

Is morality 1) a kind of preference with a somewhat different set of emotional flavors associated with it or 2) something which has a true-or-falseness beyond preference?

For me to credit 2) (Morality is true), I would need to know that 2) is a statement that is actually distinguishable in the world from 1). Someone tells me electrons attract other electrons; we do a test; it turns out to be false, and "electrons repel other electrons" is a true statement beyond preference. Someone else tells me electrons repel each other because they hate each other. Maybe some day we will learn how to talk to electrons, but until then this is not testable, not tested, and so the people who come down on each side of this question are not talking about truth.

Someone tells me Morality has falsifiable truths in it, where is the experimental test? Name a moral proposition and describe the test to determine its falsehood or truthiness. If the proponent of moral truth did this, I missed it and need to be reminded.

If you believe that there are moral truths, but you cannot propose a test to verify a single one of them, you are engaged in a form of belief that is different from believing in things which are falsifiable and have been found true. I am happy to label this difference "scientific" or "fact-based", but of course the danger of labels is they carry freight from their pasts. But however you choose to label it, is there a proponent of the existence of moral truth who can propose a test, or will these proponents accept that "moral truth is more like truths about electrons hating each other and less like truths about electrons repelling each other"?

Note that in discussing the proposition "electrons hate each other" I actually proposed a test of its truth, but pointed out we did not yet know how to do that test. If we say "we will NEVER know how to do that test, it's just dopey", are we saying something scientific? Something testable? I THINK not; I think this is an unscientific claim. But maybe some chain of scientific discovery will put us in a place where we can test statements about what will NEVER be knowable. I personally do not know how to do that now, though. So if I hold an opinion that electrons neither hate nor love each other, I hold it as an opinion, knowing it might be true, it might be false, and/or it might be meaningless in the real world.

So then what of Moral "Truths"? For the moment, at my state of knowledge, they are like statements about the preferences of electrons. Maybe there are moral truths, but I don't know how to learn any of them as facts, and I am not aware of anyone who has presented a moral truth and a test for its truthiness. Maybe some day...

But in the meantime, everybody who tells me there are moral truths, and especially anybody who tells me "X is one of those moral truths", gets tossed in the dustbin labeled "people who don't know the difference between opinion and truth." Is murder wrong; is that a fact? If by murder you mean killing people, you cannot find a successful major civilization that has EVER appeared to believe that. Self-defense and protection of those labeled "innocent" are observed to justify homicide in the societies that I am aware of.

But suppose by murder we mean "unjustifiable homicide"? Well then you are either in tautology land (murder is defined as killing which is wrong) or you have kicked the can down the road to a discussion of what justifies homicide, and now you need to propose tests of your hypotheses about what justifies homicide.

So even if there is "moral truth," if you can't propose a test for any moral truths, you are happily joining the cadre of people who know the truth of whether electrons hate each other or not.

Replies from: None, buybuydandavis, pragmatist, shminux
comment by [deleted] · 2012-06-24T23:18:11.073Z · LW(p) · GW(p)

G.E. Moore is famous for this argument against external world skepticism: "How do I know I have hands?" (he raises his hands in front of his face) "Here! Here are my hands!". His point was that it is absurd to call the more obvious into doubt by means of the less obvious: By whatever means I might understand an argument supporting skepticism about my hands (say, the Boltzmann Brain argument), by those very means I am all the more sure that I do have hands.

I think something similar might apply here. To say that morality is 'objective' or 'subjective' may be an equivocation or category mistake, but if I understand anything, I understand that slavery is wrong. I can't falsify this, or reduce it to some more basic principle because there is nothing more basic, and no possible world in which slavery is right. A world in which the alternative is true cannot be tested for because it is wholly inconceivable.

Replies from: gwern, mwengler
comment by gwern · 2012-06-25T02:06:45.749Z · LW(p) · GW(p)

His point was that it is absurd to call the more obvious into doubt by means of the less obvious: By whatever means I might understand an argument supporting skepticism about my hands (say, the Boltzmann Brain argument), by those very means I am all the more sure that I do have hands.

Further reading: http://en.wikipedia.org/wiki/Here_is_a_hand#Logical_form http://www.overcomingbias.com/2008/01/knowing-your-ar.html http://www.gwern.net/Prediction%20markets#fn41

comment by mwengler · 2012-06-25T00:07:18.847Z · LW(p) · GW(p)

A world in which the alternative is true cannot be tested for because it is wholly inconceivable.

A world in which I do not have hands is totally conceivable and easily tested for. So it would appear you have at least this gigantic difference between "Slavery is wrong" and your G.E. Moore analogy's source statement.

To say that it is obvious that "slavery is wrong" does not rule out this being a statement of preference, does it? I would rather be slowly sucked to orgasm by healthy young females than have my limbs and torso crushed and ground while immersed in a strong acid. This is AT LEAST as obvious to me as "slavery is wrong" is obvious to you, I would bet, yet it is quite explicitly a statement of preference.

Replies from: None
comment by [deleted] · 2012-06-25T13:45:26.085Z · LW(p) · GW(p)

To say that it is obvious that "slavery is wrong" does not rule out this being a statement of preference, does it?

That's a fair point, but I can easily imagine a world in which I prefer being crushed to death to receiving the attentions of some attractive women, so long as you let me add a little context. Lots of people have chosen painful deaths over long and pleasant lives, and we've rightly praised them for it. So while I agree that the choice you describe is a clear preference of mine, it has none of the strength of my moral belief about slavery.

A world in which I do not have hands is totally conceivable and easily tested for.

That wasn't quite the point. The analogue here wouldn't be between the hands and the moral principle. The analogue is this: how surely do you know this epistemic rule about falsification? Do you know it more surely than you know that slavery is wrong? I for one, am vastly more sure that slavery is wrong than I am that instrumentalism or falsificationism is the correct epistemic theory.

I may be misguided, of course, so I won't say that instrumentalist epistemology can't in principle call my moral idea into question. But it seems absurd to assume that it does.

Replies from: mwengler, TheOtherDave
comment by mwengler · 2012-06-25T17:58:46.891Z · LW(p) · GW(p)

I think your point about falsification is a good one. I in fact believe in falsifiability, in some powerful sense of the word "believe". I suspect a positive belief in falsifiability is at least weakly falsifiable. With time and resources one could look for correlations between belief in falsifiability and various forms of creativity and understanding. I would expect to find it highly correlated with engineering, scientific, and mathematical understanding and progress.

Of course "proving" falisifiabilty by using falisifiability is circular. In my own mind I fall back on instrumentalism: I claim I'm interested in learning falisifiable things about the world and don't care whether we call them "true" or not and don't care whether you call other non-falsifiable statements true or not, I'm interested in falsifiable ones. Behind or above that belief is my belief that I really want power, I want to be able to do things, and that it is the falsifiable statements only that allow me to manipulate the environment effectively: since non-falsifiable statements almost by definition don't help me in manipulating the world in which I would be trying to falsify them.

Is a statement like "Slavery is wrong" falsifiable? Or even "Enslaving this particular child in this particular circumstance is wrong"? I think they are not "nakedly" falsifiable, and in fact I have zero problem imagining a world in which at least some people do not think they are wrong (we live in that world). I think the statement "Slavery is wrong because it reduces average happiness" is falsifiable. "Slavery is wrong because it misallocates human resources" is falsifiable. These reflect instrumentalist THEORIES of morality, theories which it does not seem could themselves be falsifiable.

So I have an assumption of falsifiability. You may have an assumption of what is moral. I admit the symmetry.

I can tell you the "I can't imagine it" test fails in epic fashion in science. One of the great thrills of special relativity and quantum mechanics is that they are so wildly non-intuitive for humans, and yet they are so powerfully instrumentally true in understanding absolute reams of phenomena, allowing us to correctly design communications satellites and transistors, to name just two useful instrumentalities. So I suppose my belief against "I can't imagine it" as a useful way to learn the truth is a not-necessarily-logical extension of a powerful truth from one domain that I respect powerfully into other domains.

Further, I CAN imagine a world in which slavery is moral. I can go two ways to imagine this: 1) mostly we don't mind enslaving those who are not "people." Are herds of cattle for food immoral? Is it unimaginable that they are moral? Well, if you can't imagine that is moral, what about cultivated fields of wheat? Human life in human bodies ends if we stop exploiting other life forms for nutrition. Sure, you can "draw the line" at chordates for whether cultivating a crop is "slavery" or not. Other people have drawn the line at clan members, family members, nation members, skin-color members. I'm sure there were many white slave holders in the southern U.S. who could not imagine a world in which enslaving white people was moral. Or enslaving British people. Or enslaving British aristocracy. So how far do you go to be sure you are not enslaving anything that shouldn't be enslaved? Or do you trust your imagination that it is only people (or only chordates), even as you realize how powerfully other people's imaginations have failed in the past?

I also reject all religious truth based on passed-down stories of direct revelations from god. Again, this kind of belief fails epically in doing science, and I extend its failure there into domains where perhaps it is not so easy to show it fails. And in my instrumentalist soul, I ultimately don't care whether I am "right" or "wrong"; I would just rather use my limited time, energy, and brain-FLOPs pursuing falsifiable truths, and hope for the best.

comment by TheOtherDave · 2012-06-25T14:04:43.118Z · LW(p) · GW(p)

I can easily imagine a world in which I prefer being crushed to death to receiving the attentions of some attractive women, so long as you let me add a little context. [..] So while I agree that the choice you describe is a clear preference of mine, it has none of the strength of my moral belief about slavery.

I understand this to imply that you cannot imagine a world in which you prefer sending someone into slavery to not doing so, no matter what the context. Have I understood that correctly?

Replies from: None
comment by [deleted] · 2012-06-25T16:36:14.039Z · LW(p) · GW(p)

Have I understood that correctly?

No, I can easily imagine a world in which I prefer sending someone into slavery to drinking a drop of lemon juice: all I have to do is imagine that I'm a bad person. My point was that it's easy to imagine any world in which my preferences are different, but I cannot imagine a world in which slavery is morally permissible (at least not without radically changing what slavery means).

Replies from: mwengler, TheOtherDave
comment by mwengler · 2012-06-25T18:21:18.801Z · LW(p) · GW(p)

How about a world in which, by sending one person from your planet into slavery, you defer the enslavement of the entire Earth for 140 years? A world of alien invaders who outgun us more than Europeans outgunned the tribes in Africa from which many of them took slaves, but who are willing, for some reason we can't comprehend, to take one person you pick back to the home planet, 70 light years away. But failing your making that choice, they will stay here and, at some expense to themselves, enslave our entire race and planet?

Can you now imagine a world in which your sending someone into slavery is not immoral? If so, how does this change in what you can and cannot imagine change your opinion of either the imagination standard or slavery's moral status?

It seems to me the most likely source of emotions and feelings is evolution. We aren't just evolved to run from a sabre-tooth tiger; we have a rush of overwhelming fear as the instrumentality of our fleeing effectively. Similarly, we have evolved - mammals as a whole, not just humans, not even just primates - to be "social" animals, meaning a tremendously important part of the environment was our group of other mammals. Long before we made the argument that slavery was wrong, we had strong feelings of wanting to resist the things that went along with being enslaved, while apparently we also had powerful feelings that assisted us in forcing others to do what we wanted.

Given the way emotions probably evolved, I think it does make sense to look to our emotions to guide us in knowing what strategies probably work better than others in interacting with our environment, but it doesn't make sense to expect them to guide us correctly in corner cases, in rare situations in which there would not have been enough pay-off to have evolution do any fine tuning of emotional responses.

comment by TheOtherDave · 2012-06-25T16:44:25.254Z · LW(p) · GW(p)

Ah, OK. Thanks for the clarification.

Can you imagine a world in which killing people is morally permissible?

Replies from: None
comment by [deleted] · 2012-06-25T17:10:32.368Z · LW(p) · GW(p)

Can you imagine a world in which killing people is morally permissible?

Sure, I live in one. I chose slavery because it's a pretty unequivocal case of moral badness, while killing is not (as in war, self-defense, execution, etc.). I think probably rape, and certainly lying, are things which are always morally wrong (I don't think this entails that one should never do them, however).

My thought is just that at least at the core of them, moral beliefs aren't subject to having been otherwise. I guess this is true of beliefs about logic too, though maybe not for the same reasons. And this doesn't make either kind of belief immune to error, of course.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-25T17:27:39.216Z · LW(p) · GW(p)

OK. Thanks for clarifying.

comment by buybuydandavis · 2012-06-24T22:14:00.977Z · LW(p) · GW(p)

Is morality 1) a kind of preference with a somewhat different set of emotional flavors associated with it or 2) something which has a true-or-falseness beyond preference?

The trick objective moralists play is to set truth against preference, when what you have are truths about preferences. Is it true, for you, that ice cream is yummy? For me, it is. That doesn't make it any less a preference.

comment by pragmatist · 2012-06-25T02:29:18.033Z · LW(p) · GW(p)

Here's a simple example of a moral claim being tested and falsified:

A: What are you doing with that gun?

B: I'm shooting at this barrel. It's a lot of fun.

A: What? Don't do that! It's wrong.

B: No, it's not. There's nothing wrong with shooting at a barrel.

A: But there's a child inside that barrel! You could kill her.

B: You don't know what you're talking about. That barrel's empty. Go look.

A: [looks in the barrel] Oh, you're right. Sorry about that.

So here A made a moral claim with which B disagreed ("It's wrong to shoot at that barrel."). B proposed a test of the moral claim. A performed the test and the moral claim was falsified.

Now, I anticipate a number of objections to the adequacy of this example. I think they can all be answered, but instead of trying to predict how you will object or tediously listing all the objections I can think of, I'll just wait for you to object (if you so desire) before responding.

So even if there is "moral truth," if you can't propose a test for any moral truths, you are happily joining the cadre of people who know the truth of whether electrons hate each other or not.

I'm already part of this cadre. I know that electrons do not hate each other.

Replies from: mwengler
comment by mwengler · 2012-06-25T03:21:06.418Z · LW(p) · GW(p)

Without the assumption that shooting a child is immoral, this is not a moral argument. With that as an assumption, the moral component of the conclusion is assumed, not proven.

Find me the proof that shooting a child is immoral and we will be off to a good start.

Replies from: pragmatist
comment by pragmatist · 2012-06-25T04:39:18.286Z · LW(p) · GW(p)

If you're looking for a test of a moral claim that does not rely on any background assumptions about morality, then I agree that I can't give you an example. But that's because your standard is way too high. When we test scientific hypotheses, the evidence is always interpreted in the context of background assumptions. If it's kosher for scientific experiments to assume certain scientific facts (as they must), then why isn't it kosher for moral experiments to assume certain moral facts?

Consider the analog of your position in the descriptive case: someone denies that there's any fact of the matter about whether descriptive claims about the external world are true or false. This person says, "Show me how you'd test whether a descriptive claim is true or false." Now you could presumably give all sorts of examples of such tests, but all of these examples will assume the truth of a host of other descriptive claims (minimally, that the experimental apparatus actually exists and the whole experiment isn't a hallucination). If your interlocutor insisted that you give an example of a test that does not itself assume the truth of any descriptive claim, you would not be able to satisfy him.

So why demand that I must give an example of a test of a moral claim that does not assume the truth of any other moral claim? Why does the moral realist have this extra justificatory burden that the scientific realist does not? It's fine to have problems with the specific assumptions being made in any particular experiment, and these can be discussed. Perhaps you think my particular assumptions are flawed for some reason. But if you have a general worry about all moral assumptions then you need to tell me why you don't have a similar worry about non-normative assumptions. If you have a principled reason for this distinction, that would be the real basis of your moral anti-realism, not this business about untestability.

Replies from: TimS
comment by TimS · 2012-06-25T14:06:22.940Z · LW(p) · GW(p)

Your position suggests that one cannot consistently be a physical realist and a moral anti-realist. Is that a fair summary of your position?

Replies from: pragmatist
comment by pragmatist · 2012-06-25T17:38:13.351Z · LW(p) · GW(p)

I do think moral and scientific reasoning are far less asymmetric than is usually assumed. But that doesn't mean I think there are no asymmetries at all. Asymmetries exist, and perhaps they can be leveraged into an argument for moral anti-realism that is not also an argument against scientific realism. So I wouldn't say it's inconsistent to be a physical realist and a moral anti-realist. I will say that in my experience most people who hold that combination of positions will, upon interrogation, reveal an unjustified (but not necessarily unjustifiable) double standard in the way they treat moral discourse.

Replies from: TimS
comment by TimS · 2012-06-25T17:56:41.917Z · LW(p) · GW(p)

I don't think it is a double standard. Empiricism admits the Problem of Induction, but says that the problem doesn't justify retreating all the way to Cartesian skepticism. This position is supported by the fact that science makes good predictions - I would find the regularity of my sensory experiences surprising if physical realism were false. Plus, the principle of falsification (i.e. making beliefs pay rent) tells us what sorts of statements are worth paying attention to.

Moral reasoning seems to lack any equivalent for either falsification or prediction. I don't know what it means to try to falsify a statement like "Killings in these circumstances are not morally permissible." And to the extent that predictions can be made based on the statement, they seem either false or historically contingent - it's pretty easy to imagine my society having different rules about what killings are morally permissible simply by looking at how a nearby society came to its different conclusions.

In short, the problem of induction in empiricism seems very parallel to the is/ought problem in moral philosophy. But moral philosophy seems to lack the equivalent of practical arguments like accurate prediction that seem to rescue empiricism.

Replies from: pragmatist
comment by pragmatist · 2012-06-25T18:37:19.916Z · LW(p) · GW(p)

I do think one can offer a pragmatic justification for moral reasoning. It won't be exactly parallel to the justification of scientific reasoning because moral and scientific discourse aren't in the same business. Part of the double standard I was talking about involves applying scientific standards of evaluation to determine the success of moral reasoning. This is as much of an error as claiming that relativity is false because the nuclear bomb caused so much suffering. We don't engage in moral reasoning in order to make accurate predictions about sensory experience. We engage in moral reasoning in order to direct action in such a way that our social environment becomes a better place. And I do think we have plenty of historical evidence that our particular system of moral reasoning has contributed to making the world a better place, just as our particular system of scientific reasoning has contributed to our increasing ability to control and predict the behavior of the world.

Now obviously there's a circularity here. Our standards for judging that the world is better now that slavery is illegal and women can vote are internal to the very moral discourse we purport to be evaluating. But this kind of ultimate circularity is unavoidable when we attempt to justify any system of justification as a whole. It's precisely the problem Hume pointed out when he talked about induction. Sure, we can appeal to past success as a justification of our inductive practices, but that justification only works if we are already committed to induction. Furthermore, our belief in the past success of the scientific method is based on historical data collected and interpreted in accord with this method. Somebody who rejects the scientific method wholesale may well say "Why should I believe any of these historical claims you are making?"

A completely transcendental justification, one that would be normative to any possible mind in mindspace, is an impossible goal in both moral and scientific reasoning. Any justification you offer for your justificatory practices is ultimately going to appeal to standards that are internal to those practices. That's something we've all learned to live with in science, but there's still a resistance to this unfortunate fact when it comes to moral discourse.

And to the extent that predictions can be made based on the statement, they seem either false or historically contingent - it's pretty easy to imagine my society having different rules about what killings are morally permissible simply by looking at how a nearby society came to its different conclusions.

Our scientific schemes of justification are historically contingent in the same way. There are a number of other communities (extremely religious ones, for instance) that employ a different set of tools for justifying descriptive claims about the universe. Of course, our schemes of justification are better than theirs, as evidenced by their comparative lack of technological and predictive success. By the same token, though, our moral schemes of justification are more successful than those of, say, fundamentalist Islamic societies, as evidenced by our greater degree of moral progress. In both cases, the members of those other societies would disagree that we have done better than them, but that's because they have different (and I would say incorrect) standards of evaluation.

Replies from: TimS
comment by TimS · 2012-06-25T20:34:45.216Z · LW(p) · GW(p)

Yes, there's inherently a certain amount of unsatisfying circularity in everything. But that's a weakness that calls for minimization of circularity.

Empiricism has only one circularly justified position: you can (more or less) trust the input of your senses - which implies some consistency over time. Everything else follows from that. Modern science is better than Ptolemaic science because it makes better predictions.

By contrast, there's essentially no limit to moral circularity. There's the realism premise: There is a part of the territory called "moral rightness". Then you need a circular argument to show any particular moral premise (these killings are unjustified) is part of moral rightness. And there are multiple independent moral premises. (When killing is wrong does not shed much light on when lying is wrong). It's not even clear that there are a finite number of circularly justified assertions.

So I hold empiricism to the same standard as moral realism, and moral realism seems to come up short. Further, my Minimization of Circular Justification principle is justified by worry about the ease of creating a result simply by making it an axiom. (That is, the Pythagorean Theorem is on a different footing if it is introduced as an axiom of Euclidean geometry rather than as a derived result.)

Replies from: pragmatist
comment by pragmatist · 2012-06-25T21:10:07.383Z · LW(p) · GW(p)

If your principle is actually that circular justification must be minimized, then why aren't you an anti-realist about both scientific and moral claims? Surely that would involve less circular justification than your current position. You wouldn't even have to commit yourself to the one circularly justified position assumed by empiricism.

In any case, scientific reasoning as a whole does not just reduce to the sort of minimal empiricism you describe. For starters, even if you assume that the input of your senses is trustworthy and will continue to remain trustworthy, this does not establish that induction based on the input of your senses is trustworthy. This is a separate assumption you must make. Your minimal empiricism also does not establish that simpler explanations of data tend to be better. This is a third assumption. It also doesn't establish what it means for one explanation to be simpler than another. It doesn't establish that the axioms on which the mathematical and statistical tools of science are based are true. I could go on.

Scientific justification as it's actually practiced in the lab involves a huge suite of tools, and it is not true that the reliability of all these tools can be derived once you accept that you can trust the input of your senses. A person can be an empiricist in your sense while denying the reliability of statistical methods used in science, for instance. To convince them otherwise you will presumably present data that you think establishes the reliability of those methods. But in order for the data to deliver this conclusion, you need to use the same sorts of statistical methods that the skeptic is rejecting. I don't see how your shared empiricism helps in this situation.

Our schemes of justification, both scientific and moral, have developed through a prolonged process of evolutionary and historical accretion. The specific historical reasons underlying the acceptance of particular tools into the toolbox are complex and variegated. It is implausible in either case that we could reconstruct the entire scheme from one or two simple assumptions.

Replies from: TimS
comment by TimS · 2012-06-25T23:52:40.599Z · LW(p) · GW(p)

If you'd like to separate the axiom about the reliability of the senses from the axiom that sensory input will remain consistent, I won't actively resist - I think reliability of the senses implies consistency of the senses, but I'm not certain my formulation is more technically correct.

Regarding Ockham's Razor - I'm not sure whether that is a fundamental principle or just a useful rule of thumb. If MWI and Copenhagen really are in evidentiary equipoise, I'm not sure I should have a preference for one or the other (that's obviously not the consensus position in this community).

It doesn't establish that the axioms on which the mathematical and statistical tools of science are based are true.

I think deductive reasoning produces necessary truths - so in a sense, I get statistics "for free" as long as I accept the Peano axioms. Other than that, I don't understand the quoted assertion.

More generally, empirical philosophy provides a place to stop the recursion. I don't think circular justifications work at all, so I think a separate justification for using this stopping place is required - I have memory of consistent sensory impressions, and that is difficult to explain except by believing that consistency is true. One could object that I can't justify reliance on my memory - so I'm being hypocritical to allow my memories to justify themselves. Maybe so, but there's no other principled stopping place for recursion - and continuing recursion past this point devolves to the point that I don't think coherence is a workable concept.

To return to the comparison with morality, I suggest that all the axiomatic assertions in the empirical program are at a fundamental level. When you start doing object level science, recursion goes away entirely. By contrast, object level morality never gets away from [EDIT: recursion]. As you noted, it is impossible to say whether we've made moral progress without referencing what moral position is better.

If progress (scientific, moral, etc) really is possible, we ought to be able to get away from recursive reasoning. That we can't when dealing with moral reasoning is not a good sign that moral reasoning is talking about some objective fact.

Replies from: pragmatist
comment by pragmatist · 2012-06-26T00:38:30.586Z · LW(p) · GW(p)

When you start doing object level science, recursion goes away entirely. By contrast, object level morality never gets away from morality. As you noted, it is impossible to say whether we've made moral progress without referencing what moral position is better.

I don't know what you mean by "object level morality never gets away from morality". Read literally, that's tautologically true, but I don't see the relevance. Is this a typo?

Also, I'm not seeing the distinction here. When I'm engaged in object-level moral reasoning, or when I read examples of object-level moral reasoning on blogs or in newspapers, I very rarely come across recursion or circular justification. There's usually an assumption that everyone in the community agrees that certain sorts of fundamental moral inferences are justified, and the debate is about whether those inferences can be made in a particular case. Here is a classic example of object-level moral reasoning. MLK offers a number of justifications for his moral stance on this particular issue. None of these justifications, as far as I can see, are circular. I don't think this is atypical. Of course, if you think that every moral argument must also simultaneously justify the whole enterprise of objective moral evaluation, then every moral argument will have a circular component. But this places a disproportionately large burden on moral justification.

It's true that if I want to argue that we have made moral progress I need to take for granted certain moral standards of evaluation, but if I want to argue that we have made scientific progress I need to take for granted certain scientific standards of evaluation. The only difference I can see is that the moral assumptions are as a matter of fact more contentious than the scientific ones, so perhaps moral debate breaks down on disagreement about foundational assumptions more often. But this is at least partly because most scientific debate is usually conducted in an institutional setting that has various mechanisms for consensus formation and weeding out sufficiently recalcitrant dissenters. Outside this setting, debate about descriptive issues is often just as contentious as moral debate. I know a number of new-agey people who have completely bizarre standards of epistemic justification. My discussion with them quite often breaks down on disagreement about foundational assumptions.

Replies from: TimS
comment by TimS · 2012-06-26T01:01:23.218Z · LW(p) · GW(p)

Yes, typo corrected.

There's usually an assumption that everyone in the community agrees that certain sorts of fundamental moral inferences are justified, and the debate is about whether those inferences can be made in a particular case.

That's not my sense at all. Moral inferences are fairly easy (compared to cutting-edge scientific inferences). Toy example: if God wants us to attend church, the inference that church attendance should be compelled by the government follows quite easily. There are secondary negative effects, but the only reason to care about them is if the moral assertion that God wants church attendance is false.

When I read political arguments, they almost always operate by assuming agreement on the moral premise. When that assumption is falsified, the argument falls apart. Even for fairly ordinary moral disputes, the argument is usually based on moral principle, not facts or moral inference.

By contrast, equivalently basic scientific questions are fact and inference based. To decide how much weight a bridge can carry, knowing the strength of the steel and the design of the bridge is most of the work. In practice, those types of disputes don't devolve into arguments about whether gravity is going to work this time.

Replies from: wedrifid
comment by wedrifid · 2012-06-26T05:33:43.425Z · LW(p) · GW(p)

There are secondary negative effects, but the only reason to care about them is if the moral assertion that God wants church attendance is false.

Unless the secondary effects were that people are more likely to eat bacon for breakfast that day now that they aren't able to sleep in and it also happens that God doesn't want people to eat pigs.

comment by Shmi (shminux) · 2012-06-24T22:05:18.972Z · LW(p) · GW(p)

Someone tells me Morality has falsifiable truths in it, where is the experimental test?

You are describing instrumentalism, which is an unpopular position on this forum, where most follow EY's realism. For a realist, untestable questions have answers, justified on the basis of their preferred notion of Occam's razor.

If you believe that there are moral truths, but you cannot propose a test to verify a single one of them, you are engaged in a form of belief that is different from believing in things which are falsifiable and have been found true.

Replace "moral truth" with "many worlds", and you get the EY's understanding of QM.

Replies from: mwengler, mwengler, TimS
comment by mwengler · 2012-06-25T03:33:11.088Z · LW(p) · GW(p)

Concerns with confusing the map with the territory are extensively discussed on this forum. If it walks like a duck and quacks like a duck, is it not instrumentalism?

Replies from: shminux
comment by Shmi (shminux) · 2012-06-25T07:09:13.485Z · LW(p) · GW(p)

The difference is whether you believe that even though it walks like a duck and quacks like a duck, it could be in fact a well-designed mechanical emulation of a duck indistinguishable from an organic duck, and then prefer the former model, because Occam's razor!

Replies from: mwengler
comment by mwengler · 2012-06-25T18:27:11.359Z · LW(p) · GW(p)

Occam's razor is a strategy for being a more effective instrumentalist. It may or may not be elevated to some other status, but this is at least one powerful draw that it has. Do not infer robot ducks when regular ducks will do; do not waste your efforts (instrumentality!) designing for robot ducks when your only evidence so far (razor) is ducks. Or even more compactly, in your belief: whether these ducks are "real" or "emulations," only design for what you actually know about these ducks, not for something that takes a lot of untested assumptions to presume about the ducks.

Do not spend a lot of time filling in the details of unreachable lands on your map.

Replies from: shminux
comment by Shmi (shminux) · 2012-06-25T19:50:42.542Z · LW(p) · GW(p)

Do not spend a lot of time filling in the details of unreachable lands on your map.

Yep. Also, do not argue which of the many identical maps is better.

comment by mwengler · 2012-06-24T23:56:19.487Z · LW(p) · GW(p)

If you accept as "true" some statements that are not testable, and other statements that are testable, than perhaps we just have a labeling problem? We would have "true-and-I-can-prove-it" and "true-and-I-can't-prove-it." I'd be surprised if given those two categories there would be many people who wouldn't elevate the testable statements above the untestable one in "truthiness."

Replies from: TheOtherDave, Eugine_Nier, shminux
comment by TheOtherDave · 2012-06-25T01:07:58.679Z · LW(p) · GW(p)

Is this different from having higher confidence in statements for which I have more evidence?

Replies from: mwengler
comment by mwengler · 2012-06-25T03:43:21.236Z · LW(p) · GW(p)

For me, if it is truly, knowably, not falsifiable, then there is no evidence for it that matters. Many things that are called not falsifiable are probably falsifiable eventually. So MWI: do we know QM so well that we know there are no implications of MWI that are experimentally distinguishable from non-MWI theories? Something like MWI, for me, is something which probably is falsifiable at some level; I just don't know how to falsify it right now, and I am not aware of anybody that I trust who does know how to falsify it. Then the "argument" over MWI is really an argument over whether developing falsifiable theories from a story that includes MWI is more or less likely to be efficiently productive than developing falsifiable theories from a story that rejects MWI. We are arguing over the quality of intuitions years before the falsification or verification can actually take place. Much as we spend a lot of effort anticipating the implications of AI which is not even close to being built.

I actually think the discussions of MWI are useful, as someone who does participate in forming theories and opinions about theories. I just think it is NOT a discussion about scientific truth, or at least not yet. It is not an argument over which horse won the last race; rather it is an argument over what kinds of horses will be running a race a few years from now, and which ones will win those races.

But yes, more evidence means more confidence which I think is entirely consistent with the map/territory/bayesian approach generally credited around here.

comment by Eugine_Nier · 2012-06-25T06:39:11.577Z · LW(p) · GW(p)

We would have "true-and-I-can-prove-it" and "true-and-I-can't-prove-it." I'd be surprised if given those two categories there would be many people who wouldn't elevate the testable statements above the untestable one in "truthiness."

Where would mathematical statements fit in this classification of yours? They can be proven, but many of them can't be tested, and even for the ones that can be tested the proof is generally considered better evidence than the test.

In fact, you are implicitly relying on a large untested (and mostly untestable) framework to describe the relationship between whatever sense input constitutes the result of one of your tests, and the proposition being tested.

Replies from: TimS, mwengler
comment by TimS · 2012-06-25T14:03:50.893Z · LW(p) · GW(p)

There's another category, necessary truths. The deductive inferences from premises are not susceptible to disproof.

Thus, the categories for this theory of truthful statements are: necessary truths, empirical truths ("true-and-I-can-prove-it"), and "true-and-I-can't-prove-it."

Generally, this categorization scheme will put most contentious moral assertions into the third category.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-06-26T06:11:16.324Z · LW(p) · GW(p)

Agreed, except for your non-conventional use of the word "prove", which is normally restricted to things in the first category.

comment by mwengler · 2012-06-25T18:32:40.936Z · LW(p) · GW(p)

This may be a situation where the modern world's resources start to break down the formerly strong separation between mind and world.

These days, most if not all of the rules of math can be coded into a computer, and new propositions tested or evaluated by those systems. Once I've implemented floating point math, I can SHOW STATISTICALLY the commutative law, the associative law, that 2+2 never equals 5, that numbers have additive and multiplicative inverses and on and on and on.

These modern machines seem to render the statements within axiomatic mathematical systems as testable and falsifiable as any other physical facts.
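
For concreteness, here is a minimal sketch in Python of the kind of randomized check described above (the function name, trial count, and sampling bound are invented for this illustration, and it uses Python's exact integers rather than floats):

    import random

    def statistically_test_addition_laws(trials=100_000, bound=10**9):
        """Check some laws of addition on randomly sampled integers."""
        for _ in range(trials):
            a = random.randint(-bound, bound)
            b = random.randint(-bound, bound)
            c = random.randint(-bound, bound)
            assert a + b == b + a              # commutative law
            assert (a + b) + c == a + (b + c)  # associative law
            assert a + (-a) == 0               # additive inverses exist
        assert 2 + 2 != 5                      # and 2+2 never equals 5
        print(f"all {trials} random trials passed")

    statistically_test_addition_laws()

Of course, a passing run only shows the laws held for the sampled cases - the caveat raised in the reply below.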

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-06-26T05:51:25.009Z · LW(p) · GW(p)

These days, most if not all of the rules of math can be coded into a computer, and new propositions tested or evaluated by those systems. Once I've implemented floating point math, I can SHOW STATISTICALLY the commutative law, the associative law, that 2+2 never equals 5, that numbers have additive and multiplicative inverses and on and on and on.

How would you do this for something like the Poincare conjecture or the uncountability of the reals?

Also how do you show that your implementation does in fact compute addition without using math?

Frankly the argument you're trying to make is like arguing that we no longer need farms since we can get our food from supermarkets.

Edit: Also the most you can show STATISTICALLY is that the commutative law holds for most (or nearly all) examples of the size you try, whereas mathematical proofs can show that it always holds.
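
By way of contrast with the statistical approach, a minimal sketch of the kind of all-cases proof meant here, written in Lean 4 (the lemma names Nat.add_succ and Nat.succ_add are assumed from Lean's core library):

    -- Commutativity of natural-number addition, proved by induction on n:
    -- a deduction that covers every natural number at once, not a sample.
    theorem add_comm' (m n : Nat) : m + n = n + m := by
      induction n with
      | zero => simp                                         -- m + 0 = 0 + m
      | succ k ih => simp [Nat.add_succ, Nat.succ_add, ih]   -- inductive step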

comment by Shmi (shminux) · 2012-06-25T00:01:42.330Z · LW(p) · GW(p)

We would have "true-and-I-can-prove-it" and "true-and-I-can't-prove-it."

The definition of proof is the issue. An instrumentalist requires falsifiable predictions, a realist settles for acceptable logic when no predictions are available.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-06-25T07:24:52.000Z · LW(p) · GW(p)

The definition of proof is the issue. An instrumentalist requires falsifiable predictions, a realist settles for acceptable logic when no predictions are available.

A rationalist (in the original sense of the word) would go even further, requiring a logical proof, and not accepting a mere prediction as a substitute.

comment by TimS · 2012-06-25T13:59:09.052Z · LW(p) · GW(p)

How did instrumentalism and realism get identified as conflicting positions? There are forms of physical realism that conflict with instrumentalism - but instrumentalism is not inherently opposed to physical realism.

Replies from: shminux
comment by Shmi (shminux) · 2012-06-25T17:03:25.509Z · LW(p) · GW(p)

Not inherently, no. But the distinction is whether the notion of territory is a map (instrumentalism) or the territory (realism). It does not matter most of the time, but sometimes, like when discussing morality or quantum mechanics, it does.

Replies from: TimS
comment by TimS · 2012-06-25T17:43:57.355Z · LW(p) · GW(p)

I don't understand. Can you give an example?

Replies from: shminux
comment by Shmi (shminux) · 2012-06-25T19:53:37.820Z · LW(p) · GW(p)

A realist finds it perfectly OK to argue which of the many identical maps is "truer" to the invisible underlying territory. An instrumentalist simply notes that there is no way to resolve this question to everyone's satisfaction.

Replies from: TimS
comment by TimS · 2012-06-25T20:03:52.494Z · LW(p) · GW(p)

I'm objecting to your exclusion of instrumentalism from the realist label. An anti-realist says there is no territory. That's not necessarily the position of the instrumentalist.

Replies from: shminux
comment by Shmi (shminux) · 2012-06-25T20:14:41.697Z · LW(p) · GW(p)

An anti-realist says there is no territory. That's not necessarily the position of the instrumentalist.

Right. Anti-realism makes an untestable and unprovable statement like this (so does anti-theism, by the way). An instrumentalist says that there is no way to tell if there is one, and that the map/territory distinction is an often useful model, so why not use it when it makes sense.

I'm objecting to your exclusion of instrumentalism from the realist label.

Well, this is an argument about labels, definitions and identities, which is rarely productive. You can either postulate that there is this territory/reality thing independent of what anyone thinks about it, or you can call it a model which works better in some cases and worse in others. I don't really care what label you assign to each position.

Replies from: TimS
comment by TimS · 2012-06-25T20:42:32.761Z · LW(p) · GW(p)

Respectfully, you were the one invoking technical jargon to do some analytical work.

Without jargon: I think there is physical reality external to human minds. I think that the best science can do is make better predictions - accurately describing reality is harder.

You suggest there is unresolvable tension between those positions.

Replies from: shminux
comment by Shmi (shminux) · 2012-06-25T21:49:42.253Z · LW(p) · GW(p)

I think there is physical reality external to human minds.

It's a useful model, yes.

I think that the best science can do is make better predictions - accurately describing reality is harder.

The assumption that "accurately describing reality" is even possible is a bad model, because you can never tell if you are done. And if it is not possible, then there is no point postulating this reality thing. Might as well avoid it and stick with something that is indisputable: it is possible to build successively better models.

You suggest there is unresolvable tension between those positions.

Yes, one of them postulates something that cannot be tested. If you are into Occam's razor, that's something that fails it.

Replies from: TimS
comment by TimS · 2012-06-25T23:21:02.090Z · LW(p) · GW(p)

We can't talk about testing propositions against reality until we decide whether there is a reality to test them against. If you are uncertain about that point, the nuances between predicting reality and modelling reality are not on point - and probably confuse the analysis more than they shed any light.

If someone walked into one of your high-end physics lectures and wanted to talk about whether there was reality (see Cartesian doubt), I think you would tell him that the physics class was not the venue for that type of conversation. If you tried to answer his questions while also answering other students' questions, everything would get hopelessly confused.

Replies from: shminux
comment by Shmi (shminux) · 2012-06-25T23:47:21.668Z · LW(p) · GW(p)

We can't talk about testing propositions against reality

I never did. I talk about testing propositions against experiment, without postulating a mysterious untestable reality behind those experiments.

Unlike the model you call reality, the existence of repeatable experiments is a repeatable experimental fact.

Replies from: TimS
comment by TimS · 2012-06-25T23:54:58.866Z · LW(p) · GW(p)

What is an experiment but testing a proposition against reality?

Replies from: shminux
comment by Shmi (shminux) · 2012-06-26T02:07:11.616Z · LW(p) · GW(p)

What is an experiment but testing a proposition against reality?

That's the realist's approach. To me, you test a proposition with an experiment, not against anything.

Replies from: TimS
comment by TimS · 2012-06-26T02:10:12.022Z · LW(p) · GW(p)

If the experiment is not a way to tap into reality (in some extremely metaphorical sense), why should I care about the experimental results when trying to decide whether my proposition is true?

Replies from: shminux
comment by Shmi (shminux) · 2012-06-26T02:20:37.312Z · LW(p) · GW(p)

If you want to know how far a rock you throw will land (a prediction based on a model constructed based on previously performed experiments), you want your model to have the necessary predictive power. Whether it corresponds to some metaphysical concept of reality is quite secondary.

Replies from: TimS
comment by TimS · 2012-06-26T02:23:13.329Z · LW(p) · GW(p)

That doesn't answer my question. To rephrase using your new example, if the prior experiments do not metaphorically "tap into reality," why should I have any confidence that a model based on those experimental results will be useful in predicting future events?

Replies from: TheOtherDave, shminux
comment by TheOtherDave · 2012-06-26T04:07:06.690Z · LW(p) · GW(p)

Well, either the experimental result has predictive power, or it doesn't. If certain kinds of experimental results prove useful for predicting the future, then I should have confidence in predictions based on (models based on) those results. Whether I call them "reality" or "a model" doesn't really matter very much.

More generally, to my way of thinking; this whole "instrumentalists don't believe in reality" business mostly seems like a distinction in how we use words rather than in what experiences we anticipate.

It would potentially make a difference, I suppose, if soi-disant instrumentalists didn't actually expect the results of different experiments to be reconcilable with one another (under the principle that each experiment was operating on its own model, after all, and there's no reason to expect those models to have any particular relationship to one another). But for the most part, that doesn't seem to be the case.

There's a bit of that when it comes to quirky quantum results, I gather, but to my mind that's kind of an "instrumentalism of the gaps"... when past researchers have come up with a unified model we accept that unified model, but when current data doesn't seem unified given our current understanding, rather than seeking a unified model we shrug our shoulders and accept the inconsistency, because hey, they're just models, it's not like there's any real underlying territory.

Which in practice just means we wait for someone else to do the hard work of reconciling it all.

comment by Shmi (shminux) · 2012-06-26T06:13:02.713Z · LW(p) · GW(p)

why should I have any confidence that a model based on those experimental results will be useful in predicting future events?

Because it has been experimentally confirmed before, and from experience we can assign a high probability that a model that has been working well in the past will continue to work in similar circumstances in the future.

comment by billswift · 2012-06-24T08:11:07.098Z · LW(p) · GW(p)

Morality is often a warped form of our desires. Our values, for the most part, are a generalization and categorization of our desires; another way of putting it is that our desires are specific cases of our values (for example, for life, freedom, comfort, and so on), though our desires are more fundamental. Many moral claims then turn around and make specific claims based on the abstract values. This specific (desire) -> abstract (value) -> specific (moral claim) progression is one reason our moral claims often seem so incompatible with our actual desires. Please note that I am not saying this always, nor necessarily usually, happens.

Replies from: shminux
comment by Shmi (shminux) · 2012-06-24T17:35:53.447Z · LW(p) · GW(p)

Our values, for the most part, are a generalization and categorization of our desires

So, if I covet what you have, my values should include "robbery is OK"?

Replies from: billswift
comment by billswift · 2012-06-24T19:15:44.832Z · LW(p) · GW(p)

If that is your only desire, maybe so. There have been plenty of societies that have been perfectly okay with robbing outgroups. But given all the other desires real people have, no. And "robbery is okay" would be a "moral claim", not a value. "Acquiring property" would be the relevant generalized value.

Replies from: shminux
comment by Shmi (shminux) · 2012-06-24T23:13:13.827Z · LW(p) · GW(p)

Maybe I misunderstand your definition of the term value.