A Sketch of an Anti-Realist Metaethics
post by Jack · 2011-08-22T05:32:10.596Z · 136 comments
Below is a sketch of a moral anti-realist position based on the map-territory distinction, Hume and studies of psychopaths. Hopefully it is productive.
The Map is Not the Territory Reviewed
Consider the founding metaphor of Less Wrong: the map-territory distinction. Beliefs are to reality as maps are to territory. As the wiki says:
Since our predictions don't always come true, we need different words to describe the thingy that generates our predictions and the thingy that generates our experimental results. The first thingy is called "belief", the second thingy "reality".
Of course the map is not the territory.
Here is Albert Einstein making much the same analogy:
Physical concepts are free creations of the human mind and are not, however it may seem, uniquely determined by the external world. In our endeavor to understand reality we are somewhat like a man trying to understand the mechanism of a closed watch. He sees the face and the moving hands, even hears its ticking, but he has no way of opening the case. If he is ingenious he may form some picture of a mechanism which could be responsible for all the things he observes, but he may never be quite sure his picture is the only one which could explain his observations. He will never be able to compare his picture with the real mechanism and cannot even imagine the possibility or the meaning of such a comparison. But he certainly believes that, as his knowledge increases, his picture of reality will become simpler and simpler and will explain a wider and wider range of his sensuous impressions. He may also believe in the existence of the ideal limit of knowledge and that it is approached by the human mind. He may call this ideal limit the objective truth.
The above notions about beliefs involve pictorial analogs, but we can also imagine other ways the same information could be contained. If the ideal map is turned into a series of sentences we can define a 'fact' as any sentence in the ideal map (IM). The moral realist position can then be stated as follows:
Moral Realism: ∃x((x ⊂ IM) & (x = M))
In English: there is some set of sentences x such that all the sentences are part of the ideal map and x provides a complete account of morality.
Moral anti-realism simply negates the above: ¬∃x((x ⊂ IM) & (x = M)).
Now it might seem that, as long as our concept of morality doesn't require the existence of entities like non-natural gods, which don't appear to figure into an ideal map, moral realism must be true (where else but the territory could morality be?). The problem of ethics, then, is chiefly one of finding a satisfactory reduction of moral language into sentences we are confident of finding in the IM. Moreover, the 'folk' meta-ethics certainly seems to be a realist one. People routinely use moral predicates and speak of having moral beliefs: "Stealing that money was wrong", "I believe abortion is immoral", "Hitler was a bad person". In other words, in the maps people *actually have right now* a moral code seems to exist.
Beliefs vs. Preferences
But we don't think talking about belief networks is sufficient for modeling an agent's behavior. To predict what other agents will do we need to know both their beliefs and their preferences (or call them goals, desires, affect or utility function). And when we're making our own choices we don't think we're responding merely to beliefs about the external world. Rather, it seems like we're also responding to an internal algorithm that helps us decide between actions according to various criteria, many of which reference the external world.
The distinction between belief function and utility function shouldn't be new to anyone here. I bring it up because the queer thing about moral statements is that they seem to be self-motivating. They're not merely descriptive; they're prescriptive. So we have good reason to think that they call on our utility function. One way of phrasing a moral non-cognitivist position is to say that moral statements are properly thought of as expressions of an individual's utility function rather than sentences describing the world.
Note that 'expressions of an individual's utility function' is not the same as 'sentences describing an individual's utility function'. The latter is something like 'I prefer chocolate to vanilla'; the former is something like 'Mmmm, chocolate!'. It's how the utility function feels from the inside. And the way a utility function feels from the inside appears to be, or at least involve, emotion.
Projectivism and Psychopathy
That our brains might routinely turn expressions of our utility function into properties of the external world shouldn't be surprising. This was essentially Hume's position. From the Stanford Encyclopedia of Philosophy:
Projectivism is best thought of as a causal account of moral experience. Consider a straightforward, observation-based moral judgment: Jane sees two youths hurting a cat and thinks “That is impermissible.” The causal story begins with a real event in the world: two youth performing actions, a suffering cat, etc. Then there is Jane's sensory perception of this event (she sees the youths, hears the cat's howls, etc.). Jane may form certain inferential beliefs concerning, say, the youths' intentions, the cats' pain, etc. All this prompts in Jane an emotion: She disapproves (say). She then “projects” this emotion onto her experience of the world, which results in her judging the action to be impermissible. In David Hume's words: “taste [as opposed to reason] has a productive faculty, and gilding and staining all natural objects with the colours, borrowed from internal sentiment, raises in a manner a new creation” (Hume [1751] 1983: 88). Here, impermissibility is the “new creation.” This is not to say that Jane “sees” the action to instantiate impermissibility in the same way as she sees the cat to instantiate brownness; but she judges the world to contain a certain quality, and her doing so is not the product of her tracking a real feature of the world, but is, rather, prompted by an emotional experience.
This account has a surface plausibility. Moreover, it has substantial support in the psychological literature. In particular, the behavior of psychopaths closely matches what we would expect if the projectivist thesis were true. The distinctive neurobiological feature of psychopathy is impaired function of the amygdala, which is mainly associated with emotional processing and memory. Obviously, as a group psychopaths tend toward moral deficiency. But more importantly, psychopaths fail to make the normal human distinction between morality and convention. Thus a plausible account of moral judgment is that it requires both social convention and emotional reaction. See the work of Shaun Nichols, in particular this for an extended discussion of the implications of psychopathy for metaethics and his book for a broader, empirically informed account of sentimentalist morality. Auditory learners might benefit from this bloggingheads he did.
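To make the causal order concrete, here is a toy Python sketch of the projectivist story, under loudly labeled assumptions: perceive, emotional_reaction, project and the amygdala_intact flag are invented illustrative names, not a model of real neural machinery. The only point is the claimed direction of causation, from believed facts to an emotional reaction to a projected judgment, and what drops out when the emotional step is impaired.

```python
# Toy sketch of the projectivist causal story (Jane, the youths, the cat).
# Every name here is an illustrative assumption, not real neurobiology.

def perceive(event):
    # Sensory perception plus inferential beliefs about the event.
    return {"action": event["action"], "victim_suffering": event["victim_suffering"]}

def emotional_reaction(beliefs, amygdala_intact=True):
    # Disapproval is produced by emotional machinery, not read off the world.
    if not amygdala_intact:
        return None  # crude stand-in for the impaired processing seen in psychopathy
    return "disapproval" if beliefs["victim_suffering"] else None

def project(beliefs, emotion):
    # The emotion is "projected" onto the map as a property of the act itself.
    if emotion == "disapproval":
        return '"{}" is impermissible'.format(beliefs["action"])
    return 'no moral judgment about "{}"'.format(beliefs["action"])

event = {"action": "hurting the cat", "victim_suffering": True}
beliefs = perceive(event)
print(project(beliefs, emotional_reaction(beliefs)))                         # judgment made
print(project(beliefs, emotional_reaction(beliefs, amygdala_intact=False)))  # no judgment
```

Notice that nothing in the sketch consults a stored moral fact; the 'impermissible' verdict exists only as the output of the emotional step, which is exactly the projectivist claim.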
If the projectivist account is right, the difference between non-cognitivism and error theory is essentially one of emphasis. If, based on the above account, you want to call moral judgments beliefs, then you are an error theorist. If you think they're a kind of pseudo-belief, then you're a non-cognitivist.
But utility functions are part of the territory described by the map!
Modeling reality has a recursive element which tends to generate considerable confusion over multiple domains. The issue is that somewhere in any good map of the territory will be a description of the agent doing the mapping. So agents end up with beliefs about what they believe and beliefs about what they desire. Thus, we might think there could be a set of sentences in IM that make up our morality so long as some of those sentences describe our utility function. That is, the motivational aspect of morality can be accounted for by including in the reduction both a) a sentence which describes what conditions are to be preferred to others and b) a statement which says that the agent prefers such conditions.
The problem is, our morality doesn't seem completely responsive to hypothetical and counter-factual shifts in what our utility function is. That is, *if* I thought causing suffering in others was something I should do and I got good feelings from doing it, that *wouldn't* make causing suffering moral (though Sadist Jack might think it was). In other words, changing one's morality function isn't a way to change what is moral (perhaps this judgment is uncommon; we should test it).
This does not mean the morality subroutine of your utility function isn't responsive to changes in other parts of the utility function. If you think fulfilling your own non-moral desires is a moral good, then which actions are moral will depend on how your non-moral desires change. But hypothetical changes in our morality subroutine don't change our moral judgments about our actions in the hypothetical. This is because when we make moral judgments we *don't* look at our map of the world to find out what our morality says; rather, we have an emotional reaction to a set of facts and that emotional reaction generates the moral belief. Below is a diagram that somewhat messily describes what I'm talking about.
On the left we have the external world, which generates the sensory inputs our agent uses to form beliefs. Those beliefs are then input into the utility function, a subroutine of which is morality. The utility function outputs the action the agent chooses. On the right we have zoomed in on the green Map circle from the left. Here we see that the map includes moral 'beliefs' (note that this isn't an ideal map) which have been projected from the morality subroutine in the utility function. Then we have, also within the Map, the self-representation of the agent, which in turn includes her algorithms and mental states. Note that altering the morality of the self-representation won't change the output of the morality subroutine at the first level of the model. Of course, in an ideal map the self-representation would match the first level, but that doesn't change the causal or phenomenal story of how moral judgments are made.
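For readers who prefer code to diagrams, here is a rough Python rendering of the same two-level model; the names (morality_subroutine, utility, self_model) and the numbers are invented for illustration only. It exists to show the claim just made: rewriting the morality described in the self-representation does not change what the agent chooses, because choice is driven by the first-level subroutine, not by the map's description of it.

```python
# Rough sketch of the two-level model; all names and values are illustrative.

def morality_subroutine(belief_state):
    # First level: an emotional reaction to believed facts, not a lookup of moral facts.
    return -10.0 if belief_state.get("someone is suffering") else 0.0

def utility(belief_state, action):
    # First level: the function that actually drives choice.
    predicted = dict(belief_state)
    if action == "help":
        predicted["someone is suffering"] = False
    return morality_subroutine(predicted)

class Agent:
    def __init__(self):
        # The map: beliefs about the world, including a self-representation.
        self.map = {
            "someone is suffering": True,
            "self_model": {"morality": "suffering is bad"},  # described, not motivating
        }

    def choose(self):
        # Choice consults the first-level utility function, never the self_model.
        return max(["help", "ignore"], key=lambda a: utility(self.map, a))

agent = Agent()
print(agent.choose())  # "help"

# Rewriting the self-representation's morality changes nothing about behaviour,
# because only the first-level subroutine feeds the choice circuit.
agent.map["self_model"]["morality"] = "suffering is good"
print(agent.choose())  # still "help"
```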
Observe how easy it is to make category errors if this model is accurate. Since we're projecting our moral subroutine onto our map and we're depicting ourselves in the map, it is very easy to think that morality is something we're learning about from the external world (if not from sensory input then from a priori reflection!). Of course, morality is in the external world in a meaningful sense, since our brains are in the external world. But learning what is in our brains is not motivating in the way moral judgments are supposed to be. This diagram explains why: the facts about our moral code in our self-representation are not directly connected to the choice circuits which cause us to perform actions. Simply stating what our brains are like will not activate our utility function, and so the expressive content of moral language will be left out. This is Hume's is-ought distinction: 'ought' sentences can't be derived from 'is' sentences because 'ought' sentences involve the activation of the utility function at the first level of the diagram, whereas 'is' sentences are exclusively part of the map.
And of course since agents can have different morality functions there are no universally compelling arguments.
The above is the anti-realist position given in terms I think Less Wrong is comfortable with. It has the following things in its favor: it does not posit any 'queer' moral properties as having objective existence and it fully accounts for the motivating, prescriptive aspect of moral language. It is psychologically sound, albeit simplistic. It is naturalistic while providing an account of why meta-ethics is so confusing to begin with. It explains our naive moral realism and dissolves the difference between the two prominent anti-realist camps.
Comments
comment by torekp · 2011-08-24T02:18:05.716Z
The post heavily relies on moral internalism without arguing for it. Internalism holds that a necessary connection exists between sincere moral judgment and motivation. As the post says, "moral statements [...] seem to be self-motivating." I've never seen a deeply plausible argument for internalism, and I'm pretty sure it's false. The ability of many psychopaths to use moral language in a normal way, and in some cases to agree that they've done evil and assert that they just don't care, would seem to refute it.
Upvoted for giving a clear statement of an anti-realist view.
↑ comment by Jack · 2011-08-24T03:00:55.986Z
As the study I link to in the post points out, even though psychopaths often make accurate moral judgments they don't seem to understand the difference between morality and convention. It seems like they can agree they've done evil and assert that they don't care, but that's because they're using 'evil' to mean "against convention" and not what we mean by it.
You're right that it's a weaker point of the post, though. Didn't really have room or time to say everything.
Just to start: imagine a collection of minds without any moral motivation. How would they learn what is moral? (What we do is closely examine the contours of what we are motivated to do, right?)
↑ comment by torekp · 2011-08-25T00:55:36.888Z
Psychopaths, or at least convicted criminals (the likely target of research), may lack the distinction between moral and conventional. But there are brain-damage-induced cases of sociopathy in which individuals can still make that distinction (page 2 of the link). These patients with ventromedial frontal brain damage retain their moral reasoning abilities and beliefs but lose their moral motivation. So, I don't think even the claim that moral judgments necessarily carry some motivational force is true.
@lessdazed: nice point.
↑ comment by Jack · 2011-08-25T01:30:50.226Z
Great article, really exciting to read because this:
My working model of how VM cortex is involved in moral belief and motivation is that VM cortex is necessary for acquisition of moral concepts, but not their retention or employment. Damage to VM cortex results in disconnection of the pathway by which cognitive processing of moral propositions normally causes activation of emotional and motivational systems that ultimately lead to action. This model explains why moral reasoning usually results in moral motivation, why damage to VM cortex in early life prevents people from learning moral concepts, and why the connection between moral belief and motivation is contingent and not necessary, and thus why the form of MI I target is false. It also explains why damage to VM cortex fails to impair the moral concepts and beliefs of VM patients who have already acquired moral knowledge.
Is exactly the kind of thing projectivism expects us to find. You need an intact VM cortex to develop moral beliefs in the first place. Once your emotional responses are projected into beliefs about the external world you can lose the emotional response through VM cortex damage but retain the beliefs without the motivation.
↑ comment by torekp · 2011-08-25T22:35:32.176Z
I agree, projectivism strongly predicts that emotional faculties will be vital to moral development. But most cognitivist approaches would also predict that the emotional brain has a large role to play. For example, consider this part of the article:
Damage to VM cortex results in difficulties in attributing emotional states to others on the basis of facial and vocal characteristics (Shamay-Tsoory et al., 2003), and leads to the disruption of the subjective experience of emotion, as indicated by self-report (Bechara et al., 2000, Damasio et al. 1990). What we can conclude from these studies is that VM patients have emotional deficits, and have difficulty in attributing emotions to others, and thus that they may not be reliable in emotion attribution.
People who can't tell whether others are suffering or prospering are going to be seriously impaired in moral learning, on almost any philosophical ethical view.
↑ comment by Jack · 2011-08-26T00:02:21.408Z
Sure. But, to tie it back to what we were discussing before, that internalism is false when it comes to moral beliefs is not evidence against a projectivist and non-cognitivist thesis.
As a tentative aside: I'm not sure whether or not internalism is a necessary part of the anti-realist position. It seems conceivable that there could be preferences, desires or emotive dispositions that aren't motivating at all. It certainly seems psychologically implausible, but it doesn't follow that it is impossible.
Someone should do a series of qualitative interviews with VM cortex impaired patients. I'd like to know things like what "ought" means to them.
↑ comment by torekp · 2011-08-28T02:08:10.786Z
In a Bayesian sense, the falsity of internalism tends to weaken the case for projectivism and non-cognitivism, by taking away an otherwise promising line of support for them. Mackie's argument from queerness relies upon it, for example.
↑ comment by Jack · 2011-08-28T03:17:15.498Z
Mackie conflates two aspects of queerness, motivation and direction, the latter of which remains even if motivational internalism is false. Second, that motivation can be detached from moral judgment in impaired brains doesn't mean that moral facts don't have a queer association with motivation.
↑ comment by lessdazed · 2011-08-24T02:46:38.977Z
I've never seen a deeply plausible argument for internalism
If we all agree that some different moral statements are motivating in different amounts, the burden of proof is on the one who says that a certain amount of motivation is impossible.
E.g. The belief "It would be nice to help a friend by helping carry their couch up the stairs to their apartment" makes me feel mildly inclined to help. The belief "It would be really nice to give the homeless guy who asked for food a sandwich" makes me significantly inclined to help. Why would it be impossible for me to believe "It would be nice to help my friend with his diet when he visits me" and feel nothing at all?
comment by Wei Dai (Wei_Dai) · 2011-08-22T08:43:08.291Z
One way of phrasing a moral non-cognitivist position is to say that moral statements are properly thought of as expressions of an individual's utility function rather than sentences describing the world.
Note that 'expressions of an individual's utility function' is not the same as 'sentences describing an individual's utility function'. The latter is something like 'I prefer chocolate to vanilla' the former is something like 'Mmmm chocolate!'. It's how the utility function feels from the inside. And the way a utility function feels from the inside appears to be, or at least involve, emotion.
This seems very plausible, except for the fact that we are able to reflect on our emotions and intuitive moral judgements, and to some extent the results of our conscious moral deliberations can override our emotions/intuitions, or even change how our emotions/intuitions work. This simply can't happen if our emotions are direct expressions of our utility functions, or what they feel like from the inside.
An analogy with Yvain's blue-minimizing robot might help here. Emotions perhaps express the utility function of the "main part" of a human brain, but there's this "side module" that works by its own rules, and can occasionally override/modify the "main part". How to formulate a meta-ethics that applies to the human as a whole still seems puzzling to me.
↑ comment by Jack · 2011-08-22T11:26:04.768Z
This seems very plausible, except for the fact that we are able to reflect on our emotions and intuitive moral judgements, and to some extent the results of our conscious moral deliberations can override our emotions/intuitions, or even change how our emotions/intuitions work. This simply can't happen if our emotions are direct expressions of our utility functions, or what they feel like from the inside.
I'm skeptical that moral deliberation actually overrides emotions directly. It seems more likely that it changes the beliefs that are input into the utility function (though often in a subtle way). Obviously this can lead to the expression of different emotions. Second, we should be extremely skeptical when we think we've reasoned from one position to another even when the facts haven't changed. If our morality function just changed for an internal reason, say hormone level, it seems very characteristic of humans to invent a rationalization for the change.
Can you give a particular example of a moral deliberation that you think is a candidate? This seems like it would be easier to discuss on an object level.
↑ comment by Wei Dai (Wei_Dai) · 2011-08-22T16:32:22.413Z
Can you give a particular example of a moral deliberation that you think is a candidate?
Take someone who talks (or reads) themselves into utilitarianism or egoism. This seems to have real consequences on their actions, for example:
I had deliberately terminated my donations to charities that seemed closer to "rescuing lost puppies". I had also given up personal volunteering (I figured out {work - earn - donate} before I heard it here.) And now I'm really struggling with akrasia / procrastination / laziness /rebellion / escapism.
Presumably, when that writer "converted" to utilitarianism, the positive emotions of "rescuing lost puppies" or "personally volunteering" did not go away, but he chose to override those emotions. (Or if they did go away, that's a result of converting to utilitarianism, not the cause.)
Second, we should be extremely skeptical when we think we've reasoned from one position to another even when the facts haven't changed. If our morality function just changed for an internal reason, say hormone level, it seems very characteristic of humans to invent a rationalization for the change.
I don't think changes in hormone level could explain "converting" to utilitarianism or egoism, but I do leave open the more general possibility that all moral changes are essentially "internal". If someone could conclusively show that, I think the anti-realist position would be much stronger.
↑ comment by Jack · 2011-08-22T18:17:40.406Z
So a couple points:
First, I'm reluctant to use Less Wrong posters as a primary data set because Less Wrong posters are far from neurotypical. A lot of hypotheses about autism involve... wait for it... amygdala abnormality.
Second, I think it is very rare for people to change their behavior when they adopt a new normative theory. Note that all the more powerful arguments in normative theory involve thought experiments designed to evoke an emotional response. People usually adopt a normative theory because it does a good job explaining the emotional intuitions they already possess.
Third, a realist account of changing moral beliefs is really metaphysically strange. Does anyone think we should be updating P(utilitarianism) based on evidence we gather? What would that evidence look like? If an anti-realist metaphysics gives us a natural account of what is really happening when we think we're responding to moral arguments then shouldn't anti-realism be the most plausible candidate?
↑ comment by Wei Dai (Wei_Dai) · 2011-08-22T19:58:47.724Z
This part of a previous reply to Richard Chappell seems relevant here also:
suppose the main reason I'm interested in metaethics is that I am trying to answer a question like "Should I terminally value the lives of random strangers?" and I'm not sure what that question means exactly or how I should go about answering it. In this case, is there a reason for me to care much about the pre-theoretic grasp of most people, as opposed to, say, people I think are most likely to be right about morality?
In other words, suppose I think I'm someone who would change my behavior when I adopt a new normative theory. Is your meta-ethical position still relevant to me?
If nothing else, my normative theory could change what I program into an FAI, in case I get the chance to do something like that. What does your metaethics imply for someone in this kind of situation? Should I, for example, not think too much about normative ethics, and when the time comes just program into the FAI whatever I feel like at that time? In case you don't have an answer now, do you think the anti-realist approach will eventually offer an answer?
Third, a realist account of changing moral beliefs is really metaphysically strange.
I think we currently don't have a realist account of changing moral beliefs that is metaphysically not strange. But given that metaphysics is overall still highly confusing and unsettled, I don't think this is a strong argument in favor of anti-realism. For example what is the metaphysics of mathematics, and how does that fit into a realist account of changing mathematical beliefs?
↑ comment by Jack · 2011-08-22T21:31:22.935Z
In other words, suppose I think I'm someone who would change my behavior when I adopt a new normative theory. Is your meta-ethical position still relevant to me?
What the anti-realist theory of moral change says is that terminal values don't change in response to reasons or evidence. So if you have a new normative theory and a new set of behaviors anti-realism predicts that either your map has changed or your terminal values changed internally and you took up a new normative theory as a rationalization of those new values.
I wonder if you, or anyone else, can give me some example reasons for changing one's normative theory. I suspect that most if not all such reasons which actually lead to a behavior change will either involve evoking emotion or updating the map (i.e. something like "your normative theory ignores this class of suffering").
If nothing else, my normative theory could change what I program into an FAI, in case I get the chance to do something like that. What does your metaethics imply for someone in this kind of situation? Should I, for example, not think too much about normative ethics, and when the time comes just program into the FAI whatever I feel like at that time? In case you don't have an answer now, do you think the anti-realist approach will eventually offer an answer?
Good question that I could probably turn into a full post. Anti-realism doesn't get rid of normative ethics exactly, it just redefines what we mean by it. We're not looking for some theory that describes a set of facts about the world. Rather, we're trying to describe the moral subroutine in our utility function. In a sense, it deflates the normative project into something a lot like coherent extrapolated volition. Of course, anti-realism also constrains what methods we should expect to be successful in normative theory and what kinds of features we should expect an ideal normative theory to have. For example, since the morality function is a biological and cultural creation we shouldn't be surprised to find out that it is weirdly context dependent, kludgey or contradictory. We should also expect to uncover natural variance between utility functions. Anti-realism also suggests that descriptive moral psychology is a much more useful tool for forming an ideal normative theory than, say, abstract reasoning.
I think we currently don't have a realist account of changing moral beliefs that is metaphysically not strange. But given that metaphysics is overall still highly confusing and unsettled, I don't think this is a strong argument in favor of anti-realism. For example what is the metaphysics of mathematics, and how does that fit into a realist account of changing mathematical beliefs?
I actually think an approach similar to the one in this post might clarify the mathematics question (I think mathematics could be thought of as a set of meta-truths about our map and the language we use to draw the map). In any case, it seems obvious to me that the situations of mathematics and morality are asymmetric in important ways. Can you tell an equally plausible story about why we believe mathematical statements are true even though they are actually false? In particular, the intensive use of mathematics in our formulation of scientific theories seems to give it a secure footing that morality does not have.
↑ comment by Wei Dai (Wei_Dai) · 2011-08-23T19:49:35.422Z
So if you have a new normative theory and a new set of behaviors anti-realism predicts that either your map has changed or your terminal values changed internally and you took up a new normative theory as a rationalization of those new values.
In your view, is there such a thing as the best rationalization of one's values, or is any rationalization as good as another? If there is a best rationalization, what are its properties? For example, should I try to make my normative theory fit my emotions as closely as possible, or also take simplicity and/or elegance into consideration? What if, as seems likely, I find out that the most straightforward translation of my emotions into a utility function gives a utility function that is based on a crazy ontology, and it's not clear how to translate my emotions into a utility function based on the true ontology of the world (or my current best guess as to the true ontology). What should I do then?
The problem is, we do not have a utility function. If we want one, we have to construct it, which inevitably involves lots of "deliberative thinking". If the deliberative thinking module gets to have lots of say anyway, why can't it override the intuitive/emotional modules completely? Why does it have to take its cues from the emotional side, and merely "rationalize"? Or do you think it doesn't have to, but it should?
Anti-realism also suggests that descriptive moral psychology is a much more useful tool for forming an ideal normative theory than, say, abstract reasoning.
Unfortunately, I don't see how descriptive moral psychology can help me to answer the above questions. Do you? Or does anti-realism offer any other ideas?
↑ comment by Jack · 2011-08-23T20:50:25.720Z
In your view, is there such a thing as the best rationalization of one's values, or is any rationalization as good as another? If there is a best rationalization, what are its properties? For example, should I try to make my normative theory fit my emotions as closely as possible, or also take simplicity and/or elegance into consideration?
What counts as a virtue in any model depends on what you're using that model for. If you're chiefly concerned with accuracy then you want your normative theory to fit your values as much as possible. But maybe the most accurate model takes too long to run on your hardware; in that case you might prefer a simpler, more elegant model. Maybe there are hard limits to how accurate we can make such models and we will be willing to settle for good enough.
What if, as seems likely, I find out that the most straightforward translation of my emotions into a utility function gives a utility function that is based on a crazy ontology, and it's not clear how to translate my emotions into a utility function based on the true ontology of the world (or my current best guess as to the true ontology). What should I do then?
Whatever our best ontology is, it will always have some loose analog in our evolved, folk ontology. So we should try our best to make it fit. There will always be weird edge cases that arise as our ontology improves and our circumstances diverge from our ancestors', e.g. "are fetuses in the class of things we should have empathy for?" Expecting evolution to have encoded an elegant set of principles in the true ontology is obviously crazy. There isn't much one can do about it if you want to preserve your values. You could decide that you care more about obeying a simple, elegant moral code than you do about your moral intuition/emotional response (perhaps because you have a weak or abnormal emotional response to begin with). Whether you should do one or the other is just a meta moral judgment and people will have different answers because the answer depends on their psychological disposition. But I think realizing that we aren't talking about facts but trying to describe what we value makes elegance and simplicity seem less important.
↑ comment by Wei Dai (Wei_Dai) · 2011-08-23T22:12:21.349Z
There isn't much one can do about it if you want to preserve your values.
I dispute the assumption that my emotions represent my values. Since the part of me that has to construct a utility function (let's say for the purpose of building an FAI) is the deliberative thinking part, why shouldn't I (i.e., that part of me) dis-identify with my emotional side? Suppose I do, then there's no reason for me to rationalize "my" emotions (since I view them as just the emotions of a bunch of neurons that happen to be attached to me). Instead, I could try to figure out from abstract reasoning alone what I should value (falling back to nihilism if ultimately needed).
According to anti-realism, this is just as valid a method of coming up with a normative theory as any other (that somebody might have the psychological disposition to choose), right?
Alternatively, what if I think the above may be something I should do, but I'm not sure? Does anti-realism offer any help besides that it's "just a meta moral judgment and people will have different answers because the answer depends on their psychological disposition"?
A superintelligent moral psychologist might tell me that there is one text file, which if I were to read it, would cause me to do what I described earlier, and another text file which would cause me to to choose to rationalize my emotions instead, and therefore I can't really be said to have an intrinsic psychological disposition in this matter. What does anti-realism say is my morality in that case?
↑ comment by torekp · 2011-08-24T02:02:06.750Z
I dispute the assumption that my emotions represent my values.
Me too. There are people who consistently judge that their morality has "too little" motivational force, and there are people who perceive their morality to have "too much" motivational force. And there are people who deem themselves under-motivated by certain moral ideals and over-motivated by others. None of these would seem possible if moral beliefs simply echoed (projected) emotion. (One could, of course, object to one's past or anticipated future motivation, but not one's present; nor could the long-term averages disagree.)
↑ comment by Jack · 2011-08-23T23:00:22.862Z
Since the part of me that has to construct a utility function (let's say for the purpose of building an FAI) is the deliberative thinking part, why shouldn't I (i.e., that part of me) dis-identify with my emotional side? Suppose I do, then there's no reason for me to rationalize "my" emotions (since I view them as just the emotions of a bunch of neurons that happen to be attached to me). Instead, I could try to figure out from abstract reasoning alone what I should value (falling back to nihilism if ultimately needed). According to anti-realism, this is just as valid a method of coming up with a normative theory as any other (that somebody might have the psychological disposition to choose), right?
First, this scenario is just impossible. One cannot dis-identify from one's 'emotional side'. That's not a thing. If someone thinks they're doing that they've probably smuggled their emotions into their abstract reasons (see, for example, Kant). Second, it seems silly, even dumb, to give up on making moral judgments and become a nihilist just because you'd like there to be a way to determine moral principles from abstract reasoning alone. Most people are attached to their morality and would like to go on making judgments. If someone has such a strong psychological need to derive morality through abstract reasoning alone that they're just going to give up morality: so be it, I guess. But that would be a very not-normal person and not at all the kind of person I would want to have programming an FAI.
But yes- ultimately my values enter into it and my values may not be everyone else's. So of course there is no fact of the matter about the "right" way to do something. Nevertheless, there are still no moral facts.
You seem to be asking anti-realism to supply you with answers to normative questions. But what anti-realism tells you is that such questions don't have factual answers. I'm telling you what morality is. To me, the answer has some implications for FAI but anti-realism certainly doesn't answer questions that it says there aren't answers to.
↑ comment by Wei Dai (Wei_Dai) · 2011-08-23T23:37:18.480Z
One cannot dis-identify from one's 'emotional side'. Thats not a thing.
In order to rationalize my emotions, I have to identify with them in the first place (as opposed to the emotions of my neighbor, say). Especially if I'm supposed to apply descriptive moral psychology, instead of just confabulating unreflectively based on whatever emotions I happen to feel at any given moment. But if I can identify with them, why can't I dis-identify from them?
If someone thinks they're doing that they've probably smuggled their emotions into their abstract reasons (see, for example, Kant).
That doesn't stop me from trying. In fact moral psychology could be a great help in preventing such "contamination".
You seem to be asking anti-realism to supply you with answers to normative questions. But what anti-realism tells you is that such questions don't have factual answers.
If those questions don't have factual answers, then I could answer them any way I want, and not be wrong. On the other hand if they do have factual answers, then I better use my abstract reasoning skills to find out what those answers are. So why shouldn't I make realism the working assumption, if I'm even slightly uncertain that anti-realism is true? If that assumption turns out to be wrong, it doesn't matter anyway--whatever answers I get from using that assumption, including nihilism, still can't be wrong. (If I actually choose to make that assumption, then I must have a psychological disposition to make that assumption. So anti-realism would say that whatever normative theory I form under that assumption is my actual morality. Right?)
I'm telling you what morality is.
Can you answer the last question in the grandparent comment, which was asking just this sort of question?
↑ comment by cousin_it · 2011-08-24T12:46:53.494Z
If those questions don't have factual answers, then I could answer them any way I want, and not be wrong.
That's true as stated, but "not being wrong" isn't the only thing you care about. According to your current morality, those questions have moral answers, and you shouldn't answer them any way you want, because that could be evil.
↑ comment by Wei Dai (Wei_Dai) · 2011-08-24T19:12:07.585Z
When you say "you shouldn't answer them any way you want" are you merely expressing an emotional dissatisfaction, like Jack?
If it's meant to be more than an expression of emotional dissatisfaction, I guess "should" means "what my current morality recommends" and "evil" means "against my current morality", but what do you mean by "current morality"? As far as I can tell, according to anti-realism, my current morality is whatever morality I have the psychological disposition to construct. So if I have the psychological disposition to construct it using my intellect alone (or any other way), how, according to anti-realism, could that be evil?
↑ comment by cousin_it · 2011-08-24T22:12:50.490Z
By "current morality" I mean that the current version of you may dislike some outcomes of your future moral deliberations if Omega shows them to you in advance. It's quite possible that you have a psychological disposition to eventually construct a moral system that the current version of you will find abhorrent. For an extreme test case, imagine that your long-term "psychological dispositions" are actually coming from a random number generator; that doesn't mean you cannot make any moral judgments today.
↑ comment by Wei Dai (Wei_Dai) · 2011-08-25T05:49:12.585Z
It's quite possible that you have a psychological disposition to eventually construct a moral system that the current version of you will find abhorrent.
I agree it's quite possible. Suppose I do somehow find out that the current version of me emotionally dislikes the outcomes of my future moral deliberations. I still have to figure out what to do about that. Is there a normative fact about what I should do in that case? Or is there only a psychological disposition?
↑ comment by cousin_it · 2011-08-25T10:01:30.598Z
I think there's only a psychological disposition. If the future of your morals looked abhorrent enough to you, I guess you'd consider it moral to steer toward a different future.
Ultimately we seem to be arguing about the meaning of the word "morality" inside your head. Why should that concept obey any simple laws, given that it's influenced by so many random factors inside and outside your head? Isn't that like trying to extrapolate the eternally true meaning of the word "paperclip" based on your visual recognition algorithms, which can also crash on hostile input?
I appreciate your desire to find some math that could help answer moral questions that seem too difficult for our current morals. But I don't see how that's possible, because our current morals are very messy and don't seem to have any nice invariants.
↑ comment by Wei Dai (Wei_Dai) · 2011-08-25T16:49:49.632Z
Why should that concept obey any simple laws, given that it's influenced by so many random factors inside and outside your head?
Every concept is influenced by many random factors inside and outside my head, which does not rule out that some concepts can be simple. I've already given one possible way in which that concept can be simple: someone might be a strong deliberative thinker and decide to not base his morality on his emotions or other "random factors" unless he can determine that there's a normative fact that he should do so.
Emotions are just emotions. They do not bind us, like a utility function binds an EU maximizer. We're free to pick a morality that is not based on our emotions. If we do have a utility function, it's one that we can't see at this point, and I see no strong reason to conclude that it must be complex.
Isn't that like trying to extrapolate the eternally true meaning of the word "paperclip" based on your visual recognition algorithms, which can also crash on hostile input?
How do we know it's not more like trying to extrapolate the eternally true meaning of the word "triangle"?
But I don't see how that's possible, because our current morals are very messy and don't seem to have any nice invariants.
Thinking that humans have a "current morality" seems similar to a mistake that I was on the verge of making before, of thinking that humans have a "current decision theory" and therefore we can solve the FAI decision theory problem by finding out what our current decision theory is, and determining what it says we should program the FAI with. But in actuality, we don't have a current decision theory. Our "native" decision making mechanisms (the ones described in Luke's tutorial) can be overridden by our intellect, and no "current decision theory" governs that part of our brains. (A CDT theorist can be convinced to give up CDT, and not just for XDT, i.e., what a CDT agent would actually self-modify into.) So we have to solve that problem with "philosophy" and I think the situation with morality may be similar, since there is no apparent "current morality" that governs our intellect.
↑ comment by cousin_it · 2011-09-02T09:41:59.779Z
How do we know it's not more like trying to extrapolate the eternally true meaning of the word "triangle"?
Even without going into the complexities of human minds: do you mean triangle in formal Euclidean geometry, or triangle in the actual spacetime we're living in? The latter concept can become arbitrarily complex as we discover new physics, and the former one is an approximation that's simple because it was selected for simplicity (being easy to use in measuring plots of land and such). Why do you expect the situation to be different for "morality"?
↑ comment by Jack · 2011-08-24T01:24:04.280Z
In order to rationalize my emotions, I have to identify with them in the first place (as opposed to the emotions of my neighbor, say). Especially if I'm supposed to apply descriptive moral psychology, instead of just confabulating unreflectively based on whatever emotions I happen to feel at any given moment. But if I can identify with them, why can't I dis-identify from them?
I'm not sure I actually understand what you mean by "dis-identify".
If those questions don't have factual answers, then I could answer them any way I want, and not be wrong. On the other hand if they do have factual answers, then I better use my abstract reasoning skills to find out what those answers are. So why shouldn't I make realism the working assumption, if I'm even slightly uncertain that anti-realism is true? If that assumption turns out to be wrong, it doesn't matter anyway--whatever answers I get from using that assumption, including nihilism, still can't be wrong.
So Pascal's Wager?
In any case, while there aren't wrong answers there are still immoral ones. There is no fact of the matter about normative ethics, but there are still hypothetical AIs that do evil things.
Which question exactly?
↑ comment by Vladimir_Nesov · 2011-08-24T01:40:42.627Z
In any case, while there aren't wrong answers there are still immoral ones. There is no fact of the matter about normative ethics, but there are still hypothetical AIs that do evil things.
Then there is a fact of the matter about which answers are moral, and we might as well call those that aren't "incorrect".
↑ comment by wedrifid · 2011-08-24T04:57:29.978Z
Then there is a fact of the matter about which answers are moral, and we might as well call those that aren't "incorrect".
It seems like a waste to overload the meaning of the word "incorrect" to also include such things as "Fuck off! That doesn't satisfy socially oriented aspects of my preferences. I wish to enforce different norms!"
It really is useful to emphasize a carve in reality between 'false' and 'evil/bad/immoral'. Humans are notoriously bad at keeping the concepts distinct in their minds and allowing 'incorrect' (and related words) to be used for normative claims encourages even more motivated confusion.
↑ comment by Will_Newsome · 2011-08-22T19:26:30.718Z
A lot of hypotheses about autism involve... wait for it... amygdala abnormality.
Autism gets way over-emphasized here and elsewhere as a catch-all diagnosis for mental oddity. Schizotypality and obsessive-compulsive spectrum conditions are just as common near the far right of the rationalist ability curve. (Both of those are also associated with lots of pertinent abnormalities of the insula, anterior cingulate cortex, dorsolateral prefrontal cortex, et cetera. However I've found that fMRI studies tend to be relatively meaningless and shouldn't be taken too seriously; it's not uncommon for them to contradict each other despite high claimed confidence.)
I'm someone who "talks (or reads) myself into" new moral positions pretty regularly and thus could possibly be considered an interesting case study. I got an fMRI done recently and can probably persuade the researchers to give me a summary of their subsequent analysis. My brain registered absolutely no visible change during the two hours of various tasks I did while in the fMRI (though you could see my eyes moving around so it was clearly working); the guy sounded somewhat surprised at this but said that things would show up once the data gets sent to the lab for analysis. I wonder if that's common. (At the time I thought, "maybe that's because I always feel like I'm being subjected to annoying trivial tests of my ability to jump through pointless hoops" but besides sounding cool that's probably not accurate.) Anyway, point is, I don't yet know what they found.
(I'm not sure I'll ever be able to substantiate the following claim except by some day citing people who agree with me, 'cuz it's an awkward subject politically, but: I think the evidence clearly shows that strong aneurotypicality is necessary but not sufficient for being a strong rationalist. The more off-kilter your mind is the more likely you are to just be crazy, but the more likely you are to be a top tier rationalist, up to the point where the numbers get rarer than one per billion. There are only so many OCD-schizotypal IQ>160 folk. I didn't state that at all clearly but you get the gist, maybe.)
↑ comment by Jack · 2011-08-22T19:48:00.696Z
Can you talk about some of the arguments that led you to taking new moral positions? Obviously I'm not interested in cases where new facts changed how you thought ethics should be applied but cases where your 'terminal values' changed in response to something.
↑ comment by Will_Newsome · 2011-08-22T20:06:34.669Z
That's difficult because I don't really believe in 'terminal values', so everything looks like "new facts" that change how my "ethics" should be applied. (ETA: Like, falling in love with a new girl or a new piece of music can look like learning a new fact about the world. This perspective makes more sense after reading the rest of my comment.) Once you change your 'terminal values' enough they stop looking so terminal and you start to get a really profound respect for moral uncertainty and the epistemic nature of shouldness. My morality is largely directed at understanding itself. So you could say that one of my 'terminal values' is 'thinking things through from first principles', but once you're that abstract and that meta it's unclear what it means for it to change rather than, say, just a change in emphasis relative to something else like 'going meta' or 'justification for values must be even better supported than justification for beliefs' or 'arbitrariness is bad'. So it's not obvious at which level of abstraction I should answer your question.
Like, your beliefs get changed constantly whereas methods only get changed during paradigm shifts. The thing is that once you move that pattern up a few levels of abstraction where your simple belief update is equivalent to another person's paradigm shift, it gets hard to communicate in a natural way. Like, for the 'levels of organization' flavor of levels of abstraction, the difference between "I love Jane more than any other woman and would trade the world for her" and "I love humanity more than other memeplex instantiation and would trade the multiverse for it". It is hard for those two values to communicate with each other in an intelligible way; if they enter into an economy with each other it's like they'd be making completely different kinds of deals. kaldkghaslkghldskg communication is difficult and the inferential distance here is way too big.
To be honest I think that though efforts like this post are well-intentioned and thus should be promoted to the extent that they don't give people an excuse to not notice confusion, Less Wrong really doesn't have the necessary set of skills or knowledge to think about morality (ethics, meta-ethics) in a particularly insightful manner. Unfortunately I don't think this is ever going to change. But maybe five years' worth of posts like this at many levels of abstraction and drawing on many different sciences and perspectives will lead somewhere? But people won't even do that. dlakghjadokghaoghaok. Ahem.
↑ comment by Will_Newsome · 2011-08-22T20:15:36.585Z
Like, there's a point at which object level uncertainty looks like "should I act as if I am being judged by agents with imperfect knowledge of the context of my decisions or should I act as if I am being judged by an omniscient agent or should I act as if I need to appease both simultaneously or ..."; you can go meta here in the abstract to answer this object level moral problem, but one of my many points is that at this point it just looks nothing like 'is killing good or bad?' or 'should I choose for the Nazis kill my son, or my daughter (considering they've forced this choice upon me)?'.
↑ comment by Will_Newsome · 2011-08-22T20:22:58.224Z
'should I choose for the Nazis kill my son, or my daughter (considering they've forced this choice upon me)?'
I remember that when I was like 11 years old I used to lie awake at night obsessing about variations on Sophie's choice problems. Those memories are significantly more vivid than my memories of living off ramen and potatoes with no electricity for a few months at around the same age. (I remember thinking that by far the worst part of this was the cold showers, though I still feel negative affect towards ramen (and eggs, which were also cheap).) I feel like that says something about my psychology.
↑ comment by Will_Newsome · 2011-08-22T19:39:10.211Z
You know, it isn't actually any more descriptive to write out what ACC and DLPFC stand for, since if people know anything about them they already know their acronyms, but writing them out signals that I know that most people don't know what ACC and DLPFC stand for and am penitent that I'm not bothering to link to their respective Wikipedia articles. I hate having to constantly jump through such stupid signalling hoops.
↑ comment by JGWeissman · 2011-08-22T19:56:33.487Z
I can tell from "anterior cingulate cortex" that you are talking about a part of the brain, even though I haven't heard of that part before. (I may have been able to tell from context that much about "ACC", but it would have been more work, and I would have been less confident.)
And compare the Google searches for ACC and anterior cingulate cortex. It is nice to get a more relevant first search result than "Atlantic Coast Conference Official Athletic Site".
↑ comment by Will_Newsome · 2011-08-22T20:30:56.627Z
It's rare that people bother to learn new things on their own whereas it's common for them to punish people that make it trivially more difficult for them to counterfactually do so, even though they wouldn't even have been primed to want to do so if that person hadn't brought up the subject. That's the thing I'm complaining about. (bla bla marginal cost something bla opportunity costs b;'dja. disclaimer disclaimer disclaimer.)
↑ comment by Nisan · 2011-08-22T22:05:43.579Z
This might make you feel better: There is a part of every reader that cares about the subjective experience of reading. If you propitiate that part by writing things that are a pleasure to read, they'll be more likely to read what you say.
↑ comment by Will_Newsome · 2011-08-23T00:43:25.210Z
Very helpful advice in only two sentences. Appeals to aesthetics are my favorite. Thank you.
↑ comment by thomblake · 2011-08-22T21:17:35.045Z
FWIW, I have a passing familiarity from long ago with both terms, and none with the acronyms. I would have been mystified if you'd written ACC and probably would not have been able to figure out what you were talking about, even with some quick googling. Though DLPFC could probably have gotten me there after googling that.
It's certainly easier to look up terms instead of abbreviations, and even more so years later. People using abbreviations that have since fallen out of use is one of my pet peeves when reading older papers.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-08-22T21:25:15.366Z · LW(p) · GW(p)
Right, but "a passing familiarity from long ago" != "knowing anything about them" in my mind. (Obviously I should have used a less hyperbolic phrase.) OTOH I wasn't aware of the phenomenon where abbreviations often fall out of use. I think the DLPFC was only carved out of conceptspace in the last decade, both the idea and its abbreviation, which does indicate that the inverse problem might be common for quickly advancing fields like neuroscience. (ETA: So, I was wrong to see this as an example of the thing I was complaining about. (I don't think I was wrong to complain about that thing but only in a deontological sense; in a consequentialist sense I was wrong there too.))
comment by lessdazed · 2011-08-22T08:13:22.176Z · LW(p) · GW(p)
Fantastic post. It goes a long way toward dissolving the question.
On the left we have the external world which generates the sensory inputs our agent uses to form beliefs.
Rhetorical question one: how is the singular term "agent" justified when there is a different configuration of molecules in the space the "agent" occupies from moment to moment? Wouldn't "agents" be better? What if the agent gets hit by a non-fatal brain-altering gamma ray burst or something? There's no natural quantitative point at which to say "we have an agent who has changed to become a different agent, such that when talking about the present and past agent, I will use 'agents' rather than '(a substantially unchanged) agent'", nor is there a qualitative difference.
Answer: don't be crazy. It's basically one person.
Rhetorical question two: May I say that "Chocolate ice cream is more delicious than getting beaten with tire irons by a troop of gorillas", or should I stick with "I would prefer eating chocolate ice cream to getting beaten with tire irons by a troop of gorillas"?
Answer: words are handles; don't be confused by them. Not all possible minds would have that set of preferences, but enough "human" ones do that it's reasonable to say the first. Likewise, for the class of beings I include under "human", there is a reasonable thing to mean by it.
Replies from: Strange7↑ comment by Strange7 · 2011-08-24T00:00:55.164Z · LW(p) · GW(p)
"Chocolate ice cream is more delicious than getting beaten with tire irons by a troop of gorillas",
I would say that this first phrasing actually provides more information than the second, in that it refers to the nature of the preference, which is relevant for predicting how the agent in question might change its preferences over time. Deliciousness tends to vary with supply, so the degree to which you prefer ice cream over gorilla assault is likely to increase when you're hungry or malnourished, and decrease when you're nutritionally sated. In fact, if you were force-fed chocolate ice cream and nothing else for long enough, the preference might even reverse.
Replies from: lessdazed↑ comment by lessdazed · 2011-08-24T00:26:42.995Z · LW(p) · GW(p)
it refers to the nature of the preference
Does it imply that all possible minds find the experience of eating ice cream more delicious than being beaten by gorillas with metal bars? For that would be untrue!
I question the assumption of error theorists that statements like the first have such expansive meaning. I hadn't meant to change the variable you pointed out: the reason for the preference.
Replies from: Strange7↑ comment by Strange7 · 2011-08-24T04:00:12.267Z · LW(p) · GW(p)
My understanding is that when someone talks about matters of preference, the default assumption is that they are referring to their own, or possibly the aggregate preferences of their peer group, in part because there is little or nothing that can be said about the aggregate preferences of all possible minds.
comment by whowhowho · 2013-02-13T23:08:12.735Z · LW(p) · GW(p)
The above is the anti-realist position given in terms I think Less Wrong is comfortable with. It has the following things in its favor: it does not posit any 'queer' moral properties as having objective existence and it fully accounts for the motivating, prescriptive aspect of moral language. It is psychologically sound, albeit simplistic. It is naturalistic while providing an account of why meta-ethics is so confusing to begin with. It explains our naive moral realism and dissolves the difference between the two prominent anti-realist camps.
...and it's a counsel of despair when it comes to getting people to ultimately agree about morality, which would incline those who think such agreement is a vital feature to regard it as an error theory.
Replies from: DaFranker↑ comment by DaFranker · 2013-02-14T15:26:46.061Z · LW(p) · GW(p)
Have you ever heard of Game Theory?
Because I don't see why this counsel of despair couldn't crunch some math and figure out Pareto-optimal moral rules or laws or agreements, and run with those. If they know enough about their own moralities to be a "counsel of despair", they should know enough to put down rough estimates and start shutting up and calculating.
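To make that number-crunching suggestion concrete, here is a minimal sketch of finding the Pareto-optimal outcomes of a small two-agent game; the candidate rules and payoffs are invented for illustration, not taken from anything in the thread.

```python
# Toy sketch: find the Pareto-optimal outcomes of a small two-agent game.
# The candidate "moral rules" and their (agent1, agent2) payoffs are invented for illustration.
outcomes = {
    "rule A": (3, 1),
    "rule B": (2, 2),
    "rule C": (1, 3),
    "rule D": (1, 1),  # strictly worse than rule B for both agents
}

def is_dominated(p, q):
    """True if payoff q is at least as good as p for everyone and strictly better for someone."""
    return all(qi >= pi for pi, qi in zip(p, q)) and any(qi > pi for pi, qi in zip(p, q))

def pareto_optimal(outcomes):
    """Keep only the outcomes that no other outcome dominates."""
    return {name: p for name, p in outcomes.items()
            if not any(is_dominated(p, q) for q in outcomes.values() if q != p)}

print(pareto_optimal(outcomes))  # rule D drops out; rules A, B and C survive
```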
Replies from: whowhowho↑ comment by whowhowho · 2013-02-14T16:22:59.932Z · LW(p) · GW(p)
That presupposes something like utilitarianism. If something like deontology is true, then number-crunched solutions could involve unjustifiable violations of rights.
Replies from: DaFranker↑ comment by DaFranker · 2013-02-14T16:36:57.426Z · LW(p) · GW(p)
Could you humor me for an example? What would the universe look like if "deontology is true" versus a universe where "deontology is false"? Where is the distinction?
I don't see how a deontological system would prevent number-crunching. You just number-crunch for a different target: find the pareto optima that minimize the amount of rule-breaking and/or the importance of the rules broken.
Replies from: whowhowho↑ comment by whowhowho · 2013-02-15T00:00:00.208Z · LW(p) · GW(p)
Could you humor me for an example? What would the universe look like if "deontology is true" versus a universe where "deontology is false"?
What would it be like if utilitarianism is true? Or the axiom of choice? Or the continuum hypothesis?
I don't see how a deontological system would prevent number-crunching.
I don't see how a description of the neurology of moral reasoning tells you how to crunch the numbers -- which decision theory you need to use to implement which moral theory to resolve conflicts in the right way.
Replies from: DaFranker, BerryPick6↑ comment by DaFranker · 2013-02-15T01:04:45.566Z · LW(p) · GW(p)
What would it be like if utilitarianism is true?
This statement seems meaningless to me. As in "Utilitarianism is true" computes in my mind the exact same way as "Politics is true" or "Eggs are true".
The term "utilitarianism" encompasses a broad range of philosophies, but seems more commonly used on lesswrong as meaning roughly some sort of mathematical model for computing the relative values of different situations based on certain value assumptions about the elements of those situations and a thinghy called "utility function".
If this latter meaning is used, "utilitarianism is true" is a complete type error, just like "Blue is true" or "Eggs are loud". You can't say that the mathematical formulas and formalisms of utilitarianism are "true" or "false"; they're just formulas. You can't say that "x = 5" is "true" or "false". It's just a formula that doesn't connect to anything, and its "x" isn't related to anything physical - I just pinpointed "x" as a variable, "5" as a number, and then declared them equivalent for the purposes of the rest of this comment.
This is also why I requested an example for deontology. To me, "deontology is true" sounds just like those examples. Neither "utilitarianism is true" nor "deontology is true" corresponds to a well-formed statement or sentence or proposition or whatever the "correct" philosophical term is for this.
Replies from: nshepperd, whowhowho↑ comment by nshepperd · 2013-02-15T11:46:10.521Z · LW(p) · GW(p)
but seems more commonly used on Less Wrong to mean roughly some sort of mathematical model for computing the relative values of different situations based on certain value assumptions about the elements of those situations and a thingy called a "utility function".
Wait, seriously? That sounds like a gross misuse of terminology, since "utilitarianism" is an established term in philosophy that specifically talks about maximising some external aggregative value such as "total happiness", or "total pleasure minus suffering". Utility functions are a lot more general than that (i.e. they need not be utilitarian, and can be selfish, for example).
Replies from: DaFranker↑ comment by DaFranker · 2013-02-15T15:16:31.863Z · LW(p) · GW(p)
Wait, seriously? That sounds like a gross misuse of terminology, since "utilitarianism" is an established term in philosophy that specifically talks about maximising some external aggregative value such as "total happiness", or "total pleasure minus suffering".
To an untrained reader, this would seem as if you'd just repeated in different words what I said ;)
I don't see "utilitarianism" itself used all that often, to be honest. I've seen the phrase "in utilitarian fashion", usually referring more to my description than the traditional meaning you've described.
"Utility function", on the other hand, gets thrown around a lot with a very general meaning that seems to be "If there's something you'd prefer than maximizing your utility function, then that wasn't your real utility function".
I think one important source of confusion is that LWers routinely use concepts that were popularized or even invented by prominent utilitarians (or so I'm guessing, since these concepts come up on the Wikipedia page for utilitarianism), and then some reader assumes they're using utilitarianism as a whole in their thinking, and the discussion drifts from "utility" and "utility function" to "in utilitarian fashion" and "utility is generally applicable" to "utilitarianism is true" and "(global, single-variable-per-population) utility is the only thing of moral value in the universe!".
↑ comment by whowhowho · 2013-02-15T09:49:27.384Z · LW(p) · GW(p)
Everywhere outside of LW, utilitarianism means a moral theory. It, or some specific variation of it, is therefore capable of being true or false. The point could as well have been made with some less mathematical moral theory. The truth or falsehood of moral theories doesn't have direct empirical consequences, any more than the truth or falsehood of abstract mathematical claims does. Shut-up-and-calculate doesn't work here, because one is not using utilitarianism or any other moral theory to predict what will happen; one is using it to plan what one will do.
You can't say that the mathematical formulas and formalisms of utilitarianism are "true" or "false"; they're just formulas. You can't say that "x = 5" is "true" or "false". It's just a formula that doesn't connect to anything, and its "x" isn't related to anything physical - I just pinpointed "x" as a variable, "5" as a number, and then declared them equivalent for the purposes of the rest of this comment.
And I can't say that f, m and a mean something in f=ma? When you apply maths, the variables mean something. That's what application is. In U-ism, the input is happiness, or life-years, or something, and the output is a decision that is put into practice.
This is also why I requested an example for deontology. To me, "deontology is true" sounds just like those examples. Neither "utilitarianism is true" or "deontology is true" correspond to well-formed statements or sentences or propositions or whatever the "correct" philosophical term is for this.
I don't know why you would want to say you have an explanation of morality when you are an error theorist.
I also don't know why you are an error theorist. U-ism and D-ology are rival answers to the question "what is the right way to resolve conflicts of interest?". I don't think that is a meaningless or unanswerable question. I don't see why anyone would want to pluck a formula out of the air, number-crunch using it, and then make it policy. Would you walk into a suicide booth because someone had calculated, without justifying the formula used, that you were a burden to society?
Replies from: DaFranker, BerryPick6↑ comment by DaFranker · 2013-02-15T16:36:03.952Z · LW(p) · GW(p)
I think you are making a lot of assumptions about what I think and believe. I also think you're coming dangerously close to being perceived as a troll, at least by me.
U-ism and D-ology are rival answers to the question "what is the right way to resolve conflicts of interest?"
Oh! So that's what they're supposed to be? Good, then clearly neither - rejoice, people of the Earth, the answer has been found! Mathematically you literally cannot do better than Pareto-optimal choices.
The real question, of course, is how to put meaningful numbers into the game theory formula, how to calculate the utility of the agents, how to determine the correct utility functions for each agent.
My answer to this is that there is already a set of utility functions implemented in each human's brain, and this set of utility functions can itself be considered a separate sub-game; if you find solutions to all the problems in this sub-game you'll end up with a reflectively coherent CEV-like ("ideal" from now on) utility function for this one human, and then that's the utility function you use for that agent in the big game board / decision tree / payoff matrix / what-have-you of moral dilemmas and conflicts of interest.
So now what we need is better insights and more research into what these sets of utility functions look like, how close to completion they are, and how similar they are across different humans.
Note that I've never even heard of a single human capable of knowing or always acting on their "ideal utility function". All sample humans I've ever seen also have other mechanisms interfering or taking over which makes it so that they don't always act even according to their current utility set, let alone their ideal one.
I don't know why you would want to say you have an explanation of morality when you are an error theorist. (...) I also don't know why you are an error theorist.
I don't know what being an "error theorist" entails, but you claiming that I am one seems valid evidence that I am one so far, so sure. Whatever labels float your boat, as long as you aren't trying to sneak in connotations about me or committing the noncentral fallacy. (notice that I accidentally snuck in the connotation that, if you are committing this fallacy, you may be using "worst argument in the world")
And I can't say that f, m and a mean something in f=ma? When you apply maths, the variables mean something. That's what application is. In U-ism, the input is happiness, or life-years, or something, and the output is a decision that is put into practice.
Sure. Now to re-state my earlier question: which formulations of U and D can have truth values, and what pieces of evidence would falsify each?
The formulation for f=ma is that the force applied to an object is equal to the product of the object's mass and its acceleration, for certain appropriate units of measurements. You can experimentally verify this by pushing objects, literally. If for some reason we ran a well-designed, controlled experiment and suddenly more massive objects started accelerating more than less massive objects with the same amount of force, or more generally the physical behavior didn't correspond to that equation, the equation would be false.
Replies from: whowhowho↑ comment by whowhowho · 2013-02-15T16:56:40.811Z · LW(p) · GW(p)
Oh! So that's what they're supposed to be? Good, then clearly neither - rejoice, people of the Earth, the answer has been found! Mathematically you literally cannot do better than Pareto-optimal choices
Assuming that everything of interest can be quantified, that the quantities can be aggregated and compared, and that anyone can take any amount of loss for the greater good... i.e. assuming all the stuff that utilitarians assume and that their opponents don't.
My answer to this is that there is already a set of utility functions implemented in each human's brain, and this set of utility functions can itself be considered a separate sub-game; if you find solutions to all the problems in this sub-game you'll end up with a reflectively coherent CEV-like ("ideal" from now on) utility function for this one human, and then that's the utility function you use for that agent in the big game board / decision tree / payoff matrix / what-have-you of moral dilemmas and conflicts of interest.
No. You can't leap from "a reflectively coherent CEV-like [..] utility function for this one human" to a solution of conflicts of interest between agents. All you have is a set of exquisite models of individual interests, and no way of combining them or trading them off.
I don't know what being an "error theorist" entails,
Strictly speaking, you are a metaethical error theorist. You think there is no meaning to the truth or falsehood of metaethical claims.
Now to re-state my earlier question: which formulations of U and D can have truth values, and what pieces of evidence would falsify each?
Any two theories which have differing logical structure can have truth values, since they can be judged by coherence, etc., and any two theories which make different object-level predictions can likewise have truth values. U and D pass both criteria with flying colours.
And if CEV is not a meaningful metaethical theory, why bother with it? If you can't say that the output of a grand CEV number crunch is what someone should actually do, what is the point?
The formulation for f=ma is that the force applied to an object is equal to the product of the object's mass and its acceleration, for certain appropriate units of measurements. You can experimentally verify this by pushing objects, literally.
I know. And you determine the truth values of other theories (e.g. maths) non-empirically, or you can use a mixture. How were you proposing to test CEV?
Replies from: DaFranker, BerryPick6↑ comment by DaFranker · 2013-02-15T19:24:36.894Z · LW(p) · GW(p)
Assuming that everything of interest can be quantified, that the quantities can be aggregated and compared, and that anyone can take any amount of loss for the greater good... i.e. assuming all the stuff that utilitarians assume and that their opponents don't.
(...)
No. You can't leap from "a reflectively coherent CEV-like [..] utility function for this one human" to a solution of conflicts of interest between agents. All you have is a set of exquisite models of individual interests, and no way of combining them or trading them off.
Two individual interests: Making paperclips and saving human lives. Prisoners' dilemma between the two. Is there any sort of theory of morality that will "solve" the problem or do better than number-crunching for Pareto optimality?
Even things that cannot be quantified can be quantified. I can quantify non-quantifiable things with "1" and "0". Then I can count them. Then I can compare them: I'd rather have Unquantifiable-A than Unquantifiable-B, unless there's also Unquantifiable-C, so B < A < B+C. I can add any number of unquantifiables and/or unbreakable rules, and devise a numerical system that encodes all my comparative preferences in which higher numbers are better. Then I can use this to find numbers to put on my Prisoners Dilemma matrix or any other game-theoretic system and situation.
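A toy sketch of that encoding step, with hand-picked weights; all names and numbers here are illustrative assumptions.

```python
# Encode "unquantifiable" goods as present/absent (1/0) with invented weights,
# then check that the weights reproduce the stated comparative preferences B < A < B+C.
weights = {"A": 2.0, "B": 1.0, "C": 3.0}  # hypothetical numbers, chosen by hand

def value(bundle):
    """Total value of a bundle of otherwise unquantifiable goods."""
    return sum(weights[good] for good in bundle)

assert value({"B"}) < value({"A"}) < value({"B", "C"})  # matches B < A < B+C
# These values can now serve as payoffs in a Prisoners' Dilemma matrix or any other game.
```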
Relevant claim from an earlier comment of mine, reworded: There does not exist any "objective", human-independent method of comparing and trading the values within human morality functions.
Game Theory is the science of figuring out what to do in case you have different agents with incompatible utility functions. It provides solutions and formalisms both when comparisons between agents' payoffs are impossible and when they are possible. Isn't this exactly what you're looking for? All that's left is applied stuff - figuring out what exactly each individual cares about, which things all humans care about so that we can simplify some calculations, and so on. That's obviously the most time-consuming, research-intensive part, too.
↑ comment by BerryPick6 · 2013-02-15T18:06:12.042Z · LW(p) · GW(p)
any two theories which make different object-level predictions can likewise have truth values.
Would you mind giving three examples of cases where Deontology being true gives different predictions than Consequentialism being true? This is another extension of the original question posed, which you've been dodging.
Replies from: whowhowho↑ comment by whowhowho · 2013-02-15T18:14:05.868Z · LW(p) · GW(p)
Would you mind giving three examples of cases where Deontology being true gives different predictions than Consequentialism being true?
Deontology says you should push the fat man under the trolley, and various other examples that are well known in the literature.
This is another extension of the original question posed, which you've been dodging.
I have not been "dodging" it. The question seemed to frame the issue as one of being able to predict events that are observed passively. That misses the point on several levels. For one thing, it is not the case that empirical proof is the only kind of proof. For another, no moral theory "does" anything unless you act on it. And that includes CEV.
Replies from: BerryPick6↑ comment by BerryPick6 · 2013-02-15T19:00:04.121Z · LW(p) · GW(p)
Deontology says you should push the fat man under the trolley, and various other examples that are well known in the literature.
This would still be the case, even if Deontology were false. This leads me to strongly suspect that whether or not it is true is a meaningless question. There is no test I can think of which would determine its veracity.
Replies from: whowhowho↑ comment by whowhowho · 2013-02-15T19:08:35.171Z · LW(p) · GW(p)
Actually, deontology says you should NOT push the fat man. Consequentialism says you should.
This would still be the case, even if Deontology were false
It is hard to make sense of that. If a theory is correct, then what it states is correct. D and U make opposite recommendations about the fat man, so you cannot say that they are both indifferent with regard to your rather firm intuition about this case.
There is no test I can think of which would determine its veracity.
Once again I will state: moral theories are tested by their ability to match moral intuition, and by their internal consistency, etc.
Once again, I will ask: how would you test CEV?
Replies from: DaFranker, BerryPick6↑ comment by DaFranker · 2013-02-15T20:10:41.328Z · LW(p) · GW(p)
how would you test CEV?
Compute CEV. Then actually learn and become the better person that was modeled to compute the CEV. See whether you prefer the CEV or any other possible utility function.
Asymptotic estimations could also be made IFF utility-function spaces are continuous and can be mapped by similarity: if, as you learn more true things (from a random sample and ordering of all possible true things you could learn), gain more brain computing power, and gain a better capacity for self-reflection, your preferences tend towards the CEV-predicted preferences, then CEV is almost certainly true.
If a theory is correct, then what it states is correct. D and U make opposite recommendations about the fat man, so you cannot say that they are both indifferent with regard to your rather firm intuition about this case.
D(x) and U(y) make opposite recommendations. x and y are different intuitions from different people, and these intuitions may or may not match the actual morality functions inside the brains of their proponents.
I can find no measure of which recommendation is "correct" other than inside my own brain somewhere. This directly implies that it is "correct for Frank's Brain", not "correct universally" or "correct across all humans".
Based on this reasoning, if I use my moral intuition to reason about the fat-man trolley problem using D() and find the conclusion correct within context, then D is correct for me, and the same goes for U(). So let's try it!
My primary deontological rule: When there exist counterfactual possible futures in which the expected number of deaths is lower than in all other possible futures, always take the course of action which leads to the future with the fewest expected deaths. (In simple words: Do Not Kill, where inaction that leads to death is considered Killing).
A train is going to hit five people. There is a fat man whom I can push down to save the five people with 90% probability. (Let's just assume I'm really good at quickly estimating this kind of physics within this thought experiment.)
If I don't push the fat man, 5 people die with 99% probability (shit happens). If I push the fat man, 1 person dies with 99% probability (shit happens), and the 5 others still die with 10% probability.
Expected deaths of not-pushing: 4.95.
Expected deaths of pushing: 1.49.
I apply the deontological rule. That fat man is doomed.
Now let's try the utilitarian vers-- Oh wait. That's already what we did. We created a deontological rule that says to pick the highest expected utility action, and that's also what utilitarianism tells me to do.
See what I mean when I say there is no meaningful distinction? If you calibrate your rules consistently, all "moral theories" I see philosophers arguing about produce the same output. Equal output, in fact.
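For what it's worth, the arithmetic above drops straight into code; a minimal sketch using the probabilities stipulated in the comment, which also makes the "same output" point concrete:

```python
# Expected deaths for each action, using the comment's stipulated probabilities.
expected_deaths = {
    "don't push": 5 * 0.99,             # the five workers die with 99% probability
    "push":       1 * 0.99 + 5 * 0.10,  # the fat man dies 99%; the workers still die 10% of the time
}

# Deontological framing: obey the rule "take the action with the fewest expected deaths".
rule_choice = min(expected_deaths, key=expected_deaths.get)

# Utilitarian framing: maximize a utility function defined as minus the expected deaths.
utility_choice = max(expected_deaths, key=lambda a: -expected_deaths[a])

print(expected_deaths["don't push"], expected_deaths["push"])  # 4.95 and 1.49, as above
assert rule_choice == utility_choice == "push"                 # both framings give the same answer
```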
So to return to the earlier point: D(trolley, Frank's Rule) is correct, where trolley is the problem and Frank's Rule is the rule I find most moral. U(trolley, Frank's Utility Function) is also correct. D(trolley, ARBITRARY RULE PICKED AT RANDOM FROM ALL POSSIBLE RULES) is incorrect for me. Likewise, U(trolley, ARBITRARY UTILITY FUNCTION PICKED AT RANDOM FROM ALL POSSIBLE UTILITY FUNCTIONS) is incorrect for me.
This means that U(trolley) and D(trolley) cannot be "correct" or "incorrect", because in typical Functional Programming fashion, U(trolley) and D(trolley) return curried functions; that is, they return a function of a certain type which takes a rule (for D) or a utility function (for U) and returns a recommendation, based on this, for the trolley problem.
To reiterate some previous claims of mine, in which I am fairly confident, in the above jargon: there do not exist any single-parameter U(x) or D(x) functions that return a single truth-valuable recommendation without any rules or utility functions as input. All deontological systems rely on the rules supplied to them, and all utilitarian systems rely on the utility functions supplied to them. There exists a utility function equivalent to each possible rule, and there exists a rule equivalent to each possible utility function.
The rules or the utility functions are inside human brains. And either can be expressed in terms of the other interchangeably - which we use is merely a matter of convenience as one will correspond to the brain's algorithm more easily than the other.
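A small sketch of the currying point; the function names and the toy problem representation are illustrative assumptions, not anything canonical.

```python
# D(problem) and U(problem) are not verdicts; they are functions still waiting for
# a rule or a utility function. Only once that second argument arrives do you get a recommendation.
def D(problem):
    """Curried deontological evaluator: returns a function awaiting a rule."""
    return lambda rule: rule(problem)

def U(problem):
    """Curried utilitarian evaluator: returns a function awaiting a utility function over options."""
    return lambda utility: max(problem["options"], key=utility)

trolley = {"options": ["push", "don't push"],
           "expected_deaths": {"push": 1.49, "don't push": 4.95}}

franks_rule = lambda p: min(p["options"], key=lambda o: p["expected_deaths"][o])  # fewest expected deaths
franks_utility = lambda o: -trolley["expected_deaths"][o]                          # utility = -expected deaths

# Same recommendation either way; the verdict comes from the supplied rule / utility function.
assert D(trolley)(franks_rule) == U(trolley)(franks_utility) == "push"
```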
Replies from: shminux↑ comment by Shmi (shminux) · 2013-02-15T22:17:42.819Z · LW(p) · GW(p)
My primary deontological rule: When there exist counterfactual possible futures in which the expected number of deaths is lower than in all other possible futures, always take the course of action which leads to the future with the fewest expected deaths. (In simple words: Do Not Kill, where inaction that leads to death is considered Killing).
I suspect that defining deontology as obeying the single rule "maximize utility" would be a non-central redefinition of the term, something most deontologists would find unacceptable.
Replies from: DaFranker↑ comment by DaFranker · 2013-02-15T22:31:06.481Z · LW(p) · GW(p)
The simplified "Do Not Kill" formulation sounds very much like most deontological rules I've heard of (AFAIK, "Do not kill." is a bread-and-butter standard deontological rule). It also happens to be a rule which I explicitly attempt to implement in my everyday life in exactly the format I've exposed - it's not just a toy example, this is actually my primary "deontological" rule as far as I can tell.
And to me there is no difference between "Pull the trigger" or "Remain immobile" when both are extremely likely to lead to the death of someone. To me, both are "Kill". So if not pulling the trigger leads to one death, and pulling the trigger leads to three deaths, both options are horrible, but I still really prefer not pulling the trigger.
So if for some inexplicable reason it's really certain that the fat man will save the workers and that there is no better solution (this is an extremely unlikely proposition, and by default I would not trust myself to have searched the whole space of possible options), then I would prefer pushing the fat man.
If I considered standing by and watching people die because I did nothing to not be "Kill", then I would enforce that rule, and my utility function would also be different. And then I wouldn't push the fat man either way, whether I calculate it with utility functions or whether I follow the rule "Do Not Kill".
I agree that it's non-central, but IME most "central" rules I've heard of are really simple wordings that obfuscate the complexity and black boxes that are really going on in the human brain. At the base level, "do not kill" and "do not steal" are extremely complex. I trust that this part isn't controversial except in naive philosophical journals of armchair philosophizing.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-02-15T22:41:40.071Z · LW(p) · GW(p)
to me there is no difference between "Pull the trigger" or "Remain immobile" when both are extremely likely to lead to the death of someone. To me, both are "Kill".
I believe that this is where many deontologists would label you a consequentialist.
most "central" rules I've heard of are really simple wordings that obfuscate the complexity and black boxes that are really going on in the human brain. At the base level, "do not kill" and "do not steal" are extremely complex. I trust that this part isn't controversial except in naive philosophical journals of armchair philosophizing.
There are certainly the complex edge cases, like minimum necessary self-defense and such, but in most scenarios the application of the rules is pretty simple. Moreover, "inaction = negative action" is quite non-standard. In fact, even if I believe that in your example pushing the fat man would be the "right thing to do", I do not alieve it (i.e. I would probably not do it if push came to shove, so to speak).
Replies from: DaFranker↑ comment by DaFranker · 2013-02-15T23:42:37.893Z · LW(p) · GW(p)
I believe that this is where many deontologists would label you a consequentialist.
With all due respect to all parties involved, if that's how it works I would label the respective hypothetical individuals who would label me that "a bunch of hypocrites". They're no less consequentialist, in my view, since they hide behind words the fact that they have to make the assumption that pulling a trigger will lead to the consequence of a bullet coming out of it which will lead to the complex consequence of someone's life ending.
I wish I could be more clear and specific, but it is difficult to discuss and argue all the concepts I have in mind as they are not all completely clear to me, and the level of emotional involvement I have in the whole topic of morality (as, I expect, do most people) along with the sheer amount of fun I'm having in here are certainly not helping mental clarity and debiasing. (yes, I find discussions, arguments, debates etc. of this type quite fun, most of the time)
In fact, even if I believe that in your example pushing the fat man would be the "right thing to do", I do not alieve it (i.e. I would probably not do it if push came to shove, so to speak).
I'm not sure it's just a question of not alieving it. There are many good reasons not to believe evidence that this will work, even more good reasons to believe there is probably a better option, and many reasons why it could be extremely detrimental to you in the long term to push a fat man onto train tracks; so if push came to shove, not pushing might end up being the more rational action in a real-life situation similar to the thought experiment.
↑ comment by BerryPick6 · 2013-02-15T19:16:50.136Z · LW(p) · GW(p)
Actually, deontology says you should NOT push the fat man. Consequentialism says you should.
I'm quite aware of that.
It is hard to make sense of that. If a theory is correct, then what it states is correct. D and U make opposite recommendations about the fat man, so you cannot say that they are both indifferent with regard to your rather firm intuition about this case.
At this point, I simply must tap out. I'm at a loss at how else to explain what you seem to be consistently missing in my questions, but DaFranker is doing a very good job of it, so I'll just stop trying.
moral theories are tested by their ability to match moral intuition,
Really? This is news to me. I guess Moore was right all along...
Replies from: whowhowho↑ comment by whowhowho · 2013-02-15T19:44:16.673Z · LW(p) · GW(p)
You have proof that you should push the fat man?
Replies from: DaFranker↑ comment by DaFranker · 2013-02-15T20:26:47.824Z · LW(p) · GW(p)
You have proof that you should push the fat man?
Lengthy breakdown of my response.
TL;DR: You should push the fat man if and only if X. You should not push the fat man if and only if ¬X.
X can be turned into a rule X' to use with D(X') to compute whether you should push or not. X can also be turned into a utility function X' to use with U(X') to compute whether you should push or not. The answer in either case doesn't depend on U or D; it depends on your derived X', which itself depends on X.
This is shown by the assumption that for all reasonable a, there exists a g(a) where U(a) = D(g(a)). Since, by their ambiguity and vague definitions, both U() and D() seem to cover an infinite domain and are the equivalent of Turing-complete, this assumption seems very natural.
↑ comment by BerryPick6 · 2013-02-15T11:46:54.235Z · LW(p) · GW(p)
I don't know why you would want to say you have an explanation of morality when you are an error theorist.
Error theorists are cognitivists. The sentence you quoted makes me think DaFranker is a noncognitivist (or a deflationary cognitivist); he is precisely asking you what it would mean for U or D to have truth values.
I also don't know why you are an error theorist. U-ism and D-ology are rival answers to the question "what is the right way to resolve conflicts of interest?".
When they are both trying to give accounts of what it would mean for something to be "right", it seems this question becomes pretty silly.
Replies from: DaFranker, whowhowho↑ comment by DaFranker · 2013-02-15T15:59:15.849Z · LW(p) · GW(p)
The sentence you quoted makes me think DaFranker is a noncognitivist (or a deflationary cognitivist)
I'm not sure at all what those mean. If they mean that I think there don't exist any sentences about morality that can have truth values, that is false. "DaFranker finds it immoral to coat children in burning napalm" is true, with more confidence than I can reasonably express (I'm about as certain of this belief about my moral system as I am of things like 2 + 2 = 4).
However, the sentence "It is immoral to coat children in burning napalm" returns an error for me.
You could say I consider the function "isMoral?" to take as input a morality function, a current worldstate, and an action (to be applied to this worldstate) whose morality one wants to evaluate. A wrapper function "whichAreMoral?" exists to check more complicated scenarios with multiple possible actions and other fun things.
See, if the "morality function" input parameter is omitted, the function just crashes. If the current worldstate is omitted, the morality function gets run with empty variables and 0s, which means that the whole thing is meaningless and not connected to anything in reality.
he is precisely asking you what it would mean for U or D to have truth values.
Yes.
In the example above, my "isMoral?" function can only return a truth-value when you give it inputs and run the algorithm. You can't look at the overall code defining the function and give it a truth-value. That's just completely meaningless. My current understanding of U and D is that they're fairly similar to this function.
When they are both trying to give accounts of what it would mean for something to be "right", it seems this question becomes pretty silly.
I agree somewhat. To use another code analogy, here I've stumbled upon the symbol "Right", and then I look back across the code for this discussion and I can't find any declarations or "Right = XXXXX" assignment operations. So clearly the other programmers are using different linked libraries that I don't have access to (or they forgot that "Right" doesn't have a declaration!)
Replies from: whowhowho↑ comment by whowhowho · 2013-02-15T16:43:32.206Z · LW(p) · GW(p)
If they mean that I think there don't exist any sentences about morality that can have truth values, that is false. "DaFranker finds it immoral to coat children in burning napalm" is true, with more confidence than I can reasonably express
An error theorist could agree with that. It isn't really a statement about morality; it is a statement about belief. Consider "Eudoximander considers it prudent to refrain from rape, so as to avoid being torn apart by vengeful harpies". That isn't a true statement about harpies.
See, if the "morality function" input parameter is omitted, the function just crashes. If the current worldstate is omitted, the morality function gets run with empty variables and 0s, which means that the whole thing is meaningless and not connected to anything in reality.
And it doesn't matter what the morality function is? Any mapping from input to output will do?
You can't look at the overall code defining the function and give it a truth-value. That's just completely meaningless.
So is it meaningless that
some simulations do (not) correctly model the simulated system
some commercial software does (not) fulfil a real-world business requirement
some algorithms do (not) correctly compute mathematical functions
some games are (not) entertaining
some trading software does (not) return a profit
I agree somewhat. To use another code analogy, here I've stumbled upon the symbol "Right", and then I look back across the code for this discussion and I can't find any declarations or "Right = XXXXX" assignment operations.
It's worth noting that natural language is highly contextual. We know in broad terms what it means to get a theory "right". That's "right" in one context. In this context we want a "right" theory of morality, that is, a theoretically-right theory of the morally-right.
Replies from: DaFranker↑ comment by DaFranker · 2013-02-15T18:49:39.751Z · LW(p) · GW(p)
And it doesn't matter what the morality function is? Any mapping from input to output will do?
Yes.
I have a standard library in my own brain that determines what I think looks like a "good" or "useful" morality function, and I only send morality functions that I've approved into my "isMoral?" function. But "isMoral?" can take any properly-formatted function of the right type as input.
And I have no idea yet what it is that makes certain morality functions look "good" or "useful" to me. Sometimes, to try and clear things up, I try to recurse "isMoral?" on different parameters.
e.g.: "isMoral? defaultMoralFunc w1 (isMoral? newMoralFunc w1 BurnBabies)" would tell me whether my default morality function considers moral the evaluation and results of whether the new morality function considers burning babies moral or not.
An error theorist could agree with that. It isn't really a statement about morality; it is a statement about belief. Consider "Eudoximander considers it prudent to refrain from rape, so as to avoid being torn apart by vengeful harpies". That isn't a true statement about harpies.
I'm not sure what you mean by "it isn't really a statement about morality, it is about belief."
Yes, I have the belief that I consider it immoral to coat children in napalm. This previous sentence is certainly a statement about my beliefs. "I consider it immoral to coat children in napalm" certainly sounds like a statement about my morality though.
"isMoral? DaFranker_IdealMoralFunction Universe coatChildInNapalm = False" would be a good way to put it.
It is a true statement about my ideal moral function that it considers it better not to coat a child in burning napalm. The declaration and definition of "better" here are inside the source code of DaFranker_IdealMoralFunction, and I don't have access to that source code (it's probably not even written yet).
Also note that "isMoral? MoralIntuition w a" =/= "isMoral? [MoralFunctionsInBrain] w a" =/= "isMoral? DominantMoralFunctionInBrain w a" =/= "isMoral? CurrentMaxMoralFunctionInBrain w a" =/= "isMoral? IdealMoralFunction w a".
In other words, when one considers whether or not to coat a child in burning napalm, many functions are executed in the brain, and some of them may disagree on the betterness of some details of the situation. One of those functions usually takes the lead and becomes what the person actually does when faced with that situation (this dominance is computed dynamically at runtime, so at each evaluation the result may be different if, for instance, one's moral intuitions have shifted the internal power balance within the brain). One could in theory make up a function that represents the pareto-optimal compromise of all those functions, and all of this is reviewed in very synthesized form by the conscious mind to generate a Moral Intuition. All of which is very different from what would happen if the conscious mind could read the source code for the set of moral functions in the brain and edit things to be the way it prefers, recursively, towards a unique ideal moral function.
So is it meaningless that
some simulations do (not) correctly model the simulated system
some commercial software does (not) fulfil a real-world business requirement
some algorithms do (not) correctly compute mathematical functions
some games are (not) entertaining
some trading software does (not) return a profit
Not quite, but those are different questions. Is the trading software itself "true" or "false"? No. Is my approximate model of how the trading software works "true" or "false"? No.
Is it "true" or "false" that my approximate model of how the trading software works is better than competing alternatives? Yes, it is true (or false). Is it "true" or "false" that the trading software returns a profit? Yes, it is.
See, there's an element of context that lets us ask true/false questions about things. "Politics is true" is meaningless. "Politics is the most efficient method of managing a society" is certainly not meaningless, and with more formal definitions of "efficient" and "managing" one could even produce experimental tests to determine by observations whether that is true or false.
However, when one says "utilitarianism is true", I just don't know what observations to make. "utilitarianism accurately models DaFranker's ideal moral function" is much better - I can compare the two, I can try to refine what is meant by "utilitarianism" here exactly, and I could in principle determine whether this is true or false.
"as per utilitarianism's claim, what is morally best is to maximize the sum of x where each x is a measure u() of each agent's ideal morality function" sounds like it also makes sense. But then you run into a snag while trying to evaluate the truth-value of this. What is "morally best" here? According to what principle? It seems this "morally best" depends on the reader, or myself, or some other point of reference.
We could decide that this "morally best" means that it is the optimal compromise between all of our morality functions, the optimal way to resolve conflicts of interest with the least total loss in utility and highest total gain.
We could assign a truth-value to that: compute all possible forms of social agreement about morality, all possible rule systems, and if the above utilitarian claim is among the pareto-optimal choices on the game payoff matrix, then the statement is true; if it is strictly dominated by some other outcome, then it is false. Of course, actually running this computation would require solving all kinds of problems and getting various sorts of information that I don't even know how to find ways to solve or get. And it might require a Halting Oracle or some form of hypercomputer.
At any rate, I don't think "as per utilitarianism's claim, it is pareto-optimal across all humans to maximize the sum of x where each x is a measure u() of each agent's ideal morality function" is what you meant by "utilitarianism is true".
↑ comment by whowhowho · 2013-02-15T12:26:25.814Z · LW(p) · GW(p)
Error theorists are cognitivists. The sentence you quoted makes me think DaFranker is a noncognitivist (or a deflationary cognitivist); he is precisely asking you what it would mean for U or D to have truth values.
By comparing them to abstract formulas, which don't have truth values ... as opposed to equations, which do, and to applied maths, which does, and theories, which do...
When they are both trying to give accounts of what it would mean for something to be "right", it seems this question becomes pretty silly.
I have no idea why you would say that. Belief in objective morality is debatable but not silly in the way belief in unicorns is. The question of what is right is also about the most important question there is.
Replies from: DaFranker, BerryPick6↑ comment by DaFranker · 2013-02-15T16:01:19.618Z · LW(p) · GW(p)
By comparing them to abstract formulas, which don't have truth values ... as opposed to equations, which do, and to applied maths, which does, and theories, which do...
My main point is that I haven't the slightest clue as to what kind of applied math or equations U and D could possibly be equivalent to. That's why I was asking you, since you seem to know.
Replies from: whowhowho↑ comment by BerryPick6 · 2013-02-15T12:39:30.121Z · LW(p) · GW(p)
By comparing them to abstract formulas, which don't have truth values ... as opposed to equations, which do, and to applied maths, which does, and theories, which do...
I'll concede I may have misinterpreted them. I guess we shall wait and see what DF has to say about this.
I have no idea why you would say that. Belief in objective morality is debatable but not silly in the way belief in unicorns is. The question of what is right is also about the most important question there is.
I never said belief in "objective morality" was silly. I said that trying to decide whether to use U or D by asking "which one of these is the right way to resolve conflicts of interest?" when accepting one or the other necessarily changes variables in what you mean by the word 'right' and also, maybe even, the word 'resolve', sounds silly.
Replies from: whowhowho↑ comment by whowhowho · 2013-02-15T12:51:00.726Z · LW(p) · GW(p)
I said that trying to decide whether to use U or D by asking "which one of these is the right way to resolve conflicts of interest?" when accepting one or the other necessarily changes variables in what you mean by the word 'right' and also, maybe even, the word 'resolve', sounds silly.
That would be the case if "right way" meant "morally-right way". But metaethical theories aren't compared by object-level moral rightness, exactly. They can be compared by coherence, practicality, etc. If metaethics were just obviously unsolvable, someone would have noticed.
Replies from: BerryPick6↑ comment by BerryPick6 · 2013-02-15T12:57:44.981Z · LW(p) · GW(p)
That would be the case if "right way" meant "morally-right way".
That's just how I understand that word. 'Right for me to do' and 'moral for me to do' refer to the same things, to me. What differs in your understanding of the terms?
If metaethics were just obviously unsolvable, someone would have noticed.
Remind me what it would look like for metaethics to be solved?
Replies from: whowhowho↑ comment by whowhowho · 2013-02-15T13:09:38.758Z · LW(p) · GW(p)
That's just how I understand that word. 'Right for me to do' and 'moral for me to do' refer to the same things, to me. What differs in your understanding of the terms?
eg. mugging an old lady is the instrumentally-right way of scoring my next hit of heroin, but it isn't morally-right.
Remind me what it would look like for metaethics to be solved?
Unsolved-at-time-T doesn't mean unsolvable. Ask Andrew Wiles.
Replies from: BerryPick6↑ comment by BerryPick6 · 2013-02-15T13:20:13.823Z · LW(p) · GW(p)
eg. mugging an old lady is the instrumentally-right way of scoring my next hit of heroin, but it isn't morally-right
Just like moving queen to E3 is instrumentally-right when playing chess, but not morally right. The difference is that in the chess and heroin examples, a specific reference point is being explicitly plucked out of thought-space (Right::Chess; Right::Scoring my next hit), which doesn't refer to me at all. Mugging an old woman may or may not be moral, but deciding that solely based on whether or not it helps me score heroin is a category error.
Unsolved-at-time-T doesn't mean unsolvable. Ask Andrew Wiles.
I'm no good at math, but it's my understanding that there was an idea of what it would look like for someone to solve Fermat's Problem even before someone actually did so. I'm skeptical that 'solving metaethics' is similar in this respect.
Replies from: whowhowho↑ comment by whowhowho · 2013-02-15T14:05:57.123Z · LW(p) · GW(p)
Just like moving queen to E3 is instrumentally-right when playing chess, but not morally right. The difference is that in the chess and heroin examples, a specific reference point is being explicitly plucked out of thought-space (Right::Chess; Right::Scoring my next hit), which doesn't refer to me at all. Mugging an old woman may or may not be moral, but deciding that solely based on whether or not it helps me score heroin is a category error.
You seem to have interpreted that the wrong way round. The point was that there are different and incompatible notions of "right". Hence "the right theory of what is right to do" is not circular, so long as the two "rights" mean different things. Which they do (theoretical correctness and moral obligation, respectively).
I'm no good at math, but it's my understanding that there was an idea of what it would look like for someone to solve Fermat's Problem even before someone actually did so. I'm skeptical that 'solving metaethics' is similar in this respect.
No one knows what a good explanation looks like? But then why even bother with things like CEV, if we can't say what they are for?
↑ comment by BerryPick6 · 2013-02-15T00:50:28.745Z · LW(p) · GW(p)
What would it be like if utilitarianism is true?
I think you've just repeated his question.
comment by Matt_Simpson · 2011-09-12T04:02:47.300Z · LW(p) · GW(p)
For the definition of "moral" that includes how people tend to use the term, this seems about right. However, the word "morality" is used in many different ways. For example, the "morality" I think about when I am legitimately wondering what action I should take - and not letting just an emotional reaction guide my actions - is in the ideal map (it's my preferences).
Replies from: Jack↑ comment by Jack · 2011-09-12T04:46:42.708Z · LW(p) · GW(p)
If your preferences were different (say you had a genuine preference to murder innocent people) would that change what is moral?
Replies from: Matt_Simpson↑ comment by Matt_Simpson · 2011-09-12T05:03:35.111Z · LW(p) · GW(p)
Nope. Define original preferences as moral1 and murder preferences as moral2. I'm asking what is moral1 to do, and that doesn't change if my preferences change. What changes is the question I ask (what is moral2 to do?).
Replies from: Jack↑ comment by Jack · 2011-09-12T05:15:13.285Z · LW(p) · GW(p)
Okay, then your morality isn't different from what I outlined here. You're just maybe less emotional about it (I probably overemphasized the matter of emotions in the post). When evaluating morality in the counterfactual, a realist would have to look at facts in that world. You project your internal preferences onto any proposed counterfactuals.
Put another way: I'm guessing you don't think your preferences justify a claim about whether or not a given action is moral. Rather, what you mean when you say some action is immoral is that the action is against (some subset of) your preferences. That sound right?
Replies from: Matt_Simpson↑ comment by Matt_Simpson · 2011-09-12T13:05:49.657Z · LW(p) · GW(p)
That does sound right, but moral realism could still be true - actually, the term "moral" is meaningless in our example. Moral1 realism and moral2 realism are what's at stake. Consider two scenarios - scenario1 and scenario2. In scenario1 my preferences are moral1 and in scenario2 my preferences are moral2. In scenario1, moral1 exists in the ideal map - my preferences are an instantiation of moral1 - so moral1 realism is true. Moral2 realism may or may not be true, depending on whether some other agent has those preferences. Similarly, in scenario2, moral2 realism is true and moral1 realism may or may not be true.
Replies from: Jack↑ comment by Jack · 2011-09-12T17:45:27.506Z · LW(p) · GW(p)
As written this isn't clear enough for me to make sense of.
Replies from: Matt_Simpson↑ comment by Matt_Simpson · 2011-09-12T21:49:27.439Z · LW(p) · GW(p)
Let me try to be more clear then. Your definition of moral realism is:
There exists a subset X of the ideal map such that X = morality
In our toy example, there is morality1 and morality2. Morality1 is my current preferences, morality2 is my current preferences + a preference for murder. So is moral1 realism true? What about moral2 realism?
Consider scenario1. Under this scenario, my preferences are my actual current preferences, i.e. they are morality1. Now we return to the questions. Is moral1 realism true? Well, my preferences are a subset of the ideal map and in this scenario my preferences are the same as morality1, so yes, moral1 realism is true. Is moral2 realism true? My preferences are not the same as morality2, but someone else's preferences could be, so we don't have enough information to decide this statement.
Scenario2, where my preferences are the same as morality2, is analogous (moral2 realism is true, moral1 realism is undecidable without further information).
Is that clearer?
Replies from: Jack↑ comment by Jack · 2011-09-13T19:28:40.188Z · LW(p) · GW(p)
It sounds like you are arguing for meta-ethical relativism where whether or not a moral judgment is true or false is contingent on the preferences of the speaker making the moral judgment. Is that right?
Replies from: Matt_Simpson↑ comment by Matt_Simpson · 2011-09-13T21:28:13.271Z · LW(p) · GW(p)
Not really. Whether a moral judgement is true or false is contingent on the definition of moral. If I say "what you're doing is bad!" I probably mean "it's not moral1" where moral1 is my preferences. If the hypothetical-murder-preferring-me says "what you're doing is bad!" this version of me probably means "it's not moral2" where moral2 are those preferences.
But those aren't the only definitions I could be using and in fact it's often ambiguous which definition a given speaker is using (even to the speaker). For example, in both cases in the above paragraph when I say "what you're doing is bad" I could simply mean "what you're doing goes against the traditional morality taught and/or practiced in this region" or "what you're doing makes me have a negative emotional reaction."
To answer the relativism question, you have to pin down the definition of moral. For example, suppose by "moral" we simply mean clippy's utility function, i.e. moral = paperclip maximizing. Now suppose clippy says "melting down 2 tons of paper clips is immoral." Is clippy right? Of course he is; that's the definition of immoral. Now suppose I say the sentence. Is it still true? It sure is, since we pinned down the meaning of moral beforehand.
If we substitute my own (much more complicated) utility function for clippy's as the definition of moral in this example, it becomes harder to evaluate whether or not something is moral, but the correct answer still won't depend on who's asking the question since "moral" is a rigid designator.
Replies from: Jack↑ comment by Jack · 2011-09-13T22:02:47.771Z · LW(p) · GW(p)
Whether a moral judgement is true or false is contingent on the definition of moral.
Of course.
But it just doesn't solve anything to recognize that 'moral' could be defined any way you like. There are actual social and linguistic facts about how moral language functions. The problem of meta-ethics, essentially, is that those facts happen to be paradoxical. Saying, "my definition of moral is just my preferences" doesn't solve the problem because that isn't anyone else's definition of moral and most people would not recognize it as a reasonable definition of moral. The metaethical answer consistent with that position might be "everyone (or lots of people) mean different things by moral". That position is anti-realist; it's just that instead of being skeptical about the metaphysics you're skeptical about the linguistics: you don't think there is a shared meaning for the word.
As an aside: I find that position less plausible than other versions of anti-realism (people seem to agree on the meaning of moral but disagree on which actions, persons and circumstances are part of the moral and immoral sets).
Replies from: Matt_Simpson, DaFranker↑ comment by Matt_Simpson · 2011-09-14T00:53:34.099Z · LW(p) · GW(p)
The first part of our disagreement arises because either you're implicitly using a different definition of moral antirealism than the one in your post, or I just don't understand your definition as you intended it. Whatever the case may be, let's set that aside - I'm pretty sure I know what you mean now and concede that under your definition - which is reasonable - moral antirealism is true even for the way I was using the term.
But it just doesn't solve anything to recognize that 'moral' could be defined any way you like.
I'm not saying that there aren't limits on what definitions of "moral" are reasonable, but the fact remains that the term is used in different ways by different people at different times - or at least it's not obvious that they mean the same thing by moral. Your post goes a long way towards explaining some of those uses, but not all.
There are actual social and linguistic facts about how moral language functions. The problem of meta-ethics, essentially, is that those facts happen to be paradoxical. Saying, "my definition of moral is just my preferences" doesn't solve the problem because that isn't anyone else's definition of moral and most people would not recognize it as a reasonable definition of moral.
Well if you think you have found a moral paradox, it may just be because there are two inconsistent definitions of "moral" in play. This is often the case with philosophical paradoxes. But more to the point, I'm not sure whether or not I disagree with you here because I don't know what paradoxes you are talking about.
As an aside: I find that position less plausible than other versions of anti-realism (people seem to agree on the meaning of moral but disagree on which actions, persons and circumstances are part of the moral and immoral sets).
As an aside, I'd say they disagree on both. They often have different definitions in mind, and even when they have the same definition, it isn't always clear whether something is "moral" or "immoral." The latter isn't necessarily an antirealist situation - both parties may be using the same definition and morality could exist in the ideal map (in the sense that you want), yet it may be difficult in practice to determine whether or not something is moral.
Replies from: Jack↑ comment by Jack · 2011-09-14T03:34:43.924Z · LW(p) · GW(p)
I'm not saying that there aren't limits on what definitions of "moral" are reasonable, but the fact remains that the term is used in different ways by different people at different times - or at least it's not obvious that they mean the same thing by moral. Your post goes a long way towards explaining some of those uses, but not all.
Maybe. I was aiming for dominant usage, but I think dominant usage in the general public turned out not to be dominant usage here, which is part of why the post wasn't all that popular :-)
Well if you think you have found a moral paradox, it may just be because there are two inconsistent definitions of "moral" in play. This is often the case with philosophical paradoxes. But more to the point, I'm not sure whether or not I disagree with you here because I don't know what paradoxes you are talking about.
To be clear: it's not moral paradoxes I'm worried about. I've said nothing and have few opinions about normative ethics. The paradoxical nature of moral language is that it has fact-like aspects and non-fact-like aspects. The challenge for the moral realist is to explain how it gets its non-fact-like aspects, and the challenge for the moral anti-realist is to explain how it gets its fact-like aspects. That's what I was trying to do in the post. I don't think there are common uses of moral language which don't involve both fact-like and non-fact-like aspects.
Fact-like: we refer to moral claims as being true or false, grammatically they are statements, they can figure in logical proofs, changing physical conditions can change moral judgments (you can fill in more).
Non-fact-like: categorically motivating (for undamaged brains at least), normative/directional like a command, epistemologically mysterious, in some accounts metaphysically mysterious, subject of unresolvable contention (you can fill in more).
comment by Will_Sawin · 2011-08-22T17:11:25.356Z · LW(p) · GW(p)
What does the arrow "Projected" mean? Why isn't there another arrow "Beliefs" to "The Map"?
Replies from: Jack↑ comment by Jack · 2011-08-22T17:38:41.848Z · LW(p) · GW(p)
The entire green circle on the right is just a zoomed-in version of the green circle on the left. The 'projected' arrow is just what the projectivist thesis is (third subsection). The idea is that our moral beliefs are formed by a basically illegitimate mechanism: projecting our utility function onto the external world. There isn't an arrow from "beliefs" to "the Map" because those are the same thing.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-08-23T23:52:51.510Z · LW(p) · GW(p)
Good clarification; I'm now pretty sure I understand how our beliefs relate.
I am suggesting that our moral beliefs are formed by a totally legitimate mechanism: projecting our utility function onto the external world.
If X is a zoomed-in version of Y, you can't project Z into X. Either Z is part of Y, in which case it's part of X, or it isn't part of Y, in which case it's not part of X.
Replies from: Jack↑ comment by Jack · 2011-08-24T01:08:04.883Z · LW(p) · GW(p)
I'm pretty confused by this comment, you'll have to clarify. If our moral beliefs are formed by projecting our utility function onto the external world I'm unsure of what you could mean by calling this process "legitimate". Certainly it doesn't seem likely to be a way to form accurate beliefs about the world.
If X is a zoomed-in version of Y, you can't project Z into X. Either Z is part of Y, in which case it's part of X, or it isn't part of Y, in which case it's not part of X.
Z is projected into X/Y. It's just too small to see in Y and I didn't think more arrows would clarify things.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-08-24T02:11:39.527Z · LW(p) · GW(p)
"projected onto the external world" isn't really correct. Moral beliefs don't, pretheoretically, feel like specific beliefs about the external world. You can convince someone that moral beliefs are beliefs about God or happiness or paperclips or whatever, but it's not what people naturally believe.
What I want to suggest is that moral beliefs ARE your utility function (and to the extent that your brain doesn't have a utility function, they're the closest approximation of one).
Otherwise, in the diagram, there would be two identical circles in your brain, one labeled "moral beliefs" and the other labeled "utility function".
Thus, it is perfectly legitimate for your moral beliefs to be your utility function.
Replies from: Jack↑ comment by Jack · 2011-08-24T02:43:34.051Z · LW(p) · GW(p)
"projected onto the external world" isn't really correct. Moral beliefs don't, pretheoretically, feel like specific beliefs about the external world. You can convince someone that moral beliefs are beliefs about God or happiness or paperclips or whatever, but it's not what people naturally believe.
Often pre-theoretic moral beliefs are entities unto themselves, sort of like laws of nature. People routinely think of morality as consisting of universal facts which can be debated. That's what makes them "beliefs". As far as I know nearly everyone is a pre-theoretic moral realist. Of course, moral beliefs might not feel quite the same as, say, beliefs about whether or not something is a dog. But they can still be beliefs.
What I want to suggest is that moral beliefs ARE your utility function (and to the extent that your brain doesn't have a utility function, they're the closest approximation of one).
Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.
A utility function doesn't constrain future experiences. That's the reason for the conceptual distinction between beliefs and preferences. The projection of our utility function onto our map of the external world (which turns the utility function into a set of beliefs) is illegitimate because it isn't a reliable way of forming accurate beliefs that correspond to the territory.
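As a rough illustration of the distinction being drawn here, a minimal Python sketch; the propositions, outcomes, and numbers are all invented for the example.

```python
# A minimal sketch of the belief/preference distinction drawn above.
# The propositions, outcomes, and numbers are invented for illustration.

# A belief assigns a probability to a future observation; it can be scored
# against what the territory actually does.
beliefs = {"it will rain tomorrow": 0.7}

def brier_score(beliefs, proposition, happened):
    """Beliefs pay rent: they are judged by how well they match observations."""
    p = beliefs[proposition]
    return (p - (1.0 if happened else 0.0)) ** 2

# A utility function ranks outcomes. There is no observation to score it
# against; it constrains which action gets chosen, not what gets anticipated.
utility = {"stay dry": 1.0, "get soaked": -1.0}

def choose(action_outcomes, utility):
    """Preferences constrain actions."""
    return max(action_outcomes, key=lambda a: utility[action_outcomes[a]])

print(brier_score(beliefs, "it will rain tomorrow", happened=True))  # ~0.09
print(choose({"take umbrella": "stay dry", "go without": "get soaked"}, utility))
```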
If you want to just use the word 'belief' to also describe moral principles that seems okay as long as you don't confuse them with beliefs proper.
In any case, it sounds like we're both anti-realists.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-08-24T03:16:23.323Z · LW(p) · GW(p)
If you want to just use the word 'belief' to also describe moral principles that seems okay as long as you don't confuse them with beliefs proper.
The reason I want to do this is that logically manipulating moral beliefs / preferences in conjunction with factual beliefs / anticipations makes sense.
But I think this is our disagreement:
A utility function doesn't constrain future experiences. That's the reason for the conceptual distinction between beliefs and preferences. The projection of our utility function onto our map of the external world (which turns the utility function into a set of beliefs) is illegitimate because it isn't a reliable way of forming accurate beliefs that correspond to the territory.
You say it's illegitimate because it doesn't constrain future experiences. If it constrained future experiences incorrectly, I would agree that it was illegitimate. If it was trying to constrain future experiences and failing, that would also be illegitimate.
But the point of morality is not to constrain our experiences. The point of morality is to constrain our actions. And it does that quite well.
Replies from: Jack↑ comment by Jack · 2011-08-24T03:38:06.178Z · LW(p) · GW(p)
But the point of morality is not to constrain our experiences. The point of morality is to constrain our actions. And it does that quite well.
Agreed! But that means morality doesn't consist in proper beliefs! You can still use belief language if you like, I do.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-08-24T04:04:40.569Z · LW(p) · GW(p)
And doing so is legitimate and not illegitimate.
Replies from: Jack
comment by prase · 2011-08-22T11:36:52.489Z · LW(p) · GW(p)
∃x(x ∈ IM) & (x = M)
Shouldn't x be a subset of IM rather than an element?
Also, do you somewhere define what the ideal map is?
Replies from: Jack↑ comment by Jack · 2011-08-22T11:47:24.483Z · LW(p) · GW(p)
To the first, I just changed it. To the second, I was attempting to do that in the first section, though obviously not formally. What I mean by it is the map that corresponds to the territory at the ideal limit Einstein is talking about.
Replies from: prase↑ comment by prase · 2011-08-22T13:09:33.308Z · LW(p) · GW(p)
What I mean by it is the map that corresponds to the territory at the ideal limit Einstein is talking about.
Well, yes, but is IM the set of sentences compatible with experimental results, or the set of sentences whose negation is incompatible with the results? What about sentences speaking about abstract concepts, not directly referring to experimental results?
Replies from: Jack↑ comment by Jack · 2011-08-22T18:35:06.446Z · LW(p) · GW(p)
This depends on your philosophy of science and what science ultimately decides exists. The exact answer doesn't really matter for the purposes of the post, and it's a huge question that I probably can't answer adequately in a comment. Basically, it's what would be in a universal theory of science. I'm not constraining it to eliminative reductionism - i.e. I have no problem including truths of economics and biology in IM in addition to truths about physics. Certainly the conjunction of 'sentences compatible with experimental results' and 'sentences whose negation is incompatible with experimental results' is too broad (are those sets different?). We would want to trim that set with criteria like generality and parsimony.
Replies from: prase↑ comment by prase · 2011-08-22T19:21:37.426Z · LW(p) · GW(p)
My concern was mainly with propositions which aren't tied to observation, albeit true in some sense. Mathematical truths are one example; moral truths may be another. The language is presumably able to express any fact about the territory, but there is no clear reason to think that every expression of the language represents a fact about the territory. The language may be broader. Therefore,
Now it might seem that [...] moral realism must be true (where else but the territory could morality be?)
seems a bit unwarranted. Morality could be only in the map.
Replies from: Jack↑ comment by Jack · 2011-08-22T19:25:48.355Z · LW(p) · GW(p)
Now I'm confused. Beliefs that don't correspond to the territory are what we call "wrong".
Replies from: prase↑ comment by prase · 2011-08-22T19:51:15.655Z · LW(p) · GW(p)
Is "this sentence is true" a wrong belief?
Replies from: Jack, lessdazed↑ comment by lessdazed · 2011-08-22T21:41:40.954Z · LW(p) · GW(p)
I understand why people might think this was a snarky and downvote-worthy comment with an obvious answer, but I greatly appreciated this comment and upvoted it. That is to say, it fits a pattern: questions whose answers are obvious to others, though the answer was not obvious to me.
What's worse, within the first five seconds of thinking about it the answer seemed obvious to me, until I thought about it a bit more. Even though I have tentatively settled on an answer basically the same as the one I thought up in those first five seconds, I believe that first thought was insufficiently founded, grounded, and justified until I had thought it through.
Replies from: prase↑ comment by prase · 2011-08-23T08:48:30.453Z · LW(p) · GW(p)
Just to clarify, I wanted to point out that sentences are not in the same category as beliefs (which in local parlance are anticipations of observations). There can be grammatically correct sentences which don't constrain anticipations at all, and not only the self-referential cases. All mathematical statements somehow fall in this category; just imagine what observations one anticipates from believing "the empty set is an empty set". (The thing is a little complicated with mathematical statements because, at least for the more complicated theorems, believing in them causes the anticipation of being able to derive them using valid inference rules.) Mathematical statements are sometimes (often) useful for deriving propositions about the external world, but themselves don't refer to it. Without further analysing morality, it seems plausible that morality defined as a system of propositions works similarly to math (whatever standards of morality are chosen).
The question is whether this should be included in the ideal map. To pursue the analogy with customary geographic maps, mathematical statements would correspond to descriptions of regularities about the map itself, such as "if three contour lines make nested closed circles, the middle one corresponds to a height between the heights of the outermost one and the innermost one". Such facts aren't needed to read the map and are not written on it.
If my remark seemed snarky, I apologise.
Replies from: Vladimir_Nesov, Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-08-23T09:01:07.193Z · LW(p) · GW(p)
Mathematical statements are sometimes (often) useful for deriving propositions about the external world, but themselves don't refer to it.
What's the distinction between the two? (Useful for deriving propositions about smth vs. referring.)
Replies from: komponisto↑ comment by komponisto · 2011-08-23T15:24:38.271Z · LW(p) · GW(p)
What's the distinction between the two? (Useful for deriving propositions about smth vs. referring.)
The derived "propositions about" are distinct from the mathematical statements per se. For example:
Mathematical statement: "2+2 = 4" (nothing more than a theorem in a formal system; no inherent reference to the external world).
Statement about the world: "by the correspondence between mathematical statements and statements about the world given by the particular model we are using, the mathematical statement '2+2=4' predicts that combining two apples with two apples will yield four apples".
↑ comment by Vladimir_Nesov · 2011-08-23T08:54:57.018Z · LW(p) · GW(p)
There can be grammatically correct sentences which don't constrain anticipations at all, and not only the self-referential cases. All mathematical statements somehow fall in this category; just imagine what observations one anticipates from believing "the empty set is an empty set".
If you build an inference system that outputs statements it proves, or lights up a green (red) light when it proves (disproves) some statement, then your anticipations about what happens should be controlled by the mathematical facts that the inference system reasons about. (More simply, you may find that mathematicians agree with correct statements and disagree with incorrect ones, and you can predict agreement/disagreement from knowledge about correctness.)
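A toy version of this point, with a made-up checker standing in for a real inference system:

```python
# A toy version of the green/red light idea: the light's color is fixed by a
# mathematical fact, so anticipations about the light are controlled by that fact.
# The checker and the statement tested are invented for the example.

def checker(statement_holds_for):
    """Green light iff the statement checks out on every tested case."""
    return "green" if all(statement_holds_for(n) for n in range(1, 1000)) else "red"

# Statement: 1 + 2 + ... + n equals n(n + 1)/2.
light = checker(lambda n: sum(range(1, n + 1)) == n * (n + 1) // 2)
print(light)  # anticipate "green", because the statement is a theorem
```

Your anticipation of the printed output is constrained by whether the identity actually holds, which is the sense in which the mathematical fact pays rent.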
Replies from: prase↑ comment by prase · 2011-08-23T12:23:35.261Z · LW(p) · GW(p)
If you build an inference system that outputs statements it proves, or lights up a green (red) light when it proves (disproves) some statement, then your anticipations about what happens should be controlled by the mathematical facts that the inference system reasons about.
That's why I have said "[t]he thing is a little complicated with mathematical statements because, at least for the more complicated theorems, believing in them causes the anticipation of being able to derive them using valid inference rules".
What's the distinction between the two? (Useful for deriving propositions about smth vs. referring.)
The latter are sentences which directly mention the object ("the planet moves along an elliptic trajectory") while the former are statements that don't ("an ellipse is a closed curve"). Perhaps a better distinction would be based on the amount of processing between the statement and sensory inputs; at the lowest level we'd find sentences which directly speak about concrete anticipations ("if I push the switch, I will see light"), while higher-level statements would contain abstract words defined in terms of more primitive notions. Such statements could be unpacked into a lower-level description by writing out the definition explicitly ("the crystal has O_h symmetry" into "if I turn the crystal 90 degrees, it will look the same and if I turn it 180 degrees..."). If a statement can be unpacked in a finite number of recursions down to the lowest level containing no abstractions, I would say it refers to the external world.
Replies from: Vladimir_Nesov, Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-08-23T19:19:13.671Z · LW(p) · GW(p)
"[t]he thing is a little complicated with mathematical statements because, at least for the more complicated theorems, believing in them causes the anticipation of being able to derive them using valid inference rules".
This doesn't look to me like a special condition to be excused, but as a clear demonstration that mathematical truths can and do constrain anticipation.
↑ comment by Vladimir_Nesov · 2011-08-23T19:17:28.951Z · LW(p) · GW(p)
"Directly mentioning" passes the buck of "referring", you can't mention a planet directly, the planet itself is not part of the sentence. I don't see how to make sense of a statement being "unpacked in finite number of recursions down to the lowest level containing no abstractions" (what's "no abstractions", what's "unpacking", "recursions"?).
(I understand the distinction between how the phrases are commonly used, but there doesn't appear to be any fundamental or qualitative distinction.)
Replies from: prase↑ comment by prase · 2011-08-24T12:53:14.902Z · LW(p) · GW(p)
There has to be a definition of base terms standing for primitive actions, observations and grammatical words (perhaps by a list; determining what to put on the list would ideally require some experimental research on human cognition). An "abstraction" is then a word not belonging to the base language, defined to be identical to some (possibly infinitely long) phrase and used as an abbreviation of it. By "unpacking" I mean replacing all abstractions by their definitions.
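A rough sketch of this unpacking procedure in Python, using an invented base vocabulary and a single made-up (and grammatically crude) abstraction:

```python
# A rough sketch of the unpacking procedure described above: abstractions are
# abbreviations for phrases over a base vocabulary, and unpacking replaces them
# by their definitions until only base terms remain. The base terms and the
# single definition below are invented toy data.

BASE_TERMS = {"the", "crystal", "is", "if", "i", "turn", "it", "90", "180",
              "degrees", "looks", "same", "and"}

DEFINITIONS = {
    "symmetric": "if i turn it 90 degrees it looks the same "
                 "and if i turn it 180 degrees it looks the same",
}

def unpack(sentence, definitions, base_terms):
    """Recursively replace abstractions by their definitions. A sentence
    'refers to the external world' if this bottoms out in base terms."""
    words = sentence.lower().split()
    if all(w in base_terms for w in words):
        return sentence
    expanded = " ".join(definitions.get(w, w) for w in words)
    if expanded.lower() == sentence.lower():
        raise ValueError("contains an abstraction with no definition")
    return unpack(expanded, definitions, base_terms)

print(unpack("the crystal is symmetric", DEFINITIONS, BASE_TERMS))
```

On this proposal, a sentence that raised the error here would be one that cannot be unpacked down to base terms, and so would not count as referring to the external world.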
comment by Lightwave · 2011-08-22T10:33:26.413Z · LW(p) · GW(p)
Here's what a moral realist might say:
The 'morality' module within the utility function is pretty similar across all humans.
Given that our evolved morality is in part used to solve cooperation and other game theoretic problems, a rational psychopath might want to self-modify to care about 'morality'.
↑ comment by MinibearRex · 2011-08-23T18:14:35.811Z · LW(p) · GW(p)
I would expect a rational psychopath to instead study game theory and try to beat human players who employ predictable strategies that can be exploited.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-08-27T21:21:47.707Z · LW(p) · GW(p)
If there's a long-term effective strategy for cheating--one that doesn't involve the cheater being detected and punished--why isn't everyone using it?
Replies from: MinibearRex↑ comment by MinibearRex · 2011-08-27T22:15:33.113Z · LW(p) · GW(p)
Because we evolved to care about things like fairness in an environment where everyone knew each other, and if you cheated someone, everyone else in the village knew it. And modern humans still employ their evolved instincts. Therefore, agents who lack moral concerns can exploit the fact that humans are using intuitions that were optimized for a different situation. For instance, they can avoid doing things so heinous that society as a whole tries to hunt them down, and once they have exploited someone, they can just move.