Derek Parfit, "On What Matters"

post by Alex Flint (alexflint) · 2011-07-07T16:52:51.007Z · LW · GW · Legacy · 17 comments


Derek Parfit has published his second book, "On What Matters". Here are reviews by Tyler Cowen and Peter Singer.

17 comments


comment by Manfred · 2011-07-08T01:18:14.221Z · LW(p) · GW(p)

Based solely on the reviews, I'm not impressed. The search for the One True something or other goes against the typically LessWrongian idea that human meaning gets assigned to things by computational processes inside human heads. Once you bear this in mind and ask "what does 'ought' mean?", you see that a fully general mind can "ought" to do whatever it damn well pleases. Also relevant.

Optimal solutions are contained within the optimization criteria.

Replies from: Peterdjones
comment by Peterdjones · 2011-07-08T13:47:03.135Z · LW(p) · GW(p)

Language is for communication: language is public. The correctness of a meaning comes from a language community, and only from there (dictionaries are a staging post; either they reflect the language community's usages or they are wrong). The proximate assignment of meanings to words within brains is likewise either in line with usage or wrong. You need a brain to understand a word, but a brain cannot grant correctness to any arbitrary meaning-word assignment.

So you ought not do as you damn well please. Especially if you enjoy serial killing.

The "meaning theory -- the idea that disagreements about morality are disagreements about the meanings of "should", "ought" and "good" -- is put forward to explain a fact about disaggreements. The "theory theory" is an alternative explanation. It is not the case that a disagreement about how to apply a word must be a disagreement about its meaning. In fact, disagreement about a implies common ground -- otherwise it is a case of two people talking past each other. It is not the case that understanding the dictionary meaning of a word, at the level of ordinary linguistic competence, gives competence in applying it. I understand the meaning of the word "cantonese", but I could not distinguish it from mandarin. In many areas, it requires specalised knowledge to apply a word. So, on the theory-theory, there must be a meaning of good/should that anyone can produce. Since there is no disagreement about the basic meaning, we would expect it sound obvious, a truism. I propose that the truisms in question are something like: good acts are praiseworthy, bad acts blameworthy. People who have a theory of X can make assignements, and can explain their rationale for doing so. People who do not have a theory of morality, most people, may or may not be able to make asignments intuiitively, but will not be able to explain their rational. Theists (and philosophers) can answer moral questions because they have a theory, not because they are aware of a some meaning that is denied to other English speakers.

And what Parfit is offering, and what many people cannot offer, is a theory of "ought", not a definition.

Replies from: Manfred
comment by Manfred · 2011-07-08T18:32:37.303Z · LW(p) · GW(p)

The "meaning theory" isn't quite what I"m getting at - the best name would probably be the "algorithm theory." It goes like this: there is some algorithm that determines whether an agent thinks X ought to happen - which we, as humans communicating, can agree means something or other general about the agent's moral thoughts or decision-making algorithm. This algorithm for sorting "ought" is like a definition - a definition is something we can use to sort objects. But it's not like any old definition - this definition cannot be guaranteed to be smaller than the brain of the agent.

Or to put it another way, by "definition" I mean the full specification of a cloud in idea-space.

So even when it's possible to use "ought" in the same general way to refer to some part of an agent, this only refers to the fuller algorithm inside an agent. And though these simple definitions are useful for communication, inside people's heads there is not a simple pointer like "ought not -> blameworthy" - there is a complicated bunch of neurons that takes in sensory information and outputs moral decisions. There is no reason why this complicated bunch of neurons should be exactly identical for each person.
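To make the analogy concrete, here is a toy sketch. The agents, rules, and function names are made up for illustration; they stand in for whatever messy neural computation actually does the sorting, which would not fit in a few lines.

```python
# Toy illustration only: each agent carries its own sorting algorithm for "ought not".
# The one-line rules below are hypothetical stand-ins for the complicated bunch of
# neurons; nothing here is claimed about any real person's morality.

def agent_a_ought_not(action: str) -> bool:
    """Agent A's sorter: flags a short, made-up list of actions."""
    return action in {"serial killing", "breaking promises"}

def agent_b_ought_not(action: str) -> bool:
    """Agent B attaches the same word to a different algorithm."""
    return action in {"breaking promises"}  # notably, not serial killing

for action in ["serial killing", "breaking promises", "planting a garden"]:
    print(action, "->", agent_a_ought_not(action), agent_b_ought_not(action))
```

Both functions answer to the same word, but there is no guarantee they compute the same thing - that's the whole point.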

Anyhow, back to some sort of topic:
You seem to be saying that Parfit is not claiming his theory as any sort of One True theory. Is this accurate? The reviews implied it, but maybe I read wrong.

Replies from: Peterdjones
comment by Peterdjones · 2011-07-08T19:22:04.924Z · LW(p) · GW(p)

This algorithm for sorting "ought" is like a definition - a definition is something we can use to sort objects. But it's not like any old definition - this definition cannot be guaranteed to be smaller than the brain of the agent.

Substituting "definition" for "meaning" isn't going to make much difference.

Or to put it another way, by "definition" I mean the full specification of a cloud in idea-space.

there is a complicated bunch of neurons that takes in sensory information and outputs moral decisions. There is no reason why this complicated bunch of neurons should be exactly identical for each person.

No. But the correct way to handle that theory is to say that different people have different theories/intuitions. Otherwise you fall into the trap of saying there are no real disagreements about morality, or that serial killer morality is perfectly valid because serial killers can make up their own definition of "moral".

Anyhow, back to some sort of topic: You seem to be saying that Parfit is not claiming his theory as any sort of One True theory. Is this accurate?

Surely anyone who argues for a theory is saying that.

Replies from: Manfred
comment by Manfred · 2011-07-08T20:16:19.369Z · LW(p) · GW(p)

Anyhow, back to some sort of topic: You seem to be saying that Parfit is not claiming his theory as any sort of One True theory. Is this accurate?

Surely anyone who argues for a theory is saying that.

I dunno, you could just write down your theory to get it out there, maybe to convince other humans (which is possible, us being imperfect) as a means to spreading your morality.

the correct way to handle that theory is to say that different people have different theories/intuitions. Otherwise you fall into the trap of saying there are no real disagreements about morality, or that serial killer morality is perfectly valid because serial killers can make up their own definition of "moral".

Talking about "validity" just seems to be a way to disparage any morality/theory/set of intuitions that's not your own. From a general level, anything that fills the cognitive role we talked about as a definition, assigning things something like blameworthiness, counts. And yes, that means the serial-killer morality too.

The way to avoid "dead-end relativism" - e.g. not stopping serial killers even though you think it's bad - is to be comfortable with being an agent with a morality the same way a carefully-built AI could be an agent with a morality. It doesn't actually matter that your morality could have been something else. It is what it is, and so it's true that when I say "right" I'm referring to Manfred::right, some specific algorithm, and I'll still stop serial killers because it's the right thing to do.

We're back to trouble with words again. Like the tree falling in the forest making a sound, "right" can mean different things to different people, and the way to solve the problem is not to argue over whose "right" is right, but to use more words to just care about the actual state of the universe. So I'll stop a serial killer, but I won't argue with him about whether what he's doing is right. Well, I guess that's an oversimplification - humans are persuadable about the darndest things, so arguing about "right" is sometimes fruitful. But if the argument goes nowhere, I'm comfortable with him doing Killer::right, and me doing Manfred::right, and then I'll hit him with a big stick.

Replies from: Peterdjones
comment by Peterdjones · 2011-07-08T20:47:37.349Z · LW(p) · GW(p)

Talking about "validity" just seems to be a way to disparage any morality/theory/set of intuitions that's not your own.

You can promote metaethical objectivism without having any particular first-order moral theory in mind; and you can hold that the Meaning Theory is a poor argument for subjectivism without holding objectivism to be true.

From a general level, anything that fills the cognitive role we talked about as a definition, assigning things something like blameworthiness, counts.

Not equally. Not without some hefty question begging. Anything that assigns solutions to numeric problems could be called arithmetic, but some assignments are true and others false.

and yes, that means the serial-killer morality too.

Counts as correct?

The way to avoid "dead-end relativism" - e.g. not stopping serial killers even though you think it's bad - is to be comfortable with being an agent with a morality the same way a carefully-built AI could be an agent with a morality. It doesn't actually matter that your morality could have been something else. It is what it is, and so it's true that when I say "right" I'm referring to Manfred::right, some specific algorithm, and I'll still stop serial killers because it's the right thing to do.

Unless you are one.

I don't find it satisfactory to be compelled to stop things--to treat them as if they are wrong--without knowing why, or even that, they are wrong. I like reasons. I guess you could call me a rationalist.

We're back to trouble with words again. Like the tree falling in the forest making a sound, "right" can mean different things to different people,

I've just argued against that. This is going in circles.

and the way to solve the problem is not to argue over whose "right" is right, but to use more words to just care about the actual state of the universe. So I'll stop a serial killer, but I won't argue with him about whether what he's doing is right.

I think a universe where force is minimised in favour of persuasion is preferable.

But if the argument goes nowhere, I'm comfortable with him doing Killer::right, and me doing Manfred::right, and then I'll hit him with a big stick.

What if you are really wrong? What if you are the guy who is rounding up the slave owner's "property" and dutifully returning them to him?

Replies from: Manfred
comment by Manfred · 2011-07-08T21:58:08.504Z · LW(p) · GW(p)

"right" can mean different things to different people,

I've just argued against that. This is going in circles.

Didn't you just agree that the algorithm for sorting things into "right" and "not right" is different in different people? Are we really going to have to taboo "means" now?

if the argument goes nowhere, I'm comfortable with him doing Killer::right, and me doing Manfred::right, and then I'll hit him with a big stick.

What if you are really wrong? What if you are the guy who is rounding up the slave owner's "property" and dutifully returning them to him?

Then I'm wrong about some fact that I used in translating my morality into actions, e.g. skin color determines intelligence.
Hmm. Actually, it looks like things get complicated here because of human mutability - we can be persuaded of either a thing or its opposite in different conditions. So I really do have to stick with morality as the algorithm itself and not some run of it if I want consistency (though that's not strictly necessary).

Replies from: Peterdjones
comment by Peterdjones · 2011-07-08T22:24:06.073Z · LW(p) · GW(p)

Didn't you just agree that the algorithm for sorting things into "right" and "not right" is different in different people?

Yes, and I also argued, repeatedly, against saying that such an algorithm constitutes either a definition or a meaning.

What if you are really wrong? What if you are the guy who is rounding up the slave owner's "property" and dutifully returning them to him?

Then I'm wrong about some fact that I used in translating my morality into actions, e.g. skin color determines intelligence.

Not necessarily. You could be wrong about morality itself. You could think property rights are more important than liberty, or that people are means not ends.

So I really do have to stick with morality as the algorithm itself and not some run of it if I want consistency (though that's not strictly necessary).

Those are not your only choices.

Replies from: Manfred
comment by Manfred · 2011-07-09T00:15:18.573Z · LW(p) · GW(p)

You could be wrong about morality itself.

What sort of impact would being right or wrong about morality have that I could notice? For example, let's say someone thinks taxation is inherently morally wrong. What sort of observations are ruled out by this belief, such that making those observations would falsify the belief?

Replies from: Peterdjones
comment by Peterdjones · 2011-07-09T00:38:57.542Z · LW(p) · GW(p)

The question is what you should care about.

Is it rational to care more about being able to predict accurately than about inadvertently doing evil?

Replies from: Manfred
comment by Manfred · 2011-07-10T01:55:51.937Z · LW(p) · GW(p)

Hah, looks like someone went through and upvoted all your posts in the conversation while downvoting mine. Relativism has at least one anti-fan :P

I didn't understand your last reply, but I'd still like to ask you a favor: imagine what the universe would look like if there weren't any particular best morality, only moralities that were best by some individual's standard, which nobody else was under any particular cognitive necessity to accept. All the electrons would stay in their orbitals, things would look the same, but inside agents would just do what they did for their own reasons and not for others.

Okay, thanks.

Replies from: Peterdjones
comment by Peterdjones · 2011-07-10T12:03:58.130Z · LW(p) · GW(p)

My last post was a question (now edited). You were tacitly assuming that being able to predict is what matters, and that non-predictive theories can be disregarded. I was questioning whether being able to predict matters more than morality (in fact, I was doubting that anything does). I think the does-it-predict test is flawed in that sense.

I also think the other tacit assumption, that morality is non-predictive, is false. If you act on your morality, it will predict your observations... whether they are eventually of a death-row cell, or the receipt of a Nobel Peace Prize, for instance. If you don't act on it, why have it? Morality is connected to action; treating it as a theory whose job it is to predict the experiences of a passive observer is a category error.

The problem I have with subjective morality is that I can't see how it differs from no morality:

If subjective morality is true, everyone does as they see fit and there is no ultimate right or wrong to any of it.

If error theory is true, everyone does as they see fit and there is no ultimate right or wrong to any of it.

That, if correct, only goes as far as establishing that morality is either objective or non-existent.

You wonder what would change given the truth/falsity of objective morality. What would change is the truth and falsity (and rationality and irrationality) of things that are logically linked to it. You can either be in jail at time T or not; that's objective. If objective punishments and rewards can't be objectively justified, there is a certain amount of irrationality in the world. So what objective morality would change is that certain ideas and attitudes, and actions leading from them, would make sense; the world would be a more rational place.

Replies from: Manfred
comment by Manfred · 2011-07-10T21:14:05.865Z · LW(p) · GW(p)

If morality is totally non-predictive then it shouldn't be in our model of the world. It's like the sort of "consciousness" where, in the non-conscious zombie universe, philosophers write the exact same papers about consciousness despite not being conscious. If morality is non-predictive, then even if we act morally, it's for reasons totally divorced from morality! If morality is non-predictive, then when we try to act morally we might as well just flip a coin, because no causal process can access "morality"! That's why morality has to predict things, and that's why it has to be inside people's heads. Because if it ain't in people's heads to start with, there's no magical process that puts it there.

Replies from: Peterdjones, Peterdjones
comment by Peterdjones · 2011-07-11T13:03:54.213Z · LW(p) · GW(p)

If morality is totally non-predictive then it shouldn't be in our model of the world.

The point of morality is to change the world, not model it.

If morality is non-predictive, then even if we act morally, it's for reasons totally divorced from morality!

If we act morally, the morality we are acting on predicts our actions. Your beef seems to be with the idea that morality is not some universal causal law -- that you have to choose it. There will be a causal explanation of behaviour at the neuronal level, but that doesn't exclude an explanation at the level of moral reasoning, any more than an explanation of a computer's operation at the level of electrons excludes a software-level explanation.

If morality is non-predictive, then when we try to act morally we might as well just flip a coin, because no causal process can access "morality"!

A causal process can implement moral reasoning just as it can implement mathematical reasoning. Your objection is a category error, like saying software is an immaterial abstraction that doesn't cause a computer to do anything.

That's why morality has to predict things, and that's why it has to be inside peoples' heads.

Morality is inside people's heads since it is a form of reasoning. Where did I say otherwise?

Replies from: Manfred
comment by Manfred · 2011-07-11T16:04:01.872Z · LW(p) · GW(p)

Oh, okay, I take back my big rant then. Sorry :D

comment by Peterdjones · 2011-07-11T02:02:17.744Z · LW(p) · GW(p)

OK. You didn't get that morality is as predictive as you make it by acting on it. And you also didn't get that there are more important things than prediction.

comment by torekp · 2011-07-08T02:01:41.716Z · LW(p) · GW(p)

I haven't read "On What Matters". Tyler Cowen's review is so thoroughly plausible and unflattering that I probably won't. Peter Singer's, not so convincing.