Heading Toward: No-Nonsense Metaethics

post by lukeprog · 2011-04-24T00:42:00.360Z · score: 42 (49 votes) · LW · GW · Legacy · 59 comments

Part of the sequence: No-Nonsense Metaethics

A few months ago, I predicted [LW · GW] that we could solve metaethics in 15 years. To most people, that was outrageously optimistic. But I've updated since then. I think much of metaethics can be solved now (depending on where you draw the boundary around the term 'metaethics'). My upcoming sequence 'No-Nonsense Metaethics' will solve the parts that can be solved, and make headway on the parts that aren't yet solved. Solving the easier problems of metaethics will give us a clear and stable platform from which to solve the hard questions of morality.

Metaethics has been my target for a while now, but first I had to explain the neuroscience of pleasure [LW · GW] and desire [LW · GW], and how to use intuitions for philosophy.

Luckily, Eliezer laid most of the groundwork when he explained couldness [LW · GW], terminal and instrumental values [LW · GW], the complexity [LW · GW] of human desire [LW · GW] and happiness [LW · GW], how to dissolve philosophical problems [LW · GW], how to taboo [LW · GW] words and replace them with their substance [LW · GW], how to avoid definitional disputes [LW · GW], how to carve reality at its joints [LW · GW] with our words, how an algorithm feels from the inside [LW · GW], the mind projection fallacy [LW · GW], how probability is in the mind [LW · GW], reductionism [LW · GW], determinism [LW · GW], free will, evolutionary psychology, how to grasp slippery things [LW · GW], and what you would do without morality [LW · GW].

Of course, Eliezer wrote his own metaethics sequence. Eliezer and I seem to have similar views on morality, but I'll be approaching the subject from a different angle, I'll be phrasing my solution differently, and I'll be covering a different spread of topics.

Why do I think much of metaethics can be solved now? We have enormous resources not available just a few years ago. The neuroscience of pleasure [LW · GW] and desire [LW · GW] didn't exist two decades ago. (Well, we thought dopamine was 'the pleasure chemical', but we were wrong.) Detailed models of reductionistic metaethics weren't developed until the 1980s and '90s (by Peter Railton and Frank Jackson). Reductionism has been around for a while, but there are few philosophers who relentlessly play Rationalist's Taboo [LW · GW]. Eliezer didn't write How an Algorithm Feels from the Inside [LW · GW] until 2008.

Our methods will be familiar ones, already used to dissolve problems ranging from free will to disease [LW · GW]. We will play Taboo with our terms, reducing philosophical questions to scientific ones. Then we will examine the cognitive algorithms that make it feel like open questions remain.

Along the way, we will solve or dissolve the traditional problems of metaethics: moral epistemology, the role of moral intuition, the is-ought gap, matters of moral psychology, the open question argument, moral realism vs. moral anti-realism, moral cognitivism vs. non-cognitivism, and more.

You might respond, "Sure, Luke, we can do the reduce-to-algorithm thing with free will or disease, but morality is different. Morality is fundamentally normative. You can't just dissolve moral questions with Taboo-playing and reductionism and cognitive science."

Well, we're going to examine the cognitive algorithms that generate that intuition, too.

And at the end, we will see what this all means for the problem of Friendly AI.

I must note that I didn't exactly invent the position I'll be defending. After I shared my views on metaethics with many scientifically-minded people in private conversation, many of them said something like "Yeah, that's basically what I think about metaethics, I've just never thought it through in so much detail and cited so much of the relevant science [e.g. recent work in neuroeconomics [LW · GW] and the science of intuition]."

But for convenience I do need to invent a name for my theory of metaethics. I call it pluralistic moral reductionism.

Next post: What is Metaethics? [LW · GW]

59 comments

Comments sorted by top scores.

comment by lukeprog · 2011-04-24T00:46:36.097Z · score: 11 (15 votes) · LW(p) · GW(p)

Oh, and...

Yes, the next post in this sequence will be "What the hell is metaethics?" :)

comment by timtyler · 2011-04-24T11:48:14.373Z · score: 0 (0 votes) · LW(p) · GW(p)

More to the point: what are the problems of metaethics?

comment by andreas · 2011-04-24T01:13:32.063Z · score: 9 (11 votes) · LW(p) · GW(p)

Back when Eliezer was writing his metaethics sequence, it would have been great to know where he was going, i.e., if he had posted ahead of time a one-paragraph technical summary of the position he set out to explain. Can you post such a summary of your position now?

comment by lukeprog · 2011-04-24T01:22:39.726Z · score: 9 (9 votes) · LW(p) · GW(p)

Hmmmm. What do other people think of this idea?

I suspect one reason Eliezer did not do this is that when you make a long list of claims without any justification for them, it sounds silly and people don't pay attention to the rest of the sequence. But if you had first stepped them through the entire argument, they would have found no place at which they could really disagree. That's a concern, anyway.

comment by komponisto · 2011-04-24T03:39:13.463Z · score: 13 (15 votes) · LW(p) · GW(p)

I endorse the summary idea. It will help me decide whether and how carefully to read your posts.

I would like to know what you think; depending on what it is, I may or may not be interested in the details of why you think it. For example, I'd rather not spend lots of time reading detailed arguments for a position I already find obvious, in a subject (like metaethics) that I have comparatively little interest in. On the other hand, if your position is something I find counterintuitive, then I may be interested in reading your arguments carefully, to see if I need to update my beliefs.

This is Less Wrong, and you have 5-digit karma. We're not going to ignore your arguments because your conclusion sounds silly.

Furthermore, you don't necessarily have to post the summary prominently, if you really don't want to. You could bury it in these comments right here, for example.

comment by TheOtherDave · 2011-04-24T02:18:04.336Z · score: 10 (12 votes) · LW(p) · GW(p)

My reaction to this idea depends a lot on how the sequence gets written.

If at every step along the way you appear to be heading towards a known goal, I'm happy to go along for the ride.

If you start to sound like you're wandering, or have gotten lost in the weeds, or make key assertions I reject and don't dispose of my objections, or I otherwise lose faith that you know where you're going, then having a roadmap becomes important.

Also, if your up-front list of claims turns out to be a bunch of stuff I think is unremarkable, I'll be less interested in reading the posts.

OTOH, if I follow you through the entire argument for an endpoint I think is unremarkable, I'll still be less interested, but it would be too late to act on that basis.

comment by andreas · 2011-04-24T01:31:29.600Z · score: 5 (5 votes) · LW(p) · GW(p)

I am more motivated to read the rest of your sequence if the summary sounds silly than if I can easily see the arguments myself.

comment by MinibearRex · 2011-04-24T06:19:30.659Z · score: 4 (10 votes) · LW(p) · GW(p)

I would vote against the summary idea. Just in general, I like it better if a writer starts off with observations, builds their way up with chains of reasoning, and gives the reader everything they need to reach the author's conclusion, as opposed to stating their position and then providing arguments for it. In terms of rationality, it's probably better to build to your conclusion.

In addition, if you are proposing anything controversial, posting a summary will spark debates before you have really given the requisite background knowledge.

comment by ata · 2011-04-24T06:22:26.882Z · score: 2 (4 votes) · LW(p) · GW(p)

Agreed on all counts. Plus it would just feel like a spoiler, knowing that there was supposed to be a lot building up to it.

(Maybe, to get the best of both options, Luke could post the summary in Discussions, marking it as containing philospoilers; that way people can read through the sequence unspoiled if they prefer, while those who want to see a summary in advance can do so, and discuss and inquire about it, with the understanding that "That question/argument will be answered/addressed in the sequence" will always be an acceptable response.)

comment by Cyan · 2011-04-24T01:31:41.230Z · score: 3 (7 votes) · LW(p) · GW(p)

I disagree with the grandparent and endorse not giving a summary.

comment by Sophronius · 2013-11-19T12:36:19.002Z · score: 2 (2 votes) · LW(p) · GW(p)

Agreed. I know from experience how hard it is to convince someone to change their position on metaethics. The reason for this is that if you post any specific example or claim that people disagree with, they will then look for reasons to reject your metaethics based on that alone. Posting only abstract principles prevents this. It's the same motivation as for "politics is the mind-killer" or any other topic that is both complex and something people feel strongly about (ideal breeding grounds for motivated reasoning).

Nonetheless, I would be very interested in seeing such a list.

comment by khafra · 2011-04-25T17:09:33.115Z · score: 1 (1 votes) · LW(p) · GW(p)

I vote for writing a summary, and including it with the last post of the sequence. That way, extra-skeptical people can wait until the sequence has been posted in its entirety before deciding to read it based on the summary, without losing much expected value.

comment by lukeprog · 2011-04-25T17:15:14.797Z · score: 2 (2 votes) · LW(p) · GW(p)

There will without a doubt at least be a summary toward the end of the sequence.

comment by shokwave · 2011-04-25T17:14:10.811Z · score: 1 (1 votes) · LW(p) · GW(p)

I think in practice what would happen is the skeptical people would disagree on each post, and then when presented with the summary would be compelled to disagree with it in order to remain consistent.

comment by khafra · 2011-04-25T17:21:47.746Z · score: 0 (0 votes) · LW(p) · GW(p)

You're right; that sounds like a likely failure unless skeptics could proactively choose to hide that sequence until they could read the summary, which the current LW codebase doesn't support.

comment by TheAncientGeek · 2013-11-19T12:08:36.313Z · score: 0 (0 votes) · LW(p) · GW(p)

I suspect one reason Eliezer did not do this is that when you make a long list of claims without any justification for them, it sounds silly

Why doesn't that apply to abstracts?

comment by CharlesR · 2011-04-24T16:11:06.747Z · score: 0 (2 votes) · LW(p) · GW(p)

That's sort of like reading the end of a novel before you buy it. If you do include a summary, please announce what you're doing and make it something we can skip.

comment by TimFreeman · 2011-04-25T01:07:04.344Z · score: 5 (5 votes) · LW(p) · GW(p)

Novels are meant to be entertaining. Luke's metaethics post(s) would be meant to be useful, so the analogy isn't valid. Even so, novels frequently have a summary on the inside flap of the dust cover. I hope to see the summary.

comment by XiXiDu · 2011-04-24T10:37:11.144Z · score: 0 (2 votes) · LW(p) · GW(p)

I suspect one reason Eliezer did not do this is that when you make a long list of claims without any justification for them, it sounds silly and people don't pay attention to the rest of the sequence.

Yeah, I am still waiting for someone to thoroughly cite all the relevant science to back up AI going FOOM.

comment by NMJablonski · 2011-04-24T14:42:29.499Z · score: 8 (14 votes) · LW(p) · GW(p)

The only thing I have consistently rejected on LW is the metaethics. I find that a much simpler Friedmanite explanation of agents pursuing their separate interests fits my experience.

For example, I would pay a significant amount of money to preserve the life of a friend, and practically zero money to preserve the life of an unknown stranger. I would spend more money to preserve the life of a successful scientist or entrepreneur, than I would to preserve the life of a third world subsistence farmer.

This is simply because I value those persons differently. I recognize that some people have an altruistic terminal value of something like:

"See as many agents as possible having their preferences fulfilled."

... and I can see how the metaethics sequence / discussion are necessary for reducing that terminal value to a scientific, physical metric by which to judge possible futures (especially if one wants to use an AI). But, since I don't share that terminal value, I'm consistently left underwhelmed by the metaethics discussions.

That said, this looks like an ambitious sequence. Good luck!

comment by Douglas_Knight · 2011-04-28T22:52:28.129Z · score: 4 (4 votes) · LW(p) · GW(p)

You are talking about ethics, not meta-ethics.

comment by djcb · 2011-04-24T11:53:58.732Z · score: 7 (7 votes) · LW(p) · GW(p)

It's a pretty bold claim to state that you can solve meta-ethics!

I think one important thing to make explicit up front is what you mean by "solving", and how we can see that your particular solution is the correct one. I mean, it might be possible to show that a system is consistent (which is non-trivial...), maybe even that it is practical, but apart from that, I have a hard time seeing meta-ethics as a problem with a definite solution.

comment by XiXiDu · 2011-04-24T16:53:46.318Z · score: 4 (4 votes) · LW(p) · GW(p)

...and how we can see that your particular solution is the correct one.

I think this is a very important point and would like to see it being addressed.

comment by wedrifid · 2011-04-24T04:47:09.038Z · score: 5 (5 votes) · LW(p) · GW(p)

After sharing my views on metaethics with many scientifically-minded people in private conversation, many have said something like "Yeah, that's basically what I think about metaethics"

You can add me to the list. Except for the part about not citing the science. Your claims appear to be not-insane, a rather unusual feature when people are talking about this subject.

comment by CronoDAS · 2011-04-24T18:17:22.369Z · score: 3 (5 votes) · LW(p) · GW(p)

I think metaethics can be solved now. This solution will be the topic of my upcoming sequence 'No-Nonsense Metaethics.'

Good luck. I think you're going to need it.

comment by XFrequentist · 2011-04-24T18:37:25.099Z · score: 2 (4 votes) · LW(p) · GW(p)

I am confused about metaethics, and I anticipate becoming less confused as I read this planned sequence.

So thanks, man!

comment by shokwave · 2011-04-24T11:40:30.215Z · score: 2 (2 votes) · LW(p) · GW(p)

Any plans to turn this sequence (or parts of it, or things like it) into a philosophy journal article?

comment by lukeprog · 2011-04-24T14:41:47.082Z · score: 1 (1 votes) · LW(p) · GW(p)

It's too long. It would have to be a monograph.

comment by shokwave · 2011-04-24T14:49:58.880Z · score: 1 (1 votes) · LW(p) · GW(p)

Any plans for that? :P

comment by lukeprog · 2011-04-24T14:58:15.655Z · score: 5 (5 votes) · LW(p) · GW(p)

Yes, possibly.

comment by Swimmer963 · 2011-04-24T02:13:11.934Z · score: 2 (2 votes) · LW(p) · GW(p)

I am looking forward to reading the sequence! Metaethics is one of the areas where I'm foggy; a lot of the stuff I read confuses me immensely. A reductionist explanation sounds very helpful.

comment by Bebok · 2011-04-30T21:19:44.470Z · score: 1 (1 votes) · LW(p) · GW(p)

The dopamine 'pleasure chemical' link doesn't seem to work. Could you fix it?

comment by Armok_GoB · 2011-04-27T14:33:03.595Z · score: 1 (1 votes) · LW(p) · GW(p)

Looking forward to reading this. :)

comment by Vladimir_Nesov · 2011-04-24T13:46:58.645Z · score: 1 (15 votes) · LW(p) · GW(p)

I have a bad feeling about this.

comment by NancyLebovitz · 2011-04-24T14:44:12.030Z · score: 2 (2 votes) · LW(p) · GW(p)

Can you unpack the feeling to get more detail about what you intuit the problem is?

comment by Vladimir_Nesov · 2011-04-24T14:56:44.195Z · score: 7 (9 votes) · LW(p) · GW(p)

As one self-contained point (which doesn't bear most of my intuition, and isn't strong in itself), I don't see how finer details about the way the brain actually works (e.g. the roles of pleasure/desire) can be important for this question. The fact that this is apparently going to be important in the planned sequence tells me that it'll probably go in a wrong direction. Similarly with the emphasis on science, where the sheer load of empirical facts can distract from the way they should be interpreted.

comment by lukeprog · 2011-04-25T18:35:34.809Z · score: 4 (4 votes) · LW(p) · GW(p)

Just as a preview: I don't think the neuroscience of pleasure and desire is crucial for metaethics either, but it is useful for illustrating what possible moral reductions could mean. It can bring some clarity to our thinking about such matters. But yes, of course it matters hugely how one interprets the cognitive science relevant to metaethics.

comment by Kevin · 2011-04-24T14:31:16.935Z · score: -4 (14 votes) · LW(p) · GW(p)

wtf? You are like a black box that has something negative to say about every single post on Less Wrong. Lower your standards.

comment by Vladimir_Nesov · 2011-04-24T14:36:01.170Z · score: 11 (13 votes) · LW(p) · GW(p)

Multiple people made optimistic noises about the sequence, equally without explicit substantiation (we all have too little data to craft explicit reasons from). My honest expectation is that the sequence will go badly in some slightly subtle fashion that will produce confusion, little new understanding, and some debate. Maybe not, but that's what I expect most.

comment by Jordan · 2011-04-24T22:14:01.407Z · score: 5 (5 votes) · LW(p) · GW(p)

This is a more reasonable and measured reply. Negative comments are great, so long as they have substance.

comment by MarkusRamikin · 2011-06-18T18:01:28.639Z · score: 1 (1 votes) · LW(p) · GW(p)

Positive ones don't have to have substance? ;)

comment by Normal_Anomaly · 2011-06-18T18:49:18.451Z · score: 1 (1 votes) · LW(p) · GW(p)

Data point: I have often voted comments up just for being amusing. One of those was negative, but most were positive.

comment by Vladimir_Nesov · 2011-04-24T14:37:09.761Z · score: 6 (14 votes) · LW(p) · GW(p)

Lower your standards.

I care about this garden.

comment by Kevin · 2011-04-28T05:43:16.837Z · score: -4 (8 votes) · LW(p) · GW(p)

I very much look forward to your posts on metaethics, which will hopefully be vastly superior to lukeprog's posts.

comment by cousin_it · 2011-04-28T08:15:40.765Z · score: 4 (6 votes) · LW(p) · GW(p)

I downvoted your comment because Nesov's ability or inability to write good posts on metaethics is irrelevant to whether lukeprog's posts are good.

comment by thomblake · 2011-05-03T18:41:47.662Z · score: 1 (1 votes) · LW(p) · GW(p)

I would've voted this comment up instead of down if you had not included the suggestion "Lower your standards".

Raise your standards. A lot.

ETA: Full disclosure: Am a transhumanist.

comment by shokwave · 2011-04-24T14:49:27.946Z · score: 0 (12 votes) · LW(p) · GW(p)

You are like a black box that has something negative to say about every single post on Less Wrong.

That is a fact about posts on LessWrong, not a fact about Vladimir_Nesov.

comment by wedrifid · 2011-04-28T08:58:08.737Z · score: 1 (5 votes) · LW(p) · GW(p)

You are like a black box that has something negative to say about every single post on Less Wrong.

That is a fact about posts on LessWrong, not a fact about Vladimir_Nesov.

I don't see why it matters either way but it is quite clearly an alleged fact about Vladimir_Nesov.

comment by cousin_it · 2011-04-28T10:06:28.685Z · score: 2 (2 votes) · LW(p) · GW(p)

I guess shokwave meant that this fact reflects badly on LW's current state more than it reflects badly on Nesov. I kinda agree with that, and would love to read a blog that met Nesov's standards :-)

comment by wedrifid · 2011-04-28T11:00:12.211Z · score: 1 (3 votes) · LW(p) · GW(p)

I guess shokwave meant that this fact reflects badly on LW's current state more than it reflects badly on Nesov.

Probably both. Luke's contributions are among the best of those around. Even though they are nothing on Yvain's or Eliezer's back in the day.

comment by jwhendy · 2011-04-24T03:43:40.930Z · score: 0 (0 votes) · LW(p) · GW(p)

Wait... did you switch from Desire Utilitarianism? Or is Desirism within pluralistic moral reductionism? Or should I just wait for "What the hell is metaethics?"

comment by lukeprog · 2011-04-24T04:41:09.883Z · score: 1 (1 votes) · LW(p) · GW(p)

Desirism fits within pluralistic moral reductionism. It is one possible reduction of moral terms, but there are others. But yeah, I'm basically gonna ask you to wait for the sequence. :)

comment by jwhendy · 2011-04-25T15:17:26.312Z · score: 1 (1 votes) · LW(p) · GW(p)

That's fine -- your answer was enough to get a rough sketch of a Venn diagram :)

comment by rhollerith_dot_com · 2011-04-24T01:09:22.074Z · score: 0 (0 votes) · LW(p) · GW(p)

Somehow I have managed to live for 50 years without realizing that metaethics needs solving :)

comment by Furcas · 2011-04-24T02:28:32.327Z · score: -4 (8 votes) · LW(p) · GW(p)

I don't really see how you can add anything more than relatively unimportant details to Eliezer's metaethics sequence.

comment by Randaly · 2011-04-24T02:40:38.399Z · score: 10 (10 votes) · LW(p) · GW(p)

Even if he can add nothing new at all, the metaethics sequence was less well understood than the other sequences; a new explanation of Eliezer's position would be very useful.

comment by wedrifid · 2011-04-24T04:44:02.830Z · score: 2 (2 votes) · LW(p) · GW(p)

I don't really see how you can add anything more than relatively unimportant details to Eliezer's metaethics sequence.

By way of contrast I think Eliezer's sequence was good but stopped too soon. There was the foundation to take things further and, presumably, time constraints prohibited that. The interesting stuff starts a step or two on from where Eliezer left off and I hope Luke covers it. (Because then I can strike it off the rather long list of 'posts that I would write if I was a post writing kind of guy'.)

comment by AlexMennen · 2011-04-24T04:39:46.888Z · score: 0 (2 votes) · LW(p) · GW(p)

I estimate a somewhat greater than 50% chance that you are right. However, I still think that this sequence will be worth following in case lukeprog does come up with something interesting that Eliezer and the rest of us missed.

comment by lukeprog · 2011-04-24T04:45:52.465Z · score: 9 (9 votes) · LW(p) · GW(p)

If nothing else, I'm going to be thoroughly citing all the relevant science to back up the basic perspective, which Eliezer didn't do. And I'll be explaining directly how pluralistic moral reductionism answers or dissolves traditional questions in metaethics, and how it responds to mainstream metaethical views, which Eliezer mostly didn't do. And I think I'll be making it clearer than Eliezer did that rigid designation of value terms to the objects of (something like) second-order desires is not the only useful reduction of moral terms, and that it's not too profitable to argue endlessly about where to draw the boundary. (We should be arguing about substance not symbol; the debate shouldn't be about 'the meaning of right.') And I'll be covering in more depth what the cognitive sciences tell us about how our intuitions about morality are generated. And... well, you'll see.