P/S/A - Sam Harris offering money for a little good philosophy
post by Ben Pace (Benito) · 2013-09-01T18:36:40.217Z · LW · GW · Legacy · 79 comments
Sam Harris is here offering a substantial amount of money to anyone who can show a flaw in the philosophy of 'The Moral Landscape' in 1000 words or less, or at least to the author of the best attempt.
http://www.samharris.org/blog/item/the-moral-landscape-challenge1
Up to $20,000 is on offer, although that's only if you change his mind. Whilst we know that this is very difficult, note how few people offer large sums of money for the privilege of being disproven.
In case anyone does win, I will remind you that this site is created and maintained by people who work at MIRI and CFAR, which rely on outside donations, and with whom I am not affiliated.
Note: Is this misplaced in Discussion? I imagine that it could be easily overlooked in an open thread by the sorts of people who would be able to use this information well.
79 comments
Comments sorted by top scores.
comment by Jayson_Virissimo · 2013-09-02T04:40:36.217Z · LW(p) · GW(p)
Sam Harris is here offering a substantial amount of money to anyone who can show a flaw in the philosophy of 'The Moral Landscape' in 1000 words or less, or at least to the author of the best attempt.
More accurately, he is "offering a substantial amount of money to anyone who can" convince him to publicly acknowledge that there is a "flaw in the philosophy of 'The Moral Landscape' in 1000 words or less." This is quite a different feat from merely finding a flaw.
Up to $20,000 is on offer, although that's only if you change his mind. Whilst we know that this is very difficult, note how few people offer large sums of money for the privilege of being disproven.
I'm not so sure this is a wise decision if you are trying to improve your epistemic rationality. What he has just done is give himself a $10,000 reason not to change his mind.
Replies from: None, tgb
↑ comment by [deleted] · 2013-09-02T19:37:08.465Z · LW(p) · GW(p)
What he has just done is give himself a $10,000 reason not to change his mind.
Maybe. But how much is $10000 to Sam Harris? And how much credit would he get for publicly changing his mind in such a way that costs him $10000? And if he did so, he might be getting an excuse to market another book on morality in the bargain.
↑ comment by tgb · 2013-09-03T12:22:38.697Z · LW(p) · GW(p)
It looks like he's having a third party judge the results, but I can't tell since it's only a tweet and isn't explicit about whether or not the reward is determined by the third party. He tweeted:
"I am happy to say that Russell Blackford has agreed to judge the essays, pick the winner, and evaluate my response."
Replies from: shminux, Jayson_Virissimo
↑ comment by Shmi (shminux) · 2013-09-03T23:40:19.953Z · LW(p) · GW(p)
This is for the best essay, not for the main prize.
↑ comment by Jayson_Virissimo · 2013-09-03T18:28:59.779Z · LW(p) · GW(p)
If true, this is good news for the sanity of Sam Harris, although the original post showed no indication that this would be the case.
comment by buybuydandavis · 2013-09-02T03:35:45.236Z · LW(p) · GW(p)
Sam did something similar on his tour for the book. He invited people to come up and correct his views on his book.
It was either clueless or fundamentally dishonest. Sam can add 2 and 2. The problems with his book, like most others, are primarily conceptual, and impossible to correct in a 30-second response to Sam after his lecture. He chose not to engage the professional literature on his rehash of utilitarianism and moral objectivism, and then invited people to correct him in 30-second responses to his lecture. Unserious.
I don't think any of his fundamental moves pass a laugh test. But it's extremely difficult to help a conceptually confused person see the error of their ways. We can't do it for him. He has to decide to face serious interrogation by his critics, where he attempts to clarify his own argument and sees if he can do it. He's shown no indication of a willingness to do this. Instead, he'll just read essays, cram them into his conceptual confusion, and dismiss them, most likely claiming that their authors didn't understand his argument, when I'd argue that neither did he. What a pointless exercise.
comment by Ishaan · 2013-09-01T23:18:08.563Z · LW(p) · GW(p)
Here's my response. I had a LW-geared TL;DR which assumed shorter inferential distance and used brevity-aiding LW jargon, but then I removed it because I want to see if this makes sense to LW without any of that.
This debate boils down to a semantic confusion.
Let's consider the word "heat(1)". Some humans chose the word "heat" to mean "A specific subset of environmental conditions that lead to the observation of feeling hot, of seeing water evaporate..." and many other things too numerous to mention.
Once "heat" was defined, science could begin to quantify how much of it there was using "temperature". We can use our behavior to increase or decrease the heat, and some behaviors are objectively more heat-inducing than others.
But who defined heat in the first place? We did. We set the definition. It was an arbitrary decision. If our linguistic history had gone differently, "heat" could have meant any number of things.
If we were lucky, a neighboring culture would use "heat(2)" to mean "the colors red and yellow" and everyone would recognize that these were two separate words that meant different things but happened to be homonyms with a common root - since most warm things are red or yellow, it's easy to see how definitions diverge. No one would be so silly as to argue about heat(1) and heat(2). If we were unlucky, a neighboring culture might decide to use "heat(3)" to mean "subjective feelings resulting from temperature-receptor activation", and we'd have endless philosophical debates about what heat really is. All this useless debate because one culture decided to use "heat(3)" to refer to the subjective feeling of being hot, while another culture decided to use "heat(1)" to refer to a complex phenomenon which causes a bunch of observable effects, one of which is usually but not always the subjective experience of feeling hot.
One day, a group of humans which included one named Sam Harris decided to define "Good(1) and Best(1)" as "Well-Being among all Conscious Beings". (Aside - In an effort to address the central theme and avoid tangents, let's just assume that "Conscious Beings" here means "regular humans" and not create hypothetical situations containing eldritch beings with alien goals. Since we haven't rigorously defined "Well-Being" and "Conscious-Being", we won't go into the question of whether "Well-Being" is a coherent construct for all "Conscious Beings". We can deal with that problem later - that's not the central issue. For now, we will simply go by our common intuitions of what those words mean.)
Can you measure "well-being" in humans? Sure you can! You can use questionnaires to measure satisfaction, you can measure health and vibrancy and do all sorts of things. And you can arrange your actions to maximize these measurements, creating the Best(1) Possible Universe. And some hypotheses about what actions you ought to take to reach the Best(1) Possible Universe are incorrect, while others are correct.
One day, a group of humans which did not include one named Sam Harris decided to define "Good(2)" as "The sum of all my goals". Can science measure that? Actually, yes! - I can measure my emotional response to various hypothetical situations, and try to scientifically pinpoint what my goals are. I can attempt to describe my goals, and sometimes I will be incorrect about my own goals, and sometimes I will be correct - we've almost all been in situations where we thought we wanted something, and then realized we didn't. Likewise, there is a certain set of actions that I can take to maximize the fulfillment of my goals, to reach my Best(2) Possible Universe. And I can use observation and logic to measure your goals as well, and calculate your Best(2) Possible Universe.
But can my goals themselves be incorrect? No - my goals are embedded in my brain, in my software. My goals are physically a part of the universe. You can't point to a feature of the universe and call it "incorrect". You can only say that my goals are incompatible with yours, that our Best(2) Possible Universes are different. Mine is Better(2) for me, yours is Better(2) for you.
Our culture is unlucky, because Good(1) and Good(2) are homonyms whose definitions are far too close together. It doesn't make sense to ask which definition is "correct" and which is "wrong", any more than it makes sense to ask whether "Ma" means Mother (English) or Horse (Chinese). The entire argument stems from the two sides using the same word to mean entirely different things. It's a stupid argument, and there are no new insights gained from going back and forth on the matter of which arbitrary definition is better. If only Good(1) and Good(2) didn't sound so similar, there would be no confusion.
(Note: Of course, I've ridiculously oversimplified both Good(1) and Good(2), and I haven't gone into Good 1.1, Good 1.2, Good 2.1, Good 2.2, etc. But I think it's safe to say that most definitions of Good currently fall into either camp 1 or camp 2, and this argument is a misunderstanding between the definitional camps)
Replies from: printing-spoon, RobbBB, ChristianKl, army1987
↑ comment by printing-spoon · 2013-09-02T02:39:09.126Z · LW(p) · GW(p)
ask whether "Ma" means Mother (English) or Horse (Chinese).
"Ma" also means mother, depending on the tone. Actually, this example backfires since the word "mama" or some variation of it (ma, umma) means "mother" in almost every language in the world.
I haven't read the book but this sounds pretty good to me. Since Harris himself is the judge calling his argument "stupid" might not be the best idea.
Replies from: Ishaan
↑ comment by Ishaan · 2013-09-02T07:35:37.018Z · LW(p) · GW(p)
Oops. I guess it could be interpreted that way.
I meant that the argument between good(1) and good(2) is stupid. Harris is just one side of the debate - I'm saying the entire debate is misguided in the first place, much as it would be stupid to argue about the meaning of Ma.
Using good(1) isn't stupid, and neither is using good(2). It's just stupid to argue which one good really means.
↑ comment by Rob Bensinger (RobbBB) · 2013-09-05T19:00:51.787Z · LW(p) · GW(p)
Could someone provide a quote or two showing that Sam disagrees with any of the above? Steel-manning only a little, I believe Harris' goal isn't to find the One True Definition of morality, but to get rid of some useless folk concepts in favor of a more useful concept for scientific investigation and political collaboration. He antecedently thinks improving everyone's mental health is a worthy goal, so he pins the word 'morality' to that goal to make morality-talk humanly useful. Quoting him (emphasis added):
[T]he fact that millions of people use the term “morality” as a synonym for religious dogmatism, racism, sexism, or other failures of insight and compassion should not oblige us to merely accept their terminology until the end of time. [...]
Everyone has an intuitive “physics,” but much of our intuitive physics is wrong (with respect to the goal of describing the behavior of matter). Only physicists have a deep understanding of the laws that govern the behavior of matter in our universe. I am arguing that everyone also has an intuitive “morality,” but much of our intuitive morality is clearly wrong (with respect to the goal of maximizing personal and collective well-being).
I think this view is more sophisticated than is usually recognized. Though if so, it's definitely true that he doesn't do a lot to make that clear.
Replies from: Ishaan
↑ comment by Ishaan · 2013-09-05T23:10:56.821Z · LW(p) · GW(p)
I don't think he would disagree if he read it, which is why I thought it was worth submitting. I'm not attempting to change his opinion so much as to dissolve the debate he is taking sides in. Sam Harris's argument is right if we accept the premise that good=good(1), but wrong if we accept the premise that good=good(2).
My purpose is merely to point out that the choice of whether to use good(1) or good(2) is arbitrary. My aim is to make it explicit. The debate as framed by Sam Harris implicitly assigns good the value of good(1). You can't just do that implicitly when the crux of the debate is about the definition of good.
Replies from: david-spohr
↑ comment by David Spohr (david-spohr) · 2023-10-26T21:54:18.209Z · LW(p) · GW(p)
I think you misunderstand the premise. There is no known absolute "good" or "bad" in his description. There is just the landscape of peaks and valleys which we don't know the shape of. So the two "goods" you described can be two peaks partially intersecting each other.
↑ comment by ChristianKl · 2013-09-02T13:04:55.362Z · LW(p) · GW(p)
As far as heat goes, "hot" is a quite interesting word in the English language. Capsaicin, which activates temperature receptors in the mouth, gets described as hot even when it doesn't have a high temperature.
Replies from: DanArmak
↑ comment by DanArmak · 2013-09-02T21:23:55.051Z · LW(p) · GW(p)
Also, beautiful people are described as hot, even though they don't have a high temperature. (These people are often cool at the same time.) People also have hot tempers, hot merchandise (which needs to be fenced), and a huge amount of other hotness.
Replies from: gwern
↑ comment by gwern · 2013-09-02T22:55:34.098Z · LW(p) · GW(p)
Also, beautiful people are described as hot, even though they don't have a high temperature.
How about the response they can provoke in viewers? An increase in blood flow may well be hot, given the temperature of one's core vs one's skin.
Replies from: DanArmak
↑ comment by DanArmak · 2013-09-03T08:07:38.336Z · LW(p) · GW(p)
That's what the OED thinks. Today we've got people who are hot and cool at the same time, though...
↑ comment by A1987dM (army1987) · 2013-09-02T07:54:35.084Z · LW(p) · GW(p)
group of humans which did not include one named Sam Harris
Given how common a first name “Sam” is and how common a last name “Harris” is, I wouldn't be very sure of that. :-)
comment by Manfred · 2013-09-01T20:03:04.739Z · LW(p) · GW(p)
Constructing a response after reading his response to critics would be good. The core reservations he presents seem to be:
If you can say that there's no correct morality, why can't you say that there's no correct math, or no correct science?
If there's two different visions of well-being, isn't this just a small difference? ("This is akin to trying to get me to follow you to the summit of Everest while I want to drag you up the slopes of K2" [...] "In any case, I suspect that radically disjoint peaks are unlikely to exist for human beings.")
And he presents some rationalizations that seem to be ingrained:
"Is it unscientific to value health and seek to maximize it within the context of medicine? No. Clearly there are scientific truths to be known about health." That is, he conflates "there are truths" with "there is a truth of the sort I want."
Later he conflates 'an ideal world by my egalitarian values is possible' with 'so don't bother thinking about other people's values,' specifically citing selfish values. This is the logically even worse version of objection #2.
Replies from: somervta, buybuydandavis
↑ comment by buybuydandavis · 2013-09-04T03:17:22.099Z · LW(p) · GW(p)
Paraphrasing Sam:
If there's two different visions of well-being, isn't this just a small difference?
Not between the Deathists and me.
comment by Shmi (shminux) · 2013-09-01T21:09:56.729Z · LW(p) · GW(p)
What can convince a philosopher to change her mind, anyway? I mean, it's not like there is an experiment that can be conclusively set up. Is it some logical argument she is unable to find a fault in? If so, then how come there are multiple schools of philosophy disagreeing on the basics? Can someone point to an example of a (prominent) philosopher changing his/her mind and hopefully the stated and unstated reasons for doing so?
Replies from: pragmatist, Ishaan, Estarlio, Solvent
↑ comment by pragmatist · 2013-09-02T06:38:15.689Z · LW(p) · GW(p)
Hilary Putnam, one of the most prominent living philosophers, is known for publicly changing his mind repeatedly on a number of issues. In the Philosophical Lexicon, which is kind of an inside-joke philosophical dictionary, a "hilary" is defined thus:
A very brief but significant period in the intellectual career of a distinguished philosopher. "Oh, that's what I thought three or four hilaries ago."
One issue on which Putnam changed his mind is computational functionalism, a theory of mind he actually came up with in the 60s, which is now probably the most popular account of mental states among cognitive scientists and philosophers. Putnam himself has since disavowed this view. Here is a paper tracking Putnam's change of mind on this topic, if you're interested in the details.
The definition of functionalism from that paper:
Computational functionalism is the view that mental states and events – pains, beliefs, desires, thoughts and so forth – are computational states of the brain, and so are defined in terms of “computational parameters plus relations to biologically characterized inputs and outputs” (1988: 7). The nature of the mind is independent of the physical making of the brain: “we could be made of Swiss cheese and it wouldn’t matter” (1975b: 291). What matters is our functional organization: the way in which mental states are causally related to each other, to sensory inputs, and to motor outputs. Stones, trees, carburetors and kidneys do not have minds, not because they are not made out of the right material, but because they do not have the right kind of functional organization. Their functional organization does not appear to be sufficiently complex to render them minds. Yet there could be other thinking creatures, perhaps even made of Swiss cheese, with the appropriate functional organization.
The paper I linked has much more on the structure of Putnam's functionalism and his reasons for believing it.
The reasons for which Putnam subsequently rejected functionalism are a bit hard to convey briefly to someone without a philosophy background. The basic idea is this: many mental states have content, i.e. they somehow say something about the world outside the mind. Beliefs are representations (or possibly misrepresentations) of aspects of the world, desires are directed at particular states of the world, etc. This "outward-pointing" aspect of certain mental states is called, in philosophical parlance, the intentional aspect of mental states. Putnam essentially repudiated functionalism because he came to believe that the functional aspect of a mental state -- its role in the computational process being implemented by the brain -- does not determine its intentional aspect. And since intentionality is a crucial feature of some mental states, we cannot therefore define a mental state in terms of its functional role.
Putnam's arguments for the gap between the functional and intentional are again detailed in the paper I linked (section 3). It's kind of obvious that if we consider a computational process by itself we cannot conclusively determine what role that process is playing in the surrounding ecology -- syntax doesn't determine semantics. Putnam's initial hope had been that by specifying "biologically characterized inputs and outputs" in addition to the computational structure of the mental process, we include enough information about the relationship to the external world to fix the content of the mental state. But he eventually came up with a thought experiment (the now notorious "Twin Earth" experiment) that (he claimed) showed that two individuals could be implementing the exact same mental computations and have the exact same sensory and motor inputs and outputs, and yet have different mental states (different beliefs, for instance).
Another motivation for Putnam changing his mind is that he claimed to have come up with a proof that every open system can, with appropriate definitions of states, be said to be implementing any finite automaton. The gist of the proof is in the linked paper (section 3.2.1). If the conclusion is correct, then functionalism seemingly collapses into vacuity. All open systems, including rocks and carburetors, can be described as having any mental state you'd like. To avoid this conclusion, we need constraints on interpretation -- which physical process can be legitimately interpreted as a computational process -- but this tells against the substrate-independence that is supposed to be at the core of functionalism.
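To give the flavor of the triviality argument (a reconstruction of the standard gist, not a quotation from Putnam or the linked paper): suppose that over some interval an open physical system passes through distinct maximal states $s_1, \dots, s_n$ -- Putnam argues that open systems always do. Then for any run $q_1, \dots, q_n$ of a finite automaton, the mapping $f(s_i) = q_i$ makes the system's state transitions mirror the automaton's, so under $f$ the system "implements" the automaton. Blocking the construction requires saying which mappings $f$ are legitimate, which is exactly the constraint on interpretation mentioned above.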
So that's one example. Putnam came to believe in functionalism because he thought there were strong arguments for it, both empirical and theoretical, but he subsequently developed counter-arguments that he regarded as strong enough to reject the position despite those initial arguments. Putnam is particularly known for changing his mind on important issues because he has done it so many times, but there are many other prominent philosophers who have had significant changes of mind. Another very prominent example is Ludwig Wittgenstein, who is basically famous for two books, the first of which promulgated a radical view of the relationship between language, the mind and the world (an early form of logical positivism), and the second of which extensively (and, to my mind, quite devastatingly) criticized this view.
Replies from: Alejandro1, Protagoras, Jayson_Virissimo, cousin_it, shminux, Creutzer
↑ comment by Alejandro1 · 2013-09-02T23:40:49.207Z · LW(p) · GW(p)
Excellent response. Another example of a famous philosopher changing his mind publicly a lot is Bertrand Russell; he changed his views in all areas of philosophy, often more than once:
In metaphysics, he started his career as an Absolute Idealist (believing that pluralities of objects are unreal and only a universal spirit is real); he then became convinced of the reality of objects, extended his newfound realism to relations and mathematical concepts, becoming a Platonist of sorts, and later became more and more of a nominalist, though never a complete one.
Concerning perception, after switching first from idealism to a sort of naive realism, he developed a new theory in which physical objects reduce to collections of sense-data, and later repudiated this theory in favor of one where physical objects cause sense-data.
He also changed his views on the self, from seeing it as an entity to reducing it to a collection of perceptions.
Finally, in metaethics, he started out believing that the Good was an objective, independent property, but was convinced to abandon this view and become more of a naturalist and subjectivist by the arguments that Santayana raised against him. (Santayana's critique can be read here and is a fascinating early version of the kind of metaethical view accepted by Eliezer and most LWers).
↑ comment by Protagoras · 2013-09-02T14:54:12.732Z · LW(p) · GW(p)
At least one of Putnam's changes is a bit of a tricky case; he's famous for being a co-author of the early pro-reductionist essay "Unity of Science as a Working Hypothesis," and for later being one of the most prominent anti-reductionists. However, I have heard that the other co-author of that paper, Paul Oppenheim, paid Putnam (who was then just starting out and so not in the greatest financial shape) to help him write a paper advancing his own views. I've also heard that Putnam was not the only young scholar Oppenheim did this with. All of Oppenheim's well-known publications are co-authored, and I've actually heard that they all involved similar arrangements, but when I heard this story Putnam was cited as the instance my (highly trustworthy) source knew for certain (my source claimed to have heard this from Putnam himself, and is someone Putnam plausibly might have told this to).
↑ comment by Jayson_Virissimo · 2013-09-02T06:48:16.639Z · LW(p) · GW(p)
Excellent examples. Thank you.
↑ comment by cousin_it · 2013-09-02T10:38:27.630Z · LW(p) · GW(p)
It seems to me that both the "Twin Earth" experiment and the question of "which physical process can be legitimately interpreted as a computational process" can be easily solved if you view them as questions of degree rather than binary:
1) Having an identical twin on another Earth is the same as being uncertain about where you are. If I am uncertain whether water is H2O or XYZ, then my idea of "water" refers to a probabilistic mixture of H2O and XYZ.
2) The degree to which a physical process represents a computational process depends on the simplicity of the program that prints out the latter given the former.
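One way to make point 2 precise is conditional Kolmogorov complexity; this formalization is a reconstruction, not something cousin_it spells out:

$$\operatorname{degree}(P \to C) = 2^{-K(C \mid P)},$$

where $K(C \mid P)$ is the length in bits of the shortest program that outputs a description of the computation $C$ when given a description of the physical process $P$. A CPU running $C$ needs only a short interpretation program, so its degree is near 1; reading $C$ into a rock means packing essentially all of $C$ into the interpretation itself, driving the degree down toward $2^{-K(C)}$.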
↑ comment by Shmi (shminux) · 2013-09-02T07:51:15.615Z · LW(p) · GW(p)
Interesting. So the examples of Putnam and Wittgenstein show that a philosopher can be persuaded by his own logical arguments. Maybe some even listen to the arguments of others, who knows. I wonder what makes an argument persuasive to some philosophers and not to others.
Replies from: pragmatist
↑ comment by pragmatist · 2013-09-02T08:10:31.582Z · LW(p) · GW(p)
Well, there's a selection bias involved in published changes of mind that accounts for why the prominent examples involve philosophers being persuaded by their own arguments. If a philosopher is convinced into changing their mind by another philosopher's argument, they're unlikely to publish a paper announcing this.
I wonder what makes an argument persuasive to some philosophers and not to others.
Why do you think the issues involved here are different than those in other academic fields? Disagreement exists in every discipline, not just philosophy, although it is plausibly more pronounced in philosophy than in many other disciplines. In science, surely disagreement doesn't just boil down to one of the disputants being familiar with the empirical evidence and the other not being familiar with it, at least not in prominent cases. So what makes an argument persuasive to some scientists and not to others?
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-09-02T18:38:49.100Z · LW(p) · GW(p)
Why do you think the issues involved here are different than those in other academic fields?
I don't. The same issues exist in, e.g., physics when experimental validation is not easily available. For a recent example, see John Preskill's account of the recent conference about the black hole firewall paradox. But in physics there is at least a hope of experimental phenomena being predicted eventually and settling the argument. In philosophy there is no such hope, so it's a cleaner setup for studying the question
what makes an argument persuasive to some scientists and not to others?
↑ comment by Creutzer · 2013-09-02T06:56:22.452Z · LW(p) · GW(p)
Note that the fact that Putnam is so very (and almost uniquely) famous for this is evidence that changes of mind like this usually don't happen in philosophy. Do you know to what extent his change of mind was prompted by other people? (I admit I didn't read the paper.)
Replies from: pragmatist
↑ comment by pragmatist · 2013-09-02T07:05:47.673Z · LW(p) · GW(p)
He is famous not for changing his mind but for changing his mind repeatedly on a number of different theories that he himself brought into prominence, and also for how radical and foundational some of those changes have been. My supervisor used to say that he could delineate six distinct "versions" of Putnam. That is unusual in philosophy, but I don't think mind-changing itself is, at least not more so than in most other intellectual disciplines, including the sciences. Of course, maybe I'm just mistaken about the extent to which mind-changing occurs among individual scientists, since I'm not part of that community.
Putnam's change of mind, on this issue at least, was to a large extent prompted by arguments he developed himself, although his "Twin Earth" argument is similar to arguments developed by Saul Kripke for other purposes. I'm not sure about the degree of direct influence.
Replies from: Creutzer
↑ comment by Creutzer · 2013-09-02T07:46:43.645Z · LW(p) · GW(p)
He is famous not for changing his mind but for changing his mind repeatedly on a number of different theories that he himself brought into prominence, and also for how radical and foundational some of those changes have been.
You know, you're right.
↑ comment by Ishaan · 2013-09-01T23:56:30.324Z · LW(p) · GW(p)
Well shit...your post made me realize I've never really changed my mind on any non-empirical issue - although I have had blank spaces filled in, of course.
Would you consider EY prominent? He is here, at least. Here is a description of his conversion from the (I say surely false) belief that Aumann's agreement theorem would cause rational agents to behave morally to the (I say surely true) belief in No Universally Compelling Arguments. He did it at age 18 and wrote essays on it too, so it's not like he just filled in an empty space - he actually had to reject a previous belief, to which he had given a lot of thought.
http://lesswrong.com/lw/u2/the_sheer_folly_of_callow_youth/
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-03T02:54:21.213Z · LW(p) · GW(p)
I had the belief at age 18; I rejected it at 20-21.
↑ comment by Estarlio · 2013-09-13T00:33:57.489Z · LW(p) · GW(p)
Is it some logical argument she is unable to find a fault in? If so, then how come there are multiple schools of philosophy disagreeing on the basics?
Maybe their level of logic is just low, or they have bad thought habits in applying that logic. Or maybe there's some system level reward for not agreeing (I imagine that publish||die might have such an effect.)
Replies from: Estarlio
↑ comment by Estarlio · 2013-09-13T13:38:52.582Z · LW(p) · GW(p)
I don't see why you'd think it faulty to mention the possibilities there - remember I'm not claiming that they're true, just that they might be potential explanations for the suggested observation.
If you want to share the reason for the downvote, I promise not to dispute it (so you don't have to worry about it turning into a time sink) and to give positive karma.
↑ comment by Solvent · 2013-09-02T06:11:20.385Z · LW(p) · GW(p)
The famous example of a philosopher changing his mind is Frank Jackson with his Mary's Room argument. However, that's pretty much the exception which proves the rule.
Replies from: Protagoras
↑ comment by Protagoras · 2013-09-02T12:20:41.236Z · LW(p) · GW(p)
Jackson is the first example I thought of. As I understand it, he came to be convinced, particularly by the arguments of David Lewis, that rejecting physicalism made it harder, rather than easier, to explain what was going on. But calling it "the exception that proves the rule" seems lazy and unhelpful, especially in light of other examples people have mentioned here.
comment by Sophronius · 2013-09-02T16:28:03.526Z · LW(p) · GW(p)
The error with Harris' main point is hard to pin down, because it seems to me that his main fault is that his beliefs regarding morality aren't clearly worked out in his own head. This can be seen from his confusion as to why anyone would find his beliefs problematic, and his tendency to wave away criticism with claims that "it's obvious".
Interpreted favourably, I agree with his main point, that questions about morality can be answered using science, as moral claims are not intrinsically different from any other claim (no separate magisteria s'il vous plaît). Basically, what all morality boils down to is that people have certain preferences, and these preferences determine whether certain actions and outcomes are desirable or not (to those people that is). I agree with Harris that the latter can be deduced logically, or determined scientifically. Furthermore, the question of what people's preferences are in the first place can be examined using for example neuroscience. In this sense, questions of morality can be entirely answered scientifically, assuming they are formulated in a meaningful way (otherwise the answer is mu).
The problem is that Harris' main position can also be taken to mean that science can determine what preferences people ought to have in the first place, which is not possible as this is circular, and this is the main source of criticism he receives. Unfortunately Harris does not seem to get this, as he never addresses the issue: in his example of super-intelligent aliens, for instance, he states that it is "obviously" right for us to let them eat us if this will increase total utility. This implies that everyone should feel compelled to maximise total utility, though he supplies no argument as to why this should be the case. Unfortunately I am not confident I could convince Sam Harris of his own confusion, however.
I suspect that a winning letter to Sam Harris would interpret his position favourably, agree with him on most points, and then raise a compelling new point that he has not yet thought of that causes him to change his mind slightly but which does not address the core of his problem.
Replies from: RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2013-09-05T18:47:59.337Z · LW(p) · GW(p)
The error with Harris' main point is hard to pin down, because it seems to me that his main fault is that his beliefs regarding morality aren't clearly worked out in his own head.
I think his beliefs are worked out and make sense, but aren't articulated well. What he's really doing is trying to replace morality-speak with a new, slightly different and more homogeneous way of speaking in order to facilitate scientific research (i.e., a very loose operationalization) and political cooperation (i.e., a common language).
But, I gather, he can't emphasize that point because then he'll start sounding like a moral anti-realist, and even appearing to endorse anything in the neighborhood of relativism will reliably explode most people's brains. (The realists will panic and worry we have to stop locking up rapists if we lose their favorite Moral System. The relativists will declare victory and take this metaphysical footnote as a vindication of their sloppy, reflectively inconsistent normative talk.)
The problem is that Harris' main position can also be taken to mean that science can determine what preferences people ought to have in the first place, which is not possible as this is circular, and this is the main source of criticism he receives. Unfortunately Harris does not seem to get this as he never addresses the issue
This is not true. He recognizes this point repeatedly in the book and in follow-ups, and his response is simply that it doesn't matter. He's never claimed to have a self-justifying system, nor does he take it to be a particularly good argument against disciplines that can't achieve the inconsistent goal of non-circularly justifying themselves.
Check out his response to critics. That should clarify a lot.
In an example of super-intelligent aliens for example, he states that it is "obviously" right for us to let them eat us if this will increase total utility. This implies that everyone should feel compelled to maximise total utility, though he supplies no argument as to why this should be the case.
What do you mean by 'utility' here? If 'utility' is just a measure of how much something satisfies our values, then the obviousness seems a lot less mysterious.
I suspect that a winning letter to Sam Harris would interpret his position favourably, agree with him on most points, and then raise a compelling new point that he has not yet thought of that causes him to change his mind slightly but which does not address the core of his problem.
Yeah, I plan to do basically that. (Not just as a tactic, though. I do agree with him on most of his points, and I do disagree with him on a specific just-barely-core issue.)
Replies from: Sophronius, None
↑ comment by Sophronius · 2013-09-13T12:23:53.412Z · LW(p) · GW(p)
I did read his response to critics in addition to skimming through his book. As far as I remember his position really does seem vague and inconsistent, and he never addresses things like the supposed is-ought problem properly. He just handwaves it by saying it does not matter, as you point out, but this is not what I would call addressing it properly.
Utility always means satisfying preferences, as far as I know. The reason his answer is not obvious is that it assumes that what is desirable for the aliens must necessarily be desirable for us. In other words, it assumes a universal morality rather than a merely "objective" one (he assumes a universally compelling moral argument, to put it in Less Wrong terms). My greatest frustration in discussing morality is that people always confuse the ability to handle a moral issue objectively with being able to create a moral imperative that applies to everyone, and Harris seems guilty of this as well here.
Replies from: RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2013-09-13T17:59:49.051Z · LW(p) · GW(p)
he never addresses things like the supposed is-ought problem properly. He just handwaves it by saying it does not matter, as you point out, but this is not what I would call addressing it properly.
I don't know. What more is there to say about it? It's a special case of the fact that for any sets of sentences P and Q, P cannot be derived from Q if P contains non-logical predicates that are absent from Q and we have no definition of those predicates in terms of Q-sentences. All non-logical words work in the same way, in that respect.
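A sketch of why this holds (a reconstruction -- Rob doesn't spell it out -- and it sets aside sentences in which the extra predicate occurs only vacuously): if the predicate $O$ does not occur in $Q$, then

$$\mathcal{M} \models Q \;\implies\; \mathcal{M}[O := X] \models Q \quad \text{for every } X \subseteq |\mathcal{M}|,$$

since reinterpreting a symbol that never appears in $Q$ cannot change the truth of $Q$. So any sentence $P$ whose truth genuinely depends on the interpretation of $O$ is false in some model of $Q$, and hence $Q \nvDash P$: no stock of 'is'-premises entails an undefined 'ought'-conclusion.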
The interesting question isn't Hume's is/ought distinction, since it's just one of a billion other distinctions of the same sort, e.g., the penguin/economics distinction, and the electron/bacon distinction. Rather, the interesting question is Moore's Open Question argument, which is an entirely distinct point and can be adequately answered by: 'Insofar as this claim about the semantics of 'morality' is right, it seems likely that an error theory of morality is correct; and insofar as it is useful to construct normative language that is reducible to descriptions, we will end up with a language that does not yield an Open Question in explaining why that is what's 'moral' rather than something else.'
I agree Harris should say that somewhere clearly. But this is all almost certainly true given his views; he just apparently isn't interested in hashing it out. TML is a book on the rhetoric and pragmatics of science (and other human collaborations), not on metaphysics or epistemology.
The reason his answer is not obvious is that it assumes that what is desirable for the aliens must necessarily be desirable for us.
Ideally desirable, not actually desired.
In other words, it assumes a universal morality rather than a merely "objective" one (he assumes a universally compelling moral argument, to put it in less wrong terms).
No. See his response to the Problem of Persuasion; he doesn't care whether the One True Morality would persuade everyone to be perfectly moral; he assumes it won't. His claim about aliens is an assertion about his equivalent of our coherently extrapolated moral volition; it's not a claim about what arguments we would currently find compelling.
↑ comment by [deleted] · 2013-09-12T17:54:13.622Z · LW(p) · GW(p)
If you're willing to satisfy my curiosity, what's that specific issue? Would an argument falsifying his position on that issue amount to a refutation of the central argument of the book? If not, wouldn't your essay just be ineligible?
Replies from: RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2013-09-12T18:41:02.031Z · LW(p) · GW(p)
The issue I have in mind wasn't explicitly cited in the canonical summary he gives in the FAQ, but I asked Sam personally and he said the issue qualifies as 'central'. I can give you more details in February. :)
comment by DanArmak · 2013-09-01T20:29:24.332Z · LW(p) · GW(p)
note how few people offer large sums of money for the privilege of being disproven.
The usual reason for doing so is signalling: look how sure I am of my ideas, I am willing to put my money on the line. Most people who see this offer (aptly called a "challenge") won't hear "he would be happy to be disproven, what a rational fellow"; they will hear "he is sure he can't be disproven, what a confident fellow".
I haven't read Harris's book and don't know anything about it. However, I do feel that a genuine "challenge" should have a formal verification procedure for proposed answers, or at least a third party to judge them. Judging answers by whether they convince Harris himself requires extremely high confidence in his skills as a rationalist, even apart from his incentives.
On the other hand, what purpose is served by publishing the best answer even if it fails to convince him? He may end up publishing an answer that he thinks is completely wrong (and necessarily saying so), and maybe most other people will think it's wrong too (but that some other answer is right). The submitter will be rewarded with $1000 even though he has convinced no one, and nobody will change their opinions.
comment by Shmi (shminux) · 2013-09-03T23:40:25.539Z · LW(p) · GW(p)
I believe that many commenters here interpreted Harris uncharitably. He is not giving himself more reasons to not change his mind. He is not interested in an independent judge deciding who is right. He seems to want to genuinely figure out whether he is missing anything important -- to him! Not to other people. That's why he puts a lot of effort into listing, steelmanning, and addressing all the previously made arguments he can think of. If you think you have found an argument he did not bring up, he would likely be interested in hearing it. If you think that you have found an issue with his steelmanning attempt of an existing argument, he would likely be interested in hearing it.
comment by diegocaleiro · 2013-09-02T22:40:51.816Z · LW(p) · GW(p)
I agree with Sam Harris on many topics, and I really enjoyed The Moral Landscape. If you need a proofreading sparring coach to tell you how they feel about your argument, I'm available for such a task.
Shoot me a mail at diegocaleiro at the provider gmail. I'll be glad to be shown why people dislike Sam's arguments so much anyway.
comment by scientism · 2013-09-01T22:23:39.552Z · LW(p) · GW(p)
I'd be willing to give this a shot, but his thesis, as stated, seems very slippery (I haven't read the book):
"Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe."
This needs to be reworded but appears to be straightforwardly true and uncontroversial: morality is connected to well-being and suffering.
"Conscious minds and their states are natural phenomena, fully constrained by the laws of Nature (whatever these turn out to be in the end)."
True and uncontroversial on a loose enough interpretation of "constrained".
"Therefore, there must be right and wrong answers to questions of morality and values that potentially fall within the purview of science."
This is the central claim in the thesis - and the most (only?) controversial one - but he's already qualifying it with "potentially." I'm guessing any response of his will turn on (a) the fact that he's only saying it might be the case and (b) arbitrarily broadening the definition of science. Nevertheless, moral questions aren't (even potentially) empirical, since they're obviously seeking normative and not factual answers. But given that this is obvious, it's hard to imagine that one could change his mind. It's rather like being invited to challenge the thesis of someone who claims scientific theories are works of fiction. You've got your work cut out when somebody has found themselves that far off the beaten path. I suspect the argument of the book runs: this philosophical thesis is misguided, this philosophical thesis is misguided, etc., science is good, we can get something that sort of looks like morality from science, so science - i.e., he takes himself to be explaining morality when he's actually offering a replacement. That's very hard to argue against. I think, at best, you're looking at $2000 for saying something he finds interesting and new, but that's very subjective.
"On this view, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life."
Assuming "what they deem important in life" is supposed to be parsed as "morality" then this appears to follow from his thesis.
Replies from: buybuydandavis, timtyler, jmmcd
↑ comment by buybuydandavis · 2013-09-02T06:12:18.006Z · LW(p) · GW(p)
"Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe."
So if we couldn't suffer, we wouldn't have any values? I don't think so.
↑ comment by timtyler · 2013-09-02T01:42:41.933Z · LW(p) · GW(p)
He skips the qualifier in his FAQ:
Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena, fully constrained by the laws of Nature (whatever these turn out to be in the end). Therefore, there must be right and wrong answers to questions of morality and values that potentially fall within the purview of science. On this view, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.
↑ comment by jmmcd · 2013-09-02T10:07:33.171Z · LW(p) · GW(p)
Nevertheless, moral questions aren't (even potentially) empirical, since they're obviously seeking normative and not factual answers.
You can't go from an is to an ought. Nevertheless, some people go from the "well-being and suffering" idea to ideas like consequentialism and utilitarianism, and from there the only remaining questions are factual. Other people are prepared to see a factual basis for morality in neuroscience and game theory. These are regular topics of discussion on LW. So calling it "obvious" begs the whole question.
comment by RomeoStevens · 2013-09-01T21:19:21.312Z · LW(p) · GW(p)
It seems like he's groping towards the concept of CEV?
Replies from: calef
↑ comment by calef · 2013-09-02T04:06:36.262Z · LW(p) · GW(p)
I got that impression as well. And to be honest, I haven't ever seen a good argument for why CEV has any fixed points in morality-space. Or rather, if fixed points exist, it's not immediately obvious to me why two distinct CEV-flows couldn't result in mutually irreconcilable value systems.
Which is why Sam's argument isn't super convincing to me.
Replies from: Ishaan, RomeoStevens
↑ comment by RomeoStevens · 2013-09-02T10:38:09.112Z · LW(p) · GW(p)
I think he's essentially arguing that for any given set of minds, some CEV must exist. I do think he's somewhat confused in getting there though.
comment by [deleted] · 2013-09-01T19:28:29.164Z · LW(p) · GW(p)
Read the short FAQ underneath. At first glance it seems the book might be right about a lot of things. Damn.
Replies from: lukeprog
↑ comment by lukeprog · 2013-09-01T20:05:17.261Z · LW(p) · GW(p)
Yeah. I think the thing Harris should change his mind about is the simplicity of "well-being of conscious creatures is all that matters." We might care about other things, value extrapolation is needed and might surprise us, blah blah blah. I also don't like the way he frames the issue of defining morality; I'd prefer he talk more like this. But that's more of a semantic quibble.
comment by Carinthium · 2013-09-02T00:50:22.071Z · LW(p) · GW(p)
I don't have the book, so I don't think I'm eligible for the prize. Suffice to say that I've read his summary on "Response to Critics", and anybody who can't refute the tripe philosophy shown there (maybe he's got better in the book, I can't be sure) doesn't deserve to be considered anything more than a crap philosopher.
EDIT: Making criticisms as I go.
1- There is a fundamental difference between the question of science and the question of morality. Scientific inquiry perceives facts which are true and useful except for goals which run directly counter to science. Morality perceives 'facts' which are only useful to those who wish to follow a moral route.
2- Say I were a psychopath obsessed with exterminating all humanity for some reason. I do research on a weapon to do so using every known principle of Science, testing hypotheses scientifically, making dry runs, having peer review through similarly psychopathic colleagues, etc. Although most people would be outraged, they probably wouldn't call it unscientific.
Harris could object that this is further from what most people associate science with. However, scientific research is associated with a lot of things - white coats, for example. Why should morality be any better?
I might also point out that many projects seen in reality as scientific would be unscientific under Harris's definition.
3- As far as I can tell, Harris does not account for the well-being of animals. This is an ethical question his pseudo-philosophy cannot answer. It merely assumes humans are all that matters. He also cannot account for why all humans should be considered equal despite years of history showing that humans usually don't consider each other so.
4- Much human morality has little or no relationship to well-being. Say A murders B's entire family in cold blood. Not only B, but many others who witnessed the deed will have a moral desire for A to be punished independent of, and contrary to, human well-being. Deterrent is nowhere in their brains.
Replies from: Ishaan, jmmcd, buybuydandavis, buybuydandavis
↑ comment by Ishaan · 2013-09-02T15:48:09.665Z · LW(p) · GW(p)
I neither disagree nor agree with Harris (see my post for what I actually think), but I don't think you've understood the argument sufficiently to refute it. I'll pretend to be Harris and counter your arguments:
1) Scientific inquiry elucidates all facts that are available to our perception. Morality is a perception, therefore science can study it.
2) Yeah, so? Science doesn't force us to be moral - but it can tell us what is moral and what is not. The scientific psychopath would know that his behavior was immoral, and wouldn't care.
3, 4) Science will discover whether or not those humans are correct to believe that that course of actions is moral.
Read here: http://lesswrong.com/lw/fv3/by_which_it_may_be_judged/ to get Harris's viewpoint, stated more articulately
Replies from: Carinthium
↑ comment by Carinthium · 2013-09-03T01:51:15.080Z · LW(p) · GW(p)
1: Harris compares pursuing moral goals to pursuing health and claims they are fundamentally similar (i.e. both part of the basic purview of science). This is what I'm disputing here.
2: See the reply I've made already, both here and my other argument.
3: Harris could claim that a question of the worth of animals could be solved by checking the brains of humans, but this begs questions of why human brains are the only ones that are taken into account. In addition, human brains are likely often contradictory on the subject - a law of averages could be used, but why would it be valid?
4: Harris claims all morality is about the well-being of conscious creatures. That's what I'm objecting to here.
Replies from: TrE, Ishaan
↑ comment by TrE · 2013-09-03T06:04:20.083Z · LW(p) · GW(p)
I think 3) is your strongest point; may I try to expand on it?
I wonder, what is Sam's response to utility monsters, small chances of large effects and torture vs. dust specks? In saying that science can answer moral questions by examining the well-being of humans, isn't he making the unspoken assumption that there is a way to combine the diverse "well-being-values" of different humans into one single number by which to order outcomes, and, more importantly, that science can find this method? Then the question remains, how shall science do this? Is this function to be found anywhere in nature? Perhaps in the brains of conscious beings? What if these beings hold different views on what is "fair"?
I simply can't imagine what one would measure to determine what is the "correct" distribution of happiness, although that failure to imagine may be on my part.
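To make the aggregation worry concrete (an illustration with made-up numbers, not anything Harris or the comment above gives): two standard aggregation rules,

$$W_{\text{sum}}(o) = \sum_i u_i(o), \qquad W_{\text{min}}(o) = \min_i u_i(o),$$

already disagree on toy cases. With two people and outcomes $A = (10, 0)$ and $B = (4, 4)$, we get $W_{\text{sum}}(A) = 10 > 8 = W_{\text{sum}}(B)$ but $W_{\text{min}}(A) = 0 < 4 = W_{\text{min}}(B)$. Nothing you could measure in either outcome tells you which $W$ is the "correct" one; that is the question being pressed here.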
Replies from: buybuydandavis
↑ comment by buybuydandavis · 2013-09-03T22:09:34.883Z · LW(p) · GW(p)
Sam would be subject to all the usual objections to utilitarianism, altruism, and moral objectivism available in the existing literature. He has justified not addressing that literature with a glib comment that he was sparing people from boredom. As I said before, he is fundamentally unserious and even dishonest in arguing his case.
Replies from: TrE
↑ comment by Ishaan · 2013-09-03T05:53:13.028Z · LW(p) · GW(p)
this begs questions of why human brains are the only ones that are taken into account.
Harris has decided to define "good" as "that thing in human brains which typically corresponds to the word good".
Under this definition, an agent using an orange/blue compass rather than a black/white one doesn't have a different morality - rather, it's simply unconcerned with moral questions. "Good" and "Moral" are defined as the human-specific-value-thingies. That is why only human brains are taken into account - because they are embedded in his definition of "good".
Replies from: Carinthium
↑ comment by Carinthium · 2013-09-03T06:51:16.897Z · LW(p) · GW(p)
Yes, but he's effectively ignoring a significant number of ethical questions of 'Why humans?' In addition, the principle that all humans are about equally weighted appears to be significant in his morality.
↑ comment by jmmcd · 2013-09-02T10:17:25.111Z · LW(p) · GW(p)
I disagree with all your points, but will stick to 4: "Deterrent is nowhere in their brains" is wrong -- read about altruism, game theory, and punishment of defectors, to understand where the desire comes from.
Replies from: Carinthium
↑ comment by Carinthium · 2013-09-02T12:02:47.861Z · LW(p) · GW(p)
Evolutionarily it is a REASON why the desire evolved that way, but it is not the same thing as what the person FEELS, on a conscious or subconscious level. If you claim that evolutionary reasons are a person's 'true preferences', then it follows that a proper morality should focus on maximising everyone's relative shares of the gene pool at the expense of, say, animals rather than anything else.
EDIT: I'm also curious about your response to all of my arguments.
Replies from: jmmcd
↑ comment by jmmcd · 2013-09-02T12:48:55.728Z · LW(p) · GW(p)
If you claim that evolutionary reasons are a person's 'true preferences'
No, of course not. It's still wrong to say that deterrent is nowhere in their brains.
Concerning the others:
Scientific inquiry perceives facts which are true and useful except for goals which run directly counter to science. Morality perceives 'facts' which are only useful to those who wish to follow a moral route.
I don't see what "goals which run directly counter to science" could mean. Even if you want to destroy all scientists, are you better off knowing some science or not? Anyway, how does this counter anything Harris says?
Although most people would be outraged, they probably wouldn't call it unscientific.
Again, so what? How does anything here prevent science from talking about morality?
As far as I can tell, Harris does not account for the well-being of animals.
He talks about well-being of conscious beings. It's not great terminology, but your inference is your own.
Replies from: Carinthium
↑ comment by Carinthium · 2013-09-02T13:06:23.239Z · LW(p) · GW(p)
A- O.K, demonstrate that the idea of deterrent exists somewhere within their brains.
B- Although it would be as alien as being a paperclip maximiser, say I deliberately want to know as little as possible. That would be a hypothetical goal for which science would not be useful.
As for how this counters Harris: Harris claims that some things are moral by definition and claims that proper morality is a subcategory of science. I counterargue that the fundamental differences between the nature of morality and the nature of science are problems with this categorisation.
I'm not sure if Harris's health analogy is relevant enough to this part of the argument to put here, but it falls flat because health is relevant to far more potential human goals than morality is. Moral dilemmas in which a person has to choose between two possible moral values are plausibly enough addressed (though I have reservations), so I'll give him a pass on that one - but what about a situation where a person has to choose between acting selfishly and acting selflessly? You can say one is the moral choice by definition, depending on the definition of moral, but saying "It's moral so do it" leads to the question "Why should I do what is moral?" With health, people don't actually question it because it tends to support their goals, although there is a similarity Harris and his critics do not appear to realise, in that a person can and might ask "Why should I do what is healthy?" in some circumstances.
C- What I am trying to argue with my psychopath analogy is that something can be good science without in any way being something Sam Harris would recognise as 'moral'. The psychopath is in my scenario using the scientific method in every way except those which he can't by definition, given his goals - he even has a peer review committee! His behaviour is therefore just as scientific as that of the scientist trying to, say, cure cancer.
D- I was only acting from what I read in his responses to the critics, which was my disclaimer from the start. I made a mistake, but I left open the possibility of such for lack of time.
Replies from: jmmcd
↑ comment by jmmcd · 2013-09-02T14:12:32.788Z · LW(p) · GW(p)
O.K, demonstrate that the idea of deterrent exists somewhere within their brains.
Evolutionary game theory and punishment of defectors is all the answer you need. You want me to point at a deterrent region, somewhere to the left of Broca's?
You say that science is useful for truths about the universe, whereas morality is useful for truths useful only to those interested in acting morally. It sounds like you agree with Harris that morality is a subcategory of science.
something can be good science without in any way being moral that Sam Harris would recognise as 'moral'.
Still, so what? He's not saying that all science is moral (in the sense of "benevolent" and "good for the world"). That would be ridiculous, and would be orthogonal to the argument of whether science can address questions of morality.
Replies from: Carinthium
↑ comment by Carinthium · 2013-09-02T14:35:51.864Z · LW(p) · GW(p)
A- Not so. If the human does not consciously or subconsciously care about deterrent, evolutionary reasons are irrelevant.
B- Only if, and this is a big if, you agree with the Eliezer-Harris school of thought, which says some things are morally true by definition. Because Harris agrees with him, I was granting him that as his own unique idea of what being moral is. However, at that point I was concerned with demonstrating that morality cannot fit as a subcategory of science.
C- Harris appears to claim that there is a scientific basis for valuing wellbeing - he explicitly repudiates the hypothesis that there is none, by claiming it comparable to the claim that there is no scientific basis for valuing health.
Replies from: jmmcd
↑ comment by buybuydandavis · 2013-09-03T21:51:47.734Z · LW(p) · GW(p)
On 1, I believe you're begging the question on the is-ought divide, which is the point of contention with Sam.
On 2, my recollection is that Sam basically excommunicates psychopaths from the human race. They don't count. In the end, I don't think that particularly helps him, as he'll have to excommunicate anyone who isn't a universalist altruist, and not just for humans, but for all conscious creatures.
On 3, I believe you're mistaken. The usual rubric Sam's utilitarianism goes by in his circles is WBCC, the Well-Being of Conscious Creatures. He grants that other creatures can be conscious, that there are degrees of consciousness, and that their well-being counts in proportion to their degree of consciousness.
On 4, Sam is at least consistent, in that he'll argue that punishment for criminals is an icky leftover of our primate evolution, and fundamentally an evil in that it doesn't maximize WBCC, which is the standard by which Good is measured. The objective morality that Sam believes in is not the morality that people objectively have.
Replies from: Carinthium
↑ comment by Carinthium · 2013-09-04T00:26:35.767Z · LW(p) · GW(p)
Disclaimer- I only went from his responses to critics, in which some points weren't clear.
1: I just assumed (perhaps wrongly) that even Sam Harris would see the validity of an is-ought divide to some extent. If he doesn't, then I can refer to Hume and copy-paste his arguments for the win.
2: O.K then.
3: Refer to my disclaimer.
4: Which makes it even harder for him to go from an is to an ought, as he can't use the idea that he's merely following human intuitions somehow. He's following his own, very alien intuitions instead and can't justify them.
↑ comment by buybuydandavis · 2013-09-02T06:17:57.097Z · LW(p) · GW(p)
I don't have the book, so I don't think I'm eligible for the prize. Suffice to say that I've read his summary on "Response to Critics", and anybody who can't refute the tripe philosophy shown there (maybe he's got better in the book, I can't be sure) doesn't deserve to be considered anything more than a crap philosopher.
I have read the book, and consider it tripe all the way down. I've "discussed" it with others at his project-reason site as well. None of his supporters can make a coherent case out of what he said.