Beware the Nihilistic Failure Mode
post by Gram_Stone · 2015-07-09T15:31:53.016Z · LW · GW · Legacy · 24 comments
I have noticed that the term 'nihilism' has quite a few different connotations, and I do not think that this is a coincidence. Reputedly, the most popular connotation, and in my opinion the least well-defined, is existential nihilism, 'the philosophical theory that life has no intrinsic meaning or value.' I think that most LessWrong users would agree that there is no intrinsic meaning or value, but also that they would argue that there is contingent meaning or value, and that the absence of intrinsic meaning or value is no justification for being a generally insufferable person.
There are also the slightly similar but perhaps better-defined moral nihilism; epistemological nihilism; and the not-unrelated fatalism.
Here, it goes without saying that each of these positions is wrong.
If we want to make sense of the claim that physics is better at predicting than social science is, we have to work harder to explicate what it might mean. One possible way of explicating the claim is that when one says that physics is better at predicting than social science, one might mean that experts in physics have a greater advantage over non-experts in predicting interesting things in the domain of physics than experts in social science have over non-experts in predicting interesting things in the domain of social science. This is still very imprecise, since it relies on an undefined concept of "interesting things". Yet the explication does at least draw attention to one aspect of the idea of predictability that is relevant in the context of public policy, namely the extent to which research and expertise can improve our ability to predict. The usefulness of ELSI-funded activities might depend not on the absolute obtainable degree of predictability of technological innovation and social outcomes, but on how much improvement in predictive ability these activities will produce.

Let us hence set aside the following unhelpful question: "Is the future of science or technological innovation predictable?" A better question would be: "How predictable are various aspects of the future of science or technological innovation?" But often we will get more mileage out of asking: "How much more predictable can (a certain aspect of) the future of science or technological innovation become if we devote a certain amount of resources to studying it?" Or better still: "Which particular inquiries would do most to improve our ability to predict those aspects of the future of S&T that we most need to know about in advance?" Pursuit of this question could lead us to explore many interesting avenues of research which might result in improved means of obtaining foresight about S&T developments and their policy consequences.

Crow and Sarewitz, however, wishing to side-step the question about predictability, claim that it is "irrelevant": "preparation for the future obviously does not require accurate prediction; rather, it requires a foundation of knowledge upon which to base action, a capacity to learn from experience, close attention to what is going on in the present, and healthy and resilient institutions that can effectively respond or adapt to change in a timely manner."

This answer is too quick. Each of the elements they mention as required for preparation for the future relies in some way on accurate prediction. A capacity to learn from experience is not useful for preparing for the future unless we can correctly assume (predict) that the lessons we derive from the past will be applicable to future situations. Close attention to what is going on in the present is likewise futile unless we can assume that what is going on in the present will reveal stable trends or otherwise shed light on what is likely to happen next. It also requires prediction to figure out what kind of institutions will prove healthy, resilient, and effective in responding or adapting to future changes. Predicting the future quality and behavior of institutions that we create today is not an exact science.
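To make that explication concrete: below is a minimal sketch, in Python, of how one might measure the expert advantage in a domain by scoring forecasts. The forecast data are entirely invented, and Brier scoring is only one reasonable choice of accuracy measure.

```python
def brier_score(forecasts):
    """Mean squared error between predicted probabilities and binary outcomes
    (lower is better)."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

def expert_advantage(expert_forecasts, layperson_forecasts):
    """How much lower (better) the experts' Brier score is than the laypeople's."""
    return brier_score(layperson_forecasts) - brier_score(expert_forecasts)

# Entirely made-up forecasts: (predicted probability, 1 if the event occurred else 0).
physics_gap = expert_advantage([(0.9, 1), (0.2, 0)], [(0.6, 1), (0.5, 0)])
social_gap = expert_advantage([(0.6, 1), (0.5, 0)], [(0.55, 1), (0.5, 0)])

# Under these invented numbers, physics experts gain more over laypeople than
# social-science experts do, i.e. physics is "more predictable" in the
# explicated sense, whatever the absolute scores are.
print(physics_gap > social_gap)  # True
```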
24 comments
comment by buybuydandavis · 2015-07-10T16:34:26.450Z · LW(p) · GW(p)
If it's a giving up, it's a giving up on a conceptual confusion about a real thing that one has set a nonsensical standard for.
Daniel Dennett quotes from "Net of Magic", by Lee Siegel
Quote from book:
"I'm writing a book on magic, " I explain, and I'm asked, "Real magic?" By real magic people mean miracles, thaumaturgical acts, and supernatural powers. "No, " I answer: "Conjuring tricks, not real magic."
Dennett:
Real magic, in other words, refers to the magic that is not real, while the magic that is real, that can actually be done, is not real magic.
For many, "real" morality, "real" free will, and "real" knowledge are similar conceptual gibberish. The morality, free will, and knowledge that actually exist are rejected as not real morality, free will, and knowledge.
The kind of nihilist you describe is the one who has rejected the existence of "real" magic in his mind, but has not given it up in his heart, his valuations. His mind knows he can't have real magic, it has "given up" on real magic, but his heart still yearns for it, still judges the magic that actually exists as lesser, as not real magic. Real magic is felt to be shinier and fluffier and altogether better than the magic that actually exists, that's the one he wants, but knows that he can't have.
Hence that sense of futility and impotence you sense in them.
(I've never met anyone who calls himself a philosophical nihilist who fulfills this trope of the "fatalist nihilist". This imagined villain seems to live mainly in the mind of the believers in real magic, next door to the "misanthropic egoist".)
comment by UtilonMaximizer · 2015-07-09T17:26:30.609Z · LW(p) · GW(p)
Here, it goes without saying that each of these positions is wrong.
I am under the impression that many in this community are consequentialist and that all consequentialists are moral nihilists by default in that they don't believe in the existence of inherent moral truths (moral truths that don't necessarily affect utility functions).
↑ comment by Darklight · 2015-07-09T23:52:51.994Z · LW(p) · GW(p)
Uh, I was under the impression that most consequentialists are moral universalists. They don't believe that morality can be simplified into absolute statements like "lying is always wrong", but do still believe in conditional moral universals such as "in this specific circumstance, lying is wrong for all subjects in the same circumstance".
This is fundamentally different from moral relativism, which argues that morality depends on the subject, and from moral nihilism, which says that there are no moral truths at all. Moral universalism still holds that there are moral truths, but that they depend on the conditions of reality (in this case, on the consequences being good).
Even then, most utilitarian consequentialists believe in one absolute inherent moral truth, which is that "happiness is intrinsically good", or that "the utility function should be maximized."
Admittedly some consequentialists try to deny that they believe this and argue against moral realism, but that's mostly a matter of metaethical details.
↑ comment by Gram_Stone · 2015-07-09T17:39:11.075Z · LW(p) · GW(p)
I find that the nihilism-relativism-universalism trichotomy, among other things, doesn't really divide things well.
I would describe most LessWrong users as universalists that are not absolutists. If what is moral is what you value, and there is a fact of the matter as to what you value, then there is an objective morality, even if it is contingent rather than ontologically fundamental.
↑ comment by [deleted] · 2015-07-09T21:31:12.570Z · LW(p) · GW(p)
To me it seems you're just playing with words here. You can say it's an "objective contingent morality", or you can just say it's a "subjective" morality. Either way, you're not changing the underlying meaning.
↑ comment by TheAncientGeek · 2015-07-11T13:25:41.540Z · LW(p) · GW(p)
The problems with subjective morality are that it is too ready to vary, and that it makes coordination too hard to achieve. If objective contingent morality solves those problems, that would be a real and worthwhile difference.
↑ comment by ChaosMote · 2015-07-17T01:29:07.156Z · LW(p) · GW(p)
I think that using this notation is misleading. If I am understanding you correctly, you are saying that given an individual, we can derive their morality from their (real/physically grounded) state, which gives real/physically grounded morality (for that individual). Furthermore, you are using "objective" where I used "real/physically grounded". Unfortunately, one of the common meanings of objective is "ontologically fundamental and not contingent", so your statement sounds like it is saying something that it isn't.
On a separate note, I'm not sure why you are casually dismissing moral nihilism as wrong. As far as I am aware, moral nihilism is the position that morality is not ontologically fundamental. Personally, I am a moral nihilist; my experience shows that morality as typically discussed refers to a collection of human intuitions and social constructs - it seems bizarre to believe that to be an ontologically fundamental phenomenon. I think a sizable fraction of LW is of like mind, though I can only speak for myself.
I would even go further and say that I don't believe in objective contingent morality. Certainly, most people have an individual idea of what they find moral. However, this only establishes that there is an objective contingent response to the question "what do you find moral?" There is similarly an objective contingent response to the related question "what is morality?", or the question "what is the difference between right and wrong?" Sadly, I expect the responses in each case to differ (due to framing effects, at the very least). To me, this shows that unless you define "morality" quite tightly (which could require some arbitrary decisions on your part), your construction is not well defined.
Note that I expect that last paragraph to be more relativist than most other people here, so I definitely speak only for myself there.
↑ comment by Gram_Stone · 2015-07-17T03:06:49.157Z · LW(p) · GW(p)
I think that using this notation is misleading. If I am understanding you correctly, you are saying that given an individual, we can derive their morality from their (real/physically grounded) state, which gives real/physically grounded morality (for that individual). Furthermore, you are using "objective" where I used "real/physically grounded". Unfortunately, one of the common meanings of objective is "ontologically fundamental and not contingent", so your statement sounds like it is saying something that it isn't.
I used 'objective and contingent' instead of 'subjective' because ethical subjectivists are usually moral relativists. I noted that I was referring to an objective morality that is contingent rather than ontologically fundamental.
On a separate note, I'm not sure why you are casually dismissing moral nihilism as wrong. As far as I am aware, moral nihilism is the position that morality is not ontologically fundamental. Personally, I am a moral nihilist; my experience shows that morality as typically discussed refers to a collection of human intuitions and social constructs - it seems bizarre to believe that to be an ontologically fundamental phenomenon. I think a sizable fraction of LW is of like mind, though I can only speak for myself.
But there's that language again that people use when they talk about moral nihilism, where I can't tell if they're just using different words, or if they really think that morality can be whatever we want it to be, or that it doesn't mean anything to say that moral propositions are true or false.
I would even go further and say that I don't believe in objective contingent morality. Certainly, most people have an individual idea of what they find moral. However, this only establishes that there is an objective contingent response to the question "what do you find moral?" There is similarly an objective contingent response to the related question "what is morality?", or the question "what is the difference between right and wrong?" Sadly, I expect the responses in each case to differ (due to framing effects, at the very least). To me, this shows that unless you define "morality" quite tightly (which could require some arbitrary decisions on your part), your construction is not well defined.
I wouldn't ask people those questions. People can be wrong about what they value. The point of moral philosophy is to know what you should do. It's probably best to do away with the old metaethical terms and just say: To say that you should do something is to say that if you do that thing, then it will fulfill your values; you and other humans have slightly different values based on individual, cultural and perhaps even biological differences, but have relatively similar values to one another compared to a random utility function because of shared evolutionary history.
↑ comment by ChaosMote · 2015-07-17T06:23:42.192Z · LW(p) · GW(p)
But there's that language again that people use when they talk about moral nihilism, where I can't tell if they're just using different words, or if they really think that morality can be whatever we want it to be, or that it doesn't mean anything to say that moral propositions are true or false.
Okay. Correct me if any of this doesn't sound right. When a person talks about "morality", you imagine a conceptual framework of some sort - some way of distinguishing what makes actions "good" or "bad", "right" or "wrong", etc. Different people will imagine different frameworks, possibly radically so - but there is generally a lot of common ground (or so we hope), which is why you and I can talk about "morality" and more or less understand the gist of each other's arguments. Now, I would claim that what I mean when I say "morality", or what you mean, or what a reasonable third party may mean, or any combination thereof - that each of these is entirely unrelated to ground truth.
Basically, moral propositions (e.g. "Murder is Bad") contain unbound variables (in this case, "Bad") which are only defined in select subjective frames of reference. "Bad" does not have a universal value in the sense that "Speed of Light" or "Atomic Weight of Hydrogen" or "The top LessWrong contributor as of midnight January 1st, 2015" do. That is the main thesis of Moral Nihilism as far as I understand it. Does that sound sensible?
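To push the programming analogy a little further, here is a toy sketch (the frames and actions are invented, and nothing here is meant as a real moral theory) of a predicate that only evaluates once a frame of reference binds "Bad":

```python
# Invented frames of reference, purely for illustration.
frames = {
    "alice": {"bad": {"murder", "lying"}},
    "bob": {"bad": {"murder"}},
}

def is_bad(action, frame_name):
    """'<action> is Bad' only evaluates to a truth value once some frame
    binds 'Bad'; there is no frame-free answer, unlike a physical constant."""
    return action in frames[frame_name]["bad"]

print(is_bad("lying", "alice"))  # True
print(is_bad("lying", "bob"))    # False
# is_bad("lying", ???) -- there is no universal binding for "Bad" to consult.
```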
I wouldn't ask people those questions. People can be wrong about what they value. The point of moral philosophy is to know what you should do.
Alright; let me rephrase my point. Let us say that you have access to everything that can be known about an individual X. Can you explain how you compute their objective contingent morality to an observer who has no concept of morality? Your previous statement of "what is moral is what you value" would need to define "what you value" before it would suffice. Note that unless you can do this construction, you don't actually have something objective.
↑ comment by Gram_Stone · 2015-07-17T19:11:27.457Z · LW(p) · GW(p)
What you're proposing sounds more like moral relativism than moral nihilism.
I think that you're confusing moral universalism with moral absolutism and value monism. If a particular individual values eating ice cream, and in these particular circumstances eating ice cream has no consequences that conflict with that individual's other values, then it is moral for that individual to eat ice cream, and I do not believe it makes sense to deny that "it is moral for this individual to eat ice cream in these circumstances" is true. This does not mean that there is some objective reason to value eating ice cream, or that it is moral to eat ice cream regardless of the individual or the circumstances. The sense in which morality is universal is not on the level of actions or values, but on the level of utility maximization; and the sense in which it is objective is that it is not whatever you want it to be.
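A toy sketch of that last point, with invented agents and payoffs: the decision rule is universal, while the values it is applied to are agent-relative.

```python
def moral_action(actions, utility):
    """Universal on the level of utility maximization: pick the action that
    maximizes this agent's utility. What varies is the function, not the rule."""
    return max(actions, key=utility)

# Hypothetical agents, represented as action -> utility mappings.
ice_cream_lover = {"eat ice cream": 10, "abstain": 0}
lactose_intolerant = {"eat ice cream": -5, "abstain": 2}

for utilities in (ice_cream_lover, lactose_intolerant):
    print(moral_action(utilities.keys(), utilities.get))
# -> "eat ice cream" for one agent, "abstain" for the other: same rule,
#    different verdicts, and neither verdict is "whatever you want it to be".
```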
↑ comment by ChaosMote · 2015-07-18T17:09:31.407Z · LW(p) · GW(p)
What you're proposing sounds more like moral relativism than moral nihilism.
Ah, yes. My mistake. I stand corrected. Some cursory googling suggests that you are right. With that said, to me moral nihilism seems like a natural consequence of moral relativism, but that may be a fact about me and not the universe, so to speak (though I would be grateful if you could point out a way to be a moral relativist without being a moral nihilist).
I think that you're confusing moral universalism with moral absolutism and value monism.
The last paragraph of my previous post was a claim that unless you have an objective way of ordering conflicting preferences (and I don't see how you can), you are forced to work under value pluralism. I did use this as an argument against moral universalism, though that argument may not be entirely correct. I concede the point.
↑ comment by DefectiveAlgorithm · 2015-07-09T19:15:31.614Z · LW(p) · GW(p)
Personally, when I use the word 'morality' I'm not using it to mean 'what someone values'. I value my own morality very little, and developed it mostly for fun. Somewhere along the way I think I internalized it at least a little, but it still doesn't mean much to me, and seeing it violated has no perceivable impact on my emotional state. Now, this may just be unusual terminology on my part, but I've found that a lot of people appear, based on what they say about 'morality', to be using the term similarly to me.
↑ comment by Gram_Stone · 2015-07-09T19:28:20.491Z · LW(p) · GW(p)
You say what you do not mean by 'morality,' but not what you do mean.
If you mean that you have a verbal, propositional sort of normative ethical theory that you have 'developed mostly for fun and the violation of which has no perceivable impact on your emotional state,' then that does not mean that you are lacking in morality; it just means that your verbal normative theory is not in line with your wordless one. I do not believe that there is an arbitrary thing that you currently truly consider horrifying that you could stop experiencing as horrifying by the force of your will; or that there is an arbitrary horrible thing that you could prevent that would currently cause you to feel guilty for not preventing, and that you could not-prevent that horrible thing and stop experiencing the subsequent guilt by the force of your will. I do not believe that your utility function is open season.
↑ comment by DefectiveAlgorithm · 2015-07-09T19:54:12.735Z · LW(p) · GW(p)
I mean that what I call my 'morality' isn't intended to be a map of my utility function, imperfect or otherwise. Along the same lines, you're objecting that self-proclaimed moral nihilists have an inaccurate notion of their own utility function, when it's quite possible that they don't consider their 'moral nihilism' to be a statement about their utility function at all. I called myself a moral nihilist for quite a while without meaning anything like what you're talking about here. I knew that I had preferences, I knew (roughly) what those preferences were, I would knowingly act on those preferences, and I didn't consider my nihilism to be in conflict with that at all. I still wouldn't. As for what I do mean by morality, it's kinda hard to put into words, but if I had to try I'd probably go with something like 'the set of rules of social function and personal behavior which result in as desirable a world as possible the more closely they are followed by the general population, given that one doesn't get to choose what one's position in that world is'.
EDIT: But that probably still doesn't capture my true meaning, because my real motive was closer to something like 'society's full of people coming up with ideas of right and wrong the adherence to which wouldn't create societies that would actually be particularly great to live in, so, being a rather competitive person, I want to see if I can do better', nothing more.
↑ comment by Gram_Stone · 2015-07-09T20:10:55.143Z · LW(p) · GW(p)
It sounds like you agree with me, but are just using the words morality and nihilism differently, and are particularly using nihilism in a way that I don't understand or that you have yet to explicate.
It also seems to me that you're already talking about what you value when you talk about desirable worlds.
↑ comment by DefectiveAlgorithm · 2015-07-09T20:21:18.363Z · LW(p) · GW(p)
That's my point. You're saying the 'nihilists' are wrong, when you may in fact be disagreeing with a viewpoint that most nihilists don't actually hold on account of them using the words 'nihilism' and/or 'morality' differently to you. And yeah, I suppose in that sense my 'morality' does tie into my actual values, but only my values as applied to an unrealistic thought experiment, and then again a world in which everyone but me adhered to my notions of morality (and I wasn't penalized for not doing so) would still be preferable to me than a world in which everyone including me did.
↑ comment by Gram_Stone · 2015-07-09T20:26:53.602Z · LW(p) · GW(p)
But you still have yet to explicitly describe what you mean by nihilism. Could you? How have I misrepresented whom you believe to be the average self-identifying nihilist?
And yeah, I suppose in that sense my 'morality' does tie into my actual values, but only my values as applied to an unrealistic thought experiment, and then again a world in which everyone but me adhered to my notions of morality (and I wasn't penalized for not doing so) would still be preferable to me than a world in which everyone including me did.
Can you explain how the statement 'A world in which everyone but me does not murder is preferable to a world in which everyone including me does not murder' is a misinterpretation of this quotation?
↑ comment by DefectiveAlgorithm · 2015-07-09T20:33:22.750Z · LW(p) · GW(p)
What I meant when I called myself a nihilist was essentially that there was no such thing as an objective, mind-independent morality. Nothing more. I would still consider myself a nihilist in that sense (and I expect most on this site would), but I don't call myself that because it could cause confusion.
Can you explain how the statement 'A world in which everyone but me does not murder is preferable to a world in which everyone including me does not murder' is a misinterpretation of this quotation?
It isn't, although that doesn't mean I would necessarily murder in such a world.
EDIT: Well, my nihilism was also a justification for the belief that it's silly to care about morality, and in that sense at least I'm no longer a nihilist in the sense that I was. That was just one aspect of my 'my eccentricities make me superior, everyone else's eccentricities are silly' phase, which I think I moved beyond around the time I stopped being a teenager.
↑ comment by Gram_Stone · 2015-07-09T20:41:23.510Z · LW(p) · GW(p)
What I meant when I called myself a nihilist was essentially that there was no such thing as an objective, mind-independent morality. Nothing more. I would still consider myself a nihilist in that sense (and I expect most on this site would), but I don't call myself that because it could cause confusion.
I agree that morality is not in the quarks.
It isn't.
That doesn't seem like a huge bullet to bite?
↑ comment by DefectiveAlgorithm · 2015-07-09T20:52:01.578Z · LW(p) · GW(p)
What bullet is that? I implicitly agreed that murder is wrong (as per the way I use the word 'wrong') when I said that your statement wasn't a misinterpretation. It's just that as I mentioned before, I don't care a whole lot about the thing that I call 'morality'.
comment by Toggle · 2015-07-10T00:51:55.692Z · LW(p) · GW(p)
I've always been particularly frustrated with the dismissal of materialism as nihilism in the sense of 'the philosophical theory that life has no intrinsic meaning or value.'
What it really means is that life has no extrinsic value; we designate no supranatural agent to grant meaning to life or the universe. Instead, we rely on agents within the universe to assign meaning to it according to their own state; a state that is, in turn, a natural phenomenon. If anything, we're operating under the assumption that meaning in the universe is inherently intrinsic.
comment by [deleted] · 2015-07-09T16:36:02.745Z · LW(p) · GW(p)
I've noticed that people tend to resort to the above, and then cease theorizing about that class of questions, rather than follow their own line of thinking back out of the hole it led them into. If you follow the concepts that it entails, you either off yourself ( :( ) or end up meandering through life anyway.
And there is the next question, which is stated in the Sequences: what would you do anyway, if you had no morality or epistemic compass?
I doubt very many who take a nihilistic route manage to stay in the conversation about it for long, or if they do they lose coherency. Either way the proposition seems null (pardon) at a glance, excepting any casualties it costs us.
↑ comment by Gram_Stone · 2015-07-09T16:47:31.145Z · LW(p) · GW(p)
Whenever people tell me that there exists nothing of value, I ask them why they're so damned motivated to tell me about it.
↑ comment by [deleted] · 2015-07-09T17:03:24.417Z · LW(p) · GW(p)
I had a friend that was fairly confused about morality, although he was a decent person. He would only bring up thoughts that were adjacent to nihilistic concepts when a conversation was already going. He never bought into them, but I think he's still kinda epistemically paralyzed.
It is fairly obvious in his case that he feels that, for example, saving the world might be or most likely is impossible, although he hasn't verbally confirmed it. Meh. He's caught between just kinda living as he wants to and a vague, lonely sense that something more is possible and worthwhile.
My knowledge of morality isn't coherent enough for me to pull him out of it for sure, and so I haven't really tried. Potentially divisive conversations, and all. :/