I'd say that the incoherent speaker is arguing at DH(-1). DH0 would be an improvement. You would be counterarguing at DH(No) - argument by pointing out conversational emptiness.
(edited to clarify that it is the person who makes the incoherent argument who is arguing badly, and the person arguing against that who is doing something entirely outside the hierarchy.
Other DH(No) arguments-that-are-about-non-argument include "We aren't actually arguing about the same thing" and "let me take some time to do more reading before I reply.")
Point taken - in some cases, the significance of the gaps is more evident to the outside view.
In that case, we can replace "point out where in their argument they went wrong" with "point out where our underlying value judgments seem to diverge."
If they then try to argue that your values are wrong and theirs are right, you either have to move the discussion up a meta-level or, yes, resort to screaming.
An application of this hierarchy:
Jack the Scarecrow: My crystal healing pills will give you eternal life. For $50.00 each, you need never die, suckers.
--
DH0: "I'm not interested for myself, but can I buy you a border collie and give her some? If you're going to live forever, you're going to need a smart friend to make the really tricky decisions."
DH1: What, exactly, is your profit margin on these crystal healing pills? If we don't live forever, would you still make money off of them?
DH2: Any post that ends in the word "suckers" directed at the readers is difficult to read charitably.
DH3: > My crystal healing pills will give you eternal life.
WRONG.
According to this hierarchy, DH3 is arguing on a higher level than DH0, DH1, and DH2. And, well, maybe that's correct. "Higher" doesn't necessarily mean "more subtle." But in the absence of this hierarchy, I'd have been tempted to order the arguments from low- to high-level as 0 < 3 < 2 < 1.
(EDITED to remove potential applause light; EDITED again to better hit the idea of DH2)
Why are my opponents ignoring what I say because I said it angrily, or sadly, or confrontationally, or in passing, or whatever?
The way you say something may signal that you are trying to diminish their status. If you say it with a sufficiently negative tone, it may even be taken as a signal (a generally reliable signal) that you care more about diminishing their status than about having a truth-seeking discussion.
In other words, what wedrifid said, but less simply and more explicitly.
I think that in some contexts, like arguing over mathematical proofs (as orthonormal noted), spending a little time arguing with yourself to bring out X'Y'Z' is polite and a sign of good faith. In other cases, I'd rather just trot out A'B'C' early on, as long as it doesn't require too much effort, and deal with both arguments at once without ever explicitly raising X'Y'Z'.
I was aware of the genre it spoofed, but I didn't know that it was so specifically targeted. I'm tempted to try to find that made-for-TV movie and watch clips just to increase my appreciation of Airplane!
In this case, I'd even drop my initial thoughts about rudeness. If you can prove that somebody's gone down a mathematical blind alley, it's downright polite to do so, since there's no ambiguity about the relevance of the steel man here.
Ideally, a reasonable counterargument that applies to the strong form will also apply to the weak form without significant editing. If the person one was arguing with would have been receptive to DH7 in the first place, that alone should stop them from making the strong form argument - the countering evidence has already been provided.
Where this fails... well, I said "at first" in my thread-starter for a reason.
I don't ALWAYS have low confidence in the other arguer's ability to tolerate a steel man version of their own argument. I do have low confidence in the ability of most people, especially me, to decide what constitutes a non-gratuitous steel man. I have an unfortunate, but understandable, bias in favor of my own creations, and I suspect that this bias is widely shared.
I can respect the person I'm arguing with, and consider them to be truth-searching, and still not want to antagonize the part of their hardware that likes winning. I also dislike having my primate hardware antagonized unnecessarily; I tolerate it for the sake of truth-seeking, but it's not fun.
I see two likely cases here:
A) I come up with a tougher version of their argument in my head, in order to be as careful as possible, but I still have a good way to refute it. This is DH7.
In this case, announcing the tougher version doesn't get us any closer to the truth. A dead steel man is as dead as a dead straw man. I might as well refute what was actually said, rather than risk being unnecessarily smug.
B) I come up with a tougher version of their argument in my head, and I can't actually defeat the tougher version.
In this case, I definitely ought to announce this problem.
But this is not DH7 as posted. This is my actual purpose in making a steel man - the possibility that the steel man may actually force me to change my mind. I'm not trying to argue with my opponent on a higher level when I do this, I'm trying to argue myself out of being cognitively lazy.
A good rule of thumb: DH7 should be really really REALLY hard to do well if you're arguing with reasonably smart people who have thought carefully about their positions. In fact, it is so hard that anybody who could do it consistently would never need other people to argue with.
EDIT: In the interests of dealing with the worst possible construct, I should add:
A) In the case where openly announcing DH7-level arguments lets both parties see that they've misinterpreted each other, going to DH7 is a net win.
B) An expert DH7-level arguer may still need other people to argue with if they have been exposed to very different sets of evidence.
But generally speaking, the cognitive effort needed to communicate a steel-man version of someone else's position is better spent on expressing one's own evidence.
DH7 should be kept internal, at least at first. Being misinterpreted as trying to construct a straw man when you've been trying to do the opposite can derail a conversation. To actually believe that you've made a steel man, not a straw man, the person you're arguing with would have to admit that you've created a stronger argument for their own position than they could.
It's probably best to practice up to DH7 internally, and only up to DH6 vocally.
If we imagine arguments as soldiers, as they tend to be, the problem becomes even clearer:
(A and B are about to fight.)
A. Ah! My worthy opponent! I shall send my greatest soldier to crush you... GOLIATH! ATTACK!
B. His sword's a little wimpy. Let me give him a bazooka.
If I were A, I wouldn't trust that bazooka on B's word alone, I'd be annoyed at the slight against my blacksmiths, and, even if it turned out to be a totally legitimate bazooka, I would, at the very least, consider B a tactless grandstander.
(Though if the bazooka did work, I'd use it, obviously. I just wouldn't like using it.)
If the original work is itself a satire, do you try to make a humorless version of it?
Hmm...
"In the seminal Zucker, Zucker, and Abrams opus Airplane!, one character, played by Leslie Nielsen, asks another to pilot an passenger airliner in an emergency. The would-be pilot responds with incredulity, but is coolly rebuffed by the Leslie Nielsen character. This evinces laughter from the audience, as the exchange involves a confusion between two near-homophones."
Heh, heh... still funny.
For less goofy, more drily satirical stuff, I think that making a satire of the satire is still a viable option.
Because I accidentally derailed my last post into pedantry, let me try again with a clearer heuristic:
A TEST FOR ART YOU REALLY LIKE:
Try to make fun of it.
If you can make fun of it, and you still like it, then you don't like it just because it's sacred.
This doesn't have to be a deep parody - I don't really think I could write a deep parody of Bach's Magnificat in D. But I can definitely imagine the parts that move me the most, the sublime moments that touch me to my core, played by a synthesizer orchestra that only does fart noises.
If somebody enjoys something that they read or experience alone, then they must get some utility from art that isn't connected with the associated social signals. I suspect that there are many people who are capable of appreciating art without talking about it.
(This does not apply if they read something alone, brag about it, and try to signal super-high status and nonconformity by only liking obscure things. THAT is the status game that I associate with hipsters.)
I consider that sort of social signaling basically orthogonal to liking art for being pretty, funny, thought-provoking, or sublime. Art that is liked solely for social reasons is unlikely to survive a change in social environment.
(EDITED for pronoun trouble)
That's common to every art, apart from perhaps cinema or literature. Modern art? Just a load of paint thrown at canvases and unmade beds. Modern music? Just a load of random notes strung together. Modern poetry? Doesn't even rhyme.
I'm not sure which is worse - liking all modern art because one is supposed to like it, or hating all modern art because one is supposed to hate it. Either way, the category lines are not being drawn usefully. As the original post notes, there ought to be more to this than just going along with social signals.
W. H. Auden had an excellent heuristic for dealing with this problem:
"Between the ages of 20 and 40, the surest sign that a man has a taste of his own is that he is unsure of it."
I can like or dislike anything I want, as long as I'm willing to update. The space of possible art is huge, and I would cheat my future self if I excluded entire genres from consideration on the belief that they exist solely as pedant-bait.
I was slightly unhappy to see "Prufrock" mentioned in the same rhetorical breath as modern poetry that relaxes the demands of scansion, rhyme, and readability. I also dislike free verse, generally speaking, but "Prufrock" isn't even close to that! It uses some of the same metrical tricks as John Milton's "Lycidas":
I come to pluck your Berries harsh and crude,
And with forc'd fingers rude,
Shatter your leaves before the mellowing year.
--- John Milton, "Lycidas" (1637)
Let us go, through certain half-deserted streets,
The muttering retreats
Of restless nights in one-night cheap hotels
--- T. S. Eliot, "The Love Song of J. Alfred Prufrock" (1915)
It's not as modern as it looks!
There are many places where prefixing the word "poetry" with the word "modern" signals that it can be dismissed off-hand, but I think that this is a bad way to categorize poetry. For one thing, it hides the way that new poems draw inspiration from older ones.
I dislike The Catcher in the Rye, feel as if I ought to like Animal Farm, and genuinely like Moby-Dick. I can see why other people would dislike Moby-Dick, but I still like the damn thing.
My hypothesis: Because I was not taught Moby-Dick in school, I did not associate reading it with work, but with relaxation. This is borne out by my love of David Copperfield (read alone) and only vague enjoyment of Great Expectations (assigned in school).
Downvoted for telling me what I'm arguing for and against, for something like the third time now, when I am fairly certain that our intuitive ideas of how abstraction works are somewhat different. This is one of the few things that breaks my internal set of "rules for a fair argument."
(Note: I am NOT downvoting for the paragraph beginning "OF COURSE they do", because it's given me a hunch as to what is going on here, is clearly written, and makes your actual objections to the candy bowl case clearer.
I SHOULD not be downvoting for the first paragraph, but it affected the decision.)
I had absolutely zero intentions. I had hoped that you would be capable of being a rational agent in this dialogue. If, however, that isn't something you care to do, we can end this conversation here and now.
When I tried to work out what you meant by second-order simulacra, you linked me to a cryptic Wikipedia article discussing a vague description of the term, along with confused-looking statements about the nature of reality. I really did NOT know what your intentions were, and I genuinely was getting exasperated.
I am sorry for implying bad faith. I should have said, "I have no clue what I am supposed to take from this article, but it sends extremely dubious signals to me about the validity of this concept."
In a side-note, why did you feel the need to push this particular variation of your question on me when I had already answered it?
Because you hadn't. I presented an example where second-order simulacra fail. Reading the reply, I was unsatisfied to find a description of a different case, followed by a statement that second-order simulacra fail in the candy bowl case, but for reasons that weren't consistent with the example.
What, exactly, did you think the Simulacraton example was?
An example chosen in which your heuristic gave a semi-plausible answer, when I had asked about a place where it ceases to work.
Or did you not make the connection merely because you used candies and ratios and I used people and percents?
I did. I did not conceive, however, that your answer would be:
The stereotypical bowl-candy is perfectly safe. It likely has a neighbor that has a razorblade in it.
The analogy to the population of people was stretched enough - and not just for reasons of ratios and percents - that there was no WAY I'd come to the above answer without questioning it.
- The scale is vastly too small to allow for abstraction to be useful.
- The topic at hand focuses on the group in question rather than some other topic to which the group is tangential.
This is getting closer to what I actually am looking for - a situation where I ought to use second-order simulacra. However, I still do not think these are problems for the candy bowl.
1: Abstractions can work on an arbitrarily small sample size. "A bowl of candies, some of which are unsafe" IS an abstraction. If that is not abstract enough, what about a pie chart showing the proportion of unsafe candies? (A sketch of such a chart follows after the next point.)
2: If a group is truly tangential to a topic, how do you decide which features are important enough to include in your abstraction? Why include ANY features in your abstraction besides "lives in Simulacraton?" It does no good to say that one would abstract the Joneses as being of the plurality race. For example, I could imagine them as being racially indeterminate. But I have trouble imagining them at all.
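For concreteness, here is a minimal matplotlib sketch of the pie-chart abstraction from point 1. The 50 safe / 10 razorblade split is borrowed from the razorblade example elsewhere in this discussion; it is illustrative only.

```python
# Minimal sketch: a pie chart as an abstraction of the candy bowl.
# The 50 safe / 10 razorblade split is taken from the razorblade
# example in this thread; it is purely illustrative.
import matplotlib.pyplot as plt

plt.pie([50, 10], labels=["safe", "razorblade"], autopct="%1.0f%%")
plt.title("Proportion of unsafe candies in the bowl")
plt.show()
```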
Good luck getting through life without ever constructing a symbolic representation of anything at any time ever under any circumstances: because that's what you are arguing against.
Generally speaking, that is not how representations work in my mind. The phrase "generalizing from one example" is ringing a bell right now.
When I am told "the population of Simulacraton is 40% white," I don't really feel any need to abstractly represent the population with one person, neighbors or no, or to refer to such a person in conversation. I would not say, "People from Simulacraton are {X}," and I tend to react to such statements with skepticism because I see them as unqualified statements about an entire set of people based on weak evidence.
How do I describe the average family in that town? With reluctance. I default to mapping by groups. In fact, I'm not used to visual or instance-based representation in general. It may be developmental - I was born blind and remained blind for a month before surgery. This may have affected my brain development in odd ways; I'm still bad with faces.
It does seem likely to me that a more visual thinker would find it convenient to imagine an average family as having visibly defined properties representing a plurality, rather than properties that can't be visually imagined as easily. But my 'average member' is just a bunch of loosely defined properties tied together with a name, and many of the properties that are needed to visualize a person clearly are missing from that set.
I don't think ONLY in verbally described sets, of course. I also think in free-floating sensory memories that rarely remain in my consciousness for very long. But "thinks in sets defined by verbal descriptions" is a good approximation of what I do.
Example: I have never been to Paris. If I were to talk about the Eiffel Tower, and for some reason felt the need to mention a Parisian in the description, I would likely say "a Parisian." I wouldn't give them a name or any properties unless I had to. If I did, the properties would be based on what I saw in movies, not any properties that reflect a plurality of Parisians, and I would assign them in a miserly way. My second-order simulacrum would be useless for anything but fake local flavor.
What about questions where "a Parisian" is just a tangential feature, where precision in the description of the Parisian is unimportant? Surely I use a second-order simulacrum then, right?
Nope.
For me, it is cognitively cheaper to not reference "a typical Parisian" when asked a question that tangentially involves people from Paris, because that would require me to represent a typical Parisian symbolically, and I have trouble imagining such a thing as "a typical Parisian." Instead, I would simply say, "a random Parisian," and my mental representation of such a Parisian would be the word "Parisian" with attached possible properties, half-formed images, and phrases spoken in movies.
THIS is why qualifiers like "almost always," "generally," "about half of the time," "on occasion," and "almost never" strike me as informative - they are quick and dirty ways to adjust the sets in my head! They are cognitively cheap for me, though not NEARLY as cheap as numerical probability estimates, which are great when people actually bother to give them.
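If I spelled out my own quick-and-dirty mapping, it might look roughly like this; the numbers are nothing more than my personal guesses, not any standard scale:

```python
# A rough, personal mapping from verbal qualifiers to probability estimates.
# The numbers are illustrative guesses, not any standard scale.
QUALIFIER_TO_PROBABILITY = {
    "almost always": 0.95,
    "generally": 0.80,
    "about half of the time": 0.50,
    "on occasion": 0.15,
    "almost never": 0.05,
}

print(QUALIFIER_TO_PROBABILITY["generally"])  # 0.8
```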
Now, I am not naive enough to think that a "set" is part of the territory itself, but once one starts to cluster entities together, using a second-order generalization may reinforce confusion about the properties of entities in that cluster. When I discourage the use of second-order simulacra without disclaimers, it is not because I fail to realize my set-based map is not the territory, but because many people will name a cluster of entities, pick a single entity from that cluster, generalize to the entire cluster, and imagine that they have actually described a lot of territory in a useful way.
People do this constantly in politicized arguments. Context is not enough, and the more unwilling someone is to add a proviso, the more suspicious I grow of the reasons that they are unwilling to do so. I suspect that my attitude towards unqualified generalizations is very similar to your attitude toward qualified generalizations. They seem like useless maps to me because I don't use them and don't really know how to.
The article says:
Second-order simulacra, a term coined by Jean Baudrillard, are symbols without referents, that is, symbols with no real object to represent. Simply put, a symbol is itself taken for reality and a further layer of symbolism is added. This occurs when the symbol is taken to be more important or authoritative than the original entity; authenticity has been replaced by a copy (thus reality is replaced by a substitute).
If I'm reading this correctly, it leaves me even more leery about the value of second-order simulacra.
Also from the article:
Baudrillard argues that in the postmodern epoch, the territory ceases to exist, and there is nothing left but the map; or indeed, the very concepts of the map and the territory have become indistinguishable, the distinction which once existed between them having been erased.
... did you intend for me to read this charitably? At best, it's a descriptive statement that says that people no longer care about the territory, and talk about maps without even realizing that they are not discussing territory. At worst, it says that reality has ceased to be real, which is Not Even Wrong.
If you want me to understand your ideas, please link me to clearer writing.
I am going to avoid using race or sex examples. I appreciate that you used Simulacraton as an object-level example, as it made your meaning much clearer, but I'd rather not discuss race when I am still unhappy with the resolution of the candy bowl problem.
I will revise my question for clarity:
"What is a reasonable second-order simulacrum of the contents of that basket of candy, and why? If no reasonable second-order simulacrum exists, why not?"
Second-order simulacra will always fail when you use them in ways that they are not meant to be used, such as actually being representative of individual instantiations of a thing, i.e., when you try to pretend they are anything other than an abstraction, a mapping of the territory designed for use as a high-level overview to convey basic information without the need for great depth of inspection of the topic.
True, but none of the above reservations apply to the bowl of candy.
I am not claiming that the second-order simulacrum should represent the individual candies in the bowl. It may be wrong in any individual case. I am simply trying to convey a useful impression of the POPULATION, which is what you claim that S-O S's are useful for.
I am not pretending that a simulacrum is anything more than an abstraction. I think it is a kind of abstraction that is not as useful as other kinds of abstraction when talking about populations.
I DO want a high-level overview, not a great depth of information. This overview should ideally reflect one REALLY important feature of the candy bowl.
(The statement that I would use to map the basket's population in detail would be "Ten of the sixty candies in the basket contain razorblades." The statement that I would use to map the basket broadly, without close inspection, would be, "Several of the candies in that basket contain razorblades."
If I had to use a second-order simulacrum, I would choose one of the candies with razorblades as my representative case, not one of the candies without. But this seems to break the plurality rule. Or perhaps, if feeling particularly perverse, I'd say "The candy in that basket contains one-sixth of a razorblade.")
I believe that second-order simulacra fail badly in the case of the candy basket. And if second-order simulacra can't handle simple hypothetical cases, shouldn't I be at least a little suspicious of this mapping strategy in general?
Wait, wait, I think I see something here. I think I see why we are incapable of agreeing.
If and only if you meant "always" in the first place and want to be less than perfectly accurate. "In the majority of cases" is an inaccurate method of expressing how S-O S's work -- as I mentioned above, with "the largest minority" being the representative entity of the body.
This seems more like a description of how S-O S's fail.
Can you offer any reason why I should treat S-O S's as a useful or realistic representational scheme if my goal is to draw accurate conclusions about actual, existing people?
Let me try to make my confusion clearer:
If I come upon a Halloween basket containing fifty peanut butter cups without razorblades, and ten peanut butter cups with razorblades, what is the second-order simulacrum I use to represent the contents of that basket? "A basket of delicious and safe peanut butter cups?"
Is this even a legitimate question, or am I still not grasping the concept?
Upvoted for clear communication.
I'm sort of puzzled, though, as to how I could have possibly interpreted your statements as applying to anything but the post and the comments on it; I saw no context clues suggesting that you meant "in everyday conversation." Did I miss these?
That said, if one of us had added just three or four words of proviso earlier, limiting our generalizations explicitly, we could have figured the disconnect out more quickly. I could have said that my generalizations apply best to essays and edited posts. You could have said that your generalizations apply best to situations where the added cost of qualifiers carries a higher burden.
Because we did not explicitly qualify our generalizations, but instead relied on context, we fell prey to a fake disagreement. However, any vindication I feel at seeing my point supported is nullified by the realization that I, personally, failed to apply the communication strategy that I was promoting.
Oops.
When someone adds that proviso "asexual/homosexual" -- they are changing the relevant level of precision necessary to the conversation.
No, they are pointing out that in order to apply to a case they are interested in, the conversation must be made more precise.
For example; if I say "Men and women get married because they love each other", then the fact that some men/women don't marry, or the fact that intersex people aren't necessarily men or women, or the fact that GLBT people who marry are also likely to do so because of love, or the fact that some marriages are loveless is only a distraction to the conversation at hand.
The last one isn't a distraction, it's a counterexample. If you want to meaningfully say that men and women marry out of love, you must implicitly claim that loveless marriages are a small minority. If someone says, "A significant number of marriages are loveless," they aren't trying to get you to add a trivializing proviso. They're saying that your generalization is false.
Consider the difference in meaning between "Men and women marry each other because they love each other" and "Men/women/intersex individuals and other men/women/intersex individuals may or may not marry one another in groups as small as two with no upper bound for reasons that can vary depending on the situation."
This isn't a reductio, it's a straw man. When you add provisos to a statement that is really nontrivial, you do not turn "generally" into "may or may not." You turn "always" into "generally", or "generally" into "in the majority of cases".
In any case, what about "People who marry generally do so out of love"? This retains the substance of the original statement while incorporating the provisos. All that is gained is real clarity. All that is lost is fake clarity. (And if enough people are found who marry for other reasons, it is false.)
Each of those little "costs next to nothing" statements actually do have a cost, one that isn't necessarily clear initially.
The cost of omitting them isn't clear initially, either.
Are you familiar at all with how errors propagate in measurements? Each time you introduce new provisos, those statements affect the "informational value" of each dependent statement in its nest. This creates an analogous situation to the concept of significant digits in discourse.
I was generally taught to carry significant figures further than strictly necessary to avoid introducing rounding errors. If my final answer was to have 3 significant digits, using a few buffer digits seemed wise. They're cheap.
Propagation of uncertainty is not a reason to drop qualifiers. It's a reason to use them. When reading an argument based on a generalization, I want to know the exceptions BEFORE the argument begins, not afterwards. That way, I can have a sense of how the uncertainties in each step affect the final conclusion.
For a topic like lukeprog's, in other words, the difference between 99% and 80% of women is below the threshold of significance. Eliminating it altogether (until such time as it becomes significant) is an important and valuable practice in communication.
If I want an answer to three significant figures, I do not begin my reasoning by rounding to two sigfigs, then trying to add in the last sigfig later.
If one person thinks that an argument depends on an assumption that fails in 1 in 100 cases, and someone else thinks the assumption fails in 1 in 5 cases, and they don't even know that they disagree, and pointing out this disagreement is regarded as some kind of map-territory error, they will have trouble even noticing when the disagreement has become significant.
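A toy calculation of the compounding I have in mind (the five-step chain and both failure rates are made up for illustration):

```python
# How per-assumption failure rates compound across a chained argument.
# A 1-in-100 vs 1-in-5 disagreement about one assumption becomes a large
# disagreement about the conclusion once several steps are chained.

def chain_holds(p_fail: float, n_steps: int) -> float:
    """Probability that all n_steps of an argument hold."""
    return (1 - p_fail) ** n_steps

for p_fail in (0.01, 0.20):
    print(f"failure rate {p_fail:.0%} per step -> "
          f"5-step argument holds {chain_holds(p_fail, 5):.0%} of the time")
# failure rate 1% per step -> 5-step argument holds 95% of the time
# failure rate 20% per step -> 5-step argument holds 33% of the time
```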
Failure to effectively exercise that practice will result in needless 'clarifications' distracting from the intended message, hampering dialogs with unnecessary cognitive burden resultant from additional nesting of "informational quanta." In other words; if you add too many provisos to a statement, an otherwise meaningful and useful one will become trivially useless.
This tends to happen to bad generalizations, yes. Once you consider all of the cases in which they are wrong, suddenly they seem to only be true in the trivial cases!
Good generalizations are still useful even after you have noted places where they are less likely to hold. Adding any number of true provisos will not make them trivial.
As for the cognitive load, why not state assumptions at the beginning of an essay where possible, rather than adding them to each individual statement? If the reader shares the assumptions, they'll just nod and move on. If the reader does NOT share the assumptions, then relieving them of the cognitive burden of being aware of disagreement is not a service.
I did quite a bit of EEG neurofeedback at the age of about 11 or 12. I may have learned to concentrate a little better, but I'm really not sure. The problem is that once I was off the machine, I stopped getting the feedback!
Consider the following interior monologue:
"Am I relaxing or focusing in the right way? I don't have the beeping to tell me, how do I know I am doing it right?"
In theory, EEG is a truly rational way to learn to relax, because one constantly gets information about how relaxed one is and can adjust one's behavior to maximize relaxation. In practice, I'm not sure if telling 12-year-old me that I was going to have access to electrical feedback from my own brain was the best way to relax me.
The EEG did convince me that physicalism was probably true, which distressed me because I had a lot of cached thoughts about how it is bad to be a soulless machine. My mother, who believed in souls at the time, reassured me that if I really was a machine that could feel and think, there'd be nothing wrong with that.
I wonder how my rationality would have developed if, at that point, she had instead decided to argue against the evidence?
A statement like "Women want {thing}" leaves it unclear what the map is even supposed to be, barring clear context cues. This can lead to either fake disagreements or fake agreements.
Fake disagreements ("You said that Republicans are against gun control, but I know some who aren't!") are not too dangerous, I think. X makes the generalization, Y points out the exception, X says that it was a broad generalization, Y asks for more clarity in the future, X says Y was not being sufficiently charitable, and so on. Annoying to watch, but not likely to generate bad ideas.
Fake agreements can lead to deeper confusion. If X seriously believes that 99% of women have some property, and Y believes that only 80% of women have some property, then they may both agree with the generalization even if they have completely different ideas about what a charitable reading would be!
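To make the fake agreement concrete, here is a toy sketch using the 99% and 80% figures from above:

```python
# Two people with very different estimates both assent to "most women..."
# (0.99 and 0.80 are the figures from the fake-agreement example above).
estimates = {"X": 0.99, "Y": 0.80}
for person, p in estimates.items():
    print(f"{person} (estimate {p:.0%}) agrees with 'most women...': {p > 0.5}")
# Both agree, so the 19-point disagreement stays invisible.
```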
It costs next to nothing to say "With very few exceptions, women...", "A strong majority of women...", or "Most women...". The three statements mean different things, and establishing the meaning does not make communication next-to-impossible; it makes communication clearer. This isn't about charity, but clarity.
The fact that it sounds accurate is what makes it a funny category error, rather than a boring category error. "2 + 2 = 3 is morally wrong" is not funny. "Deontological ethics is morally wrong" is funny.
It calls to mind a scenario of a consequentialist saying: "True, Deontologist Dan rescued that family from a fire, which was definitely a good thing... but he did it on the basis of a morally wrong system of ethics."
That's how I reacted to it, anyway. It's been a day, I've had more sleep, and I STILL find the idea funny. Every time I seriously try to embrace consequentialist ethics, it's because I think that deontological ethics depend on self-deception.
And lying is bad.
EDIT: I am in no way implying that other consequentialists arrive at consequentialism by this reasoning. I am simply noting that the idea that consequentialist principles are better and more rational, so we should be rational consequentialists (regardless of the results), is very attractive to my own mental hardware, and also very funny.
Cracked you up? Rather than just seeming like a straightforward implication of conflicting moral systems?
I think it is not a straightforward implication at all. Maybe this rephrasing would make the joke clearer:
"A deontological theory of ethics is not actually right. It is morally wrong, in principle."
If that doesn't help:
"It is morally wrong to make decisions for deontological reasons."
What makes it funny is that moment wherein the reader (or at least, this reader) briefly agrees with it before the punchline hits.
"But what's actually right probably doesn't include a component of making oneself stupid with regard to the actual circumstances in order to prevent other parts of one's mind from hijacking the decision.
What you probably meant: "Rational minds should have a rational theory of ethics; this leads to better consequences."
My late-night reading: "A deontological theory of ethics is not actually right. It is wrong. Morally wrong."
I am not sure what caused me to read it this way, but it cracked me up.
This doesn't strike me as an inherently bad objection. Even the post offers the caveat that we're running on corrupt hardware. One can't say that consequentialist theories are WRONG on such grounds, but one can certainly object to the likely consequences of combining ambiguous expected values with brains that do not naturally multiply and are good at imagining fictional futures.
I think the argument can be cut down to this:
1. In theory, we should act to create the best state of affairs.
2. People are bad at doing that without predefined moral rules.
3. Can we at least believe that we believe in those rules?
This is lousy truth-seeking, but may be excellent instrumental rationality if enough people are poor consequentialists and decent enough deontologists. It's not my argument of choice; step 3 seems suspiciously oily.
But then again, "That which can be destroyed by the truth, should be" has kind of a deontological ring to it...
How to step outside the rational box without going off the deep end. Essentially, techniques for maintaining a lifeline back to normality so you can explore the further reaches of the psyche in some degree of safety.
I developed some of these!
I had a manic episode as well, but it was induced by medication and led to hypersocial behavior. I quickly noticed that I was having bizarre and sudden convictions, and started adopting heuristics to deal with them. I thought I was normal, or even better than normal. Then I realized that such a thought was very abnormal for me, and compensated.
Mania, for me, was like thinking in ALL CAPS ABOUT THINGS I USUALLY IGNORED. It was suddenly giving credence to religion not because I ceased to be an atheist, but because WE ARE ALL CONNECTED REALLY! It was fuzzy thinking, but damned if it didn't make people like me more for a bit. It was looking people IN THE EYE, BECAUSE THAT IS WHAT TRUST AND SOCIAL COMMUNICATION IS ALL ABOUT, all the time, when I am normally shy of eye contact.
(If you find the CAPSLOCK intrusions in the above paragraph annoying, imagine THINKING THIS WAY and you begin to see why mania is a very tiring thing and NOT RECOMMENDED unless you REALLY KNOW WHAT YOU'RE DEALING WITH.)
Compensation strategies:
Another person in the mental ward, who had lived with mania for a longer time, taught me that breathing exercises can help. Stretch arms upward; inhale. Slowly lower them, exhaling. Repeat as needed.
I realized that because I was now trusting people (read: believing everything I heard), I was susceptible to getting extremely paranoid. This is not as contradictory as it sounds. After all, if you trust people who don't trust their doctors, you will trust in their paranoia. I therefore told myself, repeatedly, to trust my doctors. Over and over. This self-brainwashing was a good move in hindsight. Chaining myself to the mast of somebody else's sane clinical judgment protected me and ensured that I left the mental ward quickly.
I tended to think that I should try to "help people." Mania amplified that hero complex. I therefore repeated a mantra to myself, over and over, with manic fervor: People help themselves. People help themselves. You don't help people. People help themselves.
I was encouraged by a visiting parent to take notes of ideas, so I could pursue them later. Result: Lots of notes that I later sorted out into "reasonable" and "not worth pursuing." This was helpful. Nothing permanently insightful, but some decent ideas.
Another mantra: Even brilliant ideas are wrong 99% of the time. No matter how good your idea is, it is probably wrong. You are probably wrong. You are probably wrong. Under normal circumstances, this isn't a great mantra. During mania, it is essential.
If anybody ever questions my credentials as a rationalist, I think I can safely say that I tried very hard to be a traditional rationalist with an eye for biases even when I was technically not in my right mind.
This is one reason why I worry about overemphasis on "learning styles" in teaching. Yes, we shouldn't overgeneralize from our own brains to those of others, and different people learn differently. But it's too easy to say that because I am Not a Visual Person, Having Been Born Blind and Treated By Surgery, I therefore can't learn to excel at visual tasks.
This internal sense that I am "not a visual learner" caused me serious difficulty in training to do many tasks, until I learned to just compensate by practicing for a longer period of time!
The danger of learned blankness isn't that it's entirely inaccurate. A person really might be slower to pick up skills in one domain than in another. The danger, I think, is that we overestimate our own specialization and lock ourselves out of useful and fun skills. I CAN draw and shade a simple shape; it's not magic. I CAN dissect a small insect in the lab; it just takes longer.
In a universe that contained no minds, a clean table and a cluttered table would both be neutral objects, but in the world-simulation that Mary’s brain builds, a cluttered table is obviously bad and cleaning is neutral.
In a universe that contained no minds, a table with an image painted on it that offends most people in this universe's US would also be a neutral object. As it stands, it would not be a good idea to keep such a table uncovered if you were expecting guests and wanted to maintain positive social status.
The same goes for a messy house. It may be a matter of subjective preference, but it's a subjective preference that a lot of people share. If someone prefers a messy house to the labor of cleaning it up, they may inadvertently send the signal that they do not care about the aesthetic preferences of others, just as they would if they preferred not showering to the annoyance of showering.
Furthermore, a messy house, if allowed to become messier over time, will eventually become more difficult to navigate. Even if movement isn't blocked or made hazardous, finding objects becomes a matter of mind-reading, as there is no longer an expectation that they will be returned to a specific place. Coordinating tasks also becomes more difficult - if there's no place for dirty laundry and dirty dishes, ensuring that everything gets cleaned efficiently becomes a matter of approximation. Clean dishes are a preference insofar as not having cockroaches and ants is a preference. Clean laundry is a preference insofar as having a higher probability of keeping a job is a preference.
I've seen the "if it bothers you, clean it" approach taken, and it quickly leads to a Tragedy of the Commons situation. Everyone can make a mess individually, but the cost is shared. Conversely, anyone can clean, but the social benefits go to everyone.
Likewise, negotiating with personal utility functions in mind simply gives an advantage (in terms of time spent on cleaning) to the person who dislikes cleaning. If cleaning is seen as a way of dealing with the collective harm of a mess, saying "I don't like it or care, so I shouldn't have to do as much as someone who cares about it" makes as much sense as saying "I don't mind the smell of smoke, so why can't I smoke in the house just because you dislike it? What if I only smoke in the house 50% of the time? Isn't that a compromise?"
A heuristic that works well in cases of shared harm, I think, is to give each person responsibility over minimizing harm in some specific area. In other words, "you clean the bathroom, I clean the kitchen, and our own bedrooms will be as dirty or clean as we like."
That said, all of this assumes that nobody prefers being surrounded by scavenging arthropods. Having once, some time ago, lived in such a messy way that a colony of pillbugs moved into my room to live off of the debris, I can vouch that they were pretty cute. But practically speaking, they had to go.
The general US norm is not that drawing the prophet Muhammed is forbidden, it's not that violent videogames are a sin, it's not that the casual treatment of women as nothing but sex objects is unacceptable.
Either I'm being confused by a triple-negative, or we are living in very different contexts. Even people who are avowedly anti-feminist will usually say that casually treating women as nothing but sex objects breaks their norms. They might disagree that a model on a billboard is a sex object.
More generally, the problem is not manufacturing offense where none exists, but deciding where it can reasonably exist. And even if you don't think that this is a meaningful problem, and that the best answer is to simply not take offense, ever, note that this:
we risk emboldening the true villains, the hypocrite brains who are torturing people to score cheap political points.
... sounds suspiciously like another kind of offense, the offense of anti-offense backlash. This line of argument also makes out feminists and game-pacifists to be "inexcusably ignorant" or "deliberately malicious," and thus is wielding a very similar rhetorical club to the one that was just denounced.
There is no conscious consideration of this, but somewhere deep in our hypocrite brains, we decide to pretend that our desired norms are the actual norms.
"The actual norms?"
Like "general US norm," that's a phrase that does not, as far as I can tell, dissect the space of possible norms in a useful way. If there were a single agreed-upon set of norms, or even an agreed-upon set of rules for describing an agreed-upon set of norms, these discussions would be a lot easier. As it stands, declaring offense can, in fact, shift norms if done enough. In some cases, it can shift them for the better.
Level 1: Trying to deal with problems that cause human suffering.
Level 2: Using programs to help deal with those problems more effectively.
Level 3: Optimizing the way that those programs think and solve problems in general.
Level 4: Figuring out better ways to think about programs that think, so that they are not only optimal at problem-solving, but also optimal at not killing us.
Level 5: Sharing essays on how we can be more rational about the level 4 problem without succumbing to bias.
Level 6: Commenting on those essays to support strong conclusions, question weak ones, and make them more memorable and effective by contributing to a community ethos.
Level 7: Upvoting my comment.
Thinking of Level 1 actions as maintenance is an excellent analogy.
This talk of swimming suggests another analogy for spending too much time on high level actions:
Overoptimizing is like trying to infer the properties of an optimal raft while you are drowning.