Bad Concepts Repository

post by moridinamael · 2013-06-27T03:16:14.136Z · score: 22 (24 votes) · LW · GW · Legacy · 204 comments

We recently established a successful Useful Concepts Repository.  It got me thinking about all the useless or actively harmful concepts I had carried around, in some cases for most of my life, before seeing them for what they were.  Then it occurred to me that I probably still have some poisonous concepts lurking in my mind, and I thought creating this thread might be one way to discover what they are.

I'll start us off with one simple example:  The Bohr model of the atom as it is taught in school is a dangerous thing to keep in your head for too long.  I graduated from high school believing that it was basically a correct physical representation of atoms.  (And I went to a *good* high school.)  Some may say that the Bohr model serves a useful role as a lie-to-children to bridge understanding to the true physics, but if so, why do so many adults still think atoms look like concentric circular orbits of electrons around a nucleus?  

There's one hallmark of truly bad concepts: they actively work against correct induction.  Thinking in terms of the Bohr model actively prevents you from understanding molecular bonding and, really, everything about how an atom can serve as a functional piece of a real thing like a protein or a diamond.

Bad concepts don't have to be scientific.  Religion is held to be a pretty harmful concept around here.  There are certain political theories which might qualify, except I expect that one man's harmful political concept is another man's core value system, so as usual we should probably stay away from politics.  But I welcome input as fuzzy as common folk advice you've received that turned out to be really costly.


comment by Stabilizer · 2013-06-27T03:31:57.039Z · score: 41 (43 votes) · LW · GW

The concept of "deserve" can be harmful. We like to think about whether we "deserve" what we get, or whether someone else deserves what he/she has. But in reality there is no such mechanism. I prefer to invert "deserve" into the future: deserve your luck by exploiting it.

Of course, "deserve" can be a useful social mechanism to increase desired actions. But only within that context.

comment by jimmy · 2013-06-27T07:30:34.986Z · score: 11 (13 votes) · LW · GW

Also "need". There's always another option, and pretending sufficiently bad options don't exist can interfere with expected value estimations.

And "should" in the moralizing sense. Don't let yourself say "I should do X". Either do it or don't. Yeah, you're conflicted. If you don't know how to resolve it on the spot, at least be honest and say "I don't know whether I want X or not X". As applied to others, don't say "he should do X!". Apparently he's not doing X, and if you're specific about why, it is less frustrating and effective solutions are more visible. "He does X because it's clearly in his best interests, even despite my shaming. Oh..." - or again, if you can't figure it out, be honest about it: "I have no idea why he does X".

comment by [deleted] · 2013-06-28T11:40:42.314Z · score: 4 (4 votes) · LW · GW

Don't let yourself say "I should do X". Either do it or don't.

That would work nicely if I were so devoid of dynamic inconsistency that “I don't feel like getting out of bed” would reliably entail “I won't regret it if I stay in bed”; but as it stands, I sometimes have to tell myself “I should get out of bed” in order to do stuff I don't feel like doing but know I would regret not doing.

comment by jimmy · 2013-06-29T01:09:27.690Z · score: 2 (2 votes) · LW · GW

This John Holt quote is about exactly this.

comment by Larks · 2013-06-27T10:50:25.043Z · score: 3 (3 votes) · LW · GW

if you're specific about why it is less frustrating

This is a fact about you, not about "should". If "should" is part of the world, you shouldn't remove it from your map just because you find other people frustrating.

and effective solutions are more visible.

One common, often effective strategy is to tell people they should do the thing.

if you can't figure it out, be honest about it "I have no idea why he does X"

The correct response to meeting a child murderer is "No, Stop! You should not do that!", not "Please explain why you are killing that child." (also physical force)

comment by jimmy · 2013-06-27T17:40:38.688Z · score: 6 (6 votes) · LW · GW

This is a fact about you, not about "should". If "should" is part of the world, you shouldn't remove it from your map just because you find other people frustrating.

It's not about having conveniently blank maps. It's about having more precise maps.

I realize that you won't be able to see this as obviously true, but I want you to at least understand what my claim is: after fleshing out the map with specific details, your emotional approach to the problem changes and you become aware of new possible actions without removing any old actions from your list of options - and without changing your preferences. Additionally, the majority of the time this happens, "shoulding" is no longer the best choice available.

One common, often effective strategy is to tell people they should do the thing.

Sometimes, sure. I still use the word like that sometimes, but I try to stay aware that it's shorthand for "you'd get more of what you want if you do"/"I and others will shame you if you don't". It's just that so often that's not enough.

The correct response to meeting a child murderer is "No, Stop! You should not do that!", not "Please explain why you are killing that child." (also physical force)

And this is a good example. "Correct" responses oughtta get good results; what result do you anticipate? Surely not "Oh, sorry, didn't realize... I'll stop now". It sure feels appropriate to 'should' here, but that's a quirk of your psychology that focuses you on one action to the exclusion of others.

Personally, I wouldn't "should" a murderer any more than I'd "should" a paperclip maximizer. I'd use force, threats of force and maybe even calculated persuasion. Funny enough, were I to attempt to therapy a child murderer (and bold claim here - I think I could do it), I'd start with "so why do ya kill kids?"

comment by TheOtherDave · 2013-06-27T17:48:20.955Z · score: 2 (2 votes) · LW · GW

Mostly, the result I anticipate from "should"ing a norm-violator is that other members of my tribe in the vicinity will be marginally more likely to back me up and enforce the tribal norms I've invoked by "should"ing. That is, it's a political act that exerts social pressure. (Among the tribal members who might be affected by this is the norm-violator themselves.)

Alternative formulas like "you'll get more of what you want if you don't do that!" or "I prefer you not do that!" or "I and others will shame you if you do that!" don't seem to work as well for this purpose.

But of course you're correct that some norm-violators don't respond to that at all, and that some norm-violations (e.g. murder) are sufficiently problematic that we prefer the violator be physically prevented from continuing the violation.

comment by DSherron · 2013-06-27T13:45:20.681Z · score: -2 (6 votes) · LW · GW

"Should" is not part of any logically possible territory, in the moral sense at least. Objective morality is meaningless, and subjective morality reduces to preferences. It's a distinctly human invention, and its meaning shifts as the user desires. Moral obligations are great for social interactions, but they don't reflect anything deeper than an extension of tribal politics. Saying "you should x" (in the moral sense of the word) is just equivalent to saying "I would prefer you to x", but with bonus social pressure.

Just because it is sometimes effective to try and impose a moral obligation does not mean that it is always, or even usually, the case that doing so is the most effective method available. Thinking about the actual cause of the behavior, and responding to that, will be far, far more effective.

Next time you meet a child murderer, you just go and keep on telling him he shouldn't do that. I, on the other hand, will actually do things that might prevent him from killing children. This includes physical restraint, murder, and, perhaps most importantly, asking why he kills children. If he responds "I have to sacrifice them to the magical alien unicorns or they'll kill my family" then I can explain to him that the magical alien unicorns don't exist and solve the problem. Or I can threaten his family myself, which might for many reasons be more reliable than physical solutions. If he has empathy I can talk about how the parents must feel, or the kids themselves. If he has self-preservation instincts then I can point out the risks of getting caught. In the end, maybe he just values dead children in the same way I value children continuing to live, and my only choice is to fight him. But probably that's not the case, and if I don't ask/observe to figure out what his motivations are I'll never know how to stop him when physical force is not an option.

comment by ArisKatsaris · 2013-06-27T14:43:42.542Z · score: 2 (4 votes) · LW · GW

Saying "you should x" (in the moral sense of the word) is just equivalent to saying "I would prefer you to x", but with bonus social pressure.

I really think this is a bad summary of how moral injunctions act. People often feel a conflict, for example, between "I should X" and "I would prefer to not-X". If a parent has to choose between saving their own child, and a thousand other children, they may very well prefer to save their own child, but recognize that morality dictated they should have saved the thousand other children.

My own guess about the connection between morality and preferences is that morality is an unconscious estimation of our preferences about a situation, while trying to remove the bias of our personal stakes in it. (E.g. the parent recognizes that if their own child wasn't involved, if they were just hearing about the situation without personal stakes in it, they would prefer that a thousand children be saved rather than only one.)

If my guess is correct it would also explain why there's disagreement about whether morality is objective or subjective (morality is a personal preference, but it's also an attempt to remove personal biases - it's by itself an attempt to move from subjective preferences to objective preferences).

comment by [deleted] · 2013-06-28T02:48:50.297Z · score: 0 (0 votes) · LW · GW

That's a good theory.

comment by DSherron · 2013-06-27T17:22:13.384Z · score: -3 (3 votes) · LW · GW

This is because people are bad at making decisions, and have not gotten rid of the harmful concept of "should". The original comment on this topic was claiming that "should" is a bad concept; instead of thinking "I should x" or "I shouldn't do x", on top of considering "I want to/don't want to x", just look at want/do not want. "I should x" doesn't help you resolve "do I want to x", and the second question is the only one that counts.

I think that your idea about morality is simply expressing a part of a framework of many moral systems. That is not a complete view of what morality means to people; it's simply a part of many instantiations of morality. I agree that such thinking is the cause of many moral conflicts of the nature "I should x but I want to y", stemming from the idea (perhaps subconscious) that they would tell someone else to x, instead of y, and people prefer not to defect in those situations. Selfishness is seen as a vice, perhaps for evolutionary reasons (see all the data on viable cooperation in the prisoner's dilemma, etc.) and so people feel the pressure to not cheat the system, even though they want to. This is not behavior that a rational agent should generally want! If you are able to get rid of your concept of "should", you will be free from that type of trap unless it is in your best interests to remain there.

Our moral intuitions do not exist for good reasons. "Fairness" and its ilk are all primarily political tools; moral outrage is a particularly potent tool when directed at your opponent. Just because we have an intuition does not make that intuition meaningful. Go for a week while forcing yourself to taboo "morality", "should", and everything like that. When you make a decision, make a concerted effort to ignore the part of your brain saying "you should x because it's right", and only listen to your preferences (note: you can have preferences that favor other people!). You should find that your decisions become easier and that you prefer those decisions to any you might have otherwise made. It also helps you to understand that you're allowed to like yourself more than you like other people.

comment by asr · 2013-06-27T15:28:34.328Z · score: 1 (1 votes) · LW · GW

Objective morality is meaningless, and subjective morality reduces to preferences.

These aren't the only two possibilities. Lots of important aspects of the world are socially constructed. There's no objective truth about the owner of a given plot of land, but it's not purely subjective either -- and if you don't believe me, try explaining it to the judge if you are arrested for trespassing.

Social norms about morality are constructed socially, and are not simply the preferences or feelings of any particular individual. It's perfectly coherent for somebody to say "society believes X is immoral but I don't personally think it's wrong". I think it's even coherent for somebody to say "X is immoral but I intend to do it anyway."

comment by DSherron · 2013-06-27T16:54:48.555Z · score: -1 (1 votes) · LW · GW

You're sneaking in connotations. "Morality" has a much stronger connotation than "things that other people think are bad for me to do." You can't simply define the word to mean something convenient, because the connotations won't go away. Morality is definitely not understood generally to be a social construct. Is that social construct the actual thing many people are in reality imagining when they talk about morality? Quite possibly. But those same people would tend to disagree with you if you made that claim to them; they would say that morality is just doing the right thing, and if society said something different then morality wouldn't change.

Also, the land ownership analogy has no merit. Ownership exists as an explicit social construct, and I can point you to all sorts of evidence in the territory that shows who owns what. Social constructs about morality exist, but morality is not understood to be defined by those constructs. If I say "x is immoral" then I haven't actually told you anything about x. In normal usage I've told you that I think people in general shouldn't do x, but you don't know why I think that unless you know my value system; you shouldn't draw any conclusions about whether you think people should or shouldn't x, other than due to the threat of my retaliation.

"Morality" in general is ill-defined, and often intuitions about it are incoherent. We make much, much better decisions by throwing away the entire concept. Saying "x is morally wrong" or "x is morally right" doesn't have any additional effect on our actions, once we've run the best preference algorithms we have over them. Every single bit of information contained in "morally right/wrong" is also contained in our other decision algorithms, often in a more accurate form. It's not even a useful shorthand; getting a concrete right/wrong value, or even a value along the scale, is not a well-defined operation, and thus the output does not have a consistent effect on our actions.

comment by asr · 2013-06-27T17:47:16.497Z · score: 1 (1 votes) · LW · GW

My original point was just that "subjective versus objective" is a false dichotomy in this context. I don't want to have a big long discussion about meta-ethics, but, descriptively, many people do talk in a conventionalist way about morality or components of morality and thinking of it as a social construction is handy in navigating the world.

Turning now to the substance of whether moral or judgement words ("should", "ought", "honest", etc) are bad concepts -- At work, we routinely have conversations about "is it ethical/honest to do X", or "what's the most ethical way to deal with circumstance Y". And we do not mean "what is our private preference about outcomes or rules" -- we mean something imprecise but more like "what would our peers think of us if they knew" or "what do we think our peers ought to think of us if they knew". We aren't being very precise how much is objective, subjective, and socially constructed, but I don't see that we would gain from trying to speak with more precision than our thoughts actually have.

Yes, these terms are fuzzy and self-referential. Natural language often is. Yes, using 'ethical' instead of other terms smuggles in a lot of connotation. That's the point! Vagueness with some emotional shading and implication is very useful linguistically and I think cognitively.

The original topic was "harmful" concepts, I believe, and I don't think all vagueness is harmful. Often the imprecision is irrelevant to the actual communication or reasoning taking place.

comment by DSherron · 2013-06-27T19:32:54.565Z · score: -1 (1 votes) · LW · GW

The accusation of being bad concepts was not because they are vague, but because they lead to bad modes of thought (and because they are wrong concepts, in the manner of a wrong question). Being vague doesn't protect you from being wrong; you can talk all day about "is it ethical to steal this cookie" but you are wasting your time. Either you're actually referring to specific concepts that have names (will other people perceive this as ethically justified?) or you're babbling nonsense. Just use basic consequentialist reasoning and skip the whole ethics part. You gain literally nothing from discussing "is this moral", unless what you're really asking is "what are the social consequences" or "will person x think this is immoral" or whatever. It's a dangerous habit epistemically and serves no instrumental purpose.

comment by buybuydandavis · 2013-06-28T01:50:24.044Z · score: 0 (0 votes) · LW · GW

"Should" is not part of any logically possible territory, in the moral sense at least. Objective morality is meaningless, and subjective morality reduces to preferences.

Subjectivity is part of the territory.

comment by DSherron · 2013-06-28T01:58:17.464Z · score: -1 (1 votes) · LW · GW

Things encoded in human brains are part of the territory; but this does not mean that anything we imagine is in the territory in any other sense. "Should" is not an operator that has any useful reference in the territory, even within human minds. It is confused, in the moral sense of "should" at least. Telling anyone "you shouldn't do that" when what you really mean is "I want you to stop doing that" isn't productive. If they want to do it then they don't care what they "should" or "shouldn't" do unless you can explain to them why they in fact do or don't want to do that thing. In the sense that "should do x" means "on reflection would prefer to do x" it is useful. The farther you move from that, the less useful it becomes.

comment by buybuydandavis · 2013-06-28T09:06:33.772Z · score: 3 (3 votes) · LW · GW

Telling anyone "you shouldn't do that" when what you really mean is "I want you to stop doing that" isn't productive.

But that's not what they mean, or at least not all that they mean.

Look, I'm a fan of Stirner and a moral subjectivist, so you don't have to explain the nonsense people have in their heads with regard to morality to me. I'm on board with Stirner, in considering the world populated with fools in a madhouse, who only seem to go about free because their asylum takes in so wide a space.

But there are different kinds of preferences, and moral preferences have different implications than our preferences for shoes and ice cream. It's handy to have a label to separate those out, and "moral" is the accurate one, regardless of the other nonsense people have in their heads about morality.

comment by DSherron · 2013-06-28T13:44:51.552Z · score: -2 (2 votes) · LW · GW

I think that claiming that is just making the confusion worse. Sure, you could claim that our preferences about "moral" situations are different from our other preferences; but the very feeling that makes them seem different at all stems from the core confusion! Think very carefully about why you want to distinguish between these types of preferences. What do you gain, knowing something is a "moral" preference (excluding whatever membership defines the category)? Is there actually a cluster in thingspace around moral preferences, which is distinctly separate from the "preferences" cluster? Do moral preferences really have different implications than preferences about shoes and ice cream? The only thing I can imagine is that when you phrase an argument to humans in terms of morality, you get different responses than to preferences ("I want Greta's house" vs "Greta is morally obligated to give me her house"). But I can imagine no other way in which the difference could manifest. I mean, a preference is a preference is a term in a utility function. Mathematically they'd better all work the same way or we're gonna be in a heap of trouble.

comment by buybuydandavis · 2013-06-30T20:50:41.039Z · score: 1 (1 votes) · LW · GW

but the very feeling that makes them seem different at all stems from the core confusion!

I don't think moral feelings are entirely derivative of conceptual thought. Like other mammals, we have pattern matching algorithms. Conceptual confusion isn't what makes my ice cream preferences different from my moral preferences.

Is there a behavioral cluster about "moral"? Sure.

Do moral preferences really have different implications than preferences about shoes and ice cream?

How many people are hated for what ice cream they eat? For their preference in ice cream, even when they don't eat it? For their tolerance of a preference in ice cream in others?

Not many that I see. So yeah, it's really different.

I mean, a preference is a preference is a term in a utility function.

And matter is matter, whether alive or dead, whether your shoe or your mom.

comment by buybuydandavis · 2013-06-28T01:49:17.696Z · score: 1 (1 votes) · LW · GW

Also "need".

I can't remember where I heard the anecdote, but I remember some small boy discovering the power of "need" with "I need a cookie!".

comment by Fhyve · 2013-07-02T04:42:17.672Z · score: 0 (0 votes) · LW · GW

I think any correct use of "need" is either implicitly or explicitly a phrase of the form "I need X (in order to do Y)".

comment by PhilGoetz · 2013-06-27T23:20:30.841Z · score: 4 (4 votes) · LW · GW

"Deserve" is harmful because we would often rather destroy utility than allow an undeserved outcome distribution. For instance, most people would probably rather punish a criminal than reform him. I nominate "justice" as the more basic bad concept. It's a good concept for sloppy thinkers who are incapable of keeping in mind all the harm done later by injustices now, a shortcut that lets them choose actions that probably increase utility in the long run. But it is a bad concept for people who can think more rigorously.

A lot of these "bad concepts" will probably be things that are useful given limited rationality.

“Are the gods not just?"

"Oh no, child. What would become of us if they were?”

― C.S. Lewis, Till We Have Faces

comment by Viliam_Bur · 2013-06-28T08:58:04.300Z · score: 3 (3 votes) · LW · GW

I'd say "justice" is a heuristic; better than nothing, but not the best possible option.

For instance, most people would probably rather punish a criminal than reform him.

This could be connected with their beliefs about the probability of successfully reforming the criminal. I guess the probability strongly depends on the type of crime and type of treatment, and is not even the same for all classes of criminals (e.g. sociopaths vs. people in a relatively rare situation that overwhelmed them). They may fear that with a good lawyer, "reform, don't punish" is simply a "get out of jail free" card.

To improve this situation, it would help to make the statistics of reform successes widely known. But I would expect that in some situations, they are just not available. This is partially an availability heuristic on my part, and partially my model saying that many good intentions fail in real life.

Also, what about unique crimes? For example, an old person murders their only child, and they do not want to have any other child, ever. Most likely, they will never commit the same crime again. How specifically would you reform them? How would you measure the success of reforming them? If we are reasonably sure they will never do the same thing again, even without a treatment, then... should we just shrug and let them go?

The important part of the punishment is the precommitment to punish. If a crime already happened, causing e.g. pain to the criminal does not undo the past. But if the crime is yet in the future, precommitting to cause pain to the criminal influences the criminal's outcome matrix. Will precommitment to reforming have similar effects? ("Don't shoot him, or... I will explain to you why shooting people is wrong, and then you will feel bad about it!")

comment by buybuydandavis · 2013-06-28T01:55:57.378Z · score: 0 (0 votes) · LW · GW

I nominate "justice" as the more basic bad concept. It's a good concept for sloppy thinkers who are incapable of keeping in mind all the harm done later by injustices now,

Actually, I think that's some of what they are keeping in mind and find motivating.

comment by PhilGoetz · 2013-06-29T15:46:13.194Z · score: 0 (0 votes) · LW · GW

If they were able to keep it in mind separately, they could include that in their calculations, instead of using justice as a kind of sufficient statistic to summarize it.

comment by Eugine_Nier · 2013-06-29T06:19:09.977Z · score: -1 (1 votes) · LW · GW

"Deserve" is harmful because we would often rather destroy utility than allow an undeserved outcome distribution.

Would you also two-box on Newcomb’s problem?

comment by PhilGoetz · 2013-06-29T15:56:43.535Z · score: 1 (1 votes) · LW · GW

You can still use precommitment, but tie it to consequences rather than to Justice. Take Edward Snowden. Say that the socially-optimal outcome is to learn about the most alarming covert government programs, but not about all covert programs. So you want some Edward Snowdens to reveal some operations, but you don't want that to happen very often. The optimal behavior may be to precommit to injustice, punishing government employees who reveal secrets regardless of whether their actions were justified.

comment by Eugine_Nier · 2013-06-30T04:56:18.926Z · score: 0 (2 votes) · LW · GW

International espionage is probably one of the worst examples to attempt to generalize concepts like justice from. It's probably better to start with simpler (and more common) examples like theft or murder and then use the concepts developed on the simpler examples to look at the more complicated one.

comment by Kaj_Sotala · 2013-06-27T23:35:08.019Z · score: 3 (3 votes) · LW · GW

Upvoted, but I would note that it's interesting to see a moral value listed in a (supposedly value-neutral) "bad concepts repository". The idea that "deserve", in the sense you mention, is a harmful and meaningless concept is a rather consequentialist notion, and seeing this so highly upvoted says something about the ethics that this community has adopted - and if I'm right in assuming that a lot of the upvoters probably thought this a purely factual confusion with no real ethical element, then it says a bit about the moral axioms that we tend to take for granted.

Again, not saying this as a criticism, just as something that I found interesting.

E.g. part of my morality used to say that I only deserved some pleasures if I had acted in the right ways or was good enough: and this had nothing to do with a consequentialist it-is-a-way-of-motivating-myself-to-act-right logic, it was simply an intrinsic value that I would to some extent have considered morally right to have even if possessing it was actively harmful. Somebody coming along and telling me that "in reality, your value is not grounded in any concrete mechanism" would have had me going "well, in that case your value of murder being bad is not grounded in any concrete mechanism either". (A comment saying that "the concept of murder can be harmful, since in reality there is no mechanism for determining what's murder" probably wouldn't have been upvoted.)

comment by Larks · 2013-06-27T10:46:36.302Z · score: 2 (4 votes) · LW · GW

We like to think about whether we "deserve" what we get, or whether someone else deserves what he/she has. But in reality there is no such mechanism.

So you're saying we like thinking about a moral property, but we're wrong to do so, because this property is not reliably instantiated? Desert theorists do not need to disagree - there's no law of physics that means people necessarily get what they deserve. Rather, we are supposed to be the mechanism - we must regulate our own affairs so as to ensure that people get what they deserve.

comment by Leonhart · 2013-06-27T15:01:30.303Z · score: 1 (1 votes) · LW · GW

Perhaps the bad concept here is actually "karma", which I understand roughly to be the claim that there is a law of physics that means people necessarily get what they deserve.

comment by fubarobfusco · 2013-06-27T22:52:10.717Z · score: 2 (2 votes) · LW · GW

I think around here we can call that the just-world fallacy.

comment by Randy_M · 2013-06-27T15:03:34.742Z · score: 1 (1 votes) · LW · GW

To me, deserving flows from experiencing the predictable consequences of one's actions. If the cultural norm for my area is to wait in line at the bank, checkout, restaurant, etc., and I do so, I deserve to be served when I reach the front of the line (barring any prior actions towards the owners, like theft, or personal connections). Someone who comes in later does not deserve to be served until others in the queue have been. Or, in a less relative example, if I see dark clouds and go out dressed for warm weather when I have rain clothes at hand, I deserve to feel uncomfortable. I do not deserve to be assaulted by random strangers when I have not personally performed any actions that would initiate a conflict that violence would resolve, or done anything which tends to anger other people. Of course, the certainty of getting what one deserves is not 1, and one must expect that the unexpected will happen in some context eventually.

comment by Kawoomba · 2013-06-27T10:37:23.814Z · score: 1 (1 votes) · LW · GW

On the flipside, egalitarian instincts (e.g. "justice and liberty for all", "all men are created equal") are often deemed desirable, even though many a time "deserve" stems from such concepts of what a society should supposedly be like, "what kind of society I want to live in".

There is a tension between decrying "deserve" as harmful, while e.g. espousing the (in many cases) egalitarian instincts they stem from ("I should have as many tech toys as my neighbor", "I'm trying to keep up with the Joneses", etc.).

comment by pinyaka · 2013-06-27T12:36:58.741Z · score: 0 (0 votes) · LW · GW

I think this is a different flavor of deserving. Stabilizer is using deserve to explain how people got into the current situation, while you're using it to describe a desirable future situation. The danger is assuming that because we are capable of acting in a way that gives people what they deserve, in all situations someone must have already done so - so everyone must have acted in such a way that they have earned their present circumstances through moral actions.

comment by Eugine_Nier · 2013-06-29T06:12:52.713Z · score: -1 (1 votes) · LW · GW

The concept of "deserve" is only harmful to the extent people apply it to things they don't in fact deserve. In this respect, it's no different from the concept of "truth".

comment by ThrustVectoring · 2013-06-28T02:54:01.207Z · score: 0 (0 votes) · LW · GW

It's part of a larger pattern of mistaking your interpretations of reality for reality itself. There are no ephemeral labels floating around that are objectively true - you can't talk too much, work too hard, or be pathetic. You can only say things that other people would prefer not to hear, do work to the exclusion of other objectives, or be pitied by someone.

comment by wedrifid · 2013-06-28T05:44:25.104Z · score: 0 (0 votes) · LW · GW

There are no ephemeral labels floating around that are objectively true - you can't talk too much, work too hard, or be pathetic.

If excessive work causes an overuse injury or illness then "worked too hard" would seem to be a legitimate way to describe reality. (Agree with the other two.)

comment by [deleted] · 2013-06-28T00:10:05.938Z · score: 0 (0 votes) · LW · GW

I agree with that. I also suspect many people treat deserving of rewards and deserving of punishments as separate concepts. As a result they might reject one while staying attached to the other and become even more confused.

comment by Will_Newsome · 2013-06-27T05:13:59.412Z · score: 14 (14 votes) · LW · GW

(Thinking about this for a bit, I noticed that it was more fruitful for me to think of "concepts that are often used unskillfully" rather than "bad concepts" as such. Then you don't have to get bogged down thinking about scenarios where the concept actually is pretty useful as a stopgap or whatever.)

comment by drethelin · 2013-06-27T05:25:06.533Z · score: 2 (2 votes) · LW · GW

That's well-known as the Mindslaver problem in MTG.

comment by wedrifid · 2013-06-27T07:30:54.353Z · score: 6 (6 votes) · LW · GW

That's well-known as the Mindslaver problem in MTG.

Can you explain more how that problem relates to the mindslaver card in the MTG community? (Or provide a link? The top results on google were interesting but I think not the meme you were referring to.)

comment by ShardPhoenix · 2013-06-27T11:28:00.732Z · score: 17 (17 votes) · LW · GW

I think this is a slightly different issue. In Magic there's a concept of "strictly better" where one card is deemed to be always better than another (eg Lightning Bolt over Shock), as opposed to statistically better (eg Silver Knight is generally considered better than White Knight but the latter is clearly preferable if you're playing against black and not red). However, some people take "strictly better" too, um, strictly, and try to point out weird cases where you would prefer to have the seemingly worse card. Often these scenarios involve Mindslaver (eg if you're on 3 life and your opponent has Mindslaver you'd rather have Shock in hand than Lightning Bolt).

The lesson is to not let rare pathological cases ruin useful generalizations (at least not outside of formal mathematics).

comment by Stabilizer · 2013-06-28T08:10:05.673Z · score: 2 (2 votes) · LW · GW

The lesson is to not let rare pathological cases ruin useful generalizations (at least not outside of formal mathematics).

By the way, even in formal mathematics (and maybe especially in formal mathematics), while pathological cases are interesting, nobody discards perfectly useful theories just because the theory allows pathologies. For example, nobody hesitates to use measure theory in spite of the Banach-Tarski paradox; nobody hesitates to use calculus even though the Weierstrass function exists; few people hesitate to use the Peano axioms in spite of the existence of non-standard models of that arithmetic.

comment by Fhyve · 2013-07-02T05:03:01.319Z · score: 1 (1 votes) · LW · GW

Nitpick: I would consider the Weierstrass function a different sort of pathology than non-standard models or Banach-Tarski - a practical pathology rather than a conceptual pathology. The Weierstrass function is just a fractal. It never smooths out no matter how much you zoom in.

comment by Stabilizer · 2013-07-02T05:26:38.139Z · score: 0 (0 votes) · LW · GW

I agree that the Weierstrass function is different. I felt a tinge of guilt when I included the Weierstrass function. But I included it since it's probably the most famous pathology.

That being said, I don't quite understand the distinction you're making between a practical and a conceptual pathology. The distinction I would make between the Weierstrass and the other two is that the Weierstrass is something which is just counter-intuitive whereas the other two can be used as a reason to reject the entire theory. They are almost antithetical to the purpose of the theory. Is that what you were getting at?

comment by wedrifid · 2013-06-27T12:13:36.059Z · score: 1 (1 votes) · LW · GW

However, some people take "strictly better" too, um, strictly, and try to point out weird cases where you would prefer to have the seemingly worse card. Often these scenarios involve Mindslaver

Ahh, that would do it. The enemy being the one who uses the card would tend to make inferiority desirable in rather a lot of cases.

comment by Epiphany · 2013-06-28T02:55:12.014Z · score: 13 (15 votes) · LW · GW

Bad Concept: Obviousness

Consider this: what distinguishes obviousness from a first impression? Like some kind of meta semantic stop sign, "it's obvious!" can be used as an excuse to stop thinking about a question. It can be shouted out as an argument with an implication to the effect of "If you don't agree with me instantly, you're an idiot," which can sometimes convince people that an idea is correct without the speaker actually supporting their points. I sometimes wonder if obviousness is just an insidious rationalization that we cling to when what we really want is to avoid thinking or gain instant agreement.

I wonder how much damage obviousness has done?

comment by sixes_and_sevens · 2013-06-28T11:14:23.232Z · score: 12 (14 votes) · LW · GW

I've found the statement "that does not seem obvious to me" to be quite useful in getting people to explain themselves without making them feel challenged. It's among my list of "magic phrases" which I'm considering compiling and posting at some point.

comment by John_Maxwell (John_Maxwell_IV) · 2013-06-29T18:19:05.009Z · score: 5 (5 votes) · LW · GW

It's among my list of "magic phrases" which I'm considering compiling and posting at some point.

Looking forward to this.

comment by Elo · 2014-10-29T06:40:04.176Z · score: 1 (1 votes) · LW · GW

Magic phrases please?

comment by sixes_and_sevens · 2014-10-29T11:27:26.022Z · score: 1 (1 votes) · LW · GW

This seems like a good premise for a post inviting people to contribute their own "magic phrases". Sadly, I've used up my Discussion Post powers by making an idle low-quality post about weird alliances last week. I now need to rest in my crypt for a week or so until people forget about it.

comment by gjm · 2014-10-29T17:11:09.271Z · score: 0 (0 votes) · LW · GW

I've used up my Discussion Post powers [...] I now need to rest in my crypt [...]

OK, I'm confused. (Probably because I'm missing a joke.) Reading the above in isolation I'd take it as indicating that you posted something that got you a big ball o' negative karma, which brought you below some threshold that meant you couldn't post to Discussion any more.

Except that your "weird alliances" post is at +7, and your total karma is over 4k, and your last-30-days karma is over 200, and none of your posts or comments in the last week or so is net negative, and those are all very respectable numbers and surely don't disqualify anyone from doing anything.

So, as I say, I seem to be missing a joke. Oh well.

comment by sixes_and_sevens · 2014-10-29T17:57:00.018Z · score: 2 (2 votes) · LW · GW

Making non-trivial posts carries psychological costs that I feel quite acutely. I would love to be able to plough through this (cf. Comfort Zone Expansion) by making a lot of non-trivial posts.

Unfortunately, making non-trivial posts also carries time costs that I feel quite acutely. I have quite fastidious editorial standards that make writing anything quite time-consuming (you would be alarmed at how much time I've spent writing this response), and this is compounded by engaging in long, sticky discussions.

The Weird Alliances post was an attempt to write something quickly to lower standards, and as a result it was of lower quality than I would have liked. This made the psychological cost greater. I've yet to figure out how to unknot this perverse trade-off between psychological and time costs, but it means I would prefer to space out making posts.

comment by gjm · 2014-10-29T23:12:02.657Z · score: 2 (2 votes) · LW · GW

Ah, OK, understood. Best of luck with the unknotting. (I'd offer advice, but I have much the same problem myself.)

comment by Kaj_Sotala · 2013-06-29T05:36:27.288Z · score: 3 (3 votes) · LW · GW

Related: On Saying the Obvious

comment by Epiphany · 2013-07-01T00:13:57.717Z · score: 0 (0 votes) · LW · GW

Good link. I like that Grognor mentions that obviousness is just a matter of perception and people's ideas about what's obvious will vary, so we shouldn't assume other people know "obvious" things. However, I think it's really important to be aware that if you think something is obvious, you stop questioning, and you're then left with what is essentially a first impression - but I don't see Grognor mention that semantic-stop-sign-like effect in the post, nor do I see anything about people using obviousness as a way to falsely support points.

Do you think Grognor would be interested in updating the article to include additional negative effects of obviousness? Then again putting too many points into an article makes articles confusing and less fun to read. Maybe I should write one. Do you know if anyone has written an article yet on obviousness as a meta semantic stop sign, or obviousness as a false supportive argument? If not, I'll do it.

comment by gwern · 2013-07-01T00:20:39.095Z · score: 1 (1 votes) · LW · GW

Do you think Grognor would be interested in updating the article to include additional negative effects of obviousness?

No; he's quit LW.

comment by Kaj_Sotala · 2013-07-01T13:04:47.616Z · score: 0 (0 votes) · LW · GW

Do you know if anyone has written an article yet on obviousness as a meta semantic stop sign, or obviousness as a false supportive argument? If not, I'll do it.

Not that I could recall.

comment by Epiphany · 2013-07-01T17:02:22.283Z · score: 0 (0 votes) · LW · GW

Ok, I'll post about this in the open thread to gauge interest / see if anyone else knows of a pre-existing LW post on these specific obviousness problems.

comment by bokov · 2013-09-11T22:37:14.696Z · score: 2 (2 votes) · LW · GW

The worst professors I have had disproportionately shared the habit of dismissing as obvious concepts that weren't. Way to distract students from the next thing you were going to say.

comment by wedrifid · 2013-06-28T05:39:29.612Z · score: 2 (2 votes) · LW · GW

See also: Expecting Short Inferential Distances

comment by Epiphany · 2013-06-28T08:31:55.004Z · score: 0 (0 votes) · LW · GW

That's not quite what I meant, but that's a good article.

What I meant is more along the lines of... two people are trying to figure out the same thing together, one jumps to a conclusion and the other one does not. It's that distance between the first observation and the truth I am referring to, not the distance between one person's perspective and another's.

Reads that article again. I think this is my third time.

comment by Eugine_Nier · 2013-06-29T06:41:47.819Z · score: 1 (3 votes) · LW · GW

Well, in mathematics papers it tends to mean, "I'm certain this is true, but I can't think of an argument at the moment".

comment by Epiphany · 2013-07-01T00:19:40.658Z · score: 0 (0 votes) · LW · GW

Hahahah! Oh, that's terrible. Now I just realized that my meaning was not entirely explicit. I edited my statement to add the part about not supporting points.

comment by Armok_GoB · 2013-06-29T21:55:16.831Z · score: 0 (0 votes) · LW · GW

That seems like just a wrong use of obvious. When I say "obvious" I usually mean I cannot explain something because my understanding is subconscious and opaque to introspection.

comment by Epiphany · 2013-07-01T00:46:24.837Z · score: 1 (1 votes) · LW · GW

I'm glad you seem to be aware of this problem. Unfortunately, I don't think the rest of the world is aware of this. The dictionary currently defines obvious as meaning "easily seen" and "evident", unfortunately.

comment by shminux · 2013-06-27T17:32:58.205Z · score: 13 (17 votes) · LW · GW

Implicitly assuming that you mapped out/classified all possible realities. One of the symptoms is when someone writes "there are only two (or three or four...) possibilities/alternatives..." instead of "The most likely/only options I could think of are..." This does not always work even in math (e.g. the statement "a theorem can be either true or false" used to be thought of as self-evidently true), and it is even less reliable in a less rigorous setting.

In other words, there is always at least one more option than you have listed! (This statement itself is, of course, also subject to the same law of flawed classification.)

comment by fubarobfusco · 2013-06-27T22:21:57.012Z · score: 9 (9 votes) · LW · GW

There's a Discordian catma to the effect that if you think there are only two possibilities — X, and Y — then there are actually Five possibilities: X, Y, both X and Y, neither X nor Y, and something you haven't thought of.

comment by buybuydandavis · 2013-06-28T01:30:36.698Z · score: 6 (6 votes) · LW · GW

Jaynes had a recommendation for multiple hypothesis testing - one of the hypotheses should always be "something I haven't thought of".
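Jaynes's catch-all can be sketched as a toy Bayesian update over a discrete hypothesis set. Everything below (the hypothesis names, priors, and likelihoods) is an illustrative assumption, not anything taken from Jaynes:

```python
# Toy Bayesian update over a hypothesis set that includes a catch-all
# "something I haven't thought of". All numbers are illustrative.

def posterior(priors, likelihoods):
    """Bayes' rule: P(H|D) is proportional to P(H) * P(D|H)."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

priors = {"H1": 0.45, "H2": 0.45, "catch_all": 0.10}
# The catch-all assigns a flat, modest likelihood to any observation;
# here the two named hypotheses happen to predict the data poorly.
likelihoods = {"H1": 0.01, "H2": 0.02, "catch_all": 0.20}

post = posterior(priors, likelihoods)
# Most posterior mass flows to the catch-all hypothesis.
```

The point of the extra hypothesis is exactly this behavior: when none of your explicit hypotheses fit the data well, the posterior signals "something I haven't thought of" instead of forcing the probability mass onto the least-bad named option.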

comment by FeepingCreature · 2013-06-27T15:45:32.680Z · score: 13 (15 votes) · LW · GW

The word "is" in all its forms. It encourages category thinking in lieu of focussing on the actual behavior or properties that make it meaningful to apply. Example: "is a clone really you?" Trying to even say that without using "is" poses a challenge. I believe it should be treated the same as goto: occasionally useful but usually a warning sign.

comment by [deleted] · 2013-06-27T23:11:03.660Z · score: 9 (9 votes) · LW · GW

So some, like Lycophron, were led to omit 'is', others to change the mode of expression and say 'the man has been whitened' instead of 'is white', and 'walks' instead of 'is walking', for fear that if they added the word 'is' they should be making the one to be many. -Aristotle, Physics 1.2

ETA: I don't mean this as either criticism or support, I just thought it might be interesting to point out that the frustration with 'is' has a long history.

comment by Viliam_Bur · 2013-06-28T10:22:53.247Z · score: 5 (5 votes) · LW · GW

E-Prime.

We could support speaking this way on LW by making a "spellchecker" that would underline all the forbidden words.
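As a toy sketch of that idea (the function name and word list here are made up; the list follows the usual E-Prime ban on forms of "to be"):

```python
import re

# Hypothetical E-Prime checker: find every form of "to be" in a piece
# of text, with character offsets so a UI could underline them.

BE_FORMS = {"be", "is", "am", "are", "was", "were", "been", "being",
            "isn't", "aren't", "wasn't", "weren't"}

def flag_be_forms(text):
    """Return (offset, word) pairs for each forbidden word found."""
    return [(m.start(), m.group(0))
            for m in re.finditer(r"[A-Za-z']+", text)
            if m.group(0).lower() in BE_FORMS]

flags = flag_be_forms("Is a clone really you? The clone resembles you.")
# → [(0, 'Is')]
```

Note that the second sentence, rewritten E-Prime-style around the verb "resembles", passes cleanly.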

comment by J_Taylor · 2013-06-28T00:05:53.970Z · score: 5 (5 votes) · LW · GW

In that sentence, I find the words "clone", "really" and "you" to be as problematic as "is".

comment by [deleted] · 2013-06-28T02:41:45.953Z · score: 6 (6 votes) · LW · GW

You're perfectly comfortable with the indefinite article?

comment by J_Taylor · 2013-06-29T05:10:19.598Z · score: 3 (3 votes) · LW · GW

No, but I am much more comfortable with it than I am with the other words.

comment by [deleted] · 2013-06-28T12:26:26.069Z · score: 2 (2 votes) · LW · GW

Not having a word for “is” didn't stop the Chinese from coming up with the “white horse not horse” thing, though.

comment by Stabilizer · 2013-06-27T06:35:47.149Z · score: 13 (13 votes) · LW · GW

There is a cultural heuristic (especially in Eastern cultures) that we should respect older people by default. Now, this is not a useless heuristic, as the fact that older people have had more life experiences is definitely worth taking into account. But at least in my case (and I suspect in many other cases), the respect accorded was disproportionate to their actual expertise in many domains.

The heuristic can be very useful when respecting the older person is not really a matter of whether he/she is right or wrong, but more about appeasing power. It can be very useful to distinguish between the two situations.

comment by Viliam_Bur · 2013-06-28T09:24:38.805Z · score: 7 (7 votes) · LW · GW

How old is the "older" person? 30? 60? 90? In the last case, respecting a 90-year-old person is usually not about appeasing power.

It seems more like retirement insurance. A social contract that while you are young, you have to respect old people, so that while you are old, you will get respect from young people. Depends on what specifically "respecting old people" means in a given culture. If you have to obey them in their irrational decisions, that's harmful. But if it just means speaking politely to them and providing them a hundred trivial advantages, I would say it is good in most situations.

Specifically, I am from Eastern Europe, where there is a cultural norm of letting old people sit in the mass transit. As in: you see an old person near you, there are no free places to sit, so you automatically stand up and offer your seat to the old person. The same for pregnant women. (There are some seats with a sign that requires you to do this, but the cultural norm is that you do it everywhere.) -- I consider this norm good, because for some people the difference in utility between standing and sitting is greater than for average people. (And of course, if you have a broken leg or something, that's an obvious exception.) So it was rather shocking for me to hear about cultures where this norm does not exist. Unfortunately, even in my country this norm (and politeness in general) has been decreasing in recent decades.

comment by wedrifid · 2013-06-27T07:27:32.908Z · score: 5 (5 votes) · LW · GW

Now, this is not a useless heuristic, as the fact that older people have had more life experiences is definitely worth taking into account.

More relevant to the social reasons for the heuristic, they have also had more time to accrue power and allies. For most people that is what respect is about (awareness of their power to influence your outcomes conditional on how much deference you give them).

The heuristic can be very useful when respecting the older person is not really a matter of whether he/she is right or wrong, but more about appeasing power. It can be very useful to distinguish between the two situations.

Oh, yes, those were the two points I prepared in response to your first paragraph. You nailed both, exactly!

Signalling social deference and actually considering an opinion to be strong Bayesian evidence need not be the same thing.

comment by PhilGoetz · 2013-06-27T23:13:41.914Z · score: 2 (2 votes) · LW · GW

But I think that in America today, we don't respect older people enough. Heck, we don't often even acknowledge their existence. Count what fraction of the people you pass on the street today are "old". Then count what fraction of people you see on TV or in the movies are old.

comment by buybuydandavis · 2013-06-28T02:06:58.213Z · score: 5 (5 votes) · LW · GW

I think that our age-cohorted Lord of the Flies educational system has much to do with "we" being age-cohorted as well.

comment by Stabilizer · 2013-06-28T00:30:15.965Z · score: 2 (2 votes) · LW · GW

It is not surprising that there isn't a proportional number of old people in TV/movies right now. And I suspect there never was. TV/movie audiences desire to view people who possess high-status markers. Two important markers are beauty and power. In reality, younger people typically have beauty but not much power. Older people have more power and less beauty. Since TV/movies don't have the constraints of reality, we can make young people who are beautiful also powerful. We can rarely make old people beautiful, with some exceptions, which TV/movies often exploit. I don't think this has anything to do with respect.

comment by jklsemicolon · 2013-06-29T11:24:48.188Z · score: 2 (4 votes) · LW · GW

TV/movie audience desire to view people who possess high-status markers...I don't think this has anything to do with respect

This is a contradiction.

comment by Stabilizer · 2013-06-30T01:59:20.301Z · score: 2 (2 votes) · LW · GW

Sorry if it was confusing but you are taking it out of context. I actually meant: the fact that we don't have a proportional number of old people in TV/movies as in real life is not because we respect old people less in real life. It is simply a reflection of the freedoms available in TV/movies.

comment by gothgirl420666 · 2013-06-28T13:02:07.163Z · score: 11 (13 votes) · LW · GW

"Your true self", or "your true motivations". There's a tendency sometimes to call people's subconscious beliefs and goals their "true" beliefs and goals, e.g. "He works every day in order to be rich and famous, but deep down inside, he's actually afraid of success." Sometimes this works the other way and people's conscious beliefs and goals are called their "true" beliefs and goals in contrast to their unconscious ones. I think this is never really a useful idea, and the conscious self should just be called the conscious self, the subconscious self should just be called the subconscious self, and neither one of them needs to be privileged over the other as the "real" self. Both work together to dictate behavior.

"Rights". This is probably obvious to most consequentialists, but framing political discussions in terms of rights, as in "do we have the right to have an ugly house, or do our neighbors not have the right not to look at an ugly house if they don't want to?" is usually pretty useless. Similarly, "freedom" is not really a good terminal value, because pretty much anything can be defined as freedom, e.g. "by making smoking in restaurants illegal, the American people have the freedom not to smell smoke in a restaurant if they don't want to."

comment by [deleted] · 2013-06-29T03:09:31.466Z · score: 1 (1 votes) · LW · GW

Sometimes this works the other way and people's conscious beliefs and goals are called their "true" beliefs and goals in contrast to their unconscious ones.

Most examples I recall, of pointing out which - conscious vs unconscious - is the "true" motivation, were attempts to attack someone's behavior. An accuser picks one motivation that is disagreeable or unpleasant, and uses it to cast aspersions on a positive behavior.

I don't think that one self is being privileged over the other solely because of confusion as to which motivations really dictate behavior. It largely depends on which is more convenient for the accuser who designates the "true" self.

Also, you may want to put your two bad concepts into different comments. That way they can be upvoted or downvoted separately.

comment by elharo · 2013-06-27T11:18:18.975Z · score: 11 (21 votes) · LW · GW

Within my lifetime, a magic genie will appear that grants all our wishes and solves all our problems.

For example, many Christians hold this belief under the names the Kingdom, the Rapture, and/or the second coming (details depend on sect). It leads to excessive discounting of the future, and consequent poor choices. In Collapse Jared Diamond writes about how apocalyptic Christians who control a mining company cause environmental problems in the United States.

Belief in a magic problem solving genie also causes people to fail to take effective action to improve their lives and help others, because they can just wait for the genie to do it for them.

comment by Desrtopa · 2013-06-27T19:28:59.265Z · score: 4 (4 votes) · LW · GW

I think this would probably be a pretty destructive idea were it not for the fact that for most people who hold it, it seems to be such a far belief that they scarcely consider the consequences.

comment by Viliam_Bur · 2013-06-28T10:31:51.868Z · score: 2 (2 votes) · LW · GW

If I believe the world will be destroyed during the next year, the near reaction would be to quit the job, sell everything I can, and enjoy the money while I can. Luckily, most people who share this belief don't do that.

But there are also long-term plans, such as getting more education, protecting the nature, planning for retirement... and those need to be done in far mode, where "but the world will be destroyed this year" can be used as an excuse. -- I wonder how often people do this. Probably more often than the previous example.

comment by bokov · 2013-09-11T22:38:48.030Z · score: 1 (1 votes) · LW · GW

Or that we will create a magic genie to grant all our wishes and solve our problems?

comment by Lumifer · 2013-06-27T16:03:27.378Z · score: 7 (7 votes) · LW · GW

I am not sure I am comfortable with the idea of an entirely context-less "bad concept". I have the annoying habit of answering questions of the type "Is it good/bad, useful/useless, etc." with a counter-question "For which purpose?"

Yes, I understand that rare pathological cases should not crowd out useful generalizations. However given the very strong implicit context (along with the whole framework of preconceived ideas, biases, values, etc.) that people carry around in their heads, I find it useful and sometimes necessary to help/force people break out of their default worldview and consider other ways of looking at things. In particular, ways where good/bad evaluation changes the sign.

To get back to the original point, a concept is a mental model of reality, a piece of a map. A bad concept would be wrong and misleading in the sense that it would lead you to incorrect conclusions about the territory. So a "bad concept" is just another expression for a "bad map". And, um, there are a LOT of bad maps floating around in the meme aether...

comment by buybuydandavis · 2013-06-28T01:47:06.698Z · score: 1 (1 votes) · LW · GW

Good, for what, for whom.

Similarly, instead of grousing how the world isn't the way I'd like it, or a person isn't the way I'd like them, I try to ask "what's valuable here for me?", which is a more productive focus.

comment by John_Maxwell (John_Maxwell_IV) · 2013-06-29T18:18:18.232Z · score: 0 (0 votes) · LW · GW

I have the annoying habit of answering questions of the type "Is it good/bad, useful/useless, etc." with a counter-question "For which purpose?"

"Should" is another word like this. Generally when people say should, they either mean with respect to how best to achieve some goal, or else they're trying to make you follow their moral rules.

comment by komponisto · 2013-06-27T10:29:29.236Z · score: 5 (13 votes) · LW · GW

"Harmony" -- specifically the idea of root progressions -- in music theory. (EDIT: That's "music theory", not "music". The target of my criticism is a particular tradition of theorizing about music, not any body of actual music.)

This is perhaps the worst theory I know of to be currently accepted by a mainstream academic discipline. (Imagine if biologists were Lamarckians, despite Darwin.)

comment by maia · 2013-06-27T11:19:35.683Z · score: 6 (6 votes) · LW · GW

What's wrong with it?

comment by komponisto · 2013-06-27T11:34:11.435Z · score: 7 (7 votes) · LW · GW

See discussion here, which has more links.

comment by maia · 2013-06-28T21:15:49.939Z · score: -1 (1 votes) · LW · GW

Er. That's an article about the history of philosophy. Am I missing something, or was it supposed to be about music theory?

comment by komponisto · 2013-06-28T23:38:04.041Z · score: 1 (1 votes) · LW · GW

The link is to a comment.

comment by maia · 2013-06-29T04:05:49.417Z · score: 1 (1 votes) · LW · GW

Ah, ok. I was on my cellphone, so probably assumed that the instant-scroll-down-to-comment-section was a bug instead of a feature (or possibly it went to the wrong place, even).

comment by RichardKennaway · 2013-06-27T12:53:00.809Z · score: 4 (4 votes) · LW · GW

Could you expand on that? It has never been clear to me what music theory is — what constitutes true or false claims about the structure of a piece of music, and what constitutes evidence bearing on such claims.

What makes the idea of "harmony" wrong? What alternative is "right"? Schenker's theory? Westergaard's? Riemann? Partsch? (I'm just engaging in Google-scholarship here, I'd never heard of these people until moments ago.) But what would make these, or some other theory, right?

comment by komponisto · 2013-06-28T11:32:14.217Z · score: 7 (7 votes) · LW · GW

Could you expand on that? It has never been clear to me what music theory is — what constitutes true or false claims about the structure of a piece of music, and what constitutes evidence bearing on such claims.

You're in good company, because it's never been clear to music theorists either, even after a couple millennia of thinking about the problem.

However, I do have my own view on the matter. I consider the music-theoretical analogue of "matching the territory" to be something like data compression. That is, the goodness of a musical theory is measured by how easily it allows one to store (and thus potentially manipulate) musical data in one's mind.

Ideally, what you want is some set of concepts such that, when you have them in your mind, you can hear a piece of music and, instead of thinking "Wow! I have no idea how to do that -- it must be magic!", you think "Oh, how nice -- a zingoban together with a flurve and two Type-3 splidgets", and -- most importantly -- are then able to reproduce something comparable yourself.

comment by pianoforte611 · 2013-07-07T20:03:01.527Z · score: 0 (0 votes) · LW · GW

I'm afraid that despite reading a fair chunk of Mathemusicality I've given up on Westergaard's "An Introduction to Tonal Theory" in favor of Steven Laitz's "The Complete Musician". Steven Laitz is a Schenkerian but his book is fairly standard and uses harmony, voice leading and counterpoint.

Actually I'm beginning to conclude that if you want to compose, then starting off by learning music theory of any sort is totally wrongheaded. It is like trying to learn French by memorizing vocabulary and reading books on grammar (which is disturbingly how people try to learn languages in high school). The real way that people learn French is by starting off with very simple phrases and ideas, then gradually expanding their knowledge by communicating with people who speak French. Grammar books and vocabulary books are important, but only as a supplement to the actual learning that takes place from trying to communicate. Language and music are subconscious processes.

I don't know what a similar approach to music composition would look like, but I'm reasonably convinced that it would be much better than the current system.

I should admit though that I am monolingual and I can't compose music - so my thoughts are based only on theory and anecdotes.

comment by komponisto · 2013-07-08T00:24:07.746Z · score: 2 (2 votes) · LW · GW

If I may ask, what was your issue with Westergaard?

(As a polyglot composer, I agree that there is an analogy of language proficiency to musical composition, but would draw a different conclusion: harmonic theory is like a phrasebook, whereas Westergaardian theory is like a grammar text. The former may seem more convenient for certain ad hoc purposes, but is hopelessly inferior for actually learning to speak the language.)

comment by pianoforte611 · 2013-07-08T02:04:16.078Z · score: 2 (2 votes) · LW · GW

I don't have any particular issue with Westergaard, I just couldn't make it through the book. Perhaps with more effort I could, but I'm lacking motivation due to low expectancy. It was a long time ago that I attempted the book, but if I had to pinpoint why, there are a few things I stumbled over:

The biggest problem was that I have poor aural skills. I cannot look at two lines and imagine what they sound like so I have to play them on a piano. Add in more lines and I am quickly overwhelmed.

A second problem was the abstractness of the first half of the book. Working through counterpoint exercises that didn't really sound like music did not hold my attention for very long.

A third problem was the disconnect between the rules I was learning and my intuition. Even though I could do the exercises by following the rules, too often I felt like I was counting spaces rather than improving my understanding of how musical lines are formed.

I think that your comparison is very interesting because I would predict that a phrasebook is much more useful than a grammar text for learning a language. The Pimsleur approach, which seems to be a decent way to start learning a language, is pretty much a phrasebook in audio form with some spaced repetition thrown in for good measure. Of course the next step, where the actual learning takes place, is to start trying to communicate with native speakers, but the whole point of Pimsleur is to get you to that point as soon as possible. This is important because most people use grammatical rules implicitly rather than explicitly. Certainly grammar texts can be used to improve your proficiency in a language, but I highly doubt that anyone has actually learned a language using one. Without the critical step of communication, there is no mechanism for internalizing the grammatical rules.

(Sorry for taking such a long tangent into language acquisition, I wasn't initially planning on stretching the analogy that far.)

comment by komponisto · 2013-07-08T13:49:54.547Z · score: 3 (3 votes) · LW · GW

Thanks for your feedback on the Westergaard text. I think many of your problems will be addressed by the material I plan to write at some indefinite point in the future. It's unfortunate that ITT is the only exposition of Westergaardian theory available (and even it is not technically "available", being out of print), because your issues seem to be with the book and not with the theory that the book aims to present.

There is considerable irony in what you say about aural skills, because I consider the development of aural skills -- even at the most elementary levels -- to be a principal practical use of Westergaardian theory. Unfortunately, Westergaard seems not to have fully appreciated this aspect of his theory's power, because he requests of the reader a rather sophisticated level of aural skills (namely the ability to read and mentally hear a Mozart passage) as a prerequisite for the book -- rather unnecessarily, in my opinion.

This leads to the point about counterpoint exercises, which, if designed properly, should be easier to mentally "hear" than real music -- that is, indeed, their purpose. Unfortunately, this is not emphasized enough in ITT.

I think that your comparison is very interesting because I would predict that a phrasebook is much more useful than a grammar text for learning a language

Thank goodness I'm here to set you straight, then. Phrasebooks are virtually useless for learning to speak a language. Indeed they are specifically designed for people who don't want to learn the language, but merely need to memorize a few phrases (hence the name), for -- as I said -- ad hoc purposes. (Asking where the bathroom is, what someone's name is, whether they speak English, that sort of thing.)

Here's an anecdote to illustrate the problem with phrasebooks. When I was about 10 years old and had just started learning French, my younger sister got the impression that pel was the French word for "is". The reason? I had informed her that the French translation of "my name is" was je m'appelle -- a three syllable expression whose last syllable is indeed pronounced pel. What she didn't realize was that the three syllables of the French phrase do not individually correspond to the three syllables of the English phrase. Pel does not mean "is"; rather, appelle means "call", je means "I", and m' means "myself". Though translated "my name is", the phrase actually means "I call myself".

A phrasebook won't tell you this; a grammar will. If you try to learn French from a phrasebook, you might successfully learn to introduce yourself with je m'appelle, but you will be in my sister's position, doomed to making false assumptions about the structure of the language that may require vast amounts of data to correct. (It's no defense of a wrong theory that it didn't prevent you from learning the right theory eventually.) Whereas if you learn from a grammar, not only will you learn je m'appelle without thinking pel means "is", but you will also be able to generalize outside the scope of the "Greetings" section of your phrasebook and produce apparently unrelated phrases such as "I call you" (je t'appelle).

I think your comments are revealing about the mindset of people who resist or "don't get" my attack on harmonic theory. It seems to be assumed that of course no one actually learns musical thinking from a harmony book. Likewise, in defending phrasebooks, you help yourself to the assumption that the learner is going to have access to extensive amounts of data in the form of communication with speakers, and that this will be where the "actual learning" is going to occur. Well in that case, what do you need a phrasebook for? You can, after all, learn a language simply by immersion, with nothing other than the data itself to guide you. If you're going to have any preliminary or supplementary instruction at all, it surely may as well be in an organized fashion, aimed at increasing the efficiency of the learning process by directing one toward correct theories and away from incorrect ones -- which is exactly what grammar books do and phrasebooks don't do.

Harmony is actually worse than a phrasebook, because at least a phrasebook won't cause you to make worse mistakes than you would make otherwise; and it doesn't pretend to be a grammar of the language. With harmony, the situation is different. Harmony books are written as if they were presenting an actual musical theory, something that would be useful to know before sifting through vast amounts of musical data doing, as you put it, "actual learning". But then, when push comes to shove and it is pointed out how terrible, how actively misleading the harmony pseudo-theory is for this purpose, its defenders retreat to a position of "oh, well, of course everybody knows that you can't actually learn music from a book" -- as if that were a defense against an alternative theory that actually is helpful. It's enough to drive one mad!

(You'll understand, I hope, that I'm not reacting particularly to you in the preceding paragraph, but to my whole history of such discussions going back a number of years.)

comment by pianoforte611 · 2013-10-12T17:53:39.807Z · score: 2 (2 votes) · LW · GW

Alright I've read most of the relevant parts of ITT. I only skimmed the chapter on phrases and movements and I didn't read the chapter on performance.

I do have one question: is the presence of the borrowing operation the only significant difference between Westergaardian and Schenkerian theory?

As for my thoughts, I think that Westergaardian theory is much more powerful than harmonic theory. It is capable of accounting for the presence of every single note in a composition, unlike harmonic theory, which seems to be stuck with a four-part chorale texture plus voice leading for the melody. Moreover, Westergaardian analyses feel much more intuitive and musical to me than harmonic analyses. In other words, it's easier for me to hear the Westergaardian background than it is for me to hear the chord progression.

For me the most distinctive advantage of Westergaardian analyses is that it respects the fact that notes do not have to "line up" according to a certain chord structure. Notes that are sounding at the same time may be performing different functions, whereas harmonic theory dictates that notes sounding at the same time are usually "part of a chord" which is performing some harmonic function. For example, it's not always clear to me that a tonic chord in a piece (which harmonic theory regards as a point of stability) is really an arrival point, or just a result of notes that happen to coincide at that moment. The same is true for other chords.

A corollary of this seems to be that harmonic analyses work fine when the notes do consistently line up according to their function, which happens all the time in pop music and possibly in Classical music, although I'm not certain of this.

Having said that, my biggest worry with Westergaardian theory is that it is almost too powerful. Whereas Harmonic theory constrains you to producing notes that do sound in some sense tonal (for a very powerful example of this see here), Westergaardian theory seems to allow you to do almost anything whether it sounds musical or not. While it is very easy to come up with a Westergaardian analysis, it is very difficult for me to understand why someone who had a certain framework in mind would have performed the operations that would have led them to the music in its actual form. The main culprits of this seem to be anticipatory notes and borrowing.

One more thing: have you read "Why I am not a Schenkerian" by Lodewijk Muns? Here is the link: http://lmuns.home.xs4all.nl/WhyIamNotaSchenkerian.pdf

One of his criticisms is that you can have harmonic consistency without following contrapuntal rules. Here is my attempt at fleshing out a more specific example: http://i.imgur.com/ruEYlhD.png I can't figure out how to generate those using Westergaardian theory.

comment by bogus · 2013-10-25T12:13:21.350Z · score: 2 (2 votes) · LW · GW

A corollary of this seems to be that Harmonic analyses work fine when the notes do consistently line up according to their function, which happens all the time in pop music and possibly in Classical music although I'm not certain of this. Having said that, my biggest worry with Westergaardian theory is that it is almost too powerful. Whereas Harmonic theory constrains you to producing notes that do sound in some sense tonal

Note that when analyzing tonal music with Westergaardian analysis, it is generally the case that anticipation and delay tend to occur at relatively shallow levels in the piece's structure. The deeper you go, the more notes are going to be "aligned", just like they might be expected to be in a harmonic analysis. Moreover, the constraints of consonance and dissonance in aligned lines (as given by the rules of counterpoint; see Westergaard's chapters on species counterpoint) will also come into play, when it comes to these deeper levels. So it seems that Westergaardian analysis can do everything that you expect harmonic analysis to do, and of course even more. Instead of having "harmonic functions" and "chords", you have constraints that force you to have some kind of consonance in the background.

comment by komponisto · 2013-10-14T22:27:15.866Z · score: 2 (4 votes) · LW · GW

I do have one question: is the presence of the borrowing operation the only significant difference between Westergaardian and Schenkerian theory?

The short answer is: definitely not. The long answer (a discussion of the relationship between Schenkerian and Westergaardian theory) is too long for this comment, but is something I plan to write about in the future. For now, be it noted simply that the two theories are quite distinct (for all that Westergaardian theory owes to Schenker as a predecessor) -- and, in particular, a criticism of Schenker can by no means necessarily be taken as a criticism of Westergaard, and vice-versa (see below).

For me the most distinctive advantage of Westergaardian analyses is that it respects the fact that notes do not have to "line up" according to a certain chord structure. Notes that are sounding at the same time may be performing different functions, whereas harmonic theory dictates that notes sounding at the same time are usually "part of a chord" which is performing some harmonic function.

The way I like to put it is that in Westergaardian theory, the function of a note is defined by its relationship to other notes in its line (and to the local tonic, of course), and not by its relationship to the "root" of the "chord" to which it belongs (as in harmonic theory).

A corollary of this seems to be that Harmonic analyses work fine when the notes do consistently line up according to their function

If by "work fine" you mean that it is in fact possible to identify the "appropriate" Roman numerals to assign in such cases, sure, I'll give you that. But what is such an "analysis" telling you? Taken literally, it means that you should understand the notes in the passage in terms of the indicated progression of "roots". Which, in turn, implies that in order to hear the passage in your head, you should first, according to the analyst, imagine the succession of roots (which often, indeed typically, move by skip), and only then imagine the other notes by relating them to the roots -- with the connection of notes in such a way as to form lines being a further, third step. To me, this is self-evidently a preposterously circuitous procedure when compared with the alternative of imagining lines as the fundamental construct, within which notes move by step -- without any notion of "roots" entering at all.

Having said that, my biggest worry with Westergaardian theory is that it is almost too powerful. Whereas Harmonic theory constrains you to producing notes that do sound in some sense tonal (for a very powerful example of this see here)

I am as profoundly unimpressed with that "demonstration" as I am with that whole book and its author -- of which, I must say, this example is entirely characteristic, in its exclusive obsession with the most superficial aspects of musical hearing and near-total amputation of the (much deeper) musical phenomena that I care most about and find most interesting. As far as I am concerned, there is no aesthetic difference between any of the passages (a) through (d) for the simple reason that all four of them are too short to possess much of any aesthetic characteristics in the first place: they all consist of three bars of four chords each. They are stylistically distinct, I suppose (though not actually very much, in the scheme of things), but any of them could be continued into something interesting or something less than interesting. One thing, however, is certain: if any of them were to be continued in the way they were generated (i.e. at random), the result would be nothing short of awful -- and equally so in all four cases.

The essence of musical composition -- at least its most fundamental and "elusive" aspect -- has to do with projecting coherent (i.e. recognizably human-designed) gestures over long time spans. (How long "long" is depends on context: even if you're writing a ten-second piece, you will want to carefully design its global structure.) The point being that multileveled thinking -- control of all the various degrees of locality and globality and their interrelationships -- is at the core of this art form. For that, you need a hierarchical or "reductive" theory (the very thing that our author explicitly says he doesn't want, even claiming that to hear this way is beyond human cognitive capacities -- I'm not making this up, see the last part of Chapter 7), which harmonic theory isn't. To be impressed by the difference between (a) and (d) -- as readers are apparently expected to be -- is to miss most of the point of what music is about.

Westergaardian theory seems to allow you to do almost anything whether it sounds musical or not.

Not as Westergaard sees it (see e.g. the last paragraph of p. 294 of ITT). I actually think he's wrong about this, and that the theory should allow any note to happen at any time; the theory after all is supposed to constrain analytical choices, not compositional ones. A composer can write anything, and the question for the theorist or analyst is how a given listener understands what the composer writes.

While it is very easy to come up with a Westergaardian analysis, it is very difficult for me to understand why someone who had a certain framework in mind would have performed the operations that would have led them to the music in its actual form. The main culprits of this seem to be anticipatory notes and borrowing.

It's hard to address this without a specific example to discuss.

One more thing: have you read "Why I am not a Schenkerian" by Lodewijk Muns? Here is the link: http://lmuns.home.xs4all.nl/WhyIamNotaSchenkerian.pdf

That's not an interesting critique of Schenker, let alone Westergaard (who is not mentioned or cited even once). It basically goes like this:

(1) Schenker did not adhere to rigorous philosophical standards in his rhetoric.

(2) I disagree with (or don't understand) some of Schenker's analyses and those of his disciples.

(3) Therefore, harmonic theory is correct.

I'll also note that while some of the criticisms of Schenker are legitimate (if boring), others are completely wrong (e.g. the idea that the highest structural dominant is necessarily the final one).

Here is my attempt at fleshing out a more specific example: http://i.imgur.com/ruEYlhD.png I can't figure out how to generate those using using Westergaardian theory

Use octave transfer (ITT sec. 7.7).

comment by pianoforte611 · 2013-10-15T01:52:35.530Z · score: 1 (1 votes) · LW · GW

Use octave transfer (ITT sec. 7.7).

Thanks; this operation is notably absent in Schenkerian theory (I think).

The short answer is: definitely not

I suppose I will have to live with that for now.

If by "work fine" you mean that it is in fact possible to identify the "appropriate" Roman numerals to assign in such cases, sure, I'll give you that

By work fine, I mean that the theory is falsifiable and has predictive power. If you are given half of the bars in a Mozart piece, harmonic theory can give a reasonable guess as to the rest. I'm not that confident about Mozart, though; certainly pop music can be predicted using harmonic theory.

As far as I am concerned, there is no aesthetic difference between any of the passages (a) through (d) for the simple reason that all four of them are too short to possess much of any aesthetic characteristics in the first place: they all consist of three bars of four chords each.

...

The essence of musical composition -- at least its most fundamental and "elusive" aspect -- has to do with projecting coherent (i.e. recognizably human-designed) gestures over long time spans ... To be impressed by the difference between (a) and (d) -- as readers are apparently expected to be -- is to miss most of the point of what music is about

Could it be that your subjective experience of music is different from most people's? It certainly sounds very alien to me. While it's true that listening to the long-range structure of a sonata is pleasurable to me, there are certainly 3 to 4 bar excerpts that I happen to enjoy in isolation, without context. But you think that 3 bars is not enough to distinguish non-music from music.

You also claim that the stylistic differences are minor, yet I would wager that virtually 100% of people (with hearing) can point out (d) as being the only tonal example.

the question for the theorist or analyst is how a given listener understands what the composer writes

This is very strange to me; suppose Mozart were to replace all of the F's in the Sonata in C major with F sharps. I think that the piece of music would be worse. Not objectively or fundamentally worse, just worse to a typical listener's ears. A pianist who was used to playing Mozart might wonder if there was a mistake in the manuscript.

comment by komponisto · 2013-10-15T06:33:51.635Z · score: 2 (4 votes) · LW · GW

Use octave transfer (ITT sec. 7.7).

Thanks, this operation being notably absent in Schenkerian theory (I think).

On the contrary, Schenker uses it routinely.

By work fine, I mean that the theory is falsifiable and has predictive power. If you are given half of the bars in a Mozart piece, harmonic theory can give a reasonable guess as to the rest.

If you're talking about the expectations that a piece sets up for the listener, Westergaardian theory has much more to say about that than harmonic theory does. Or, let me rather say: an analyst equipped with Westergaardian theory is in a better position to talk about that, in much greater detail and precision, than one equipped with harmonic theory.

You might try having a closer look at Chapter 8 of ITT, which you said you had only skimmed so far. (A review of Chapter 7 wouldn't hurt either.)

Could it be that your subjective experience of music is different from most people's?

Not in the sense that you mean, no. (Otherwise my answer might be "I should hope so!") I'm not missing anything that "most people" would hear. It's the opposite: I almost certainly hear more than an average human: more context, more possibilities, more vividness. (What kind of musician would I be were it otherwise?) I'm acutely aware of the differences between passages (a) through (d). It's just that I also see (or, rather, hear) a much larger picture -- a picture that, by the way, I would like more people to hear (rather than being discouraged from doing so and having their existing prejudices reinforced).

But you think that 3 bars is not enough to distinguish non-music from music.

That is not what I said. You would be closer if you said I thought 3 bars were not enough to distinguish good music from bad music. But of course it depends on how long the 3 bars are, and what they contain. My only claim here is that these particular excerpts are too short and contain too little to be judged against each other as music. And again, this is not because I don't hear the effect of the constraints that produced (d) as opposed to (a), but rather most probably because: (1) I'm not impressed by (d) because I understand how easy it is to produce; (2) I hear structure in (a) that "most people" probably don't hear (and certainly aren't encouraged to hear by the likes of Tymoczko), not because they can't hear it, but mostly because they haven't heard enough music to be in the habit of noticing those phenomena; and, most importantly, (3) I understand the aesthetic importance of large-scale design, which is absent from all four excerpts (as is implicit in my calling them "excerpts").

Music can have great moments, and I enjoy such moments as much as anyone else; but to listen to music as a sequence of isolated moments is a very impoverished way to listen to music. (And to anyone who knows a lot of music, (d) just isn't that great of a moment.)

You also claim that the stylistic differences are minor, yet I would wager that virtually 100% of people (with hearing) can point out (d) as being the only tonal example.

Far fewer than 100% of people know what the word "tonal" means. (I also suspect that you overestimate the aural skills of the average human: more people than you probably realize would simply hear all four as roughly "a bunch of piano chords, (a) having more high notes".) Regardless, the fact that the differences are eminently perceptible does not imply that they are aesthetically significant. (Imagine if some of the excerpts were loud, and others were soft. A very hearable difference, but a stylistically minor one, given how often loud and soft mix freely in the same piece. Similarly, I feel that I could fairly easily compose a piece that incorporated all four excerpts.)

Suppose Mozart were to replace all of the F's in the Sonata in C major with F sharps. I think that the piece of music would be worse. Not objectively or fundamentally worse, just worse to a typical listener's ears. A pianist who was used to playing Mozart might wonder if there was a mistake in the manuscript. It's also not clear to me that Westergaardian theory would predict that this set of notes is unusual, whereas harmonic theory would.

Replacing the F's with F sharps would severely undermine the C-major tonality, for starters. That's an assertion that can be made just as easily in Westergaardian theory, Schenkerian theory, or harmonic theory. But Westergaardian theory tells you even more: that by undermining the tonality, you necessarily undermine the rhythmic structure.

comment by pianoforte611 · 2013-12-15T19:32:35.199Z · score: 0 (0 votes) · LW · GW

After looking at Chapter 8, it's becoming obvious that learning Westergaardian theory to an extent that would be actually useful to me is going to take a lot of time and analyses (and I don't know if I will get around to that any time soon).

Regarding harmony, this document may be of interest to you - it's written by a Schenkerian who is familiar with Westergaard:

http://www.artsci.wustl.edu/~rsnarren/texts/HarmonyText.pdf

comment by pianoforte611 · 2013-08-02T18:10:51.423Z · score: 0 (0 votes) · LW · GW

One more question: do you also think that Westergaardian theory is superior for understanding jazz? I've encountered jazz pianists on the internet who insist that harmony and voice leading are ABSOLUTELY ESSENTIAL for doing jazz improvisation and that anyone who suggests otherwise is a heretic who deserves to be burnt at the stake. Hyperbole aside, jazz classes do seem to incorporate a lot of harmony and voice leading into their material, and their students do seem to make fine improvisers and composers.

Oh, and for what it's worth, you've convinced me to give Westergaard another shot.

comment by komponisto · 2013-08-04T08:35:22.014Z · score: 1 (1 votes) · LW · GW

Do you also think that Westergaardian theory is superior for understanding jazz?

Yes. My claim is not repertory-specific. (Note that this is my claim I'm talking about, not Westergaard's.)

More generally, I claim that the Westergaardian framework (or some future theory descended from it) is the appropriate one for understanding any music that is to be understood in terms of the traditional Western pitch space (i.e. the one represented by a standardly-tuned piano keyboard), as well as any music whose pitch space can be regarded as an extension, restriction, or modification of the latter.

I've encountered jazz pianists on the internet who insist that harmony and voice leading are ABSOLUTELY ESSENTIAL for doing jazz improvisation and that anyone who suggests otherwise is a heretic who deserves to be burnt at the stake.

How many of them are familiar with Westergaardian (or even Schenkerian) theory?

I've encountered this attitude among art-music performers as well. My sense is that such people are usually confusing the map and the territory (i.e. confusing music theory and music), à la Phil Goetz above. They fail to understand that the concepts of harmonic theory are not identical to the musical phenomena they purport to describe, but instead are merely one candidate theory of those phenomena.

jazz classes do seem to incorporate a lot of harmony and voice leading into their material and their students do seem to make fine improvisers and composers

Some of them do -- probably more or less exactly the subset who have enough tacit knowledge not to need to take their theoretical instruction seriously, and the temperament not to want to.

Oh, and for what it's worth, you've convinced me to give Westergaard another shot.

I'm delighted to hear that, of course, although I should reiterate that I don't expect ITT to be the final word on Westergaardian theory.

comment by pianoforte611 · 2013-08-05T00:02:59.248Z · score: 0 (0 votes) · LW · GW

Some of them do -- probably more or less exactly the subset who have enough tacit knowledge not to need to take their theoretical instruction seriously, and the temperament not to want to.

This was my hypothesis as well (which the jazz musician responded to with hostility). If this is true, though, then why are jazz musicians so passionate about harmony and voice leading? They seem to really believe that it's a useful paradigm for understanding music. Perhaps this is just belief in belief?

comment by komponisto · 2013-08-06T05:24:54.519Z · score: 0 (2 votes) · LW · GW

why are jazz musicians so passionate about harmony and voice leading?

It's difficult to know what other people are thinking without talking to them directly. With this level of information I would make only two points:

1) It doesn't count as "passionate about harmony and voice leading" unless they understand Westergaardian theory well enough to contrast the two. Otherwise it just amounts to "passionate about music theory of some kind".

2) It doesn't have anything to do with jazz. If they're right that harmony is the superior theory for jazz, then it's the superior theory of music in general. Given the kind of theory we're looking for (cf. Chapter 1 of ITT), different musical traditions should not have different theories. (Analogy: if you find that the laws of physics are different on different planets, you have the wrong idea about what "laws of physics" means.)

comment by pianoforte611 · 2013-07-08T16:03:19.792Z · score: 0 (0 votes) · LW · GW

I don't think that we disagree all that much. We both agree that there are some people who are able to learn structural rules implicitly, without explicit instruction. We typically call these people "good at languages" or "good at music". Our main disagreement, therefore, is how large that set of people is. I happen to think that it is very large, given that everyone learns the grammatical rules of their first language this way, and a fair number of polyglots learn their second language this way as well (unless you deny the usefulness of Pimsleur-like approaches). If I understand you correctly, you think that the group of people who are able to properly learn a language/music this way is smaller, because it often results in bad habits and poor inferences about the structure of the language. I would endorse this as well - grammar texts are useful for refining your understanding of the structure of a language.

Well in that case, what do you need a phrasebook for? You can, after all, learn a language simply by immersion, with nothing other than the data itself to guide you. If you're going to have any preliminary or supplementary instruction at all, it surely may as well be in an organized fashion, aimed at increasing the efficiency of the learning process by directing one toward correct theories and away from incorrect ones -- which is exactly what grammar books do and phrasebooks don't do.

Because it is scary to learn to swim without arm floats, even if there is someone else helping you (I think that phrasebooks are analogous to arm floats). Other than that, I would agree with most of this. If you want secondary instruction in a language, then you should probably use a grammar book and not a phrasebook, and I may return to Westergaard after I have taken some composition lessons. Also, I would go one step further and say that not only is it possible to learn a language via immersion, it is necessary, and any other tools you may use to learn a language should help to support this goal.

comment by NancyLebovitz · 2013-07-22T14:00:26.205Z · score: 0 (0 votes) · LW · GW

I would endorse this as well - grammatical texts are useful for refining your understanding of the structure of a language.

Tentatively -- grammatical texts have a complex relationship with language. They can be somewhat useful but still go astray because they're written for a different language; the classic example is grammar based on Latin occasionally being used to force English out of its normal use.

I suspect the same happens when formal grammar is used to claim that casual and/or spoken English is wrong.

comment by [deleted] · 2013-07-22T18:58:42.164Z · score: 1 (1 votes) · LW · GW

Modern descriptive grammars (like this one) aren't anywhere near that bad.

comment by Douglas_Knight · 2013-11-04T21:05:24.257Z · score: 0 (0 votes) · LW · GW

Yes, accurate grammars are better than inaccurate grammars. But I think you are focusing too much on the negative effects and not noticing the positive effects. It is hard to notice people's understanding of grammar except when they make a mistake or correct someone else, both of which are generally negative effects.

Americans are generally not taught English grammar, but often are taught a foreign language, including grammar. Huge numbers of them claim that studying the foreign grammar helped them understand English grammar. Of course, they know the grammar is foreign, so they don't immediately impose it on English. But they start off knowing so little grammar that the overlap with the other language is already quite valuable, as are the abstractions involved.

comment by Fhyve · 2013-07-02T22:30:05.474Z · score: 0 (0 votes) · LW · GW

I have read around and I still can't really tell what Westergaardian theory is. I can see how harmony fails as a framework (it doesn't work very well for a lot of music I have tried to analyze), so I think there is a good chance that Westergaard is (more) right. However, other than the fact that there are these things called lines, and that there exist rules for manipulating them (I have not actually found a list or description of such rules), I am not sure how this is different from counterpoint. I don't want to go and read a textbook to figure this out; I would rather read ~5-10 pages of exposition and big-picture overview.

comment by komponisto · 2013-07-03T17:00:22.940Z · score: 1 (1 votes) · LW · GW

I have read around and I still can't really tell what Westergaardian theory is....I don't want to go and read a textbook to figure this out, I would rather read ~5-10 pages of exposition and big-picture

The best I can recommend is the following article:

Peles, Stephen. "An Introduction to Westergaard's Tonal Theory". In Theory Only 13:1-4 (September 1997), pp. 73-94.

It's a rather obscure journal, but if you have access to a particularly good university library (or interlibrary loan), you may be able to find it. Failing that, if you PM me with your email address, I can send you the text of the article (without figures, unfortunately).

comment by Douglas_Knight · 2013-07-09T00:01:47.936Z · score: 1 (1 votes) · LW · GW

The defunct journal's web site is open access. Text (search for Peles). Table of contents of page by page scans; first page.

comment by komponisto · 2013-07-09T00:28:49.223Z · score: 1 (1 votes) · LW · GW

Wow, thanks!

comment by PhilGoetz · 2013-06-27T23:06:26.148Z · score: -3 (5 votes) · LW · GW

No. Just no. You're trying to enshrine your aesthetic preferences as rational. Besides, chord progressions work. Most people like music that uses chord progressions better than music that doesn't. Compare album sales of Elvis vs. Arnold Schoenberg.

comment by komponisto · 2013-06-28T03:19:55.058Z · score: 3 (5 votes) · LW · GW

You've completely misunderstood my claim, as arundelo pointed out. It's like accusing moridinamael of denying the atomic theory of matter (or worse, being opposed to scientific inquiry) because he/she criticized the Bohr model.

I.e. you're taking for granted the very thing I'm claiming is wrong, and then somehow using my statement to deduce other unrelated beliefs that I don't in fact hold.

(I'm somewhat surprised, because we had some fairly extensive discussions about all this in person a couple months ago. )

comment by PhilGoetz · 2013-06-29T16:24:52.201Z · score: -1 (3 votes) · LW · GW

I'm afraid my brain chose to remember the jogging path, the view of the Potomac, the bridges, and some of the joggers, but nothing about what we said. If you converted me to your view, I have lapsed back into my old ways. I have to learn everything several times.

I don't see how I've misunderstood your claim. I realize you claim harmony doesn't cut reality at the joints. I think that's an aesthetic judgment. You say that Westergaardian theory allows one to treat the music of Berg, Schoenberg, and Webern as belonging to the same school as earlier Western music, as if this were a point in favor of that theory. To me, it is proof that the theory is both wrong and destructive, because my aesthetic sense says that music is crap. We agree that the test of a theory of music is whether it helps one compose good music. I've never tried to write music using either theory, but if using Westergaardian theory allows one to write music like that of Berg, my aesthetic judgements, which are different from yours, say that proves it is a bad theory.

Perhaps if I had been raised in a culture that used Westergaardian composition techniques, I would be acclimatized to it, and would appreciate that music, and have a low opinion of harmonic theory. Even supposing that were true, which I doubt, it would only mean that this is culturally relative. Not a failure of rationality.

It seems to me that to claim that harmonic theory is objectively wrong, you must also claim that the tastes of people like me, who like things written using harmonic theory and dislike things not using harmonic theory, are also objectively wrong.

If you showed that Westergaardian theory gave a simpler explanation of the music that I like, that would help convince me that it was a superior theory. (I don't expect you can do this in a blog post.) But even then, calling it a bad concept would be like calling Newtonian physics a bad concept because it doesn't explain motion at relativistic speeds.

comment by bogus · 2013-06-29T19:15:29.089Z · score: 4 (4 votes) · LW · GW

You say that Westergaardian theory allows one to treat the music of Berg, Schoenberg, and Webern as belonging to the same school as earlier Western music, as if this were a point in favor of that theory. To me, it is proof that the theory is both wrong and destructive, because my aesthetic sense says that music is crap.

This is not really true, for a variety of reasons:

  1. Schenker and Westergaard do not claim that their theory can explain atonal music. The claim that Schenkerian/Westergaardian analysis helps explain tonal music is much stronger than any claim about atonal music, and should be evaluated on its own merits. In particular, we know that Schenker was aware of early atonal music, and didn't like it.
  2. People's "aesthetic sense" seems to be quite dependent on their musical experience. Modern atonal music was the result of a very gradual development of taking existing (e.g. tonal) music and adding more and more "atonality" (whatever that means: some would say dissonance, others would talk about modulation, or complexity). People generally learn to appreciate atonal music by retracing these developments gradually, and listening to more and more challenging pieces. Thus, while your aesthetic sense says that this music sucks, this may not prove much.
  3. There is plenty of music that was clearly "not written using harmonic theory" insofar as harmonic theory (e.g. as detailed by Rameau's Treatise on Harmony) postdates it. And yet, Renaissance and Baroque period music (and even a lot of secular Medieval music) is generally appreciated, just as much as music written after harmony-based theories became established.

If you showed that Westergaardian theory gave a simpler explanation of the music that I like, that would help convince me that it was a superior theory.

I do agree that this would be quite relevant.

comment by komponisto · 2013-06-30T10:29:37.070Z · score: 2 (4 votes) · LW · GW

I have to learn everything several times.

I understand and sympathize. (It wasn't that I thought I converted you to my view, but that I thought I had done a better job of conveying what my complaints about harmonic theory were.)

I don't see how I've misunderstood your claim.

The misunderstanding is most evident when you write a phrase like:

things written using harmonic theory

which begs the whole question. You assume that harmonic theory is an accurate description of "how those things are written", which is the very thing I deny. You seem to be confusing music theory with music, which is like mixing up the map and the territory.

We agree that the test of a theory of music is whether it helps one compose good music

Not quite. At least, the emphasis is on "helps", not on "good". You should think of a work of music (including its aesthetic qualities) as being held fixed when we evaluate theories; the parameter we're measuring is how easily the theory allows us to produce the music in question.

(Furthermore, it certainly can't be the case that harmonic theory's classifications track your likes and dislikes. After all, you apparently don't like Beethoven's Great Fugue, and yet as far as harmonic theory is concerned it's in the same category as his other works, which you do like.)

But even then, calling it a bad concept would be like calling Newtonian physics a bad concept because it doesn't explain motion at relativistic speeds.

I disagree that harmonic theory is anywhere near as good as Newtonian physics. I would instead compare it -- unfavorably -- to pre-Darwinian theories of biodiversity. I specifically believe it to be one of the worst theories of all time (whereas Newtonian physics is one of the best).

comment by PhilGoetz · 2013-07-04T23:35:16.726Z · score: 0 (2 votes) · LW · GW

I don't understand music theory well enough to continue the debate. I don't even understand what you mean by harmonic theory, since I assume you don't mean we should throw away 1-3-5 chords. I have noticed that Baroque music, more often than classical or romantic music, tends to have passages that start on one chord, and the different parts walk their different ways to another chord with no pivot chords, just walk the bass and damn the torpedoes in between. Is that related to what you're talking about?

comment by komponisto · 2013-07-05T09:51:26.535Z · score: 2 (2 votes) · LW · GW

I don't even understand what you mean by harmonic theory, since I assume you don't mean we should throw away 1-3-5 chords.

By harmonic theory I mean the idea proposed by Jean-Philippe Rameau in 1722 of analyzing music as a succession of simultaneities ("chords"), to each of which is assigned a "root", and with the order of chords being governed by relationships among the roots.

I have noticed that Baroque music tends more often than classical or romantic music to have passages that starts on one chord, and the different parts walk their different ways to another chord with no pivot chords, just walk the bass and damn the torpedoes in between. is that related to what you're talking about?

The above doesn't make any literal sense, but if what you mean by this is that Baroque music violates Rameau's rules of root progression more often than later music (which, believe it or not, is actually what I think you mean), then this is almost certainly not the case: generally speaking, music gets more complex as you go forward in history, and the more complex it is, the more likely it is to crash Rameau's theory.

(Yes, I know that popular histories tell you that Classical music was simpler than Baroque. This is wrong.)

The reality is that the torpedoes were always damned. Rameau and his theoretical successors mistook certain superficial patterns (which automatically arise in particularly simple musical contexts) for underlying laws. The actual underlying laws were discovered by Schenker and Westergaard.

comment by PhilGoetz · 2013-07-06T05:25:18.912Z · score: -1 (1 votes) · LW · GW

(Yes, I know that popular histories tell you that Classical music was simpler than Baroque. This is wrong.)

Would you deny that Baroque music deviates from common chords more often than classical music does?

comment by komponisto · 2013-07-07T06:47:23.878Z · score: 2 (2 votes) · LW · GW

Yes. Look at how many Baroque vs. Classical entries there are on this list of examples of augmented sixth chords, for instance.

comment by PhilGoetz · 2013-08-13T21:47:21.749Z · score: 0 (0 votes) · LW · GW

That appears to be an effect of the data compiler's bias. This list of I-5-7 chords from the same source has the same ratio.

comment by komponisto · 2013-08-14T03:12:04.213Z · score: 0 (0 votes) · LW · GW

From Wikipedia:

This chord has its origins in the Renaissance, further developed in the Baroque, and became a distinctive part of the musical style of the Classical and Romantic periods.

This implies that its use increased over time, and in particular was greater in the Classical and Romantic periods than in the Baroque.

comment by PhilGoetz · 2013-07-17T18:51:13.635Z · score: -2 (4 votes) · LW · GW

That's an argument that classical music uses more augmented sixth chords, which are not especially uncommon. Contrast that with something like the chord held at the start of Bach's Fugue in D minor -- it's got a C#, a D, and an E in it; what the hell is it?

That's what I was talking about when I said "I have noticed that Baroque music tends more often than classical or romantic music to have passages that start on one chord, and the different parts walk their different ways to another chord with no pivot chords, just walk the bass and damn the torpedoes in between," which makes perfectly simple literal sense. Classical music moves from one resolved chord to another through a series of pivot chords. Baroque music sometimes just walks the bass, and maybe the top note also, by one half-step per "chord" until it arrives at the destination chord, passing through intermediate states that aren't any kind of recognized chord, certainly nothing so common as an augmented 6th.

Now, if when we say Baroque you're thinking Vivaldi and I'm thinking Bach's organ music, that could account for the difference of opinion.

comment by gjm · 2013-07-17T20:01:36.066Z · score: 1 (1 votes) · LW · GW

the chord held at the start of Bach's Fugue in D minor

Bach wrote umpteen different fugues in D minor, none of which is so obviously better or more important than the others as to deserve the title "Bach's Fugue in D minor". And it's kinda unusual for a fugue to begin with any sort of held chord, though maybe whichever one you're thinking of does.

Would you care to be more specific?

comment by arundelo · 2013-07-17T22:54:22.738Z · score: 2 (2 votes) · LW · GW

I bet PhilGoetz is talking about the toccata in the famous Toccata and Fugue in D minor, BWV 565, which has a C# diminished 7 over a D pedal tone (about 30 seconds into this recording).

comment by gjm · 2013-07-18T06:30:03.616Z · score: 0 (2 votes) · LW · GW

Yeah, I thought he might be talking about that too, so I looked at the score. The chord immediately before the start of the fugue doesn't fit Phil's description.

comment by PhilGoetz · 2013-07-21T19:41:44.250Z · score: -2 (2 votes) · LW · GW

Yes, I'm talking about BWV 565. I was too lazy to look up the number, and I should have said "Toccata and Fugue in Dm". He only wrote two things called "Toccata and Fugue in Dm", and this is the more famous one.

And, YES, the chord does fit my description. I don't have to look it up; I play it, and I know you begin by striking a very low D, then the C# almost an octave above it, then the E just above that, and more notes beyond as well.

AND I just went downstairs and checked the score, just in case you were actually right. I think you may be talking about the next chord. What I'm calling the "chord" is written as an ascending series of notes, but most players hold them all down until the last one. It's the weird one, not the "pivot" & not the resolution.

comment by gjm · 2013-07-22T05:54:08.925Z · score: 0 (0 votes) · LW · GW

At least one of us is very confused. I don't think it's me.

At the end of the toccata there is a chord containing the following notes, from bottom to top: D (in the bass, on the pedals), another D (lowest note on the manuals), F, A, D. This is a perfectly ordinary chord of D minor, of course. After that there is a semiquaver rest and then the fugue subject begins (or, perhaps better, the fugue subject begins with a semiquaver rest). At that point, as is normal in a fugue, there is only one voice sounding.

Oh, wait, you weren't talking about the fugue at all? You meant the chord a few bars into the toccata? Well, OK then, that chord contains the notes you said it does. (Though, I repeat, it isn't "the chord held at the start of Bach's Fugue in D minor"; it's in the toccata, not the fugue; in a discussion of music analysis such distinctions are really worth making.)

But there's nothing weird about that chord! It's a standard diminished-7th chord (everything at intervals of 3 semitones from some starting point; in this instance C#, E, G, Bb). If I may quote from that bastion of the avant garde, Wikipedia:

The most common form of the diminished seventh chord is that rooted on the leading tone [...] These notes occur naturally in the harmonic minor scale.

Diminished seventh, check. Rooted on the leading tone, check. Minor key, check. It's perfectly commonplace. (There are plenty of much weirder things in Bach.)
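The 3-semitone stacking is easy to check with pitch-class arithmetic. A minimal sketch (the numbering C=0 ... B=11 and the note spellings are conventional assumptions, not anything from the comment itself):

```python
# Build a diminished seventh chord by stacking minor thirds (3 semitones),
# working in pitch classes mod 12 (C=0, C#=1, D=2, ..., B=11).

def diminished_seventh(root_pc):
    """Return the four pitch classes of the dim7 chord built on root_pc."""
    return [(root_pc + 3 * i) % 12 for i in range(4)]

# Rooted on C# (pitch class 1), the leading tone of D minor:
chord = diminished_seventh(1)
print(chord)  # [1, 4, 7, 10] -> one conventional spelling: C#, E, G, Bb
```

The mod-12 wraparound also shows why there are only three distinct diminished-seventh collections: stacking a fourth minor third lands you back on the root.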

comment by PhilGoetz · 2013-08-13T21:39:12.242Z · score: -1 (1 votes) · LW · GW

The chord is in measure 2 of the piece, and contains these notes: D, C#, E, G, Bb, C#, E.

A diminished 7th in Dm should have D, F, Ab, Bb, shouldn't it? This is a diminished 7th on C#, so what's the D doing there?

Anyway, my impression is that diminished 7ths are much more common in organ music than in piano music. I think of them as "that organ-music chord". And if you look up diminished 7th in the same music database that komponisto linked to above, you'll see it has a much higher fraction of baroque entries than any of the other items on that list.

Perhaps part of the issue is when I hear "baroque" I think Bach, and when I hear "classical" I think Mozart. I think Bach does more weird chords than Mozart does. Or consider Beethoven's Moonlight Sonata--it's chock full of different chords juxtaposed in unusual ways, but they're almost all common chords.

comment by arundelo · 2013-08-13T21:55:39.155Z · score: 1 (1 votes) · LW · GW

Wikipedia: pedal point

comment by komponisto · 2013-09-24T04:02:56.613Z · score: 0 (0 votes) · LW · GW

Anyway, my impression is that diminished 7ths are much more common in organ music than in piano music.

I don't share this impression at all. How much piano music do you know? There's probably a lot more of it than there is of organ music. This is certainly the case in the nineteenth century, which was probably the heyday of the diminished seventh (while being the low point of the organ repertory).

And if you look up diminished 7th in the same music database that komponisto linked to above, you'll see it has a much higher fraction of baroque entries than any of the other items on that list.

Eh? Among a combined total of 70-80 examples on this page and this one, I count about 7-8 Baroque examples, so about 10%. I'm not going to count through all the other 24 pages for comparison, but I don't think this supports the thesis that the diminished seventh is particularly characteristic of the Baroque as opposed to the Classical or Romantic; indeed, it is the Romantic which dominates the examples, as I predicted above. (And note by the way that not one of the Baroque examples that I could find was specifically an organ piece!)

I think Bach does more weird chords than Mozart does.

What data is this based on? And for what definition of "weird"? Did you see the Mozart example I cited in my other comment? Do you have any reason to think that example is particularly uncharacteristic (in a way that your Bach example isn't)?

Or consider Beethoven's Moonlight Sonata--it's chock full of different chords juxtaposed in unusual ways, but they're almost all common chords.

This is a piece with plenty of diminished sevenths! (And what do you mean by "juxtaposed in unusual ways"?)

Phil, in all seriousness, you really ought to look at the Westergaard book. You would like it, and it would really help clarify your thinking about music. (I believe I have already directed you to an electronic copy via e-mail.)

comment by komponisto · 2013-07-17T21:04:43.806Z · score: 0 (2 votes) · LW · GW

That's an argument that classical music uses more augmented sixths chords, which are not especially uncommon

"Uncommon" doesn't mean anything without reference to a time period; the point is that they are more uncommon in the Baroque period than in the Classical. The Classical period uses a richer "vocabulary of chords" than the Baroque, if one insists on thinking in such terms (as a Westergaardian, I don't think in terms of a "vocabulary of chords", of course).

Contrast that with something like the chord held at the start of Bach's Fugue in D minor -- it's got a C#, a D, and an E it in; what the hell is it?

First of all "Bach's Fugue in D minor" is highly ambiguous; Wikipedia lists 10 such works by J.S. Bach alone (BWV 538, 539, 554, 565, 851, 875, 899, 903, 905, and 948).

But you can find a chord containing those same three pitch-classes (along with G# and B) in the first movement of Mozart's Symphony No. 29 (p.4, second system, 4th measure, 1st and 3rd quarter).

Classical music moves from one resolved chord to another thru a series of pivot chords.

"Pivot chord" is a technical term in harmonic theory (which, again, I don't subscribe to) meaning a chord shared by two different keys which is used in modulating between them. You don't appear to be using this term correctly here (we're not talking about key changes), and I'm not sure exactly what you do mean. "Resolved chord" is not a standard term at all, but maybe you mean "consonant chord". (?) However, both Baroque and Classical music "move from one [consonant] chord to another" (well, except when moving to dissonant chords, which also occurs in both periods...) So this sentence reads like confused gobbledygook to me. A musical example of the phenomenon which you think occurs in Baroque music but not Classical would help (but we know it isn't "a chord with C#, D, and E", as the Mozart example I gave shows).

Now, if when we say Baroque you're thinking Vivaldi and I'm thinking Bach's organ music, that could account for the difference of opinion.

You just have to compare apples to apples. If the most complex works of J.S. Bach are what you mean by "Baroque", then the most complex works of Haydn, Mozart, and (at least early) Beethoven have to be what you mean by "Classical".

I think what actually accounts for the "difference of opinion" is that you underestimate the complexity of Classical works.

Baroque music sometimes... pass[es] through intermediate states that aren't any kind of recognized chord

Indeed! Thus harmonic theory is inadequate even to the description (mere description, mind you) of Baroque music, let alone Classical or Romantic.

comment by arundelo · 2013-06-28T00:06:10.004Z · score: 2 (2 votes) · LW · GW

You couldn't be expected to tell it from the grandparent, but komponisto is saying not that tonal music is bad but that the standard set of harmony concepts does not cut reality at the joints, even when dealing with Elvis or Bach. See also the link given in komponisto's other comment.

(I haven't looked into this enough to have a strong opinion on it. I will say that the standard set of harmony concepts is an extremely important part of my mental furniture.)

comment by komponisto · 2013-06-28T03:25:53.151Z · score: 1 (3 votes) · LW · GW

You couldn't be expected to tell it from the grandparent, but komponisto is saying not that tonal music is bad

The title of the post is "Bad Concepts Repository", not "Bad Musical Repertory". Shouldn't that make it a given that theories of things, rather than things themselves, are what we're critiquing here?

comment by arundelo · 2013-07-02T16:49:24.789Z · score: 0 (0 votes) · LW · GW

Hopefully you can take my comment as an application of the principle of charity to PhilGoetz rather than a critique of your comment that he was responding to.

("Harmony is a bad concept?! But all my favorite music was written using that concept!")

comment by bogus · 2013-06-28T00:25:46.404Z · score: 1 (1 votes) · LW · GW

I agree that whether "the standard set of harmony concepts" is actually superseded by Schenkerian/Westergaardian analysis is not really obvious.

Westergaard has a highly non-trivial theory of what counts as "consonance" or "dissonance" in a melodic line, which is roughly equivalent to "harmony" in standard music theory. The other way that traditional "harmony" is recovered is that this kind of analysis allows for a note in the 'background'/'deep' structure to be tonicized over, effectively becoming a "temporary tonic" and admitting the construction of tonic triads ('arpeggiation').

It would not be hard to make a strong case that "harmony" is a derived phenomenon; just take a bunch of chord progressions (or pieces that are commonly analyzed in terms of chord progressions) and re-analyze them in terms of the Schenkerian/Westergaardian concepts (deep structures, arpeggiation, tonicization). Then show how this leads either to a simplified analysis, or to one that's a better description of the music.

comment by komponisto · 2013-06-28T10:33:50.710Z · score: 2 (4 votes) · LW · GW

I agree that whether "the standard set of harmony concepts" is actually superseded by Schenkerian/Westergaardian analysis is not really obvious.

If you don't find it obvious after studying Westergaard and comparing it to (say) Piston, then my best guess is that you're relying on tacit musical knowledge that you don't realize others lack, or which you mistakenly think is being communicated in Piston (etc.) but which actually isn't.

Westergaard has a highly non-trivial theory of what counts as "consonance" or "dissonance" in a melodic line, which is roughly equivalent to "harmony" in standard music theory.

Not so -- there is nothing in Westergaard about root progressions (Rameau's "fundamental bass"), which is the defining concept of "harmony" in the traditional (theoretical) sense. Consonance and dissonance are part of traditional contrapuntal theory, which goes back to long before Rameau. (Yes, Westergaard does draw on the tradition of contrapuntal theory, as did Schenker.)

The other way that traditional "harmony" is recovered is that this kind of analysis allows for a note in the 'background'/'deep' structure to be tonicized over, effectively becoming a "temporary tonic" and admitting the construction of tonic triads ('arpeggiation').

Again, if you think this is what is meant by "harmony", you are missing the point. (Yes, Rameau kinda sorta had this idea as part of his theory -- but not really. It's really a Schenkerian idea.)

In harmonic theory, the "hierarchy" has only two levels of structure: a note is either part of the chord, or not part of the chord ("nonharmonic tones"). In Westergaardian theory (as in Schenkerian theory), there is no limit to the number of levels. Take the Mozart analysis that folds out from the back of the Westergaard book. The data in that analysis cannot be expressed in terms of harmonic theory. The latter is simply not rich enough. All you can do in harmonic theory is write Roman numerals under the score, which (at best) might be considered roughly equivalent to showing one level of reduction in the Westergaardian analysis (though not really, because the Roman numerals only contain pitch-class information, not pitch information like the Westergaardian version; plus harmonic theory's "chords" frequently and typically mix up different levels of Westergaardian structure).

comment by ikrase · 2013-06-28T10:30:29.911Z · score: 4 (6 votes) · LW · GW

Entitlement and anti-entitlement, especially in the context of: 1. the whole Nice Guy thing and 2. the discourse on the millennial generation. It becomes a red herring, and in the former case leads to ambiguity between 'a specific person must do something' and 'this should be easier than it is'. Plus it seems to turn semi-utilitarians into deontologists. In the case of millennials, it tends to involve big inferential-distance problems.

comment by Adele_L · 2013-06-27T12:47:36.465Z · score: 4 (8 votes) · LW · GW

This one is well known, but having an identity that is too large can make you more susceptible to being mind killed.

comment by shminux · 2013-06-28T05:41:31.023Z · score: 2 (2 votes) · LW · GW

How much of an identity is just right?

comment by wedrifid · 2013-06-28T06:02:07.189Z · score: 4 (10 votes) · LW · GW

How much of an identity is just right?

"I'm a gorgeous blonde child who roams the forest alone stealing food from bears." is just right.

comment by tondwalkar · 2013-06-28T13:24:05.004Z · score: 3 (3 votes) · LW · GW

Paul Graham suggests keeping your identity as small as sustainable. [1] That is, it's beneficial to keep your identity to just "rationalist" or just "scientist", since they contradict having a large identity. He puts it better than I do:

There may be some things it's a net win to include in your identity. For example, being a scientist. But arguably that is more of a placeholder than an actual label—like putting NMI on a form that asks for your middle initial—because it doesn't commit you to believing anything in particular. A scientist isn't committed to believing in natural selection in the same way a biblical literalist is committed to rejecting it. All he's committed to is following the evidence wherever it leads.

Considering yourself a scientist is equivalent to putting a sign in a cupboard saying "this cupboard must be kept empty." Yes, strictly speaking, you're putting something in the cupboard, but not in the ordinary sense.

[1] http://www.paulgraham.com/identity.html

comment by Armok_GoB · 2013-06-29T22:01:58.211Z · score: 0 (0 votes) · LW · GW

This goes well for beliefs included in your identity, but I've always been uncertain whether it's supposed to also extend to things like episodic memories (separated from believing the information contained in them), relationships in neutral groups such as a family or a fandom, precommitments, or mannerisms.

comment by tondwalkar · 2013-07-02T22:35:02.717Z · score: 0 (0 votes) · LW · GW

things like episodic memories (separated from believing the information contained in them)

I'm not sure what you're saying here; you think of your memories as part of your identity?

realtionship[sic] in neutral groups such as a family or a fandom, precommitments, or mannerisms?

These memberships are all heuristics for expected interactions with people. Nothing actionable is lost if you bayes-induct for each situation separately, save the effort you're using to compute and the cognitive biases and emotional reactions you get from claiming "membership". Alternately you could still use the membership heuristic, but with a mental footnote that you're only using it because it's convenient, and there are senses in which the membership's representation of you may be misleading.

comment by Armok_GoB · 2013-07-02T23:16:30.261Z · score: 1 (1 votes) · LW · GW

@episodic memories: I don't personally have any like that, but I hear many people do consider the subjective experience of pivotal events in their life as part of who they are.

@relationships: I'm talking the literal membership here, the thing that exists as a function of the entanglement between states in different brains.

To clarify, I'm not talking about "your identity" here as in the information about what you consider your identity, but rather the referent of that identity. To many people, their physical bodies are part of their identity in this sense. Even distant objects, or large organizations like nations, can be in extreme cases. Just because it's a trend here to have only information that resides in your own brain as part of your identity doesn't mean that's necessary, or even especially common in its pure form in most places.

comment by tondwalkar · 2013-07-03T12:44:50.278Z · score: 1 (1 votes) · LW · GW

To clarify, I'm not talking about "your identity" here as in the information about what you consider your identity, but rather the referent of that identity.

Ah, it appears we're talking about different things. I'm referring to ideological identity ("I'm a rationalist" , "I'm a libertarian", "I'm pro-choice", "I'm an activist" ), which I think is distinct from "I'm my mind" identity. In particular, you can be primed psychologically and emotionally by the former more than the latter.

comment by Armok_GoB · 2013-07-03T15:12:32.139Z · score: 1 (1 votes) · LW · GW

It seems like we both, and possibly the original Keeping Your Identity Small article, are committing the typical mind fallacy.

comment by hylleddin · 2013-06-28T15:04:44.399Z · score: 1 (1 votes) · LW · GW

My guess would be only as large as necessary to capture your terminal values, in so far as humans have terminal values.

comment by Will_Newsome · 2013-07-09T10:40:39.730Z · score: 0 (0 votes) · LW · GW

"How much" I'm not sure, but a strategy that I find promising and that is rarely talked about is identity min-maxing.

comment by buybuydandavis · 2013-06-28T02:10:03.917Z · score: 3 (7 votes) · LW · GW

Death is good.

comment by buybuydandavis · 2013-06-28T08:57:24.618Z · score: 6 (6 votes) · LW · GW

When I "die", I won't cease to exist, I'll be united with my dead loved ones where we'll live forever and never be separated again.

comment by [deleted] · 2013-06-29T02:41:17.476Z · score: 2 (4 votes) · LW · GW

To elaborate on the harm of the "live forever" belief, it makes people apathetic to the suffering of human life.

We all get one life, nothing more. Some - many - people spend their entire lives in great pain from starvation, diseases, oppression, etc. An observer's belief in a perfect, eternal afterlife mitigates the horror of this waste of human life. "They may suffer now, but after death, they'll have an eternity of happiness and contentment."

comment by taelor · 2013-07-23T05:38:10.371Z · score: 1 (1 votes) · LW · GW

This argument presupposes that the "live forever" belief is false. While it is false, offering that as an explanation for why the "death is good" belief is bad is unhelpful, since nearly all the people who hold the latter belief also hold the former.

comment by Oscar_Cunningham · 2013-06-27T16:37:34.961Z · score: 2 (6 votes) · LW · GW

The concept that forgiveness is a good thing. This is a bad concept because the word "forgive" suggests holding a grudge and then forgiving someone. It's simpler and better to just never hold grudges in the first place.

comment by Kaj_Sotala · 2013-06-28T00:00:06.313Z · score: 9 (9 votes) · LW · GW

Retracted my previous comment, because it was agreeing with your claim that it's better to never hold grudges in the first place, which I quickly realized I also disagreed with.

A grudge is an act of retaliation against someone who has harmed you. They hurt you, so you now retract your cooperation - or even engage in active harm against them - until they have made sufficient amends. If they hurt you by accident or it was something minor, then yes, it's probably better not to hold a grudge. But if they did something sufficiently bad, then it is better to hold a grudge, to show them that you will not accept such behavior and that you will only engage in further cooperation once they have made some sign of being trustworthy. Otherwise you are encouraging them to do it again, since you've shown that they can do it with impunity - and by this you are also harming others, by not punishing untrustworthy people and thereby making it more profitable to be untrustworthy. You do not forgive DefectBot, nor do you avoid developing a grudge against it in the first place: you hold a grudge against it and no longer cooperate.

In this context, "forgiveness is a good thing" can be seen as a heuristic that encourages us to err on the side of punishing leniently, because too eager punishment will end up alienating people who would've otherwise been allies, because we tend to overestimate the chance of somebody having done a bad thing on purpose, because holding grudges is psychologically costly, or for some other reason.
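The DefectBot point can be made concrete with a toy iterated prisoner's dilemma. This is only an illustrative sketch (the payoff numbers and strategy names are mine, not anything from the thread): an unconditional forgiver is exploited every round, while a grudge-holder pays the cost of betrayal only once.

```python
# Toy iterated prisoner's dilemma: "C" = cooperate, "D" = defect.
# Payoffs are the standard PD values; everything else is hypothetical.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def defect_bot(opponent_history):
    """Always defects, regardless of history."""
    return "D"

def always_forgive(opponent_history):
    """Cooperates no matter how often it has been burned."""
    return "C"

def grudge_holder(opponent_history):
    """Cooperates until the opponent defects once, then retaliates forever."""
    return "D" if "D" in opponent_history else "C"

def play(strategy_a, strategy_b, rounds=10):
    """Return total payoffs for both strategies over repeated play."""
    hist_a, hist_b = [], []  # each strategy sees the opponent's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Against DefectBot over ten rounds, the forgiver scores 0 (exploited every round) while the grudge-holder scores 9: one betrayal, then mutual defection. Holding the grudge is what caps the damage.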

comment by Prismattic · 2013-06-28T00:50:34.176Z · score: 6 (6 votes) · LW · GW

Even worse: "Forgive and forget" as advice. It combines the problem with forgiveness with explicitly advising people not to update on the bad behavior of others.

comment by [deleted] · 2013-06-28T23:32:26.156Z · score: 2 (2 votes) · LW · GW

Why blame forgiveness for the existence of grudges? The causal chain didn't go: moral philosophers invent forgiveness -> invention of resentment follows because everyone wants to give forgiveness a try.

comment by PrometheanFaun · 2013-07-07T09:35:21.974Z · score: 1 (1 votes) · LW · GW

It's also cowardly or anti-social. Forgiving is the easy thing to do: forgive and you no longer have to enact any reprisal, and you can potentially keep an ally. You also allow a malefactor to get away with their transgression, which will enable them to continue to pull the same shit on other people.

comment by Kaj_Sotala · 2013-06-27T23:41:46.729Z · score: 0 (0 votes) · LW · GW

sixes_and_sevens's comment applies to this one as well, I think.

I would agree with this if I didn't have a human brain that got stuck to past events.

comment by Qiaochu_Yuan · 2013-07-25T02:07:15.514Z · score: 1 (1 votes) · LW · GW

Can someone put a "repository" tag on this post? Thanks!

comment by buybuydandavis · 2013-06-28T01:44:00.368Z · score: 1 (3 votes) · LW · GW

"It isn't fair."

Ask someone to what "it" refers, and they'll generally be shocked by the notion that their words should have referents. When the shock wears off, the answer will be that "the situation" is unfair, which is a category error. The state of the universe is unfair? Is gravity unfair too? How about the fact that it rained yesterday?

Fairness is a quality of a moral being or rules enforced by moral beings. But there is rarely any particular unfair being or rule enforced by beings behind "it isn't fair".

"It isn't fair" empirically means "I don't like it and I approve of and support taking something out of someone's hide to quell my discomfort."

comment by RomeoStevens · 2013-06-28T07:27:31.892Z · score: 5 (5 votes) · LW · GW

I have no problem with referring to states of the universe as unfair.

comment by buybuydandavis · 2013-06-28T08:54:34.489Z · score: 0 (4 votes) · LW · GW

I'm sure the universe feels terribly guilty about its transgression when you do.

comment by pragmatist · 2013-06-28T09:10:00.507Z · score: 2 (2 votes) · LW · GW

Inducing guilt in the target of the judgment is not the sole (or even primary) purpose of moral judgment, nor is it a necessary feature. That the target must be capable of experiencing guilt is not a necessary feature either. Do you disagree with any of this?

I am, in general, much more inclined to attribute unfairness to states of affairs than to people. Usually it's a state of affairs that people could potentially do something to alter/mitigate, though, so I wouldn't call a law of nature unfair.

comment by buybuydandavis · 2013-06-28T09:31:49.799Z · score: 0 (0 votes) · LW · GW

In case it wasn't clear, my comment on the universe feeling guilty was my way of pointing out the futility of considering the universe unfair.

Do you disagree with any of this?

No.

comment by pragmatist · 2013-06-28T09:37:36.775Z · score: 3 (3 votes) · LW · GW

In case it wasn't clear, my comment on the universe feeling guilty was my way of pointing out the futility of considering the universe unfair.

But human beings can change states of the universe. Is your point that they will not be motivated to do so if the judgment of unfairness is impersonal?

comment by wedrifid · 2013-07-03T16:35:59.155Z · score: 1 (1 votes) · LW · GW

"It isn't fair" empirically means "I don't like it and I approve of and support taking something out of someone's hide to quell my discomfort."

It quite often means "I don't like it and will attempt to change it by the application of social pressure and other means as deemed necessary".

comment by elharo · 2013-06-27T11:28:05.870Z · score: 1 (9 votes) · LW · GW

Within my lifetime, the world will end.

This too is a common belief of fundamentalist Christians (though by no means limited to them), and has many of the same effects as the belief that "Within my lifetime, a magic genie will appear that grants all our wishes and solves all our problems." For instance, no one will save for retirement if they think the world will end before they retire. And it's not important to worry about the state of the environment in 50 years, if the world ends in 25.

However, this belief has an important distinction from the belief in magic genies. To the extent that the belief that the world is going to end is based in actual facts and not superstition, and to the extent that it leads people to take effective action to prevent the world from ending, this belief can be helpful. For instance, one reason the world didn't already end in nuclear war is that many people worked very hard to avoid that fate. On a smaller, not-quite-world-ending scale, Y2K was a non-event because thousands of programmers spent tens, perhaps hundreds of thousands of work-years fixing the problem before it could instantiate.

However, if a belief that the world will end leads one to fatalism, and to giving up on planning for or considering the future, then it is equally as harmful as a belief in magic wish-granting genies.

comment by Eugine_Nier · 2013-06-29T06:34:12.355Z · score: -1 (3 votes) · LW · GW

On a smaller, not-quite-world-ending scale, Y2K was a non-event because thousands of programmers spent tens, perhaps hundreds of thousands of work-years fixing the problem before it could instantiate.

Except, it was a non-event even in those places where this didn't happen.

comment by [deleted] · 2013-06-27T05:41:41.345Z · score: 1 (7 votes) · LW · GW

There's one hallmark of truly bad concepts: they actively work against correct induction.

Sir Karl Popper (among others) made some strong arguments that induction is a bad concept.

comment by timtyler · 2013-06-28T22:55:23.645Z · score: -3 (5 votes) · LW · GW

Sir Karl Popper (among others) made some strong arguments that induction is a bad concept.

Those arguments are now known to be nonsense.

comment by Armok_GoB · 2013-06-29T21:36:38.567Z · score: -1 (1 votes) · LW · GW

In the lies-to-children department (simplification for the purpose of heuristics), there is a largish reference class of these concepts that are basically built into the mind, whose proper replacements there is no way to remotely explain in words, due to large amounts of math and technicality, and which are therefore unknown to almost everyone, but that nonetheless can be very dangerous to take at face value. Some examples (with an approximate name for the replacement concept in parentheses): "real" (your utility function), "truth" (provability), "free will" (optimizing agent), "is-a" (configuration spaces).

comment by Leonhart · 2013-06-27T15:01:50.830Z · score: -1 (5 votes) · LW · GW

Similarity and contagion.

comment by AspiringRationalist · 2013-07-05T19:25:02.362Z · score: 0 (0 votes) · LW · GW

Care to elaborate?

comment by Leonhart · 2013-07-06T17:43:54.045Z · score: 0 (0 votes) · LW · GW

This old post is a decent elaboration, which I should have linked in the first place.

comment by RichardKennaway · 2013-06-27T09:54:31.298Z · score: -1 (9 votes) · LW · GW

"Come to terms with." Just update already. See also "seeking closure", "working through", "processing", all of which pieces of psychobabble are ways of clinging to not updating already.

comment by sixes_and_sevens · 2013-06-27T10:03:48.970Z · score: 15 (15 votes) · LW · GW

I would agree with this if I didn't have a human brain that got stuck to past events.

comment by RichardKennaway · 2013-06-27T10:15:00.160Z · score: -6 (14 votes) · LW · GW

Sounds like another version of those excuses.

comment by sixes_and_sevens · 2013-06-27T10:28:22.489Z · score: 13 (13 votes) · LW · GW

It's entirely possible that you're talking about a completely different usage of these terms, but they certainly seem to have legitimate usage. Telling, say, a PTSD-sufferer to "just update already" would not be productive.

There are plenty of events in my life that are not presently salient. No-one I ever meet from now on will know what I looked like in secondary school, or remember any of the embarrassing things I said when I was nineteen. That stupid debate I had eight years ago with some dude on a tech forum does not matter. The arguments I had with my ex are about things that only exist in the past. I know this, but on some level my brain thinks they're still happening, and I can't just "update" them away.

I've had a pretty easy and inconsequential life, so I can only imagine how hard it would be to have genuinely traumatic events rumbling around up there.

Is this the context you were talking about?

comment by RichardKennaway · 2013-06-27T11:03:15.902Z · score: -1 (5 votes) · LW · GW

It's entirely possible that you're talking about a completely different usage of these terms, but they certainly seem to have legitimate usage. Telling, say, a PTSD-sufferer to "just update already" would not be productive.

I agree that when someone is in the thick of something like that, there's no point in telling them to just get over it. In that context it would be the sort of useless and obnoxious advice that consists of telling someone with a problem that they just need to solve the problem.

However, it is possible to get over things effectively, and not wallow in jam-tomorrow "working through". The right time to acquire that skill, of neither ignoring the feelings nor obsessing over them, but instead attending to what needs to be done, is when there is no overwhelming trauma in progress.

comment by sixes_and_sevens · 2013-06-27T11:25:40.513Z · score: 12 (12 votes) · LW · GW

FYI, your original post came across as exactly that sort of useless and obnoxious advice.

comment by RichardKennaway · 2013-06-28T09:17:19.977Z · score: -2 (2 votes) · LW · GW

You have a point. But it's not intended for advice to those in the middle of such things.

comment by sixes_and_sevens · 2013-06-28T09:36:20.318Z · score: 2 (2 votes) · LW · GW

Can you indulge me in a data point? Do you believe ego depletion is describing a real phenomenon?

comment by RichardKennaway · 2013-06-28T12:53:35.817Z · score: 0 (0 votes) · LW · GW

Ego depletion. Well, the idea that self-control takes effort, and that the ability can be cultivated, goes back at least to the ancient Greeks, and I would be unsurprised to find it in all cultures everywhere throughout human history. It seems quite likely true.

I'm not sure what value experimental psychologists have added to this piece of universal folk psychology, because of general concerns like Ioannidis' work and the extraordinarily parochial range of experimental subjects that a lot of experimental psychology uses. Ioannidis studied medical research, but psychological research has in principle all of the same hazards plus a few of its own: the WEIRD issue, and the fact that most of the entities studied do not have the same objective existence as those of medicine. You can exhibit someone's liver, but not their "self-control". I would be interested to see someone do for psychology what Ioannidis has done for medicine.

So, ego depletion, fine. Everyday phenomenon known to everyone. What of it? BTW, it's curious that although the Wikipedia article begins by describing it as "a model that relates self-control to a muscle, which can become both strengthened and fatigued", all of the experiments described there relate only to weakening. The strengthening part gets left out.

Since it's costing me 5 karma per post to post down here, I'll try and respond in advance to what I think your answer would be to my "What of it?", not because I'm concerned about the karma, but because I agree with the norm of not prolonging downvoted discussions.

If someone is overwhelmed by a crisis, well, then, they are overwhelmed by a crisis. There is no point in telling them then that they could have dealt with it better had they been better prepared to handle such things. The time to become prepared is before the crisis. Soldiers are not trained by throwing them straight into battle with equipment they know nothing about.

And yet, that side of the ego depletion metaphor is little studied.

comment by sixes_and_sevens · 2013-06-28T15:26:08.638Z · score: 1 (1 votes) · LW · GW

Annoyingly, this exchange feels like it's starting to get somewhere interesting now.

My response to "what of it" would be kind of along those lines, but rather than there being some sort of binary state of crisis/not-crisis, there are ongoing areas or subjects in one's life that are cognitively expensive to think about. These subjects might popularly be called "unresolved issues", but that carries a lot of unnecessary connotations.

An especially banal personal example is a task I have to occasionally do at work, which involves dealing with a particularly counterintuitive data structure that we haven't automated yet. It's horrible to think about, and as a result I find it very draining to work with (cf. ego depletion), and this influences decisions I make regarding it. I avoid dealing with it even though it's quite important, and my distress at dealing with it, coupled with its generally perverse structure, means I make a lot of mistakes when doing so.

If I had a good way of thinking about this data structure, it wouldn't be so exhausting or unpleasant to work with. But the process of coming up with a good way of thinking about it is itself exhausting and unpleasant. This would be me "coming to terms with", "working through", "seeking closure" or "processing" the general problem of my evil data structure, but it's laborious and nasty, so I can't just do it. There's a cost involved.

If you don't acknowledge that cost (such as not believing in something like ego depletion, as some people don't), it would be easy to say "just update already", but updating isn't free. It's work, and that work can't necessarily be carried out in one go.

comment by Kaj_Sotala · 2013-06-27T23:40:13.434Z · score: 5 (5 votes) · LW · GW

that skill, of neither ignoring the feelings nor obsessing over them, but instead attending to what needs to be done

Isn't that exactly what "working through" a bad feeling means?

comment by Armok_GoB · 2013-06-29T22:06:30.146Z · score: 4 (4 votes) · LW · GW

I always assumed it stood for "I have updated on that specific belief, but I also have to go through all the myriad connected ones and re-evaluate them, then see how it all propagates back, and iterate this until the web is relaxed, and this will take a while because I have limited clock speed."
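That "relax the web" process can be sketched as a toy relaxation loop. Everything here - the update rule, the weight, the numbers - is a made-up illustration, not a serious model of belief revision: one belief is pinned at its freshly updated value, and the beliefs linked to it are repeatedly nudged toward their neighbours until nothing moves any more.

```python
# Toy "belief web" relaxation: each unpinned belief's credence is nudged
# toward the average of its neighbours' credences, iterated until the
# largest per-pass change falls below a tolerance (the web has "relaxed").

def relax(credences, links, weight=0.5, tol=1e-6, max_iters=1000):
    """Propagate an update through linked beliefs until convergence.

    credences: dict of belief name -> credence in [0, 1]
    links: dict of belief name -> list of neighbour names; beliefs with
           no entry in links are pinned and never change.
    """
    beliefs = dict(credences)
    for _ in range(max_iters):
        biggest_change = 0.0
        for belief, neighbours in links.items():
            neighbour_avg = sum(beliefs[n] for n in neighbours) / len(neighbours)
            updated = (1 - weight) * beliefs[belief] + weight * neighbour_avg
            biggest_change = max(biggest_change, abs(updated - beliefs[belief]))
            beliefs[belief] = updated
        if biggest_change < tol:
            break  # no belief moved appreciably this pass: the web is relaxed
    return beliefs

# Example (hypothetical numbers): "A" was just updated downward and is pinned;
# "B" and "C" depend on it and get dragged along over many small passes.
relaxed = relax({"A": 0.1, "B": 0.9, "C": 0.9},
                {"B": ["A", "C"], "C": ["B"]})
```

In the example the connected beliefs drift down toward the pinned value over many iterations rather than jumping there in one step, which matches the "limited clock speed" point: propagation is real work, done a little at a time.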

comment by TimS · 2013-06-28T02:30:41.246Z · score: 2 (2 votes) · LW · GW

In parallel to what sixes is saying, be careful about conflating "closure" and "working through."

Closure: comes from an external source - can be unhealthy to pursue because you cannot force another person / entity to give whatever "it" is to you.

Working through it: comes from an internal process - can be healthy if done successfully.

In practice, effectively coming to terms with some loss involves shifting from seeking closure to working through the loss.