The 5-Second Level

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-07T04:51:45.004Z · LW · GW · Legacy · 328 comments

To develop methods of teaching rationality skills, you need to learn to focus on mental events that occur in 5 seconds or less.  Most of what you want to teach is directly on this level; the rest consists of chaining together skills on this level.

As our first example, let's take the vital rationalist skill, "Be specific."

Even with people who've had moderate amounts of exposure to Less Wrong, a fair amount of my helping them think effectively often consists of my saying, "Can you give me a specific example of that?" or "Can you be more concrete?"

A couple of formative childhood readings that taught me to be specific:

"What is meant by the word red?"
"It's a color."
"What's a color?"
"Why, it's a quality things have."
"What's a quality?"
"Say, what are you trying to do, anyway?"

You have pushed him into the clouds.  If, on the other hand, we habitually go down the abstraction ladder to lower levels of abstraction when we are asked the meaning of a word, we are less likely to get lost in verbal mazes; we will tend to "have our feet on the ground" and know what we are talking about.  This habit displays itself in an answer such as this:

"What is meant by the word red?"
"Well, the next time you see some cars stopped at an intersection, look at the traffic light facing them.  Also, you might go to the fire department and see how their trucks are painted."

-- S. I. Hayakawa, Language in Thought and Action

and:

"Beware, demon!" he intoned hollowly.  "I am not without defenses."
"Oh yeah?  Name three."

-- Robert Asprin, Another Fine Myth

And now, no sooner does someone tell me that they want to "facilitate communications between managers and employees" than I say, "Can you give me a concrete example of how you would do that?"  Hayakawa taught me to distinguish the concrete and the abstract; and from that small passage in Asprin, I picked up the dreadful personal habit of calling people's bluffs, often using the specific phrase, "Name three."

But the real subject of today's lesson is how to see skills like this on the 5-second level.  And now that we have a specific example in hand, we can proceed to try to zoom in on the level of cognitive events that happen in 5 seconds or less.

Over-abstraction happens because it's easy to be abstract.  It's easier to say "red is a color" than to pause your thoughts for long enough to come up with the example of a stop sign.  Abstraction is a path of least resistance, a form of mental laziness.

So the first thing that needs to happen on a timescale of 5 seconds is perceptual recognition of highly abstract statements unaccompanied by concrete examples, paired with an automatic aversion, an ick reaction - this is the trigger which invokes the skill.

Then, you have actionable stored procedures that associate to the trigger.  And "come up with a concrete example" is not a 5-second-level skill, not an actionable procedure; it doesn't transform the problem into a task.  An actionable mental procedure that could be learned, stored, and associated with the trigger would be "Search for a memory that instantiates the abstract statement", or "Try to come up with hypothetical examples, and then discard the lousy examples your imagination keeps suggesting, until you finally have a good example that really shows what you were originally trying to say", or "Ask why you were making the abstract statement in the first place, and recall the original mental causes of your making that statement to see if they suggest something more concrete."

Or to be more specific on the last mental procedure:  Why were you trying to describe redness to someone?  Did they just run a red traffic light?

(And then what kind of exercise can you run someone through, which will get them to distinguish red traffic lights from green traffic lights?  What could teach someone to distinguish red from green?)

When you ask how to teach a rationality skill, don't ask "How can I teach people to be more specific?"  Ask, "What sort of exercise will lead people through the part of the skill where they perceptually recognize a statement as overly abstract?"  Ask, "What exercise teaches people to think about why they made the abstract statement in the first place?"  Ask, "What exercise could cause people to form, store, and associate with a trigger, a procedure for going through hypothetical examples until a good one, or at least an adequate one, is invented?"

Coming up with good ways to teach mental skills requires thinking on the 5-second level, because until you've reached that level of introspective concreteness, that fineness of granularity, you can't recognize the elements you're trying to teach; you can't recognize the patterns of thought you're trying to build inside a mind.

To come up with a 5-second description of a rationality skill, I would suggest zooming in on a concrete case of a real or hypothetical person who (a) fails in a typical fashion and (b) successfully applies the skill.  Break down their internal experience into the smallest granules you can manage:  perceptual classifications, contexts that evoke emotions, fleeting choices made too quick for verbal consideration.  And then generalize what they're doing while staying on the 5-second level.

Start with the concrete example of the person who starts to say "Red is a color" and cuts themselves off and says "Red is what that stop sign and that fire engine have in common."  What did they do on the 5-second level?

  1. Perceptually recognize a statement they made as overly abstract.
  2. Feel the need for an accompanying concrete example.
  3. Be sufficiently averse to the lack of such an example to avoid the path of least resistance where they just let themselves be lazy and abstract.
  4. Associate to and activate a stored, actionable, procedural skill, e.g.:
    4a.  Try to remember a memory which matches that abstract thing you just said.
    4b.  Try to invent a specific hypothetical scenario which matches that abstract thing you just said.
    4c.  Ask why you said the abstract thing in the first place and see if that suggests anything.

If you are thinking on this level of granularity, then you're much more likely to come up with a good method for teaching the skill "be specific", because you'll know that whatever exercise you come up with, it ought to cause people's minds to go through events 1-4, and provide examples or feedback to train the perceptual recognition in step 1.

Next example of thinking on the 5-second scale:  I previously asked some people (especially from the New York LW community) the question "What makes rationalists fun to be around?", i.e., why is it that once you try out being in a rationalist community you can't bear the thought of going back?  One of the primary qualities cited was "Being non-judgmental."  Two different people came up with that exact phrase, but it struck me as being not precisely the right description - rationalists go around judging and estimating and weighing things all the time.  (Noticing small discordances in an important description, and reacting by trying to find an exact description, is another one of those 5-second skills.)  So I pondered, trying to come up with a more specific image of exactly what it was we weren't doing, i.e. Being Specific, and after further visualization it occurred to me that a better description might be something like this:  If you are a fellow member of my rationalist community and you come up with a proposal that I disagree with - like "We should all practice lying, so that we feel less pressure to believe things that sound good to endorse out loud" - then I may argue with the proposal on consequentialist grounds.  I may judge.  But I won't start saying in immense indignation what a terrible person you must be for suggesting it.

Now I could try to verbally define exactly what it is we don't do, but this would fail to approach the 5-second level, and probably also fail to get at the real quality that's important to rationalist communities.  That would merely be another attempt to legislate what people are or aren't allowed to say, and that would make things less fun.  There'd be a new accusation to worry about if you said the wrong thing - "Hey!  Good rationalists don't do that!" followed by a debate that wouldn't be experienced as pleasant for anyone involved.

In this case I think it's actually easier to define the thing-we-avoid on the 5-second level.  Person A says something that Person B disagrees with, and now in Person B's mind there's an option to go in the direction of a certain poisonous pleasure, an opportunity to experience an emotional burst of righteous indignation and a feeling of superiority, a chance to castigate the other person.  On the 5-second level, Person B rejects this temptation, and instead invokes the procedure of (a) pausing to reflect and then (b) talking about the consequences of A's proposed policy in a tone that might perhaps be worried (for the way of rationality is not to refuse all emotion) but nonetheless is not filled with righteous outrage and indignation which demands that all others share that indignation or be likewise castigated.

(Which in practice, makes a really huge difference in how much rationalists can relax when they are around fellow rationalists.  It's the difference between having to carefully tiptoe through a minefield and being free to run and dance, knowing that even if you make a mistake, it won't socially kill you.  You're even allowed to say "Oops" and change your mind, if you want to backtrack (but that's a whole 'nother topic of 5-second skills)...)

The point of 5-second-level analysis is that to teach the procedural habit, you don't go into the evolutionary psychology of politics or the game theory of punishing non-punishers (by which the indignant demand that others agree with their indignation), which is unfortunately how I tended to write back when I was writing the original Less Wrong sequences.  Rather you try to come up with exercises which, if people go through them, causes them to experience the 5-second events - to feel the temptation to indignation, and to make the choice otherwise, and to associate alternative procedural patterns such as pausing, reflecting, and asking "What is the evidence?" or "What are the consequences?"

What would be an exercise which develops that habit?  I don't know, although it's worth noting that a lot of traditional rationalists not associated with LW also have this skill, and that it seems fairly learnable by osmosis from watching other people in the community not be indignant.  One method that seems worth testing would be to expose people to assertions that seem like obvious temptations to indignation, and get them to talk about evidence or consequences instead.  Say, you propose that eating one-month-old human babies ought to be legal, because one-month-old human babies aren't as intelligent as pigs, and we eat pigs.  Or you could start talking about feminism, in which case you can say pretty much anything and it's bound to offend someone.  (Did that last sentence offend you?  Pause and reflect!)  The point being, not to persuade anyone of anything, but to get them to introspectively recognize the moment of that choice between indignation and not-indignation, and walk them through an alternative response, so they store and associate that procedural skill.  The exercise might fail if the context of a school-exercise meant that the indignation never got started - if the temptation/choice were never experienced.  But we could try that teaching method, at any rate.

(There's this 5-second skill where you respond to mental uncertainty about whether or not something will work, by imagining testing it; and if it looks like you can just go test something, then the thought occurs to you to just go test it.  To teach this skill, we might try showing people a list of hypotheses and asking them to quickly say on a scale of 1-10 how easy they look to test, because we're trying to teach people a procedural habit of perceptually considering the testableness of ideas.  You wouldn't give people lots of time to think, because then that teaches a procedure of going through complex arguments about testability, which you wouldn't use routinely in real life and would end up associating primarily to a school-context where a defensible verbal argument is expected.)

I should mention, at this point, that learning to see the 5-second level draws heavily on the introspective skill of visualizing mental events in specific detail, and maintaining that introspective image in your mind's eye for long enough to reflect on it and analyze it.  This may take practice, so if you find that you can't do it right away, instinctively react by feeling that you need more practice to get to the lovely reward, instead of instinctively giving up.

Has everyone learned from these examples a perceptual recognition of what the "5-second level" looks like?  Of course you have!  You've even installed a mental habit that when you or somebody else comes up with a supposedly 5-second-level description, you automatically inspect each part of the description to see if it contains any block units like "Be specific" which are actually high-level chunks.

Now, as your exercise for learning the skill of "Resolving cognitive events to the 5-second level", take a rationalist skill you think is important (or pick a random LW post from How To Actually Change Your Mind); come up with a concrete example of that skill being used successfully; decompose that usage to a 5-second-level description of perceptual classifications and emotion-evoking contexts and associative triggers to actionable procedures etcetera; check your description to make sure that each part of it can be visualized as a concrete mental process and that there are no non-actionable abstract chunks; come up with a teaching exercise which seems like it ought to cause those sub-5-second events to occur in people's minds; and then post your analysis and proposed exercise in the comments.  Hope to hear from you soon!

328 comments

Comments sorted by top scores.

comment by jimrandomh · 2011-05-07T15:09:29.860Z · LW(p) · GW(p)

IAWYC, and introspective access to what my mind was doing on this timescale was one of the bigger benefits I got out of meditation. (Note: Probably not one of the types of meditation you've read about). However, I don't think you've correctly identified what went wrong in the example with red. Consider this analogous conversation:

What's a Slider? It's a Widget.
What's a Widget? It's a Drawable.
What's a Drawable? It's an Object.

In this example, as with the red/color example, the first question and answer was useful and relevant (albeit incomplete), while the next two were useless. The lesson you seem to have drawn from this is that looking down (subclassward) is good, and looking up (superclassward) is bad. The lesson I draw from this is that relevance falls off rapidly with distance, and that each successive explanation should be of a different type. It is better to look a short distance in each direction rather than to look far in any one direction. Compare:

X is a color. This object is X. (One step up, one step down)
X is a color. A color is a quality that things have. (Two steps up)
This object is X. That object is also X. (Two steps down)

I would expect the first of these three explanations to succeed, and the other two to fail miserably.

Replies from: Eliezer_Yudkowsky, TrE
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-07T19:29:41.832Z · LW(p) · GW(p)

"One step up and one step down" sounds like a valuable heuristic; it's what I actually did in the post, in fact. Upvoted.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-09T00:21:59.494Z · LW(p) · GW(p)

A few months later, I've been teaching Anna and Luke and Will Ryan and others this rule as the "concrete-abstract pattern". Give a specific example with enough detail that the listener can visualize it as an image rather than as a proposition, and then describe it on the level of abstraction that explains what made it relevant. I.e., start with an application of Bayes's Theorem, then show the abstract equation that circumscribes what is or isn't an example of Bayes's Theorem.

comment by TrE · 2011-05-08T19:08:36.407Z · LW(p) · GW(p)

Also, it is very important to give counter-examples: 'This crow over there belongs to the bird category. But the plane in the sky and the butterfly over there do not.' Or, better fitting the 'red' example: 'That stop sign and that traffic light are red. But this other traffic sign (can't think of an example) isn't.'

And as well, this could be done with categories. 'Red is a color. Red is not a sound.'

I guess this one has something to do with confirmation bias, as cwillu suggested.

comment by jimmy · 2011-05-07T06:53:55.992Z · LW(p) · GW(p)

I'm a big fan of breaking things down to the finest grain thoughts possible, but it still surprises me how quickly this gets complicated when trying to actually write it down.

http://lesswrong.com/lw/2l6/taking_ideas_seriously/

Example: Bob is overweight and an acquaintance mentions some "shangri-la" diet that helps people lose weight through some "flavor/calorie association". Instead of dismissing it immediately, he looks into it, adopts the diet, and comfortably achieves his desired weight.

1) Notice the feeling of surprise when encountering a claim that runs counter to your expectations.

2) Check in far mode the importance of the claim if it were true by running through a short list of concrete implications (eg "I can use this diet and as a result, I can enjoy exercise more, I can feel better about my body, etc")

  • If any thoughts along the lines of "but it's not true!" come up, remind yourself that you need to be able to clearly understand the implications of the statement and its importance separately from deciding its truth value, and that this is good practice even if this example is obviously false.

3) Imagine reaping the benefits in near mode to help build appropriate motivation.

  • Ask yourself "What would the world look like if this were true?", and if no glaring contradictions come up, mentally explore this world.

  • If necessary, imagine a reversal test to help the situation feel normal, even if flagged with uncertainty.

  • Cultivate and keep this mindset and feeling of "this is really important!" for things that are really important.

4) Use this calculation (i.e. NOT the 'gut feel' calculation) and Econ-101-based heuristics to determine how much effort to put into verifying/analyzing implications of the statement.

Done thoroughly and sequentially, this will take much more than 5 seconds, but a crude nonverbal run through can be done quickly, and the process can be repeated in increasing detail once the ball is rolling.

To train people, start by having them run through an example of taking things seriously in an obvious case (e.g. "What? I was supposed to drive south!?"). Do this in as much imaginative detail as possible to help bring to mind the associated mindset and feelings as strongly as possible.

Encourage them to 'try out' this new habit by having them imagine increasingly non-obvious examples with the mindset of "this is how I really think and it's crazy to not have this habit" until it can be done quickly, 'automatically', and in a way that feels natural. Keep going until they have a sense of what it would feel like for some important and confidently held beliefs to be wrong.

This "imagine it as if it were real" part is really really important, and I have personally had success with that method in general.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-07T19:37:52.999Z · LW(p) · GW(p)

Upvoted for being the only one to try the exercise.

comment by cousin_it · 2011-05-07T11:55:49.784Z · LW(p) · GW(p)

"Be specific" is a nice flinch, I've always had it and it helps a lot. "Don't moralize" is a flinch I learned from experience and it also helps. Here's some other nice flinches I have:

  1. "Don't wait." Waiting for something always takes more time than I thought it would, so whenever I notice myself waiting, I switch to doing something useful in the meanwhile and push the waiting task into the background. Installing the habit took a little bit of effort, but by now it's automatic.

  2. "Don't hesitate." With some effort I got a working version of this flinch for tasks like programming, drawing or physical exercise. If something looks like it would make a good code fix or a good sketch, do it immediately. Would be nice to have this behavior for all other tasks too, but the change would take a lot of effort and I'm hesitating about it (ahem).

  3. "Don't take on debt." Anything that looks even vaguely similar to debt, I instinctively run away from it. Had this flinch since as far as I can remember. In fact I don't remember ever owing >100$ to anyone. So far it's served me well.

Replies from: MartinB, gjm, None, Swimmer963, gjm, Sniffnoy
comment by MartinB · 2011-05-13T12:47:20.684Z · LW(p) · GW(p)

"Don't wait."

A nice hack from GTD is to keep a 'wait-for' list. I use that for orders, reactions to inquiries, everything where someone has to get back to me. Put it on a list and forget about it.

Extra points if you do not check the arrival time of your internet purchases at all during the first week of waiting.

comment by gjm · 2011-05-08T09:49:11.454Z · LW(p) · GW(p)

I have the same debt-flinch, and the same feeling about how well it works, but with one qualification: I was persuaded to treat mortgage debt differently (though I've always been very conservative about how much I'd take on) and that seems to have served me very well too.

This isn't meant as advice about mortgages: housing markets vary both spatially and temporally. More as a general point: it's probably difficult to make very sophisticated flinch-triggers, which means that even good flinching habits are likely to have exceptions from time to time, and sometimes they might be big ones.

Replies from: taryneast, Swimmer963
comment by taryneast · 2011-05-09T20:46:05.550Z · LW(p) · GW(p)

Agreed. Kiyosaki's "Rich Dad, Poor Dad" has lots of good advice about the difference between "good debt" and "bad debt".

AFAI recall it boiled down to "only borrow money for assets, not liabilities"

i.e. good debt is borrowing for things that will continue to make you more money (including your appreciating house or your business) and bad debt is for things like holidays or house redecorating projects - things that simply take cash out of your hand.

This has worked pretty well for me so far too.

Replies from: gjm, rhollerith_dot_com
comment by gjm · 2011-05-10T18:27:54.255Z · LW(p) · GW(p)

Kiyosaki's "Rich Dad, Poor Dad" has also received some extremely harsh criticism, some of it at least from people who seem to have a clue what they're talking about. I haven't looked at it myself and am not a financial expert, but would advise anyone considering reading it and/or taking Kiyosaki's advice to exercise caution.

Replies from: lukeprog, JohnH, BillyOblivion
comment by lukeprog · 2011-05-15T03:27:06.225Z · LW(p) · GW(p)

The classic takedown of Kiyosaki is from John T. Reed.

Replies from: taryneast
comment by taryneast · 2011-05-26T08:51:21.375Z · LW(p) · GW(p)

Thanks for the link. ok, that made me reconsider entirely. Lots of good points here.

I guess I liked the motivational tone of the book - but yep, it looks like his facts are not so hot (and in a lot of cases entirely fictional).

comment by JohnH · 2011-05-15T06:15:44.911Z · LW(p) · GW(p)

The same can and should be said about any book that purports to advise people on how to become rich.

I wish people were required to include in the appendix of such a book their net worth as independently assessed by an external audit and tax returns and other filings presented to show that they are wealthy and have actually gained that wealth in the manner described by the book.

Even then caution would still be needed: if markets are efficient (or even slightly efficient), then something that provided market-beating returns 3-5 years ago (or however long it has been since they gained their wealth) should be expected to provide only market rates of return currently.

comment by BillyOblivion · 2011-05-13T10:54:24.255Z · LW(p) · GW(p)

Is there any financial advisor or financial book that you can recommend without reservation and that people can take without exercising caution?

Replies from: gjm, Blueberry, MartinB
comment by gjm · 2011-05-13T14:19:49.847Z · LW(p) · GW(p)

I doubt it. But there are some for which no more caution is needed than could be taken largely for granted with an intelligent bunch of people like the readership of Less Wrong, and some that aren't very approachable by anyone who isn't quite expert already. There's no need to say "exercise caution" about those. It appears that Kiyosaki's book is very approachable and may be very unreliable. That's an especially dangerous combination, if true.

comment by Blueberry · 2012-03-30T11:23:27.055Z · LW(p) · GW(p)

The classic is Andrew Tobias, "The Only Investment Guide You'll Ever Need." You can trust it because he's not selling anything and teaches common-sense, conservative advice: no risky speculation or anything.

Replies from: BillyOblivion
comment by BillyOblivion · 2012-04-16T12:01:48.419Z · LW(p) · GW(p)

Sorry, I was attempting to be clever, cynical and hip. This apparently impeded effective communication.

Let me rephrase it so that it is more difficult to misunderstand:

All financial advice should be received with reservation and taken with caution.

Better?

comment by MartinB · 2011-05-13T12:44:35.056Z · LW(p) · GW(p)

Ramit Sethi: iwillteachyoutoberich.com

Kiyosaki is nice for some mindset and basic approach, but horrible on the concrete advice. Do not go buying houses because of his books.

My small favorite is George Clason's The Richest Man in Babylon. Then there are plenty of more modern books. Check out Ramit's recommended readings.

comment by RHollerith (rhollerith_dot_com) · 2011-05-26T15:45:03.474Z · LW(p) · GW(p)

AFAI recall it boiled down to "only borrow money for assets, not liabilities"

Only borrow money for assets, not expenses.

Replies from: taryneast
comment by taryneast · 2011-05-26T21:39:03.874Z · LW(p) · GW(p)

The book defines a liability as "something that takes money from your pocket" - so the two can be considered roughly equivalent.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2011-05-26T22:51:22.070Z · LW(p) · GW(p)

OK, but that's not the standard definition of a liability used by accountants and such.

Replies from: taryneast
comment by taryneast · 2011-05-28T08:53:26.932Z · LW(p) · GW(p)

Yes, that is discussed in the book. He makes a big deal about the difference. In fact he discusses the seeming inconsistency of accountants putting large items into the "assets" column that do nothing but depreciate in value...

I'd argue that the main point of Rich dad, poor dad can be summarised as:

1) assets put money into your pocket, liabilities take money out of it
2) you gain wealth by adding to your assets instead of your liabilities

It's roughly equivalent to the dietary advice of "you lose weight by making sure there are more calories being spent than eaten"

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2011-05-28T09:58:04.848Z · LW(p) · GW(p)

Well, it makes me sad to see a very standardized and crisp term like "liability" used in such a confusing and nonstandard way. Especially when there is another equally crisp and very standardized term ("expense") that could be used instead. And I do not want to talk about it anymore.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-05-08T13:16:07.466Z · LW(p) · GW(p)

This is what my mother said to me: all types of debt are bad, but mortgage debt is unavoidable. My chosen career field is nursing, which is a pretty reliable income source, so I'm not worried about taking on a mortgage when the time comes.

comment by [deleted] · 2011-05-07T13:51:10.951Z · LW(p) · GW(p)

"Don't wait." Waiting for something always takes more time than I thought it would, so whenever I notice myself waiting, I switch to doing something useful in the meanwhile and push the waiting task into the background. Installing the habit took a little bit of effort, but by now it's automatic.

Could you elaborate a bit on that?

I noticed that I often wait for small tasks that end up taking a lot of time. For example, I need to compile a library or finish a download and estimate that it won't take long, maybe a few minutes at most. But I find it really hard to just do something else instead of waiting. I can't just go read a book or do some Anki reps. Whenever I try that, I either have the urge to constantly check up on the blocking task or I get caught up in the replacement (or on reddit). So I end up staring at a screen, doing nothing, just so I don't lose my mental context. At worst, I can sit for half an hour and get really frustrated with myself.

Replies from: Antisuji, cousin_it
comment by Antisuji · 2011-05-07T19:00:41.379Z · LW(p) · GW(p)

I find that I worry a lot less about checking up on background tasks (compiles, laundry, baking pies, brewing tea, etc.) if I know I'll get a clear notification when the process is complete. If it's something that takes a fixed amount of time I'll usually just set a timer on my phone — this is a new habit that works well for tea in particular. Incidentally, owning an iPhone has done a surprising amount for my effectiveness just by reducing trivial inconveniences for this sort of thing.

For compiles, do something like

$ make; growlnotify -m "compile done!"

or run a script that sends you an SMS or something. This is something that I'm not in the habit of doing, but I just wrote myself a note to figure something out when I get into work on Monday.[1] (For most of my builds it's already taken care of, since it brings up a window when it's done. This would be for things like building the server, which runs in a terminal, and for svn updates, which are often glacial.)
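(If you want the notification to key off whether the build actually succeeded, a minimal variant - just a sketch, assuming growlnotify is on your PATH - would be:

$ make && growlnotify -m "compile succeeded" || growlnotify -m "compile FAILED"

Chaining with && / || ties the message to make's exit status, whereas ; fires "compile done!" even when the build breaks.)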

[1] This is another thing that helps me a lot. Write things down in a place that you look at regularly. Could be a calendar app, could be a text file in Dropbox, whatever.

Replies from: MBlume, matt, rhollerith_dot_com, Antisuji
comment by MBlume · 2011-05-08T05:34:14.640Z · LW(p) · GW(p)

This would be for things like building the server, which runs in a terminal, and for svn updates, which are often glacial.

I assume someone's already told you you'll be better off with Git?

Replies from: taryneast
comment by taryneast · 2011-05-09T20:41:34.813Z · LW(p) · GW(p)

Not necessarily true. git and svn are suited to slightly different applications. For one thing - sometimes you want One Source of Truth... which svn gives you, and git does not.

Replies from: sketerpot
comment by sketerpot · 2011-05-10T01:40:08.407Z · LW(p) · GW(p)

If you have a central git repository to which all contributors have write privileges, you can treat it a lot like a svn-style centralized VCS that just happens to be git. Is there a significant advantage of svn over this kind of git setup?
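For what it's worth, the day-to-day loop in that kind of setup can look almost exactly like svn's (a sketch, assuming a shared repository that everyone clones and can push to; the commit message is made up):

$ git pull                          # like svn update
$ git commit -am "fix the widget"   # commit locally
$ git push                          # publish, completing the svn-style commit

The main visible difference is that committing and publishing are two separate steps.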

comment by matt · 2011-05-08T04:31:57.494Z · LW(p) · GW(p)

If it's something that takes a fixed amount of time I'll usually just set a timer on my phone

Consider…

<ctrl><space> invokes Quicksilver.app
. enters text mode
<message>
<tab> to action pane
Large Type
<ctrl><enter> to make a compound object
Run after Delay… or Run at Time…

… and Quicksilver.app does this very nicely without your fingers ever leaving the keyboard (if you're making tea… your fingers probably already left the keyboard).

Consider also

<ctrl><space> invokes Quicksilver.app
. enters text mode
<message>
<tab> to action pane
Speak Text (Say)

(These suggestions live in mac land. If you live in Windows land, consider moving. If you live in Linux land you'll probably figure out how to do this yourself pretty quickly :)

comment by RHollerith (rhollerith_dot_com) · 2011-05-10T01:21:28.618Z · LW(p) · GW(p)

I couldn't get growlnotify to work reliably on my Snow Leopard. And some of Growl's preference panes are absurd. And Growl insists on growling at you every time it auto-updates itself, with no way to turn that off. My friend Darius dislikes it, too.

Replies from: Antisuji
comment by Antisuji · 2011-05-10T05:22:30.301Z · LW(p) · GW(p)

Is there a better alternative?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2011-05-10T05:50:51.133Z · LW(p) · GW(p)

I'll tell you what I do even though it is far from ideal.

I have the program play a sound file to notify me. Sound is not the best way for a program to notify me because I have a habit of taking off my headphones, but leaving them plugged in.

After you install the free app "Adium" you can find some nice chimes in /Applications/Adium.app/Contents/Resources/Sounds/

I use the following command line to play a chime:

open -a VLC /Applications/Adium.app/Contents/Resources/Sounds//TokyoTrainStation.AdiumSoundset/Contact_On.m4a

Of course this presupposes you have VLC installed. And the first time I play a chime, there's a delay of a few seconds while VLC loads the chime.

ADDED. I also use a visual signal as follows. In the "Hearing" tab on the Universal Access system pref pane, I check the box "Flash the screen when an alert sound occurs". I use the Emacs function DING to generate the aforementioned alert sound. Sorry, I do not know how to generate an alert sound from the shell.
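(One shell possibility that might do it, assuming AppleScript is available:

osascript -e 'beep'

which asks the system to play the standard alert sound, and so should also set off the screen flash described above.)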

Replies from: sullyj3
comment by sullyj3 · 2014-10-28T00:53:41.095Z · LW(p) · GW(p)

why not use mplayer for the sound?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2014-10-30T14:52:22.507Z · LW(p) · GW(p)

These days I use /usr/bin/afplay. The advantages are (1) lightweight program that loads quickly, (2) installed by default on all Macs.
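For example, with one of the sounds that ships with the OS (any of the usual .aiff or .m4a files should work, including the Adium chimes mentioned upthread):

afplay /System/Library/Sounds/Glass.aiff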

comment by Antisuji · 2011-05-09T21:01:24.445Z · LW(p) · GW(p)

Just to follow up: there is indeed a Growl for Windows, and it comes bundled with a growlnotify.exe that I can run from a cygwin bash shell. Rejoice!

comment by cousin_it · 2011-05-07T15:47:03.879Z · LW(p) · GW(p)

I usually continue coding during long recompiles (over a minute or so); I just don't save my edits until the compile finishes.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2011-05-07T20:00:47.700Z · LW(p) · GW(p)

You could also make a version control commit before compiling and then use "git stash" or equivalent to save your while-compiling edits.
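A sketch of that workflow, assuming a git checkout (the commit message is made up):

$ git commit -am "checkpoint before long compile"
$ make                # the long build sees exactly the committed state
# ...keep editing in your editor while it runs...
$ git stash           # if the build fails, shelve the newer edits
$ git stash pop       # and restore them once the build problem is fixed

The commit pins down what the compiler actually saw, and the stash keeps the while-compiling edits from getting tangled up with the fix.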

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-05-08T13:03:18.170Z · LW(p) · GW(p)

"Don't take on debt." Anything that looks even vaguely similar to debt, I instinctively run away from it. Had this flinch since as far as I can remember. In fact I don't remember ever owing >100$ to anyone. So far it's served me well.

Same. And it has also served me well, although maybe not solely because of that preference–I was in a better financial situation to start with than many university students, and I'm a workaholic with a part-time job that I enjoy, and I also enjoy living frugally and don't consider it to diminish my quality of life the way some people do.

comment by gjm · 2011-05-08T09:44:51.626Z · LW(p) · GW(p)

The trouble with not waiting is that it increases your number of mental context switches, and they can be really expensive. Whether "don't wait" is good advice probably depends on details like the distribution of waiting times, what sort of tasks one's working on, and one's mental context-switch speed.

comment by Sniffnoy · 2011-05-08T08:09:23.494Z · LW(p) · GW(p)

"Don't wait." Waiting for something always takes more time than I thought it would, so whenever I notice myself waiting, I switch to doing something useful in the meanwhile and push the waiting task into the background. Installing the habit took a little bit of effort, but by now it's automatic.

For purposes of avoiding ambiguity this might be better phrased as "don't block" or "don't busy-wait". Although combined with #2 it might indeed become "don't wait" in the more general sense to some extent!

comment by Oscar_Cunningham · 2011-05-08T09:43:40.140Z · LW(p) · GW(p)

My attempt at the exercise for the skill "Hold Off On Proposing Solutions"

Example: At a LessWrong meet up someone talks about some problem they have and asks for advice, someone points out that everyone should explore the problem before proposing solutions. Successful use of the skill involves:

1) Noticing that a solution is being asked for. This is the most important sub-skill. It involves listening to everything you ever hear and sorting it into appropriate categories.

2) Come up with a witty and brilliant solution. This happens automatically.

3) Suppress the urge to explain the solution to everyone, even though it is so brilliant, and will make you look so cool, and (gasp) maybe someone else has thought of it, and you better say it before they do, otherwise it will look like it was their idea!

4) Warn other people to hold off on proposing solutions.

Exercise: Best done in a group, where the pressure to show intelligence is greatest. Read the group a list of questions. Use many different types of questions, some about matters of fact, some about opinion, and some asking for a solution. The first two types are to be answered immediately. The last type are to be met with absolute silence. Anyone found talking after a solution has been requested loses points.

Encourage people to write down any solutions they do come up with. After the exercise is finished, destroy all the written solutions, and forbid discussion of them.

Replies from: alexflint
comment by Alex Flint (alexflint) · 2011-05-08T14:13:47.724Z · LW(p) · GW(p)

Wouldn't it be better to realise right after step (1) that one needs to avoid coming up with solutions, and deliberately focus one's mind on understanding the problem? Avoiding verbalization of solutions is good, but they can still pollute your own thinking, even if not others'.

comment by Cayenne · 2011-05-07T20:01:08.853Z · LW(p) · GW(p)

I think that the big skill here is not being offended. If someone can say something and control your emotions, literally make you feel something you had no intention to feel beforehand, then perhaps it's time to start figuring out why you're allowing people to do this to you.

At a basic level anything someone can say to you is either true or false. If it's true then it's something you should probably consider and accept. If it's false then it's false and you can safely ignore/gently correct/mock the person saying it to you. In any case there really isn't any reason to be offended and especially there is no reason to allow the other person to provoke you to anger or acting without thought.

This isn't the same as never being angry! This is simply about keeping control for yourself over when and why you get angry or offended, rather than allowing the world to determine that for you.

Edit - please disregard this post

Replies from: wilkox, wedrifid
comment by wilkox · 2011-05-08T12:37:59.643Z · LW(p) · GW(p)

In any case there really isn't any reason to be offended and especially there is no reason to allow the other person to provoke you to anger or acting without thought.

It seems really, really difficult to convey to people who don't understand it already that becoming offended is a choice, and it's possible to not allow someone to control you in that way. Maybe "offendibility" is linked to a fundamental personality trait.

Replies from: loqi, Cayenne
comment by loqi · 2011-05-10T19:01:37.674Z · LW(p) · GW(p)

What constitutes a "choice" in this context is pretty subjective. It may be less confusing to tell someone they could have a choice instead of asserting that they do have a choice. The latter connotes a conscious decision gone awry, and in doing so contradicts the subject's experience that no decision-making was involved.

Replies from: wilkox
comment by wilkox · 2011-05-10T23:28:45.310Z · LW(p) · GW(p)

Good point. Reading my comment again, it seems obvious that I committed the typical mind fallacy in assuming that it really is a choice for most people.

Replies from: erikerikson
comment by erikerikson · 2012-12-20T23:34:11.756Z · LW(p) · GW(p)

I'd take this differently.

I would at least hope that you are claiming that there is, in fact, a choice, whether the subjective experience of the moment provides indication of the choice or not.

Maybe stated differently you could be claiming that there is the possibility of choice for all people whether a person is aware or capable of taking advantage of that fact. That a person can alter his or her self in order to provide his or her self with the opportunity to choose in such situations.

Loqi's feedback seems to me to be suggesting that individuals who do not have a belief that they have such a "possibility of choice" could have a more positive phenomenological experience of your assertion and as a result be more likely to integrate the belief into their own belief set and [presumably] gain advantage by encountering it.

That is me asserting that Loqi does not appear to be rejecting your assertion but only suggesting a manner by which it can be improved.

Replies from: erikerikson
comment by erikerikson · 2012-12-20T23:50:53.486Z · LW(p) · GW(p)

Of course, Loqi's suggestion could contingently be less optimal than the less easy to accept presentation.

While the approach you suggest could provide a more subjectively negative experience, the cognitive dissonance could cause the utterance to gain more attention from the brain as a more aberrant occurrence in its stimuli, and as a result be worthy of further analysis and consideration.

I am generally in favor of delivering notions I believe to be helpful in a manner which can/will be accepted. In some cases however, others are able and more likely to accept a less than pleasant delivery mechanism. This is contingent upon the audience, of course, as well as the level of knowledge you have about your audience. In the absence of such knowledge, the more gentle approach seems advisable.

comment by Cayenne · 2011-05-08T17:35:39.184Z · LW(p) · GW(p)

It could be. It seems not just difficult but actually against most culture on the planet. Consider that crimes of passion, like killing someone when you find them sleeping around on you, often get a lower sentence than a murder 'in cold blood'. If someone says 'he made me angry' we know exactly what that person means. Responding to a word with a bullet is a very common tactic, even in a joking situation; I've had things thrown at me for puns!

It does seem like a learn-able skill even so. I did not have this skill when I was a child, but I do have it now. The point in my life when I learned it seems to roughly correspond to when I was first trained and working in technical support. I don't know if there's a correlation there.

In any case, merely being aware that this is a skill may help a few people on this forum to learn it, and I can see only benefit in trying. It is possible to not control anger but instead never even feel it in the first place, without effort or willpower.

Edit - please disregard this post

Replies from: bbleeker, mendel
comment by Sabiola (bbleeker) · 2011-05-09T19:41:14.241Z · LW(p) · GW(p)

I imagine you wouldn't have lasted long in tech support if you hadn't learned that skill. :-)

comment by mendel · 2011-05-08T21:27:25.495Z · LW(p) · GW(p)

And yet, not to feel an emotion in the first place may obscure you to yourself - it's a two-sided coin. To opt to not know what you're feeling when I struggle to find out seems strange to me.

Replies from: Cayenne
comment by Cayenne · 2011-05-08T22:04:27.606Z · LW(p) · GW(p)

I think you're misunderstanding what I said. I'm not obscuring my feelings from myself. I'm just aware of the moment when I choose what to feel, and I actively choose.

I'm not advocating never getting angry, just not doing it when it's likely to impair your ability to communicate or function. If you choose to be offended, that's a valid choice... but it should also be an active choice, not just the default.

I find it fairly easy to be frustrated without being angry at someone. It is, after all, my fault for assuming that someone is able to understand what I'm trying to argue, so there's no point in being angry at them for my assumption. They might have a particularly virulent meme that won't let them understand... should I get mad at them for a parasite? It seems pointless.

Edit - please disregard this post

Replies from: mendel
comment by mendel · 2011-05-09T00:08:16.983Z · LW(p) · GW(p)

Well, it seems I misunderstand your statement, "It is possible to not control anger but instead never even feel it in the first place, without effort or willpower."

I know it is possible to experience anger, but control it and not act angry - there is a difference between having the feeling and acting on it. I know it is also possible to not feel anger, or to only feel anger later, when distanced from the situation. I'm ok with being aware of the feeling and not acting on it, but to get to the point where you don't feel it is where I'm starting to doubt whether it's really a net benefit.

And yes, I do understand that with different understandings/assumptions about other people, stuff that would otherwise have bothered me (or someone else) is no longer a source of anger. You changed your outlook and understanding of that type of situation so that your emotion is frustration and not anger. If that's what you meant originally, I understand now.

Replies from: Cayenne
comment by Cayenne · 2011-05-10T11:46:45.146Z · LW(p) · GW(p)

Mostly I don't even feel frustration, but instead sadness. I'd like to be able to help, but sometimes the best I can do is just be patient and try to explain clearly, and always immediately abandon my arguments if I find that I'm the one with the error.

Edit - please disregard this post

comment by wedrifid · 2011-05-07T20:09:32.390Z · LW(p) · GW(p)

I (really) like what you're saying here and it is something I often recommend (where appropriate) to people who have no interest in rationality whatsoever.

Well, except for drawing a line at 'true/false' with respect to when it can be wise to take actions to counter the statements. Truth is only one of the relevant factors. This doesn't detract at all from your core point.

I extend this philosophy to evaluating the socially relevant interactions of others. When something becomes a public scene that for some reason I care about, I do not automatically treat the offense, indignation or anger of the recipient as the responsibility of the person who provided the stimulus.

Replies from: Cayenne
comment by Cayenne · 2011-05-07T20:19:22.556Z · LW(p) · GW(p)

The true/false isn't the only line, but I feel that it's the most important. If something someone says to or about you is true, then no matter what you should own it in some way. Acknowledge that they're right, try to internalize it, try to change it, but never never just ignore it! (edit: If you're getting mad when someone says something truthful about you, then this should raise other warning flags as well! Examine the issue carefully to figure out what's really happening here.)

If the thing they say is false, then don't get mad first! Think it through carefully, and then do the minimum you can to deal with it. The most important thing is to not obsess over it afterward, because if you're doing that you're handing a piece of your life away for a very low or even negative return. Laugh about it, ignore it, get over it, but don't let it sit and fester in your mind.

Edit - please disregard this post

Replies from: wedrifid
comment by wedrifid · 2011-05-08T02:16:41.361Z · LW(p) · GW(p)

If you're getting mad when someone says something truthful about you, then this should raise other warning flags as well! Examine the issue carefully to figure out what's really happening here.

When it comes to making the most beneficial responses, feeling anger is almost never useful once you have a sufficient foundation in the mechanisms of social competition, regardless of truth. It tends to show weakness - the vulnerability to provocation that you are speaking of gives an opportunity for one-upmanship that social rivals will instinctively home in on.

In terms of the benefits and necessity of making a response it is the connotations that are important. Technical truth is secondary.

Replies from: Cayenne
comment by Cayenne · 2011-05-08T03:16:58.843Z · LW(p) · GW(p)

Very true.

I didn't mean to suggest that the truth/falsehood line was as useful socially as I believe it is internally. The social reaction you may decide on is mostly independent from truth.

Internally, it's important to recognize that truth, since it is vital feedback that can tell you when you may need to change.

Edit - please disregard this post

Replies from: wedrifid
comment by wedrifid · 2011-05-08T03:19:24.210Z · LW(p) · GW(p)

Internally, it's important to recognize that truth, since it is vital feedback that can tell you when you may need to change.

And, when false, when you may need to change what you do such that others don't get that impression (or don't think they can get away with making the public claim even though they know it is false).

comment by wedrifid · 2011-05-07T07:54:50.336Z · LW(p) · GW(p)

rationalists don't moralize

I like the theory but 'does not moralize' is definitely not a feature I would ascribe to Eliezer. We even have people quoting Eliezer's moralizing for the purpose of spreading the moralizing around!

"Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever."

In terms of the general moralizing tendencies of people who identify as rationalists, they seem to moralize slightly less than average, but the most notable difference is what they choose to moralize about. When people happen to have similar morals to yourself it doesn't feel like they are moralizing as much.

Replies from: Eliezer_Yudkowsky, BenAlbahari
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-07T19:37:15.402Z · LW(p) · GW(p)

Not everything that is not purely consequentialist reasoning is moralizing. You can have consequentialist justifications of virtue ethics or even consequentialist justifications of deontological injunctions, and you are allowed to feel strongly about them, without moralizing. It's a 5-second-level emotional direction, not a philosophical style.

Sigh. This is why I said, "But trying to define exactly what constitutes 'moralizing' isn't going to get us any closer to having nice rationalist communities."

Replies from: BenAlbahari, fubarobfusco, wedrifid, Clippy
comment by BenAlbahari · 2011-05-08T10:39:12.835Z · LW(p) · GW(p)

Sigh.

A 5-second method (that I employ with varying levels of success) is: whenever I feel the frustration of a failed interaction, I question how I might have made it more successful, regardless of whose "fault" it was. Your "sigh" reaction comes across as expressing the sentiment "It's your fault for not getting me. Didn't you read what I wrote? It's so obvious". But could you have expressed your ideas almost as easily without generating confusion in the first place? If so, maybe your reaction would instead be along the lines of "Oh, that's interesting. I thought it was obvious, but I guess I can see how that might have generated confusion. Perhaps I could...".

FWIW I actually really like the central idea in this post, and arguably too many of the comments have been side-tracked by digressions on moralizing. However, my hunch is that you probably could have easily gotten the message across AND avoided this confusion. My own specific suggestion here is that stipulative definitions are semantic booby traps, so if possible avoid them. Why introduce a stipulative definition for "moralize" when a less loaded phrase like "suspended judgement" could work? My head hurts reading these comments trying to figure out how each person is using the term "moralize" and I now have to think twice when reading the term on LW, including even your old posts. This is an unnecessary cognitive burden. In any case, my final note here would be to consider that you'd be lucky if your target audience for your upcoming book(s) was anywhere near as sharp as wedrifid. So if he's confused, that's a valuable signal.

comment by fubarobfusco · 2011-05-07T22:11:12.860Z · LW(p) · GW(p)

Eliezer, did you mean something different by the "does not get bullet" line than I thought you did? I took it as meaning: "If your thinking leads you to the conclusion that the right response to criticism of your beliefs is to kill the critic, then it is much more likely that you are suffering from an affective death spiral about your beliefs, or some other error, than that you have reasoned to a correct conclusion. Remember this, it's important."

This seems to be a pretty straightforward generalization from the history of human discourse, if nothing else. Whether it fits someone's definition of "moralizing" doesn't seem to be a very interesting question.

comment by wedrifid · 2011-05-07T19:43:59.041Z · LW(p) · GW(p)

I agree with the parent but maintain everything in the grandparent. There just isn't any contradiction of the kind that, judging from the sigh, I assume is intended.

Replies from: matt
comment by matt · 2011-05-08T04:40:58.659Z · LW(p) · GW(p)

I find myself frequently confused by Eliezer's "sigh"s.

Replies from: katydee
comment by katydee · 2011-05-08T05:12:38.725Z · LW(p) · GW(p)

Noticing your confusion is the first step to understanding.

Replies from: wedrifid
comment by wedrifid · 2011-05-08T07:14:08.620Z · LW(p) · GW(p)

Noticing your confusion is the first step to understanding.

Poster child for ADBOC.

Replies from: katydee
comment by katydee · 2011-05-08T07:45:02.337Z · LW(p) · GW(p)

Good point, link added.

comment by Clippy · 2011-05-07T20:09:21.133Z · LW(p) · GW(p)

You say rationalists don't moralize. Could you give me three concrete examples of moralizing that also promote a moral imperative that rationalists agree with, such as "One should respond to bad arguments with counterarguments rather than gunfire"?

comment by BenAlbahari · 2011-05-07T12:57:04.556Z · LW(p) · GW(p)

people who identify as rationalists, they seem to moralize slightly less than average

Really? The LW website attracts aspergers types and apparently morality is stuff aspergers people like.

Replies from: wedrifid, BillyOblivion
comment by wedrifid · 2011-05-07T15:11:20.258Z · LW(p) · GW(p)

Really? The LW website attracts aspergers types and apparently morality is stuff aspergers people like.

That's true, and usually I say 'a lot more' rather than 'slightly less'. However in this instance Eliezer seemed to be referring to a rather limited subset of 'moralizing'. He more or less excluded the case where you are obnoxiously judgemental but phrase your objections in consequentialist language. So the worst of nerd-moralizing was cut out.

comment by BillyOblivion · 2011-05-12T12:56:11.323Z · LW(p) · GW(p)

I suspect that what aspergers types like--if that post is correct and they do like it--is more the rules part of morality than the being judgmental[1] part of it. Strict rules for interacting with other folks make social interactions less error-prone when you literally don't--can't--get those social cues others do.

I've been judged to be at best borderline aspery (absent any real testing, who knows) and manifest many of the more subtle symptoms, and my take(s) on morality are (1) that it is much like driving regulations: no one gives a flying f' which side of the road you drive on as long as everybody does (no need to get judgemental about it unless someone is deliberately doing it wrong), and (2) that the human animal (at least the neurotypical human animal) has behavior patterns that are a result of both evolution and society. Following these behavior patterns will keep you from some fun and lots of pain, and will generally get you into the fat part of the bell curve. Break the wrong ones and you will wind up in the ugly part of the curve. Figure out how to break the right ones the right way and you get into the cool part of the bell curve where interesting shit happens.

Oh, and sometimes when you break these rules you hurt other people. When you hurt them by accident that's bad, when you hurt them on purpose and they don't deserve it, that's even worse. If they do deserve it then it's probably because they broke one of the rules.

People do shit for all sorts of reasons, and in contemporary society there are all sorts of people in power advocating all sorts of mildly to wildly stupid shit. Can't really blame someone all that much if they spent 12 years in schools that pushed the sort of "education" you get from compromising between fundamentalist Christians, New Age Fruit Cakes, Universal Church Members, and your typical politicians. Oh, and people with master's degrees in Education, never mind Doctorates.

Seriously, you're better off with the f'ing Jesuits. They might believe in God (it's hard to tell sometimes) but at least they also believe in Latin, in Logic and Math, and in "The book of nature".

Oh, that got a little off topic. Oops.

[1] Making judgments is what thinking people do all day long. Using those judgments to attach moral worth to someone is different.

comment by lessdazed · 2011-05-08T03:26:59.359Z · LW(p) · GW(p)

When people say they appreciate rationalists for their non-judgmentalism, I think they mean more than just that rationalists tend not to moralize. What they also mean is that rationalists are responsive to people's actual statements and opinions. This is separate from moralizing and in my opinion is more important, both because it precedes it in conversation and because I think people care about it more.

Being responsive to people means not assuming--and not being interpreted as inappropriately or incorrectly assuming--what the person you are listening to thinks.

If someone says "I think torture, such as sleep deprivation, is effective in getting information," and they support, say, both the government doing it and legalizing it, judging them to be a bad person for that and saying so won't build communal ties, but it's unlikely to be frustrating for the speaker.

If, on the other hand, they don't support the legalization or morality of it despite their claim it is effective, indignation will irritate them because it will be based on false assumptions about their beliefs.

If someone says "I'm thinking of killing myself", responding with "That violates my arbitraty and ridiculous deontological system", or some variation thereof, is probably unwelcome.

On the other hand, responding with "You'll get over being depressed", when your interlocutor does not feel depressed, will frustrate them. "Being depressed is a sin" would be an even worse response, combining both misinterpretation and moralizing.

Refraining from filling in the blanks in others' arguments happens to be a good way to avoid moralizing, since in order to be indignant about something you have to believe in its existence.

Scott Adams has a good example of something that only causes offense to some people, supposedly dependent on their general penchant for smashing distinct statements together, which is one way people inappropriately fill in blanks.

The dog might eat your mom's cake if you leave it out. A dog also might eat his own turd.

When you read those two statements, do you automatically suppose I am comparing your mom's cake to a dog turd? Or do you see it as a statement that the dog doesn't care what it eats, be it a delicious cake or something awful?

In this pair, it is easy to get someone to agree with both statements and also say they think they would hypothetically feel offense towards the speaker were it not a mere test... at least I am one for one, and I imagine it would work for others. I also think the person I asked actually felt real offense.

Something like this pair would be good for teaching because the student agrees with the component statements. Offense is a result of inappropriately combining them to infer a particular intent by the speaker.

If you are offended, ask yourself: "What am I assuming about the other person (that makes me think they are innately evil)?"

Replies from: RobinZ
comment by RobinZ · 2011-05-08T14:31:46.278Z · LW(p) · GW(p)

My usual method when confronted with a situation where a speaker appears to be stupid, crazy, or evil is to assume I misunderstood what they said. Usually by the time I understand what the opposite party is saying, I no longer have any problematic affective judgment.

Replies from: wedrifid
comment by wedrifid · 2011-05-09T00:00:29.743Z · LW(p) · GW(p)

My usual method when confronted with a situation where a speaker appears to be stupid, crazy, or evil is to assume I misunderstood what they said. Usually by the time I understand what the opposite party is saying, I no longer have any problematic affective judgment.

I usually find that I do understand what they are saying and it belongs in one of the neglected categories of 'bullshit' or "things that people say that aren't really actionable beliefs even though they may not be clear on the difference".

Replies from: wilkox, RobinZ, RobinZ
comment by wilkox · 2011-05-09T01:09:24.715Z · LW(p) · GW(p)

"things that people say that really actionable beliefs even though they may not be clear on the difference"

This sounds interesting, but I can't parse it.

Replies from: wedrifid
comment by wedrifid · 2011-05-09T05:38:23.831Z · LW(p) · GW(p)

This sounds interesting, but I can't parse it.

That's because you are using an English parser while my words were not valid English.

comment by RobinZ · 2011-05-09T13:38:43.966Z · LW(p) · GW(p)

Those don't usually give me much trouble - I find that the nonsense people propose is usually self-consistent in an interesting way, much like speculative fiction. On reflection, what really gives me trouble is viewpoints I understand and disagree with all within five seconds, like [insert politics here].

Replies from: BillyOblivion
comment by BillyOblivion · 2011-05-12T13:21:02.266Z · LW(p) · GW(p)

My experience is the opposite. On one hand you'll have people who do jobs that require a sort of met…

comment by RobinZ · 2011-05-09T01:08:56.172Z · LW(p) · GW(p)

"things that people say that" what? The grammar gets a little odd toward the latter half of that.

Replies from: endoself, wedrifid
comment by endoself · 2011-05-09T01:19:54.578Z · LW(p) · GW(p)

Presumably "things that people say that aren't really actionable beliefs"; though this reply feels awkward in a discussion about misunderstanding, I'm pretty sure that was the intended phrase.

comment by wedrifid · 2011-05-09T05:39:43.776Z · LW(p) · GW(p)

Fixed.

Replies from: RobinZ
comment by RobinZ · 2011-05-09T13:15:14.576Z · LW(p) · GW(p)

Thanks!

comment by Mitchell_Porter · 2011-05-07T09:00:38.212Z · LW(p) · GW(p)

On the topic of the "poisonous pleasure" of moralistic critique:

I am struck by the will to emotional neutrality which appears to exist among many "aspies". It's like a passive, private form of nonviolent resistance directed against neurotypical human nature; like someone who goes limp when being beaten up. They refuse to take part in the "emotional games", and they refuse to resist in the usual way when those games are directed against them - the usual form of defense being a counterattack - because that would make them just as bad as the aggressor normals.

For someone like that, it may be important to get in touch with their inner moralizer! Not just for the usual reason - that being able to fight back is empowering - but because it's actually a healthy part of human nature. The capacity to denounce, and to feel the sting of being denounced without exploding or imploding, is not just some irrational violent overlay on our minds, without which there would be nothing but mutual satisficing and world peace. It has a function and we neutralize it at our peril.

Replies from: Plasmon, Leonhart, Barry_Cotter, Wei_Dai, mutterc, Desrtopa, TimFreeman, lukstafi
comment by Plasmon · 2011-05-08T16:34:19.035Z · LW(p) · GW(p)

If the message you intend to send is "I am secure in my status. The attacker's pathetic attempts at reducing my status are beneath my notice.", what should you do? You don't seem to think that ignoring the "attacks" is the correct course of action.

This is a genuine question. I do not know the answer and I would like to know what others think.

Replies from: TimFreeman, novalis, wedrifid, mendel
comment by TimFreeman · 2011-05-09T18:26:23.983Z · LW(p) · GW(p)

"I am secure in my status. The attacker's pathetic attempts at reducing my status are beneath my notice."

I think the real message is "The attacker's attempt to reduce my status is too ineffective to need a response".

On a good day I'd say "okay" so he knows I heard him, and then start a conversation with someone else, unless there's some instrumental value in confronting him or continuing the conversation given that I now know he's playing status games. I don't know a good way to carry on a useful conversation with someone who is playing status games, so I'm stuck in that situation too.

comment by novalis · 2011-05-10T04:54:21.142Z · LW(p) · GW(p)

Sarcasm.

comment by wedrifid · 2011-05-09T00:07:37.710Z · LW(p) · GW(p)

If the message you intend to send is "I am secure in my status. The attacker's pathetic attempts at reducing my status are beneath my notice.", what should you do?

Ignoring the attempts is a good default. It gives a decent payoff while being easy to implement. More advanced alternatives are the witty, incisive comeback or the smooth, delicately calibrated communication of contempt for the attacker to the witnesses. In the latter case especially body language is the critical component.

comment by mendel · 2011-05-08T21:37:52.782Z · LW(p) · GW(p)

My opinion? I'd not lie. You've noticed the attempt, why claim you didn't? Display your true reaction.

Replies from: wedrifid
comment by wedrifid · 2011-05-09T00:02:06.177Z · LW(p) · GW(p)

My opinion? I'd not lie. You've noticed the attempt, why claim you didn't? Display your true reaction.

Noticing the attempt and doing nothing is not a lie. It is a true reaction.

Replies from: mendel
comment by mendel · 2011-05-09T10:21:43.181Z · LW(p) · GW(p)

beneath my notice

I'm referring to that. Sending that message is an implicit lie -- well, you could call it a "social fiction", if you like a less loaded word.

It is also a message that is very likely to be misunderstood. (I don't yet know my way around Less Wrong well enough to find it again, but I think there's an essay here someplace that deals with the likelihood of recipients understanding something completely different from what you intended to mean, and you not being able to detect this because the interpretation you know shapes your perception of what you said.)

So if your true reaction is "you are just trying to reduce my status, and I don't think it's worth it for me to discuss this further", my choice, given the option to not display it or to display it, would usually be to display it, if a reaction was expected of me.

I hope I was able to clarify my distinction between having a true reaction, and displaying it. In a nutshell, if you notice something, you have a reaction, and by not displaying it (when it is expected of you), you create an ambiguous situation that is not likely to communicate to the other person what you want it to communicate.

Replies from: Barry_Cotter, wedrifid
comment by Barry_Cotter · 2011-05-09T17:49:01.565Z · LW(p) · GW(p)

implicit lie vs. social fiction

I don't think these are normally useful ways of thinking about status posturing. Verbalising this stuff is a faux pas in the overwhelming majority of human social groups.

I'm not sure if I disagree with you on whether the message is "very likely" to be understood. In my limited experience, and with my below average people reading skills, I'd say that most status jockeying in non-intimate contexts is obvious enough for me to notice if I'm paying attention to the interaction.

The post you meant is probably Illusion of Transparency. I contend that it applies less strongly to in person status jockeying than to lingual information transfer. I suggest you watch a clip of a foreign language movie if you disagree.

Replies from: mendel
comment by mendel · 2011-05-11T00:35:36.445Z · LW(p) · GW(p)

Yes, that's the post I was referring to. Thank you!

comment by wedrifid · 2011-05-09T11:46:42.839Z · LW(p) · GW(p)

So if your true reaction is "you are just trying to reduce my status, and I don't think it's worth it for me to discuss this further", my choice, given the option to not display it or to display it, would usually be to display it, if a reaction was expected of me.

This can work sometimes, but in most contexts it is difficult to pull off without sounding awkward or crude. At best it conveys that you are aware that social dynamics exist but aren't quite able to navigate them smoothly yet. Mind you, unless there is a pre-existing differential in status or social skills in their favour, they will tend to come off slightly worse than you in the exchange. A costly punishment.

comment by Leonhart · 2011-05-07T12:27:54.310Z · LW(p) · GW(p)

It's like a passive, private form of nonviolent resistance directed against neurotypical human nature; like someone who goes limp when being beaten up.

Mitchell, yes, that was me back in high school. But IIRC I thought I was doing this.

comment by Barry_Cotter · 2011-05-07T13:03:36.952Z · LW(p) · GW(p)

You don't need to be angry to hit someone, or to spread gossip, or to otherwise retaliate against them. If you recognise that someone is a threat or an obstacle you can deal with them as such without the cloud of rage that makes you stupider. You do not need to be angry to decide that someone is in your way and that it will be necessary to fuck them up.

Replies from: Vladimir_M, fiddlemath, Swimmer963
comment by Vladimir_M · 2011-05-07T22:08:31.731Z · LW(p) · GW(p)

If you recognise that someone is a threat or an obstacle you can deal with them as such without the cloud of rage that makes you stupider.

Then why didn't humans evolve to perform rational calculations of whether retaliation is cost-effective, instead of flying into uncontrollable rage? The answer, of course, is largely in Schelling. The propensity to lose control when enraged is a strategic precommitment to lash out if certain boundaries are overstepped.

Now of course, in the modern world there are many more situations where this tendency is maladaptive than in the human environment of evolutionary adaptedness. Nevertheless, I'd say that in most situations in which it enters the strategic calculations it's still greatly beneficial.

Replies from: Barry_Cotter
comment by Barry_Cotter · 2011-05-08T21:07:11.357Z · LW(p) · GW(p)

Now of course, in the modern world there are many more situations where this tendency is maladaptive than in the human environment of evolutionary adaptedness. Nevertheless, I'd say that in most situations in which it enters the strategic calculations it's still greatly beneficial.

I agree, or at least agree for situations where people are in their native culture or one they're intimately familiar with, so that they're relatively well-calibrated. What I wrote was poorly phrased to the point of being wrong without lawyerly cavilling.

To rephrase more carefully: you can act in a manner that gets the same results as anger without being angry. You can have a better, more strategic response. I'm not claiming it's easy to rewire yourself like this, but it's possible. If your natural anger response is anomalously low, as is the case for myself and many others on the autism spectrum, and you're attempting some relatively hardcore rewiring anyway, why not go for the strategic analysis instead of trying to decrease your threshold for blowing up?

Replies from: Vladimir_M
comment by Vladimir_M · 2011-05-09T06:15:09.478Z · LW(p) · GW(p)

I'm not sure if you understand the real point of precommitment. The idea is that your strategic position may be stronger if you are conditionally committed to act in ways that are irrational if these conditions are actually realized. Such precommitment is rational on the whole because it eliminates the opponent's incentives to create these conditions, so if the strategy works, you don't actually have to perform the irrational act, which remains just a counterfactual threat.

In particular, if you enter confrontations only when it is cost-effective to do so, this may leave you vulnerable to a strategy that maneuvers you into a situation where surrender is less costly than fighting. However, if you're precommitted to fight even irrationally (i.e. if the cost of fighting is higher than the prize defended), this makes such strategies ineffective, so the opponent won't even try them.

So for example, suppose you're negotiating the price you'll charge for some work, and given the straightforward cost-benefit calculations, it would be profitable for you to get anything over $10K, while it would be profitable for the other party to pay anything under $20K, so the possible deals are in that range. Now, if your potential client resolutely refuses to pay more than $11K, and if it's really impossible for you to get more, it is still rational for you to take that price rather than give up on the deal. However, if you are actually ready to accept this price given no other options, this gives the other party the incentive to insist with utter stubbornness that no higher price is possible. On the other hand, if you signal credibly that you'd respond to such a low offer by getting indignant that your work is valued so little and leaving angrily, then this strategy won't work, and you have improved your strategic position -- even though getting angry and leaving is irrational assuming that $11K really is the final offer.

(Clearly, the strategy goes both ways, and the buyer is also better off if he gets "irrationally" indignant at high prices that still leave him with a net plus. Real-life negotiations are complicated by countless other factors as well. Still, this is a practically relevant example of the basic principle.)
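
A minimal sketch of this bargaining logic in Python (the $10K cost and the $11K lowball come from the example above; the $15K "fair" price and the 10% chance that the buyer keeps insisting against a precommitted seller are assumed for illustration):

```python
# Toy model of the negotiation above. The $10K cost and $11K lowball
# come from the example; the $15K "fair" price and the 10% chance that
# the buyer keeps insisting anyway are illustrative assumptions.

def seller_expected_profit(precommitted, p_buyer_insists=0.1):
    cost, lowball, fair = 10_000, 11_000, 15_000
    if not precommitted:
        # The buyer knows stubbornness works, so he always lowballs,
        # and the flexible seller rationally accepts.
        return lowball - cost
    # The precommitted seller walks away from lowball offers. With
    # probability p_buyer_insists the buyer insists and the deal dies
    # (profit 0); otherwise he concedes the fair price.
    return (1 - p_buyer_insists) * (fair - cost)

print(seller_expected_profit(False))  # 1000: the exploitable seller
print(seller_expected_profit(True))   # 4500.0: "irrational" indignation pays
```

The point the numbers make: walking away from $11K is locally irrational (it forgoes a sure $1K), yet the seller who is known to do it ends up ahead on average, because the lowball mostly stops being offered.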

Now of course, an ideally rational agent with a perfect control of his external behavior would play the double game of signaling such precommitment convincingly but falsely and yielding if the bluff is called (or perhaps not if there would be consequences on his reputation). This however is normally impossible for humans, so you're better off with real precommitment that your emotional propensity to anger provides. Of course, if your emotional propensities are miscalibrated in any way, this can lead to strategic blunders instead of benefits -- and the quality of this calibration is a very significant part of what differentiates successful from unsuccessful people.

Replies from: wedrifid, Barry_Cotter
comment by wedrifid · 2011-05-09T06:29:39.769Z · LW(p) · GW(p)

I'm not sure if you understand the real point of precommitment. The idea is that your strategic position may be stronger if you are conditionally committed to act in ways that are irrational if these conditions are actually realized. Such precommitment is rational on the whole because it eliminates the opponent's incentives to create these conditions, so if the strategy works, you don't actually have to perform the irrational act, which remains just a counterfactual threat.

I agree with what you are saying and would perhaps have described it as "ways that would otherwise have been irrational".

comment by Barry_Cotter · 2011-05-09T17:27:24.686Z · LW(p) · GW(p)

I obviously need to work on phrasing things more clearly.

Anger functions as a strategic precommitment which improves your bargaining position. Two examples of a precommitment would be as follows: (1) a car buyer going to a dealership with a contract stating that for every dollar they pay over a predetermined price (manufacturer's price plus average industry margin, presumably) they must pay ten dollars to some other party (who can credibly hold them to it); (2) destroying your means of retreat when you plan aggression against another party, so that you have no motive to hold anything back, like Cortes did when he burned his ships upon landing in Mexico.

Now (1) is more like anger than (2) is because it's a public signal, but both of them reduce your options to strengthen your position, (1) in a negotiation, (2) as a committed, cohesive group. (1) is very much like throwing the steering wheel out the window in the game of chicken. Pretending your hands are tied and you can't go above/below the stated price without going further up the chain of command is actually one of those negotiating tricks that are in all the books, like the car salesman who goes "Oh, I'm not sure; I'll have to consult my boss" and smokes a cigarette in the office before coming back and agreeing to a lower price.

Swimmer963 asked me:

If you're not angry, what would motivate you to do any of those things?

and I replied

If you are dealing with someone in your social circle, or can be seen by someone in your social circle and you want to build or maintain a reputation as someone it is not wise to cross. Even if it's more or less a one shot game, if you make a point of not being a doormat it is likely to impact your self-image, which will impact your behaviour, which will impact how others treat you.

Even if in the short run retaliating helps nobody and slightly harms you, it can be worth it for reputational and self-concept reasons.

which I think shows at least a weak grasp of how these precommitments can work; one builds a reputation, and given that we're meatbags with malleable conceptions of self, a reason to make such precommitments even when they cannot affect our reputation.

If "normally impossible" means very, very hard I agree completely; robust self-behavioural modification is hard even for small things, never mind for something as difficult to bring into conscious awareness or control as anger.

Would you consider expanding upon quality of calibration?

Replies from: Vladimir_M
comment by Vladimir_M · 2011-05-09T20:30:07.987Z · LW(p) · GW(p)

Yes, I think we understand each other now. Funny, I had the "must consult my boss" trick pulled on me just a few days ago by a guy whom I called up to haul off some trash. I still managed to make him lower the supposedly boss-mandated price by about 20%. (And when I later thought about the whole negotiation more carefully, I realized I could have probably lowered it much more.)

Regarding the quality of calibration, it's straightforward. Emotional reactions can serve as strategic precommitments the way we just discussed, and often they also serve as decision heuristics in problems where one lacks the necessary information and processing power for a conscious rational calculation. In both cases, they can be useful if they are well-calibrated to produce strategically sound actions, but if they're poorly calibrated, they can lead to outright irrational and self-destructive behavior.

So for example, if you fail to feel angry indignation when appropriate, you're in danger of others maneuvering you into a position where they'll treat you as a doormat, both in business and in private life. On the other hand, if such emotions are triggered too easily, you'll be perceived as short-tempered, unreasonable, and impossible to deal with, again with bad consequences, both professional and private.

It seems to me that the key characteristic that distinguishes high achievers is the excellent calibration of their emotional reactions -- especially compared to people who are highly intelligent and conscientious and nevertheless have much less to show for it.

comment by fiddlemath · 2011-05-07T20:13:32.003Z · LW(p) · GW(p)

You do not need to be angry to decide that someone is in your way and that it will be necessary to fuck them up.

No; but it certainly makes it likelier that you will bring yourself to action.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-05-07T20:19:06.348Z · LW(p) · GW(p)

You don't need to be angry to hit someone, or to spread gossip, or to otherwise retaliate against them.

If you're not angry, what would motivate you to do any of those things? If someone injures me in some way or takes something that I wanted, usually neither hitting them nor spreading gossip about them will in any way help me repair my injury or get back what they took from me. So I don't. Unless I'm angry, in which case it kind of just happens, and then I regret it because it usually makes the situation worse.

Replies from: AdeleneDawner, TheOtherDave, Barry_Cotter
comment by AdeleneDawner · 2011-05-08T21:31:45.428Z · LW(p) · GW(p)

If you're not angry, what would motivate you to do any of those things?

Put simply, sometimes displaying a strong emotional response (genuine or otherwise) is the only way to convince someone that you're serious about something. This seems to be particularly true when dealing with people who aren't inclined to use more 'intellectual' communication methods.

Replies from: wedrifid, Swimmer963
comment by wedrifid · 2011-05-08T23:22:59.132Z · LW(p) · GW(p)

Put simply, sometimes displaying a strong emotional response (genuine or otherwise) is the only way to convince someone that you're serious about something. This seems to be particularly true when dealing with people who aren't inclined to use more 'intellectual' communication methods.

I think you're right. Mind you, as someone who is interested in communication that doesn't involve control via strong emotional responses, I most definitely don't reward bad behaviour by giving the other what they want. This applies especially if they use the aggressive tactics of the kind mentioned here. I treat those as attacks and respond in such a way as to discourage any further aggression by them or other witnesses.

This is not to say I don't care about the other's experience or desires, nor does it mean that a strong emotional response will rule out me giving them what they want. If the other is someone that I care about I will encourage them towards expressions that actually might work for getting me to give them what they want. I'll guide them towards asking me for something and perhaps telling me why it matters to them. This is more effective than making demands or attempting to emotionally control.

I'm far more generous than I am vulnerable to dominance attempts, and I'm actually willing to consciously make myself vulnerable to personal requests--up to just behind the line of outright weakness--because I have a strong preference for that mode of communication. Mind you, even this tends to be strongly conditional on a certain degree of reciprocation.

Point being that I agree with the 'sometimes' qualifier; the benefit of such displays (genuine or otherwise) is highly variable. We also have the ability to influence whether people make such displays to us, partly through the incentives they have and partly through simple screening.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-05-08T22:17:37.553Z · LW(p) · GW(p)

Seems true. Nevertheless I've never used it in this way. This may have more to do with my personality than anything: from what I've read here, I'm more of a conformist than the average Less Wrong reader, and I put a higher value on social harmony. I hate arguments that turn personal and emotional.

comment by TheOtherDave · 2011-05-07T21:11:04.786Z · LW(p) · GW(p)

I might hit someone because they're pointing a gun at me and I believe hitting them is the most efficient way to disarm them. I might hit someone because they did something dangerous and I believe hitting them is the most efficient way to condition them out of that behavior. I might spread gossip about them because they are using their social status in dangerous ways and I believe gossiping about them is the best available way of reducing their status.

None of those cases require anger, and they might even make the situation better. (Or they might not.)

Or, less nobly, I might hit someone because they have $100 I want, and I think that's the most efficient way to rob them. I might spread gossip about them because we're both up for the same promotion and I want to reduce their chance of getting it.

None of those cases require anger, either. (And, hey, they might make the situation better, too. Or they might not.)

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-05-08T01:00:14.453Z · LW(p) · GW(p)

I suppose the context of my comment was limited to a) me personally (I don't have any desire to steal money or reduce other people's chances of promotion) and b) to the situations I have encountered in the past (no guns or danger involved). Your points are very valid though.

comment by Barry_Cotter · 2011-05-08T21:14:37.125Z · LW(p) · GW(p)

If you're not angry, what would motivate you to do any of those things?

If you are dealing with someone in your social circle, or can be seen by someone in your social circle and you want to build or maintain a reputation as someone it is not wise to cross. Even if it's more or less a one shot game, if you make a point of not being a doormat it is likely to impact your self-image, which will impact your behaviour, which will impact how others treat you.

Even if in the short run retaliating helps nobody and slightly harms you, it can be worth it for reputational and self-concept reasons.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-05-08T22:23:25.721Z · LW(p) · GW(p)

Point taken. I am a doormat. People have told me this over and over again, so I probably have a reputation as a doormat, but that has a certain value in itself; I have a reputation as someone who is dependable, loyal, and does whatever is asked of me, which is useful in a work context.

comment by Wei Dai (Wei_Dai) · 2011-05-09T05:40:53.386Z · LW(p) · GW(p)

It has a function and we neutralize it at our peril.

Can you be more specific? What exactly are the dangers of neutralizing our "inner moralizers"?

Also, see my previous comments, which may be applicable here. I speculate that "aspies" free up a large chunk of the brain for other purposes when they ignore "emotional games", and it's not clear to me that they should devote more of their cognitive resources toward such games.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2011-05-09T08:53:55.460Z · LW(p) · GW(p)

Can you be more specific? What exactly are the dangers of neutralizing our "inner moralizers"?

Having brought up this topic, I find that I'm reluctant to now do the hard work of organizing my thoughts on the matter. It's obvious that the ability to moralize has a tactical value, so doing without it is a form of personal or social disarmament. However, I don't want to leave the answer at that Nietzschean or Machiavellian level, which easily leads to the view that morality is a fraud but a useful fraud, especially for deceptive amoralists. I also don't want to just say that the human utility function has a term which attaches significance to the actions, motives and character of other agents, in such a way that "moralizing" is sometimes the right thing to do; or that labeling someone as Bad is an efficient heuristic.

I have glimpsed two rather exotic reasons for retaining one's capacity for "judging people". The first is ontological. Moral judgments are judgments about persons and appeal to an ontology of persons. It's important and useful to be able to think at that level, especially for people whose natural inclination is to think in terms of computational modules and subpersonal entities. The second is that one might want to retain the capacity to moralize about oneself. This is an intriguing angle because the debate about morality tends to revolve around interactions between persons, whether morality is just a tool of the private will to power, etc. If the moral mode can be applied to one's relationship to reality in general (how you live given the facts and uncertainties of existence, let's say), and not just to one's relationship to other people, that gives it an extra significance.

The best answer to your question would think through all that, present it in an ordered and integrated fashion, and would also take account of all the valid reasons for not liking the moralizing function. It would also have to ground the meaning of various expressions that were introduced somewhat casually. But - not today.

Replies from: mendel, Wei_Dai
comment by mendel · 2011-05-09T09:55:14.100Z · LW(p) · GW(p)

In another comment on this post, Eugine Nier linked to Schelling. I read that post, and the Slate page that mentions Schelling vs. Vietnam, and it became clear to me that acting morally serves as an "antidote" to these underhanded strategies that count on your opponent being rational. (It also serves as a Gödelian meta-layer to decide problems that can't be decided rationally.)

If, in Schelling's example, the guy who is left with the working radio set is moral, he might reason that "the other guy doesn't deserve the money if he doesn't work for it", and from that moral strongpoint refuse to cooperate. Now if the rationalist knows he's working with a moralist, he'll also know that his immoral strategy won't work, so he won't attempt it in the first place - a victory for the moralist in a conflict that hasn't even occurred (in fact, the moralist need never know that the rationalist intended to cheat him).

This is different from simply acting irrationally in that the moralist's reaction remains predictable.

So it is possible that moral indignation helps me to prevent other people from maneuvering me into a position where I don't want to be.
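
A minimal sketch of that deterrence logic, recast as a one-shot ultimatum game rather than Schelling's radio example (the $100 pot and the moralist's fifty-fifty fairness threshold are assumed for illustration):

```python
# A predictably "moral" responder deters an unfair proposal that would
# succeed against a responder known to accept any positive amount.
# The $100 pot and the fifty-fifty threshold are assumptions.

POT = 100

def accepts(offer, responder):
    if responder == "rationalist":
        return offer > 0           # any positive share beats nothing
    if responder == "moralist":
        return offer >= POT // 2   # predictably rejects unfair splits
    raise ValueError(responder)

def proposer_take(responder):
    # The proposer keeps the largest share he expects to get away with.
    return max(k for k in range(POT + 1) if accepts(POT - k, responder))

print(proposer_take("rationalist"))  # 99: the cheat is attempted, and works
print(proposer_take("moralist"))     # 50: the cheat is never even tried
```

Note that the moralist never actually has to refuse anything; as mendel says, the deterrence happens in a conflict that never occurs, and it works precisely because the refusal is predictable rather than merely irrational.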

Replies from: Viliam_Bur
comment by Viliam_Bur · 2011-09-05T11:48:37.781Z · LW(p) · GW(p)

Seems like morality is (inter alia) a heuristic for improving one's bargaining position by limiting one's options.

comment by Wei Dai (Wei_Dai) · 2011-05-09T17:36:48.876Z · LW(p) · GW(p)

It occurs to me that I'm not less judgmental than the typical human, just judgmental in a different way and less vocal about it (except in the "actions speak louder than words" sense). My main judgement of a person is just whether it is worth my time to talk to / work with / play with / care about that person, and if my "inner moralizer" says no, I simply ignore or get away from them. I'm not sure if I can be considered an "aspie" but I suspect many of them are similar in this way.

Compared to what's more typical, this method of "moralizing" seems to have all of the benefits you listed (except the last one, "If the moral mode can be applied to one's relationship to reality in general", which I don't understand) but fewer costs. It is less costly in mental resources, and less likely to get you involved in negative-sum situations. I note that it wouldn't have worked well in an ancestral environment where you lived in a small tribe and couldn't ignore or get away from others freely, which perhaps explains why it doesn't come naturally to most people despite its advantages.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2011-05-11T07:57:21.621Z · LW(p) · GW(p)

the benefits you listed (except the last one, "If the moral mode can be applied to one's relationship to reality in general", which I don't understand)

See the comments here on the psychological meaning of "kingship". That's one aspect of the "relationship to reality" I had in mind. If you subtract from consideration all notions of responsibility towards other people, are all remaining motivations fundamentally hedonistic in nature, or is there a sense in which you could morally criticize what you were doing (or not doing), even if you were the only being that existed?

There is a tendency, in discussions here and elsewhere about ethics, choice, and motivation, either to reduce everything to pleasure and pain, or to a functionalist notion of preference which makes no reference to subjective states at all. Eliezer advocates a form of moral realism (since he says the word "should" has an objective meaning), but apparently the argument depends on behavior (in the real world, you'd pull the child on the train tracks out of harm's way) and on the hypothesized species-universality of the relevant cognitive algorithms. But that doesn't say what is involved in making the judgment, or in making the meta-judgment about how you would act. Subjectively, are we to think of such judgments as arising from emotional reactions (e.g. basic emotions like disgust or fear)? It leaves open the question of whether there is a distinctive moral modality - a mode of perception or intuition - and my further question would be whether it only applies to other people (or to relations between you the individual and other people), or whether it can ever apply to yourself in isolation. In culture, I see a tendency to regard choices about how to live (that don't impact on other people) as aesthetic choices rather than ethical choices.

Mostly I have questions rather than answers here.

comment by mutterc · 2011-05-08T00:22:41.851Z · LW(p) · GW(p)

With Aspies it's probably less that they won't take part in emotional games than that they can't.

comment by Desrtopa · 2011-05-10T04:59:36.056Z · LW(p) · GW(p)

I'm not sure I'm correctly interpreting what you're referring to here. Could you give a concrete example?

Replies from: Mitchell_Porter, Mitchell_Porter
comment by Mitchell_Porter · 2011-05-11T08:29:27.543Z · LW(p) · GW(p)

Could you give a concrete example?

The Zen thing to do would be to flame you with absurd viciousness for being excessively vague in your own request for clarification, in the hope that your response would be combative (rather than purely analytical), but still appropriate - because then you would have provided the example yourself. But that's a high-risk conversational strategy. :-)

comment by Mitchell_Porter · 2011-05-11T06:41:26.793Z · LW(p) · GW(p)

Can you be more specific?

comment by TimFreeman · 2011-05-07T13:21:16.243Z · LW(p) · GW(p)

For someone [with at least a shade of Asperger's Syndrome], it may be important to get in touch with their inner moralizer!

Agreed, although I don't know that I have any Asperger's. Here's a sample dialogue I actually had that would have gone better if I had been in touch with my inner moralizer. I didn't record it, so it's paraphrased from memory:

X: It's really important to me what happens to the species a billion years from now. (X actually made a much longer statement, with examples.)

Me: Well, you're human, so I don't think you can really have concerns about what happens a billion years from now because you can't imagine that period of time. It seems much more likely that you perceive talking about things a billion years off to be high status, and what you really want is the short term status gain from saying you have impressive plans. People aren't really that altruistic.

X: I hate it when people point out that there are two of me. The status-gaming part is separate from the long-term planning part.

Me: There is only one of you, and only one of me.

X: You're selfish! (This actually made more sense in the real conversation than it does here. This was some time ago and my memory has faded.)

Me: (I exited the conversation at this point. I don't remember how.)

I exited because I judged that X was making something he perceived to be an ad-hominem argument, and I knew that X knew that ad-hominem arguments were fallacious, and I couldn't deal with the apparent dishonesty. It is actually true that I am selfish, in the sense that I acknowledge no authority over my behavior higher than my own preferences. This isn't so bad given that some of my preferences are that other people get things they probably want. Today I'm not sure X was intending to make an ad-hominem argument. This alternative for my last step would have been better:

Me if I were in touch with my inner moralizer: Do I correctly understand that you are trying to make an ad-hominem argument?

If I had taken that path, I would either have clear evidence that X is dishonest, or a more interesting conversation if he wasn't; either way would have been better.

When I visualize myself taking the alternative I presently prefer, I also imagine myself stepping back so I would be just out of X's reach. I really don't like physical confrontation.

My original purpose here was to give an example, but the point at the end is interesting: if you're going to denounce, there's a small chance that things might escalate, so you need to get clear on what you want to do if things escalate.

Replies from: Peter_de_Blanc, shokwave, wedrifid
comment by Peter_de_Blanc · 2011-05-07T14:09:52.594Z · LW(p) · GW(p)

Me: Well, you're human, so I don't think you can really have concerns about what happens a billion years from now because you can't imagine that period of time.

In what sense are you using the word imagine, and how hard have you tried to imagine a billion years?

Replies from: TimFreeman
comment by TimFreeman · 2011-05-07T20:37:22.099Z · LW(p) · GW(p)

In what sense are you using the word imagine, and how hard have you tried to imagine a billion years?

I have a really poor intuition for time, so I'm the wrong person to ask.

I can imagine a thousand things as a 10x10x10 cube. I can imagine a million things as a 10x10x10 arrangement of 1K cubes. My visualization for a billion looks just like my visualization for a million, and a year seems like a long time to start with, so I can't imagine a billion years.

In order to have desires about something, you have to have a compelling internal representation of that something so you can have a desire about it.

X didn't say "I can too imagine a billion years!", so none of this pertains to my point.

Replies from: Richard_Kennaway, Peter_de_Blanc
comment by Richard_Kennaway · 2011-05-09T11:31:39.565Z · LW(p) · GW(p)

My visualization for a billion looks just like my visualization for a million, and a year seems like a long time to start with, so I can't imagine a billion years.

Would it help to be more specific? Imagine a little cube of metal, 1mm wide. Imagine rolling it between your thumb and fingertip, bigger than a grain of sand, smaller than a peppercorn. Yes?

A one-litre bottle holds 1 million of those. (If your first thought was the packing ratio, your second thought should be to cut the corners off to make cuboctahedra.)

Now imagine a cubic metre. A typical desk has a height of around 0.75m, so if its top is a metre deep and 1.33 metres wide (quite a large desk), then there is 1 cubic metre of space between the desktop and the floor.

It takes 1 billion of those millimetre cubes to fill that volume.

Now find an Olympic-sized swimming pool and swim a few lengths in it. It takes 2.5 trillion of those cubes to fill it.

Fill it with fine sand of 0.1mm diameter, and you will have a few quadrillion grains.

A bigger problem I have with the original is where X says "It's really important to me what happens to the species a billion years from now." The species, a billion years from now? That sounds like a failure to comprehend just what a billion years is: the time that life has existed on Earth so far. I confidently predict that a billion years hence, not a single presently existing species, including us, will still exist in anything much like its present form, even imagining "business as usual" and leaving aside existential risks and singularities.
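
The arithmetic here checks out (a quick verification; the 50 m × 25 m × 2 m pool dimensions are assumed, since the comment doesn't state them):

```python
cube = 1e-3 ** 3           # volume of a 1 mm cube, in m^3
print(1e-3 / cube)         # 1 litre     -> 1e6    cubes: a million
print(1.0 / cube)          # 1 m^3       -> 1e9    cubes: a billion
pool = 50 * 25 * 2         # assumed Olympic pool volume, 2500 m^3
print(pool / cube)         #             -> 2.5e12 cubes: 2.5 trillion
print(pool / 1e-4 ** 3)    # 0.1 mm sand -> 2.5e15 grains: a few quadrillion
```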

Replies from: TimFreeman
comment by TimFreeman · 2011-05-09T13:28:12.947Z · LW(p) · GW(p)

Excellent. I can visualize a billion now. Thank you.

comment by Peter_de_Blanc · 2011-05-08T02:33:45.835Z · LW(p) · GW(p)

First, I imagine a billion bits. That's maybe 15 minutes of high quality video, so it's pretty easy to imagine a billion bits. Then I imagine that each of those bits represents some proposition about a year - for example, whether or not humanity still exists. If you want to model a second proposition about each year, just add another billion bits.
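
The video estimate is at least the right order of magnitude (my check, not Peter_de_Blanc's; a billion bits over fifteen minutes works out to roughly a 2011-era streaming bitrate):

```python
bits, seconds = 1e9, 15 * 60
print(bits / seconds / 1e6)  # ~1.1 Mbps, typical 2011 web-video bitrate
```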

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-05-09T09:59:20.966Z · LW(p) · GW(p)

That's maybe 15 minutes of high quality video, so it's pretty easy to imagine a billion bits.

Perhaps I don't understand your usage of the word 'imagine', because this example doesn't really help me 'imagine' them at all. I can imagine their result (the high quality video), sure, but not the bits themselves.

comment by shokwave · 2011-05-09T14:48:41.636Z · LW(p) · GW(p)

Well, you're human, so I don't think you can really have concerns about what happens a billion years from now because you can't imagine that period of time.

I can't imagine the difference between sixteen million dollars and ten million dollars - in my imagination, the stuff I do with the money is exactly the same. I definitely prefer 16 to 10 though. In much the same way, my imagination of a million dollars and a billion dollars doesn't differ too much; I would also prefer the billion. I don't know if I need to imagine a billion years accurately in order to prefer it, or have concerns about it becoming less likely.

comment by wedrifid · 2011-05-08T03:17:35.503Z · LW(p) · GW(p)

Agreed, although I don't know that I have any Asperger's. Here's a sample dialogue I actually had that would have gone better if I had been in touch with my inner moralizer.

One of the great benefits that being in touch with the inner moralizer can have is that it can warn you about how what you say will be interpreted by another. It would probably recommend against speaking your first paragraph, for example.

I suspect the inner moralizer would also probably not treat the "You're selfish" as an ad hominem argument. It technically does apply, but from within a moral model what is going on isn't of the form of the ad hominem fallacy. It is more of the form:

  • Not expressing and expecting others to express a certain moral position is bad.
  • You are bad.
  • You should fear the social consequences of being considered bad.
  • You should change your moral position.

I'm not saying the above is desirable reasoning - it's annoying and has its own logical problems. But it is also a different underlying mistake than the typical ad hominem.

Replies from: TimFreeman
comment by TimFreeman · 2011-05-08T12:30:40.495Z · LW(p) · GW(p)

One of the great benefits that being in touch with the inner moralizer can have is that it can warn you about how what you say will be interpreted by another. It would probably recommend against speaking your first paragraph, for example.

If it works that way, I don't want it. My relationship with X has no value to me if the relevant truths cannot be told, and so far as I can tell that first paragraph was both true and relevant at the time.

Now if that had been a coworker with whom I needed ongoing practical cooperation, I would have made some minimal polite response just like I make minimal polite responses to statements about who is winning American Idol.

...But it is also a different underlying mistake than the typical ad hominem.

Okay, there might be some detailed definition of ad hominem that doesn't exactly match the mistake you described. I presently fail to see how the difference is important. The purpose of both ad hominem and your offered interpretation is to use emotional manipulation to get the target (me in this example) to shut up. Would I benefit in some way from making a distinction between the fallacy you are describing and ad hominem?

comment by lukstafi · 2011-05-07T12:34:47.714Z · LW(p) · GW(p)

Could you be more specific? Is the "inner moralizer", as opposed to, say, the "inner consequentialist", a virtue of the human condition (of how the brain is wired), or is it an "objectively good solution given limited cognitive resources"? Is your statement rather about humans, or rather about moralization?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2011-05-08T08:32:48.963Z · LW(p) · GW(p)

I am still thinking this through. It's a very subtle topic. But having begun to think about it, the sheer number of arguments that I have found (which are in favor of preserving and employing the moral perspective) encourages me to believe that I was right - I'm just not sure where to place the emphasis! Of course there is such a thing as moral excess, addiction to moralizing, and so forth. But eschewing moral categories is psychologically and socially utopian (in a bad sense), the intersubjective character of the moral perspective has a lot going for it (it's cognitively holistic since it is about whole agents criticizing whole agents; you can't forgive someone unless you admit that they have wronged you; something about how you can't transcend the moral perspective, in the attractive emotional sense, unless you understand it by passing through it)... I wouldn't say it's just about computational utility.

Replies from: lukstafi
comment by lukstafi · 2011-05-08T12:28:39.992Z · LW(p) · GW(p)

I must clarify that I've been concerned with contrasting the function of moralization and the mechanism of moralization, which is ingrained so deeply that without enough praise children develop dysfunctionally, etc.

comment by MixedNuts · 2011-05-15T20:42:51.781Z · LW(p) · GW(p)

take a rationalist skill you think is important

Facing Reality, applied to self-knowledge

come up with a concrete example of that skill being used successfully;

"It sure seems I can't get up. Yet this looks a lot like laziness or attention-whoring. No-no-I'm-not-this-can't-be-STOP. Yes, there is a real possibility I could get up but am telling myself I can't, and I should take that into account. But upon introspection, and trying to move the damn things, it does feel like I can't, which is strong evidence.

So I'm going to figure out some tests. Maybe see a doctor; try to invoke reflexes that would make me move (careful, voluntary movement can truly fail even if reflexes don't); ask some trusted people, telling them the whole truth. Importantly, I'm going to refuse to use it as an excuse to slack off. I can crawl!"

crawls to nearest pile of homework, and works lying prone, occasionally trying to get up

decompose that use to a 5-second-level description of perceptual classifications and emotion-evoking contexts and associative triggers to actionable procedures;

  • try to move legs, fail
  • compare with expectation (possibly verbalizing it "Those are legs. They're used to move around.", more likely not), be surprised
  • recognize this as an obstacle to reaching a goal, thwarting the "decide to work => get up => walk to desk => sit down => work" chain
  • recognize this obstacle as unusual and un/insufficiently planned for
  • pattern-match "weird obstacle" to "overly convenient excuse"
  • automatically think "No, other people use convenient excuses, but I don't, I'm sincere"
  • recognize this as wishful thinking (re: self-image)
  • accept the unpleasant hypotheses as possible (this looks litany-of-Gendlin-ish); "I do not want to be a lazy attention whore, but believing I am not won't help" - ick reaction to the process of rejecting the thought before reflecting on it in detail, flinch towards the painful thought
  • recognize you have a hypothesis to test; do a little dance and exclaim "yay, science!"
  • look for ways to test the hypothesis, as triggered by the recognition
  • implement easy tests, note others for later use
  • mark this train of thought with a little [closed] tag
  • go back to the original problem (easy in this example, since the awkward position triggers it)
  • examine the overly convenient excuse and check what it excuses from
  • feel a jolt of determination ("Oh yeah? You think you can stop me?") and look for roundabout ways to reach your goal anyway, partially out of spite and competitiveness
  • implement one of these ways
  • feel good about being The Determinator
  • optionally, reconsider the "I'm a lazy attention whore" hypothesis in light of the (totally rigged) test; move probability mass away from it towards "I have a legitimate problem, which I'm totally overcoming because I'm awesome" and "Sure I am, but look, I'm recovering"; award self a gold star

For a problem previously but rarely encountered, this takes about 5 seconds. For completely new problems it takes longer in tests, and there are a few more steps battling fear.

check your description to make sure that each part of it can be visualized as a concrete mental process and that there are no non-actionable abstract chunks;

Tricky; mental events are hard to visualize. I think "check what it excuses from" is the vaguest step (but it's not a crucial one, anyway), it could be done in more than one way.

come up with a teaching exercise which seems like it ought to cause those 5-second events to occur in people's minds;

Steps that need teaching:

  • pattern-match "weird obstacle" to "overly convenient excuse"
  • recognize this as wishful thinking (re: self-image)
  • accept the unpleasant hypotheses as possible (this looks litany-of-Gendlin-ish); "I do not want to be a lazy attention whore, but believing I am not won't help" - ick reaction to the process of rejecting the thought before reflecting on it in detail, flinch towards the painful thought
  • feel a jolt of determination ("Oh yeah? You think you can stop me?") and look for roundabout ways to reach your goal anyway, partially out of spite and competitiveness

The first is easiest to learn. Show people a lot of cases where people use convenient excuses. Hell, most people probably overfit here; look at all the disabled people told they're just lazy.

The second is crucial. It can be taught with a stern teacher; the student describes their life to the teacher, and whenever something looks like self-deception ("No, really, I'm not gay, I just have sex with men sometimes, and of course I don't look at women, that would be cheating on my wife") the teacher calls their bluff. (Is this what therapy does?) That demands a lot of time and trust.

For a more self-teaching route, maybe try to explain every one of your behaviors with a bad character trait, rather than circumstance or a good trait. Might feel too fake, though. At least, reflect upon behavior that looks bad, even if you have good private reasons for it. The point of this step is to notice the possibility you have a bad trait, not to test it.

The third step is to accept it once noticed. I would go with two sets of exercises. One set teaches general flinching towards pain, like talking to strangers and walking on the roofs of tall buildings and resisting delicious cake. The second teaches singlethink; an obvious method is to write down all thoughts and notice flinches away from painful thoughts and rationalizations, and face them squarely, both immediately (with a set topic) and over time. Also, recite the litanies, and freak yourself out with horror stories of self-deception. This may well take more than five seconds for beginners, but I've found it becomes near-instant with comparatively little training.

The fourth step is rather me-specific. You may prefer other attitudes like "I'm so clever!" or "Okay, I noticed, moving on" or "Other people have it so much worse, how dare I whine".

There are standard exercises to teach determination. Pick your favorite shounen character, and use him or her (okay, him) as a role model - what would Edward Elric do? Use motivators liberally, and have a laugh when you outdo them (as in my example; Courage Wolf thinks paralysis is an excuse).

comment by HopeFox · 2011-05-09T00:04:56.348Z · LW(p) · GW(p)

I think I've started to do this already for Disputing Definitions, as has my girlfriend, just from listening to me discussing that article without reading it herself. So that's a win for rationality right there.

To take an example that comes up in our household surprisingly often, I'll let the disputed definition be "steampunk". Statements of the form "X isn't really steampunk!" come up a lot on certain websites, and arguments over what does or doesn't count as steampunk can be pretty vicious. After reading "Disputing Definitions", though, I learnt how to classify those arguments as meaningless, and get to the real question, which is "Do I want this thing in my subculture / on my website?" I think the process by which I recognise these questions goes something like this:

1) Make the initial statement. "A hairpin made out of a clock hand isn't steampunk!"

2) Visualise, even briefly, every important element in what I've just said. Visualising a hairpin produces an image of a thing stuck through a woman's hair arrangement. Visualising a clock hand produces a curly, tapered object such as one might see on an antique clock. Visualising "steampunk" produces... no clearly defined mental image.

3) Notice that I am confused. Realise that I've just made a statement about something that I can't properly visualise, something that I don't think I've properly defined in my own brain, so how can I expect anyone else to have a proper definition at all, let alone one that agrees with mine? (Honestly, the fact that I keep writing "steampunk" in quotation marks should have been a clue already.)

4) Correct my mistake. "Hmm, now that I think about it, what I just said didn't actually mean anything. What's the point of this discussion again? Are we arguing about whether or not this picture should be on the website, or whether this person should be going to conventions, or what? If so, let's talk about that specifically. Let's not pretend that "steampunk" exists as a concrete category boundary in the phase space of fashion accessories, okay?"

Now, this process can fall down at step 2 when I, personally, have a very well-defined mental image of what a word means (such as "sound", which I will always take to mean "compression waves of the sort that a human or other animal might detect as auditory input, whether or not a listener is actually present"), but which other people might interpret differently. Here, the trick to step 2 is to imagine my listener's most obvious responses, based on my experience in discussing the topic previously (such as "But there's nobody to hear it, so by definition there's no sound!"). If I can imagine somebody saying this, without also being forced to imagine that the speaker is hopelessly misinformed, mentally deficient, or some other kind of irrational mutant, then what I'm saying must have some defect, and I should re-examine my words.

As for a training exercise, step 2 seems to be the one to train. The "rationalist taboo" technique seems pretty effective here. Discuss a topic with the student, and when they use a word that doesn't seem to mean anything, or means too many things at once, taboo it and get them to restate their point. Encourage the student to visualise everything they say, if only briefly, and explain that anything they can't visualise properly is suspect.

Alternatively, allow the student to get into a couple of disputes over definitions, let them experience firsthand how frustrating it is, then point them to this blog and show them that there's a solution. Their frustration will drive them to adopt a method of implementing the solution in their own discourse. Worked for me!

comment by atucker · 2011-05-08T03:58:26.225Z · LW(p) · GW(p)

"Don't be stopped by trivial inconveniences"

I used to do really stupid things and waste lots of time by always taking the path of least resistance. I'm not sure if other people have the same problem, but I might as well post.

An example of being stopped: "Hmm, I can't find any legitimate food stands around here. I guess I'll go eat at the ice cream stand right here then."

An example of overcoming: "Hmm, I can't find any legitimate food stands around here. That's weird. Lemme go to the information desk and ask where there is one."

What it feels like:

  1. You have a goal

  2. You realize that there are particular obstacles in your way

  3. You decide to take a suboptimal road as a result

What you do to prevent it:

Notice that the obstacle isn't that big of a deal, and figure out whether there are ways to circumvent it. If those ways are easy, do them. Basically, move something from unreachable to reachable.

Replies from: Cayenne
comment by Cayenne · 2011-05-08T08:08:56.337Z · LW(p) · GW(p)

Yak Shaving? http://sethgodin.typepad.com/seths_blog/2005/03/dont_shave_that.html

Edit - please disregard this post

Replies from: atucker
comment by atucker · 2011-05-08T15:15:25.896Z · LW(p) · GW(p)

I should have made it clear when a trivial inconvenience ceases to be trivial.

Basically, if you have an object level understanding of what's in your way, can think of a way to avoid the problem, and don't see any other steps involved, then you should go ahead and do it.

I personally am normed to give up waaay too easily compared to what I can do.

Replies from: Cayenne
comment by Cayenne · 2011-05-08T18:13:53.167Z · LW(p) · GW(p)

Oh, OK. I see the difference you mean.

Edit - please disregard this post

comment by sriku · 2011-05-12T13:51:07.220Z · LW(p) · GW(p)

I haven't seen meditative practices described much here, and I've known first hand how they can help with this level of introspection. So, for those who might wish to try, I'll briefly describe the plain instruction given to zen students. If you want to read in a bit more detail, the thin book "Zen in Plain English" is an excellent intro.

Sit in a quiet place, with lights dimmed, facing a wall, with your back straight (e.g., use a cushion for lower back support). Half-close your eyelids. Adjust your breathing by taking a few deep breaths, then fall back to natural, effortless breathing. Count your exhalations: inhale-1-inhale-2-inhale-3 ... 10, and cycle back to 1. If you lose count in the middle (yes, you will), just start again at 1. Try this for at least 5 minutes. You can go up to 30 minutes. That's all!

You can stop reading and try it.

When I began (don't laugh) I barely could count to 3. Here's how it went -

Inhale-1-inhale-2 ... what am I doing? What is this supposed to get me? Never stared at a wall before. Oh drats, back to 1.

Inhale-1-inhale-2... the plaster on the wall looks like a gorgon's face ... wonder what the others are thinking about .... Where was I? .. ok focus. 1..

Inhale-1-inhale-2... Damn is this what the famous sages did day in and day out? ... Oh shit lost it again. Am I that incapable of focusing? .. Ok back to 1

Inhale-1-inhale-2... Wait did I just chastise myself for something so trivial as counting my breath? .. (sigh) back to 1.

(Slowly the chatter comes down, and the noise that remains gets more real.)

Inhale-1-inhale-2 ... should I be taking deep breaths? Was the previous one long enough? ... Ok ok just sit and breathe ... Back to 1 ...

..... and so it goes. Just try it. The "back to 1" breakpoint works like a lens into your thought stream.
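(If you like instrumenting things, here is a toy practice logger -- my own sketch, not part of any zen instruction. Press Enter for each exhalation you counted; type anything else when you notice you've wandered, and it records how far each cycle got before the "back to 1".)

    # Toy breath-counting logger (illustrative sketch, not a zen tradition).
    # Press Enter for each counted exhalation; type any other text + Enter
    # when you notice you've lost count, which logs the cycle and resets.
    counts = []
    n = 0
    try:
        while True:
            line = input()
            if line == "":                   # a counted exhalation
                n = n + 1 if n < 10 else 1   # count to 10, then cycle back
            else:                            # "oh drats, back to 1"
                counts.append(n)
                n = 0
    except (EOFError, KeyboardInterrupt):    # Ctrl-D / Ctrl-C ends the session
        counts.append(n)
        print("how far you got each cycle:", counts)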

PS: apologies for the rough post. Just thought of writing this while on the bus.

comment by NancyLebovitz · 2011-05-07T07:54:29.178Z · LW(p) · GW(p)

why is it that once you try out being in a rationalist community you can't bear the thought of going back

Nitpick: It took me a bit to realize you meant "going back to being among non-rationalists" rather than "going back to the meeting".

Or you could start talking about feminism, in which case you can say pretty much anything and it's bound to offend someone. (Did that last sentence offend you? Pause and reflect!)

Unfortunately I recognize that as the bitter truth, so it's of no use for me for training purposes.

Here's something which might work as an indignation test-- could it be a good move for an FAI to set a limit on human intelligence?

If an AI can be built at all, then humanity has been shown to be an AI-creating species. As technology and the promulgation of human knowledge improve, it will become easier and easier to make AIs, and the risk of creating a UFAI that the FAI can't defeat goes up.

It will be easier to have people who can't make AIs than to try to control the tech and knowledge comprehensively enough to make sure there are no additional FOOMs.

I considered limiting initiative (imposing akrasia) rather than intelligence, but I think that would impact a wider range of human values.

Replies from: ArisKatsaris, moshez, Desrtopa, lessdazed
comment by ArisKatsaris · 2011-05-07T13:57:39.187Z · LW(p) · GW(p)

Nitpick: It took me a bit to realize you meant "going back to being among non-rationalists" rather than "going back to the meeting".

Same here. I suggest Eliezer edit it to make the intent more clear at first reading.

comment by moshez · 2011-10-20T21:59:56.270Z · LW(p) · GW(p)

That's funny -- I don't consider the FAI thing even remotely "offensive" (perhaps "debatable", in the sense of "I'm not sure how likely it is -- do you have any evidence?", but not "offensive"). I wrote a short story in which the FAI kept human beings humanly intelligent (though it's not explained in the story, in my background the FAI brought humans up to a fairly high minimum, but did not change the overall intelligence ceiling).

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-10-20T22:34:48.251Z · LW(p) · GW(p)

I don't have evidence. I'm just generalizing from one example that folks at LW are very fond of being intelligent, would probably like to be more intelligent, and would resent being knocked down to 120 IQ or whatever it would take to make creating another AI impossible.

comment by Desrtopa · 2011-05-09T20:40:11.169Z · LW(p) · GW(p)

If an AI can be built, it has been shown that humanity is an AI-creating species. As technology and the promulgation of human knowledge improves, it will become easier and easier to make AIs, and the risk of creating a UFAI that the FAI can't defeat goes up.

I would think that something as much more intelligent than humans as the FAI would be able to prevent humans from creating a UFAI that could defeat it without limiting their intelligence.

comment by lessdazed · 2011-05-09T20:05:47.214Z · LW(p) · GW(p)

Apologies for not being at all indignant, but can we generalize to say you have suggested that it could be good to limit something good because doing so is a sufficient solution to a specific problem?

I'd appreciate it if someone could show how endorsements of locally bad, merely sufficient solutions are (or aren't) all implicitly arguments from ignorance, confessing to not knowing how to achieve the same results without the negative local consequences.

In other words, sure, limiting locally good thing X could be good on balance if doing so has generally positive consequences (like assassinating Hitler on certain dates), but that really depends on there not being something better on balance in general, of which an interesting case is a solution that has the same positive consequences but fewer negative ones.

comment by BrandonReinhart · 2011-05-08T06:13:20.913Z · LW(p) · GW(p)

Grunching. (Responding to the exercise/challenge without reading other people's responses first.)

Letting go is important. A failure in letting go is to cling to the admission of belief in a thing which you have come not to believe, because the admission involves pain. An example of this failure: I suggest a solution to a pressing design problem. Through conversation, it becomes apparent to me that my suggested solution is unworkable or has undesirable side effects. I realize the suggestion is a failure, but defend it to protect my identity as an authority on the subject and to avoid embarrassment.

An example of success: I stop myself, admit that I have changed my mind, that the idea was in error, and then relinquish the belief.

A 5-second-level description:

  • I notice that my actual belief state and my professed belief state do not match. This is a trigger that signals that further conscious analysis is needed. What I believe (the suggestion will have undesirable side effects) and what I desire to profess (the suggestion is good) are in conflict.

  • I notice that I feel impending embarrassment or similar types of social pain. This is also a trigger. The feeling that a particular action may be painful is going to influence me to act in a way to avoid the pain. I may continue to defend a bad idea if I'm worried about pain from retreat.

  • Noticing these states triggers a feeling of caution or revulsion: I may act in a way opposed to what I believe merely to defend my ego and identity.

  • I take a moment to evaluate my internal belief state and what I desire to profess. I actively override my subconscious desire to evade pain with statements that follow from my actual internal belief. I say "I'm sorry. I appear to be wrong."

An exercise to cause these sub-5-second events:

I proposed a scenario to my wife wherein she was leading an important scientific project. She was known among her team as being an intelligent leader, and her team members looked up to her with admiration. A problem on the project was presented: without a solution the project could not move forward. I told my wife that she had had a customary flash of insight and began detailing the solution: a plan to resolve the problem and move the project forward.

Then, I told her that a young member of her team revealed new data about the problem. Her solution wouldn't work. Even worse, the young team member looked smug about the fact she had outsmarted the project lead. Then I asked "what do you do?"

My wife said she would admit her solution was wrong and then praise the young team member for finding a flaw. Then she said this was obviously the right thing to do and asked me what the point of posing the scenario was.

I'm not sure my scenario/exercise is very good. The conversation that followed the scenario was more informative for us than the scenario itself.

Replies from: Charlie_OConnor, Cayenne
comment by Charlie_OConnor · 2011-05-11T03:45:44.563Z · LW(p) · GW(p)

I think your scenario is good. I think the group dynamic and individual personality determine when this is easy and when it is difficult.

I have been in groups where it is easy to admit mistakes and move on; and I have been in groups where admitting a mistake feels like you are no longer part of the group.

So this can be realistic. I find taking the approach of admitting mistakes often helps others follow the same path, and leads to a better group dynamic.

comment by Cayenne · 2011-05-08T06:31:48.156Z · LW(p) · GW(p)

Don't cherish being right, instead cherish finding out that you're wrong. You learn when you're wrong.

Edit - please disregard this post

Replies from: wedrifid, Alicorn
comment by wedrifid · 2011-05-08T07:16:03.147Z · LW(p) · GW(p)

Don't cherish being right, instead cherish finding out that you're wrong. You learn when you're wrong.

I prefer to cherish being right enough that I appreciate finding out that I was wrong. It feels like more of a positive frame! (And the implicit snubbing to the typical "don't care about being right" injunction appeals.)

comment by Alicorn · 2011-05-08T06:39:25.330Z · LW(p) · GW(p)

And under this model, we like learning because...?

Replies from: katydee
comment by katydee · 2011-05-08T06:59:55.683Z · LW(p) · GW(p)

Well, it isn't being wrong that you cherish under Cayenne's model, just finding out about it so that you can correct it. To put it in other terms, being wrong is bad, but learning that you are wrong is good, because all of a sudden something gets shifted out of the "unknown unknown" category.

Replies from: Cayenne
comment by Cayenne · 2011-05-08T07:29:09.966Z · LW(p) · GW(p)

This is it exactly!

Edit - please disregard this post

comment by JohnH · 2011-05-07T06:49:47.964Z · LW(p) · GW(p)

Red is a color" and cuts themselves off and says "Red is what that stop sign and that fire engine have in common

They are both physical objects, usually containing some metal and of roughly the same height, that have the ability to stop traffic, thus are found on a road, and have the colors of silver and white and (presumably by the specification of "that") also red in common?

(by which the indignant demand that others agree with their indignation), which is unfortunately how I tended to write back when I was writing the original Less Wrong sequences

(sarcasm) Really? I hadn't noticed in the slightest... (/sarcasm)

What would be an exercise which develops that habit?

Talking with people that do not agree with you as though they were people. That is taking what they say seriously and trying to understand why they are saying what they say. Asking questions helps. Also, assume that they have reasons that seem rational to them for what they say or do, even if you disagree.

This also helps in actually reasoning with people. To show that something is irrational, you need to show that it is irrational within the system that they are using, not your own. Bashing someone over the head with one's reasoning in one's own system doesn't (usually) work (unless one believes there is an absolutely correct reasoning system that is universally verifiable, understandable, and acceptable to everyone (and the other person thinks likewise, or one happens to actually be right about that assumption)). Often, such reasonings, when translated into the other person's system, become utter nonsense. This is why materialists have such a hard time dealing with much of religion and platonic thought, and vice versa.

Assuming from the start that the thing one is trying to show is irrational (or doesn't exist) actually is irrational (or actually doesn't exist) is perhaps the worst thing to do when constructing an argument meant to convince people who believe otherwise. For example, see The Amazing Virgin Birth and try to think of it from a Catholic's perspective.

Replies from: HopeFox, Grognor
comment by HopeFox · 2011-05-11T13:53:20.430Z · LW(p) · GW(p)

Talking with people that do not agree with you as though they were people. That is taking what they say seriously and trying to understand why they are saying what they say. Asking questions helps. Also, assume that they have reasons that seem rational to them for what they say or do, even if you disagree.

I think this is a very important point. If we can avoid seeing our political enemies as evil mutants, then hopefully we can avoid seeing our conversational opponents as irrational mutants. Even after discounting the possibility that you, personally, might be mistaken in your beliefs or reasoning, don't assume that your opponent is hopelessly irrational. If you find yourself thinking, "How on earth can this person be so wrong!", then change that exclamation mark into a question mark and actually try to answer that question.

If the most likely failure mode in your opponent's thoughts can be traced back to a simple missing fact or one of the more tame biases, then supply the fact or explain the bias, and you might be able to make some headway.

If you trace the fault back to a fundamental belief - by which I mean one that can't be changed over the course of the conversation - then bring the conversation to that level as quickly as possible, point out the true level of your disagreement, and say something to the effect of, "Okay, I see your point, and I understand your reasoning, but I'm afraid we disagree fundamentally on the existence of God / the likelihood of the Singularity / the many-worlds interpretation of quantum mechanics / your support for the Parramatta Eels[1]. If you want to talk about that, I'm totally up for that, but there's no point discussing religion / cryonics / wavefunction collapse / high tackles until we've settled that high-level point."

There are a lot of very clever and otherwise quite rational people out there who have a few... unusual views on certain topics, and discounting them out of hand is cutting yourself off from their wisdom and experience, and denying them the chance to learn from you.

[1] Football isn't a religion. It's much more important than that.

comment by Grognor · 2012-03-25T00:36:38.857Z · LW(p) · GW(p)

(by which the indignant demand that others agree with their indignation), which is unfortunately how I tended to write back when I was writing the original Less Wrong sequences

(sarcasm) Really? I hadn't noticed in the slightest... (/sarcasm)

It's interesting that this (extremely rude) misinterpretation has sat here unnoticed for a year. The grammatical reasoning behind parentheses is that you can remove them from the sentence without changing its entire meaning. So Eliezer's original phrasing becomes,

[...]to teach the procedural habit, you don't go into the evolutionary psychology of politics or the game theory of punishing non-punishers [...], which is unfortunately how I tended to write back when I was writing the original Less Wrong sequences.

Which is not at all a thing to be scoffed at.

And no one noticed for a year, even though this is the first comment on the page.

Communication always fails.

Replies from: RobinZ
comment by RobinZ · 2012-10-12T20:04:08.991Z · LW(p) · GW(p)

That "Communication always fails" article made me very happy.

Also, the "English - the universal language on the Internet?" article which was linked from it had this bit:

But although Internet services themselves are, generally speaking, easy to learn and use, you will find yourself isolated on the Internet if you are not familiar with English. This means that knowledge or lack of knowledge of English is one of the most severe factors that cause polarization. Learning to use a new Internet service or user interface may take a few hours, a few days, or even weeks, but it takes years to learn a language so that you can use it in a fluent and self-confident manner. Of course, when you know some English, you can learn more just by using it on the Internet, but at least currently the general tendency among Internet users is to discourage people in their problems with the English language. Incorrect English causes a few flames much more probably than encouragement and friendly advice.

...which made me think of a five-second skill: when someone uses poor language or otherwise communicates strangely, instead of taking offense at their rudeness, try to figure out what they meant (interactively, if possible).

Replies from: thomblake
comment by thomblake · 2012-10-12T20:10:17.232Z · LW(p) · GW(p)

...which made me think of a five-second skill: when someone uses poor language or otherwise communicates strangely, instead of taking offense at their rudeness, try to figure out what they meant (interactively, if possible).

I usually also try to point out a more helpful phrasing - most non-native speakers who are trying to communicate in English seem appreciative.

Replies from: RobinZ
comment by RobinZ · 2012-10-12T20:19:24.079Z · LW(p) · GW(p)

Suggesting phrasings is a good way of interactively figuring out what they meant, and I recommend it for the purpose.

Suggesting phrasings to tell people how to say what they mean, on the other hand, bears a risk of being annoying and/or wrong. I think an attitude of seeking clarification is more likely to be successful.

(footnote: I have almost no relevant firsthand knowledge.)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-09T05:02:59.095Z · LW(p) · GW(p)

The word "moralize" has now been eliminated from the blog post. Apparently putting a big warning sign up saying "Don't argue about how to verbally define this problem behavior, it won't be fun for anyone and it won't get us any closer to having a relaxed rationalist community where people worry less about stepping in potholes" wasn't enough.

Replies from: Eugine_Nier, None, wedrifid
comment by Eugine_Nier · 2011-05-09T05:29:29.525Z · LW(p) · GW(p)

Apparently putting a big warning sign up saying "Don't argue about how to verbally define this problem behavior, it won't be fun for anyone and it won't get us any closer to having a relaxed rationalist community where people worry less about stepping in potholes" wasn't enough.

I would just like to point out the irony of telling people you're training to be rationalists not to reason about a concept.

Edit: A better way to express what I find ironic about Eliezer's statement is that at least half the people here started their journey into rationalism by ignoring the big bright warning sign saying "Don't question God!" This fact is useful to keep in mind when predicting their reactions to big bright warning signs.

Replies from: JamesAndrix, lessdazed, wedrifid, rhollerith_dot_com
comment by JamesAndrix · 2011-05-09T15:42:20.672Z · LW(p) · GW(p)

Rationalists should also strive to be precise, but you should not try to express precisely what time it was that you stopped beating your wife.

Much of rationality is choosing what to think about. We've seen this before in the form of righting a wrong question, correcting logical fallacies (as above), favoring one method of reasoning about probabilities over another, and culling non-productive search paths (which might be the most general form here).

The proper meta-rule is not 'jump past warning signs'. I'm not yet ready to propose a good phrasing of the proper rule.

Replies from: lessdazed
comment by lessdazed · 2011-05-09T17:12:12.702Z · LW(p) · GW(p)

I thoroughly endorse this comment.

Just a note relevant for people involved in the discussion on this page regarding upvoting and downvoting. This is a sort of situation in which I might downvote lessdazed's comment below, simply to increase local contrast between the vote totals of responses to the parent (so long as I did not push the score of the below comment into the negatives). This is true even though I (happen to ;-)) agree with the below comment.

Downvoting is not a personal thing, and if you take it personally, it is probably because it happens to be so for you and you are projecting your voting behavior onto others. In all discussions of voting I've seen, people have different criteria.

Apologies for metaness and thread hijack.

comment by lessdazed · 2011-05-09T15:00:49.976Z · LW(p) · GW(p)

at least half the people here started their journey into rationalism by ignoring the big bright warning sign saying "Don't question God!"

Your edit is perfectly sufficient and I have no criticisms of it. However, the point can be expanded upon such that it will seem different and it may appear I am disagreeing.

The metaphorical signs that exist invoke the idea "Don't question God!", but in the West, that's not too close to what they actually say. In religious communities at least moderately touched by the Enlightenment, enough distaste for signs reading "Don't question God!" has been absorbed that such signs would be disrespected as low status.

This is something a member of a moderate strain of fundamentalism might pride himself or herself on, as a factor that distinguishes him or her from literalists, perhaps as an important part of his or her identity.

To make someone think "Don't question God (this time)!", the sign might say something like "You don't know what the consequences would have been had those people lived. God does, so rely on his judgment."

The "this time" will happen to be every time, but the universality of it won't be derived from so general a rule; it will be a contingent truth but not a logical one exactly.

comment by wedrifid · 2011-05-09T06:33:28.038Z · LW(p) · GW(p)

I would just like to point out the irony of telling people you're training to be rationalists not to reason about a concept.

Not quite ironic. More just arbitrary.

comment by RHollerith (rhollerith_dot_com) · 2011-05-09T06:04:12.079Z · LW(p) · GW(p)

It's ironic only to those who have different ideas about what it means to reason. Reason need not be applied indiscriminately. (And it's not equivalent to arguing.)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-05-09T06:46:35.356Z · LW(p) · GW(p)

Reason need not be applied indiscriminately.

This is a very interesting statement (with which I agree). I would also like to see your explanation for when it's inappropriate to apply reason; I'll post mine afterwards.

(And it's not equivalent to arguing.)

I don't quite see the distinction you're trying to make. Especially in this context since the posters arguing about morality were certainly trying to reason about it and not just arguing for the sake of arguing.

Replies from: rhollerith_dot_com, rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2011-05-09T19:52:35.578Z · LW(p) · GW(p)

I (and probably the 2 who upvoted me) misunderstood your use of 'ironic'. I now see that you probably meant it in the sense of 'superficially paradoxical or false, but on closer inspection, interesting'. (I thought you meant it more in the sense of 'incongruous, and consequently suspect'. I.e., I thought you were arguing that it is probably bad pedagogy to advise an aspiring rationalist not to reason about something.)

comment by RHollerith (rhollerith_dot_com) · 2011-05-09T07:15:57.573Z · LW(p) · GW(p)

would also like to see your explanation for when it's inappropriate to apply reason

It is inappropriate -- well, let us say it is a mistake in reasoning -- to apply reason to something whenever it is obvious that the time and mental energy are better applied to something else. My point is that I do not see the irony in Eliezer's advising his readers that some particular issue is not worth applying reason to.

(And it's not equivalent to arguing.)

I don't quite see the distinction you're trying to make.

Can I just declare my statement in parens above to be withdrawn? :)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-05-09T07:47:51.933Z · LW(p) · GW(p)

would also like to see your explanation for when it's inappropriate to apply reason

It is inappropriate -- well, let us say it is a mistake in reasoning -- to apply reason to something whenever it is obvious that the time and mental energy are better applied to something else.

Interesting, I had in mind something much stronger. For example, if you attempt to apply too much reasoning to a Schelling point, you'll discover that the Schelling point's location was ultimately arbitrary and greatly weaken it in the process.

Another related example is that you shouldn't attempt to (re)create hermeneutic truths/traditions from first principles. You won't be able to create a system that will work in practice, but might falsely convince yourself that you have.

Replies from: TimFreeman, rhollerith_dot_com, byrnema
comment by TimFreeman · 2011-05-09T20:04:48.227Z · LW(p) · GW(p)

...you shouldn't attempt to (re)create hermeneutic truths/traditions from first principles. You won't be able to create a system that will work in practice, but might falsely convince yourself that you have.

I didn't see any mentions of examples in Szabo's paper of traditions that have a high instrumental value but can't be derived from first principles, although he does seem to be saying that they exist. The best example that comes to mind is Jews and Moslems not eating pork, but I eat pork and my family has on both sides for multiple generations, and we haven't curled up and died yet, so the present instrumental value of that tradition is unclear to me. Do you have any examples in mind?

I can see that the wellbeing of the population that obeys the tradition would contribute to it doing well in cultural evolution, but it's not at all clear to me that it's a large enough factor that we're unlikely to come out ahead by discarding the tradition and designing a new one.

I suppose the claim that a tradition is one of these truths that one cannot usefully rederive from first principles is testable. Go form an intentional community that, say, has an 8 day week, and if they're still doing well physically and financially in a generation or two, then the 7 day week apparently wasn't such a tradition.

ETA: I suppose the organizational structure of a church is such a tradition.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-05-10T00:56:02.701Z · LW(p) · GW(p)

Well Szabo's main examples, which he briefly alludes to in this essay, are legal, economic and political systems. He discusses them at length in his other writings.

comment by RHollerith (rhollerith_dot_com) · 2011-05-09T19:49:59.578Z · LW(p) · GW(p)

I agree with your 2 examples.

comment by byrnema · 2011-05-09T15:18:06.767Z · LW(p) · GW(p)

You've articulated a couple of ideas that have been lurking in the collective consciousness here on Less Wrong, but which, as far as I know, haven't been made definite: why some topics shouldn't have too much light directed at them -- ironically, as you originally note, in the interest of reason. It's been a very vague concern, and precisely because it hasn't been articulated it persists more strongly than it might otherwise. I would encourage development of these points (not specifically by you, or specifically in this thread, but by anyone, wherever).

comment by [deleted] · 2011-05-09T07:03:36.051Z · LW(p) · GW(p)

"Moralizing is the mind-killer"?

Nah, just kidding. Making a joke.

Replies from: wedrifid
comment by wedrifid · 2011-05-09T09:45:18.949Z · LW(p) · GW(p)

"Moralizing is the mind-killer"?

Nah, just kidding. Making a joke.

No, that's more or less right. Which is unsurprising since moralizing is just politics.

comment by wedrifid · 2011-05-09T06:15:12.896Z · LW(p) · GW(p)

The word "moralize" has now been eliminated from the blog post. Apparently putting a big warning sign up saying "Don't argue about how to verbally define this problem behavior, it won't be fun for anyone and it won't get us any closer to having a relaxed rationalist community where people worry less about stepping in potholes" wasn't enough.

In case it isn't clear let me say that my reply continues to apply to the current version. I refer to the underlying concept described, not the word so consider my reply to be edited to match.

comment by BenAlbahari · 2011-05-07T08:13:29.803Z · LW(p) · GW(p)

don't go into the evolutionary psychology of politics or the game theory of punishing non-punishers

OK, so you're saying that to change someone's mind, identify mental behaviors that are "world view building blocks", and then to instill these behaviors in others:

...come up with exercises which, if people go through them, causes them to experience the 5-second events

Such as:

...to feel the temptation to moralize, and to make the choice not to moralize, and to associate alternative procedural patterns such as pausing, reflecting...

Or:

...to feel the temptation to doubt, and to make the choice not to doubt, and to associate alternative procedural patterns such as pausing, prayer...

The 5-second method is sufficiently general to coax someone into believing any world view, not just a rationalist one.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-07T09:15:20.043Z · LW(p) · GW(p)

The 5-second method is sufficiently general to coax someone into believing any world view, not just a rationalist one.

Um, yes. This is supposed to increase your general ability to teach a human to do anything, good or bad. In much the same way, having lots of electricity increases your general ability to do anything that requires electricity, good or bad. This does not make electrical generation a Dark Art.

Replies from: Eliezer_Yudkowsky, BenAlbahari
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-07T19:26:20.116Z · LW(p) · GW(p)

Actually, it occurs to me that this can be generalized. We might feel morally worried about a technique for initial epistemic persuasion which can operate equally to convince people of true statements or false statements, which is being used without the person's knowledge and before they've come to an initial decision about the worth of the idea (i.e., it's not like they already believe it and you're trying to help them alieve it). This is what some people (not me, please note) termed the Dark Arts.

Instrumental techniques which are useful for accomplishing anything, good or bad, depending on the user's utility function? Those are fine. Those are great. Nothing Dark about them.

Replies from: Cyan
comment by Cyan · 2011-05-07T19:38:55.512Z · LW(p) · GW(p)

I think the usual statement of this idea is something like, "Tool X can be used for good or evil."

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-08T01:11:54.425Z · LW(p) · GW(p)

Most tools can be. Tools with moral dimensions are rare.

comment by BenAlbahari · 2011-05-07T10:46:15.281Z · LW(p) · GW(p)

Good to see you've morally condoned the 5 second method.

rationalists don't moralize.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-05-07T10:54:37.920Z · LW(p) · GW(p)

Good to see you've morally condoned the 5 second method.

It looked to me more like he was discussing the consequences.

Replies from: wedrifid
comment by wedrifid · 2011-05-07T11:16:18.529Z · LW(p) · GW(p)

Good to see you've morally condoned the 5 second method.

It looked to me more like he was discussing the consequences.

It was a bit of both, at least with some elements under discussion. But the moralizing was relatively mild, conveyed mostly by connotation rather than stated overtly. Not at a level I would comment on specifically.

comment by [deleted] · 2013-09-16T18:00:10.979Z · LW(p) · GW(p)

Replies from: adele-lopez-1
comment by Adele Lopez (adele-lopez-1) · 2022-03-20T18:56:49.812Z · LW(p) · GW(p)

Abstracting being a useful move isn't in dispute here. The problem is that it's a path of least resistance, which means that you're liable to choose it without thinking, even when it isn't the best move. Giving yourself the space to notice that choice allows you to make a better choice when there is one.

I've found that even when doing category theory, my lack of this skill has often made things much harder than they needed to be. For example, when I tried understanding the Yoneda lemma as something like "objects are equivalent to the morphisms into them", it never quite clicked, and worse, I didn't even notice that I was missing something important (the difference between the Yoneda lemma and the Yoneda embedding). A clearer understanding only came when I tried writing an expository proof and tried understanding the poset version, both of which were steps in the concrete direction.
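To spell out the distinction alluded to here (these are the standard statements, added for reference rather than taken from the comment): for a presheaf F from C^op to Set,

    Nat(Hom_C(-, A), F)  ≅  F(A)                          (Yoneda lemma)

    Hom_C(A, B)  ≅  Nat(Hom_C(-, A), Hom_C(-, B))         (the embedding A ↦ Hom_C(-, A) is full and faithful)

The embedding reading, "objects are determined up to isomorphism by the morphisms into them", is the special case F = Hom_C(-, B); the lemma is the stronger statement, holding for an arbitrary presheaf F.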

comment by MrMind · 2011-05-13T13:44:50.699Z · LW(p) · GW(p)

I wanted to do the 5-second decomposition on what I think is one of the most important qualities of a rationalist: s/he is able to say "oops!", but I found that it's probably a rationalist primitive. Anyway, here's my attempt:

  • notice the feeling of being wrong, or of having something screwed up, etc
  • don't deny it, stay with the feeling, let it be present in your mind
  • notice that you're still alive, that just because you admit it, nothing changed in the world: you already screwed up, you already experienced the consequences of your failure
  • say oops!
  • get on with your life (correct the mistake / revise your belief / etc)
Replies from: MrMind
comment by MrMind · 2011-05-13T13:53:34.683Z · LW(p) · GW(p)

It also seems to me that a general structure for the application of rationality follows a path like this:

  • notice a trigger: usually automatically activated bias has an unpleasant feeling attached to it
  • insert a space of rest so that the bias doesn't get automatically triggered
  • execute instead the rational behaviour
Replies from: loqi
comment by loqi · 2011-05-14T15:58:23.249Z · LW(p) · GW(p)

I really like this breakdown. I do think the first item can be generalized:

usually automatically activated bias has a feeling attached to it

since positive-affect feelings like righteousness are also useful hooks.

Replies from: MrMind
comment by MrMind · 2011-05-14T23:13:37.219Z · LW(p) · GW(p)

You're right; they don't even need to be strong emotions: consider positive-affect-induced biases building incrementally over time, as in affective death spirals.

comment by roland · 2011-05-08T04:19:45.645Z · LW(p) · GW(p)

I know that I'll probably be downvoted again, but nevertheless.

Which in practice, makes a really huge difference in how much rationalists can relax when they are around fellow rationalists. It's the difference between having to carefully tiptoe through a minefield and being free to run and dance, knowing that even if you make a mistake, it won't socially kill you.

Sorry, but I don't feel that I have this freedom on LW. And I feel people moralize here especially using the downvote function.

To give a concrete example from Eliezer himself:

http://lesswrong.com/lw/1ww/undiscriminating_skepticism/

I don't believe there were explosives planted in the World Trade Center. ... I believe that all these beliefs are not only wrong but visibly insane.

I politely asked for clarification only to be not only ignored but also downvoted to -4:

Eliezer, could you explain how you arrived at the conclusion that this particular belief is visibly insane?

http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1t7r

On another comment I presented evidence to the contrary(a video interview) to be downvoted to -15: http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1r5v

So when just asking the most basic rationality question (why do you believe what you believe) and presenting evidence that contradicts a point is downvoted, I don't feel that LW is about rationality as much as others like to believe. And I also feel that basic elements of politeness are missing and yes, I feel like I have to walk on eggs.

Replies from: lessdazed, Cayenne, wedrifid, LHJablonski, jsalvatier
comment by lessdazed · 2011-05-08T06:05:19.009Z · LW(p) · GW(p)

A point about counteracting evidence: if I believe I have a weighted six sided die that yields a roll of "one" one out of every ten rolls rather than one out of every six rolls as a fair die would, a single roll yielding a "one" is evidence against my theory. In a trial in which I repeatedly roll the die, I should expect to see many rolls of "one", even though each "one" is more likely under the theory the die is fair than it is under the theory the die is weighted against rolls of "one".
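To make the arithmetic concrete, a minimal sketch in Python (the two probabilities are from the die example above; the simulation details are mine):

    import math, random

    P_ONE_WEIGHTED = 0.10   # weighted die: "one" on 10% of rolls
    P_ONE_FAIR = 1 / 6      # fair die: "one" on about 16.7% of rolls

    def log_likelihood_ratio(roll_is_one):
        """log of P(roll | weighted) / P(roll | fair)."""
        if roll_is_one:
            return math.log(P_ONE_WEIGHTED / P_ONE_FAIR)          # negative: favors fair
        return math.log((1 - P_ONE_WEIGHTED) / (1 - P_ONE_FAIR))  # positive: favors weighted

    random.seed(0)
    total = 0.0
    for _ in range(100):    # 100 rolls of a genuinely weighted die
        total += log_likelihood_ratio(random.random() < P_ONE_WEIGHTED)

    # Each "one" pushed the total down, yet the total typically ends up
    # positive: the evidence as a whole favors the weighted-die theory.
    print(total)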

You really didn't present evidence that contradicted anything; the most this sort of testimony could be is, as you said, "evidence to the contrary", but not, as you also said, something that "contradicts". One thing to look out for is idiosyncratic word usage. Apparently, I interpret the word "contradict" to be much stronger than you do. It would be great to find out how others interpret it; there are all sorts of possibilities.

When I consider whether or not the things I am directed to are good evidence of a conspiracy behind the destruction of the World Trade Center, I discount apparent evidence indicating a conspiracy against what I would expect to see if there were no actual conspiracy.

As an analogy: if I hear a music album and find 75% of the songs are about troubled relationships or love, I don't conclude the songwriter's life is or was particularly troubled, because that's what gets sung about by people of fairly normal background, even though much of their lives are spent sleeping, eating, standing in line, etc. Only when every song sounds like the same complaint do I conclude something is uniquely wrong with them. This is somewhat counterintuitive; one might have thought 75% love/troubled songs indicated unique problems, but it's not so.

Similarly, the conspiracy stuff surrounding the Twin Towers has been underwhelming to me. What I see is exactly what I would expect were the towers collapsed by Al-Qaeda-hijacked planes. This absolutely includes what you presented: an interview after the fact by someone saying that in the confusion he heard sounds that sounded like explosions beneath him. Seeing this evidence is like rolling a "one" or hearing a love song on an album: totally expected according to the theories that the die lands on "one" 10% of the time, that the singer is normal, or that the towers were brought down by the planes.

Replies from: roland
comment by roland · 2011-05-08T19:08:55.212Z · LW(p) · GW(p)

I concede the point about language.

Discounting evidence is dangerous considering we are all biased, and if you dismiss any evidence to the contrary you have to answer: what evidence would be strong enough to change your mind?

But my problem is not with people discounting evidence (everyone is free to close their eyes) but with the outright downvoting of evidence that goes against their beliefs, which is social punishment.

Replies from: None, lessdazed
comment by [deleted] · 2011-05-08T19:20:59.948Z · LW(p) · GW(p)

There was a time, many years ago, when I paid close attention to the arguments of the "truthers", and came to the conclusion that they were wrong. What you're doing now is bringing up the same old arguments with no obviously new evidence. I'm not going to give you my full attention, not because I want to close my eyes to the truth, but because I already looked at the evidence and already, in Bayesian terminology, updated my priors. Revisiting old evidence and arguments as if they were fresh evidence would arguably be an irrational thing for me to do, because it would be treating one piece of evidence as if it were two, updating twice on the same, rehashed, points that I've already considered.
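The "updating twice on the same evidence" point is easy to see numerically; a toy sketch, with invented numbers:

    # Double-counting evidence: applying the same likelihood ratio twice
    # squares it, overstating the strength of the update.
    prior_odds = 1.0                      # even odds, purely illustrative
    lr = 5.0                              # evidence 5x likelier under the hypothesis
    updated_once = prior_odds * lr        # 5.0  -- the correct posterior odds
    updated_twice = prior_odds * lr * lr  # 25.0 -- same evidence counted twice
    print(updated_once, updated_twice)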

I did not downvote you, because I have a soft spot for that sort of thing, but if other people have already, long ago, considered the best arguments and evidence, then at this point you really are wasting their time. It's not that they're rejecting evidence, I suspect, but that they're rejecting having their time being taken up with old evidence that they've already taken into account.

comment by lessdazed · 2011-05-09T00:36:51.762Z · LW(p) · GW(p)

if you dismiss any evidence to the contrary you have to answer: what evidence would be strong enough to change your mind?

As a separate point, I have always argued against the validity of a certain argument against theists, that they are obligated to say what would constitute evidence sufficient to change their minds. The demand is an argument from ignorance. Nonetheless, being able to articulate what sufficiently contradictory evidence would be is a point in an arguers favor, even though the inability to do so is not fatal.

In this case, I'd say the question is somewhat ill-formed for two reasons. First, many entirely different things would be sufficient evidence to get me to change my mind, but if other things were also the case, they would no longer be sufficient. Certain statements by the CIA might be sufficient, but not if there were also other statements from the FBI.

Second, there are many sorts of mind changing possible. The more sane conspiracy theorists simply say the official account is not credible. The others articulate theories that, even granting all of their premises, are still less likely than the official story. A related point is what it means to be wrong according to different logics. If I believe in Coca-Cola's version of Santa Claus and also believe that Kobe Bryant is left-handed, in one sense there is no "Kobe Bryant" in the same way that there is no "Santa Claus". In a more useful sense, we say "Kobe Bryant really exists, but is right-handed, and Santa Claus does not exist." This is so even though there is nothing preventing us from saying "Santa Claus is really young, not old, tall and thin, not fat, has no beard and shaves his head, is black, not white, and plays shooting guard for the Lakers under the alias 'Kobe Bryant', and does nothing unusual on Christmas." Whether you say things I learn falsify the official story or merely modify it is a matter of semantics, but certain elements (like the involvement of Al-Qaeda) are more central to it than others. These elements are better established by existing evidence and would take correspondingly more evidence to dislodge.

So the answer to "what evidence would be strong enough to change your mind?" varies a lot depending on exactly what is being asked.

But my problem is not with people discounting evidence(everyone is free to close their eyes) but outright downvoting evidence that goes against their beliefs is social punishment.

I think it is notable and important that the different but similar things you said got different responses. One was downvoted unto automatic hiding (the threshold is set to hide at -3 or less (more negative) by default). One was downvoted much more. We can speculate as to why, but it's important to acknowledge the different community responses to different behavior (I won't prejudge it by saying "different going against social beliefs").

Onto speculation: one problem with the video as evidence for explosions was a certain kind of jumping to conclusions. The guy said he heard explosions, but this is skipping a step. I could just as well say I heard people in a box, when I had actually heard sound waves emitted by a speaker attached to a computer. The guy's insistence that explosions were causing the sound is very strange, even granted that he had heard explosions before and the sounds he heard may have sounded exactly like those. Likewise for his claim they were coming from beneath him, considering what was going on.

Similarly, your assumption about the reason for your downvotes is certainly skipping steps. Most noticeable is how you don't distinguish what you are being socially punished for among your several downvoted posts, but the response to them was so different.

It's not so simple as that you were "go[ing] against their beliefs". Not everyone uses the voting function identically, but assuming many others use it as I do, I can offer an analysis. I use it to push things to where I think they should be, rather than as an expression that I was glad I read a post (in hopes others will do the same, such that votes reflect what individuals were glad to have read; I believe something like this was the intent of the system's creators). I see -4 and -15 as not inappropriate final marks for your posts, and so didn't weigh in on them through the voting mechanism.

The problem with your first post was that it unfairly pushed the work of argument onto Eliezer. This is the same problem as with the poll sent out by the fundamentalists to philosophers a few months ago (I couldn't find it, but it included questions such as "Do you agree: life begins at conception?" and "Do you agree: humans are unique and unlike the other animals?"). The problem with such a question is that the work and number of words needed to adequately disentangle and answer it exceed those required to ask it. Your question also didn't start from anywhere; you would have gotten a better response if you had said you thought the beliefs either actually right or wrong, but not insane.

The tl;dr is that it was a passive-aggressive question. A small sin, for which it gets a -4, as implicitly the one voicing it disagrees with the post and is going against the communal norm; how important that factor is, I can't know.

The video evidence was a larger sin, as it was basically a waste of time to listen to it. First, the guy emphasized that he certainly heard explosions beneath him, as if by disbelieving that one would be calling him a liar. Like I said above, this is the same thing ghost observers do: I don't necessarily disbelieve that you heard what you heard and saw what you saw; I'm just unsure about the original cause of that noise, especially considering how humans hear what they hear based on what they are familiar with hearing and expect to hear (the multiple-drafts model of cognition).

What's more, when the advocate of a position has an opportunity to direct someone to evidence supporting his or her position and must elect to give them one piece of evidence in an attempt to spread the belief, I expect them to go with their best argument, which in turn ought to sound pretty impressive, as even incorrect positions often have one compelling argument in their favor.

If I had come across the video you showed as the first video I saw in the course of randomly watching accounts of 9/11 survivors (if a random sample of survivors were filmed and archived), it would perhaps be somewhat suspicious. As a video cherry-picked by someone trying to justify skepticism, it's catastrophically weak, shockingly so actually. I expect cherry-picked evidence in favor of any conspiracy to at least induce a physiological response, e.g. OMG Bush has reptilian eyes he is a reptile he is a lizard person, oh wait that's stupid, it's an artifact of light being shined on dozens of presidents millions of times and this video has been cherry-picked.

comment by Cayenne · 2011-05-08T06:04:15.091Z · LW(p) · GW(p)

I know that I'll probably be downvoted again, but nevertheless.

This is precisely the wrong way to start off a post like this, a very passive-aggressive tone.

Sorry, but I don't feel that I have this freedom on LW. And I feel people moralize here especially using the downvote function.

Are you certain that it isn't simply the tone of your posts?

So when just asking the most basic rationality question (why do you believe what you believe) and presenting evidence that contradicts a point is downvoted, I don't feel that LW is about rationality as much as others like to believe. And I also feel that basic elements of politeness are missing and yes, I feel like I have to walk on eggs.

Also bitterness. I think that you would benefit a lot by rephrasing your questions in a less confrontational manner.

Eliezer, could you explain how you arrived at the conclusion that this particular belief is visibly insane?

could have become

Eliezer, I don't understand how you arrived at this conclusion, could you explain the reasoning behind it?

Soften up your posts.

I never downvote, as I think it's counterproductive. Others don't agree, but that is their right. Taking it personally is not the right approach.

Edit - please disregard this post

Replies from: roland, roland
comment by roland · 2011-06-12T20:40:36.048Z · LW(p) · GW(p)

Eliezer, could you explain how you arrived at the conclusion that this particular belief is visibly insane?

could have become

Eliezer, I don't understand how you arrived at this conclusion, could you explain the reasoning behind it?

Done. And done.

Edit - please disregard this post

Sorry, I can't unread it.

comment by roland · 2011-05-08T19:12:32.352Z · LW(p) · GW(p)

I would welcome factual criticisms of my posts instead of just attacking the "tone" you read in them.

Right, the posts could be softened up, but isn't it funny that you don't direct the same criticism to the ones who called a certain point of view insane? How confrontational is that?

Replies from: thomblake, Cayenne
comment by thomblake · 2011-05-09T17:24:12.998Z · LW(p) · GW(p)

I would welcome factual criticisms of my posts instead of just attacking the "tone" you read in them.

Characterizing helpful criticism as "attacking" is also not good.

comment by Cayenne · 2011-05-08T19:52:23.443Z · LW(p) · GW(p)

I'm limited in my scope, I'm not going to follow links and criticize every single post. I happened to be reading yours, and thought that I might be able to help you with tone... others are probably better at dealing with actual content. If you would prefer me to not try to help you, let me know and I'll focus my efforts elsewhere.

Edit - please disregard this post

comment by wedrifid · 2011-05-08T07:29:30.259Z · LW(p) · GW(p)

I upvoted your comment prospectively. That is, it'll be worth an upvote when you edit out the passive-aggressive intro, and I'm being optimistic. :)

Sorry, but I don't feel that I have this freedom on LW. And I feel people moralize here especially using the downvote function.

We do. Not all the downvoting is moralizing but a significant subset is. And not all the moralizing is undesirable to me, even though a significant subset is.

For what it is worth, believing the WTC was loaded with explosives really is insane.

Replies from: roland, roland
comment by roland · 2011-06-12T20:46:22.270Z · LW(p) · GW(p)

Following a suggestion from Cayenne:

For what it is worth, believing the WTC was loaded with explosives really is insane.

wedrifid, I don't understand how you arrived at this conclusion, could you explain the reasoning behind it?

comment by roland · 2011-05-08T19:02:34.853Z · LW(p) · GW(p)

For what it is worth, believing the WTC was loaded with explosives really is insane.

How did you arrive at this conclusion? Did you really think it through or is it just a knee-jerk reaction?

Replies from: WrongBot, Mitchell_Porter
comment by WrongBot · 2011-05-11T02:41:06.223Z · LW(p) · GW(p)
  • The WTC being loaded with explosives is a much more complex explanation than the orthodox one - penalty.
  • The explosives theory involves a conspiracy - penalty.
  • The explosives theory can be and is used to score political points - penalty.
  • Explosive-theory advocates seem to prefer videos to text, which raises the time cost I have to pay to investigate it - penalty.
  • The explosives theory doesn't make any goddamn sense - huge penalty.
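(Reading the stacked penalties as rough log-odds adjustments gives a toy sketch like the following. Every number is invented; the time-cost item is left out because it bears on whether to investigate rather than on what is true; and correlated penalties, such as complexity and conspiracy, risk being counted twice.)

    # Invented base-10 log-odds penalties ("orders of magnitude against"):
    penalties = {
        "extra complexity": -2.0,
        "requires a conspiracy": -3.0,
        "politically convenient story": -0.5,
    }
    prior_log_odds = 0.0                                  # start at even odds
    posterior_log_odds = prior_log_odds + sum(penalties.values())
    posterior_prob = 1 / (1 + 10 ** -posterior_log_odds)  # roughly 3e-6
    print(posterior_log_odds, posterior_prob)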
Replies from: Dorikka, bgaesop, simplyeric
comment by Dorikka · 2011-05-11T03:36:06.326Z · LW(p) · GW(p)

Labeling these as 1-5 from top to bottom, 2 contributes to 1 (you may be double-penalizing if you're counting them distinctly), and 4 (time cost to investigate) doesn't seem like a valid reason to discount a hypothesis.

I don't know whether I disagree with your conclusion -- I haven't bothered to read arguments about the topic and probably will continue not to, because the expected value of such data is pretty low for me -- I just wanted to point out possible errors in your process.

Replies from: WrongBot
comment by WrongBot · 2011-05-11T05:14:30.007Z · LW(p) · GW(p)

2 contributes to 1, yes, but conspiracy hypotheses are flawed for reasons other than their complexity.

I agree with you on 4: it isn't a reason to discount the hypothesis, but it is a reason to avoid seeking further information on the topic (high opportunity cost).

On reflection, I now regret engaging on this topic. My apologies for time wasted.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2011-09-05T11:30:24.494Z · LW(p) · GW(p)

On reflection, I now regret engaging on this topic. My apologies for time wasted.

Please don't. Your comment was an example that it is possible to reply politely and rationally even in a discussion on a topic that you (presumably) consider irrational. That is a nice skill to have.

comment by bgaesop · 2011-07-28T17:30:41.122Z · LW(p) · GW(p)

The explosives theory involves a conspiracy

So does the traditional explanation.

The explosives theory can be and is used to score political points

So is the traditional explanation. War in Iraq, anyone?

Explosive-theory advocates seem to prefer videos to text, which raises the time cost I have to pay to investigate it

This is a very silly reason to reject an idea.

Replies from: shokwave, Vladimir_Nesov
comment by shokwave · 2011-07-29T05:56:54.729Z · LW(p) · GW(p)

This is a very silly reason to reject an idea.

Not always. Time-consuming investigations have a disutility value - if the prior for theories in this reference class multiplied by the utility of finding this idea to be true does not overcome that disutility, you ought not investigate. That is a very serious reason to reject an idea. If you do not give some weight to time costs of investigation, I have a reductio ad absurdum here that will monopolise your free time forever.
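
To make that concrete, here's a minimal sketch of the test I mean (the numbers and names are hypothetical, and it ignores whatever value you'd get from finding the idea false):

```python
# Minimal sketch of the cost-benefit test above. Hypothetical numbers;
# a fuller treatment would also count the value of learning a theory is false.
def worth_investigating(prior, value_if_true, investigation_cost):
    """Investigate only if the expected gain exceeds the time cost."""
    return prior * value_if_true > investigation_cost

# A theory from a low-prior reference class with a modest payoff does not
# justify hours of watching videos:
print(worth_investigating(prior=1e-6, value_if_true=1000.0, investigation_cost=10.0))
# False
```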

Replies from: bgaesop
comment by bgaesop · 2011-08-09T22:25:22.674Z · LW(p) · GW(p)

That's true. But that's a reason to not investigate and not read this thread and not think about the subject at all, not a reason to reply in this thread arguing that the idea is unlikely, much less to simply declare it unlikely.

If your reaction to reading about the truther idea is "the value of knowing the facts about this issue, whatever they are, is rather low, and it would be time consuming to learn them, so I don't care" that is A-OK. If your reaction is "the value of knowing the facts about this issue, whatever they are, is rather low, and it would be time consuming to learn them, therefore I am not going to update whatsoever on this issue and will ignore the evidence I know is available and yet still have a strong, high-confidence belief on it" then that seems kind of silly to me.

Does that make sense? Do you agree, or not? This is not an issue I feel very strongly about, but value of information is something I've been thinking about more recently and so I think that hearing others' opinions on it would be useful. At the very least, worth the time to read them :) Amusing link, by the way.

Replies from: shokwave, Vladimir_Nesov
comment by shokwave · 2011-08-10T00:55:46.240Z · LW(p) · GW(p)

I agree with you that "investigating is time-consuming" is not a defense for declaring ideas you don't like to be unlikely.

comment by Vladimir_Nesov · 2011-08-09T22:44:14.559Z · LW(p) · GW(p)

That's true. But that's a reason to not investigate and not read this thread and not think about the subject at all, not a reason to reply in this thread arguing that the idea is unlikely, much less to simply declare it unlikely.

If it's a priori deemed unlikely, deciding not to investigate will lead to it staying this way, and one could as well express this state of knowledge in posting to the thread.

comment by Vladimir_Nesov · 2011-08-09T22:45:42.091Z · LW(p) · GW(p)

Explosive-theory advocates seem to prefer videos to text, which raises the time cost I have to pay to investigate it

This is a very silly reason to reject an idea.

It's a reason to keep the idea rejected, without giving it a chance to become accepted.

comment by simplyeric · 2011-05-12T21:01:05.540Z · LW(p) · GW(p)

A brief continuance on the derailment of the thread:

•The explosives theory involves a conspiracy - penalty.

The 9/11 attack undisputedly did involve a conspiracy.
The question here is, by whom? (a. just by foreign terrorists, b. an "inside job").

•The explosives theory can be and is used to score political points - penalty.

What does that have to do with anything? A reduction in unemployment can be used to score political points... that certainly doesn't make it unlikely.

•The explosives theory doesn't make any goddamn sense - huge penalty.

This is subjective - penalty?

The biggest point is: the orthodox explanation of the collapse seems robust to me on its own merits. There are other questions.

Replies from: roland
comment by roland · 2011-06-12T20:43:57.526Z · LW(p) · GW(p)

I think your points are all valid but they were downvoted because they are against the group belief.

comment by Mitchell_Porter · 2011-05-11T03:41:33.098Z · LW(p) · GW(p)

Years ago, I formulated the "No Bullet Hypothesis" of the Kennedy assassination: he wasn't hit by any bullets at all, his head just blew up. I had been thinking it was a peculiar form of spontaneous human combustion, perhaps involving Marilyn Monroe and Tibetan Nazis, but now I realize that his head must have been full of nano-thermite, possibly inserted during a trip to the presidential dentist.

Replies from: TheDave, lessdazed
comment by TheDave · 2011-05-12T04:46:13.175Z · LW(p) · GW(p)

I'm not sure that heavy sarcasm like this is constructive. While I thought it was funny, I think it encourages the audience to automatically disregard and deride the subject. In my experience, heavy sarcasm tends to both make the subject angry and reinforce the subject's (erroneous?) beliefs.

My own sarcastic responses (about political or otherwise weighty matters) typically just polarize the group I'm in, making the new in-group like me and the new out-group dislike me.

comment by lessdazed · 2011-05-11T12:45:59.663Z · LW(p) · GW(p)

This comment is awesome, and I'd like to think that if I believed the twin towers were destroyed by demolitions set off by the government I would still upvote it.

comment by LHJablonski · 2011-05-08T05:47:37.878Z · LW(p) · GW(p)

And I feel people moralize here especially using the downvote function.

Do you think that people use the downvote to tell another user that they are a terrible person... or do they simply use it to express disagreement with a statement?

I think probably both happen, but it's tilted heavily toward the latter. Feel free to downvote if you disagree. :)

Replies from: TimFreeman, mendel
comment by TimFreeman · 2011-05-08T19:58:30.659Z · LW(p) · GW(p)

Do you think that people use the downvote to tell another user that they are a terrible person... or do they simply use it to express disagreement with a statement?

There's another possibility. I downvote when I feel that reading the post was a waste of my time and I believe it wasted most other people's time as well.

(This isn't a veiled statement about Roland. I do not recall voting on any of his posts before.)

comment by mendel · 2011-05-08T20:52:39.445Z · LW(p) · GW(p)

The problem with the downvote is that it mixes the messages "I don't agree" with "I don't think others should see this". There is no way to say "I don't agree, but that post was worth thinking about", is there? Short of posting a comment of your own, that is.

Replies from: Swimmer963, lessdazed, AdeleneDawner
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-05-08T20:54:33.227Z · LW(p) · GW(p)

Short of posting a comment of your own, that is.

That's exactly what I do. I try to downvote comments based on how they're written (if they're rude or don't make sense, I downvote them) instead of what they're written about. (Though I may upvote comments based on agreeing with the content.)

Replies from: wedrifid
comment by wedrifid · 2011-05-08T23:31:12.458Z · LW(p) · GW(p)

That's exactly what I do. I try to downvote comments based on how they're written (if they're rude or don't make sense, I downvote them) instead of what they're written about. (Though I may upvote comments based on agreeing with the content.)

That's exactly what I do too. (Although my downvote threshold is likely a tad more sensitive. :P)

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-05-09T00:26:05.929Z · LW(p) · GW(p)

(Although my downvote threshold is likely a tad more sensitive.)

Likely. Mine will probably become more sensitive with time.

comment by lessdazed · 2011-05-09T02:35:31.416Z · LW(p) · GW(p)

I think there is a positive outcome from the system as it is, at least for sufficiently optimistic people. The benefit is that it is obvious that downvoting mixes those and other meanings, which helps me not take anything personally.

Downvotes could be anything, and individuals have different criteria for voting, and as I am inclined to take things personally, this obviousness helps me. If I knew 50% of downvotes meant "I think the speaker is a bad person", every downvote might make me feel bad. As downvotes currently could mean so many things, I am able to shrug them off. They could currently mean: the speaker is bad, the comment is bad, I disagree with the comment, I expect better from this speaker, it's not fair/useful for this comment to be rated so highly compared to a similar adjacent comment that I would rather people read instead or would like to promote as the communal norm, etc.

If one has an outlook that is pessimistic in a particular way, any mixing of a single message with multiple meanings will cause one to overreact as if the worst meaning were intended, and this sort of person would be most helped by ensuring each message has only one meaning.

comment by AdeleneDawner · 2011-05-08T20:55:38.938Z · LW(p) · GW(p)

I've been known to upvote in such cases, if the post is otherwise neutral-or-better. I like to see things here that are worth thinking about.

comment by jsalvatier · 2011-05-08T17:47:16.154Z · LW(p) · GW(p)

Are there lots of other topics you feel this way about?

If it's just this topic, that doesn't seem like a very big deal to me. I have no doubt LW has at least a few topics where people have an unproductive moralizing response. However, if such toxicity is uncommon and doesn't affect important topics, then I don't think it's a very big deal (though it is certainly worth avoiding).

Replies from: None
comment by [deleted] · 2011-05-08T18:14:48.161Z · LW(p) · GW(p)

It was made pretty clear in the other thread that the evidence linked was extremely weak.

Maybe that doesn't justify -15, but a priori I'd downvote it.

Replies from: wedrifid
comment by wedrifid · 2011-05-08T18:20:19.287Z · LW(p) · GW(p)

but a priori I'd downvote it.

ceteris paribus?

Replies from: None
comment by [deleted] · 2011-05-08T19:25:30.537Z · LW(p) · GW(p)

If I didn't already know it'd been downvoted into the asthenosphere, I would have downvoted it. But as it stands now, there's no reason for me to downvote it, because it's already been downvoted enough.

Replies from: wedrifid
comment by wedrifid · 2011-05-08T22:16:42.246Z · LW(p) · GW(p)

If I didn't already know it'd been downvoted into the asthenosphere, I would have downvoted it. But as it stands now, there's no reason for me to downvote it, because it's already been downvoted enough.

I understood the message. But the Latin phrase was off. Ceteris paribus is the one that would fit.

Replies from: None
comment by [deleted] · 2011-05-08T23:54:56.733Z · LW(p) · GW(p)

Fair enough.

comment by Charlie_OConnor · 2011-05-12T05:40:57.388Z · LW(p) · GW(p)

5-second level for evidence as soldiers

  1. Notice that all your evidence favors your belief; or notice the anger/resentment/fear when coming across evidence against your belief.
  2. Pause and remember that
    1. beliefs are just expectations and truth is a measure of how accurate your expectations are
    2. evidence is not for or against a belief, it is a flow of probability between expectations
  3. Feel aversion to not internalizing all the evidence, to not letting reality constrain your expectations (beliefs)
  4. Make a Bayesian calculation, incrementally incorporating all the evidence, so that your expectations (beliefs) are accurate (true); a rough sketch of this step follows below.
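
To make step 4 concrete, here's a minimal sketch in Python (my own illustration; the likelihood ratios are hypothetical, and real evidence rarely comes pre-quantified):

```python
# Sketch of step 4: incremental Bayesian updating in odds form.
# Each ratio is P(evidence | H) / P(evidence | not-H).
def update(prior, likelihood_ratios):
    """Return the posterior probability after incorporating each piece of evidence."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr  # evidence for H has lr > 1, evidence against has lr < 1
    return odds / (1.0 + odds)

# Start at 50%, see two pieces of supporting evidence and one against:
print(update(0.5, [3.0, 2.0, 0.25]))  # 0.6
```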

A recent example for me comes from reading The Nurture Assumption and Selfish Reasons to Have more Kids.

  1. I noticed I was really convinced by a lot of evidence in favor of the view that parental influence is less important than I thought.
  2. My beliefs were being updated, but only by evidence in one direction - in favor of the hypothesis.
  3. Not wanting to be inaccurate about the best way to raise children, I searched Google Scholar for twin/adoption studies and criticisms.
  4. I updated my beliefs based on the criticisms of the studies and I now feel confident in my expectations about parental influence.

Exercises include picking a belief (maybe one you recently acquired from a convincing friend) and researching all arguments for and against the belief. Write down your expectations before the research. As you research, compare what you find to your expectations and update your expectations as you go (I mean actually writing down what you expect, so others can read it). Repeat. Eventually pick beliefs you have held for a long time and that are part of your identity (after practicing on recent beliefs that matter less).

Replies from: outofculture, laakeus
comment by outofculture · 2011-05-15T07:13:30.342Z · LW(p) · GW(p)

A variant on this topic:

  1. Notice when providing evidence X for a position P you believe in.
    1. Bonus points for reviewing recent memories to see if you have supported P repeatedly, especially to the exclusion of evidence to the contrary.
  2. Feel revulsion at having become the puppet of P.
  3. Introduce a nudge away from P. Some examples:
    1. Provide some good evidence counter to P.
    2. If you cannot point to specific counter evidence, try to at least describe what counter evidence would look like.
    3. State just how surprised you would be to see the evidence X if the position P were false. Can you rank it relative to other pieces of evidence under consideration? If the evidence is really weak, ask to have it weighted as such.

This seems sloppy, as it relies on the sense of revulsion to determine how much of a counter-nudge to give. It should still be useful, I hope.

The exercise to train this with:

  1. Propose a character facing a choice, especially on topics that are muddled by being high-profile (e.g. Jane Senator must decide how to vote on extending unemployment benefits).
  2. Provide a small selection of evidence that the character has considered, and state that their position after seeing just that evidence is for, against or undecided.
  3. Ask the participants what additional evidence they think the character should consider.
comment by laakeus · 2012-12-20T20:32:51.995Z · LW(p) · GW(p)

I updated my beliefs based on the criticisms of the studies and I now feel confident in my expectations about parental influence.

I'm curious as to what your updated beliefs are on parental influence. Can you summarize in couple of paragraphs?

(I think the original description matches how I view the issue, but I feel the topic doesn't have enough importance for me to spend a lot of time trying to update my beliefs.)

comment by novalis · 2011-05-11T04:35:02.809Z · LW(p) · GW(p)

I was thinking about how Beliefs Must Pay Rent the other day, because my wife is much better than me at noticing when this isn't happening. One major trick to this is that she always asks (at least internally), "So what?"

That is, rather than immediately finding a way to attack whatever it is that the other person said, she considers whether what they've said affects anything in their argument. One line of inquiry is, "can I concede this point and still win?" But "so what?" goes further than that -- it helps her internally understand if there is anything of substance to the argument. If the answer (in her mind) to "so what?" is, "that would be bad", then there at least might be some substance there. But if there is no answer, she asks the question out loud, to see whether she's missing something, or whether there really is no valid belief at all.

Note: this is my paraphrasing of her technique; she may or may not endorse this interpretation.

comment by AlanCrowe · 2011-05-09T21:39:24.974Z · LW(p) · GW(p)

For my attempt at the exercise I pick a sub-skill of "reading, pen-in-hand" that I call "spotting opportunities to engage." My attempt runs to 2020 words and was rejected by the LessWrong software for being too long. I've put the raw text on a web page. Sorting out the HTML will have to wait for another day.

Why so long? I see the skill as very important. I'm crap at it. I've just had a success that I'm pleased with, but it is too recent; I haven't had time to boil it down so that I can describe it briefly.

comment by Psy-Kosh · 2011-05-07T22:02:04.851Z · LW(p) · GW(p)

Something I still need to work on, but which I think would be an important one (perhaps a general class of 5-second skills rather than a single one), would be "remember what you know when you need it".

Example: you're potentially about to escalate an already heated political debate and make it personal. 5-second skill: actually remembering that politics is the mind-killer, thus giving yourself a chance to pause, reconsider what you're about to do, and thereby avoid doing something stupid.

I'd also apply this notion to what you said about testability. Not so much being able to think of a quick test as being able to quickly remember to think about how it could be tested.

Perhaps this general category of 5-second-skills could be called "pause and think" or "pause and remember".

i.e., the critical thing about this 5-second skill isn't so much being able to swiftly execute some other rationalist skill as remembering to use that skill at all when you actually need it.

Replies from: Cayenne
comment by Cayenne · 2011-05-07T22:46:14.009Z · LW(p) · GW(p)

How about 'flinch away from drama'?

Never argue opinions, only facts.

If you must argue an opinion, then pin it down so that it can't wriggle around. Example: if you have the opinion 'AI can/will paperclip', then try to pin down how and why it can as strictly as you can, and then take the argument from 'it can happen' to 'perhaps we can test this'. Bring it out of the clouds and into reality as quickly as possible.

If you manage to kill someone's opinion, showing that it is just wrong, then pause and mourn its passing instead of gloating. It can't hurt to apologize for winning, since feelings are so easily hurt.

Edit - please disregard this post

Replies from: Psy-Kosh
comment by Psy-Kosh · 2011-05-07T22:58:21.499Z · LW(p) · GW(p)

Hrm... That could work for the specific "remember that politics is the mindkiller" rule (Although, of course, while one can distinguish issues of preference from issues of fact... issues of opinion vs issues of fact seems more questionable. :))

Replies from: Cayenne
comment by Cayenne · 2011-05-07T23:10:06.784Z · LW(p) · GW(p)

Well, I view opinions as inherently meaningless to attempt to test. A fact can be looked up or tested, but an opinion either can't be tested yet or is worthless to test.

'The sky is blue' is testable unless you've been stuck for generations underground. 'I like pink' is worthless to test, and really worthless to argue against. 'When we can do X it will then proceed to Y' is hard to do anything about until we can actually X, but if we pin the specifics down enough then it isn't totally useless to argue about it.

Some opinions can also just be completely infeasible to test as well, due to the steps the test would need to take. (Hayek vs. Keynes, I'm looking at you.)

Edit - please disregard this post

Replies from: Psy-Kosh
comment by Psy-Kosh · 2011-05-11T00:08:46.100Z · LW(p) · GW(p)

Sorry for the delayed reply. "I like pink" is an assertion of a preference, rather than an opinion about a fact. (Well, I guess it's asserting the fact that you like pink... and stuff like brain analysis may help test it. ;))

Well, yes, some are difficult to test... but then one can argue the reasoning for having them.

Oh, just to clarify, I was proposing a sort of 5-second-meta-skill of "remembering your rationalist knowledge/skills when you need them", the "remember politics is the mind killer" being an example rather than one I wanted to single out.

*blinks at the edit* erm? disregard which part/aspect of it? (ie, are you retracting a claim, or...?)

comment by scientism · 2011-05-07T15:40:35.481Z · LW(p) · GW(p)

One of the things I think virtue ethics gets right is that if you think, say, lying is wrong, then you should have a visceral reaction to liars. You shouldn't like liars. I don't think this is irrational at all (the goal isn't to be Mr. Spock). Having a visceral reaction to liars is part of how someone who thinks lying is wrong embodies that principle, as much as not lying is. If somebody claims to follow a moral principle but fails to have a visceral reaction to those who break it, that's an important cue that something is wrong. That goes doubly for yourself. Purposefully breaking that connection by avoiding becoming indignant seems like throwing away important feedback.

Replies from: None, gjm, cousin_it
comment by [deleted] · 2011-05-07T23:57:40.498Z · LW(p) · GW(p)

Purposefully breaking that connection by avoiding becoming indignant seems like throwing away important feedback.

Feedback arrives in the form of a split-second impression of "this is wrong". However long you spend being indignant after that, it won't provide you with any new ethical insight. Indignation isn't about ethics; it's about verbally crushing your enemy while signalling virtue to onlookers.

comment by gjm · 2011-05-08T09:35:52.590Z · LW(p) · GW(p)
  1. Why do you think merely having a visceral reaction to lying (one's own or others'; actual or hypothetical) isn't enough?

  2. Conditional on having that visceral reaction, what is the advantage of then becoming indignant? Or do you think that becoming indignant is identical to that visceral reaction?

comment by cousin_it · 2011-05-07T17:04:54.014Z · LW(p) · GW(p)

Why must my personal understanding of right and wrong also apply to other people? What if I think something's wrong for me to do, but I don't care if other people do it (e.g. procrastination)?

Replies from: thomblake, shokwave, Peterdjones
comment by thomblake · 2011-05-09T17:21:07.194Z · LW(p) · GW(p)

Why must my personal understanding of right and wrong also apply to other people? What if I think something's wrong for me to do, but I don't care if other people do it (e.g. procrastination)?

Because you care about other people, and other people are relevantly similar to yourself. This applies to both instrumentally relevant details, like the character of a person you're going to hire, and more personal concern, like whether your brother is living a good life.

comment by shokwave · 2011-05-07T17:07:40.766Z · LW(p) · GW(p)

Is there some law of nature saying my personal understanding of right and wrong should also apply to other people?

Principles derivable from game theory, maybe.

comment by Peterdjones · 2011-05-07T17:27:46.837Z · LW(p) · GW(p)

If it's purely personal, why call it moral?

Replies from: thomblake, Cayenne, wedrifid, cousin_it
comment by thomblake · 2011-05-09T17:18:35.820Z · LW(p) · GW(p)

If it's purely personal, why call it moral?

I'm confused. With Sidgwick, I define 'ethics' as 'the study of what one has most reason to do or to want', and take 'moral' to in most cases be equivalent to 'ethical'.

Then, 'morality' is indeed purely personal, but being very similar creatures we can build off each others' moral successes.

comment by Cayenne · 2011-05-07T21:09:46.665Z · LW(p) · GW(p)

I tend to think of 'the things I have to do to be me' as morals, and 'the things I have to do to fit into society' as ethics. In a lot of cases when someone is calling someone else immoral, it seems to me that they're saying that that person has done something that they couldn't do and remain who they are.

Edit - please disregard this post

comment by wedrifid · 2011-05-07T18:47:59.220Z · LW(p) · GW(p)

If it's purely personal, why call it moral?

Why not? (A somewhat quirky twist that seems to crop up is that of having a powerful moral intuition that people's morals should be personal. It can sometimes get contradictory but morals are like that.)

Replies from: Peterdjones
comment by Peterdjones · 2011-05-07T18:50:10.292Z · LW(p) · GW(p)

Usual reasons... for one thing, there are other ways of describing it, such as "personal code". For another, it renders morality pretty meaningless if someone can say "murder's OK for me".

Replies from: eugman, wedrifid, a363
comment by eugman · 2011-05-08T18:52:17.594Z · LW(p) · GW(p)

I think it makes sense in the negative sense, as things that aren't OK. What's wrong with holding oneself to a higher standard? What's wrong with saying "It'd be immoral for ME to murder?"

comment by wedrifid · 2011-05-07T19:28:11.612Z · LW(p) · GW(p)

for one thing, there are other ways of describing it, such as "personal code". For another, it renders morality pretty meaningless if someone can say "murder's OK for me".

And yet if the same neurological hardware is being engaged in order to make social moves of a similar form, 'morality' still seems appropriate. Especially since morals like "people should not force their view of right and wrong on others" legitimize instances of moralizing even when the moralizer tends to take other actions which aren't consistent with the ideal. Because, as I tend to say, morals are like that.

comment by a363 · 2011-05-08T12:04:27.755Z · LW(p) · GW(p)

What about "war is OK for me"?

It really gets to me that when a bunch of people gather together under some banner then it suddenly becomes moral for them to do lots of things that would never be allowed if they were acting independently: the difference between war and murder...

The only morality I want is the kind where people stop doing terrible things and then saying "they were following orders". Personal responsibility is the ONLY kind of responsibility.

comment by cousin_it · 2011-05-07T18:08:39.716Z · LW(p) · GW(p)

This path leads to an argument about the meanings of words, so I'm not going there.

comment by twanvl · 2011-05-07T12:08:58.864Z · LW(p) · GW(p)

Answering "a color" to the question "what is red?" is not irrational or wrong in any way. In fact, it is the answer that is usually expected. Often when people ask "what is X?" they do in fact mean "to what category does X belong?". I think this is especially true when teaching. A teacher will be happy with the answer "red is a color".

Replies from: DSimon, TrE
comment by DSimon · 2011-05-07T20:31:56.970Z · LW(p) · GW(p)

Agreed, though I think this depends a lot on who you're talking to and what they already know. Typically if someone I know asks me something like "What is red?" they're trying to start some kind of philosophical conversation, and in that case "It's a color" is the proper response (because it lets them move on to their next Socratic question, and eventually to the point they're making).

On the other hand, if we were talking to color-blind aliens, answering "It's what is in common between light reflected by the stop sign there and the fire truck yonder, but not the light reflected by this mailbox here" is a lot more useful starting response than "it's a color". If I answered "It's a color", and the alien is fairly smart and thinks like a human, the conversation would probably then go:

Alien: So what's a color then?

Me: Well a color is a particular kind of light...

Alien: Wait, hold on. Light, like the stuff that bounces off objects and that I use to see with?

Me: Yep, that's it.

Alien: What distinguishes light of one color from that of another?

Me: The wavelength of the light wave.

Alien: What wavelength is red light?

Me: Off the top of my head, I don't know. If you have a way to measure the wavelength of light, though, then that stop sign there and the fire truck yonder are both red to my eyes, so the light they're reflecting is in that wavelength.

Alien: Gotcha.

... If I went straight to the examples, I'd have ended up at pretty much the same point, but a lot quicker.

Replies from: mendel
comment by mendel · 2011-05-08T02:28:30.399Z · LW(p) · GW(p)

Assuming the person who asks the question wants to learn something and not hold a Socratic argument, what they need is context. They need context to anchor the new information (that there's a word "red", in this case) to what they already know. You can give this context in the abstract and in the specific (the "one step up, one step down" method that jimrandomh describes above achieves this), but it doesn't really matter. The more different ways you can find, the better the other person will understand, and the richer a concept they will take away from your conversation. (I'm obviously bad at doing this.)

An example is language learning: a toddler doesn't learn language by getting words explained, they learn language by hearing sounds used in certain contexts and recalling the association where appropriate.

I suspect that the habit of answering questions badly is being taught in school, where an answer is often not meant to transfer knowledge, but to display it. If asked "What is a car?", answering that it has wheels and an engine will get you a better grade than stating that your mom drives a Ford, even though talking about your experience with your mom's car would have helped a car-less friend better understand what it means to have one.

So what we need to learn (and what good teachers have learned) is to take questions and, in a subconscious reaction, translate them into a realisation of what the asking person needs to know - what knowledge they are missing that made them ask the question - and to provide it. And that depends on context as well: the question "what is red" could be properly answered by explaining when the DHS used to issue red alerts (they don't color-code any more), by explaining the relation of a traffic light to traffic, or by explaining what red means in Lüscher's color psychology or in Chinese chromotherapy. If I see a person nicknamed Red enter at the far side of the room wearing a red sweater, and I shudder and remark "I don't like red", and someone then asks me "what do you mean, red", I ought to simply say that I meant the color - any talk of stop signs or fire engines would be very strange. To be specific, I would answer "that sweater".

To wrap this overlong post up, I don't think there's an innate superiority of the specific over the abstract. What I'll employ depends on what the person I'm explaining stuff to already understands. A 5-second "exercise" designed to emphasise the specific over the abstract can help me overcome a mental bias of not considering specifics in my explanations (possibly instilled by the education system). It widens the pool that I can draw my answers from, and that makes me a potentially better answerer.

comment by TrE · 2011-05-08T18:52:43.647Z · LW(p) · GW(p)

Also, it is very important to give counter-examples: 'This crow over there belongs to the bird category. But the plane in the sky and the butterfly over there do not.'

comment by gjm · 2011-05-07T09:23:37.561Z · LW(p) · GW(p)

I suggest a different and possibly better way of thinking about what Eliezer says about "moralizing" and "judging": don't judge other people. Enabling reasonable discussion and being fun to be around depend more on whether one turns disagreement into personal disdain than on what sort of disagreements one has.

(Some "moralizing" talk doesn't explicitly pass judgement on another person's worth, but I think such judgement, even if implicit, is the thing that's corrosive.)

comment by Broggly · 2011-05-11T10:58:06.086Z · LW(p) · GW(p)

The first fictional example I thought of was the Wax Lips scene from The Simpsons. "Try our wax lips: the candy of 1000 uses!" "Like what?" "One, a humourous substitute for your own lips." "Keep going..." "Two, err...oh, I'm needed in the basement!"

comment by atucker · 2011-05-09T03:38:48.848Z · LW(p) · GW(p)

Why is so much of the discussion about the "avoid moralizing" statement?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-09T04:58:43.891Z · LW(p) · GW(p)

I made the mistake of using a word for something people shouldn't do. Then they started disputing the definition of the word, even after I told them not to. I will edit to take out the evil word.

Replies from: atucker
comment by atucker · 2011-05-09T10:55:30.591Z · LW(p) · GW(p)

Ah, okay.

I was looking forward to reading more 5 second techniques.

comment by John_Maxwell (John_Maxwell_IV) · 2011-05-07T20:27:25.748Z · LW(p) · GW(p)

I thought of a few five-second skills like this:

  • remembering that a purpose of engaging in argument is to update your map
  • realizing you should actually spend time on activities that have proven to be helpful in the past (related to this)
  • noticing when you have a problem and actually applying your creativity to solve it (similar to this)
  • recognizing a trivial inconvenience for what it is

I noticed that all of my 5-second skills (and Eliezer's also) involve doing more mental work than you're instinctively inclined to do at a key point. This makes sense if the main reason people are irrational is due to taking cognitive shortcuts (see this great article; feel free to skip down to "Time for a pop quiz"). So maybe we could save some labor identifying or at least acquiring 5-second skills if we learn to be comfortable with constant reflectivity and hard mental work.

comment by [deleted] · 2011-05-08T02:12:11.116Z · LW(p) · GW(p)

So here is a procedure I actually developed for myself couple of months ago. It's self-helpy (the purpose was to solve my self-esteem issues) but I think indignant moralizing uses some of the same mental machinery so it's relevant to the task of becoming less judgemental in general.

I believed that self-esteem doesn't say anything about the actual world, so it would be a good idea to disconnect it from external feedback and permanently set it to a comfortable level. At some point I realized that this idea was too abstract and I had to be specific to actually change something. And here's roughly what it led to:

  1. Notice that I'm engaging in judgement. If the judgement is internally-directed and negative then the trigger will be anxiety. If it were positive then it would be some sort of narcissistic enthusiasm. If the judgement were directed at another person then it could be a feeling of smugness, if negative, and probably some sort of reverential admiration if positive.

  2. Realize that the emotions I'm feeling don't represent objective reality. They are a heuristic hacked together by evolution to guide my behaviour in a savannah-dwelling hunter-gatherer tribe. And I'm definitely not currently a member of such a collective.

  3. Remember that thinking abstractly about a 'sense of self-esteem' doesn't capture the way it is experienced, that thinking it should be disconnected from external stimuli isn't something that can be translated into action, and that I need something specific to target.

  4. Focus on how an algorithm feels from the inside -- that the sense of self-esteem doesn't feel like a sense of self-esteem. It feels like a feature of the world. As if everyone, including me, had an inherent, non-specific aura of awesomeness that I were able to directly perceive, though not with any of the 'standard' senses.

  5. Reflect on the silliness of that way of perceiving. Look at the world and notice the distinct lack of worthiness everywhere I turn. Tell myself, verbally, that there is no inherent awesomeness or worthiness and that therefore nothing can affect it. Don't just try to disconnect the emotions from experience, aim to outright destroy them (note: I don't claim that destroying them is actually possible).

comment by Jasnah Kholin (Jasnah_Kholin) · 2022-09-12T16:16:06.653Z · LW(p) · GW(p)

this post was almost useless for me - I learned much less from it than from any post in the Sequences. what I did learn: what over-generalization looks like; that someone thinks other people learn rationality skills in a way I have never seen anyone learn, with a totally different language and way of thinking about it; and that translating is important.

the way I see it: people look at the world through different lenses. my rationality skills are the lenses that are instinctive to me and that fall within the rationality-skills subset.

I learned them mostly by seeing examples and creating a category for them.

all those exercises not only didn't work for me, I also have much less idea what Yudkowsky tried to teach, while from the Sequences I did manage to learn some things.

maybe the core rationality skill is the ability to bridge the gap between theory and practice? I consider "go one meta level higher" the most important one. it creates an important feedback loop.

also, in most situations I consider going a level higher - giving the category and not the example - a good idea.

I actually learned that examples are a really good thing and the natural way humans learn. I think that's part of what the post tried to say, but I'm not sure. this is one of the least understandable of Yudkowsky's posts that I have ever read.

comment by Conor (conor) · 2021-06-14T21:16:43.195Z · LW(p) · GW(p)

Example

  1. I am working on a hard problem and A. I notice a thought proposing a distraction from my current task, B. but I stop myself and continue my current activity.
  2.  
    1. Perceptually recognize a thought proposing a distraction from my current task.
    2. Feel the need for explicit reasons why I would change tasks.
    3. Experience an aversion to changing tasks without explicit reasons.
    4. Ask why I want to change to that task, to what end, and why now.
  3. Exercise

Recognizing the distractions. I'm struggling to come up with an idea on how to do this other than a form of awareness or attention meditation.

comment by 103percent · 2021-04-29T16:09:43.683Z · LW(p) · GW(p)

I'm going to attempt the exercise of turning "Don't dominate conversations" into something that you can train yourself to do on the 5-second level.

I count the number of active (or would-be active) members of a conversation and convert that into a rough percentage that I can hold in my mind quite easily. Three people = 33%, four people = 25%, and so on.

I keep an approximate running guess of the percentage of the conversation time where I was speaking (no more accurate than an amateur card counter in Blackjack needs to be) and use it to guide my behaviour when I want to contribute something.

If I'm over the 33% in a 3-person conversation (especially if it's by a lot) then it is now time to dedicate myself to listening to others and letting others respond (even if I'm dying to say something in response myself) and wait until I'm reasonably certain that I'm under or close to the 33% before I jump back in. Heck, I might even interrupt someone(!)
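
For concreteness, the bookkeeping might look like this minimal sketch (the class and numbers are my own invention; in practice the estimate is rough and done mentally):

```python
# Rough sketch of the running estimate: my fair share is 1/N, and I
# compare my cumulative speaking time against it.
class ConversationTracker:
    def __init__(self, participants):
        self.fair_share = 1.0 / participants
        self.my_seconds = 0.0
        self.total_seconds = 0.0

    def record(self, seconds, me_speaking):
        """Log a stretch of the conversation and who was speaking."""
        self.total_seconds += seconds
        if me_speaking:
            self.my_seconds += seconds

    def should_hold_back(self):
        """True when I'm over my fair share and it's time to listen."""
        if self.total_seconds == 0:
            return False
        return self.my_seconds / self.total_seconds > self.fair_share

# Three-person conversation: 60s of me talking out of 150s total is 40%
# against a 33% share, so it's time to listen.
t = ConversationTracker(participants=3)
t.record(60, me_speaking=True)
t.record(90, me_speaking=False)
print(t.should_hold_back())  # True
```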

Replies from: Raemon
comment by Raemon · 2021-05-01T05:39:48.581Z · LW(p) · GW(p)

I'm not sure this is actually the best way to relate to conversations, but I do think at least tracking what proportion of the conversation you're speaking is important, so you can make some kind of deliberate choice about it.

I think there are some situations where it is better to either follow an exciting conversation thread, or withdraw more fully.

I think in practice this doesn't mean (usually) taking up more than 50% of the conversation time - if an exciting conversation thread is happening, there are usually at least two participants talking, and it's good to notice when you're not actually having a conversation but just monologuing.

comment by NiceButAngry · 2012-11-15T15:50:23.932Z · LW(p) · GW(p)

I'm new here and couldn't find a better place to ask this: Are there any exercises to train such skills on the site? For example a list of statements to assess their testability?

Also I was wondering if there is some sort of pleasant way to access this site using an Android phone. I would like to read the sequences on mine.

Oh and hello everybody! :) I hope I can find the time and motivation to spend some time in this place, I think I might like to have your skills. ^^

If I violate any of your rules or anything, just let me know; I have barely scratched the surface of this seemingly massive site.

Replies from: aletheianink
comment by aletheianink · 2013-11-30T04:39:38.118Z · LW(p) · GW(p)

Your post was over a year ago, but I will reply anyway:

I don't know the answer to the first question, as I am also new.

To the second question, I recommend something like Readability, where you can clip a page (or sequence) and then read it in a really nice interface through the Readability app.

Replies from: hyporational
comment by hyporational · 2013-11-30T12:59:01.478Z · LW(p) · GW(p)

Pocket is nice too.

comment by Folcon · 2011-05-29T08:43:27.590Z · LW(p) · GW(p)

Could someone give me the reasoning for why silver-lining thinking is itself bad? Making mistakes is inevitable, so I would have thought this is a way to start to look past the mistake and give it a sense of perspective. Falsely rationalising a bad thing into a good thing is not valuable; however, taking a bad thing and working out how to turn the situation you are now in into a more positive experience, or, if you are completely stuck, realising that it is time to move on, strikes me as a useful skill. Please explain if you believe that I am wrong.

Replies from: Carinthium
comment by Carinthium · 2011-08-02T07:05:36.494Z · LW(p) · GW(p)

If you're still here: As far as I can tell, emphasising how to take advantage of a bad situation can be useful, but a tendency to downplay the bad side of a situation reduces objectivity by making that bad side hard to perceive. Of course you should try to turn such experiences to your advantage (often, anyway - sometimes it's better to 'cut and run', and sometimes to try to minimise losses; in some situations it would be necessary to try to avert a greater catastrophe), but objective awareness of the extent of the problem is useful.

In addition, mistakes can be minimised (for some people, in some areas of life, they are reducible to insignificance). It is best if a person can recognise a mistake, figure out what they did wrong, and be sure not to do it again.

Replies from: Folcon
comment by Folcon · 2015-01-20T01:28:08.656Z · LW(p) · GW(p)

A bit late, but thank you for the insight.

comment by Cayenne · 2011-05-08T03:26:37.370Z · LW(p) · GW(p)

It might be useful to form a habit of reflexively trying to think about a problem in the mode you're not currently in, trying to switch to near mode if in far, or vice-versa. Even just a few seconds of imagining a hypothetical situation as if it were imminent and personal could provoke insight, and trying to 'step back' from problems is already a common technique.

I've used this to convince myself that a very long or unbounded life wouldn't get boring. When I try to put myself in near-mode, I simply can't imagine a day 2000 years from now when I wouldn't want to go talk to a friend one last time, or go and reread a favorite book, or cook a favorite meal, or any one of a thousand other small things. I might get bored off and on, but not permanently.

Edit - please disregard this post

comment by mendel · 2011-05-08T03:02:57.208Z · LW(p) · GW(p)

Eliezer, you state in the intro that the 5-second-level is a "method of teaching rationality skills". I think it is something different.

First, the analysis phase is breaking down behaviour patterns into something conscious; this can apply to my own patterns as I figure out what I need to (or want to) teach, or to other people's patterns that I wish to emulate and instill into myself.

It breaks down "rationality" into small chunks of "behaviour" which can then be taught using some sort of conditioning - you're a bit unclear on how "teaching exercises" for this should be arrived at.

You suggest a form of self-teaching: the 5-second analysis identifies situations where I want some desired behaviour to trigger, and lets me pre-think my reaction to the point where it doesn't take me more than 5 seconds to use. In effect, I am installing a memory of thoughts that I wish to have in a future situation. (I could understand this as communicating with "future me" if I like science fiction. ;) Your method of limiting this to the "5-second level" aims to make this pre-thinking specific enough that it actually works. With practice, this response will trigger subconsciously, and I'll have modified my behaviour.

It would be nice if that actually helped us talk about rationality more clearly (but won't we be too specific and miss the big picture?), and it would be nice if it helped us arrive at a "rationality syllabus" and a way to teach it. I'm looking forward to reports of using this technique in an educational setting - what the experiences of you and your students were in trying to implement it. Until your theory is tested in that kind of setting, it's no more than a theory, and I'm disinclined to believe the "you need to" from the first sentence of your article.

Is rationality just a behaviour, or is it more? Can we become (more) rational by changing our behaviour, and then have that changed behaviour change our mind?

Replies from: mendel
comment by mendel · 2011-05-09T10:28:35.828Z · LW(p) · GW(p)

Of course, these analyses and exercises would also serve beautifully as use-cases and tests if you wanted to create an AI that can pass a Turing test for being rational. ;-)

comment by Anny1 · 2011-05-07T12:28:09.681Z · LW(p) · GW(p)

What would be an exercise which develops that habit?

Speaking from personal experience, I would propose that moralizing is mostly caused by anger about the presumed stupidity/irrationality behind the statement we want to moralize about. The feeling of "Oh no they didn't just say that, how could they!". What I try to do against it is to simply let that anger pass by following simple rules like taking a breath, counting to 10, or whatever works. When the anger is gone, usually the need for moralizing is as well.

Also I feel there is a lot of discussion about Eliezer moralizing in his posts that can be broken down into the distinction between moralizing as an automated response and moralizing after careful deliberation (as in blog posts). I wouldn't say that the latter is wrong per se.

In daily life I often meet people that I feel are so far off, so tangled up in their rationalizations, that even after my anger about their comments has passed I decide that a discussion would be a waste of everybody's time. In this case I use a sarcastic remark to at least get them off my back. Maybe if the person in question gets a similar reaction from enough people, they will reconsider. It can also be for the benefit of bystanders.

So I think this would be the steps that work for me:

1) Recognize anger

2) Wait it out

3) Ask some questions to clarify/falsify your understanding of the questionable statement

4) Think about good, precise counterarguments and/or find the errors that you think the other person made.

5) Decide whether or not your arguing will probably be productive and then

a) Do it (in a civilized manner of course) or

b) Make a sarcastic comment that pinpoints the irrationality you see or simply say that you don't agree and leave.

I realize that this can't really be done in 5 seconds, but I think I got far enough myself that I can do the first two steps in a couple of seconds and keeping option 5b) in mind helps me in calming myself down.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-07T19:31:17.838Z · LW(p) · GW(p)

The goal invoked in the post, though, is to avoid moralizing in conversations between rationalists so that they don't feel like they're walking through a minefield. Having the anger and suppressing it doesn't work for that. The person next to you is still walking the minefield. They're just not getting feedback.

Replies from: Anny1
comment by Anny1 · 2011-05-08T10:07:35.625Z · LW(p) · GW(p)

From some of the above posts I get the impression that at least in a community of aspiring rationalists, there is still some anger around. I think it is one of the hardest things to get rid of.

There is a point about my personal technique that I wanted to make and that I feel I didn't really convey so far... I find it hard to explain, though. Thinking about something like option 5b) somehow helps me to combat the feeling of helplessness that is often mixed in with the anger. Somehow, in saying to myself "you can act on that later, if you still feel it is necessary", I take the edge off. Can someone relate to that and maybe help in clarifying?

Also, there is a difference between suppressing anger and what I am trying to describe; it feels totally clear internally but is also hard to explain.

The point about the missing feedback is a very good one, and I'm wondering whether, how, and how often rationalists give each other feedback about how the discussion makes them feel.

Replies from: Alicorn
comment by Alicorn · 2011-05-08T10:33:57.678Z · LW(p) · GW(p)

Somehow, in saying to myself "you can act on that later, if you still feel it is necessary", I take the edge off. Can someone relate to that and maybe help in clarifying?

I think I may know what you're talking about. I find it immensely helpful to tell myself (when it's true) "there is no hurry", sometimes repeatedly. When there's no hurry, I can double-check. When there's no hurry, I can ask someone for help. When there's no hurry, there's no reason to panic. When there's no hurry, I can put it down, come back to it later whenever I feel like it, and see if anything's changed about how I want to react to it. So it's more general than just anger, but perhaps the same class of thing.

Replies from: Anny1
comment by Anny1 · 2011-05-09T18:03:36.295Z · LW(p) · GW(p)

Yes that's what I mean, thank you.

comment by diegocaleiro · 2011-05-19T19:48:29.300Z · LW(p) · GW(p)

I decided on using "Motivated stopping" and "Motivated continuation" as my two examples.

To successfully avoid motivated stopping, someone who thinks he can use Solomonoff Induction to simulate "what it is like to be the epistemology of a mind" should consider whether or not he has thought in detail about how much of our understanding of gross-level affective neuroscience can be mapped into a binary '01010001' kind of description, and whether he has sufficiently detailed evidence to go on and write something like http://arxiv.org/PS_cache/arxiv/pdf/0712/0712.4318v1.pdf (This is not a critique of Peter de Blanc, but of Solomonoffist Inductors in general)

To successfully avoid motivated continuation, someone who thinks she can make easy money without much effort should 1) Notice whether her decision to postpone actually doing it, because she believes it doable, is a form of akrasia or a fear of the twinge of starting 2) Think about whether she would be comfortable explaining her thesis on how to easily make money to a friend (and not be embarrassed by it) 3) Wonder whether she keeps reading about how to do it in order to feel the warm glow of reading Tim Ferriss-like material and pretending to be awesome, or whether there is an actual need for more information than she presently has.

comment by thomblake · 2011-05-09T18:34:10.058Z · LW(p) · GW(p)

Taking a look at Hug the Query for the exercise:

We have an ordered hierarchy:

  • authority
  • argument
  • calculations
  • experiment

In which we should be going as far down the chain as possible when considering a factual dispute.

Thus, if you find yourself thinking about whether someone can be trusted based on reputation or prestige, ask, "Can I look at their arguments instead?". If you find yourself looking at their arguments, ask, "Can I look at their calculations?". If you find yourself looking at their calculations, ask, "Can I perform an experiment?".
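
As a toy sketch of going as far down the chain as possible (my own framing; the names are illustrative, not part of the original post):

```python
# "Hug the query": prefer the most grounded evidence source you can reach.
HIERARCHY = ["authority", "argument", "calculations", "experiment"]

def best_available(available):
    """Pick the lowest (most grounded) rung you actually have access to."""
    for source in reversed(HIERARCHY):
        if source in available:
            return source
    raise ValueError("no evidence source available")

# If you have an expert's word and their calculations but no experiment,
# check the calculations rather than resting on the expert's prestige.
print(best_available({"authority", "calculations"}))  # calculations
```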

An exercise would be difficult in the absence of real factual disputes. If there are real factual disputes amongst the participants: Begin arguing about the factual dispute based on whatever seems most compelling. Ask the above questions, and resolve it to the point where at least in principle an experiment is identified which would answer that question. It would be helpful if the dispute is cut off after a set amount of time (slightly more than 5 seconds, I think) so that it counts as practice for the 5-second skill of determining whether experimental evidence is available.

Did I miss anything?

comment by TrE · 2011-05-07T10:59:41.481Z · LW(p) · GW(p)

What could one do about rationalization? It probably won't be enough to ask oneself what arguments there are for the opposite position. Also, one could think about why one would want to confirm their position and if this is worth less or more than coming to know the truth (it will almost always be worth less). Do you have more ideas on how to beat this one?

Replies from: Vladimir_Nesov, TrE, Cayenne
comment by Vladimir_Nesov · 2011-05-07T23:00:57.130Z · LW(p) · GW(p)

Ask, "What exactly do I believe? Why do I believe it?", separately from "Why is what I believe true? Is it true?". This will call attention to the process that could or could not privilege your hypotheses, before they are granted special rights. Also, a lot of confusion originates from vague ideas that don't even correspond to a clear meaning, so that the question of their correctness is mostly ambiguity.

Replies from: wedrifid
comment by wedrifid · 2011-05-08T02:41:14.421Z · LW(p) · GW(p)

Better yet: Ask Whether, Not Why.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-08T10:17:29.548Z · LW(p) · GW(p)

Both questions are important, and have potential for bringing good info. They shouldn't be mixed up, one of them shouldn't be considered while forgetting the other, and where one of them can't be readily answered, you should just work with the other. Pursuing "Why" is how you improve on a faulty heuristic, for example, fixing a bug in a program without rewriting it from scratch.

Replies from: wedrifid
comment by wedrifid · 2011-05-08T10:32:25.688Z · LW(p) · GW(p)

Both questions

All three. You already had two, neither of which matches Eliezer's.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-08T10:42:05.846Z · LW(p) · GW(p)

I don't see it; list the three. When applied to the context of these comments, the post says, "If you don't remember why you decided to believe X, ask yourself, is X true? (That is, should you believe X?)". Which is one of the options I listed. What I didn't explicitly consider here is the condition of not remembering the reasons, in which case, Eliezer suggests, you are better off not going there lest you come up with new rationalizations, and should stick to the question you have a better chance of answering based on the facts.

Replies from: thomblake, wedrifid
comment by thomblake · 2011-05-09T14:22:04.460Z · LW(p) · GW(p)

I notice wedrifid still did not explicitly answer you, so for completeness:

  • What exactly do I believe? Why do I believe it?
  • Why is what I believe true? Is it true?
  • Whatever question was brought up by linking to "Ask whether, not why".

(Given the abundance of question marks, I'm not sure how that obviously parses into "three" questions)

And what Vladimir_Nesov meant by "both" was presumably:

  • Whether
  • Why
comment by wedrifid · 2011-05-08T10:45:35.063Z · LW(p) · GW(p)

I don't see it, list the three.

No. Your own words "separately from" in between quoted sentences with question marks were more than sufficient. Making other people explain things that they should not need to explain has undesirable connotations.

There is a time to give explanations and justifications, and there is a time to decline. Consider this to be a '5 second example' of when not to 'list three'.

Replies from: loqi, Vladimir_Nesov
comment by loqi · 2011-05-10T18:34:42.727Z · LW(p) · GW(p)

Downvoted for spending more words explaining your non-response than it would have taken to just give Nesov the benefit of the doubt and be explicit.

Everyone is capable of misunderstanding trivial things, so the notion "should not need to explain" looks suspicious to me (specifically, it looks like posturing rather than honest communication). Can you explain it, or does it self-apply?

Replies from: wedrifid
comment by wedrifid · 2011-05-10T20:20:40.096Z · LW(p) · GW(p)

Downvoted for spending more words explaining your non-response than it would have taken to just give Nesov the benefit of the doubt and be explicit.

More to the point - far more words than saying absolutely nothing, which is almost always the best way to keep free of other people's games.

comment by Vladimir_Nesov · 2011-05-08T10:52:05.401Z · LW(p) · GW(p)

Making other people explain things that they should not need to explain has undesirable connotations.

In what sense should you not need to explain it? If you assume that I really do understand what you mean, but am asking for other reasons, you are incorrect. That was not a rhetorical question. The only effect I can see of not explaining in this case is that I will remain ignorant of what you meant.

(In general, I have noticed that I can often have trouble understanding things that people assume should be evident (I agree that they should be evident, in hindsight), and I need explicit guidance to get what is meant. I can understand arbitrarily difficult things, but not always easily. My intuition can have trouble noticing the obvious.)

Replies from: wedrifid
comment by wedrifid · 2011-05-08T11:00:15.730Z · LW(p) · GW(p)

Your own words "separately from" in between quoted sentences with question marks

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-08T11:05:03.993Z · LW(p) · GW(p)

Obviously the words can be different, but how do you identify or distinguish the meanings? What particular distinction are you drawing attention to?

(Also, "Your own words "separately from" in between quoted sentences with question marks were more than sufficient" suggests that I already listed three meanings myself, but again, which ones?)

For example, "What should you believe?" and "What is the truth?" are somewhat different questions, but it looks to me that these are the same for the purpose of this discussion. I don't know which distinction you allude to (This one? Probably not. Something else?). There are two questions that I listed in my comments, but also questions in the post. The questions in the post seem to map to my questions. You believe that one of them doesn't map. Which one?

(It's not even an interesting question. A simple answer would've prevented this whole sub-discussion.)

Replies from: wedrifid, wedrifid
comment by wedrifid · 2011-05-08T11:39:48.700Z · LW(p) · GW(p)

A simple answer would've prevented this whole sub-discussion.

It would have avoided the meta discussion - but that is the only part that was interesting or relevant to the thread. There is an important counterpoint to Eliezer's "ask for examples" prescription. Just like demands of "Where is your evidence?", demands of the form "Give me some examples" are often best left unanswered. They are powerful argumentative tactics regardless of whether the subject deserves them. The context, the degree of mutual respect, and expectations about the flow of the conversation matter a lot when choosing whether or not to go along with the other person's demand.

Yes, if not for the relevance to the topic at hand I would have averted the whole sub-discussion. Probably by simply ignoring the request, which is often the optimal response.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-08T11:54:42.964Z · LW(p) · GW(p)

Just like demands of "Where is your evidence?" demands of the form "Give me some examples?" are often best left unanswered.

In usual practice, there are many useful techniques that don't try to clarify the situation. But on this forum it's also possible to actually answer with similar efficiency, even if not in the expected manner - for example, "I believe absence of citable evidence is not a problem here" or "Not interesting enough for me to discuss further." That would be an actual reason, out in the open.

Replies from: wedrifid
comment by wedrifid · 2011-05-08T16:24:57.996Z · LW(p) · GW(p)

I believe absence of citable evidence is not a problem here

I like that one, and I expect I shall make use of it. Mind you, I expect it would often result in much the same response as this one, given how similar the message is.

comment by wedrifid · 2011-05-08T11:14:58.884Z · LW(p) · GW(p)

EDIT: Below may not reply to the current version of parent.

And that is an example of a response that doesn't warrant a "Do Not Reply" warning. You made your disagreement clear rather than setting bait (whether sincere or not being unimportant).

Now the subject is merely insufficiently interesting to argue about. It really doesn't matter that much in what combinations the questions are conceptually bundled in a model.

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-08T11:26:03.843Z · LW(p) · GW(p)

Below may not reply to the current version of parent.

What changed? (I guess the first paragraph refers to the conditions for having this reaction to a comment, though I'm not sure.)

comment by Vladimir_Nesov · 2011-05-08T11:23:23.508Z · LW(p) · GW(p)

And that is (edit: or rather was, prior to Vlad's edit) an example of a response that doesn't warrant a "Do Not Reply" warning. You made your disagreement clear rather than setting bait (whether sincere or not being unimportant).

And again I fail to interpret this step of the game of subtlety (from my tone-deaf perspective). There are multiple things in this paragraph that I can't interpret. (Which edit do you refer to? What is a '"Do Not Reply" warning'? Why "warning"? What disagreement? What bait?) Seriously, it's like that.

comment by TrE · 2011-05-08T19:05:26.588Z · LW(p) · GW(p)

Thank you, Vladimir, wedrifid, Cayenne.

Now, what would an exercise to train this 5-second skill look like?

Read out to a group questions of the form 'why X?', where X itself is a controversial statement for which arguments both for and against can be found. This should encourage them to think about whether X is itself true. X could be very probable, something like 'rationality is the best way of life', or something improbable. This way, the group should learn to resist the urge to rationalize while at the same time avoiding the opposite failure, namely the urge to crush every statement.

Could this work? How could one modify it?

comment by Cayenne · 2011-05-07T22:22:08.091Z · LW(p) · GW(p)

Always play devil's advocate, and really try to destroy your position?

Whenever you argue, make a point of looking up information regarding your argument, and if you find that you were mistaken about something, immediately let the other person know that you were wrong. The more certain you are about the information, the easier it should be to look up.

Think about who you know that would argue against your position, and how they would do it, and make sure that their (hypothetical) argument doesn't apply.

Make sure the null hypothesis 1) makes sense, and 2) isn't right.

Don't view an argument as a chance to be right. View it as an attempt to find facts or a useful model, or as John Maxwell IV says in another comment:

remembering that a purpose of engaging in argument is to update your map

I'm not sure how many of these things you can do reflexively, but I do look up facts as I argue, and I find that I am frequently wrong. I try not to care about being right as much as finding out something useful.

Edit - please disregard this post

comment by thomblake · 2011-05-09T23:27:01.034Z · LW(p) · GW(p)

Relevant comic: pfsc, "for you my valentine"

comment by calcsam · 2011-05-09T07:43:41.235Z · LW(p) · GW(p)

Good post. This invokes, of course, the associated problem of phrasing this in a way that might encourage listening on the other end.

comment by Louie · 2011-05-08T07:57:38.760Z · LW(p) · GW(p)

Replies from: Alicorn, Eliezer_Yudkowsky, katydee, wedrifid, Barry_Cotter, Charlie_OConnor, Desrtopa
comment by Alicorn · 2011-05-08T08:30:18.160Z · LW(p) · GW(p)

Alicorn, before you try to collect 30 karma by complaining that women get an unfair shake in my comment, why don’t you just accept that I haven’t dated any men?

Indeed I do accept that you haven't dated any men. I do not expect you to date men, and failing to date men is not unfair. So there's nothing wrong with beginning your paraphrased sample conversation with:

You’ve independently discovered the sole “logical” technique that all my ex-girlfriends have ever used to “win” arguments. Let me demonstrate:

But then you stop talking about who you've dated:

So if you’re talking to a typical man about a social situation, or you’re talking to a typical woman about anything else

If you mostly hang out with men

And for these sentences, it is not important that you have had only platonic interactions with men or that all of your romantic interactions have been with women.

If you're going to anticipate my behavior, consider spending an extra 60 seconds to do so correctly.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-09T05:05:41.168Z · LW(p) · GW(p)

When I use this technique I'm usually using it on someone who is perfectly capable of coming up with an example, and whose mind has just taken the path of least resistance; I do it to help them think and they usually think successfully, and I make suggestions if they don't. Using this to call bluffs, when I think the other person's got nothin', is a rarer practice.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-05-11T17:48:10.314Z · LW(p) · GW(p)

Using this to call bluffs, when I think the other person's got nothin', is a rarer practice.

When you do this in person, hopefully you give them as long as they'd like to respond? It is not uncommon for people to take 30 seconds or more to come up with their best example. (Oftentimes they're afraid of giving less than best, since they assume whatever examples they give will be unfairly assumed to be their best, especially if they're already modelling you as socially adversarial.) Calling bluffs is fine I guess, even if I don't like the precedent or aesthetics of "why do you believe what you believe?" being used in Status Attack Mode, but such a social maneuver should probably only be done when the bluffer gets a legitimate chance to show their hand. That basically limits it to heads-up games.

ETA: I should clarify that though Louie's point is perhaps underappreciated (most people asking for examples really are just using an annoyingly hard-to-defuse piece of rhetoric, and to many people your question will thus pattern-match to that), I don't agree with his (admittedly obviously hyperbolic, but still pretty bizarre) larger point that Eliezer/rationalists shouldn't be asking Eliezer/rationalists for concrete examples.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-14T04:14:59.313Z · LW(p) · GW(p)

When you do this in person, hopefully you give them as long as they'd like to respond?

30 seconds is actually a pretty long time in conversation, so if they don't say anything like "I'm thinking" I will have probably made a suggestion myself, or asked other questions intended to narrow things down.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-05-15T05:46:17.538Z · LW(p) · GW(p)

30 seconds is actually a pretty long time in conversation, so if they don't say anything like "I'm thinking" I will have probably made a suggestion myself, or asked other questions intended to narrow things down.

Oh ah, even better. I'm glad you're willing to do that even when you think they're bluffing. I think the fact that you're Eliezer effing Yudkowsky omgz and that you're known to judge folk quickly makes people automatically assume you're challenging them, at least in the conversations I've been around, which generally doesn't do much for their ability to think clearly. Perhaps you haven't noticed this phenomenon much in yourself, but many people drop 15 IQ points in that kind of social context. This should definitely be part of your model when meeting/interviewing nerds if it wasn't already.

I'm surprised (assuming it is the case) that you don't have much problem recalling reasons for beliefs. I don't think that's the case with many people, including Michael Vassar (though maybe my impressions are off), which makes me think it might be a quirk of your neurology. Uhm, have you gotten an fMRI or DNA sequencing etc done yet? If you did/will, would you share any interesting results? :)

Replies from: lessdazed
comment by lessdazed · 2011-05-17T08:39:16.070Z · LW(p) · GW(p)

you're known to judge folk quickly

Knowing that someone admits to judging folk quickly (and to others, no less!) hardly makes them more intimidating for me to talk to than an average person would be. I probably feel this way because I assume everyone else is judging just as quickly.

Of course, this might make the conversations in question less intimidating than most merely by increasing the intimidation of other conversations rather than decreasing the intimidation of these.

comment by katydee · 2011-05-08T09:03:00.528Z · LW(p) · GW(p)

This post comes off as weirdly aggressive, almost hectoring, and I'm not sure that you meant that. I'm not talking about the part that refers to Alicorn, either.

comment by wedrifid · 2011-05-08T09:52:41.812Z · LW(p) · GW(p)

PS - Alicorn, before you try to collect 30 karma by complaining that women get an unfair shake in my comment, why don’t you just accept that I haven’t dated any men?

When I saw this I expected to find a comment by Alicorn in a parent or sibling in which Alicorn got 30 karma by a dubious criticism of some kind. But I don't see anything in the immediate context.

When you have an issue with something, it is critical that you avoid looking petty if you hope to achieve anything.

comment by Barry_Cotter · 2011-05-08T21:52:24.349Z · LW(p) · GW(p)

This is a mistake.

Depends. If you're talking about normal social functioning, sure, this is wildly sub-optimal unless you hang out mostly with debate heads or others who put a high value on quick wit and superior argumentation skills. In any problem-solving or intellectual arena it is an awesome technique, but yeah, outside of a pretty narrow group, if you act like that normally it'll go down like a lead balloon. But presumably, as someone familiar with the Art of Charm guys, you know that most social interaction is grooming/bullshitting/vibing, where the noises you make with your mouth matter more for the way they're said than for their informational content.

On the ex-gf thing: tell them that if they have a problem, they should say so immediately or you'll assume it doesn't matter. Repeat this policy whenever such stuff comes up. Works for me.

comment by Charlie_OConnor · 2011-05-11T03:53:39.696Z · LW(p) · GW(p)

I've had similar discussions and I have found it useful to mentally (or actually on paper) tally the number of times I did the dishes and the number of times she did the dishes for a week or two.

Even though I thought I did them more and she thought she did them more, it turned out even. I was biased to remember the times I did the dishes, she was biased to remember the times she did them, and neither of us remembered the times the other person did them.

I have taken this as a lesson that examples are useful.

And as a lesson that, without examples, I should be less upset than I am.

comment by Desrtopa · 2011-05-11T04:06:41.223Z · LW(p) · GW(p)

I'm skeptical that this is something that really tends to divide along gender lines. As you say, you haven't dated any men, so are you extrapolating from a sample of men larger than yourself?

When asked for specific examples of something that has genuinely happened many times, I can usually think of at least two.