Strawman Yourself

post by katydee · 2014-05-18T05:28:05.423Z · LW · GW · Legacy · 33 comments

One good way to ensure that your plans are robust is to strawman yourself. Look at your plan in the most critical, contemptuous light possible and come up with the obvious uncharitable insulting argument for why you will fail.

In many cases, the obvious uncharitable insulting argument will still be fundamentally correct.

If it is, your plan probably needs work. This technique seems to work not because it taps into some secret vault of wisdom (after all, making fun of things is easy), but because it is an elegant way to shift yourself into a critical mindset.

For instance, I recently came up with a complex plan to achieve one of my goals. Then I strawmanned myself; the strawman version of why this plan would fail was simply "large and complicated plans don't work." I thought about that for a moment, concluded "yep, large and complicated plans don't work," and came up with a simple, elegant plan to achieve the same ends.

You may ask "why didn't you just come up with a simple, elegant plan in the first place?" The answer is that elegance is hard. It's easier to add on special case after special case, not realizing how much complexity debt you've added. Strawmanning yourself is one way to safeguard against this risk, as well as many others.

33 comments


comment by palladias · 2014-05-18T16:16:40.550Z · LW(p) · GW(p)

Be a little careful about what kinds of arguments come to mind. If I run "look for the most obvious uncharitable insulting argument" on one of my current projects (writing a book), my oh-so-helpful brain immediately returns "It's arrogant for you to write a book. Also you can't finish a project this long. You're going to hit a hurdle and all the work will be for nothing!"

This, despite the fact that I've got a contract, my editor liked the first three chapters I sent, I'm over halfway to the finish, I look to be on track for my deadline, and I've set aside a month of nothing but this at the end.

I'd suggest just being slightly more suspicious of insulting arguments that make claims about your character sucking (immutably) than ones about the way you've laid out the plan.

I do like this comment from "Die Vampire Die" as a countermeasure:

Who do you think you’re kidding?
You look like a fool.
No matter how hard you try, you’ll never be good enough.
Why is it that if some dude walked up to me on the subway platform and said these things, I'd think he was a mentally ill asshole, but if the vampire inside my head says it, it's the voice of reason?

Replies from: Error, Kawoomba, eggman, buybuydandavis, someonewrongonthenet, katydee
comment by Error · 2014-05-19T16:47:25.559Z · LW(p) · GW(p)

my oh-so-helpful brain immediately returns "It's arrogant for you to write a book."

Ugh. I get this sort of thing when writing, too, and I hate it. For blog posts it comes out as "the insights you think are revelatory are actually banal and somewhat pathetic, and you're embarrassing yourself by presenting them as heartfelt knowledge. You're trying to signal wisdom (and lying to yourself about that, incidentally), but you're actually signaling a contemptible desperation for validation. Any halfway intelligent reader is going to smell that desperation, like it's roadkill of Pepe le Pew."

For fiction it's more like "the scene you think is tense and gripping is actually made of grade A Narm that you can't see. The one you think is touching is really teenage angsty melodrama." The bit about transparent reaching for validation stays the same, though.

Who do you think you’re kidding?
You look like a fool.

Yeah, that's pretty close to it. Die, vampire, die.

Replies from: FeepingCreature
comment by FeepingCreature · 2014-05-19T17:35:05.924Z · LW(p) · GW(p)

"the insights you think are revelatory are actually banal and somewhat pathetic, and you're embarrassing yourself by presenting them as heartfelt knowledge. "

Let me try to defeat this.

I have learnt something.

That means I didn't know, and then I put in effort, and now I know.

Previously, I did not know; it is highly unlikely that I am the only person who did not know.

Even though it's obvious in hindsight, I still value having the knowledge; it is highly unlikely that I am the only person to value it.

I spent n minutes acquiring this knowledge; barring better data about others I should expect this time to be about average.

Reading this article will take t minutes where t < n.

Articles get written once and read many times; thus, my investment of effort is a net social good. (The net time value is (n - t) * readers - n: time saved minus time expended.)
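
A minimal sketch of that arithmetic in Python, with made-up values for n, t, and the reader count (all hypothetical, chosen only to show the sign of the result):

```python
# Hypothetical illustration of the net-time-value argument above.
n = 60        # minutes spent acquiring the knowledge (assumed)
t = 10        # minutes it takes to read the article (assumed)
readers = 50  # hypothetical number of readers

# Each reader saves (n - t) minutes; the author spent n minutes up front.
net_minutes = (n - t) * readers - n
print(net_minutes)  # 2440 minutes of net social good under these assumptions
```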

PS: this gives you license to spam your article everywhere. You're committing a social good!

tl;dr: the corollary of "You are not a special snowflake" is "you are not alone".

Replies from: Error
comment by Error · 2014-05-19T20:11:41.348Z · LW(p) · GW(p)

Let me try to defeat this.

When I run that through my own mind, it spits out an accusation of motivated reasoning. Thanks for the attempt though.

comment by Kawoomba · 2014-05-18T16:23:43.485Z · LW(p) · GW(p)

In fairness, whatever lives inside your head probably knows you better than some dude.

Replies from: shminux
comment by shminux · 2014-05-18T20:12:20.236Z · LW(p) · GW(p)

In fairness, whatever lives inside your head probably knows you better than some dude.

Probably false for an average person.

comment by eggman · 2014-05-21T03:49:55.403Z · LW(p) · GW(p)

I'd suggest just being slightly more suspicious of insulting arguments that make claims about your character sucking (immutably) than ones about the way you've laid out the plan.

It seems katydee may have made a mistake in choice of language here by conflating "yourself" with "your plans". To nitpick, it might be better to consistently refer to the thing being strawmanned as "your plan(s)" and not use "you" at all. If one wants to generate an argument that points out flaws in one's own plans, strawmanning oneself is like launching an ad hominem attack upon oneself. When somebody is looking to improve only one plan targeted at a (very) specific goal, strawmanning the plan rather than their own character would seem to illuminate the relevant flaws better.

Of course, if somebody wants to prevent mistakes across a big chunk of their life, or in their general template for plans, then strawmanning their own character might be more worthwhile.

comment by buybuydandavis · 2014-05-18T23:49:00.904Z · LW(p) · GW(p)

We are much more sadistic to ourselves than we'd ever be with others.

comment by someonewrongonthenet · 2014-05-19T17:44:47.814Z · LW(p) · GW(p)

Okay, the internal critic is officially prohibited from ad hominems.

comment by katydee · 2014-05-18T17:03:23.123Z · LW(p) · GW(p)

I agree that "it's arrogant for you to write a book" is probably not helpful, though "you can't finish a project this long" may or may not be helpful depending on whether you generate that thanks to reference class forecasting (even insulting, biased reference class forecasting) or thanks to negative self-image issues.

In general, I do not advocate this (or any other) technique if it causes damage to your self-concept, intrusive thoughts, etc.

Replies from: palladias
comment by palladias · 2014-05-18T17:55:51.918Z · LW(p) · GW(p)

The problem with "You can't finish a project this long" is that is doesn't come with a reason like "You haven't set aside enough time" or "Planning fallacy!" or "You'll have to trade off against more worthwhile use of your time" which are all useful to address. I'm describing a kind of thought that doesn't feel like troubleshooting but more like anti-self efficacy, where the problem isn't the plan, it's that the plan has you in it.

I like pre-mortems, the outside view, etc., so I'm not denigrating the technique, just flagging an error mode.

comment by Tenoke · 2014-05-18T12:05:50.611Z · LW(p) · GW(p)

Sounds like a lesser version of Kahneman's pre-mortem with a different name (which I don't see as too appropriate, to be honest, since this is not really what a strawman is).

Replies from: katydee
comment by katydee · 2014-05-18T15:54:39.658Z · LW(p) · GW(p)

I actually find it more effective than the pre-mortem (and the closely related pre-hindsight technique I learned at CFAR). While those techniques are certainly effective, I think it's easy to be too charitable to oneself, even in failure. This version has explicit safeguards against that possibility.

As for the name, certainly it is not fully accurate. That said, being memorable and salient is quite important. The primary failure mode of this sort of technique is not remembering it in the heat of the moment, so I selected a name optimized for being shocking and memorable rather than fully accurate.

comment by Nectanebo · 2014-05-18T07:21:04.783Z · LW(p) · GW(p)

I'm going to need some more examples; this sounds like it could be something, but I'm not seeing how I could actually apply the concept to a situation.

Replies from: Metus
comment by Metus · 2014-05-18T07:56:30.020Z · LW(p) · GW(p)

As I read it, this is an instance of, or very closely related to, taking the outside view, though not by reflecting on how a neutral observer would see your thoughts but by ridiculing yourself.

For example, I extended stereotypes about behaviour to myself: "Of course you only need to use big words if the whole point of your argument is to confuse your conversation partner." This made me realise that I use big words too much in everyday speech. Obviously the quality of an argument is independent of its presentation, yet the presentation still matters.

Another example is when I wanted to change a specific behaviour with the same approach that had failed in the past, telling myself that this time it would be different. The phrase "once a liar, always a liar" came to mind, and I was motivated to try a different approach.

Replies from: jimmy
comment by jimmy · 2014-05-19T02:39:36.868Z · LW(p) · GW(p)

This one is fun. I think of it as self-caricaturing: not trying to come up with a "straw man" exactly, but rather a comical/cynical oversimplification that still kinda fits. The goal is to compress it as much as possible while still having it be as accurate as possible. The more you can compress without losing much (the fewer nuances you lose), the more this self-caricature tells you about yourself.

For example, I sometimes joke that I'm a redneck that doesn't want to believe he's a closeted hippie. It's a quite lossy caricature, obviously, but every time I do something consistent with that caricature it gives me an opportunity to examine whether I'm merely doing the cached self thing or whether I can find the underlying heuristic (and notice whether it's any good).

Another neat use is to signal that you're self-aware. People will try to fit you into their caricatures, and if you can lampshade this, they'll perk up and listen for how you don't fit the caricature. After all, if that's exactly what you were, how could you be aware of how it looks and laugh about it?

comment by buybuydandavis · 2014-05-18T23:46:56.253Z · LW(p) · GW(p)

Look at your plan in the most critical, contemptuous light possible and come up with the obvious uncharitable insulting argument for why you will fail.

Seems like an idea, but as I already see a million faults with my plans, and end up paralyzed thereby, I don't think this would help me out much.

My problem is generally more one of turning off the criticism than turning it on.

What you're suggesting is what I used to call self sadism, and I've concluded that it was decidedly unhelpful and unhealthy for me.

I've moved on from self sadism to Big Brother. Channel a person older and wiser than you, who wants to help you, and will review your plan. I think one key is to get your ego out of the evaluation of the plan and its effects. Anyone else's problems are easy to solve; it's only our own problems that we're morons about, because they're so tied up in our egos and self-image.

Naturally, doing this with a real "Big Brother" would be better still. Having people you trust and can talk to is a good thing. Our own perspectives are quite limited, and particularly distorted when it comes to ourselves.

Replies from: katydee
comment by katydee · 2014-05-19T13:13:36.565Z · LW(p) · GW(p)

If you can reliably emulate a wiser person, why not just be the wiser person?

Replies from: Richard_Kennaway, buybuydandavis
comment by Richard_Kennaway · 2014-05-19T14:01:56.920Z · LW(p) · GW(p)

If you can reliably emulate a wiser person, why not just be the wiser person?

Being wise is the goal; emulation is a method of approaching it.

comment by buybuydandavis · 2014-05-20T05:43:07.463Z · LW(p) · GW(p)

I think you're missing my point.

I'm saying it's easier to be wise about someone else's problems than your own. My "Big Brother" need not be wiser than me, just wiser than me about my problems.

comment by brazil84 · 2014-05-18T21:33:55.848Z · LW(p) · GW(p)

For me "strawmanning" means responding to a position which is imagined or made up -- as opposed to responding to the person's actual position. So if your plan really was complex, then (by my definition) you weren't really strawmanning yourself.

Of course you are free to define "strawmanning" any way you like but I think my definition is the more commonly used and accepted one.

Wouldn't it be easier to just write down and then assess/ameliorate the biggest weaknesses in your plans?

Replies from: katydee
comment by katydee · 2014-05-19T01:43:29.720Z · LW(p) · GW(p)

Wouldn't it be easier to just write down and then assess/ameliorate the biggest weaknesses in your plans?

In theory, yes; in practice, this seems to work less effectively than we'd like to think.

Replies from: brazil84
comment by brazil84 · 2014-05-19T08:31:22.537Z · LW(p) · GW(p)

In theory, yes; in practice, this seems to work less effectively than we'd like to think.

Are there studies on this? Or is it your personal observations? If it's the latter, what works best for you?

comment by [deleted] · 2014-05-18T20:27:47.702Z · LW(p) · GW(p)

This is going to be extremely hard to do well for anyone who struggles with negative self-concepts; the negativity could be extremely demotivating.

I'd suggest extending it by writing every separate statement down on an index card. Take the index cards at the end that are just negative (like "it's arrogant you think you can write a book" above) and burn them. Literally.

Then take the index cards where your internal Less Wrong commenter has a point, and do what they say. ("You'll never be finished in two months if you haven't started yet" is an example from the book case above that might be productive.)

The catharsis of burning the pure negativity will be helpful in preventing you from lapsing into a negative affect spiral about yourself, and then you'll get the benefits of this technique, which I think has a lot of potential.

Anyone who comments on Less Wrong knows it's much easier to tear something apart in comments than it is to write a good post. This is just a neat hack on top of that.

comment by Creutzer · 2014-05-18T23:37:21.383Z · LW(p) · GW(p)

For instance, I recently came up with a complex plan to achieve one of my goals. Then I strawmanned myself; the strawman version of why this plan would fail was simply "large and complicated plans don't work." I thought about that for a moment, concluded "yep, large and complicated plans don't work," and came up with a simple, elegant plan to achieve the same ends.

You even mention an example and then still fail to actually give it. That annoyed me because it would have been nice to see this abstract idea grounded.

Replies from: katydee
comment by katydee · 2014-05-19T01:41:44.204Z · LW(p) · GW(p)

You even mention an example and then still fail to actually give it. That annoyed me because it would have been nice to see this abstract idea grounded.

In general I think LessWrong cares far too much about this sort of detail. I posted this in Discussion rather than Main precisely because I didn't want to write up a bunch of examples to express a straightforward principle.

Replies from: Creutzer
comment by Creutzer · 2014-05-19T06:17:12.745Z · LW(p) · GW(p)

Thing is, it's not straightforward to me what exactly I have to imagine, and I don't seem to be alone in this. (See e.g. Nectanebo's comment above.) In general, the established practice on LessWrong is to give examples to illustrate what you mean, and I disagree that this is "caring far too much about that sort of detail".

Replies from: katydee
comment by katydee · 2014-05-19T13:32:24.802Z · LW(p) · GW(p)

This approach 80/20s the point I want to convey. Writing up a bunch of examples is more work than the entire rest of the post combined and IMO adds substantially less utility, so I'm not doing it here. I'll probably do so when/if I write this up for Main.

comment by [deleted] · 2014-05-18T20:22:07.952Z · LW(p) · GW(p)

In many cases, the obvious uncharitable insulting argument will still be fundamentally correct.

Is this intended to be an empirical claim?

Replies from: katydee
comment by katydee · 2014-05-18T20:26:28.040Z · LW(p) · GW(p)

Is this intended to be an empirical claim?

I'm confused as to what thought process generated this comment. Can you explain?

Replies from: None
comment by [deleted] · 2014-05-18T20:40:19.541Z · LW(p) · GW(p)

Sure. The piece of advice being offered is a potent one: "Any time you come up with a significant plan, assume the worst about your own planning and your own performance. Specifically, reformulate the plan and your expectations of its execution in terms you would find insulting." If someone were to take this seriously, it would dramatically change the way they live. Much in the same way that the corresponding epistemic pessimism is supposed to, and I assume that's your intention.

So, again, potent medicine. At the heart of your argument for doing this is what looks to be an empirical claim: "In general, following this advice will give you an accurate picture of your probable performance." This may be so, but it seems likely to me that the relationship between expected and actual performance will vary significantly from person to person. And if in a given case your claim is off the mark, the effect could easily be harmful.

So, I wondered if you considered this an empirical claim. And if so, whether or not you had some relevant data. If you don't have any relevant data, and you do consider this an empirical claim, then it may be a bit rash to offer this kind of advice.

Replies from: katydee
comment by katydee · 2014-05-19T01:47:12.421Z · LW(p) · GW(p)

Yes, I consider this an empirical claim. I have a fair amount of anecdata from people I've shared this with in person about this being a useful approach.

That said, I agree that some may not find this effective or will find it harmful; this is why I wrote "in many cases" rather than "in almost all cases" or "you will find" or similar.

If you do not find this technique effective, I suggest that you don't practice it. I and a few friends found it useful and interesting enough to be worth disseminating.

comment by David_Gerard · 2014-05-21T07:41:34.843Z · LW(p) · GW(p)

This is an excellent idea and a necessary step in just about anything important you're going to present to the public. You do need to anticipate objections, even if you don't answer them right there in your original presentation. And yes, this is a great way to make your plans more robust on a deeper level.