"You can't possibly succeed without [My Pet Issue]"

post by Raemon · 2019-12-19T01:12:15.502Z · LW · GW · 14 comments

There's a particular conversational move that I've noticed people making over the past couple of years. I've also noticed myself making it. The move goes:

"You can't possibly succeed without X", where X is whatever principle the person is arguing for. 

(Where "succeed" means "have a functioning rationality community / have a functioning organization / solve friendly AI / etc")

This is not always false. But, I am pretty suspicious of this move. 

(I've seen it from people with a variety of worldviews. This is not a dig at any one particular faction in local politics. And again, I do this myself.)

When I make the move, my current introspective TAP goes something like: "Hmm. Okay, is this actually true? Is it impossible to succeed without my pet-issue-of-the-day? Upon reflection, obviously not. I legit think it's harder. There's a reason I started caring about my pet issue in the first place. But 'impossible' is a word that was clearly generated by my political rationalization mind. How much harder is it, exactly? Why do I believe that?"

In general, there are incentives (and cognitive biases) to exaggerate the importance of your plans. I think this is partly for political reasons, and partly for motivational reasons [LW · GW] – it's hard to get excited enough about your own plans if you don't believe they'll have outsized effects. (A smaller version of this, common on my web development team, is someone saying "if we just implemented Feature X we'd get a 20% improvement on Metric Y", when the actual result was, like, a 2% improvement. It was still worth it, but the 20% figure was clearly ridiculous.)

"It's impossible" is an easier yellow-flag to notice than "my numbers are bigger than what other people think are reasonable". But in both cases, I think it's a useful thing to train yourself to notice, and I think "try to build an explicit quantitative model" is a good immune response. Sometimes the thing is actually impossible, and your model checks out. But I'm willing to bet if you're bringing this up in a social context where you think an abstract principle is at stake, it's probably wrong. 

14 comments

Comments sorted by top scores.

comment by Charlie Steiner · 2019-12-19T05:46:08.623Z · LW(p) · GW(p)

You know, I think my favorite thing about internet rationalists is when they notice a bias and go "I wonder if I can notice this in myself to avoid being wrong" rather than "How can I use this to win arguments about current hot topics."

Replies from: Evan Rysdam
comment by Sunny from QAD (Evan Rysdam) · 2019-12-21T05:36:26.676Z · LW(p) · GW(p)

I automatically admire anybody whose first thought when encountering a new bias is to search for it in themselves.

Replies from: Raemon
comment by Raemon · 2019-12-21T05:39:50.825Z · LW(p) · GW(p)

I should probably be clear that my first thought was to complain about it, and my second thought was to improve my own habits.

...

...possibly fourth thought

comment by Bendini (bendini) · 2019-12-19T01:48:50.226Z · LW(p) · GW(p)

I have no trouble believing that this is a common thing to hear if you're in a position of power, but what about situations where this is correct? After all, if it were never correct, people would never find it persuasive.

Are there any heuristics you use to figure out when this is likely to be true?

Replies from: Raemon, Raemon
comment by Raemon · 2019-12-19T02:05:25.184Z · LW(p) · GW(p)

(updated post to be a bit more clear about this)

comment by Raemon · 2019-12-19T01:59:44.866Z · LW(p) · GW(p)

Nod. The suggested TAP of "build an actual model if you don't have one", or "double-check your model" (if you do), isn't meant to output "the statement is never true", just that you should check that you have a clear reason to believe it's true.

It hasn't been true the times I've noticed myself saying it. 

I think it's more likely to be true in physical-system setups, where, like, your engine literally won't run if it doesn't have the right kind of fuel or whatever. 

I think some instances have been a person posing a mathematical formalism and saying 'this must be true', and it was true in the mathematical example but not, AFAICT, in the real-world analogue. (In these cases there's some kind of Law/Toolbox [LW · GW] conflation.)

Replies from: bendini
comment by Bendini (bendini) · 2019-12-19T02:22:26.242Z · LW(p) · GW(p)

Ah.

My first reaction was thinking of a few scenarios that were analogous to the original framing, one example being "if it takes you years to coordinate the local removal of [obvious abuser], why do you think you will be able to coordinate safe AI development on a global scale?"

This isn't a pet issue of mine, but I suspect it is important to be able to say things like this. I guess my overall view is that crystallising this pattern might be putting duct tape over a more structural problem.

Replies from: Raemon
comment by Raemon · 2019-12-19T02:47:12.031Z · LW(p) · GW(p)

Recent motivating examples have been of the form "we can't possibly form good models and coordinate without X", to which I thought "WHAT!? X harms Y, and we can't possibly form good models and coordinate without Y". And it took me a while to realize I was doing the same behavior that was annoying me.

(I think the answer is that often you need a deep understanding of both the Rock and the Hard Place [LW(p) · GW(p)] before you can, hopefully, eventually, just eliminate the problem entirely [LW · GW])

Replies from: bendini
comment by Bendini (bendini) · 2019-12-19T03:44:47.842Z · LW(p) · GW(p)

I don't disagree with that, but I do think one reason we find it difficult to form good models and coordinate is that there's an insane norm of only ever talking about issues in abstract terms like X and Y. Maybe the issue in question here is super sensitive, since I have no idea what you are talking about, but "raising awareness of general patterns" often seems to be used as a (mostly subconscious) justification for avoiding the object level because it might make someone important look bad.

Replies from: Raemon
comment by Raemon · 2019-12-19T04:03:42.860Z · LW(p) · GW(p)

Usually when I'm avoiding addressing the object level, it's because

a) I'm engaging with someone I consider to be in roughly the same stratum of social status and position of power as me, and

b) I just don't want to get into that particular object-level debate right now (either because it's exhausting or distracting).

I think a notable exception is Healthy Competition [LW · GW], where I am in fact avoiding directly critiquing powers that be. I have a cluster of reasons I could point to there with varying degrees of virtuousness, but the unvirtuous ones are definitely there.

Replies from: TurnTrout, bendini
comment by TurnTrout · 2019-12-19T05:23:58.953Z · LW(p) · GW(p)

I think it might be worth having an example-generating TAP here instead. Instead of weighing "weigh in on the sensitive / exhausting debate" against "say things like 'X affects Y in a double-causal-backflip-Goodhart manner'", one could just generate another concrete example?

Replies from: Raemon
comment by Raemon · 2019-12-19T05:30:13.075Z · LW(p) · GW(p)

I agree examples are good, but generating good ones is often fairly hard (and is the difference between a post I could rattle off in 30 minutes and one that'll take several hours).

Replies from: TurnTrout
comment by TurnTrout · 2019-12-19T17:04:07.012Z · LW(p) · GW(p)

I guess it just doesn't seem like examples should take that long? I also think that really good examples might account for a good part of the value in a few cases, but that's just a hunch.

comment by Bendini (bendini) · 2019-12-19T04:18:40.745Z · LW(p) · GW(p)

For what it's worth, I think that post made the right tradeoff. There will probably be some people who glossed over it due to the lack of examples, but I think that was an acceptable price to pay.

What I'm referring to is when the community does this by default, not when the author has explicitly weighed up the pros and cons. Not wanting to get into an issue is okay in isolation, but when everyone does this it impedes the flow of information in ways that make it even more difficult to avoid talking past each other.