Tactical vs. Strategic Cooperation
post by sarahconstantin
As I've matured, one of the (101-level?) social skills I've come to appreciate is asking directly for the narrow, specific thing you want, instead of debating around it.
What do I mean by "debating around" an issue?
"If we don't do what I want, horrible things A, B, and C will happen!"
(This tends to degenerate into a miserable argument over how likely A, B, and C are, or a referendum on how neurotic or pessimistic I am.)
"You're such an awful person for not having done [thing I want]!"
(This tends to degenerate into a miserable argument about each other's general worth.)
"Authority Figure Bob will disapprove if we don't do [thing I want]!"
(This tends to degenerate into a miserable argument about whether we should respect Bob's authority.)
It's been astonishing to me how much better people respond if instead I just say, "I really want to do [thing I want]. Can we do that?"
No, it doesn't guarantee that you'll get your way, but it makes it a whole lot more likely. More than that, it means that when you do get into negotiation or debate, that debate stays targeted to the actual decision you're disagreeing about, instead of a global fight about anything and everything, and thus is more likely to be resolved.
Back at MetaMed, I had a coworker who believed in alternative medicine. I didn't. This caused a lot of spoken and unspoken conflict. There were global values issues at play: reason vs. emotion, logic vs. social charisma, whether her perspective on life was good or bad. I'm embarrassed to say I was rude and inappropriate. But it was coming from a well-meaning place; I didn't want any harm to come to patients from misinformation, and I was very frustrated, because I didn't see how I could prevent that outcome.
Finally, at my wit's end, I blurted out what I wanted: I wanted to have veto power over any information we sent to patients, to make sure it didn't contain any factual inaccuracies.
Guess what? She agreed instantly.
This probably should have been obvious (and I'm sure it was obvious to her.) My job was producing the research reports, while her jobs included marketing and operations. The whole point of division of labor is that we can each stick to our own tasks and not have to critique each other's entire philosophy of life, since it's not relevant to getting the company's work done as well as possible. But I was extremely inexperienced at working with people at that time.
It's not fair to your coworkers to try to alter their private beliefs. (Would you try to change their religion?) A company is an association of people who cooperate on a local task. They don't have to see eye-to-eye about everything in the world, so long as they can work out their disagreements about the task at hand.
This is a skill that "practical" people have, and "idealistic" and "theoretical" people are often weak at -- the ability to declare some issues off topic. We're trying to decide what to do in the here and now; we don't always have to turn things into a debate about underlying ethical or epistemological principles. It's not that principles don't exist (though some self-identified "pragmatic" or "practical" people are against principles per se; I don't agree with them). It's that it can be unproductive to get into debates about general principles when it takes up too much time, generates too much ill will, and isn't necessary for coming to agreement about the tactical plan of what to do next.
Well, what about longer-term, more intimate partnerships? Maybe in a strictly professional relationship you can avoid talking about politics and religion altogether, but in a closer relationship, like a marriage, you actually want to get alignment on underlying values, worldviews, and principles. My husband and I spend a ton of time talking about the diffs between our opinions, and reconciling them, until we do basically have the same worldview, seen through the lens of two different temperaments. Isn't that a counterexample to this "just debate the practical issue at hand" thing? Isn't intellectual discussion really valuable to intellectually intimate people?
Well, it's complicated. Because I've found the same trick of narrowing the scope of the argument and just asking for what I want resolves debates with my husband too.
When I find myself "debating around" a request, it's often debating in bad faith. I'm not actually trying to find out what the risks of [not what I want] are in real life, I'm trying to use talking about danger as a way to scare him into doing [what I want]. If I'm quoting an expert nutritionist to argue that we should have home-cooked family dinners, my motivation is not actually curiosity about the long-term health dangers of not eating as a family, but simply that I want family dinners and I'm throwing spaghetti at a wall hoping some pro-dinner argument will work on him. The "empirical" or "intellectual" debate is just so much rhetorical window dressing for an underlying request. And when that's going on, it's better to notice and redirect to the actual underlying desire.
Then you can get to the actual negotiation, like: what makes family dinners undesirable to you? How could we mitigate those harms? What alternatives would work for both of us?
Debating a far-mode abstraction (like "how do home eating habits affect children's long-term health?") is often an inefficient way of debating what's really a near-mode practical issue only weakly related to the abstraction (like "what kind of schedule should our household have around food?"). The far-mode abstract question still exists and might be worth getting into as well, but it may also recede dramatically in importance once you've resolved the practical issue.
One of my long-running (and interesting and mutually respectful) disagreements with my friend Michael Vassar is about the importance of local/tactical vs. global/strategic cooperation. Compared to me, he's much more likely to value getting to alignment with people on fundamental values, epistemology, and world-models. He would rather cooperate with people who share his principles but have opposite positions on object-level, near-term decisions, than people who oppose his principles but are willing to cooperate tactically with him on one-off decisions.
The reasoning for this, he told me, is simply that the long-term is long, and the short-term is short. There's a lot more value to be gained from someone who keeps actively pursuing goals aligned with yours, even when they're far away and you haven't spoken in a long time, than from someone you can persuade or incentivize to do a specific thing you want right now, but who won't be any help in the long run (or might actually oppose your long-run aims.)
This seems like fine reasoning to me, as far as it goes. I think my point of departure is that I estimate different numbers for probabilities and expected values than him. I expect to get a lot of mileage out of relatively transactional or local cooperation (e.g. donors to my organization who don't buy into all of my ideals, synagogue members who aren't intellectually rigorous but are good people to cooperate with on charity, mutual aid, or childcare). I expect getting to alignment on principles to be really hard, expensive, and unlikely to work, most of the time, for me.
Now, I think compared to most people in the world, we're both pretty far on the "long-term cooperation" side of the spectrum.
It's pretty standard advice in business books about company culture, for instance, to note that the most successful teams are more likely to have shared idealistic visions and to get along with each other as friends outside of work. Purely transactional, working-for-a-paycheck, arrangements don't really inspire excellence. You can trust strangers in competitive market systems that effectively penalize fraud, but large areas of life aren't like that, and you actually have to have pretty broad value-alignment with people to get any benefit from cooperating.
I think we'd both agree that it's unwise (and immoral, which is kind of the same thing) to try to benefit in the short term from allying with terrible people. The question is, who counts as terrible? What sorts of lapses in rigorous thinking are just normal human fallibility and which make a person seriously untrustworthy?
I'd be interested to read some discussion about when and how much it makes sense to prioritize strategic vs. tactical alliance.
Comments sorted by top scores.
comment by shminux ·
2018-08-12T23:56:04.158Z
It's not immediately obvious when it is best to request a specific thing, and when it is best to foster an agreement about the general principles that informed your need for the thing. Both have their place, and it's good to mentally check in each particular situation which seems like a better option, instead of sticking to one pattern without thinking.
comment by deluks917 ·
2018-08-16T20:35:47.944Z
The question is, who counts as terrible? What sorts of lapses in rigorous thinking are just normal human fallibility and which make a person seriously untrustworthy?
If at all possible you need to look at the person's actual track record. Everyone has views you will find incredibly stupid or immoral. Even the very wise make mistakes that look obvious to us. In addition, it's possible that the person engaging in 'obvious folly' actually has a better understanding of the situation than we do. You need to look at a representative sample and weigh their successes and failures in a systematic way. If you cannot access their history you still need to get an actual sample. If you were judging programmers, something like a Triplebyte interview is a reasonable way to get info. Trying to weigh the stupid things they have said about programming is a very bad method. Without a real sample you are making a character judgement under huge uncertainty.
Of course we are Bayesians. If forced to come up with an estimate despite uncertainty, we can do it. But it's important to do the updating correctly. Say a person's stupidest belief, that you know about, is X. The relevant odds ratio is not:
P(believes X | trustworthy) / P(believes X | untrustworthy)
Instead you have to look at:
P(stupidest belief I learn about is at least as stupid as X | trustworthy) / P(stupidest belief I learn about is at least as stupid as X | untrustworthy)
You can try to estimate similar odds ratios for collections of stupid beliefs. This method isn't as good as conditioning on both unusually wise and unusually stupid beliefs. But if you are going to judge based on stupid beliefs, you have to do it correctly. Keep in mind that the more 'open' a person is, the more likely you are to learn their stupid beliefs. So you need to factor in an estimate of their openness towards you.
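The odds-form update described above can be sketched in a few lines. This is a minimal illustration, not part of the original comment: the function name and all the probabilities are made-up placeholders, chosen only to show the mechanics of updating on "the stupidest belief I learned about is at least as stupid as X" rather than on "believes X" directly.

```python
def posterior_odds(prior_odds, p_obs_if_trustworthy, p_obs_if_untrustworthy):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_obs_if_trustworthy / p_obs_if_untrustworthy)

# Hypothetical numbers: even trustworthy people fairly often reveal a belief
# this stupid (0.3); untrustworthy people almost always do (0.9).
# Prior odds of 4.0 means we initially think trustworthy is 4x as likely.
odds = posterior_odds(prior_odds=4.0,
                      p_obs_if_trustworthy=0.3,
                      p_obs_if_untrustworthy=0.9)
prob = odds / (1 + odds)  # convert posterior odds back to a probability
```

Note that a more "open" person reveals stupid beliefs more readily, which raises `p_obs_if_trustworthy` and so softens the downward update, matching the comment's point about factoring in openness.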
comment by Vladimir_Nesov ·
2018-08-13T07:28:10.615Z
The difference between getting what you want and sorting out a topic in the long term is also about social norms and habits of thought. It's often easier to get what you want by asking, but then you'll only get what you want and maybe become less effective at changing the underlying reasoning that caused you to want this thing or prevented others from giving it to you.
To be a little more specific: I worry that asking this way aligns with the norm of following directions rather than one's own understanding, shifting priority away from developing this understanding in myself and others. Of course any advice has an equal and opposite counteradvice, but my guess is that the popular inclination is to treat the symptoms.
comment by adamzerner ·
2018-08-24T21:47:12.842Z
I wonder why people have the instinct to "debate around" rather than just asking for what they want. I get the sense that understanding the why will really help to get ourselves to stop doing it.
Maybe because our instinct is to think, "Well, if they don't agree with me in the more theoretical sense, why would they do the thing I want?" Eg. if your husband doesn't agree with you about the value of home-cooked family dinners, why would he agree to your request to have them?
Well, perhaps he doesn't mind having home-cooked family dinners. Or perhaps he just senses that your preferences are larger than his, and wants to make you happy. In either case, his thinking is roughly, "Sure, we can do that. I don't agree with you about why, but it's not a big deal to me and I don't mind doing it at all."
I sense that enough positive reinforcement could address this. Eg. "I make requests of people to do things even though they disagree with me all the time, and many times it works!"
Or maybe it has to do with our tendency to get dragged in to arguments. Eg. you start off mentioning that you think home-cooked family dinners are valuable, with the intention to follow up by asking if you can have them more often. But your husband disagrees, and your instinct is to explain why you disagree with his disagreement. Which leads to a rabbit hole. Which perhaps leads you to forget to ask, "well could we just do this even though you don't agree". Or perhaps it feels uncomfortable to make the request after he clearly disagrees.
I guess one way to address this is to try to establish some sort of TAP of "Disagreement --> This might be a rabbit hole. Do I have any concrete requests to make before we get going?"
comment by Elo ·
2018-08-14T01:59:07.579Z
Drafting edits. I trust that you can see past the basic "I'm being attacked" feeling and can recognise the effort and time that has gone into the comments.
Replies from: rk, philh
↑ comment by rk ·
2018-08-14T19:11:17.315Z
I feel a pull towards downvoting this. I am not going to, because I think this was posted in good faith, and as you say, it's clear a lot of time and effort has gone into these comments. That said, I'd like to unpack my reaction a bit. It may be you disagree with my take, but it may also be there's something useful in it.
[EDIT: I should disclaim that my reaction may be biased from having recently received an aggressive comment.]
First, I should note that I don't know why you did these edits. Did sarahconstantin ask you to? Did you think a good post was being lost behind poor presentation? Is it to spell out your other comment in more detail? Knowing the answer to this might have changed my reaction.
My most important concern is why this feedback was public. The only (charitable) reason I can think of is to give space for pushback of the kind that I am giving.
My other major concern is presentation. The sentence 'I trust that you can see past the basic "I'm being attacked" feeling and can recognise the effort and time that has gone into the comments' felt to me like a status move: potentially upsetting someone and then asking them to say thank you.
Replies from: Elo
↑ comment by Elo ·
2018-08-14T20:14:10.142Z
I want this level of feedback culture to be more common. I want every writer to be able to grow from an in-depth pulling apart of their words and putting them back together. Quality writing comes from iteration, often on small details like the hedges, the examples, and the flow.
I wanted to be blatant about the effort not being ignored. I'd be sad if the comments were barely read. I was hoping to show awareness of myself in doing what might have been taken as a jerk move (twice if you include the comment).
I don't know how to do the blatant thing without words, and my other option of post without comment didn't have the same effect.
↑ comment by philh ·
2018-08-15T14:43:40.490Z
I suggest that you only make comments like this for people who've opted in.
Regardless of that, I request that you not make comments like this on anything I post.
(I had written reasons for these, but in the spirit of the post, I'm defaulting to not including them.)
Replies from: Elo
comment by Elo ·
2018-08-12T19:26:48.359Z
Great technique. I'm confused about why you wrote the article in your old style of "debate around" instead of your new style of "say it like it is". Since that's what you want to promote. What would this article look like in the new style?
Replies from: sarahconstantin
↑ comment by sarahconstantin ·
2018-08-12T20:54:48.703Z
I'm not actually asking for people to do a thing for me, at this point. I think the closest to a request I have here is "please discuss the general topic and help me think about how to apply or fix these thoughts."
I don't think all communication is about requests (that's a kind of straw-NVC), only that when you are making a request it's often easier to get what you want by asking than by indirectly pressuring.
Replies from: Elo
↑ comment by Elo ·
2018-08-12T21:18:27.818Z
The salient part for me in your post was:
Start with the request - don't build the argument case. If necessary, come back with layers of consideration, validation, justification. Explain in detail, expound considerations and concerns. But start with the request and trust the other party to listen to your request or seek justification if they need it.
Trust is hard. But I'm in support of your post.
Replies from: SaidAchmiz
comment by Elo ·
2018-08-14T02:41:20.103Z
Tactical vs. Strategic Cooperation
How important is local & tactical vs. global & strategic cooperation? Is it more important to get alignment with people on fundamental beliefs, epistemology, and world-models? Or is it more important to cooperate on a local task in the present?
From the outside, a “company” is an example of a group of people cooperating on a local task. They don’t have to see eye-to-eye about everything in the world, so long as they can work out their disagreements about the task at hand.
From the inside of a company - it’s standard advice in books about company culture that the most successful teams are more likely to have shared idealistic visions and to get along with each other as friends outside of work. Purely transactional, working-for-a-paycheck arrangements don’t tend to inspire excellent work.
I expect getting to alignment on principles to be really hard, expensive, and unlikely to work for me. (It would be valuable to ask myself why that is, but not right now.) I expect to get a lot of mileage out of relatively transactional or local cooperation. For example, donors to my organisation who don’t buy into all of my ideals, but are willing to donate anyway.
I used to approach my interpersonal “wants”, by “debating around” the issue.
For example I might have said:
- “If we don’t do what I want, horrible things A, B, and C will happen!”
This degenerates into a miserable argument over how likely A, B, and C are, or a referendum on how neurotic or pessimistic I am.
- “You’re such an awful person for not having done [thing I want]!”
This degenerates into a miserable argument about each other’s general worth.
- “Authority Figure Bob will disapprove if we don’t do [thing I want]!”
This degenerates into a miserable argument about whether we should respect Bob’s authority.
Back at MetaMed, I had a coworker who believed in alternative medicine. I didn’t, and I was very frustrated: I didn’t want any harm to come to patients from misinformation, and I couldn’t see how to prevent it. This caused a lot of conflict, both spoken and unspoken. There were global values issues in the debate: reason vs. emotion, logic vs. social charisma, whose perspective on life was good or bad. I’m aware now, and embarrassed to say, that I came across as rude and inappropriate, even though I was coming from a well-meaning place.
Finally, at my wit’s end, I blurted out what I wanted: I wanted to have veto power over information we sent to patients, to make sure it didn’t contain any factual inaccuracies.
She agreed instantly.
Turns out, people respond better if I just say, “I really want [thing I want]. Can we do that?”
This phrase doesn’t guarantee I’ll get my way. It does mean that when I do get into a negotiation, that discussion stays targeted to the decision I disagree about, not the justification points.
I was aiming for global alignment on local issues when I should have aimed for simple agreement.
Narrowing the scope of the request and being clear about what I want resolves debates with my husband too. If I catch myself quoting an expert nutritionist to argue that we should have home-cooked family dinners, my motivation is not really curiosity about the long-term health dangers of not eating as a family, but rather that I want family dinners and I’m throwing spaghetti at a wall hoping some pro-dinner argument will work. The “empirical” or “intellectual” debate is rhetorical window dressing for my underlying request. When I notice that going on, it’s better to redirect to the actual underlying desire.
Then I can get to the actual negotiation. What makes family dinners undesirable to you? How could we balance those needs? What alternatives would work for both of us?
My friend Michael Vassar prefers getting to alignment with people on fundamentals first. Epistemics, beliefs, values. He believes there’s a lot more value to be gained from someone who keeps actively pursuing goals aligned with yours, even when they’re far away and you haven’t spoken in a long time, than from someone you can persuade to do a specific action you want right now, but who will need further persuading in the future.
Assuming humans divide into two rough categories, “the great people who have value alignment with me” and “the terrible people who don’t have value alignment with me”, I think both Michael and myself would agree it’s unwise to benefit in the short term from allying with terrible people. The question is, who counts as terrible? Which value misalignments are acceptable human fallibility and which lapses in rigorous thinking make a human seriously untrustworthy?
I’d be interested to read some discussion about when and how much it makes sense to prioritise long- vs. short-term alliance.
Replies from: habryka4
↑ comment by habryka (habryka4) ·
2018-08-14T17:46:25.848Z
Upvoted for the effort, and since it helped me get value out of the post, but I do think many authors can feel violated when they see other people significantly edit their writing and then publish it like this. I have some underlying models here that I might be able to explicate, but for now I just wanted to state the high-level outputs of my thinking.
Replies from: SaidAchmiz