Karate Kid and Realistic Expectations for Disagreement Resolution
post by Raemon · 2019-12-04T23:25:59.608Z · LW · GW · 23 comments
There’s an essay that periodically feels deeply relevant to a situation:
Someday I want to write a self-help book titled “F*k The Karate Kid: Why Life is So Much Harder Than We Think”.
Look at any movie with a training montage: The main character is very bad at something, then there is a sequence in the middle of the film set to upbeat music that shows him practicing. When it's done, he's an expert.
It seems so obvious that it actually feels insulting to point it out. But it's not obvious. Every adult I know--or at least the ones who are depressed--continually suffers from something like sticker shock (that is, when you go shopping for something for the first time and are shocked to find it costs way, way more than you thought). Only it's with effort. It's Effort Shock.
We have a vague idea in our head of the "price" of certain accomplishments, how difficult it should be to get a degree, or succeed at a job, or stay in shape, or raise a kid, or build a house. And that vague idea is almost always catastrophically wrong.
Accomplishing worthwhile things isn't just a little harder than people think; it's 10 or 20 times harder. Like losing weight. You make yourself miserable for six months and find yourself down a whopping four pounds. Let yourself go at a single all-you-can-eat buffet and you've gained it all back.
So, people bail on diets. Not just because they're harder than they expected, but because they're so much harder it seems unfair, almost criminally unjust. You can't shake the bitter thought that, "This amount of effort should result in me looking like a panty model."
It applies to everything. [The world] is full of frustrated, broken, baffled people because so many of us think, "If I work this hard, this many hours a week, I should have (a great job, a nice house, a nice car, etc). I don't have that thing, therefore something has corrupted the system and kept me from getting what I deserve."
Last time I brought this up it was in the context of realistic expectations for self improvement [LW · GW].
This time it’s in the context of productive disagreement.
Intuitively, it feels like when you see someone being wrong, and you have a simple explanation for why they’re wrong, it should take you, like, 5 minutes of saying “Hey, you’re wrong, here’s why.”
Instead, Bob and Alice might debate and doublecrux for 20 hours, making serious effort to understand each other’s viewpoint… and the end result is a conversation that still feels like moving through molasses, with both Alice and Bob feeling like the other is missing the point.
And if 20 hours seems long, try years.
AFAICT the Yudkowsky/Hanson Foom Debate didn’t really resolve. But the general debate over “should we expect a sudden leap in AI abilities that leaves us with a single victor, or a multipolar scenario?" has actually progressed over time. Paul Christiano's Arguments About Fast Takeoff [LW · GW] seemed most influential in reframing the debate in a way that helped some people stop talking past each other, and focus on the actual different strategic approaches that the different models would predict.
Holden Karnofsky initially had some skepticism about some of MIRI's (then SIAI's) approaches to AI Alignment. Those views changed over the course of years.
On the LessWrong team, we have a lot of disagreements about how to make various UI tradeoffs, which we still haven't resolved. But after a year or so of periodically chatting about them, I think we at least have better models of each other's reasoning, and in some cases we've found third solutions that resolved the issue [LW · GW].
I have observed myself taking years to really assimilate the worldviews of others.
When you have deep frame disagreements [LW · GW], I think "years" is actually just a fairly common timeframe for processing a debate. I don't think this is a necessary fact about the universe, but it seems to be the status quo.
Why?
The reasons a disagreement might take years to resolve vary, but a few include:
i. Complex Beliefs, or Frame Differences, that take time to communicate.
Where the blocker is just "dedicating enough time to actually explaining things." Maybe the total process only takes 30 hours but you have to actually do the 30 hours, and people rarely dedicate more than 4 at a time, and then don't prioritize finishing it that highly.
ii. Complex Beliefs, or Frame Differences, that take time to absorb
Sometimes it only takes an hour to explain a concept explicitly, but it takes a while for that concept to propagate through your implicit beliefs. (Maybe someone explains a pattern in social dynamics, and you nod along and say "okay, I could see that happening sometimes", but then over the next year you start to see it happening, and you don't "really" believe in it until you've seen it a few times.)
Sometimes it's an even vaguer thing, like "I dunno man, I just needed to relax and not think about this for a while for it to subconsciously sink in somehow."
iii. Idea Inoculation + Inferential Distance
Sometimes the first few people explaining a thing to you suck at it, give you the impression that anyone advocating the thing is an idiot, and cause you to subsequently dismiss people who pattern-match to those bad arguments. Then it takes someone who puts a lot of effort into an explanation that counteracts that initial bad taste.
iv. Hitting the right explanation / circumstances
Sometimes it just takes a specific combination of "the right explanation" and "being in the right circumstances to hear that explanation" to get a magical click [LW · GW], and unfortunately you'll need to try several times before the right one lands. (And, like reason i above, this doesn't necessarily take that much time, but it nonetheless takes years of intermittent attempts before it works.)
v. Social pressure might take time to shift
Sometimes it just has nothing to do with good arguments and rational updates – it turns out you're a monkey whose window of possible beliefs depends a lot on what other monkeys around you are willing to talk about. In this case it takes years for enough people around you to change their minds first.
Hopefully you can take actions to improve your social resilience, so you don't have to wait for that, but I bet it's a frequent cause.
Optimism and Pessimism
You can look at this glass half-empty or half-full.
Certainly, if you're expecting to convince people of your viewpoint within a matter of hours, you may sometimes have to come to terms with that not always happening. If your plans depend on it happening, you may need to re-plan. (Not always: I've also seen major disagreements get resolved in hours, and sometimes even 5 minutes. But, "years" might be an outcome you need to plan around. If it is taking years it may not be worthwhile unless you're actually building a product together. [LW · GW])
On the plus side... I've now gotten to see several deep disagreements actually progress. I'm not sure I've seen a years-long disagreement resolve completely, but I have definitely seen people change their minds in important ways. So I now have an existence proof that this is even possible to address.
Many of the reasons listed above seem addressable. I think we can do better.
23 comments
Comments sorted by top scores.
comment by romeostevensit · 2019-12-06T04:41:21.254Z · LW(p) · GW(p)
I disagree with the karate kid essay. People often have a really hard time with interventions because they literally do not have a functioning causal model of the thing in question. People who apply deliberate practice to a working causal model often level up astonishingly quickly. Don't know if you have the appropriate causal model? Well, when you apply deliberate practice, do you get better? If not, you're pulling on fake levers.
People have cognitive dissonance about this because a bunch of their life UI is fake levers, and acknowledging one would lead to acknowledging others.
I propose an alternative model. People don't resolve disagreements because there are no incentives to resolve them. In fact the incentives often cut the other way. The pretense of shared intent and avoidance of conflicting intents is short-term workable, long-term calcifying. Conflict resolutions that would need to be about intents often disguise themselves as conflicts about strategies or implementations instead, because of bike-shedding, and because people aren't even aware of their own inconsistencies in this regard.
Replies from: mr-hire, Lanrian, Raemon
↑ comment by Matt Goldenberg (mr-hire) · 2019-12-08T01:16:56.922Z · LW(p) · GW(p)
People who apply deliberate practice to a working causal model often level up astonishingly quickly.
This is (sort of) true but I think you're overstating it. For instance, Tim Ferriss' whole thing is about breaking down skills into functional causal models, and he certainly does a good job of becoming proficient at them, but you'll notice he never becomes EXPERT at them.
Similarly, Josh Waitzkin also wrote a whole book about learning and breaking down skills into causal models, but still wanted at least 5 YEARS to train BJJ 24/7 before being comfortable going to the world finals (he never actually ended up going to the world finals so we'll never be sure if this was enough time).
Your example below is a solo skill, but I wager if it was a competitive skill you'd find that while you outpace your peer newbies, it still feels quite slow to catch up to veterans.
I suspect something similar with disagreements. Someone skilled at disagreements can take a deep disagreement that would have taken 10 years and turn it into 5 years. But that's still 5 years.
↑ comment by Lukas Finnveden (Lanrian) · 2019-12-06T15:43:26.500Z · LW(p) · GW(p)
People have a really hard time with interventions often because they literally do not have a functioning causal model of the thing in question. People who apply deliberate practice to a working causal model often level up astonishingly quickly.
Could you give an example of a functioning causal model that quickly helps you learn a topic?
I'm not sure whether you're thinking about something more meta-level, "what can I practice that will cause me to get better", or something more object-level, "how does mechanics work", and I think an example would help clarify. If it's the latter, I'm curious about what the difference is between having a functioning causal model of the subject (the precondition for learning) and being good at the subject (the result of learning).
Replies from: romeostevensit
↑ comment by romeostevensit · 2019-12-07T00:26:23.473Z · LW(p) · GW(p)
When I was trying to improve at touch typing I had to distinguish between causes of different kinds of errors. If my model was just 'speed is good' and 'try to go faster', I'd face constant frustration at the seeming interplay between speed and error rate. Instead, I built up a model of different errors like 'left-right finger confusion', 'moved wrong finger off home row', 'tendency to reverse key presses in certain sequences', etc. Then I could find ways of practicing each error specifically, finding really cruxy examples that caused the worst traffic jams for me. This is a simple example because the feedback loops are immediate. In many cases the added complexity is VoI calculations, because gathering data on any given hypothesis costs some time or other resources.
Learning the causal model as you practice is a meta skill that levels up as you try to be careful when learning new domains.
↑ comment by Raemon · 2019-12-08T04:17:55.430Z · LW(p) · GW(p)
This is an interesting comment. Some thoughts after reflecting a bit:
A while ago you wrote a comment saying something like "deliberate practice deliberate practice until you get really good at identifying good feedback loops, and working with them." I found that fairly inspiring at the time.
I didn't ever really dedicate myself to doing that thoroughly enough to have a clear opinion on whether it's The Thing. I think I put maybe... 6 hours into deliberate practice, with intent to pay attention to how deliberate practice generalized. The value I got out of it was, like, commensurate with the "serious practice" noted here [LW · GW] (i.e. if I kept it up consistently I'd probably skill up at a rate of 5-10% per year, and at the time that I did it, my output in the domains in question was maybe 10-20% higher, but more costly), but it required setting aside time for hard cognitive labor that feels in short supply.
There were at least some domains (a particular videogame I tried to deliberately practice) that seemed surprisingly hard to improve at.
I do have a general sense (from this past year as well as previous experience) that in many domains, there are some rapid initial gains to be had for the first 20 hours or so.
None of this feels like "things are actually easier than described in the karate kid essay." I would agree with the claim "the karate kid essay sort of implies you just have to try hard for a long time, whereas actually many of the gains come from having models of how things work, and you should be able to tell if you're improving." But that doesn't make things not hard.
It seems plausible that if you gain the generalized Deliberate Practice skill a lot of things become much easier, and that it's the correct skill to gain early in the skill-tree. But, like, it's still pretty hard yo.
I also agree that most people aren't actually even trying to get better at disagreement, and if they did that much at all it'd make a pretty big difference. ("Years" is what I think the default expectation should be among people who aren't really trying.)
Replies from: romeostevensit
↑ comment by romeostevensit · 2019-12-09T03:22:25.789Z · LW(p) · GW(p)
Right, that first 20 hours gets you to the 80th-90th percentile and it takes another 200 to get to the 99th. But important cognitive work seems multiplicative more than additive, so getting to the 80-90th percentile in the basics makes a really big difference.
comment by Shmi (shminux) · 2019-12-05T04:07:12.288Z · LW(p) · GW(p)
Maybe the total process only takes 30 hours but you have to actually do the 30 hours, and people rarely dedicate more than 4 at a time, and then don't prioritize finishing it that highly
And even worse, our beliefs are elastic, so between this session and the next one the damage to the wounded beliefs heals, and one has to start almost from scratch again.
comment by Bendini (bendini) · 2019-12-05T08:14:56.552Z · LW(p) · GW(p)
I'm glad this post was written, but I don't think it's true in the sense that things have to be this way, even without new software to augment our abilities.
It's true that 99% of people cannot resolve disagreements in any real sense, but it's a mistake to assume that disagreements are inherently intractable just because Yudkowsky couldn't resolve a months-long debate with Hanson and the LessWrong team can't resolve their disagreements.
If the Yud vs Hanson debate was basically Eliezer making solid arguments and Hanson responding with interesting contrarian points because he sees being an interesting contrarian as the purpose of debating, then their inability to resolve their debate tells you little about how easy the disagreement would be to resolve.
If the LessWrong team is made up entirely of conflict-avoidant people who don't ground their beliefs in falsifiable predictions (this is my impression, having spoken to all of them individually), then the fact that their disagreements don't resolve after a year of discussion shouldn't be all that surprising.
The bottleneck is the dysfunctional resolution process, not the absolute difficulty of resolving the disagreement.
Replies from: 24joy
↑ comment by 24joy · 2019-12-05T08:55:32.508Z · LW(p) · GW(p)
Something about this comment feels slightly off.
You say
It's true that 99% of people cannot resolve disagreements in any real sense
But then go on to discuss specific cases in a way that gives me the impression a) that you don't think disagreements take a long time for the reasons discussed in the post, and b) that rationalists should easily be able to avoid the traps of disagreements being lengthy and difficult if only they "did it right".
If the impression I get from the comment represents your view, I'm concerned you'll be missing ways to actually solve disagreements in more cases by dismissing the problem as other people's fault.
Replies from: bendini
↑ comment by Bendini (bendini) · 2019-12-05T09:52:19.000Z · LW(p) · GW(p)
a) that you don't think disagreements take a long time for the reasons discussed in the post
Disagreements aren't always trivial to resolve, but if you've been actively debating an issue for a month and zero progress has been made, then either the resolution process is broken or someone is doing something besides putting maximum effort into resolving the disagreement.
b) that rationalists should easily be able to avoid the traps of disagreements being lengthy and difficult if only they "did it right".
Maybe people who call themselves rationalists "should" be able to, but that doesn't seem to be what happens in practice. Then again, if you've ever watched a group of them spend 30 minutes debating something that can be googled, you have to wonder what else they might be missing.
I'm concerned you'll be missing ways to actually solve disagreements in more cases by dismissing the problem as other people's fault.
It's true that if you are quick to blame others, you can fail to diagnose the real source of the problem. However, the reverse is also true. If the problem is that you or others aren't putting in enough effort, but you've already ruled it out on principle, you will also fail to diagnose it.
Something about this comment feels slightly off.
I'm not surprised that the comment feels off; it felt off to write it. Saying something that's outside the Overton window and doesn't sound like clever contrarianism feels wrong. (Which may also explain why people rarely leave comments like that in good faith.)
comment by Raemon · 2019-12-10T22:11:03.229Z · LW(p) · GW(p)
The original motivating example for this post doesn't actually quite fit into the lens the post oriented around. The post focuses on "disagreements between particular people". But there's a different set of issues surrounding "disagreements in a zeitgeist."
Groups update slower than individuals. So if you write a brilliant essay saying "this core assumption or frame of your movement is wrong", you may not only have to wait for individual people to go "oh, I see, that makes sense", but even longer for that to become common knowledge, enough that you won't reliably see group members acting under the old assumptions.
(Again, this is an observation about the status quo, not about what is necessarily possible if we grabbed all the low hanging fruit. But in the case of groups I'm more pessimistic about things improving as much, unless the group has strong barriers to entry and strong requirements of "actually committing serious practice to disagreement resolution.")
comment by Raemon · 2019-12-08T22:51:02.172Z · LW(p) · GW(p)
Somewhat replying to both romeo and bendini elsethread:
Bendini:
Disagreements aren't always trivial to resolve, but if you've been actively debating an issue for a month and zero progress has been made, then either the resolution process is broken or someone is doing something besides putting maximum effort into resolving the disagreement.
Romeo:
I propose an alternative model. People don't resolve disagreements because there are no incentives to resolve them. In fact the incentives often cut the other way.
I definitely have a sense that rationalists by default aren't that great at disagreeing for all the usual reasons (not incentivized to, don't actually practice the mental moves necessary to do so productively), and kinda the whole point of this sequence is to go "Yo, guys, it seems like we should actually be able to be good at this?"
And part of the problem is that this requires multiple actors – my sense is that a single person trying their best to listen/learn/update can only get halfway there, or less.[1]
[1] The exact nature of "what you can accomplish with only one person trying to productively disagree" depends on the situation. It may be that that particular person can come to whatever the truest-nearby-beliefs are reasonably well, but if you need agreement, or if the other person is the final decision maker, "one-person-coming-to-correct-beliefs" may not solve the problem.
Coming to Correct Beliefs vs Political Debate
I think one of the things going on is that it takes a bit of vulnerability to switch from "adversarial political mode" (a common default) to "actually be open to changing your mind." There is a risk that if you try earnestly to look at the evidence and change your mind, but your partner is just pushing their agenda, and you don't have some skills re: "resilience to social pressure", then you may be sort of just ceding ground in a political fight without even successfully improving truthseeking.
(sometimes, this is a legitimate fear, and sometimes it's not but it feels like it is, and noticing that in the moment is an important skill)
I've been on both sides of this, I think. Sometimes I've found myself feeling really frustrated that my discussion partner doesn't seem to be listening or willing to update, and I find myself sort of leaning into an aggressive voice to try and force them to listen to me. And then they're like "Dude, you don't sound like you're actually willing to listen to me or update", and then I'm sheepishly like... "oh, yeah, you're right."
It seems like having some kind of mutually-trustable procedure for mutual "disarmament" would be helpful.
Replies from: Raemon
↑ comment by Raemon · 2019-12-08T22:53:09.449Z · LW(p) · GW(p)
I do still think there's a lot of legitimately hard stuff here. In the past year, in some debates with Habryka and with Benquo, I found a major component (of my own updating) had to do with giving their perspectives time to mull around in my brain, as well as some kind of aesthetic component. (i.e. if one person says "this UI looks good" and another person says "this UI looks bad", there's an aspect of that that doesn't lend itself well to "debate". I've spent the past 1.5 years thinking a lot about Aesthetic Doublecrux, which much of this sequence was laying the groundwork for)
Replies from: bendini
↑ comment by Bendini (bendini) · 2019-12-09T09:12:59.244Z · LW(p) · GW(p)
(Site meta: it would be useful if there was a way to get a notification for this kind of mention)
Some thoughts about specific points:
the whole point of this sequence is to go "Yo, guys, it seems like we should actually be able to be good at this?"
This is true for the sequence overall, but this post and some others you've written elsewhere follow the pattern of "we don't seem to be able to do the thing, therefore this thing is really hard and we shouldn't beat ourselves up about not being able to do it", which seems to come from a hard-coded mindset rather than a balanced evaluation of how much change is possible, how things could be changed, and whether it would be important enough to be worth the effort.
I think the mindset of "things are hard, everyone is doing the best we can" can be very damaging, as it reduces our collective agency by passively addressing the desire for change in a way that takes the wind out of its sails.
There is a risk that if you try earnestly to look at the evidence and change your mind, but your partner is just pushing their agenda, and you don't have some skills re: "resilience to social pressure", then you may be sort of just ceding ground in a political fight without even successfully improving truthseeking.
Resilience to social pressure is part of it, but there also seem to be a lot of people who lack the skill to evaluate evidence in a way that doesn't bottom out at "my friends think this is true" or "the prestigious in-group people say this is true".
It seems like having some kind of mutally-trustable-procedure for mutual "disarmament" would be helpful.
A good starting point for this would be listing out both positions in a way that orders claims separately, ranked by importance, and separating the evidence for each into 1) externally verifiable 2) circumstantial 3) non-verifiable personal experience 4) intuition.
if one person says "this UI looks good" and another person says "this UI looks bad", there's an aspect of that that doesn't lend itself well to "debate"
I've had design arguments like this (some of them even about LW), but my takeaway from them was not that this can't be debated, but that:
1) People usually believe that design is almost completely subjective
2) Being able to crux on design requires solving 1 first
3) Attempts to solve 1 are seen as the thin end of the wedge
4) If you figure out how to test something they assumed couldn't be tested, they feel threatened by it rather than seeing it as a chance to prove they were right.
5) The question "which design is better" contains at least 10 cruxable components which need to be unpacked.
6) If the other person doesn't know how to unpack the question, they will see your attempts as a less funny version of proving that 1 = 2.
7) People seem to think they can bury their heads in the sand and the debate will magically go away.
Arguments about design have a lot of overlap with debates about religion, but if you're trying to debate "does God exist?" on face value rather than questions like "given the scientific facts we can personally verify, what is the probability that God exists?" and "regardless of God's existence, which religious teachings should we follow anyway?" then it is unlikely to make progress.
comment by Mary Chernyshenko (mary-chernyshenko) · 2019-12-21T09:03:32.248Z · LW(p) · GW(p)
About the idea inoculation bit. It doesn't even require an idiot or two to be the first to explain an idea. (Although maybe I'm thinking about a different thing.)
It so happens that in the course of biology instruction in high school and college where I live, the tradition is to open several courses with the same topic. Specifically, The Structure of the Cell. (Another motif, for botanists, is The General Life Cycle of Plants. I'm sure other specialties have their own Foundations.) And it's an important topic, but after the third time you just feel... dumber... when the teacher begins to talk about it. As if you try to find something new and exciting, or in any case something to think about, and there's nothing there.
I would say it is the "keeping the abstraction level constant" that does it. I felt like I had to actively work to avoid stupor and a desire to "just see it die in peace".
comment by TAG · 2019-12-06T14:14:12.487Z · LW(p) · GW(p)
In the case of Karate, one can see that it is at least possible to become an expert. What does being a black belt at debate even mean? It probably doesn't mean being able to conclude any argument, since it is not clear that achieving resolution of contentious issues is possible in general.
Issues that are sufficiently deep, or which cut across cultural boundaries, run into a problem where not only do the parties disagree about the object-level issue, they also disagree about underlying questions of what constitutes truth, proof, evidence, etc. "Satan created the fossils to mislead people" is an example of one side rejecting the other side's evidence as even being evidence. It's a silly example, but there are much more robust ones. Can't you just agree on an epistemology, and then resolve the object-level issue? No, because it takes an epistemology to come to conclusions about epistemology. Two parties with epistemological differences at the object level will also have them at the meta level.
Once this problem, sometimes called "the epistemological circle" or "the Münchhausen trilemma", is understood, it will be seen that the ability to agree or settle issues is the exception, not the norm.
Replies from: CronoDAS, romeostevensit
↑ comment by CronoDAS · 2019-12-06T23:03:09.111Z · LW(p) · GW(p)
I have a bit of a dilemma. What do you say to someone who says things like "I believe in ghosts because I see and have conversations with them"? (Not a hypothetical example!)
Replies from: TAG, mr-hire
↑ comment by TAG · 2019-12-07T12:50:32.450Z · LW(p) · GW(p)
"Well, I've never seen one".
There is still better and worse epistemology. I have been arguing that cautious epistemology is better than confident epistemology.
Even if there is a set of claims that are very silly, it doesn't follow that Aumann-style agreement is possible.
↑ comment by Matt Goldenberg (mr-hire) · 2019-12-06T23:46:43.989Z · LW(p) · GW(p)
Can you introduce me?
↑ comment by romeostevensit · 2019-12-08T00:42:27.673Z · LW(p) · GW(p)
LKY (Lee Kuan Yew) apparently regularly destroyed debate opponents in 3 different languages in long marathon sessions.
Replies from: TAG
↑ comment by TAG · 2019-12-08T14:10:40.102Z · LW(p) · GW(p)
Did he solve politics? Certainly some people are better debaters than others, but that doesn't mean that the absolute standard of being able to resolve fundamental disputes is ever reached.
Replies from: romeostevensit
↑ comment by romeostevensit · 2019-12-09T03:17:20.292Z · LW(p) · GW(p)
1. that's not a reasonable standard for the thing?
2. he actually came closer than almost anyone else?
Replies from: TAG
↑ comment by TAG · 2019-12-09T08:26:37.561Z · LW(p) · GW(p)
Whether solving politics is a reasonable standard depends on where you are coming from. There's a common assumption round here that Aumann's agreement theorem applies to real life, that people should be able to reach agreement and solve problems, and if they can't, that's an anomaly that needs explanation. The OP is suggesting that the explanation for arch-rationalists such as Hanson and Yudkowsky being unable to agree is lack of skill, whereas I am suggesting that Aumann's theorem doesn't apply to real life, so lack of skill is not the only problem.