Arguments are good for helping people reason about things
post by AprilSR · 2022-03-11T23:02:33.158Z
Yudkowsky recently posted a Twitter thread about how ideal reasoners respond to arguments. My understanding of his reasoning is:
- The more smart/rational/whatever you are, the better you are at figuring out what is true
- Thus, whether you believe the conclusion of an argument should be based primarily on whether the conclusion is true, rather than on how effectively the argument was presented
This principle seems valid to me.
Valentine, in a LessWrong comment thread, used Yudkowsky's thread to draw the conclusion that a healthy rationalist community should not make arguments. I think this is silly. We are not, unfortunately, logically omniscient; we cannot just look at data and draw all the correct conclusions from it. The purpose of an argument is to help people realize what conclusions they should draw from data without having to figure it all out on their own.
8 comments
comment by Valentine · 2022-03-14T22:42:50.682Z
The word "argument" has (at least) two kinds of uses:
- "Here's some reasoning showing why X might be true."
- Social pressure. (e.g. "No, you should donate to XYZ and not ABC because blah blah blah.")
I hear you saying that the first is good. I agree, even when people have all the pieces. We're not logically omniscient, as you say.
The second is dumb. Healthy rational discourse norms would banish it.
I'm a bit of a dick on this point. When I see it, I exaggerate it and call it out. It irritates me. I wasn't being totally fair to D0TheMath. I think they were trying to be polite and follow standard LW norms and engage in good faith. But I still think my sight was basically right.
I read their comment as saying something like:
> Given my epistemic state, I find myself disagreeing with you. I don't find it worthwhile to investigate whether you're right, so here's what you would need to do to persuade me.
This is normal in LW culture, and I think it's nuts.
It's part of the same bonkers thread that has people policing their epistemic impacts on each other and calling this handwringing "good collective epistemic hygiene". It's codependence. Plain and simple.
(What to do instead? How about just actually intend good epistemics for yourself, aim to be clear and transparent, and let others take care of their epistemic states (or completely fuck themselves up). That produces vastly more epistemically robust networks with vastly less overhead.)
A network that values persuading each other is crazy. Awful incentives. It feeds ego structures ("rewards with status" if you like) when arguments are persuasive. It starves geeks and feeds sociopaths.
If you disagree with someone and aren't curious, that's fine. Just admit that to yourself (and maybe tell them).
If you are curious, just ask. ("If you feel like looking into simulacra levels theory, maybe give me an example of XYZ if you can?")
But for the love of sanity, don't feed the sociopaths.
↑ comment by AprilSR · 2022-03-15T00:24:04.074Z
I mostly agree, but I feel like
> Given my epistemic state, I find myself disagreeing with you. I don't find it worthwhile to investigate whether you're right, so here's what you would need to do to persuade me.
is pretty much just an example of someone admitting that they aren't curious (enough to do the investigation themself)?
comment by AprilSR · 2022-03-12T00:19:20.772Z
An extra nuance which isn't totally relevant to my main hot take of "arguments are often not useless": I agree with the idea that framing an argument in your head as an attempt to convince someone of something isn't necessarily the best way to think about it. It's very easy for arguments to start to feel adversarial, which they ideally shouldn't be.
This post was requested by Eneasz Brodski on the Bayesian Conspiracy discord.
comment by Nhlmnrt · 2022-03-12T20:35:48.977Z
The summary of Valentine's comment here is extremely inaccurate, seeing as he made an argument in that same comment thread; his comment calls for making the minimum number of arguments and says nothing about arguments not being useful. The comment seems to be more about how communities organized around optimizing for being good at something shouldn't spend resources catering to people who are bad at that thing.
↑ comment by AprilSR · 2022-03-14T07:09:21.240Z
I feel like "a healthy rationalist community should not make arguments" is pretty much just a slight rephrasing of "strong rationalist communication is healthiest and most efficient when practically empty of arguments", but I'm open to suggestions for alternative phrasings (especially if Valentine wants to comment).
I wouldn't say that anyone has an obligation to spend their time/energy responding to requests for laying out an argument, but I think the idea that doing so is "unhealthy" is wrong and I want to push back against it. It does not seem to me that the core of Valentine's objection is "producing a better argument would be too resource-intensive."
↑ comment by jimmy · 2022-03-14T18:35:01.463Z
I feel like "a healthy rationalist community should not make arguments" is pretty much just a slight rephrasing of "strong rationalist communication is healthiest and most efficient when practically empty of arguments", but I'm open to suggestions for alternative phrasings (especially if Valentine wants to comment).
They're quite different. The latter is a qualified description. The former is an unqualified prescription. Even if the prescription were qualified, it does not automatically follow from the description, because it is not necessarily the case that focusing on making fewer arguments is a way to get healthier -- in the same way that "healthiest people tend to exercise" doesn't imply "get out of bed and go for a jog" is going to help sick people. Goodhart's law has a tendency to screw these kinds of things up.
Sometimes these kinds of things can work (maybe exercising more will keep you from getting sick?), but in those cases it is still an additional piece which is not contained in "healthiest tends to look like X". Every time you add in an additional piece because it seems implied from your perspective, you risk changing the meaning to something that the person saying it wouldn't endorse. When you're reading someone whose worldview is quite different from your own, this can happen very rapidly, so it's crucial to read precisely what they are saying and note which inferences are your own rather than theirs.
comment by romeostevensit · 2022-03-12T03:26:22.321Z
Outlining a heuristic is helpful for every lurker who happens not to have encountered it before. This is often more useful to me than the object-level arguments.