6 Tips for Productive Arguments
post by John_Maxwell (John_Maxwell_IV) · 2012-03-18T21:02:32.326Z · LW · GW · Legacy · 122 comments
We've all had arguments that seemed like a complete waste of time in retrospect. But at the same time, arguments (between scientists, policy analysts, and others) play a critical part in moving society forward. You can imagine how lousy things would be if no one ever engaged those who disagreed with them.
This is a list of tips for having "productive" arguments. For the purposes of this list, "productive" means improving the accuracy of at least one person's views on some important topic. By this definition, arguments where no one changes their mind are unproductive. So are arguments about unimportant topics like which Pink Floyd album is the best.
Why do we want productive arguments? Same reason we want Wikipedia: so people are more knowledgeable. And just like the case of Wikipedia, there is a strong selfish imperative here: arguing can make you more knowledgeable, if you're willing to change your mind when another arguer has better points.
Arguments can also be negatively productive if everyone moves further from the truth on net. This could happen if, for example, the truth was somewhere in between two arguers, but they both left the argument even more sure of themselves.
These tips are derived from my personal experience arguing.
Keep it Friendly
Probably the biggest barrier to productive arguments is the desire of arguers to save face and avoid publicly admitting they were wrong. Obviously, it's hard for anyone's views to get more accurate if no one's views ever change.
- Keep things warm and collegial. Just because your ideas are in violent disagreement doesn't mean you have to disagree violently as people. Stay classy.
- To the greatest extent possible, uphold the social norm that no one will lose face for publicly changing their mind.
- If you're on a community-moderated forum like Less Wrong, don't downvote something unless you think the person who wrote it is being a bad forum citizen (ex: spam or unprovoked insults). Upvotes already provide plenty of information about how comments and submissions should be sorted. (It's probably safe to assume that a new Less Wrong user who sees their first comment modded below zero will decide we're all jerks and never come back. And if new users aren't coming back, we'll have a hard time raising the sanity waterline much.)
- Err on the side of understating your disagreement, e.g. "I'm not persuaded that..." or "I agree that x is true; I'm not as sure that..." or "It seems to me..."
- If you notice some hypocrisy, bias, or general deficiency on the part of another arguer, think extremely carefully before bringing it up while the argument is still in progress.
Inquire about Implausible-Sounding Assertions Before Expressing an Opinion
If someone suggests something you find implausible, start asking friendly questions to get them to clarify and justify their statement. If their reasoning seems genuinely bad, you can refute it then.
As a bonus, doing nothing but ask questions can be a good way to save face if the implausible assertion-maker turns out to be right.
Be careful about rejecting highly implausible ideas out of hand. Ideally, you want your rationality to be at a level where even if you started out with a crazy belief like Scientology, you'd still be able to get rid of it. But for a Scientologist to rid themselves of Scientology, they have to consider ideas that initially seem extremely unlikely.
It's been argued that many mainstream skeptics aren't really that good at critically evaluating ideas, just dismissing ones that seem implausible.
Isolate Specific Points of Disagreement
Stick to one topic at a time, until someone changes their mind or the topic is declared not worth pursuing. If your discussion constantly jumps from one point of disagreement to another, reaching consensus on anything will be difficult.
You can use hypothetical-oriented thinking like conditional probabilities and the least convenient possible world to figure out exactly what it is you disagree on with regard to a given topic. Once you've creatively helped yourself or another arguer clarify beliefs, sharing intuitions on specific "irreducible" assertions or anticipated outcomes that aren't easily decomposed can improve both of your probability estimates.
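For concreteness, here is a minimal sketch of that kind of decomposition; the claim, the crux, and every number in it are invented purely for illustration:

```python
# Decomposing a disagreement with the law of total probability.
# Claim C: "this policy will reduce traffic deaths" (made up for the example)
# Crux  A: "the policy will actually be enforced"   (made up for the example)

def p_claim(p_crux, p_claim_given_crux, p_claim_given_not_crux):
    """P(C) = P(C|A) * P(A) + P(C|~A) * P(~A)"""
    return p_claim_given_crux * p_crux + p_claim_given_not_crux * (1 - p_crux)

# Suppose both arguers share the same conditional estimates...
p_c_given_a, p_c_given_not_a = 0.8, 0.1

# ...and discover that they only really disagree about the crux A itself.
alice = p_claim(0.9, p_c_given_a, p_c_given_not_a)  # 0.73
bob = p_claim(0.3, p_c_given_a, p_c_given_not_a)    # 0.31

# The disagreement over C has been reduced to one "irreducible" question:
# how likely is A? That is the thing worth sharing intuitions about.
print(f"Alice: {alice:.2f}, Bob: {bob:.2f}")
```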
Don't Straw Man Fellow Arguers, Steel Man Them Instead
You might think that a productive argument is one where the smartest person wins, but that's not always the case. Smart people can be wrong too. And a smart person successfully convincing less intelligent folks of their delusion counts as a negatively productive argument (see definition above).
Play for all sides, in case you're the smartest person in the argument.
Rewrite fellow arguers' arguments so they're even stronger, and think of new ones. Arguments for new positions, even—they don't have anyone playing for them. And if you end up convincing yourself of something you didn't previously believe, so much the better.
If You See an Opportunity To Improve the Accuracy of Your Knowledge, Take It!
This is often called losing an argument, but you're actually the winner: you and your arguing partner both invested time to argue, but you were the only one who received significantly improved knowledge.
If you're worried about losing face or seeing your coalition (research group, political party, etc.) diminish in importance from you admitting that you were wrong, here are some ideas:
- Say "I'll think about it". Most people will quiet down at this point without any gloating.
- Just keep arguing, making a mental note that your mind has changed.
- Redirect the conversation, pretend to lose interest, pretend you have no time to continue arguing, etc.
Some of these techniques may seem dodgy, and honestly I think you'll usually do better by explaining what actually changed your mind. But they're a small price to pay for more accurate knowledge. Better to tell unimportant false statements to others than important false statements to yourself.
Have Low "Belief Inertia"
It's actually pretty rare that the evidence that you're wrong comes suddenly—usually you can see things turning against you. As an advanced move, cultivate the ability to update your degree of certainty in real time to new arguments, and tell fellow arguers if you find an argument of theirs persuasive. This can actually be a good way to make friends. It also encourages other arguers to share additional arguments with you, which could be valuable data.
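To make "updating in real time" concrete, here is a minimal sketch in Bayesian terms; the starting confidence and the likelihood ratios are invented for illustration:

```python
# Updating a degree of certainty argument-by-argument with Bayes' rule in
# odds form: posterior odds = prior odds * likelihood ratio. The likelihood
# ratio stands for "how much more expected is this argument if I'm right
# than if I'm wrong?" All numbers below are invented for illustration.

def update(p, likelihood_ratio):
    odds = (p / (1 - p)) * likelihood_ratio
    return odds / (1 + odds)

p = 0.70                    # initial confidence in my position
for lr in (0.5, 0.8, 0.4):  # three of their arguments, each evidence against me
    p = update(p, lr)
    print(f"confidence is now {p:.2f}")
# prints roughly 0.54, 0.48, 0.27 -- the slide is visible well before any
# "sudden" moment of defeat.
```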
One psychologist I agree with suggested that people ask
- "Does the evidence allow me to believe?" when evaluating what they already believe, but
- "Does the evidence compel me to believe?" when evaluating a claim incompatible with their current beliefs.
If folks don't have to drag you around like this for you to change your mind, you don't actually lose much face. It's only long-overdue capitulations that result in significant face loss. And the longer you put your capitulation off, the worse things get. Quickly updating in response to new evidence seems to preserve face in my experience.
If your belief inertia is low and you steel-man everything, you'll reach the super chill state of not having a "side" in any given argument. You'll play for all sides and you won't care who wins. You'll have achieved equanimity, content with the world as it actually is, not how you wish it was.
122 comments
Comments sorted by top scores.
comment by gwern · 2012-03-19T14:37:09.471Z · LW(p) · GW(p)
The most important tip for online arguing, for anything which you expect ever to be discussed again, is to keep a canonical master source which does your arguing for you. (The backfire effect means you owe it to the other person to present your best case and not a sketchy paraphrase of what you have enough energy to write down at that random moment; it's irresponsible to do otherwise.)
For example, if you are arguing about the historical Jesus and your argument does not consist of citations and hyperlinks with some prepositional phrases interspersed, you are doing it wrong. If I'm arguing about brain size correlation with intelligence, I stick the references into the appropriate Wikipedia article and refer to that henceforth. If I'm arguing about modafinil, I either link to the relevant section in Wikipedia or my article, or I edit a cleaned-up version of the argument into my article. If I'm arguing that Moody 2008 drastically undermines the value of dual n-back for IQ on the DNB ML, you can be sure that it's going into my FAQ. If I don't yet have an article or essay on it but it's still a topic I am interested in like how IQ contributes to peace and economic growth, then I will just accumulate citations in comments until I do have something. If I can't put it in LW, gwern.net, or Wikipedia - then I store it in Evernote!
This is an old observation: healthy intellectual communities have both transient discussions (a mailing list) and a static topical repository (an FAQ or wiki). Unfortunately, often there is no latter, so you have to make your own.
Replies from: Dmytry, Vulture
↑ comment by Dmytry · 2012-03-19T18:51:23.450Z · LW(p) · GW(p)
Suggestions like that quickly degenerate into appeal to authority, or biased selection of sources, with no substance to it (no actual argument being made; imagine arguing mathematics like this, for an extreme example: you make a proof, you ask the person who disagrees to show which step, exactly, is wrong, and they refer to 'expert' conclusions, 99% of the time simply because they can't do math, not because they are organized). I usually don't need your cherry-picking of references from Wikipedia; I have Wikipedia for that.
Replies from: gwern
↑ comment by gwern · 2012-03-19T19:29:07.527Z · LW(p) · GW(p)
So in other words, this strategy degenerates into several steps higher up the hierarchy of disagreement than just about every other online argument...
Replies from: Dmytry, Manfred
↑ comment by Dmytry · 2012-03-19T19:37:18.785Z · LW(p) · GW(p)
Okay, let me clarify: the problem of unproductive argument stems from the reality that people a: are bad truth finders, b: usually don't care to find truth, and c: are prone to backwards thought from proposition to justifications, which is acceptable [because of limited computing power and the difficulty of doing it the other way around].
The tip is awesome when you are right (and I totally agree that it is great to have references and so on). When you are wrong, which is more than half of the problem (as much of the time BOTH sides are wrong), it is extremely obtuse. I'd prefer people dump out something closer to why they actually believe the argument, rather than how they justify it. Yes, that makes for a poor show, but it is more truthful. Why you believe something is [often] not the accurate citation. It is [often] the poor paraphrasing.
Just look at the 'tips' for productive arguments. Is there a tip number 1: drop your position ASAP if you are wrong? Hell frigging no (not that it would work either, though, that's not how arguing ever works).
edit: to clarify more. Consider climate debates. Those are terrible. Now, you can have a naive honest folk who says he ain't trusting no climate scientist. You can have a naive honest folk who says he ain't trusting no oil company. And you can have two pseudo-climate-scientific dudes, arguing by obtusely citing studies at each other, not understanding a single thing about the climate modelling, generating a lot of noise that looks good, but neither of them would ever change their view even if they'd seen all the studies they cite in exactly the same light. They are merely the sophisticated version of the former folks, who hide their actual beliefs. The cranks that make up some form of crank climate theory are not as bad as those two types of climate-arguing folks. The former folks talking about politics, they generate some argument; they won't agree because one's authoritarian and the other liberal, but they at least make that clear. The cranks, they generate cranky theories. The citingpeople, they generate pure deception as to who they are.
Replies from: Zetetic
↑ comment by Zetetic · 2012-03-22T22:56:02.704Z · LW(p) · GW(p)
Just look at the 'tips' for productive arguments. Is there a tip number 1: drop your position ASAP if you are wrong? Hell frigging no (not that it would work either, though, that's not how arguing ever works).
I've done my best to make this a habit, and it really isn't that hard to do, especially over the internet. Once you 'bite the bullet' the first time it seems to get easier to do in the future. I've even been able to concede points of contention in real life (when appropriate). Is it automatic? No, you have to keep it in the back of your mind, just like you have to keep in mind the possibility that you're rationalizing. You also have to act on it which, for me, does seem to get easier the more I do it.
The tip is awesome when you are right (and I totally agree that it is great to have references and so on). When you are wrong, which is more than half of the problem (as much of the time BOTH sides are wrong), it is extremely obtuse. I'd prefer people dump out something closer to why they actually believe the argument, rather than how they justify it. Yes, that makes for a poor show, but it is more truthful. Why you believe something is [often] not the accurate citation. It is [often] the poor paraphrasing.
This sort of goes with the need to constantly try to recognize when you are rationalizing. If you are looking up a storm of quotes, articles, posts etc. to back up your point and overwhelm your 'opponent', this should set off alarm bells. The problem is that those who spend a lot of time correcting people who are obviously wrong by providing them with large amounts of correct information also seem prone to taking the same approach to a position that merely seems obviously wrong for reasons they might not be totally conscious of themselves. They then engage in some rapid fire confirmation bias, throw a bunch of links up and try to 'overpower the opponent'. This is something to be aware of. If the position you're engaging seems wrong but you don't have a clear-cut, well evidenced reason why this is, you should take some time to consider why you want it to be right.
When facing someone who is engaging in this behavior (perhaps they are dismissing something you think is sensible, be it strong AI or cryonics, or existential risk, what have you) there are some heuristics you can use. In online debates in particular, I can usually figure out pretty quickly if the other person understands the citations they make by choosing one they seem to place some emphasis on and looking at it carefully, then posing questions about the details.
I've found that you can usually press the 'citingpeople' into revealing their underlying motivations in a variety of ways. One way is sort of poor - simply guess at their motivations and suggest that as truth. They will feel the need to defend their motivations and clarify. The major drawback is that this can also shut down the discussion. An alternative is to suggest a good-sounding motivation as truth - this doesn't feel like an attack, and they may engage it. The drawback is that this may encourage them to take up the suggested motivation as their own. At this point, some of their citations will likely not be in line with their adopted position, but pointing this out can cause backtracking and can also shut down discussion if pressed. Neither approach guarantees us clear insight into the motivations of the other person, but the latter can be a good heuristic (akin to the 'steel man' suggestion). Really, I can't think of a cut-and-dried solution to situations in which people try to build up a wall of citations - each situation I can think of required a different approach depending on the nature of the position and the attitude of the other person.
Anyway, I think that in the context of all of the other suggestions and the basic etiquette at LW, the suggestions are fine, and the situation you're worried about would typically only obtain if someone were cherry picking a few of these ideas without making effort to adjust their way of thinking. Recognizing your motivation for wanting something to be true is an important step in recognizing when you're defending a position for poor reasons, and this motivation should be presented upfront whenever possible (this also allows the other person to more easily pinpoint your true rejection).
Replies from: Dmytry
↑ comment by Dmytry · 2012-03-22T23:27:46.465Z · LW(p) · GW(p)
One should either cite the prevailing scientific opinion (e.g. on global warming), or present a novel scientific argument (where you cite the data you use). Other stuff really is nonsense. You can't usefully second-guess science. Citing studies that support your opinion is cherry picking, and is bad.
Consider a drug trial; there were 2000 cases where the drug did better than placebo, and 500 cases where it did worse. If each case were reported as a separate study, the Wikipedia page would likely include 20 links showing that it did better than placebo, including the meta-study, and 20 showing that it did worse. If it were edited to have 40 links showing it did better, it would end up with 40 links showing it did worse. How silly is the debate where people just cite the cases they pick? Pointlessly silly.
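To make the arithmetic concrete, here is a quick sketch using the invented 2000-vs-500 split above:

```python
import random

# 2500 hypothetical single-case "studies": 2000 favour the drug, 500 the placebo.
results = [1] * 2000 + [0] * 500
print(sum(results) / len(results))  # pooled estimate: 0.8 in favour of the drug

# A page "balanced" to cite 20 links per side implies a very different picture:
cited = [1] * 20 + [0] * 20
print(sum(cited) / len(cited))      # 0.5 -- the cherry-picked 'debate'

# A random sample of the same size, by contrast, lands near the pooled value:
random.seed(0)
sample = random.sample(results, 40)
print(sum(sample) / len(sample))    # close to 0.8 in expectation
```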
On top of that, people (outside Less Wrong mostly) really don't understand how to process scientific studies. If there is a calculation that CO2 causes warming, then if the calculation is not incorrect, or some very basic physics is not incorrect, CO2 does cause warming. There's no 'countering' of this study. The effect won't go anywhere, whatever you do. The only thing one could do is to argue that CO2 somehow also causes cooling; an entirely new mechanism. E.g. if snow were black rather than white, and ground were white rather than dark, one could argue that warming removes the snow, leading to a decrease in absorption and decreasing the impact of the warming. Alas, snow is white and ground is dark, so warming does cause further warming via this mechanism, and the only thing you can do is to come up with some other mechanism that does the opposite. And so on. (You could disprove those by e.g. finding that snow, really, is dark, and ground, really, is white, or by finding that CO2 doesn't really absorb IR, but that's it.)
People don't understand the difference between calculating predictions and just free-form hypothesising that may well be wrong and needs to be tested with experiment, etc.
(I chose global warming because I trust it is not a controversial issue on LW, but I do want something that is generally controversial and not so crazy as to not be believed by anyone.)
Replies from: Bugmaster
↑ comment by Bugmaster · 2012-03-22T23:48:15.991Z · LW(p) · GW(p)
If there is a calculation that CO2 causes warming, then if the calculation is not incorrect, or some very basic physics is not incorrect, CO2 does cause warming.
It might very well be possible that the calculation is correct, and the basic physics is correct, and yet an increase in CO2 emissions does not lead to warming -- because there's some mechanism that simultaneously increases CO2 absorption, or causes cooling (as you said, though in a less counterfactual way), or whatever. It could also be possible that your measurements of CO2 levels were incorrect.
Thus, you could -- hypothetically -- "counter" the study (in this scenario) by revealing the errors in the measurements, or by demonstrating additional mechanisms that invalidate the end effects.
Replies from: Dmytry
↑ comment by Dmytry · 2012-03-23T00:19:27.584Z · LW(p) · GW(p)
If there was a mechanism that simultaneously increased CO2 absorption, the levels wouldn't have been rising. For the measurements, you mean, like a vast conspiracy that over-reports the coal that is being burnt? Yes, that is possible, of course.
One shouldn't do motivated search, though. There are a zillion other mechanisms going on, of course, that increase and decrease the effects. All the immediately obvious ones amplify the effect (e.g. warming releases CO2 and methane from all kinds of sources where it is dissolved; the snow is white and melts earlier in spring, etc.). Of course, if one starts doing motivated search either way, one could remain ignorant of those, collect the ones that work in the opposite direction, and successfully 'counter' the warming. But that's cherry picking. If one just looks around and reports what one sees, there are a giant number of amplifying mechanisms and few if any opposing mechanisms, which depend on the temperature and are thus incapable of entirely negating the warming because they need warming to work.
Replies from: Bugmaster, Eugine_Nier
↑ comment by Bugmaster · 2012-03-23T00:26:56.930Z · LW(p) · GW(p)
If there was a mechanism that simultaneously increased CO2 absorption, the levels wouldn't have been rising.
I was thinking of a scenario where you measured CO2 emissions, but forgot to measure absorption (I acknowledge that such a scenario is contrived, but I think you get the idea).
For the measurements, you mean, like a vast conspiracy that over-reports the coal that is being burnt?
That's a possibility as well, but I was thinking about more innocuous things like sample contamination, malfunctioning GPS cables, etc.
In all of these cases, your math is correct, and your basic physics is correct, and yet the conclusion is still wrong.
Replies from: Dmytry
↑ comment by Dmytry · 2012-03-23T00:34:05.594Z · LW(p) · GW(p)
Well, I mentioned one of the measurements - with regard to how well CO2 absorbs infrared - as an example. The measurements for inputs on basic physics are pretty damn well verified though, and rarely contested. If one is so concerned about such basic stuff, one shouldn't be using the technology anyway.
It's the selective trust that is a problem - you trust that the plastic smell in your car won't kill you in 15 years, but you don't trust the scientists on warming. Amish global warming denialists aren't really a problem; the technophilic civilization that doesn't trust scientists only when they say something uncomfortable, is.
edit:
Anyhow, what you get in the AGW debate is the citing of studies that aren't refuting each other in the slightest; the anti-warming side just cites some low-grade stuff like climate measurements, which can at most prove e.g. that the sun is dimming, or that we are also doing something which cools the planet (e.g. airplane contrails seed clouds, and the effect is not tiny), but we don't know what it is. The pro-warming side, though, typically doesn't even understand that it has uncontested evidence. In the majority of debates, both sides are wrong; one is correct about the fact simply due to luck, but not because the fact has, causally, made it hold the view.
Replies from: Bugmaster
↑ comment by Bugmaster · 2012-03-23T01:39:04.330Z · LW(p) · GW(p)
The measurements for inputs on basic physics are pretty damn well verified though, and rarely contested.
That's true, but the model is more complex than "CO2 absorbs infrared, therefore global warming". It's closer to something like, "CO2 absorbs infrared, CO2 is produced faster than it is consumed, mitigating factors are insufficient, therefore global warming"; and in reality it's probably more complex than that. So, it's not enough to just measure some basic physical properties of CO2; you must also measure its actual concentration in the atmosphere, the rate of change of this concentration, etc.
the technophilic civilization that doesn't trust scientists only when they say something uncomfortable, is [the problem].
Here you and I agree.
both sides are wrong; one is correct about the fact simply due to luck, but not because the fact has, causally, made it hold the view.
I think you're being a bit harsh here. Surely, not all the scientists are just rolling dice in the dark, so to speak ? If scientific consensus was correct primarily "due to luck", we probably wouldn't have gotten as far as we did in our understanding of the world...
Replies from: Dmytry
↑ comment by Dmytry · 2012-03-23T03:55:06.440Z · LW(p) · GW(p)
re: the model, it gets extremely complex when you want to answer the question: does it absolutely positively compel me to change my view about warming?
It doesn't need to get that complex when you try to maximize expected future utility. We are, actually, pretty good at measuring stuff. And fairly outrageous things need to happen to break that.
I think you're being a bit harsh here. Surely, not all the scientists are just rolling dice in the dark, so to speak ?
I'm not speaking of scientists. I'm speaking of people arguing. Not that there's all that much wrong with it - after all, the folks who deny global warming have to be convinced somehow, and they are immune to simple reasonable argument WRT the scientific consensus. No, they want to second-guess science, even though they never studied anything relevant outside the climate-related discussion.
Replies from: Zetetic
↑ comment by Zetetic · 2012-03-23T17:55:14.656Z · LW(p) · GW(p)
I'm speaking of people arguing. Not that there's all that much wrong with it - after all, the folks who deny global warming have to be convinced somehow, and they are immune to simple reasonable argument WRT the scientific consensus. No, they want to second-guess science, even though they never studied anything relevant outside the climate-related discussion.
I'm a tad confused. Earlier you were against people using information they don't fully understand that yet happens to be true, but here you seem to be suggesting that this isn't so bad and has a useful purpose - convincing people who deny global warming because they don't trust the science.
Would you be amenable to the position that sometimes it is OK to purposely direct people to adopt your point of view if it has a certain level of clear support, even if those people leave not fully understanding why the position is correct? I.e. is it sometimes good to promote "guessing the teacher's password" in the interest of minimizing risk/damages?
Replies from: Dmytry
↑ comment by Dmytry · 2012-03-23T17:58:36.170Z · LW(p) · GW(p)
Well, I said it was irritating to see, especially if it doesn't work to convince anyone. If it works, well, the utility of e.g. changing attitudes can exceed the disutility of it being annoying. It's interesting how, if one tries to apply utilitarian reasoning, it is immediately interpreted as 'inconsistent'. Maybe that's why we are so bad at it - others' opinions matter.
There has to be, however, a mechanism for it to work better for correct positions than for incorrect ones. That is absolutely the key.
Replies from: Zetetic
↑ comment by Zetetic · 2012-03-23T21:23:10.296Z · LW(p) · GW(p)
There has to be, however, a mechanism for it to work better for correct positions than for incorrect ones. That is absolutely the key.
The whole point of studying formal epistemology and debiasing (major topics on this site) is to build the skill of picking out which ideas are more likely to be correct given the evidence. This should always be worked on in the background, and you should only be applying these tips in the context of a sound and consistent epistemology. So really, this problem should fall on the user of these tips - it's their responsibility to adhere to sound epistemic standards when conveying information.
As far as the issue of changing minds - there is sort of a continuum here, for instance I might have a great deal of strong evidence for something like, say, evolution. Yet there will be people for whom the inferential distance is too great to span in the course of a single discussion - "well, it's just a theory", "you can't prove it" etc.
Relevant to the climate example, a friend of mine who is doing his doctorate in environmental engineering at Yale was speaking to the relative of a friend who is sort of a 'naive' climate change denier - he has no grasp of how scientific data works nor does he have any preferred alternative theory he's invested in. He's more like the "well it's cold out now, so how do you explain that?" sort. My friend tried to explain attractors and long term prediction methods, but this was ineffective. Eventually he pointed out how unusually warm the winter has been this year, and that made him think a bit about it. So he exploited the other person's views to defend his position. However, it didn't correct the other person's epistemology at all, and left him with an equally wrong impression of the issue.
The problem with his approach (and really, in his defense, he was just looking to end the conversation) is that should that person learn a bit more about it, he will realize that he was deceived and will remember that the deceiver was a "global warming believer". In this particular case, that isn't likely (he almost certainly will not go and study up on climate science), but it illustrates a general danger in presenting a false picture in order to vault inferential distance.
It seems like the key is to first assess the level of inferential distance between you and the other person, and craft your explanation appropriately. The difficult part is doing so without setting the person up to feel cheated once they shorten the inferential distance a bit.
So, the difficulty isn't just in making it work better for correct positions (which has its own set of suggestions, like studying statistics and (good) philosophy of science), but also being extremely careful when presenting intermediate stories that aren't quite right. This latter issue disappears if the other person has close to the same background knowledge as you, and you're right that in such cases it can become fairly easy to argue for something that is wrong, and even easier to argue for something that isn't as well settled as you think it is (probably the bigger danger of the two), leading you to misrepresent the strength of your claim. I think this latter issue is much 'stickier' and particularly relevant to LW, where you see people who appear to be extremely confident in certain core claims yet appear to have a questionable ability to defend them (often opting to link to posts in the sequences, which is fine if you've really taken the time to work out the details, but this isn't always the case).
↑ comment by Eugine_Nier · 2012-03-23T03:14:38.838Z · LW(p) · GW(p)
All the immediately obvious ones amplify the effect
You mean like the fact that clouds are white and form more when it's warmer.
Replies from: Dmytry
↑ comment by Dmytry · 2012-03-23T03:45:40.484Z · LW(p) · GW(p)
You mean like the fact that clouds are white and form more when it's warmer.
Do they, really? Last time I checked, they formed pretty well at -20°C and at +35°C. Ohh, I see a knee-jerk reaction happening - they may form a bit more at +35°C in your place (here they are white, and also form more in winter). Okay, 55 degrees of difference may make a difference; now what?
Here comes another common failure mode: animism. Even if you find temperature-dependent effects that work in the opposite direction, they have to be quite strong to produce any notable difference as a result of a 2-degree difference in temperature, at the many points of the temperature range, to get yourself any compensation beyond a small percentage. It's only biological systems, which tend to implement PID controllers, that do counter any deviations from equilibrium, even little ones, in a way not dependent on their magnitude.
Replies from: steven0461
↑ comment by steven0461 · 2012-03-23T04:02:14.491Z · LW(p) · GW(p)
The way I've always heard it, mainstream estimates of climate sensitivity are somewhere around 3 degrees (with a fair amount of spread), and the direct effect of CO2 on radiation is responsible for 1 degree of that, with the rest being caused by positive feedbacks. It may be possible to argue that some important positive feedbacks are also basic physics (and that no important negative feedbacks are basic physics), but it sounds to me like that's not what you're doing; it sounds to me like, instead, you're mistakenly claiming that the direct effect by itself, without any feedback effects, is enough to cause warming similar to that claimed by mainstream estimates.
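For reference, a sketch of the standard feedback-amplification arithmetic behind those numbers; the feedback fraction below is just the value implied by the illustrative 1-degree and 3-degree figures, not a measured quantity:

```python
# Simple linear feedback amplification: total = direct / (1 - f), where f is
# the fraction of the response fed back. The feedback fraction below is just
# the value implied by the illustrative 1-degree / 3-degree figures above,
# not a measured quantity.

def total_warming(direct, feedback_fraction):
    return direct / (1 - feedback_fraction)

direct = 1.0                      # degrees C from the CO2 forcing alone
f = 2.0 / 3.0                     # implied net positive feedback fraction
print(total_warming(direct, f))   # 3.0 degrees C

# Most of the spread in sensitivity estimates lives in f, not in the
# direct radiative term:
print(total_warming(direct, 0.5), total_warming(direct, 0.75))  # 2.0 4.0
```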
Replies from: Dmytry
↑ comment by Dmytry · 2012-03-23T04:05:24.020Z · LW(p) · GW(p)
Nah, I'm speaking of the anthropogenic global warming vs. no anthropogenic global warming 'debate', not of the 1-degree vs. 3-degrees type of debate. For the most part, the AGW debate is focussed on the effect of CO2, sans the positive feedbacks, as the deniers won't even accept 1 degree of difference.
Speaking of which, one very huge positive feedback is that water vapour is a greenhouse 'gas'.
Replies from: Nornagest, Eugine_Nier
↑ comment by Nornagest · 2012-03-23T05:12:25.666Z · LW(p) · GW(p)
Why the quotes? Water vapor's a gas. There's also liquid- and solid-phase water in the atmosphere in the form of clouds and haze, but my understanding is that that generally has a cooling effect by way of increasing albedo.
Might be missing some feedbacks there, though; I'm not a climatologist.
Replies from: Dmytry
↑ comment by Eugine_Nier · 2012-03-23T04:28:30.317Z · LW(p) · GW(p)
I think the debate, and certainly the policy debate, is (in effect) about the catastrophic consequences of CO2.
↑ comment by Manfred · 2012-03-23T09:22:13.409Z · LW(p) · GW(p)
Hm, I think going higher up the hierarchy of abstraction is generally bad when it comes to disagreements. People so easily get trapped into arguing because someone else is arguing back, and it's even easier when you're not being concrete.
Replies from: gwern
↑ comment by gwern · 2012-03-23T15:14:21.652Z · LW(p) · GW(p)
I didn't say abstraction, I said disagreement.
Replies from: Manfred, Dmytry
↑ comment by Dmytry · 2012-03-23T15:36:32.424Z · LW(p) · GW(p)
Which one on the list is appeal to authority, or quotation of a piece of text one is not himself qualified to understand? (I only briefly skimmed and didn't really see it.) (Looks like DH1 is the only one mentioning references to authorities, in the way of accusation of lack of authority.)
Replies from: gwern
↑ comment by gwern · 2012-03-23T15:42:46.634Z · LW(p) · GW(p)
DH4, argument. Pointing out what authorities say on the question is contradiction (the authorities contradict your claim) plus evidence (which authorities where).
Replies from: Dmytry
↑ comment by Dmytry · 2012-03-23T18:13:08.331Z · LW(p) · GW(p)
Cherry picking, combined with typically putting words into authorities' mouths. But I agree that if it is an accepted consensus rather than cherry-picked authorities, then it's pretty effective. (edit: Unfortunately of course, one probably knows of the consensus long before the argument)
↑ comment by Vulture · 2012-03-24T19:59:58.113Z · LW(p) · GW(p)
I think we all seem to be forgetting that the point of this article is to help us engage in more productive debates, in which two rational people who hold different beliefs on an issue come together and satisfy Aumann's Agreement Theorem - which is to say, at least one person becomes persuaded to hold a different position from the one they started with. Presumably these people are aware of the relevant literature on the subject of their argument; the reason they're on a forum (or comment section, etc.) instead of at their local library is that they want to engage directly with an actual proponent of another position. If they're less than rational, they might be entering the argument to persuade others of their position, but nobody's there for a suggested reading list. If neither opponent has anything to add besides a list of sources, then it's not an argument - it's a book club.
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2012-03-24T20:34:55.854Z · LW(p) · GW(p)
I think we all seem to be forgetting that the point of this article is to help us engage in more productive debates, in which two rational people who hold different beliefs on an issue come together and satisfy Aumann's Agreement Theorem - which is to say, at least one person becomes persuaded to hold a different position from the one they started with.
Also, make sure that position is closer to the truth. Don't forget that part.
Replies from: Vulture
↑ comment by Vulture · 2012-03-24T20:56:20.086Z · LW(p) · GW(p)
And that's another important point: Trading recommended reading lists does nothing to sift out the truth. You can find a number of books xor articles espousing virtually any position, but part of the function of a rational argument is to present arguments that respond effectively to the other person's points. Anyone can just read books and devise brilliant refutations of the arguments therein; the real test is whether those brilliant refutations can withstand an intelligent, rational "opponent" who is willing and able to thoroughly deconstruct them from a perspective outside of your own mind.
comment by Eugine_Nier · 2012-03-19T04:08:41.240Z · LW(p) · GW(p)
Don't Straw Man Fellow Arguers, Steel Man Them Instead
Be careful with this one. I've been in arguments where, in attempting to steel-man the other person's position, I discovered that they don't agree with what I thought was the steel man.
Replies from: loup-vaillant, John_Maxwell_IV
↑ comment by loup-vaillant · 2012-03-19T15:39:57.570Z · LW(p) · GW(p)
Maybe you failed to make your steel man a proper superset (in probability space) of their original argument? If they still disagree, then they have a problem.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-19T06:44:41.706Z · LW(p) · GW(p)
Generating strong arguments, regardless of what position they are for, can be helpful for improving the productivity of your argument.
In other words, just because an argument wasn't what another arguer was trying to communicate doesn't mean it isn't valuable.
comment by John_Maxwell (John_Maxwell_IV) · 2012-03-18T21:03:25.793Z · LW(p) · GW(p)
Author's Notes
I wrote this to be accessible to a general audience. I didn't announce this at the top of the post, like previous posts of this type have done, because I thought it would be weird for a member of the general audience to see "this was written for a general audience" or "share this with your friends" at the beginning of a post they were about to read.
However, it's not just for a general audience; I'm hoping it will be useful for Less Wrong users who realize they haven't been following one or more of these tips. (Like so much on Less Wrong, think of it as a list of bugs that you can check your thinking and behavior for.)
I realize the position I'm taking on holding back downvotes is somewhat extreme by current standards. But the negative externalities from excess downvoting are hard for us to see. My best friend, an intelligent and rational guy whose writing ability is only so-so, was highly turned off by Less Wrong when the first comment he made was voted down.
If we really feel like we need downvoting for hiding and sorting things, maybe we could mask the degree of downvoting by displaying negative integers as 0? I think this is what reddit does for submissions. Right now, I suspect that heavily downvoted items actually attract attention, like a car crash, as evidenced by the fact that lots of stuff is voted down significantly below "hide threshold" levels.
I'm certainly not perfect at following these tips. If you notice me violating my own advice, please leave a message in my feedback form.
This post was inspired by an argument I had with a fellow Less Wrong user in which both of us failed to follow some of these tips in retrospect.
I chose to use the word "arguer" instead of "opponent" to emphasize that the relationship doesn't have to be adversarial.
Props to jkaufman for prior art in writing about these ideas.
Replies from: Larks, John_Maxwell_IV
↑ comment by Larks · 2012-03-19T00:10:58.538Z · LW(p) · GW(p)
I agree that downvoting new people is a bad idea - and every comment in the Welcome Thread should get a load of karma.
However, I think people should aggressively downvote - at the very least a couple of comments per page.
If we don't downvote, comments on average get positive karma - which makes people post them more and more. A few 0 karma comments is a small price to pay if there's a high chance of positive karma.
However, we don't want these posts. They clutter LW, increasing noise. The reason we read forums rather than random letter sequences is because forums filter for strings that have useful semantic content; downvoting inane or uninsightful comments increases this filtering effect. I'd much rather spend a short period of time reading only high quality comments than spend longer reading worse comments.
Worse, it can often be hard to distinguish between a good comment on a topic you don't understand and a bad one. Yet I get much more value spending time reading the good one, which might educate me, than the bad one, which might confuse me - especially if I have trouble distinguishing experts.
Downvotes provide the sting of (variable) negative reinforcement. In the long run, well-kept gardens die by pacifism.
Replies from: Desrtopa, handoflixue, oliverbeatson, Richard_Kennaway, John_Maxwell_IV
↑ comment by Desrtopa · 2012-03-19T06:04:12.495Z · LW(p) · GW(p)
However, I think people should aggressively downvote - at the very least a couple of comments per page.
If we don't downvote, comments on average get positive karma - which makes people post them more and more. A few 0 karma comments is a small price to pay if there's a high chance of positive karma.
We should expect comments on average to get positive karma, as long as the average member is making contributions which are on the whole more wanted than unwanted. Attempting to institute a minimum quota of downvoted comments strikes me as simply ridiculous. If the least worthwhile comment out of twenty is still not an actual detraction from the conversation, there's no reason to downvote it.
If we're just concerned with the average quality of discourse, it would be simpler to just cut off the whole community and go back to dialogues between Eliezer and Robin.
Replies from: wedrifid
↑ comment by wedrifid · 2012-03-19T08:30:51.304Z · LW(p) · GW(p)
If we're just concerned with the average quality of discourse, it would be simpler to just cut off the whole community and go back to dialogues between Eliezer and Robin.
The most significant dialog between Eliezer and Robin (the Foom debate) was of abysmally low quality - relative to the output of either of those individuals when not dialoging with each other. I have been similarly unimpressed with other dialogs that I have seen them have in blog comments. Being good writers does not necessarily make people good at having high quality dialogs. Especially when their ego may be more centered around being powerful presenters of their own ideas than around being patient and reliable when comprehending the communication of others.
If we want high quality dialog, have Eliezer write blog posts and Yvain engage with them.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-19T18:58:02.296Z · LW(p) · GW(p)
Especially when their ego may be more centered around being powerful presenters of their own ideas than around being patient and reliable when comprehending the communication of others.
Yep. I did write this article hoping that LWers would benefit from it, and EY was one of those LWers. (Assuming his arguing style hasn't changed since the last few times I saw him argue.)
↑ comment by handoflixue · 2012-03-23T18:58:36.363Z · LW(p) · GW(p)
"Downvotes provide the sting of (variable) negative reinforcement."
"My [...friend...] was highly turned off by Less Wrong when the first comment he made was voted down."
It seems to me that we want to cull people who repeatedly make poor comments, and who register an account just to make a single trolling remark (i.e. evading the first criterion via multiple accounts). We do not want to cull new users who have not yet adapted to the cultural standards of LessWrong, or who happen to have simply hit on one of the culture's sore spots.
If nothing else, the idea that this community doesn't have blind spots and biases from being a relatively closed culture is absurd. Of course we have biases, and we want new members because they're more likely to question those biases. We don't want a mindless rehashing of the same old arguments again and again, but that initial down vote can be a large disincentive to wield so casually.
Of course, solving this is trickier than identifying it! A few random ideas:
- Mark anyone who registered less than a week ago, or with less than 5 comments, with a small "NEWBIE" icon (ideally something less offensive than actually saying "NEWBIE"). Also helps distinguish a fresh troll account from a regular poster who happens to have said something controversial.
- Someone's first few posts are "protected" and only show positive karma, unless the user goes beneath a certain threshold (say, -10 total karma across all their posts). This allows "troll accounts" to quickly be shut down, and only shields someone's initial foray (and they'll still be met with rebuttal comments)
There's probably other options, but it seems that it would be beneficial to protect a user's initial foray, while still leaving the community to defend itself from longer-term threats.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2012-03-23T20:41:03.999Z · LW(p) · GW(p)
How about redirecting users to the latest Welcome thread when they register, and encouraging them to post there? Such posts are usually quickly upvoted to half-a-dozen or thereabouts.
Replies from: handoflixue
↑ comment by handoflixue · 2012-03-23T23:42:29.036Z · LW(p) · GW(p)
I definitely think the "Welcome" threads could do with more prominence. That said, I'm loath to do introductions myself; I'd far rather just jump in to discussing things and let people learn about me from my ideas. I'd expect plenty of other people here have a similar urge to respond to a specific point before investing themselves in introductions and community-building / social activities.
↑ comment by oliverbeatson · 2012-03-20T13:16:38.454Z · LW(p) · GW(p)
For some reason I would feel much better imposing a standard cost on commenting (e.g. -2 karma) that can be easily balanced by being marginally useful. This would better disincentivise both spamming and comments that people didn't expect to be worth very much insight, and still allow people to upvote good-but-not-promotion-worthy comments without artificially inflating that user's karma. This however would skew commenters towards fewer, longer, more premeditated replies. I don't know if we want this.
Replies from: handoflixue
↑ comment by handoflixue · 2012-03-23T19:13:27.766Z · LW(p) · GW(p)
I find short, pithy replies tend to get better responses karma-wise.
↑ comment by Richard_Kennaway · 2012-03-19T14:39:48.578Z · LW(p) · GW(p)
If we don't downvote, comments on average get positive karma - which makes people post them more and more. A few 0 karma comments is a small price to pay if there's a high chance of positive karma.
Anyone who posts in order to get karma either overvalues karma or undervalues their time. If their time really is worth so little, they probably can't produce karma-worthy comments anyway.
Replies from: handoflixue
↑ comment by handoflixue · 2012-03-23T19:12:28.609Z · LW(p) · GW(p)
"If their time really is worth so little, they probably can't produce karma-worthy comments anyway."
I can throw out a quick comment in 2 minutes. I enjoy writing quick comments, because I like talking about myself. I expect a lot of people like talking about themselves, given various social conventions and media presentations.
I almost never see a comment of mine voted down unless it's actively disagreeable (BTW, cryonics is a scam!), attempting to appeal to humour (you lot seriously cannot take a joke), or actively insulting (I like my karma enough not to give an example :P)
I'd idly estimate that I average about +1 karma per post. Basically, they're a waste of time.
I have over 1,000 karma.
So, the community consensus is that I'm a worthwhile contributor, despite the vast majority of my comments being more or less a waste of time. Specifically, I'm worthwhile because I'm prolific.
(Of course, if I cared about milking karma, I'd put this time in to writing a couple well-researched main posts and earn 100+ karma from an hour of work, instead of ~30/hour contributing a two-line comment here and there.)
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-19T01:22:36.573Z · LW(p) · GW(p)
Thanks for that link.
It occurred to me that Eliezer's intuitions for moderation may not be calibrated to the modern Internet, where there really is a forum for people at every level of intelligence: Yahoo Answers, Digg, Facebook, 4chan, Tagged (which is basically the smaller but profitable successor to MySpace that no one intelligent has heard of), etc. I saw the Reddit community degenerate, but Reddit was a case of the smart people having legitimately better software (and therefore better entertainment through better chosen links). Nowadays, things are more equalized and you don't pay much of a price in user experience terms for hanging out on a forum where the average intelligence is similar to yours.
Robin Hanson recently did the first ever permanent banning on Overcoming Bias, and that was for someone who was unpleasant and made too many comments, not someone who was stupid. (Not sure how often Robin deletes comments though, it does seem to happen at least a little.)
If we don't downvote, comments on average get positive karma - which makes people post them more and more.
I don't think this effect is very significant. I find it implausible that people post more comments on Hacker News, where comments are hardly ever voted down below zero, because it gets them karma. But even if they do, Hacker News is a great, thriving community. I would love it if we adopted a Hacker News-style moderation system where only users with high karma could vote down.
I like the idea of promote/agree/disagree buttons somewhat.
Replies from: Eugine_Nier, radical_negative_one, wedrifid
↑ comment by Eugine_Nier · 2012-03-20T03:54:14.722Z · LW(p) · GW(p)
I would love it if we adopted a Hacker News-style moderation system where only users with high karma could vote down.
We already have a system where you can only downvote a number of comments up to four times your karma.
↑ comment by radical_negative_one · 2012-03-19T15:25:39.735Z · LW(p) · GW(p)
a Hacker News-style moderation system where only users with high karma could vote down.
I idly wonder if any noticeable fraction of downvotes does come from people who don't have enough karma to post toplevel articles.
I'd guess that "high karma" would refer to the threshhold needed for posting articles, which is a pretty low bar.
↑ comment by wedrifid · 2012-03-19T02:12:20.801Z · LW(p) · GW(p)
I would love it if we adopted a Hacker News-style moderation system where only users with high karma could vote down.
I like the sound of that for some reason.
Replies from: gwern, John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-19T03:08:53.247Z · LW(p) · GW(p)
Larks strikes again.
(Comment was at -1 when I found it.)
It definitely changes my feeling about getting voted down to know there are people like Larks. I guess I just assumed that everyone was like me in reserving downvotes for the absolute worst stuff. Maybe there's some way of getting new users to expect that their first few comments will be voted down and not to worry about it?
Replies from: handoflixue, Vladimir_Nesov, Larks
↑ comment by handoflixue · 2012-03-23T19:01:44.996Z · LW(p) · GW(p)
It would be interesting to see statistics on up vs down vote frequency per user. Even just a graph of how many users are in the 0-10% down vote bracket, 10-20%, etc. would be neat. I doubt the data is currently available, otherwise it would be trivial to put together a simple graph and a quick post detailing trends in that data.
↑ comment by Vladimir_Nesov · 2012-03-19T11:41:08.586Z · LW(p) · GW(p)
To adjust your calibration a bit more: I worry that I might run out of my 4*Karma downvoting limit.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-20T04:46:12.687Z · LW(p) · GW(p)
This is a thought I had after reading some anonymous feedback for this article.
I've decided that the maximally productive argument style differs some by audience size and venue.
When arguing online:
- The argument productivity equation is dominated by bystanders. So offending the person you are responding to is not as much of a loss.
- For distractible Internet users, succinctness is paramount. So it often makes sense to overstate your position instead of using wordy qualifiers. (Note that a well-calibrated person is likely to be highly uncertain.)
When arguing with a few others in meatspace:
- The only way to have a productive argument is for you or one of the others to change their mind.
- And you pay less of a price for qualifiers, since they're going to listen to you anyway.
Thanks for the feedback!
comment by cousin_it · 2012-03-19T10:56:22.271Z · LW(p) · GW(p)
It seems to me that arguments between scientists are productive mostly because they have a lot of shared context. If the goal of arguing is to learn things for yourself, then it's useless to argue with someone who doesn't have the relevant context (they can't teach you anything) and useless to argue about a topic where you don't know the relevant context yourself (it's better to study the context first). Arguments between people who are coming from different contexts also seem to generate more heat and less light, so it might be better to avoid those.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-19T18:53:33.047Z · LW(p) · GW(p)
Well in an ideal world, if I'm an ignoramus arguing with a scientist, our argument would transform into the scientist teaching me the basics of his field. Remember, the "arguing" relationship doesn't have to be (and ideally shouldn't be) adversarial.
comment by RomeoStevens · 2012-03-19T02:06:21.074Z · LW(p) · GW(p)
Arguing logically works on a much smaller proportion of the populace than generally believed. My experience is that people bow to authority and status and only strive to make it look like they were convinced logically.
Replies from: jimmy, wedrifid
↑ comment by jimmy · 2012-03-22T19:50:27.797Z · LW(p) · GW(p)
That's basically right, but I'd like to expand a bit.
Most people are fairly easily convinced "by argument" unless they have a status incentive not to agree. The problems here are that 1) people very often have status reasons to disagree, and 2) people are usually so bad at reasoning that you can find an argument to convince them of anything in the absence of the first problem. It's not quite that they don't "care" about logical inconsistencies, but rather that they are bad at finding them because they don't build concrete models, and it's easy enough to find a path where they don't find an objection. (Note that when you point them out, they have status incentives to not listen, and it'll come across that they don't care - they just care less than the status loss they'd perceive.)
People that I have the most productive conversations with are good at reasoning, but more importantly, when faced with a choice to interpret something as a status attack or a helpful correction, they treat keeping the peace and learning, if at all possible, as the status-raising move. They also try to frame their own arguments in ways that minimize perceived status threat enough that their conversation partner will interpret them as helpful. This way, productive conversation can be a stable equilibrium in the presence of status drives.
However, unilaterally adopting this strategy doesn't always work. If you are on the blunt side of the spectrum, the other party can feel threatened enough to make discussion impossible even after backing up n meta levels. If you're on the walking-on-eggshells side, the other party can interpret it as allowing them to take the status high ground, give bad arguments, and dismiss your arguments. Going to more extreme efforts not to project status threats only makes the problem worse, as (in combination with not taking offense) it is interpreted as submission. It's like unconditional cooperation. (This appears to be exactly what is happening with the Muehlhauser-Goertzel dialog, by the way - though the bitterness hints that he still perceives SIAI as a threat - just a threat he is winning a battle with.)
I have a few thoughts on potential solutions (and have had some apparent success), but they aren't well developed enough to be worth sharing yet.
So yes, all the real work is done in manipulating perceptions of status, but it's more complicated than "be high status" - it's getting them to buy into the frame where they are higher status when they agree - or at least don't lose status.
Replies from: RomeoStevens↑ comment by RomeoStevens · 2012-03-22T21:04:34.180Z · LW(p) · GW(p)
I fully agree in the context of longer interactions or multiple interactions.
↑ comment by wedrifid · 2012-03-19T08:36:41.008Z · LW(p) · GW(p)
Arguing logically works on a much smaller proportion of the populace than generally believed.
Roughly speaking, from what I can tell, it is generally believed that it works on 10% of the populace but really it works on less than 1%.
Replies from: Dmytry↑ comment by Dmytry · 2012-03-19T08:50:25.228Z · LW(p) · GW(p)
and 99% believe they are in 1%.
Replies from: wedrifid, gwern↑ comment by wedrifid · 2012-03-19T09:11:07.556Z · LW(p) · GW(p)
and 99% believe they are in 1%.
I'm in the 1% that don't think they are in the 1%. (It seems we have no choice but to be in one of the arrogant categories there!)
I usually get persuaded not so much by logic (because the logical stuff I can think of already, and quite frankly I'm probably better at it than the arguer) but by being given information I didn't previously have.
Replies from: Dmytry↑ comment by Dmytry · 2012-03-19T09:25:40.899Z · LW(p) · GW(p)
It is still flawed logic if the new data requires you to go substantially outside your estimated range rather than narrow down your uncertainty. (edit: and especially so if you don't even have a range of some kind).
E.g. we had an unproductive argument about whether a random-ish AGI 'almost certainly' just eats everyone; it's not that I have some data showing it is almost certain not to eat everyone, it's that you shouldn't have this sort of certainty about such a topic. It's fine if your estimated probability distribution centres there; it's not fine if it is ultra narrow.
Replies from: wedrifid↑ comment by wedrifid · 2012-03-19T11:50:32.162Z · LW(p) · GW(p)
It is still flawed logic if the new data requires you to go substantially outside your estimated range rather than narrow down your uncertainty.
I gave nothing to indicate that this was the case. While the grandparent is more self-deprecating, on behalf of both myself and the species, than it is boastful, your additional criticism doesn't build upon it. The flaw you perceive in me (along the lines of disagreeing with you) is a different issue.
(edit: and especially so if you don't even have a range of some kind).
I have a probability distribution, not a range. Usually not a terribly well specified probability distribution but that is the ideal to be approximated.
E.g. we had an unproductive argument about whether a random-ish AGI 'almost certainly' just eats everyone; it's not that I have some data showing it is almost certain not to eat everyone, it's that you shouldn't have this sort of certainty about such a topic. It's fine if your estimated probability distribution centres there; it's not fine if it is ultra narrow.
No. Our disagreement was not one of me assigning too much certainty. The 'almost certainly' was introduced by you, applied to something that I state has well under an even chance of happening. (Specifically, regarding the probability of humans developing a worse-than-just-killing-us uFAI in the near vicinity of an FAI.)
You should also note that there is a world of difference between near certainty about what kind of AI will be selected and an unspecified level of certainty that the overwhelming majority of AGI goal systems would result in them killing us. The difference is akin to having 80% confidence that 99.9999% of balls in the jar are red. Don't equivocate that with 99.9999% confidence. They represent entirely different indicators of assigned probability distributions.
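A minimal numerical sketch of that distinction (the 50%-red alternative hypothesis below is purely illustrative, not something either of us has asserted):

```python
# Sketch of the jar distinction (illustrative numbers only).
# "80% confidence that 99.9999% of the balls are red" vs.
# "99.9999% confidence that the next ball drawn is red".
p_mostly_red = 0.80    # credence in the hypothesis "99.9999% of balls are red"
p_alternative = 0.20   # credence in an assumed alternative: only 50% are red

p_nonred = p_mostly_red * (1 - 0.999999) + p_alternative * (1 - 0.50)
print(f"P(next ball is non-red) = {p_nonred:.6f}")  # ~0.10, not 0.000001
```

The marginal probability of drawing a non-red ball is dominated by the 20% of probability mass sitting on the alternative, which is exactly why the two confidence statements shouldn't be equivocated.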
In general meta-criticisms of my own reasoning that are founded on a specific personal disagreement with me should not be expected to be persuasive. Given that you already know I reject the premise (that you were right and I was wrong in some past dispute) why would you expect me to be persuaded by conclusions that rely on that premise?
Replies from: Dmytry↑ comment by Dmytry · 2012-03-19T12:13:51.236Z · LW(p) · GW(p)
Nah, my argument was "Well, the crux of the issue is that the random AIs may be more likely to leave us alone than near-misses at FAI." By the 'may', I meant that there is a notable probability that far less than 99.9999% of the balls in the jar are red, and consequently a far greater than 0.0001% probability of drawing a non-red ball.
edit: Furthermore, suppose we have a jar with 100 balls in which we know there is at least one blue ball (near-FAI space), and a huge jar with 100,000 balls, about which we don't know a whole lot, and which has a substantial probability of having a larger fraction of non-red balls than the former jar.
edit: also, re probability distributions, that's why I said a "range of some sort". Humans don't seem to quite do the convolutions and the like on probability distributions when thinking.
↑ comment by gwern · 2012-03-19T14:28:15.683Z · LW(p) · GW(p)
Robin just had an interesting post on this: http://www.overcomingbias.com/2012/03/disagreement-experiment.html
comment by Mike Bishop (MichaelBishop) · 2012-03-28T16:11:14.572Z · LW(p) · GW(p)
It seems someone should link up "Why and How to Debate Charitably." I can't find a copy of the original because the author has taken it down. Here is a discussion of it on LW. Here are my bulleted summary quotes. ADDED: Original essay. I've just learned, and am very saddened to hear, that the author, Chris, committed suicide some time ago.
Replies from: RobinZ, Wei_Dai, NancyLebovitz↑ comment by Wei Dai (Wei_Dai) · 2012-04-19T23:15:37.200Z · LW(p) · GW(p)
From that essay:
Rules for dealing with lack of charity
Now, all the above is good and well to strive to follow. But what should you do when you’re dealing with those who don’t follow these rules? I have many ideas on this aspect of debate as well, which I will write up sometime.
Does anyone know if he did write them up? Even the Internet Archive's mirror of pdf23ds.net is gone now (intentionally purged by the author, it looks like).
↑ comment by NancyLebovitz · 2012-03-28T16:36:01.302Z · LW(p) · GW(p)
I've noticed that if I notice someone online as civilized and intelligent, the odds seem rather high that I'll be seeing them writing about having an ongoing problem with depression within months.
This doesn't mean that everyone I like (online or off) is depressed, but it seems like a lot. The thing is, I don't know whether the proportion is high compared to the general population, or whether depression and intelligence are correlated. (Some people have suggested this as an explanation for what I think I've noticed.)
I wonder whether there's a correlation between depression and being conflict averse.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2012-03-28T17:07:13.322Z · LW(p) · GW(p)
I wonder whether there's a correlation between depression and being conflict averse.
I would guess that there is, and I'm sure there has been at least some academic study of it. This doesn't really address the issue, but it's related.
I also think that keeping a blog or writing in odd corners of the internet may be associated with, possibly even caused by, depression.
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-03-19T11:59:16.614Z · LW(p) · GW(p)
Good article. The 6 techniques seem quite useful. I think I use 'isolate specific disagreements' the most - it feels like at least 80% of all arguments I get into consist of me trying to clarify exactly what we disagree on, and about half the time finding out that we don't actually disagree on anything of substance, just vocabulary/values/etc.
If your belief inertia is low and you steel-man everything, you'll reach the super chill state of not having a "side" in any given argument. You'll play for all sides and you won't care who wins.
I've actually been criticized for this. Finding it hard to have a firm opinion on something makes it nearly impossible to write a high school or university persuasive essay, and I hate having to write something that I don't actually believe or think is valid, so I end up agonizing for ages and ending up with a wishy-washy, kind of pointless main argument.
Replies from: TheOtherDave, John_Maxwell_IV↑ comment by TheOtherDave · 2012-03-19T20:16:16.316Z · LW(p) · GW(p)
"Not having a side" doesn't have to mean being unable to argue a side, it can instead mean being able to argue several different sides. If I can do that, then if someone insists that I argue a position I can pick one and argue it, even knowing perfectly well that I could just as readily argue a conflicting position.
In the real world, knowing that there are several different plausible positions is actually pretty useful. What I generally find professionally is that there's rarely "the right answer" so much as there are lots of wrong answers. If I can agree with someone to avoid the wrong answers, I'm usually pretty happy to accept their preferred right answer even if it isn't mine.
Replies from: Dmytry, Swimmer963↑ comment by Dmytry · 2012-03-19T20:28:23.557Z · LW(p) · GW(p)
When you can equally well argue that X is true, and that X is false, it means that your arguing is quite entirely decoupled from truth, and as such, both of those arguments are piles of manure that shouldn't affect anyone's beliefs. It is only worth making one such argument to counter the other, for the sake of keeping the undecided audience undecided. Ideally, you should instead present a very strong meta-ish argument that the argumentation for both sides, which you would be able to make, is complete nonsense.
(Unfortunately that gets both sides of argument pissed off at you.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-19T21:17:02.827Z · LW(p) · GW(p)
When you can equally well argue that X is true, and that X is false, it means that your arguing is quite entirely decoupled from truth
That's not actually true.
In most real-world cases, both true statements and false statements have evidence in favor of them, and the process of assembling and presenting that evidence can be a perfectly functional argument. And it absolutely should affect your belief: if I present you with novel evidence in favor of A, your confidence in A should increase.
Ideally, I should weigh my evidence in favor of A against my evidence in favor of not-A and come to a decision as to which one I believe. In cases where one side is clearly superior to the other, I do that. In cases where it's not so clear, I generally don't do that.
Depending on what's going on, I will also often present all of the available evidence. This is otherwise known as "arguing both sides of the issue" and yes, as you say, it tends to piss everyone off.
Replies from: Dmytry↑ comment by Dmytry · 2012-03-19T21:26:00.436Z · LW(p) · GW(p)
Let me clarify with an example. I ran 100 tests of a new drug against placebo (in each test I had 2 volunteers), and I got very lucky and got an exactly neutral result: in 50 of them, the drug performed better than placebo, while in the other 50 it performed worse.
I can construct an 'argument' that the drug is better than placebo by presenting the data from the 50 cases where it performed better, or an 'argument' that the drug is worse than placebo by presenting the data from the 50 cases where the placebo performed better. Given perfect knowledge of the process that led to the acquisition of that data, neither 'argument' should sway anyone's belief about the drug in any direction - even if the data from the other 50 cases has been irreversibly destroyed and is no longer part of anyone's knowledge (it is only known that there were 100 trials, and that 50 outcomes were destroyed because they didn't support the notion that the drug is good; the actual 50 outcomes are not available). That's what I meant. Each 50-case data set, by itself, tells you absolutely nothing about whether drug > placebo; given perfect knowledge of the extent of the cherry-picking, it is at most weak evidence that the drug's effects are small, and it only sways your opinion on "is the drug better than placebo" if you hold a false belief about the degree of cherry-picking.
Furthermore, unlike the mathematics of decision theory, qualitative comparison of two verbal arguments by their vaguely determined 'strength' yields complete junk unless the strengths differ very significantly (edit: which happens when one argument is actually good and the other is junk), due to cherry-picking as in the above example.
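(A rough sketch of that cherry-picking point in Python; the zero-effect drug and the exactly 50/50 split are just the assumptions of my own example above, encoded directly.)

```python
# Mirror the example above: 100 trials, exactly 50 went each way.
drug_won = [True] * 50 + [False] * 50   # True = drug beat placebo in that trial

pro_subset = [t for t in drug_won if t]       # the 50 trials the drug "won"
anti_subset = [t for t in drug_won if not t]  # the 50 trials the placebo "won"

# Each cherry-picked subset looks like a slam-dunk 'argument' on its own...
print(f"Pro-drug 'argument': drug won all {len(pro_subset)} reported trials")
print(f"Anti-drug 'argument': placebo won all {len(anti_subset)} reported trials")

# ...but once the selection process is known, only the full tally is informative.
print(f"Full data: drug won {sum(drug_won)} of {len(drug_won)} trials")
```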
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-20T00:32:36.571Z · LW(p) · GW(p)
Sure. More generally, in cases where evaluating all the evidence I have leads me unambiguously to conclude A, and then I pick only that subset of the evidence that leads me to conclude NOT A, I'm unambiguously lying. But in cases like that, the original problem doesn't arise... picking a side to argue is easy, I should argue A. That's very different (at least to my mind) from the case where I'm genuinely ambivalent between A and NOT A but am expected to compellingly argue some position rather than asserting ignorance.
Replies from: Dmytry↑ comment by Dmytry · 2012-03-20T01:03:30.531Z · LW(p) · GW(p)
Well, in the drug example there is no unambiguous conclusion, and one can be ambivalent, yet it is a lie to 'argue' A or not-A, and confusing to argue both rather than to integrate everything into the conclusion that both the pro and the anti arguments are complete crap (it IS the case in the drug example that both the pro and the anti data, even taken alone in isolation, shouldn't update beliefs, i.e. shouldn't be effective arguments). The latter, though, really pisses people off.
It is usually the case that while you can't be sure of the truth value of the proposition, you can be pretty darn sure that the arguments presented aren't linked to it in any way. But people don't understand that distinction, and really don't like it if you attack their argument rather than their position. In both sides' eyes you're just being a jerk who doesn't even care who's right.
All the while, positions are statistically wrong about half of the time (counting both sides of an argument once), while the arguments are flawed far more than half of the time. Even in math, if you just guess at the truth value of, say, Fermat's last theorem using a coin flip, you have a 50% chance of being wrong about the truth value, but if you were to attempt a proof, you would have something like a 99% chance of entirely botching it, unless you are really damn good at it. And a botched proof is zero evidence. If you know the proof is botched (or if you have a proof of the opposite that also passes your verification, implying that the verification is botched), it's not weak Bayesian evidence about any truth values; it's just data about human minds, language, fallacies, etc.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-21T17:42:14.751Z · LW(p) · GW(p)
Agreed that evaluating the relevance of an argument A to the truth-value T of a proposition doesn't depend on knowing T.
Agreed that pointing out to people who are invested in a particular value of T and presenting A to justify T that in fact A isn't relevant to T generally pisses them off.
Agreed that if T is binary, there are more possible As unrelated to T than there are wrong possible values for T, which means my chances of randomly getting a right answer about T are higher than my chances of randomly constructing an argument that's relevant to T. (But I'll note that not all interesting T's are binary.)
in the drug example there is no unambiguous conclusion, and one can be ambivalent,
This statement confuses me.
If I look at all my data in this example, I observe that the drug did better than placebo half the time, and worse than placebo half the time. This certainly seems to unambiguously indicate that the drug is no more effective than the placebo, on average.
Is that false for some reason I'm not getting? If so, then I'm confused.
If that's true, though, then it seems my original formulation applies. That is, evaluating all the evidence in this case leads me unambiguously to conclude "the drug is no more effective than the placebo, on average". I could pick subsets of that data to argue both "the drug is more effective than the placebo" and "the drug is less effective than the placebo" but doing so would be unambiguously lying.
Which seems like a fine example of "in cases where evaluating all the evidence I have leads me unambiguously to conclude A, and then I pick only that subset of the evidence that leads me to conclude NOT A, I'm unambiguously lying." No? (In this case, the A to which my evidence unambiguously leads me is "the drug is no more effective than the placebo, on average".)
Replies from: Dmytry↑ comment by Dmytry · 2012-03-21T22:32:05.848Z · LW(p) · GW(p)
Non-binary T: quite so, but can be generalized.
If I look at all my data in this example, I observe that the drug did better than placebo half the time, and worse than placebo half the time. This certainly seems to unambiguously indicate that the drug is no more effective than the placebo, on average.
But would it seem so if it were 10 trials, 5 wins and 5 losses? It just provides some evidence that the effect is small. If the drug is not some homeopathy that's pure water, you shouldn't privilege zero effect. Exercise for the reader: calculate the 95% CI for the 100 placebo-controlled trials.
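(One way to do the exercise, as a sketch using the normal approximation; an exact binomial interval would differ slightly.)

```python
import math

n, wins = 100, 50                      # the 100 trials, 50 "drug better"
p_hat = wins / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"95% CI for P(drug beats placebo): [{lo:.2f}, {hi:.2f}]")
# Roughly [0.40, 0.60]: consistent with zero effect, but also with a modest
# positive or negative effect, so the data alone don't privilege exactly zero.
```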
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-22T02:01:56.918Z · LW(p) · GW(p)
Ah, I misunderstood your point. Sure, agreed that if there's a data set that doesn't justify any particular conclusion, quoting a subset of it that appears to justify a conclusion is also lying.
Replies from: Dmytry↑ comment by Dmytry · 2012-03-22T07:26:38.945Z · LW(p) · GW(p)
Well, the same should apply to arguing a point when you could just as well have argued the opposite with the same ease.
Note, as you said:
In most real-world cases, both true statements and false statements have evidence in favor of them
And I made an example where both the true and the false statements got "evidence in favour of them" - 50 trials one way, 50 trials the other way. Each of those bodies of evidence is a subset of the evidence that appears to justify a conclusion, and presenting it alone is a lie.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-22T13:49:14.105Z · LW(p) · GW(p)
...
You are absolutely correct.
Point taken.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-03-19T21:22:59.542Z · LW(p) · GW(p)
If I can do that, then if someone insists that I argue a position I can pick one and argue it, even knowing perfectly well that I could just as readily argue a conflicting position.
This is the part that I can't do. It's almost like I can't argue for stuff I don't believe because I feel like I'm lying. (I'm also terrible at actually lying.)
Replies from: Strange7↑ comment by Strange7 · 2012-03-26T00:51:31.063Z · LW(p) · GW(p)
I figured out a long time ago that I don't like lying. As a result, I constructed some personal policies to minimize the amount of lying I would need to do. In that, we most likely are the same. However, I also practiced the skill enough that when a necessity arose I would be able to do it right the first time.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-19T19:02:01.510Z · LW(p) · GW(p)
Finding it hard to have a firm opinion on something makes it nearly impossible to write a high school or university persuasive essay, and I hate having to write something that I don't actually believe or think is valid, so I end up agonizing for ages and ending up with a wishy-washy, kind of pointless main argument.
So your thinking style is optimized for productive arguments but not school essays.
comment by wedrifid · 2012-03-19T08:33:32.716Z · LW(p) · GW(p)
Rule one: Have a social goal in any given conversation. It needn't be a fixed goal, but as long as there actually is one, the rest is easy.
Replies from: Dmytry↑ comment by Dmytry · 2012-03-22T07:29:49.903Z · LW(p) · GW(p)
Hmm. What's your social goal here? Producing texts for social goal purposes is called signalling (usually, but it depends on what you're trying to do).
Replies from: wedrifid↑ comment by wedrifid · 2012-03-22T11:54:55.693Z · LW(p) · GW(p)
Hmm. What's your social goal here?
It is something that it would be wiser to discuss with those for whom I would infer different motives and where I would predict different usages of any supplied text.
Producing texts for social goal purposes is called signalling (usually, but it depends on what you're trying to do).
When I engage my Hansonian reasoning I can describe everything humans do in terms of their signalling implications. Yet to describe things as just signalling is to discard rather a lot of information. Some more specific goals that people could have in a given argument include:
- Learning information from the other.
- Understanding why the other believes the way they do.
- Tracing the precise nature of disagreement.
- Persuading the other.
- Providing information to the other.
- Combining your thinking capabilities with another so as to better explore the relevant issues and arrive at a better solution than either could alone.
- Persuading the audience.
- Educating the audience.
- Mitigating the damage that the other has done through advocating incorrect or undesired knowledge or political opinions.
- Entertaining others or yourself.
- Making oneself look impressive to the audience.
- Altering the feelings that the other has towards you in a positive direction.
- Practicing one's skills at doing any of the above.
- Demonstrating one's ability to do any of the above and thereby gaining indirect benefit.
Some of those can be better described as 'signalling' than others.
comment by [deleted] · 2012-03-18T23:22:44.529Z · LW(p) · GW(p)
Don't Straw Man Fellow Arguers, Steel Man Them Instead
Can you provide some specific tips on this one? I've tried to do this in discussions with non-LW people, and it comes off looking bizarre at best and rude at worst. (See also: this comment, which is basically what ends up happening.) Have you been able to implement this technique in discussions, and if so, how did you do it while adhering to social norms/not aggravating your discussion partners?
Replies from: Vladimir_Nesov, Zaine, Anne_Briquet↑ comment by Vladimir_Nesov · 2012-03-18T23:30:04.647Z · LW(p) · GW(p)
It gets better when you disagree with your opponent that they were justified in agreeing with you...
Replies from: None↑ comment by [deleted] · 2012-03-18T23:36:31.584Z · LW(p) · GW(p)
Oh, absolutely. This is why I don't like being in discussions with people who hold the same position as I do but for different reasons--saying that a particular argument in favor of our position is wrong sounds like arrogance and/or betrayal.
Replies from: Vladimir_Nesov, MartinB↑ comment by Vladimir_Nesov · 2012-03-18T23:37:55.628Z · LW(p) · GW(p)
I was thinking more of a case where people agree out of politeness, and you have reason to believe they didn't properly understand your position.
Replies from: None↑ comment by [deleted] · 2012-03-18T23:47:37.294Z · LW(p) · GW(p)
Oh, I see. I don't think that's ever happened to me, although I have had people try to end conversations with "everyone's entitled to their own beliefs" or "there's no right answer to this question, it's just your opinion" and my subsequent haggling turned the discussion sour.
↑ comment by MartinB · 2012-03-19T00:17:57.296Z · LW(p) · GW(p)
It was weird the first few times when I had a cluster of people agreeing with me, spent time there, and then started to collect counterarguments. People are better tested by disagreements than by just holding similar end-views. A belief held for the wrong reasons can easily be changed. It gets a bit weird when you start fixing your opponents' arguments against your own position.
↑ comment by Zaine · 2012-03-19T04:07:44.671Z · LW(p) · GW(p)
I tend to do this often as part of serving as a 'moderator' of discussions/arguments, even when it's just me and another. It's useful to perceive the other party's (parties') argument as merely a podium upon which their belief rests, and then endeavor to identify, with specificity, their belief or position. Colloquially, the result would be something like:
Not you: "I think that, it just doesn't seem right, that, even without being given even a chance, the baby just dies. It's not right how they have no say at all, you know?"
You: "So, your position is..." In verbal communications you can at this point briefly pause as if you're carefully considering your words in order to allow an opportunity for their interjection of a more lucidly expressed position. "...that the fetus (and I'm just using the scientific terminology, here), has value equal to that of a grown person in moral considerations? [If confused:] I mean, that when thinking about an abortion, the fetus' rights are equal to that of the mother's?"
[As shown above, clarify one point at a time. Your tone must be that of one asking for clarification on a fact. More, "The tsunami warning was cancelled before or after the 3/14 earthquake hit?" than, "You've been wrong before; you sure?"]
Not you: "Yea, such is mine position."
You: "And, due to the fetus' having equal moral standing to the mother, abortions thus are an unjust practice?"
Not you: "Aye."
Be careful with these clarification proceedings, though. If by framing their arguments you happen to occlude the actual reasoning of their argument, due to them not knowing it themselves or otherwise, the entire rest of the argument could be a waste of time predicated upon a falsely framed position. Suggestions of possible solutions include:
- Asking whether they are sure the framed argument accurately expresses the reason for their position on the matter.
- Not framing at all, but jumping right into the hypothetical probing and allowing them to explore the issue enough to provide a confident statement of their position.
- Going straight to the hypothetical probing, using their responses to form a mental estimate of their actual position, steel-manning that estimate, and proceeding to argue on the presumption that your steel man is accurate, updating as necessary.
From then on, you have at your disposal vetted statements of their position that are intertwined with their arguments. Subsequent arguments can then be phrased as hypotheticals: "What if EEG scans, which monitor brain waves, only showed the fetus as having developed brain activity akin to that of a grown person (the mother, say) at four months? Would that mean that at four months the fetus becomes developed enough to be considered equal to the mother?"
This way you can inquire after their exact position, why they hold that position, and without taking a side gather whether they're open to accepting another position whilst presenting viable alternatives in a reasoned and unobtrusive fashion. If you wish to defuse an argument, simply pointing out that party X holds to alternative II, and asking whether they can understand why party X holds to alternative II, should be enough to at least start smothering the fuse.
Note: The use of 'should' when expressing ideals implies a position of righteous power, and should (please decry me if I am unjustified in taking on this position of righteous power) never be used in an argument, regardless of whether it's self-contained within a hypothetical. In my experience its use tends only to reinforce beliefs.
Replies from: Richard_Kennaway, Eugine_Nier, CronoDAS, CronoDAS↑ comment by Richard_Kennaway · 2012-03-21T17:11:58.909Z · LW(p) · GW(p)
"...that the fetus (and I'm just using the scientific terminology, here), has value equal to that of a grown person in moral considerations?
Well now, this technique is straight-out dishonesty. You're not "just using the scientific terminology". You have a reason for rejecting the other person's use of "baby", and that reason is that you want to use words to draw a moral line in reality at the point of birth. Notice that you also increased the distance by comparing the "fetus" not to a newborn baby but to an adult. But you cannot make distinctions appear in the real world by drawing them on the map.
Your tone must be that of one asking for clarification on a fact.
That is, your tone must be a lie. You are not asking for clarification of a fact.
Replies from: Zaine↑ comment by Zaine · 2012-03-21T20:16:58.769Z · LW(p) · GW(p)
You have a reason for rejecting the other person's use of "baby", and that reason is that you want to use words to draw a moral line in reality at the point of birth.
Don't know how you came to this, but nowhere do I take a stance on the issue. There's the 'Not you' and the 'You', with the former thinking it's wrong, and the latter wanting to know the former's reasoning and position.
You can just as well use the word 'baby'; it's just that using a neutral word settled on by a third party (namely science and scientists), as opposed to 'the baby' or 'it', helps in distancing them, as well as their perception of you, from the issue. It's difficult for someone to perceive an issue clearly when, every time it comes to mind, they're reminded, "Oh yes, this I believe." Subtly separating out that associative belief of the other party (parties) allows them to evince their true reasons with greater accuracy. Harry did the same (unintentionally) by setting Draco into an honestly inquisitive state of mind in HPMOR when investigating blood (not ad verecundiam, just an example).
I fully agree you cannot manifest terrain by drawing on the map; this is why I suggest comparing the fetus to a grown person. A new human has the potential to become a grown human, and I, in assuming the position of the 'Not you', guessed this was a reason they may value the fetus. In another example they may say, "No, they're just a baby! They're so cute! You can't kill anything cute!"
From a consequentialist point of view (which appears to be the same as the utilitarian - correct me if I'm wrong, please), it doesn't matter whether they are cute unless this is a significant factor to those considering the fetus' mortality. A fetus' potential 'cuteness' is a transient property, and a rather specious foundation upon which to decide whether the fetus shall have a life. What if they're ugly, grow out of the cuteness, or are too annoying to the mother? Then the reason for valuing the baby operates on a relative curve directly proportional to the baby's cuteness at any point in time; I'm not sure what it's called when a baby is wanted for the same reasons one might want a pet or a stuffed animal, but I don't think it's rational. Babies are living humans.
From a religious perspective, I am unaware of any religion that affords children moral rights separate from those of men, besides the possible distinction between an innocent, helpless man and a morally responsible one. Thus, if this be their rationale, then considering the fetus as having full moral rights equal to those of a grown person would be a given prerequisite. Without consideration as a person, a fetus is excluded from the moral protection of many a religion. From a Bayesian perspective, the probability of their belonging to a value system wherein human lives are exclusively valued above all others outweighs the other possibilities - or so I reasoned (if the logic's unsound, please inform).
Thus, the likening of the fetus unto a grown person.
And, on your last point, you assume the 'You' would have an agenda; you cannot, as you well reason, honestly and neutrally steel-man their argument with an agenda. Sure, the 'You' may subtly be presenting other paradigms; however, 'tis in the best interests of all parties that none remain as ignorant after the argument as they were before; this is a thread on improving the productivity of arguments, after all. Making their argument into a steel man necessitates fully or mostly understanding the other party's (parties') position; how can you steel-man what you do not understand? And so you honestly ask for clarification, using distancing hypotheticals to probe the truth of their position out of them, or framing their argument with your own words, using an interrogative tone requesting clarification on the fact of their belief (note the possible dangers of the latter as explicated elsewhere in this comment train).
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2012-03-21T21:03:17.557Z · LW(p) · GW(p)
There's the 'Not you' and the 'You', with the former thinking it's wrong, and the latter wanting to know the former's reasoning and position.
Ok, the "You" person isn't you. Sorry for conflating the two. (Probably because "You" is portrayed as the voice of reason, while "Not you" is the one given the idiot ball.) But if I look just at what the "You" person is saying, ignoring the interior monologue, they come across to me as I described. And if they speak that interior monologue, I won't believe them. This is a topic on which there is no neutral frame, no neutral vocabulary. Every discussion of it consists primarily of attempts to frame the matter in a preferred way. Here's a table comparing the two frames:
fetus | unborn child
right to choose | right to life
pro-choice | pro-abortion
anti-choice | anti-abortion
You can tell what side someone is on by the words they use.
Replies from: Zaine↑ comment by Zaine · 2012-03-21T21:36:47.712Z · LW(p) · GW(p)
No problem. If someone with your objection were to raise their concerns with the 'You' at the time of the exchange, I would recommend calmly requesting agreement on a word both agree is neutral; actually, this would be an excellent first step in ensuring the cooperation of both parties in seeking the truth, or the least wrong or disagreeable position on the matter. What that word would be in this instance, besides fetus, I haven't a clue - there may be no objectively neutral frame from your perspective; however, in each discourse all involved parties can create mutually agreed-upon, subjectively neutral vocabulary, if connotations truly do prove such an obstacle to productive communication. I am still for fetus as a neutral word, as it's the scientific terminology. Pro-life scientists aren't paradoxical.
↑ comment by Eugine_Nier · 2012-03-19T04:31:59.506Z · LW(p) · GW(p)
This way you can inquire after their exact position, why they hold that position, and without taking a side gather whether they're open to accepting another position whilst presenting viable alternatives in a reasoned and unobtrusive fashion.
What you actually appear to be doing in this exchange is framing the debate (this is not a neutral action) under the guise of being a neutral observer. If your arguer is experienced enough to see what you're doing, he will challenge you on it, probably in a way that will result in a flame war. If he isn't experienced enough, he may see what appears to be a logical argument that somehow doesn't seem persuasive, and this may put him off the whole concept of logical arguing.
Replies from: Incorrect, Zaine↑ comment by Incorrect · 2012-03-19T05:01:33.479Z · LW(p) · GW(p)
I don't see how it breaks neutrality if you frame the debate in a non-fallacious perspective.
If your arguer is experienced enough to see what you're doing, he will challenge you on it, probably in a way that will result in a flame war.
Can't it end in a peaceful back-and-forth until we have agreed on a common frame?
Replies from: Zaine↑ comment by Zaine · 2012-03-19T05:13:10.402Z · LW(p) · GW(p)
If I'm interpreting his objection correctly, I think the framing enables potential and possibly unknown biases to corrupt the entire process. The other party (parties) may consciously think they agree on a particular frame, but some buried bias or unknown belief may be incompatible with the frame, and they will end up rejecting it.
Replies from: Incorrect↑ comment by Incorrect · 2012-03-19T05:16:52.016Z · LW(p) · GW(p)
Well, then they can tell you they made a mistake and actually reject the frame, explaining why, and you will have learned about their position, allowing you to construct a new frame.
Replies from: Zaine↑ comment by Zaine · 2012-03-19T05:27:28.252Z · LW(p) · GW(p)
Indeed, though I wonder whether they will fail to be able to express why often enough to warrant omitting the framing step entirely in favor of immediate hypothetical probing - and even that assumes they'll realize the frame is inaccurate before the argument ends and each goes their separate way.
↑ comment by Zaine · 2012-03-19T04:45:18.483Z · LW(p) · GW(p)
So by framing their position with my own words, I could be tricking them into agreeing to something that technically sounds like their position, while their actual position could be suppressed, unknown, and biasing their reception of all that then follows? That sounds true; however, if they interject and state their position themselves, then would the technique of probing with hypotheticals also not be neutral?
I have edited the original comment so as to include and account for the former possibility, though I think the latter, probing with hypotheticals, is a valid neutral technique. If I'm wrong, please correct me.
↑ comment by CronoDAS · 2012-03-19T09:15:17.600Z · LW(p) · GW(p)
If you wish to defuse an argument, simply pointing out that party X holds to alternative II, and asking whether they can understand why party X holds to alternative II, should be enough to at least start smothering the fuse.
Sometimes, the only answer you can come up with for this is "Because they're mistaken, evil, or both." (We can probably agree today that anyone making serious pro-slavery arguments prior to the American Civil War was mistaken, evil, or both.)
↑ comment by CronoDAS · 2012-03-19T09:06:10.369Z · LW(p) · GW(p)
This is the Socratic method of arguing. It can also be used as a Dark Side technique by choosing your questions so as to lead your counterpart into a trap - that their position is logically inconsistent, or implies that they have to bite a bullet that they don't want to admit to biting.
I've seen this "countered" by people simply refusing to talk any more, by repeating their original statement, or saying "No, that's not it" followed by something that seems incomprehensible.
Replies from: TheAncientGeek, Dmytry, wedrifid↑ comment by TheAncientGeek · 2015-02-24T12:01:50.363Z · LW(p) · GW(p)
Why would that be a problem if their position actually is inconsistent? People don't like having their inconsistencies exposed, but it's still a legitimate concern for a truth-seeking debate.
↑ comment by Dmytry · 2012-03-19T09:13:03.004Z · LW(p) · GW(p)
It also leads to the undesirable outcome of provoking even more screwed-up beliefs by propagating them from one screwed-up belief. If you want to convince someone (as opposed to convincing yourself and/or the audience that you are right), you ought to start from the correct beliefs and try to edge your way towards the screwed-up ones. But that doesn't work either, because people usually have a surprisingly good instrumental model of the absence of the dragon in the garage, and see instantly what you are trying to do to their imaginary dragon. Speaking of which, it is easier to convince people to refine their instrumental model of the non-existence of the dragon than to make them stop saying there is a dragon.
People not formally trained usually don't understand the idea of proof by contradiction.
Replies from: CronoDAS↑ comment by CronoDAS · 2012-03-19T10:01:10.577Z · LW(p) · GW(p)
It also leads to the undesirable outcome of provoking even more screwed-up beliefs by propagating them from one screwed-up belief.
Reason as memetic immune disorder?
Replies from: Dmytry↑ comment by wedrifid · 2012-03-19T09:12:03.506Z · LW(p) · GW(p)
I've seen this "countered" by people simply refusing to talk any more, by repeating their original statement, or saying "No, that's not it" followed by something that seems incomprehensible.
Or, if you try to pull this kind of stunt on them too much, some good old ad baculum.
↑ comment by Anne_Briquet · 2012-03-21T13:43:38.199Z · LW(p) · GW(p)
I have sometimes used this one with my ex-boyfriend, who was an extraordinarily bad arguer. I did it for two reasons: to have a productive argument, and to not have a boyfriend complaining that I always won arguments - so not exactly what the post had in mind. He had also admitted that I argued better than he did, so it did not come off as too rude (I still had to use many qualifiers). That being said, I never used it effectively when there were more than two people involved, and it sometimes backfired.
I would not recommend using it in any conversation, and especially not in an online discussion.
comment by dspeyer · 2012-03-19T02:37:36.105Z · LW(p) · GW(p)
These tips seem designed for cases where everyone has read them and everyone wants to reach the truth. An important case, certainly, and what we're trying (probably pretty successfully) to achieve here.
I can't help suspecting that an argument between someone like this and someone arguing to win will either go nowhere or conclude with whatever the arguer-to-win went in thinking. Clearly no good.
Any ideas how to deal with that (rather common) case?
Replies from: TheOtherDave, John_Maxwell_IV↑ comment by TheOtherDave · 2012-03-19T02:44:37.284Z · LW(p) · GW(p)
One option is to spend less time with people who argue to win, and more time with people who argue to reach truth.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-19T03:20:33.457Z · LW(p) · GW(p)
I actually tried pretty hard to teach an intelligent person off the street to argue productively with this post. (Agonized over the wording a fair amount, started with simple techniques and then discussed advanced ones, made no assumptions about prior knowledge, etc.) So, share this prior to your argument?
Clearly, your argument's productivity will be bounded if one participant is totally intransigent. So the best you can do in that case is collect evidence from the intransigent participant and move on.
comment by Dmytry · 2012-03-19T05:26:17.351Z · LW(p) · GW(p)
TBH the #1 rule should be: set a time limit for arguing with individuals, or groups of individuals, who are dogmatically sure of something for which they don't even provide any argument that could conceivably have been that convincing to them. E.g. "why are you so sure exactly one God exists?", "well, there's a book, which I agree doesn't make a whole lot of sense, and it says it was written by God..." Whatever - clearly you aren't going to update your beliefs to '50% sure God exists' when presented with a comparable-quality argument that God doesn't exist, and by induction, no amount of argument can work, therefore there's no use arguing.
Unproductive arguments are usually the genuine result of at least one side being stupid, insane, or not caring. Typically the count is 2, because why the hell is the other person arguing with the stupid or insane? edit: there can be a reason in public arguments, though.
edit: also, this is why steel-manning the other side's arguments doesn't work in practice. Usually, if it could have worked, there wouldn't have been an argument in the first place. In theory, 2 intelligent people come to an intelligent disagreement, and one side can steel-man the other side's argument and then disprove it, and the other person will be enlightened. In practice, virtually all of the time, at least one side is being stupid, and often that includes you, and nothing good is going to come out of your motivated re-interpretation of the other side's argument. edit: actually, scratch that. If you can steel-man the other side's argument in a way they didn't already, that would typically be the result of the other side's lower intelligence or ignorance to begin with. Proving lower and upper bounds in math - that's the steel man that works, but in verbal stuff, not so much.
comment by Dmytry · 2012-03-19T19:49:31.185Z · LW(p) · GW(p)
Actually, I have two tips, which sound unfriendly, but if followed, should minimize the unproductive arguments:
1: Try not to form strong opinions (with high certainty in the opinion) based on shaky arguments (which should only result in a low-certainty opinion). I.e. try not to be overconfident in whatever was conjectured.
2: Try hard not to be wrong.
More than half of the problem with unproductive arguments is that you are wrong. That's because in arguments, often, both sides are wrong (note: you can have a wrong proof that 2*3=6 and still be very wrong. Try not to apply this to the other side, by the way. If they are right, they are right. edit: Or actually, do. If they are factually wrong, your argumentation may still be flawed too. If their argumentation is 'flawed', they may still be factually correct, and you may still be factually wrong).
comment by Eugine_Nier · 2012-03-19T04:11:37.874Z · LW(p) · GW(p)
This is a list of tips for having "productive" arguments. For the purposes of this list, "productive" means improving the accuracy of at least one person's views on some important topic. By this definition, arguments where no one changes their mind are unproductive.
Sometimes the onlookers will change their position. When arguing with someone sufficiently mind-killed about a topic (and yes, there are people like that on Less Wrong), that's the best you can hope for.