DPiepgrass's Shortforms
post by DPiepgrass · 2024-08-19T19:13:30.154Z · LW · GW · 15 comments
comment by DPiepgrass · 2024-09-30T14:48:50.870Z · LW(p) · GW(p)
The Hidden-Motte-And-Bailey fallacy: belief in a Bailey inspires someone to invent a Motte and write an article about it. The opinion piece describes the Motte exclusively with no mention of the Bailey. Others read it and nod along happily because it supports their cherished Bailey, and finally they share it with others in an effort to help promote the Bailey.
Example: a Christian philosopher describes a new argument for the existence of a higher-order-infinity God, one which bears no resemblance to any Abrahamic God and which no one before the 20th century had ever conceived of.
Maybe the Motte is strong, maybe it isn't, but it feels very strong when combined with the Gish Fallacy: the feeling of safety some people (apparently) get by collecting large numbers of claims and arguments, whether or not they routinely toss them out as Gish Gallops at anyone who disagrees. The Gish Fallacy seems to be the opposite of the mathematician's mindset, for mathematicians know that a single flaw can destroy proofs of any length. While the mathematician is satisfied by a single short, succinct proof or disproof, the Gish mindset wishes to read a thousand different descriptions of three dozen arguments, with another thousand pithy rejections of their counterarguments, so they're thoroughly prepared to dismiss the evil arguments of the enemies of truth―or, if the enemy made a good point, there are still 999 articles supporting their view, and more where that came from!
Example: my dad
↑ comment by Seth Herd · 2024-09-30T20:08:02.977Z · LW(p) · GW(p)
Excellent point.
I'd think the Gish mindset isn't limited to people like your dad. I'd think that rationalists are vulnerable to it as well in any complex domain. It's not like we're doing literal Bayesian updates or closed-form proofs for our actually important beliefs, like how hard alignment is or what methods are promising. In those areas no argument is totally closed, so weighing the preponderance of decent arguments is about all we can do. So I'd say we're all vulnerable to the Gish Fallacy to an important degree, and therefore to the implicit Motte-And-Bailey fallacy.
↑ comment by DPiepgrass · 2024-10-04T19:54:42.502Z · LW(p) · GW(p)
Well, yeah, it bothers me that the "Bayesian" part of rationalism doesn't seem very Bayesian―otherwise we'd be having a lot of discussions about where priors come from, how best to do the necessary mental arithmetic, and how to go about counting evidence and dealing with ambiguous counts (if my friends Alice and Bob both tell me X, it could be two pieces of evidence for X or just one, depending on what generated the claims; how should I count evidence by default, and are there things I should be doing to find the underlying evidence?).
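As a minimal sketch of the counting question in odds form (my own illustration; A and B just stand for "Alice tells me X" and "Bob tells me X"):

```latex
% Odds-form update on two reports, A ("Alice tells me X") and B ("Bob tells me X").
\[
\frac{P(X \mid A, B)}{P(\neg X \mid A, B)}
  = \underbrace{\frac{P(X)}{P(\neg X)}}_{\text{prior odds}}
    \cdot \underbrace{\frac{P(A \mid X)}{P(A \mid \neg X)}}_{\text{Alice's report}}
    \cdot \underbrace{\frac{P(B \mid A, X)}{P(B \mid A, \neg X)}}_{\text{Bob's report, given Alice's}}
\]
```

If Bob found out on his own, the last factor reduces to a second full likelihood ratio P(B|X)/P(B|¬X), i.e. two pieces of evidence; if Bob is just repeating Alice, then P(B|A,X) ≈ P(B|A,¬X), the factor is roughly 1, and Alice's report screens Bob's off, i.e. one piece.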
So―vulnerable in the current culture, but rationalists should strive to be the opposite of the "gishy" dark-epistemic people I have on my mind. Having many reasons to think X isn't necessarily a sin, but dark-epistemic people gather many reasons and commit many sins, which are a good guide to what not to do.
↑ comment by quila · 2024-09-30T15:34:36.804Z · LW(p) · GW(p)
Agreed that hidden-motte-and-baileys are a thing. They may also be caused by pressure not to express the actual belief (in which case, idk if I'd call it a fallacy / mistake of reasoning).
I'm not seeing how they synergise with the 'gish fallacy' though.
mathematicians know that a single flaw can destroy proofs of any length
Yes, but the analogy would be having multiple disjunctive proof-attempts that lead to the same result, which you can actually do validly (including with non-math beliefs). (Of course, the case you describe is not a valid case of this.)
↑ comment by DPiepgrass · 2024-10-04T19:33:18.299Z · LW(p) · GW(p)
TBH, a central object of interest to me is people using Dark Epistemics. People with a bad case of DE typically have "gishiness" as a central characteristic, and use all kinds of fallacies, of which motte-and-bailey (hidden or not) is just one. I describe the two together just because I haven't seen LW articles on them before. If I were specifically naming major DE syndromes, I might propose the "backstop of conspiracy" (a sense that whatever the evidence at hand doesn't explain is probably still explained by some kind of conspiracy) and projection (a tendency to loudly complain that one's political enemies have whatever negative characteristics you yourself, or your political heroes, are currently exhibiting). Such things seem very effective at protecting a person's beliefs from challenge. I think there's also a social element ("my friends all believe the same thing"), but this is kept well-hidden.
EDIT: other telltale signs include refusing to acknowledge that one got anything wrong or made any mistake, no matter how small; refusing to acknowledge that the 'opponent' is right about anything, no matter how minor; an allergy to detail (refusing to look at the details of any subtopic); and shifting the playing field repeatedly (changing the topic when one appears to be losing the argument).
comment by DPiepgrass · 2024-08-19T19:13:30.341Z · LW(p) · GW(p)
I have a feature request for LessWrong. It's the same as my feature request for every site: you should be able to search within a single user's posts and comments (i.e. visit a user page and begin a search from there). This should be easy to do technically; you just have to add the author's name as one of the words in the search index. And in terms of UI, I think you could just link to https://www.lesswrong.com/search?query=@author:username.
Preferably, do it in such a way that a normal post cannot spoof this: e.g. if "foo authored this post" is placed in the index as @author:foo, then if a normal post contains the literal text @author:foo, perhaps the index only ends up with @author (or author) and foo, while the full string is not in the index (or, if it is in the index, it can only be found by searching with quotes a la Google: "@author:foo").
P.S. we really should be able to search by date range, too. I'm looking for something posted by Yudkowsky at some point before 2010... and surely this is not an unusual thing to want to search for.
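To illustrate what I mean (a rough sketch only; I don't know what LW's actual search stack or schema looks like, so the record shape and function names here are invented):

```typescript
// Hypothetical record shape; the real LW schema and search pipeline surely differ.
interface PostRecord {
  title: string;
  body: string;
  authorSlug: string; // e.g. "some_username"
  postedAt: Date;
}

// Build the text that gets tokenized for the full-text index. Appending one
// synthetic token like "@author:foo" means the ordinary keyword query
// ?query=@author:foo matches posts *by* foo, while a tokenizer that splits on
// "@" and ":" in ordinary body text would keep a post that merely mentions the
// string from producing the same token.
function toIndexedText(post: PostRecord): string {
  return `${post.title}\n${post.body}\n@author:${post.authorSlug}`;
}

// The date-range wish from the P.S., applied as a post-filter on search hits.
function inDateRange(post: PostRecord, before?: Date, after?: Date): boolean {
  const t = post.postedAt.getTime();
  if (before && t >= before.getTime()) return false;
  if (after && t <= after.getTime()) return false;
  return true;
}
```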
↑ comment by MondSemmel · 2024-08-20T14:03:19.215Z · LW(p) · GW(p)
Tip: To increase the chance that the LW team sees feature requests, it helps to link to them on Intercom.
↑ comment by DPiepgrass · 2024-08-22T02:23:15.486Z · LW(p) · GW(p)
Do you mean the "send us a message" popup at bottom-right?
↑ comment by habryka (habryka4) · 2024-08-22T03:17:21.507Z · LW(p) · GW(p)
Yep, that's what we usually mean by Intercom.
comment by DPiepgrass · 2024-08-19T22:21:30.911Z · LW(p) · GW(p)
I wrote a long comment and someone took the "strong downvote, no disagreement, no reply" tack again. Poppy-cutters[1] seem way more common at LessWrong than I could've predicted, and I'd like to see statistics on how common they are and on whether my experience here is normal or not.
[1] Meaning users who just seem to want to make others feel bad about their opinion, especially unconstructively. Interesting statistics might include the distributions of users who strong-downvote a positive score into a negative one (as compared to their other voting behaviors), who downvote more than they disagree-vote, who downvote more than they upvote, who strong-downvote more than they downvote, and/or who strong-downvote more often than they reply. Also: is there an 80/20 rule like "20% of the people do 80% of the downvotes"? Is the distribution different for "power users" than for "casual users"?
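As a sketch of how one such number could be computed from a hypothetical vote log (the Vote shape and field names below are invented; the real LW vote data surely looks different):

```typescript
// Hypothetical vote-log entry; negative power values represent downvotes.
interface Vote {
  userId: string;
  power: number; // e.g. -1, -2, -10 for downvotes; positive for upvotes
}

// What share of total downvote power comes from the top 20% of downvoters?
// A result near 0.8 would be a literal 80/20 split.
function downvoteConcentration(votes: Vote[]): number {
  const perUser = new Map<string, number>();
  for (const v of votes) {
    if (v.power < 0) {
      perUser.set(v.userId, (perUser.get(v.userId) ?? 0) + Math.abs(v.power));
    }
  }
  const totals = Array.from(perUser.values()).sort((a, b) => b - a);
  const total = totals.reduce((sum, x) => sum + x, 0);
  if (total === 0) return 0;
  const topN = Math.max(1, Math.ceil(totals.length * 0.2));
  const topSum = totals.slice(0, topN).reduce((sum, x) => sum + x, 0);
  return topSum / total;
}
```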
↑ comment by habryka (habryka4) · 2024-08-19T22:26:05.882Z · LW(p) · GW(p)
Huh, just to check: this [LW(p) · GW(p)] seems like the comment of yours that you are probably referring to, and I didn't see any strong downvotes. Before I voted on it, it was at -1 with 2 total votes, which very likely means someone with a weak-upvote strength of 2 small-downvoted it. My guess is that's just relatively random voting noise; people small-upvote and small-downvote lots of stuff without having strong feelings about it.
It does produce harsher experiences when the first vote is a downvote, and I've considered over the years doing a Reddit-like thing where you hide the total karma of a new comment for a few hours to reduce those effects, but I overall decided against it.
↑ comment by DPiepgrass · 2024-08-19T22:31:25.128Z · LW(p) · GW(p)
Shoot, I forgot that high-karma users have a "small-strength" of 2, so I can't tell if it was strong-downvoted or not. I mistakenly assumed it was a newish user. Edit: P.S. I might feel better if karma was hidden on my own new comments, whether or not they are hidden on others, though it would then be harder to guess at the vote distribution, making the information even more useless than usual if it survives the hiding-period. Still seems like a net win for the emotional benefits.
↑ comment by Martin Randall (martin-randall) · 2024-08-20T00:54:43.968Z · LW(p) · GW(p)
Disagree votes are meant to be a way to signal disagreement without signaling that the comment was lower-quality or lower-signal.
I don't think the disagreement of a single LW reader is something to feel sad about. I would feel sad if nobody ever disagreed with my comments.
↑ comment by DPiepgrass · 2024-08-20T02:35:12.161Z · LW(p) · GW(p)
Yikes! Apparently I said "strong disagree" when I meant "strong downvote". Fixed. Sorry. Disagree votes generally don't bother me either, they just make me curious what the disagreer disagrees about.