Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists
post by Zack_M_Davis · 2019-09-24T04:12:07.560Z · LW · GW · 40 comments
Followup to: What Evidence Filtered Evidence? [LW · GW]
In "What Evidence Filtered Evidence?" [LW · GW], we are asked to consider a scenario involving a coin that is either biased to land Heads 2/3rds of the time, or Tails 2/3rds of the time. Observing Heads is 1 bit of evidence for the coin being Heads-biased (because the Heads-biased coin lands Heads with probability 2/3, the Tails-biased coin does so with probability 1/3, the likelihood ratio of these is , and ), and analogously and respectively for Tails.
If such a coin is flipped ten times by someone who doesn't make literally false statements [LW · GW], who then reports that the 4th, 6th, and 9th flips came up Heads, then the update to our beliefs about the coin depends on what algorithm the not-lying[1] reporter used to decide to report those flips in particular. If they always report the 4th, 6th, and 9th flips independently of the flip outcomes—if there's no evidential entanglement [LW · GW] between the flip outcomes and the choice of which flips get reported—then reported flip-outcomes can be treated the same as flips you observed yourself: three Headses is 3 * 1 = 3 bits of evidence in favor of the hypothesis that the coin is Heads-biased. (So if we were initially 50:50 on the question of which way the coin is biased, our posterior odds after collecting 3 bits of evidence for a Heads-biased coin would be 2³:1 = 8:1, or a probability of 8/(1 + 8) ≈ 0.89 that the coin is Heads-biased.)
On the other hand, if the reporter mentions only and exactly the flips that came out Heads, then we can infer that the other 7 flips came out Tails (if they didn't, the reporter would have mentioned them), giving us posterior odds of 2³:2⁷ = 1:2⁴ = 1:16, or a probability of around 0.06 that the coin is Heads-biased.
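For concreteness, here is a minimal Python sketch of both updates (illustrative only; the function name is arbitrary), starting from the 50:50 prior:

```python
def posterior_heads_biased(heads_known, tails_known):
    # Likelihood ratio for "Heads-biased (2/3)" vs. "Tails-biased (1/3)":
    # each known Heads multiplies the odds by (2/3)/(1/3) = 2,
    # each known Tails multiplies them by (1/3)/(2/3) = 1/2.
    odds = 2.0 ** heads_known * 0.5 ** tails_known  # prior odds are 1:1
    return odds / (1 + odds)

# Flips reported independently of their outcomes: only the 3 Headses inform us.
print(posterior_heads_biased(3, 0))  # 8/9 ≈ 0.89

# "Report all and only the Heads": the 7 unmentioned flips must have been Tails.
print(posterior_heads_biased(3, 7))  # 1/17 ≈ 0.06
```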
So far, so standard. (You did read the Sequences, right??) What I'd like to emphasize about this scenario today, however, is that while a Bayesian reasoner who knows the non-lying reporter's algorithm of what flips to report will never be misled by the selective reporting of flips, a Bayesian with mistaken beliefs about the reporter's decision algorithm can be misled quite badly: compare the 0.89 and 0.06 probabilities we just derived given the same reported outcomes, but different assumptions about the reporting algorithm.
If the coin gets flipped a sufficiently large number of times, a reporter whom you trust to be impartial (but isn't), can make you believe anything she wants without ever telling a single lie, just with appropriate selective reporting. Imagine a very biased coin that comes up Heads 99% of the time. If it gets flipped ten thousand times, 100 of those flips will be Tails (in expectation), giving a selective reporter plenty of examples to point to if she wants to convince you that the coin is extremely Tails-biased.
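A quick simulation (an illustrative Python sketch, not part of the original argument) shows how much raw material such a reporter has to work with:

```python
import random

random.seed(0)
# Ten thousand flips of a coin that lands Heads 99% of the time.
flips = ["Heads" if random.random() < 0.99 else "Tails" for _ in range(10_000)]

# A selective (but never lying) reporter mentions only the Tails flips.
tails_flips = [i for i, outcome in enumerate(flips) if outcome == "Tails"]
print(len(tails_flips))   # ≈ 100 perfectly true "the coin came up Tails" reports
print(tails_flips[:5])    # the first few flip numbers she might cite
```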
Toy models about biased coins are instructive for constructing examples with explicitly calculable probabilities, but the same structure applies to any real-world situation where you're receiving evidence from other agents, and you have uncertainty about what algorithm is being used to determine what reports get to you. Reality is like the coin's bias; evidence and arguments are like the outcome of a particular flip. Wrong theories will still have some valid arguments and evidence supporting them [LW · GW] (as even a very Heads-biased coin will come up Tails sometimes), but theories that are less wrong will have more.
If selective reporting is mostly due to the idiosyncratic bad intent of rare malicious actors, then you might hope for safety in (the law of large) numbers: if Helga in particular is systematically more likely to report Headses than Tailses that she sees, then her flip reports will diverge from everyone else's, and you can take that into account when reading Helga's reports. On the other hand, if selective reporting is mostly due to systemic structural factors that result in correlated selective reporting even among well-intentioned people who are being honest as best they know how,[2] then you might have a more serious problem.
"A Fable of Science and Politics" [LW · GW] depicts a fictional underground Society polarized between two partisan factions, the Blues and the Greens. "[T]here is a 'Blue' and a 'Green' position on almost every contemporary issue of political or cultural importance." If human brains consistently understood the is/ought distinction, then political or cultural alignment with the Blue or Green agenda wouldn't distort people's beliefs about reality. Unfortunately ... humans. (I'm not even going to finish the sentence.)
Reality itself isn't on anyone's side, but any particular fact, argument, sign, or portent [LW · GW] might just so happen to be more easily construed as "supporting" the Blues or the Greens. The Blues want stronger marriage laws; the Greens want no-fault divorce. An evolutionary psychologist [LW · GW] investigating effects of kin-recognition mechanisms on child abuse by stepparents might aspire to scientific objectivity, but being objective and staying objective is difficult when you're embedded in an intelligent social web [LW · GW] in which your work is going to be predictably championed by Blues and reviled by Greens.
Let's make another toy model to try to understand the resulting distortions on the Undergrounders' collective epistemology. Suppose Reality is a coin—no, not a coin, a three-sided die,[3] with faces colored blue, green, and gray. One-third of the time it comes up blue (representing a fact that is more easily construed as supporting the Blue narrative), one-third of the time it comes up green (representing a fact that is more easily construed as supporting the Green narrative), and one-third of the time it comes up gray (representing a fact that not even the worst ideologues know how to spin as "supporting" their side).
Suppose each faction has social-punishment mechanisms enforcing consensus internally. Without loss of generality, take the Greens (with the understanding that everything that follows goes just the same if you swap "Green" for "Blue" and vice versa).[4] People observe rolls of the die of Reality, and can freely choose what rolls to report—except a resident of a Green city who reports more than 1 blue roll for every 3 green rolls is assumed to be a secret Blue Bad Guy, and faces increasing social punishment as their ratio of reported green to blue rolls falls below 3:1. (Reporting gray rolls is always safe.)
The punishment is typically informal: there's no official censorship from Green-controlled local governments, just a visible incentive gradient made out of social-media pile-ons, denied promotions, lost friends and mating opportunities, increased risk of being involuntarily committed to psychiatric prison,[5] &c. Even people who privately agree with dissident speech might participate in punishing it, the better to evade punishment themselves.
This scenario presents a problem for people who live in Green cities who want to make and share accurate models of reality. It's impossible to report every die roll (the only 1:1 scale map of the territory, is the territory itself), but it seems clear that the most generally useful models—the ones you would expect arbitrary AIs to come up with—aren't going to be sensitive to which facts are "blue" or "green". The reports of aspiring epistemic rationalists who are just trying to make sense of the world will end up being about one-third blue, one-third green, and one-third gray, matching the distribution of the Reality die.
From the perspective of ordinary nice smart Green citizens who have not been trained in the Way, these reports look unthinkably Blue. Aspiring epistemic rationalists who are actually paying attention can easily distinguish Blue partisans from actual truthseekers,[6] but the social-punishment machinery can't process more than five words at a time [LW · GW]. The social consequences of being an actual Blue Bad Guy, or just an honest nerd who doesn't know when to keep her stupid trap shut, are the same.
In this scenario,[7] public opinion within a subculture or community in a Green area is constrained by the 3:1 (green:blue) "Overton ratio." In particular, under these conditions, it's impossible to have a rationalist community—at least the most naïve conception of such. If your marketing literature says, "Speak the truth, even if your voice trembles," but all the savvy high-status people's actual reporting algorithm is, "Speak the truth, except when that would cause the local social-punishment machinery to mark me as a Blue Bad Guy and hurt me and any people or institutions I'm associated with—in which case, tell the most convenient lie-of-omission", then smart sincere idealists who have internalized your marketing literature as a moral ideal and trust the community to implement that ideal, are going to be misled by the community's stated beliefs—and confused at some of the pushback they get when submitting reports with a 1:1:1 blue:green:gray ratio.
Well, misled to some extent—maybe not much! In the absence of an Oracle AI (or a competing rationalist community in Blue territory) to compare notes with, it's not clear how one could get a better map than trusting what the "green rationalists" say. With a few more made-up modeling assumptions, we can quantify the distortion introduced by the Overton-ratio constraint, which will hopefully help develop an intuition for how large of a problem this sort of thing might be in real life.
Imagine that Society needs to make a decision about an Issue (like a question about divorce law or merchant taxes). Suppose that the facts relevant to making optimal decisions about an Issue are represented by nine rolls of the Reality die, and that the quality (utility) of Society's decision is proportional to the (base-two logarithm) entropy of the distribution of what facts get heard and discussed.[8]
The maximum achievable decision quality is log₂(9) ≈ 3.17.
On average, Green partisans will find 3 "green" facts[9] and 3 "gray" facts to report, and mercilessly stonewall [LW · GW] anyone who tries to report any "blue" facts, for a decision quality of log₂(6) ≈ 2.58.
On average, the Overton-constrained rationalists will report the same 3 "green" and 3 "gray" facts, but something interesting happens with "blue" facts: each individual can only afford to report one "blue" fact without blowing their Overton budget—but it doesn't have to be the same fact for each person. Reports of all 3 (on average) blue rolls get to enter the public discussion, but get mentioned (cited, retweeted, &c.) 1/3 as often as green or gray rolls, in accordance with the Overton ratio. So it turns out that the constrained rationalists end up with a decision quality of ≈ 3.03,[10] significantly better than the Green partisans—but still falling short of the theoretical ideal where all the relevant facts get their due attention.
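The three decision-quality figures can be checked with a few lines of Python (an illustrative sketch; the frequency lists follow footnote 10):

```python
import math

def decision_quality(report_frequencies):
    """Base-two entropy of the normalized distribution of reports."""
    total = sum(report_frequencies)
    return -sum(f / total * math.log2(f / total) for f in report_frequencies)

print(decision_quality([1] * 9))            # ideal: log2(9) ≈ 3.17
print(decision_quality([1] * 6))            # Green partisans: log2(6) ≈ 2.58
print(decision_quality([3] * 6 + [1] * 3))  # constrained rationalists: ≈ 3.03
```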
If it's just not pragmatic to expect people to defy their incentives [LW · GW], is this the best we can do? Accept a somewhat distorted state of discourse, forever?
At least one partial remedy seems apparent. Recall from our original coin-flipping example that a Bayesian who knows what the filtering process looks like, can take it into account and make the correct update. If you're filtering your evidence to avoid social punishment, but it's possible to clue in your fellow rationalists to your filtering algorithm without triggering the social-punishment machinery—you mustn't assume that everyone already knows [LW · GW]!—that's potentially a big win. In other words, blatant cherry-picking is the best kind! [LW · GW]
I don't quite want to use the word honest here. ↩︎
And it turns out that knowing how to be honest is much more work than one might initially think. You have read the Sequences, right?! ↩︎
For lack of an appropriate Platonic solid in three-dimensional space, maybe imagine tossing a triangle in two-dimensional space?? ↩︎
As an author, I'm facing some conflicting desiderata in my color choices here. I want to say "Blues and Greens" in that order for consistency with "A Fable of Science and Politics" (and other classics from the Sequences [LW · GW]). Then when making an arbitrary choice to talk in terms of one of the factions in order to avoid cluttering the exposition, you might have expected me to say "Without loss of generality, take the Blues," because the first item in a sequence ("Blues" in "Blues and Greens") is more of a Schelling point [LW · GW] than the second, or last, item. But I don't want to take the Blues, because that color choice has other associations that I'm trying to avoid right now: if I said "take the Blues", I fear many readers would assume that I'm trying to directly push a partisan point about soft censorship and preference-falsification social pressures in liberal/left-leaning subcultures in the contemporary United States. To be fair, it's true that soft censorship and preference-falsification social pressures in liberal/left-leaning subcultures in the contemporary United States are, historically, what inspired me, personally, to write this post. It's okay for you to notice that! But I'm trying to talk about the general mechanisms that generate this class of distortions on a Society's collective epistemology, independently of which faction or which ideology happens to be "on top" in a particular place and time [LW · GW]. If I'm doing my job right, then my analogue in a "nearby" Everett branch [LW · GW] whose local subculture was as "right-polarized" as my Berkeley environment is "left-polarized", would have written a post making the same arguments. ↩︎
Okay, they market themselves as psychiatric "hospitals", but let's not be confused by misleading labels [LW · GW]. ↩︎
Or rather, aspiring epistemic rationalists can do a decent job of assessing the extent to which someone is exhibiting truth-tracking behavior, or Blue-partisan behavior. Obviously, people who are consciously trying to seek truth, are not necessarily going to succeed at overcoming bias, and attempts to correct for the "pro-Green" distortionary forces being discussed in this parable could easily veer into "pro-Blue" over-correction. ↩︎
Please be appropriately skeptical about the real-world relevance of my made-up modeling assumptions! If it turned out that my choice of assumptions were (subconsciously) selected for the resulting conclusions about how bad evidence-filtering is, that would be really bad for the same reason that I'm claiming that evidence-filtering is really bad! ↩︎
The entropy of a discrete probability distribution is maximized by the uniform distribution, in which all outcomes receive equal probability-mass. I only chose these "exactly nine equally-relevant facts/rolls" and "entropic utility" assumptions to make the arithmetic easy on me; a more realistic model might admit arbitrarily many facts into discussion of the Issue, but posit a distribution of facts/rolls with diminishing marginal relevance to Society's decision quality. ↩︎
The scare quotes around the adjective "'green'" (&c.) when applied to the word "fact" (as opposed to a die roll outcome representing a fact in our toy model) are significant! The facts aren't actually on anyone's side! We're trying to model the distortions that arise from stupid humans thinking that the facts are on someone's side! This is sufficiently important—and difficult to remember—that I should probably repeat it until it becomes obnoxious! ↩︎
You have three green slots, three gray slots, and three blue slots. You put three counters each on each of the green and gray slots, and one counter each on each of the blue slots. The frequencies of counters per slot are [3, 3, 3, 3, 3, 3, 1, 1, 1]. The total number of counters you put down is 3*6 + 3 = 18 + 3 = 21. To turn the frequencies into a probability distribution, you divide everything by 21, to get [1/7, 1/7, 1/7, 1/7, 1/7, 1/7, 1/21, 1/21, 1/21]. Then the entropy is −(6 · (1/7) · log₂(1/7) + 3 · (1/21) · log₂(1/21)), which simplifies to (6/7)·log₂(7) + (1/7)·log₂(21) ≈ 3.03. ↩︎
40 comments
Comments sorted by top scores.
comment by Raemon · 2019-09-24T20:25:16.308Z · LW(p) · GW(p)
Agree with others on this being a well-put together post, both clearly tying together several related concepts and pushing the surrounding conversation forward. I like 'blatant cherrypicking is the best kind' as an encapsulation of the concept.
Some high level thoughts here:
- It's better to share cherry-picking-algorithm at a high enough level that allows people to predict future things you might suddenly not be able to talk about in a few years. (i.e. I can list the things that I don't currently talk about, but in a year a new issue might get politicized, that I didn't think to spell out in advance. And sharing "I don't talk about X" right as X is getting politicized might be particularly socially hazardous. It's also just costly, in time, to periodically update your taboos, and costly in other people's time to constantly be checking up on it)
- Ideally, if it's only OTHER people we're worried about social harm from (i.e. non-aspiring-epistemic-rationalists), we still get to talk about the thing to build a fully integrated worldmodel. One property that a Citadel of Truth [LW(p) · GW(p)] should have is actually keeping things private from the outside world. (This is a solvable logistical problem, although you do have to actually solve it. It might be good for LW to enable posts to be hidden from non-logged-in users, perhaps requiring some karma threshold to see taboo posts)
- The hardest of hard modes is "local politics", where it's not just that I'm worried about nebulous "outsiders" hurting me (or friends feeling pressure to disown me because they in turn face pressure from outsiders). Instead, the issue is politics inside the citadel. It seems like a quite desirable property to be able to talk freely about which local orgs and people deserve money and prestige – but I don't currently know of robust game mechanics that will actually, reliably enable this in any environment where I don't personally know and trust each person.
- (Having multiple "inner citadels" of trust is sort of the de-facto way this is done currently, in my experience. Having clearer signposting on how to get trustworthy might be a good improvement. Notably, proclaiming "I only care about truth, not politics" is not sufficient for me to trust someone in this domain.)
Some concrete "cherry-picking-algorithm-sharings"
I'm happy, at this present moment, to note:
I generally avoid talking online about mainstream national political issues (gender and social justice issues in particular) unless they cross a particular threshold for "relevant to EA concerns", or "personally important to me".
(For example, climate change is politicized, but it's pretty important to fit it into my overall catastrophic risk ontology, and worth paying the cost for. I don't talk about it that often, partly because it still has a slightly-disproportionate cost*, and partly because it just turns out climate change isn't that important after having thought about it in an EA context)
It currently so happens that gender/social-justice issues are both "national politics" and also "local politics", which makes them additionally hard to talk openly about. But this is a property of the particular year, and different issues might be more sensitive in a couple years. If you want to know which things I can't say in a couple years, and I haven't written another such comment in the meanwhile, you may need to do interpretive labor and careful observation. Which sucks, and I'm sorry.
*the climate change cost isn't just in "people might ostracize me" (I'm not actually worried about that), but "random people are less likely to be thinking clearly, and the conversation quality is often going to be worse".
There's an interesting thing where abortion becomes a much more interesting topic to me now that I have more nuanced views of moral uncertainty and trade. But this is a topic that I think makes most sense to discuss "within the citadel" (where "within the citadel" means "I'm reasonably confident randos from the culture war won't show up, and/or start making life difficult for me or causes I care about". Especially because most of the value isn't related to abortion itself, but integrating a worldview that includes abortion as well as digital uploads and low-grade simulations)
Local Politics
Local politics (i.e. facts pertinent to "who gets money/prestige in the rationalsphere") has the double-edged property of "being much more important to talk about" and "being harder and anti-inductive to talk about". This post has provided a good reminder for me to figure out some thoughts here and write them up. I think they're beyond scope for this comment though.
Final note:
None of these are things that I don't talk about period, they are just things that are disproportionately costly to talk about, so I don't talk about them freely without putting some thought into it and making sure it's worth it.
Replies from: Raemon
↑ comment by Raemon · 2021-01-19T04:54:34.803Z · LW(p) · GW(p)
Update: in the past year, since writing this comment, I periodically thought 'man, I should probably post some somewhat controversial opinions, to get into the habit of actually having the backbone to do that sometimes.'
And... well, I haven't gotten around to it. And I think there at least is a little bit of "Ray is just being cowardly here", but, also... it just doesn't feel super worth it. It's effortful to write up stuff in the first place, and I have a ton of stuff I want to write that isn't controversial which I think is really important, and meanwhile I don't think on the object level that any given controversial opinion was really that valuable to express except for building up my social resilience backbone.
I think "build up social resilience" is actually pretty important, enough that I lean towards "yes actually feel bad about this even though any given controversial opinion doesn't feel very important to express". But, whelp, here we are.
comment by Ben Pace (Benito) · 2019-09-27T00:11:04.186Z · LW(p) · GW(p)
Curated. I've tried to think about similar topics - silencing of different kinds of information can also lead to information cascades [LW · GW], for example. This was a simple toy model that had properties I'd never put into an explicit model before - if signalling always looks like at least a 3:1 ratio of args in your side's favour, then random chance is gonna mean some people (even if 3:1 is the ground truth) will have lopsided info and have to lie, and that's a massive corruption of those people's epistemologies.
So far, so standard. (You did read the Sequences, right??)
Yes, indeed I did. A bunch of the beginning was nice to see again, it's good for people to reread that stuff, and for any newer users who haven't done it for the first time yet.
I wasn't so much enjoying one political footnote which seemed mostly off-topic or something, until the line at the end saying
If I'm doing my job right, then my analogue in a "nearby" Everett branch [LW · GW] whose local subculture was as "right-polarized" as my Berkeley environment is "left-polarized", would have written a post making the same arguments.
which I really like as a way of visualising needling the truth between political biases in any environment.
The post is very readable and clearly explained, plus lots of links for context, which is always great.
I mostly feel confused about quantifying how biased the decisions are. If you have 9 honest rolls then that's log_2 of 9 = 3.2 bits. But if you roll it 9 times and hide the 3 rolls in a certain direction, then you don't have log_2 of 6 = 2.6 bits. That would be true if you had 6 honest rolls (looking like 2:2:2) but 3:3:0 surely is not the same amount of evidence. I'm generally not sure how best to understand the effects of biases of this sort, and want to think about that more.
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2020-12-02T19:03:52.353Z · LW(p) · GW(p)
But if you roll it 9 times and hide the 3 rolls in a certain direction, then you don't have log_2 of 6 = 2.6 bits. That would be true if you had 6 honest rolls (looking like 2:2:2) but 3:3:0 surely is not the same amount of evidence. I'm generally not sure how best to understand the effects of biases of this sort, and want to think about that more.
The general formula is log₂(P(obs | hypothesis₁) / P(obs | hypothesis₂)), where obs is the observation that you see. You need to calculate P(obs | hypothesis) based on the problem setup; if you are given the ground truth of how the 9 rolls happen as well as the algorithm by which the 6 dice rolls to reveal are chosen, you can compute P(obs | hypothesis) for each obs by brute force simulation of all possible worlds.
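Concretely, the brute-force approach might look like the following sketch (using the coin scenario from the post rather than the nine-roll die, since the coin's reporting algorithm is fully specified; names are illustrative):

```python
from itertools import product

def p_obs(p_heads, reported_heads, n_flips=10):
    """P(reporter mentions exactly these flips as Heads | coin bias), under the
    reporting algorithm "mention all and only the flips that came up Heads"."""
    total = 0.0
    for flips in product([True, False], repeat=n_flips):   # True = Heads
        if {i for i, h in enumerate(flips) if h} != set(reported_heads):
            continue   # this possible world wouldn't have produced the observation
        p_world = 1.0
        for h in flips:
            p_world *= p_heads if h else 1 - p_heads
        total += p_world
    return total

obs = [3, 5, 8]   # the 4th, 6th, and 9th flips, zero-indexed
print(p_obs(2/3, obs) / p_obs(1/3, obs))   # likelihood ratio = 1/16
```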
comment by DaystarEld · 2019-09-25T04:12:50.840Z · LW(p) · GW(p)
Great post. Something that's worth noting is that this is true *even if reality is not 1:1:1.*
Let's say the joke "reality has a well known Green bias" is actually true, and add an extra Green side to that impossible die and make it a true four sided pyramid, easy to picture and shake in your hands. It's now 2:1:1 Green:Blue:Grey
Reality says "Roll 10d4," and you let all the little pointy shapes in your palms spill out to reveal... *rolls actual dice* ~4 Green, ~4 Grey, and ~2 Blue.
"Oh snap," you, a Green, might say. "Evidence that my philosophy/political views are correct!... Er, I mean, more correct than the local alternative competing one! And really, that's what matters, isn't it? No one's PERFECT, but at least we can feel justified in fighting the good fight!"
You still have two glaring problems.
1) You're still, in the majority of cases, wrong, on this particular subject/roll. This is an unpopular thing to say among Greens. "We're actually wrong on most things, but we're more right than the opposition!" is only ever going to be a rallying cry for those who intrinsically dislike tribalism and grok lesser-of-two-evils and advocate for slow and steady progress, not those trying to Win The Battle for the Soul/Future/Rights of the Whatever.
2) Your opponents are actually still right on half the things they're fighting for compared to the things you're right about. Even hinting at this will make you EVEN MORE UNPOPULAR among your fellow Greens. Especially if any of the things they're right about are particularly hot-button issues at the point in spacetime that you find yourself in.
So, being a Green (or a Grey who knows that Greys have no chance and so hesitantly puts on a Green uniform) who now wants to actually go into politics and push for positive change, you will by necessity feel a lot more inclined to focus on those 4 things that you have strong evidence your side is right about, ignore those 2 things that your opponent is right about (unless you're behind closed doors with similarly reasonable Greens, Blues, or Greys), and maybe once in awhile try to push at a couple of the 4 things that both sides are wrong about, assuming you have any political capital to spare (you probably don't, hell you probably don't even have enough to focus on all of the 4 dice-backed Green topics).
Well guess what: now to an "objective observer," you, an intelligent and competent politician, look like a biased ideologue who's blinded by his partisanship and just appealing to his rank-and-file.
Change this from 10 separate issues to 10 pieces of evidence that make up the nuance of a single issue, and a similar thing happens as nuance gets left by the wayside. Now imagine that not everyone is actually researching all those dice rolls and instead just randomly looking at the subsection of them that happen across their newsfeed, or which appeal to their particular interests, or are the least complicated to take in.
It's disheartening, but it can help to note that those who are trying to actually make changes in any social sphere are not working under the same rules/toward the same goals as those trying to just seek truth. This is not a defense of lying, by omission or otherwise, but it is a defense of people who have good reasons to do or say things that we, observing them from the outside, may think are evidence of them being mindkilled.
Replies from: RRand
↑ comment by RRand · 2019-10-01T23:47:44.974Z · LW(p) · GW(p)
It's worth pointing out that grey in the post refers to
a fact that not even the worst ideologues know how to spin as "supporting" their side.
I'm not sure what good grey rolls do in the context of this post (especially given the proviso that "there is a 'Blue' and a 'Green' position on almost every contemporary issue of political or cultural importance").
But grey rolls are, of course, important: Grey facts and grey issues are uncorrupted by the Great War, and hence are that much more accessible/tractable. The more grey facts there are, the better rationalists we can be.
With respect to your comment, the presence of Grey, Yellow, Orange and Purple Teams would actually help things substantially -- if I report facts from the six teams equally, it's harder to label me as a partisan. (And it's harder for any team to enforce partisanship.) Even if Blue-supporting facts truly are taboo (Green is unlikely to have more than one archnemesis), that's much less limiting when only a sixth of facts are Blue. It's a nice advantage of multipolar politics.
Replies from: matthias-goergens
↑ comment by Matthias Görgens (matthias-goergens) · 2019-10-09T20:21:53.423Z · LW(p) · GW(p)
What seems to happen in practice is that everything non-Green gets lumped together as Blue. Even if the other people do not see themselves as being in one faction.
And then you have the additional complications of out-group vs far-group.
comment by vedrfolnir · 2019-09-29T03:19:00.406Z · LW(p) · GW(p)
(This post is important enough that I'm breaking my commitment not to post until a certain time in the future.)
The model here strikes me as the correct *sort* of model, but deserving of substantial complication. Two complications in particular seem clear and relevant to me.
First, will the smart sincere idealists be simply *misled?* Given that this hypothetical imperfect rationalist space exists within Green territory, deviations from the Overton ratio will be punished by Greens *both inside and outside* the rationalist space; as such, it could (entirely unintentionally, at least at first) serve to *reinforce* Green partisan hegemony, especially if there's a large imbalance between the abilities of Greendom and Bluedom to offer *patronage*.
We already know from history that regimes may become so... self-serving and detached from reality, as one could put it... that they'll feel the need to actively select against smart, sincere idealists, or any permutation thereof. Loyalty to anything but the regime may be seen as an inefficiency and optimized away.
As a result, it could be useful for Green partisans to keep such spaces around, albeit low-prestige and generally reviled. Partisans also have an interest in identifying the sincere and the idealistic, but for precisely the opposite reasons. (Cf. the Hundred Flowers Campaign.)
Second, the neat division of truths into Green, Blue, and Gray rings unconvincing to me. Consider the Greens and Blues as having reality maps: certain things directly benefit their reality maps, certain things directly harm those maps, and certain things are neutral. (To pick on Zoroastrianism: the reality of Ahura Mazda or Angra Mainyu would be in the first category, a genealogical account of Zoroastrian doctrine in the Nietzschean sense would be in the second, and the contents of a randomly selected academic journal in the field of (say) accounting would, I assume, be almost entirely in the third.)
If we multiply the three categories of the Greens by the three categories of the Blues, we get nine options, not three. If we make certain assumptions about Green-Blue conflict, we can reduce this somewhat, and posit that anything that is beneficial to one side but seemingly neutral to the other in fact benefits the first at the expense of the second.
But this leaves five possibilities, not three! In addition to [+Green -Blue], [-Green +Blue], and [0Green 0Blue], we have [+Green +Blue] and [-Green -Blue]. Would Blues and Greens not fear displacement by something outside their union?
Replies from: quanticle, wizzwizz4
↑ comment by quanticle · 2019-09-29T03:50:08.394Z · LW(p) · GW(p)
We already know from history that regimes may become so... self-serving and detached from reality, as one could put it... that they'll feel the need to actively select against smart, sincere idealists or any permutation thereof.
Coincidentally, I was reading this excellent article about the mindset behind Leninism, and I felt like this passage was particularly insightful:
In his history of Marxism, Kołakowski explains some puzzling aspects of Bolshevik practice in these terms. Everyone understands why Bolsheviks shot liberals, socialist revolutionaries, Mensheviks, and Trotskyites. But what, he asks, was the point of turning the same fury on the Party itself, especially on its most loyal, Stalinists, who accepted Leninist-Stalinist ideology without question? Kołakowski observes that it is precisely the loyalty to the ideology that was the problem. Anyone who believed in the ideology might question the leader’s conformity to it. He might recognize that the Marxist-Leninist Party was acting against Marxism-Leninism as the Party itself defined it; or he might compare Stalin’s statements today with Stalin’s statements yesterday. 'The citizen belongs to the state and must have no other loyalty, not even to the state ideology,' Kołakowski observes.
comment by hamnox · 2019-09-24T17:29:00.943Z · LW(p) · GW(p)
sigh. This post digs into why I can't watch the news without feeling frustrated.
Because even when I agree with the newscaster's overall assessment of a situation, there's just... never quite enough acknowledgement that some evidence might point a different direction than is politically convenient. That small or selective samples can even appear to point against the truth. That alternate perspectives on the facts don't come into existence solely to try to knock yours down.
comment by Ruby · 2019-09-24T18:40:02.108Z · LW(p) · GW(p)
As Davis said, this is a good post. Thanks!
Something especially disturbing is to think that the above dynamics don't apply solely to external partisan parties like the media or political groups, but also to the inside of one's own head.
Imagine a very biased coin that comes up Heads 99% of the time. If it gets flipped ten thousand times, 100 of those flips will be Tails (in expectation), giving a selective reporter plenty of examples to point to if she wants to convince you that the coin is extremely Tails-biased.
Suppose one knows ten thousand facts about reality and they mostly point in one direction, but one's mind has anchored on a given narrative or way of the world being. Selective [internal] reporting is all too easy, and one can convince oneself quite strongly of the opposite position. It is all too easy to construct narratives.
comment by hg00 · 2019-09-27T03:44:39.351Z · LW(p) · GW(p)
Nice post. I think one thing which can be described in this framework is a kind of "distributed circular reasoning". The argument is made that "we know sharing evidence for Blue positions causes harmful effects due to Green positions A, B, and C", but the widespread acceptance of Green positions A, B, and C itself rests on the fact that evidence for Green positions is shared much more readily than evidence for Blue positions.
Replies from: wizzwizz4
comment by Zack_M_Davis · 2021-01-02T22:19:33.526Z · LW(p) · GW(p)
(Self-review.) I've edited the post to include the calculation as footnote 10.
The post doesn't emphasize this angle, but this is also more-or-less my abstract story for the classic puzzle of why disagreement [? · GW] is so prevalent, which, from a Bayesian-wannabe rather than a human perspective, should be shocking: there's only one reality, so honest people should get the same answers. How can it simultaneously be the case that disagreement is ubiquitous, but people usually aren't outright lying [LW · GW]? Explanation: the "dishonesty" is mostly in the form of motivatedly asking different questions.
Possible future work: varying the model assumptions might yield some more detailed morals. I never got around to trying the diminishing-marginal-relevance variation suggested in footnote 8. Another variation I didn't get around to trying would be for the importance of a fact to each coalition's narrative to vary: maybe there are a few "sacred cows" for which the social cost of challenging is huge (as opposed to just having to keep one's ratio of off-narrative reports in line).
Prior work: So, I happened to [LW · GW] learn about the filtered-evidence problem from the Sequences, but of course, there's a big statistics literature about learning from missing data that I learned a little bit about in 2020 while perusing Ch. 19 of Probabilistic Graphical Models: Principles and Techniques by Daphne Koller and the other guy.
comment by Davis_Kingsley · 2019-09-24T09:38:44.689Z · LW(p) · GW(p)
This post is excellent. It is probably the best post I have read on LessWrong in a long time. Thank you for writing it!
comment by Viliam · 2019-09-24T20:50:33.747Z · LW(p) · GW(p)
I was trying to devise some strategy. For example, suppose the underlying reality has an equal number of "green" and "blue" facts, but the "green" facts are reported with probability 100%, and the "blue" facts with probability 33%. If you only read one Green newspaper, you can't find out the truth. But reading three independent Green newspapers could give you some clue -- the "green" facts are reported in all of them, the "blue" facts only in some of them. This would suggest that something is going on... even without reading any non-Green sources!
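A toy simulation of this strategy (an illustrative sketch with made-up reporting probabilities, not something from the comment itself):

```python
import random
from collections import Counter

random.seed(1)
N_FACTS = 1000
facts = [random.choice(["green", "blue"]) for _ in range(N_FACTS)]

def green_newspaper(facts):
    # Reports every "green" fact, but each "blue" fact only with probability 1/3.
    return {i for i, color in enumerate(facts)
            if color == "green" or random.random() < 1/3}

papers = [green_newspaper(facts) for _ in range(3)]

# Tally (fact color, number of newspapers that reported it).
coverage = Counter((facts[i], sum(i in paper for paper in papers))
                   for i in range(N_FACTS))
print(coverage)  # "green" facts appear in all 3 papers; most "blue" facts don't
```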
But the problem is that this toy model does not reflect reality. No one reports 100% of facts in line with their narrative. Facts are many, pages are scarce, you select the important things (among those that are in line with your narrative). Like, you cover presidential elections, and mostly ignore people who save kittens. So, using my strategy above, a suspicious reader would conclude that the suppressed Blue politics are about saving kittens, because that's what the Green newspapers don't synchronize about.
Similarly, the differences between various Green newspapers could be e.g. regional. Both Washington Green and New York Green would report about presidential elections, but the former would also mention less important things happening in Washington, and the latter would mention less important things happening in New York.
On the other hand, there would be some natural synchronization about which "blue" facts are allowed to be reported. The ones that are least dangerous to the narrative, of course! Giving a voice to the selected weakest [LW · GW] Blue points (or the strawman versions thereof) could be perceived as extra virtuous, while minimizing risk that readers start taking those points seriously. And the reader who tries to be extra charitable to the other side could end up trying to defend the strawmen.
In the end, it seems I was trying to extract signal from noise. Perhaps this is impossible. Unlike in simple mathematical models, you usually don't know the filtering algorithm, and it doesn't even have to be consistent. So the suspicion (even one supported by evidence) that you are being manipulated, does not help you undo the manipulation. You are likely to undershoot, you are likely to overshoot, you are likely to shoot in a completely wrong direction. You can't revert noise.
Replies from: Slider
comment by cata · 2019-10-01T00:27:07.248Z · LW(p) · GW(p)
Wow, this post really got me thinking. Dealing with this kind of pervasive filtering seems super important and also difficult. Thanks for writing it.
Your proposal that you can clue other people into your filtering mechanism seems hard in practice. Any effort in this vein means that you are saying that you suspect the consensus is misleading, which means that you are saying that non-consensus beliefs are more likely to be true, and people can pick up on this. I tried to come up with ways to express "this is cherry-picked, but" without triggering that but I couldn't figure out ways that seemed plausible.
Approaches like Raemon's, where you say "I'm just never going to talk about controversial thing X" seem mentally hard to update on -- in a world where there are a million people who say "I'm never going to talk about X" and a thousand who are constantly presenting cherry-picked evidence about X, it's very difficult for my mind to interpret the filtering disclaimers as object-level evidence about X that can fight with the evidence provided by the cherry-pickers.
Replies from: Raemon
↑ comment by Raemon · 2019-10-01T06:12:55.956Z · LW(p) · GW(p)
One outcome might be that you live in a particular rationalist filter bubble where everyone can't-say similar sets of things, or that maybe you can take the things people say in aggregate and see that they form a fairly digestible pattern.
(I suspect I'm not alone in avoiding talking about mainstream politics, and I'd suspect there's roughly four groups in my circles that are something like "avoid politics", "disproportionately lean social leftist", "disproportionately lean libertarian", and "disproportionately lean edgy and contrarian", or something kinda like that, and if you can get a sense of how many of each group there are you can turn that into an overall heuristic.)
comment by Ege Erdil (ege-erdil) · 2021-11-28T15:57:18.624Z · LW(p) · GW(p)
This is a rather pedantic remark that doesn't have much relevance to the primary content of the post (EDIT: it's also based on a misunderstanding of what the post is actually doing - I missed that an explicit prior is specified which invalidates the concern raised here), but
If such a coin is flipped ten times by someone who doesn't make literally false statements, who then reports that the 4th, 6th, and 9th flips came up Heads, then the update to our beliefs about the coin depends on what algorithm the not-lying[1] reporter used to decide to report those flips in particular. If they always report the 4th, 6th, and 9th flips independently of the flip outcomes—if there's no evidential entanglement between the flip outcomes and the choice of which flips get reported—then reported flip-outcomes can be treated the same as flips you observed yourself: three Headses is 3 * 1 = 3 bits of evidence in favor of the hypothesis that the coin is Heads-biased. (So if we were initially 50:50 on the question of which way the coin is biased, our posterior odds after collecting 3 bits of evidence for a Heads-biased coin would be 2³:1 = 8:1, or a probability of 8/(1 + 8) ≈ 0.89 that the coin is Heads-biased.)
is not how Bayesian updating would work in this setting. As I've explained in my post [LW · GW] about Laplace's rule of succession, if you start with a uniform prior over [0, 1] for the probability of the coin coming up heads and you observe a sequence of n heads in succession, you would update to a posterior of Beta(n + 1, 1), which has mean (n + 1)/(n + 2). For n = 3 that would be 4/5 = 0.8 rather than 8/9 ≈ 0.89.
I haven't formalized this, but one problem with the entropy approach here is that the distinct bits of information you get about the coin are actually not independent, so they are worth less than one bit each. They aren't independent because if you know some of them came up heads, your prior that the other ones also came up heads will be higher, since you'll infer that the coin is likely to have been biased in the direction of coming up heads.
To not leave this totally up in the air, if you think of the kth heads as having an information content of log₂((k + 1)/k) bits, then the total information you get from n heads is something like log₂(2/1) + log₂(3/2) + ⋯ + log₂((n + 1)/n) = log₂(n + 1) bits instead of n bits. Neglecting this effect leads you to make much more extreme inferences than would be justified by Bayes' rule.
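A quick numerical check of the uniform-prior claim (an illustrative sketch, not from the original comment):

```python
# Posterior mean of the coin's Heads-probability after observing n Heads in a
# row, starting from a uniform prior on [0, 1] (Laplace's rule of succession).
def posterior_mean_after_heads(n, steps=100_000):
    dx = 1.0 / steps
    num = den = 0.0
    for i in range(steps):
        theta = (i + 0.5) * dx
        likelihood = theta ** n        # probability of n straight Heads given theta
        num += theta * likelihood * dx
        den += likelihood * dx
    return num / den

print(posterior_mean_after_heads(3))   # ≈ 0.8 = (3+1)/(3+2), not 8/9 ≈ 0.89
```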
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2021-11-28T18:04:12.107Z · LW(p) · GW(p)
Thanks for this analysis! However—
if you start with a uniform prior over [0,1] for the probability of the coin coming up heads
I'm not. The post specifies "a coin that is either biased to land Heads 2/3rds of the time, or Tails 2/3rds of the time"—that is (and maybe I should have been more explicit), I'm saying our prior belief about the coin's bias is just the discrete distribution {"1/3 Heads, 2/3 Tails": 0.5, "2/3 Heads, 1/3 Tails": 0.5}.
I agree that a beta prior would be more "realistic" in the sense of applying to a wider range of scenarios (your uncertainty about a parameter is usually continuous, rather than "it's either this, or it's that, with equal probability"), but I wanted to make the math easy on myself and my readers.
Replies from: ege-erdil
↑ comment by Ege Erdil (ege-erdil) · 2021-11-28T18:51:06.875Z · LW(p) · GW(p)
Ah, I see. I missed that part of the post for some reason.
In this setup the update you're doing is fine, but I think measuring the evidence for the hypothesis in terms of "bits" can still mislead people here. You've tuned your example so that the likelihood ratio is equal to two and there are only two possible outcomes, while in general there's no reason for those two values to be equal.
comment by Slider · 2019-09-24T18:16:51.968Z · LW(p) · GW(p)
Having the die be ambivalent about green, blue and grey can suggest a direction where giving equal space is not succumbing to bias. But enforcing balance where it doesn't exist in the territory is bad in the relevant way. There might be good arguments about making the math easy, or that if we are talking about "explanation hogging" top candidates then their "explanatory fit by accident" should be somewhat proportional. If reality doesn't care about you, it doesn't mean reality will be fair to you.
I also expected that in an area where a green result is expected, giving a grey result would be a failure. So the ratio would be 3 greens to 1 non-green. One of the tragedies is that on a hot-topic issue, taking a stance that can be recognised by multiple sides to not be in line with their narrative can trigger the wrath of all the sides.
comment by [deleted] · 2019-09-25T17:40:05.478Z · LW(p) · GW(p)
The toy example you gave seems like something that would make for a fun simulation à la Nicky Case's stuff: you can try with multiple groups, different types of evidence (which support either side in varying amounts), and different coordination mechanisms.
I'll look into something this weekend. If anyone else likes doing JS development, ping me, and we can figure something out!
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2019-09-28T16:01:20.391Z · LW(p) · GW(p)
This sounds fun, but unfortunately, I don't think I have time to commit to anything!—I have a lot more (prose) writing to do today and tomorrow.
(I also try to avoid JavaScript, to the extent that even my new browser game, U.S.S. Uncommon Priors Require Origin Disputes (source, demo) is mostly written in Rust (compiled to WebAssembly), with just enough JavaScript glue to listen to keystrokes and paint the canvas.)
comment by Optimization Process · 2019-09-25T07:52:48.003Z · LW(p) · GW(p)
(Strong approval for this post. Figuring out how to deal with filtered evidence is close to my heart.)
Suppose that the facts relevant to making optimal decisions about an Issue are represented by nine rolls of the Reality die, and that the quality (utility) of Society's decision is proportional to the (base-two logarithm) entropy of the distribution of what facts get heard and discussed.
Sorry-- what distribution are we measuring the entropy of? When I hear "entropy of a distribution," I think −Σᵢ pᵢ log₂(pᵢ) -- but it's not clear to me how to get from there to 3.17, 2.58, and 3.03.
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2019-09-25T15:33:44.638Z · LW(p) · GW(p)
You have three green slots, three gray slots, and three blue slots. You put three counters each on each of the green and gray slots, and one counter each on each of the blue slots. The frequencies of counters per slot are [3, 3, 3, 3, 3, 3, 1, 1, 1]. The total number of counters you put down is 3*6 + 3 = 18 + 3 = 21. To turn the frequencies into a probability distribution, you divide everything by 21, to get [1/7, 1/7, 1/7, 1/7, 1/7, 1/7, 1/21, 1/21, 1/21]. Then the entropy is −(6 · (1/7) · log₂(1/7) + 3 · (1/21) · log₂(1/21)), which is (6/7)·log₂(7) + (1/7)·log₂(21) ≈ 3.03. Right? (Thanks for checking—it would be really embarrassing if I got this wrong. I might edit the post later to include more steps.)
Replies from: Optimization Process
↑ comment by Optimization Process · 2019-10-08T05:52:57.435Z · LW(p) · GW(p)
Ahhh! Yes, that helps a great deal. Thank you!
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-12-02T15:22:44.100Z · LW(p) · GW(p)
This post improved my understanding of how censorship, polarization, groupthink, etc. work. I also love the slogan "Blatant cherry-picking is the best kind."
comment by Said Achmiz (SaidAchmiz) · 2019-09-25T16:53:05.729Z · LW(p) · GW(p)
A ‘three-sided die’ (or, more properly, a d3) is quite easy to construct, actually. Simply label a twelve-sided die such that four of the faces say ‘1’, four say ‘2’, and four say ‘3’. (Similar to this twelve-sided d4.)
Replies from: Thelo, Thecommexokid
↑ comment by Thelo · 2019-10-09T15:20:10.165Z · LW(p) · GW(p)
Another easy option for rolling an N-sided die is an N-sided prism, like a pencil that you roll on the table that can only come to rest on one of N sides (and never on the tips). With 3 sides it becomes a triangular prism that doesn't quite roll as well as we'd like, but it's doable.
Yet another option is a spinning top with N faces, where you can set N to whatever you want that's >= 3.
But you're right that in practice, probably re-labeling an existing dice, like relabeling a d6 as [1,1,2,2,3,3], is easiest.
↑ comment by Thecommexokid · 2019-10-08T12:54:20.426Z · LW(p) · GW(p)
Or this product http://spielpro.com/d3-by-the-dice-lab/
comment by hamnox · 2021-01-23T22:18:11.004Z · LW(p) · GW(p)
Crucial. I definitely remember reading this and thinking it was one of the most valuable posts I'd seen all year. Good logical structure.
But it's hard to read? It has jarring, erratic rhetorical flow; succinct where elaboration is predictably needed, and verbose where it is redundant. A mathematician's scratch notes, I think.
comment by Matt Goldenberg (mr-hire) · 2020-12-03T19:10:43.135Z · LW(p) · GW(p)
I've referenced this post at least a few times when trying to discuss the nuances of contextualizing vs. decoupling norms.
comment by RHollerith (rhollerith_dot_com) · 2019-09-25T23:28:13.656Z · LW(p) · GW(p)
Here is Zack's article formatted as plain text suitable for offline reading: https://gist.githubusercontent.com/hruvulum/ac9774ae322d958c46532804a5d01393/raw/945562bfaef6c50d501228990f6e458e762895b2/selective_reporting.txt
Replies from: Raemon
↑ comment by Raemon · 2019-09-25T23:44:26.577Z · LW(p) · GW(p)
Apologies for the little LWs – we've removed them in our most recent commit to the site but ran into a bug that's been preventing us from deploying.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2019-09-26T07:41:52.790Z · LW(p) · GW(p)
(note, this is now fixed)
Replies from: rhollerith_dot_com
↑ comment by RHollerith (rhollerith_dot_com) · 2019-09-26T17:50:13.320Z · LW(p) · GW(p)
I appreciate the fix.
comment by Slimepriestess (Hivewired) · 2023-09-10T18:09:16.502Z · LW(p) · GW(p)
I like this post a lot, but I have a bit I want to push back on/add nuance towards, which is how the social web behaves when presented with "factionally inconsistent" true information. In the presented hypothetical world controlled by greens, correct blue observations are discounted and hidden (and the reverse also holds in the reversed case). However, I don't think the information environment of the current world resembles that very much: the faction boundaries are much less distinct and coherent, often are only alliances of convenience, and the overall social reality field is less "static, enemy territory" than the post presents it as.
This is important because:
- freedom of speech means in practice anyone can say anything
- saying factionally-unpopular things can be status-conferring because the actual faction borders are unclear and people can flip sides.
- sharing the other faction's information in a way that makes them look bad can convey status to you for your faction
- the other faction can encode true information into what you think is clearly false, and when you then share it to dunk on them, you inadvertently give that true information to others.
This all culminates in a sort of recursive societal waluigi effect where the more that one faction tries to clamp down on a narrative, the more every other faction will inadvertently be represented within the structure of that clamped narrative, and all the partisan effects will replicate themselves inside that structure at every level of complexity.
If factional allegiances trump epistemic accuracy, then you will not have the epistemics to notice when your opponents are saying true things, and so if you try to cherrypick false things to make them look worse, you will accidentally convey true things without realizing it.
Let's give an example:
Say we have a biased green scientist who wants to "prove greens are always right" and he has that three-sided die that comes up green 1/3 of the time. He wants to report "correct greens" and "incorrect blues" to prove his point. When a roll he expects to be green comes up green, he reports it; when a roll he expects to be green comes up blue, he also reports it as evidence that blue is wrong, because it gives the "wrong answer" to his green-centric query. If he's interpreting everything through a green-centric lens, then he will not notice he is doing this.
"the sky clearly blue-appearing to causal observation, which confirms my theory that the sky is green under these conditions I have specified, it merely appears blue for the same reason blues are always wrong"
but if you're a green who cares about epistemics, or a blue who is looking for real evidence, that green just gave you a bunch of evidence without noticing he was doing it. There are enough people in the world who are just trying to cherrypick for their respective factions that they will not notice they're leaking correct epistemics where everyone else can see. This waluigi effect goes in every direction: you can't point to the other faction and describe how they're wrong without describing them, which, if they're right about something, will get slipped in without you realizing it. This is part of why truth is an asymmetric weapon.
The described "blue-green factions divided" world feels sort of "1984" to our world's "Brave New World", in a 1984-esque world, where saying "the sky is blue iff the sky is blue, the sky is green iff the sky is green" would get you hung as a traitor to the greens, the issues described in this thread would likely be more severe and closer to the presented description, but in our world, where "getting hung as a traitor" is, for most people outside of extremely adverse situations, "a bunch of angry people quote tweet and screenshot you and post about you and repeat "lol look how wrong they are" hundreds of times where everyone can see exactly what you're saying", well that's basically just free advertising for what you consider true information, and the people who care about truth will be looking for it, not for color coding.