Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists
post by Zack_M_Davis
score: 199 (65 votes)
Followup to: What Evidence Filtered Evidence? [LW · GW]
In "What Evidence Filtered Evidence?" [LW · GW], we are asked to consider a scenario involving a coin that is either biased to land Heads 2/3rds of the time, or Tails 2/3rds of the time. Observing Heads is 1 bit of evidence for the coin being Heads-biased (because the Heads-biased coin lands Heads with probability 2/3, the Tails-biased coin does so with probability 1/3, the likelihood ratio of these is , and ), and analogously and respectively for Tails.
If such a coin is flipped ten times by someone who doesn't make literally false statements [LW · GW], who then reports that the 4th, 6th, and 9th flips came up Heads, then the update to our beliefs about the coin depends on what algorithm the not-lying reporter used to decide to report those flips in particular. If they always report the 4th, 6th, and 9th flips independently of the flip outcomes—if there's no evidential entanglement [LW · GW] between the flip outcomes and the choice of which flips get reported—then reported flip-outcomes can be treated the same as flips you observed yourself: three Headses is 3 * 1 = 3 bits of evidence in favor of the hypothesis that the coin is Heads-biased. (So if we were initially 50:50 on the question of which way the coin is biased, our posterior odds after collecting 3 bits of evidence for a Heads-biased coin would be $2^3 : 1 = 8 : 1$, or a probability of 8/(1 + 8) ≈ 0.89 that the coin is Heads-biased.)
On the other hand, if the reporter mentions only and exactly the flips that came out Heads, then we can infer that the other 7 flips came out Tails (if they didn't, the reporter would have mentioned them), giving us posterior odds of $2^3 : 2^7 = 8 : 128 = 1 : 16$, or a probability of around 0.06 that the coin is Heads-biased.
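Here is a minimal sketch of both updates in Python (the scenario is as described above; the helper function and its name are mine):

```python
def posterior_heads_biased(heads: int, tails: int) -> float:
    """Posterior probability that the coin is Heads-biased, starting from
    50:50 prior odds. Each observed Heads multiplies the odds by
    (2/3)/(1/3) = 2; each observed Tails multiplies them by 1/2.
    """
    odds = 2.0 ** (heads - tails)
    return odds / (1 + odds)

# Reporting algorithm independent of outcomes: the three reported
# Headses are the only evidence.
print(posterior_heads_biased(heads=3, tails=0))  # 8/9 ≈ 0.89

# Reporter mentions all and only the Heads flips: infer that the other
# seven flips came up Tails.
print(posterior_heads_biased(heads=3, tails=7))  # 1/17 ≈ 0.06
```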
So far, so standard. (You did read the Sequences, right??) What I'd like to emphasize about this scenario today, however, is that while a Bayesian reasoner who knows the non-lying reporter's algorithm of what flips to report will never be misled by the selective reporting of flips, a Bayesian with mistaken beliefs about the reporter's decision algorithm can be misled quite badly: compare the 0.89 and 0.06 probabilities we just derived given the same reported outcomes, but different assumptions about the reporting algorithm.
If the coin gets flipped a sufficiently large number of times, a reporter whom you trust to be impartial (but isn't), can make you believe anything she wants without ever telling a single lie, just with appropriate selective reporting. Imagine a very biased coin that comes up Heads 99% of the time. If it gets flipped ten thousand times, 100 of those flips will be Tails (in expectation), giving a selective reporter plenty of examples to point to if she wants to convince you that the coin is extremely Tails-biased.
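To make that concrete, here is a quick simulation (a sketch: the 99% bias and ten thousand flips are from the text; the reporting rule of mentioning only Tails is my assumption):

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility
flips = ["H" if random.random() < 0.99 else "T" for _ in range(10_000)]
true_tails = [i for i, flip in enumerate(flips) if flip == "T"]

# About 100 flips really did come up Tails, so a selective reporter can
# cite Tails after Tails without ever telling a single lie.
print(len(true_tails))  # ≈ 100
```

A listener who models her reports as a random sample of the flips would update toward "extremely Tails-biased" on every single one.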
Toy models about biased coins are instructive for constructing examples with explicitly calculable probabilities, but the same structure applies to any real-world situation where you're receiving evidence from other agents, and you have uncertainty about what algorithm is being used to determine what reports get to you. Reality is like the coin's bias; evidence and arguments are like the outcome of a particular flip. Wrong theories will still have some valid arguments and evidence supporting them (as even a very Heads-biased coin will come up Tails sometimes), but theories that are less wrong will have more.
If selective reporting is mostly due to the idiosyncratic bad intent of rare malicious actors, then you might hope for safety in (the law of large) numbers: if Helga in particular is systematically more likely to report Headses than Tailses that she sees, then her flip reports will diverge from everyone else's, and you can take that into account when reading Helga's reports. On the other hand, if selective reporting is mostly due to systemic structural factors that result in correlated selective reporting even among well-intentioned people who are being honest as best they know how, then you might have a more serious problem.
"A Fable of Science and Politics" [LW · GW] depicts a fictional underground Society polarized between two partisan factions, the Blues and the Greens. "[T]here is a 'Blue' and a 'Green' position on almost every contemporary issue of political or cultural importance." If human brains consistently understood the is/ought distinction, then political or cultural alignment with the Blue or Green agenda wouldn't distort people's beliefs about reality. Unfortunately ... humans. (I'm not even going to finish the sentence.)
Reality itself isn't on anyone's side, but any particular fact, argument, sign, or portent [LW · GW] might just so happen to be more easily construed as "supporting" the Blues or the Greens. The Blues want stronger marriage laws; the Greens want no-fault divorce. An evolutionary psychologist [LW · GW] investigating effects of kin-recognition mechanisms on child abuse by stepparents might aspire to scientific objectivity, but being objective and staying objective is difficult when you're embedded in an intelligent social web [LW · GW] in which your work is going to be predictably championed by Blues and reviled by Greens.
Let's make another toy model to try to understand the resulting distortions on the Undergrounders' collective epistemology. Suppose Reality is a coin—no, not a coin, a three-sided die, with faces colored blue, green, and gray. One-third of the time it comes up blue (representing a fact that is more easily construed as supporting the Blue narrative), one-third of the time it comes up green (representing a fact that is more easily construed as supporting the Green narrative), and one-third of the time it comes up gray (representing a fact that not even the worst ideologues know how to spin as "supporting" their side).
Suppose each faction has social-punishment mechanisms enforcing consensus internally. Without loss of generality, take the Greens (with the understanding that everything that follows goes just the same if you swap "Green" for "Blue" and vice versa). People observe rolls of the die of Reality, and can freely choose what rolls to report—except a resident of a Green city who reports more than 1 blue roll for every 3 green rolls is assumed to be a secret Blue Bad Guy, and faces increasing social punishment as their ratio of reported green to blue rolls falls below 3:1. (Reporting gray rolls is always safe.)
The punishment is typically informal: there's no official censorship from Green-controlled local governments, just a visible incentive gradient made out of social-media pile-ons, denied promotions, lost friends and mating opportunities, increased risk of being involuntarily committed to psychiatric prison, &c. Even people who privately agree with dissident speech might participate in punishing it, the better to evade punishment themselves.
This scenario presents a problem for people who live in Green cities who want to make and share accurate models of reality. It's impossible to report every die roll (the only 1:1 scale map of the territory is the territory itself), but it seems clear that the most generally useful models—the ones you would expect arbitrary AIs to come up with—aren't going to be sensitive to which facts are "blue" or "green". The reports of aspiring epistemic rationalists who are just trying to make sense of the world will end up being about one-third blue, one-third green, and one-third gray, matching the distribution of the Reality die.
From the perspective of ordinary nice smart Green citizens who have not been trained in the Way, these reports look unthinkably Blue. Aspiring epistemic rationalists who are actually paying attention can easily distinguish Blue partisans from actual truthseekers, but the social-punishment machinery can't process more than five words at a time [LW · GW]. The social consequences of being an actual Blue Bad Guy, or just an honest nerd who doesn't know when to keep her stupid trap shut, are the same.
In this scenario, public opinion within a subculture or community in a Green area is constrained by the 3:1 (green:blue) "Overton ratio." In particular, under these conditions, it's impossible to have a rationalist community—at least, not in the most naïve conception of such. If your marketing literature says, "Speak the truth, even if your voice trembles," but all the savvy high-status people's actual reporting algorithm is, "Speak the truth, except when that would cause the local social-punishment machinery to mark me as a Blue Bad Guy and hurt me and any people or institutions I'm associated with—in which case, tell the most convenient lie-of-omission", then smart sincere idealists who have internalized your marketing literature as a moral ideal and trust the community to implement that ideal, are going to be misled by the community's stated beliefs—and confused at some of the pushback they get when submitting reports with a 1:1:1 blue:green:gray ratio.
Well, misled to some extent—maybe not much! In the absence of an Oracle AI (or a competing rationalist community in Blue territory) to compare notes with, it's not clear how one could get a better map than by trusting what the "green rationalists" say. With a few more made-up modeling assumptions, we can quantify the distortion introduced by the Overton-ratio constraint, which will hopefully help develop an intuition for how large a problem this sort of thing might be in real life.
Imagine that Society needs to make a decision about an Issue (like a question about divorce law or merchant taxes). Suppose that the facts relevant to making optimal decisions about an Issue are represented by nine rolls of the Reality die, and that the quality (utility) of Society's decision is proportional to the (base-two logarithm) entropy of the distribution of what facts get heard and discussed.
The maximum achievable decision quality is $\log_2 9 \approx 3.17$, the entropy of the uniform distribution in which all nine facts get heard equally often.
On average, Green partisans will find 3 "green" facts and 3 "gray" facts to report, and mercilessly stonewall [LW · GW] anyone who tries to report any "blue" facts, for a decision quality of $\log_2 6 \approx 2.58$.
On average, the Overton-constrained rationalists will report the same 3 "green" and 3 "gray" facts, but something interesting happens with "blue" facts: each individual can only afford to report one "blue" fact without blowing their Overton budget—but it doesn't have to be the same fact for each person. Reports of all 3 (on average) blue rolls get to enter the public discussion, but get mentioned (cited, retweeted, &c.) 1/3 as often as green or gray rolls, in accordance with the Overton ratio. So it turns out that the constrained rationalists end up with a decision quality of $\frac{6}{7}\log_2 7 + \frac{1}{7}\log_2 21 \approx 3.03$, significantly better than the Green partisans—but still falling short of the theoretical ideal where all the relevant facts get their due attention.
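These three numbers can be checked directly from the entropy definition (a sketch; the weighting for the constrained case follows the counter derivation worked out in the comments below):

```python
from math import log2

def decision_quality(weights):
    """Shannon entropy (in bits) of the normalized weight distribution
    over how often each fact gets heard."""
    total = sum(weights)
    return -sum(w / total * log2(w / total) for w in weights)

ideal       = decision_quality([1] * 9)            # every fact heard equally
partisans   = decision_quality([1] * 6)            # blue facts stonewalled
constrained = decision_quality([3] * 6 + [1] * 3)  # blue heard 1/3 as often

print(round(ideal, 2), round(partisans, 2), round(constrained, 2))
# 3.17 2.58 3.03
```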
If it's just not pragmatic to expect people to defy their incentives [LW · GW], is this the best we can do? Accept a somewhat distorted state of discourse, forever?
At least one partial remedy seems apparent. Recall from our original coin-flipping example that a Bayesian who knows what the filtering process looks like, can take it into account and make the correct update. If you're filtering your evidence to avoid social punishment, but it's possible to clue in your fellow rationalists to your filtering algorithm without triggering the social-punishment machinery—you mustn't assume that everyone already knows [LW · GW]!—that's potentially a big win. In other words, blatant cherry-picking is the best kind! [LW · GW]
Comments sorted by top scores.
comment by Raemon
· score: 33 (11 votes) · LW
Agree with others on this being a well-put-together post, both clearly tying together several related concepts and pushing the surrounding conversation forward. I like 'blatant cherrypicking is the best kind' as an encapsulation of the concept.
Some high level thoughts here:
- It's better to share your cherry-picking algorithm at a high enough level that people can predict future things you might suddenly not be able to talk about in a few years. (I.e., I can list the things that I don't currently talk about, but in a year a new issue might get politicized that I didn't think to spell out in advance. And sharing "I don't talk about X" right as X is getting politicized might be particularly socially hazardous. It's also just costly, in time, to periodically update your taboos, and costly in other people's time to constantly be checking up on it.)
- Ideally, if it's only OTHER people we're worried about social harm from (i.e. non-aspiring-epistemic-rationalists), we still get to talk about the thing to build a fully integrated worldmodel. One property that a Citadel of Truth [LW · GW] should have is actually keeping things private from the outside world. (This is a solvable logistical problem, although you do have to actually solve it. It might be good for LW to enable posts to be hidden from logged-out users, perhaps requiring some karma threshold to see taboo posts.)
- The hardest of hard modes is "local politics", where it's not just that I'm worried about nebulous "outsiders" hurting me (or friends feeling pressure to disown me because they in turn face pressure from outsiders). Instead, the issue is politics inside the citadel. It seems like a quite desirable property to be able to talk freely about which local orgs and people deserve money and prestige – but I don't currently know of robust game mechanics that will actually, reliably enable this in any environment where I don't personally know and trust each person.
- (Having multiple "inner citadels" of trust is sort of the de-facto way this is done currently, in my experience. Having clearer signposting on how to get trustworthy might be a good improvement. Notably, proclaiming "I only care about truth, not politics" is not sufficient for me to trust someone in this domain.)
Some concrete "cherry-picking-algorithm-sharings"
I'm happy, at this present moment, to note:
I generally avoid talking online about mainstream national political issues (gender and social justice issues in particular) unless they cross a particular threshold for "relevant to EA concerns", or "personally important to me".
(For example, climate change is politicized, but it's pretty important to fit it into my overall catastrophic risk ontology, and worth paying the cost for. I don't talk about it that often, partly because it still has a slightly-disproportionate cost*, and partly because it just turns out climate change isn't that important after having thought about it in an EA context)
It currently so happens that gender/social-justice issues are both "national politics" and also "local politics", which makes them additionally hard to talk openly about. But this is a property of the particular year, and different issues might be more sensitive in a couple years. If you want to know which things I can't say in a couple years, and I haven't written another such comment in the meanwhile, you may need to do interpretive labor and careful observation. Which sucks, and I'm sorry.
*the climate change cost isn't just in "people might ostracize me" (I'm not actually worried about that), but "random people are less likely to be thinking clearly, and the conversation quality is often going to be worse".
There's an interesting thing where abortion becomes a much more interesting topic to me now that I have more nuanced views of moral uncertainty and trade. But this is a topic that I think makes most sense to discuss "within the citadel" (where "within the citadel" means "I'm reasonably confident randos from the culture war won't show up, and/or start making life difficult for me or causes I care about". Especially because most of the value isn't related to abortion itself, but integrating a worldview that includes abortion as well as digital uploads and low-grade simulations)
Local politics (i.e. facts pertinent to "who gets money/prestige in the rationalsphere") has the double-edged property of "being much more important to talk about" and "being harder and anti-inductive to talk about". This post has provided a good reminder for me to figure out some thoughts here and write them up. I think they're beyond the scope of this comment, though.
None of these are things that I don't talk about period, they are just things that are disproportionately costly to talk about, so I don't talk about them freely without putting some thought into it and making sure it's worth it.
comment by Ben Pace (Benito)
· score: 23 (9 votes) · LW
Curated. I've tried to think about similar topics - silencing of different kinds of information can also lead to information cascades [LW · GW], for example. This was a simple toy model that had properties I'd never put into an explicit model before - if signalling always looks like at least a 3:1 ratio of args in your side's favour, then random chance is gonna mean some people (even if 3:1 is the ground truth) will have lopsided info and have to lie, and that's a massive corruption of those people's epistemologies.
So far, so standard. (You did read the Sequences, right??)
Yes, indeed I did. A bunch of the beginning was nice to see again, it's good for people to reread that stuff, and for any newer users who haven't done it for the first time yet.
I wasn't so much enjoying one political footnote which seemed mostly off-topic or something, until the line at the end saying
If I'm doing my job right, then my analogue in a "nearby" Everett branch [LW · GW] whose local subculture was as "right-polarized" as my Berkeley environment is "left-polarized", would have written a post making the same arguments.
which I really like as a way of visualising needling the truth between political biases in any environment.
The post is very readable and clearly explained, plus lots of links for context, which is always great.
I mostly feel confused about quantifying how biased the decisions are. If you have 9 honest rolls, then that's $\log_2 9 \approx 3.2$ bits. But if you roll it 9 times and hide the 3 rolls in a certain direction, then you don't have $\log_2 6 \approx 2.6$ bits. That would be true if you had 6 honest rolls (looking like 2:2:2), but 3:3:0 surely is not the same amount of evidence. I'm generally not sure how best to understand the effects of biases of this sort, and want to think about that more.
comment by vedrfolnir
· score: 22 (7 votes) · LW
(This post is important enough that I'm breaking my commitment not to post until a certain time in the future.)
The model here strikes me as the correct *sort* of model, but deserving of substantial complication. Two complications in particular seem clear and relevant to me.
First, will the smart sincere idealists be simply *misled?* Given that this hypothetical imperfect rationalist space exists within Green territory, deviations from the Overton ratio will be punished by Greens *both inside and outside* the rationalist space; as such, it could (entirely unintentionally, at least at first) serve to *reinforce* Green partisan hegemony, especially if there's a large imbalance between the abilities of Greendom and Bluedom to offer *patronage*.
We already know from history that regimes may become so... self-serving and detached from reality, as one could put it... that they'll feel the need to actively select against smart, sincere idealists, or any permutation thereof. Loyalty to anything but the regime may be seen as an inefficiency and optimized away.
As a result, it could be useful for Green partisans to keep such spaces around, albeit low-prestige and generally reviled. Partisans also have an interest in identifying the sincere and the idealistic, but for precisely the opposite reasons. (Cf. the Hundred Flowers Campaign.)
Second, the neat division of truths into Green, Blue, and Gray rings unconvincing to me. Consider the Greens and Blues as having reality maps: certain things directly benefit their reality maps, certain things directly harm those maps, and certain things are neutral. (To pick on Zoroastrianism: the reality of Ahura Mazda or Angra Mainyu would be in the first category, a genealogical account of Zoroastrian doctrine in the Nietzschean sense would be in the second, and the contents of a randomly selected academic journal in the field of (say) accounting would, I assume, be almost entirely in the third.)
If we multiply the three categories of the Greens by the three categories of the Blues, we get nine options, not three. If we make certain assumptions about Green-Blue conflict, we can reduce this somewhat, and posit that anything that is beneficial to one side but seemingly neutral to the other in fact benefits the first at the expense of the second.
But this leaves five possibilities, not three! In addition to [+Green -Blue], [-Green +Blue], and [0Green 0Blue], we have [+Green +Blue] and [-Green -Blue]. Would Blues and Greens not fear displacement by something outside their union?
comment by quanticle
· score: 9 (5 votes) · LW
We already know from history that regimes may become so... self-serving and detached from reality, as one could put it... that they'll feel the need to actively select against smart, sincere idealists or any permutation thereof.
Coincidentally, I was reading this excellent article about the mindset behind Leninism, and I felt like this passage was particularly insightful:
In his history of Marxism, Kołakowski explains some puzzling aspects of Bolshevik practice in these terms. Everyone understands why Bolsheviks shot liberals, socialist revolutionaries, Mensheviks, and Trotskyites. But what, he asks, was the point of turning the same fury on the Party itself, especially on its most loyal, Stalinists, who accepted Leninist-Stalinist ideology without question? Kołakowski observes that it is precisely the loyalty to the ideology that was the problem. Anyone who believed in the ideology might question the leader’s conformity to it. He might recognize that the Marxist-Leninist Party was acting against Marxism-Leninism as the Party itself defined it; or he might compare Stalin’s statements today with Stalin’s statements yesterday. 'The citizen belongs to the state and must have no other loyalty, not even to the state ideology,' Kołakowski observes.
comment by DaystarEld
· score: 19 (7 votes) · LW
Great post. Something that's worth noting is that this is true *even if reality is not 1:1:1.*
Let's say the joke "reality has a well-known Green bias" is actually true, and add an extra Green side to that impossible die to make it a true four-sided pyramid, easy to picture and shake in your hands. It's now 2:1:1 Green:Blue:Grey.
Reality says "Roll 10d4," and you let all the little pointy shapes in your palms spill out to reveal... *rolls actual dice* ~4 Green, ~4 Grey, and ~2 Blue.
"Oh snap," you, a Green, might say. "Evidence that my philosophy/political views are correct!... Er, I mean, more correct than the local alternative competing one! And really, that's what matters, isn't it? No one's PERFECT, but at least we can feel justified in fighting the good fight!"
You still have two glaring problems.
1) You're still, in the majority of cases, wrong, on this particular subject/roll. This is an unpopular thing to say among Greens. "We're actually wrong on most things, but we're more right than the opposition!" is only ever going to be a rallying cry for those who intrinsically dislike tribalism and grok lesser-of-two-evils and advocate for slow and steady progress, not those trying to Win The Battle for the Soul/Future/Rights of the Whatever.
2) Your opponents are actually still right on half as many things as you are. Even hinting at this will make you EVEN MORE UNPOPULAR among your fellow Greens. Especially if any of the things they're right about are particularly hot-button issues at the point in spacetime that you find yourself in.
So, being a Green (or a Grey who knows that Greys have no chance and so hesitantly puts on a Green uniform) who now wants to actually go into politics and push for positive change, you will by necessity feel a lot more inclined to focus on those 4 things that you have strong evidence your side is right about, ignore those 2 things that your opponent is right about (unless you're behind closed doors with similarly reasonable Greens, Blues, or Greys), and maybe once in a while try to push at a couple of the 4 things that both sides are wrong about, assuming you have any political capital to spare (you probably don't, hell you probably don't even have enough to focus on all of the 4 dice-backed Green topics).
Well guess what: now to an "objective observer," you, an intelligent and competent politician, look like a biased ideologue who's blinded by his partisanship and just appealing to his rank-and-file.
Change this from 10 separate issues to 10 pieces of evidence that make up the nuance of a single issue, and a similar thing happens as nuance gets left by the wayside. Now imagine that not everyone is actually researching all those dice rolls and instead just randomly looking at the subsection of them that happens across their newsfeed, or which appeals to their particular interests, or is the least complicated to take in.
It's disheartening, but it can help to note that those who are trying to actually make changes in any social sphere are not working under the same rules/toward the same goals as those trying to just seek truth. This is not a defense of lying, by omission or otherwise, but it is a defense of people who have good reasons to do or say things that we, observing them from the outside, may think are evidence of them being mindkilled.
comment by RRand
· score: 2 (2 votes) · LW
It's worth pointing out that grey in the post refers to
a fact that not even the worst ideologues know how to spin as "supporting" their side.
I'm not sure what good grey rolls do in the context of this post (especially given the proviso that "there is a 'Blue' and a 'Green' position on almost every contemporary issue of political or cultural importance").
But grey rolls are, of course, important: Grey facts and grey issues are uncorrupted by the Great War, and hence are that much more accessible/tractable. The more grey facts there are, the better rationalists we can be.
With respect to your comment, the presence of Grey, Yellow, Orange and Purple Teams would actually help things substantially -- if I report facts from the six teams equally, it's harder to label me as a partisan. (And it's harder for any team to enforce partisanship.) Even if Blue-supporting facts truly are taboo (Green is unlikely to have more than one archnemesis), that's much less limiting when only a sixth of facts are Blue. It's a nice advantage of multipolar politics.
comment by Matthias Görgens (matthias-goergens)
· score: 7 (3 votes) · LW
What seems to happen in practice is that everything non-Green gets lumped together as Blue, even if those other people do not see themselves as belonging to one faction.
And then you have the additional complications of out-group vs far-group.
comment by Ruby
· score: 19 (10 votes) · LW
As Davis said, this is a good post. Thanks!
Something especially disturbing is to think that the above dynamics don't apply solely to external partisan parties like the media or political groups, but also to the inside of one's own head.
Imagine a very biased coin that comes up Heads 99% of the time. If it gets flipped ten thousand times, 100 of those flips will be Tails (in expectation), giving a selective reporter plenty of examples to point to if she wants to convince you that the coin is extremely Tails-biased.
Suppose one knows ten thousand facts about reality and they mostly point in one direction, but one's mind has anchored on a given narrative or way the world is. Selective [internal] reporting is all too easy, and one can convince oneself quite strongly of the opposite position. It is all too easy to construct narratives.
comment by hg00
· score: 16 (3 votes) · LW
Nice post. I think one thing which can be described in this framework is a kind of "distributed circular reasoning". The argument is made that "we know sharing evidence for Blue positions causes harmful effects due to Green positions A, B, and C", but the widespread acceptance of Green positions A, B, and C itself rests on the fact that evidence for Green positions is shared much more readily than evidence for Blue positions.
comment by hamnox
· score: 12 (7 votes) · LW
sigh. This post digs into why I can't watch the news without feeling frustrated.
Because even when I agree with the newscaster's overall assessment of a situation, there's just... never quite enough acknowledgement that some evidence might point a different direction than is politically convenient. That small or selective samples can even appear to point against the truth. That alternate perspectives on the facts don't come into existence solely to try to knock yours down.
comment by Viliam
· score: 11 (3 votes) · LW
I was trying to devise some strategy. For example, suppose the underlying reality has an equal number of "green" and "blue" facts, but the "green" facts are reported with probability 100%, and the "blue" facts with probability 33%. If you only read one Green newspaper, you can't find out the truth. But reading three independent Green newspapers could give you some clue -- the "green" facts are reported in all of them, the "blue" facts only in some of them. This would suggest that something is going on... even without reading any non-Green sources!
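(A quick simulation of that strategy, under the assumptions above -- 100% vs. 33% reporting probability, three independent papers; the counts and names are mine:)

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility
N_FACTS = 1_000
PAPERS = 3

def coverage(p_report):
    """For each fact, count how many of the three papers print it."""
    return [sum(random.random() < p_report for _ in range(PAPERS))
            for _ in range(N_FACTS)]

green = coverage(1.00)  # "green" facts always get printed
blue = coverage(0.33)   # "blue" facts printed a third of the time

# Green facts are synchronized across all three papers; blue facts show
# up patchily -- the lack of synchronization is the clue.
print(sum(c == PAPERS for c in green) / N_FACTS)  # 1.0
print(sum(c == PAPERS for c in blue) / N_FACTS)   # ≈ 0.33**3 ≈ 0.04
```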
But the problem is that this toy model does not reflect reality. No one reports 100% of facts in line with their narrative. Facts are many, pages are scarce, you select the important things (among those that are in line with your narrative). Like, you cover presidential elections, and mostly ignore people who save kittens. So, using my strategy above, a suspicious reader would conclude that the suppressed Blue politics are about saving kittens, because that's what the Green newspapers don't synchronize about.
Similarly, the differences between various Green newspapers could be e.g. regional. Both Washington Green and New York Green would report about presidential elections, but the former would also mention less important things happening in Washington, and the latter would mention less important things happening in New York.
On the other hand, there would be some natural synchronization about which "blue" facts are allowed to be reported. The ones that are least dangerous to the narrative, of course! Giving a voice to the selected weakest [LW · GW] Blue points (or the strawman versions thereof) could be perceived as extra virtuous, while minimizing risk that readers start taking those points seriously. And the reader who tries to be extra charitable to the other side could end up trying to defend the strawmen.
In the end, it seems I was trying to extract signal from noise. Perhaps this is impossible. Unlike in simple mathematical models, you usually don't know the filtering algorithm, and it doesn't even have to be consistent. So the suspicion (even one supported by evidence) that you are being manipulated does not help you undo the manipulation. You are likely to undershoot, you are likely to overshoot, you are likely to shoot in a completely wrong direction. You can't revert noise.
comment by Slider
· score: 1 (1 votes) · LW
In the logic where blue politics is about kittens, you need to assume that anything non-green is blue. If the yellow newspaper is spotty about dog-saving, does that make dog-saving green or blue politics?
comment by cata
· score: 9 (2 votes) · LW
Wow, this post really got me thinking. Dealing with this kind of pervasive filtering seems super important and also difficult. Thanks for writing it.
Your proposal that you can clue other people into your filtering mechanism seems hard in practice. Any effort in this vein means that you are saying that you suspect the consensus is misleading, which means that you are saying that non-consensus beliefs are more likely to be true, and people can pick up on this. I tried to come up with ways to express "this is cherry-picked, but" without triggering that, but I couldn't figure out ways that seemed plausible.
Approaches like Raemon's, where you say "I'm just never going to talk about controversial thing X" seem mentally hard to update on -- in a world where there are a million people who say "I'm never going to talk about X" and a thousand who are constantly presenting cherry-picked evidence about X, it's very difficult for my mind to interpret the filtering disclaimers as object-level evidence about X that can fight with the evidence provided by the cherry-pickers.
comment by Raemon
· score: 5 (3 votes) · LW
One outcome might be that you live in a particular rationalist filter bubble where everyone can't-say similar sets of things, or that maybe you can take the things people say in aggregate and see that they form a fairly digestible pattern.
(I suspect I'm not alone in avoiding talking about mainstream politics, and I'd suspect there's roughly four groups in my circles that are something like "avoid politics", "disproportionately lean social leftist", "disproportionately lean libertarian", and "disproportionately lean edgy and contrarian", or something kinda like that, and if you can get a sense of how many of each group there are, you can turn that into an overall heuristic.)
comment by Slider
· score: 9 (7 votes) · LW
Having the die be ambivalent between green, blue, and grey can suggest a direction where giving equal space is not succumbing to bias. But enforcing balance where it doesn't exist in the territory is bad in the relevant way. There might be good arguments about making the math easy, or that if we are talking about "explanation hogging" top candidates, then their "explanatory fit by accident" should be somewhat proportional. If reality doesn't care about you, it doesn't mean reality will be fair to you.
I also expected that in an area where a green result is expected, giving a grey result would be a failure. So the ratio would be 3 greens to 1 non-green. One of the tragedies is that on a hot-topic issue, taking a stance that multiple sides can recognise as not being in line with their narrative can trigger the wrath of all the sides.
comment by lifelonglearner
· score: 8 (3 votes) · LW
The toy example you gave seems like something that would make for a fun simulation à la Nicky Case's stuff: you could try multiple groups, different types of evidence (which support either side in varying amounts), and different coordination mechanisms.
I'll look into something this weekend. If anyone else likes doing JS development, ping me, and we can figure something out!
comment by Zack_M_Davis
· score: 3 (2 votes) · LW
This sounds fun, but unfortunately, I don't think I have time to commit to anything!—I have a lot more (prose) writing to do today and tomorrow.
comment by Optimization Process
· score: 6 (4 votes) · LW
(Strong approval for this post. Figuring out how to deal with filtered evidence is close to my heart.)
Suppose that the facts relevant to making optimal decisions about an Issue are represented by nine rolls of the Reality die, and that the quality (utility) of Society's decision is proportional to the (base-two logarithm) entropy of the distribution of what facts get heard and discussed.
Sorry-- what distribution are we measuring the entropy of? When I hear "entropy of a distribution," I think $-\sum_x p(x) \log_2 p(x)$ -- but it's not clear to me how to get from there to the $\approx 3.17$, $\approx 2.58$, and $\approx 3.03$ figures.
comment by Zack_M_Davis
· score: 7 (5 votes) · LW
You have three green slots, three gray slots, and three blue slots. You put three counters on each of the green and gray slots, and one counter on each of the blue slots. The frequencies of counters per slot are [3, 3, 3, 3, 3, 3, 1, 1, 1]. The total number of counters you put down is 3*6 + 3 = 18 + 3 = 21. To turn the frequencies into a probability distribution, you divide everything by 21, to get [1/7, 1/7, 1/7, 1/7, 1/7, 1/7, 1/21, 1/21, 1/21]. Then the entropy is $-\left(6 \cdot \frac{1}{7}\log_2\frac{1}{7} + 3 \cdot \frac{1}{21}\log_2\frac{1}{21}\right)$, which is $\frac{6}{7}\log_2 7 + \frac{1}{7}\log_2 21 \approx 3.03$. Right? (Thanks for checking—it would be really embarrassing if I got this wrong. I might edit the post later to include more steps.)
comment by Said Achmiz (SaidAchmiz)
· score: 5 (3 votes) · LW
A ‘three-sided die’ (or, more properly, a d3) is quite easy to construct, actually. Simply label a twelve-sided die such that four of the faces say ‘1’, four say ‘2’, and four say ‘3’. (Similar to this twelve-sided d4.)
comment by Thelo
· score: 2 (2 votes) · LW
Another easy option for rolling an N-sided die is an N-sided prism, like a pencil that you roll on the table, which can only come to rest on one of its N sides (and never on the tips). With 3 sides it becomes a triangular prism that doesn't quite roll as well as we'd like, but it's doable.
Yet another option is a spinning top with N faces, where you can set N to whatever you want that's >= 3.
But you're right that in practice, relabeling an existing die, like relabeling a d6 as [1,1,2,2,3,3], is probably easiest.