Priors and Prejudice

post by MathiasKB (MathiasKirkBonde) · 2024-04-22T15:00:41.782Z · LW · GW · 31 comments

Contents

  I
  II
  III
31 comments

I

Imagine an alternate version of the Effective Altruism movement, whose early influences came from socialist intellectual communities such as the Fabian Society [EA · GW], as opposed to the rationalist diaspora. Let’s name this hypothetical movement the Effective Samaritans.

Like the EA movement of today, they believe in doing as much good as possible, whatever this means. They began by evaluating existing charities, reading every RCT to find the very best ways of helping.

But many Effective Samaritans were starting to wonder. Is this randomista approach really the most prudent? After all, Scandinavia didn’t become wealthy and equitable through marginal charity. Societal transformation comes from uprooting oppressive power structures.

The Scandinavian societal model which lifted the working class and brought weekends, universal suffrage, maternity leave, education, and universal healthcare can be traced all the way back to the 1870s, when the union and social democratic movements got their start.

In many developing countries wage theft is still commonplace. When employees can’t be certain they’ll get paid what was promised in the contract they signed, and can’t trust the legal system to have their back, society settles on far fewer surplus-producing work arrangements than would be optimal.

Work to improve the capacity of the existing legal structure is fraught with risk: one risks strengthening the oppressive arms the ruling and capitalist classes use to stay in power.

A safer option may be to strengthen labour unions, which can take up these fights on behalf of their members. Being in inherent opposition to capitalist interests, unions are much less likely to be captured and co-opted. Though there is much uncertainty, unions present a promising way to increase contract enforcement and help bring about the conditions necessary for economic development, a report by Reassess Priorities concludes.

Compelled by the anti-randomista arguments, some Effective Samaritans begin donating to the ‘Developing Unions Project’, which funds unions in developing countries and does political advocacy to increase union influence.

A well-regarded economist writes a scathing criticism of Effective Samaritanism, stating that they are blinded by ideology and that there isn’t sufficient evidence to show that increases in labor power lead to increases in contract enforcement.

The article is widely discussed on the Effective Samaritan Forum. One commenter writes a highly upvoted response, arguing that absence of evidence isn’t evidence of absence. The professor is too concerned with empirical evidence, and fails to engage sufficiently with the object-level arguments for why the Developing Unions Project is promising. Additionally, why are we listening to an economics professor anyway? Economics is completely bankrupt as a science [LW · GW], resting on ridiculous, empirically false assumptions, and is filled with activists doing shoddy science to confirm their neoliberal beliefs.

 

I sometimes imagine myself trying to convince the Effective Samaritan why I’m correct to hold my current beliefs, many of which have come out of the rationalist diaspora.

I explain how I’m not fully bought into the analysis of labor historians, which credits labor unions and the Social Democratic movements with making Scandinavia uniquely wealthy, equitable, and happy. If this were a driving factor, how come the descendants of Scandinavians who migrated to the US long before these movements are doing just as well in America? Besides, even if I don’t know enough to dispute the analysis, I don’t trust labor historians to arrive at unbiased and correct conclusions in the first place.

From my perspective, labor union advocacy seems as likely to result in restrictions of market participation as to encourage it. Instead, I’m more bullish on charter cities as a way to bring institutional reform and encourage growth.

After all, many analyses of the Chinese economic miracle by economic historians credit Deng Xiaoping’s decision to open four “special economic zones” with free-market-oriented reforms as the driving factor.

But the Effective Samaritan is similarly skeptical of the historical evidence I present suggesting charter cities to be a worthwhile intervention. “Hasn’t every attempt at creating a charter city failed?” they ask.

“A real charter city hasn’t been tried!” I reply. “The closest we got was in Honduras, and it barely got off the ground before being declared illegal by the socialist government. Moreover, special economic zones jump-started the Chinese economic miracle; even if not exactly a charter city, that’s gotta count for something!”

“Real socialism hasn’t been tried either!” the Effective Samaritan quips back. “Every attempt has always been co-opted by ruling elites who used it for their own ends. The closest we’ve gotten is Scandinavia, which now has the world’s highest standards of living; even if not entirely socialist, it’s gotta count for something!”

“Don’t you find it mighty suspicious how your intervention is lacking in empirical evidence, and is held up only by theoretical arguments and the historical hand-waving of biased academics?” we both exclaim in unison.

For every logical inference I make, they make the opposite. Every thoughtful prior of mine, they consider to be baseless prejudice. My modus ponens, their modus tollens.
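
(Spelled out, for readers who haven’t met the idiom: we share the conditional and disagree only about which end of it to hold fixed.

$$P \to Q,\ P \ \vdash\ Q \ \text{(modus ponens)} \qquad\quad P \to Q,\ \neg Q \ \vdash\ \neg P \ \text{(modus tollens)}$$

I affirm the antecedent and accept the conclusion; they deny the conclusion and reject my premise.)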

It’s clear that we’re not getting anywhere. Neither one of us will change the other’s mind. We go back to funding our respective opposing charities, and the world is none the better.

 

II

In 2016 I was skipping school to compete in Starcraft tournaments. A competitive Starcraft match pits two players against each other, each playing one of the game’s three possible factions: Terran, Protoss or Zerg. To reach the level of competitive play, players opt to practice a single faction almost exclusively.

This has led to some fascinating dynamics in the Starcraft community.

At age 12 I began focusing on the Terran faction. By 16, I had racked up over ten thousand matches with it. Over thousands of matches, you get to experience every intricate and quirky detail exclusive to your faction. I would spend hours practicing my marine-splits, a maneuver only my faction was required to perform.

I experienced humiliating defeat from a thousand dirty strategies available to my opponents’ factions, each cheaper and more unfair than the last. Of course, they would claim my faction had cheap strategies too, but I knew those strategies were brittle, weak, and never worked against a sufficiently skilled player.

For as long as there have been forums for discussing Starcraft, they have been filled with complaints about the balance of the factions. Thousands of posts have been written presenting elaborate arguments and statistics, proving that the very faction the author happens to play is, in fact, the weakest. The replies are just as thorough: “Of course if you look at tournament winnings in 2011-2012, Terran is going to be overrepresented, but that is due to a few outlier players who far outperformed everyone else. If you look at the distribution of grandmaster-ranked players, Terran underperforms!”

Like politics, the discussions can get heated, and it is not uncommon to see statements like: “How typical of you to say - Zerg players are all alike, always complaining about the difficulty of creep spreading, but never admitting their armies are much easier to control!”

There’s even a conspiracy theory currently circulating that a cabal of professional Zerg players is sneakily starting debates which pit Protoss and Terran players against one another, to divert attention away from their faction’s current superiority.

Looking at it from a distance, it’s completely deranged. Why can’t anyone see the irony in the fact that everyone happens to think the very faction they play is the weakest?[1] Additionally, if they really believed it to be true, why doesn’t anybody ever switch to the faction they think is overpowered and start winning tournaments?

Moreover, the few people who do switch factions always end up admitting they were wrong. Their new faction is actually the most difficult! The few people who opt to play each match with a randomly selected faction mostly say the three factions are about equally difficult. But if there is one thing players of all three factions can agree on, it is that players who pick random are deceitful and not to be trusted.

I am aware of all these facts. It’s been almost a decade since I stopped competing, yet to this very day I remain convinced that Terran, the faction I arbitrarily chose when I was 12, was in fact the weakest faction during the era I played. Of course I recognize that the alternate version of me who picked a different faction would have thought differently, but they would have been wrong.

My priors are completely and utterly trapped. Whatever opinion I hold of myself as a noble seeker of truth, my beliefs about Starcraft prove me a moron beyond any reasonable doubt.

My early intellectual influences were rationalists and free-market-leaning economists, such as Scott Alexander and Robin Hanson. When I take a sincere look at the evidence today and try my very hardest to discern what is actually true from false, I conclude they are mostly getting things right.

But already in 7th grade, I distinctly remember staunchly defending my belief in unregulated biological modification and enhancement, much to the dismay of my teacher, who in disbelief burst out that I was completely insane.

Of all the possible intellectuals I was exposed to, surely it is suspicious that the ones whose conclusions matched my already held beliefs were the ones who stuck. But what should I have done differently? To me, their arguments seemed the most lucid and their evidence the most compelling.

Why was my very first instinct as a seventh grader to defend bioenhancement and not the opposite? Where did that initial belief come from? I couldn’t explain to you basic calculus, yet I could tell you with unfounded confidence that bioenhancement would be good for humanity.

Like my beliefs about Starcraft, it seems so arbitrary. Had my initial instinct been the opposite, maybe I would have breezed past Hanson’s contrarian nonsense to one day discover truth and beauty reading Piketty.

III

I wake up to an email, thanking me and explaining how my donation has helped launch charter cities in two developing countries. Of course getting the approvals required some dirty political maneuvering, but that is the price of getting anything done.

I think of the Effective Samaritan, who has just woken up to a similar thankful email from the Developing Unions Project. In it, they explain how their donation helped make it possible for them to open a new branch of advocacy, lobbying to shut down two charter cities whose lax regulations are abused by employers to circumvent union agreements. It will require some dirty political maneuvering to get them shut down, but the ends will justify the means.

Yet the combined efforts of our charities have added up to exactly nothing! I want to yell at the Samaritan, whose efforts have invalidated all of mine. Why are they so hellbent on tearing down all the beauty I want to create? Surely we can do better than this.

But how can I collaborate with the Effective Samaritan, who I believe has deluded themselves into thinking outright harmful interventions are the most impactful?

We both believe in doing the most good, whatever that means, and we both believe in using evidence to inform our decision making. What evidence we can trust is contentious. And of the little evidence we both trust, we draw opposite conclusions!

For us to collaborate, we need to agree on some basic principles which, when followed, produce knowledge that can fit into both our existing worldviews. We first try explicitly defining all our Bayesian priors to see where they differ. This quickly proves tedious and intractable. The only way we can find to move forward is to take priors out of the equation entirely.
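
(To see the problem in miniature, here is a toy sketch of my own, with invented numbers, not anything we actually computed: two donors apply the same conjugate update to the same ten observations, and their differing priors alone leave them on opposite sides of the donation decision.)

```python
from scipy.stats import beta

# Invented numbers: p = P(union advocacy improves contract enforcement).
rationalist_prior = (2, 8)  # Beta(2, 8): skeptical, most mass below 0.5
samaritan_prior = (8, 2)    # Beta(8, 2): optimistic, most mass above 0.5

successes, failures = 6, 4  # shared evidence: 6 of 10 pilot projects "worked"

for name, (a, b) in [("rationalist", rationalist_prior),
                     ("samaritan", samaritan_prior)]:
    posterior = beta(a + successes, b + failures)  # conjugate Beta-Binomial update
    print(f"{name}: posterior mean = {posterior.mean():.2f}")

# rationalist: 0.40, samaritan: 0.70 -- same data, opposite conclusions.
```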

Simply run experiments and accept every result as true if the probability of it occurring by random chance falls below some threshold we agree on. This will lead us terribly astray every once in a while if we are not careful, but it also enables us to run experiments whose conclusions both of us can trust.[2]

To minimize the chance of statistical noise or incorrect inference polluting our conclusions, we create experiments with randomly chosen intervention and control groups, so we are sure the intervention is causally connected to the outcome.
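
(A minimal sketch of the agreed-upon procedure, with made-up data, assuming a simple two-arm design and a threshold fixed in advance:)

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
ALPHA = 0.05  # the threshold we agreed on before running anything

# Made-up outcomes for 200 participants, randomly split into two arms.
baseline = rng.normal(loc=100, scale=15, size=200)
assignment = rng.permutation(200) < 100       # exactly 100 treated, 100 control
treated = baseline[assignment] + 5            # pretend the true effect is +5
control = baseline[~assignment]

t_stat, p_value = ttest_ind(treated, control)
verdict = "accept as true" if p_value < ALPHA else "draw no conclusion"
print(f"p = {p_value:.3f} -> {verdict}")
```

Because assignment is random, any systematic difference between the arms is attributable to the intervention, which is what lets both of us trust the verdict without agreeing on priors.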

As long as we follow these procedures exactly, we can both trust the conclusion. Others can even join in on the fun too.

Together we arrive at a set of ‘randomista’ interventions we both recognize as valuable. Even if we each have differing priors leading us to opposing preferred interventions, pooling our money together on the randomista interventions beats donating to causes which cancel each other out.

The world is some the better.

 

  1. ^

     I sometimes think about this when listening in on fervent debates over which gender has it better.

  2. ^

     I don’t think it’s a coincidence that frequentism came to dominate academia.

31 comments

Comments sorted by top scores.

comment by romeostevensit · 2024-04-23T15:25:12.805Z · LW(p) · GW(p)

Our sensible Chesterton fences

His biased priors

Their inflexible ideological commitments

In addition to epistemic priors, there are also ontological priors and teleological priors to cross-compare, each with their own problems. On top of which, people are even worse at comparing non-epistemic priors than they are at comparing epistemic priors. As such, attempts to point out that these are an issue will be seen as a battle tactic (moving the argument from a domain in which they have the upper hand, from their perspective, to unfamiliar territory in which you'll have an advantage) and will be resisted.

You may share the experience I've had that most attempts at discussion don't go anywhere. We mostly repeat our cached knowledge at each other. If two people who are earnestly trying to grok each other's positions drill down for long enough they'll get to a bit of ontology comparison, where it turns out they have different intuitions because they are using different conceptual metaphors for different moving parts of their model. But this takes so long that by the time it happens only a few bits of information get exchanged before one or both parties are too tired to continue. The workaround seems to be that if two people have a working relationship then, over time, they can accrue enough bits to get to real cruxes, and this can occasionally suggest novel research directions.

My main theory of change is therefore to find potentially productive pairings of people faster, and create the conditions under which they can speedrun getting to useful cruxes. Unfortunately, Eli Tyre tried this theory of change and reported that it mostly didn't work, after a bunch of good faith efforts from a bunch of people. I'm not sure what's next. I personally believe more progress could be made if people were trained in consciousness of abstraction (per Korzybski), but this is a sufficiently difficult ask as to fail people's priors on how much effort to spend on novel skills with unclear payoffs. And a theory of change that has a curiosity stopper that halts on "other people should do this thing that they clearly aren't going to do" is also not very useful.

comment by Garrett Baker (D0TheMath) · 2024-04-22T22:52:19.779Z · LW(p) · GW(p)

Priors are not things you can arbitrarily choose and then throw your hands up and say "oh well, I guess I just have stuck priors, and that's why I look at the data and conclude neoliberal-libertarian economics is mostly correct and socialist economics is mostly wrong". To the extent you say this, you are not actually looking at any data; you are just making up an answer that sounds good, and then when you encounter conflicting evidence, you're stating you won't change your mind because of a flaw in your reasoning (stuck priors), and that's okay, because you have a flaw in your reasoning (stuck priors). It's a circular argument!

If this is what you actually believe, you shouldn't be making donations to either charter city projects or developing union projects[1]. Because what you actually believe is that the evidence you've seen is likely under both worldviews, and if you were "using" a non-gerrymandered prior, or reasoning without your bottom line already written, you'd have little reason to prefer one over the other.

Both of the alternatives you've presented are fools who in the back of their minds know they're fools, but care more about having emotionally satisfying worldviews than about correct ones. To their credit, they have successfully double-thought their way to reasonable donation choices which would otherwise have destroyed their worldview. But they could do much better by no longer being fools.


  1. Alternatively, if you justify your donation anyway in terms of its exploration value, you should be making donations to both. ↩︎

Replies from: dr_s
comment by dr_s · 2024-04-23T07:03:15.687Z · LW(p) · GW(p)

To be fair, any beliefs you form will be informed by your previous priors. You try to evaluate evidence critically, but your critical sense was developed by previous evidence, and so on and so forth, back to the brain you came out of the womb with. Obviously, as long as your original priors were open-minded enough, you can probably reach the point of believing in anything given sufficiently strong evidence, but how strong depends on your starting point.

Replies from: cubefox
comment by cubefox · 2024-04-23T08:11:42.856Z · LW(p) · GW(p)

Though this is only what Bayesianism predicts. A different theory of induction (e.g. one that explains human intelligence, or one that describes how to build an AGI) may not have an equivalent to Bayesian priors. Differences in opinions between two agents could instead be explained by having had different experiences, beliefs being path dependent (order of updates matters), or inference being influenced by random chance.

Replies from: dr_s
comment by dr_s · 2024-04-23T08:50:46.067Z · LW(p) · GW(p)

I'm not sure how that works. Bayes' theorem, per se, is correct. I'm not talking about a level of abstraction in which I try to define decisions/beliefs as symbols, I'm talking about the bare "two different brains with different initial states, subject to the same input, will end up in different final states".

Differences in opinions between two agents could instead be explained by having had different experiences, beliefs being path dependent (order of updates matters), or inference being influenced by random chance.

All of that can be accounted for in a Bayesian framework though? Different experiences produce different posteriors of course, and as for path dependence and random chance, I think you can easily get those by introducing some kind of hidden states, describing things we don't quite know about the inner workings of the brain.

Replies from: cubefox
comment by cubefox · 2024-04-23T09:10:09.556Z · LW(p) · GW(p)

All of that can be accounted for in a Bayesian framework though?

I mean that those factors don't presuppose different priors. You could still end up with different "posteriors" even with the same "starting point".

An example for an (informal) alternative to Bayesian updating, that doesn't require subjective priors, is Inference to the Best Explanation. One could, of course, model the criteria that determine the goodness of explanations as a sort of "prior". But those criteria would be part of the hypothetical IBE algorithm, not a free variable like in Bayesian updating. One could also claim that there are no objective facts about the goodness of explanations and that IBE is invalid. But that's an open question.

Replies from: Richard_Kennaway, dr_s
comment by Richard_Kennaway · 2024-04-23T17:05:41.912Z · LW(p) · GW(p)

Whenever I've seen people invoking Inference to the Best Explanation to justify a conclusion (as opposed to philosophising about the logic of argument), they have given no reason why their preferred explanation is the Best; they have just pronounced it so. A Bayesian reasoner can (or should be able to) show their work, but the ItoBE reasoner has no work to show.

Replies from: romeostevensit, cubefox
comment by romeostevensit · 2024-04-23T21:49:37.668Z · LW(p) · GW(p)

These can often be operationalized as 'How much of the variance in the output do you predict is controlled by your proposed input?'

comment by cubefox · 2024-04-24T15:23:39.572Z · LW(p) · GW(p)

IBE arguments don't exactly work that way. The argument is usually that one person is arguing that some hypothesis H is the best available explanation for the evidence E in question, and if the other person agrees with that, it is hard for them to not also agree that H is probably true (or something like that). Most people already accept IBE as an inference rule. They wouldn't say "Yes, the existence of an external world seems to be the best available explanation for our experiences, but I still don't believe the external world exists" nor "Yes, the best available explanation for the missing cheese is that a mouse ate it, but I still don't believe a mouse ate the cheese". And if they do disagree about H being the best available explanation, they usually feel compelled to argue that some H' is a better explanation.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2024-04-24T15:46:18.021Z · LW(p) · GW(p)

What is the measure of goodness? How does one judge what is the "better" explanation? Without an account of that, what is IBE?

Replies from: cubefox
comment by cubefox · 2024-04-24T17:23:38.612Z · LW(p) · GW(p)

Without an account of that, IBE is the claim that something being the best available explanation is evidence that it is true.

That being said, we typically judge the goodness of a possible explanation by a number of explanatory virtues like simplicity, empirical fit, consistency, internal coherence, external coherence (with other theories), consilience, unification etc. To clarify and justify those virtues on other (including Bayesian) grounds is something epistemologists work on.

comment by dr_s · 2024-04-23T09:26:16.833Z · LW(p) · GW(p)

I'd definitely call any assumption about which forms preferred explanations should take as a "prior". Maybe I have a more flexible concept of what counts as Bayesian than you, in that sense? Priors don't need to be free parameters, the process has to start somewhere. But if you already have some data and then acquire some more data, obviously the previous data will still affect your conclusions.

Replies from: cubefox
comment by cubefox · 2024-04-23T09:36:16.080Z · LW(p) · GW(p)

The problem with calling parts of a learning algorithm that are not free variables a "prior" is that then anything (every part of any learning algorithm) would count as a prior. So even the Bayesian conditionalization rule itself. But that's not what Bayesians consider part of a prior.

comment by Ruby · 2024-06-09T20:28:05.058Z · LW(p) · GW(p)

Curated. While the question of Epistemic Luck [? · GW] has been discussed on LessWrong before, I like this post for the entertaining and well-written examples that I expect to stick in my mind (the other one I remember from this topic is "Why are all functional decision theorists in Berkeley and causal (?) decision theorists in Oxford?").

It's a scary topic. What I fear is that not only might you start with different priors from someone else, but you might also assign different likelihood-ratio updates to the evidence gathered, in a way that traps you all the more in your beliefs. Of course, one has the inside feeling of "rightness" about one's own views... but so do people with opposing views.
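
(A toy illustration of that fear, with invented likelihood ratios: two observers start at the same 50/50 credence, see the identical stream of evidence, and end up nearly certain of opposite conclusions purely because they disagree about what each observation is evidence for.)

```python
# Odds-form Bayesian updating with subjective likelihood ratios (invented numbers).
N_OBSERVATIONS = 10
LR_BELIEVER = 1.5     # reads each observation as 3:2 in favor of H
LR_SKEPTIC = 1 / 1.5  # reads the very same observation as 3:2 against H

def to_prob(odds):
    return odds / (1 + odds)

odds_believer = odds_skeptic = 1.0  # both start at 1:1 odds on H
for _ in range(N_OBSERVATIONS):
    odds_believer *= LR_BELIEVER
    odds_skeptic *= LR_SKEPTIC

print(f"believer: P(H) = {to_prob(odds_believer):.2f}")  # ~0.98
print(f"skeptic:  P(H) = {to_prob(odds_skeptic):.2f}")   # ~0.02
```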

The approach at the conclusion of this post doesn't seem satisfactory to me, but I also don't feel like I'm satisfied with any other approach. At best I recurse to a meta-level and say I trust my views because of the gears I have that say my epistemology is better.

Anyhow, these are some ramblings. I like this post for raising the topic again. It feels tricky, and it feels topical with the charged and high-stakes debates of 2024 that many members of the community find themselves having.

Replies from: MathiasKirkBonde
comment by MathiasKB (MathiasKirkBonde) · 2024-06-10T11:34:58.398Z · LW(p) · GW(p)

I agree the conclusion isn't great!

Not so surprisingly, many people read the last section as an endorsement of some version of "RCTism", but it's not actually a view I endorse myself.

What I really wanted to get at in this post was just how pervasive priors are, and how difficult it is to see past them.

comment by cousin_it · 2024-04-23T07:32:51.108Z · LW(p) · GW(p)

Yeah, the trapped priors thing is pretty worrying to me too. But I'm confused about the opposing interventions thing. Do charter cities, or labor unions, rely on donations that much? Is it really so common for donations to cancel each other out? I guess advocacy donations (for example, pro-life vs pro-choice) do cancel each other out, so maybe we could all agree that advocacy isn't charity.

comment by Amadeus Pagel (amadeus-pagel) · 2024-04-23T17:49:43.157Z · LW(p) · GW(p)

I think charter cities are a questionable idea, even though I'm pro free markets. It seems that the sort of constitutional change and stability required for a charter city is no easier to achieve than the kind of constitutional change and stability required for a free market in the entire country. I don't think trying either in developing countries as an outsider is a good use of anyone's resources.

comment by M Ls (m-ls) · 2024-06-09T22:58:41.926Z · LW(p) · GW(p)

[All logic is a prior.]

The anthropologist Mary Douglas covers this meta-view, which you have described more naively here via some great biographical gaming history.

Mary Douglas argues for cultural/personal choices in which perceptions of risk (to nature, to society) inform frameworks of action/agency. I would also argue that these choices when iterated in both economic messaging (charity/consumption/display) and in conversational argument (meetings/meals/water-cooler/parliament) create the world as we know it.

https://en.wikipedia.org/wiki/Mary_Douglas

I came to her through Thought Styles: Critical Essays on Good Taste (1996) in about 2000.

Each choice by each of us is not an aggregate in this 'structuralism'; I would prefer to describe it as a pool of negotiating compositional movement at the edge of chaos & order. The big game here is complexity and survival. (It's structural like a language, not like a chassis for a truck.)

It's a _lek_ we create by cooperating in order to compete on the same field (cities are mega-leks). Otherwise you will not enjoy winning Starcraft, and you will not enjoy complaining about what a rough deal you are getting. There is no game. There is no game in town. There is no town.

https://en.wikipedia.org/wiki/Lek_mating

Complaining is a type of (meetings/meals/water-cooler/parliament). It's part of the fun. It is where we build society, that extended phenotype that is the world. (Ideologues and narcissists always try to take the fun out of it and make it about themselves, the only thing they can perceive. Why? Because for narcs & psychopaths self = world; other people existing are a threat to that identity.)

https://en.wikipedia.org/wiki/The_Extended_Phenotype

Neither the game-play nor the complaining, then, is dangerous or threatening, as it builds the world around us, but it does require a sense of safety in the lek. Narcissists and ideologue-emotional equivalents do not care about the safety, they feel none, and will take you down with them. (These are often called death-cults.)

Further, some individuals are threatened by the lek safety itself, the safe place in which we conduct society/economy/arguments/games/marriages/buyer's regret. While it is not always a conscious choice (they just say and do stuff to suit the moment of 'narcissistic supply'), it will generally trend toward pushing complexity away from the stable attractors of progress and safety, towards the chaos of war and coercive control.

They will do this in all organisations and labels and cultures and religions/cults. Your job is to police them in your own group, the Terrans, not to point them out in the other groups. That way lies useless, ineffective paranoia and conspiracy-creating conspiracy theorising (as in the Starcraft example you give). Log in your own eye.

What types of individuals? Covert narcissists (50/50 split male/female) and grandiose narcissists (80/20 male/female), with the latter holding the subset of psychopaths. (All psychopaths are narcissists.)

They will engage in choices that subsume the safety of the lek into their own "godhood". They never feel safe unless everyone else is a loser, and is seen to be one (as loyalist dupe or dead). They are individuals who make choices continually through the day without regard to the safety of the world/lek.

Some organisations/segments have been captured by these parasites, which then go on to create laws that curiously favour the same behaviours. A healthy society knows how to police them.

Economics in any form (Fabian, effective arseholes, Marxist, Weberian, Austrian, Hayekian), while based on the choices made by individuals or their collectives, takes no notice of the variety or complexity of psychological types. And while the rational agent/actor has been criticised to death as a simplistic framework, there has been little work done on rationally exploring the diversity of human choice, its 'structure', and why we fail to police the narcissists in our midst, who are parasites on the safety of the lek created by all of us, each and collectively both. It is our job to police them.

And no it is not a witch hunt. Witches do not exist.

comment by Daphne_W · 2024-06-21T12:28:15.077Z · LW(p) · GW(p)

I kind of... hard disagree?

Effective Samaritans can't be a perfect utility inverse of Effective Altruists while keeping the labels of 'human', 'rational', or 'sane'. Socialism isn't the logical inverse of Libertarianism; both are different viewpoints on how to achieve the common goal of societal prosperity.

Effective Samaritans won't sabotage an EA social experiment any more than Effective Altruists will sabotage an Effective Samaritan social experiment. If I received a letter from GiveWell thanking me for my donation that was spent on sabotaging a socialist commune, I would be very confused; that's what the CIA is for. I frankly don't expect either the next charter city or the next socialist commune to produce a flourishing society, but I do expect both to give valuable information that would allow both movements, and society in general, to improve their world models.

Also, our priors are not entirely trapped. It can seem that way because true political conversion rarely happens in public, and often not even consciously [LW(p) · GW(p)], but people join or leave movements regularly when their internal threshold is passed. Effective Altruist/Samaritan forums will always defend Effective Altruism/Samaritanism as long as there is one EA/ES on earth, but if evidence (or social condemnation) builds up against it, people whose thresholds are reached will just leave. Of course the movement as a whole can also update, but not in the same way as individual members.

People do tend to be resistant to evidence that goes against their (political) beliefs, but that resistance gets easier to surmount the fewer people there are who are less fanatical about the movement than you. Active rationalist practices like doubt [? · GW] can also help get your priors unstuck.

So in a world with EAs and ESs living side by side, there would constantly be people switching from one movement to the other. And as either ES or EA projects get eaten by a grue, one of these rates will be greater than the other, until one or both movements have too few supporters to do much of anything. This may have already happened (at least for the precise "Effective Samaritan" ideology of rationalism/bayesianism + socialism + individual effective giving).

So I don't think we need to resort to frequentism. Sure we can use frequentism to share the same scientific journal, but in terms of cooperation we can just all run our own experiments, share the data, and regularly doubt ourselves.

comment by rotatingpaguro · 2024-06-12T11:02:26.005Z · LW(p) · GW(p)

Simply run experiments and accept every result as true if the probability of it occurring by random chance falls below some threshold we agree on. This will lead us terribly astray every once in a while if we are not careful, but it also enables us to run experiments whose conclusions both of us can trust.[2]

To minimize the chance of statistical noise or incorrect inference polluting our conclusions, we create experiments with randomly chosen intervention and control groups, so we are sure the intervention is causally connected to the outcome.

As long as we follow these procedures exactly, we can both trust the conclusion. Others can even join in on the fun too.

Together we arrive at a set of ‘randomista’ interventions we both recognize as valuable. Even if we each have differing priors leading us to opposing preferred interventions, pooling our money together on the randomista interventions beats donating to causes which cancel each other out.

The world is some the better.

Problems I see here:

  1. In an RCT you can get a causal effect measure rather than just a correlation, but you can't prove whether it generalizes. Example: if you pick some people and flip a coin for each to decide whether to give them a blerg or a sblarg, and then find out that blergs have a positive effect on self-confidence, the weak spot in your inference is the way you picked the initial set of people within whom you did the randomization. The debate moves from "you can't prove it's a causal effect!" to "your set of people were younger than normal", "the set of people is from a first-world country", "people changed too much from when you did the experiment to right now", etc.
  2. Considering only significance and not power (in other words, only checking the probability something would happen "by chance", rather than the probability it would happen under specific meaningful alternative hypotheses) is limiting, and can be misleading enough to be a problem in practice; the simulation after this list illustrates the gap. Read Andrew Gelman for this stuff.
  3. By Wald's complete class theorems, whatever you do with kosher frequentist statistics that you think is good is equivalent to some specific prior used by a Bayesian guy doing Bayesian things. So if you at any point think "but frequentist", you should also have a caveat like "considering that humans won't actually use statistics kosher, I can see a pretty precise story for how doing nonsense in this specific way will produce the correct behavior in the end", which incidentally is reasoning I would not trust people in general to pull off.
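
To make point 2 concrete, a quick simulation (numbers invented for illustration): with a true effect of 0.2 standard deviations and 50 subjects per arm, a 5% threshold detects the effect less than a fifth of the time, so most true effects go unnoticed, and the ones that do clear the bar tend to be overestimates.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
ALPHA, N_PER_ARM, TRUE_EFFECT = 0.05, 50, 0.2  # illustrative values
n_trials = 2000

rejections = 0
for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, N_PER_ARM)
    treated = rng.normal(TRUE_EFFECT, 1.0, N_PER_ARM)
    if ttest_ind(treated, control).pvalue < ALPHA:
        rejections += 1

print(f"power ~= {rejections / n_trials:.2f}")  # around 0.17 for these numbers
```
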
comment by Lyrongolem (david-xiao) · 2024-06-12T01:01:02.976Z · LW(p) · GW(p)

Excellent post! I found the Starcraft section fairly amusing. Though, I am curious: doesn't your Starcraft analogy solve the issue with trapped priors?

Like you said, players who played all three factions mostly agree that the factions tend to be roughly similar in difficulty. However, to play all 3 factions you must arbitrarily start off playing one faction. If such people had their priors completely trapped, then they wouldn't be able to change their mind after the first game, which clearly isn't true.

I feel like even if two people disagree in theory, they tend to agree in practice once they have experience with every viewpoint and can point to concrete examples. (For instance, the EA and the Effective Samaritan likely both agree that Denmark-style social democracy is generally good while Maoist Communism is generally bad, even if they disagree on what socialism is or whether it's good or not!)

Clearly the rationalist strategy, then, is not to immediately assume you're right (which the evidence doesn't support) but to run an experiment and figure out who's right. Notably, you shouldn't be using underhanded tactics!

I wake up to an email, thanking me and explaining how my donation has helped launch charter cities in two developing countries. Of course getting the approvals required some dirty political maneuvering, but that is the price of getting anything done.

I think of the Effective Samaritan, who has just woken up to a similar thankful email from the Developing Unions Project. In it, they explain how their donation helped make it possible for them to open a new branch of advocacy, lobbying to shut down two charter cities whose lax regulations are abused by employers to circumvent union agreements. It will require some dirty political maneuvering to get them shut down, but the ends will justify the means.

Like, this seems pretty clearly like a prisoner's dilemma, doesn't it? You have concluded 'the benefits will exceed the costs' without being able to convince a reasonable opponent of the same using empirical evidence, and you went ahead and caused tangible harm anyway. Meanwhile the Effective Samaritan used similar tactics to end your experiment before it bore fruit. Lose-lose. You were both better off agreeing that underhanded tactics are bad, and proceeding accordingly.

Why not just decide not to fight each other? He starts unions in one developing country and you build a charter city in another. If one strategy is clearly better (which you both seem to insist on), then clearly the winning choice is to stop. There's no need for randomization or compromise, just moderation. You don't need to actively undermine each other's efforts if you expect the results to speak for themselves. Somewhere between your worldviews there is a truth about reality. You just need to find it.

As long as you recognize potential biases and are willing to experiment, wouldn't you eventually arrive at the correct conclusions? Why bemoan the priors? They don't actually affect reality.

comment by Ben Livengood (ben-livengood) · 2024-06-10T16:10:58.633Z · LW(p) · GW(p)

There's a hint in the StarCraft description of another factor that may be at play: the environment in which people are most ideologically, culturally, socially, and intellectually aligned may be the best environment for them, a la winning more matches as their usual StarCraft faction than when switching to a perceived-stronger one.

Similarly people's econopolitical ideologies may predict their success and satisfaction in a particular economic style because the environment is more aligned with their experience, motivation, and incentives.

If true, that would suggest that no single best economic principle exists for human flourishing except, perhaps, free movement (and perhaps not free trade).

comment by Review Bot · 2024-06-10T07:02:51.929Z · LW(p) · GW(p)

The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?

comment by Nicolas Villarreal (nicolas-villarreal) · 2024-06-09T22:30:40.049Z · LW(p) · GW(p)

I don't think your dialectical reversion back to randomista logic makes much sense, considering we can't exactly do randomized controlled trials to answer any of the major questions of the social sciences. If you want to promote social science research, I think the best thing you could do is collect consistent statistics over long periods of time. You can learn a lot about modern societies just by learning how national accounts work and looking back at them in many different ways. Alternatively, building agent-based simulations allows you to test, in flexible ways, how different types of behavior, both heterogeneous and homogeneous, might affect macroscopic social outcomes. These are the techniques that I use, and they've proven very helpful.
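
To give a flavor of what I mean by the latter (a toy sketch with invented parameters, not one of my actual models): a kinetic wealth-exchange simulation in which a single heterogeneous behavioral parameter, each agent's saving propensity, shapes the macroscopic outcome, here measured as a Gini coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_rounds = 1000, 50_000
wealth = np.ones(n_agents)
saving_rate = rng.uniform(0.0, 0.9, n_agents)  # heterogeneous behavior

for _ in range(n_rounds):
    i, j = rng.choice(n_agents, size=2, replace=False)  # two random traders
    pot = wealth[i] * (1 - saving_rate[i]) + wealth[j] * (1 - saving_rate[j])
    share = rng.random()  # the pooled amount is split randomly
    wealth[i] = wealth[i] * saving_rate[i] + share * pot
    wealth[j] = wealth[j] * saving_rate[j] + (1 - share) * pot

# Macroscopic outcome: Gini coefficient of the emergent distribution.
w = np.sort(wealth)
n = len(w)
gini = (2 * np.arange(1, n + 1) - n - 1) @ w / (n * w.sum())
print(f"Gini of emergent wealth distribution: {gini:.2f}")
```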

If there's one other thing you're missing, it is this: epistemology isn't something you can rely on others for, even by trying to triangulate between different viewpoints. You always have to do your own epistemology, because every way of knowing you encounter in society is part of someone's ideological framework trying to adversarially draw you into it.

comment by Cornelius Dybdahl (Kalciphoz) · 2024-06-12T14:35:38.948Z · LW(p) · GW(p)

Imagine an alternate version of the Effective Altruism movement, whose early influences came from socialist intellectual communities such as the Fabian Society [EA · GW], as opposed to the rationalist diaspora.

That's a lot closer to the truth than you might think. There are plenty of lines going from the Fabian Society (and from Trotsky, for that matter) into the rationalist diaspora. On the other hand, there is very little influence from eg. Henry Regnery or Oswald Spengler.

“A real charter city hasn’t been tried!” I reply.

Lee Kuan Yew's Singapore is close enough, surely.

“Real socialism hasn’t been tried either!” the Effective Samaritan quips back. “Every attempt has always been co-opted by ruling elites who used it for their own ends. The closest we’ve gotten is Scandinavia, which now has the world’s highest standards of living; even if not entirely socialist, it’s gotta count for something!”

This argument sounds a lot more Trotskyist than Fabian to me, but it is worth noting that said ruling elites have both been nominally socialist and been widely supported by socialists throughout the world. The same cannot be said in the case of charter cities and their socialist oppositions.

For every logical inference I make, they make the opposite. Every thoughtful prior of mine, they consider to be baseless prejudice. My modus ponens, their modus tollens.

Because your priors are baseless prejudices. The Whig infighting between liberals and socialists is one of many cases where both sides are awful and each side is almost exactly right about the other side. Your example about StarCraft shows that you are prone to using baseless prejudices as your priors, and other parts of your post show that you are indeed doing the very same thing when it comes to politics.

Of all the possible intellectuals I was exposed to, surely it is suspicious that the ones whose conclusions matched my already held beliefs were the ones who stuck.

Your evaluation of both, as well as your selection of opposition (Whig opposition in the form of socialism, rather than Tory opposition in the form of eg. paleoconservatism), shows that your priors on this point are basically theological, or more precisely, eschatological. You implicitly see history as progressing along a course of growing wisdom, increasing emancipation, and widening empathy (Peter Singer's Ever-Expanding Circle). It is simply a residue from your Christian culture. The socialist is also a Christian at heart, but being of a somewhat more dramatic disposition, he doesn't think of history as a steady upwards march to greater insight, but as a series of dramatic conflicts that resolve with the good guys winning.

(unless of course he is a Trotskyist, in which case we are perpetually at a turning point where history could go either way; towards communism or towards fascism)

Yet the combined efforts of our charities have added up to exactly nothing! I want to yell at the Samaritan, whose efforts have invalidated all of mine. Why are they so hellbent on tearing down all the beauty I want to create? Surely we can do better than this.

Sure, I can tell you how to do better: focus your efforts on improving institutions and societies that you are close to and very knowledgeable about. You can do a much better job here, and the resultant proliferation of healthy institutions will, as a pleasant side effect, spread much more prosperity in the third world than effective altruism ever will.

This is the position taken by sensible people (eg. paleocons), and notably not by revolutionaries and utopian technocrats. This is fortunate because it gives the latter a local handicap and enables good, judicious people to achieve at least some success in creating sound institutions and propagating genuine wisdom. This fundamental asymmetry is the reason why there is any functional infrastructure left anywhere, despite the utopian factions far outnumbering the realists.

We both believe in doing the most good, whatever that means, and we both believe in using evidence to inform our decision making.

No, you actually don't. If your intentions really were that good, they would lead you naturally into the right conclusions, but as Robin Hanson has pointed out, even Effective Altruism is still ultimately about virtue signalling, though perhaps directed at yourself. Sorta like HJPEV's desperate effort to be a good person after the sorting hat's warning to him. This is a case of Effective Altruists being mistaken about what their own driving motives actually are.

For us to collaborate, we need to agree on some basic principles which, when followed, produce knowledge that can fit into both our existing worldviews.

The correct principle is this: fix things locally (where it is easier and where you can better track the actual results) before you decide to take over the world. There are a lot of local things that need fixing. This way, if your philosophy works, your own community, nation, etc. will flourish, and if it doesn't work, it will fall apart. Interestingly, most EA's are a lot more risk averse when it comes to their own backyard than when it comes to some random country in Africa.

To minimize the chance of statistical noise or incorrect inference polluting our conclusions, we create experiments with randomly chosen intervention and control groups, so we are sure the intervention is causally connected to the outcome.

This precludes a priori any plans that involve looking far ahead, reacting judiciously to circumstances as they arise, or creating institutions that people self-select into. In the latter case, using comparable geographical areas would introduce a whole host of confounders, but having both the intervention and control groups be in an overlapping area would change the nature of the experiment, because the structure of the social networks that result would be quite different. Basically, the statistical method you propose has technocratic policymaking built into its assumptions, and so it is not surprising that it will wind up favouring liberal technocracy. You have simply found another way of using a baseless prejudice as your prior.

But this is the most telling paragraph:

Like my beliefs about Starcraft, it seems so arbitrary. Had my initial instinct been the opposite, maybe I would have breezed past Hanson’s contrarian nonsense to one day discover truth and beauty reading Piketty.

Read both. The marginal clarity you will get from immersing yourself still deeper in your native canon is enormously overshadowed by the clarity you can get from familiarising yourself with more canons. Of course, Piketty is really just another branch of the same canon, with Piketty and Hanson being practically cousins, intellectually. Compare Friedrich List, to see the point.

My initial instinct was social democracy. Later I became a communist, then, after exposure to LessWrong, I became a libertarian. Now I'm a monarchist, and it occurs to me in hindsight that social democracy, communism, and libertarianism are all profoundly Protestant ideologies, and what I thought was me being widely read was actually still me being narrowminded and parochial.

Replies from: CronoDAS
comment by CronoDAS · 2024-06-12T17:21:51.067Z · LW(p) · GW(p)

I disagree that it's easier and/or more effective to try to improve local conditions; diminishing marginal utility is a real thing.

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-06-16T20:05:38.796Z · LW(p) · GW(p)

That does not even come close to cancelling out the reduced ability to get a detailed view of the impact, let alone the much less honest motivations behind such giving. 

And lives are not of equal value. Even if you think they have equal innate value, surely you can recognise that a comparatively shorter third-world life with worse prospects for intellectual and artistic development and greater likelihood of abject poverty is much less valuable (even if only due to circumstances) than the lives of people you are surrounded with, and surely you will also recognise that it is the latter that form the basis for your intuitions about the value of life.

By giving your "charity" (actually, the word "charity" stems from Latin caritas, meaning care, as in giving to people you care about, whereas "altruism" is cognate with alter, meaning basically otherism, and in practice meaning giving to people you don't care about) to less worthwhile recipients, you are behaving in an anti-meritocratic way and cheapening your act of giving.

Moreover, people obviously don't have equal innate value, and there is a distinct correlation between earning potential and being a utility monster, which at least partially cancels out the effect of diminishing marginal utility.

And the whole reason people care so much about morality is that the moral virtues and shortcomings of your friends and associates are going to have a huge impact on your life. If you're redirecting the virtue by giving money to random foreigners, you are basically defaulting on the debt to your friends. One of your closest friends could wind up in deep trouble and need as much help as he can possibly get. He will need virtuous friends he can rely on to help him, and any money you have given to some third-worlders you will never meet is money you cannot give to a friend in need. Therefore, any giving to Effective Altruism is inherently unjust and disloyal. By all means, be charitable and give what you can. But not to strangers.

Replies from: CronoDAS
comment by CronoDAS · 2024-06-17T16:38:33.123Z · LW(p) · GW(p)

This sounds a lot like Ayn Randian selfishness but applied to the level of a friend group rather than an individual. "Potential obligations to friends and one's self are more important than the present suffering of strangers" is a consistent point of view that I rarely see eloquent arguments for, but it's certainly not one I agree with.

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-06-18T12:23:31.768Z · LW(p) · GW(p)

It is part Ayn Rand, part Curtis Yarvin. Ultimately it all comes from Thomas Carlyle anyway.

And there is no need to limit yourself to potential obligations. Unless you have an exceedingly blessed life, there should be no shortage of friends and loved ones in need of help.

Replies from: CronoDAS
comment by CronoDAS · 2024-06-21T15:44:09.895Z · LW(p) · GW(p)

These days, "a shortage of friends and loved ones" in general is not as uncommon as one might hope. :/