Here’s a better explanation that captures the intuition behind those responses: a middling probability doesn’t just come from balanced evidence; it also results from an absence of evidence, because in the absence of evidence we revert to our priors, which tend to be middling for events that aren’t dramatically outside the realm of what we consider possible. It’s not suspicious at all that there are millions of questions where we don’t have enough evidence to push us into certainty: understanding the world is hard and human knowledge is very incomplete. It would be suspicious if we couldn’t generate tons of questions with middling probabilities. Most of the examples in the post deal with complex systems that we can’t fully predict yet, like the weather or the economy: we wouldn’t expect to be able to generate certainty about questions dealing with those systems.
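To make the "revert to priors" point concrete, here is a small worked Bayesian illustration (my own framing, not anything from the original post): if the evidence $E$ we happen to have is equally likely whether the hypothesis $H$ is true or false, Bayes' rule leaves the prior untouched.

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}$$

When $P(E \mid H) = P(E \mid \neg H)$, the likelihood terms cancel and $P(H \mid E) = P(H)$. So a question we start out roughly 50/50 on stays roughly 50/50 no matter how many such uninformative observations we accumulate.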
I think it’s a sad and powerful Overton window demonstration that these days someone can write a paper like this without even mentioning space colonization, which is the obvious alternate endgame if you want a non-global-dictatorship solution.
I have a counter-example. I have a few people in my Facebook feed who regularly post outraged articles about Palestinians launching terror attacks or doing bad things. I also have a few people who regularly post outraged articles about Israelis killing Palestinian protestors. This theory would predict that the people posting the former would tend to be Palestinian or Muslim, while those posting the latter would tend to be Israeli or Jewish. But in fact it’s the opposite: all the articles about Palestinians doing bad things are posted by Jews, and all the articles about Israelis are posted by Muslims.
I’ve also read the book and agree that it is totally compatible with what EY is saying. I’ve also met many people who sound like founder 1.
My theory is that founder 1s exist in the wild not because of confusion about epistemology, but because most people don’t like and/or are bad at top-down reasoning from first principles. I think that if you are good at it, and try to do things in the real world, your epistemology will automatically gravitate towards being reasonable. And if you are bad at it or don’t do it, it doesn’t matter how many articles you read about how to theorize or experiment... they’ll all get compressed into various heuristics that will sometimes work and sometimes fail, and you won’t know when to use which.
So, I am skeptical about the existence of readers of this essay who don’t intuitively grasp the point already, but who can be persuaded by rational argument to adopt it.
I think the challenge of teaching the skill of top-down reasoning from first principles is an interesting one. I am not sure I have seen evidence that it is a teachable skill, to be honest, but would be interested in finding some.
Thanks! Always helpful to know what the actual term is. I did a couple minutes of googling... the one contemporary player I turned up is https://collaction.org. They seem to be playing from the modern web startup playbook (design aesthetic is Kickstarter-lite), but they don't seem to have much traction: they claim six people on their team, but no evidence of revenue or fundraising, and their website is slow and a little clunky.
Their demo projects are pretty uninspiring; they don't seem to be going after genuine collective action problems, but rather they're just trying to see if they can get 50 or so people to commit to something: for instance "Random Act of Coffee: If 50 people pledge to buy an (extra) coffee for the next person to order, we will all do it!"
If I were trying to launch something like this, I think I would take on one project at a time, and pick something inspiring and ambitious enough that it might actually go viral, rather than try to get lots of small wins that aren't really wins.
Soooo.... why doesn't someone build an app for this??
I mean, seriously. As Part 1 pointed out, we have Kickstarter, but Kickstarter only solves problems where it's obvious that directly applying cash is what is needed. As soon as it gets more complicated than that, you need to trust the person who is going to be spending the cash, and then you get back into asymmetric-information land.
Let's take the subset of problems where a) no one is afraid to be publicly affiliated with the hard-to-coordinate action, they just can't rationally take it without expecting everyone else to take it, and b) getting a sufficient number of people to agree to do it makes it rational.
A + B constrain the set of problems a lot. There are a lot of coordination problems where there are social and reputational costs to being outspoken prior to your side winning. But there are still plenty of important, hard problems where A + B apply. I think academia and the academic job market have a lot of problems in this space. For instance, academics don't seem to get penalized for publicly saying "hey, wouldn't it be great if we all stopped using p-values"; they only get penalized if they actually stop using them.
So, for this constrained subset of problems, what if there was an app that let you manage a campaign to coordinate, with increasing levels of escalation, from:
- Expressing anonymous interest and being on the ping list
- Expressing public interest
- Expressing public commitment to take a specific action if X number of people also commit
- Actually confirming (via taking a selfie or something?) that you've taken the action
The app helps you promote and manage a campaign such that people are only called upon to take action when there's credible assurance that enough other people are taking it to make it successful.
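Since someone might actually want to prototype this, here's a minimal sketch of the core assurance logic, assuming a simple in-memory model. All of the names here (Campaign, Level, pledge, activated, the example campaign) are hypothetical illustrations, not an existing product or API.

```python
from dataclasses import dataclass, field
from enum import Enum


class Level(Enum):
    """Escalation levels, mirroring the list above."""
    ANONYMOUS_INTEREST = 1
    PUBLIC_INTEREST = 2
    PUBLIC_COMMITMENT = 3
    CONFIRMED_ACTION = 4


@dataclass
class Campaign:
    name: str
    threshold: int  # number of public commitments needed before anyone is asked to act
    pledges: dict = field(default_factory=dict)  # participant -> Level

    def pledge(self, participant: str, level: Level) -> None:
        """Record a participant's level, only ever escalating, never downgrading."""
        current = self.pledges.get(participant)
        if current is None or level.value > current.value:
            self.pledges[participant] = level

    def committed(self) -> int:
        """Count participants at the public-commitment level or beyond."""
        return sum(1 for lv in self.pledges.values()
                   if lv.value >= Level.PUBLIC_COMMITMENT.value)

    def activated(self) -> bool:
        """True once enough people have publicly committed that acting is rational."""
        return self.committed() >= self.threshold


# Usage: nobody is asked to act until the assurance threshold is met.
c = Campaign("stop-using-p-values", threshold=3)
c.pledge("alice", Level.PUBLIC_COMMITMENT)
c.pledge("bob", Level.PUBLIC_INTEREST)     # interested, but not yet committed
c.pledge("carol", Level.PUBLIC_COMMITMENT)
print(c.activated())                        # False: only 2 of 3 commitments so far
c.pledge("bob", Level.PUBLIC_COMMITMENT)    # bob escalates
print(c.activated())                        # True: time to ping everyone to act
```

The design choice that matters is the one in the last lines: commitments are collected and counted, but the "go act" signal only fires once the threshold is crossed, so no individual is ever asked to stick their neck out alone.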
I don't see this as a panacea for all problems, but it certainly seems like it would knock out a large subset of the messy ones. Anyone see any reason this wouldn't be a great idea?