The "Everyone Can't Be Wrong" Prior causes AI risk denial but helped prehistoric people
post by Knight Lee (Max Lee) · 2025-01-09T05:54:43.395Z · LW · GW
The "Everyone Can't Be Wrong" Prior assumes that if everyone is working on something, that thing is important. Conversely, if nobody is working on something, that thing is unimportant.
Why
In prehistoric times, this was actually true. Tribes with good habits such as cleanliness survived and spread, while tribes with bad habits tended to die out. The environment changed so slowly that whatever habits helped a tribe survive before would usually help it survive later.
Because this was true in prehistoric times, humans evolved to assume the "Everyone Can't Be Wrong" Prior, diverging only a little from "what everyone else does" unless there is extraordinary evidence for doing so.
Key takeaway
Our beliefs do not directly conform to what we think others believe, but to what we see others do.
Priors win without extraordinary evidence
Alice was raised in a religious family that performed rituals every day for the Flying Spaghetti Monster. She won't accept that her religion is all nonsense without extraordinary evidence. Bob was raised in a world where AI risk was science fiction and no one serious worked on it or worried about it. He won't consider AI risk a high priority without extraordinary evidence.
Example: other false religions
Everyone knows the world is full of religious people praying to god(s) who don't exist. There is nothing controversial about this claim (unless you believe every religion is correct at the same time). So the question is, why are so many people so wrong?
Eliezer Yudkowsky argues [? · GW] that tribalism and motivated reasoning are one major cause. The "Everyone Can't Be Wrong" Prior may be another.
It explains why every false religion needs important rituals attended by many people. Important rituals make it hard for people to accept that the religion is wrong, because if it were all wrong, all the serious people performing and observing those rituals would become absolute clowns.
Example: Semmelweis's story
Ignaz Semmelweis was in charge of a clinic where medical students helped mothers deliver babies. The same medical students also dissected dead bodies.
Semmelweis noticed one of his colleagues cut himself with a scalpel while dissecting a dead body, and this colleague developed a severe fever and died. The death was similar to "childbirth fever," which killed up to 18% of mothers giving birth at the clinic. No one knew about bacteria back then, but Semmelweis theorized that something coming from the dead bodies was infecting mothers with "childbirth fever," and mandated that all doctors at the clinic start washing their hands.
The clinic's death rate dropped from 18% to 2%: the same level as other clinics without dead-body dissections. Unfortunately, the medical establishment ridiculed Semmelweis's theory; he eventually lost his job, was locked up in a mental hospital, and died there from an infection. The clinic reversed his hand-washing rule, and mothers started dying again.
Thinking from first principles, the prior probability that washing hands could save lives isn't that low, and a small bit of evidence would make it well worth researching. Unfortunately, under the "Everyone Can't Be Wrong" Prior, the idea that medical students were routinely killing mothers without knowing it seems absolutely absurd.
Note that this example has other explanations and doesn't prove my theory right.
General implications
When a belief can neither be proved nor disproved with current technology, people evaluate it by asking how wrong everyone would have to be if the belief were true versus if it were false.
Rational arguments aren't enough
Given that the "Everyone Can't Be Wrong" Prior is strong enough to make people believe false religions (even ones with no penalty for apostasy), it is extremely hard to defeat using rational arguments alone. The rational arguments against false religions are very strong, yet they convince only a few people, and need decades to do even that.
AI safety implications
People disbelieve AI risk for the opposite reason they believe false religions: they see no activity regarding AI risk. They don't see any serious people working on it. They only hear a few people far away shouting about it.
What do we do about this?
You decide! Comment below.