Sufficiently Advanced Sanity
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-20T18:11:17.677Z · LW · GW
Reply to: Shalmanese's Third Law
From an unpublished story confronting Vinge's Law, written in 2004, abstracted a bit:
"If you met someone who was substantially saner than yourself, how would you know?"
"The obvious mistake that sounds like deep wisdom is claiming that sanity looks like insanity to the insane. I would expect to discover sanity that struck me as wonderfully and surprisingly sane, sanity that shocked me but that I could verify on deeper examination, sanity that sounded wrong but that I could not actually prove to be wrong, and sanity that seemed completely bizarre."
"Like a history of 20th-century science, presented to a scientist from 1900. Much of the future history would sound insane, and easy to argue against. It would take a careful mind to realize none of it was more inconsistent with present knowledge than the scientific history of the 19th century with the knowledge of 1800. Someone who wished to dismiss the whole affair as crackpot would find a thousand excuses ready to hand, plenty of statements that sounded obviously wrong. Yet no crackpot could possibly fake the parts that were obviously right. That is what it is like to meet someone saner. They are not infallible, are not future histories of anything. But no one could counterfeit the wonderfully and surprisingly sane parts; they would need to be that sane themselves."
Spot the Bayesian problem, anyone? It's obvious to me today, but not to the me of 2004. Eliezer2004 would have seen the structure of the Bayesian problem the moment I pointed it out to him, but he might not have assigned it the same importance I would without a lot of other background.
25 comments
Comments sorted by top scores.
comment by Richard_Kennaway · 2009-12-20T21:43:31.954Z · LW(p) · GW(p)
I find it a convincing description of what someone saner than oneself looks like, i.e. P(Looks Sane|Sane) is high. However, there may be many possible entities which Look Sane but are not. For example, a flawed or malicious genius who produces genuine gems mixed with actual crackpottery or poisoned, seemingly-sane ideas beyond one's capacity to detect. If many more things Look Sane than are Sane, then you get turned into paperclips for neglecting to consider P(Sane|Looks Sane).
Given the history of your thinking, I guess that in the story the entity in question is an AI, and the speaker is arguing why he is sure that the AI is super-wise? Hence the importance you now attach to the matter.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-20T23:53:25.773Z · LW(p) · GW(p)
And that was the Bayesian flaw, though no, the story wasn't about AI.
The probability that you see some amazingly sane things mixed with some apparently crazy things, given that the speaker is much saner (and honest), is not the same as the probability that you're dealing with a much saner and honest speaker, given that you see a mix of some surprising and amazing sane things with apparently crazy things.
For example, someone could grab some of the material from LW, use it without attribution, and mix it with random craziness.
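(A minimal numeric sketch of that asymmetry, with invented numbers purely for illustration: even when P(Looks Sane | Sane) is high, the posterior P(Sane | Looks Sane) stays small if genuinely saner speakers are rare and counterfeit mixes of gems and craziness are common.)

```python
# Hypothetical numbers, purely for illustration.
p_sane = 0.001                   # prior: a randomly encountered speaker is genuinely much saner
p_looks_sane_given_sane = 0.9    # a saner speaker usually produces the "amazing gems + apparent craziness" mix
p_looks_sane_given_not = 0.02    # plagiarists, lucky crackpots, etc. can also produce that mix

# P(Looks Sane) by the law of total probability
p_looks_sane = (p_looks_sane_given_sane * p_sane
                + p_looks_sane_given_not * (1 - p_sane))

# Bayes' theorem: P(Sane | Looks Sane)
posterior = p_looks_sane_given_sane * p_sane / p_looks_sane
print(f"P(Sane | Looks Sane) = {posterior:.3f}")  # ~0.043, despite P(Looks Sane | Sane) = 0.9
```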
↑ comment by MichaelVassar · 2009-12-22T07:44:48.133Z · LW(p) · GW(p)
Mencius Moldbug is a great current example, IMHO, of someone who is not sane but says some surprising and amazingly sane things.
↑ comment by Nick_Tarleton · 2009-12-23T05:26:58.436Z · LW(p) · GW(p)
But
But no one could counterfeit the wonderfully and surprisingly sane parts; they would need to be that sane themselves.
is an assertion that P(Sane|Looks Sane) ~ 1, so it seems that this isn't a Bayesian flaw per se.
↑ comment by PhilGoetz · 2009-12-23T05:12:50.731Z · LW(p) · GW(p)
P(sane things plus crazy things | speaker is saner) P(speaker is saner) = P(speaker is saner | sane things plus crazy things) P(sane things plus crazy things)
The fact that P(sane things plus crazy things | speaker is saner) ≠ P(speaker is saner | sane things plus crazy things) isn't a problem, if you deal with your priors correctly.
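(Spelling that out in the same notation: dividing both sides by P(sane things plus crazy things) gives
P(speaker is saner | sane things plus crazy things) = P(sane things plus crazy things | speaker is saner) * P(speaker is saner) / P(sane things plus crazy things),
so a very low prior P(speaker is saner) keeps the posterior small unless the observed mix is much harder to produce without actually being that sane.)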
I think I misinterpreted your original question as meaning "Why is this problem fundamentally difficult even for Bayesians?", when it was actually, "What's wrong with the reasoning used by the speaker in addressing this problem?"
comment by Vladimir_Nesov · 2009-12-20T20:21:22.970Z · LW(p) · GW(p)
One problem is the assumption that being right and novel on some things implies being consistently right/sane. An important feature that separates "insanity" and stupidity is that "insanity" doesn't preclude domain-specific brilliance. Certainly a person being unusually right on some things is evidence for them being consistently right on others, but not overwhelmingly strong evidence.
comment by cousin_it · 2009-12-20T21:51:36.509Z · LW(p) · GW(p)
Spot the Bayesian problem, anyone?
Sure. If some parts of the message contain novel correct insight but the rest is incomprehensible, the simple hypothesis that the whole message is correct gets likelihood-privileged over other similarly simple hypotheses.
But I don't think you're talking about the stuff Shalmanese wants to talk about. :-) "Wisdom" isn't knowledge of facts; wisdom is possession of good heuristics. A good heuristic may be easy to apply, but disproportionately hard to prove/justify to someone who hasn't amassed enough experience - which includes younger versions of ourselves. I've certainly adopted a number of behavioral heuristics that younger versions of me would've labeled as obviously wrong. For some of them I can't offer any justification even now, beyond "this works".
comment by PhilGoetz · 2009-12-20T23:34:43.321Z · LW(p) · GW(p)
This is the shock-level problem. If you let T1, T2, ... be the competing theories, and O be the observations, and you choose Ti by maximizing P(Ti | O), and you do this by choosing Ti that maximizes P(O | Ti) * P(Ti),
... then P(O | Ti) can be at most 1; but P(Ti), the prior you assign to theory i, can be arbitrarily low.
In theory, this should be OK. In practice, P(O | Ti) is always near zero, because no theory accounts for all of the observations, and because any particular series of observations is extremely unlikely. Our poor little brains have an underflow error. So in place of P(O | Ti) we put an approximation that is scaled so that P(O | T0), where T0 is our current theory, is pretty large. Given that restriction, there's no way for P(O | Ti) to be large enough to overcome the low prior P(Ti).
This means that there's a maximum degree of dissimilarity between Ti and your current theory T0, beyond which the prior you assign Ti will be so low that you should dismiss it out of hand. "Truth" may lie farther away than that from T0.
(I don't think anyone really thinks this way; so the observed shock level problem must have a non-Bayesian explanation. But one key point, of rescaling priors so your current beliefs look reasonable, may be the same.)
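(A small sketch of that rescaling failure, with invented log-likelihoods and priors: comparing theories in log space avoids the underflow, but if you instead rescale likelihoods so that the current theory T0 looks reasonable and cap them at 1, a distant theory's low prior can never be overcome.)

```python
import math

# Invented numbers, purely for illustration.
# log P(O | T): no theory explains every observation, so both are tiny in
# absolute terms, but T1 fits the data far better than the current theory T0.
log_lik = {"T0": -1000.0, "T1": -900.0}
# log P(T): T1 is very dissimilar to current belief, so it gets a very low prior.
log_prior = {"T0": math.log(0.99), "T1": math.log(1e-30)}

# Correct comparison: add log-likelihood and log-prior, so nothing underflows.
log_post = {t: log_lik[t] + log_prior[t] for t in log_lik}
print(max(log_post, key=log_post.get))  # T1: 100 nats of extra fit swamps the 1e-30 prior

# The rescaling heuristic: shift likelihoods so the current theory T0 looks
# "pretty large" (here 0.5) and cap everything at 1, as a bounded brain might.
rescaled_lik = {t: min(1.0, 0.5 * math.exp(log_lik[t] - log_lik["T0"])) for t in log_lik}
rescaled_post = {t: rescaled_lik[t] * math.exp(log_prior[t]) for t in log_lik}
print(max(rescaled_post, key=rescaled_post.get))  # T0: the capped likelihood cannot beat the low prior
```

The second comparison reproduces the dismissal-out-of-hand described above; the first shows that the straight Bayesian calculation has no such ceiling.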
So you need to examine the potentially saner theory a piece at a time. If there's no way to break the new theory up into independent parts, you may be out of luck.
Consider society transitioning from Catholicism in 1200 AD to rationalist materialism. It would have been practically impossible for 1200 AD Catholics to take the better theory one piece at a time and verify it, even if they'd been Bayesians. Even a single key idea of materialism would have shattered their entire worldview. The transition was made only through the noise of the Protestant Reformation, which did not move directly towards the eventual goal, but sideways, in a way that fractured Europe's religious power structure and shook it out of a local minimum.
comment by Thomas · 2009-12-22T12:08:07.329Z · LW(p) · GW(p)
The karma score for this much saner person on LW would be low, wouldn't it? He wouldn't be able to post.
↑ comment by wedrifid · 2009-12-22T13:31:22.880Z · LW(p) · GW(p)
The karma score for this much saner person on LW would be low, wouldn't it? He wouldn't be able to post.
It would reach posting range within a day. Thereafter he would have to be somewhat careful with how he presents his contrarian knowledge and be sure to keep a steady stream of posts on topics that are considered insightful by this audience. If karma balance is still a problem he can start browsing quote encyclopaedias.
↑ comment by Thomas · 2009-12-22T17:04:34.185Z · LW(p) · GW(p)
Assuming he was that eager to post at all. My point was mainly that one doesn't want to listen to or read a much saner person. One downvotes this person's comments on LW, if you like.
↑ comment by MrHen · 2009-12-22T20:44:58.570Z · LW(p) · GW(p)
I don't understand why I wouldn't want to listen to a saner person. I thought that was the whole reason I was even here. What am I missing?
comment by Shalmanese · 2009-12-20T18:46:46.106Z · LW(p) · GW(p)
I think the difference here is that science is still operating under the same conceptual framework as it was 100 years ago. As a result, scientists from different eras can put themselves into each other's heads and come to mutual agreement.
Sufficiently advanced wisdom, to me, has always been a challenge to the very framing of the problem itself.
comment by HalFinney · 2009-12-23T21:14:14.725Z · LW(p) · GW(p)
A bit OT, but it makes me wonder whether the scientific discoveries of the 21st century are likely to appear similarly insane to a scientist of today? Or would some be so bold as to claim that we have crossed a threshold of knowledge and/or immunity to science shock, and there are no surprises lurking out there bad enough to make us suspect insanity?
comment by timtyler · 2009-12-20T19:39:08.008Z · LW(p) · GW(p)
I am not quite sure what you mean - but usually, if you have the ability to recognise whether a system has some property P, then you can conduct a search through the space of systems for ones that you recognise as having property P.
An exhaustive search may be laborious and slow sometimes, but often you can use optimisation strategies to speed things up.
Here, P would be: "future history elements that appear to be obviously right to someone from 1900".
comment by Benquo · 2009-12-20T19:31:15.577Z · LW(p) · GW(p)
"Spot the Bayesian problem, anyone?"
Hmm, would this be that you need priors for both the relative frequency of people saner than you, and the relative frequency of monkey-at-typewriter random apparent sanity, before you know whether this is evidence of sanity or insanity?
↑ comment by Vladimir_Nesov · 2009-12-20T20:27:32.967Z · LW(p) · GW(p)
Priors are not up for grabs -- you can't "require" priors, and you can't change them.
↑ comment by Vladimir_Nesov · 2009-12-21T01:32:30.583Z · LW(p) · GW(p)
The above comment is one more piece of evidence that I'm unreliable in real-time, and shouldn't take actions without explicitly rethinking them.
↑ comment by PhilGoetz · 2009-12-21T00:12:17.378Z · LW(p) · GW(p)
You can require priors in order to make a computation. Your requirement doesn't cause the priors to magically appear; but you still require them.
I think Benquo means that you need enough observations to have reliable priors.