When does an insight count as evidence?
post by Alex Flint (alexflint) · 2010-01-04T09:09:23.345Z · LW · GW · Legacy · 38 comments
Bayesianism, as it is presently formulated, concerns the evaluation of the probability of beliefs in light of some background information. In particular, given a particular state of knowledge, probability theory says that there is exactly one probability that should be assigned to any given statement. A simple corollary is that if two agents with identical states of knowledge arrive at different probabilities for a particular belief then at least one of them is irrational.
A thought experiment. Suppose I ask you for the probability that P=NP (a famous unsolved computer science problem). Sounds like a difficult problem, I know, but thankfully all relevant information has been provided for you --- namely the axioms of set theory! Now we know that either P=NP is provable from the axioms of set theory, or its negation is (or neither is provable, but let's ignore that case for now). The problem is that you are unlikely to solve the P=NP problem any time soon.
So being the pragmatic rationalist that you are, you poll the world's leading mathematicians, and do some research of your own into the P=NP problem and the history of difficult mathematical problems in general to gain insight into which group of mathematicians may be more reliable, and to what extent they may be over- or under-confident in their beliefs. After weighing all the evidence honestly and without bias you submit your carefully-considered probability estimate, feeling like a pretty good rationalist. So you didn't solve the P=NP problem, but how could you be expected to when it has eluded humanity's finest mathematicians for decades? The axioms of set theory may in principle be sufficient to solve the problem but the structure of the proof is unknown to you, and herein lies information that would be useful indeed but is unavailable at present. You cannot be considered irrational for failing to reason from unavailable information, you say; rationality only commits you to using the information that is actually available to you, and you have done so. Very well.
The next day you are discussing probability theory with a friend, and you describe the one-in-a-million-illness problem, which asks for the probability that a patient has a particular illness, known to occur in only one in a million individuals, given that a particular diagnostic test with a known 1% false positive rate has returned positive. Sure enough, your friend intuits that there is a high chance that the patient has the illness and you proceed to explain why this is not actually the rational answer.
"Very well", your friend says, "I accept your explanation but I when I gave my previous assessment I was unaware of this line of reasoning. I understand the correct solution now and will update my probability assignment in light of this new evidence, but my previous answer was made in the absence of this information and was rational given my state of knowledge at that point."
"Wrong", you say, "no new information has been injected here, I have simply pointed out how to reason rationally. Two rational agents cannot take the same information and arrive at different probability assignments, and thinking clearly does not constitute new information. Your previous estimate was irrational, full stop."
By now you've probably guessed where I'm going with this. It seems reasonable to assign some probability to the P=NP problem in the absence of a solution to the mathematical problem, and in the future, if the problem is solved, it seems reasonable that a different probability would be assigned. The only way both assessments can be permitted as rational within Bayesianism is if the proof or disproof of P=NP can be considered evidence, and hence we understand that the two probability assignments are each rational in light of differing states of knowledge. But at what point does an insight become evidence? The one-in-a-million-illness problem also requires some insight in order to reach the rational conclusion, but I for one would not say that someone who produced the intuitive but incorrect answer to this problem was "acting rationally given their state of knowledge". No sir, I would say they failed to reach the rational conclusion, for if lack of insight is akin to lack of evidence then any probability could be "rationally" assigned to any statement by someone who could reasonably claim to be stupid enough. The more stupid the person, the more difficult it would be to claim that they were, in fact, irrational.
We can interpolate between the two extremes I have presented as examples, of course. I could give you a problem that requires you to marginalize over some continuous variable, and with an appropriate choice of integrand I could make the integration very tricky, requiring serious math skills to come to the precise solution. But at what difficulty does it become rational to approximate, or do a meta-analysis?
So, the question is: when, if ever, does an insight count as evidence?
38 comments
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-04T17:59:41.371Z · LW(p) · GW(p)
Now we know that either P=NP is provable from the axioms of set theory, or its negation is
This isn't to be said casually; it would be a huge result if you could prove it, and very different from the case of neither being provable. Most things that are true about the natural numbers are not provable in any given set of axioms.
I mention this because I read that and woke up and said "What? Really?" and then read the following parenthetical and was disappointed. I suggest editing the text. If we don't know anything in particular about the relation of P=NP to set theory, it shouldn't be said.
Replies from: Sniffnoy, None
↑ comment by [deleted] · 2010-01-05T02:31:13.480Z · LW(p) · GW(p)
I've heard either that P = NP is known to be falsifiable or that its negation is. I don't remember which I heard.
Replies from: bentarm
↑ comment by bentarm · 2010-01-05T03:19:26.566Z · LW(p) · GW(p)
I've heard either that P = NP is known to be falsifiable or that its negation is. I don't remember which I heard.
I'm not quite sure what you mean by this. Falsifiable isn't really a word that makes sense in mathematics. P = NP is clearly falsifiable (give a proof that P!=NP), as is its negation (give a polynomial time algorithm for an NP-complete problem).
Scott Aaronson has a paper summarising the difficulties in proving whether or not the P vs. NP question is formally independent of the Zermelo-Fraenkel axioms: Is P vs NP Formally Independent (PDF file)
The paper is (obviously) pretty technical, but the take-home message is contained in the last sentence:
So I’ll state, as one of the few definite conclusions of this survey, that P = NP is either true or false. It’s one or the other. But we may not be able to prove which way it goes, and we may not be able to prove that we can’t prove it.
Replies from: None
↑ comment by [deleted] · 2010-01-05T03:58:47.144Z · LW(p) · GW(p)
I'm not quite sure what you mean by this. Falsifiable isn't really a word that makes sense in mathematics. P = NP is clearly falsifiable (give a proof that P!=NP), as is its negation (give a polynomial time algorithm for an NP-complete problem)
Sure it makes sense. Something is falsifiable if, if it is false, it can be proven false. It's not obvious, given that P != NP, that there is a proof of this; nor is it obvious, given that P = NP, that for one of the polynomial-time algorithms for an NP-complete problem, there is a proof that it actually is such a thing. Though there's certainly an objective truth or falsehood to P = NP, it's possible that there is no proof of the correct answer.
Replies from: jake987722
↑ comment by jake987722 · 2010-01-05T06:55:36.872Z · LW(p) · GW(p)
Something is falsifiable if, if it is false, it can be proven false.
Isn't this true of anything and everything in mathematics, at least in principle? If there is "certainly an objective truth or falsehood to P = NP," doesn't that make it falsifiable by your definition?
Replies from: orthonormal, Technologos
↑ comment by orthonormal · 2010-01-05T07:15:26.227Z · LW(p) · GW(p)
It's not always that simple (consider the negation of a Gödel sentence G).
(If this is your first introduction to Gödel's Theorem and it seems bizarre to you, rest assured that the best mathematicians of the time had a Whiskey Tango Foxtrot reaction on the order of this video. But it turns out that's just the way it is!)
↑ comment by Technologos · 2010-01-05T07:18:11.869Z · LW(p) · GW(p)
I know they get overused, but Gödel's incompleteness theorems provide important limits to what can and cannot be proven true and false. I don't think they apply to P vs NP, but I just note that not everything is falsifiable, even in principle.
comment by Kaj_Sotala · 2010-01-04T11:44:22.184Z · LW(p) · GW(p)
What purpose are you after with this query? It sounds dangerously much like a semantic discussion, though I may be failing to see something obvious.
"Wrong", you say, "no new information has been injected here, I have simply pointed out how to reason rationally.
I'm not sure if this line makes sense. If somebody points out the correct way to interpret some piece of evidence, then that correct way of interpreting it is information. Procedural knowledge is knowledge, just as much as declarative.
Putting it another way: if you were writing a computer program to do something, you might hard-code into it some way of doing things, or you might build some sort of search algorithm that would let it find the appropriate way of doing things. Here, hard-coding corresponds to a friend telling you how something should be interpreted, and the program discovering it by itself corresponds to a person discovering it herself. If you hard-code it, you are still adding extra lines of code into the program - that is, adding information.
Replies from: Eliezer_Yudkowsky, alexflint
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-04T18:02:30.086Z · LW(p) · GW(p)
To me also, the post sounds more like it's equivocating on the definition of "rationality" than asking a question of the form either "What should I do?" or "What should I expect?"
↑ comment by Alex Flint (alexflint) · 2010-01-04T12:14:20.573Z · LW(p) · GW(p)
What purpose are you after with this query? It sounds dangerously much like a semantic discussion, though I may be failing to see something obvious.
Fair question, I should've gotten this clear in my mind before I wrote. My observation is that there are people who reason effectively given their limited computation power and others who do not (hence the existence of this blog), and my question is by what criteria we can distinguish them given that the Bayesian definition of rationality seems to falter here.
If somebody points out the correct way to interpret some piece of evidence, then that correct way of interpreting it is information. Procedural knowledge is knowledge, just as much as declarative.
I would agree except that this seems to imply that probabilities generated by a random number generator should be considered rational since it "lacks" the procedural knowledge to do otherwise. This is not just semantics because we perceive a real performance difference between a random number generator and a program that multiplies out likelihoods and priors, and we would like to understand the nature of that difference.
comment by rosyatrandom · 2010-01-04T17:02:58.129Z · LW(p) · GW(p)
I think this post makes an excellent point, and brings to light the aspect of Bayesianism that always made me uncomfortable.
Everyone knows we are not really rational agents; we do not compute terribly fast or accurately (as Morendil states), we are often unaware of our underlying motivations and assumptions, and even those we know about are often fuzzy, contradictory and idealistic.
As such, I think we have different ways of reasoning about things, making decisions, assigning preferences, holding and overcoming inconsistencies, etc.. While it is certainly useful to have a science of quantitative rationality, I doubt we think that way at all... and if we tried, we would quickly run into the qualitative, irrational ramparts of our minds.
Perhaps a Fuzzy Bayesianism would be handy: something that can handle uncertainty, ambivalence and apathy in any of its objects. Something where we don't need to put in numbers where numbers would be a lie.
Doing research in biology, I can assure you that the more decimal places of accuracy I see, the more I doubt its reliability.
Replies from: ideclarecrockerrules
↑ comment by ideclarecrockerrules · 2010-01-06T04:21:26.916Z · LW(p) · GW(p)
If you are envisioning some sort of approximation of Bayesian reasoning, perhaps one dealing with an ordinal set of probabilities, a framework that is useful in everyday circumstances, I would love to see that suggested, tested and evolving.
It would have to encompass a heuristic for determining the importance of observations, as well as their reliability and general procedures for updating beliefs based on those observations (paired with their reliability).
Was such a thing discussed on LW?
Replies from: orthonormal
↑ comment by orthonormal · 2010-01-06T04:27:58.858Z · LW(p) · GW(p)
Let me be the first to say I like your username, though I wonder if you'll regret it occasionally...
Replies from: ideclarecrockerrules
↑ comment by ideclarecrockerrules · 2010-01-06T08:40:00.554Z · LW(p) · GW(p)
Thank you, and thank you for the link; didn't occur to me to check for such a topic.
comment by Morendil · 2010-01-04T10:12:21.032Z · LW(p) · GW(p)
But at what difficulty does it become rational to approximate, or do a meta-analysis?
It depends on how many resources you choose to, or can afford to, devote to the question.
Say I have only a few seconds to ponder the product 538 times 347, and give a probability assignment for its being larger than 150,000; for its being larger than 190,000; and for its being larger than 240,000.
My abilities are such that in a limited time, I can reach definite conclusions about the two extreme values but not about the third; I'd have to be content with "ooh, about fifty-fifty". Given more time (or a calculator!) I can reach certainty.
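As a quick illustrative sketch of the bounds reachable in those few seconds (the bounding numbers are my own illustration; the exact product is what the calculator would add):

```python
# Bounds available in a few seconds of mental arithmetic:
print(500 * 300)  # 150,000: since 538 > 500 and 347 > 300, the product exceeds 150,000
print(600 * 400)  # 240,000: since 538 < 600 and 347 < 400, the product is below 240,000
# The middle threshold needs the exact value (or a calculator):
print(538 * 347)  # 186,686, which turns out to be below 190,000
```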
If we're talking about bounded rationality rather than pure reason, the definition "identical states of knowledge" needs to be extended to "identical states of knowledge and comparable expenditure of resources".
Alternatively you need to revise your definition of "irrational" to admit degrees. Someone who can compute the exact number faster than I can is perhaps more rational than I am, but my 1:1 probability for the middle number does not make me "irrational" in an absolute sense, compared to someone with a calculator.
I wouldn't call our friend "irrational", though it may be appropriate to call them lazy.
In fact, the discomfort some people feel at hearing the words "irrational" or "rational" bandied about can perhaps be linked to the failure of some rationalists to attend to the distinction between bounded rationality and pure reason...
ETA:
the more stupid the person, the more difficult it would be to claim that they were, in fact, irrational.
So? You already have a label "stupid" which is descriptive of an upper bound on the resources applied by the agent in question to the investigation at hand. What additional purpose would the label "irrational" serve?
comment by Liron · 2010-01-04T18:27:13.135Z · LW(p) · GW(p)
Insight doesn't exactly "count as evidence". Rather, when you acquire insight, you improve the evidence-strength of the best algorithm available for assigning hypothesis probabilities.
Initially, your best hypothesis-weighting algorithms are "ask an expert" and "use intuition".
If I give you the insight to prove some conclusion mathematically, then the increased evidence comes from the fact that you can now use the "find a proof" algorithm. And that algorithm is more entangled with the problem structure than anything else you had before.
comment by ideclarecrockerrules · 2010-01-06T09:44:54.528Z · LW(p) · GW(p)
when, if ever, does an insight count as evidence?
I suspect you use the term "insight" to describe something that I would classify as a hypothesis rather than observation (evidence is a particular kind of observation, yes?).
Consider Pythagoras' theorem and an agent without any knowledge of it. If you provide the agent with the length of the legs of a right-angled triangle and ask for the length of the hypotenuse, it will use some other algorithm/heuristic to reach an answer (probably draw and measure a similar triangle).
Now you suggest the theorem to the agent. This suggestion is in itself evidence for the theorem, if for no other reason than because P(hypothesis H | H is mentioned) > P(H | H is not mentioned). Once H steals some of the probability from competing hypotheses, the agent looks for more evidence and updates its map.
Was his first answer "rational"? I believe it was rational enough. I also think it is a type error to compare hypotheses and evidence.
If you define "rational" as applying the best heuristic you have, you still need a heuristic for choosing a heuristic to use (i.e. read wikipedia, ask expert, become expert, and so on). If you define it as achieving maximum utility, well, then it's pretty subjective (but can still be measured). I'd go for the latter.
P.S. Can Occam's razor (or any formal presentation of it) be classified as a hypothesis? Evidence for such could be any observation of a simpler hypothesis turning out to be a better one, and similarly for evidence against. If that is true, then you needn't dual-wield the sword of Bayes and Occam's razor; all you need is one big Bayesian blade.
Replies from: ciphergoth, Zack_M_Davis
↑ comment by Paul Crowley (ciphergoth) · 2010-01-06T11:01:30.104Z · LW(p) · GW(p)
P.S. Can Occam's razor (or any formal presentation of it) be classified as a hypothesis?
Sadly, no; this is the "problem of induction" and to put it briefly, if you try to do what you suggest you end up having to assume what you're trying to prove. If you start with a "flat" prior in which you consider every possible Universe-history to be equally likely, you can't collect evidence for Occam's razor. The razor has to be built in to your priors. Thus, Solomonoff's lightsaber.
↑ comment by Zack_M_Davis · 2010-01-06T09:51:01.641Z · LW(p) · GW(p)
then you needn't dual-wield the sword of Bayes and Occam's razor; all you need is one big Bayesian blade
Replies from: ideclarecrockerrules
↑ comment by ideclarecrockerrules · 2010-01-06T10:27:14.560Z · LW(p) · GW(p)
Sweet, but according to the wiki the lightsaber doesn't include full Bayesian reasoning, only the special case where the likelihood ratio of evidence is zero.
One could argue that you can reach the lightsaber using the Bayesian blade, but not vice versa.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2010-01-06T11:06:51.334Z · LW(p) · GW(p)
The lightsaber does include full Bayesian reasoning.
comment by Psychohistorian · 2010-01-04T18:52:34.013Z · LW(p) · GW(p)
You make the common error of viewing the answer as binary. A proper rationalist would be assigning probabilities, not making a binary decision. In the one-in-a-million example, your friend thinks he has the right answer, and it is because he thinks he is very probably right that he is irrational. In the P=NP example, I do not have any such certainty.
I would imagine that a review of P=NP in the manner you describe probably wouldn't push me too far from 50/50 in either direction. If it pushed me to 95/5, I'd have to discredit my own analysis, since people who are much better at math than I am have put a lot more thought into it than I have, and they still disagree.
Now, imagine someone comes up with an insight so good that all mathematicians agree P=NP. That would obviously change my certainty, even if I couldn't understand the insight. I would go from a rationally calibrated ~50/50 to a rationally calibrated ~99.99/.01 or something close. Thus, that insight certainly would be evidence.
That said, you do raise an interesting issue about meta-uncertainty that I'm still mulling over.
ETA: P=NP was a very hypothetical example about which I know pretty much nothing. I also forgot the fun property of mathematics that you can have the right answer with near certainty, but no one cares if you can't prove it formally. My actual point was about thinking answers are inherently binary. The mistake of the irrational actor who has ineffective tools seems to be his confidence in his wrong answer, not the wrong answer itself.
Replies from: bentarm
↑ comment by bentarm · 2010-01-04T22:21:39.226Z · LW(p) · GW(p)
Now, imagine someone comes up with an insight so good that all mathematicians agree P=NP.
All mathematicians already agree that P != NP. I'm not sure quite how much more of a consensus you could ask for on an unsolved maths problem.
(see, e.g., Lance Fortnow or Scott Aaronson)
Replies from: CarlShulman, orthonormal, None
↑ comment by CarlShulman · 2010-01-05T04:40:48.253Z · LW(p) · GW(p)
In a 2002 poll of 100 researchers, 61 believed the answer is no, 9 believed the answer is yes, 22 were unsure, and 8 believed the question may be independent of the currently accepted axioms, and so impossible to prove or disprove.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2010-01-05T11:06:57.807Z · LW(p) · GW(p)
8 believed the question may be independent of the currently accepted axioms, and so impossible to prove or disprove.
Wouldn't that imply P != NP since otherwise there would be a counterexample?
Replies from: Christian_Szegedy, Richard_Kennaway
↑ comment by Christian_Szegedy · 2010-01-06T21:30:32.033Z · LW(p) · GW(p)
There is a known concrete algorithm for every NP-complete problem that solves that problem in polynomial time if P=NP:
Generate all algorithms and run algorithm n for a 1/2^n fraction of the time; check the result of algorithm n if it stops, and output the result if it is correct.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2010-01-07T12:52:24.231Z · LW(p) · GW(p)
Nice! More explicitly: if the polynomial-time algorithm is at (constant) index K in our enumeration of all algorithms, we'd need about R*2^K steps of the meta-algorithm to run R steps of algorithm K. Thus, if algorithm K is bounded by a polynomial P(n) in problem size n, it'd take P(n)*2^K steps of the meta-algorithm (polynomial in n, since K is a constant) to solve the problem of size n.
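A toy Python sketch of the interleaving idea. Instead of enumerating all algorithms, it time-shares a small, hypothetical list of candidate solvers for subset sum (an NP-complete problem) and trusts only answers that pass the polynomial-time verifier; the function names and the candidate list are illustrative assumptions, not part of the comments above.

```python
from itertools import combinations, count

def verify_subset_sum(nums, target, indices):
    """Polynomial-time verifier: do these distinct positions of nums sum to target?"""
    return (len(set(indices)) == len(indices)
            and all(0 <= i < len(nums) for i in indices)
            and sum(nums[i] for i in indices) == target)

def brute_force(nums, target):
    """Candidate 0: yield every index subset (exponential, but eventually correct)."""
    for r in range(len(nums) + 1):
        for indices in combinations(range(len(nums)), r):
            yield indices

def greedy(nums, target):
    """Candidate 1: a fast heuristic that proposes one subset and may simply fail."""
    chosen, total = [], 0
    for i, x in sorted(enumerate(nums), key=lambda pair: -pair[1]):
        if total + x <= target:
            chosen.append(i)
            total += x
    yield tuple(chosen)

def interleaved_search(nums, target, algorithms):
    """Give candidate n roughly a 1/2^n share of the steps; return the first
    proposal the verifier accepts, or None if every candidate is exhausted."""
    runners = [alg(nums, target) for alg in algorithms]
    alive = [True] * len(runners)
    for budget in count(1):
        for n, runner in enumerate(runners):
            if not alive[n]:
                continue
            for _ in range(max(1, budget >> n)):  # about budget / 2^n steps this round
                try:
                    proposal = next(runner)
                except StopIteration:
                    alive[n] = False
                    break
                if verify_subset_sum(nums, target, proposal):
                    return proposal
        if not any(alive):
            return None

print(interleaved_search([3, 34, 4, 12, 5, 2], 9, [brute_force, greedy]))  # e.g. (4, 2): 5 + 4 = 9
```

If P=NP, the true enumeration of all algorithms contains a correct polynomial-time solver at some fixed index K, so its 2^K handicap is only a constant factor, which is exactly the point made above.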
↑ comment by Richard_Kennaway · 2010-01-05T11:52:17.274Z · LW(p) · GW(p)
Wouldn't that imply P != NP since otherwise there would be a counterexample?
No. It could be that there is an algorithm that solves some NP-complete problem in polynomial time, yet there is no proof that it does so. We could even find ourselves in the position of having discovered an algorithm that runs remarkably fast on all instances it's applied to, practically enough to trash public-key cryptography, yet although it is in P we cannot prove it is, or even that it works.
↑ comment by orthonormal · 2010-01-05T07:17:48.539Z · LW(p) · GW(p)
You mean, a substantial majority of sane and brilliant mathematicians. Never abuse a universal quantifier when math is on the line!
Replies from: bentarm
↑ comment by bentarm · 2010-01-05T14:00:41.033Z · LW(p) · GW(p)
You're (all) right, of course; there are several mathematicians who refuse to have an opinion on whether P = NP, and a handful who take the minority view (although of the 8 who did so in Gasarch's survey, 'some' admitted they were doing it just to be contrary, so that really doesn't leave many who actually believed P=NP).
What this definitively does not mean is that it's rational to assign 50% probability to each side. My main point was that there is ample evidence to suggest that P != NP (see the Scott Aaronson post I linked to above) and a strong consensus in the community that P != NP. To insist that one should assign 50% of one's probability to the possibility that P=NP is just plain wrong. If nothing else, Aaronson's "self-referential" argument should be enough to convince most people here that P is probably a strict subset of NP.
↑ comment by [deleted] · 2010-01-05T02:26:59.993Z · LW(p) · GW(p)
All mathematicians already agree that P != NP.
Not all of them.
Replies from: alexflint
↑ comment by Alex Flint (alexflint) · 2010-01-05T04:31:21.877Z · LW(p) · GW(p)
Just because there are some that disagree doesn't mean we must assign 50% probability to each case.
Replies from: None
comment by Dre · 2010-01-06T07:41:21.195Z · LW(p) · GW(p)
(please note that this is my first post)
I found the phrasing in terms of evidence to be somewhat confusing in this case. I think there is some equivocating on "rationality" here and that is the root of the problem.
For P=NP (if it or its negation is provable), a perfect Bayesian machine will (dis)prove it eventually. This is an absolute rationality; straight rational information processing without any heuristics or biases or anything. In this sense it is "irrational" to not be able to (dis)prove P=NP ever.
But in the sense of "is this a worthwhile application of my bounded resources" rationality, for most people the answer is no. One can reasonably expect a human claiming to be "rational" to be able to correctly solve one-in-a-million-illness, but not to have gone (or even to be able to go) through the process of solving P=NP. In terms of fulfilling one's utility function, solving P=NP given your processing power is most likely not the most fulfilling choice (except for some computer scientists).
So we can say this person is taking the best trade-off between accuracy and work for P=NP because it requires a large amount of work, but not for one-in-a-million-illness because learning Bayes rule is very little work.
comment by Otus · 2010-01-05T14:25:45.171Z · LW(p) · GW(p)
I don't think the two examples you gave are technically that different. Someone giving an "intuitive" answer to the diagnostic question is basically ignoring half the data; likewise, someone looking for an answer to P=NP using a popularity survey is ignoring all other data (e.g. the actual math).
The difference is whether you know what data you are basing your evaluation on and whether you know you have ignored some. When you can correctly state what your probability is conditional on, you are presenting evidence.