Do people think in a Bayesian or Popperian way?
post by curi · 2011-04-10T10:18:28.936Z · LW · GW · Legacy · 38 comments
Scope Insensitivity - The human brain can't represent large quantities: an environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.
Correspondence Bias, also known as the fundamental attribution error, refers to the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance.
Confirmation bias, or Positive Bias is the tendency to look for evidence that confirms a hypothesis, rather than disconfirming evidence.
Planning Fallacy - We tend to plan envisioning that everything will go as expected. Even assuming that such an estimate is accurate conditional on everything going as expected, things will not go as expected. As a result, we routinely see outcomes worse than the ex ante worst case scenario.
Do We Believe Everything We're Told? - Some experiments on priming suggest that mere exposure to a view is enough to get one to passively accept it, at least until it is specifically rejected.
Illusion of Transparency - Everyone knows what their own words mean, but experiments have confirmed that we systematically overestimate how much sense we are making to others.
Evaluability - It's difficult for humans to evaluate an option except in comparison to other options. Poor decisions result when a poor category for comparison is used. Includes an application for cheap gift-shopping.
The Allais Paradox (and subsequent followups) - Offered choices between gambles, people make decision-theoretically inconsistent decisions.
Comments sorted by top scores.
comment by benelliott · 2011-04-10T10:25:28.106Z · LW(p) · GW(p)
Nobody here is claiming that people naturally reason in a Bayesian way.
We are claiming that they should.
↑ comment by curi · 2011-04-10T10:27:30.561Z · LW(p) · GW(p)
If people don't reason in a Bayesian way, but they do reason, it implies there is a non-Bayesian way to reason which works (at least a fair amount, e.g. we managed to build computers and space ships). Right?
Claims that people think in an inductive way are common here. Note how my descriptions are different than that and account for the evidence.
Someone told me that humans do and must think in a bayesian way at some level b/c it's the only way that works.
↑ comment by Eugine_Nier · 2011-04-10T16:48:54.857Z · LW(p) · GW(p)
As Eliezer said in Searching for Bayes-Structure:
The way you begin to grasp the Quest for the Holy Bayes is that you learn about cognitive phenomenon XYZ, which seems really useful - and there's this bunch of philosophers who've been arguing about its true nature for centuries, and they are still arguing - and there's a bunch of AI scientists trying to make a computer do it, but they can't agree on the philosophy either -
And - Huh, that's odd! - this cognitive phenomenon didn't look anything like Bayesian on the surface, but there's this non-obvious underlying structure that has a Bayesian interpretation - but wait, there's still some useful work getting done that can't be explained in Bayesian terms - no wait, that's Bayesian too - OH MY GOD this completely different cognitive process, that also didn't look Bayesian on the surface, ALSO HAS BAYESIAN STRUCTURE - hold on, are these non-Bayesian parts even doing anything?
- Yes: Wow, those are Bayesian too!
- No: Dear heavens, what a stupid design. I could eat a bucket of amino acids and puke a better brain architecture than that.
↑ comment by benelliott · 2011-04-10T10:41:57.764Z · LW(p) · GW(p)
Someone told me that humans do and must think in a bayesian way at some level b/c it's the only way that works.
Humans think in an approximately Bayesian way. The biases are the places where the approximation breaks down, and human thinking starts to fail.
Claims that people think in an inductive way are common here. Note how my descriptions are different than that and account for the evidence.
You have not given one example of non-inductive thinking. I really do not see how you could get through the day without induction.
I am riding my bike to college after it rained during the night, and I notice that the rain has caused a path I use to become a muddy swamp, meaning I have to take a detour and arrive late. Next time it rains, I leave home early because I expect to encounter mud again.
If you wish to claim that most people are non-inductive you must either:
1) Show that I am unusual for thinking in this way
or
2) Show how someone else could come to the same conclusion without induction.
If you choose 1) then you must also show why this freakishness puts me at a disadvantage, or concede that other people should be inductive.
↑ comment by curi · 2011-04-10T19:23:22.593Z · LW(p) · GW(p)
You have not given one example of non-inductive thinking. I really do not see how you could get through the day without induction.
I get hungry. So I guess some things I might like to eat. I criticize my guesses. I eat.
benelliott posts on less wrong. I guess what idea he's trying to communicate. With criticism and further guessing I figure it out. I reply.
Most of this is done subconsciously.
Now how about an example of induction?
In order to evaluate if it is an example of induction, you'll need to start with a statement of the method of induction. This is not b/c I'm unfamiliar with such a thing but because we will disagree about it and we better have one to get us on the same page more (inductivists vary a lot. I know many different statements of how it works.)
In the example you give, you don't give any explanation of what you think it has to do with induction. Do you think it's inductive because you learned a new idea? Do you think it's inductive because it's impossible to conjecture that you should do that next time it rains? Do you think it's inductive because you learned something from a single instance? (Normally people giving examples of induction will have multiple data points they learn from, not one. Your example is not typical at all.)
↑ comment by benelliott · 2011-04-10T21:39:39.280Z · LW(p) · GW(p)
In order to evaluate if it is an example of induction, you'll need to start with a statement of the method of induction. This is not b/c I'm unfamiliar with such a thing but because we will disagree about it and we better have one to get us on the same page more (inductivists vary a lot. I know many different statements of how it works.)
I'm tempted just to point to my example and say 'there, that's what I call induction', but I doubt that will satisfy you so I will try to give a more rigorous explanation.
I view induction as Bayesian updating/decision theory with an inductive prior. To clarify what I mean, suppose I am faced with an opaque jar containing ten beads, each of which is either red or white. What is my prior for the contents of the jar? It depends on my background knowledge.
1) I may know that someone carefully put 5 red beads and 5 white beads in the jar
2) I may know that each ball was chosen randomly with probability p, where p is a parameter which is (as far as I know) equally likely to be anywhere between 0 and 1
3) I may know that each ball was tossed in by a monkey which was drawing randomly from two barrels, one containing red balls, one containing white balls.
I may also have many other states of knowledge, but I give just three examples for simplicity.
1) is anti-inductive. If I have drawn N balls, R of which have been red, then P(the next ball is red) = (5-R)/(10-N), so every red I draw decreases my anticipation of red, while every white increases it.
2) is inductive. If I have drawn N balls, R of which have been red, then P(the next ball is red) = (R+1)/(N+2) (this is a theorem due to Laplace; the proof is not quite trivial). Every red ball increases my anticipation of red, while every white decreases it. Notice how it takes many reds to provide strong evidence, but even one red is sufficient for a fairly large update, from 0.5 to 0.67.
3) is neither inductive nor anti-inductive. P(the next ball is red) = 0.5 regardless of what I have drawn. Past observations do not influence expectation of future observations.
With the mud, none of the three examples perfectly describes my prior, but 2) comes closest. Most proposals for universal priors are to some extent inductive; for example, Solomonoff induction assigns a much higher probability to '1000 0s' than to '999 0s followed by a 1'.
Brief note: Human induction and Solomonoff induction are more sophisticated than 2), mainly because they have better pattern-spotting abilities, so the process is not quite analogous.
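The three priors in the jar example can be checked directly. Below is a minimal sketch in pure Python (no external libraries); the numerical-integration function is just a midpoint-rule verification of Laplace's closed form, and all function names are illustrative:

```python
def p_next_red_known_counts(N, R):
    # Case 1: exactly 5 red and 5 white beads were placed in the jar.
    # After drawing N balls, R of them red, 5-R reds remain among 10-N balls.
    return (5 - R) / (10 - N)

def p_next_red_laplace(N, R):
    # Case 2: each ball is red with unknown probability p, p ~ Uniform(0, 1).
    # Laplace's rule of succession gives the closed form (R+1)/(N+2).
    return (R + 1) / (N + 2)

def p_next_red_laplace_numeric(N, R, steps=100_000):
    # Verify case 2 by direct Bayesian updating: average p over the
    # posterior, approximating the integrals with the midpoint rule.
    num = den = 0.0
    for i in range(steps):
        p = (i + 0.5) / steps
        likelihood = p**R * (1 - p)**(N - R)
        num += p * likelihood
        den += likelihood
    return num / den

# Case 1 is anti-inductive: drawing a red lowers P(next ball is red).
assert p_next_red_known_counts(0, 0) == 0.5
assert p_next_red_known_counts(1, 1) < 0.5

# Case 2 is inductive: one red raises P(next ball is red) from 1/2 to 2/3.
assert p_next_red_laplace(0, 0) == 0.5
assert abs(p_next_red_laplace(1, 1) - 2/3) < 1e-9
assert abs(p_next_red_laplace_numeric(1, 1) - 2/3) < 1e-4
```

Case 3 (the monkey drawing from two barrels) needs no code: P(next ball is red) is 0.5 no matter what has been drawn.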
↑ comment by Richard_Kennaway · 2011-04-10T10:48:40.375Z · LW(p) · GW(p)
If people don't reason in a Bayesian way, but they do reason, it implies there is a non-Bayesian way to reason which works (at least a fair amount, e.g. we managed to build computers and space ships).
There is. That does not mean that it is without error, or that errors are not errors. A&B is, everywhere and always, no more likely than A. Any method of concluding otherwise is wrong. If the form of reasoning that Popper advocates endorses this error, it is wrong.
Someone told me that humans do and must think in a bayesian way at some level b/c it's the only way that works.
Whoever that was is wrong.
↑ comment by Oscar_Cunningham · 2011-04-10T15:14:28.659Z · LW(p) · GW(p)
Someone told me that humans do and must think in a bayesian way at some level b/c it's the only way that works.
Whoever that was is wrong.
↑ comment by Richard_Kennaway · 2011-04-10T17:53:36.965Z · LW(p) · GW(p)
Eliezer can say whether curi's view is a correct reading of that article, but it seems to me that if Bayesian reasoning is the core that works, while humans also do a lot of other stuff that is all either useless or harmful, and don't even know the gold from the dross, then this is not in contradiction with demonstrating that the other stuff is due to Popperian reasoning. It rather counts against Popper, though. Or at least against Popperianism.
↑ comment by Oscar_Cunningham · 2011-04-10T19:33:38.727Z · LW(p) · GW(p)
Agreed.
↑ comment by curi · 2011-04-10T20:09:15.601Z · LW(p) · GW(p)
Here's someone saying it again by quoting Yudkowsky saying it:
http://lesswrong.com/lw/56e/do_people_think_in_a_bayesian_or_popperian_way/3w7o
No doubt Yudkowsky is wrong, as you say.
↑ comment by Richard_Kennaway · 2011-04-11T06:27:46.860Z · LW(p) · GW(p)
See my other response to Oscar_Cunningham, who cited the same article.
↑ comment by jmmcd · 2011-04-10T14:25:19.338Z · LW(p) · GW(p)
The core of the problem:
Someone told me that humans do and must think in a bayesian way at some level b/c it's the only way that works.
No link to that someone? If you can remember who it was, you should go and argue with them. To everyone else, this is a straw man.
↑ comment by Cyan · 2011-04-10T14:49:26.738Z · LW(p) · GW(p)
(Certainly there are researchers looking for Bayes structure in low-level neural processing, but those investigations focus on tasks far below human cognition.)
↑ comment by curi · 2011-04-10T19:14:17.371Z · LW(p) · GW(p)
Here's someone saying it again by quoting Yudkowsky saying it:
http://lesswrong.com/lw/56e/do_people_think_in_a_bayesian_or_popperian_way/3w7o
Some straw man... I thought people would be familiar with this kind of thing without me having to quote it.
comment by jimrandomh · 2011-04-10T13:21:59.753Z · LW(p) · GW(p)
Please, stop. This has gone on long enough. You don't have to respond to everything, and you shouldn't respond to everything. By trying to do so, you have generated far more text than any reasonable person would be willing to read, and it's basically just repeating the same incorrect position over and over again. It is quite clear that we are not having a rational discussion, so there is nothing further to say.
↑ comment by Nic_Smith · 2011-04-10T19:17:59.198Z · LW(p) · GW(p)
Indeed. This Popperclipping of the discussion section should cease.
↑ comment by [deleted] · 2011-04-10T22:20:51.064Z · LW(p) · GW(p)
This situation seems an ideal test of the karma system.
↑ comment by prase · 2011-04-10T23:47:00.828Z · LW(p) · GW(p)
And it works.
↑ comment by [deleted] · 2011-04-11T00:06:44.723Z · LW(p) · GW(p)
What beneficial effect have you observed? I ask because people were complaining about the forum being popperclipped. Do you disagree with these complaints? Or do you think that the karma system has trained the low-karma popperclipping participants to improve the quality of their comments? One of them recently wrote a post admitting and defending the tactic of being obnoxious - he said that his obnoxiousness was to filter out time-wasters.
↑ comment by prase · 2011-04-11T00:28:58.305Z · LW(p) · GW(p)
I mean curi has now insufficient karma to post on the main page and his comments are generally heavily downvoted. People can disable viewing low karma comments, so popperclipping (whatever it means - did the old term "troll" grow out of fashion?) may not be a problem. Therefore I think that karma works.
↑ comment by Desrtopa · 2011-04-11T02:13:12.430Z · LW(p) · GW(p)
Curi's karma periodically spikes despite posting no significantly upvoted comments or any improvement in his reception. I suspect he or someone else who frequents his site may be generating puppet accounts to feed his comments karma (his older comments appear to have gone through periodic blanket spikes.) He's posted main page and discussion articles multiple times after his karma has dropped to zero without first producing more comments that are upvoted, due to these spikes.
↑ comment by prase · 2011-04-11T08:17:28.582Z · LW(p) · GW(p)
If this is true, it would be natural for the moderators to step in and ban him.
↑ comment by Alicorn · 2011-04-12T20:55:15.660Z · LW(p) · GW(p)
I asked matt if this could be confirmed, but apparently there's only a very time-consuming method to gather anything other than circumstantial evidence for the accusation.
↑ comment by JoshuaZ · 2011-04-12T21:08:31.970Z · LW(p) · GW(p)
I asked matt if this could be confirmed, but apparently there's only a very time-consuming method to gather anything other than circumstantial evidence for the accusation.
Jimrandomh had an idea for setting up a script that might help, maybe talk to him? In any event, it might be useful to have the capability to do this in general. That said, since this is only the first time we've had such a problem, it doesn't seem as of right now that this is a common enough issue to really justify investing in additional capabilities for the software.
↑ comment by [deleted] · 2011-04-11T00:51:12.823Z · LW(p) · GW(p)
popperclipping (whatever it means...)
I believe that "popperclipping" is a play on words, a joke, alluding to a popular LW topic. Explaining it more might kill the joke.
I mean curi has now insufficient karma to post on the main page
Currently, on the main page, the most recent post under "Recent Posts" is curi's The Conjunction Fallacy Does Not Exist. The comments under this are showing up in the Recent Comments column. Of the five comments I see in the recent comments column, three are comments under curi's posts. That is a majority. As of now, then, it appears that curi continues to dominate discussion, either directly or by triggering responses.
↑ comment by prase · 2011-04-11T01:01:26.407Z · LW(p) · GW(p)
Damn, I thought it was in the discussion. Then, I retract my statement that karma works. Still, what's the explanation? Where did curi get enough karma to balance the blow from his heavily downvoted comments and posts? I have looked at two pages of his recent activity where his score was -112 (-70 for the main page post, -42 for the rest). And I know he was near zero after his last but one main page post was published.
↑ comment by CarlShulman · 2011-04-11T02:00:40.255Z · LW(p) · GW(p)
Maybe mass upvoting by sockpuppets?
↑ comment by Emile · 2011-04-10T13:34:10.364Z · LW(p) · GW(p)
Seconded. When I discovered this ongoing conversation on Popperian epistemology, there were already three threads, some of them with hundreds of comments, and no sign of progress or mutual agreement, only argument. There may be some comments worth reading in the stack, but they're not worth the effort of digging.
↑ comment by TheOtherDave · 2011-04-10T14:00:25.028Z · LW(p) · GW(p)
While agreeing with you completely, I'll also point out that quite a few people have been feeding this particular set of threads... that is, continuing to have, at enormous length, a discussion in which no progress is being made.
comment by JoshuaZ · 2011-04-10T15:41:41.025Z · LW(p) · GW(p)
Others have already answered this, but there's another problem: you clearly haven't read the actual literature on the conjunction fallacy. It doesn't just occur in the form "A because of B." It connects with the representativeness heuristic. Thus, for suitably chosen A and B, people act as if "A and B" is more likely than "A". See Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Tversky, Amos; Kahneman, Daniel. Psychological Review, Vol 90(4), Oct 1983, 293-315. doi: 10.1037/0033-295X.90.4.293
Please stop posting and read the literature on these issues.
comment by benelliott · 2011-04-10T11:01:29.500Z · LW(p) · GW(p)
With the Allais Paradox, would you say that the decisions people make are consistent with Popperian philosophy? Or at any rate would you say that, as a Popperian, you would make similar decisions?
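For reference, the inconsistency the Allais Paradox exposes can be written out explicitly. A minimal sketch in Python, using the textbook payoff numbers (these specific figures are the standard presentation, assumed here for illustration; they do not appear in the thread):

```python
def expected_value(lottery):
    # lottery: list of (probability, payoff) pairs
    return sum(p * x for p, x in lottery)

# Standard Allais gambles (payoffs in millions; textbook numbers, assumed).
g1a = [(1.00, 1)]                        # $1M for certain
g1b = [(0.10, 5), (0.89, 1), (0.01, 0)]  # chance of $5M, small chance of nothing
g2a = [(0.11, 1), (0.89, 0)]
g2b = [(0.10, 5), (0.90, 0)]

# Preferring 1A over 1B while preferring 2B over 2A is inconsistent for
# ANY utility function u: both comparisons reduce to the same inequality
# 0.11*u(1) vs 0.10*u(5) + 0.01*u(0), because the pairs differ only by a
# common 0.89 chance of a fixed prize. With u(x) = x the two preference
# gaps are numerically identical:
diff1 = expected_value(g1a) - expected_value(g1b)
diff2 = expected_value(g2a) - expected_value(g2b)
assert abs(diff1 - diff2) < 1e-12
```

People who choose 1A and 2B are therefore maximizing no utility function at all, which is the decision-theoretic inconsistency the post's summary refers to.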
comment by Alexandros · 2011-04-10T10:24:51.489Z · LW(p) · GW(p)
Are you implying human thinking should be used as some sort of benchmark? Why in the space of all possible thought processes would the human family of thought processes, hacked together by evolution to work just barely well enough, represent the ideal? Also, are you applying the 'popperian' label to human thinking? If I prove human thinking to be wrong by its own standards, have I falsified the popperian process of approaching truth?
I am not well versed (or much invested) in bayes but this is not making much sense.
↑ comment by radical_negative_one · 2011-04-10T10:32:05.303Z · LW(p) · GW(p)
To clarify/rephrase/expand on this, I think Alexandros is suggesting that the questions "how do humans think" and "what is a rational way to think" are separate, and if we are discussing the first of the two then perhaps we have been sidetracked.
In fact, this is nicely highlighted by your very first sentence:
People think A&B is more likely than A alone, if you ask the right question. That's not very Bayesian; as far as you Bayesians can tell it's really quite stupid.
That is a quite stupid way to think, and if we want to think rationally we should desire to not think that way, regardless of whether it is in fact a common way of thinking.
comment by zaph · 2011-04-10T15:18:29.975Z · LW(p) · GW(p)
I think you should read up on the conjunction fallacy. Your example does not address the observations made in the research by Kahneman and Tversky. The questions posed in the research do not assume causal relationships; they are just two independent probabilities. I won't rewrite the whole wiki article, but the upshot of the conjunction fallacy is that people use the representativeness heuristic to assess odds instead of the correct procedures they would have used if that heuristic weren't cued. People who would never say "Joe rolled a six and a two" is more likely than "Joe rolled a two" do say "Joe is a New Yorker who rides the subway" is more likely than "Joe is a New Yorker", when presented with information about Joe.
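The dice version of the conjunction rule can be verified mechanically over the finite sample space. A minimal sketch in Python (the event definitions are illustrative, matching the "Joe rolled a six and a two" example above):

```python
from itertools import product

# Sample space: two fair six-sided dice, all 36 outcomes equally likely.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    # Probability of an event (a predicate over outcomes) under the
    # uniform measure on the sample space.
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

rolled_two = lambda o: 2 in o                    # A: Joe rolled a two
rolled_two_and_six = lambda o: 2 in o and 6 in o  # A&B: a two and a six

# The conjunction rule: A&B is never more likely than A alone.
assert prob(rolled_two_and_six) <= prob(rolled_two)
```

The same enumeration argument works for any finite sample space, since every outcome satisfying A&B also satisfies A; that is what makes the "New Yorker who rides the subway" judgment a fallacy rather than a matter of taste.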