You really should read Taleb; you can probably start with The Black Swan. His terms for these are "Mediocristan," domains that are described by Gaussian distributions, and "Extremistan," domains that are described by power laws.
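A quick toy simulation can show the difference (the numbers are my own illustrative choices, not Taleb's): in Extremistan, a single draw can dominate the total.

```python
# Contrast "Mediocristan" (Gaussian) with "Extremistan" (power law):
# in a heavy-tailed sample, the single largest draw can dominate the sum.
import random

random.seed(0)
gaussian = [random.gauss(100, 15) for _ in range(10_000)]    # e.g. heights
pareto = [random.paretovariate(1.5) for _ in range(10_000)]  # e.g. wealth

for name, xs in [("Gaussian", gaussian), ("Pareto", pareto)]:
    print(f"{name}: largest draw is {max(xs) / sum(xs):.3%} of the total")
```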
For those interested, Netflix has a new documentary out about the case: https://youtu.be/9r8LG_lCbac
I'm going to try to make it.
Sorry I missed this. I'll try to attend the next one. I suggest Capital City Brewery at Metro Center.
I recommend a related essay by Hayek, "Competition as a Discovery Procedure."
This is a fallacious amphiboly, so it's deductively wrong. There's no need even to bring up induction here, and Bayesian inference is for induction. It's a category error to fault Bayesian inference for not applying; it would be like asking Bayesian inference to cook me dinner.
David Friedman laments another misuse of frequentism.
I know this is an old thread, but for anyone just now reading it, I thought I'd pass along this bizarre development.
There is no such thing as absolute certainty, but there is assurance sufficient for the purposes of human life.
John Stuart Mill, On Liberty
He seems to have understood that 0 and 1 are not probabilities.
Arnold Kling has some thoughts about the plight of the unskilled college grad.
Yeah, that would be great, but I can't do it; I don't have the technical background, so I hereby delegate the task to someone else willing to write it up.
Good article on the abuse of p-values: http://www.sciencenews.org/view/feature/id/57091/title/Odds_are,_its_wrong
In A Technical Explanation of Technical Explanation, Eliezer writes,
You should only assign a calibrated confidence of 98% if you're confident enough that you think you could answer a hundred similar questions, of equal difficulty, one after the other, each independent from the others, and be wrong, on average, about twice. We'll keep track of how often you're right, over time, and if it turns out that when you say "90% sure" you're right about 7 times out of 10, then we'll say you're poorly calibrated.
...
What we mean by "probability" is that if you utter the words "two percent probability" on fifty independent occasions, it better not happen more than once
...
If you say "98% probable" a thousand times, and you are surprised only five times, we still ding you for poor calibration. You're allocating too much probability mass to the possibility that you're wrong. You should say "99.5% probable" to maximize your score. The scoring rule rewards accurate calibration, encouraging neither humility nor arrogance.
So I have a question. Is this not an endorsement of frequentism? I don't think I understand fully, but isn't counting the instances of the event exactly frequentist methodology? How could this be Bayesian?
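To make the counting concrete, here's a minimal sketch of the bookkeeping Eliezer describes; the records in `predictions` are invented for illustration.

```python
# Calibration check: for each stated confidence level, compare the
# claimed probability with the observed frequency of being right.
from collections import defaultdict

# (stated confidence, whether the claim turned out true) -- hypothetical data
predictions = [(0.9, True), (0.9, True), (0.9, False),
               (0.98, True), (0.98, True), (0.98, True)]

buckets = defaultdict(list)
for confidence, correct in predictions:
    buckets[confidence].append(correct)

for confidence, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"said {confidence:.0%}, right {observed:.0%} of {len(outcomes)} times")
```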
Cool paper: When Did Bayesian Inference Become “Bayesian”?
http://ba.stat.cmu.edu/journal/2006/vol01/issue01/fienberg.pdf
This is an excellent diagnosis, and those are excellent suggestions for really learning the material.
"If you can't explain it simply, you don't understand it well enough." Albert Einstein
This relates well to my earlier frustration about the cop-out of vaguely appealing to life experience in an argument, without actually explaining anything.
I'm a little late to this game, but I spent over an hour, maybe two, comparing the information from the two websites. I had known nothing previously about the case.
My answers: 1: 0.05; 2: 0.05; 3: 0.95; 4: 0.65
So, I feel pretty vindicated. This was a great complement to Kaj Sotala's post on Bayesianism. With his post in mind, as I was considering this case, I assigned probabilities to an orgy gone wrong versus a rape and murder committed by a single person. There is strong Bayesian evidence for Guédé's guilt, but the evidence against Sollecito and Knox is exceedingly weak. This has really helped the idea of Bayesianism "click" for me.
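For concreteness, here's a minimal sketch of the odds-form update I had in mind; the prior and likelihood ratio below are invented placeholders, not the numbers I actually used.

```python
# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
def update(prior_prob: float, likelihood_ratio: float) -> float:
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical: a 10% prior and evidence 20x more likely under guilt
# than under innocence yields roughly a 69% posterior.
print(update(0.10, 20.0))  # ~0.69
```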
komponisto, your reasoning is wonderfully thorough and sound. I can confirm that I caught myself deliberately "shutting the voice out" concerning the activity with the mop. You have a great explanation overall. These two posts of yours are in the running for my all-time favorites.
Noted, thanks.
I know you two are joking, but I will take this opportunity to point out that I really do appreciate the culture of humility on Less Wrong. It's Yudkowsky's eighth virtue. I am aware of my profound ignorance as a mere 22-year-old undergrad.
Alternatively, is this a plea for the Skinnerian, egalitarian abolition of honorifics, as from Walden Two?
Well, I am an undergrad right now, at least for a couple more months.
Prof. Hanson,
I'm 22, and haven't encountered an opportunity where I thought to use this claim. There are probably instances where it would have been factually appropriate for me to do so, but I'm not inclined to make this point, because it seems to me like a cop-out.
Maybe I would have difficulty in explaining something highly technical or specialized to someone with no background, but crying "life experience" doesn't seem to be the proper response. It's far too vague. I would find it more appropriate to direct my debate partner to the specialized or technical material they haven't studied to understand why my position might be different.
The problem is that nebulously appealing to "life experience" doesn't even specify how the debate partner is uninformed. It's as if the person with more "life experience" is on such a higher level of understanding that they can't even communicate how their additional information informs their position. Like Silas Barta, I'm skeptical that even the most informed and educated people would ever be simply unable to explain the basic ideas of even the most difficult material. When the claim comes without any attempt to explain how training or experience leads to a different conclusion, I suspect that, more often than not, the disagreement isn't actually about specialized training at all; the opponent's line of argument has simply run out of steam.
In critiquing postmodernism, Noam Chomsky wrote, "True, there are lots of other things I don't understand: the articles in the current issues of math and physics journals, for example. But there is a difference. In the latter case, I know how to get to understand them, and have done so, in cases of particular interest to me; and I also know that people in these fields can explain the contents to me at my level, so that I can gain what (partial) understanding I may want."
"As one shocked 42-year-old manager exclaimed in the middle of a self-reflective career planning exercise, 'Oh, no! I just realized I let a 20-year-old choose my wife and my career!'"
-- Douglas T. Hall, Protean Careers of the 21st Century
Sorry; I didn't realize that I could still post. I went ahead and posted it.
Here's the latest version, which I will attempt to post at the top level when I again have enough karma.
"Life Experience" as a Conversation-Halter
Sometimes in an argument, an older opponent might claim that perhaps as I grow older, my opinions will change, or that I'll come around on the topic. Implicit in this claim is the assumption that age or quantity of experience is a proxy for legitimate authority. Such "life experience" is necessary for an informed, rational worldview, but it is not sufficient.
The claim that more "life experience" will completely reverse an opinion indicates that the person making it believes opinion is based primarily on an accumulation of anecdotes, perhaps shaped by extensive availability bias. It is actually a pretty decent assumption that other people aren't Bayesian, because for the most part they aren't; Haidt, Kahneman, and Tversky, among others, have confirmed as much.
When an opponent appeals to more "life experience," it's a last resort, and it's a conversation halter. This tactic is used when an opponent is cornered. The claim is nearly an outright acknowledgment of a move to exit the realm of rational debate. Why stick to rational discourse when you can shift to trading anecdotes? It levels the playing field, because anecdotes, while Bayesian evidence, are easily abused, especially for complex moral, social, and political claims. As rhetoric, this is frustratingly effective, but it's logically rude.
Although it might be rude and rhetorically weak, a Bayesian would at least have the epistemic authority to condescend to a non-Bayesian in an argument. Conversely, it can be downright maddening for a non-Bayesian to condescend to a Bayesian, because the non-Bayesian lacks the epistemological authority to warrant such condescension. E.T. Jaynes wrote in Probability Theory about the arrogance of the uninformed: "The semiliterate on the next bar stool will tell you with absolute, arrogant assurance just how to solve the world's problems; while the scholar who has spent a lifetime studying their causes is not at all sure how to do this."
Hi Morendil,
Thanks for the comment. The version you are commenting on is an earlier, worse one than the version I posted and then pulled this morning; in this morning's version, I actually changed the claim about the Sokal affair completely.
Due to what I fear was an information cascade of negative karma, I pulled the post so that I might make revisions.
The criticism concerning both this earlier version and the newer one from this morning still holds, though. After the immediate negative feedback, I too realized that I was poorly combining two different points and losing both of them in the process. I think I need to split this into two different posts, or cut the point about academia entirely. In the future version I will also concede that anecdotes are evidence.
Unfortunately I was at exactly 50 karma, and now I'm back down to 20, so it will be a while before I can try again. I'll be working on it.
There is a difference between science, a.k.a. basic research, and technology, a.k.a. applied science. A popular justification for funding basic research is that it generates the positive external effects you mention, but this inappropriately conflates science and technology. Technology doesn't suffer from external effects: the patent system and the profit motive make technological goods and services excludable.
Right, so a "public" library is a good example of a good that is provided publicly, but has little economic justification as such. A "public" good is technically specific in economics, and refers to something more narrow than what is used in everyday language.
A book is excludable, even if somewhat nonrivalrous. It's rivalrous in the sense that it can't be checked out to multiple people at once, but nonrivalrous in the sense that a book in a library can be consumed by many more people than a book kept on a shelf in someone's private home, over an extended period of time.
A library could operate without positive external effects under a subscription model.
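To make the taxonomy concrete, here's a minimal sketch of the standard two-by-two classification of goods by rivalry and excludability (the example goods are my own illustrative picks):

```python
# Standard 2x2 taxonomy of goods by rivalry and excludability.
def classify(rivalrous: bool, excludable: bool) -> str:
    if rivalrous and excludable:
        return "private good"          # e.g. a book on your own shelf
    if rivalrous and not excludable:
        return "common-pool resource"  # e.g. an ocean fishery
    if not rivalrous and excludable:
        return "club good"             # e.g. a subscription library
    return "public good"               # e.g. national defense

print(classify(rivalrous=False, excludable=True))  # club good
```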
So, it turns out that power affects what kind of moral reasoning a person uses.
Yes, degrees of rivalrousness and excludability exist on a continuum, but that's irrelevant here. Scientific knowledge isn't nonexcludable.
Let's be precise with our language. Scientific knowledge is produced in respected, formal, peer-reviewed journals, and such journals charge for access to that knowledge. We shouldn't be sloppy with how we define scientific knowledge; there is a lot of knowledge about science, but that's not the same thing as scientific knowledge, which is produced by a specific, formal, institutional process.
Mancur Olson's The Logic of Collective Action might serve as a very useful theoretical tool here, for talking about groups. We might extend Olson's analysis by thinking of how different kinds of groups produce rationality and scientific information.
I'm sorry; how is scientific knowledge a public good? Yes, it is nonrivalrous in consumption, but certainly not nonexcludable. Legitimate, peer-reviewed journals charge for subscriptions, individual issues, or even for individual articles online.
Via Tyler Cowen, Max Albert has a paper critiquing Bayesian rationality.
It seems pretty shoddy to me, but I'd appreciate analysis here. The core claims seem more like word games than legitimate objections.
This sounds very Foucauldian, almost straight out of Discipline and Punish.
I'm not Seth Godin, by the way.
We can discuss both epistemic and instrumental rationality.
So I finally picked up a copy of Probability Theory: The Logic of Science, by E.T. Jaynes. It's pretty intimidating and technical, but I was surprised how much prose there is, which makes it quite palatable. We should recommend it more here on Less Wrong.
I find excruciating honesty a worthy ideal, but not everyone is prepared for it. Plainly describing everything you intend to signal and counter-signal might come off as eccentric, but it's worth doing if you can pull it off. It requires the right type of audience.
Eliezer, how is progress coming on the book on rationality? Will the body of it be the sequences here, but polished up? Do you have an ETA?
Yes! Both you and Kaj Sotala seem right on the money here. Deontology falls flat. A friend once observed to me that consequentialism is a more challenging stand to take because one needs to know more about any particular claim to defend an opinion about it.
I know it's been discussed here on Less Wrong, but Jonathan Haidt's research is really great, and relevant to this discussion. Professor Haidt's work has validated David Hume's assertions that we humans do not reason to our moral conclusions. Instead, we intuit about the morality of an action, and then provide shoddy reasoning as justification one way or the other.
Mike Gibson has a great and interesting question. How would Bayesian methodology address this? Might this be an information cascade?
I think the overjustification effect might be at play.
The overjustification effect occurs when an external incentive such as money or prizes decreases a person's intrinsic motivation to perform a task. According to self-perception theory, people pay more attention to the incentive, and less attention to the enjoyment and satisfaction that they receive from performing the activity. The overall effect is a shift in motivation to extrinsic factors and the undermining of pre-existing intrinsic motivation.
In this case, the reward is status. It's important to note that the person must anticipate the reward, though. People might explicitly seek status, but even subconsciously seeking status might provide enough anticipation to create the effect.
I am taking Eliezer's definition of "stupidity" to mean increased incompetence in the field wherein the person gained status, where we would otherwise expect high competence. That competence would decline as the overjustification effect diminishes the person's interest in the field.
Yes, this sounds more like a problem with textbooks than with science itself.
Textbooks are often censored for political reasons, such as Japanese textbooks' treatment of Nanjing, or American textbooks' treatment of the Japanese internment camps.
This is hard science though, so this won't suffice as an explanation. I fear that people are attached to superstitions about how the brain works. Maybe people like an inaccurately simplified explanation of the brain that claims that specific, local parts of the brain perform specific functions.
We know that fMRI research is pretty sketchy, but even smart people like Sam Harris seem to rely on it too much.
For a debate involving complex religious, scientific, or political arguments, this won't suffice.
I am calling attention to reverting to "life experience" as recourse in an argument. If someone strays to that, it's clear that we're no longer considering evidence for whatever the argument is about. Referring back to "life experience" is far too nebulous to count as evidence of anything.
As for what constitutes legitimate evidence: even if anecdotes can correlate with the truth, anecdotes are not evidence!
OK, let me break it down.
I take "life experience" to mean a haphazard collection of anecdotes.
Claims from haphazardly collected anecdotes do not constitute legitimate evidence, though I concede those claims do often have positive correlations with true facts.
As such, relying on "life experience" is not rational. The point about condescension is tangential. The whole rhetorical technique is frustrating, because there is no way to move on from it. If "life experience" were legitimate evidence for the claim, the argument would not be able to continue until I have gained more "life experience," and who decides how much would be sufficient? Would it be until I come around? Once we throw the standard of evidence out, we're outside the bounds of rational discourse.
Noted. In another draft I'll change this to make the point about how easy it is for high-status academics to deal in gibberish. Maybe they didn't have much status outside their group of peers, but within it, did they?
What the Social Text Affair Does and Does Not Prove
http://www.physics.nyu.edu/faculty/sokal/noretta.html
"From the mere fact of publication of my parody I think that not much can be deduced. It doesn't prove that the whole field of cultural studies, or cultural studies of science -- much less sociology of science -- is nonsense. Nor does it prove that the intellectual standards in these fields are generally lax. (This might be the case, but it would have to be established on other grounds.) It proves only that the editors of one rather marginal journal were derelict in their intellectual duty, by publishing an article on quantum physics that they admit they could not understand, without bothering to get an opinion from anyone knowledgeable in quantum physics, solely because it came from a conveniently credentialed ally'' (as Social Text co-editor Bruce Robbins later candidly admitted[12]), flattered the editors' ideological preconceptions, and attacked their
enemies''.[13]"
If someone's argument, and therefore position, is irrational, how can we trust them to give honest and accurate criticism of other arguments?
Yes, thank you.
Maybe "authority" is the wrong word. What I mean is that the opponent making this claim is dismissing my stance as wrong, because of my supposed less experience. It means that they believe that truth follows from collecting anecdotes. They ascertain that because they have more anecdotes, they are correct, and I am incorrect. For not being rational, we can't trust their standard of truth to dismiss my position as wrong, since their whole methodology is hopelessly flawed.
All,
Thanks for the votes. So, I'm not exactly sure how the karma system works. On the main page I see articles from people with fewer than 50 points, and I see prominent users who have nonsensically low counts. Do I still need 50 points to post a main article?
Hello all,
I've been a longtime lurker, and tried to write up a post a while ago, only to find that I didn't have enough karma. I figure this is the place for a newbie to present something new. I already published this particular post on my personal blog, but if the community here enjoys it enough to give it karma, I'd gladly turn it into a top-level post here, if that's in order.
Life Experience Should Not Modify Your Opinion http://paltrypress.blogspot.com/2009/11/life-experience-should-not-modify-your.html
When I'm debating some controversial topic with someone older than I am, even if I can thoroughly demolish their argument, I am sometimes met with a troubling claim: that perhaps as I grow older, my opinions will change, or that I'll come around on the topic. Implicit in this claim is the assumption that my opinion is based on nothing more than my perception from personal experience.
When my cornered opponent makes this claim, it's a last resort. It's unwarranted condescension, and it reveals how wrong their entire approach is. Just by making the claim, they demonstrate that they believe all opinions, even their own, are based primarily on an accumulation of personal experiences. Their assumption reveals that they are not Bayesian, and that they intuit that no one is. Because they are not Bayesian, they have no authority that warrants such condescension.
I intentionally avoid presenting personal anecdotes cobbled together as evidence, because I know that projecting my own experience onto a situation to explain it is no evidence at all. I know that I suffer from all sorts of cognitive biases that obstruct my understanding of the truth. As such, my inclination is to rely on academic consensus. If I explain this explicitly to my opponent, they might dismiss academics as unreliable and irrelevant, hopelessly stuck in the ivory tower of academia.
Dismiss academics at your own peril. Sometimes there are very good reasons for dismissing academic consensus. I concede that most academics aren't Bayesian because academia is an elaborate credentialing and status-signaling mechanism. Furthermore, academics have often been wrong. The Sokal affair illustrates that entire fields can exist completely without merit. That academic consensus can easily be wrong should be intuitively obvious to an atheist; religious community leaders have always been considered academic experts, the most learned and smartest members of society. Still, it would be a fallacious inversion of an argument from authority to dismiss academic consensus simply because it is academic consensus.
For all of academia's flaws, the process of peer-reviewed scientific inquiry, informed by logic, statistics, and regression analysis, offers a better chance at discovering truth than any other institution in history. It is noble and desirable to criticize academic theories, but only as part of intellectually honest, impartial scientific inquiry. Dismissing academic consensus out of hand is primitive, and indicates intellectual dishonesty.