No One Knows Stuff
post by talisman · 2009-05-12T05:11:45.183Z · LW · GW · Legacy · 47 comments
Take a second to go upvote You Are A Brain if you haven't already...
Back? OK.
Liron's post reminded me of something that I meant to say a while ago. In the course of giving literally hundreds of job interviews to extremely high-powered technical undergraduates over the last five years, one thing has become painfully clear to me: even very smart and accomplished and mathy people know nothing about rationality.
For instance, reasoning by expected utility, which you probably consider too basic to mention, is something they absolutely fall flat on. Ask them why they choose as they do in simple gambles involving risk, and they stutter and mutter and fail. Even the Econ majors. Even--perhaps especially--the Putnam winners.
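(For readers who haven't seen this kind of reasoning, here is the sort of toy gamble I mean, with made-up numbers -- a quick Python sketch, not anything from an actual interview question:)

from math import sqrt
# Toy gamble with purely illustrative numbers: a sure $40 versus a 50% chance at $100.
p_win, prize, sure_thing = 0.5, 100.0, 40.0
expected_value = p_win * prize          # 0.5 * 100 = 50 > 40, so the gamble has higher expected value
# A risk-neutral agent takes the gamble; a risk-averse agent with a concave
# utility function, say u(x) = sqrt(x), can coherently prefer the sure $40:
eu_gamble = p_win * sqrt(prize)         # 0.5 * 10 = 5.0
eu_sure = sqrt(sure_thing)              # about 6.32, so the sure thing wins on expected utility
print(expected_value, eu_gamble, eu_sure)

That expected-value-versus-expected-utility distinction is the kind of thing I mean.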
Of those who have learned about heuristics and biases, a nontrivial minority have gotten confused to the point that they offer Kahneman and Tversky's research as justifying their exhibition of a bias!
So foundational explanatory work like Liron's is really pivotal. As I've touched on before, I think there's a huge amount to be done in organizing this material and making it approachable for people who don't have the basics. Who's going to write the Intuitive Explanation of Utility Theory?
Meanwhile, I need to brush up on my Python and find a way to upvote Liron more than once. If only...
Update: Tweaked language per suggestion, added Kahneman and Tversky link.
47 comments
Comments sorted by top scores.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-12T06:48:25.541Z · LW(p) · GW(p)
Of those who have learned about heuristics and biases, a nontrivial minority have gotten confused to the point that they offer Kahnemann and Tversky's research as justification for the self-arbitrages they've set up!
Suggested alternative wording:
"Of those who have learned about heuristics and biases, a nontrivial minority are so confused as to point to the biases research as justifying their exhibition of a bias!"
Replies from: John_Maxwell_IV, JGWeissman, marianasoffer
↑ comment by John_Maxwell (John_Maxwell_IV) · 2009-05-13T04:48:22.854Z · LW(p) · GW(p)
It's interesting that this correction has a higher score than the post itself.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-13T20:22:05.070Z · LW(p) · GW(p)
People don't seem to vote posts up or down with the same enthusiasm as they vote on comments. Why? I do not know.
Replies from: jscn
↑ comment by JGWeissman · 2009-05-12T07:05:03.313Z · LW(p) · GW(p)
I strongly agree. As an anecdotal data point, I understood the suggested alternative but not the original wording. And it would be a powerful point to miss just because I haven't heard of Kahneman and Tversky.
Also, if mentioning specific researchers were central to the point, I would recommend linking to a resource about them or, better yet, creating entries for them on the Less Wrong Wiki and linking to those.
Replies from: talisman, Emile
↑ comment by Emile · 2009-05-12T08:28:34.978Z · LW(p) · GW(p)
Seconded! Those names didn't ring a bell for me either, though I'm familiar with the results from Prospect Theory (I probably read about them on OB), and that's probably what talisman was referring to.
Replies from: talisman
↑ comment by marianasoffer · 2009-05-12T08:18:10.874Z · LW(p) · GW(p)
Completely agree that people just use methods such as tabu search, A*, etc., without understanding them at all; the same happens with machine learning techniques, or even statistical ones. Mostly they get by using the recommended algorithm/metaheuristic for the domain they are working in.
I strongly recommend Python for doing this; it is the best language to begin programming with. I have several programs I wrote myself, and I can collaborate on the project.
Replies from: SoullessAutomaton
↑ comment by SoullessAutomaton · 2009-05-12T09:44:15.256Z · LW(p) · GW(p)
I strongly recommend Python for doing this; it is the best language to begin programming with. I have several programs I wrote myself, and I can collaborate on the project.
Not disagreeing with you here, but you seem to have missed the implication; the reason Python was mentioned is that LessWrong is written in it.
Replies from: marianasoffer
↑ comment by marianasoffer · 2009-05-13T06:20:26.985Z · LW(p) · GW(p)
Thanks for clarifying; I did not know that. I guess I have to read the introduction first.
comment by MrHen · 2009-05-12T13:33:04.037Z · LW(p) · GW(p)
Take a second to go upvote You Are A Brain if you haven't already...
This is extremely off-topic, but please do not tell me what to upvote. I actually downvoted that post because the slideshow was completely useless to me and I thought its quality was poor. This isn't to slam Liron; his post just didn't do it for me.
But just because you really, really liked it doesn't mean you get to tell me what to like.
Replies from: talisman, None
↑ comment by talisman · 2009-05-12T14:16:51.106Z · LW(p) · GW(p)
I actually think Liron's slideshow needs a lot of work, but it seems very much like the kind of thing LWers should be trying to do out in the world.
the slideshow was completely useless to me
Yes, of course it was. It was created for teenagers who are utterly unfamiliar with this way of thinking.
its quality was poor
OK. Can you improve it or do better?
Replies from: MrHen
↑ comment by MrHen · 2009-05-12T15:06:15.644Z · LW(p) · GW(p)
I actually think Liron's slideshow needs a lot of work, but it seems very much like the kind of thing LWers should be trying to do out in the world.
I would agree.
OK. Can you improve it or do better?
Possibly, but I have little reason to do so since this sort of thing is not particularly applicable to my life.
Of note, I am not trying to be a jerk or make this a big deal. My comment really has little to do with Liron's post. It has everything to do with you telling me to upvote something. I just, politely, want you to not do that again. I had typed up more details on why I downvoted but they are irrelevant for what I wanted to say to you.
comment by MichaelVassar · 2009-05-12T10:57:56.487Z · LW(p) · GW(p)
Talisman: what line of work are you in where you interview enough Putnam winners to have a reasonable sample size? Seriously, write to me at my SIAI email about this and I'll try to work it into our recruitment strategy if there's any practical way to do so.
Replies from: SilasBarta
↑ comment by SilasBarta · 2009-05-12T13:31:36.392Z · LW(p) · GW(p)
And likewise, can I apply for whatever position you're interviewing these people for? (I mean talisman, not SIAI. I think SIAI requires such unreasonable skills out of me as "tact" and "not voicing why you think other people are idiots".) I'm sure I'd be in the top 1%.
Replies from: MichaelVassar
↑ comment by MichaelVassar · 2009-05-12T15:38:24.422Z · LW(p) · GW(p)
No, we definitely don't require that, but we are a LOT more selective than, say, Goldman or D.E. Shaw in other ways, and we do require that you be able to function as part of a team.
Replies from: SilasBarta
↑ comment by SilasBarta · 2009-05-12T16:27:41.700Z · LW(p) · GW(p)
Wow! That's tough! I don't know if I could ever be more qualified than someone who nearly shut down the entire financial system! ;-)
Replies from: orthonormal
↑ comment by orthonormal · 2009-05-12T16:57:13.445Z · LW(p) · GW(p)
Well, it generally does take geniuses to achieve something monumentally stupid. Same principle as (unintentionally) creating an Unfriendly AI: current researchers are not competent enough to pose any reasonable risk of it, but a future genius might just fail hard enough.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-12T06:50:58.813Z · LW(p) · GW(p)
For instance, reasoning by expected utility, which you probably consider too basic to mention
Actually, I consider it too complicated for my first book! That's going to focus on getting across even more basic concepts like 'the point of reasoning about your beliefs is to function as a mapping engine that produces correlations between a map and the territory' and 'strong evidence is the sort of evidence we couldn't possibly find if the hypothesis were false'.
Replies from: talisman, MichaelHoward, ciphergoth, hrishimittal
↑ comment by talisman · 2009-05-12T11:43:31.603Z · LW(p) · GW(p)
Funny. I feel like on OB and LW utility theory is generally taken as the air we breathe.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-13T04:45:57.805Z · LW(p) · GW(p)
It is - but that's OB and LW.
↑ comment by MichaelHoward · 2009-05-12T12:59:08.749Z · LW(p) · GW(p)
'strong evidence is the sort of evidence we couldn't possibly find if the hypothesis were false'.
-blink-
If you mean this, please elaborate. If not, please change the wording before you confuse the living daylights out of some poor newcomer.
Edit: I'm not nitpicking him for infinite certainty. I acknowledge it's reasonable informally to tell me a ticket I'm thinking of buying couldn't possibly win the lottery. That's not what I mean. I mean that even finding some overwhelmingly strong evidence doesn't necessarily mean the hypothesis is overwhelmingly likely to be true. If the comment's misleading, then given its subject it seems worth pointing out!
Example: Say you're randomly chosen to take a test with a false positive rate of 1% for a cancer that occurs in 0.1% of the population, and it returns positive. That's strong evidence for the hypothesis that you have that cancer, but the hypothesis is probably false.
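(Worked out, with the additional assumption -- not stated above -- that the test catches essentially every real case; a quick Python sketch:)

prior = 0.001                  # the cancer occurs in 0.1% of the population
p_pos_given_sick = 1.0         # assumed for illustration: essentially no false negatives
p_pos_given_well = 0.01        # 1% false positive rate
p_pos = p_pos_given_sick * prior + p_pos_given_well * (1 - prior)
p_sick_given_pos = p_pos_given_sick * prior / p_pos
print(p_sick_given_pos)        # about 0.09: even after a positive result, only ~9% chance of cancer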
Replies from: Nick_Tarleton, SilasBarta, Annoyance
↑ comment by Nick_Tarleton · 2009-05-12T15:48:25.344Z · LW(p) · GW(p)
Strongly seconded. Generally, it seems to me that Eliezer frequently seriously confuses people by mixing literal statements with hyperbole like this or "shut up and do the impossible". I definitely see the merit of the greater emotional impact, but I hope there's some way to get it without putting off the unusually literal-minded (which I expect most people who will get anything out of OB or The Book are).
↑ comment by SilasBarta · 2009-05-12T17:34:34.946Z · LW(p) · GW(p)
Yeah, that is kind of tricky. Let me try to explain what Eliezer_Yudkowsky meant in terms of my preferred form of Bayes' theorem:
O(H|E) = O(H) * P(E|H) / P(E|~H)
where O indicates odds instead of probability and | indicates "given".
In other words, "any time you observe evidence, amplify the odds you assign to your beliefs by the probability of observing the evidence if the belief were true, divided by the probability of observing it if the belief were false."
Also, keep in mind that Eliezer_Yudkowsky has written about how you should treat very low probability events as being "impossible", even though you have to assign a non-zero probability to everything.
Nevertheless, his statement still isn't literally true. The strength of the evidence depends on the ratio P(E|H)/P(E|~H), while the quoted statement only refers to the denominator. So there can be situations where you have 100:1 odds of seeing E if the hypothesis were true, but 1:1000 odds (about a 0.1% chance) of seeing E if it were false.
Such evidence is very strong -- it forces you to amplify the odds you assign to H by a factor of roughly 1,000 -- but it's far from evidence you "couldn't possibly find", which to me would mean something like a 1-in-10^10 chance of seeing it if H were false.
Still, Eliezer_Yudkowsky is right that, generally, strong evidence will have a very small denominator.
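(A minimal Python sketch of that odds-form update, using the numbers above plus an arbitrary 1% prior for H chosen purely for illustration:)

def odds_to_prob(odds):
    return odds / (1.0 + odds)

p_e_given_h = odds_to_prob(100.0)            # 100:1 odds of seeing E if H is true
p_e_given_not_h = odds_to_prob(1.0 / 1000)   # 1:1000 odds of seeing E if H is false
likelihood_ratio = p_e_given_h / p_e_given_not_h   # roughly 1,000
prior_odds = 0.01 / 0.99                     # a 1% prior probability for H, written as odds
posterior_odds = prior_odds * likelihood_ratio
print(likelihood_ratio, odds_to_prob(posterior_odds))   # ~991, posterior probability ~0.91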
EDIT: added link
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2009-05-12T18:50:42.402Z · LW(p) · GW(p)
In comments like this, we should link to the existing pages of the wiki, or create stubs of the new ones.
Bayes' theorem on LessWrong wiki.
↑ comment by Annoyance · 2009-05-12T13:21:04.572Z · LW(p) · GW(p)
Strong evidence is evidence that, given certain premises, has no chance of arising.
Of course, Eliezer has also claimed that nothing can have no chance of arising (probability zero), so it's easy to see how one might be confused about his position.
Traditionally, evidence that has less than a particular probability of arising given the truth of a hypothesis (usually 5%) is considered to be strong, but that's really an arbitrary decision.
Replies from: Cyan
↑ comment by Cyan · 2009-05-12T14:59:13.055Z · LW(p) · GW(p)
Traditionally, evidence that has less than a particular probability of arising given the truth of a hypothesis (usually 5%) is considered to be strong, but that's really an arbitrary decision.
Correction: traditionally evidence against an hypothesis is considered strong if the chance of that evidence or any more extreme evidence arising given the truth of the hypothesis is less than an arbitrary value. (If this tradition doesn't make sense to you, you are not alone.)
↑ comment by Paul Crowley (ciphergoth) · 2009-05-12T12:44:12.606Z · LW(p) · GW(p)
I'm really surprised to hear you say that - I would have thought it was pretty fundamental. Don't you at least have to introduce "shut up and multiply"?
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2009-05-12T14:19:56.485Z · LW(p) · GW(p)
First, you have to explain why relying on external math, rather than on a hunch, is a good idea. Second, you need to present a case for why shutting up and multiplying in this particular way is a good idea.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2009-05-12T14:27:18.154Z · LW(p) · GW(p)
First, you have to explain why relying on external math, rather than on a hunch, is a good idea.
That applies to Bayesian reasoning too, doesn't it?
Second, you need to present a case for why shutting up and multiplying in this particular way is a good idea.
That's in some ways easier - basically this comes down to standard arguments in decision theory, I think...
Replies from: Vladimir_Nesov, Nick_Tarleton
↑ comment by Vladimir_Nesov · 2009-05-12T14:36:06.524Z · LW(p) · GW(p)
This applies to anything, including excavators and looking up weather on the Internet. You have to trust your tools, which is especially hard where your intuition cries "Don't trust! It's dangerous! It's useless! It's wrong!". The technical level, where you go into the details of how your tools work, is not fundamentally different in this respect.
Here I'm focusing not on defining what is actually useful, or right, or true, but on looking into the process of how people can adopt useful tools or methods. A decision of some specific human ape-brain is a necessary part, even if the tool in question is some ultimate abstract ideal nonsense. I'm brewing a mini-sequence on this (2 or 3 posts).
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2009-05-12T15:21:42.301Z · LW(p) · GW(p)
I think that if there is such a thing as x-rationality, its heart is that mathematical models of rationality based on probability and decision theory are the correct measure against which we compare our own efforts.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2009-05-12T15:34:06.807Z · LW(p) · GW(p)
At which point you run into a problem of formalization and choice of parameters, which is the same process of ape-brain-based decision-making. A statement that in some sense, decision theory/probability theory math is the correct way of looking at things, is somewhat useful, but doesn't give the ultimate measure (and lacks so much important detail). Since x-rationality is about human decision-making, a large part of it is extracting correct decisions out of your native architecture, even if these decisions are applied to formalization of problems in math.
↑ comment by Nick_Tarleton · 2009-05-12T16:02:41.403Z · LW(p) · GW(p)
That's in some ways easier - basically this comes down to standard arguments in decision theory, I think...
Since real gambles are always also part of the state of the world that one's utility function is defined over, you also need the moral principle that there shouldn't be (dis)utility attached to their structure. Decision theory strictly has nothing to say to the person who considers it evil to gamble with lives (operationalized as not taking the choice with the lowest variance in possible outcomes, or whatever), although it's easy to make it sound like it does. The moral principle here seems intuitive to me, but I have no idea if it is in general. (Something to Protect is the only post I can think of dealing with this.)
↑ comment by hrishimittal · 2009-05-12T09:16:06.928Z · LW(p) · GW(p)
I don't really know the formal definition or theory of expected utility, but it is something which seems to underpin almost everything that is said here on LW or on OB.
Can anyone please point me to a good reference or write a wiki entry?
Are the wikipedia references recommended?
Replies from: conchis, timtyler
↑ comment by conchis · 2009-05-12T10:26:12.051Z · LW(p) · GW(p)
The Wikipedia reference is a bit patchy. This Introduction to Choice under Risk and Uncertainty is pretty good if you have a bit more time and can handle the technical parts.
Replies from: hrishimittal
↑ comment by hrishimittal · 2009-05-12T11:16:16.346Z · LW(p) · GW(p)
Thanks conchis.
↑ comment by timtyler · 2009-05-12T18:33:47.676Z · LW(p) · GW(p)
Perhaps check my references here:
http://timtyler.org/expected_utility_maximisers/
Replies from: thomblake
↑ comment by thomblake · 2009-05-12T19:00:14.396Z · LW(p) · GW(p)
Thanks! I hadn't heard that definition of utilitarianism before.
Replies from: timtyler
↑ comment by timtyler · 2009-05-12T22:01:05.317Z · LW(p) · GW(p)
As I recall, I made this up to suit my own ends :-(
Wikipedia quibbles with me significantly - stressing the idea that utilitarianism is a form of consequentialism:
"Utilitarianism is the idea that the moral worth of an action is determined solely by its contribution to overall perceivable utility: that is, its contribution to happiness or pleasure as summed among an ill-defined group of people. It is thus a form of consequentialism, meaning that the moral worth of an action is determined by its outcome."
I don't really want "utilitarianism" to refer to a form of consequentialism - thus my crude attempt at hijacking the term :-|
Replies from: thomblake
↑ comment by thomblake · 2009-05-13T14:20:11.461Z · LW(p) · GW(p)
I hadn't even considered the possibility that your definition might lead to a 'utilitarianism' that is not consequentialist. In some circles, the two terms are used interchangeably. Sounds akin to 'rule utilitarianism', but more interesting - the right action is one that maximizes expected utility, regardless of its actual consequences. Does that sound like a good enough characterization?
Replies from: timtyler
↑ comment by timtyler · 2009-05-13T15:51:34.191Z · LW(p) · GW(p)
I would still be prepared to call an agent "utilitarian" if it operated via maximising expected utility - even if its expectations turned out to be completely wrong, and its actions were far from those that would have actually maximised utility.
Humans are often a bit like this. They "expect" that hoarding calories is a good idea - and so that is what they do. Actually this often turns out to be not so smart. However, this flaw doesn't make humans less utilitarian in my book - rather they have some bad priors - and they are wired-in ones that are tricky to update.
comment by Cameron_Taylor · 2009-05-12T10:30:52.087Z · LW(p) · GW(p)
Meanwhile, I need to brush up on my Python and find a way to upvote Liron more than once.
That doesn't require Python... it requires rudimentary general problem-solving ability and a certain disrespect for the spirit of the law; if automation were desired, it could be implemented in one of many languages.
comment by pjeby · 2009-05-12T17:23:58.389Z · LW(p) · GW(p)
Kahneman and Tversky's research
Holy crap that's useful. System 1 and System 2 correspond almost exactly to my Savant/Speculator distinction, and other bits of the paper support monoidealism and my recent work teaching myself and others to act more confidently and creatively (not to mention improving learning) through explicit deferment of System 2 thinking during task performance. And that's just what I got from a light and partial skimming of the paper. It's going to take a chunk of time out of my day to absorb it all, but it's gonna be worth it.