Arthur Chu: Jeopardy! champion through exemplary rationality
post by syllogism · 2014-02-02T08:02:13.976Z · LW · GW · Legacy · 24 comments
http://mentalfloss.com/article/54853/our-interview-jeopardy-champion-arthur-chu
I'm not sure I've ever seen such a compelling "rationality success story". There's so much that's right here.
The part that really grabs me about this is that there's no indication that his success depended on "natural" skill or talent. And none of the strategies he's using come from novel research. He just studied the "literature" and took the results seriously. He didn't arbitrarily deviate from known best practice based on aesthetics or intuition. And he kept a simple, single-minded focus on his goal. No lost purposes here: just win as much money as possible, bank the winnings, and use it to self-insure. It's rationality-as-winning, plain and simple.
24 comments
comment by CronoDAS · 2014-02-02T22:31:37.202Z · LW(p) · GW(p)
More awesomeness/munchkining by Arthur Chu:
In the Final Jeopardy round on his final day, he was in the lead with $18,200. The second-place contestant had $13,400, and the third-place contestant had $8,400. Arthur Chu wagered $8,600. Why? Because if there's a tie score at the end of a Jeopardy! match, both players win their score in cash and come back the next day, and $8,600 was exactly the amount Chu had to wager to reach exactly twice the second-place player's current score. And, as it happened, the second-place player wagered everything, she and Chu both got the right answer, and the game ended in a tie. This is actually a better outcome for Chu than an outright win would be: after all, Chu already knows he can beat the player in second place, but a new opponent might turn out to be one he can't.
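The wager arithmetic in that story checks out in a few lines (a sketch using only the scores quoted above):

```python
# Scores going into Final Jeopardy, as quoted above.
leader, second = 18200, 13400

# Wagering for the tie: pick the wager so that a correct answer
# lands on exactly twice the second-place score.
tie_wager = 2 * second - leader
print(tie_wager)  # 8600

# If second place bets everything and both answer correctly,
# both finish on the same total and both return the next day.
print(leader + tie_wager == second + second)  # True
```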
↑ comment by AndrewE · 2014-02-04T16:25:10.360Z · LW(p) · GW(p)
There's actually even more to it than occurred to me at first glance. Many times a player in second will wager low enough that, if both they and the person in the lead miss the question, the player in second will win (because the player in first has to wager enough to stay in the lead if they get the question right). But the person in second just watched Chu wager for the tie the day before. If they think Chu will do that again (and why not?), then the correct play is to wager everything for the tie. And now Chu wins if they both get it right (because it's a tie), and he also wins if they both get it wrong.
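That payoff logic can be enumerated directly (a sketch; the scores come from the parent comment, and a tie counts as a win for the leader since both players advance):

```python
from itertools import product

leader_score, second_score = 18200, 13400
leader_wager = 2 * second_score - leader_score  # wagering for the tie

def final(score, wager, right):
    """Final total after answering right or wrong."""
    return score + wager if right else score - wager

# Compare second place's responses to the tie wager:
# betting everything vs. betting small (here, zero).
for second_wager in (second_score, 0):
    for leader_right, second_right in product((True, False), repeat=2):
        l = final(leader_score, leader_wager, leader_right)
        s = final(second_score, second_wager, second_right)
        print(second_wager, leader_right, second_right,
              "leader wins/ties" if l >= s else "second wins")
```

Against an all-in second-place wager, the leader loses only when second place alone answers correctly; against a small wager, the leader also loses the case where both miss, which is the scenario described above.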
↑ comment by jefftk (jkaufman) · 2014-02-06T20:00:45.390Z · LW(p) · GW(p)
just watched Chu wager for the tie the day before
It's better than this: they tape them one after another, so it would have been fresh in their mind.
comment by CronoDAS · 2014-02-02T21:55:33.577Z · LW(p) · GW(p)
There's a similar story about Dr. Joyce Brothers winning the grand prize on "The $64,000 Question" by agreeing to choose "Boxing" as her category and then doing a lot of studying. (As it turns out, "Boxing" was a narrow enough category that one person really could learn pretty much everything that a game show could ask about it.)
comment by gwern · 2014-02-02T16:01:32.071Z · LW(p) · GW(p)
And of course, Roger Craig helped his Jeopardy! performance through judicious use of Anki/spaced repetition to memorize past answers.
↑ comment by komponisto · 2014-02-02T19:01:11.486Z · LW(p) · GW(p)
So did Chu, according to the link.
comment by Josh You (scrafty) · 2014-02-03T05:53:01.313Z · LW(p) · GW(p)
He also likes arguing with Jeff Kaufman about effective altruism.
↑ comment by jefftk (jkaufman) · 2014-02-03T20:07:31.460Z · LW(p) · GW(p)
I went to college with him. More of us arguing: http://www.jefftk.com/p/value-and-money
He doesn't like LessWrong at all, for reasons similar to Apophemi's.
↑ comment by [deleted] · 2014-02-05T03:36:29.522Z · LW(p) · GW(p)
I was curious about the second link. The gist appears to be:
In other words, even when the people speaking loudest or most eloquently don’t intentionally discourage participation from people who are not like them... entertaining ‘politically incorrect’ or potentially harmful ideas out loud, in public (so to speak) signals people who would be impacted by said ideas that they are not welcome.
I think the idea is that rationalists are more likely to entertain potentially offensive ideas under the premise that traditional morals/boundaries/taboos may interfere or bias totally clear, unflinching rational thinking.
↑ comment by jefftk (jkaufman) · 2014-02-05T13:14:10.877Z · LW(p) · GW(p)
Yup! And Arthur thinks some ideas (racism, sexism, etc) are harmful to discuss, partly because the discussion legitimizes them.
↑ comment by pianoforte611 · 2014-02-05T00:48:58.832Z · LW(p) · GW(p)
Sounds like he would get along well with Multiheaded.
↑ comment by chaosmage · 2014-02-04T13:03:16.189Z · LW(p) · GW(p)
Where did he get his rationalist training, then? If he's part of a rationalist community that's not LW, I'd consider him a reason to check that community out.
↑ comment by jefftk (jkaufman) · 2014-02-04T16:15:48.864Z · LW(p) · GW(p)
He's a smart guy who's spent a lot of time around other smart people.
Around here people tend to overestimate how much "rationality training" LessWrong provides and underestimate the level of rationality present in the rest of society.
↑ comment by syllogism · 2014-02-03T08:59:43.172Z · LW(p) · GW(p)
Can't say I'm impressed with his reasoning there.
Interesting.
↑ comment by pianoforte611 · 2014-02-05T00:51:40.607Z · LW(p) · GW(p)
Why not? If you can't easily quantify a line of action, then it's out as a candidate for effective altruism, which removes a very large set of possible causes (most of them, in fact). And there is no reason to think that the currently quantifiable causes are the most effective.
↑ comment by Nornagest · 2014-02-05T01:16:06.627Z · LW(p) · GW(p)
Unless we have some particularly good reason to think that the unquantifiable ones (or some known subset) have better returns on average than the quantifiable ones, that's at best an argument for spending more resources on improving quantification.
↑ comment by pianoforte611 · 2014-02-05T06:16:36.030Z · LW(p) · GW(p)
This is beginning to sound like burden-of-proof tennis. I claim that most causes cannot currently be quantified to any useful degree; therefore restricting yourself to the quantifiable ones is a mistake. This isn't a problem that can be brute-forced: what will be the effect of, say, open borders? Or human genetic engineering? Or charter cities? Or a new Standard Model of physics? Or basic income guarantees? No one has a damned clue, although many would like to pretend. The list goes on and on.
But even for the somewhat quantifiable causes, the long term effects cannot be quantified. What is the effect of significant charitable giving on a person's other spending patterns? Does giving to charity reduce the number of children that effective altruists have, and is this a dysgenic effect? If so, is the dysgenic effect compensated for by the effects of the charity? Are effects such as this a problem? How does charitable giving affect the giver's work patterns, ambitions and life's trajectory? And of course, what is the long term effect of charitable giving on the target country's development and institutions? That last effect is probably where most of the positive consequences of giving show up, not silly short term metrics like DALYs. GiveWell's research into long term effects is noticeably much less than their research into short term effects - even though the long term effects dominate in the long run.
↑ comment by jefftk (jkaufman) · 2014-02-06T20:12:50.059Z · LW(p) · GW(p)
GiveWell's research into long term effects is noticeably much less than their research into short term effects - even though the long term effects dominate in the long run.
You might be interested in this transcript of a discussion between Holden Karnofsky of GiveWell, several people from Giving What We Can, and a few other EAs: http://www.jefftk.com/p/flow-through-effects-conversation
↑ comment by jefftk (jkaufman) · 2014-02-06T20:10:21.017Z · LW(p) · GW(p)
If you can't quantify a line of action easily then it's out as a candidate for effective altruism
If effective altruism only allowed working on things that were easily quantified, then GiveWell, 80,000 Hours, and Giving What We Can would all be unfunded and unstaffed. Most of the benefit of those organizations is unclear and very hard to measure, but there are rough reasons to think they're very important. Effective altruism is about doing as much good as possible with the resources you have, all things considered. Lines of action that are hard to evaluate do need to be more promising than similar, more easily quantified approaches to be worth working on, but that's a pragmatic response to uncertainty.
comment by knb · 2014-02-02T23:35:15.161Z · LW(p) · GW(p)
If I recall correctly, Brad Rutter also used spaced repetition and actually built a replica of the Jeopardy! buzzer system to practice buzzing in as quickly as possible. He is the highest-earning game show contestant of all time.
In 20 regular season and tournament games, Rutter has never lost a Jeopardy! match (though he twice trailed at the end of the first game of a two-day match before coming back to win in the second game—against Rick Knutsen in the finals of the 2001 Tournament of Champions, and against John Cuthbertson in the semifinals of the Ultimate Tournament of Champions).
He did lose against Watson, though.
comment by protest_boy · 2014-02-03T00:42:19.088Z · LW(p) · GW(p)
I'm not sure that he doesn't have "natural" skill or talent. I can't find the link now, but I remember reading that he has an extremely high IQ (or something something eidetic memory something something?).
His standup comedy routines feature motifs about how much smarter he is than everyone else, etc. (anecdata)