[META] Alternatives to rot13 and karma sinks 2012-04-17T20:01:45.447Z · score: 18 (21 votes)
A call for solutions and a tease of mine 2012-01-19T14:40:39.158Z · score: -3 (34 votes)
Histocracy: Open, Effective Group Decision-Making With Weighted Voting 2012-01-17T22:35:42.774Z · score: 14 (25 votes)
Mapping Fun Theory onto the challenges of ethical foie gras 2011-12-07T20:47:10.352Z · score: 36 (36 votes)
[Link] Using the conjunction fallacy to reveal implicit associations 2011-12-06T17:29:08.389Z · score: 4 (5 votes)
Probability puzzle: Coins in envelopes 2011-12-02T05:58:02.986Z · score: 8 (11 votes)
Poker with Lennier 2011-11-15T22:21:20.561Z · score: 15 (24 votes)
[Draft] Poker With Lennier 2011-11-10T17:14:01.171Z · score: 22 (25 votes)
What does your accuracy tell you about your confidence interval? 2011-11-02T19:21:42.249Z · score: 5 (8 votes)
[FICTION] Hamlet and the Philosopher's Stone 2011-10-25T22:31:15.700Z · score: 27 (32 votes)
Real-life observations of the blue eyes puzzle phenomenon? 2011-08-08T17:10:32.957Z · score: 2 (7 votes)
Random advice: Teenage U.S. LW-ers should probably be taking more AP exams 2011-04-07T04:37:37.980Z · score: 24 (25 votes)
Strong substrate independence: a thing that goes wrong in my mind when exposed to philosophy 2011-02-18T01:40:18.733Z · score: 14 (14 votes)
You're in Newcomb's Box 2011-02-05T20:46:20.306Z · score: 40 (72 votes)
Subject X17's Surgery 2010-12-30T19:01:05.812Z · score: 11 (12 votes)
The Benefits of Two Religious Educations 2010-11-18T21:42:50.139Z · score: 8 (9 votes)


Comment by honoredb on Moloch's Toolbox (2/2) · 2017-11-08T18:17:59.517Z · score: 1 (1 votes) · LW · GW

I'm not sure the recursive argument even fully works for the stock market, these days--I suspect it's more like a sticky tradition that crudely mimics the incentive structure that used to exist, like a parasitic vine that still holds the shape of the rotted-away tree it killed. When there's any noise, recursion amplifies it with each iteration: a 1-year lookahead to a 1-year lookahead might be almost the same as a 2-year lookahead, but it's slightly skewed by wanting to take into account short-scale price movements and different risk and time discounting. By the time you get to a 1-year lookahead to a 1-year lookahead to a...*10, it's almost completely (maybe completely) decoupled from the actual 10-year lookahead, with no way to make money off of that decoupling.

Comment by honoredb on Open thread, Oct. 13 - Oct. 19, 2014 · 2014-10-17T04:15:16.742Z · score: 2 (2 votes) · LW · GW

Sure, but it's really hard to anticipate which side will benefit more, so in expected value they're equal. I'm sure some people will think their side will be more effective in how it spends money...I'll try to persuade them to take the outside view.

Comment by honoredb on Open thread, Oct. 13 - Oct. 19, 2014 · 2014-10-14T22:41:55.307Z · score: 2 (2 votes) · LW · GW

Thanks, I'll look him up.

Comment by honoredb on Open thread, Oct. 13 - Oct. 19, 2014 · 2014-10-14T22:39:42.204Z · score: 1 (1 votes) · LW · GW

I think those contributors will probably not be our main demographic, since they have an interest in the system as it is and don't want to risk disrupting it. In theory, though, donating to both parties can be modeled as a costly signal (the implied threat is that if you displease me, the next election I'll only donate to your opponent), and there's no reason you can't do that through our site.

Comment by honoredb on Open thread, Oct. 13 - Oct. 19, 2014 · 2014-10-14T22:36:04.651Z · score: 1 (1 votes) · LW · GW

It seems to be implicit in your model that funding for political parties is a negative-sum arms race.

What army1987 said. The specific assumption is that on the margin, the effect of more funding to both sides is either very small or negative.

In my own view, the most damaging negative-sum arms race is academia.

This is definitely an extendable idea. It gets a lot more complicated when there are >2 sides, unfortunately. Even if they agreed it was negative-sum, someone donating $100 to Columbia University would generally not be equally happy to take $100 away from Harvard. I don't know how to fix that.

Comment by honoredb on Open thread, Oct. 13 - Oct. 19, 2014 · 2014-10-13T23:07:39.085Z · score: 1 (1 votes) · LW · GW


Comment by honoredb on Open thread, Oct. 13 - Oct. 19, 2014 · 2014-10-13T21:43:08.788Z · score: 31 (31 votes) · LW · GW

I'm happy to specify completely, actually, I just figured a general question would lead to answers that are more useful to the community.

In my case, I'm helping to set up an organization to divert money away from major-party U.S. campaign funds and toward efficient charities. The idea is that if I donate $100 to the Democratic Party, and you donate $200 to the Republican party (or to their nominees for President, say), the net marginal effect on the election is very similar to if you'd donated $100 and I'd donated nothing; $100 from each of us is being canceled out. So we're going to make a site where people can donate to either of two opposing causes, we'll hold the money in escrow for a little while, and then at a preset time the money that would be canceling out goes to a GiveWell charity instead. So if we get $5000 in donations for the Democrats and $2000 for the Republicans, the Democrats get $3000 and the neutral charity gets $4000. From an individual donor's point of view, each dollar you donate will either become a dollar for your side, or take away a dollar from the opposing side.
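The settlement arithmetic above can be sketched in a few lines (a toy illustration of the scheme as described, not the actual site's logic):

```python
def settle(dem_total, rep_total):
    """Toy sketch of the escrow settlement described above.

    The smaller side's total is matched dollar-for-dollar against the
    larger side's; both matched halves go to a neutral charity, and only
    the surplus reaches the side that raised more.
    """
    matched = min(dem_total, rep_total)
    to_charity = 2 * matched           # $matched from each side cancels out
    dem_keeps = dem_total - matched    # only the surplus funds a campaign
    rep_keeps = rep_total - matched
    return dem_keeps, rep_keeps, to_charity

# The example from the text: $5000 for Democrats, $2000 for Republicans.
print(settle(5000, 2000))  # → (3000, 0, 4000)
```

Note that from either donor's perspective the marginal dollar behaves as claimed: it either increments their side's surplus or converts an opposing dollar into charity.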

This obviously steps into a lot of election law, so that's probably the expertise I'll be looking for. We also need to figure out what type of organization(s) we need to be: it seems ideal to incorporate as a 501c(3) just so that people can make tax-deductible donations to us (whether donations made through us that end up going to charity can be tax-deductible is another issue). I think the spirit of the regulations should permit that, but I am not a lawyer and I've heard conflicting opinions on whether the letter of the law does.

And those issues aside, I feel like there could be more legal gotchas that I'm not anticipating, to do with Handling Other People's Money.

Comment by honoredb on Open thread, Oct. 13 - Oct. 19, 2014 · 2014-10-13T17:18:19.752Z · score: 4 (4 votes) · LW · GW

What's the best way to get (U.S.) legal advice on a weird, novel issue (one that would require research and cleverness to address well)? Paid or unpaid, in person or remotely.

(For that matter, if anyone happens to be interested in donating good legal advice to a weird, novel non-profit organization, feel free to contact me at histocrat at gmail dot com).

Comment by honoredb on Truth and the Liar Paradox · 2014-09-03T18:34:50.949Z · score: 1 (1 votes) · LW · GW

Arthur Prior's resolution is to claim that each statement implicitly asserts its own truth, so that "this statement is false" becomes "this statement is false and this statement is true".

Pace your later comments, this is a wonderfully pithy solution and I look forward to pulling it out at cocktail parties.

Comment by honoredb on Rationalist Sport · 2014-06-18T21:34:39.152Z · score: 2 (2 votes) · LW · GW

I like people's attempts to step outside the question, but playing along...

LW-rationalists value thinking for yourself over conformity. A LW sport might be a non-team sport like fencing, a team sport in which individuals are spotlighted, like baseball, or a sport that presents constant temptation to follow cues from your teammates but rewards breaking away from the pack.

LW-rationalists value cross-domain skills. A LW sport might involve a variety of activities, like an n-athlon, or facing a quick succession of opponents who all trained together so that lessons learned against one are likely to apply to the next.

LW-rationalists value finding ways to cooperate with people whose values are different. A LW sport might involve a tension between behavior that supports the team and behavior that wins personal glory, like basketball, or it might involve more than 2 sides and more than 1 winner with potential for cooperation.

LW-rationalists value an ability to recognize when a previously useful heuristic isn't working, and break out of it. A LW sport might involve subtle shifts in the playing field that weaken some strategies and strengthen others.

Comment by honoredb on Rationality Quotes August 2013 · 2013-09-04T00:16:04.046Z · score: 4 (4 votes) · LW · GW

the effects of poverty & oppression on means & tails

Wait, what are you saying here? That there aren't any Einsteins in sweatshops in part because their innate mathematical ability got stunted by malnutrition and lack of education? That seems like basically conceding the point, unless we're arguing about whether there should be a program to give a battery of genius tests to every poor adult in India.

The talent can manifest as early as arithmetic, which is taught to a great many poor people, I am given to understand.

Not all of them, I don't think. And then you have to have a talent that manifests early, have someone in your community who knows that a kid with a talent for arithmetic might have a talent for higher math, knows that a talent for higher math can lead to a way to support your family, expects that you'll be given a chance to prove yourself, gives a shit, has a way of getting you tested...

I'm fairly confident that confessing to poisoning someone else's food usually gets you incarcerated, and occasionally gets you killed (think feudal society or mob-ridden areas), and is at least a career-limiting move if you don't start from a privileged position.

Really? Then I'm sure you could name three examples.

Just going off Google, here: People being incarcerated for unsuccessful attempts to poison someone:

Person being killed for suspected unsuccessful attempt to poison someone:

Sorry, I can only read what you wrote. If you meant he lacked tact, you shouldn't have brought up insanity.

I was trying to elegantly combine the Incident with the Debilitating Paranoia and the Incident with the Telling The Citizenship Judge That Nazis Could Easily Take Over The United States. Clearly didn't completely come across.

Really? Because his mathematician peers were completely exasperated at him. What, exactly, was he politic about?

He was politic enough to overcome Vast Cultural Differences enough to get somewhat integrated into an insular community. I hang out with mathematicians a lot; my stereotype of them is that they tend not to be good at that.

Comment by honoredb on Rationality Quotes August 2013 · 2013-09-03T23:58:44.397Z · score: 2 (6 votes) · LW · GW

"Oppenheimer wasn't privileged, he was only treated slightly better than the average Cambridge student."

I'm sorry, I never really rigorously defined the counter-factuals we were playing with, but the fact that Oppenheimer was in a context where attempted murder didn't sink his career is surely relevant to the overall question of whether there are Einsteins in sweatshops.

Comment by honoredb on Rationality Quotes August 2013 · 2013-08-15T19:00:51.289Z · score: 8 (10 votes) · LW · GW

Do you really think the existence of oppression is a figment of Marxist ideology? If being poor didn't make it harder to become a famous mathematician given innate ability, I'm not sure "poverty" would be a coherent concept. If you're poor, you don't just have to be far out on multiple distributions, you also have to be at the mean or above in several more (health, willpower, various kinds of luck). Ramanujan barely made it over the finish line before dying of malnutrition.

Even if the mean mathematical ability in Indians were innately low (I'm quite skeptical there), that would itself imply a context containing more censoring factors for any potential mathematician: to become one, you have to, at minimum, be aware that higher math exists, that you're unusually good at it by world standards, and that being a mathematician at that level is a viable way to support your family.

On your specific objections to my conjugates...I'm fairly confident that confessing to poisoning someone else's food usually gets you incarcerated, and occasionally gets you killed (think feudal society or mob-ridden areas), and is at least a career-limiting move if you don't start from a privileged position. Hardly a gross exaggeration. Goedel didn't become clinically paranoid until later, but he was always the sort of person who would thoughtlessly insult an important gatekeeper's government, which is part of what I was getting at; Ramanujan was more politic than your average mathematician. I actually was thinking of making Newton's conjugate be into Hindu mysticism instead of Christian but that seemed too elaborate.

Comment by honoredb on Rationality Quotes August 2013 · 2013-08-07T14:06:48.660Z · score: 9 (19 votes) · LW · GW

I think it can be illustrative, as a counter to the spotlight effect, to look at the personalities of math/science outliers who come from privileged backgrounds, and imagine them being born into poverty. Oppenheimer's conjugate was jailed or executed for attempted murder, instead of being threatened with academic probation. Gödel's conjugate added a postscript to his proof warning that the British Royal Family were possible Nazi collaborators, which got it binned, which convinced him that all British mathematicians were in on the conspiracy. Newton and Turing's conjugates were murdered as teenagers on suspicion of homosexuality. I have to make these stories up because if you're poor and at all weird, flawed, or unlucky your story is rarely recorded.

Comment by honoredb on Harry Potter and the Methods of Rationality discussion thread, part 24, chapter 95 · 2013-07-31T14:34:36.105Z · score: 0 (0 votes) · LW · GW

Huh, that does make a lot more sense. I guess I'd been assuming that any reference to someone "averting" a prophecy was actually just someone forcing the better branch of an EitherOrProphecy (tvtropes). Like if Trelawney had said "HE WHO WOULD TEAR APART THE VERY STARS IN HEAVEN IF NONE STAND AGAINST IT." The inference that prophecies don't always come true fits Quirrell's behavior much better.

Comment by honoredb on Harry Potter and the Methods of Rationality discussion thread, part 24, chapter 95 · 2013-07-18T12:57:06.803Z · score: 12 (20 votes) · LW · GW

Quirrell seems to have been counterfactually mugged by hearing the prophecy of the end of the world...which would mean his decision theory, and psychological commitment to it, are very advanced.

Assume Quirrell believes that the only possible explanation of the prophecy he heard is that the apocalypse is nigh. This makes sense: prophecies don't occur for trivial events like a visitor to Hogwarts destroying books in the library named "Stars in Heaven" and "The World," and the idea of "the end of the world" being a eucatastrophe hasn't occurred to him. Assume Quirrell believes that prophecies are inevitable once spoken. Then why is Quirrell bothering to try to save the world?

Given that he hears the prophecy, Quirrell can either try (T) or not try (~T) to avert it. Given that he tries, Quirrell is either capable (C) or incapable (~C) of averting it. If T and C, by inevitability Quirrell will never hear the prophecy, which means that it is less likely the end of the world will occur (massive events always produce a prophecy that is heard by a wizard, so either Time finds some way to stop the end of the world or someone else hears it but fails to avert it). Say the end of the world causes -100 utility to Quirrell, and trying to stop it causes -1 utility. Then if C, a Quirrell that would try never hears the prophecy, so he never loses any utility, while a Quirrell that would not try hears the prophecy, goes out in a blaze of hedonism rather than fighting the inevitable, and loses 100 utility from the end of the world. Unfortunately, the actual world is the ~C world, where T brings -101 utility and ~T brings -100. So T looks like an irrational choice, but actually maximizes Quirrell's utility across counterfactuals.
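The payoff comparison can be tabulated directly (my own illustrative sketch, using the toy numbers above and the simplifying assumption that the C and ~C worlds are weighted equally, which the argument itself doesn't require):

```python
# Toy utilities from the text: -100 for the end of the world, -1 for trying.
END_OF_WORLD = -100
COST_OF_TRYING = -1

def utility(tries, capable):
    """Quirrell's payoff given his disposition and whether he's capable."""
    if tries and capable:
        return 0  # prophecy never spoken in this world; nothing to pay
    if tries:
        return END_OF_WORLD + COST_OF_TRYING  # fights and fails: -101
    return END_OF_WORLD  # blaze of hedonism, then doom: -100

# Averaged across the two counterfactual worlds, the "try" disposition wins
# even though trying is strictly worse within the actual (~C) world.
for tries in (True, False):
    avg = (utility(tries, True) + utility(tries, False)) / 2
    print("try" if tries else "don't try", avg)
```

The disposition to try scores -50.5 across worlds versus -100 for the disposition not to try, which is the sense in which T "maximizes utility across counterfactuals."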

This isn't the only explanation for Quirrell's actions; he could just prefer to go out fighting, or be betting on the slim chance that prophecies actually can be averted, or just trying to delay the end of the world as long as possible, or acting on other, weirder motives. But it's an interesting illustration of how alien a being that has truly internalized a really sophisticated decision theory might be.

Comment by honoredb on Prisoner's dilemma tournament results · 2013-07-09T23:28:52.491Z · score: 2 (2 votes) · LW · GW

T was supposed to do a bit more than it did, but it had some portability bugs so I hastily lobotomized it. All it's supposed to do now is simulate the opponent twice against an obfuscated defectbot, defect if it cooperates both times, otherwise play mimicbot. I didn't have the time to add some of the obvious safeguards. I'm not sure if K is exploiting me or just got lucky, but at a glance, what it might be doing is checking whether the passed-in bot can generate a perfect quine of itself, and cooperating only then. That would be pretty ingenious, since typically a quine chain will go "original -- functional copy -- identical copy -- identical copy", etc.

Comment by honoredb on Willing gamblers, spherical cows, and AIs · 2013-04-30T15:46:58.143Z · score: 0 (0 votes) · LW · GW

The bad news is there is none. The good news is that this means, under linear transformation, that there is such a thing as a free lunch!

Comment by honoredb on Willing gamblers, spherical cows, and AIs · 2013-04-08T23:09:21.190Z · score: 13 (13 votes) · LW · GW

I'm standing at a 4-way intersection. I want to go the best restaurant at the intersection. To the west is a three-star restaurant, to the north is a two-star restaurant, and to the northwest, requiring two street-crossings, is a four-star restaurant. All of the streets are equally safe to cross except for the one in between the western restaurant and the northern one, which is more dangerous. So going west, then north is strictly dominated by going north, then west. Going north and eating there is strictly dominated by going west and eating there. This means that if I cross one street, and then change my mind about where I want to eat based on the fact that I didn't die, I've been dutch-booked by reality.

That might need a few more elements before it actually restricts you to VNM-rationality.

Comment by honoredb on An Introduction to Control Markets · 2013-04-04T21:34:23.690Z · score: 1 (1 votes) · LW · GW

This seems like a good sketch of the endgame for histocracy, my own pie-in-the-sky organizational scheme. If you start with people voluntarily transitioning management of a resource they own to an open histocratic system with themselves as the judges, and then iterate and nest and stuff, you get something like this in the limit. I hadn't been able to envision it quite as elegantly as you do here.

Comment by honoredb on Just One Sentence · 2013-01-10T22:47:26.710Z · score: 0 (0 votes) · LW · GW

In my discipline? I guess

Write code that's easy to update without breaking dependent code.

That'll save the ancient programmers of the 1950's some time.

If I were trying to build up programming from scratch, it'd get pretty hairy.

Build a machine that, when "x = 1.1; while (10. - x*x > .0001) x = x - ((x * x - 10.) / (10.*x)); display x" is entered into it, displays a value close to the ratio of the longest side of a right triangle to another side expressed as the sum of 0 or 1 times the lengths of successive bisections.
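For what it's worth, the program in that sentence decodes to a damped Newton-style iteration for the positive root of x² = 10 (the hypotenuse of a right triangle with legs 1 and 3 is √10); a direct transcription:

```python
# Direct transcription of the program quoted above: iterate a damped
# Newton update until x*x is within .0001 of 10, approaching from below.
x = 1.1
while 10. - x * x > .0001:
    x = x - ((x * x - 10.) / (10. * x))
print(x)  # close to 10 ** 0.5 ≈ 3.1623
```

The denominator 10·x (rather than the textbook Newton denominator 2·x) makes each step smaller, so convergence is linear rather than quadratic, but the fixed point is still √10.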

Comment by honoredb on How to Teach Students to Not Guess the Teacher’s Password? · 2013-01-04T18:53:55.460Z · score: 1 (1 votes) · LW · GW

I came here to refer you to John Holt, but since User:NancyLebovitz already did that, I'll just add that I'm amused that your handle is Petruchio.

Comment by honoredb on Irrationality Game II · 2012-07-05T03:57:57.453Z · score: 1 (7 votes) · LW · GW

Unfortunately you need access to a comparably-sized bunch of estimates in order to beat the market. You can't quite back it out of a prediction market's transaction history. And the amount of money to be made is small in any event because there's just not enough participation in the markets.

Comment by honoredb on Irrationality Game II · 2012-07-05T03:32:58.733Z · score: 4 (38 votes) · LW · GW

Irrationality Game

Prediction markets are a terrible way of aggregating probability estimates. They only enjoy the popularity they do because of a lack of competition, and because they're cheaper to set up due to the built-in incentive to participate. They do slightly worse than simply averaging a bunch of estimates, and would be blown out of the water by even a naive histocratic algorithm (weighted average based on past predictor performance using Bayes). The performance problems of prediction markets are not just due to liquidity issues, but would inevitably crop up in any prediction market system due to bubbles, panics, hedging, manipulation, and either overly simple or dangerously complex derivatives. 90%
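A naive version of the "histocratic algorithm" gestured at above might look like the following. This is purely my own illustrative sketch; the weighting scheme (clipped track-record scores used as weights) is an assumption, not anyone's published method:

```python
def histocratic_estimate(estimates, track_records):
    """Weighted average of probability estimates, weighting each predictor
    by a crude score derived from past performance (higher = better).

    `estimates` and `track_records` are parallel lists; the scores are
    assumed to be non-negative-ish numbers, e.g. rescaled Brier scores.
    """
    weights = [max(score, 0.0) for score in track_records]
    total = sum(weights)
    if total == 0:
        # No usable track records: fall back to the plain average,
        # i.e. the baseline the comment says markets already lose to.
        return sum(estimates) / len(estimates)
    return sum(w * e for w, e in zip(weights, estimates)) / total

# Three predictors; the one with the best track record (0.9) dominates.
print(histocratic_estimate([0.8, 0.5, 0.2], [0.9, 0.3, 0.1]))
```

A genuinely Bayesian version would update the weights from each resolved prediction rather than taking scores as given, but even this crude form captures the "weighted average based on past predictor performance" idea.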

Hanson and his followers are irrationally attached to prediction markets because they flatter libertarian sensibilities. 60%

Comment by honoredb on SotW: Avoid Motivated Cognition · 2012-05-25T21:52:02.543Z · score: 0 (0 votes) · LW · GW

Yup. The propositions need to be such that you can get more confident than that.

Comment by honoredb on SotW: Avoid Motivated Cognition · 2012-05-25T19:50:00.012Z · score: 8 (8 votes) · LW · GW

My girlfriend says that a common case of motivated cognition is witnesses picking someone out of a lineup. They want to recognize the criminal, so given five faces they're very likely to pick one even if the real criminal's not there, whereas if people are leafing through a big book of mugshots they're less likely to make a false positive identification.

She suggests a prank-type exercise where there are two plants in the class. Plant A, who wears a hoodie and sunglasses, leaves to go to the bathroom, whereupon Plant B announces that they're pretty sure Plant A is actually $FAMOUS_ACTOR here incognito. Plant A pokes his head in, says he needs to go take a call, and leaves. See who manages to talk themselves into thinking that really is the celebrity.

Comment by honoredb on SotW: Avoid Motivated Cognition · 2012-05-25T19:14:56.082Z · score: 1 (3 votes) · LW · GW

This seems like it'll be easiest to teach and test if you can artificially create a preference for an objective fact. Can you offer actual prizes? Candy? Have you ever tried a point system and have people reacted well?

Assume you have a set of good prizes (maybe chocolate bars, or tickets good for 10 points) and a set of less-good prizes (Hershey's kisses, or tickets good for 1 point).

Choose a box: Have two actual boxes, labeled "TRUE" and "FALSE". Before the class comes in, the instructor writes a proposition on the blackboard, such as "The idea that carrots are good for your eyesight is a myth promoted as part of a government conspiracy to cover up secret military technology" or "A duck's quack never echoes, and nobody knows why." If the instructor believes that the proposition is true, the instructor puts a bunch of good prizes in the TRUE box and nothing in the FALSE box. Otherwise, the instructor fills the FALSE box with less-good prizes. The class comes in, and the instructor explains the rules. Then she spends 5 minutes trying to persuade the class that she believes the proposition. After that, people who think she actually believes it line up at the TRUE box, and everyone else lines up at the FALSE box. Everyone who guessed right gets a prize from their box. If you guess TRUE and you're right, your prize is better than if you guess FALSE and are right. Repeat this for a few propositions, and it's at least a useful test for whether you can separate what you want from what seems plausible.

Comment by honoredb on Fiction: A Setting Justifying the Epistemic Aggressiveness Of A Religion Stand-in · 2012-05-03T23:03:53.525Z · score: 0 (0 votes) · LW · GW

It seems likely that God would create multiple realities, populated by different sorts of people and/or with different True Religions, to feed a diverse set of people into a shared heaven. So the recursive realities would have a pyramid or lattice structure. If God has limited knowledge of the realities he's created, there could even be cycles.

Comment by honoredb on Fiction: A Setting Justifying the Epistemic Aggressiveness Of A Religion Stand-in · 2012-05-03T17:09:49.781Z · score: 35 (35 votes) · LW · GW

God is, himself, in a world filled with vague, ambiguous, sometimes contradictory hints towards a divine meta-reality. He's confused, anxious, and doesn't trust his own judgment. So he's created the Abrahamic world in order to identify the people who somehow manage to arrive at the truth given a similar lack of information. One of our religions is correct--guess right and you go to Heaven to help God try to get to Double Heaven.

Comment by honoredb on A Kick in the Rationals: What hurts you in your LessWrong Parts? · 2012-04-26T16:01:02.524Z · score: 4 (4 votes) · LW · GW

Old thread.

Comment by honoredb on Formalizing Value Extrapolation · 2012-04-26T15:32:51.445Z · score: 2 (2 votes) · LW · GW

Okay, I see that that's what you're saying. The assumption then (which seems reasonable but needs to be proven?) is that the simulated humans, given infinite resources, would either solve Oracle AI [edit: without accidentally creating uFAI first, I mean] or just learn how to do stuff like create universes themselves.

There is still the issue that a hypothetical human with access to infinite computing power would not want to create or observe hellworlds. We here in the real world don't care, but the hypothetical human would. So I don't think your specific idea for brute-force creating an Earth simulation would work, because no moral human would do it.

Comment by honoredb on Formalizing Value Extrapolation · 2012-04-26T03:59:40.840Z · score: 2 (2 votes) · LW · GW

I'm slightly worried that even formally specifying an "idealized and unbounded computer" will turn out to be Oracle-AI-complete. We don't need to worry about it converting something valuable into computronium, but we do need to ensure that it interacts with the simulated human(s) in a friendly way. We need to ensure that it doesn't modify the human to simplify the process of explaining something. The simulated human needs to be able to control what kinds of minds the computer creates in the process of thinking (we may not care, but the human would). And the computer should certainly not hack its way out of the hypothetical via being thought about by the FAI.

Comment by honoredb on Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85 · 2012-04-24T19:05:06.871Z · score: 1 (1 votes) · LW · GW

a papercut doesn't leave much if any blood on the paper... as the paper moves away fast enough that blood doesn't even have time to flow on it.

It is possible to engineer, though, if you're manipulating the paper with great telekinetic precision. I accidentally bloodstained a book that way when I was about Harry's age.

Comment by honoredb on To like each other, sing and dance in synchrony · 2012-04-23T19:00:43.407Z · score: 14 (14 votes) · LW · GW

N-player rock-paper-scissors variants. They generally involve everybody standing in a circle facing inward shaking their fists three times and chanting in unison, and looking back I feel like they do have a community-building effect. But they bypass the filter because they're competitive, and are presumably appealing to LW people because they involve memorizing a large ruleset and then trying to game it.

Comment by honoredb on Newcomb's Problem and Regret of Rationality · 2012-04-23T02:57:24.322Z · score: 1 (1 votes) · LW · GW

This looks like it loses in the Smoking Lesion problem.

Comment by honoredb on Stupid Questions Open Thread Round 2 · 2012-04-20T23:29:44.402Z · score: 1 (1 votes) · LW · GW

Having seen the exchange that probably motivated this, one note: in my opinion, events can be linked both causally and acausally. The linked post gives an example. I don't think that's an abuse of language; we can say that people are simultaneously communicating verbally and non-verbally.

Comment by honoredb on A question about Eliezer · 2012-04-20T00:02:09.248Z · score: 8 (8 votes) · LW · GW

Here's their xls with the predictions

Comment by honoredb on A question about Eliezer · 2012-04-19T21:33:23.389Z · score: 7 (7 votes) · LW · GW

Incidentally, the best way to make conditional predictions is to convert them to explicit disjunctions. For example, in November I wanted to predict that "If Mitt Romney loses the primary election, Barack Obama will win the general election." This is actually logically equivalent to "Either Mitt Romney or Barack Obama will win the 2012 Presidential Election," barring some very unlikely events, so I posted that instead, and so I won't have to withdraw the prediction when Romney wins the primary.

Comment by honoredb on A question about Eliezer · 2012-04-19T21:15:53.202Z · score: 25 (25 votes) · LW · GW

it didn't treat mild belief and certainty differently;

It did. Per the paper, the confidences of the predictions were rated on a scale from 1 to 5, where 1 is "No chance of occurring" and 5 is "Definitely will occur". They didn't use this in their top-level rankings because they felt it was "accurate enough" without that, but they did use it in their regressions.

Worse, people get marked down for making conditional predictions whose antecedent was not satisfied!

They did not. Per the paper, those were simply thrown out (as people do on PredictionBook).

They also penalise people for hedging, yet surely a hedged prediction is better than no prediction at all?

I agree here, mostly. Looking through the predictions they've marked as hedging, some seem like sophistry but some seem like reasonable expressions of uncertainty; if they couldn't figure out how to properly score them they should have just left them out.

If you think you can improve on their methodology, the full dataset is here: .xls.

Comment by honoredb on A question about Eliezer · 2012-04-19T20:50:31.079Z · score: 16 (16 votes) · LW · GW

This objection is not entirely valid, at least when it comes to Krugman. Krugman scored 17/19 mainly on economic predictions, and one of the two he got wrong looks like a pro-Republican prediction.

From their executive summary:

According to our regression analysis, liberals are better predictors than conservatives—even when taking out the Presidential and Congressional election questions.

From the paper:

Krugman...primarily discussed economics...

Comment by honoredb on How can we get more and better LW contrarians? · 2012-04-19T13:19:17.641Z · score: 4 (4 votes) · LW · GW

Or just embed a poll.

Comment by honoredb on [META] Alternatives to rot13 and karma sinks · 2012-04-17T23:59:48.097Z · score: 0 (0 votes) · LW · GW


Comment by honoredb on Rationality Quotes April 2012 · 2012-04-16T15:26:14.738Z · score: 7 (11 votes) · LW · GW

That's right, Emotion. Go ahead, put Reason out of the way! That's great! Fine! ...for Hitler.

--1943 Disney cartoon

Comment by honoredb on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-15T22:42:35.947Z · score: 0 (0 votes) · LW · GW

I think Quirrell is working with an unconventional definition of Dark. Something like "in violent opposition to you."

Comment by honoredb on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-13T22:06:30.362Z · score: 4 (4 votes) · LW · GW

Or you cast the spell after doing the deed, and that one time they were too busy fleeing/claiming this wasn't what it looked like/getting castigated/getting dressed.

...just how many pregnancies has McGonagall caused, anyway?

Comment by honoredb on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-13T15:18:33.029Z · score: 1 (1 votes) · LW · GW

Most of the stuff I was hoping for hasn't panned out thus far. The ebook gets a few downloads each week, mostly as referrals from the HPMoR fan art page.

Comment by honoredb on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-11T19:09:18.981Z · score: 1 (1 votes) · LW · GW

See also this exchange on the tvtropes forum, where EY clarifies at least one of his reasons for removing the Griphook line.

Comment by honoredb on Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84 · 2012-04-11T19:03:41.141Z · score: 3 (3 votes) · LW · GW

Yeah, it was a total cheat. That's why I put my anagram in the Dramatis Personae.

Comment by honoredb on Forked Russian Roulette and Anticipation of Survival · 2012-04-06T20:22:55.598Z · score: 1 (1 votes) · LW · GW

I'd walk through the roulette box (sounds like fun!) but not the torture box.

Comment by honoredb on SotW: Be Specific · 2012-04-04T13:59:14.417Z · score: 0 (0 votes) · LW · GW

Oh, thanks!