I'm not sure the recursive argument even fully works for the stock market these days--I suspect it's more like a sticky tradition that crudely mimics the incentive structure that used to exist, like a parasitic vine that still holds the shape of the rotted-away tree it killed. When there's any noise, recursion amplifies it with each iteration: a 1-year lookahead to a 1-year lookahead might be almost the same as a 2-year lookahead, but it's slightly skewed by wanting to take into account short-scale price movements and different risk and time discounting. By the time you get to a 1-year lookahead to a 1-year lookahead to a...*10, it's almost completely (maybe completely) decoupled from the actual 10-year lookahead, with no way to make money off of that decoupling.
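As a toy illustration of the noise-amplification point (entirely made-up numbers, nothing empirical):

```python
# Toy model: each layer of "lookahead" re-prices the true signal with its
# own small multiplicative skew (short-scale movements, idiosyncratic risk
# and time discounting). Ten stacked layers drift well away from the
# direct 10-year value.

import random

true_10yr_value = 100.0
estimate = true_10yr_value
for layer in range(10):
    estimate *= random.gauss(1.0, 0.05)  # one layer's idiosyncratic skew

print("direct 10-year value:", true_10yr_value)
print("ten stacked 1-year lookaheads:", round(estimate, 1))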
Sure, but it's really hard to anticipate which side will benefit more, so in expected value they're equal. I'm sure some people will think their side will be more effective in how it spends money...I'll try to persuade them to take the outside view.
Thanks, I'll look him up.
I think those contributors will probably not be our main demographic, since they have an interest in the system as it is and don't want to risk disrupting it. In theory, though, donating to both parties can be modeled as a costly signal (the implied threat is that if you displease me, the next election I'll only donate to your opponent), and there's no reason you can't do that through our site.
It seems to be implicit in your model that funding for political parties is a negative-sum arms race.
What army1987 said. The specific assumption is that on the margin, the effect of more funding to both sides is either very small or negative.
In my own view, the most damaging negative-sum arms race is academia.
This is definitely an extendable idea. It gets a lot more complicated when there are >2 sides, unfortunately. Even if they agreed it was negative-sum, someone donating $100 to Columbia University would generally not be equally happy to take $100 away from Harvard. I don't know how to fix that.
Thanks!
I'm happy to specify completely, actually, I just figured a general question would lead to answers that are more useful to the community.
In my case, I'm helping to set up an organization to divert money away from major party U.S. campaign funds and to efficient charities. The idea is that if I donate $100 to the Democratic Party, and you donate $200 to the Republican Party (or to their nominees for President, say), the net marginal effect on the election is very similar to if you'd donated $100 and I'd donated nothing; $100 from each of us is being canceled out. So we're going to make a site where people can donate to either of two opposing causes, we'll hold it in escrow for a little while, and then at a preset time the money that would be canceling out goes to a GiveWell charity instead. So if we get $5000 in donations for the Democrats and $2000 for the Republicans, the Democrats get $3000 and the neutral charity gets $4000. From an individual donor's point of view, each dollar you donate will either become a dollar for your side, or take away a dollar from the opposing side.
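In code, the settlement rule is just this (a throwaway sketch; the function name and structure are mine, purely illustrative):

```python
def settle(dem_total, rep_total):
    """Net out matched donations; matched dollars from BOTH sides go to charity."""
    matched = min(dem_total, rep_total)   # dollars that cancel out
    charity = 2 * matched                 # each side's matched dollars
    return dem_total - matched, rep_total - matched, charity

dem, rep, charity = settle(5000, 2000)
print(dem, rep, charity)  # 3000 0 4000 -- matches the example above
```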
This obviously steps into a lot of election law, so that's probably the expertise I'll be looking for. We also need to figure out what type of organization(s) we need to be: it seems ideal to incorporate as a 501c(3) just so that people can make tax-deductible donations to us (whether donations made through us that end up going to charity can be tax-deductible is another issue). I think the spirit of the regulations should permit that, but I am not a lawyer and I've heard conflicting opinions on whether the letter of the law does.
And those issues aside, I feel like there could be more legal gotchas that I'm not anticipating to do with Handling Other People's Money.
What's the best way to get (U.S.) legal advice on a weird, novel issue (one that would require research and cleverness to address well)? Paid or unpaid, in person or remotely.
(For that matter, if anyone happens to be interested in donating good legal advice to a weird, novel non-profit organization, feel free to contact me at histocrat at gmail dot com).
Arthur Prior's resolution is to claim that each statement implicitly asserts its own truth, so that "this statement is false" becomes "this statement is false and this statement is true".
Pace your later comments, this is a wonderfully pithy solution and I look forward to pulling it out at cocktail parties.
I like people's attempts to step outside the question, but playing along...
LW-rationalists value thinking for yourself over conformity. A LW sport might be a non-team sport like fencing, a team sport in which individuals are spotlighted, like baseball, or a sport that presents constant temptation to follow cues from your teammates but rewards breaking away from the pack.
LW-rationalists value cross-domain skills. A LW sport might involve a variety of activities, like an n-athlon, or facing a quick succession of opponents who all trained together so that lessons learned against one are likely to apply to the next.
LW-rationalists value finding ways to cooperate with people whose values are different. A LW sport might involve a tension between behavior that supports the team and behavior that wins personal glory, like basketball, or it might involve more than 2 sides and more than 1 winner with potential for cooperation.
LW-rationalists value an ability to recognize when a previously useful heuristic isn't working, and break out of it. A LW sport might involve subtle shifts in the playing field that weaken some strategies and strengthen others.
the effects of poverty & oppression on means & tails
Wait, what are you saying here? That there aren't any Einsteins in sweatshops in part because their innate mathematical ability got stunted by malnutrition and lack of education? That seems like basically conceding the point, unless we're arguing about whether there should be a program to give a battery of genius tests to every poor adult in India.
The talent can manifest as early as arithmetic, which is taught to a great many poor people, I am given to understand.
Not all of them, I don't think. And then you have to have a talent that manifests early, have someone in your community who knows that a kid with a talent for arithmetic might have a talent for higher math, knows that a talent for higher math can lead to a way to support your family, expects that you'll be given a chance to prove yourself, gives a shit, has a way of getting you tested...
I'm fairly confident that confessing to poisoning someone else's food usually gets you incarcerated, and occasionally gets you killed (think feudal society or mob-ridden areas), and is at least a career-limiting move if you don't start from a privileged position.
Really? Then I'm sure you could name three examples.
Just going off Google, here.

People being incarcerated for unsuccessful attempts to poison someone:
http://digitaljournal.com/article/346684
http://charlotte.news14.com/content/headlines/628564/teen-arrested-for-trying-to-poison-mother-s-coffee/
http://www.ksl.com/?nid=148&sid=85968
Person being killed for suspected unsuccessful attempt to poison someone: http://zeenews.india.com/news/bihar/man-lynched-for-trying-to-poison-hand-pump_869197.html
Sorry, I can only read what you wrote. If you meant he lacked tact, you shouldn't have brought up insanity.
I was trying to elegantly combine the Incident with the Debilitating Paranoia and the Incident with the Telling The Citizenship Judge That Nazis Could Easily Take Over The United States. Clearly didn't completely come across.
Really? Because his mathematician peers were completely exasperated at him. What, exactly, was he politic about?
He was politic enough to overcome Vast Cultural Differences enough to get somewhat integrated into an insular community. I hang out with mathematicians a lot; my stereotype of them is that they tend not to be good at that.
I'm sorry, I never really rigorously defined the counterfactuals we were playing with, but the fact that Oppenheimer was in a context where attempted murder didn't sink his career is surely relevant to the overall question of whether there are Einsteins in sweatshops.
Do you really think the existence of oppression is a figment of Marxist ideology? If being poor didn't make it harder to become a famous mathematician given innate ability, I'm not sure "poverty" would be a coherent concept. If you're poor, you don't just have to be far out on multiple distributions, you also have to be at the mean or above in several more (health, willpower, various kinds of luck). Ramanujan barely made it over the finish line before dying of malnutrition.
Even if the mean mathematical ability in Indians were innately low (I'm quite skeptical there), that would itself imply a context containing more censoring factors for any potential Einsteins...to become a mathematician, you have to, at minimum, be aware that higher math exists, that you're unusually good at it by world standards, and being a mathematician at that level is a viable way to support your family.
On your specific objections to my conjugates...I'm fairly confident that confessing to poisoning someone else's food usually gets you incarcerated, and occasionally gets you killed (think feudal society or mob-ridden areas), and is at least a career-limiting move if you don't start from a privileged position. Hardly a gross exaggeration. Gödel didn't become clinically paranoid until later, but he was always the sort of person who would thoughtlessly insult an important gatekeeper's government, which is part of what I was getting at; Ramanujan was more politic than your average mathematician. I actually was thinking of making Newton's conjugate be into Hindu mysticism instead of Christian but that seemed too elaborate.
I think it can be illustrative, as a counter to the spotlight effect, to look at the personalities of math/science outliers who come from privileged backgrounds, and imagine them being born into poverty. Oppenheimer's conjugate was jailed or executed for attempted murder, instead of being threatened with academic probation. Gödel's conjugate added a postscript to his proof warning that the British Royal Family were possible Nazi collaborators, which got it binned, which convinced him that all British mathematicians were in on the conspiracy. Newton and Turing's conjugates were murdered as teenagers on suspicion of homosexuality. I have to make these stories up because if you're poor and at all weird, flawed, or unlucky your story is rarely recorded.
Huh, that does make a lot more sense. I guess I'd been assuming that any reference to someone "averting" a prophecy was actually just someone forcing the better branch of an EitherOrProphecy (tvtropes). Like if Trelawney had said "HE WHO WOULD TEAR APART THE VERY STARS IN HEAVEN IF NONE STAND AGAINST IT." The inference that prophecies don't always come true fits Quirrell's behavior much better.
Quirrell seems to have been counterfactually mugged by hearing the prophecy of the end of the world...which would mean his decision theory, and psychological commitment to it, are very advanced.
Assume Quirrell believes that the only possible explanation of the prophecy he heard is that the apocalypse is nigh. This makes sense: prophecies don't occur for trivial events like a visitor to Hogwarts destroying books in the library named "Stars in Heaven" and "The World," and the idea of "the end of the world" being a eucatastrophe hasn't occurred to him. Assume Quirrell believes that prophecies are inevitable once spoken. Then why is Quirrell bothering to try to save the world?
Given that he hears the prophecy, Quirrell can either try (T) or not try (~T) to avert it. Given that he tries, he is either capable (C) or incapable (~C) of averting it. If T and C, by inevitability Quirrell will never hear the prophecy, which means it is less likely the end of the world will occur (massive events always produce a prophecy that is heard by a wizard, so either Time finds some way to stop the end of the world or someone else hears it but fails to avert it). Say the end of the world costs Quirrell 100 utility, and trying to stop it costs 1 utility. Then if C, a Quirrell who would try never hears the prophecy, so he never loses any utility, while a Quirrell who would not try hears the prophecy, goes out in a blaze of hedonism rather than fighting the inevitable, and loses 100 utility from the end of the world. Unfortunately, the actual world is the ~C world, where T brings -101 utility and ~T brings -100. So T looks like an irrational choice, but actually maximizes Quirrell's utility across counterfactuals.
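The whole payoff table, as a toy calculation (the encoding is mine, not anything from the fic):

```python
# Toy model of the argument above: an agent who WOULD try, in a world
# where trying works, never hears the prophecy at all, so the disaster
# and the effort cost both vanish.

END_OF_WORLD = -100  # utility lost if the world ends
EFFORT = -1          # utility lost by trying to avert it

def utility(would_try, capable):
    if would_try and capable:
        # Prophecies are inevitable once spoken, so Time never lets a
        # capable would-trier hear this one: nothing bad happens.
        return 0
    if would_try:
        # Hears the prophecy, tries, fails anyway.
        return END_OF_WORLD + EFFORT
    # Hears the prophecy, goes out in a blaze of hedonism.
    return END_OF_WORLD

for would_try in (True, False):
    for capable in (True, False):
        print("try =", would_try, "capable =", capable,
              "->", utility(would_try, capable))
```

In the C column the try-er scores 0 against the non-trier's -100, which is what pays for the -101 versus -100 in the ~C column.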
This isn't the only explanation for Quirrell's actions; he could just prefer to go out fighting, or be betting on the slim chance that prophecies actually can be averted, or just trying to delay the end of the world as long as possible, or acting on other, weirder motives. But it's an interesting illustration of how alien a being that has truly internalized a really sophisticated decision theory might be.
T was supposed to do a bit more than it did, but it had some portability bugs so I hastily lobotomized it. All it's supposed to do now is simulate the opponent twice against an obfuscated defectbot, defect if it cooperates both times, otherwise play mimicbot. I didn't have the time to add some of the obvious safeguards. I'm not sure if K is exploiting me or just got lucky, but at a glance, what it might be doing is checking whether the passed-in bot can generate a perfect quine of itself, and cooperating only then. That would be pretty ingenious, since typically a quine chain will go "original -- functional copy -- identical copy -- identical copy", etc.
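For anyone who didn't see the tournament code, here's a rough sketch of what T was meant to do. The interface here (plain functions instead of passed-around source strings, a recursion budget instead of real obfuscation) is my simplification, not the actual tournament API:

```python
def defectbot(opponent, depth):
    return 'D'

def mimicbot(opponent, depth):
    # Play whatever the opponent would play against me; cooperate when
    # the simulation budget runs out, so mutual mimicry settles on 'C'.
    if depth <= 0:
        return 'C'
    return opponent(mimicbot, depth - 1)

def t_bot(opponent, depth):
    # Probe: simulate the opponent twice against (an ideally obfuscated)
    # defectbot. A bot that cooperates with a defector is exploitable.
    if depth <= 0:
        return 'C'
    probes = [opponent(defectbot, depth - 1) for _ in range(2)]
    if all(p == 'C' for p in probes):
        return 'D'                        # exploit the blind cooperator
    return mimicbot(opponent, depth)      # otherwise fall back to mimicry

cooperatebot = lambda opponent, depth: 'C'
print(t_bot(cooperatebot, 5))  # 'D'
print(t_bot(mimicbot, 5))      # 'C', via mutual mimicry
```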
The bad news is there is none. The good news is that this means, under linear transformation, that there is such a thing as a free lunch!
I'm standing at a 4-way intersection. I want to go to the best restaurant at the intersection. To the west is a three-star restaurant, to the north is a two-star restaurant, and to the northwest, requiring two street-crossings, is a four-star restaurant. All of the streets are equally safe to cross except for the one in between the western restaurant and the northern one, which is more dangerous. So going west, then north is strictly dominated by going north, then west. Going north and eating there is strictly dominated by going west and eating there. This means that if I cross one street, and then change my mind about where I want to eat based on the fact that I didn't die, I've been dutch-booked by reality.
That might need a few more elements before it actually restricts you to VNM-rationality.
This seems like a good sketch of the endgame for histocracy, my own pie-in-the-sky organizational scheme. If you start with people voluntarily transitioning management of a resource they own to an open histocratic system with themselves as the judges, and then iterate and nest and stuff, you get something like this in the limit. I hadn't been able to envision it quite as elegantly as you do here.
In my discipline? I guess:
Write code that's easy to update without breaking dependent code.
That'll save the ancient programmers of the 1950s some time.
If I were trying to build up programming from scratch, it'd get pretty hairy.
Build a machine that, when "x = 1.1; while (10. - x*x > .0001) x = x - ((x * x - 10.) / (10.*x)); display x" is entered into it, displays a value close to the ratio of the longest side of a right triangle to another side expressed as the sum of 0 or 1 times the lengths of successive bisections.
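(If you don't have such a machine handy, the instruction string transcribes almost verbatim into Python, binary-display clause aside. It's a fixed-point iteration converging to √10, i.e. the hypotenuse-to-leg ratio of a 1-by-3 right triangle:)

```python
# Fixed-point iteration: x -> 0.9x + 1/x, whose fixed point satisfies
# x*x = 10. Starting below sqrt(10), it climbs monotonically until the
# stopping tolerance is met.
x = 1.1
while 10. - x * x > .0001:
    x = x - ((x * x - 10.) / (10. * x))
print(x)  # ~3.1622..., i.e. sqrt(10)
```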
I came here to refer you to John Holt, but since User:NancyLebovitz already did that, I'll just add that I'm amused that your handle is Petruchio.
Unfortunately you need access to a comparably-sized bunch of estimates in order to beat the market. You can't quite back it out of a prediction market's transaction history. And the amount of money to be made is small in any event because there's just not enough participation in the markets.
Irrationality Game
Prediction markets are a terrible way of aggregating probability estimates. They only enjoy the popularity they do because of a lack of competition, and because they're cheaper to set up due to the built-in incentive to participate. They do slightly worse than simply averaging a bunch of estimates, and would be blown out of the water by even a naive histocratic algorithm (weighted average based on past predictor performance using Bayes). The performance problems of prediction markets are not just due to liquidity issues, but would inevitably crop up in any prediction market system due to bubbles, panics, hedging, manipulation, and either overly simple or dangerously complex derivatives. 90%
Hanson and his followers are irrationally attached to prediction markets because they flatter libertarian sensibilities. 60%
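To make "naive histocratic algorithm" concrete, here's roughly what I have in mind; the inverse-Brier weighting is an illustrative stand-in for a proper Bayesian update over predictor reliability:

```python
def brier(past):
    # past: list of (probability_given, outcome in {0, 1}) pairs
    return sum((p - o) ** 2 for p, o in past) / len(past)

def aggregate(estimates, histories):
    # Weight each forecaster's current estimate by their track record.
    weights = [1.0 / (brier(h) + 1e-6) for h in histories]
    return sum(w * p for w, p in zip(weights, estimates)) / sum(weights)

# Two forecasters: one historically well-calibrated, one not.
good = [(0.9, 1), (0.2, 0), (0.7, 1)]
bad  = [(0.9, 0), (0.2, 1), (0.7, 0)]
print(aggregate([0.8, 0.3], [good, bad]))  # pulled strongly toward 0.8
```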
Yup. The propositions need to be such that you can get more confident than that.
My girlfriend says that a common case of motivated cognition is witnesses picking someone out of a lineup. They want to recognize the criminal, so given five faces they're very likely to pick one even if the real criminal's not there, whereas if people are leafing through a big book of mugshots they're less likely to make a false positive identification.
She suggests a prank-type exercise where there are two plants in the class. Plant A, who wears a hoodie and sunglasses, leaves to go to the bathroom, whereupon Plant B announces that they're pretty sure Plant A is actually $FAMOUS_ACTOR here incognito. Plant A pokes his head in, says he needs to go take a call, and leaves. See who manages to talk themselves into thinking that really is the celebrity.
This seems like it'll be easiest to teach and test if you can artificially create a preference for an objective fact. Can you offer actual prizes? Candy? Have you ever tried a point system and have people reacted well?
Assume you have a set of good prizes (maybe chocolate bars, or tickets good for 10 points) and a set of less-good prizes (Hershey's kisses, or tickets good for 1 point).
Choose a box: Have two actual boxes, labeled "TRUE" and "FALSE". Before the class comes in, the instructor writes a proposition on the blackboard, such as "The idea that carrots are good for your eyesight is a myth promoted as part of a government conspiracy to cover up secret military technology" or "A duck's quack never echoes, and nobody knows why." If the instructor believes that the proposition is true, the instructor puts a bunch of good prizes in the TRUE box and nothing in the FALSE box. Otherwise, the instructor fills the FALSE box with less-good prizes. The class comes in, and the instructor explains the rules. Then she spends 5 minutes trying to persuade the class that she believes the proposition. After that, people who think she actually believes it line up at the TRUE box, and everyone else lines up at the FALSE box. Everyone who guessed right gets a prize from their box. If you guess TRUE and you're right, your prize is better than if you guess FALSE and are right. Repeat this for a few propositions, and it's at least a useful test for whether you can separate what you want from what seems plausible.
It seems likely that God would create multiple realities, populated by different sorts of people and/or with different True Religions, to feed a diverse set of people into a shared heaven. So the recursive realities would have a pyramid or lattice structure. If God has limited knowledge of the realities he's created, there could even be cycles.
God is, himself, in a world filled with vague, ambiguous, sometimes contradictory hints towards a divine meta-reality. He's confused, anxious, and doesn't trust his own judgment. So he's created the Abrahamic world in order to identify the people who somehow manage to arrive at the truth given a similar lack of information. One of our religions is correct--guess right and you go to Heaven to help God try to get to Double Heaven.
Okay, I see that that's what you're saying. The assumption then (which seems reasonable but needs to be proven?) is that the simulated humans, given infinite resources, would either solve Oracle AI [edit: without accidentally creating uFAI first, I mean] or just learn how to do stuff like create universes themselves.
There is still the issue that a hypothetical human with access to infinite computing power would not want to create or observe hellworlds. We here in the real world don't care, but the hypothetical human would. So I don't think your specific idea for brute-force creating an Earth simulation would work, because no moral human would do it.
I'm slightly worried that even formally specifying an "idealized and unbounded computer" will turn out to be Oracle-AI-complete. We don't need to worry about it converting something valuable into computronium, but we do need to ensure that it interacts with the simulated human(s) in a friendly way. We need to ensure that it doesn't modify the human to simplify the process of explaining something. The simulated human needs to be able to control what kinds of minds the computer creates in the process of thinking (we may not care, but the human would). And the computer should certainly not hack its way out of the hypothetical via being thought about by the FAI.
a papercut doesn't leave much if any blood on the paper... as the paper moves away fast enough that blood doesn't even have time to flow on it.
It is possible to engineer, though, if you're manipulating the paper with great telekinetic precision. I accidentally bloodstained a book that way when I was about Harry's age.
N-player rock-paper-scissors variants. They generally involve everybody standing in a circle facing inward shaking their fists three times and chanting in unison, and looking back I feel like they do have a community-building effect. But they bypass the filter because they're competitive, and are presumably appealing to LW people because they involve memorizing a large ruleset and then trying to game it.
This looks like it loses in the Smoking Lesion problem.
Having seen the exchange that probably motivated this, one note: in my opinion, events can be linked both causally and acausally. The linked post gives an example. I don't think that's an abuse of language; we can say that people are simultaneously communicating verbally and non-verbally.
Incidentally, the best way to make conditional predictions is to convert them to explicit disjunctions. For example, in November I wanted to predict that "If Mitt Romney loses the primary election, Barack Obama will win the general election." This is actually logically equivalent to "Either Mitt Romney or Barack Obama will win the 2012 Presidential Election," barring some very unlikely events, so I posted that instead, and so I won't have to withdraw the prediction when Romney wins the primary.
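If you don't trust the "barring some very unlikely events" step, it's easy to brute-force. The encoding below is mine; the barred events are a non-nominee winning the general, or Obama losing the Democratic nomination:

```python
from itertools import product

# Enumerate every allowed world: who won the Republican primary, and
# whether the Democratic nominee (Obama) won the general.
for rep_nominee, dem_wins in product(['Romney', 'OtherRepublican'],
                                     [True, False]):
    winner = 'Obama' if dem_wins else rep_nominee
    # "If Romney loses the primary, Obama wins the general":
    conditional = (rep_nominee == 'Romney') or (winner == 'Obama')
    # "Either Romney or Obama wins the election":
    disjunction = winner in ('Romney', 'Obama')
    assert conditional == disjunction
print("equivalent in every allowed world")
```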
it didn't treat mild belief and certainty differently;
It did. Per the paper, the confidences of the predictions were rated on a scale from 1 to 5, where 1 is "No chance of occurring" and 5 is "Definitely will occur". They didn't use this in their top-level rankings because they felt it was "accurate enough" without that, but they did use it in their regressions.
Worse, people get marked down for making conditional predictions whose antecedent was not satisfied!
They did not. Per the paper, those were simply thrown out (as people do on PredictionBook).
They also penalise people for hedging, yet surely a hedged prediction is better than no prediction at all?
I agree here, mostly. Looking through the predictions they've marked as hedging, some seem like sophistry but some seem like reasonable expressions of uncertainty; if they couldn't figure out how to properly score them they should have just left them out.
If you think you can improve on their methodology, the full dataset is here: .xls.
This objection is not entirely valid, at least when it comes to Krugman. Krugman scored 17/19 mainly on economic predictions, and one of the two he got wrong looks like a pro-Republican prediction.
From their executive summary:
According to our regression analysis, liberals are better predictors than conservatives—even when taking out the Presidential and Congressional election questions.
From the paper:
Krugman...primarily discussed economics...
That's right, Emotion. Go ahead, put Reason out of the way! That's great! Fine! ...for Hitler.
I think Quirrell is working with an unconventional definition of Dark. Something like "in violent opposition to you."
Or you cast the spell after doing the deed, and that one time they were too busy fleeing/claiming this wasn't what it looked like/getting castigated/getting dressed.
...just how many pregnancies has McGonagall caused, anyway?
Most of the stuff I was hoping for hasn't panned out thus far. The ebook gets a few downloads each week, mostly as referrals from the HPMoR fan art page.
See also this exchange on the tvtropes forum, where EY clarifies at least one of his reasons for removing the Griphook line.
Yeah, it was a total cheat. That's why I put my anagram in the Dramatis Personae.
I'd walk through the roulette box (sounds like fun!) but not the torture box.