Russ, I think that if you take the example literally, the price would be 91%, not 50%, and you wouldn't expect to make money.
Eliezer, the PS definitely clarifies matters.
Although I also think the example is actually instructive if taken literally. In particular, if you see nine heads in a row, each additional head means you expect a higher chance of heads on the next flip. But you do not expect an increase in the price of the contract that pays $1 if heads comes up. THAT still has an expected price change of zero, even though we expect more heads going forward.
In other words, future EVENTS can be predictable, but future PRICES cannot.
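To make that concrete, here is a minimal sketch (assuming, as in the example, a coin whose unknown bias has a uniform prior, so the fair contract price is the Laplace rule-of-succession estimate):

```python
from fractions import Fraction

def price(heads, flips):
    """Fair price of a $1-if-heads contract under a uniform prior on the
    coin's bias: the Laplace rule of succession, (heads + 1) / (flips + 2)."""
    return Fraction(heads + 1, flips + 2)

# After nine heads in nine flips, the contract trades at 10/11 (~91%).
p = price(9, 9)
print(p)  # 10/11

# The next flip IS predictable: heads is expected with probability 10/11.
# But the PRICE is not. If the flip is heads (prob 10/11) the price rises
# to 11/12; if tails (prob 1/11) it falls to 10/12.
p_up, p_down = price(10, 10), price(9, 10)
expected_next_price = p * p_up + (1 - p) * p_down
print(expected_next_price)       # 10/11 -- exactly the current price
print(expected_next_price == p)  # True: expected price change is zero
```

Conditional on a head the price rises and conditional on a tail it falls, but weighted by how likely each outcome is, the expected price change is exactly zero, even though heads itself is strongly expected.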
Eliezer, your main point is correct and interesting, but the coin flip example is definitely wrong. The market's beliefs don't affect the bias of the coin! The map doesn't affect the territory.
The relevant FINANCE question is 'how much would you pay for a contract that pays $1 if the coin comes up heads?'. This is the classic prediction-market contract.
The price should indeed be ten elevenths. Of course, you don't expect to make money buying this contract, which was exactly your point.
What WILL be true is that the expected change in the price of the contract from one period to the next will be zero. This need not mean that it goes up 50% of the time, but the expected value next period (in this case) is the current price.
The first proof that I know of this was done by Paul Samuelson in 1965, in his paper 'Proof that Properly Anticipated Prices Fluctuate Randomly'.
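The general argument is short (a sketch of the standard iterated-expectations reasoning, in modern notation rather than Samuelson's original): if the price is the conditional expectation of the payoff $X$ given current information, $p_t = E[X \mid \mathcal{F}_t]$, then by the law of iterated expectations

$$E[p_{t+1} \mid \mathcal{F}_t] = E\big[\,E[X \mid \mathcal{F}_{t+1}]\,\big|\,\mathcal{F}_t\big] = E[X \mid \mathcal{F}_t] = p_t.$$

Properly anticipated prices are a martingale: the expected price change is zero, no matter how predictable the underlying events are.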
Nature sounds a bit like a version of Rory Breaker from 'Lock, Stock and Two Smoking Barrels':
"If you hold back anything, I'll kill ya. If you bend the truth or I think your bending the truth, I'll kill ya. If you forget anything I'll kill ya. In fact, you're gonna have to work very hard to stay alive, Nick. Now do you understand everything I've said? Because if you don't, I'll kill ya. "
You say 'That's not how it works.' But I think that IS how it works!
If progress were only ever made by people as smart as E.T. Jaynes, humanity would never have gotten anywhere. Even with fat tails, intelligence is still roughly normally distributed, and there just aren't that many 6 sigma events. The vast majority of scientific progress is incremental, notwithstanding that it's only the revolutionary achievements that are salient.
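For a rough sense of the numbers (a back-of-the-envelope sketch assuming a strictly normal distribution; the tail function is just standard-library arithmetic):

```python
import math

def tail(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Under a strictly normal distribution, six-sigma intellects are roughly
# one in a billion; fat tails make them more common, but still vanishingly rare.
p = tail(6)
print(p)        # ~9.9e-10
print(p * 7e9)  # ~7 such people in a world of seven billion
```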
The real question is: do you want Friendly A.I. to be achieved? Or do you just want Friendly A.I. to be achieved by YOU? There's no shame in the latter, but ruling out the latter says little about progress towards the former (to which, I happen to think, this blog contributes immensely).
Whether a world that had never used nuclear weapons would be safer today is unclear, but I think your claim that 'you wouldn't drop the bomb' is driven by hindsight bias. At the time, the far more pressing issue from Truman's perspective was how to end the war with a minimum loss of US life, and the long-term consequences of the bomb were far from clear.
I also think that memorials like Hiroshima Day pervert the overall moral perspective on World War 2. Because it was a large, salient act of destruction, it gets remembered. The Burma Railway and the Rape of Nanking (brutality that didn't even serve any strategic purpose) receive nothing like the same remembrance. It is a gross distortion when Hiroshima leads the Japanese to be viewed primarily as the victims of World War 2. EVERYTHING about World War 2 was horrible, and you can't emphasise only one piece of that horror without skewing the overall perception.
As to why a mere show of force wouldn't have sufficed, I remember Victor Davis Hanson arguing that to prevent conflicts from restarting later, there is psychological importance in the enemy realising that they are well and truly beaten. Without this, he argued, revisionists can re-stoke the conflict later. Hitler did exactly this when he claimed that the German army in WW1 had been on the verge of victory when it was stabbed in the back by politicians at home, rather than actually being days away from total defeat. Say what you will about the bomb, but it certainly let the Japanese know that they were beaten, and Japanese militarism hasn't resurfaced since.
In the repeated Prisoner's Dilemma, Tit-For-Tat is a famously successful strategy (not dominant in the strict game-theoretic sense, but remarkably robust in practice). With Hiroshima, the Japanese found out that payback's a bitch. The only injustice is that the individuals who bore the brunt of the attack weren't personally the ones who instigated it, but this is true of every war in history. I feel for their suffering, but no more and no less than for any other civilians in World War 2, or anywhere else.
I'm quite convinced by your analysis of what morality is and how we should think about it, up until the point about how universally it applies. I'm just not sure that humans' different shards of godshatter add up to the same thing across people, a point that I think would become apparent as soon as you started to specify what the huge computation actually WAS.
I would think of the output not as a yes/no answer, but as something akin to 'What percentage of human beings would agree that this was a good outcome, or could be convinced of it by some set of arguments?'. Some things, like saving a child's life, would receive very widespread agreement. Others, like a global Islamic caliphate or widespread promiscuous sex, would attract more disagreement, including potentially disagreement that cannot be resolved by presenting any conceivable argument to the parties.
The question of 'how much' each person views something as moral comes into play as well. If different people can't all be convinced of a particular outcome's morality, the question ends up looking remarkably similar to the question in economics of how to aggregate many people's preferences over goods. Because you never observe preferences in total, you let everyone trade and express their desires through revealed preference to reach a Pareto solution. Here, a solution might be to assign each person a certain amount of morality dollars, let them spend it across outcomes as they wish, and add it all up, as in the sketch below. Like economics, there's still the question of how to allocate the initial wealth (in this case, how heavily to weigh each person's opinions).
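A minimal sketch of that aggregation idea (the outcomes, budgets, and numbers here are all hypothetical illustrations, not anything from the post):

```python
# Hypothetical illustration of the 'morality dollars' aggregation idea:
# each person receives an equal budget and spends it across outcomes;
# the aggregate moral weight of an outcome is the total spent on it.
from collections import Counter

BUDGET = 100  # initial 'moral wealth' per person -- the contested choice

# Each person's allocation of their budget across outcomes (hypothetical).
allocations = [
    {"save_childs_life": 90, "global_caliphate": 0, "promiscuity": 10},
    {"save_childs_life": 80, "global_caliphate": 20, "promiscuity": 0},
    {"save_childs_life": 70, "global_caliphate": 0, "promiscuity": 30},
]

totals = Counter()
for person in allocations:
    assert sum(person.values()) == BUDGET  # everyone spends the same budget
    totals.update(person)

# Widespread-agreement outcomes accumulate spend from nearly everyone;
# contested outcomes split the remainder.
print(totals.most_common())
# [('save_childs_life', 240), ('promiscuity', 40), ('global_caliphate', 20)]
```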
I don't know how much I'm distorting what you meant - it almost feels like we've just replaced 'morality as preference' with 'morality as aggregate preference', and I don't think that's what you had in mind.
I'll be interested to see what your metamorality is. The one thing I think has been missing so far from the discussion is this: without some metamorality, what language do we have to condemn someone who chooses a different morality from ours? Obviously you can't argue morality into a rock, but we're not trying to do that; we're only trying to argue it into another human who shares fundamentally similar architecture, but not necessarily morality.
Moreover, the fact that one person can abandon a metamorality without affecting their underlying morality doesn't imply that society as a whole can ditch a particular metamorality (e.g. Judeo-Christian worldviews) and still expect the next generation's morality to stay unchanged. If you explicitly reject any metamorality, why should your children bother to listen to what you say anyway? Isn't their morality just as good as yours?
It may be that a religious metamorality serves as the basis for inculcating a particular set of moral teachings, which only then allows the original metamorality to be abandoned. E.g. it causes at least some of the population to do the right thing for the wrong reasons, when they otherwise might not have done the right thing at all.