Robin, I see a fair amount of evidence that winner-take-all competition is becoming more common as information becomes more important than physical resources.
Whether a movie star cooperates with or helps subjugate the people in central Africa seems to be largely an accidental byproduct of whatever superstitions happen to be popular among movie stars.
Why doesn't this cause you to share more of Eliezer's concerns? What probability would you give to humans being part of the winning coalition? You might have a good argument for putting it around 60 to 80 percent, but a 20 percent chance of the universe being tiled by smiley faces seems important enough to worry about.
Eliezer,
This is a good explanation of how easy it would be to overlook risks.
But it doesn't look like an attempt to evaluate the best possible version of an Oracle AI.
How hard have you tried to get a clear and complete description of how Nick Bostrom imagines an Oracle AI would be designed? Enough to produce a serious Disagreement Case Study?
Would the Oracle AI he imagines use English for its questions and answers, or would it use a language as precise as computer software?
Would he restrict the kinds of questions that can be posed to the Oracle AI?
I can imagine a spectrum of possibilities that range from an ordinary software verification tool to the version of Oracle AI that you've been talking about here.
I see lots of trade-offs here that increase some risks at the expense of others, and no obvious way of comparing those risks.
Eliezer, your non-response causes me to conclude that you aren't thinking clearly. John Maynard Smith's comments on Gould are adequate. Listen to Kaj and stick to areas where you know enough to be useful.
"progress in quality of vertebrate brain software (not complexity or size per se), and this shift in adaptive emphasis must necessarily have come at the expense of lost complexity elsewhere. Look at humans: we've got no muscles, no fangs, practically no sense of smell, and we've lost the biochemistry for producing many of the micronutrients we need." This looks suspicious to me. What measure of complexity of the brain's organization wouldn't show a big increase between invertebrates and humans? For the lost complexity you claim, only the loss of smell looks like it might come close to offsetting the increase in brain complexity; I doubt either of us has a good enough way of comparing the changes in complexity to tell much by looking at these features. If higher quality brains have been becoming more complex due to a better ability to use information available in the environment to create a more complex organization, there's no obvious reason to expect any major barriers to an increase in overall complexity.
Many popular reports of Eddington's test mislead people into thinking it provided significant evidence. See these two Wikipedia pages for reports that the raw evidence was nearly worthless. Einstein may have known how little evidence that test would provide.
When you hear someone say "X is not evidence ...", remember that the Bayesian concept of evidence is not the only concept attached to that word. I know my understanding of the word "evidence" changed as I adopted the Bayesian worldview. My recollection of my prior use of the word is a bit hazy, but it was probably influenced a good deal by beliefs about what a court would admit as evidence. (This is a comment on the title of the post, not on Earl Warren's rationalization.)
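For what it's worth, here's a minimal sketch of the Bayesian sense of the word (the probabilities are illustrative assumptions): an observation counts as evidence for a hypothesis exactly when it's more likely if the hypothesis is true than if it's false, whether or not a court would admit it.

```python
def posterior_odds(prior_odds, p_obs_given_h, p_obs_given_not_h):
    # Bayesian evidence is anything whose likelihood ratio differs from 1;
    # admissibility in court never enters into it.
    likelihood_ratio = p_obs_given_h / p_obs_given_not_h
    return prior_odds * likelihood_ratio

# Assumed numbers: an observation 1.5x as likely under H as under not-H
# moves even odds to 1.5:1, so it is (weak) evidence for H.
print(posterior_odds(1.0, 0.6, 0.4))  # 1.5
```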
Michael, I don't understand what opportunities you're referring to that could qualify as arbitrage. Also, reputation isn't necessarily needed; there are many investors who would use their own money to exploit the relevant opportunities, without needing to convince clients of anything, if there were good reason to think the opportunities could be identified. One of the reasons I don't try to exploit opportunities I can imagine involving an apocalypse in the 2020s is that I think it's unlikely the markets will incorporate, in the next few years, any new information that would make those opportunities less profitable to exploit later.
The Treasury bond market appears to be as close to such a market as we can expect to get. It currently shows bonds maturing in 2027 yielding about 0.20% more than those maturing in 2017, while bonds maturing in 2037 yield less than those maturing in 2027. That's a clear prediction that no apocalypse is expected. Markets looking more than a few years into the future normally say that the best forecast is that conditions will stay the same and/or that existing trends will continue.
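To make that reading of the yield curve concrete, here's a minimal sketch of the implied forward rate between two maturities (the yield levels, the maturities measured from 2007, and annual compounding are all simplifying assumptions; only the 2037-below-2027 inversion comes from the quotes above):

```python
def implied_forward_rate(y_short, t_short, y_long, t_long):
    # The rate the market implicitly charges for the years between
    # the two maturities, under simple annual compounding.
    growth = (1 + y_long) ** t_long / (1 + y_short) ** t_short
    return growth ** (1.0 / (t_long - t_short)) - 1

# Assumed illustrative yields, with the 2037 bond below the 2027 bond:
y_2027, y_2037 = 0.050, 0.048
print(f"{implied_forward_rate(y_2027, 20, y_2037, 30):.2%}")  # ~4.40%
# A market pricing in a real chance of apocalypse before 2037 would demand
# an enormous rate for the 2027-2037 years, not one below the 2027 yield.
```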
Rejecting Punctuated Equilibrium theory on the grounds that Gould was a scientifically dishonest crackpot seems to require both the fundamental attribution error and an ad hominem argument.
It appears counterproductive to use the word "mutants" to describe how people think of enemies. Most people can easily deny that they've done that, and therefore conclude they don't need to learn from your advice. I think if you were really trying to understand those who accept misleading stereotypes of suicide bombers, you'd see that their stereotype is more like "people who are gullible enough to be brainwashed by the Koran". People using such stereotypes should be encouraged to think about how many people believe themselves to be better than average at overcoming brainwashing.
And for those who think suicide bombers are unusual deviants, I suggest reading Robert Pape's book Dying to Win.
Eliezer, if you anticipate a default more than 90 days in advance, it doesn't matter that other investors do also. You hold the Treasury bills to maturity and they are paid off before the default.
Most people who have thought carefully about the risk-free interest rate realize that any real-world security provides merely an approximation to that ideal. The fact that people rarely describe T-bond rates with verbose but more accurate phrases such as "the nearest we can come to measuring the risk-free interest rate" doesn't tell you much about how many of them fail to see that the verbose version is more accurate. I haven't read The Black Swan (but have read a prior book of Taleb's). I doubt typical investors ought to follow the advice you've quoted, but it seems plausible that some investors ought to. His description of Treasury bills as extremely safe seems accurate enough for practical purposes. It only requires that investors be able to anticipate a U.S. government default or hyperinflation something like 90 days in advance (i.e., it's a good deal more reasonable than describing 30-year bonds as extremely safe). Good investing is mostly about avoiding big mistakes, not about perfectly avoiding all errors, and Taleb's advice would reduce one's risk by a big factor compared to most competing advice.
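Here's a toy sketch of that 90-day logic (the dates and the length of the warning are made-up assumptions): a holder who rolls 90-day bills and stops reinvesting at the first warning escapes whenever the warning arrives at least one bill term before the default.

```python
from datetime import date, timedelta

BILL_TERM = timedelta(days=90)  # bills are held to maturity, then rolled

def holder_escapes(last_purchase, default_date):
    # Once warned, you stop reinvesting; you escape if the one
    # outstanding bill matures and is paid off before the default.
    return last_purchase + BILL_TERM < default_date

# Assumed dates: warning arrives 91 days before the default, and the
# worst-case bill was bought the day before the warning.
warning = date(2008, 1, 1)
default = warning + timedelta(days=91)
last_purchase = warning - timedelta(days=1)
print(holder_escapes(last_purchase, default))  # True: matures 2 days early
```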
The claim that "when you have to actually bet, you still bet at 1:5 odds" overlooks some information that is commonly communicated via markets. When I trade on a market, I often do it by submitting a bid (offer to buy) and/or an ask (offer to sell). The difference between the prices at which I'm willing to place those two kinds of orders communicates something beyond what I think the right odds are. If I'm willing to buy "Hillary Clinton Elected President in 2008" at 23 and sell at 29, and only willing to buy "Person Recovers from Cryonic Suspension by 2040" at 8 and sell at 44, that tells you I'm more uncertain about wise odds for cryonics than for the 2008 election. For more sophisticated markets, option prices could communicate even more info of this sort.
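Here's a minimal sketch of the extra information those quotes carry (the prices are the ones above; reading spread width as uncertainty is the informal point being made, not a formal model):

```python
quotes = {
    "Hillary Clinton Elected President in 2008": (23, 29),
    "Person Recovers from Cryonic Suspension by 2040": (8, 44),
}

for claim, (bid, ask) in quotes.items():
    midpoint = (bid + ask) / 2   # a rough point estimate of the odds
    spread = ask - bid           # the width signals the trader's uncertainty
    print(f"{claim}: mid {midpoint:.0f}, spread {spread}")

# Both midpoints happen to sit at 26, but the 36-point cryonics spread
# versus the 6-point election spread shows far more uncertainty about
# where the right cryonics odds lie.
```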
I doubt that anyone is advocating the version of the Modesty Argument that you're attacking. People who advocate something resembling it seem to believe we should only respond that way when we can assume both sides are making honest attempts to be Bayesians. I don't know of anyone who suggests we ignore evidence concerning the degree to which a person is an honest Bayesian. See, for example, the qualification Robin makes in the last paragraph of this: http://lists.extropy.org/pipermail/extropy-chat/2005-March/014620.html. Or from page 28 of http://hanson.gmu.edu/deceive.pdf: "seek observable signs that indicate when people are self-deceived about their meta-rationality on a particular topic. You might then try to disagree only with those who display such signs more strongly than you do." There seems to be enough agreement on some basic principles of rationality that we can conclude there are non-arbitrary ways of estimating who's more rational that are available to those who want to use them.