The safest investment is Treasury Inflation-Protected Securities (TIPS). Ordinary investors should avoid investing in derivative securities such as options. If you are rationally pessimistic, go with TIPS.
Also, you would never get the 1/100 odds because, in a sense, money is more valuable in the state in which the economy is doing poorly. Say there are two bonds, each of which in 30 years has a 99% chance of paying $0 and a 1% chance of paying $1,000. The first bond pays off in a state in which the economy has done very poorly, the second in a state in which the economy has done OK. The first bond will cost a lot more than the second.
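Here is a toy sketch of that state-price point. Every number below (the 3x marginal value of a dollar in the bad state, the payoff probabilities) is invented purely for illustration, and 30 years of discounting is ignored.

```python
# Toy illustration: why the bond that pays off in the bad state costs more.
# All numbers are made up; discounting over 30 years is ignored for simplicity.

p_payoff = 0.01              # each bond pays $1,000 with 1% probability
payoff = 1_000

marginal_value_bad = 3.0     # assumed: a dollar is worth 3x more when the economy has done poorly
marginal_value_ok = 1.0      # a dollar in the OK state is worth a normal dollar

price_bad_state_bond = p_payoff * marginal_value_bad * payoff   # ~$30
price_ok_state_bond = p_payoff * marginal_value_ok * payoff     # ~$10

print(price_bad_state_bond, price_ok_state_bond)
```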
If you do want to play with derivative securities, just maintain a short position in the S&P 500. If you think the decline will be gradual rather than all at once, you could just keep buying short-term put options on the S&P 500. As the market declines you will gain wealth, which you could use to increase your short position.
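A minimal sketch of the rolled-put idea (not investment advice): the strike levels, the $30 premium, and the index path are all made-up numbers, chosen only to show how the position gains as the index falls.

```python
# Sketch: buying an at-the-money put each month while the index declines.
# Strike, premium, and index levels are invented for illustration.

def put_payoff(strike: float, index_at_expiry: float, premium: float) -> float:
    """Profit on one put option held to expiry."""
    return max(strike - index_at_expiry, 0.0) - premium

index_path = [1500, 1425, 1350, 1275, 1200]   # hypothetical monthly S&P 500 levels
premium = 30.0                                # assumed cost of each one-month put

wealth = 0.0
for start, end in zip(index_path, index_path[1:]):
    wealth += put_payoff(strike=start, index_at_expiry=end, premium=premium)
    print(f"index {start} -> {end}: cumulative profit ${wealth:.0f}")
```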
If you are really, really pessimistic, spend your money stocking up on canned goods and guns.
Doug S.
I'm interested in learning more about extremely early readers. I would be grateful if you contacted me at
EconomicProf@Yahoo.com
High functioning autism might in part be caused by an "overclocking" of the brain.
My evidence:
(1) Autistic children have, on average, larger brains than neurotypical children do.
(2) High-IQ parents are more likely than average to have autistic children.
(3) An extremely disproportionate number of mathematical geniuses have been autistic.
(4) Some children learn to read before they are 2.5 years old. From what I know, all of these early readers turn out to be autistic.
Eliezer-
“What justifies the right of your past self to exert coercive control over your future self? There may be overlap of interests, which is one of the typical de facto criteria for coercive intervention; but can your past self have an epistemic vantage point over your future self?”
In general I agree. But werewolf contracts protect against temporary lapses in rationality. My level of rationality varies. Even assuming that I remain in good health for eternity, there will almost certainly exist some hour in the future in which my rationality is much lower than it is today. My current self, therefore, will almost certainly have an “epistemic vantage point over [at least a small part of my] future self.” Given that I could cause great harm to myself in a very short period of time, I am willing to significantly reduce my freedom in return for protecting myself against future temporary irrationality.
Having my past self exert coercive control over my future self will reduce my future information costs. For example, when you download something from the web you must often agree to a long list of conditions. Under current law, if these terms and conditions included something like “you must give Microsoft all of your wealth,” the term wouldn’t be enforced. If the law did enforce such terms then you would have to spend a lot of time examining the terms of everything you agreed to. You would be much better off if your past self prevented your current self from giving away too much in the fine print of agreements.
“If you constrain the contracts that can be written, then clearly you have an idea of good or bad mindstates apart from the raw contract law, and someone is bound to ask why you don't outlaw the bad mindstates directly.”
The set of possible future mindstate/worldstate combinations is very large. It’s too difficult to figure out in advance which combinations are bad. It’s much more practical to sign a Werewolf contract which gives your guardian the ability to look at the mindstate/worldstate you are in and then decide if you should be forced to move to a different mindstate.
“why force Phaethon to sacrifice his pride, by putting him in that environment?”
Phaethon placed greater weight on freedom than pride and your type of paternalism would reduce his freedom.
But in general I agree that if most humans alive today were put in the Golden Age world, then many would do great harm to themselves, and in such a world I would prefer that the Sophotechs exercise some paternalism. But if such paternalism didn’t exist, then Werewolf contracts would greatly reduce the type of harm you refer to.
ShardPhoenix wrote "Doesn't the choice of a perfect external regulator amount to the same thing as directly imposing restrictions on yourself, thereby going back to the original problem?"
No, because if there are many possible future states of the world it wouldn't be practical for you to specify in advance what restrictions you will have in every possible future state. It's much more practical for you to appoint a guardian who will make decisions after it has observed what state of the world has come to pass. Also, you might pick a regulator who would impose different restrictions on you than you would if you acted without a regulator.
ShardPhoenix also wrote "Another way to do it might be to create many copies of yourself (I'm assuming this scenario takes place inside a computer) and let majority (or 2/3s majority or etc) rule when it comes to 'rescuing' copies that have made un-self-recoverable errors."
Good idea, except that in the Golden Age world these copies would become free individuals who could modify themselves. You would also be financially responsible for all of these copies until they became adults.
You are forgetting about "Werewolf Contracts" in the Golden Age. Under these contracts you can appoint someone who can "use force, if necessary, to keep the subscribing party away from addictions, bad nanomachines, bad dreams or other self-imposed mental alterations."
If you sign such a contract then, unlike what you wrote, it's not true that "one moment of weakness is enough to betray you."
Non-lawyers often believe that lawyers and judges believe that laws and contracts should be interpreted literally.
"Eliezer, I'd advise no sudden moves; think very carefully before doing anything."
But about 100 people die every minute!
I have signed up with Alcor. When I suggest to other people that they should sign up the common response has been that they wouldn't want to be brought back to life after they died.
I don't understand this response. I'm almost certain that if most of these people found out they had cancer and would die unless they got a treatment, and (1) with the treatment they would have only a 20% chance of survival, (2) the treatment would be very painful, (3) the treatment would be very expensive, and (4) even if the treatment worked they would be unhealthy for the rest of their lives, then almost all of these cryonics rejectors would take the treatment.
One of the primary costs of cryonics is the "you seem insane tax" one has to pay if people find out you have signed up. Posts like this will hopefully reduce the cryonics insanity tax.
You and Robin seem to be focused on different time periods. Robin is claiming that after ems are created one group probably won't get a dominant position. You are saying that post-singularity (or at least post one day before the singularity) there will be either one dominant group or a high likelihood of total war. You are not in conflict if there is a large time gap between when we first have ems and when there is a singularity.
I wrote in this post that such a gap is likely: http://www.overcomingbias.com/2008/11/billion-dollar.html
Have you ever had a job where your boss yelled at you if you weren't continually working? If not, consider getting a part-time job at a fast food restaurant where you work maybe one day a week for eight hours at a time. Fast food restaurant managers are quite skilled at motivating (and please forgive this word) "lazy" youths.
Think of willpower as a muscle. And think of the fast food manager as your personal trainer.
My guess is your problem arises from never having had to stay up all night doing homework that you found boring, pointless, tedious, and very difficult.
"In real life, I'd expect someone to brute-force an unFriendly AI on one of those super-ultimate-nanocomputers, followed in short order by the end of the world."
If you believe this you should be in favor of slowing down AI research and speeding up work on enhancing human intelligence. The smarter we are, the more likely we are to figure out Friendly AI before we have true AI.
Also, if you really believe this shouldn't you want the CIA to start assassinating AI programmers?
Economists do look at innovation. See my working paper "Teaching Innovation in principles of microeconomics classes."
The Real Ultimate Power: Reproduction.
Two compatible users of this ability can create new life forms which possess many of the traits of the two users. And many of these new life forms will themselves be able to reproduce, leading to a potential exponential spreading of the users' traits. Through reproduction users can obtain a kind of immortality.
Sorry, I misread the question. Ignore my last answer.
We should take into account the costs to a scientist of being wrong. Assume that the first scientist would pay a high price if the second ten data points didn't support his theory. In this case he would only propose the theory if he was confident it was correct. This confidence might come from his intuitive understanding of the theory and so wouldn't be captured by us if we just observed the 20 data points.
In contrast, if there will be no more data the second scientist knows his theory will never be proved wrong.
Carl Shulman,
Under either your (1) or (2), passable programmers contribute to advancement, so Eliezer's master's-in-chemistry guy can (if he learns enough programming to become a programming grunt) help advance the AGI field.
The best way to judge productivity differences is to look at salaries. Would Google be willing to pay Eliezer 50 times more than what it pays its average engineer? I know that managers are often paid more than 50 times what average employees are, but do pure engineers ever get 50 times more? I really don't know.
The benefits humanity has received from innovations have mostly come about through gradual improvements in existing products rather than through huge breakthroughs. For these kinds of innovations, 50 people with the minimal IQ needed to get a master's degree in chemistry (even if each of them believes that the Bible is the literal word of God) are far more valuable than one atheist with an Eliezer-level IQ.
Based on my limited understanding of AI, I suspect that AGI will come about through small continuous improvements in services such as Google search. Google search, for example, might get better and better at understanding human requests and slowly acquire the ability to pass a Turing test. And Google doesn't need a "precise theory to permit stable self-improvement" to continually improve its search engine.
"Maybe someday, the names of people who decide not to start nuclear wars will be as well known as the name of Britney Spears." should read:
"Maybe someday, the names of people who prevent wars from occurring will be as well known as the names of people who win wars."
If the probability that the LHC's design is flawed and that because of this flaw it will never work is much, much greater than the probability that the LHC would destroy us if it were to function properly, then regardless of how many times the LHC failed it would never be the case that we should give any significant weight to the anthropic explanation.
Similarly, if the probability that someone is deliberately sabotaging the LHC is relatively high then we should also ignore the anthropic explanation.
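A rough Bayesian sketch of the first point. Every number below (the priors and the chance that a sound LHC just happens to fail) is invented purely to illustrate the relative magnitudes, not drawn from any real estimate.

```python
# Invented numbers: if a mundane design flaw is vastly more likely than
# "a working LHC would destroy us," repeated failures barely move the
# anthropic (doom) hypothesis relative to the flaw hypothesis.

priors = {
    "design_flaw": 1e-3,       # assumed prior: a flaw means the LHC never works
    "doom": 1e-12,             # assumed prior: a working LHC would destroy us
    "fine": 1 - 1e-3 - 1e-12,  # the LHC is sound and harmless
}

# Probability of observing "the LHC failed again" under each hypothesis.
likelihoods = {"design_flaw": 1.0, "doom": 1.0, "fine": 0.05}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: unnormalized[h] / total for h in unnormalized}

print(posteriors)  # "doom" stays ~9 orders of magnitude less likely than "design_flaw"
```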
If we assume that Omega almost never makes a mistake and we allow the chooser to use true randomization (perhaps by using quantum physics) in making his choice, then Omega must make his decision in part through seeing into the future. In this case the chooser should obviously pick just B.
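For completeness, a quick expected-value check using the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in box B if one-boxing was predicted); the payoff amounts and accuracy figures are assumptions for illustration, not part of the original comment.

```python
# Expected value of each choice when Omega's prediction matches the actual
# choice with probability `accuracy`. Payoffs are the standard Newcomb amounts.

def expected_value(choice: str, accuracy: float) -> float:
    if choice == "one-box":
        # Box B holds $1,000,000 only if Omega predicted one-boxing.
        return accuracy * 1_000_000
    # Two-boxing: a correct prediction leaves box B empty; a miss leaves it full.
    return accuracy * 1_000 + (1 - accuracy) * 1_001_000

for accuracy in (0.999, 1.0):
    print(accuracy, expected_value("one-box", accuracy), expected_value("two-box", accuracy))
```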
"What takes real courage is braving the outright incomprehension of the people around you,"
I suspect that autistics are far more willing than neurotypicals to be true iconoclasts because many neurotypicals find autistics incomprehensible regardless of what the autistics believe. So the price of being an intellectual iconoclast is lower for autistics than for most other people.
Carl,
Are you sure the dilution of Hellworlds would work if, given that you do something today that causes you to be damned, all future copies you make of yourself will spend eternity in Hell?
Eliezer, it was irrational of you to write this post. You must assign a non-zero probability to the proposition that the Biblical Mary was the mother of God. This is especially true since so many people believe that she was and by your own beliefs you must give this some weight. Well, if Mary is the mother of God and if Hell is real then you have just decreased your afterlife utility by an amount greater than being tortured for 3^^^3 years. Thus, there was a significant negative utility to your having written this post. So to prove that you are committed to rationality, please write another post apologizing for insulting Mary.
The new Soviet "man" that Stalin wanted to create was a half-ape, half-man super-warrior.
See http://news.scotsman.com/ViewArticle.aspx?articleid=2688011
I think that militarily President Bush under-reacted to 9/11. The U.S. faces a tremendous future threat of being attacked by weapons of mass destruction. Unfortunately, before 9/11 it was politically difficult for the President to preemptively use the military to reduce such threats. 9/11 gave President Bush more political freedom and he did use it to some extent. But I fear he has not done enough. I would have preferred, for example, that the U.S., Russia, China, the UK, Israel, and perhaps France announce that in one year they will declare war on any other nation that either has weapons of mass destruction or doesn't allow highly intrusive inspections to make sure they don't have weapons of mass destruction. After 9/11 Bush might have been able to negotiate this. Now it is probably too late.
Perhaps firms should conduct "blind" interviews of potential employees in which the potential employee is interviewed while behind a screen.
TGGP,
I have not read the Myth of the Rule of Law.
In the first year of law school students learn that for every clear legal rule there always exist situations in which either the rule doesn't apply or the rule gives a bad outcome. This is why we always need to give judges some discretion when administering the law.
Eliezer, you wrote:
"Or else what would we do with the future? What would we do with the billion galaxies in the night sky? Fill them with maximally efficient replicators? Should our descendants deliberately obsess about maximizing their inclusive genetic fitness, regarding all else only as a means to that end?"
Won't our descendants who do have genes or code that causes them to maximize their genetic fitness come to dominate the billions of galaxies? How can there be any other stable long-term equilibrium in a universe in which many lifeforms have the ability to choose their own utility functions?
Eliezer,
Your posts on evolution are fantastic. I hope there will be many more of them.
Torture,
Consider three possibilities:
(a) A dust speck hits you with probability one.
(b) You face an additional probability 1/(3^^^3) of being tortured for 50 years.
(c) You must blink your eyes for a fraction of a second, just long enough to prevent a dust speck from hitting you in the eye.
Most people would pick (c) over (a). Yet 1/(3^^^3) is such a small number that by blinking your eyes one more time than you normally would, you increase your chances of being captured by a sadist and tortured for 50 years by more than 1/(3^^^3). Thus, (b) must be better than (c). Consequently, most people should prefer (b) to (a).
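A minimal formalization of that chain of comparisons; the symbol ε is my own label for the extra torture risk created by one additional blink.

```latex
% (c) preferred to (a); an extra blink adds torture risk epsilon > 1/(3^^^3);
% so (b) is preferred to (c), and by transitivity (b) is preferred to (a).
\begin{align*}
  &\text{(c)} \succ \text{(a)}
    && \text{most people prefer the blink to the speck}\\
  &P(\text{torture} \mid \text{extra blink}) = \epsilon > \tfrac{1}{3\uparrow\uparrow\uparrow 3}
    && \text{a blink carries some tiny added risk}\\
  &\Rightarrow\ \text{(b)} \succ \text{(c)}
    && \text{(b)'s torture risk is smaller than (c)'s}\\
  &\Rightarrow\ \text{(b)} \succ \text{(a)}
    && \text{by transitivity}
\end{align*}
```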
This is a very general problem. If the government decides to give away $X to someone, people are willing to spend up to $X to get it. If people intensively compete for the money then you would expect people to collectively spend $X trying to get the $X.
The main purpose of medical tort law is to enrich trial lawyers. Doctors and trial lawyers play legal games in which the doctors try to minimize their liability with disclosures and the trial lawyers argue that the disclosures don't offer legal protection for the doctors. There is little political incentive for anyone to care about the value of disclosure to patients. This is especially true since patients who care about such information will do their own research.
I find certain types of video games far more addictive than the average human does. This, however, reduces my demand for these video games. I have never bought or played World of Warcraft because I strongly suspect that I would become addicted to the game. If enough potential addicts are like me then games that become too addictive will suffer in the marketplace.