Congratulations to Paris Hilton

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-19T00:31:07.000Z · LW · GW · Legacy · 97 comments

...on signing up for cryopreservation with the Cryonics Institute.

(No, it's not a joke.)

Anyone not signed up for cryonics has now lost the right to make fun of Paris Hilton,
because no matter what else she does wrong, and what else you do right,
all of it together can't outweigh the life consequences of that one little decision.

Congratulations, Paris.  I look forward to meeting you someday.

Addendum:  On Nov 28 '07, Paris Hilton denied being signed up for cryonics.  Oh well.

97 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Robin_Hanson2 · 2007-10-19T00:37:58.000Z · LW(p) · GW(p)

Wow; I'm impressed by her (in a different way than before). Of course that consequence-outweighing claim depends crucially on the probability of cryonics working.

comment by CarlShulman · 2007-10-19T00:40:30.000Z · LW(p) · GW(p)

She doesn't understand the process: "And if you're immediately cooled, you can be perfectly preserved."

comment by bw2 · 2007-10-19T00:56:02.000Z · LW(p) · GW(p)

She does not understand that life gives meaning to life. I am starting to wonder whether she is really as brilliant as I thought.

comment by Anders_Sandberg · 2007-10-19T01:13:16.000Z · LW(p) · GW(p)

If this is not a hoax or she does a Leary, we will have her around for a long time. Maybe one day she will even grow up. But seriously, I think Eli is right. In a way, given that I consider cryonics likely to be worthwhile, she has demonstrated that she might be more mature than I am.

To get back to the topic of this blog, cryonics and cognitive biases is a fine subject. There are a lot of biases to go around here, on all sides.

comment by CarlShulman · 2007-10-19T02:22:32.000Z · LW(p) · GW(p)

http://www.mmdnewswire.com/mgicin-curtis-eugene-lovell-ii-to-be-frozen-for-e-hundred-yers-1300-2.html She may have listened to her magician.

Replies from: Benquo
comment by Benquo · 2016-03-28T19:28:39.869Z · LW(p) · GW(p)

Maybe I should follow the same heuristic and find some magicians to listen to.

comment by Peter_de_Blanc · 2007-10-19T04:26:00.000Z · LW(p) · GW(p)

I would think that SIAI is a better investment than cryonics.

comment by Richard_Hollerith · 2007-10-19T04:44:03.000Z · LW(p) · GW(p)

(I agree, Peter.)

no matter what else she does wrong, and what else you do right, all of it together can't outweigh the life consequences of that one little decision.

I think a person's life should be evaluated by what effect they have on civilization (or more precisely, on the universe) not by how long they live. I think that living a long time is a merely personal end, and that a properly lived life is devoted to ends that transcend the personal. Isn't that what you think, Eliezer?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-19T05:02:45.000Z · LW(p) · GW(p)

Reworded. Apparently this comment was very unclear. Original in italics below.

See the comment below: My primary reason for signing up for cryonics was that I got sick of the awkwardness, in important conversations, of trying to explain why cryonics was a good idea but I wasn't signed up for cryonics.

The secondary considerations, though they did not swing the decision, are probably of much greater relevance to readers of this blog.

I have found that it's not possible to be good only in theory. I have to hold open doors for little old ladies, even if it slows me down for a few precious seconds, because otherwise I start to lose my status as an altruist. In other words, I have found that I must be impractical in some places in order to maintain my ability to be practical in other places.

Cryonics strikes a blow against Death. It is not only of selfish significance. You can do it because you believe that humanity needs to get over this business of dying all the time, and you want to do yourself what you advocate that others do.

I don't advocate that anyone sign up for cryonics in place of donating to the Singularity Institute. But it might be wiser to sign up for cryonics than to go out to a fancy dinner. It should come out of the "personal/aesthetic" account, not out of the "altruistic" account (yes, Virginia, you do have mental accounts; money is fungible to Bayesians but you are not one).

Original:

I did once think that. But I found out that it's not possible to be good only in theory. That I still have to hold open doors for old ladies, even at the cost of seconds, because if I breeze right past them, I lose more than seconds. I have to strike whatever blows against Death I can. And, I have to be able to advocate for cryonics because it's part of a general transhumanist philosophy and some senior transhumanists assume "you're not taking it seriously" unless you're signed up.

Your mileage may vary. I'd advise you to agonizingly trade off cryonics against eating out or going to movies, and not worry about trading it off against your donations to the Singularity Institute. (Yes, money is totally fungible in theory, but not in practice.)

comment by Tiiba2 · 2007-10-19T05:23:37.000Z · LW(p) · GW(p)

SIAI welcomes its new Director of Sex Appeal.

comment by ChrisA · 2007-10-19T07:56:43.000Z · LW(p) · GW(p)

On cryonics, basically I understand the wager as follows: for a (reasonably) small sum I can obtain a small, but finite, possibility of immortality; therefore the bet has a very high return, so I should be prepared to make the bet. But this logic to me has the same flaw as Pascal's wager. There are many bets that appear to have similar payoff. For instance, although I am an atheist, I cannot deny there is a small, probably smaller than cryonics, chance that belief in a God is correct. Could cryonics be like religion in this way, an example of exposure bias, resulting from the fact that someone has a business model called cryonics; as a result this approach to immortality is given higher visibility (whether through traditional advertising or through motivation on the part of the investors to raise its profile)?

comment by Richard_Hollerith · 2007-10-19T08:04:50.000Z · LW(p) · GW(p)

Tell me again: you advise a person to spend on their personal cryonic preservation before they donate to SI?

(Cryopreservation has very low expected payoff till after the singularity, does it not?)

I perceive a fundamental tension between personal goals and goals that transcend the personal. I.e. until ~400 years ago civilization advanced mainly as a side effect of people's advancing their personal interests, but the more a person's environment diverges from the EEA, the more important it is for the person to choose to advance civilization directly, as an end in itself. I believe it is an error to regard civilization as the servant of the individual. Ultimately, it is the other way around though in the short term it is hazardous as political doctrine to regard the individual as the servant of the state, the nation or the race.

The transhumanist program of holding out to everyone immortality and the end of suffering works against progress by causing the people with the most potential to contribute to civilization to invest in personal goals.

One might ask, since billions of people already believe (false) narratives about Everlasting Life, what harm inspiring smaller numbers with (true) narratives of immortality? It harms because the small fraction of the population with most of the potential to contribute to civilization is immune to the narrative about Everlasting Life but susceptible to the narrative of the cryonicist, the life-extensionist and the transhumanist. I realize I am giving offense by asserting that the vast majority of the potential to contribute to the world is concentrated in a small fraction of the human population. I do not do so gratuitously; this analysis happens to depend on that fact.

There are still many important ways to advance civilization without being a singularitarian. Transhumanism strikes me as a bad influence on those prospective contributors (by distracting them with new personal aspirations). I grant that transhumanism will create more singularitarians than would be created were it not for transhumanism, but I doubt that the singularity benefits from those people. Most people will detract from the singularity by becoming singularitarians. The goal IMO should not be to recruit as many singularitarians as possible but rather to encourage the right people to join while trying to discourage or not attract the attention of the wrong people. Here I am giving offense by contradicting the deeply-held belief that including more stakeholders in a decision will improve the quality of the decision. Sorry for the heterodoxy! I doubt a person who needs a personal motive to contribute to the singularity will prove a positive influence on the singularity.

Anyway, that is why I am a singularitarian but not a cryonicist, life-extensionist or transhumanist.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-19T08:35:29.000Z · LW(p) · GW(p)

But this logic to me has the same flaw as Pascal’s wager.

Pascal's Wager, quantifying the complexity penalty in Occam's Razor, has a payoff probability on the order of 2^-bits(Christianity). Imagine a decimal point, followed by a string of 0s the length of the Bible, followed by a 1.
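
(As a rough back-of-envelope illustration, taking the Bible as roughly three million characters, an assumed ballpark, at one bit per character:

\[ P \;\sim\; 2^{-b} \;\approx\; 2^{-3\times 10^{6}} \;\approx\; 10^{-900{,}000}, \]

i.e. roughly nine hundred thousand zeros after the decimal point before the first nonzero digit.)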

Cryonics simply "looks like it ought to work". The technical probability seems better than one-half, excluding the probability that humanity itself survives.

The problem with Pascal's Wager is not the large payoff but the tiny, unsupported probability.

Cryonics would be a decent bet even if it only paid off an extra hundred years; though I admit that in this case I would probably not spend the money, because the whole endeavor would take on a different meaning.

Tell me again: you advise a person to spend on their personal cryonic preservation before they donate to SI?

Now that would be hypocrisy. I worked for SIAI for four years, and on the Singularity for a total of eight years, before I signed up for cryonics.

If you like, consider it this way: It was necessary that I be able to advocate cryonics, and it was easier to simply sign up for cryonics than to explain why I myself wasn't signed up. That was the primary driver behind my actual decision - I got tired of explaining - and would suffice even in the absence of other reasons.

But the moral aesthetics of the secondary reasons are rather complicated:

I perceive a fundamental tension between personal goals and goals that transcend the personal.

If we are here to help others, then what are the others here for? Take away the individuals and there is no civilization. Individuals need selves. It's not the same as being selfish. But neither is it the same as having no center. Transform into centerless altruists, and we would have destroyed a part of what we fought to preserve.

I observe nonetheless that it is possible to sign up for cryonics, not because you wish to live at others' expense, but because you believe humanity needs to move forward and get over the Death thing, and you want to do yourself what you have advocated others do.

Similarly, if you advocate that humanity should retain their selves, you may try to rejoice in your own immortality because that is what you advocate others do.

Like I said, the moral aesthetics of the secondary considerations are complicated. It was quite a struggle to express them.

comment by ChrisA · 2007-10-19T10:59:04.000Z · LW(p) · GW(p)

“The problem with Pascal's Wager is not the large payoff but the tiny, unsupported probability”

Why is the unsupported probability a problem? As long as there is any probability the positive nature of the wager holds. My problem with Pascal's Wager is that there are any number of equivalent bets, so why choose Pascal's bet over any of the others available? Better not to choose any, and spend the resources on a sure bet, i.e. utility in today's life, not a chance at a future one.

On cryonics, while the process is clearly far more technically feasible than the existence of a god, it is not clear to me that the critical part can actually work, i.e. the transference of consciousness to a new body. The results of a successful cryonics experiment seem to me to be the creation of a very good copy of me. At least in this respect the god solution is better.

A better bet than cryonics seems to me to be quantum immortality (aka many worlds immortality). At least the majority of people working in the relevant field reportedly believe in the MW hypothesis so technically it is probably on par with cryonics. On this basis I should put any immortality investment into maximising the numbers of me (with continuity of consciousness), say by sensible choices on diet, avoiding risky sports etc. But no-one makes any money with this solution.

comment by Robin_Hanson2 · 2007-10-19T11:07:11.000Z · LW(p) · GW(p)

We should make it clear that most Overcoming Bias readers probably place a low probability on cryonics working, and on needing to deal with rogue AIs anytime soon. Some apparently place high probabilities on these. Me, I see them as likely enough to be worth considering, but still far less likely than not.

comment by MichaelAnissimov · 2007-10-19T11:26:13.000Z · LW(p) · GW(p)

Holy crap. Paris Hilton actually did something smart for once.

Replies from: timtyler
comment by timtyler · 2011-05-18T17:48:18.395Z · LW(p) · GW(p)

Holy crap. Paris Hilton actually did something smart for once.

So: this hoax gets perpetuated further here.

Replies from: steven0461, MichaelAnissimov
comment by steven0461 · 2011-05-18T18:13:04.734Z · LW(p) · GW(p)

Voted down because it's four years old.

comment by MichaelAnissimov · 2011-05-22T14:59:54.829Z · LW(p) · GW(p)

I guess I'm just a guy that likes perpetuating hoaxes! (At the time there were a couple news reports on it, so it made sense to post, but anyway I just deleted it.)

comment by mitchell_porter2 · 2007-10-19T11:26:59.000Z · LW(p) · GW(p)

because you believe humanity needs to move forward and get over the Death thing

In the end, this sort of rhetoric is false. Cryonics offers you more time, that's all. "Something or other must make an end of you some day... Everything happens to everybody sooner or later if there is time enough", as George Bernard Shaw put it in his own immortalist essay. More time to look for genuine immortality, if you wish, but that search has no scientific argument behind it remotely comparable to the argument for reversibility of cryostasis. A chance to reach a Friendly Singularity and become more than human in presently inconceivable ways - I imagine that is also a consideration for anyone who has thought about superintelligence. But it's still not immortality.

What cryonics, especially coupled with the Singularity perspective, might allow you to "get over" are certain crude forms of resignation to being what you are and of ending as everyone else has always ended. But to call it a triumph over Death, with a capital D, as if it were the overcoming of death in all its forms, is to inflate justified revolutionary expectations into unjustified transcendental ones.

But I'm still very happy about this development, and just wrote to Paris to say so.

comment by Sebastian_Hagen2 · 2007-10-19T11:28:55.000Z · LW(p) · GW(p)

I'm horribly confused by this thread.

Eliezer: That I still have to hold open doors for old ladies, even at the cost of seconds, because if I breeze right past them, I lose more than seconds. I have to strike whatever blows against Death I can.

Why? What is wrong with taking an Expected Utility view of your actions? We're all working with limited resources. If we don't choose our battles to maximum effect, we're unlikely to achieve very much.

I understand your primary reason (it's easier to argue for cryonics if you're signed up yourself), but that one only applies to people trying to argue for cryonics, and for whom the financial obligation is less of a cost than the time and persuasiveness lost in these arguments.

I don't understand the secondary reasons at all.

Transform into centerless altruists, and we would have destroyed a part of what we fought to preserve.

Agreed, but 1/(6.6×10^9) isn't a very large part, and that's not even considering future lives. An Expected Utility calculation still suggests that if you can exert any non-negligible effect on the probability of a Friendly Intelligence Explosion or its timing, that effect will vastly outweigh whatever happens to yourself (according to most common non-egoistical value systems).
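
(To make the arithmetic of that comparison explicit, writing N for the current population and Δp for the shift in probability you can produce:

\[ \Delta p \cdot N \;>\; 1 \quad\Longleftrightarrow\quad \Delta p \;>\; \frac{1}{N} \;=\; \frac{1}{6.6\times 10^{9}} \;\approx\; 1.5\times 10^{-10}, \]

so any influence on the probability of a good outcome larger than about one part in ten billion already outweighs the expected value of one person's own survival, even before future lives enter the calculation.)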

comment by michael_vassar3 · 2007-10-19T13:15:20.000Z · LW(p) · GW(p)

As shocked as I am by the Paris thing, it doesn't compare to how shocked I am by Eliezer thinking that cryonics is higher priority than SIAI, or even than asteroid defense or the very best African aid charities such as those Toby Ord recommends.

I'm totally with Sebastian Hagen here.

Richard Hollerith: We haven't spoken yet, and I think that we should. E-mail me, OK? michaelaruna at yahoo dot com

mitchellporter: Realistically speaking there are some proposals for living forever which make sense, but beyond this there's the chance that our preferences, when converted into utility functions, are satisfiable with the resources that will be at hand post singularity.

comment by michael_vassar3 · 2007-10-19T13:18:53.000Z · LW(p) · GW(p)

I should qualify this. I'm totally with Sebastian in theory. In practice we can't re-write ourselves as altruists, and if we were to do so we would have to ditch lots of precomputed solutions to every-day problems. We have limited willpower both to behave non-automatically and to rewrite our automatic behaviors, and we should be using it in better ways than by not tipping people in restaurants that we aren't going to return to.

comment by Nick_Tarleton · 2007-10-19T13:24:34.000Z · LW(p) · GW(p)

CI, not Alcor? That's a little surprising.

A more optimistic take on the (very interesting) cryonics vs. SIAI debate is that, since Ms. Hilton has proven herself open to cryonics, she may be more open than most celebrities to sponsoring and advocating low-visibility, high-impact charities. Her money could do a lot of good and her fame could generate a lot more money for SIAI/Lifeboat/CRN/.... OTOH, as long as she's considered stupid, her support could be bad PR for a fringe-sounding organization that wants to be taken seriously in public policy. Anyway, she probably already gets $BIGNUM requests for charity every day.

I think most people (who aren't trying to be more like expected utility maximizers) don't trade off personal purchases against charitable contributions very much, so encouraging the average person to sign up for cryonics doesn't seem very likely to detract from their donations. It could be just as likely to increase them by giving them a personal stake in existential risk issues.

It seems a little strange from a utilitarian perspective to focus on money spent on cryonics as money that could be better given to SIAI, as opposed to money spent on selfish purchases with lower expected return (probably including all luxuries), although I do see a good moral-aesthetic reason for it.

comment by Nick_Tarleton · 2007-10-19T13:30:34.000Z · LW(p) · GW(p)

Vassar's second post makes me see another good reason to focus on cryonics vs. other luxury goods. Even if cryonics has a higher expected return, it's so deferred, abstract, and uncertain that the willpower cost of not purchasing cryonics is very low compared to other luxuries.

BTW, the first comment on that article is depressing: "I feel sorry for society in the future!"

comment by Recovering_irrationalist · 2007-10-19T13:36:43.000Z · LW(p) · GW(p)

This isn't an attack on cryonics or Eliezer (I'm in favour of both), just venting frustration at a bias (one he's quite frustrated with himself) that tends to pop up when very smart people predict the future. This is a biases blog, after all. To be kinder/less pedantic, just replace each "can't" with "won't"; it's still pretty bad.

Anyone not signed up for cryonics has now lost the right to make fun of Paris Hilton, because no matter what else she does wrong, and what else you do right, all of it together can't outweigh the life consequences of that one little decision.

That implies...

  • Personally surviving to Singularity dominates my utility function. It can't be outweighed by any other possible sets of results.
  • It can't for Paris.
  • Cryonics can't fail.
  • Humanity can't fail to survive all existential risks.
  • My revival can't be prevented by anything else, unknown unknowns included.
  • Paris's can't.
  • I can't make it without cryonics.
  • Paris can't.

...can't fail to all come out as true.

That's just for one reader. For all readers, multiply the probabilities of #1, #5 and #7 for Mr Average Reader, raise that product to the power of the number of readers, then multiply by the probabilities of the other 5 points.
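
(A toy version of that product, with every per-condition probability set to an assumed 0.9 and an assumed 1000 readers, purely for illustration:

\[ \big(p_1\,p_5\,p_7\big)^{N} \;=\; \big(0.9^{3}\big)^{1000} \;\approx\; 10^{-137}, \]

before the other five probabilities are even multiplied in.)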

I'm simplifying for brevity (for example there are at least 3 reasons you can't just multiply for the average reader) but you get the idea. Care to bet on Eliezer's Wager? :-)

I'm relatively new to all this so sorry if I did something daft.

comment by michael_vassar3 · 2007-10-19T13:56:17.000Z · LW(p) · GW(p)

Recovering Irrationalist: Only a couple daft things. OK 6 at least, maybe 7. "cryonics can't fail" has to be replaced by "the chance of cryonics working is not tiny", which seems to be a reasonable evaluation. Likewise, for all subsequent uses of "can't". We always work with probability distributions here. The possible daft thing is taking Eliezer too literally. He clearly didn't literally mean that we had lost our right to criticize Paris. I'm welcome to criticize anyone, and have been known to criticize lots of people who do Calorie Restriction, which seems more useful and cheaper than cryonics ignoring willpower costs, even though I no longer do CR myself. Maybe the post should be read as "Yay Cryonics!", a sentiment that I would second. OTOH, as I have said many times before, it seems to me that if the current chance of Friendly Singularity is extremely low and the expected chance of cryonics working is low, or vice versa, the expected personal selfish value of signing up for cryonics may still be less than that for donating to SIAI.

comment by michael_vassar3 · 2007-10-19T14:36:53.000Z · LW(p) · GW(p)

Carl: He seems like a respectable 'debunking' magician. Like Houdini, Penn and Teller, and Randi, he argues against the supernatural, so taking him seriously doesn't seem like a strong criticism of the reasoning process leading Paris to cryonics, though Alcor would seem like the obvious choice rather than the cryonics institute.

comment by Recovering_irrationalist · 2007-10-19T15:10:33.000Z · LW(p) · GW(p)

Michael: I said I wasn't attacking cryonics, but I guess I overlooked being interpreted as protecting my right to insult celebrities! I'll be more explicit.

My problem is with the words: "all of it together can't outweigh the life consequences of that one little decision". I'm not saying cryonics isn't worthwhile, and I'm not saying Eliezer's wrong to praise Paris Hilton. If you say "I don't eat people because humans are poisonous", and I argue with your reasoning, that doesn't mean I called you a cannibal.

Even with probability distributions and an overwhelmingly high value placed on personal survival, there are many ways at least one non-cryonically-signed reader's decisions could beat hers. That's not an argument against cryonics, it's an argument against the conjunction fallacy.

comment by michael_vassar3 · 2007-10-19T15:37:50.000Z · LW(p) · GW(p)

Recovering: I'm not signed up for cryonics, though I think I may sign up eventually when the marginal benefit in singularity risk of a dollar spent saving the world is much lower. I definitely don't think that everyone should sign up, but I didn't take the claim literally. My main point was that unlike Eliezer, you did seem to be speaking literally/precisely, and

Cryonics can't fail.

Humanity can't fail to survive all existential risks.

My revival can't be prevented by anything else, unknown unknowns included.

Paris's can't.

I can't make it without cryonics.

Paris can't.

don't follow from his statement.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-19T15:51:15.000Z · LW(p) · GW(p)

I do not think that people should prioritize cryonics over SIAI, so stop being shocked, Vassar. I think people should prioritize cryonics over eating at fancy restaurants or over having a pleasant picture in their room. If anyone still does this I don't want to hear them asking whether SIAI or cryonics has higher priority.

I wish people would try a little harder to read into my statements, though not for Straussian reasons. By saying "life consequences" I specifically meant to restrict the range to narrower than "consequences in general", i.e., personal rather than global expected utility.

Specific consequences can render Paris's cryo contract irrelevant or ineffectual, but that doesn't change the expected utilities in personal life consequences. It's pretty hard to see something with a larger life-EU than a cryo contract that can be amortized over millions of years.
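
(For concreteness, with purely illustrative numbers: even a revival probability of 0.05 applied to a payoff of a million years gives

\[ \mathbb{E}[\text{life-years}] \;=\; p_{\text{revival}} \times T \;=\; 0.05 \times 10^{6} \;=\; 5\times 10^{4} \]

expected life-years, against the few decades at stake in any other personal decision.)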

Yes, Paris can be criticized! Anyone can be criticized. But it is considered hypocritical to criticize another for a flaw that you could realistically be repairing in yourself but haven't, and it is in this sense that I spoke of "losing the right".

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-19T16:06:39.000Z · LW(p) · GW(p)

The results of a successful cryonics experiment seem to me to be the creation of a very good copy of me.

This is a black hole that sucks up arguments. Beware.

I'll deal with this on Overcoming Bias eventually, but I haven't done many of the preliminary posts that would be required.

Meanwhile, I hope you've noticed your confusion about personal identity.

In underlying physics, there are no distinct substances moving through time: all electrons are perfectly interchangeable, the only fundamentally real things are points in configuration space rather than individual objects, and a strong argument has been made (by Julian Barbour) that these points never change amplitudes. In timeless physics the future is itself an informational copy of the past, rather than anything "moving" from the past to the future. Your spatiotemporal intuitions which tell you that objects are persistent, distinct from one another, and move through time, are simply lying to you.

comment by Recovering_irrationalist · 2007-10-19T16:31:17.000Z · LW(p) · GW(p)

Michael: I see that's true if his statement is a measure of probability distributions. I thought he meant there's no possibly future where anything I could have done would have made me better off than Paris's one decision made her better off. Looks like I've assumed a common meaning for something used on this blog as a technical term - if so, I apologize.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-19T17:31:59.000Z · LW(p) · GW(p)

If I mention that Douglas Hofstadter's "Godel, Escher, Bach" is one of the Greatest Books Ever, people don't jump up and say: "Are you saying it would be better to buy that book than to donate to SIAI?"

No, it isn't. Neither does it feel to me like it would be wise to advise people to go through life bookless, or even that I should advise people to only take books out from the library. If you're going to own any book, you may as well own that one; and it is not totally unconnected to AI, so perhaps there will be trickle effects.

To me the notion of trading off cryonics against existential risk prevention has a flavor of remarkable oddity to it. It is like a city councillor proposing to spend money on a library, and someone jumping up and saying, "But what about the children starving in Abeokuto, Nigeria? Why not spend the money on Abeokuto? Why do you hate the Abeokutans so?" Why pick that particular occasion to make the protest, rather than, say, someone buying a speedboat?

comment by Nick_Tarleton · 2007-10-19T18:07:40.000Z · LW(p) · GW(p)

Why pick that particular occasion to make the protest, rather than, say, someone buying a speedboat?

Off the top of my head, that someone is interested in cryonics is very strong Bayesian evidence that they'll be easier than average to persuade to donate to SIAI. On the other hand, this would equally justify suggesting to them to cut back on other luxuries to donate. But like Michael Vassar suggested and I elaborated on, since the benefit of cryonics is far-off and uncertain, it may take less willpower to give up than other luxuries. But surely not that much less...

(As you might guess, I'm currently trying to make this decision for myself.)

I hate to suggest it of high-caliber rationalists, but I wonder if the decision to forgo cryonics might be sometimes partly motivated by conspicuous self-sacrifice signaling, or even if the altruistic justification is sometimes partly a rationalization for forgoing cryonics in favor of other luxury goods.

comment by Richard_Hollerith · 2007-10-19T19:06:59.000Z · LW(p) · GW(p)

that someone is interested in cryonics is very strong Bayesian evidence that they'll be easier than average to persuade to donate to SIAI.

That is it! That is what bothers me about Eliezer's advocacy of cryonics, which I will grant is no more deserving of reproach than most personal expenditures. IIUC, his livelihood depends on donations to the SIAI. Someone once quipped that it is impossible to convince a man of the correctness of some proposition if his livelihood depends on his not believing it. Sometimes I worry that his enthusiasm for cryonics is a sign that his dependency on donations will bias his judgement on important things, not just cryonics. It would reassure me if singularitarian leaders had secure incomes that derive from a source that does not depend on the opinions and perceptions of prospective donors. Proposed solution: Eliezer continues to solicit donations but makes it clear that he reserves the right to spend them in any way he likes, e.g., meditating in a monastery for a year or starting a family. I.e. he changes his pitch to, "I deserve your support because I have demonstrated that I am an exceptionally altruistic, hard-working and intelligent man, and am likely to continue to contribute significantly to our civilization." Also, he is blinded to the identity of the donors so as not to be preferentially influenced by any public statements the donors might make.

comment by michael_vassar3 · 2007-10-19T19:32:22.000Z · LW(p) · GW(p)

Nick: Personally, I forgo cryonics in favor of luxury goods all the time rather unapologetically. I don't see how this could constitute conspicuous self-sacrifice signaling. Spending on things like cryonics or SIAI is generally going to be driven by idealized semi-aspirational self-models which are not hyperbolic discounters, not heavy discounters, and extend their self-concept fairly broadly rather than confining it according to biological imperatives. For such a self-model, there's not much self to sacrifice. For the self that makes most of my small decisions there is a self to sacrifice, and that self doesn't get sacrificed in favor of some future person who supposedly is "me" just because there is a good argument backing that supposition.

comment by g · 2007-10-19T20:01:35.000Z · LW(p) · GW(p)

It's hard to see how not signing up for cryonics could be "conspicuous" (except for a small minority of professional transhumanists who might face questioning about it, like Eliezer) since (1) to an excellent first approximation no one has signed up for cryonics, so the signal gives rather little information, and (2) it's only going to become public if you are close to death (or if you put out a press release about it like Paris Hilton, I guess).

To most people, abstaining from other luxury goods in favour of cryonics is going to look and feel much more like self-sacrifice than abstaining from cryonics in favour of other luxury goods. In fact, it's probably only among (a certain sort of) high-calibre rationalists that Nick's conjecture would have any plausibility.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-19T20:02:47.000Z · LW(p) · GW(p)

Sometimes I worry that his enthusiasm for cryonics is a sign that his dependency on donations will bias his judgement on important things, not just cryonics.

I do not understand the logic of this. I have no livelihood interest in cryonics.

It would reassure me if singularitarian leaders had secure incomes that derive from a source that does not depend on the opinions and perceptions of prospective donors.

Anyone wants to buy me an annuity, go for it. It would reassure me too.

comment by Richard_Hollerith · 2007-10-19T20:48:12.000Z · LW(p) · GW(p)

Do you have livelihood interest in donations to SIAI?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-19T20:51:55.000Z · LW(p) · GW(p)

Yes, obviously.

...did I say something unclear? I'm a bit worried because I seem to be misinterpreted a lot, in this thread, and looking back, I can't see why.

comment by Doug_S. · 2007-10-19T21:08:50.000Z · LW(p) · GW(p)

Is it okay to say "I don't want to be cryonically preserved because I don't want to be brought back to life in the future after I die normally?"

comment by Constant2 · 2007-10-19T21:29:58.000Z · LW(p) · GW(p)

The livelihood argument only goes to motivation, and Eliezer's motivation is of no interest to me. Why should it be? I don't need to trust his motivation - I only need to read and evaluate his arguments. Or am I missing something?

comment by Richard_Hollerith · 2007-10-19T22:01:45.000Z · LW(p) · GW(p)

What you miss is that Eliezer has chosen to accept an immense responsibility (IIUC because no one else had accepted it) namely to guide the world through the transition from evolved intelligence to engineered intelligence. Consequently, Eliezer's thought habits are of high interest to me.

comment by Richard_Hollerith · 2007-10-19T22:13:21.000Z · LW(p) · GW(p)

TEXTAREAs in Firefox 1.5 have a disease in which a person must exercise constant vigilance to prevent stray newlines. Hence the ugly formatting.

comment by Recovering_irrationalist · 2007-10-19T22:35:52.000Z · LW(p) · GW(p)

I'm a bit worried because I seem to be misinterpreted a lot, in this thread, and looking back, I can't see why.

In my case, maybe I need to learn when and how to interpret statements as describing expected utility or probability distributions rather than sets of actual events.

Is there a link that explains this clearly, and is it just a BayesCraft thing or is there reading material outside the Bayesphere I should be able to interpret like this?

comment by Richard_Hollerith · 2007-10-19T22:38:52.000Z · LW(p) · GW(p)

RI, in this comment section, you can probably safely replace "utility function" with "goal" and drop the word "expected" altogether.

comment by Tom3 · 2007-10-19T23:36:06.000Z · LW(p) · GW(p)

"Congratulations, Paris. I look forward to meeting you someday.

Posted by Eliezer Yudkowsky"

Pffff hahahaha

comment by mtraven · 2007-10-20T02:24:00.000Z · LW(p) · GW(p)

You neglected to mention that her motivation for signing up for cryonics was to be with her (similarly frozen) pet chihuahua. So Eliezer will have a rival for his affections.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-20T02:35:43.000Z · LW(p) · GW(p)

For the love of cute kittens, I didn't mean it that way. "I look forward to meeting you someday" is what I would say of any human being who signed up for cryonics.

comment by Richard_Hollerith · 2007-10-20T05:54:26.000Z · LW(p) · GW(p)

Eliezer clarified earlier that this blog entry is about personal utility rather than global utility. That presents me with another opportunity to represent a distinctly minority (many would say extreme) point of view, namely, that personal utility (mine or anyone else's) is completely trumped by global utility. This admittedly extreme view is what I have sincerely believed for about 15 years, and I know someone who held it for 30 years without his becoming an axe murderer or anything horrid like that. To say it in other words, I regard humans as means to nonhuman ends. Of course this is an extremely dangerous belief, which probably should not be advocated except when it is needed to avoid completely mistaken conclusions, and it is needed when thinking about simulation arguments, ultratechnologies, the eventual fate of the universe and similarly outre scenarios. If the idea took hold in ordinary legal or political deliberations, unnecessary suffering would result, so let us be discreet about to whom we advocate it.

Specifically, I wish to reply to "Take away the individuals and there is no civilization", which is a reply to my "I believe it is an error to regard civilization as the servant of the individual. Ultimately, it is the other way around." Allow me to rephrase more precisely: Ultimately, the individual is the servant of the universe. I used civilization as a quick proxy for the universe because the primary way the individual contributes to the universe is by contributing to civilization.

The study of ultimate reality is of course called physics (and cosmology). There is an unexplored second half to physics. The first half of physics, the part we know, asks how reality can be bent towards goals humans already have. The second half of physics begins with the recognition that the goals humans currently have are vanities and asks what the deep investigation of reality can tell us about what goals humans ought to have. This "obligation physics" is the proper way to ground the civilization-individual recursion. Humanism, liberalism, progressivism and transhumanism ground the recursion in the individual, which might be the mistake made by most contemporary educated humans that could benefit the most from correction. The mistake is certainly very firmly entrenched in world culture. Perhaps the best way to see the mistake is to realize that subjective experience is irrelevant except as a proxy for the relevant things. What matters is objective reality.

Contrary to what almost every thoughtful person believes, it is possible to derive ought from is: the fact that no published author has done so correctly so far does not mean it cannot be done or that it is beyond the intellectual reach of contemporary humans. In summary my thesis is that the physical structure of reality determines the moral structure of reality.

comment by ChrisA · 2007-10-20T08:31:52.000Z · LW(p) · GW(p)

Eliezer “This is a black hole that sucks up arguments. Beware. I'll deal with this on Overcoming Bias eventually, but I haven't done many of the preliminary posts that would be required. Meanwhile, I hope you've noticed your confusion about personal identity.”

I look forward to the posts on consciousness, and yes, I don’t feel like I have a super coherent position on this. I struggle to understand how me is still me after I have died, my dead body is frozen, mashed up and then reconstituted some indefinite time in the future. Quarks are quarks but a human is an emergent property of quarks so interchangeability doesn't necessarily follow at a macro scale. (A copy of a painting is not equivalent to the original, no matter how good a copy). This is why I don’t invest in cryonics. To me there should be better continuity to qualify as transference of consciousness, but I can't be explicit on what I mean by better.

comment by douglas · 2007-10-20T08:53:51.000Z · LW(p) · GW(p)

If we equate the decision to undergo cryonics with the decision to live forever, then I think calling it a small decision is problematic. Suppose I were to say, "You will live forever. That is your nature." It seems most people have one of two ways of dealing with this possibility-- 1) create an endlessly beautiful future (heaven) or, 2) deny the possibility (death is an ultimate end). These actions do not seem to me to be based on the notion that living forever is a small decision.

comment by Rolf_Nelson2 · 2007-10-20T15:59:42.000Z · LW(p) · GW(p)

Here's my data point:

  1. Like Michael Vassar, I see the rationality of cryonics, but I'm not signed up myself. In my case, I currently use altruism + inertia (laziness) + fear of looking foolish to non-transhumanists + "yuck factor" to override my fear of death and allow me to avoid signing up for now. Altruism is a constant state of Judo.

  2. My initial gut emotional reaction to reading that Eliezer signed up for cryonics was irritation that Eliezer asks for donations, and then turns around and spends money on this perk that most people, including me, don't indulge in. (An analogy is the emotion that strikes you if you hear that the president of a charity drives a Ferrari that he bought out of his charity salary.)

  3. I then quickly realized (even before seeing Eliezer's elaboration) that this reaction is illogical: it doesn't matter whether you spend money on cryonics or, say, on eating out more often, or on buying a house that's slightly larger than you need for bare survival. So I discount this emotion.

  4. However, it's not clear to me what % of the non-cryonics majority will reach step 3. There are many ways someone could easily rationalize the emotions of step 2 if, unlike me, they were inclined to do so in this case. (I can give examples of plausible rationalizations on request.)

  5. One way to mitigate, for people who didn't reach step 3, would be to point out that, while signing up for cryonics when you're on death's door is a 5 to 6-figure investment, signing up through life insurance when you're young and healthy (which I presume is Eliezer's situation) is extremely cheap.

  6. Eliezer is a product of Darwinian evolution. An extreme outlier, to be sure, with the "altruism knob" cranked up to 11, but a product of evolution nonetheless, with all the messy drives that entails. I would be more bothered if he claimed to be altruistic 100% of the time, since that would cause me to doubt his honesty.

  7. (Corollary to (6)) If someone is considering donating, but is holding off because "I am not sufficiently convinced Eliezer is altruistic enough, I'm going keep my money and wait until I meet someone with a greater probability of being altruistic", please let me know (here, or at rzolf.h.d.nezlson@gmail.com, remove z's) and I will be happy to enlighten you on all the ways this reasoning is wrong.

comment by TGGP2 · 2007-10-20T20:13:00.000Z · LW(p) · GW(p)

Richard, if morality is a sort of epiphenomenon with no observable effects on the universe, how could anyone know anything about it?

comment by g · 2007-10-20T22:32:00.000Z · LW(p) · GW(p)

Where did Richard say anything resembling "with no observable effects on the universe"?

comment by Richard_Hollerith · 2007-10-20T23:44:00.000Z · LW(p) · GW(p)

Yes, TGGP, I've reread my comment and cannot see where I . . .

comment by Richard_Hollerith · 2007-10-20T23:59:00.000Z · LW(p) · GW(p)

TGGP, I maintain that the goals that people now advocate as the goal that trumps all other goals are not deserving of our loyalty and a search must be conducted for a goal that is so deserving. (The search should use essentially the same intellectual skills as physicists.) The identification of that goal can have a very drastic effect on the universe e.g. by inspiring a group of bright 20 year-olds to implement a seed AI with that goal as its utility function. But that does not answer your question, does it?

comment by mtraven · 2007-10-21T05:02:00.000Z · LW(p) · GW(p)

I have to admire a blog that can go from Paris Hilton to the metaphysics of morality in only a few short hops.
Richard said: "The first half of physics, the part we know, asks how reality can be bent towards goals humans already have."

That's engineering, not physics. Then later you say:

"I maintain that the goals that people now advocate as the goal that trumps all other goals are not deserving of our loyalty and a search must be conducted for a goal that is so deserving. (The search should use essentially the same intellectual skills as physicists.)"

While your goal of finding better goals is admirable, I can't see that the skills of physicists are particularly applicable. Traditionally philosophers and religions claim to have that kind of expertise. You can view religion as a technology for enabling people not to think too hard about ultimate goals, for better or worse. Often such thinking is unproductive.

comment by TGGP2 · 2007-10-21T06:34:00.000Z · LW(p) · GW(p)

So, then Richard, do you assert that morality does have observable effects on the universe? Do you think that a physicist can do an experiment that will grant him/her knowledge of morality? You have been rather vague by saying that just as we discovered many positive facts with science, so we can discover normative ones, even if we have not been able to do so before. You haven't really given any indication as to how anyone could possibly do that, except by analogizing again to fields that have only discovered positive rather than normative facts. It would seem to me the most plausible explanation for this difference is that there are none of the latter.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-21T07:26:00.000Z · LW(p) · GW(p)

Richard, assuming that you're thinking the way my past self was thinking, you should find the following question somewhat disturbing:

How would you recognize a moral discovery if you saw one? How would you recognize a criterion for recognizing moral discoveries if you saw one? If you can't do either of these things, how can you build an AI that makes moral discoveries, or tell whether or not a physicist is telling the truth when she says she's made a moral discovery?

comment by Richard_Hollerith · 2007-10-21T21:53:00.000Z · LW(p) · GW(p)

Thanks for the nice questions.

comment by mtraven · 2007-10-21T23:12:00.000Z · LW(p) · GW(p)

Handily, the Templeton Foundation took out a two-page ad in the New York Times today where a number of luminaries discuss the purpose of the universe. Presumably our personal goals should be in tune with the overarching goal of the universe, if there is one.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-21T23:49:00.000Z · LW(p) · GW(p)

Non sequitur, mtraven.

"Let us understand, once and for all, that the ethical progress of society depends, not on imitating the cosmic process, still less in running away from it, but in combating it." -- T. H. Huxley
comment by Richard_Hollerith · 2007-10-22T00:23:00.000Z · LW(p) · GW(p)

Certainly ethical naturalism has encouraged many oppressions and cruelties. Ethical naturalists must remain constantly aware of that potential.

comment by Richard_Hollerith · 2007-10-22T00:41:00.000Z · LW(p) · GW(p)

Er, they needn't remain constantly aware. They need only take it into account in all their public statements.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-22T00:44:00.000Z · LW(p) · GW(p)

You surely realize you haven't answered any of the tough questions here. Evolution is a natural process, but not an ethical one. The second law of thermodynamics is a universal trend but this doesn't make entropy a terminal value. So how would you recognize a natural ethical process if you saw one?

comment by Brandon_Reinhart · 2007-10-22T01:35:00.000Z · LW(p) · GW(p)

Eliezer, in what way do you mean "altruism" when you use it? I only ask for clarity.

I don't understand how altruism, as selfless concern for the welfare of others, enters into the question of supporting the singularity as a positive factor. This would open a path for a singularity in which I am destroyed to serve some social good. I have no interest in that. I would only want to support a singularity that benefits me. Similarly, if everyone else who supports the efforts to achieve the singularity is simply altruistic, no one is looking out for their own welfare. Selfish concern (rational self-interest) seems to increase the chance for a safe singularity.

comment by TGGP2 · 2007-10-22T01:39:00.000Z · LW(p) · GW(p)

I think it was the Stoics who said one's ethical duty was to act in accordance with the Universe. Marcus Aurelius did a lousy job of making sure his son was competent to run the empire though.

comment by Tom_McCabe · 2007-10-22T01:55:00.000Z · LW(p) · GW(p)

"So how would you recognize a natural ethical process if you saw one?"

Suppose that you observe process A: maybe you look at it, or poke around a bit inside it, but you don't make a precise model. If you extrapolate A forward in time, you will get a probability distribution over possible states (including the states of all the other stuff that A touches). If A consistently winds up in very small regions of this distribution, compared to what your model predicts, and there's no way to fix your model without making it extremely complex, you can say A is an "ethical process". Two galaxies, or two rocks, or two rivers, can easily collide; but if you look at humans, or zebras, or even fish, you will notice that they run into each other much less often than you would expect if you made a simple Newtonian model.
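
Concretely, one could read this as a Monte Carlo "surprise" test: sample end-states from a naive forward model and ask how rarely it lands as close to the observed outcome as process A actually does. The sketch below is only an illustration of that reading; the function name, the toy random-walk model, and the tolerance are invented for the example.

    import numpy as np

    def outcome_surprise(step_model, start, observed_end, horizon,
                         n_samples=10_000, tol=1.0):
        # Estimate P(the naive model ends up within `tol` of the observed
        # end-state) by simulation, and report the surprise in bits.
        # A huge, persistent surprise that no simple tweak to the model
        # removes is the proposed signature of an "ethical" (optimizing) process.
        hits = 0
        for _ in range(n_samples):
            state = np.array(start, dtype=float)
            for _ in range(horizon):
                state = step_model(state)  # one step of the naive forward model
            if np.linalg.norm(state - observed_end) < tol:
                hits += 1
        p_hat = max(hits / n_samples, 1.0 / n_samples)  # floor avoids log(0)
        return -np.log2(p_hat)

    # Toy usage: a drift-free random walk as the naive model of process A.
    rng = np.random.default_rng(0)
    random_walk_step = lambda s: s + rng.normal(0.0, 1.0, size=s.shape)
    bits = outcome_surprise(random_walk_step, start=[0.0, 0.0],
                            observed_end=np.array([50.0, 50.0]), horizon=100)

The 1/n_samples floor just keeps the estimate finite when the naive model never comes close; what matters is whether the surprise stays large under every simple model you try.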

comment by mtraven · 2007-10-22T04:11:00.000Z · LW(p) · GW(p)

People who believe that the universe has a goal (I'm not really one of them, except on Thursdays) also tend to believe that humans are the culmination or at least the instrument of that goal. Humans are free to try to combat the universe's goals if they want to, but they may just be fulfilling the universe's goal by rebelling against it.

comment by douglas · 2007-10-22T06:43:00.000Z · LW(p) · GW(p)

"How would you recognize a natural ethical process if you saw one?"
How would you recognize an ethical process if you saw one? If you saw an ethical process would you think it unnatural, or supernatural, or what exactly? (Sorry if that's a silly question)

comment by Bandwagon_Smasher · 2007-10-22T07:18:00.000Z · LW(p) · GW(p)

I'm new to this whole cryonics debate, so I have a question: How long do you all believe you'll be frozen for? If you think that being revived is scientifically possible, what are the developments that need to be achieved to get to that point?

Off the top of my head, I would think you'd need, at the very least, (1) prevention of cellular damage of the brain cells during the freezing and reviving processes; (2) the ability to revive dead tissue in general; and (3) the ability to perfectly replicate consciousness after it has been terminated.

I would think paying money (I would assume it is very costly) for any freezing service now makes no sense, since a better system might be developed in the future, closer to the time of one's own death. Also, you don't know how you're going to die, and to what extent your brain will be destroyed in the process. Furthermore, if you're talking hundreds of years, you have no guarantee of whether your physical remains or contractual rights to be unfrozen will survive, since things in the world might change significantly. Finally, if (2) or (3) is achieved at all, I would expect some serious social upheaval that might endanger the ability of anyone to get unfrozen at all anyway.

Based on all this, I would think the small probability of success is not worth giving up dinner and movies. Of course, Paris doesn't really have to make the trade-off, so I guess my reasoning doesn't really apply to her. Perhaps it's not that she's smarter, just luckier. But I'm not a scientist, so please inform me if I'm missing something.

comment by mitchell_porter2 · 2007-10-22T10:31:00.000Z · LW(p) · GW(p)

BS - Cost of cryonics: "no less than US$28,000 and rarely more than US$200,000". One way to fund this is with a life insurance policy.

The cryonics organizations themselves are always seeking better methods of suspension, and your contract is with a suspension provider, not with a suspension technology, so the point about technological advance is moot.

There are in general two conceptions of how revival might work. One is through nanotechnological repair of freezing damage to the cells (along with whatever condition originally caused a person's death). This may be combined with the growth of a new host body if only the brain has been frozen (that's the economy-class ticket to the post-cryo future). The other method would involve high-resolution imaging of the frozen person, as in the Visible Human Project, but with subcellular resolution of neuronal structure and composition, and then comprehensive simulation of the brain structure thus revealed, perhaps in a robot body or just in a virtual reality (at first), under the assumption that this is equivalent to revival. The issues are then the same as in "mind uploading" - what happens to personhood and identity when you can have multiple copies, slightly inaccurate copies, and so forth. I disagree with the computational philosophy of mind, so I don't think that constitutes survival, but the details are not exactly clear, and in any case cryonics is the best existing method of physical preservation after death, so it's the best that we have to work with.

Philosophical issues aside, the two resurrection processes described (reversal of intracellular damage, mapping of intracellular structure) are merely an extrapolation of our existing abilities to image molecules and manipulate them. We have every reason to think they are possible, especially for a material object at very low temperatures. As for timescales, certainly some cryonicists have thought in terms of centuries. But the rising paradigm among the small group of people who follow these matters, is that artificial intelligence is coming, and will boost itself past human intelligence, within decades, not centuries. Those extrapolated molecular capabilities, it is thought, will be achieved very rapidly, as an incidental side-effect of that process. If superhuman artificial intelligence, having become superhuman, is "friendly" towards human beings, then one would expect the Great Unthawing to occur more or less immediately. But as several of the posters above have indicated, human-friendliness is an outcome which will have to be worked for, and which trumps everything else - if we get everything else right, and that wrong, then everything else will count for nothing, and the unfriendly superhuman AI steamrolls the human race in pursuit of whatever imperative does guide its behavior.

So the prognosis is mixed. But if you can afford it, cryonics is a more than reasonable option to take up.

comment by Richard_Hollerith · 2007-10-23T03:17:00.000Z · LW(p) · GW(p)

For the sake of brevity, I borrow from Pascal's Mugger.

If a Mugger appears in every respect to be an ordinary human, let us call him a "very unconvincing Mugger". In contrast, an example of a very convincing Pascal's Mugger is one who demonstrates an ability to modify fundamental reality: he can violate physical laws that have always been (up to now) stable, global, and exception-free. And he can do so in exactly the way you specify.

For example, you say, "Please Mr Mugger follow me into my physics laboratory." There you repeat the Millikan oil-drop experiment and demand of the mugger that he increase the electrical charge on the electrons in the apparatus by an amount you specify (stressing that he should leave all other electrons alone).

Then you set up an experiment to measure the gravitational constant G and demand that he increase or decrease G by a factor you specify (again stressing that he should leave G alone outside the experimental apparatus).

You ask him to violate the conservation of momentum in a system you specify by a magnitude and direction you specify.

I find it humorous to use the phrase "signs and wonders" for such violations of physical laws. You demand and verify other signs and wonders.

The Mugger's claim that your universe -- your "spacetime" -- is an elaborate simulation and that he exists outside the simulation is now very convincing.

My reason for introducing the very convincing Mugger is that I believe that under certain conditions, unless and until you acquire a means of modelling the part of reality outside the simulation that does not rely on communicating with the Mugger, the Mugger has Real Moral Authority over you: it is not too much of an exaggeration to say you should regard every communication from the Mugger as the Voice of God.

The Mugger's authority does not derive from the fact that he can at any time crush you like a bug. Many ordinary humans have had that kind of power over other humans. His authority stems from the fact that he is in a better position than you or anyone else you know to tell you how your actions might have a permanent effect on reality. But we are getting ahead of ourselves.

Probably the only conditions required on that last proposition are that our spacetime -- which is the only "compartment" of reality we know about so far -- will end after a finite amount of time, and that we become confident of that fact. In cosmology these days this is usually modelled as the Big Rip.

I believe the utility of directing one's efforts at a compartment of reality that might go on forever completely trumps the utility of directing efforts at a compartment of reality that will surely end, even if the end is 100,000,000,000 years away, and this remains true regardless of the ratio of the probabilities that one's efforts will prove effective in those two compartments.

If scientists determine that the universe is going to end in 12 months or 10 years or 100 years, and if during the time remaining to us society and the internet continue to operate normally, I tend to suspect that I could convince many people that the only hope we have for our lives and our efforts to have any ultimate or lasting relevance is for us to contribute to the discovery and investigation of a compartment of reality outside our spacetime. That is because it is an intrinsic property of spacetime -- by which I mean the thing modelled by Einstein's equation -- that a spacetime which ends after a finite amount of time cannot support or host a causal chain that goes on indefinitely, and, as we shall see, such chains are central to the search for intrinsic value.

Of course we have no evidence for what exists beyond our spacetime, and no concrete reason to believe we ever will find any evidence, but we have no choice but to conduct the search.

And that puts us in the proper frame for us to meet the very convincing Mugger: "Delighted to meet you, Mr Mugger. Please tell me and my civilization how to make our existence and our efforts meaningful."

The very convincing Mugger is "ontologically privileged": he has a causal model of the part of reality outside or beyond our spacetime. More precisely, the signs and wonders he performed on demand lead us to believe that it is much more probable that he can acquire such a model than that we can do so without his help.

Now we come to the heart of how I propose to derive a normative standard from positive facts: I propose that causal chains that go on forever or indefinitely are important; causal chains that peter out are unimportant. In fact, the most important thing about you is your ability to have a permanent effect on reality. Instead of worrying that the enemy will sap your Precious Bodily Fluids, you should worry that he will sap your Precious Ability to Initiate Indefinitely-Long Causal Chains.

The ontologically privileged observer has not proven to us that he has enough knowledge to tell us how to create causal chains that go on indefinitely. But unless we discover new fundamental physics, communicating with the privileged observer is the most likely means of our acquiring such knowledge. Communicating with the Mugger is a link in a causal chain that might go on indefinitely if the Mugger can cause effects that go on indefinitely. In the absence of other concrete hopes to permanently affect reality, helping the Mugger strikes me as the most likely way for my life and efforts to have True Lasting Meaning.

Now some readers are asking, But what do we do if we never stumble on a way to communicate with an ontologically privileged observer? My answer is that my purpose here is not to cover all contingencies but rather to exhibit a single contingency in which I believe it is possible to deduce ought from is.

Saying that only indefinitely-long causal chains are important does not tell us which indefinitely-long causal chains are good and which ones are evil. But consider my contingency again: you find yourself in communication with an ontologically privileged observer. After extensive investigation you have discovered no other way to cause effects that go on indefinitely and have no concrete hope of ever discovering a way. Once he has demonstrated that he exists outside your spacetime, the only information you can obtain about him is what he tells you. Sure, the privileged observer might be evil. But if you really have no way to learn about him and no way to cause effects that go on indefinitely except through communication with him, perhaps you should trust him. After contemplating for ~7 years, I think so.

I know I risk sounding arrogant or careless, but I must say I do not consider the possibility that our spacetime is an elaborate simulation important to think about. I use it here only to take advantage of the fact that the audience is already familiar with it and with the Mugger. There is another possibility I do consider important to think about that also features a communications link with an ontologically privileged observer. I would have used that possibility if it would not have made the comment longer.

In summary, I believe we can derive ought from is in the following situation: our reality "contains a horizon" the other side of which we are very unlikely to be able to model. The physical structure of the horizon allows us to become highly confident of this negative fact. But we have stumbled on a means to communicate with a mind beyond the horizon, who I have been calling the ontologically privileged observer. Finally, our spacetime will come to an end, and reality allows us to become highly confident of that fact.

Although a causal chain can cross a communications link, you cannot use the link to construct a causal model of the reality on the other side of the link. Perhaps your interlocutor will describe the other side to you, but you cannot use the link to verify he is telling the truth unless you already have a causal model of the other side (e.g. you know there is a trusted computer on the other side attached to trusted sensory peripherals and you know the "secrets" of the trusted computer and trusted sensors, which is quite a lot to know).

And there is my very compressed reply to "You have been rather vague by saying that just as we discovered many positive facts with science, so we can discover normative ones, even if we have not been able to do so before. You haven't really given any indication as to how anyone could possibly do that."

comment by TGGP2 · 2007-10-23T03:49:00.000Z · LW(p) · GW(p)

Richard, my objections in my e-mail to you still stand. I suppose to a Pete Singer utilitarian it might be correct that we assign equal weight of importance to everyone in and beyond our reality, but not everyone accepts that and you have not established that we ought to. If I am a simulation, I take my simulation as my reality and care as little about the space-time simulating me (outside of how they affect me) as another simulation someone in our reality might create. Outside of the issue of importance, you still have not established how we obtain oughts. You simply ask that we accept the authority of someone even as you acknowledge that this person may be a liar and/or malevolent. You have hit the "worship" button without regard to whether it is Nyarlathotep/Loki/Lucifer you are worshiping (in that respect you are not all that different from the adherents of the more primitive religions). Your post was also quite long. I suggest you get a blog of your own to host it on. All the cool people are doing it.

comment by Richard_Hollerith · 2007-10-23T05:40:00.000Z · LW(p) · GW(p)

The ought is, You ought to do whatever the very convincing Mugger tells you to do if you find yourself in a situation with all the properties I list above. Blind obedience does not have a very good reputation; please remember, reader, that the fact that the Nazis enthusiastically advocated and built an interstate highway system does not mean that an interstate highway system is always a bad idea. Every ethical intelligent agent should do his best to increase his intelligence and his knowledge of reality, and to help other ethical intelligent agents do the same. That entails consistently resisting tyranny and exploitation. But intelligence can be defined as the ability to predict and control reality or, to put it another way, the ability to achieve goals. So, if your only goal is to increase intelligence, you set up a recursion that has to bottom out somehow. You cannot increase intelligence indefinitely without eventually confronting the question of what other goals the intelligence you have helped to create will be applied to. That is a tricky question that our civilization does not have much success answering, and I am trying to do better.

comment by Richard_Hollerith · 2007-10-23T06:55:00.000Z · LW(p) · GW(p)

I suppose to a Pete Singer utilitarian it might be correct that we assign equal weight of importance to everyone in and beyond our [spacetime].

In the scenario with all the properties I list above, I assign most of the intrinsic good to obeying the Mugger. Some intrinsic good is assigned to continuing to refine our civilization's model of reality, but the more investment in that project fails to yield the ability to cause effects that persist indefinitely without the Mugger's help, the more intrinsic good gets heaped on obeying the Mugger. Nothing else gets any intrinsic good, including every human and in fact every intelligent agent in our spacetime. Agents in our spacetime must make do with whatever instrumental good derives from the two intrinsic goods. So for example if Robin is expected to be thrice as useful to those two goods as Eliezer is, then he gets thrice as much instrumental good. Not exactly Pete Singer! No one can accuse me of remaining vague on my goals to avoid offending people! I might revise this paragraph after learning more decision theory, Solomonoff induction, etc.

comment by TGGP2 · 2007-10-24T01:40:00.000Z · LW(p) · GW(p)

You have not established that one ought to "do his best to increase his intelligence, his knowledge of reality and to help other ethical intelligent agents do the same". Where is the jump from is to ought? I know Robin Hanson gave a talk saying something along those lines, but he was greeted with a considerable amount of disagreement from people whose ethical beliefs aren't especially different from his.

That entails consistently resisting tyranny and exploitation.
If a tyrant's goal were to increase their knowledge of reality and spread it, and they chose to go about this with violence and exploitation, resistance could very well hinder those goals.

But intelligence can be defined as the ability to predict and control reality or to put it another way to achieve goals.
That would make Azathoth incredibly intelligent, and Azathoth isn't called the "blind idiot" for nothing.

So, if your only goal is to increase intelligence
You haven't established that ought to be our goal.

You cannot increase intelligence indefinitely without eventually confronting the question of what other goals the intelligence you have helped to create will be applied to.
The intelligence might have no goals other than those I choose to give it, and the intelligence I am endlessly increasing might be my own.

That is a tricky question that our civilization does not have much success answering, and I am trying to do better.
Why is a "civilization" the unit of analysis rather than a single agent?

I assign most of the intrinsic good to obeying the Mugger
I do not and you have not established that I should.

the more intrinsic good gets heaped on obeying the Mugger.
You have not established that obeying the mugger will actually lead to preferable results.

comment by Richard_Hollerith · 2007-10-24T09:02:00.000Z · LW(p) · GW(p)

The blog "item" to which this is a comment started 5 days ago. I am curious whether any besides TGGP and I are still reading. One thing newsgroups and mailing lists do better than blogs is to enable conversational threads to persist for more than a few days. Dear reader, just this once, as a favor to me, please comment here (if only with a blank comment) to signal your presence. If no one signals, I'm not continuing.

Why is a "civilization" the unit of analysis rather than a single agent?
Since you put the word in quotes, I take it you hold something akin to the views of Margaret Thatcher, who famously said that there is no society, just individuals and families. You should have been exposed to the mainstream view often enough to notice that my statement can be translated to an equivalent statement expressed in terms of individuals. If we introduce too many deviations from consensus reality at once, we are going to lose our entire audience. Please continue as if I had not used the word and had said instead that if there exist individuals who have a successful answer to the tricky question, then they are not promoting the answer to the singularitarian community, broadly understood, or I would have become aware of them already.

Yes, I take as postulates

  • the desirability of increasing the intelligence of whatever part of reality is under your control,
  • the desirability of continuously refining your model of reality,
  • that the only important effects are those that go on forever,
  • for that matter, that the probability of a model of reality is proportional to 2^K where K is the complexity of the model in bits (Occam's razor; the usual form of this prior is sketched just below).
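
For concreteness, here is that prior in its usual form (a gloss rather than a new postulate; the exponent is conventionally written as negative, so that simpler models get more weight):

```latex
% Sketch of a Solomonoff-style complexity prior: simpler models get more weight.
% K(M) is the description length (in bits) of the shortest specification of model M.
\[
  P(M) \propto 2^{-K(M)},
  \qquad
  \frac{P(M_1)}{P(M_2)} = 2^{K(M_2) - K(M_1)}
\]
% e.g. a model that is 10 bits simpler is favored by a factor of 2^{10} = 1024.
```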

What I meant by deriving ought from is is that what you learn about reality can create a behavioral obligation, e.g. in certain specific circumstances it creates an obligation to obey an ontologically privileged observer. This is not usually acknowledged in expositions about morality and the intrinsic good -- at least not to the extent I acknowledge it here. But yeah, you have a point that without the three oughts I listed above, I could not derive the ought of obeying the Mugger, so instead of saying that you can derive ought from is, I should in the future say that it is not commonly understood by moral philosophers how much the moral obligations on an agent depend on the physical structure of the reality in which the agent finds himself. Note that he cannot do anything about that physical structure and consequently about the existence of the moral obligation (assuming the postulates above).

comment by g · 2007-10-24T10:57:00.000Z · LW(p) · GW(p)

I am still reading. I'm inclined to agree with you that if some sort of moral realism is correct and if some demonstrably-godlike being tells you "X is good" then you're probably best advised to believe it. I don't understand how you get from there to the idea that we should be studying the universe like physicists looking for answers to moral questions; so far, so far as I know, all even-remotely-credible claims to have encountered godlike beings with moral advice to offer have been (1) from people who weren't proceeding at all like physicists and (2) very unimpressive evidentially.

I think it's no more obvious that increasing the intelligence of whatever part of reality is under your control is good than that (say) preventing suffering is good, and since we don't even know whether there are any effects that go on for ever it seems rather premature to declare that only such effects matter.

I think it's obvious (assuming any sort of moral realism, or else taking "obligations" in a suitably relativized way) that the moral obligations on an agent depend on the physical facts about the universe, and you don't need to consider exotic things like godlike beings to discover that. If you're driving along a road, then whether you have an obligation to brake sharply depends on physical facts such as whether there's a person trying to cross the road immediately in front of you.

(You wrote 2^K where you meant 2^-K. I assume that was just a typo.)

comment by Richard_Hollerith · 2007-10-24T19:00:00.000Z · LW(p) · GW(p)

TGGP pointed out a mistake, which I acknowledged and tried to recover from by saying that what you learn about reality can create a behavioral obligation. g pointed out that "you don't need to consider exotic things like godlike beings to discover that. If you're driving along a road, then whether you have an obligation to brake sharply depends on physical facts such as whether there's a person trying to cross the road immediately in front of you." So now I have to retreat again.

There are unstated premises that go into the braking-sharply conclusion. What is noteworthy about my argument is that none of its premises has any psychological or social content, yet the conclusion (obey the Mugger) seems to. The premises of my argument are the 4 normative postulates I just listed plus the conditions on when you should obey the Mugger. It is time to recap those conditions:

  • You find yourself in communication with an ontologically privileged observer.

  • After extensive investigation you have discovered no other way to cause effects that go on indefinitely.

  • You have no concrete hope of ever discovering a way.

  • Once the observer has demonstrated that he exists outside your spacetime, the only information you can obtain about him is what he tells you.

  • You have no concrete hope of ever discovering anything about the observer besides what he tells you.

Notice that there are no psychological or social concepts in those two lists! No mention for example of qualia or subjective mental experience. No appeal to the intrinsic moral value of every sentient observer, which creates the obligation to define sentience, which is distinct from and I claim fuzzier than the concept which I have been calling intelligence. Every concept in every premise comes from physics, cosmology, basic probability theory, information technology and well-understood parts of cognitive science and AI. The lack of psychosocial concepts in the premises makes my argument different from every moral argument I know about that contains what at first glance seems to be a psychological or social conclusion.

I think it's no more obvious that increasing the intelligence of whatever part of reality is under your control is good than that (say) preventing suffering is good
When applied to ordinary situations (situations that do not involve e.g. ultratechnology or the fate of the universe) those two imperatives lead to largely the same decisions, because if you have only a little time to do an investigation, asking a person "Are you suffering?" is the best way to determine whether there is any preventable or reversible circumstance in his life impairing his intelligence, which I remind the reader I am defining as the ability to achieve goals. Suffering, though, is a psychological concept, and I recommend that ultratechnologists and others concerned with the ultimate fate of the universe keep their fundamental moral premises free from psychological or social concepts.

All even-remotely-credible claims to have encountered godlike beings with moral advice to offer have been (1) from people who weren't proceeding at all like physicists and (2) very unimpressive evidentially.
Their claims have been very unimpressive because they weren't proceeding like physicists. Impressive evidence would be an experiment repeatable by anyone with a physics lab that receives the Old Testament in Hebrew (encoded as UTF-8) from a compartment of reality beyond our spacetime. For the evidence to have moral authority, there would have to be a very strong reason to believe that the message was not sent from a transmitter in our spacetime. (The special theory of relativity seems to be able to provide the strong reason.)

since we don't even know whether there are any effects that go on for ever it seems rather premature to declare that only such effects matter.
An understandable reaction. You might never discover a way to cause an effect that goes on forever even if you live a billion years and devote most of your resources to the search. I sympathize!

comment by Nick_Tarleton · 2007-10-24T19:51:00.000Z · LW(p) · GW(p)

I'm still reading.

It is not obvious why creating a causal chain that goes on indefinitely is uniquely morally relevant. (Nor is it obvious that the concept is meaningful in reality - a causal chain with a starting point can be unboundedly long but at no actual point in time will it be infinite.) I do see it as valuable to look for ways to escape this space-time continuum, because I presently want (and think I will continue to want) (post)humanity to continue existing and growing indefinitely, but I don't believe there is any universal validity to this value. (If values like this form attractors for complex - i.e. not paperclip-maximizing - intelligences I suppose they would be in a sense "objective", but would not acquire any more normative force, whatever that is.) I don't see this value as "unreal" because it's subjective, though. My subjectivity is very real to me.

Saying that only indefinitely-long causal chains are important does not tell us which indefinitely-long causal chains are good and which ones are evil.

This was Eliezer's point: how could you ever recognize which ones are good and which ones are evil? How could you even recognize a process for recognizing objective good and evil?

comment by TGGP2 · 2007-10-24T21:37:00.000Z · LW(p) · GW(p)

Physicists have been proceeding like physicists for some time now and none of them has done anything like receiving the Old Testament from outside of our space-time. Why would you even expect a laboratory experiment to have such a result? It also seems you are postulating an extra-agent (the Mugger), which limits the amount of control experimenters have and in turn makes the experiment unrepeatable.

comment by Richard_Hollerith · 2007-10-24T21:53:00.000Z · LW(p) · GW(p)

This was Eliezer's point: how could you ever recognize which ones are good and which ones are evil? How could you even recognize a process for recognizing objective good and evil?

I have only one suggestion so far, which is that if you find yourself in a situation which satisfies all five of the conditions I just listed, obeying the Mugger initiates an indefinitely-long causal chain that is good rather than evil. I consider, "You might as well assume it is good," to be equivalent to, "It is good." Now that I have an example I can try to generalize it, which is best done after the scenario has been expressed mathematically. That is my plan of research. So for example I am going to characterize mathematically the notion of a possible world in which an agent can become confident of a "negative fact" about its environment. An example of a negative fact is, I will probably not be able to refine further my model of the Mugger using any evidence except what the Mugger tells me. Then I will try to determine whether our reality is an example of a possible world that allows agents to become confident of negative facts. I will try to devise a way to compute an answer to the question of how to trade off the two goals of obeying the Mugger and refining my model of reality.

A moral system must contain some postulates. I have retracted my claim that one can derive ought from is and apologize for advancing it. Above I give a list of four postulates I consider unobjectionable -- the list whose last item is Occam's razor. I do not claim that you and I will come to agree on the fundamental moral postulates if we knew more, thought faster, were more the people we wished we were, had grown up farther together. I do not claim that we have or can discover a procedure that allows two rational humans always to cooperate. I do not claim that this is the summer of love. I reserve the right to continue to advocate for my fundamental moral postulates even if it causes conflict.

comment by Richard_Hollerith · 2007-10-24T22:47:00.000Z · LW(p) · GW(p)

Physicists have been proceeding like physicists for some time now and none of them has done anything like receiving the Old Testament from outside of our space-time.
As far as I know, none of them are looking for a message from beyond the space-time continuum. Maybe I will try to interest them in making the effort. My main interest however is a moral system that does not break down when thinking about seed AI and the singularity. Note that the search for a message from outside space-time takes place mainly at the blackboard and only at the very end moves to the laboratory for the actual construction of the experimental apparatus. Moreover, it is irrational to expect the message to arrive in any human tongue or in a human-originated encoding like ASCII or UTF-8. How absurd! The rational approach is an embryonic department of mathematics called anticryptography. Also, the SETI project probably knows an algorithm to detect a signal created by an intelligent agent about which we know nothing specific trying to communicate with another intelligent agent about which it knows nothing specific.
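
To give the flavor (a toy illustration I am making up here, not an existing SETI algorithm): a sender who shares no biological or cultural reference points with the receiver can still make a bit stream recognizably artificial, for example by pulsing out the prime numbers.

```python
# Hypothetical sketch of an "anticryptographic" beacon: a bit stream that a
# motivated decoder with no shared language could recognize as artificial.
# Classic idea from SETI discussions: encode the primes as pulse counts.

def prime_beacon(n_primes: int) -> str:
    """Return '1' pulses separated by '0' gaps: 2 pulses, 3 pulses, 5, 7, ..."""
    primes, candidate = [], 2
    while len(primes) < n_primes:
        # candidate is prime iff no previously found prime divides it
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return "0".join("1" * p for p in primes)

print(prime_beacon(5))  # 110111011111011111110...
```

Detecting such a stream is the receiving half of the same problem: look for regularities that unintelligent physical processes are very unlikely to produce.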

It also seems you are postulating an extra-agent (the Mugger), which limits the amount of control experimenters have and in turn makes the experiment unrepeatable.
I see your point. To explain the concept of the ontologically privileged observer, I borrowed Pascal's Mugger because my audience is already familiar with that scenario. I have another scenario in which physicists find themselves in a dialog or monologue with an ontologically privileged observer while retaining their accustomed level of control over their laboratories.

comment by TGGP2 · 2007-10-25T02:48:00.000Z · LW(p) · GW(p)

I don't think you've established that "you might as well consider it good"; I might as well not consider it good or bad. You haven't given a reason to consider it more good than bad, just hope. I might hope my lottery ticket is a winner, but I have no reason to expect it to be.

If you want to persuade physicists to start looking for messages from beyond the space-time continuum, you'd better be able to offer them a method. I am completely at a loss for how one might go about it. I certainly don't know how you are going to do it at the blackboard. Anything you write on the blackboard comes from you, not something outside space-time. Anticryptography would sound like the study of decrypting encryptions, which is already covered by cryptography. As far as I know, SETI is just dicking around and has no algorithms of the type you speak of, but my information just comes from Michael Crichton and others critical of them. I don't see how you can have this other observer and at the same time have the scientist with control over the lab.

You haven't come up with much of a moral system either, you just say to do what the Mugger says, when we are not in contact with any such Mugger and have no reason to suppose what the Mugger wants us to do is good.

comment by Richard_Hollerith · 2007-10-25T14:18:00.000Z · LW(p) · GW(p)

In cryptography, you try to hide the message from listeners (except your friends). In anticryptography, you try to write a message that a diligent and motivated listener can decode despite his having none of your biological, psychological and social reference points.

I certainly don't know how you are going to do it at the blackboard. Anything you write on the blackboard comes from you, not something outside space-time.
I meant that most of the difficulty of the project is in understanding our laws of physics well enough to invent a possible novel method for sending and receiving messages.

I don't see how you can have this other observer and at the same time have the scientist with control over the lab.
It is possible for the fundamental laws of physics as we know them to continue to apply without exception and for physicists to discover a novel method of sending or receiving messages, because the fundamental laws are not completely deterministic. Specifically, when a measurement is performed on a quantum system, the result of the measurement is "random". If, as E. T. Jaynes taught, saying that something is random is a statement about our ignorance rather than a statement about reality, then it is not a violation of the fundamental laws to discover that the data we used to consider random in actuality has a signal or a message in it.
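
To make that concrete, here is a minimal, purely hypothetical sketch (the function and samples are invented for illustration, not a real physics protocol) of the kind of crude check that could flag a supposedly random bit stream as carrying structure: genuinely random data is essentially incompressible, whereas a stream hiding a message generally is not.

```python
# Hypothetical sketch: compare the compressibility of a supposedly random byte
# stream against a known-random baseline. Genuine randomness should be
# incompressible; a hidden message usually compresses well.
import os
import zlib


def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size (lower means more structure)."""
    return len(zlib.compress(data, 9)) / len(data)


# Baseline: OS entropy stands in for genuinely random measurement outcomes.
random_sample = os.urandom(100_000)

# Stand-in for a measured stream that secretly encodes repetitive text.
structured_sample = ("In the beginning..." * 6000).encode("utf-8")[:100_000]

print("random     :", round(compression_ratio(random_sample), 3))      # close to 1.0
print("structured :", round(compression_ratio(structured_sample), 3))  # far below 1.0
```

A real test would of course have to compare the observed outcome statistics against the quantum-mechanical predictions for the specific measurement, but the underlying idea is the same: structure in data we had modelled as random is detectable without any violation of the physics.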

comment by Richard_Hollerith · 2007-10-25T14:31:00.000Z · LW(p) · GW(p)

No blog yet, but I now have a wiki anyone can edit. Click on "Richard Hollerith" to go there.

comment by TGGP2 · 2007-10-25T19:28:00.000Z · LW(p) · GW(p)

In quantum experiments the statistics of the random outcomes are the same for all experimenters, so an experiment can be repeated and the same probabilities will be observed. When you have someone else sending messages, you can't rely on them to behave the same for all experimenters. If there is a larger group of Muggers that different scientists could communicate with, then experiments might reveal statistical information about the Mugger class of entity (treating them as experimental subjects), but it's a stretch.

comment by Richard_Hollerith · 2007-10-25T21:00:00.000Z · LW(p) · GW(p)

Do you consider the following a fair rephrasing of your last comment? A quantum measurement has probability p of going one way and 1 - p of going the other way, where p depends on a choice made by the measurer. That is an odd property for the next bit in a message to have, and makes me suspicious of the whole idea.

If so, I agree. Another difficulty that must be overcome is, assuming one has obtained the first n bits of the message, to explain how one obtains the next bit.

Nevertheless, I believe my primary point remains: since our model of physics does not predict the evolution of reality exactly, the discovery of a previously overlooked means of receiving data need not violate our model of physics. The discovery that if you do X, you can read out the Old Testament in UTF-8, would constitute the addition of a new conjunct to our current model of physics, but not a falsification of the model. That last sentence is phrased in the language of traditional rationality, but my obligation in this argument is only to establish that looking for a new physical principle for receiving data is not a complete waste of resources, and I think the sentence achieves that much.

Also, I wish to return to a broader view to avoid the possibility of our getting lost in a detail. My purpose is to define a system of valuing things suitable for use as the goal system of a seed AI. This scenario in which physicists find themselves in communication with an ontologically privileged observer is merely one contingency that the AI should handle correctly (and a lot more fruitful to think about than simulation scenarios IMHO). It is also useful to consider special cases like this one to keep the conversation about the system of value from becoming too abstract.

comment by TGGP2 · 2007-10-26T01:43:00.000Z · LW(p) · GW(p)

I do not consider your rephrasing to be accurate. I wasn't giving the measurers choice, they are all supposed to follow the same procedure in order to obtain the same (probabilistic) results. It is the Mugger, or outside agent, that is making choices and therefore preventing the experiment from being controlled and repeatable.

What do you see as the major deficiencies in our model of reality? That the behavior of quantum particles is probabilistic rather than deterministic?

comment by Antoine · 2007-11-29T17:17:00.000Z · LW(p) · GW(p)

Don't believe everything the tabloids say.

"Paris Sets The Record Straight On 'Ellen'":

http://cbs5.com/entertainment/Paris.Hilton.Ellen.2.598168.html

quote:

The tabloids even stooped so low as to discuss her plans after death. Hilton was quoted as saying "It's so cool, all the cells in your body are still alive when death is pronounced and if you're immediately cooled, you can be perfectly preserved. My life could be extended by hundreds and thousands of years."

Hilton denied ever making those comments and pointed out to DeGeneres that she doesn't speak that way. This made the audience laugh.

"I don't want to be frozen. It's kind of creepy," Hilton said.

comment by CarlShulman · 2007-11-29T19:01:00.000Z · LW(p) · GW(p)

Thanks Antoine,

I'll file this with Walt Disney.

comment by MichaelHoward · 2009-02-23T09:27:00.000Z · LW(p) · GW(p)

Simon Cowell now too, apparently. No, I don't read the Daily Mail!

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-07-29T16:36:04.798Z · LW(p) · GW(p)

Now says he was joking. Not sure if eternity strictly needs the man, but I don't dislike him enough to wish him dead!

comment by jeronimo196 · 2021-09-22T08:35:20.676Z · LW(p) · GW(p)

If we accept MWI, cryonics is a backdoor to Quantum Immortality, one which waiting and hoping may not offer.