Best shot at immortality?

post by tomme · 2012-03-22T10:29:49.057Z · LW · GW · Legacy · 82 comments

What looks, at the moment, like the most feasible technology for granting us immortality (e.g., mind uploading, cryonics)?

I posed this question to a fellow transhumanist and he argued that cryonics is the answer, but I failed to grasp his explanation. Besides, I am still struggling to learn the basics of science and transhumanism, so it would be great if you could shed some light on my question.

82 comments

Comments sorted by top scores.

comment by NancyLebovitz · 2012-03-22T10:55:49.717Z · LW(p) · GW(p)

My guess as to why cryonics is the best method currently available is that it's the best bet for keeping you potentially alive until better methods are developed.

Speaking just for myself, I'm not convinced that any method can give immortality (Murphy happens), but there's a good bit of hope for greatly extended lifespans.

Replies from: BrandonReinhart
comment by BrandonReinhart · 2012-03-23T06:52:57.730Z · LW(p) · GW(p)

What we know about cosmic eschatology makes true immortality seem unlikely, but there's plenty of time (as it were) to develop new theories, make new discoveries, or find possible new solutions. See:

Cirkovic "Forecast for the Next Eon: Applied Cosmology and the Long-Term Fate of Intelligent Beings"

Adams "Long-term astrophysical processes"

for excellent overviews of the current best estimates of how long a human-complexity mind might hope to survive.

Just about everything Cirkovic writes on the subject is really engaging.

comment by DanielLC · 2012-03-22T17:50:50.805Z · LW(p) · GW(p)

Cryonics is useful for preserving your body until a method for immortality is developed. It is not, on its own, such a method.

If you die now, cryonics is the only method available to give you a chance at immortality.

Cryonics is either not an answer, or the only answer, depending on what exactly you mean by the question.

Replies from: BrandonReinhart
comment by BrandonReinhart · 2012-03-23T06:46:27.504Z · LW(p) · GW(p)

More importantly, cryonics is useful for preserving information. (Specifically, the information stored by your brain.) Not all of the information that your body contains is critical, so just storing your spinal cord + brain is quite a bit better than nothing. (And cheaper.) Storing your arms, legs, and other extremities may not be necessary.

(This is one place where the practical reasoning around cryonics hits ugh fields...)

Small-tissue cryonics has been more advanced than whole-body cryonics. This may not be the case anymore, but it certainly was, say, four years ago. So storing your brain alone gave you a better bet for good information retention than storing the whole body. I believe that whole-body methods have improved somewhat in the past few years, but they still have a ways to go. Part of the problem lies in efficiently perfusing cryoprotectants through the body.

If you place credence in the possibility of ems, then you might consider investing in neuro-preservation. In that case, you wouldn't need revival, only good scanning and emulation tech.

Edit: Also, I highly recommend the Alcor site. The resources there span the gamut from high-level to detailed, and there's good coverage of the small-tissue and cryoprotectant problems, among other topics. http://www.alcor.org/sciencefaq.htm

comment by hankx7787 · 2012-03-22T16:38:02.227Z · LW(p) · GW(p)

If you are older you should definitely be focusing on strategies for biological life extension (calorie restriction, or whatever), and everyone should sign up for cryonics as an insurance policy.

Ultimately, with full molecular nanotechnology, whether the engineering of negligible senescence is biological or digital is rather beside the point ("What exactly do you mean by ‘machine’, such that humans are not machines?" - Eliezer Yudkowsky).

However, Unfriendly AI would render the whole point moot. So the most important thing is to guarantee we get Friendly AI right.

Replies from: DanArmak
comment by DanArmak · 2012-03-23T01:06:19.981Z · LW(p) · GW(p)

If you have a sufficiently high probability estimate of either FAI or UFAI arriving before your natural death.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-23T01:11:37.482Z · LW(p) · GW(p)

Given cryonics, the point of death isn't crucial for the significance of existential risks for personal survival.

comment by James_Miller · 2012-03-22T13:00:52.424Z · LW(p) · GW(p)

It depends on your age. If you are under 30, it's probably staying alive long enough to reach longevity escape velocity. If you are over 60, it's probably cryonics.

Edited in response to Vladimir_Nesov's comment.

Replies from: Vladimir_Nesov, drethelin
comment by Vladimir_Nesov · 2012-03-22T17:25:34.651Z · LW(p) · GW(p)

If you are under 30 it's almost certainly staying alive long enough until we reach longevity escape velocity.

Downvoted for the insane "almost certainly".

Replies from: James_Miller
comment by James_Miller · 2012-03-22T21:56:55.048Z · LW(p) · GW(p)

Good point.

comment by drethelin · 2012-03-22T16:01:46.390Z · LW(p) · GW(p)

This isn't really an answer. It's kind of like telling someone about Moore's law when they ask what the next big advance in processing will be.

Replies from: James_Miller
comment by James_Miller · 2012-03-22T21:56:24.089Z · LW(p) · GW(p)

Why not? Both have actionable implications and are falsifiable paths to immortality.

comment by keefe · 2012-04-06T17:35:11.196Z · LW(p) · GW(p)

Directly answering the question: organ replacement, including some brain augmentation that eventually shifts into uploading, seems most likely. Cryonics isn't really a direct answer to the question if you want to talk about trajectories for achieving immortality. It's too hard to predict where a breakthrough will be made or a wall will be hit. I think the most feasible trajectory is focusing on money and power, then organ replacement and traditional life-extension techniques, which include general existential-risk reduction.

comment by advancedatheist · 2012-03-24T18:11:54.126Z · LW(p) · GW(p)

"Mind uploading" makes debatable assumptions about how the mind works. It might have the result of killing you while leaving behind a Siri-like app which tricks living people's theory of mind into thinking that you have survived the upload.

Cryonics, by contrast, falls into the realm of testable neuroscience, as Sebastian Seung argues in his new book:

http://www.box.com/shared/static/pakcq9ffu2fral7r5uvd.png

comment by djcb · 2012-03-22T12:31:53.277Z · LW(p) · GW(p)

I don't think the cryonics of today is a very good bet for immortality, and mind uploading may still take a while to develop. So I guess your best bet is to use the currently known methods to improve your life expectancy, and hope for the situation to improve during your lifetime (or, even better, work on it!).

comment by Will_Newsome · 2012-03-22T11:55:01.270Z · LW(p) · GW(p)

I'm not sure what criteria you're intending with "feasible", but I'd say FAI, since uploading/cryonics have a lot of failure modes, one of which is uFAI. Unless something weird happens (e.g. a currently hidden AI keeps us from gobbling the stars), an FAI, once unleashed, should be able to revive every human who's ever died, so even if you die before it's developed you should still be okay. (If an FAI would want to do that, anyway.) Whereas most people would be skeptical that an AI could be powerful enough to resurrect every human ever, I'm actually more skeptical that we're not currently at the mercy of an AI or an entire coalition of AIs. Fermi paradox and whatnot. I'd say that there's a lot of structural uncertainty, though, and that it would be unwise to put much faith in any hypotheses that involve highly advanced technology/intelligences.

Replies from: wedrifid, cousin_it
comment by wedrifid · 2012-03-22T12:44:02.441Z · LW(p) · GW(p)

Unless something weird happens (e.g. a currently hidden AI keeps us from gobbling the stars), an FAI, once unleashed, should be able to revive every human who's ever died, so even if you die before it's developed you should still be okay.

There seems to be rather a lot of information lost beyond the chance of recovery. The mapping of 'current world as best as the FAI could plausibly deconstruct' to 'possible histories that would lead to this state' is not 1:1.

The best I could expect of an FAI is the ability to construct a probability distribution over all the likely combinations of humans who could have lived, and perhaps 'resurrect' rather a lot of people who never lived in the hope that it'd get most of the ones who did live in the process.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T12:57:15.431Z · LW(p) · GW(p)

There seems to be rather a lot of information lost beyond the chance of recovery.

Not if there are AIs out there in the universe who can catch the information and run it back to your FAI at lightspeed. And since our FAI can catch information about the causal past of other AIs that they'd otherwise never have been able to get back, it's even a clear-cut trade scenario. I see no reason not to expect this by default. (Steve's idea; I think it's pretty epic, especially if the part works where the AIs collectively catch each other's quantum entanglements, which enables them to coordinate to reverse the past. That'd be freakin' awesome. And either way, I think hearing this idea was the first time I thought, "wow, if a mere human can think of that, imagine what ideas a freaking superintelligence could come up with".)

Replies from: wedrifid, NancyLebovitz, ryjm
comment by wedrifid · 2012-03-22T13:03:31.127Z · LW(p) · GW(p)

Not if there are AIs out there in the universe who can catch the information and run it back to your FAI at lightspeed. And since our FAI can catch information about the causal past of other AIs that they'd otherwise never have been able to get back, it's even a clear-cut trade scenario. I see no reason not to expect this by default.

I'm not even confident there are other AIs out there in the universe. At least, not in our Everett-Branch-Future-And-Relevant-History-Light-Cone.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T13:10:10.947Z · LW(p) · GW(p)

I'm not that confident, especially as I have a sneaking suspicion that something really weird is going on cosmologically speaking, but isn't it the default assumption 'round these parts?

Replies from: wedrifid, JoshuaZ
comment by wedrifid · 2012-03-22T13:25:42.421Z · LW(p) · GW(p)

I'm not that confident, especially as I have a sneaking suspicion that something really weird is going on cosmologically speaking, but isn't it the default assumption 'round these parts?

The default assumption is that there are (or will be) many other FAIs that light from our world-history will directly interact with? I didn't know that. It's certainly not mine. I thought it was more likely that we were for practical purposes alone. If you'll pardon the shorthand reasoning:

  • Fermilike considerations... epically unlikely that life emerges all the way to superintelligence takeoff
  • Anthropics and self indication and suchforth... most EBs where one superintelligence emerges will not be branches in which more than one superintelligence emerges.
  • There are heaps of other superintelligences, with high probability all the other ones are in parts of the (broadly used) Universe that are causally inaccessible.
  • The physics is complicated, often speculative (by folks like Tegmark), and beyond me, but it all adds up to "we're probably effectively alone in the universe as we see it".

So you think there are other FAIs out there that our civilization would encounter if we got that far? How much does this depend on probability that we are in a simulation or under the benevolent (or otherwise) control of a powerful agent and how likely would you consider it to be conditional on us not being simulated/overseen?

Replies from: Will_Newsome, Will_Newsome
comment by Will_Newsome · 2012-03-22T15:39:51.943Z · LW(p) · GW(p)

how likely would you consider it to be conditional on us not being simulated/overseen?

So it's possible that spacetime is infinitely dense and, if you're a superintelligence, there's no reason to expand. Dunno how likely that is, though black holes do creep me out. Abiogenesis really doesn't seem all that impossible, and anyway I think anthropic explanations are fundamentally confused. If your AI never expands then it can't get precise info about its past, but maybe there are non-physical computational ways to do that, so the costs might not be worth the benefits. It seems like I might've been wrong in that LessWrong folk might prefer anthropic solutions to Fermi, but I'm not sure how much evidence that is, especially as anthropics is confusing and possibly confused. So yeah... maybe 25% or so, but that's only factoring in some structural uncertainty. Meh.

'Course, my primary hypothesis is that we are being overseen, and brains sometimes have trouble reasoning about hypothetical scenarios which aren't already the default expectation. It's at times like this that advanced rationality skills would be helpful.

comment by Will_Newsome · 2012-03-22T13:58:37.339Z · LW(p) · GW(p)

Fermilike considerations... epically unlikely that life emerges all the way to superintelligence takeoff

I don't follow. Do you think intelligences would loudly announce their existence over a long enough time period that we would know about it? It always struck me as more likely that AGIs were quiet than that they didn't exist. Remember, all those stars you see at night don't necessarily exist; they could just as easily be an illusion. All it'd take is for one superintelligence to show up somewhere and decide that we weren't worth killing but that we shouldn't get to see what's actually going on as it gobbles all the unoccupied planets. There are various reasons it would want to do this. [ETA: The alternative, that abiogenesis is really difficult, strikes me as unlikely, and I have a very strong skepticism of anthropic "explanations".]

There are heaps of other superintelligences, with high probability all the other ones are in parts of the (broadly used) Universe that are causally inaccessible

Hm, this might be a difference of perspective; I'm not very confident in the simulation argument as it's usually put forth. (I tried to explain some of my reasons elsewhere in this thread.)

So you think there are other FAIs out there that our civilization would encounter if we got that far? How much does this depend on probability that we are in a simulation or under the benevolent (or otherwise) control of a powerful agent and how likely would you consider it to be conditional on us not being simulated/overseen?

(They don't have to be Friendly, they just have to be willing to trade.) I don't have a strong opinion either way. If we're being overseen then it seems true by definition that we'll run into other AGIs if we build an FAI, so I was focusing on the scenario where we're not being overseen/simulated/fucked-with. In such a scenario I don't know what probability to put on it... I'll think about it more.

Replies from: wedrifid, Luke_A_Somers, wedrifid
comment by wedrifid · 2012-03-22T14:28:06.436Z · LW(p) · GW(p)

If we're being overseen then it seems true by definition that we'll run into other AGIs if we build an FAI

This seems likely but is not true by definition. In fact, if I were designing an overseer, I can see reasons why I might prefer to design one that keeps itself hidden except where intervention is required. Such an overseer, upon detecting that the overseen have created an AI with an acceptable goal system, may actively destroy all evidence of its existence.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T14:30:00.978Z · LW(p) · GW(p)

True, mea culpa. I swear, there's something about the words "by definition" that makes you misuse them even if you're already aware of how often they're misused. I almost never say "by definition" and yet it still screwed me over.

comment by Luke_A_Somers · 2012-03-22T22:53:32.821Z · LW(p) · GW(p)

The alternative, that abiogenesis is really difficult, strikes me as unlikely, and I have a very strong skepticism of anthropic "explanations".

I keep running into people who think anthropic reasoning doesn't explain anything, or who have it entirely backwards. One prominent physicist whose name eludes me commented in an editorial published in Physics Today that anthropic reasoning was worthless unless the life-compatible section of the probability distribution of universal laws was especially likely. This so utterly misses the point that he clearly didn't understand the basic argument.

I've never encountered anyone who's willing to admit to buying anything stronger than the weak anthropic principle, which seems utterly obviously true:

1) If the universe didn't enable the formation of sapient life, such life wouldn't exist. If the universe made the formation of such life fantastically unlikely in any one location, but the extent of the universe is larger than the reciprocal of that probability density, such life would still likely exist somewhere. (Toy numbers below.)

2) Our existence thus doesn't indicate much about the general hospitality of the rules of the universe to the formation of sapient life, because the universe is awfully large, possibly infinite.

3) In the event that the rules of the universe that we observe are consequences of more fundamental laws, and those fundamental laws are quantum mechanical in nature so that multiple variants get a nonzero component, then the probability of life forming in this universe is taken as the OR among all of those variants.

That's really all there is to it...
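
To put toy numbers on points 1 and 2, here is a minimal sketch; both figures below are assumptions chosen for illustration, not estimates from this thread:

```python
import math

# Toy numbers only: both figures below are illustrative assumptions.
p = 1e-22   # assumed chance that sapient life arises at any one star system
N = 1e24    # assumed number of star systems within the universe's extent

expected_sites = N * p              # expected number of systems with such life
p_none_anywhere = math.exp(-N * p)  # (1 - p)**N is ~exp(-N*p) for tiny p

print(expected_sites)     # 100.0 -> life somewhere is expected
print(p_none_anywhere)    # ~3.7e-44 -> "no life anywhere" is essentially ruled out
```

As soon as the extent N exceeds 1/p, our own existence stops being surprising, which is all the weak anthropic point needs.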

comment by wedrifid · 2012-03-22T14:32:36.009Z · LW(p) · GW(p)

There are heaps of other superintelligences, with high probability all the other ones are in parts of the (broadly used) Universe that are causally inaccessible

Hm, this might be a difference of perspective; I'm not very confident in the simulation argument as it's usually put forth. (I tried to explain some of my reasons elsewhere in this thread.)

Is there a typo in there? The simulation argument doesn't seem to fit.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T14:34:07.398Z · LW(p) · GW(p)

Oh, I was assuming... never mind, it's probably not worth untangling.

comment by JoshuaZ · 2012-03-22T13:14:58.710Z · LW(p) · GW(p)

something really weird is going on cosmologically speaking, but isn't it the default assumption 'round these parts?

Not as far as I can tell. What do you mean?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T13:18:01.514Z · LW(p) · GW(p)

Sorry, my sentence was unclear; "it" was referencing the belief that at least one intelligence besides us has already shown up or will show up somewhere in the universe at some point. It seems to me that most people, including most people on LessWrong, think this is likely.

comment by NancyLebovitz · 2012-03-22T16:18:42.546Z · LW(p) · GW(p)

Is there any reason to think that such detailed information as would be needed to recreate people wouldn't get lost in noise?

Replies from: Will_Newsome, bogdanb
comment by Will_Newsome · 2012-03-22T16:39:59.449Z · LW(p) · GW(p)

An expanding superintelligence sphere acts as a lightyears-wide optical lens, providing extremely redundant observations of far-off objects. This can be combined with superintelligent error-correction and image reconstruction. If you have multiple such superintelligences then you get even more angles. But yeah, I haven't done the actual calculations; it'd be super cool if someone else did them.
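
(For a rough sense of scale, here is a back-of-the-envelope diffraction-limit sketch; the wavelength, baseline, and distance are assumed numbers, not claims made in this thread:)

```python
wavelength = 500e-9     # metres; assumed visible-light observation
light_year = 9.46e15    # metres
D = 1.0 * light_year    # assumed effective baseline of the "lens"
d = 100.0 * light_year  # assumed distance to the scene being reconstructed

theta = 1.22 * wavelength / D  # Rayleigh criterion: angular resolution, radians
resolution = theta * d         # smallest resolvable feature at that distance

print(f"{theta:.1e} rad -> {resolution:.1e} m")  # ~6.4e-23 rad -> ~6.1e-05 m
```

So diffraction alone wouldn't be the bottleneck at these scales; as the reply below points out, photon scarcity, scattering, and noise are the harder obstacles.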

On another note, about six months ago I spent a few days looking at the quantum information theory literature trying to figure out if AIs could coordinate to reverse the past; I think I have enough knowledge to pose it as a coherent question to someone with a lot of knowledge of reversible computing and QIT. I'd like to do that someday.

Replies from: moridinamael, Eugine_Nier
comment by moridinamael · 2012-03-22T17:04:35.357Z · LW(p) · GW(p)

But the "error correction and image reconstruction" itself is not lossless. There is inevitable distortion caused by scattering off the unknown distribution of interstellar dust particles and from gravitational lensing between the AI and its target. Not to mention all the truly random crap happening in the interstellar void as the photons interact with the quantum foam. The inversion methods you suggest do not yield a true image, merely a consistent one.

Replies from: bogdanb
comment by bogdanb · 2012-03-22T17:17:52.313Z · LW(p) · GW(p)

It could still be enough to resurrect a person, if the difference from truth is on the order of the difference between me right now and me after sleeping for a few years. (Hint: the two me's are very different, but they're still recognizable as me.)

comment by Eugine_Nier · 2012-03-23T03:00:15.171Z · LW(p) · GW(p)

I think I have enough knowledge to pose it as a coherent question to someone with a lot of knowledge of reversible computing and QIT. I'd like to do that someday.

I'm not a total expert, but try me.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-24T11:14:53.723Z · LW(p) · GW(p)

I'll have to spend a few hours reloading the concepts into my brain. When I do that I'll post it to Discussion.

comment by bogdanb · 2012-03-22T17:24:23.528Z · LW(p) · GW(p)

Does anyone have an estimate of how many actually different humans there can be (i.e., the size of brain-space measured in units such that someone about one unit away from me would seem like the same person to someone who knows me)?

It might be possible to simply create all humans that could have existed; those who actually did would be a subset, we just couldn't tell which ones.

comment by ryjm · 2012-03-22T21:21:38.778Z · LW(p) · GW(p)

What is the recommended literature related to the ideas both you and wedrifid have been discussing in this thread? I googled but I figure it wouldn't hurt to ask either. Thanks.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T23:39:17.305Z · LW(p) · GW(p)

Which aspects? I think the only field relevant to what we were discussing is information theory. The stuff about superintelligences coordinating doesn't have any existent literature, but similar ideas are discussed on LessWrong in the context of decision theory.

comment by cousin_it · 2012-03-22T12:35:58.948Z · LW(p) · GW(p)

Your comment seems far-fetched. For one, an AI with such awesome powers could also choose to run a copy of you starting from any moment when you feel unhappy, not just the moment of your death. Since the universe around me stubbornly keeps on looking normal, something will probably stop "rescue sims" from happening.

Replies from: Will_Newsome, None, Will_Newsome
comment by Will_Newsome · 2012-03-22T12:50:09.592Z · LW(p) · GW(p)

I'm trying to avoid assuming a metaphysic where simulations are assumed to be possible, because I'm not sure such metaphysics ultimately make sense. (Maybe you can guess my rationale: I think "measure" and "existence" and so on are very fuzzy, and I think if we reason in terms of decision-theoretic significant-ness then it might turn out that running a simulation of something doesn't double its "measure", and what matters is what already "actually existed/exists", i.e. what's already "actually significant".) If you don't assume a simulationist metaphysic then "rescue sims" are dubious, whereas reviving people who are known to have already existed seems more like a straightforward application of technology. If you take a sort of common-sense layman's perspective, reviving the dead sounds a lot less speculative than running an exact simulation of a mind on a computer in a way that will actually change the past. ...No?

Replies from: cousin_it
comment by cousin_it · 2012-03-22T13:13:45.263Z · LW(p) · GW(p)

The layman's perspective sounds reasonable enough, but seems to fall apart on closer inspection. What makes a human brain different from a simulation? Why would the AI have an easier time reconstructing the mind of someone who died on March 20 than reconstructing a copy of you on March 21? Why are future simulations of you necessarily less "significant" than current you? This looks suspiciously like a theory constructed specifically to be testable only by death, i.e. not testable to the rest of us.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T13:42:24.039Z · LW(p) · GW(p)

(The following probably won't be understandable / won't appear motivated. Sorry.)

Why would the AI have an easier time reconstructing the mind of someone who died on March 20 than reconstructing a copy of you on March 21?

You can make a copy, but as soon as you simulate it diverging from the original then you're imagining someone that never existed in a timeline that didn't actually happen. Otherwise you're just fooling yourself about what actually happened; you're not causing something else to happen. Whereas if you revive a mind that died and have it have new experiences then you're not deluding yourself about what actually happened, you're just continuing the story.

Why are future simulations of you necessarily less "significant" than current you?

Because the simulator would just be deluding themselves about what actually happened, like minting counterfeit currency; the important aspect of me is that I'm here embedded in this particular decision policy computation with these and such constraints. Take me out of my contexts and you don't have me anymore. If you make a thousand crackfics involving Romeo running away to a Chinese brothel then nobody's going to listen to your stories unless they have tremendous artistic merit. And if a thousand Romeo & Juliet crackfics are shouted out in the middle of a forest but nobody hears them, do they have any decision theoretic significance?

But I haven't actually worked out the math, so it's possible things don't work like I think they do.

This looks suspiciously like a theory constructed specifically to be testable only by death, i.e. not testable to the rest of us.

Well, it's a theory about anthropics... quantum immortality is also a theory that is only testable by death, but I don't think that's suspicious as such. (In fact I don't actually think quantum immortality is only testable by death, which you might be able to ascertain from what I wrote above, but um, I strongly suspect that I'm not understandable. Anyway, death is the simplest example.)

Replies from: cousin_it
comment by cousin_it · 2012-03-22T14:15:36.766Z · LW(p) · GW(p)

You might be on to something, but I can't understand it properly until I figure out what "decision-theoretic significance" really means, and why it seems to play so nicely with both classical and quantum coinflips. Until then, "measure" seems to be a more promising explanation, though it has lots of difficulties too.

comment by [deleted] · 2012-03-22T20:10:31.884Z · LW(p) · GW(p)

I don't think this argument makes causal sense. If you'd been uplifted into a rescue sim, you certainly wouldn't have made this post. The universe looks, from all of our perspectives, exactly like it would if rescue sims were possible, since that doesn't currently make any testable predictions. The future version of you might see some differing effects, but that version of you isn't around right now, and can't provide evidence on the subject.

Replies from: cousin_it
comment by cousin_it · 2012-03-22T20:21:02.302Z · LW(p) · GW(p)

You're right that my words don't provide new evidence to you, but if you anticipate becoming a rescue sim at some point and that doesn't happen, that's evidence against rescue sims for you.

Replies from: None
comment by [deleted] · 2012-03-22T20:48:52.984Z · LW(p) · GW(p)

Even internally, no, that still doesn't work. The evidence that your current continuity has observed is not influenced by whether or not rescue sims exist. That's the same thing as saying that you have seen no evidence one way or the other. Even if multiple other versions of you are instantiated in the future, what the continuity of yourself that is typing this observes doesn't change.

Replies from: cousin_it
comment by cousin_it · 2012-03-22T21:54:52.381Z · LW(p) · GW(p)

I don't think that's the right way to do Bayesian updating in the presence of observer-splitting.

Imagine I sell you a device that I claim to be a fair quantum coin. The first run of the device gives you 1000 heads in a row. You try again, and get another 1000 heads. You come back to my store to demand a refund, and I reply that my fair coin gives rise to many branches including this one, so you have nothing to complain about. Do you buy my explanation, or insist that the coin is defective?
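
(For concreteness, a minimal sketch of the ordinary update this intuition relies on; the alternative hypothesis and the prior odds below are assumptions for illustration, not anything claimed in the thread:)

```python
from math import log10

n_heads = 1000                 # observed: 1000 heads in a row
p_fair, p_biased = 0.5, 0.99   # fair coin vs. an assumed heads-heavy defect
prior_odds_fair = 1000.0       # start out trusting the shopkeeper heavily

log10_lr = n_heads * (log10(p_fair) - log10(p_biased))  # log-10 likelihood ratio
log10_posterior_odds = log10(prior_odds_fair) + log10_lr

print(f"posterior odds for 'fair' are about 10^{log10_posterior_odds:.0f}")  # ~10^-294
```

However generous the prior, an observer who conditions on the run they actually saw should conclude the coin is defective; "all the branches happen anyway" doesn't rescue the fair-coin hypothesis for that observer.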

Replies from: None
comment by [deleted] · 2012-03-22T22:20:08.024Z · LW(p) · GW(p)

I started to write a rebuttal, but it's quickly becoming clear to me that I don't have a systematic way of reasoning about this topic. I don't necessarily agree with you, but I need to give the matter a lot more thought. Thank you for giving me something to think about.

Replies from: None
comment by [deleted] · 2012-03-23T15:20:56.485Z · LW(p) · GW(p)

My concern is basically that I'm profoundly uncomfortable with the idea of evidence flowing backwards in time. I mean, you're updating your beliefs about the future based on what you haven't seen happen in the future.

comment by Will_Newsome · 2012-03-22T15:05:14.962Z · LW(p) · GW(p)

Wait a second, your objection doesn't really strongly counter my point, right? 'Cuz the author of the post wanted to maximize immortality, so saying that the FAI would have better things to do with its time would imply that the FAI wasn't applying the reversal test when it comes to keeping current humans alive. It seems that the FAI should either kill those living and replace them with something better, or revive the dead, otherwise it's being inconsistent. (I mean not necessarily, but still.) Also, if it doesn't resurrect those in graves or urns then it's not gonna resurrect cryonauts either, so cryonics is out. And your "rescue sim" argument doesn't seem strong; rescue sims might not be considered as good as running simulations of people who had died; high opportunity cost. So not being in a rescue sim could just mean that the FAI had better things to do, e.g. running simulations of previously-dead people in heaven or whatever. Am I missing something?

Replies from: cousin_it
comment by cousin_it · 2012-03-22T16:05:07.455Z · LW(p) · GW(p)

Also, if it doesn't resurrect those in graves or urns then it's not gonna resurrect cryonauts either, so cryonics is out.

Why? If FAI is weak enough, it might be unable to resurrect non-cryonauts. Also maybe there will be no AIs and an asteroid will kill us all in 200 years, but we'll figure out how to thaw cryonauts in 100, so they get some bonus years.

Replies from: moridinamael, Will_Newsome
comment by moridinamael · 2012-03-22T16:39:55.128Z · LW(p) · GW(p)

I don't think it's a matter of an intelligence being strong or weak. I'm relatively confident that the inverse problem of computing the structure of a human brain given a rough history of the activities of the human as input is so woefully underconstrained and nonunique as to be impossible. If you're familiar with inversion in general, you can look at countless examples where robust Bayesian models fail to yield anything but the grossest approximations even with rich multivariate data to match.

Unless you're conjecturing FAI powers so advanced that the modern understanding of information theory doesn't apply, or unless I'm missing the point entirely.

comment by Will_Newsome · 2012-03-22T16:11:30.688Z · LW(p) · GW(p)

I think those possibilities are unlikely. /shrugs

comment by amjbot · 2012-03-25T23:51:34.380Z · LW(p) · GW(p)

How do you define immortality?

comment by MileyCyrus · 2012-03-22T14:34:45.219Z · LW(p) · GW(p)

No one's mentioned spiritual options?

Pretty much every religion promises something after this life. Of course, these religions are probably false, but how probable is that? I think there must be at least a 2% chance that one of the religions is true.

Replies from: wedrifid, Will_Newsome, Will_Newsome
comment by wedrifid · 2012-03-22T16:12:20.519Z · LW(p) · GW(p)

Pretty much every religion promises something after this life. Of course, these religions are probably false, but how probable is that? I think there must be at least a 2% chance that one of the religions is true.

If you actually believe that, then it would seem that the smart choice would be to dedicate all your attention to determining which religion has the greatest chance of being true (modified by how much you weight the consequences each religion provides).

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T17:00:44.446Z · LW(p) · GW(p)

which religion has the greatest chance of being true

Trying to think about how a religion could be "true" is hurting my brain. (Is it true that the sun goes 'round the Earth? Yes and no...)

ETA: Holy crap, that's the fastest downvote I've ever gotten. Couldn't have been more than three seconds.

Replies from: wedrifid, TimS
comment by wedrifid · 2012-03-22T18:46:37.046Z · LW(p) · GW(p)

Trying to think about how a religion could be "true" is hurting my brain.

Well, if some chick actually got knocked up without getting laid, and her kid raised some folks from the dead, transmogrified booze at parties, and then resurrected himself after getting executed, then it'd start looking like one religion might be true.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T22:02:20.550Z · LW(p) · GW(p)

You know, I always wondered about the semantics of "virgin Mary". 'Cuz according to Jews rape doesn't count, right? And a bunch of Roman soldiers were there around that time. And metaphysically speaking there's nothing stopping the "Holy Spirit" taking the form of a Roman soldier...

Water into wine isn't hard if you have a potent cannabis tincture. The resurrection, though... that seems hard to do without real magick.

Replies from: gwern
comment by gwern · 2012-03-23T01:20:09.039Z · LW(p) · GW(p)

Are you thinking of the old Jewish slander about a centurion knocking up Mary?

comment by TimS · 2012-03-22T17:10:48.749Z · LW(p) · GW(p)

Not my downvote, but did you mean to assert geocentrism (sun round earth) rather than heliocentrism (earth round sun)?

Replies from: Eugine_Nier, Will_Newsome
comment by Eugine_Nier · 2012-03-23T02:57:32.022Z · LW(p) · GW(p)

Next thing you're going to claim that centrifugal force and the Coriolis effect don't exist.

comment by Will_Newsome · 2012-03-22T17:11:21.643Z · LW(p) · GW(p)

Yes.

Replies from: pedanterrific
comment by pedanterrific · 2012-03-22T17:44:23.230Z · LW(p) · GW(p)

I suspect you would get a lot fewer downvotes if the options were [thumbs up], [thumbs down], and [???].

comment by Will_Newsome · 2012-03-22T14:44:30.488Z · LW(p) · GW(p)

I did in my comment, just using different language. Replace "AI" or "simulator" or whatever with "God" or "god" or whatever and you get typical spiritual afterlife/resurrection claims. (Additionally, "spirit"/"spiritual" can be translated as "computation"/"computational" in some contexts.)

comment by Will_Newsome · 2012-03-22T15:57:47.394Z · LW(p) · GW(p)

Does anyone know of any public fora that talk about cool things and that have high enough intelligence and general epistemic standards that comments like the above wouldn't be downvoted?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T16:00:15.182Z · LW(p) · GW(p)

Alternatively, does anyone know of any public fora that talk about cool things and that have high enough intelligence and general epistemic standards that comments like the above wouldn't be downvoted?

Replies from: None, wedrifid
comment by [deleted] · 2012-03-22T16:17:11.509Z · LW(p) · GW(p)

I am very confused about why your comments are being downvoted, though I suspect Miley's are being downvoted because of reasons like 'we don't want to privilege your silly hypothesis by even addressing it when there are 80 posts on LW that crush religious hypotheses into dust', + 'well-kept gardens die by pacifism'.

Though I strongly suspect that if MileyCyrus had properly signaled distaste for religious ideas and proposed prayer as a possible avenue to immortality without making an apparently high estimate of their probability, s/he wouldn't have been downvoted.

Replies from: None, Manfred, MileyCyrus
comment by [deleted] · 2012-03-22T20:05:27.787Z · LW(p) · GW(p)

I dunno, it'd still be a bad idea. There's no chain of demonstrable causality leading from prayer to longevity.

comment by Manfred · 2012-03-22T18:35:23.099Z · LW(p) · GW(p)

Well, that would have been lying, so it can't be very proper.

comment by MileyCyrus · 2012-03-22T21:54:44.148Z · LW(p) · GW(p)

Though I strongly suspect that if MileyCyrus had properly signaled distaste for religious ideas and proposed prayer as a possible avenue to immortality without making an apparently high estimate of their probability, s/he wouldn't have been downvoted.

You think my 2% estimate was high? Richard Dawkins assigned theism approximately the same probability. I can understand if you think my confidence in atheism is low, but is it so ludicrously low that it deserves 9 downvotes?

What probability would you assign theism?

Replies from: pedanterrific, None, Will_Newsome
comment by pedanterrific · 2012-03-22T23:20:21.832Z · LW(p) · GW(p)

This is disingenuous to the point of being dishonest. Reference:

Williams: “You I think, Richard, believe you have a disproof of god.”

Dawkins: No, I don't! You were wrong when you said that. I constructed in The God Delusion a 7-point scale, of which '1' was 'I know god exists', '7' was 'I know god doesn't exist', and I called myself a '6'.

[...]

Dawkins: “I believe that when you talk about agnosticism, It’s very important to make a distinction between ‘I don’t know whether X is true or not, therefore it’s 50-50 likely or unlikely’ and that’s the kind of agnostic which I don’t-which I’m definitely not. I think one can place estimates of probability on these things and I think the probability of any supernatural creator existing is very very low. So I’m-let’s say I’m a 6.9.

On pp. 50-51 of The God Delusion, Dawkins lays out the 7-point scale he referred to in this conversation. Here are points 6 and 7 of that scale:

6. Very low probability [of the existence of god] but short of zero. De facto atheist: 'I cannot know for certain, but I think god is very improbable, and live my life on the assumption that he is not there.'

7. Strong atheist. 'I know there is no God, with the same conviction as Jung "knows" there is one.'

Dawkins goes on to say:

I count myself in category 6, but leaning toward 7 – I am agnostic only to the extent that I am agnostic about fairies at the bottom of the garden.

comment by [deleted] · 2012-03-23T01:48:39.342Z · LW(p) · GW(p)

Richard Dawkins assigned theism approximately the same probability.

Well, if we're going to start dropping names, Eliezer would "be substantially more worried about a lottery device with a 1 in 1,000,000,000 chance of destroying the world, than a device which destroyed the world if the Judeo-Christian God existed." It's not the same hypothesis, but it's close, and it's stupid to use ethos so much anyway.

I can understand if you think my confidence in atheism is low, but is so ludicrously low that it deserves 9 downvotes?

No, it's not so low that it deserves 9 downvotes. The fact that it has received so many is disturbing.

What probability would you assign theism?

I think I'll side with the perspective advanced by Eliezer here:

Any numerical founding at all is likely to be better than a vague feeling of uncertainty; humans are terrible statisticians. But pulling a number entirely out of your butt, that is, using a non-numerical procedure to produce a number, is nearly no foundation at all; and in that case you probably are better off sticking with the vague feelings of uncertainty.

comment by Will_Newsome · 2012-03-22T22:46:06.132Z · LW(p) · GW(p)

"6.9 out of 7" is such a weird probability that I wonder if Dawkins just made it up on the fly or something.

Replies from: jaimeastorga2000, Zetetic
comment by jaimeastorga2000 · 2012-03-22T23:18:31.203Z · LW(p) · GW(p)

It comes from his 7 point scale for measuring belief along the theist/atheist spectrum.

Replies from: Zetetic
comment by Zetetic · 2012-03-23T21:38:31.951Z · LW(p) · GW(p)

That makes sense. It still seems to be more of a rhetorical tool to illustrate that there is a spectrum of subjective belief. People tend to lump important distinctions like these together: "all atheists think they know for certain there isn't a god" or "all theists are foaming at the mouth and have absolute conviction", so for a popular book it's probably a good idea to come up with a scale like this, to encourage people to refine their categorization process. I kind of doubt that he meant it to be used as a tool for inferring Bayesian confidence (in particular, I doubt 6.9 out of 7 is meant to be fungible with P(god exists) = .01428).
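
(For what it's worth, the .01428 figure is what one naive linear reading of the scale gives; this is a guess at the arithmetic, not anything Dawkins endorsed:)

```python
# One naive linear reading: treat 7 as certainty that god doesn't exist,
# and map the 6.9 straight onto a probability.
p_god = 1 - 6.9 / 7
print(round(p_god, 5))  # 0.01429
```

A different but equally defensible mapping (treating 1 as certainty that god does exist, so p = (7 - 6.9) / 6) gives roughly 0.017 instead, which is part of why reading a precise probability off the scale seems dubious.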

comment by Zetetic · 2012-03-22T23:11:25.387Z · LW(p) · GW(p)

Given that he's pretty disposed to throwing out rhetorical statements, I'd say that's a reasonable hypothesis. I'd be surprised if there was more behind it than simply recognizing that his subjective belief in any religion was 'very, very low', and just picking a number that seemed to fit.

comment by wedrifid · 2012-03-22T16:09:54.082Z · LW(p) · GW(p)

Alternatively...

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T16:13:49.703Z · LW(p) · GW(p)

I asked my girlfriend if I should do one more, but she voted against. Something about "being obnoxious".

Replies from: wedrifid
comment by wedrifid · 2012-03-22T16:18:30.215Z · LW(p) · GW(p)

Good call. I (guessed that) mine should be OK inasmuch as it serves as acknowledgement. Conversation, not monologue.

I was neutral with respect to the original objection, inasmuch as I thought you had a point regarding biased reception of the then-parent but wasn't comfortable with the snide framing.