Could/would an FAI recreate people who are information-theoretically dead by modern standards?
post by AlexMennen · 2011-01-22T21:11:19.278Z · LW · GW · Legacy · 45 comments
If someone gets cremated or buried long enough for eir brain to fully decompose into dirt, it becomes extremely difficult to revive em. Nothing short of a vastly superhuman intelligence would have a chance of doing it. I suspect that it would be possible for a superintelligence to do it, but unless there's a more efficient way to do it, it would require recomputing the Earth's history from the time the AGI is activated back to the death of the last person it intends to save. Not only does this require immense computational resources that could be used to the benefit of people who are still alive, it also requires simulating people experiencing pain (backwards). On the other hand, this saves people's lives. Does anyone have any compelling arguments on why an FAI would or would not recreate me if I die, decompose, and then the singularity occurs a long time after my death?
Why do I want to know? Well, aside from the question being interesting in its own right, it is an important factor in deciding whether or not cryonics is worthwhile.
45 comments
Comments sorted by top scores.
comment by Zack_M_Davis · 2011-01-22T23:09:27.088Z · LW(p) · GW(p)
Does anyone have any compelling arguments on why an FAI would or would not recreate me if I die, decompose, and then the singularity occurs a long time after my death?
A possible reason why not, besides physical impossibility: are you sure you want to be recreated? I don't mean to invoke any tired cishumanist tropes about death being the natural order of things or any nonsense like that. But it seems plausible to me that a benevolent superintelligence would have no trouble contriving configurations of matter that at least some of us would prefer to personal survival; specifically, the AI could create people better than us by our own standards.
Presumably you don't want to live on exactly as you are now; you have a desire for self-improvement. But given the impossibility of ontologically fundamental mental entities, there's an identity between "self-improvement" and "being deleted and immediately replaced with someone else who happens to be very similar in these-and-such ways"; once you know what experiences are taking place at what points in spacetime, there's no further fact of the matter as to which ones of them are "you." In our own world, this is an unimportant philosophical nicety, but superintelligence has the potential to open up previously inaccessible edge cases, such as extremely desirable outcomes that don't preserve enough personal identity to count as "survival" in the conventional sense.
comment by Schlega · 2011-01-22T21:25:00.681Z · LW(p) · GW(p)
This scenario is much farther along the impossible scale than reviving an intact brain. If I wanted to live forever, I would make absolutely sure that I had a plan that did not involve violating the laws of physics.
(Not that I'm an expert physicist, but my understanding is that decomposition is an irreversible process.)
comment by Manfred · 2011-01-22T21:31:25.683Z · LW(p) · GW(p)
It is physically impossible. The AI would have to know the exact state of every atom on Earth and every photon that has left Earth in order to recreate you. And that's doubly impossible because of quantum effects.
Replies from: HonoreDB, Kevin
↑ comment by HonoreDB · 2011-01-22T23:29:52.646Z · LW(p) · GW(p)
And that's doubly impossible because of quantum effects.
I question this part. Regardless of what quantum model you're using, it's logically impossible for there to be an unobservable aspect of my brain that's still relevant to my identity.
Replies from: Manfred
↑ comment by Manfred · 2011-01-23T00:12:19.548Z · LW(p) · GW(p)
From the perspective of someone looking back at the past, measurement erases information. So when the dirt measures your brain, some of the information about what those atoms were doing before (position, momentum, spin, etc.) is erased.
Example: If you take a bunch of spin-up atoms and measure them along the left-right axis, they will no longer be spin-up. Measurement erased the information that was there before. Similar principles are what make quantum cryptography work - if I sent you a spin-up atom and then a spin-right atom as the key, the attacker doesn't know which axis to measure for which atom, so they end up erasing part of the information when they measure the key.
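A minimal illustrative sketch of this measurement point (my own toy example, assuming ideal qubits measured along the left-right axis; not from the original comment):

```python
import random

def measure_left_right(n_atoms=10_000, seed=0):
    """Measure n_atoms spin-up atoms along the left-right (X) axis.

    For an ideal spin-up state, each left-right outcome is 'left' or 'right'
    with probability 1/2; the prior 'up' information is not recoverable
    from the measurement record alone.
    """
    rng = random.Random(seed)
    return [rng.choice(["left", "right"]) for _ in range(n_atoms)]

outcomes = measure_left_right()
print("fraction left:", outcomes.count("left") / len(outcomes))  # ~0.5
```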
Replies from: HonoreDB
↑ comment by HonoreDB · 2011-01-23T01:05:41.539Z · LW(p) · GW(p)
In the quantum cryptography case, the attacker can be said to have "lost information" in the measuring because the sender still has that information (and the receiver has half of it, if I remember correctly). So it's still relevant. But for a datum about the brain to be lost irrecoverably, it has to have never affected anything, including macroscopic facts about my brain, and it cannot have been determined by any macroscopic facts about my brain. Which means the datum never actually existed.
Replies from: Manfred
↑ comment by Manfred · 2011-01-23T03:27:23.171Z · LW(p) · GW(p)
It exists only statistically - information seems to be more like entropy than like energy. That's quantum mechanics. If you measure an atom to have spin up, it COULD be because it was always spin up, or it could be that it was spin anything-but-down and you just got lucky. You might say "but since the fact that it was spin-x didn't affect the result, how do we know it existed at all?" Well, that's what Bell's inequality is for, basically. The data in your brain isn't a hidden variable, it's part of the quantum state, and so is subject to being messed with when measured.
Replies from: HonoreDB
↑ comment by Kevin · 2011-01-23T11:01:59.703Z · LW(p) · GW(p)
"You" is rather fungible. There are lots of mes out there that are close enough to me that I would identify with their conscious experience.
Replies from: wedrifid, Manfred
comment by Jack · 2011-01-22T23:56:49.590Z · LW(p) · GW(p)
it would require recomputing the Earth's history from the time the AGI is activated back to the death of the last person it intends to save.
Doing it this way sounds a lot harder than discovering the initial conditions of the universe and then simulating forward, checking the simulation against conditions at the time the AGI exists. Then you just pull people out of the simulation when they 'die' on Earth.
The probability for this being possible follows the probability of the Simulation hypothesis. As for the ethics of it, I'm not convinced the existence of additional copies information-theoretically indistinguishable from already existing people alters the relevant moral calculations one iota.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2011-01-23T00:47:31.606Z · LW(p) · GW(p)
The probability for this being possible follows the probability of the Simulation hypothesis.
No. The simulation hypothesis doesn't require the ground-level universe to have the same laws of physics as our own. This tracks with the much narrower version of the simulation hypothesis in which the ground-level universe does have the same laws of physics as ours.
Replies from: Jack
comment by komponisto · 2011-01-23T02:41:00.677Z · LW(p) · GW(p)
Upvoted from -1 to 0 because the question is interesting and important; the fact that the answer seems clear to some is not cause for downvoting, in my view. (This is the Discussion section; would you have downvoted this post if it had been an Open Thread comment?)
As it happens, I don't think the answer is necessarily a settled matter, any more than time travel in general (of which this can be considered a special case) is. It seems quite possible that the universe may contain much more information about its past than we can currently envision being able to extract. (One might for instance compare the old belief that determining the chemical composition of stars was hopeless due to their distance.) Yes, thermodynamics may imply that not all of the past can be reconstructed, but it may not be necessary to extract all of it in order to reconstruct a few human minds. Presumably a huge (by present standards) expenditure of negentropy would be involved, but as far as I know the universe seems to contain a whole lot of "wasted" negentropy at the moment.
comment by Morendil · 2011-01-23T16:54:34.626Z · LW(p) · GW(p)
Depends what you mean by "recreating"; if you mean a copy that could fool anyone who knew all the relevant facts about that person into believing that it was, indeed, that person that they were dealing with, it's kind of obvious that a powerful AI could do it.
comment by Sniffnoy · 2011-01-22T23:30:39.154Z · LW(p) · GW(p)
The question seems ill-posed. I know what "information-theoretically dead" means, I think I know what "dead by modern standards" means, but I'm not sure it's possible to make sensible the notion of "information-theoretically-dead by modern standards".
comment by benelliott · 2011-01-22T21:17:47.861Z · LW(p) · GW(p)
Do we know if physics is backwards deterministic? If it's not, then information can be permanently lost beyond all hope of recovery, so the answer is almost certainly no. Otherwise I'm not sure.
Replies from: ata
↑ comment by ata · 2011-01-23T22:43:56.211Z · LW(p) · GW(p)
Physics is time-reversible as far as we can tell, so at the most global level, information is neither gained nor lost over time, but even if we overcame the apparently omniscience-requiring challenge of getting a complete snapshot of the universe to simulate backwards, we would then get many pasts instead of one past.
No one knows what superintelligences can't do, but unless we're wrong about physics in some very surprising and suspiciously convenient ways, this certainly appears to be impossible.
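A toy sketch of the "many pasts" point (my own made-up model, not from the comment above): even perfectly reversible dynamics leaves many candidate pasts when the present-day snapshot is incomplete.

```python
def step(s):
    """One tick of a toy reversible dynamics: a bijection on states 0..15."""
    return (5 * s + 3) % 16

# Suppose all we can recover about the present is the state modulo 4
# (an incomplete snapshot). List every past state consistent with that datum:
observed_now = 2
consistent_pasts = [p for p in range(16) if step(p) % 4 == observed_now]
print(consistent_pasts)  # four distinct pasts, all equally compatible with the data
```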
comment by anonym · 2011-01-25T04:37:33.314Z · LW(p) · GW(p)
If long-range weather prediction is impossible in general, even assuming determinism and no random elements, then what you're describing, which would entail solving multiple chaotic dynamical systems, is "doubleplus impossible" in practice.
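For a concrete (if toy) illustration of the chaos point, here is the logistic map, a standard chaotic system; the parameters and the 1e-12 perturbation are just illustrative choices of mine:

```python
def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-12)  # almost the same starting point
for t in (0, 20, 40, 60):
    print(t, abs(a[t] - b[t]))
# The gap grows from 1e-12 to order 1 within a few dozen steps: tiny
# uncertainty in the data ruins prediction, forwards or backwards.
```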
comment by [deleted] · 2011-01-23T19:12:55.147Z · LW(p) · GW(p)
OK, let's make a few assumptions. Assume, for a start, that all the information in your brain is necessary to resurrect you, down to the quantum level. And that this is incompressible - that even computronium requires as much mass/energy as is contained in your brain in order to perfectly simulate it. But assume also that an FAI can convert matter/energy into computronium.
By the Bekenstein Bound, the absolute maximum number of brain states that could be possible is 10^(7.79640 × 10^41), which we can approximate as 10^(10^41). Note this is a SERIOUSLY HIGH estimate. That would be the number of possible states of *any* roughly spherical lump of matter of the right radius and mass. The mass of a brain is between one and two kilograms, so very roughly, we'd need 10^(10^41) kg of computronium to simulate every possible brain state.
The mass of the galaxy is about 10^42 kg, exponentially smaller than the mass that would be needed to simulate every possible brain state.
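A rough back-of-the-envelope check of those figures (my sketch; it assumes a 1.5 kg brain modelled as a sphere of radius 0.067 m, values not given in the comment):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
m, r = 1.5, 0.067        # assumed brain mass (kg) and radius (m)

# Bekenstein bound on the information content of a sphere, in bits:
bits = 2 * math.pi * r * m * c / (hbar * math.log(2))
log10_states = bits * math.log10(2)   # number of states ~ 10 ** log10_states
print(f"bound ~ {bits:.2e} bits; states ~ 10^({log10_states:.3e})")
# Gives roughly 10^(7.8e41) states, matching the figure above; a galaxy's
# ~10^42 kg of mass is nowhere near 10^(7.8e41) kg of computronium.
```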
So that's an upper limit - but that limit rests on wanting simulations of every possible brain state to exist simultaneously, where 'brain state' means everything from the state my brain is in now, to the state it was in one nanosecond ago, to the state that a 1.5kg bag of sugar was in half an hour ago, to...
If we wish to limit ourselves to brains that have existed, then the figure could be much lower. By definition, every human brain thus far has been made from the mass of the Earth's crust and atmosphere, and there have been roughly a hundred billion people ever - they could easily be computed if we had precise data about their states, using considerably less mass than that of the Earth.
So the question becomes, is there any way of winnowing down the 'possible brain states' to the 'have-existed brain states'? I would expect so, to quite a large extent. Firstly, you'd only want to use those brain states that correspond to actual states of human brains - the state that corresponds to being a chunk of my armchair could be thrown away. That in itself would get rid of the vast bulk of possible states of matter.
Secondly, most of the resulting states would be duplicates. There would be my brain now, my brain a second ago, my brain as it would be in a universe where I'd chosen a different pair of socks this morning but otherwise nothing was different... I don't think anyone in the world would have any actual problem (as opposed to philosophical problems) identifying those people as one individual.
Thirdly, we can limit by known history. Obviously, as one goes further back, what counts as 'known history' gets foggier, but we can, for example, not simulate any brains whose memories would require them to have been born on Alpha Centauri in the year 5BC and to be 12 in the year 2032 - getting rid of brains with logically inconsistent or physically impossible pasts would winnow it down some more.
And finally, if all else fails, you can brute force the problem over multiple universes. Of the brains that are left to be simulated, select a subset of them to simulate using a quantum experiment, and leave the others to other aspects of the multiverse (this is assuming the multiversal interpretation of quantum physics to be correct, but I would imagine that a greater-than-human intelligence would have some way of confirming this beyond all doubt).
So my own thought is that it's physically possible (assuming a situation where an FAI exists, which is a whole other problem in itself), but extraordinarily difficult. I would not expect it to happen unless and until a good proportion of the galaxy was converted to computronium. But should that happen, and should an FAI exist, then it would pretty much be the definition of 'friendliness' that it would resurrect people, starting with those easiest to bring back.
My own guesstimate is that, conditional on FAI being achieved in the next 100 years (so enough information is preserved to make them relatively easy to resurrect), and conditional on it being able to use mass utterly efficiently for computation, then there is probably as great as a 40% chance that those alive today but dead before FAI is created would eventually be resurrected with enough fidelity that they couldn't themselves tell the difference. How high you put the probability of FAI and computronium is, of course, up to you.
comment by lsparrish · 2011-01-24T03:45:52.043Z · LW(p) · GW(p)
One approach that might work rather than simulating physics backwards (which strikes me as implausible given the nature of quantum mechanical effects) would be to analyze the information coming back from interstellar dust particles. This would tend to be impacted only by sources of light and gravity at a given precise distance.
Thus a crude model of the large planetary bodies of the solar system at a given moment could be calculated by triangulating information from the correct set of dust particles, which could be refined over time, as new data points are added, to indicate large land masses, and eventually resolved into human bodies and their individual atoms. This should be sufficient to resurrect everyone from their moment of death, as well as restoring every memory they ever forgot.
I don't see this as a good substitute for cryonics because there are plausible universes where FAI is e.g. self limiting in a way that would prevent it from converting the solar system (perhaps the galaxy) into computronium, which I think would probably be a minimal prerequisite to actually pulling something like this off. I also think an intelligence explosion scenario is not inevitable (may be impossible or extraordinarily unlikely for some reason), and that cryonics can work with incremental human-level improvements in technology.
Even in the high-speed fooming FAI class of scenario, such tasks as converting the galaxy to computronium, and collecting enough data from enough points, are limited by the speed of light and might take hundreds of thousands of years, if not millions. In the meantime, those who are cryopreserved could be up and walking around, forming new experiences and laying the groundwork for future civilization. They would also be more likely to have opportunities to take part in early interstellar and intergalactic colonization efforts.
comment by XiXiDu · 2011-01-23T11:34:03.929Z · LW(p) · GW(p)
It might be possible if there is a lot of information available about you, e.g. chat transcripts, videos, and people who knew you very well. This is called a beta-level simulation, as described in some of Alastair Reynolds’ novels.
Resurrection without a backup. As with ecosystem reconstruction, such "resurrections" are in fact clever simulations. If the available information is sufficiently detailed, the individual and even close friends and associations are unable to tell the difference. However, transapient informants say that any being of the same toposophic level as the "resurrector" can see marks that the new being is not at all like the old one. "Resurrections" of historical figures, or of persons who lived at the fringes of Terragen civilization and were not well recorded, are of very uneven quality.
Encyclopedia Galactica - Limits of Transapient Power
The above is of course fictional evidence but I think the idea is very interesting and might not be impossible given a large amount of information and a superhuman AI.
It might work even better if some of your DNA is available so that you can be cloned and your clone imprinted with recordings, extrapolations and behavioural patterns based on lifelogs. I suppose that even without a DNA sample, given sufficiently powerful AI, such a beta-level simulation might be sufficiently close so that only a powerful posthuman being could notice any difference compared to the original. At least that's a nice idea :-)
ETA: In any case, I'd go with cryonics, because beta-level simulations are just crude simulations. Further, I believe that if you don't want to go with cryonics, you can find some reassurance in the many many-worlds ideas: MWI, the simulation argument, dust theory, Tegmark's mathematical universe, etc. If one of them is factual, there will be some "copies" of you alive and happy somewhere.
comment by cousin_it · 2011-01-23T06:06:57.729Z · LW(p) · GW(p)
If future AIs can do such things, they can also probably run "rescue sims" that save you from awful things happening in your life. If you have ever experienced awful suffering that shouldn't look OK to an AI running CEV, this indicates future AIs likely won't be saving people as you suggest. Note that observing other people's awful suffering shouldn't count as evidence to you, because you have no access to their subjective anticipation (the branches of other people you see may all have very low probability). Freaky.
Replies from: JGWeissman
↑ comment by JGWeissman · 2011-01-23T06:20:35.485Z · LW(p) · GW(p)
A "rescue sim" could possibly branch a person before they have some horrible experience, which would result in a copy that never had that experience, but instance that does have the experience would still exist.
comment by James_Miller · 2011-01-22T22:26:21.899Z · LW(p) · GW(p)
It seems more likely that the AGI would figure out a way to travel backwards in time to rescue you.
Replies from: Pavitra
↑ comment by Pavitra · 2011-01-23T00:33:56.806Z · LW(p) · GW(p)
If time travel were possible, a uFAI Boltzmann brain would have already gone back to the Big Bang and eaten the universe.
Replies from: Desrtopa, James_Miller, Document, JoshuaZ
↑ comment by Desrtopa · 2011-01-23T00:57:03.952Z · LW(p) · GW(p)
According to this book, our current models predict that time travel to the past is theoretically possible, but not practical enough to allow for that. You can't go back further than the point at which a spacetime configuration that allows for travel back in time came into existence.
Replies from: Pavitra
↑ comment by James_Miller · 2011-01-23T03:19:08.153Z · LW(p) · GW(p)
You're assuming a very large universe.
Replies from: Pavitra
↑ comment by Document · 2011-01-24T10:30:27.546Z · LW(p) · GW(p)
Inherent paradoxes of causal-loop-style time travel aside, isn't the Big Bang where a Boltzmann brain is most likely to be in the first place?
Replies from: Pavitra
↑ comment by Pavitra · 2011-01-26T03:14:37.666Z · LW(p) · GW(p)
Not that I have actual relevant technical knowledge or anything, but why would it be in the relatively small amount of space and time shortly following the Big Bang, as opposed to the vastly larger entire rest of the universe?
Replies from: Document
↑ comment by JoshuaZ · 2011-01-23T00:51:28.339Z · LW(p) · GW(p)
This depends on the nature and limits of time travel. But it does seem fair that if time travel could allow what the parent post wanted then this is a plausible problem. However, it is slightly misleading to talk about uFAIs and Boltzmann brains in this context. If generic time travel exists, we should simply see the repeated intervention of all sorts of entities.
Replies from: Pavitra
↑ comment by Pavitra · 2011-01-23T16:52:03.163Z · LW(p) · GW(p)
Only the winner can go back and set things up.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2011-01-23T17:47:45.748Z · LW(p) · GW(p)
While that might be true in the short run (incidentally, is that a direct quote from Bill and Ted?), if one is dealing with Boltzmann brains then there is no long-term winner, just never-ending fluctuations that become rarer and rarer as the universe cools down.
Replies from: Pavitra
↑ comment by Pavitra · 2011-01-26T03:17:24.982Z · LW(p) · GW(p)
A Boltzmann brain can't really become powerful if there already exists a Robot God watching for upstarts and squishing them as they arise. And yes, that's a quote.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2011-01-26T03:51:45.048Z · LW(p) · GW(p)
I don't follow your logic. Can you expand?
Replies from: Pavitra
↑ comment by Pavitra · 2011-01-26T04:52:47.821Z · LW(p) · GW(p)
Suppose an AGI comes into being. (I mentioned Boltzmann brains as a sort of lower bound on what it would take for this to occur: the universe being very large, this is virtually guaranteed to happen at least once at some point somewhere.)
What happens to the newborn godling? This greatly depends on its immediate environment. If it's born into a region of spacetime where it's surrounded by dumb matter (any sub-foom intelligence being effectively "dumb" for these purposes), then it quickly takes over its future light cone.
If the space it controls gets large enough (which it will, given time), then it will have to contend with the possibility of contenders emerging, Boltzmann gods like itself spontaneously forming out of the ether (or rather, since it's eaten its future light cone, out of the computronium). Luckily for it, it has a vast resource and positional advantage over the upstarts, by virtue of having time and space to prepare. The upstarts have minds that can fit into a volume small enough to form by pure chance (and the Law of Large Numbers), whereas the standing god has no such limitations.
So we can anticipate that, in a conflict between two Boltzmann gods, the firstborn would win.
If we further postulate that full-strength time travel is possible, then it follows that a Boltzmann (or other) god would eventually figure this out and travel back to the beginning of the universe, so as to control all of everything. By the previous argument, an AGI with first-mover advantage from the beginning of the universe would be able to easily prevent any serious threats to itself from arising. Thus, only one godling will ever rise to full strength, only one will go back in time and set itself up, and there will be no (successful) revolutions.
From this we may infer that either: (1) the universe is not that big; (2) fully general time travel isn't possible; (3) we are instrumental to the Robot God's plans in some way; (4) we are a component of the RG's mind; (5) we are a side effect of the RG's computation and it doesn't notice or care enough to kill us.
I don't think 1, 3, or 4 is really plausible. 5 without 2, in such a way that 2 is still false of the sub-reality that the side effect created, would probably recurse until we got a non-time-traveling side-effect sub-reality. That leaves 2: you can't travel back in time to before the invention of the time machine (or some other such restriction).
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2011-01-26T05:06:03.113Z · LW(p) · GW(p)
Ah ok. So it seems we have different conceptions about how and when Boltzmann brains arise. As I understand it, the vast majority of Boltzmann brains will arise after what we would normally call the heat death of the universe. They won't in general have the resources to control their future light-cone because it won't have enough usable energy. That's aside from the fact that this assumes Boltzmann brains that have their beliefs about reality around them in some way correlated with reality, something which does not necessarily have such a high probability (this is the classic argument behind why you might be a Boltzmann brain who has lasted a fraction of a second and will soon dissolve back into chaos.)
Replies from: Pavitra
↑ comment by Pavitra · 2011-01-26T22:00:53.656Z · LW(p) · GW(p)
this assumes Boltzmann brains that have their beliefs about reality around them in some way correlated with reality, something which does not necessarily have such a high probability
Certainly the vast majority of Boltzmann brains don't become gods, in the same way that the vast majority of virtual particles don't form brains. But it only takes one, ever.
However, it occurs to me that I haven't actually done the math, and the improbability of an AGI forming out of the ether may well exceed the space-and-time volume of the universe from Big Bang to heat death.
the vast majority of Boltzmann brains will arise after what we would normally call the heat death of the universe. They won't in general have the resources to control their future light-cone because it won't have enough usable energy.
This sounds like a plausible argument for heat death as a hard deadline on the birth of a Boltzmann god. But once one exists, any others that arise in its future light cone are rendered irrelevant.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2011-01-26T22:02:58.567Z · LW(p) · GW(p)
This sounds like a plausible argument for heat death as a hard deadline on the birth of a Boltzmann god. But once one exists, any others that arise in its future light cone are rendered irrelevant.
That seems correct. So the details come down to precisely how many Boltzmann brains one expects to arise, what sort of goals they'll have, and how much resources they'll have. This seems very tough to estimate.