A proposal for a cryogenic grave for cryonics

post by Roko · 2010-07-06T19:01:36.898Z · LW · GW · Legacy · 204 comments

Followup to: Cryonics wants to be big

We've all wondered about the wisdom of paying money to be cryopreserved, when the current social attitude to cryopreservation is relatively hostile (though improving, it seems). In particular, the probability that either or both of Alcor and CI go bankrupt in the next 100 years is nontrivial (perhaps 50% for "either"?). If this happened, cryopreserved patients could be left to die at room temperature. There is also the possibility that the organizations are closed down by hostile legal action.[A]

The ideal solution to this problem is a way of keeping bodies cold (colder than -170C, probably) in a grave. Our society already has strong inhibitions against disturbing the dead, which means that a cryonic grave that required no human intervention would be much less vulnerable. Furthermore, such graves could be put in unmarked locations in northern Canada, Scandinavia, Siberia and even Antarctica, where it is highly unlikely people will go, thereby providing further protection. 

In the comments to "Cryonics wants to be big", it was suggested that a large enough volume of liquid nitrogen would simply take > 100 years to boil off. Therefore, a cryogenic grave of sufficient size would just be a big tank of LN2 (or some other cryogen) with massive amounts of insulation.

So, I'll present what I think is the best possible engineering case, and invite LW commenters to correct my mistakes and add suggestions and improvements of their own.

If you have a spherical tank of radius r with insulation of thermal conductivity k and thickness r (so a total radius for insulation and tank of 2r) and a temperature difference of ΔT, the power getting from the outside to the inside is approximately

25 × k × r × ΔT 

If the insulation is made much thicker, we get into sharply diminishing returns (asymptotically, we can achieve only another factor of 2). The volume of cryogen that can be stored is approximately 4.2 × r³, and the total amount of heat required to evaporate and heat all of that cryogen is

4.2 × r³ × (volumetric heat of vaporization + gas enthalpy)

The quantity in brackets, for nitrogen and a ΔT of 220 °C, is approximately 346,000,000 J/m³. Dividing energy by power gives a boiloff time of

(1/12,000) × r² × k⁻¹ centuries

Setting this equal to 1 century, we get:

r²/k = 12,000.
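
For concreteness, here is the whole back-of-the-envelope model as a few lines of Python (a minimal sketch: the 25 × k × r × ΔT conduction term and the 346 MJ/m³ heat budget are the figures above, and everything else is unit conversion):

```python
# Back-of-envelope boiloff model from the post: conduction through a
# spherical insulation shell of thickness r into 4.2*r^3 cubic meters
# of liquid nitrogen. All constants are the figures quoted above.

SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600
HEAT_PER_M3 = 346e6   # J/m^3: N2 heat of vaporization + gas enthalpy (dT = 220 C)
DELTA_T = 220.0       # K, LN2 to ambient

def boiloff_centuries(r, k):
    """Time for all the cryogen to boil off, in centuries."""
    power_in = 25 * k * r * DELTA_T       # W; 25 is ~8*pi for a shell from r to 2r
    energy = 4.2 * r**3 * HEAT_PER_M3     # J; total heat budget of the cryogen
    return energy / power_in / SECONDS_PER_CENTURY

for k in (0.012, 0.0007):                 # cryogel vs. powder-in-vacuum (see below)
    r = (12000 * k) ** 0.5                # radius needed for a 1-century hold
    print(f"k = {k} W/m-K: r = {r:.1f} m, "
          f"check = {boiloff_centuries(r, k):.2f} centuries")
```

Since the hold time scales as r², doubling the radius quadruples it; that is where the 400-year figure below comes from.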

Now the question is, can we satisfy this constraint without an exorbitant price tag? Can we do better and get 2 or 3 centuries? 

"Cryogel" insulation with a k-value of 0.012 is commercially available Meaning that r would have to be at least 12 meters. A full 12-meter radius tank would weigh 6000 tons (!) meaning that some fairly serious mechanical engineering would be needed to support it. I'd like to hear what people think this would cost, and how the cost scales with r. 

The best feasible k seems to be fine granules or powder in a vacuum. When the mean free path of a gas increases significantly beyond the characteristic dimension of the space that encloses it, the thermal conductivity drops linearly with pressure. This company quotes 0.0007 W/m-K, though this is at high vacuum. Fine granules of aerogel would probably outperform this in terms of the vacuum required to get down to < 0.001 W/m-K. 
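
To see why fine pores help, here is a rough kinetic-theory sketch of the nitrogen mean free path versus pressure (the molecular diameter is a textbook value, and the 200 K temperature is an arbitrary point between LN2 and ambient; both are illustrative assumptions, not design figures). Silica aerogel pores are roughly tens of nanometers, so the free-molecular regime is reached at a far rougher vacuum than in an open gap:

```python
# Rough mean-free-path estimate for nitrogen gas, to illustrate why fine
# pores relax the vacuum requirement. Standard kinetic-theory formula;
# the temperature and pressures below are purely illustrative.
import math

K_B = 1.380649e-23   # J/K, Boltzmann constant
D_N2 = 3.7e-10       # m, approximate effective diameter of an N2 molecule

def mean_free_path(pressure_pa, temp_k=200.0):
    # temp chosen midway between LN2 and ambient, purely illustrative
    return K_B * temp_k / (math.sqrt(2) * math.pi * D_N2**2 * pressure_pa)

for p in (101325, 1013, 1.0):   # atmospheric, ~"99% vacuum", rough high vacuum
    print(f"p = {p:>9.0f} Pa: mean free path = {mean_free_path(p)*1e6:.3g} um")
```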

Suppose it is feasible to maintain a good enough vacuum to get to 0.0007 W/m-K, perhaps with aerogel or some other material. Then r is a mere 2.9 meters, and we're looking at a structure the size of a large room rather than the size of a tower block, and a cryogen weight of a mere 80 tons. Or you could double the radius and have a system that would survive for 400 years, with a size and weight still not in the "silly" range.

The option that works without a vacuum is inviting because there's one less thing to go wrong; I am no expert on how hard it would be to make a system hold a rough vacuum for 100 years, so it is not clear how much weight to put on the vacuum designs.

As a final comment, I disagree that storing all patients in one system is a good idea. Too many eggs in one basket is never good when you're trying to maximize the probability that each patient will survive. That's why I'm keen on finding a system small enough that it would be economical to build one for a few dozen patients, say (cost < $30 million).

So, I invite Less Wrong to comment: is this feasible, and if so how much would it cost, and can you improve on my ideas?

In particular, any commenters with experience in cryogenic engineering would delight me with either refinement or critique of my cryogenic ideas, and delight me even more with cost estimates of these systems. It's also fairly critical to know whether you can hold a 99% vacuum for a century or two.

A: In addition to this, many scenarios where cryonics is useful to the average LW reader are scenarios where technological progress is slow but "eventually" gets to the required level of technology to reanimate you, because if progress is fast you simply won't have time to get old and die before we hit longevity escape velocity. Slow progress in turn correlates with the world experiencing a significant "dip" in the next 50 or so years, such as a very severe recession or a disaster of some kind. These are precisely the scenarios where a combination of economic hardship and hostile public opinion might kill cryonics organizations. 

204 comments

Comments sorted by top scores.

comment by Vladimir_Nesov · 2010-07-06T20:57:09.952Z · LW(p) · GW(p)

With the development of commercial space flight, at some point launching cryonauts into space might become cost-effective.

Right now, with a human head weighing about 5 kg, launching it would cost about $150,000 (not counting the cryopreservation equipment, which is probably significant, and has to withstand the launch stress). Comparing this with the price tag of Alcor's full-body preservation, which is also $150,000, it's not totally bonkers to suppose that in a few decades it could become competitive, even without fancy space elevators.
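
(A trivial sanity check on the implied per-kilogram price; both inputs are the figures above.)

```python
# Implied launch price from the figures above: $150,000 for a ~5 kg head.
cost_usd, mass_kg = 150_000, 5.0
print(f"implied launch price: ${cost_usd / mass_kg:,.0f} per kg")  # ~$30,000/kg
```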

If it's possible to use the low temperature of space, despite the solar radiation, to keep the package cold, or to somehow keep it in shadow (maybe behind a specifically crafted accompanying object; I'm not sure about that, and it's a critical question), it could be a no-maintenance solution, where one would have to perform a deliberate and rather costly procedure to disturb it.

Replies from: Roko
comment by Roko · 2010-07-06T21:17:06.223Z · LW(p) · GW(p)

True, I considered that possibility. But the problem is, what do you do with it once it's in space? You have to somehow make sure that it stays cold and doesn't hit anything or deorbit.

How would you prevent solar radiation from heating it up? Reflective on the sun-facing side, black on the other side? But then you have to keep a control system operating for hundreds of years!

And think of the required radiation shielding! All of your stuff is getting irradiated, so you need lots of lead.

And if it goes wrong, what do you do?

No, space is not good.

Now if you could get it to the moon, you'd be in business. Bury it in one of the always-shaded craters, perhaps a kilometer below the surface, and it'll be safe for millions of years.

Replies from: Eliezer_Yudkowsky, Vladimir_Nesov
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-07T05:14:14.641Z · LW(p) · GW(p)

And think of the required radiation shielding! All of your stuff is getting irradiated, so you need lots of lead.

Is the radiation going to cause significant information-theoretic damage? In how long?

Replies from: Roko
comment by Roko · 2010-07-07T10:53:56.866Z · LW(p) · GW(p)

I don't know; Wikipedia states that you'd receive 0.5-1 sieverts per year in normal conditions, whereas the safe dose for a living human is 0.002 Sv/yr. However, that's for a living human.

In a solar flare event, the dose would go up.

I'd bet that it would take thousands of years for this to add up to irreparable damage, with some uncertainty regarding solar flares.

Replies from: None
comment by [deleted] · 2010-07-07T17:07:39.466Z · LW(p) · GW(p)

Shopping is hard, let's do math!

First, we need a conceptual framework. The whole point of cryonics is to stop chemistry, so if you're cryopreserved and then exposed to ionizing radiation over any period of time, you'll experience the same amount of damage as if you were alive and exposed to that much radiation all at once. (Being alive and exposed to radiation over a period of time is different; you experience less damage because your cells have time to repair themselves.)

Wikipedia says "Estimates are that humans unshielded in interplanetary space would receive annually roughly 400 to 900 milli-Sieverts (mSv) (compared to 2.4 mSv on Earth)". Wikipedia also says that an acute exposure of 4500 to 5000 mSv is "LD50 in humans (from radiation poisoning), with medical treatment". Now, LD50 isn't LD100, but we can agree that it's a Very Bad Dose.

Generously, assuming that the Very Bad Dose is 5000 mSv, and Outer Space's Death Rays are 400 mSv/yr, being Cryopreserved In Space will give you a Very Bad Dose in 12.5 years. This is compared to roughly 2000 years on Earth.
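
The same arithmetic as a sketch, with the assumptions restated in comments:

```python
# The arithmetic above in one place. The dose rates are the quoted
# Wikipedia figures; 5000 mSv is the acute LD50, treated here as a crude
# "damage budget" for a patient with no cellular repair going on.
VERY_BAD_DOSE_MSV = 5000.0

for place, msv_per_year in [("interplanetary space", 400.0), ("Earth", 2.4)]:
    years = VERY_BAD_DOSE_MSV / msv_per_year
    print(f"{place}: ~{years:,.1f} years to a Very Bad Dose")
```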

That answers one half of Eliezer's question. My answer to the other half (is this significant in information-theoretic terms) is mu. When you're cryopreserved, you're double dead - dead from whatever killed you, and dead again from cryopreservation damage. You're betting that the future can fix this, but you shouldn't give the future even more work to do, and being triple dead from radiation damage wouldn't help.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-07T17:49:59.763Z · LW(p) · GW(p)

This fits in with something I've been wondering about in general for Earth-based cryopreservation. How much effort do cryonics organizations make to ensure that there's a minimum of radiation exposure to the cryopreserved individuals? Even background radiation matters a lot more than it would for a living person, since there are no ongoing repair mechanisms. I suspect that the bodies are not subject to much external radiation simply because the cryochambers themselves would block most of it. But the bodies themselves will generate some radiation, primarily from the decay of potassium-40 and carbon-14. Naively, if one were trying absolutely to minimize this, one would have people who knew they were likely to die soon (due to terminal illness) eat diets with less potassium. One could also conceive of foods made with carbon that has a low amount of C-14. But given the proportions, I'm pretty sure that the bulk of the radiation will be from potassium-40. Robert Ettinger at one point presented a back-of-the-envelope calculation showing that the radiation just from potassium-40 is unlikely to be a problem if one is in the range of 50-100 years, but if one is interested in longer ranges then this becomes a more serious worry.

Replies from: Roko, None
comment by Roko · 2010-07-07T21:32:37.571Z · LW(p) · GW(p)

Remember that it takes a lot more radiation to erase someone than to merely kill them.

To information-theoretically erase a person would seem to require that at least 40% of the molecules in their brain are altered, which would seem to imply at least 10^24 or so ionizing particles. This is extremely high.
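
A minimal order-of-magnitude sketch of where a number like 10^24 can come from. The brain mass and the pure-water approximation are standard figures; the 40% threshold is the guess above; the ~10 molecules altered per ionizing event is an additional assumption made purely for illustration:

```python
# Order-of-magnitude check on the 10^24 figure. ASSUMPTIONS: brain ~1.4 kg,
# treated as pure water (standard figures); erasure threshold of 40% of
# molecules (from the comment above); ~10 molecules altered per ionizing
# event (a pure guess, included only to show how 10^24 can arise).
AVOGADRO = 6.022e23
brain_molecules = 1.4e3 / 18.0 * AVOGADRO   # grams / (g/mol) * molecules/mol
to_erase = 0.40 * brain_molecules           # molecules that must be altered
events = to_erase / 10.0                    # ASSUMED ~10 alterations per event
print(f"~{brain_molecules:.1e} molecules, ~{events:.1e} ionizing events")
```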

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-07T21:57:48.459Z · LW(p) · GW(p)

I'm curious where you are getting the 40% number. I'm not completely sure what we mean by erasing a person, since the mind isn't a binary presence that is either there or not. Damage can result in loss of some aspects of personality or some memories without complete erasure. Presumably, most people would like to minimize that issue.

Given your 40% claim I tentatively agree with your 10^24 number. There's a minor issue of cascading particles, but that shouldn't be a problem since most of the radiation is going to be low-energy beta particles. I am, however, slightly concerned that radiation could produce additional free radicals which are able to jump around and do unpleasant chemical stuff even at liquid-nitrogen temperatures. I suspect that this would not be a major issue either, but I don't think I have anywhere near enough biochem knowledge to make a strong conclusion about this.

Additionally, as STL pointed out, we don't want to make things more difficult for the people reviving them. This combines badly with the first-in-last-out nature of cryonics: the bodies which have been around longer will have more radiation damage and will already be much more technically difficult to revive. Moreover, some people will strongly prefer being reanimated in their own bodies rather than as simulations on computers. The chance of that occurring is lower if the bodies have serious problems due to radiation damage.

Replies from: Roko
comment by Roko · 2010-07-07T22:36:47.149Z · LW(p) · GW(p)

Say you randomly alter 1% of the molecules in the brain. Then almost every neuron would still recognizably be a neuron, and still have synapses that connected to the right things, and any concentration of neurotransmitter X would still recognizably be type X (rather than Y). There is no way I see for 1% random destruction to erase the person information-theoretically.

The difference between 1% and 40% is not actually so much... 10^22 vs 10^24. Still huge.

Replies from: JoshuaZ, wedrifid
comment by JoshuaZ · 2010-07-07T22:53:34.789Z · LW(p) · GW(p)

Say you randomly alter 1% of the molecules in the brain. Then almost every neuron would still recognizably be a neuron, and still have synapses that connected to the right things, and any concentration of neurotransmitter X would still recognizably be type X (rather than Y). There is no way I see for 1% random destruction to erase the person information-theoretically.

Would this be enough to keep thresholds for action potentials correct? I'm more familiar with neural nets for computational purposes than with actual neural architecture, but for neural nets this matters a lot. You can have wildly different behavior even with the same neurons connected to each other just by changing the potential levels. Learning behavior consists not just in constructing or removing connections but also in strengthening and weakening existing connections.

I don't know why you mention the concentrations of neurotransmitters since that's a fairly temporary thing which (as far as I'm aware) doesn't contain much in the way of actual data except about neurons which have fired very recently.

Replies from: Roko
comment by Roko · 2010-07-08T00:24:27.177Z · LW(p) · GW(p)

Would this be enough to keep thresholds for action potentials correct?

What determines the threshold for an action potential? If it's something bigger than a few dozen molecules, it seems that a random 1% destruction can't erase it.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-08T00:53:21.253Z · LW(p) · GW(p)

I don't know enough about the mechanisms to comment. Do we have any more biologically inclined individuals here who can?

comment by wedrifid · 2010-07-07T23:43:44.999Z · LW(p) · GW(p)

There is no way I see for 1% random destruction to erase the person information-theoretically.

I suspect you are right. Since the important structures involved are significantly larger than one molecule, most of the single molecule alterations will be rather obvious and easy to reverse (for a given kind of 'easy').

comment by [deleted] · 2010-07-07T18:34:27.307Z · LW(p) · GW(p)

Life is tough. Unlife is tougher.

comment by Vladimir_Nesov · 2010-07-06T21:22:48.375Z · LW(p) · GW(p)

Now if you could get it to the moon, you'd be in business. Bury it in one of the always-shaded craters, perhaps a kilometer below the surface, and it'll be safe for millions of years.

Yes, that would be lots better. The cost of robotics should eventually go down as well, which would enable relatively cheap crater-seeking moon-burrowing robots (though sabotage would become cheaper over time too).

comment by Mass_Driver · 2010-07-06T19:53:38.741Z · LW(p) · GW(p)

Good idea! A few refinements:

  • You probably don't want a literally spherical tank; it might roll away and hit something or bother someone. Trading a few % of efficiency for a flattened, ridged bottom might be a good idea.

  • If you're going to rely on social taboos against disturbing graves, you probably have to keep bodies/tank down to 30, if not an even lower number. A group of family and friends who are buried together in the same crypt are eccentric; a community of essentially unrelated people who are buried together in the same crypt are a cult, and lose a lot of the respect that they would otherwise get from mainstream culture.

  • Does having a grave with no human/infrastructural maintenance mean that you can't slap a generator on it somewhere? What would having a small solar panel or a petroleum mini-tank do for the chances of repairing minor cracks in the vacuum, or of reducing heat infiltration?

Replies from: NancyLebovitz, Roko, Roko, Roko
comment by NancyLebovitz · 2010-07-07T09:51:53.453Z · LW(p) · GW(p)

If you're going to rely on social taboos against disturbing graves, you probably have to keep bodies/tank down to 30, if not an even lower number. A group of family and friends who are buried together in the same crypt are eccentric; a community of essentially unrelated people who are buried together in the same crypt are a cult, and lose a lot of the respect that they would otherwise get from mainstream culture.

I'm pretty sure this is mistaken-- people generally don't wreck graveyards, even though large numbers of people are buried there.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-07-08T03:48:24.702Z · LW(p) · GW(p)

Right, but the graveyard is thought of as a place where many individuals are separately buried. It's OK, if slightly mischievous, to enter a graveyard, tell spooky stories there, maybe even make out -- but you would never do any of those things inside a grave.

If we build a cryoyard in which there are many individual cryotanks nearby, that will probably be fine, and might cut down on security costs. But if we put all the bodies in the same cryotank, then we run a nontrivial risk of setting off people's creepy cult alarms, and the taboo against disturbing graves-of-people-who-are-not-markedly-unholy may or may not hold.

Replies from: JoshuaZ, NancyLebovitz
comment by JoshuaZ · 2010-07-08T03:55:24.236Z · LW(p) · GW(p)

Yes, and in the very worst case scenario, the weirdness factor would make some teenagers more likely to try to go and vandalize them as a dare. A weird cult having strange frozen crypts is almost asking for that to happen. Unfortunately, this is real life, so we can't even have the satisfaction of this sort of thing triggering the terrible monsters that sleep beneath the cursed ground. (Why yes, I have watched too many bad horror movies. Whatever gave you that impression?)

Replies from: Mass_Driver
comment by Mass_Driver · 2010-07-08T04:12:18.599Z · LW(p) · GW(p)

Unfortunately, this is real life.

If you were developing a simulation of a Universe for entertainment purposes, how long would you let the inhabitants think they were at the top level of reality before introducing firm evidence that something was seriously off?

Just curious.

Replies from: NancyLebovitz, DSimon
comment by NancyLebovitz · 2010-07-08T11:21:40.310Z · LW(p) · GW(p)

Depends on how long the backstory is.

Also, it's plausible that any species which can simulate complex universes has a longer attention span than we do.

Consider the range of human art. It's plausible that simulators would have at least as wide a range, and I can see purist simulators (watchmaker Gods) and interventionists.

comment by DSimon · 2010-07-08T20:47:29.236Z · LW(p) · GW(p)

I'd do it over and over again, in all sorts of different ways, record the hilarious results, and after each such session reset the simulation back to an earlier, untampered saved state.

Replies from: steven0461
comment by steven0461 · 2010-07-08T21:23:41.057Z · LW(p) · GW(p)

I've long suspected that we live in the original universe's blooper reel.

comment by NancyLebovitz · 2010-07-08T11:18:53.434Z · LW(p) · GW(p)

This doesn't match my intuitions at all, but I'm not an expert on normal people.

Is there any way the plausible range of reactions to big cryonics facilities can be tested?

Replies from: Mass_Driver
comment by Mass_Driver · 2010-07-08T13:37:57.193Z · LW(p) · GW(p)

Let's ask our neighbors!

comment by Roko · 2010-07-06T20:13:25.996Z · LW(p) · GW(p)

generator

I've thought hard about this, but I see no way to get anything to be reliable enough, with the exception of a radioisotope thermal generator.

Repairing the vacuum can be done with getters and adsorbers (they just trap gas molecules chemically), which is a no-moving-parts solution. The insulation layer could be full of little sorbs.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-07-06T21:29:36.993Z · LW(p) · GW(p)

radioisotope thermal generator.

[grin] I wasn't sure if those were sci-fi or not.

the tanks would be buried under the ground

Sure, for starters, but it's hard to say what will and won't be permafrost in 100 years, what with the non-trivial risk of catastrophic climate change and all. If the tank is built right, I think rolling, although unlikely, would still be one of the top 5 most likely failure modes; it is an easy enough flaw to fix.

Even municipal water towers, e.g., aren't perfect spheres, and nobody expects those to fall off their columns and plow through downtown Suburb Beach.

Replies from: gwern, Roko
comment by gwern · 2010-07-07T02:11:55.407Z · LW(p) · GW(p)

Far from being sci-fi, they are quite common (if we're talking about the same thing): http://en.wikipedia.org/wiki/Radioisotope_thermoelectric_generator#History Common enough that they're the main reason NASA has been targeted by green groups, even.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-07-08T03:48:57.100Z · LW(p) · GW(p)

Cool!

comment by Roko · 2010-07-06T21:47:08.372Z · LW(p) · GW(p)

You're right to worry about global warming. But permafrost is soil, not ice. Permafrost means "always frozen soil".

I suspect that there are regions of northern Canada where even a +20 degree warming would not get rid of the permafrost. Though the cost of getting to these places may be prohibitive? Anyone live in Canada and know about Nunavut?

Replies from: kraryal
comment by kraryal · 2010-07-07T03:57:08.729Z · LW(p) · GW(p)

I can verify that these places are accessible, and that the permafrost extends quite a bit farther south than one might expect. I used to live just south of the Yukon territory.

There are regular long-haul trucks that go up there all year round; if you go in winter, you can use an ice road to get to the very cold and remote places. Given the regular volume of traffic, I'd say the cost is not prohibitive. I can get precise figures if you'd like.

Replies from: Roko
comment by Roko · 2010-07-07T11:15:21.479Z · LW(p) · GW(p)

Thanks. Do you know what places have the coldest winter temperature?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-07-07T18:00:06.151Z · LW(p) · GW(p)

Hits on Google for "coldest place on earth" seem unanimous that it's somewhere in Antarctica. Here's an interesting newspaper article:

http://www.telegraph.co.uk/news/worldnews/antarctica/6121866/Scientists-identify-coldest-place-on-earth.html

This sounds like it could be a lot of fun.

Replies from: Roko, D_Alex
comment by Roko · 2010-07-08T01:10:38.676Z · LW(p) · GW(p)

That's pretty cool. As I said, -70C is thermodynamically very useful. A phase change heat-pipe could capture that cold from the winter, meaning that throughout the summer your system still only sees an outside temperature of -70C.

comment by D_Alex · 2010-07-08T05:16:54.585Z · LW(p) · GW(p)

This place is much colder...

http://gcaptain.com/maritime/blog/wp-content/uploads/2007/07/inside-lng-tank.jpg

If you could only get permission to use it...

comment by Roko · 2010-07-06T20:14:16.767Z · LW(p) · GW(p)

If you're going to rely on social taboos against disturbing graves, you probably have to keep bodies/tank down to 30, if not an even lower number

I agree. The ideal is just one person.

comment by Roko · 2010-07-06T20:03:12.610Z · LW(p) · GW(p)

I should have made clear: the tanks would be buried under the ground, probably in the Canadian permafrost. No rolling.

comment by kraryal · 2010-07-07T04:00:32.925Z · LW(p) · GW(p)

I think that the idea is good, and the engineering is fine for back-of-the-envelope, but can we please call it a "vault" or something instead of a grave? Cryonics already has an image problem, and we don't want to suggest the people in the grave are permanently dead.

Replies from: Roko, Tenek, Roko
comment by Roko · 2010-07-07T17:05:55.623Z · LW(p) · GW(p)

But, on the other hand, there is the advantage that if you spin it as a grave, then when the cryonics company goes bust, the law protects the patient (to some extent) from being disturbed. For example, creditors can't dismantle it for scrap if it's a grave, but if it's an Alcor/CI asset, then they can.

Rather like patently atheistic cryonicists having to say that they have a "religious" objection to autopsy.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-07-07T18:04:43.761Z · LW(p) · GW(p)

Rather like patently atheistic cryonicists having to say that they have a "religious" objection to cryonics.

"cryonics" -> "autopsy", I assume

comment by Tenek · 2010-07-07T18:04:26.233Z · LW(p) · GW(p)

Then we can suggest that they're temporarily dead, but they're still dead, so it's a "grave". Religions have been saying that death is temporary for thousands of years anyway; it wouldn't be anything new.

comment by Roko · 2010-07-08T11:01:58.625Z · LW(p) · GW(p)

Maybe we should spin a cryo-patient as "undead". Isn't there a vampire show on TV that everyone is glued to? I can't even remember the name.

Get a cryonics contract. You too can be a zombie, which is AWESOME!

comment by Sebastian_Hagen · 2010-07-06T19:43:53.171Z · LW(p) · GW(p)

As a final comment, I disagree that storing all patients in one system is a good idea. Too many eggs in one basket is never good when you're trying to maximize the probability that each patient will survive.

Why? Several baskets certainly make sense if you're trying to maximize the probability that at least a few patients survive, and might make sense if you assign significantly negative utility to higher variance in your probability distribution about survival percentage. If you just care about the mean, why would more baskets be better?

Replies from: Roko, lsparrish
comment by Roko · 2010-07-06T19:56:08.111Z · LW(p) · GW(p)

(a) Because a single, large target is very inviting for people to try to break deliberately, but if there are N small targets spread out all over the place, the effort required to inflict a lot of damage is extreme, especially if each grave is under 5 meters of concrete and in a remote location.

Also, the idea of killing "the elitists" by rupturing their collective cryograve seems more righteous than going to Jack Smith's grave and specifically killing Jack; the latter seems more like murder. So the optimal solution is one person per grave.

(b) Because graves can use slightly different technology, and in the time between when you set the scheme up and when civilization or just cryonics companies collapse, you can see which designs actually fail, and rescue the patients inside them. A large population of, say, 100 graves, with 10 examples of each design type, will yield information about what works best as the worst ones break. Then over time you should become more confident in the remaining designs that have zero instances of failure.

Replies from: RolfAndreassen, jimmy
comment by RolfAndreassen · 2010-07-06T20:19:29.225Z · LW(p) · GW(p)

It seems to me that, for someone to conceive of their actions as "specifically killing Jack", they have to believe that cryogenics works. If they don't, they're not killing Jack, they're just vandalising his grave, and he was clearly a weirdo. This doesn't necessarily invalidate your points; I'm just saying that you should be careful not to project your own beliefs onto future opposers-of-cryogenics, or you will defend against the wrong attitudes.

Replies from: Roko
comment by Roko · 2010-07-06T20:23:54.455Z · LW(p) · GW(p)

I think that people who deliberately wanted to smash cryo facilities would do so because they were jealous, i.e. they thought that there was a chance that it would work. This is especially the case if civilization is going pear-shaped and they feel that the cryonauts are getting a lucky escape, and there are rumors of "the elite" escaping to the future that way, etc.

If you don't think cryo works, you don't have a motive to expend lots of effort smashing cryo facilities. So the only protection required against people who think it won't work is that the facility is so remote they'll never bump into it.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-07T05:16:00.660Z · LW(p) · GW(p)

I think you're Rokomorphizing an awful lot. You just need to be in a state of mind where smashing a cryo container seems cool, something that can score points with your friends, and where you think you can get away with it.

Replies from: NancyLebovitz, Roko
comment by NancyLebovitz · 2010-07-07T07:14:49.300Z · LW(p) · GW(p)

And in particular, where smashing cryonics facilities will infuriate the people who care about them, even if you don't believe cryonics will work.

I don't have a feeling for whether anti-cryonicism will ever get to that point. My feeling is that the sort of vandalism I'm talking about is extremely impulsive, and just not having cryonic storage near where people live is enough to greatly improve the odds that there won't be random vandalism.

Replies from: wedrifid
comment by wedrifid · 2010-07-07T07:20:23.684Z · LW(p) · GW(p)

and just not having cryonic storage near where people live is enough to greatly improve the odds that there won't be random vandalism.

Also guns. People with guns.

Replies from: ciphergoth, NancyLebovitz
comment by Paul Crowley (ciphergoth) · 2010-07-07T07:30:08.787Z · LW(p) · GW(p)

According to Mike Darwin one cryonics facility (don't remember which, sorry) has already been shot at from the street.

Replies from: cupholder
comment by cupholder · 2010-07-07T08:41:33.633Z · LW(p) · GW(p)

For being a cryonics facility? Is there enough evidence to determine if it could've been just a random drive-by?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-07-07T10:32:12.013Z · LW(p) · GW(p)

I'm afraid all I know about it is a brief remark from Mike Darwin somewhere in this sequence of videos:

http://www.youtube.com/user/KoanPhilosopher#grid/user/B6A98520CF2F56AC

comment by NancyLebovitz · 2010-07-07T07:31:12.429Z · LW(p) · GW(p)

You probably mean security guards. Note that decent security is going to add something to the cost of cryonics.

However, this gets to the scarier possibility-- government policies opposed to cryonics. Any ideas about the odds of that happening?

Replies from: wedrifid, JoshuaZ
comment by wedrifid · 2010-07-07T08:21:29.790Z · LW(p) · GW(p)

You probably mean security guards. Note that decent security is going to add something to the cost of cryonics.

Absolutely, and this conversation has prompted me to consider how best to handle such factors to ensure my head has the maximum chance of survival.

However, this gets to the scarier possibility-- government policies opposed to cryonics. Any ideas about the odds of that happening?

Now that is really scary. Also beyond my ability to create a reliable estimate. I wonder which country is the least likely to have such political problems? Like, the equivalent of the old-style Swiss banks, but for heads.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-07T09:57:21.560Z · LW(p) · GW(p)

It's hard to predict that far ahead, though Scandinavia is looking attractive-- the people there don't have a history of atrocious behavior, and there's cold climate available.

The nightmare scenario is a hostile world government, or similar effect of powerful governments-- think about the US exporting the war on drugs.

I hate saying this, but the only protective strategies I can see are aimed at general increase of power-- make money, develop political competence (this can be a community thing, it doesn't mean everyone has to get into politics) and learn how to be convincing to normal people.

Replies from: steven0461, wedrifid, Nick_Tarleton
comment by steven0461 · 2010-07-07T20:59:27.106Z · LW(p) · GW(p)

Scandinavia is looking attractive-- the people there don't have a history of atrocious behavior

While I don't expect future Vikings to raid cryonics facilities, I feel this statement should have been qualified somehow.

Replies from: gwern
comment by gwern · 2010-07-08T00:08:42.725Z · LW(p) · GW(p)

For what it's worth, the Vikings were very peaceable and property-respecting in Scandinavia - I'm sure we're all familiar with Saga-era Iceland's legal system, and the respect for property was substantial even in the culture (why was Burnt Njal's death so horrifying? because besides burning to death, it destroyed the farm). And even outside they weren't so bad; you can't raid a place too quickly if you raze it to the ground.

comment by wedrifid · 2010-07-07T11:54:13.766Z · LW(p) · GW(p)

The nightmare scenario is a hostile world government, or similar effect of powerful governments-- think about the US exporting the war on drugs.

And the even bigger risk of such political singletons would be that they probably aren't too keen on allowing development of the technological singleton needed to pull off the reanimation.

I hate saying this, but the only protective strategies I can see are aimed at general increase of power-- make money, develop political competence (this can be a community thing, it doesn't mean everyone has to get into politics) and learn how to be convincing to normal people.

Agree again. Unfortunately most of the ways I can imagine to attain the necessary power take more financial resources and skills than developing an FAI.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-07T13:15:59.847Z · LW(p) · GW(p)

Could you expand on what you mean by a political singularity?

And it's my impression that merely ordinary amounts of wealth can make a difference to politics if they're applied to changing minds.

Replies from: wedrifid
comment by wedrifid · 2010-07-07T13:46:05.123Z · LW(p) · GW(p)

Could you expand on what you mean by a political singularity?

In this context, exactly what you mean by 'hostile world government'. By 'singularity' I refer to anything that can be conceptualised as a single agent that has full control over its environment. For example, a world government would qualify assuming there were no independent colonies (or aliens) within realistic reach of our solar system.

Few entities with absolute power are likely to be inclined to relinquish that power to another entity. Don't tell big brother that you are going to make him irrelevant!

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-07T13:50:02.535Z · LW(p) · GW(p)

I find "political singularity" to be very unclear, and I'm curious about whether other LessWrongians came up with the intended meaning.

Replies from: wedrifid, JoshuaZ
comment by wedrifid · 2010-07-07T14:21:22.280Z · LW(p) · GW(p)

I was paraphrasing Bostrom from memory, and meant singleton. The relevant section is up to and including the first sentence of '2'.

comment by JoshuaZ · 2010-07-07T13:54:35.583Z · LW(p) · GW(p)

I came up with the intended meaning but it required context. I think that overarching world government or the like would probably be more clear. This seems like an example of possible overuse of a "singularity" paradigm, or at least fondness for the term.

Replies from: whpearson, wedrifid
comment by whpearson · 2010-07-07T14:01:05.865Z · LW(p) · GW(p)

I suspect the intended word was singleton, which has a less overloaded meaning.

Replies from: wedrifid
comment by wedrifid · 2010-07-07T14:11:52.770Z · LW(p) · GW(p)

That's the one. Edited.

comment by wedrifid · 2010-07-07T14:14:19.574Z · LW(p) · GW(p)

This seems like an example of possible overuse of a "singularity" paradigm, or at least fondness for the term.

Or a spelling error when referencing a somewhat credible authority. I didn't use 'overarching world government' because it would be clear but convey the wrong meaning.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-07T20:07:52.551Z · LW(p) · GW(p)

Ah ok. This makes a lot of sense. Political singleton makes a lot of sense.

comment by Nick_Tarleton · 2010-07-07T18:02:24.341Z · LW(p) · GW(p)

I hate saying this, but the only protective strategies I can see are aimed at general increase of power

Why do you hate saying this, out of curiosity?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-07T18:11:38.983Z · LW(p) · GW(p)

Because getting good at that sort of thing would mean getting past gigantic ugh fields at my end.

Replies from: whpearson
comment by whpearson · 2010-07-07T18:31:19.119Z · LW(p) · GW(p)

It might just be my own ugh field talking, but can you think of long-lived institutions that haven't had broad public support yet continued their mission effectively over time? Even something like the Catholic Church has had periods where it wasn't really following its mission statement.

Or do you think you can get broad-scale public support? I'd rate that plausible in less theistic countries.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-07T18:41:24.187Z · LW(p) · GW(p)

Cryonics doesn't need broad public support, it just needs to not be substantially attacked.

If we can get it filed under "weird harmless hobby which has enough of a lobby that it's not worth fucking with", I think that would probably be enough.

If violent rage against cryonics starts building, that's a hard problem. At the moment, I don't know what to do about that one, except for the usual political and propaganda efforts.

I don't know if it's possible to get many people to actually sign up for it unless the tech for revival looks at least imminent, so public support would have to be based in principle-- probably property rights and/or autonomy.

Long-lived institutions without broad public support? The only thing I can think of is Talmud study, and I don't know if that would count as an institution.

Replies from: whpearson
comment by whpearson · 2010-07-07T19:30:58.414Z · LW(p) · GW(p)

If we can get it filed under "weird harmless hobby which has enough of a lobby that it's not worth fucking with", I think that would probably be enough.

Okay, we gain money and power now. What happens in 70-100 years when we aren't around to wield it? Will our descendants care on our behalf? How do we create self-sustaining social systems?

I'm not much interested in cryo for myself (although I wouldn't mind getting frozen for Science). But these kinds of questions matter for things like existential-risk reduction that are time-dependent, like meteor deflection, or FAI theory when the science of AI is getting close to human level (if it is a long hard slog, and can't be done before we figure out how intelligence works).

I don't know if it's possible to get many people to actually sign up for it unless the tech for revival looks at least imminent, so public support would have to be based in principle-- probably property rights and/or autonomy.

If we could get it to be a status symbol to be signed up for cryonics people will flock to it. You want to make it visible as well. Perhaps having your dewar as a coffee table or something.

Long-lived institutions without broad public support? The only thing I can think of is Talmud study, and I don't know if that would count as an institution.

Freemasons? Although it is hard to tell how well they keep to their mission statement, they might be an example of a long-lived institution that does keep its mission.

Replies from: NancyLebovitz, Strange7
comment by NancyLebovitz · 2010-07-07T19:41:40.607Z · LW(p) · GW(p)

Okay, we gain money and power now. What happens in 70-100 years when we aren't around to wield it? Will our descendants care on our behalf? How do we create self-sustaining social systems?

Good question-- you obviously can't control the future of an institution; all you can do is improve the odds.

And this isn't something where I have actual knowledge, so anything I could say would be pulling it off the top of my head.

I don't think the "who'd care about the early adopters?" question is a real problem-- if you can get the thing going at all, it's going to have to have a lot of regard for promises and continuity.

comment by Strange7 · 2010-07-08T11:04:38.841Z · LW(p) · GW(p)

Freemasons?

They don't cut rocks anymore. Like, at all.

How would you feel if, a couple hundred years from now, there actually was a Cult of the Severed Head, with silly initiation rituals and charity fundraisers and a football team, but most of them just figured all this 'corpsicle' nonsense was really just symbolic, and spent most of their time arguing about which version of Robert's Rules of Order they should be using and how to lure people away from the Rotary Club?

Replies from: whpearson
comment by whpearson · 2010-07-08T15:53:01.781Z · LW(p) · GW(p)

Wikipedia says that the origin of Freemasonry is uncertain. Do you have better sources? Was the purpose of the Freemasons to help them cut rock? Or was it just a group of people who shared something banding together to help each other? I.e., "Freemasonry was never about cutting rock", to use a Hansonianism.

I'm not suggesting we copy freemasonry whole cloth. Simply that we need to look at what social organisations survive, at all.

Replies from: Blueberry
comment by Blueberry · 2010-07-08T17:56:00.573Z · LW(p) · GW(p)

Freemasonry was literally never about stone work. The stone work and ideas of architecture are used as an analogy for a system of morality, as I understand it.

Replies from: mattnewport
comment by mattnewport · 2010-07-08T18:01:52.570Z · LW(p) · GW(p)

Wikipedia suggests that the theory that freemasonry evolved from stonemason's guilds is considered at least plausible.

comment by JoshuaZ · 2010-07-10T14:59:24.197Z · LW(p) · GW(p)

However, this gets to the scarier possibility-- government policies opposed to cryonics. Any ideas about the odds of that happening?

This has happened at least once in British Columbia. See this article. As far as I am aware this is at present the only location which specifically singles out cryonics although there are other areas where the regulations for body disposal inadvertently prevent the use of cryonics.

Replies from: lsparrish
comment by lsparrish · 2010-07-10T16:04:44.322Z · LW(p) · GW(p)

This kind of stuff makes me boil with anger. Some bureaucrat busybody inserts garbage about irradiation into a law at the last second, and there's nothing we can do to get it out? Is there some kind of international law against defamation? Because that is exactly what this is. And the stuff they prattle on about, that it takes advantage of patients in a vulnerable state, is total nonsense. What they're doing -- pressuring patients into not cryopreserving -- is taking advantage, and in a particularly grotesque and unconscionable manner.

Ironically, if I were to send them a letter or call them about this stupid law they'd take it as me being a foreign busybody. This is stupid. They're the ones harming BC's global reputation by keeping such idiotic laws on the books.

/rant

Replies from: Vladimir_M, Roko
comment by Vladimir_M · 2010-07-10T17:18:26.086Z · LW(p) · GW(p)

lsparrish:

Some bureaucrat busybody inserts garbage about irradiation into a law at the last second, and there's nothing we can do to get it out? Is there some kind of international law against defamation?

On the contrary -- as a general rule, in English-speaking countries, legislators enjoy immunity from any legal consequences of anything they say or write in the course of their work. This is known as "parliamentary privilege," and goes far beyond the free speech rights of ordinary citizens. In particular, they are free to commit libel without repercussions, as long as they speak in official capacity.

In the U.S., this is even written explicitly into the constitution ("for any speech or debate in either House, [the Senators and Representatives] shall not be questioned in any other place").

comment by Roko · 2010-07-10T18:02:46.488Z · LW(p) · GW(p)

Life is not fair. Don't expect other people to not randomly screw up our prospects, up to and including causing our deaths.

The solution is for us rationalists/transhumanists/future-oriented folk to become richer, better organized and more numerous so that there are more resources available to prevent more things like this from happening.

comment by Roko · 2010-07-07T11:00:28.429Z · LW(p) · GW(p)

Yes, agreed that there is a class of vandal of that form. However, it's not hard to put a cryo-grave high in the mountains of northern Canada, buried under a protective shell of 10 meters of earth and 3 meters of reinforced concrete. The casual thug wouldn't know where it is, wouldn't be able to get there if he did know, and wouldn't be able to get at it if he were there.

The more worrying class are people who are strongly motivated to liquidate cryopatients.

Replies from: Nisan
comment by Nisan · 2010-07-07T20:02:54.525Z · LW(p) · GW(p)

a cryo-grave high in the mountains of northern Canada, buried under a protective shell of 10 meters of earth and 3 meters of reinforced concrete.

Putting a big gravestone on top would be a good idea. If the cryo organization fails, it's too easy for all paper and digital records of the grave to be lost. In that scenario, a gravestone will make it more likely that the grave will be rediscovered before the liquid nitrogen boils away.

It'll still be safe from random vandals if it's underground in a remote location, and in the case of anti-cryonics fanatic vandals, there's nothing you can do to keep them from finding out where all the graves are.

Replies from: Roko, JoshuaZ
comment by Roko · 2010-07-07T20:56:10.296Z · LW(p) · GW(p)

If the cryo organization fails, it's too easy for all paper and digital records of the grave to be lost.

Well... maybe. That's an interesting problem. One solution is to just expect that as technology increases, ability to find stuff increases.

Another is to leave some kind of hint or signal that only a highly advanced civilization would find, like writing the message down in atomically small writing on many tiny pieces of metal and scattering them around the planet, or encrypting the location with very strong encryption.

Another is for the grave to be "smart" and to broadcast that it is getting to the end of its lifetime.

Even better, just people reading this discussion in the future would think "hey, let's check Canada for cryonauts"

Replies from: DSimon
comment by DSimon · 2010-07-08T20:51:57.438Z · LW(p) · GW(p)

Hello, people of the future! Please unfreeze us, and give us warm soup! We'll be very grateful! Thanks much.

Seriously, though, I wonder about the ability of future archaeologists to dig through historical Internet information. At the moment, the only attempt to create a thorough historical archive of the Internet is the Wayback Machine, and since (I estimate) the Internet is growing faster than reliable long-term storage is getting cheap, they'll either have to get lots more funding or start being more selective about what they archive.

In terms of the ability to maintain information of interest to future archaeologists through a straight-up global disaster, the Internet isn't any better than paper. Maybe we need to start looking into cuneiform printers...

Replies from: Roko
comment by Roko · 2010-07-08T21:06:50.237Z · LW(p) · GW(p)

I think that getting the grave found at the other end is a less serious problem than building it to last. If they have nanotech, they can explore the entire surface of the earth in great detail, including doing an ultrasound scan of the entire crust. Also the thing would have a magnetic signature, being metallic. And if you were really concerned, you could build in a powerful permanent magnet, which would make it even more detectable. You could even use temperature differentials to power a weak radio transmitter, but honestly that's probably making it too easy to find. Better to have a whole host of slight anomalies.

comment by JoshuaZ · 2010-07-07T21:01:33.699Z · LW(p) · GW(p)

If the cryo organization fails, it's too easy for all paper and digital records of the grave to be lost.

You could handle this by having each separate cryonics organization exchange data about the locations of grave sites. The probability that they will all fail is much lower than any single one failing. Moreover, the most likely situations resulting in such large-scale failure will be situations where the human economy is so damaged that replacing the liquid nitrogen will not be feasible.

comment by jimmy · 2010-07-07T06:17:52.443Z · LW(p) · GW(p)

What do you think about a honeycomb-like structure that has an individual cell for each person, but is bundled together enough to get a lot of the insulation benefits of being big?

comment by lsparrish · 2010-07-06T19:55:55.501Z · LW(p) · GW(p)

When you consider that the pool of potential patients (over a given century) is in the billions, a few million per location does not necessarily constitute putting all your eggs in one basket. And the process of making it mainstream enough for this to happen could have a huge positive impact on the sanity waterline.

Replies from: Roko
comment by Roko · 2010-07-06T20:16:16.213Z · LW(p) · GW(p)

I'm thinking about actually proposing this to cryo-companies, so we have to deal with the real world, where there are tens of patients per decade, not billions.

Replies from: lsparrish
comment by lsparrish · 2010-07-08T18:57:59.603Z · LW(p) · GW(p)

With only a few dozen patients, I don't think you will see appreciable economies of scale. The whole idea seems to me reliant on at least a few thousand patients becoming available within a short period of time (or prepaying).

Replies from: Roko
comment by Roko · 2010-07-08T19:26:05.998Z · LW(p) · GW(p)

My calculations indicate that you could have a system that lasted for 200 years for just $500,000, though with a scale of perhaps 100 units being built this would go down by a factor of 2-3 and the reliability would go up.

Replies from: lsparrish
comment by lsparrish · 2010-07-10T14:46:32.671Z · LW(p) · GW(p)

At r = 2.9 meters, the size is in the 10,000-neuro-patient range (V ≈ 102 m³, at about 125 neuro patients per cubic meter). You might only fill it part way, though, if you are aiming for maximum duration, as the less cryogen is displaced the longer the system stays cold. Even so, this could probably hold every cryonicist currently in existence.
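
A quick check of that estimate (a sketch; the 125-per-cubic-meter packing density is taken as given):

```python
# Quick check of the capacity estimate. The 125 neuro patients per cubic
# meter packing density is the figure from the comment above, as given.
r = 2.9                      # m, tank radius from the original post
volume = 4.2 * r**3          # m^3, ~(4/3)*pi*r^3 as in the post
print(f"V ~= {volume:.0f} m^3, capacity ~= {volume * 125:,.0f} neuro patients")
```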

Replies from: Roko
comment by Roko · 2010-07-11T13:45:50.031Z · LW(p) · GW(p)

Though it's reliability, not just cost, that matters. If there were fewer patients per grave (e.g. 10 per grave), then the reliability goes up (see my previous comments to this effect).

Replies from: lsparrish, wedrifid
comment by lsparrish · 2010-07-11T14:12:30.006Z · LW(p) · GW(p)

Still, filling it to 50% of its volume would only bring down the hold time by 50%. And you can only fill a certain percentage with patients, as they are irregularly shaped. I suppose the real question is whether cost or hands-off reliability is the biggest concern.

Replies from: Roko
comment by Roko · 2010-07-11T14:33:45.177Z · LW(p) · GW(p)

I think that with 10 patients, such a system would cost $100k each, which is pretty good. With many such systems scattered around the remote, cold parts of the world, the probability of any fraction of systems being vandalized goes down, and the information gained about how such systems fail comes in quickly as a few of them fail (e.g. vacuum leaks).

comment by wedrifid · 2010-07-11T15:02:40.157Z · LW(p) · GW(p)

e.g. 10 per grave

Not graves!

comment by GreenRoot · 2010-07-06T19:34:42.867Z · LW(p) · GW(p)

... and a ΔT of 220 °C ...

With liquid nitrogen at -196 °C and the average temperature in the places you suggest well below freezing (a few minutes of googling suggests it wouldn't be hard to find an average annual temperature of -20 °C), I think you could use a more optimistic ΔT of 175 °C.

Replies from: Roko, Roko
comment by Roko · 2010-07-07T12:42:23.953Z · LW(p) · GW(p)

There is a good idea along these lines, though. Have an outer shell cooled by dry ice, which takes about 6 times more heat per unit volume than nitrogen to heat from solid to gas at ambient temperature. The dry ice sublimes at -78C.

If you do this, the ΔT that the dry ice sees matters, so building the facility somewhere with a very cold winter temperature makes sense.

comment by Roko · 2010-07-06T19:43:14.514Z · LW(p) · GW(p)

Sure, you could do that. You only gain a factor of 1.25 for that, though.

comment by Soki · 2010-07-08T16:18:39.738Z · LW(p) · GW(p)

If you care about cryonics and its sustainability during an economic collapse or worse, chemical fixation might be a good alternative. http://en.wikipedia.org/wiki/Chemical_brain_preservation

The main advantage is that it requires no cooling and is cheap. People might be normally buried after the procedure, so it would seem less weird.
However, a good perfusion of the brain with the fixative is hard to achieve.

Chemical fixation could also be combined with those low maintenance cryonic graves just in case the nitrogen boils off.

Replies from: Roko
comment by Roko · 2010-07-08T18:05:36.158Z · LW(p) · GW(p)

Agreed re: this.

What I'd love to know is how chemical and thermodynamic means of preservation interact; for example, if you can get someone to -40 °C in the permafrost, will chemical preservation suffice? What about -70 °C? How much difference does temperature make? (The Arrhenius equation suggests that a 10 °C decrease roughly halves reaction rates, so -70 °C is 2^10 or ~1,000 times slower than 30 °C, and -140 °C is 2^17 or ~131,000 times slower.)
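
The rule of thumb as a sketch (crude, since real activation energies vary and Arrhenius kinetics break down below the glass transition, so treat these as rough ratios only):

```python
# The "reaction rates halve per 10 C of cooling" rule of thumb, as code.
# Crude: real activation energies vary, and Arrhenius kinetics break down
# below the glass transition, so these are rough ratios only.
def slowdown(temp_c, ref_c=30.0):
    """Factor by which chemistry slows relative to ref_c."""
    return 2 ** ((ref_c - temp_c) / 10.0)

for t in (-40, -70, -135, -170, -196):
    print(f"{t:>5} C: ~{slowdown(t):,.0f}x slower than 30 C")
```

On this rule, -170 °C is about 2^20 (roughly 10^6) times slower than 30 °C, which is the conversion the follow-up comment below uses.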

Replies from: Roko
comment by Roko · 2010-07-08T20:53:40.847Z · LW(p) · GW(p)

Interesting that 2^20 hours is 120 years, so:

an hour at room temp ==
a decade at -135 °C ==
a century at -170 °C ==
a millennium at LN2

comment by Mitchell_Porter · 2010-07-07T11:04:21.060Z · LW(p) · GW(p)

Australia claims 42% of Antarctica. That should be plenty of room.

Replies from: wedrifid
comment by wedrifid · 2010-07-07T13:23:50.476Z · LW(p) · GW(p)

Antarctica seems suitable, but why do you suggest that part owned by Australia specifically?

Replies from: Mitchell_Porter, Roko
comment by Mitchell_Porter · 2010-07-08T04:46:50.601Z · LW(p) · GW(p)

A few months ago, I was thinking about the possibility of cryonic suspension as part of the Australian health-care system. With perhaps 100,000 new people to suspend each year, the AAT seems an obvious place to put them. And once the infrastructure was in place, people from other countries could get involved; it would just be a matter of fashioning the necessary financial and other arrangements.

So perhaps your studies should focus on Antarctic geopolitics, the better to protect our future cryo-bases. Unfortunately, I think the pattern theory of identity (according to which your copy is still you) is an illusion, and that this is all cryonics is likely to provide - a way to make copies of the frozen originals. So I find myself wanting to be supportive of the impulse behind cryonics, but unable to earnestly advocate the creation of national cryosuspension facilities. At best I can just try not to impede such an effort should it arise.

Replies from: lsparrish, Kingreaper, wedrifid, Roko
comment by lsparrish · 2010-07-09T19:17:18.056Z · LW(p) · GW(p)

Does it help you at all to think of cryonics as a form of advanced reproduction?

comment by Kingreaper · 2010-07-09T12:45:26.233Z · LW(p) · GW(p)

Unfortunately, I think the pattern theory of identity (according to which your copy is still you) is an illusion, and that this is all cryonics is likely to provide - a way to make copies of the frozen originals.

Would you also disagree with the pattern theory of identity as applied to, say, a game of chess?

Imagine I am playing chess on a chessboard with a friend, and then we have to go home, and I copy down the positions of all the pieces and put the board away. The next day, we get out another board, put the pieces into their positions, and start playing from there.

Are we playing the same game of chess?

Replies from: Blueberry
comment by Blueberry · 2010-07-09T18:35:21.335Z · LW(p) · GW(p)

No, it then becomes a Zombie Chess game.

comment by wedrifid · 2010-07-08T06:07:43.322Z · LW(p) · GW(p)

A few months ago, I was thinking about the possibility of cryonic suspension as part of the Australian health-care system.

That's a thought. I must confess I hadn't considered my country to be particularly likely to be a world leader in cryonics adoption.

Unfortunately, I think the pattern theory of identity (according to which your copy is still you) is an illusion, and that this is all cryonics is likely to provide - a way to make copies of the frozen originals.

That must be a frustrating belief. Right or wrong, I must say my anticipated experience is a whole lot better. But then... I philosophically evaluate preferences over the entire state of the universe by default, and yes, 'identity' and affiliation with this form are not things that particularly come up.

comment by Roko · 2010-07-08T10:57:52.677Z · LW(p) · GW(p)

Unfortunately, I think the pattern theory of identity (according to which your copy is still you) is an illusion, and that this is all cryonics is likely to provide - a way to make copies of the frozen original

I think that cryonics patients could actually be repaired rather than sliced and scanned. It would be more difficult than scanning, but with advanced nanotechnology and the convenient access that the blood vessels provide, it seems quite feasible. Repairing the rest of the body would be easier still.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-07-09T02:27:34.867Z · LW(p) · GW(p)

I think that cryonics patients could actually be repaired

So do I. But the result will be a copy. During sleep and hypothermia, the brain remains in the same physical phase. Cellular metabolism never shuts down, for example. But I would be rather surprised if the "neurophysical correlate of selfhood" survives the freezing transition.

ETA: See followup comment.

Replies from: ciphergoth, Roko
comment by Paul Crowley (ciphergoth) · 2010-07-09T12:10:10.817Z · LW(p) · GW(p)

When you say you would be surprised, is there any actual observation that could surprise you here?

Replies from: None, Roko
comment by [deleted] · 2010-07-09T13:48:42.694Z · LW(p) · GW(p)

It's not as though Mitchell's belief is uniquely untestable. It's more like we can't collect any evidence at all about whether identity is preserved, just by reanimating a bunch of people and asking them.

We'd need some sort of neurological description of what "selfhood" means, and then presumably testing to see whether this property is preserved after reanimation would be the actual surprising observation.

Until then, it's irrational to dismiss either theory based purely on the argument that "even if we cryopreserve you, it wouldn't falsify your theory", since this applies to both sides.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-07-09T13:57:31.660Z · LW(p) · GW(p)

No, the position that is unfalsifiable is that there is a distinction here at all.

Replies from: randallsquared
comment by randallsquared · 2010-07-11T19:45:35.470Z · LW(p) · GW(p)

I don't think so. I'm a processist (though I do think it's unlikely that quantum effects matter), but I can imagine kinds of discoveries that would falsify my current belief on that matter. It could turn out, once we localize and understand consciousness:

...that it's not even "on" or merely suspended all the time, but sometimes is "off" in the normal course of brain operation.

...that it's possible to erase clear memories even with the brain in the same physical state (this would support either Porter's view or some more spiritual dualism).

...that there is more than a single thread of consciousness, and no particular continuity of identity for the person as a whole, even though some thread is operating all the time.

Of those, one and three even seem plausible, but I can't think of a way to do the experiments at our current level of understanding and technology. In any case, once we actually have a working and well-tested theory of consciousness, identity will either vanish or be similarly well-understood.

comment by Roko · 2010-07-09T13:04:42.625Z · LW(p) · GW(p)

Actually, there is. If we cryopreserved Mitchell and then reanimated him, he would be very surprised: it would falsify his theory.

If we did it to anyone else, however, that wouldn't be enough. It would have to be him.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-07-09T13:31:07.432Z · LW(p) · GW(p)

I suspect you wrong him here - I'm guessing post-freeze Mitchell would say "Obviously I feel like I'm the same person, but now I know I've been cryopreserved I must conclude I'm a copy, not the real thing. I feel good about being alive, but it's copy-Mitchell who feels good, not the guy who got frozen."

Replies from: Roko
comment by Roko · 2010-07-09T13:32:37.188Z · LW(p) · GW(p)

Well, in that case he really has joined the fairy cult.

comment by Roko · 2010-07-09T11:26:00.051Z · LW(p) · GW(p)

The neurophysical correlate of selfhood can survive a temperature drop to 0C but it can't survive a phase change?

So selfhood is kind of like latent heat of fusion?

This is grade A+ magical thinking.

Replies from: soreff
comment by soreff · 2010-07-09T17:15:31.426Z · LW(p) · GW(p)

So that is why there is such interest in vitrification! grin/duck/run...

Replies from: Roko
comment by Roko · 2010-07-09T19:15:12.971Z · LW(p) · GW(p)

Yes, that's right. If you get vitrified, does that count as a different phase?!

comment by Roko · 2010-07-07T19:46:19.159Z · LW(p) · GW(p)

It makes a difference legally. Actually, I suspect that Antarctica may be a bridge too far legally, even though thermodynamically it's nice to have access to -89C (the blackbody radiation from a -89C environment is roughly 15% of that at room temperature, because radiated power scales as T^4; this matters because radiative heating may turn out to be the hardest mode of thermal transport to block).
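
A sketch of that T^4 ratio, taking room temperature as 20C (an assumption; a warmer reference room pushes the ratio a bit lower):

```python
# Ratio of blackbody emission at -89 C versus a 20 C room, via T^4 scaling.
def kelvin(c):
    return c + 273.15

print((kelvin(-89) / kelvin(20)) ** 4)   # ~0.156, i.e. roughly 15%
```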

comment by GreenRoot · 2010-07-06T19:26:43.301Z · LW(p) · GW(p)

Why limit yourself to no maintenance at all in your feasibility speculations? Tending graves is common across cultures. As long as you're spinning a tank of liquid nitrogen as a "grave", why not spin a nitrogen topoff as equivalent to keeping the grass trimmed or bringing fresh flowers?

Replies from: Roko
comment by Roko · 2010-07-06T19:32:20.147Z · LW(p) · GW(p)

Because if the shit hits the fan and cryo companies go bust, who can you rely on to pay $5000 for a tanker to come every few years? I don't even think I'd rely on my kids to do that, every time without fail, even if there's a major depression and their own kids are going hungry.

And if the shit really hits the fan (civilizational collapse) then there will be no liquid nitrogen.

Replies from: GreenRoot, Strange7
comment by GreenRoot · 2010-07-06T19:47:21.345Z · LW(p) · GW(p)

I see what you mean. It's a matter of what threat you have in mind. I'm thinking mainly of the hostility of a pretty-much intact society to cryonics, and how to take your idea of protecting preserved people by using the notion of "respect for the dead" further, also incorporating the idea of honoring the dead by maintaining shrines/graves, etc.

You're totally right that if there's a global depression or civilizational collapse, then the threat of thawing comes more from inability to maintain rather than unwillingness or opposition.

Maybe it would help to split the post, or maybe organize this discussion, to investigate these ideas separately? It seems that engineering speculation about zero-maintenance cryonics is interesting and useful, and that using the "grave" analogy to make cryonics more acceptable and safe from interference is also interesting, but different issues and constraints arise for each of them.

comment by Strange7 · 2010-07-08T11:15:08.316Z · LW(p) · GW(p)

Could someone design a stainless-steel prayer wheel that doubles as a hand-cranked device for condensing nitrogen from the atmosphere?

"We maintain this mechanism to honor our ancestors, that one day they may be reborn" sounds like the kind of thing some Shinto priestesses could've kept straight for all of recorded history, let alone a few centuries.

Replies from: Roko
comment by Roko · 2010-07-08T13:15:05.045Z · LW(p) · GW(p)

Moving parts: it would break.

If you could persuade people to keep a fire lit in a certain location most of the time, you could use the heat energy to power a TAD-OPTR cryocooler with no moving parts. It's an interesting idea.

You could design it so that the fire only has to be stoked 1% of the time on average, for example.

comment by GreenRoot · 2010-07-06T20:15:05.041Z · LW(p) · GW(p)

A couple thoughts on places to look for ideas, places where people have probably been thinking about similar challenges:

  • Interstellar travel. There's a lot of speculation about feasibility here, and I think people generally assume the need for some sort of long-term, low-power cryogenic preservation. They do assume access to interstellar vacuum, though.
  • DNA "arks" and similar biodiversity libraries. I haven't heard of anything in this space looking at zero- or low-maintenance preservation, but maybe there's a paranoid fringe?
Replies from: Roko
comment by Roko · 2010-07-06T20:20:01.061Z · LW(p) · GW(p)

They do assume access to interstellar vacuum, though.

And presumably also interstellar temperatures of 3 degrees above absolute zero!

Replies from: apophenia
comment by apophenia · 2010-07-06T23:30:24.072Z · LW(p) · GW(p)

I think GreenRoot refers to the situation where this isn't available, or they wouldn't have to worry about cryogenic preservation.

Replies from: Roko
comment by Roko · 2010-07-06T23:41:33.352Z · LW(p) · GW(p)

That doesn't make sense to me. Space is cold. You can't be doing interstellar travel and not have access to cold.

Replies from: apophenia
comment by apophenia · 2010-07-07T01:29:49.935Z · LW(p) · GW(p)

I understand. I was thinking

How would you prevent solar radiation from heating it up?

...but I'm misjudging relative distances? That is, a spaceship wouldn't spend sufficient time near stars?

Replies from: Roko
comment by Roko · 2010-07-07T10:42:49.220Z · LW(p) · GW(p)

If you're in interstellar space, i.e. travelling to another star, the inverse square law and the large distances very quickly kill radiation heat from either the destination or origin star.

However, if (as Vladimir suggested) you want to stay close to the sun, i.e. in earth orbit, you have to use a reflective shield.
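
To make this concrete, a sketch of blackbody equilibrium temperature versus distance from the Sun, assuming a simple spherical absorber with no internal heat source (so temperature falls as 1/sqrt(distance), anchored to roughly 279 K at 1 AU):

```python
import math

# Equilibrium temperature of a blackbody sphere at d AU from the Sun.
def t_eq(d_au):
    return 279.0 / math.sqrt(d_au)

for d in (1, 30, 1000):
    print(d, round(t_eq(d), 1))
# 1 AU:    ~279 K (Earth's neighborhood)
# 30 AU:   ~51 K  (Neptune's distance, already colder than dry ice)
# 1000 AU: ~9 K   (approaching the ~3 K interstellar background)
```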

comment by Mitchell_Porter · 2010-07-10T03:59:11.887Z · LW(p) · GW(p)

My comments in this sub-thread brought out more challenges and queries than I expected. I thought that by now everyone would expect me to periodically say a few things out of line regarding identity, consciousness, and so on, and that only the people I was addressing might respond. I want to reply in a way which provides some context for the answers I'm going to give, but which covers old territory as little as possible. So I would first direct interested parties to my articles here, for the big picture according to me. Those articles are flawed in various ways, but much of what I have to say is there.

Just to review some basics: The problems of consciousness and personal identity are even more severe than is generally acknowledged here. Understanding consciousness, for example, is not just a matter of identifying which part of the brain is the conscious part. From the perspective of physics, any such identification looks like property dualism. Here I want to mention a view due to JanetK, which I never answered, according to which the missing ingredient is "biology": the reason that consciousness looks like a problem from a physical perspective is because one has failed to take into account various biological facts. My reply is that certainly consciousness will not be understood without those facts, but nonetheless, they do nothing to resolve the sort of problems described in my article on consciousness, because they can still be ontologically reduced to elaborate combinations of the same physical basics. Some far more radical ontological novelty will be required if we are going to assemble stuff like "color", "meaning", or "the flow of time" out of what physics gives us.

What we have, in our theories of consciousness, is property dualism that wants to be a monism. We say, here is the physical thing - a brain, or maybe a computer or an upload if we are being futuristic - and that is where the mind-stuff resides, or it is the mind-stuff. But for now, the two sides of the alleged identity are qualitatively distinct. That is why it is really a dualism, but a "property dualism" rather than a "substance dualism". The mind is (some part of) the brain, but the mindlike properties of the mind simply cannot be identified with the physical properties of the brain.

The instinct of people trained in modern science is to collapse the dualism onto the physical side of the equation, because they have been educated to think of reality in those terms. But color, meaning and time are real, so if they are not really present on the physical side of the identity, then a truly monistic solution has to go the other way. The problem now is that it sounds as if we are rejecting the reality of matter. This is why I talked about monads: it is a concept of what is physically elementary which can nonetheless be expanded into something which is actually mindlike in its "interior". It requires a considerable rethink of how the basic degrees of freedom in physics are grouped into things; and it also requires that what we would now call quantum effects are somewhere functionally relevant to conscious cognition, or else this ontological regrouping would make no difference at the level where the problem of consciousness resides. So yes, there are several big inferential leaps there, and a prediction (that there is a quantum neurobiology) for which there is as yet no support. All I can say is that I didn't make those leaps lightly, and that all simpler alternatives appear to be fatally compromised in some way.

One consequence of all this is that I can be a realist about the existence of a conscious self in ways which must sound very retrograde to everyone here who has embraced the brave new ideas of copying, patternist theories of identity, the unreality of time on the physical level, and so on. To my way of thinking, I am a "monad", some subsystem of the brain with many degrees of freedom, which is a genuine ontological unity, and whose state can be directly identified with (and not just associated with) my state in the world as I perceive it subjectively. This is an entity which persists in time, and which interacts with its environment (presumably, simpler monads making up the neighboring subsystems of the brain). If one grants for a moment the possibility of thinking about reality in these terms, clearly it makes these riddles about personal identity a lot simpler. There is a very clear sense in which I am not my copies. At best, they are other monads who start out in the same state. There is no conscious sorites paradox. Whenever you have consciousness, it is because you have a monad big enough to be conscious - it's that simple.

So having set the stage - and apologies to anyone tired of my screeds on these subjects - now we can turn to cryonics. I said to Roko

I would be rather surprised if the "neurophysical correlate of selfhood" survives the freezing transition.

to which he responded

The neurophysical correlate of selfhood can survive a temperature drop to 0 but it can't survive a phase change?

I posit that, in terms of current physics, the locus of consciousness is some mesoscopic quantum-coherent subsystem of the brain, whose coherence persists even during unconsciousness (which is just a change of its state) but which would not last through the cryonic freezing of the brain. If this persistent biological quantum coherence exists, it will exist because of, and not in spite of, metabolic activity. When that ceases, something must happen to the "monad" (which is just another name for something like "big irreducible tensor factor in the brain's wavefunction") - it comes apart into simpler monads, it sheds degrees of freedom until it becomes just another couple of correlated electrons, I don't have a fixed idea about it. But this is what death is, in the monadic "theory". If the frozen brain is restored to life, and a new conscious condensate (or whatever) forms, that will be a new "big tensor factor", a new "monad", and a new self. That is the idea.

You could accept my proposed metaphysics for the sake of argument and still say, but can't you identify with the successor monad? It will have your memories, and so forth. In other words, this ontology of monadic minds should still allow for something like a copy. I don't really have a fixed opinion about this, largely because how the conscious monad accesses and experiences its memories and identity remains completely untheorized by me. The existence of a monad as a persistent "substance" suggests the possibility that memories in a monad might be somehow internal to it, rather than externally supplied data which pops into its field of consciousness when appropriate. This in turn suggests that a lot of what is written, in futurist speculation about digital minds, transferrable memories, and so forth, would not apply. You might be able to transfer unconscious dispositions but not a certain type of authentic conscious memory; it might be that the only way in which the latter could be induced in a monad would be for it, that particular monad, to "personally" undergo the experience in question. Or, it might really be the case that all forms of memory, knowledge, perception and so forth are externally based and externally induced, so that my recollection of what happened this morning is not ontologically any different from the same "recollection" occurring in a newly created copy which never actually had the experience.

Again, I apologize somewhat for going on at such length with these speculations. But I do think that the philosophies of both mind and matter which are the consensus here - I'm thinking of a sort of blithe computationalism with respect to consciousness, and the splitting multiverse of MWI as a theory of physics - are very likely to be partly or even completely false, and this has to have implications for topics like cryonics, AI, various exotic ethical doctrines based on a future-centric utilitarianism, and so on.

Replies from: Roko, Roko
comment by Roko · 2010-07-11T12:17:48.648Z · LW(p) · GW(p)

the locus of consciousness is some mesoscopic quantum-coherent subsystem of the brain

Why do people keep trying to posit quantum as the answer to this problem when it has been so soundly refuted?

Based on a calculation of neural decoherence rates, we argue that the degrees of freedom of the human brain that relate to cognitive processes should be thought of as a classical rather than quantum system, i.e., that there is nothing fundamentally wrong with the current classical approach to neural network simulations. We find that the decoherence time scales (~10^-13 to 10^-20 s) are typically much shorter than the relevant dynamical time scales (~10^-3 to 10^-1 s), both for regular neuron firing and for kinklike polarization excitations in microtubules. This conclusion disagrees with suggestions by Penrose and others that the brain acts as a quantum computer, and that quantum coherence is related to consciousness in a fundamental way.

Replies from: RobinZ
comment by RobinZ · 2010-07-11T12:46:06.072Z · LW(p) · GW(p)

Why do people keep trying to posit quantum as the answer to this problem when it has been so soundly refuted?

My current leading hypotheses:

  • "Quantum mechanics" feels like a mysterious-enough big rock to crack the equally mysterious phenomenon of "consciousness".
  • Free will feels like it requires indeterminism, and quantum mechanics is often described as indeterministic.
Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-07-12T05:48:47.048Z · LW(p) · GW(p)

There is a long history of diverse speculation by scientists about quantum mechanics and the mind. There was an early phase when biology hardly figured and it was often a type of dualism inspired by Copenhagen-interpretation emphasis on "observers". But these days the emphasis is very much on applying quantum mechanics to specific neuromolecular structures. There are papers about superpositions of molecular conformation, transient quantum coherence in ionic complexes, phonons in filamentary structures, and so on. To me, this work still doesn't look good enough, but it's a necessary transitional step, in which ambitious simple models of elementary quantum biophysics are being proposed. The field certainly needs a regular dose of quantitative skepticism such as Tegmark provided. But entanglement in condensed-matter systems is a very subtle thing. There are many situations in which long-range quantum order forms despite local disorder. Like it or not, you can't debunk the idea of a quantum brain in a few pages because we assuredly have not thought of all the ways in which it might work.

As for the philosophical rationale of the thing, that varies a lot. But since we know that most neural computation is not conscious, I find it remarkably natural to suppose that it's entanglement that makes the difference. Any realistic hypothesis is not going to be fuzzy and just say "the quantum is the answer". It will be more like, special long-lived clathrins found in the porosome complex of astrocytes associated with glutamate-receptor hotspots in neocortical layer V share quantum excitons in a topologically protected way, forming a giant multifractal cluster state which nonlocally regulates glutamatergic excitation in the cortex - etc. And we're just not at that level yet.

Replies from: RobinZ, Douglas_Knight
comment by RobinZ · 2010-07-12T05:56:41.148Z · LW(p) · GW(p)

What evidence is there that would promote any given quantum-mechanical theory of consciousness to attention?

I mean that sincerely - there ought to be some reason that impelled you to come up with your monad theory, and I quite frankly don't know of any that would impel me to do so.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-07-12T07:41:42.381Z · LW(p) · GW(p)

How I got here:

Starting point: consciousness is real. This sequence of conscious experiences is part of reality.

Next: The physical world doesn't look like that. (That consciousness is a problem for atomism has been known for more than 2000 years.)

So let us suppose that this is how it feels to be some physical thing "from the inside". Here we face a new problem if we suppose that orthodox computational neuroscience is the whole story. There must then be a mapping from various physical states (e.g. arrangements of elementary particles in space, forming a brain) to the corresponding conscious states. But mappings from physics to causal-functional roles are fuzzy in two ways. We don't have, and don't need, an exact criterion as to whether any particular elementary particle is part of the "thing" whose state we are characterizing functionally. Similarly, we don't have, and don't need, a dividing line in the space of all possible physical configurations providing an exact demarcation between one computational state and another.

All this is just a way of saying that functional and computational properties are not entirely objective from a physical standpoint. There are always borderline cases but we don't really care about not having an exact border, because most of the time the components of a functioning computational device are in physical states which are obviously well in correspondence with the abstract computational states they represent. A device whose components are constantly testing the boundaries of the mapping is a device in danger of deviating from its function.

However, when it comes to consciousness, a fuzzy-but-good-enough mapping like this is not good enough, because consciousness (according to our starting point) is an entirely real and "objective" element of reality. It is what it is "exactly", and therefore its counterpart in physical ontology must also have an exact characterization, both with respect to physical parts and with respect to physical states. A coarse-grained many-to-one mapping which is irresolvably fuzzy at the edges is not an option.

But this is a very hard thing to achieve if we persist in thinking of the physical world as a sort of hurricane of trillions of particles in space, with all that matters cognitively being certain mass movements of particles and things made out of them. Fortunately, as it turns out, quantum mechanics suggests the possibility of a rather different physical ontology, and neuroscience still has plenty of room for quantum effects to be cognitively relevant. Thus one is led to consider quantum ontologies in which there is something which can be the exact physical counterpart of consciousness, and theories of mind in which quantum effects are part of the brain's machinery.

Replies from: RobinZ, red75
comment by RobinZ · 2010-07-12T12:52:52.165Z · LW(p) · GW(p)

I think you grant excessive reliability to your impressions of consciousness. A philosophical argument along the lines proposed is an awfully weak thread to hang a theory on.

comment by red75 · 2010-07-12T07:59:25.008Z · LW(p) · GW(p)

Doesn't that mean that consciousness is an epiphenomenon? All quantum algorithms can be expressed as equivalent classical algorithms, so we could have an unconscious computer which is functionally equivalent to a human brain.

ETA: I can't see any reason to associate consciousness with some particular kind of physical object/process, as that undermines the functional significance of consciousness as the brain's high-level coordination, decision-making and self-representation system.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-07-12T08:22:19.382Z · LW(p) · GW(p)

No, it would just mean that you can have unconscious simulations of consciousness. Think of it like this. We say that the things in the universe which have causal power are "quantum tensor factors", and consciousness always inhabits a single big tensor factor, but we can simulate it with lots of little ones interacting appropriately. More precisely, consciousness is some sort of structure which is actually present in the big tensor factor, but which is not actually present in any of the small ones. However, its dynamics and interactions can be simulated by the small ones collectively. Also, if you took a small tensor factor and made it individually "big" somehow (evolved it into a big state), it might individually be able to acquire consciousness. But the hypothesis is that consciousness as such is only ever found in one tensor factor, not in sets of them. It's a slightly abstract conception when so many details are lacking, but it should be possible to understand the idea: the world is made of Xs, an individual X can have property Y, a set of Xs cannot, but a set of Xs can imitate the property.

What would really make consciousness epiphenomenal is if we persisted with property dualism, so we have the Xs, their "physical properties", and then their correlated "subjective properties". But the whole point of this exercise is to be able to say that the subjective properties (which we know to exist in ourselves) are the "physical properties" of a "big" X. That way, they can enter directly into cause and effect.

Replies from: JoshuaZ, red75
comment by JoshuaZ · 2010-07-13T13:40:39.529Z · LW(p) · GW(p)

No, it would just mean that you can have unconscious simulations of consciousness.

Doesn't this undermine the entire philosophical basis for your argument, which rests on the experience of consciousness being real? If your system allows such an unconscious classical simulation, then why believe you are one of the actual conscious entities? This seems P-Zombieish.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-07-14T06:54:31.518Z · LW(p) · GW(p)

If your system allows such an unconscious classical simulation, then why believe you are one of the actual conscious entities?

It's like asking, why do you think you exist, when there are books with fictional characters in them? I don't know exactly what is happening when I confirm by inspection that some reality exists or that I have consciousness. But I don't see any reason to doubt the reality or efficacy of such epistemic processes, just because there should also be unconscious state machines that can mimic their causal structure.

comment by red75 · 2010-07-12T11:15:52.086Z · LW(p) · GW(p)

I understand you. Your definition is: "real consciousness" is a quantum tensor factor that belongs to a particular class of quantum tensor factors. We look for them in human brains because we know that at least one human brain is conscious, and because consciousness must be a physical entity to participate in a causal chain. All other quantum tensor factors, and all sets of them, are not consciousness, by definition.

The questions are:

  1. How do we define said class without fuzziness, when it is not yet known what is not "real consciousness"? Should we include dolphins' tensor factors, great apes', and so on?

  2. Is it always necessary for something to exist as a physical entity in order to participate in a causal chain? Does temperature exist as a physical entity? Does the "thermostatousness" of a refrigerator exist as a physical entity?

Of course, temperature and "thermostatousness" are our high-level descriptions of physical systems; they don't exist in your sense. So it seems that you see a contradiction between the subjectively apparent existence of consciousness and the apparent nonexistence of a physical representation of consciousness as a high-level description of brain functions. Don't you see a flaw in that contradiction?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-07-13T10:20:51.910Z · LW(p) · GW(p)

Causality for statistical or functional properties mostly reduces to generalizations about the behavior of exact microstates. ("Microstate" means physical state completely specified in its microscopic detail. A purely thermodynamic or macroscopic description is a "macrostate".) The entropy goes up because most microstate trajectories go from the small phase-space volume into the large phase-space volume. Macroscopic objects have persistent traits because most microstate trajectories for those objects stay in the same approximate region of state space.

So the second question is about the ontology of macrostate causation. I say it is fundamentally statistical. Cause and effect in elemental form operates only locally in the microstate, between and within fundamental entities, whatever they are. Macrostate tendencies are like thermodynamic laws or Zipf's law: they are really statements about the statistics of very large and complex chains of exact microscopic causal relations.

The usual materialist idea of consciousness is that it is also just a macrostate phenomenon and process. But as I explained, the macrostate definition is a little fuzzy, and this runs against the hypothesis that consciousness exists objectively. I will add that because these "monads" or "tensor factors" containing consciousness are necessarily very complex, there should be a sort of internal statistical dynamics. The laws of folk psychology might just be statistical mechanics of exact conscious states. But it is conceptually incoherent to say that consciousness is purely a high-level description if you think it exists objectively; it is the same fallacy as when some Buddhists say "everything only exists in the mind", which then implies that the mind only exists in the mind. A "high-level description" is necessarily something which is partly conceptual in nature, and not wholly objectively independent in its existence, and this means it is partly mind-dependent.

The first question is a question about how a theory like this would develop in detail. I can't say ahead of time. The physical premise is, the world is a web of tensor factors of various sizes, mostly small but a few of them big; and consciousness inhabits one of these big factors which exists during the lifetime of a brain. If a theory fulfilling the premise develops and makes sense, then I think you would expect any big tensor factor in a living organism, and also in any other physical system, to also correspond to some sort of consciousness. In principle, such a physical theory should itself tell you whether these big factors arise dynamically in a particular physical entity, given a specification of the entity.

Does this answer the final remark about contradiction? Each tensor factor exists completely objectively. The individual tensor factor which is complex enough to have consciousness also exists objectively and has its properties objectively, and such properties include all aspects of its subjectivity. The rest of the brain consists of the small tensor factors (which we would normally call uncorrelated or weakly correlated quantum particles), whose dynamics provide unconscious computation to supplement conscious dynamics of the big tensor factor. I think it is a self-consistent ontology in which consciousness exists objectively, fundamentally, and exactly, and I think we need such an ontology because of the paradox of saying otherwise, "the mind only exists in the mind".

Replies from: red75
comment by red75 · 2010-07-13T16:06:21.394Z · LW(p) · GW(p)

If a theory fulfilling the premise develops and makes sense, then I think you would expect any big tensor factor in a living organism, and also in any other physical system, to also correspond to some sort of consciousness.

What will make the demarcation line between small and big tensor factors less fuzzy than the macrostate definition? If we feed the internal states of a classical brain simulation into a quantum box (outputs discarded) containing 10^2 or 10^20 entangled particles/quasi-particles, will that make the simulation conscious? How, in principle, can we determine whether it will or will not?

A "high-level description" is necessarily something which is partly conceptual in nature, and not wholly objectively independent in its existence, and this means it is partly mind-dependent.

The interesting thing is that the mind as a high-level description of brain workings is mind-dependent on the same mind (it's not a paradox, but a recursion), not on a mind. Different observers will agree on the content of the high-level model of brain workings present in the same brain, as that model is unambiguously determined by the structure of the brain. Thus the mind is subjective in the sense that it is a conceptual description of brain workings (including concepts of self, mind and so on), and the mind is objective in the sense that its content can be reconstructed from the structure of the brain.

I think we need such an ontology because of the paradox of saying otherwise, "the mind only exists in the mind".

It isn't a paradox, really.


I can't help imagining the procedure for accepting works on the philosophy of mind: "Please, show your tensor factor. ... Zombies and simulations are not allowed. Next".

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-07-14T06:45:27.593Z · LW(p) · GW(p)

What will make the demarcation line between small and big tensor factors less fuzzy than the macrostate definition?

The difference is between conscious and not conscious. This will translate mathematically into the presence or absence of some particular structure in the "tensor factor". I can't tell you what structure, because I don't have the theory, of course. I'm just sketching how a theory of this kind might work. But the difference between small and big is the number of internal degrees of freedom. It is reasonable to suppose that among the objects containing the consciousness structure, there is a nontrivial lower bound on the number of degrees of freedom. Here is where we can draw a line between small and big, since the small tensor factors by definition can't contain the special structure and so truly cannot be conscious. However, being above the threshold would just be necessary, not sufficient, for the presence of consciousness.

How, in principle, can we determine whether [something] will or will not [be conscious]?

If you have a completed theory of consciousness, then you answer this question just as you would answer any other empirical question in a domain where you have a well-tested theory: You evaluate the data using the theory. If the theory tells you all the tensor factors in the box are below the magic threshold, there's definitely no consciousness there. If there might be some big tensor factors present, it will be more complicated, but it will still be standard reasoning.

If you are still developing the theory, you should focus just on the examples which will help you finish it, e.g. Roko's example of general anesthesia. That might be an important clue to how biology, phenomenology, and physical reality go together. Eventually you have a total theory and then you can apply it to other organisms, artificial quantum systems like in your thought experiment, and so on.

Different observers will agree on the content of the high-level model of brain workings present in the same brain, as that model is unambiguously determined by the structure of the brain.

Any causal model using macrostates leaves out some micro-level information. For any complex physical system, there is a hierarchy of increasingly coarse-grained macrostate models. At the bottom of the hierarchy is exact physical fact - one model state for each exact physical microstate. At the top of the hierarchy is a trivial model with no dynamics - the same macrostate for all possible microstates. In between are many possible coarse-grainings, in which microstates are combined into macrostates. (A macrostate is therefore a region in the microscopic state space.)

So there is no single macrostate model of the brain determined by its structure. There is always a choice of which coarse-graining to use. Maybe now you can see the problem: if conscious states are computational macrostates, then they are not objectively grounded, because every macrostate exists in the context of a particular coarse-graining, and other ones are always possible.
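
A toy illustration of that non-uniqueness (the microstates and both coarse-grainings here are invented for the example): the same pair of microstates can share a macrostate under one partition and occupy different macrostates under another, and neither partition is physically privileged.

```python
from collections import Counter
from itertools import product

# Microstates: all 4-bit configurations of a toy system.
microstates = list(product([0, 1], repeat=4))

# Two equally legitimate coarse-grainings of the same state space.
def macro_parity(state):      # macrostate = parity of the bits
    return sum(state) % 2

def macro_majority(state):    # macrostate = whether most bits are on
    return int(sum(state) > 2)

# The two partitions group the 16 microstates differently...
print(Counter(macro_parity(s) for s in microstates))    # {0: 8, 1: 8}
print(Counter(macro_majority(s) for s in microstates))  # {0: 11, 1: 5}

# ...so two microstates can agree under one coarse-graining and
# disagree under the other.
s1, s2 = (1, 1, 0, 0), (1, 1, 1, 1)
print(macro_parity(s1) == macro_parity(s2))      # True:  same macrostate
print(macro_majority(s1) == macro_majority(s2))  # False: different macrostates
```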

Replies from: red75
comment by red75 · 2010-07-14T09:13:53.399Z · LW(p) · GW(p)

So there is no single macrostate model of the brain determined by its structure. There is always a choice of which coarse-graining to use. Maybe now you can see the problem: if conscious states are computational macrostates, then they are not objectively grounded, because every macrostate exists in the context of a particular coarse-graining, and other ones are always possible.

Here's the point of divergence. There is a peculiar coarse-graining: specifically, the conceptual self-model that consciousness uses to operate on (as I wrote earlier, it uses the concepts of self, mind, desire, intention, emotion, memory, feeling, etc. When I think "I want to know more", my consciousness uses concepts of that model to (crudely) represent the actual state of (part of) the brain, including the parts which represent the model itself). Thus, to find a consciousness in a system, it is necessary to find a coarse-graining such that the corresponding macrostate of the system is isomorphic to the physical state of a part of the system (it is not sufficient, however). Or, in the map-territory analogy: to find a part of the territory that is isomorphic to a (crude) map of the territory.

Edit: Well, it seems that a lower bound on the information content of the map is necessary for this approach too. However, this approach doesn't require adding fundamental ontological concepts.

Edit: The isomorphism condition is too limiting; it would require another level of coarse-graining to be true. I'll try to come up with another definition.

comment by Douglas_Knight · 2010-07-12T06:35:31.780Z · LW(p) · GW(p)

But since we know that most neural computation is not conscious, I find it remarkably natural to suppose that it's entanglement that makes the difference.

This really sounds to me like a perfect fit for Robin's grandparent post. If, say, nonlocality is important, why achieve it through quantum means?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-07-12T08:09:03.984Z · LW(p) · GW(p)

This is meant to be ontological nonlocality and not just causal coordination of activities throughout a spatial region. That is, we would be talking about entities which do not reduce to a sum of spatially localized parts possessing localized (encapsulated) states. An entangled EPR pair is a paradigm example of such ontological nonlocality, if you think the global quantum state is the actual state, because the wavefunction cannot be factorized into a tensor product of quantum states possessed by the individual particles in the pair. You are left with the impression of a single entity which interfaces with the rest of the universe in two places. (There are other, more esoteric indications that reality has ontological nonlocality.)

These complex unities glued together by quantum entanglement are of interest (to me) as a way to obtain physical entities which are complex and yet have objective boundaries; see my comment to RobinZ.

comment by Roko · 2010-07-11T13:35:03.573Z · LW(p) · GW(p)

Not only does this quantum brain idea violate known experimental and theoretical facts about the brain, it also violates what we know about evolution. Why would evolution design a system that maintains coherence during sleep and unconsciousness, if this has no effect on inclusive genetic fitness?

(Mitchell Porter thinks that his "copy" would behave essentially identically to what he would have done had he not "lost his selfhood", so in terms of reproductive fitness, there's no difference)

Replies from: Blueberry, randallsquared, Mitchell_Porter
comment by Blueberry · 2010-07-11T17:03:59.882Z · LW(p) · GW(p)

Though I agree that this quantum brain idea is against all evidence, I don't think the evolutionary criticism applies. Not every adaptation has a direct effect on inclusive genetic fitness; some are just side effects of other adaptations.

Replies from: Roko
comment by Roko · 2010-07-11T20:24:38.493Z · LW(p) · GW(p)

Sure, but the empirical difficulty of maintaining a quantum coherent state would imply that it isn't the kind of thing that would happen by accident.

comment by randallsquared · 2010-07-11T19:27:14.452Z · LW(p) · GW(p)

Well, it might be that maintaining the system rather than restarting it when full consciousness resumes is an easier path to the adaptation, or has some advantage we don't understand.

Of course, if the restarted "copy" would seem externally and internally as a continuation, the natural question is why bother positing such a monad in the first place?

comment by Mitchell_Porter · 2010-07-12T08:30:26.409Z · LW(p) · GW(p)

If you want something that flies, the simplest way is for it to have wings that still exist even when it's on the ground. We don't actually know (big understatement there) the relative difficulty of evolving a "persistent quantum mind" versus a "transient quantum mind" versus a "wholly classical mind".

There may also be an anthropic aspect. If consciousness can only exist in a quantum ontological unit (e.g. the irreducible tensor factors I mention here), then you cannot find yourself to be an evolved intelligence based solely on classical computation employing many such entities. Such beings might exist in the universe, but by hypothesis there would be nobody home. This isn't relevant to persistent vs transient, but it's relevant for quantum vs classical.

Replies from: Roko, Roko
comment by Roko · 2010-07-12T11:02:13.629Z · LW(p) · GW(p)

You seem to jump to the conclusion that, in the favorable case (that consciousness only exists in quantum computers AND quantum coherence is the fundamental basis of persistent identity), the coherence timescale would obviously be your whole lifetime, even through hypothermia, anesthetics, etc., but that as soon as you are cryopreserved, it decoheres, so that the physical basis of persistent identity corresponds perfectly to the culturally accepted notion.

But that would be awfully convenient! Why not assign most of your probability to the proposition that evolution accidentally designed a quantum computer with a decoherence timescale of one second? ten seconds? 100 seconds? 1000 seconds? 10,000 seconds? Why not postulate that unconsciousness or sleep destroys the coherence? After all, we know that classical computation is perfectly adequate for evolutionarily adaptive tasks (because we can do them on a classical computer).

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-07-13T09:38:15.483Z · LW(p) · GW(p)

This is, first of all, an exercise in taking appearances ("phenomenology") seriously. Consciousness comes in intervals with internal continuity, one often comes to waking consciousness out of a dream (suggesting that the same stream of consciousness still existed during sleep, but that with mental and physical relaxation and the dimming of the external senses, it was dominated by fantasy and spontaneous imagery), and one should consider the phenomenon of memory to at least be consistent with the idea that there is persistent existence, not just throughout one interval of waking consciousness, but throughout the whole biological lifetime.

So if you're going to think about yourself as physically actual and as actually persistent, you should think of yourself as existing at least for the duration of the current period of waking consciousness, and you have every reason to think that you are the same "you" who had those experiences in earlier periods that you can remember. The idea that you are flickering in and out of existence during a single day or during a lifetime is somewhat at odds with the phenomenological perspective.

Cryopreservation is far more disruptive than anything which happens during a biological lifetime. Cells full of liquid water freeze over and grow into ice crystals which burst their membranes. Metabolism ceases entirely. Some, maybe even most models of persistent biological quantum coherence have it depending on a metabolically maintained throughput of energy. To survive the freezing transition, it seems like the "bio-qubits" would have to exist in molecular capsules that weren't penetrated as the ice formed.

Replies from: Roko
comment by Roko · 2010-07-13T09:58:34.062Z · LW(p) · GW(p)

But if you're going to argue phenomenologically, then any form of reanimation that restores the person's memories in a continuous way will seem (from the inside) to be continuous.

Can I ask: have you ever been under a general anesthetic?

It is a philosophically significant life event, because what you experience is just so incredibly at odds with what actually happens. You lie there waiting for the anesthetic to take effect, and then the next instant your eyes open, you find your arm/leg/whatever in plaster, and a glance at the clock suggests that 3 hours have passed.

I'd personally want to be cryopreserved before I fully lost my marbles so that I can experience that kind of time travel. Imagine closing your eyes, then reopening them and it's the 23rd century? How cool would that be?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-07-13T11:21:15.710Z · LW(p) · GW(p)

I must have been, at some point, but it was a long time ago and I don't remember.

Clearly there are situations where extra facts would lead you to conclude that the impression of continuity is an illusion. If you woke up as Sherlock Holmes, remembering your struggle with Moriarty as you fell off a cliff moments before, and were then shown convincingly that Holmes was a fictional character from centuries before, and you were just an artificial person provided with false memories in his image, you would have to conclude that in this case, you had erred somehow in judging reality on the basis of subjective appearances.

It seems unlikely that reliable reconstruction of cryonics patients could occur and yet the problem of consciousness not yet be figured out. Reliable reconstruction would require such a profound knowledge of brain structure and function, that there wouldn't be room for continuing uncertainty about quantum effects in the brain. By then you would know it was there or not there, so regardless of how the revivee felt, the people(?) doing the reviving should already know the answers regarding identity and the nature of personal existence.

(I add the qualification reliable reconstruction, because there might well be a period in which it's possible to experiment with reconstructive protocols while not really knowing what you're doing. Consider the idea of freezing a C. elegans and then simulating it on the basis of micrometer sections. We could just about do this today, except that we would mostly be guessing how to map the preserved ultrastructure to computational elements of a simulation. One would prefer the revival of human beings not to proceed via similar trial and error.)

In the present, the question is whether subjectively continuous but temporally discontinuous experience, such as you report, is evidence for the self only having an intermittent physical existence. Well, the experience is consistent with the idea that you really did cease to exist during those 3 hours, but it is also consistent with the idea that you existed but your time sense shut down along with your usual senses, or that it stagnated in the absence of external and internal input.

Replies from: Roko, Roko, JoshuaZ
comment by Roko · 2010-07-13T12:32:36.065Z · LW(p) · GW(p)

that there wouldn't be room for continuing uncertainty about quantum effects in the brain.

There is no uncertainty. A large amount of evidence points to the lack of quantum effects in the brain. Furthermore, there was never really any evidence in favor of quantum effects, and certainly none has been produced.

comment by Roko · 2010-07-13T12:18:34.340Z · LW(p) · GW(p)

I think that most of the problems of consciousness have already been figured out; Gary Drescher, Dan Dennett, and Drew McDermott have done it. They just don't yet have overwhelming evidence, so you have to be "light like a leaf blown by the winds of evidence" to see their answer as being correct.

It seems unlikely that reliable reconstruction of cryonics patients could occur and yet the problem of consciousness not yet be figured out.

The remaining unsolved problems in this area seem to be related to the philosophy of computations-in-general, such as "what counts as implementing a computation" or anthropic/big world problems.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-13T15:13:25.799Z · LW(p) · GW(p)

The remaining unsolved problems in this area seem to be related to the philosophy of computations-in-general, such as "what counts as implementing a computation" or anthropic/big world problems.

Which is to say, decision theory for algorithms, understanding of how an algorithm controls mathematical structures, and how intuitions about the real world and subjective anticipation map to that formal setting.

Replies from: Roko
comment by Roko · 2010-07-13T16:21:31.887Z · LW(p) · GW(p)

Well, that's one possible solution. But not without profound problems, for example the problem of lack of a canonical measure over "all mathematical structures" (even the lack of a clean definition of what "all structures" means).

But it certainly solves some problems, and has the sort of "reductionistic" feel to it that indicates it is likely to be true.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-13T16:28:57.509Z · LW(p) · GW(p)

Well, that's one possible solution. But not without profound problems, for example the problem of lack of a canonical measure over "all mathematical structures" (even the lack of a clean definition of what "all structures" means).

Logics allow one to work with classes of mathematical structures (not necessarily individual structures), which seems to be a good enough notion of working with "all mathematical structures". A "measure" (if, indeed, it's a useful concept) is an aspect of preference, and preferences are inherently non-canonical, though I hope to find a relatively "canonical" procedure for defining ("extracting") preference in terms of an agent-program.

Replies from: Roko
comment by Roko · 2010-07-13T16:44:16.473Z · LW(p) · GW(p)

In the case of MWI quantum mechanics, the measure is Integral[ ||Psi>|^2 ], and if Robin's Mangled Worlds is true, there's no doubt that this measure is not "preference".

What is the difference between the MWI/Mangled Big World and other Big Worlds such that measure is preference in others but not in MWI/Mangled?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-13T18:19:35.599Z · LW(p) · GW(p)

Any given concept is what it is. Truth about any given concept is not a matter of preference.

But in cases where there is no "canonical choice of a concept", it is a matter of choice which concept to consider. If you want a concept with certain properties, these properties already define a concept of their own, and might determine the mathematical structure that satisfies them, or might leave some freedom in choosing one you prefer for the task.

In the case of quantum mechanical measure, you want your concept of measure to produce "probabilities" that conform with the concept of subjective anticipation, which is fairly regular and thus creates the illusion of "universality", because the preferences of most minds like ours (evolved like ours, in our physics) have subjective anticipation as a natural category, a pattern that has significant explanatory (and hence, optimization) power. But subjective anticipation is still not a universally interesting concept; one can consider a mind that looks at your theories about it, says "so what?", and goes on optimizing something else.

Replies from: Roko
comment by Roko · 2010-07-13T18:49:00.944Z · LW(p) · GW(p)

The reason I spoke about Mangled Worlds MWI is that the Integral[ ||Psi>|^2 ] measure is not dependent upon subjective anticipation.

This is because in Mangled Worlds QM there is a physically meaningful sense in which some things cease to exist, namely that things (people, computers, any complex or macroscopic phenomenon) get "mangled" if their Integral[ ||Psi>|^2 ] measure gets too low.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-13T19:24:52.476Z · LW(p) · GW(p)

That preference is a cause of a given choice doesn't prohibit physics from also being a cause. There is rarely an ultimate source (unique dependence). You value thinking about what is real (accords with physical laws) because you evolved to value real things. There are also concepts which are not about our physical laws which you value, because evolution isn't a perfect designer.

This is also a free will argument. I say that there is a decision to be made about which concepts to consider, and you say that the decision is already made by the laws of physics. It's easier to see how you do have free will for more trivial choices. It's more difficult to consider acting and thinking as if you live in different physics. In both cases, the counterfactual is physically impossible, you couldn't have made a different choice. Your thoughts accord with the laws of physics, caused by physics, embedded within physics. And in both cases, what is actually true (what action you'll perform; and what theories you'll think about) is determined by your decision.

As an agent, you shouldn't (terminally) care about what the laws of physics say, only about what your preference says, so this cause is always more relevant, although currently less accessible to reflection.

Replies from: Roko
comment by Roko · 2010-07-13T20:14:45.687Z · LW(p) · GW(p)

Yes, I get that free will is compatible with deterministic physics. That is not the issue. I don't quite see what about my reply made you think that this was relevant?

The point is that in Mangled Worlds QM there is such a thing as objective probability, even though the world is (relatively) big, and it basically turns out to be defined by just the number of instances of something rather than something else.

Replies from: Nick_Tarleton, Vladimir_Nesov
comment by Nick_Tarleton · 2010-07-13T20:32:43.347Z · LW(p) · GW(p)

I think Vladimir is essentially saying that caring about that objective property of that particular mathematical structure is still your "arbitrary", subjectively objective preference. I don't think I understand where the free will argument comes in either.

Replies from: Roko, Vladimir_Nesov
comment by Roko · 2010-07-13T20:49:27.159Z · LW(p) · GW(p)

Sure, it is arbitrary to care about what actually exists and what will actually happen, as opposed to (for example) running your life around trying to optimize the state of Tolkien's Middle Earth.

But I think that what Big Worlds calls into question is whether there is such a thing as "what actually exists" and "what will actually happen". That's the problem. I agree that evolution could (like it did in the case of subjective anticipation and MWI QM) have played a really cruel trick on us.

But I brought up Mangled Worlds because it seems to be a case where there is such a thing as "what will actually happen" and "what actually exists", even though the world is relatively big (though Mangled Worlds is importantly different from MWI with no mangler or world-eater).

The important difference between MWI and Mangled-MWI is that if you say "ah, measure over a big world is part of preference, and my preference is for a |Psi|^10 measure", then you will very quickly end up mangled, i.e. there will be no branches of the wavefunction where your decision algorithm interacts with reality in the intended way for an extended period of time.

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-13T21:06:23.446Z · LW(p) · GW(p)

The important difference between MWI and Mangled-MWI is that if you say "ah, measure over a big world is part of preference, and my preference is for a |Psi|^10 measure", then you will very quickly end up mangled, i.e. there will be no branches of the wavefunction where your decision algorithm interacts with reality in the intended way for an extended period of time.

So what? Not everyone cares about what happens in this world. Plus, you don't have to exist in this world to optimize it (though it helps).

Replies from: Roko
comment by Roko · 2010-07-13T21:15:41.338Z · LW(p) · GW(p)

If we take as an assumption that Mangled-worlds MWI is the only kind of "Bigness" that the world has, then there is nothing else to care about apart from what happens in one of the branches, and since nothing exists apart from those branches, you have to exist in at least one of them to actually do anything.

Though, of course, acausally speaking, a slim probability that some other world exists is enough for people to (perhaps?) take notice of it.

EDIT: One way to try to salvage objective reality from Big Worlds would be to drive a wedge between "other worlds that we have actual evidence for" (such as MWI) and "other worlds that are in-principle incapable of providing positive evidence of their existence" (such as Tegmark's MUH), then show that all of the evidentially implied big worlds are not problematic for objectivity, as seems to be the case for Mangled-MWI. However, this would only work if one were willing to part with Kolmogorov/Bayesian reasoning, and say that certain perfectly low-complexity hypotheses are thrown out for being "too big" and "too hypothetical".

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-13T21:29:20.534Z · LW(p) · GW(p)

If we take as an assumption that Mangled-worlds MWI is the only kind of "Bigness" that the world has, then there is nothing else to care about apart from what happens in one of the branches, and since nothing exists apart from those branches, you have to exist in at least one of them to actually do anything.

I'm fairly sure at this point it's conceptual confusion to say that. You can care about mathematical structures, and control mathematical structures, that have nothing to do with the real world. These mathematical structures don't have to be "worlds" in any usual sense, for example they don't have to be processes (have time), and they don't have to contain you in them in any form.

One of the next iterations of ambient decision theory should make it clearer, though the current version should give a hint (but probably isn't worth the bother in the current form, considering it has known philosophical/mathematical bugs - but I'm studying, improving my mathematical sanity).

Replies from: Roko, Roko
comment by Roko · 2010-07-13T23:08:03.157Z · LW(p) · GW(p)

Perhaps the distinction I'm interested in is the difference between control and function-ness.

There is an abstract mathematical function, say, the parity function of the number of open eyes I have. It is a function of me, but I wouldn't say that I am controlling it in the conventional sense, because it is abstract.
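
A minimal sketch of that function, assuming the obvious encoding of my state as a count of open eyes:

```python
def eye_parity(open_eyes: int) -> int:
    # An abstract function of my state: 0 if an even number of my eyes
    # are open, 1 if odd. It has a value whether or not anything
    # physical ever computes it.
    return open_eyes % 2

print(eye_parity(2))  # 0 -- both eyes open
print(eye_parity(1))  # 1 -- a wink flips the function's value
```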

Replies from: Nick_Tarleton, Vladimir_Nesov
comment by Nick_Tarleton · 2010-07-13T23:16:31.348Z · LW(p) · GW(p)

More abstract than whether your eyes are open? They're about the same distance from the underlying physics.

Replies from: Roko
comment by Roko · 2010-07-13T23:31:32.635Z · LW(p) · GW(p)

I guess if there were an actual light that lit up as a function of the parity, then I would feel comfortable with "control", and I would say that I am controlling the light.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-07-14T00:00:59.483Z · LW(p) · GW(p)

... Whether the light is on is also pretty abstract, no?

comment by Vladimir_Nesov · 2010-07-13T23:16:00.809Z · LW(p) · GW(p)

The role of the decision-theoretic notion of control is to present the consequences of your possible decisions for evaluation by preference. Whatever fills that role will do; and if one can value mathematical abstractions, then the notion of control has to describe how to control abstractions. Conveniently, the real world can be seen as just another mathematical structure (or class of structures).

Replies from: Roko
comment by Roko · 2010-07-13T23:28:48.717Z · LW(p) · GW(p)

I would say that the conventional usage of the word "control" requires the thing-under-control to be real, but sure, one can use the words how one pleases.

It worries me somewhat that we seem so concerned with what word-set we use here; this indicates that the degree to which we value performing certain actions depends on whether we frame it as

"controlling something that's no more-or-less real than the laptop in front of you"

versus

"this nonexistent abstraction happens to be a function of you; so what? There are infinitely many abstract functions of you"

Is there some actual substance here?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-13T23:46:51.190Z · LW(p) · GW(p)

This complication is created by the same old ontology problem: if preference talks about the real world, power to you (though that would make physics relevant, which is no good either), but if it doesn't, we have to deal with that. And we can't assume a priori what preference talks about.

My previous position (and, it seems, a long-held position of Wei Dai's) was to assume that preference can be expressed as talking about the behavior of programs (as in UDT), since ultimately it has to determine the behavior of the agent's program, and seeing the environment as programs fits the pattern and allows one to express preferences that hold arbitrary strategies of the agent as the best option.

Now, since ambient decision theory (ADT) suggests treating the notions of consequences of the agent's decision as logical theories, it became more natural to see the environment as models of those theories, and so as structures more general than programs. But more importantly, if, as logical theories, the preferred concepts do not refer to programs (even though they can directly influence only the behavior of the agent's program), there is no easy way of converting them into preference-about-programs equivalents. Getting the info out of those theories may well be undecidable; it is something to work on during decision-making, not on the preliminary stage of preference-definition.
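
For concreteness, here is a minimal sketch of the "preference over behavior of programs" picture, using Newcomb's problem as the environment (the names and the perfect-predictor assumption are mine for illustration, not Wei Dai's formalism):

```python
# Toy UDT-flavored setup: preference evaluates the behavior of a
# world-program, and the agent picks the action whose (counterfactual)
# world-history it prefers.
def world(action: str) -> int:
    # Environment as a program; the predictor mirroring the agent's
    # program perfectly is an assumption of this toy model.
    predictor_fills_box = (action == "one_box")
    box_a = 1_000_000 if predictor_fills_box else 0
    box_b = 1_000
    return box_a + (box_b if action == "two_box" else 0)

def utility(history: int) -> int:
    return history  # preference is a function of the world-program's output

def agent() -> str:
    return max(["one_box", "two_box"], key=lambda a: utility(world(a)))

print(agent())  # "one_box"
```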

Replies from: Roko, Roko
comment by Roko · 2010-07-13T23:58:01.442Z · LW(p) · GW(p)

Also, trying to have preferences about abstractions, especially infinite ones, seems bound to end in tears, i.e. a complete mess of an ontology problem. You'd import all the problems of the philosophy of mathematics and heap them on top of the problems of ethics. Not to mention Gödelian problems, large cardinal axiom problems, etc. Just the thought of trying to sort all that out fills me with dread.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-14T00:15:35.862Z · LW(p) · GW(p)

Also, trying to have preferences about abstractions, especially infinite ones, seems bound to end in tears, i.e. a complete mess of an ontology problem. You'd import all the problems of the philosophy of mathematics and heap them on top of the problems of ethics. Not to mention Gödelian problems, large cardinal axiom problems, etc. Just the thought of trying to sort all that out fills me with dread.

Scary, and I haven't even finished converting myself into a pure mathematician yet. :-) I was hoping to avoid these issues by somehow limiting preference to programs, but investigation led me back to the harder problem statement. Ultimately, a simpler understanding has to be found that sidesteps the monstrosity of set-theoretical infrastructure and the diversity of logics. At this point, though, I expect to benefit from the conceptual clarity brought by standard mathematical tools.

comment by Roko · 2010-07-13T23:54:37.910Z · LW(p) · GW(p)

This complication is created by the same old ontology problem: if preference talks about the real world, power to you, but if it doesn't, we have to deal with that.

I think the problem might be that the distinction between the real world and the hypothetical world might not be logically defensible, in which case we have an ontology problem of awesome proportions on our hands.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-14T00:05:16.453Z · LW(p) · GW(p)

I think the problem might be that the distinction between the real world and the hypothetical world might not be logically defensible, in which case we have an ontology problem of awesome proportions on our hands.

I believe as much: for the foundational study of decision-making, the notion of a "real world" is useless, which is why we have to deal with "all mathematical structures", somehow accessed through more manageable concepts (for which the best fit is logic, though that's uncomfortable for many reasons).

(I'd still expect that it's possible to extract some fuzzy outline of the concept of the "real world", like it's possible to vaguely define "chairs" or "anger".)

Replies from: Roko
comment by Roko · 2010-07-14T00:46:58.570Z · LW(p) · GW(p)

(I'd still expect that it's possible to extract some fuzzy outline of the concept of the "real world", like it's possible to vaguely define "chairs" or "anger".)

Maybe. Though my intuition seems to point to a more fundamental role for "reality" in decisionmaking.

Evolution designed our primitive notions of decisionmaking in a context where there was a very clear and unique reality; why should there even be a clear and unique generalization to the new contexts, i.e. the set of all mathematical structures?

I predict that we'll end up with a plethora of different kinds of decision theory, which lead to a whole assortment of different practical recommendations, where the very finest of framing differences could push a person to act in completely different ways. The one exception would be a decision theory that cashes out the notion of reality, which will be relatively unique because of its similarity to our pretheoretic notions.

But I am willing to be proven wrong.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-14T00:57:25.008Z · LW(p) · GW(p)

Evolution designed our primitive notions of decisionmaking in a context where there was a very clear and unique reality; why should there even be a clear and unique generalization to the new contexts, i.e. the set of all mathematical structures?

Generalization comes from the expressive power of a mind: you can think about all sorts of concepts besides the real world. That evolution would fail to delineate the real world in this concept space perfectly seems obvious: all sorts of good-fit approximations would do for its purposes. But when we are talking about FAI, we have to deal with what was actually chosen, not with what "was supposed to be chosen" by evolution. This argument applies even more easily to other evolutionary drives.

Replies from: Roko
comment by Roko · 2010-07-14T10:22:17.362Z · LW(p) · GW(p)

I think you misunderstood me: I meant why should there even be a clear and unique generalization of human goals and decisionmaking to the case of preferences over the set of mathematical possibilities.

I did not mean why should there even be a clear and unique generalization of the human concept of reality -- for the time being I was assuming that there wouldn't be one.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-14T12:24:23.362Z · LW(p) · GW(p)

You don't try to generalize, or extrapolate human goals. You try to figure out what they already are.

comment by Roko · 2010-07-13T22:32:37.797Z · LW(p) · GW(p)

control mathematical structures

I think that this is a different sense of the word "control" than controlling physical things.

they don't have to contain you in them in any form.

Can you elaborate on this?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-13T22:51:57.253Z · LW(p) · GW(p)

I think that this is a different sense of the word "control" than controlling physical things.

UDT is about control in the same sense. See this comment for a point in that direction (and my last comment on the "Ambient decision theory go-through" thread on the SIAI DT list). I believe this to be a conceptual clarification of the usual notion of control, having the usual notion ("explicit control") as a special case (almost, modulo explicit-dependence bias: it lets you get better results than if you only consider the explicit dependence as stated).

they don't have to contain you in them in any form.

Can you elaborate on this?

See "ambient dependence" on DT list, but the current notion (involving mathematical structures more general than programs) is not written up. I believe "logical control", as used by Wei/Eliezer, refers to basically the same idea. In a two-player game, you can control the other player's decisions despite not literally sitting inside their head.

Replies from: steven0461, Roko
comment by steven0461 · 2010-07-13T22:56:37.987Z · LW(p) · GW(p)

I just accidentally found this other decision theory Google group and thought LWers might find it of interest.

comment by Roko · 2010-07-13T23:10:48.643Z · LW(p) · GW(p)

I'm not on that list. Do you know who the list owner is?

Just as a note, my current gut feeling is that it is perfectly plausible that the right way to go is to do something like UDT but with a notion of what worlds are real (as in Mangled worlds QM).

However, I shall read your theory of controlling that which is unreal and see what I make of it!

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-13T23:24:33.735Z · LW(p) · GW(p)

I'm not on that list.

Yes you are (via r****c at googlemail.com). IIRC, you got there after I sent you an invitation. Try logging in on the list page.

Replies from: Roko
comment by Roko · 2010-07-13T23:32:42.879Z · LW(p) · GW(p)

Oh, thanks. Obviously I accepted and forgot about it.

comment by Vladimir_Nesov · 2010-07-13T20:55:38.700Z · LW(p) · GW(p)

Sure, it is arbitrary to care about what actually exists and what will actually happen, as opposed to (for example) running your life around trying to optimize the state of Tolkien's Middle Earth.

But you do care about optimizing Middle Earth (let it be Middle Earth with Halting Oracles to be sure), to some tiny extent, even though it doesn't exist at all.

comment by Vladimir_Nesov · 2010-07-13T20:45:03.185Z · LW(p) · GW(p)

Free will is about dependencies: one gets to say that the outcome depends on your decision. At the same time, the outcome depends on other things. Here, which quantum mechanical measure you consider depends on what's true about the world, but at the same time it depends on what you prefer to consider. Thus, saying that there are objective facts dictated by the laws of physics is analogous to saying that all your decisions are already determined by the physical laws.

My argument was that, as in the case of the naive free will argument, here too we can (indeed, should, once we get to the point of being able to tell the difference) see physical laws as (subjectively) chosen. Of course, just as you can't change your own preference, you can't change the implied physical laws seen as an aspect of that preference (to make them nicer for some purpose, say).

comment by Vladimir_Nesov · 2010-07-13T20:22:45.301Z · LW(p) · GW(p)

Yes, I get that free will is compatible with deterministic physics. That is not the issue. I don't quite see what about my reply made you think that this was relevant?

It is relevant, but I've run out of hope of communicating this quickly, so let's all hope I figure out and write up my philosophical framework for decision theory in detail sometime soon.

comment by JoshuaZ · 2010-07-13T12:41:24.835Z · LW(p) · GW(p)

It seems unlikely that reliable reconstruction of cryonics patients could occur and yet the problem of consciousness not yet be figured out.

I don't agree with this claim. One would simply need an understanding of which brain systems are necessary for consciousness and how to restore those systems to a close approximation of their pre-existing state (presumably using nanotech). This doesn't take much in the way of actually understanding how those systems function. Once one had well-developed nanotech, one could learn this sort of thing simply by trial and error on animals (seeing what was necessary for survival, and what was necessary for training to stay intact) and then move on to progressively larger-brained creatures. This doesn't require a deep understanding of intelligence or consciousness, simply an understanding of what parts of the brain are being used and how to restore them.

comment by Roko · 2010-07-12T10:48:52.768Z · LW(p) · GW(p)

We don't actually know (big understatement there) the relative difficulty of evolving a "persistent quantum mind" versus a "transient quantum mind" versus a "wholly classical mind".

Actually, we do. We've been trying for decades to build viable quantum computers, and it turns out to be excruciatingly hard.

comment by timtyler · 2010-07-07T14:25:29.180Z · LW(p) · GW(p)

Mass cryonic suspension does not seem likely to be affordable anytime soon: "As of 2010, only around 200 people have undergone the procedure since it was first proposed in 1962" - http://en.wikipedia.org/wiki/Cryonics

Replies from: lsparrish
comment by lsparrish · 2010-07-08T22:36:45.625Z · LW(p) · GW(p)

Maybe it just hasn't been marketed properly.