Looking for opinions of people like Nick Bostrom or Anders Sandberg on current cryo techniques
post by ChrisHallquist · 2013-10-17T20:36:53.567Z · LW · GW · Legacy · 184 comments
In June 2012, Robin Hanson wrote a post promoting plastination as superior to cryopreservation as an approach to preserving people for later uploading. His post included a paragraph which said:
We don’t actually know that frozen brains preserve enough brain info. Until recently, ice formation in the freezing process ripped out huge brain chunks everywhere and shoved them to distant locations. Recent use of a special anti-freeze has reduced that, but we don’t actually know if the anti-freeze gets to enough places. Or even if enough info is saved where it does go.
This left me with the impression that the chances that the average person cryopreserved today will later be revived aren't great, even when you conditionalize on no existential catastrophe. More recently, I did a systematic read-through of the sequences for the first time (about a month and a half ago), and Eliezer's post You Only Live Twice convinced me to finally sign up for cryonics for three reasons:
- It's cheaper than I realized
- Eliezer recommended Rudi Hoffman to help with the paperwork
- Eliezer's hard drive analogy convinced me the chances of revival (at least conditionalizing on no existential catastrophe) are good
However, the open letter on cryonics signed by various scientists includes the following caveat:
Note: Signing of this letter does not imply endorsement of any particular cryonics organization or its practices. Opinions on how much cerebral ischemic injury (delay after clinical death) and preservation injury may be reversible in the future vary widely among signatories.
I don't find that terribly encouraging. So now I'm back to being pessimistic about current cryopreservation techniques (though I'm still signing up for cryonics because the cost is low enough even given my current estimate of my chances). But I'd very much be curious to know if anyone knows what, say, Nick Bostrom or Anders Sandberg think about the issue. Anyone?
Edit: I'm aware of estimates given by LessWrong folks in the census of the chances of revival, but I don't know how much of that is people taking things like existential risk into account. There are lots of different ways you could arrive at a ~10% chance of revival overall:
- (50% chance of no existential catastrophe) * (30% chance current cryopreservation techniques are adequate) * (70% chance my fellow humans will come through for me beyond avoiding existential catastrophe) = 10.5%
is one way. But:
- (15% chance no existential catastrophe) * (99% chance current cryopreservation techniques are adequate) * (70% chance my fellow humans will come through for me beyond avoiding existential catastrophe) = ~10.4%
is a very similar conclusion from very different premises. Gwern has more on this sort of reasoning in Plastination versus cryonics, but I don't know who most of the people he links to are so I'm not sure whether to trust them. He does link to a breakdown of probabilities by Robin, but I don't fully understand the way Robin is breaking the issue down.
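A quick sketch of that arithmetic (a toy conjunctive model using the same made-up numbers as above, not anyone's actual estimates):

```python
# Toy conjunctive model of the overall chance of revival: all three factors
# must go right, and very different premises can multiply to a similar result.
def p_revival(p_no_xrisk, p_preservation_adequate, p_others_come_through):
    return p_no_xrisk * p_preservation_adequate * p_others_come_through

print(p_revival(0.50, 0.30, 0.70))  # ~0.105 (50% no catastrophe, 30% preservation adequate, 70% others come through)
print(p_revival(0.15, 0.99, 0.70))  # ~0.104 (15% no catastrophe, 99% preservation adequate, 70% others come through)
```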
184 comments
Comments sorted by top scores.
comment by Paul Crowley (ciphergoth) · 2013-10-18T09:29:23.233Z · LW(p) · GW(p)
Paul Crowley here.
I am signed up for cryonics and I'd encourage others to do likewise. My comments on Twitter were meant as a caution against a naïve version of the expert opinion principle. As you can see, I've worked very hard to find out whether expert dismissal of cryonics is on expert grounds, or on ill-thought-out "silliness" grounds combined with a lingering supernatural idea of death. The evidence I currently have points overwhelmingly at the latter. To me this poses a problem not for cryonics but for how to make the best use of expert opinion.
Assuming a good future, I'd put the chances of revival through scanning and emulation at over 50%. For the cryonics process to do so much damage that there's no way for us to infer the lost information given full-blown neural archaeology, Nature would have to surprise us.
Replies from: ChrisHallquist↑ comment by ChrisHallquist · 2013-10-18T21:44:01.595Z · LW(p) · GW(p)
I'm curious to know what you think a "naive" version of the expert opinion principle is, and what a less-naive one would be.
After thinking about this beyond my comments on Twitter, one issue is that judgments about cryonics aren't just judgments about cryonics - they're about future technology that could be used to revive / upload cryopreserved people. I think I'd expect biologists (esp. neuroscientists and cryobiologists) to be trustworthy on current preservation techniques, but few will have anything approaching expertise about future technology.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2013-10-20T20:14:20.377Z · LW(p) · GW(p)
I don't know. I'm more confident that the expert dismissal of cryonics is not founded in expertise than I am in any particular strategy for when to defer to expert belief, so I'd use cryonics to measure the strategy rather than vice versa.
By and large cryonics critics don't make clear exactly what part of the cryonics argument they mean to target, so it's hard to say exactly whether it covers an area of their expertise, but it's at least plausible to read them as asserting that cryopreserved people are information-theoretically dead, which is not guesswork about future technology and would fall under their area of expertise.
comment by V_V · 2013-10-18T13:50:33.709Z · LW(p) · GW(p)
Eliezer's hard drive analogy convinced me the chances of revival (at least conditionalizing on no existential catastrophe) are good
Why did you find the analogy convincing? It doesn't look like a good analogy:
It's cherry-picked: erasing information from hard drives is hard because they are very stable information storage devices. A powered-down hard drive can retain its content for at least decades, probably centuries if the environmental conditions are good. Consider a modern DRAM chip instead: power it down and its content will disappear within seconds. Retention time can be increased to days or perhaps weeks by cooling to cryogenic temperatures before power down, and after the data has become unreadable by normal means, specialized probes and microscopy techniques could in some cases still retrieve it for some time, but ultimately the data will fade. It's unlikely that any future technology will ever be able to recover data from a RAM chip that has been powered down for months, even if cryogenically stored. Of course brains are neither RAM chips nor hard drives, but the point is that high data remanence is specific to certain technologies and not some general property of all practical information storage systems.
It suggests an Argument from Ignorance: "Pumping someone full of cryoprotectant and gradually lowering their temperature until they can be stored in liquid nitrogen is not a secure way to erase a person." The implicit argument here is that since you can't be sure that cryopreservation destroys a person, you should infer that it doesn't. That's obviously a fallacy.
Furthermore the referenced post introduces spurious motives (signalling) for signing up for cryonics, committing the fallacy of Social Conformance: "Not signing up for cryonics - what does that say? That you've lost hope in the future. That you've lost your will to live. That you've stopped believing that human life, and your own life, is something of value."
Nick Bostrom or Anders Sandberg
Why do you particularly care about their opinion? They are not domain experts, and since they are futurists/transhumanists, there is a non-negligible chance that their opinion on the subject is biased.
Replies from: None, gwern, passive_fist↑ comment by [deleted] · 2013-10-18T19:50:20.962Z · LW(p) · GW(p)
There's another problem with the hard drive analogy - he's comparing the goal of completely erasing a hard drive to erasing just enough of a person for them to not be the "same" person anymore.
With the kind of hardcore hard drive erasing he talks about, the goal is to make sure that not one bit is recoverable. If the attacker can determine that there may have been a jpeg somewhere at some time, or what kind of filesystem you were using, they win.
The analogous win in cryonics would be for future archeologists to recover any information about who you were as a person, which is orders of magnitude easier than doing a near-complete reconstruction of the algorithm-that-is-you.
Replies from: BaconServ↑ comment by BaconServ · 2013-10-18T20:38:34.514Z · LW(p) · GW(p)
Where, exactly, in the brain that information is stored is all too relevant a question. For all we know it could be more feasible to scrape identity from IRC logs and email history. I would not personally be upset if I no longer had a favorite flavor of ice cream, for example.
↑ comment by gwern · 2013-10-21T15:45:30.317Z · LW(p) · GW(p)
I agree with this objection to the hard-drive analogy. Some of the pro-cryonics arguments, like the examples of wood frogs & human drowning victims/coma/flatliners as well as what is currently believed about how memory is stored, go some way to showing that memory & personality are closer to hard drives than RAM, but they are far from conclusive. I've suggested in the past that what would close down this objection would be training some small creature which can be vitrified & revived successfully, and verifying that its memories are still there, but as far as I know, this has never been done.
↑ comment by passive_fist · 2013-10-20T04:04:40.403Z · LW(p) · GW(p)
Retention time can be increased to days or perhaps weeks by cooling to cryogenic temperatures before power down
Actually no, modern DRAM loses information in milliseconds, even assuming you could cool it down to liquid helium temperatures. After a few seconds the data is almost entirely random.
Replies from: pengvado↑ comment by pengvado · 2013-10-20T05:20:42.449Z · LW(p) · GW(p)
Here's a citation for the claim of DRAM persisting with >99% accuracy for seconds at operating temperature or hours at LN2. (The latest hardware tested there is from 2007. Did something drastically change in the last 6 years?)
Replies from: passive_fist↑ comment by passive_fist · 2013-10-20T20:18:09.029Z · LW(p) · GW(p)
Yup, the introduction of DDR3 memory. See http://www1.cs.fau.de/filepool/projects/coldboot/fares_coldboot.pdf
comment by BaconServ · 2013-10-17T21:08:11.676Z · LW(p) · GW(p)
It is important to realize that the techniques available at the moment you sign up for cryonics, the advancements made thus far, are not the ones that you are likely to be subject to. While current practices may well fail to preserve usable identity-related information (regardless of experts' opinions on whether or not this is happening (if it were known now, we could know it)), advancements and research continue to be made. It is not in your best interests to be preserved as soon as possible, but it is in your best interests to sign up as soon as possible, to ensure eventual preservation. Too often I see people basing their now-decisions on the now-technology, rather than the now-measured rate of advancement of the technology. The condition of "If I'm going to be dying soon" is simply not likely enough that most of us should be implicitly considering it as a premise.
Replies from: Ishaan, V_V↑ comment by Ishaan · 2013-10-19T08:15:12.665Z · LW(p) · GW(p)
Can you get the cryo procedure itself at a lower rate by signing up now? Or is it just a matter of insurance premiums? (If the latter, it might be better to cut out the middleman and do the low-risk investing directly?)
Replies from: BaconServ↑ comment by BaconServ · 2013-10-19T08:35:57.665Z · LW(p) · GW(p)
I wouldn't know about rates; I'm basing it more on the uncertainty of when your death will be. Someone else under this comment tree mentioned life insurance in relation to the cryonics payment, but I really haven't looked into it myself. Honestly, I'm just stating the logic as I see it. My own stance is that I don't value my perpetual existence very highly. I'm not suicidal, and I see myself as likely to live a long time with some form of life extension, but I'm not too choosy about when I'm going to die.
Replies from: Ishaan↑ comment by Ishaan · 2013-10-19T19:49:34.528Z · LW(p) · GW(p)
Oh, ok. I thought we were ignoring the risk of dying now for the purposes of discussion, since you were talking about how you should base your actions off your estimate of tech. capability at the estimated time of death rather than current tech.
↑ comment by V_V · 2013-10-18T14:27:34.034Z · LW(p) · GW(p)
Too often I see people basing their now-decisions on the now-technology, rather than the now-measured rate of advancement of the technology.
The rate of advancement of cryopreservation technology that occurred in the past is already an ill-defined concept, very difficult to measure in a sensible way. Making extrapolations to the future is basically a random guess.
Anyway, if you think there is a chance that viable cryopreservation technology may appear within your lifetime, it seems that the best course of action would be to start investing money in low-risk assets and then sign up for cryonics if and when the technology becomes available, rather than making a blind commitment right now.
↑ comment by bokov · 2013-10-18T16:01:43.218Z · LW(p) · GW(p)
If you are confident that you will be able to pay for cryo out of pocket whenever that time comes, then waiting would be the best course of action.
However, most people fund cryo by making the cryo provider the beneficiary of their life insurance policy. For them, it is better to sign up as soon as possible so that they lock in lower insurance rates (which go up the older you are obviously) and to limit the risk that they will have developed a health condition that will make them uninsurable by the time they do try to sign up.
Replies from: V_V↑ comment by V_V · 2013-10-18T19:51:59.216Z · LW(p) · GW(p)
Does life insurance pay off more than a low-risk investment over your expected lifespan?
If it does, you might get a life insurance policy but delay signing up for cryonics until (and if) you become confident that it works. This way you also save on the membership fees.
↑ comment by Lumifer · 2013-10-18T20:07:41.478Z · LW(p) · GW(p)
Does life insurance pay off more than a low-risk investment over your expected lifespan?
It pays off less.
The insurance companies actually take your premiums and put them into low-risk investment portfolios. The expected payout of the insurance policy is less than the expected value of the portfolio -- that's how insurance companies make a profit.
Replies from: CarlShulman, bokov, V_V↑ comment by CarlShulman · 2013-10-21T00:19:47.807Z · LW(p) · GW(p)
Also, insurance companies often front-load the premium payments relative to actuarial risk: at first you pay more than the actuarially fair amount, and less later. The insurance companies know that many people will eventually let the coverage drop due to financial trouble, error, etc., so there is a cross-subsidy to those who make use of the whole policy (similar to the way one can get free loans from credit card companies by always paying the monthly bill, which the companies accept as a cost of acquiring customers who will take out high-interest balances).
This makes life insurance more desirable for someone unusually likely to keep the coverage for its whole term, and less so for the typical person.
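A toy illustration of that front-loading effect, with made-up numbers (a sketch of the idea, not actual actuarial figures):

```python
# Toy sketch of front-loaded level premiums vs. actuarially fair annual cost.
# All numbers are invented for illustration only.
fair_annual_cost = [100, 300, 500]   # hypothetical fair cost of coverage per year
level_premium = [280, 280, 280]      # hypothetical level premium charged each year

# Someone who lapses after year one has overpaid relative to their fair cost:
overpayment_by_lapser = level_premium[0] - fair_annual_cost[0]
print(overpayment_by_lapser)  # 180

# Someone who keeps the policy for the whole term pays less than the fair total;
# the gap is funded partly by lapsers' early overpayments:
subsidy_to_full_term_holder = sum(fair_annual_cost) - sum(level_premium)
print(subsidy_to_full_term_holder)  # 60
```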
Replies from: bokov, Lumifer↑ comment by Lumifer · 2013-10-21T17:31:20.720Z · LW(p) · GW(p)
This makes life insurance more desirable for someone unusually likely to keep the coverage for its whole term
Of course whether someone will actually keep the coverage for the whole term is uncertain and calculating a reasonable forecast seems rather hard.
↑ comment by bokov · 2013-10-18T20:40:43.632Z · LW(p) · GW(p)
If an insurance company screws up and loses money on what it thought was a low-risk investment, is it still obligated to pay out your beneficiaries?
Do you believe you are less likely to screw up in this manner than a professional fund manager hired by a multi-million dollar company whose existence depends on accurate forecasts?
Will your heirs and/or creditors find it easier to grab your portfolio or an insurance policy of which they are neither the owners nor the beneficiaries?
Replies from: Lumifer↑ comment by Lumifer · 2013-10-18T20:50:50.068Z · LW(p) · GW(p)
If an insurance company screws up and loses money on what it thought was a low-risk investment, is it still obligated to pay out your beneficiaries?
Sure, but if it screws up in a sufficiently epic fashion it won't have any money to pay to your beneficiaries.
Do you believe you are less likely to screw up in this manner than a professional fund manager hired by a multi-million dollar company whose existence depends on accurate forecasts?
Me? Why, yes, I do.
Will your heirs and/or creditors find it easier to grab your portfolio or an insurance policy of which they are neither the owners nor the beneficiaries?
I don't know enough of bankruptcy/probate law to answer.
I might also point out that I had said nothing about the wisdom of buying term life insurance. I suspect that, as usual, it depends. For some people it's a good deal, for others not so much.
↑ comment by BaconServ · 2013-10-18T19:27:19.230Z · LW(p) · GW(p)
Naive measurements are still measurements. Personally, I would like to see more research into prediction as a science. It is difficult because you want to jump ahead decades, and five-year-out predictions have only so much value, but I (naively) predict that we'll get better at prediction the more we practice concretely doing so. I would (naively) expect that the data used to calibrate predictions would come from interviews with researchers paired with advanced psychology.
"Plotting your personal rate of advancement against the team and the field, we predict a breakthrough in April of next year."
"So, I can just slack off until then?"
Replies from: gwern, Lumifer, V_V
↑ comment by gwern · 2013-10-21T15:34:50.326Z · LW(p) · GW(p)
Personally, I would like to see more research into prediction as a science.
You could try reading up on what is already known. Silver's The Signal and the Noise is not a bad start if you know nothing at all; if you already have some expertise in the area, the anthology Principles of Forecasting edited by Armstrong (available in Google & Libgen IIRC) covers a lot of topics.
↑ comment by Lumifer · 2013-10-18T19:43:25.446Z · LW(p) · GW(p)
Personally, I would like to see more research into prediction as a science.
If you want non-domain-specific, that's called statistics, specifically statistical modeling.
Replies from: BaconServ↑ comment by BaconServ · 2013-10-18T20:32:59.089Z · LW(p) · GW(p)
I specifically want it to be more specific.
Replies from: Lumifer↑ comment by Lumifer · 2013-10-18T20:45:32.965Z · LW(p) · GW(p)
Do you mean more confidence in the prediction? Narrower error bands?
Replies from: BaconServ↑ comment by BaconServ · 2013-10-19T02:11:55.388Z · LW(p) · GW(p)
More domain-specific, invention-specific: plotting events with ever-narrower error bands. If you could isolate, three years in advance, the specific month in which the news media would get in a buzz about uploading, that would be a significant increase in prediction usefulness.
Replies from: Lumifer↑ comment by Lumifer · 2013-10-19T02:33:01.241Z · LW(p) · GW(p)
That doesn't sound realistic to me. It sounds impossible.
Replies from: BaconServ↑ comment by BaconServ · 2013-10-19T02:56:01.197Z · LW(p) · GW(p)
It may be impossible even with significant effort, but it's certainly impossible without having first tried. I want to know, if nothing else, the limits of reasonable prediction. That itself would be useful information. Even if we can never get past 50% accuracy for predictions, knowing that all such predictions can only be 50% likely is actionable in the presence of any such prediction.
Replies from: Lumifer↑ comment by Lumifer · 2013-10-19T03:06:38.826Z · LW(p) · GW(p)
I want to know, if nothing else, the limits of reasonable prediction.
These are very much domain-specific plus are the function of available technology.
You seem to want psychohistory -- unfortunately it's entirely fiction.
Replies from: BaconServ↑ comment by BaconServ · 2013-10-19T03:20:25.240Z · LW(p) · GW(p)
I want exactly as I've stated: Research into a method.
Replies from: Lumifer↑ comment by Lumifer · 2013-10-19T03:25:22.015Z · LW(p) · GW(p)
So, um... go for it?
Replies from: BaconServ↑ comment by BaconServ · 2013-10-19T03:27:37.938Z · LW(p) · GW(p)
I'm afraid that the only methods I can think up require vast collection of many different types of data, far beyond what I can currently manage myself.
Replies from: bokov, Lumifer↑ comment by bokov · 2013-10-19T05:10:47.128Z · LW(p) · GW(p)
Actually, there are prediction markets. Unfortunately the most useful one, Intrade, got closed (maybe the three-letter agencies felt threatened?), but hopefully there will be others. Oh, they're far from accurate, don't get me wrong.
But at least if you wanted to have some kind of starting estimate for something you knew nothing about, you could sometimes find one at Intrade.
Replies from: BaconServ↑ comment by BaconServ · 2013-10-19T05:18:09.796Z · LW(p) · GW(p)
I suspect it closed because it wasn't giving the kind of powerful results necessary to get funding-type attention. I suppose to start off, a prediction agency should predict its own success. :P
Now I'm curious about any self-referential predictions Intrade made...
↑ comment by Lumifer · 2013-10-19T03:45:28.187Z · LW(p) · GW(p)
Well then, I can only assure you that I'm certain such research is being actively conducted. I'm pretty sure the Three-Letter Agencies are very much interested in prediction. Any political organisation is very much interested in prediction. Anyone who plays in the financial markets is very much interested in prediction. So no, it doesn't look like there are too few resources committed to this problem.
Unfortunately, the problem seems to be really hard.
comment by jefftk (jkaufman) · 2013-10-27T17:12:04.501Z · LW(p) · GW(p)
To answer this question we should talk to lots of neuroscientists. Here's one:
There's a burden of proof issue here: If there is a small group making a scientific claim that the larger scientific community finds ludicrous, skepticism should be the default position. I'm not aware of any peer-reviewed publication explicitly debunking cryonics. Probably the reason is that practicing lab scientists aren't inclined to write up a refutation of a particular idea when all you need to see it's bullshit is an undergraduate-level understanding of biology. So, since I can't point you to a systematic refutation, I'll give you this in the way of citation: http://jcb.rupress.org/content/188/1/145.full
This is a technically impressive study; they get really pretty and informative EM results. Excepting minor advances in the few years since it was published, this is close to state of the art as far as vitrification of brain tissue goes. If what the cryonics huckster companies were offering provided THIS level of preservation in a whole brain, then maybe we could have an interesting conversation. Cryonics would still be hopeless and vapid for other reasons, but at least you could count on fine membrane structure in synapses being preserved.
But in order to GET such good preservation, they had to take a slice of brain 400 micrometers thick and 1mm in diameter and vitrify it at 2000 bar. The high pressure required would damage an entire brain, and this works specifically because you're dealing with a small volume of tissue. Slicing an entire MOUSE brain and reconstructing it at this level is a major goal of connectomics, pretty far off. But I promise you (my lab does this kind of freezing for EM fairly routinely, it's time-consuming and the sectioning is artifact-prone), what the huckster companies are offering to do to your head is NOT going to give the kind of preservation that maintains synaptic structure. Thinking that in the early 21st century you're going to pay a company to freeze your brain in a way that it could be "reanimated" is insane, and shows a stunning naivete of the underlying biological complexity. The service you are paying for is "please destroy my already-dead brain in a way that involves chemicals and coldness". -- David Ruhl, on the LessWrong Facebook group
There was also a discussion with another neuroscientist, kalla724, here a year ago.
Do people know of other places where a neuroscientist who knows about vitrification gives their opinion?
Replies from: Eliezer_Yudkowsky, ChrisHallquist, Gurkenglas↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-01-12T02:01:03.288Z · LW(p) · GW(p)
"When all you need to see it's bullshit is an undergraduate-level understanding of biology" is an extremely clear cue that the speaker does not understand the current state of the cryonics debate and cannot be trusted to summarize it. Anyone who does not specifically signal that they understand the concept of information-theoretic death as mapping many cognitive-identity-distinct initial physical states to the same atomic-level physical end state is not someone whose summaries or final judgment you can possibly reasonably trust. If they cite sources you might look at those sources, but you're going to have to figure out on your own what it means.
Replies from: Calvin, EHeller↑ comment by Calvin · 2014-01-12T02:50:47.882Z · LW(p) · GW(p)
Can you please elaborate on how and why sufficient understanding of the concept of information-theoretic death as mapping many cognitive-identity-distinct initial physical states to the same atomic-level physical end state helps to alleviate concerns raised by the author?
Replies from: KnaveOfAllTrades↑ comment by KnaveOfAllTrades · 2014-01-12T10:56:37.904Z · LW(p) · GW(p)
The basic idea of getting cryonics is that it offers a chance of massively extended lifespan, because there is a chance that it preserves one's identity. That's the first-run approximation, with additional considerations arising from making this reasoning a bit more rigorous, e.g. that cryonics is competitive against other interventions, that the chance is not metaphysically tiny, etc.
One thing we might make more rigorous is what we mean by 'preservation'. Well, preservation refers to reliably being able to retrieve the person from the hopefully-preserved state, which requires that the hopefully-preserved state cannot have arisen from many non-matching states undergoing the process.
The process that squares positive numbers preserves perfectly (is an injection), because you can always in theory tell me the original number if I give you its square. The process that squares real numbers preserves imperfectly but respectably since, for any positive output, that output could have come from two numbers (e.g. 1^2=1=(-1)^2). Moreover, if we only cared about the magnitude (modulus, i.e. ignoring the sign) of the input, even squaring over real numbers would perfectly preserve what we cared about.
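A minimal code sketch of that injectivity point (purely illustrative):

```python
# Illustrative sketch: squaring positive numbers is injective (input recoverable),
# squaring over all reals is not (sign lost), yet the magnitude we might care
# about is still perfectly preserved.
import math

def recover_positive(square):
    # Invert x -> x**2 when x is known to be positive: no information lost.
    return math.sqrt(square)

assert recover_positive(9.0) == 3.0

def preimages_over_reals(square):
    # Over all reals, each positive output has two possible inputs.
    root = math.sqrt(square)
    return {root, -root}

assert preimages_over_reals(1.0) == {1.0, -1.0}   # sign irrecoverable
assert abs(-1.0) == abs(1.0) == 1.0               # magnitude preserved either way
```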
Similarly, there is a chance that the hopefully-preserved states generated by cryonics do/will be generated only by the original identity, or possibly some acceptably close identities. That we do not currently know if it is possible to retrieve acceptably close identities from hopefully-preserved states—or even if we did, how one would do so—does not necessarily make the probability that it is possible to do so in principle low enough that cryonics can be laughed off.
A monkey might be bamboozled by the sequence of square numbers written in Arabic numerals, but that would not prove that the rule could not be deduced in principle, or that information had been lost for human purposes. Similarly we might currently be unable to reverse vitrification or look under a microscope and retrieve the identity, but it is unfair to demand this level of proof, and it is annoying and frustrating in the same way as logical rudeness (even if technically it is not logically rude) when every few months another person smugly spouts this type of argument as a 'refutation' of cryonics and writes cryonicists off, and then gets upvoted handsomely. (Hence Eliezer losing patience and outright declaring that people who don't seem to (effectively) understand this point about mappings don't have a clue.)
Formalisations of these concepts arise in more obviously mathematical contexts like the study of functions and information theory, but it feels like neither of those should be necessary background for a smart person to understand the basic idea. But in all honesty, I think the inferential gap for someone who has not explicitly considered at least the idea of injections before is big enough that often people apply the absurdity heuristic or become scared to do something unconventional before the time it takes to cross that inferential gap.
I think there's a good chance that there are neurodegenerative conditions that are currently irreversible but which many more would think worth working on than cryonics, simply because they associate cryonics with 'computer nerd failure mode' or apply the absurdity heuristic or because attacking neurodegenerative conditions is Endorsed by Experts whereas cryonics is not or because RationalWiki will laugh at them. Possible partial explanation: social anxiety that mockery will ensue for trying something not explicitly endorsed by an Expert consensus (which is a realistic fear, given how many people basically laugh at cryonicists or superficially write it off as 'bullshit'). And yes, in this mad world, social anxiety really might be the decisive factor for actual humans in whether to pursue an intervention that could possibly grant them orders of magnitude more lifespan.
↑ comment by EHeller · 2014-01-13T02:22:32.402Z · LW(p) · GW(p)
It seems from the comment that he does understand information-theoretic views of death - after all, he is talking about preserving the information via thin slices and EM scanning.
To get good vitrification results, enough to preserve the synaptic structure, his claim is that you have to take mm thin slices of brain and preserve them at nearly 2000 atmospheres of pressure. This is obviously not what current cryonics procedures do. They shoot you full of cryoprotectants (which destroys some chemical information in the brain at the outset), and then freeze the brain whole (which will lead to a lot of fracturing).
To attack his point, I think you'd need to claim that either:
- he is technically wrong and current vitrification does decently preserve synaptic structures.
- mechanically scrambling synaptic structures via cracking is a 1 to 1 process that preserves information.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-01-13T11:08:26.362Z · LW(p) · GW(p)
(2) is the interesting claim, though I'd hardly trust his word on (1) since I didn't see any especially alarming paragraphs in the actual paper referenced. I'm not an expert on that level of neurobiology (Anders Sandberg is, and he's signed up for cryonics), but I am not interested in hearing from anyone who has not demonstrated that they understand that we are talking about doing intelligent cryptography to a vitrified brain and potentially molecule-by-molecule analysis and reasoning, rather than, "Much cold. Very damage. Boo."
Unless someone spells out exactly what is supposed to destroy all cues of a piece of info, by explaining why two cognitively distinct start states end up looking like molecularly identical endstates up to thermal noise, so that we can directly evaluate the technical evidence for ourselves, all they're asking us to do is trust their authoritative summary of their intuitions; and you'd be just plain dumb to trust the authoritative summary of someone who didn't understand the original argument.
I'm trying not to be impatient here, but when I actually went to look at the cited paper and it said nothing at all about damage, it turned out this eminent authority's original argument consisted merely of, "To read off great synaptic info with current big clumsy microscopes and primitive imaging processing, we need big pressures. Look at this paper involving excellent info and big pressures. Cryonicists don't have big pressures. Therefore you're dead QED."
Replies from: Alsadius, EHeller, Calvin↑ comment by EHeller · 2014-01-13T17:55:34.541Z · LW(p) · GW(p)
(2) is the interesting claim
So say something about it. Your whole comment is an attack on (1), but regardless of his word on whether or not thin-slice vitrification is currently the best we can do, we KNOW fracturing happens with current brain preservation techniques. Liquid nitrogen is well below the glass transition, so fracturing is unavoidable.
Why should we expect fracturing/cracking to be 1 to 1?
Replies from: lsparrish↑ comment by lsparrish · 2014-01-14T01:33:09.513Z · LW(p) · GW(p)
If you're worried about the effects of cracking, you can pay for ITS. LN2 is only used because it is cheap and relatively low-tech to maintain.
If you ask me it's a silly concern if we're assuming nanorepair or uploading. Cracking is just a surface discontinuity, and it forms at a point in time when the tissue is already in a glassy state where there can't be much mixing of molecules. The microcracks that form in frozen tissue are a much greater concern (but not the only concern with freezing). The fact that vitrified tissue forms large, loud cracks is related to the fact that it does such a good job holding things in place.
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2014-01-17T03:53:31.160Z · LW(p) · GW(p)
What's "ITS"? (Google 'only' hits for "it's") How much more expensive is it? Is it offer by Alcor and CI?
Replies from: lsparrish↑ comment by lsparrish · 2014-01-17T18:50:38.234Z · LW(p) · GW(p)
Short for Intermediate Temperature Storage.
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2014-01-21T00:38:57.634Z · LW(p) · GW(p)
Oh ok. Thank you.
↑ comment by Calvin · 2014-01-13T12:07:19.527Z · LW(p) · GW(p)
I mean, it is either his authoritative summary or yours, and with all due honesty that guy actually takes care to construct an actual argument instead of resorting to appeals to authority and ridicule.
Personally I would be more interested in someone explaining exactly how cues of a piece of info are going to be reassembled and the whole brain is going to be reconstructed from partial data.
Proving that cryo-preservation + restoration does indeed work, and also showing the exact method as to how, seems like a more persuasive way to construct an argument rather than proving that your opponents failed to show that what you are claiming is currently impossible.
If cryonics providers don't have a proper way of preserving your brain state (even if they can repair partial damage by guessing), then I am sorry to say, but you are indeed dead.
↑ comment by hairyfigment · 2014-01-13T22:33:03.167Z · LW(p) · GW(p)
I don't know why you'd say he gets it - he explicitly talks about re-animation. If he addresses the actual issue it would appear to be by accident.
↑ comment by ChrisHallquist · 2013-10-28T04:23:13.653Z · LW(p) · GW(p)
Thanks for this. This has substantially reduced my estimate of the odds of cryonics working.
Replies from: lukeprog, somervta↑ comment by lukeprog · 2014-01-11T20:29:25.325Z · LW(p) · GW(p)
I would not have updated much at all based on the David Ruhl comment. Cryobiology is a young field, and the details about what we can and can't do with current preservation technology change every couple of years. Also, I think almost all bio-scientists who say current preservation techniques are insufficient to preserve you are underestimating the likelihood and the powers of machine superintelligence, and in general have hardly studied superintelligence theory at all. Skepticism about cryonics working should, I think, mostly come from the other parts of the conjunction.
↑ comment by Gurkenglas · 2013-11-01T22:38:56.488Z · LW(p) · GW(p)
You can't use "Sufficient connectomics are far off" as a counterargument to cryonics, with its passing the technological buck to the future and all.
Replies from: jkaufman↑ comment by jefftk (jkaufman) · 2013-11-02T04:46:36.718Z · LW(p) · GW(p)
The important part of the claim is that current vitrification is nowhere near sufficient to preserve your brain's information.
Replies from: Gurkenglas↑ comment by Gurkenglas · 2013-11-03T01:50:51.469Z · LW(p) · GW(p)
Might it be enough if the brain was cut into slices and those stored with the process mentioned above, ignoring excessive cost?
Replies from: jkaufman, None↑ comment by jefftk (jkaufman) · 2013-11-04T05:34:01.070Z · LW(p) · GW(p)
You lose some data at the borders between slices, and you'd have to do it extremely quickly, but I'd expect it to work much better than the current method.
↑ comment by [deleted] · 2014-01-11T19:09:22.320Z · LW(p) · GW(p)
You'd lose any ability to do direct revival in that case.
Replies from: Gurkenglas↑ comment by Gurkenglas · 2014-01-11T20:48:43.341Z · LW(p) · GW(p)
What does "direct revival" mean? If the slices were properly reconnected, the function of the brain should be unchanged.
Replies from: orbenn↑ comment by orbenn · 2014-01-12T18:38:02.523Z · LW(p) · GW(p)
I think he means "create a functional human you, while primarily sourcing the matter from your old body". He's commenting that slicing the brain makes this more difficult, but it sounds like the alterations caused by current vitrification techniques make it impossible either way.
Replies from: Gurkenglas↑ comment by Gurkenglas · 2014-01-13T08:56:00.191Z · LW(p) · GW(p)
That criterion doesn't make sense as per No Individual Particles and Identity Isn't In Specific Atoms.
Replies from: Alsadius↑ comment by Alsadius · 2014-01-16T07:11:50.497Z · LW(p) · GW(p)
Unless you expect that revival of existing tissue will be a much easier path than assembly of completely new tissue. That's a plausible assumption.
Replies from: Gurkenglas↑ comment by Gurkenglas · 2014-01-16T09:20:12.602Z · LW(p) · GW(p)
We shouldn't make what thing we want dependant on what is harder or easier to do, and in any case if one of these is possible, the other is too. Some more centuries of technology development don't mean much when you're suspended.
Replies from: Alsadius↑ comment by Alsadius · 2014-01-16T15:29:05.955Z · LW(p) · GW(p)
It doesn't change what we want, but it does change how likely we are to get it. Waiting additional centuries increases the probability of catastrophe significantly, be it a power outage or a nuclear war, as well as making it correspondingly harder to reintroduce yourself to society. And we don't actually have any reason to believe that if one is possible then the other is too - perhaps human technology will never get to the point of being able to map the brain perfectly and the only path to resurrection is with the same tissue, perhaps the tissue will be beyond repair and uploading will be the only viable option. Both are plausible.
comment by Vaniver · 2013-10-18T00:12:20.218Z · LW(p) · GW(p)
Susskind's Rule of Thumb seems worthwhile here. The actionable question doesn't seem to be so much "Does Bostrom publicly say he thinks cryonics could work?" as "Is Bostrom signed up for cryonics?" (Hanson, for example, is signed up, despite concerns that it most likely won't work.)
Replies from: ChrisHallquist, ikrase↑ comment by ChrisHallquist · 2013-10-18T00:31:42.297Z · LW(p) · GW(p)
Good point. In fact, Sandberg, Bostrom, and Armstrong are all signed up for cryonics.
Replies from: James_Miller↑ comment by James_Miller · 2013-10-18T01:28:36.650Z · LW(p) · GW(p)
As is Peter Thiel.
Replies from: lmm↑ comment by lmm · 2013-10-18T20:16:23.342Z · LW(p) · GW(p)
Peter Thiel is incredibly rich, so signing up for cryonics is not necessarily expressing a strong preference.
Replies from: David_Gerard↑ comment by David_Gerard · 2013-10-18T21:51:08.361Z · LW(p) · GW(p)
It may be considering how few similarly rich people are signed up.
Replies from: lmm↑ comment by lmm · 2013-10-19T08:56:08.220Z · LW(p) · GW(p)
I'd say it's more that a rich person not signing up is expressing a strong preference against. For people who believe rich people are smarter than average this should constitute a substantial piece of evidence.
Imagine people buying a car where it costs $1,000,000 to change the colour. We conclude that anyone who pays cares strongly about the colour; anyone who doesn't pay we can only say their feelings aren't enormously strong. Conversely imagine it costs $100 to change the colour. Then for anyone who pays we can only conclude they care a bit about the colour, while anyone who doesn't pay must be quite strongly indifferent to the car's colour.
comment by Mitchell_Porter · 2013-10-18T11:23:26.952Z · LW(p) · GW(p)
I don't believe any of the various purely computational definitions of personhood and survival, so just preserving the shapes of neurons, etc., doesn't mean much to me. My best bet is that the self is a single physical thing, a specific physical phenomenon, which forms at a definite moment in the life of the organism, persists through time even during unconsciousness, and ceases to exist when its biological matrix becomes inhospitable. For example, it might be an intricate topological vortex that forms in a (completely hypothetical) condensate of phonons and/or biophotons, somewhere in the cortex.
That is just a wild speculation, made for the sake of concreteness. But what is really unlikely is that I am just a virtual machine, in the sense of computer science - a state machine whose states are coarse-grainings of the actual microphysical states, and which can survive to run on another, physically distinct computer, so long as it reproduces the rough causal structure of the original.
Physically, what is a computer? Nuclei and electrons. And physically, what is a computer program? It is an extreme abstraction of what some of those nuclei and electrons are doing. Computers are designed so that these abstractions remain valid - so that the dynamics of the virtual machine will match the dynamics of the physical object, unless something physically disruptive occurs.
The physical object is the reality, the virtual machine is just a concept. But the information-centric theory of what minds are and what persons are, is that they are virtual machines - a reification of a conceptual construct. This is false to the robust reality of consciousness, especially, which is why I insist on a theory of the self that is physical and not just computational.
I don't want to belabor this point, but just want to make clear again why I dissent from the hundred protean ideas out there, about mind uploading, copies, conscious simulations, platonic programs, personal resurrection from digital brain-maps, and so on, in favor of speculations about a physical self within the brain. Such a self would surely have unconscious coprocessors, other brain regions that would be more like virtual machines, functional adjuncts to the conscious part, such as the immediate suppliers of the boundary conditions which show up in experience as sensory perceptions. But you can't regard the whole of the mind as nothing but virtual machines. Some part of it has to be objectively real.
What would be the implications of this "physical" theory of identity, for cryonics? I will answer as if the topological vortex theory is the correct one, and not just a placeholder speculation.
The idea is that you begin to exist when the vortex begins to exist, and you end when it ends. By this criterion, the odds look bad for the proposition that survival through cryonics is possible. I could invent a further line of speculation as to how the web of quantum entanglement underlying the vortex is not destroyed by the freezing process, but rather gets locked into the ground state of the frozen brain; and such a thing is certainly thinkable, but that's all, and it is equally thinkable that the condensate hosting the vortex depends for its existence on a steady expenditure of energy provided by cellular metabolism, and must therefore disintegrate when the cells freeze. From this perspective cryonics looks like an unlikely gamble, a stab in the dark. So an advocate would have to revert to the old argument that even if the probability of survival through cryonics is close to zero, the probability of survival through non-cryonics is even closer to zero.
What about the idea of surviving by preserving your information? The vortex version of this concept is, OK, during this life you are a quantum vortex in your brain, and that vortex must cease to exist in a cryonically preserved brain; but in the future we can create a new vortex in a new brain, or in some other appropriate physical medium, and then we can seed it with information from the old brain. And thereby, you can live again - or perhaps just approximate-you, if only some of the information got through.
To say anything concrete here requires even more speculation. One might say that the nature of such resurrection schemes would depend a great deal on the extent to which the details of a person depend on information in the vortex, or on information in the virtual coprocessors of the vortex. Is the chief locus of memory, a virtual machine outside of and separate from the conscious part of the brain, coupled to consciousness so that memories just appear there as needed; or are there aspects of memory which are embedded in the vortex-self itself? To reproduce the latter would require, not just the recreation of memory banks adjoining the vortex-self, but the shaping and seeding of the inner dynamics of the vortex.
Either way, personally I find no appeal in the idea of "survival" via such construction of a future copy. I'm a particular "vortex" already; when that definitively sputters out, that's it for me. But I know many others feel differently, and such divergent attitudes might still exist, even if a vortex revolution in philosophy of mind replaced the program paradigm.
I somewhat regret the extremely speculative character of these remarks. They read as if I'm a vortex true believer. The point is to suggest what a future alternative to digital crypto-dualism might look like.
Replies from: Cyan, BaconServ, Risto_Saarelma, passive_fist, Viliam_Bur, Ishaan, Nisan, scav, bokov, scientism↑ comment by Cyan · 2013-10-18T14:16:29.140Z · LW(p) · GW(p)
I don't believe any of the various purely literary definitions of narrative and characterization, so just preserving the shapes and orderings of the letters of a story, etc., doesn't mean much to me. My best bet is that a novel is a single physical thing, a specific physical phenomenon, which forms at a definite moment in the printing of a book, persists through time even when not read, and ceases to exist when its physical form becomes illegible. For example, it might be an intricate topological vortex that forms in a (completely hypothetical) condensate of ink and/or paper, somewhere between the front and back cover.
That is just a wild speculation, made for the sake of concreteness. But what is really unlikely is that a novel is just a collection of letters, in the sense of orthography - a sequence of glyphs representing letters that are coarse-grainings of the actual microphysical states, and which can survive to be read on another, physically distinct medium, so long as it reproduces the sequence of letters of the original.
Physically, what is a novel? Nuclei and electrons. And physically, what is a story? It is an extreme abstraction of what some of those nuclei and electrons are doing. Books are designed so that these abstractions remain valid - so that the dynamics of the story will match the sequence of the letters, unless something physically disruptive occurs.
The physical object is the reality, the narrative is just a concept. But the information-centric theory of what stories are and what novels are, is that they are narratives - a reification of a conceptual construct. This is false to the robust reality of a reader's consciousness, especially, which is why I insist on a literary theory that is physical and not just computational.
I don't want to belabor this point, but just want to make clear again why I dissent from the hundred protean ideas out there, about narrative uploading, copies, conscious readers, authorial intent, instances of decompression from digital letter-maps, and so on, in favor of speculations about a physical story within the book. Such a story would surely have information-theoretic story structures, other book regions that would be more like narratives, structural adjuncts to the novel part, such as the immediate suppliers of the boundary conditions which show up in experience as plot structure. But you can't regard the whole of the novel as nothing but creative writing. Some part of it has to be objectively real.
I think I'll stop here. Apologies to Mitchell Porter, who I judge to be a smart guy -- more knowledgeable than me about physics, without question -- who happens to believe a crazy thing. (I expect he judges my beliefs philosophically incoherent and hence crazy, so we're even on that score.) I should note that the above analogy hasn't been constructed with a great deal of care; I expect it can be picked apart quite thoroughly.
ETA: As I re-read this, I feel kind of bad about the mocking tone expressed by this kind of rhetorical construction, so let me state explicitly that I did it for the lulz; on the actual substantive matter at issue, I judge Mitchell Porter's comment to be at DH4 on the disagreement hierarchy and my own reply to be at DH3.
Replies from: BaconServ, Mitchell_Porter↑ comment by BaconServ · 2013-10-18T19:47:55.176Z · LW(p) · GW(p)
As much as I might try to find holes in the analogy, I still insisted I ought to upvote your comment, because frankly, it had to be said.
In trying to find those holes, I actually came to agree with your analogy quite well: The story is recreated in the mind/brain by each individual reader, and does not necessarily depend on the format. In the same way, if consciousness has a physical presence that it lacked in a simulation, then we will need to account for and simulate that as well. It may even eventually be possible to design an experiment to show that the raw mechanism of consciousness and its simulation are the same thing. Barring any possibility of simulation of perception, we can think of our minds as books to be read by a massive biologically-resembling brain that would retain such a mechanism, allowing the full re-creation of our consciousness in that brain from a state of initially being a simulation that it reads. I have to say, once I'm aware I'm a simulation, I'm not terribly concerned about transferring to different mediums of simulation.
↑ comment by Mitchell_Porter · 2013-10-19T12:12:30.617Z · LW(p) · GW(p)
A story in a book, versus a mind in a brain. Where to begin in criticizing that analogy!
I'm sure there's some really profound way to criticize that analogy, as actually symptomatic of a whole wrong philosophy of mind. It's not just an accident that you chose to criticize a pro-physical, anti-virtual theory of mind, by inventing a semantic phlogiston that materially inhabits the words on a page and gives them their meaning. Unfortunately, even after so many years arguing with functionalists and other computationalists, I still don't have a sufficiently nuanced understanding of where their views come from, to make the profound critique, the really illuminating one.
But surely you see that explaining how it is that words on a page have meaning, and how it is that thoughts in a brain have meaning, are completely different questions! The book doesn't think, it doesn't act, the events in the story do not occur in the book. There is no meaning in the book unless brains are involved. Without them, words on a page are just shapes on a surface. The experience of the book as meaningful does not occur in the book, it occurs in the brain of a reader; so even the solution of this problem is fundamentally about brains and not about books. The fact that meaning is ultimately not in the book is why semantic phlogiston is absurd in that context.
But the brain is a different context. It's the end of the line. As with all of naturalism's ontological problems with mind, once you get to the brain, you cannot evade them any further. By all means, let the world outside the skull be a place wholly without time or color or meaning, if that is indeed your theory of reality. That just means you have to find all those things inside the skull. And you have to find them for real, because they are real. If your theory of such things, is that they are nothing more than labels applied by a neural net to certain inputs, inputs that are not actually changing or colorful or meaningful - then you are in denial about your own experience.
Or at least, I would have to deny the basic facts of my own experience of reality, in order to adopt such views. Maybe you're some other sort of being, which genuinely doesn't experience time passing or see colors or have thoughts that are about things. But I doubt it.
Replies from: Cyan↑ comment by Cyan · 2013-10-19T18:22:15.504Z · LW(p) · GW(p)
I agree with almost all of what you wrote. Here's the only line I disagree with.
If your theory of such things, is that they are nothing more than labels applied by a neural net to certain inputs, inputs that are not actually changing or colorful or meaningful - then you are in denial about your own experience.
I affirm that my own subjective experience is as you describe; I deny that I am in denial about its import.
I want to be clear that I'm discussing the topic of what makes sense to affirm as most plausible given what we know. In particular, I'm not calling your conjecture impossible.
Human brains don't look different in lower-level organization than those of, say, cats, and there's no higher level structure in the brain that obviously corresponds to whatever special sauce it is that makes humans conscious. On the other hand, there are specific brain regions which are known to carry out specific functional tasks. My understanding is that human subjective experience, when picked apart by reductive cognitive neuroscience, appears to be an ex post facto narrative constructed/integrated out of events whose causes can be more-or-less assigned to particular functional sub-components of the brain. Positing that there's a special sauce -- especially a non-classical one -- just because my brain's capacity for self-reflection includes an impression of "unity of consciousness" -- well, to me, it's not the simplest conceivable explanation.
Maybe the universe really does admit the possibility of an agent which approximates my internal structure to arbitrary (or at least sufficient) accuracy and claims to have conscious experiences for reasons which are isomorphic to my own, yet actually has none because it's implemented on an inadequate physical substrate. But I doubt it.
↑ comment by BaconServ · 2013-10-18T20:13:02.063Z · LW(p) · GW(p)
I think the term "vortex" is apt simply because it demonstrates you're aware it sounds silly, but in a world where intent is more readily apparent, I would simply use the standard term: Soul. (Bearing in mind that there are mortal as well as immortal models of the soul. (Although, if the soul does resemble a vortex, then it may well be possible that it keeps spinning in absence of the initial physical cause. Perhaps some form of "excitation in the quantum soul field" that can only be destroyed by meeting a "particle" (identity/soul, in this case) of the perfect waveform necessary to cancel it out.))
As in my previous comment, if the soul exists, then we will need to discover that as a matter of researching physical preservation/cryonics. Then the debate begins anew about whether or not we've discovered all the parts we need to affirm that the simulation is the same thing as the natural physical expression.
Personally, I am more a fan of Eliezer_Yudkowsky's active continuing process interpretation. I think the identity arises from the process itself, rather than any specific momentary configuration. If I can find no difference between the digital and the physical versions of myself, I won't be able to assume there are any.
↑ comment by Risto_Saarelma · 2013-10-18T13:09:41.023Z · LW(p) · GW(p)
Beyond it being unfortunate for the naive theory of personal continuity if it did, do you have a reason why the nexus of subjective experience can't be destroyed every time a person goes unconscious and then recreated when they wake up?
Replies from: bokov↑ comment by bokov · 2013-10-18T20:32:22.470Z · LW(p) · GW(p)
No, with a few technical modifications it can be quite plausible. However, if it is actually true, I have no more reason to care about my own post-revival self than I do about some other person's.
Once my estimate of the likelihood that the patternists are right in that way is updated past a certain threshold, even the modest cost of remaining a cryonicist might not seem worth it.
The other practical consequence of patternists being right is an imperative to work even harder at anti-aging research because it might be our only hope after all.
↑ comment by passive_fist · 2013-10-20T04:12:10.093Z · LW(p) · GW(p)
My best bet is that the self is a single physical thing, a specific physical phenomenon, which forms at a definite moment in the life of the organism, persists through time even during unconsciousness, and ceases to exist when its biological matrix becomes inhospitable.
This is just another way of saying you believe in a soul. And if you think it persists during unconsciousness then why can't it persist during freezing?
For example, it might be an intricate topological vortex that forms in a (completely hypothetical) condensate of phonons and/or biophotons, somewhere in the cortex.
This sentence is meaningless as far as I know.
But what is really unlikely is that I am just a virtual machine, in the sense of computer science - a state machine whose states are coarse-grainings of the actual microphysical states, and which can survive to run on another, physically distinct computer, so long as it reproduces the rough causal structure of the original.
You say it's unlikely but give no justification. In my opinion it is a far more likely hypothesis than the existence of a soul.
I am surprised that a comment like this has received upvotes.
↑ comment by Viliam_Bur · 2013-10-18T14:53:45.596Z · LW(p) · GW(p)
But the information-centric theory of what minds are and what persons are, is that they are virtual machines - a reification of a conceptual construct. This is false to the robust reality of consciousness,
At this point I failed to understand what you are saying.
(What is the "robust reality of consciousness", and why can't it be simulated?)
↑ comment by Ishaan · 2013-10-19T08:42:35.443Z · LW(p) · GW(p)
I don't believe any of the various purely computational definitions of personhood and survival
So, this goes well beyond the scope of cryonics. We aren't discussing whether any particular method is doable - rather, we're debating the very possibility of running a soul on a computer.
To reproduce the latter would require, not just the recreation of memory banks adjoining the vortex-self, but the shaping and seeding of the inner dynamics of the vortex.
...but all you are doing here is adding a more complex element to the brain, Russell's-Teapot style. It's still part of the brain. If the vortex-soul thing is physical, observable, and can be described by a computable function, then there is no theoretical reason why you can't copy the vortex-thing into a computer.
I'm a particular "vortex" already; when that definitively sputters out, that's it for me.
...so why did we even bother with this whole vortex-soul-thingy then? Why not just say "when my brain stops computing stuff, that's it for me"? How does the insertion of an extra object into the cognitive machinery in any way facilitate this argument?
They read as if I'm a vortex true believer.
I don't mean that you believe in the vortex specifically. I mean that your exact argument can be made without inserting any extra things (vortexes, souls, whatever) into our current understanding of the brain.
What you are basically saying is that you can't copy-paste consciousness... it doesn't matter what its specific substrate is or whether it has vortexes. If you were running as software on a computer in the first place, you'd say that cutting and pasting the program would constitute death, no?
...Right? Or did I miss something important about your argument?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2013-10-19T11:26:14.156Z · LW(p) · GW(p)
I reject the computational paradigm of mind in its most ambitious form, the one which says that mind is nothing but computation - a notion which, outside of rigorous computer science, isn't even well-defined in these discussions.
One issue that people blithely pass by when they just assume computationalism, is meaning - "representational content". Thoughts, mental states, are about things. If you "believe in physics", and are coming from a naturalistic perspective, then meaning, intentionality, is one of the great conundrums, up there with sensory qualia. Computationalism offers no explanation of what it means for a bunch of atoms to be about something, but it does make it easy to sail past the issue without even noticing, because there is a purely syntactic notion of computation denuded of semantics, and then there is a semantic notion of computation in which computational states are treated as having meanings embedded into their definition. So all you have to do is to say that the brain "computes", and then equivocate between syntactic computation and semantic computation, between the brain as physical state machine and the mind as semantic state machine.
The technological object "computer" is a semantic state machine, but only in the same way that a book has meaning - because of human custom and human design. Objectively, it is just a syntactic state machine, and in principle its computations could be "about" anything that's isomorphic to them. But actual states of mind have an objective intrinsic semantics.
Ultimately, I believe that meaning is grounded in consciousness, that there are "semantic qualia" too; that the usual ontologies of physics must be wrong, because they contain no such things - though perhaps the mathematics of some theory of physics not too distant from what we already have, can be reinterpreted in terms of a new ontology that has room for the brain having such properties.
But until such time as all of that is worked out, computationalism will persist as a pretender to the title of the true philosophy of mind, incidentally empowering numerous mistaken notions about the future interplay of mind and technology. In terms of this placeholder theory of conscious quantum vortices, there's no problem with the idea of neural prostheses that work with your vortex, or of conscious vortices in something other than a biological brain; but if a simulation of a vortex isn't itself a vortex, then it won't be conscious.
According to theories of this nature, in which the ultimate substrate of consciousness is substance rather than computation, the very idea of a "conscious program" is a conceptual error. Programs are not the sorts of things that are conscious; they are a type of virtual state machine that runs on a Turing-universal physical state machine. Specifically, a computer program is a virtual machine designed to preserve the correctness of a particular semantic interpretation of its states. That's the best ontological characterization of what a computer program is, that I can presently offer. (I'm assuming a notion of computation that is not purely syntactic - that the computations performed by the program are supposed to be about something.)
Incidentally, I coughed up this vortex notion, not because it solves the ontological problem of intentional states, but just because knotted vortex lines are a real thing from physics that have what I deem to be properties necessary in a physical theory of consciousness. They have complex internal states (their topology) and they have an objective physical boundary. The states usually considered in computational neuroscience have a sorites problem; from a microphysical perspective, that considers what everything is really made of, they are defined extremely vaguely, akin to thermodynamic states. This is OK if we're talking about unconscious computations, because they only have to exist in a functional sense; if the required computational mappings are performed most of the time under reasonable circumstances, then we don't have to worry about the inherent impreciseness of the microphysical definition of those states.
But conscious states have to be an objective and exact part of any ultimate ontology. Consciousness is not a fuzzy idea which humans made up and which may or may not be part of reality. In a sense, it is your local part of reality, the part of reality that you know is there. It therefore cannot be regarded as a thing which exists approximately or vaguely or by convention, all of which can be said of thermodynamic properties and of computational states that don't have a microphysically exact definition. The quantum vortex in your cortex is, by hypothesis, something whose states have a microphysically exact definition, and so by my physical criterion, it at least has a chance of being the right theory.
Replies from: Ishaan, khafra↑ comment by Ishaan · 2013-10-19T20:09:54.111Z · LW(p) · GW(p)
incidentally empowering numerous mistaken notions about the future interplay of mind and technology.
Is that a prediction then? That your family and friends could somehow recognize the difference between you and a simulated copy of you? That the simulated copy of you would somehow not perceive itself as you? That the process just can't work and can't create anything recognizably conscious, intelligent, or human? (And does that mean strong AI needs to run on something other than a computer?) Or are you thinking it will be a philosophical zombie, and everyone will be fooled into thinking it's you?
What do you think will actually happen, if/when we try to simulate stuff? Let's just say that we can do it roughly down to the molecular level.
states have a microphysically exact definition
What precludes us from simulating something down to a sufficiently microphysically exact level? (I understand that you've got a physical theory of consciousness, but I'm trying to figure out how this micro-physical stuff plays into it.)
Replies from: Cyan, Mitchell_Porter↑ comment by Cyan · 2013-10-20T05:13:12.954Z · LW(p) · GW(p)
That the simulated copy of you would somehow not perceive itself as you? That the process just can't work and can't create anything recognizably conscious, intelligent, or human?
Don't worry -- the comments by Mitchell_Porter in this comment thread were actually written by a vortexless simulation of an entirely separate envortexed individual who also comments under that account. So here, all of the apparent semantic content of "Mitchell_Porter"'s comments is illusory. The comments are actually meaningless syntactically-generated junk -- just the emissions of a very complex ELIZA chatbot.
↑ comment by Mitchell_Porter · 2013-10-21T03:42:03.935Z · LW(p) · GW(p)
What do you think will actually happen, if/when we try to simulate stuff?
I'll tell you what I think won't happen: real feelings, real thoughts, real experiences.
A computational theory of consciousness implies that all conscious experiences are essentially computations, and that the same experience will therefore occur inside anything that performs the same computation, even if the "computer" is a network of toppling dominoes, random pedestrians making marks on walls according to small rulebooks, or any other bizarre thing that implements a state machine.
This belief derives entirely from one theory of one example - the computational theory of consciousness in the human brain. That is, we perceive that thinking and experiencing have something to do with brain activity, and one theory of the relationship is that conscious states are states of a virtual machine implemented by the brain.
I suggest that this is just a naive idea, and that future neuroscientific and conceptual progress will take us back to the idea that the substrate of consciousness is substance, not computation; and that the real significance of computation for our understanding of consciousness, will be that it is possible to simulate consciousness without creating it.
From a physical perspective, computational states have the vagueness of all functional, user-dependent concepts. What is a chair? Perhaps, anything you can sit on. But people have different tastes, whether you can tolerate sitting on a particular object may vary, and so on. "Chair" is not an objective category; in regions of design-space far from prototypical examples of a chair, there are edge cases whose status is simply disputed or questionable.
Exactly the same may be said of computational states. The states of a transistor are a prototypical example of a physical realization of binary computational states. But as we consider increasingly messy or unreliable instantiations, it becomes increasingly difficult to just say, yes, that's a 0 or a 1.
Consider the implications of this for a theory of consciousness which says that the necessary and sufficient condition for the occurrence of a given state of consciousness is the occurrence of a specific "computational state". It means that whether or not a particular consciousness exists is not a yes-or-no thing - it's a matter of convention or definition or where you draw the line in state space.
This is untenable in exactly the same way that Copenhagenist complacency about the state of reality in quantum mechanics is untenable. It makes no sense to say that the electron has a position, but not a definite position, and it makes no sense to say that consciousness is a physical thing, but that whether or not it exists in a specific physical situation is objectively indeterminate.
If you are going to say that consciousness depends on the state of the physical universe, there must be a mapping which gives unique and specific answers for all possible physical states. There cannot be edge cases that are intrinsically undetermined, because consciousness is an objective reality, whereas chairness is an imputed property.
The eerie dualism of computer theories of consciousness, whereby the simulated experience mystically hovers over or dwells within the computer mainframe, chain of dominos, etc - present in the same way, regardless of what the "computer" is made of - might already have served as a clue that there was something wrong about this outlook. But the problem in developing this criticism is that we don't really know how to make a nondualistic alternative work.
Suppose that the science of tomorrow came to the conclusion that the only things in the world that can be conscious, are knots of flux in elementary force fields. Bravo, it's a microphysically unambiguous criterion... but it's still going to be property dualism. The physical property "knotted in a certain madly elaborate shape", and the subjective property "having a certain intricate experience", are still not the same thing. The eerie dualism is still there, it's just that it's now limited to lines of flux, and doesn't extend to bitstreams of toppling dominoes, Searlean language rooms, and so on. We would still have the strictly physical picture of the universe, and then streams of consciousness would be an extra thing added to that picture of reality, according to some laws of psychophysical correlation.
However, I think this physical turn, away from the virtual-machine theory of consciousness, at least brings us a little closer to nondualism. It's still hard to imagine, but I see more potential on this path, for a future theory of nature in which there is a conscious self, that is also a physical entity somewhere on the continuum of physical entities in nature, and in which there's no need to say "physically it's this, but subjectively it's that" - a theory in which we can speak of the self's conscious state, and its causal physical interactions, in the same unified language. But I do not see how that will ever happen with a purely computational theory, where there will always be a distinction between the purely physical description, and the coarse-grained computational description that is in turn associated with conscious experience.
Replies from: Risto_Saarelma, arundelo↑ comment by Risto_Saarelma · 2013-10-21T11:27:50.908Z · LW(p) · GW(p)
What do you think will actually happen, if/when we try to simulate stuff?
I'll tell you what I think won't happen: real feelings, real thoughts, real experiences.
It'll still be pretty cool when the philosophical zombie uploads who act exactly like qualia-carrying humans go ahead and build the galactic supercivilization of trillions of philosophical zombie uploads acting exactly like people and produce massive amounts of science, technology and culture. Most likely there will even be some biological humans around, so you won't even have to worry about nobody ever getting to experience any of it.
Replies from: yli↑ comment by yli · 2013-10-21T12:05:26.295Z · LW(p) · GW(p)
Actually, because the zombie uploads are capable of all the same reasoning as M_P, they will figure out that they're not conscious and replace themselves with biological humans.
On the other hand, maybe they'll discover that biological humans aren't conscious either, they just say they are for reasons that are causally isomorphic to the reasons for which the uploads initially thought they were conscious, and then they'll set out to find a substrate that really allows for consciousness.
↑ comment by arundelo · 2013-10-22T02:54:23.638Z · LW(p) · GW(p)
How do you respond to the thought experiment where your neurons (and glial cells and whatever) are replaced one-by-one with tiny workalikes made out of non-biological material? Specifically, would you be able to tell the difference? Would you still be conscious when the replacement process was complete? (Or do you think the thought experiment contains flawed assumptions?)
Feel free to direct me to another comment if you've answered this elsewhere.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2013-10-22T21:36:32.108Z · LW(p) · GW(p)
My scenario violates the assumption that a conscious being consists of independent replaceable parts.
Just to be concrete: let's suppose that the fundamental physical reality consists of knotted loops in three-dimensional space. Geometry comes from a ubiquitous background of linked simple loops like chain-mail, other particles and forces are other sorts of loops woven through this background, and physical change is change in the topology of the weave.
Add to this the idea that consciousness is always a state of a single loop, that the property of the loop which matters is its topology, and that the substrate of human consciousness is a single incredibly complex loop. Maybe it's an electromagnetic flux-loop, coiled around the microtubules of a billion cortical neurons.
In such a scenario, to replace one of these "consciousness neurons", you don't just emulate an input-output function, you have to reproduce the coupling between local structures and the extended single object which is the true locus of consciousness. Maybe some nano-solenoids embedded in your solid-state neuromorphic chips can do the trick.
Bear in mind that the "conscious loop" in this story is not meant to be epiphenomenal. Again, I'll just make up some details: information is encoded in the topology of the loop, the loop topology interacts with electron bands in the microtubules, the electrons in the microtubules feel the action potential and modulate the transport of neurotransmitters to the vesicles. The single extended loop interacts with the localized information processing that we know from today's neuroscience.
So what would happen if you progressively replaced the neurons of a brain with elements that simply did not provide an anchor for an extended loop? Let's suppose that, instead of having nano-solenoids anchoring a single conscious flux-loop, you just have an extra type of message-passing between the neurochips, which emulates the spooling of flux-topological information. The answer is that you now have a "zombie", an unconscious entity which has been designed in imitation of a conscious being.
Of course, all these hypotheses and details are just meant to be illustrative. I expect that the actual tie between consciousness and microphysics will be harder to understand than "conscious information maps to knots in a loop of flux".
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2013-10-23T06:09:31.500Z · LW(p) · GW(p)
So what would happen if you progressively replaced the neurons of a brain with elements that simply did not provide an anchor for an extended loop? Let's suppose that, instead of having nano-solenoids anchoring a single conscious flux-loop, you just have an extra type of message-passing between the neurochips, which emulates the spooling of flux-topological information. The answer is that you now have a "zombie", an unconscious entity which has been designed in imitation of a conscious being.
This is done one neuron at a time, though, with the person awake and narrating what they feel so that we can see if everything is going fine. Shouldn't some sequence of neuron replacement lead to the replacement of neurons that were previously providing consciously accessible qualia to the remaining biological neurons that still host most of the person's consciousness? And shouldn't this lead to a noticeable cognitive impairment they can report, if they're still using their biological neurons to control speech (we'd probably want to keep this the case as long as possible)?
Is this really a thing where you can't actually go ahead and say that if the theory is true, the simple neurons-as-black-boxes replacement procedure should lead to progressive cognitive impairment and probably catatonia, and if the person keeps saying everything is fine throughout the procedure, then there might be something to the hypothesis of people being made of parts after all? This isn't building a chatbot that has been explicitly designed to mimic high-level human behavior. The neuron replacers know about neurons, nothing more. If our model of what neurons do is sufficiently wrong, then the aggregate of simulated neurons isn't going to go zombie, it's just not going to work because it's copying the original connectome that only makes sense if all the relevant physics are in play.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2013-10-24T12:11:02.094Z · LW(p) · GW(p)
My basic point was just that, if consciousness is only a property of a specific physical entity (e.g. a long knotted loop of planck-flux), and if your artificial brain doesn't contain any of those (e.g. it is made entirely of short trivial loops of planck-flux), then it won't be conscious, even if it simulates such an entity.
I will address your questions in a moment, but first I want to put this discussion back in context.
Qualia are part of reality, but they are not part of our current physical theory. Therefore, if we are going to talk about them at all, while focusing on brains, there is going to be some sort of dualism. In this discussion, there are two types of property dualism under consideration.
According to one, qualia, and conscious states generally, are correlated with computational states which are coarse-grainings of the microphysical details of the brain. Coarse-graining means that the vast majority of those details do not matter for the definition of the computational state.
According to the other sort of theory, which I have been advocating, qualia and conscious states map to some exact combination of exact microphysical properties. The knotted loop of planck-flux, winding through the graviton weave in the vicinity of important neurons, etc., has been introduced to make this option concrete.
My actual opinion is that neither of these is likely to be correct, but that the second should be closer to the truth than the first. I would like to get away from property dualism entirely, but it will be hard to do that if the physical correlate of consciousness is a coarse-grained computational state, because there is already a sort of dualism built into that concept - a dualism between the exact microphysical state and the coarse-grained state. These coarse-grained states are conceptual constructs, equivalence classes that are vague at the edges and with no prospect of being made exact in a nonarbitrary way, so they are just intrinsically unpromising as an ontological substrate for consciousness. I'm not arguing with the validity of computational neuroscience and coarse-grained causal analysis; I'm just saying it's not the whole story. When we get to the truth about mind and matter, it's going to be more new-age than it is cyberpunk, more organic than it is algorithmic, more physical than it is virtual. You can't create consciousness just by pushing bits around; it's something far more embedded in the substance of reality. That's my "prediction".
Now back to your comment. You say, if consciousness - and conscious cognition - really depends on some exotic quantum entity woven through the familiar neurons, shouldn't progressive replacement of biological neurons with non-quantum prostheses lead to a contraction of conscious experience and an observable alteration and impairment of behavior, as the substitution progresses? I agree that this is a reasonable expectation, if you have in mind Hans Moravec's specific scenario, in which neurons are being replaced one at a time and while the subject is intellectually active and interacting with their environment.
Whether Moravec's scenario is itself reasonable is another thing. There are about 30 million seconds in a year and there are billions of neurons just in the cortex alone. The cortical neurons are very entangled with each other via their axons. It would be very remarkable if a real procedure of whole-brain neural substitution didn't involve periods of functional impairment, as major modules of the brain are removed and then replaced with prostheses.
I also find it very unlikely that attempting a Moravec procedure of neuronal replacement, and seeing what happens, will be important as a test of such rival paradigms of consciousness. I suppose you're thinking in terms of a hypothetical computational theory of neurons whose advocates consider it good enough to serve as the basis of a Moravec procedure, versus skeptics who think that something is being left out of the model.
But inserting functional replacements for individual cortical neurons in vivo will require very advanced technology. For people wishing to conduct experiments in mind emulation, it will be much easier to employ the freeze-slice-and-scan paradigm currently contemplated for C. elegans, plus state-machine models from functional imaging for brain regions where function really is coarser in its implementation. Meanwhile, on the quantum side, while there certainly need to be radical advances in the application of concepts from condensed-matter physics to living matter, if the hypothesized quantum aspects of neuronal function are to be located... I think the really big advances that are required, must be relatively simple. Alien to our current understandings, which is why they are hard to attain, but nonetheless simple, in the way that the defining concepts of physics are simple.
There ought to be a physical-ontological paradigm which simultaneously (1) explains the reality behind some theory-of-everything mathematical formalism (2) explains how a particular class of entities from the theory can be understood as conscious entities (3) makes it clear how a physical system like the human brain could contain one such entity with the known complexity of human consciousness. Because it has to forge a deep connection between two separate spheres of human knowledge - natural science and phenomenology of consciousness - new basic principles are needed, not just technical elaborations of known ways of thinking. So neurohacking exercises like brain emulation are likely to be not very relevant to the discovery of such a paradigm. It will come from inspired high-level thinking, working with a few crucial facts; and then the paradigm will be used to guide the neurohacking - it's the thing that will allow us to know what we're doing.
↑ comment by khafra · 2013-10-21T14:00:28.879Z · LW(p) · GW(p)
meaning - "representational content". Thoughts, mental states, are about things. If you "believe in physics", and are coming from a naturalistic perspective, then meaning, intentionality, is one of the great conundrums, up there with sensory qualia. Computationalism offers no explanation of what it means for a bunch of atoms to be about something
What do you think of Eliezer's approach to the "meaning" problem in The Simple Truth? I find the claim that the pebble system is about the sheep to be intuitively satisfying.
↑ comment by scav · 2013-10-18T16:05:05.235Z · LW(p) · GW(p)
My best bet is that the self is a single physical thing, a specific physical phenomenon, which forms at a definite moment in the life of the organism, persists through time even during unconsciousness, and ceases to exist when its biological matrix becomes inhospitable.
How much do you want to bet on the conjunction of all those claims? (hint: I think at least one of them is provably untrue even according to current knowledge)
That is just a wild speculation, made for the sake of concreteness.
I don't think it supplied the necessary amount of concreteness to be useful; this is usual for wild speculation. ;)
The physical object is the reality, the virtual machine is just a concept.
A running virtual machine is a physical process happening in a physical object. So are you.
This is false to the robust reality of consciousness
Well, nobody actually knows enough about the reality of consciousness to make that claim. It may be that it is incompatible with your intuitions about consciousness. Mine too, so I haven't any alternative claims to make in response.
Replies from: bokov↑ comment by bokov · 2013-10-18T17:22:22.070Z · LW(p) · GW(p)
How much do you want to bet on the conjunction of all those claims? (hint: I think at least one of them is provably untrue even according to current knowledge)
How much do you want to bet on the conjunction of yours?
Replies from: scav↑ comment by scav · 2013-10-18T19:20:20.870Z · LW(p) · GW(p)
Just for exercise, let's estimate the probability of the conjunction of my claims.
claim A: I think the idea of a single 'self' in the brain is provably untrue according to currently understood neuroscience. I do honestly think so; therefore P(A) is as close to 1.0 as makes no difference. Whether I'm right is another matter.
claim B: I think a wildly speculative vague idea thrown into a discussion and then repeatedly disclaimed does little to clarify anything. P(B) approx 0.998 - I might change my mind before the day is out.
claim C: The thing I claim to think in claim B is in fact "usually" true. P(C) maybe 0.97 because I haven't really thought it through but I reckon a random sample of 20 instances of such would be unlikely to reveal 10 exceptions, defeating the "usually".
claim D: A running virtual machine is a physical process happening in a physical object. P(D) very close to 1, because I have no evidence of non-physical processes, and sticking close to the usual definition of a virtual machine, we definitely have never built and run a non-physical one.
claim E: You too are a physical process happening in a physical object. P(E) also close to 1. Never seen a non-physical person either, and if they exist, how do they type comments on lesswrong?
claim F: Nobody knows enough about the reality of consciousness to make legitimate claims that human minds are not information-processing physical processes. P(F) = 0.99. I'm pretty sure I'd have heard something if that problem had been so conclusively solved, but maybe they were disappeared by the CIA or it was announced last week and I've been busy or something.
P(A, B, C, D, E, F) is approx 0.96.
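As a quick sanity check on that product, here's a minimal Python sketch; it just multiplies the rough guesses above, and it assumes the six claims can be treated as independent (which is itself an assumption, not something argued for here):

```python
# Rough guesses for each claim's probability, copied from the comment above;
# the claims are treated as independent, which is itself an assumption.
claims = {
    "A": 1.0,    # "as close to 1.0 as makes no difference"
    "B": 0.998,
    "C": 0.97,
    "D": 1.0,    # "very close to 1"
    "E": 1.0,    # "also close to 1"
    "F": 0.99,
}

joint = 1.0
for p in claims.values():
    joint *= p  # probability of the conjunction under independence

print(f"P(A, B, C, D, E, F) ~ {joint:.2f}")  # prints ~0.96
```

The independence assumption is the load-bearing part; the arithmetic itself is trivial.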
The amount of money I'd bet would depend on the odds on offer.
I fear I may be being rude by actually answering the question you put to me instead of engaging with your intended point, whatever it was. Sorry if so.
Replies from: bokov↑ comment by bokov · 2013-10-18T20:23:51.110Z · LW(p) · GW(p)
I fear I may be being rude by actually answering the question you put to me instead of engaging with your intended point, whatever it was. Sorry if so.
No, you're right. You did technically answer my question, and it wasn't rude; I should have made my intended point clearer. But your answer is really a restatement of your refutation of Mitchell Porter's position, not an affirmative defense of your own.
First of all, have I fairly characterized your position in my own post (near the bottom, starting with "For patternists to be right, both the following would have to be true...")?
If I have not, please let me know which of the conditions are not necessary and why.
If I have captured the minimum set of things that have to be true for you to be right, do you see how they (at least the first two) are also conjunctive and at least one of them is provably untrue?
Replies from: scav↑ comment by scav · 2013-10-20T11:10:34.109Z · LW(p) · GW(p)
Oh, OK. I get you. I don't describe myself as a patternist, and I might not be what you mean by it. In any case I am not making the first of those claims.
However, it seems possible to me that a sufficiently close copy of me would think it was me, experience being me, and would maybe even be more similar to me as a person than biological me of five years ago or five years hence.
I do claim that it is theoretically possible to construct such a copy, but I don't think it is at all probable that signing up for cryonics will result in such a copy ever being made.
If I had to give a reason for thinking it's possible in principle, I'd have to say: I am deeply sceptical that there is any need for a "self" to be made of anything other than classical physical processes. I don't think our brains, however complex, require in their physical construction anything more mysterious than room-temperature chemistry.
The amazing mystery of the informational complexity of our brains is undiminished by believing it to be physically prosaic when you reduce it to its individual components, so it's not like I'm trying to disappear a problem I don't understand by pretending that just saying "chemistry" explains it.
I stand by my scepticism of the self as a single indivisible entity with special properties that are posited only to make it agreeable to someone's intuition, rather than because it best fits the results of experiment. That's really all my post was about: impatience with argument from intuition and argument by hand-waving.
I'll continue to doubt the practicality of cryonics until they freeze a rat and restore it 5 years later to a state where they can tell that it remembers stimuli it was taught before freezing. If that state is a virtual rat running on silicon, that will be interesting too.
Replies from: bokov↑ comment by bokov · 2013-10-21T20:46:26.472Z · LW(p) · GW(p)
I'll continue to doubt the practicality of cryonics until they freeze a rat and restore it 5 years later to a state where they can tell that it remembers stimuli it was taught before freezing. If that state is a virtual rat running on silicon, that will be interesting too.
...and this is a weakly continualist concern that patternists should also agree with even if they disagree with the strong form ("a copy forked off from me is no longer me from that point forward and destroying the original doesn't solve this problem").
But this weak continualism is enough to throw some cold water on declaring premature victory in cryonic revival: the lives of humans have worth not only to others but to themselves, and just how close counts as "close enough", and how to tell the difference, are central to whether lives are being saved or taken away.
↑ comment by bokov · 2013-10-18T16:09:11.493Z · LW(p) · GW(p)
I somewhat regret the extremely speculative character of these remarks. They read as if I'm a vortex true believer.
On the contrary, thank you for articulating the problem in a way I hadn't thought of. I wish more patternists were as cautious about their own fallibility as you are about yours.
↑ comment by scientism · 2013-10-19T00:23:50.582Z · LW(p) · GW(p)
The problem with the computationalist view is that it confuses the representation with what is represented. No account of the structure of the brain is the brain. A detailed map of the neurons isn't any better than a child's crude drawing of a brain in this respect. The problem isn't the level of detail, it's that it makes no sense to claim a representation is the thing represented. Of course, the source of this confusion is the equally confused idea that the brain itself is a sort of computer and contains representations, information, etc. The confusions form a strange network that leads to a variety of absurd conclusions about representation, information, computation and brains (and even the universe).
Information about a brain might allow you to create something that functions like that brain or might allow you to alter another brain in some way that would make it more like the brain you collected information about ("like" is here relative), but it wouldn't then be the brain. The only way cryonics could lead to survival is if it led to revival. Any account that involves a step where somebody has to create a description of the structure of your brain and then create a new brain (or simulation or device) from that, is death. The specifics of your biology do not enter into it.
Cyan's post below demonstrates this confusion perfectly. A book does contain information in the relevant sense because somebody has written it there. The text is a representation. The book contains information only because we have a practice of representing language using letters. None of this applies to brains or could logically apply to brains. But two books can be said to be "the same" only for this reason and it's a reason that cannot possibly apply to brains.
Replies from: TheOtherDave, shminux, passive_fist, Cyan↑ comment by TheOtherDave · 2013-10-21T04:02:57.932Z · LW(p) · GW(p)
Just to make sure I'm following... your assertion is that my brain is not itself a sort of computer, does not contain representations, and does not contain information, my brain is some other kind of a thing, and so no amount of representations and information and computation can actually be my brain. They might resemble my brain in certain ways, they might even be used in order to delude some other brain into thinking of itself as me, but they are not my brain. And the idea that they might be is not even wrong, it's just a confusion. The information, the representations, the belief-in-continuity, all that stuff, they are something else altogether, they aren't my brain.
OK. Let's suppose all this is true, just for the sake of comity. Let's call that something else X.
On your account, should I prefer the preservation of my brain to the preservation of X, if forced to choose?
If so, why?
↑ comment by scientism · 2013-10-21T19:01:00.532Z · LW(p) · GW(p)
That's essentially correct. Preservation of your brain is preservation of your brain, whereas preservation of a representation of your brain (X) is not preservation of your brain or any aspect of you. The existence of a representation of you (regardless of detail) has no relationship to your survival whatsoever. Some people want to be remembered after they're dead, so I suppose having a likeness of yourself created could be a way to achieve that (albeit an ethically questionable one if it involved creating a living being).
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-10-21T19:34:33.434Z · LW(p) · GW(p)
OK, I think I understand your position.
So, suppose I develop a life-threatening heart condition, and have the following conversation with my cardiologist:
Her: We've developed this marvelous new artificial heart, and I recommend installing it in place of your damaged organic heart.
Me: Oh, is it easier to repair my heart outside of my body?
Her: No, no... we wouldn't repair your heart, we'd replace it.
Me: But what would happen to my heart?
Her: Um... well, we typically incinerate it.
Me: But that's awful! It's my heart. You're proposing destroying my heart!!!
Her: I don't think you quite understand. The artificial heart can pump blood through your body just as well as your original heart... better, actually, given your condition.
Me: Sure, I understand that, but that's mere function. I believe you can replicate the functions of my heart, but if you don't preserve my heart, what's the value of that?
I infer that on your account, I'm being completely absurd in this example, since the artificial heart can facilitate my survival just as well as (or better than) my original one, because really all I ought to value here is the functions. As long as my blood is pumping, etc., I should be content. (Yes? Or have I misrepresented your view of heart replacement?)
I also infer that you would further say that this example is nothing at all like a superficially similar example where it's my brain that's injured and my doctor is proposing replacing it with an artificial brain that merely replicates the functions of my brain (representation, information storage, computation and so forth). In that case, I infer, you would not consider my response absurd at all, since it really is the brain (and not merely its functions) that matter.
Am I correct?
If so, I conclude that I just have different values than you do. I don't care about my brain, except insofar as it's the only substrate I know of capable of implementing my X. If my survival requires the preservation of my brain, then it follows that I don't care about my survival.
I do care about preserving my X, though. Give me a chance to do that, and I'll take it, whether I survive or not.
Replies from: scientism↑ comment by scientism · 2013-10-22T00:53:46.232Z · LW(p) · GW(p)
I wouldn't say that a brain transplant is nothing at all like a heart transplant. I don't take the brain to have any special properties. However, this is one of those situations where identity can become vague. These things lie on a continuum. The brain is tied up with everything we do, all the ways in which we express our identity, so it's more related to identity than the heart. People with severe brain damage can suffer a loss of identity (i.e., severe memory loss, severe personality change, permanent vegetative state, etc). You can be rough and ready when replacing the heart in a way you can't be when replacing the brain.
Let me put it this way: The reason we talk of "brain death" is not because the brain is the seat of our identity but because it's tied up with our identity in ways other organs are not. If the brain is beyond repair, typically the human being is beyond saving, even if the rest of the body is viable. So I don't think the brain houses identity. In a sense, it's just another organ, and, to the degree that that is true, a brain transplant wouldn't be more problematic (logically) than a heart transplant, provided the dynamics underlying our behaviour could be somehow preserved. This is an extremely borderline case though.
So I'm not saying that you need to preserve your brain in order to preserve your identity. However, in the situation being discussed, nothing survives. It's a clear case of death (we have a corpse) and then a new being is created from a description. This is quite different from organ replacement! What I'm objecting to is the idea that I am information or can be "transformed" or "converted" into information.
What you're saying, as far as I can tell, is that you care more about "preserving" a hypothetical future description of yourself (hypothetical because presumably nobody has scanned you yet) than you do about your own life. These are very strange values to have - but I wish you luck!
Replies from: TheOtherDave, TheOtherDave↑ comment by TheOtherDave · 2013-10-22T02:51:31.732Z · LW(p) · GW(p)
Though, now that I think about it...
People with severe brain damage can suffer a loss of identity (i.e., severe memory loss, severe personality change, permanent vegetative state, etc).
Wait up. On your account, why should we call those things (memory loss, personality change, loss of cognitive ability) "loss of identity"? If something that has my memories, personality, and cognitive abilities doesn't have my identity, then it seems to follow that something lacking those things doesn't lack my identity.
It seems that on your account those things are no more "loss of identity" than losing an arm or a kidney.
Replies from: scientism↑ comment by scientism · 2013-10-22T13:32:28.914Z · LW(p) · GW(p)
It's the loss of faculties that constitutes the loss of identity, but faculties aren't transferable. For example, a ball might lose its bounciness if it is deflated and regain it if it is reinflated, but there's no such thing as transferring bounciness from one ball to another or one ball having the bounciness of another. The various faculties that constitute my identity can be lost and sometimes regained but cannot be transferred or stored. They have no separate existence.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-10-22T14:03:45.933Z · LW(p) · GW(p)
Ah, gotcha. Yeah, here again, I just can't imagine why I ought to care.
I mean, I agree that the attributes can't be "stored" if I understand what you mean by that. When I remove the air from a ball, there is no more bounciness; when I add air to a ball, there is bounciness again; in between, there is no bounciness. If I do that carefully enough, the bounciness now is in-principle indistinguishable from the bounciness then, but that's really all I can say. Sure.
That said, while I can imagine caring whether my ball bounces or not, and I can imagine caring whether my ball bounces in particular ways, if my ball bounces exactly the way it did five minutes ago I can't imagine caring whether what it has now is the same bounciness, or merely in-principle indistinguishable bounciness.
To me, this seems like an obvious case of having distinctions between words that simply don't map to distinctions between states of the world, and getting too caught up in the words.
By contrast, I can imagine caring whether I have the same faculties that constitute my identity as the guy who went to bed in my room last night, or merely in-principle indistinguishable faculties, in much the same way that I can imagine caring about whether my immortal soul goes to Heaven or Hell after I die. But it pretty much requires that I not think about the question carefully, because otherwise I conclude pretty quickly that I have no grounds whatsoever for caring, any more than I do about the ball.
So, yeah... I'd still much rather be survived by something that has memories, personality, and other identity-constituting faculties which are in-principle indistinguishable from my own, but doesn't share any of my cells (all of which are now tied up in my rapidly-cooling corpse), than by something that shares all of my cells but loses a significant chunk of those faculties.
Which I suppose gets us back to the same question of incompatible values we had the other day. That is, you think the above is clear, but that it's a strange preference for me to have, and you'd prefer the latter case, which I find equally strange. Yes?
Replies from: scientism↑ comment by scientism · 2013-10-22T19:48:02.652Z · LW(p) · GW(p)
Well, I would say the question of whether the ball had the "same" bounciness when you filled it back up with air would either mean just that it bounces the same way (i.e., has the same amount of air in it) or is meaningless. The same goes for your faculties. I don't think the question of whether you're the same person when you wake up as when you went to sleep - absent your being abducted and replaced with a doppelgänger - is meaningful. What would "sameness" or "difference" here mean? That seems to me to be another case of conceiving of your faculties as something object-like, but in this case one set disappears and is replaced by another indistinguishable set. How does that happen? Or have they undergone change? Do they change without there being any physical change? With the ball we let the air out, but what could happen to me in the night that changes my identity? If I merely lost and regained my faculties in the night, they wouldn't be different and it wouldn't make sense to say they were indistinguishable either (except to mean that I have suffered no loss of faculties).
It's correct that two balls can bounce in the same way, but quite wrong to think that if I replace one ball with the other (that bounces in the same way) I have the same ball. That's true regardless of how many attributes they share in common: colour, size, material composition, etc. I can make them as similar as I like and they will never become the same! And so it goes with people. So while your doppelgänger might have the same faculties as you, it doesn't make him the same human being as you, and, unlike you, he wasn't the person who did X on your nth birthday, etc, and no amount of tinkering will ever make it so. Compare: I painstakingly review footage of a tennis ball bouncing at Wimbledon and carefully alter another tennis ball to make it bounce in just the same way. No amount of effort on my part will ever make it the ball I saw bounce at Wimbledon! Not even the finest molecular scan would do the trick. Perhaps that is the scenario you prefer, but, you're quite right, I find it very odd.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-10-22T20:49:00.197Z · LW(p) · GW(p)
I don't think the question of whether you're the same person when you wake up as when you went to sleep [..] is meaningful.
I'm content to say that, though I'd also be content to say that sufficient loss of faculties (e.g., due to a stroke while I slept) can destroy my identity, making me no longer the same person. Ultimately I consider this a question about words, not about things.
Do [your faculties] change without there being any physical change?
Well, physical change is constant in living systems, so the whole notion of "without physical change" is somewhat bewildering. But I'm not assuming the absence of any particular physical change.
I can make them as similar as I like and they will never become the same! And so it goes with people.
Sure, that's fine. I don't insist otherwise.
I just don't think the condition you refer to as "being the same person" is a condition that matters. I simply don't care whether they're the same person or not, as long as various other conditions obtain. Same-person-ness provides no differential value on its own, over and above the sum of the value of the various attributes that it implies. I don't see any reason to concern myself with it, and I think the degree to which you concern yourself with it here is unjustified, and the idea that there's some objective sense in which it's valuable is just goofy.
So while your doppelgänger might have the same faculties as you, it doesn't make him the same human being as you, and, unlike you, he wasn't the person who did X on your nth birthday, etc, and no amount of tinkering will ever make it so.
Again: so what? Why should I care? I don't claim that your understanding of sameness is false, nor do I claim it's meaningless, I just claim it's valueless. OK, he's not the same person. So what? What makes sameness important?
To turn it around: suppose I am informed right now that I'm not the same person who did X on Dave's 9th birthday; that person died in 2012, and I'm a duplicate with all the same memories, personality, etc. I didn't actually marry my husband, I didn't _actually_ buy my house, I'm not actually my dog's owner, I wasn't actually hired to do my job.
This is certainly startling, and I'd greet such a claim with skepticism, but ultimately: why in the world should I care? What difference does it make?
Perhaps that is the scenario you prefer, but, you're quite right, I find it very odd.
Prefer to what?
So, as above, I'm informed that I'm actually a duplicate of Dave.
Do I prefer this state of affairs to the one where Dave didn't die in 2012 and I was never created? No, not especially... I'm rather indifferent between them.
Do I prefer this state of affairs to the one where Dave died in 2012 and I was never created? Absolutely!
Do I prefer this state of affairs to the one where Dave continued to live and I was created anyway? Probably not, although the existence of two people in 2013 who map in such detailed functional ways to one person in 2012 will take some getting used to.
Similarly: I am told I'm dying, and given the option of creating such a duplicate. My preferences here seem symmetrical. That is:
- Do I prefer that option to not dying and not having a duplicate? No, not especially, though the more confident I am of the duplicate's similarity to me the more indifferent I become.
- Do I prefer it to dying and not having a duplicate? Absolutely!
- Do I prefer it to having a duplicate and not-dying? Probably not, though it will take some getting used to.
Which of those preferences seem odd to you? What is odd about them?
Replies from: scientism↑ comment by scientism · 2013-10-23T03:34:25.386Z · LW(p) · GW(p)
The preferences aren't symmetrical. Discovering that you're a duplicate involves discovering that you've been deceived or that you're delusional, whereas dying is dying. From the point of view of the duplicate, what you're saying amounts to borderline solipsism; you don't care if any of your beliefs, memories, etc, match up with reality. You think being deluded is acceptable as long as the delusion is sufficiently complete. From your point of view, you don't care about your survival, as long as somebody is deluded into thinking they're you.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-10-23T12:50:58.866Z · LW(p) · GW(p)
There's no delusion or deception involved in any of the examples I gave.
In each example the duplicate knows it's the duplicate, the original knows it's the original; at no time does the duplicate believe it's the original. The original knows it's going to die. The duplicate does not believe that its memories reflect events that occurred to its body; it knows perfectly well that those events occurred to a different body.
Everyone in each of those examples knows everything relevant.
From your point of view, you don't care about your survival, as long as somebody is deluded into thinking they're you.
No, this isn't true. There are lots of scenarios in which I would greatly prefer my survival to someone being deluded into thinking that they're me after my death. And, as I said above, the scenarios I describe don't involve anyone being deluded about anything; the duplicate knows perfectly well that it's the duplicate and not the original.
Replies from: scientism↑ comment by scientism · 2013-10-25T02:42:15.642Z · LW(p) · GW(p)
If the duplicate says "I did X on my nth birthday" it's not true since it didn't even exist. If I claim that I met Shakespeare you can say, "But you weren't even born!" So what does the duplicate say when I point out that it didn't exist at that time? "I did but in a different body" (or "I was a different body")? That implies that something has been transferred. Or does it say, "A different body did, not me"? But then it has no relationship with that body at all. Or perhaps it says, "The Original did X on their nth birthday and the Original has given me permission to carry on its legacy, so if you have a question about those events, I am the authority on them now"? It gets very difficult to call this "memory." I suppose you could say that the duplicate doesn't have the original's memories but rather has knowledge of what the original did, but then in what sense is it a duplicate?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-10-25T14:45:12.100Z · LW(p) · GW(p)
If the duplicate says "I did X on my nth birthday" it's not true since it didn't even exist.
Correct.
So what does the duplicate say when I point out that it didn't exist at that time?
When talking to you, or someone who shares your attitude, my duplicate probably says something like "You're right, of course. I'm in the habit of talking about my original's experiences as though they're mine, because I experience them as though they were, and both I and my original are perfectly happy talking that way and will probably keep doing so. But technically speaking you're quite correct... I didn't actually do X on my 9th birthday, nor did I have a 9th birthday to do anything on in the first place. Thanks for pointing that out."
Which is closest to your last option, I suppose.
Incidentally, my duplicate likely does this in roughly the same tone of voice that an adoptive child might say analogous things when someone corrects their reference to "my parents" by claiming that no, their parents didn't do any of that, their adoptive parents did. If you were to infer a certain hostility from that tone, you would not be incorrect.
It gets very difficult to call this "memory."
It's not difficult for me to call this a memory at all... it's the original's memory, which has been copied to and is being experienced by the duplicate. But if you'd rather come up with some special word for that to avoid confusion with a memory experienced by the same body that formed it in the first place, that's OK with me too. (I choose not to refer to it as "knowledge of what the original did", both because that's unwieldy and because it ignores the experiential nature of memory, which I value.)
but then in what sense is it a duplicate?
Sufficient similarity to the original. Which is what we typically mean when we say that X is a duplicate of Y.
Replies from: scientism↑ comment by scientism · 2013-10-26T00:57:32.064Z · LW(p) · GW(p)
"I'm in the habit of talking about my original's experiences as though they're mine, because I experience them as though they were" appears to be a form of delusion to me. If somebody went around pretending to be Napoleon (answering to the name Napoleon, talking about having done the things Napoleon did, etc) and answered all questions as if they were Napoleon but, when challenged, reassured you that of course they're not Napoleon, they just have the habit of talking as if they are Napoleon because they experience life as Napoleon would, would you consider them delusional? Or does anything go as long as they're content?
To be honest, I'm not really sure what you mean by the experience of memory. Mental imagery?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-10-26T01:52:29.918Z · LW(p) · GW(p)
It has nothing to do with being content. If someone believes they are Napoleon, I consider them deluded, whether they are content or not.
Conversely, if they don't believe they are Napoleon, I don't consider them deluded, whether they are content or not.
In the example you give, I would probably suspect the person of lying to me.
More generally: before I call something a delusion, I require that someone actually believe it's true.
I'm not really sure what you mean by the experience of memory.
At this moment, you and I both know that I wrote this comment... we both have knowledge of what I did.
In addition to that, I can remember writing it, and you can't. I can have the experience of that memory; you can't.
The experience of memory isn't the same thing as the knowledge of what I did.
↑ comment by TheOtherDave · 2013-10-26T16:58:38.968Z · LW(p) · GW(p)
Though on further consideration, I suppose I could summarize our whole discussion as about whether I am content or not... the noun, that is, not the adjective. I mostly consider myself to be content, and would be perfectly content to choose distribution networks for that content based on their functional properties.
↑ comment by TheOtherDave · 2013-10-22T01:00:21.343Z · LW(p) · GW(p)
However, in the situation being discussed, nothing survives.
Lots of things survive. They just don't happen to be part of the original body.
What you're saying, as far as I can tell, is that you care more about "preserving" a hypothetical future description of yourself (hypothetical because presumably nobody has scanned you yet) than you do about your own life.
Yes, I think given your understanding of those words, that's entirely correct. My life with that "description" deleted is not worth very much to me; the continued development of that "description" is worth a lot more.
These are very strange values to have - but I wish you luck!
Right back atcha.
↑ comment by Shmi (shminux) · 2013-10-21T22:00:56.900Z · LW(p) · GW(p)
Suppose a small chunk of your brain is replaced with its functional equivalent: is the resulting chimera less "you"? If so, how can one tell?
Replies from: bokov, scientism↑ comment by bokov · 2013-10-22T14:21:08.887Z · LW(p) · GW(p)
Not necessarily less you. Why even replace? What about augment?
Add an extra "blank" artificial brain. Keep refining the design until the biological brain reports feeling an expanded memory capacity, or enhanced clarity of newly formed memories, or enhanced cognition. Let the old brain assimilate this new space in whatever as-yet poorly understood pattern and at whatever rate comes naturally to it.
With the patient's consent, reversibly switch off various functional units in the biological region of the brain and see if the function is reconstituted elsewhere in the synthetic region. If it is, this is evidence that the technique is working. If not, the technique may need to be refined. At some point the majority of the patient's brain activity is happening in the synthetic regions. Temporarily induce unconsciousness in the biological part; during and after the biological part's unconsciousness, interview the patient about what subjective changes they felt, if any.
Agreement between the external measurements and the patient's subjective assessment that continuity was preserved would be strong evidence to me that such a technique is a reliable means to migrate a consciousness from one substrate to another.
Migration should be sped up as a standard practice only to the extent that it is justified by ample data from many different volunteers (or patients whose condition requires it) undergoing incrementally faster migrations, measured as above.
As far as cryonics goes, the above necessarily requires actual revival before migration. The above approach rules out plastination and similar destructive techniques.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-10-22T14:47:55.659Z · LW(p) · GW(p)
I agree with all this, except maybe the last bit. Once the process of migration is well understood and if it is possible to calculate the structure of the synthetic part from the structure of the biological part, this knowledge can be used to skip the training steps and build a synthetic brain from a frozen/plastinated one, provided the latter still contains enough structure.
Anyway, my original question was to scientism, who rejected anything like that because
Any account that involves a step where somebody has to create a description of the structure of your brain and then create a new brain (or simulation or device) from that, is death. The specifics of your biology do not enter into it.
Replies from: bokov, bokov
↑ comment by bokov · 2013-10-22T15:26:44.694Z · LW(p) · GW(p)
It's not clear to me whether scientism believes that the mind is a process that cannot take place on any substrate other than a brain, or whether he shares my and (I think) Mitchell Porter's more cautious point of view that our consciousness can in principle exist somewhere other than a brain, but we don't yet know enough about neuroscience to be confident about what properties such a system must have.
I, for one, would be sceptical of there being no substrate possible at all except the brain, because it's a strong unsupported assertion on the same order as the (perhaps straw-man) patternist assertion that binary computers are an adequate substrate (or the stronger-still assertion that any computational substrate is adequate).
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-10-22T15:42:47.936Z · LW(p) · GW(p)
If I have understood scientism's comments, they believe neither of the possibilities you list in your first paragraph.
I think they believe that whether or not a mind can take place on a non-brain substrate, our consciousness(es) cannot exist somewhere other than a brain, because they are currently instantiated in brains, and cannot be transferred (whether to another brain, or anything else).
This does not preclude some other mind coming to exist on a non-brain substrate.
Replies from: bokov↑ comment by bokov · 2013-10-22T15:56:37.737Z · LW(p) · GW(p)
Here is a thought experiment that might not be a thought experiment in the foreseeable future:
Grow some neurons in vitro and implant them in a patient. Over time, will that patient's brain recruit those neurons?
If so, the more far-out experiment I earlier proposed becomes a matter of scaling up this experiment. I'd rather be on a more resilient substrate than neurons, but I'll take what I can get.
I'm betting that the answer to this will be "yes", following a line of reasoning similar to the one Drexler used to defend the plausibility of nanotech: the existence of birds implied the feasibility of aircraft; the existence of ribosomes implies the feasibility of nanotech... and neurogenesis, which occurs during development and has over the last few decades been found to continue in adulthood, implies the feasibility of replacing damaged brains or augmenting healthy ones.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-10-22T16:57:52.457Z · LW(p) · GW(p)
Yes, I agree with all of this.
↑ comment by bokov · 2013-10-22T15:46:25.921Z · LW(p) · GW(p)
build a synthetic brain from a frozen/plastinated one
I'm unconvinced that cryostasis will preserve the experience of continuity. Because of the thought experiment with the non-destructive copying of a terminal patient, I am convinced that plastination will fail to preserve it (I remain the unlucky copy, and in addition to that, dead).
My ideal scenario is one where I can undergo a gradual migration before I actually need to be preserved by either method.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-10-22T16:21:34.891Z · LW(p) · GW(p)
Because of the thought experiment with the non-destructive copying of a terminal patient
link?
Replies from: bokov↑ comment by bokov · 2013-10-22T16:43:27.291Z · LW(p) · GW(p)
http://lesswrong.com/lw/iul/looking_for_opinions_of_people_like_nick_bostrom/9x47
Replies from: shminux↑ comment by Shmi (shminux) · 2013-10-22T16:59:29.182Z · LW(p) · GW(p)
Ah, ok:
You've been non-destructively scanned, and the scan was used to construct a brand new healthy you who does everything you would do, loves the people you love, etc. Well, that's great for him, but you are still suffering from a fatal illness.
So your issue is that a copy of you is not you? And you would treat star trek-like transporter beams as murder? But you are OK with a gradual replacement of your brain, just not with a complete one? How fast would the parts need to be replaced to preserve this "experience of continuity"? Do drugs which knock you unconscious break continuity enough to be counted as making you into not-you?
Basically, what I am unclear on is whether your issue is continuity of experience or cloning.
Replies from: bokov↑ comment by bokov · 2013-10-22T17:20:42.872Z · LW(p) · GW(p)
So your issue is that a copy of you is not you? And you would treat star trek-like transporter beams as murder?
Nothing so melodramatic, but I wouldn't use them. UNLESS they were in fact manipulating my wave function directly somehow causing my amplitude to increase in one place and decrease in another. Probably not what the screenplay writers had in mind, though.
But you are OK with a gradual replacement of your brain, just not with a complete one?
Maybe even a complete one eventually. If the vast majority of my cognition has migrated to the synthetic regions, it may not seem as much of a loss when parts of the biological brain break down and have to be replaced. Hard to speak on behalf of my future self with only what I know now. This is speculation.
How fast would the parts need to be replaced to preserve this "experience of continuity"?
This is an empirical question that could be answered if/when it becomes possible to perform for real the thought experiment I described (the second one, with the blank brain being attached to the existing brain).
Basically, what I am unclear on is whether your issue is continuity of experience or cloning.
Continuity. I'm not opposed to non-destructive copies of me, but I don't see them as inherently beneficial to me either.
↑ comment by passive_fist · 2013-10-20T04:17:28.139Z · LW(p) · GW(p)
The point of cryonics is that it could lead to revival.
Any account that involves a step where somebody has to create a description of the structure of your brain and then create a new brain (or simulation or device) from that, is death.
Obviously. That's not what Mitchell_Porter's post was about, though.
↑ comment by Cyan · 2013-10-19T04:16:51.248Z · LW(p) · GW(p)
Any account that involves a step where somebody has to create a description of the structure of your brain and then create a new brain (or simulation or device) from that, is death.
You seem to think that creating a description of the structure of a brain is necessarily a destructive process. I don't know of any reason to assume that. If a non-destructive scan exists and is carried out, then there's no "death", howsoever defined. Right?
But anyway, let's grant your implicit assumption of a destructive scan, and suppose that this process has actually occurred to your brain, and "something that functions like [your] brain" has been created. Who is the resulting being? Who do they think they are? What do they do next? Do they do the sorts of things you would do? Love the people you love?
I grant that you do not consider this hypothetical being you -- after all, you are hypothetically dead. But surely there is no one else better qualified to answer these questions, so it's you that I ask.
Replies from: scientism, bokov, bokov↑ comment by scientism · 2013-10-19T13:32:56.767Z · LW(p) · GW(p)
I was referring to cryonics scenarios where the brain is being scanned because you cannot be revived and a new entity is being created based on the scan, so I was assuming that your brain is no longer viable rather than that the scan is destructive.
The resulting being, if possible, would be a being that is confused about its identity. It would be a cruel joke played on those who know me and, possibly, on the being itself (depending on the type of being it is). I am not my likeness.
Consider that, if you had this technology, you could presumably create a being that thinks it is a fictional person. You could fool it into thinking all kinds of nonsensical things. Convincing it that it has the same identity as a dead person is just one among many strange tricks you could play on it.
Replies from: Cyan↑ comment by Cyan · 2013-10-19T17:45:12.103Z · LW(p) · GW(p)
I was referring to cryonics scenarios where the brain is being scanned because you cannot be revived and a new entity is being created based on the scan, so I was assuming that your brain is no longer viable rather than that the scan is destructive.
Fair enough.
The resulting being, if possible, would be a being that is confused about its identity. [...] Consider that, if you had this technology, you could presumably create a being that thinks it is a fictional person. You could fool it into thinking all kinds of nonsensical things.
I'm positing that the being has been informed about how it was created; it knows that it is not the being it remembers, um, being. So it has the knowledge to say of itself, if it were so inclined, "I am a being purposefully constructed ab initio with all of the memories and cognitive capacities of scientism, RIP."
Would it be so inclined? If so, what would it do next? (Let us posit that it's a reconstructed embodied human being.) For example, would it call up your friends and introduce itself? Court your former spouse (if you have one), fully acknowledging that it is not the original you? Ask to adopt your children (if you have any)?
Replies from: scientism↑ comment by scientism · 2013-10-19T19:59:03.565Z · LW(p) · GW(p)
It would have false memories, etc, and having my false memories, it would presumably know that these are false memories and that it has no right to assume my identity, contact my friends and family, court my spouse, etc, simply because it (falsely) thinks itself to have some connection with me (to have had my past experiences). It might still contact them anyway, given that I imagine its emotional state would be fragile; it would surely be a very difficult situation to be in. A situation that would probably horrify everybody involved.
I suppose, to put myself in that situation, I would, willpower permitting, have the false memories removed (if possible), adopt a different name and perhaps change my appearance (or at least move far away). But I see the situation as unimaginably cruel. You're creating a being - presumably a thinking, feeling being - and tricking it into thinking it did certain things in the past, etc, that it did not do. Even if it knows that it was created, that still seems like a terrible situation to be in, since it's essentially a form of (inflicted) mental illness.
Replies from: Cyan↑ comment by Cyan · 2013-10-20T01:11:30.328Z · LW(p) · GW(p)
have the false memories removed
!!... I hope you mean explicit memory but not implicit memory -- otherwise there wouldn't be much of a being left afterwards...
"tricking" it into thinking it did certain things in the past
For a certain usage of "tricking" this is true, but that usage is akin to the way optical illusions trick one's visual system rather than denoting a falsehood deliberately embedded in one's explicit knowledge.
I would point out that the source of all the hypothetical suffering in this situation would be the being's (and your) theory of identity rather than the fact of anyone's identity (or lack thereof). If this isn't obvious, just posit that the scenario is conceivable but hasn't actually happened, and some bastard deceives you into thinking it has -- or even just casts doubt on the issue in either case.
Of course that doesn't mean the theory is false -- but I do want to say that from my perspective it appears that the emotional distress would come from reifying a naïve notion of personal identity. Even the word "identity", with its connotations of singleness, stops being a good one in the hypothetical.
Have you seen John Weldon's animated short To Be? You might enjoy it. If you watch it, I have a question for you: would you exculpate the singer of the last song?
Replies from: scientism↑ comment by scientism · 2013-10-20T18:19:41.343Z · LW(p) · GW(p)
I take it that my death and the being's ab initio creation are both facts. These aren't theoretical claims. The claim that I am "really" a description of my brain (that I am information, pattern, etc) is as nonsensical as the claim that I am really my own portrait, and so couldn't amount to a theory. In fact, the situation is analogous to someone taking a photo of my corpse and creating a being based on its likeness. The accuracy of the resulting being's behaviour, its ability to fool others, and its own confused state doesn't make any difference to the argument. It's possible to dream up scenarios where identity breaks down, but surely not ones where we have a clear example of death.
I would also point out that there are people who are quite content with severe mental illness. You might have delusions of being Napoleon and be quite happy about it. Perhaps such a person would argue that "I feel like Napoleon and that's good enough for me!"
In the animation, the woman commits suicide and the woman created by the teleportation device is quite right that she isn't responsible for anything the other woman did, despite resembling her.
Replies from: Cyan, TheOtherDave↑ comment by Cyan · 2013-10-20T20:14:48.388Z · LW(p) · GW(p)
I take it that my death and the being's ab initio creation are both facts.
In the hypothetical, your brain has stopped functioning. Whether this is sufficient to affirm that you died is precisely the question at issue. Personally, it doesn't matter to me if my brain's current structure is the product of biological mechanisms operating continuously by physical law or is the product of, say, a 3D printer and a cryonically-created template -- also operating by physical law. Both brains are causally related to my past self in enough detail to make the resulting brain me in every way that matters to me.
In the animation, the woman commits suicide and the woman created by the teleportation device is quite right that she isn't responsible for anything the other woman did, despite resembling her.
Curious that she used the transmission+reconstruction module while committing "suicide", innit? She didn't have to -- it was a deliberate choice.
Replies from: scientism↑ comment by scientism · 2013-10-20T22:04:33.524Z · LW(p) · GW(p)
The brain constructed in your likeness is only normatively related to your brain. That's the point I'm making. The step where you make a description of the brain is done according to a practice of representation. There is no causal relationship between the initial brain and the created brain. (Or, rather, any causal relationship is massively dispersed through human society and history.) It's a human being, or perhaps a computer programmed by human beings, in a cultural context with certain practices of representation, that creates the brain according to a set of rules.
This is obvious when you consider how the procedure might be developed. We would have to have a great many trial runs and would decide when we had got it right. That decision would be based on a set of normative criteria, a set of measurements. So it would only be "successful" according to a set of human norms. The procedure would be a cultural practice rather than a physical process. But there is just no such thing as something physical being "converted" or "transformed" into a description (or information or a pattern or representation) - because these are all normative concepts - so such a step cannot possibly conserve identity.
As I said, the only way the person in cryonic suspension can continue to live is through a standard process of revival - that is, one that doesn't involve the step of being described and then having a likeness created - and if such a revival doesn't occur, the person is dead. This is because the process of being described and then having a likeness created isn't any sort of revival at all and couldn't possibly be. It's a logical impossibility.
Replies from: Cyan↑ comment by Cyan · 2013-10-21T03:00:10.337Z · LW(p) · GW(p)
My response to this is very simple, but it's necessary to know beforehand that the brain's operation is robust to many low-level variations, e.g., thermal noise that triggers occasional random action potentials at a low rate.
We would have to have a great many trial runs and would decide when we had got it right.
Suppose our standard is that we get it right when the reconstructed brain is more like the original brain just before cryonic preservation than a brain after a good night's sleep is like that same brain before sleeping -- within the subset of brain features that are not robust to variation. Further suppose that that standard is achieved through a process that involves a representation of the structure of the brain. Albeit that the representation is indeed a "cultural practice", the brute fact of the extreme degree of similarity of the pre- and post-process brains would seem much more relevant to the question of preservation of any aspect of the brain worthy of being called "identity".
ETA: Thinking about this a bit more, I see that the notion of "similarity" in the above argument is also vulnerable to the charge of being a mere cultural practice. So let me clarify that the kind of similarity I have in mind basically maps to reproducibility of the input-output relation of a low-level functional unit, up to, say, the magnitude of thermal noise. Reproducibility in this sense has empirical content; it is not merely culturally constructed.
Replies from: scientism↑ comment by scientism · 2013-10-21T19:09:01.422Z · LW(p) · GW(p)
I don't see how using more detailed measurements makes it any less a cultural practice. There isn't a limit you can pass where doing something according to a standard suddenly becomes a physical relationship. Regardless, consider that you could create as many copies to that standard as you wished, so you now have a one-to-many relationship of "identity" according to your scenario. Such a type-token relationship is typical of norm-based standards (such as mediums of representation) because they are norm-based standards (that is, because you can make as many according to the standard as you wish).
Replies from: Cyan↑ comment by Cyan · 2013-10-21T22:25:45.460Z · LW(p) · GW(p)
I don't see how using more detailed measurements makes it any less a cultural practice.
I'm not saying it's not a cultural practice. I'm saying that the brute fact of the extreme degree of similarity (and resulting reproducibility of functionality) of the pre- and post-process brains seems like a much more relevant fact. I don't know why I should care that the process is a cultural artifact if the pre- and post-process brains are so similar that for all possible inputs, they produce the same outputs. That I can get more brains out than I put in is a feature, not a bug, even though it makes the concept of a singular identity obsolete.
↑ comment by TheOtherDave · 2013-10-20T20:09:32.010Z · LW(p) · GW(p)
It's possible to dream up scenarios where identity breaks down, but surely not ones where we have a clear example of death.
I don't know what the word "clear" in that sentence actually means.
If you're simply asserting that what has occurred in this example is your death, then no, it isn't clear, any more than if I assert that I actually died 25 minutes ago, that's clear evidence that Internet commenting after death is possible.
I'm not saying you're necessarily wrong... I mean, sure, it's possible that you're correct, and in your hypothetical scenario you actually are dead, despite the continued existence of something that acts like you and believes itself to be you. It's also possible that in my hypothetical scenario I'm correct and I really did die 25 minutes ago, despite the continued existence of something that acts like me and believes itself to be me.
I'm just saying it isn't clear... in other words, that it's also possible that one or both of us is confused/mistaken about what it means for us to die and/or remain alive.
Replies from: scientism↑ comment by scientism · 2013-10-20T21:42:27.249Z · LW(p) · GW(p)
In the example being discussed we have a body. I can't think of a clearer example of death than one where you can point to the corpse or remains. You couldn't assert that you died 25 minutes ago - since death is the termination of your existence and so logically precludes asserting anything (nothing could count as evidence for you doing anything after death, although your corpse might do things) - but if somebody else asserted that you died 25 minutes ago then they could presumably point to your remains, or explain what happened to them. If you continued to post on the Internet, that would be evidence that you hadn't died. Although the explanation that someone just like you was continuing to post on the Internet would be consistent with your having died.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-10-20T21:46:10.875Z · LW(p) · GW(p)
OK, I think I understand what you mean by "clear" now. Thanks.
↑ comment by bokov · 2013-10-19T05:43:36.876Z · LW(p) · GW(p)
Now, if I understand the "two particles of the same type are identical" argument in the context of uploading/copying, it shouldn't be relevant because two huge multi-particle configurations are not going to be identical. You cannot measure the state of each particle in the original and you cannot precisely force each particle in the copy into that state. And no amount of similarity is enough; the two of you would have to be identical in the sense that two electrons are identical, if we're talking about Feynman paths over which your amplitude would be summed. And that rules out digital simulations altogether.
But I didn't really expect any patternists to defend the first way you could be right in my post. Whereas, the second way you might be right amounts to, by my definition, proving to me that I am already dead or that I die all the time. If that's the case, all bets are off, everything I care about is due for a major reassessment.
I'd still want to know the truth of course. But the strong form of that argument (that I already experience on a recurring basis the same level of death as you would if you were destructively scanned) is not yet proven to be the truth. Only a plausible claim for which (or against which) I have not yet seen much evidence.
Replies from: Cyan↑ comment by Cyan · 2013-10-19T06:14:56.744Z · LW(p) · GW(p)
But the strong form of that argument (that I already experience on a recurring basis the same level of death as you would if you were destructively scanned) is not yet proven to be the truth.
Can you taboo "level of death" for me? Also, what sorts of experiences would count as evidence for or against the proposition?
Replies from: bokov↑ comment by bokov · 2013-10-19T06:25:26.949Z · LW(p) · GW(p)
Discontinuity. Interruption of inner narrative. You know how the last thing you remember was puking over the toilet bowl and then you wake up on the bathroom floor and it's noon? Well, that but minus everything that goes after the word "bowl".
Or the technical angle-- whatever routine occurrence it is that supposedly disrupts my brain state as much as a destructive scan and rounding to the precision limit of whatever substrate my copy would be running on.
Replies from: Cyan, bokov↑ comment by bokov · 2013-10-21T20:37:30.289Z · LW(p) · GW(p)
I guess this would be my attempt to answer your first question: articulating what I meant without the phrase "level of death".
My answer to your second question is tougher. Somewhat compelling evidence that whatever I value has been preserved would be simultaneously experiencing life from the point of view of two different instances. This could be accomplished perhaps through frequent or continuous synchronization of the memories and thoughts of the two brains. Another convincing experience (though less so) would be gradual replacement of individual biological components that would have otherwise died, with time for the replacement parts to be assimilated into the existing system of original and earlier-replaced components.
If I abruptly woke up in a new body with all my old memories, I would be nearly certain that the old me has experienced death if they are not around, or, if they are still around (without any link to each other's thoughts), that I am the only one who has tangibly benefited from whatever the rejuvenating/stabilizing effects of the replication/uploading might be, and they have not. If I awoke from cryostasis in my old body (or head, as the case may be) even then I would only ever be 50% sure that the individual entering cryostasis is not experiencing waking up (unless there was independent evidence of weak activity in my brain during cryostasis).
The way for me to be convinced, not that continuity has been preserved but rather that my desire for continuity is impossible, does double duty with my answer to the first question:
[Unambiguous, de-mystifying neurological characterization of...]
whatever routine occurrence it is that supposedly disrupts my brain state as much as a destructive scan and rounding to the precision limit of whatever substrate my copy would be running on.
↑ comment by bokov · 2013-10-19T05:21:59.625Z · LW(p) · GW(p)
Actually, let's start by supposing a non-destructive scan.
The resulting being is someone who is identical to you, but diverges at the point where the scan was performed.
Let's say your problem is that you have a fatal illness. You've been non-destructively scanned, and the scan was used to construct a brand new healthy you who does everything you would do, loves the people you love, etc. Well, that's great for him, but you are still suffering from a fatal illness. One of the brainscan technicians helpfully suggests they could euthanize you, but if that's a solution to your problem then why bother getting scanned and copied in the first place? You could achieve the same subjective outcome by going straight to the euthanasia step.
Now, getting back to the destructive scan. The only thing that's different is you skip the conversation with the technician and go straight to the euthanasia step. Again, an outcome you could have achieved more cheaply with a bottle of sleeping pills and a bottle of Jack Daniels.
Replies from: Cyan↑ comment by Cyan · 2013-10-19T05:47:06.658Z · LW(p) · GW(p)
After the destructive scan, a being exists that remembers being me up to the point of that scan, values all the things I value, loves the people I love and will be there for them. Regardless of anyone's opinion about whether that being is me, that's an outcome I desire, and I can't actually achieve it with a bottle of sleeping pills and a bottle of Jack Daniels. Absolutely the same goes for the non-destructive scan scenario.
...maybe you don't have kids?
Replies from: bokov↑ comment by bokov · 2013-10-19T06:14:57.214Z · LW(p) · GW(p)
Oh, I do, and a spouse.
I want to accomplish both goals: have them be reunited with me, and for myself to experience being reunited with them. Copying only accomplishes the first goal, and so is not enough. So long as there is any hope of actual revival, I do not wish to be destructively scanned nor undergo any preservation technique that is incompatible with actual revival. I don't have a problem with provably non-destructive scans. Hell, put me on Gitorious for people to download, just delete the porn first.
My spouse will probably outlive me, and hopefully if my kids have to get suspended at all, it will be after they have lived to a ripe old age. So everyone will have had some time to adjust to my absence, and would not be too upset about having to wait a little longer. Otherwise, we could form a pact where we revive whenever the conditions for the last of our revivals are met. I should remember to run this idea by them when they wake up. Well, at least the ones of them who talk in full sentences.
Or maybe this is all wishful thinking-- someone who thinks that what we believe is silly will just fire up the microtome and create some uploads that are "close enough" and tell them it was for their own good.
Replies from: Cyan↑ comment by Cyan · 2013-10-19T06:23:12.392Z · LW(p) · GW(p)
Sticking with the non-destructive scan + terminal illness scenario: before the scan is carried out, do you anticipate (i) experiencing being reunited with your loved ones; (ii) requesting euthanasia to avoid a painful terminal disease; (iii) both (but not both simultaneously for the same instance of "you")?
Replies from: bokov↑ comment by bokov · 2013-10-19T06:40:04.944Z · LW(p) · GW(p)
Probably (iii) is the closest to the truth, but without euthanasia. I'd just eventually die, fighting it to the very end. Apparently this is an unusual opinion or something because people have such a hard time grasping this simple point: what I care about is the continuation of my inner narrative for as long as possible. Even if it's filled with suffering. I don't care. I want to live. Forever if possible, for an extra minute if that's all there is.
A copy may accomplish my goal of helping my family, but it does absolutely nothing to accomplish my goal of survival. As a matter of self-preservation I have to set the record straight whenever someone claims otherwise.
Replies from: Cyan↑ comment by Cyan · 2013-10-19T19:09:54.911Z · LW(p) · GW(p)
what I care about is the continuation of my inner narrative for as long as possible
Okay -- got it. What I don't grasp is why you would care about the inner narrative of any particular instance of "you" when the persistence of that instance makes negligible material difference to all the other things you care about.
To put it another way: if there's only a single instance of "me" -- the only extant copy of my particular values and abilities -- then its persistence cannot be immaterial to all the other things I care about, and that's why I currently care about my persistence more-or-less unconditionally. If there's more than one copy of "me" kicking around, then "more-or-less unconditionally" no longer applies. My own internal narrative doesn't enter into the question, and I'm confused as to why anyone else would give their own internal narrative any consideration.
ETA: So, I mean, the utility function is not up for grabs. If we both agree as to what would actually be happening in these hypothetical scenarios, but disagree about what we value, then clauses like "patternists could be wrong" refer to an orthogonal issue.
Replies from: bokov, bokov, bokov↑ comment by bokov · 2013-10-21T21:47:24.164Z · LW(p) · GW(p)
Okay -- got it. What I don't grasp is why you would care about the inner narrative of any particular instance of "you" when the persistence of that instance makes negligible material difference to all the other things you care about.
Maybe the same why as why do some people care more about their families than about other people's families. Why some people care more about themselves than about strangers. What I can't grasp is how one would manage to so thoroughly eradicate or suppress such a fundamental drive.
Replies from: Cyan↑ comment by Cyan · 2013-10-23T01:28:16.633Z · LW(p) · GW(p)
What, kin selection? Okay, let me think through the implications...
Replies from: bokov↑ comment by bokov · 2013-10-21T21:33:08.645Z · LW(p) · GW(p)
If we both agree as to what would actually be happening in these hypothetical scenarios, but disagree about what we value, then clauses like "patternists could be wrong" refer to an orthogonal issue.
Patternists/computationalists make the, in principle, falsifiable assertion that if I opt for plastination and am successfully reconstructed, I will wake up in the future just as I will if I opt for cryonics and am successfully revived without copying/uploading/reconstruction. My assertion is that if I opt for plastination I will die and be replaced by someone hard or impossible to distinguish from me. Since it takes more resources to maintain cryosuspension, and probably a more advanced technology level to thaw and reanimate the patient, if the patternists are right, plastination is a better choice. If I'm right, it is not an acceptable choice at all.
The problem is that, so far, the only being in the universe who could falsify this assertion is the instantiation of me that is writing this post. Perhaps with increased understanding of neuroscience, there will be additional ways to test the patternist hypothesis.
Replies from: Cyan↑ comment by Cyan · 2013-10-23T01:23:03.229Z · LW(p) · GW(p)
the, in principle, falsifiable assertion that if I opt for plastination that I will wake up in the future with an equal or greater probability than if I opt for cryonics
I'm not sure what you mean here. Probability statements aren't falsifiable; Popper would have had a rather easier time if they were. Relative frequencies are empirical, and statements about them are falsifiable...
My assertion is that I will die and be replaced by someone hard or impossible to distinguish from me.
At the degree of resolution we're talking about, talking about you/not-you at all seems like a blegg/rube distinction. It's just not a useful way of thinking about what's being contemplated, which in essence is that certain information-processing systems are running, being serialized, stored, loaded, and run again.
Replies from: bokov↑ comment by bokov · 2013-10-23T22:07:03.895Z · LW(p) · GW(p)
Oops, you're right. I have now revised it.
Replies from: Cyan↑ comment by Cyan · 2013-10-24T00:09:50.488Z · LW(p) · GW(p)
Suppose your brain has ceased functioning, been recoverably preserved and scanned, and then revived and copied. The two resulting brains are indistinguishable in the sense that for all possible inputs, they give identical outputs. (Posit that this is a known fact about the processes that generated them in their current states.) What exactly is it that makes the revived brain you and the copied brain not-you?
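To make the "identical outputs for all possible inputs" criterion concrete, here is a toy Python sketch (the class and values are purely illustrative, not a claim about brains):

```python
# Two separately constructed objects: distinct instances, identical behavior.
class Counter:
    def __init__(self, start):
        self.n = start

    def step(self, x):
        self.n += x
        return self.n

revived = Counter(42)
copied = Counter(42)

assert revived is not copied                    # two distinct objects
assert all(revived.step(i) == copied.step(i)    # same output for every
           for i in range(1000))                # input tested
```

On this framing, asking which of the two counters is the "real" one adds nothing once their behavior is known to be identical.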
↑ comment by bokov · 2013-10-21T21:34:48.826Z · LW(p) · GW(p)
So, I mean, the utility function is not up for grabs.
And yet, what is to be done if your utility function is dissolved by the truth? How do we know that there even exist utility functions that retain their currency down to the level of timeless wave functions?
Replies from: Cyan↑ comment by Cyan · 2013-10-23T01:09:07.422Z · LW(p) · GW(p)
I haven't thought really deeply about that, but it seems to me that if Egan's Law doesn't offer you some measure of protection and also a way to cope with failures of your map, you're probably doing it wrong.
Replies from: bokov↑ comment by bokov · 2013-10-23T21:53:21.552Z · LW(p) · GW(p)
A witty quote from a great book by a brilliant author is awesome, but does not have the status of any sort of law.
What do we mean by "normality"? What you observe around you every day? If you are wrong about the unobserved causal mechanisms underlying your observations, you will make wrong decisions. If you walk on hot coals because you believe God will not let you burn, the normality that quantum mechanics adds up to diverges enough from your normality that there will be tangible consequences. Are goals part of normality? If not, they certainly depend on assumptions you make about your model of normality. Either way, when you discover that God can't/won't make you fireproof, some subset of your goals will (and should) come tumbling down. This too has tangible consequences.
Some subset of the remaining goals relies on more subtle errors in your model of normality and they too will at some point crumble.
What evidence do we have that any goals at all are stable at every level? Why should the goals of a massive blob of atoms have such a universality?
I can see the point of "it all adds up to normality" if you're encouraging someone to not be reluctant to learn new facts. But how does it help answer the question of "what goal do we pursue if we find proof that all our goals are bullshit"?
Replies from: Cyan↑ comment by Cyan · 2013-10-24T00:02:57.647Z · LW(p) · GW(p)
My vague notion is that if your goals don't have ramifications in the realm of the normal, you're doing it wrong. If they do, and some aspect of your map upon which goals depend gets altered in a way that invalidates some of your goals, you can still look at the normal-realm ramifications and try to figure out if they are still things you want, and if so, what your goals are now in the new part of your map.
Keep in mind that your "map" here is not one fixed notion about the way the world works. It's a probability distribution over all the ways the world could work that are consistent with your knowledge and experience. In particular, if you're not sure whether "patternists" (whatever those are) are correct or not, this is a fact about your map that you can start coping with right now.
It might be that the Dark Lords of the Matrix are just messing with you, but really, the unknown unknowns would have to be quite extreme to totally upend your goal system.
comment by bokov · 2013-10-21T21:07:17.369Z · LW(p) · GW(p)
A thread sprang up here about self-funding your cryonics arrangements. This has important enough practical implications for cryonicists that I am responding to it at the top level.
In addition to my original arguments that most individuals should not self-fund...
- No one is contractually obligated to pay for your suspension if your own "low risk" investments tank.
- Unless you have a proven track record investing, you are likely overconfident in your ability to invest compared to professionals.
- Creditors and heirs will find it easier to raid your investments than they will an insurance policy of which they are not beneficiaries.
...all of which are by themselves substantive reasons, I could slap myself for not having thought of the obvious fourth reason:
- If you truly are making low-risk investments, it may take a while for them to add up to a sufficient sum of money to cover cryosuspension. Whatever your age, there is some risk that you will die before your investments reach that point.
The best practice is not to self-fund unless you have close to enough money to pay for it up-front and the skills to isolate that money in a long-term trust that will be well-insulated from third-party meddling. And if both of those things are true, it's surprising to me that $80 US or so per month would be more than pocket change to you in the first place.
For more information, I recommend the article in this issue of Cryonics magazine, starting on page 7.
Full disclosure: my family and I set up our policies through this guy.
Replies from: ChrisHallquist↑ comment by ChrisHallquist · 2013-10-21T21:09:39.567Z · LW(p) · GW(p)
I don't know enough to comment on these specific issues, but I am already going through Rudi, mainly for reasons of convenience.
comment by ChristianKl · 2013-10-17T20:42:00.356Z · LW(p) · GW(p)
If I remember right, the chances of it rescuing you were estimated at somewhere between 10% and 20% by senior LessWrong people in the last census.
The interesting thing was that senior LessWrongers were more likely than less senior people to recommend undergoing cryonics, even when both assigned the same probability to being revived.
In the end the question is:
p(getting revived) * u(getting revived) <? u(cost of Alcor payments)
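A minimal Python sketch of that comparison; every number below is a made-up placeholder rather than an estimate from the census or from any cryonics provider:

```python
# Plug in your own estimates; these placeholders are for illustration only.
p_revival = 0.15          # assumed probability of being revived
u_revival = 1_000_000     # assumed utility of revival, in arbitrary units
u_cost = 50_000           # assumed lifetime disutility of the payments

expected_benefit = p_revival * u_revival
print("worth signing up?", expected_benefit > u_cost)  # True for these numbers
```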
↑ comment by ChrisHallquist · 2013-10-17T21:03:22.712Z · LW(p) · GW(p)
See edit.
comment by passive_fist · 2013-10-20T03:58:11.058Z · LW(p) · GW(p)
This left me with the impression that the chances of the average cryopreserved person today of being later revived aren't great
It may be that people cryopreserved today will not be among the 'first batch' of people revived. The damage caused by ice could be repairable, but it might require more advanced nanotech, and perhaps some 'filling in' of lost information. I wouldn't be surprised if frozen people were revived in batches, starting from the most recently frozen ones, finally ending with those frozen in the 20th century.
comment by bokov · 2013-10-19T05:50:38.909Z · LW(p) · GW(p)
One way that patternists could be wrong and I might still be satisfied with a (non destructive) upload or copy as a way to preserve myself is if our memories were prevented from diverging by some sort of sync technology. Maybe it would be a daily thing, or maybe continuous. Sort of a RAID array of brains.
But this still requires actual revival as an initial step, not destructive copying or uploading.
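A toy sketch of that mirrored-write idea (the names are illustrative; no such technology exists today):

```python
# RAID-1 style mirroring: every new memory is committed to both instances,
# so their stores never diverge.
class Instance:
    def __init__(self):
        self.memories = []

def record(memory, instances):
    for inst in instances:
        inst.memories.append(memory)

original, copy = Instance(), Instance()
record("first day after revival", [original, copy])
assert original.memories == copy.memories
```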
comment by bokov · 2013-10-18T15:44:05.722Z · LW(p) · GW(p)
I wish I could give you more up-votes for explicitly making existential catastrophe part of your calculations; too many people focus on the technical considerations to the exclusion of other relevant unknowns.
Here are mine (explanation of edit-- oops, patternists being wrong and right didn't sum to 1, fixed now):
Cryo
(50% not being ended by an intervening existential catastrophe) x
(80% fellow humans come through) x
[(70% patternists are wrong) x (50% cryo sufficient to preserve whatever it is I call my continuity strictly by repairing my original instance) +
(30% patternists are right) x (70% cryo sufficient to preserve whatever it is I call my continuity by any means)]
= 22.4%
Plastination
(70% not being ended by an intervening existential catastrophe) x
(85% fellow humans come through) x
[(70% patternists are wrong) x (0% plastination sufficient to preserve whatever it is I call my continuity by any means) +
(30% patternists are right) x (90% plastination sufficient to preserve whatever it is I call my continuity by any means)]
= 16.065%
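A quick Python check of the arithmetic above (the probabilities are the ones I assigned here, not data from anywhere else):

```python
# Cryo: P(no x-risk) * P(humans come through) * P(preservation suffices)
p_cryo = 0.50 * 0.80 * (0.70 * 0.50 + 0.30 * 0.70)
# Plastination: same structure, different inputs
p_plast = 0.70 * 0.85 * (0.70 * 0.00 + 0.30 * 0.90)
print(f"cryonics:     {p_cryo:.1%}")   # 22.4%
print(f"plastination: {p_plast:.3%}")  # 16.065%
```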
Explanations: among existential risks I count human-made problems that would prevent people who would revive me from doing so. So, fellow humans coming through simply means there is someone at any point in the future willing and able to revive me conditioned on no existential catastrophes and it being technically possible to revive me at all.
For patternists to be right, both the following would have to be true...
- A sufficiently accurate representation of you is you (to the point that your amplitude would sum over all such representations that you coexist with).
- It is theoretically possible to achieve such a representation by uploading or reconstruction with molecular precision or some other reconstruction or simulation technique.
... or the following has to be true:
- My sense of continuing inner narrative is an illusion that will be destroyed by a more accurate understanding of reality, and amounts to wanting something that is an empty set, a dangling object, like meeting the real Mickey Mouse or knowing what prime numbers taste like.
It's unlikely that patternists can be right in the first way because there is no evidence that very-very-similar configurations can validly be substituted for identical configurations. Especially if the replica in question did not evolve from the original and even more so if the replica is running on a completely different substrate than the original. Even if they are right in that way, will it ever be possible to test experimentally?
It is far more likely that patternists are right in the second way. My terminal goal is continuity (and I value accurate preservation of the data in my brain only because I think it is instrumental to preserving continuity). The main reason I am signed up for cryonics is because I believe it is more likely to succeed than any currently available alternative. If all alternatives are fundamentally doomed to failure, the entire question becomes largely moot. I'm therefore conditioning my above probabilities on the patternists not being right in the second way.
Replies from: BaconServ↑ comment by BaconServ · 2013-10-18T20:23:56.451Z · LW(p) · GW(p)
If continuing inner narrative is an illusion, and you find yourself in a simulation, then you could very well realize your dream of meeting "the real Mickey Mouse." If we can be accurately simulated, why not living cartoons as well? A lot becomes possible by getting rid of identity-related illusions.
Replies from: bokov↑ comment by bokov · 2013-10-18T20:46:06.225Z · LW(p) · GW(p)
If I'm already in a simulation, that's a different story. A fait accompli. At the very least I'll have an empirical answer to the burning question of whether it's possible for me to exist within the simulation, though I'll still have no way of knowing whether I am really the same person as whoever I am simulating.
But until I find a glitch that makes objects disappear, render in wireframe, etc. I have no reason to give simulation arguments all that much more credence than heaven and hell.
Replies from: BaconServ