Is cryonics necessary?: Writing yourself into the future

post by Gordon Seidoh Worley (gworley) · 2010-06-23T14:33:29.651Z · LW · GW · Legacy · 147 comments


Cryonics appears to be the best hope for continuing a person's existence beyond physical death until other technologies provide better solutions.  But despite its best-in-class status, cryonics has several serious downsides.

First and foremost, cryonics is expensive, priced well beyond what even a third of humanity can afford.  Economies of scale may eventually bring the cost down, but in the meantime billions of people will die without the benefit of cryonics, and even when the cost bottoms out, it will likely still be too expensive for people living at subsistence levels.  Second, many people consider cryonics immoral or at least socially unacceptable, so even those who accept the idea and want to take personal advantage of it are usually pressured out of signing up.  Combined, these two forces shrink the pool of people who will actually sign up for cryonics to a small fraction of a percent of the human population.

Given that cryonics is effectively not an option for almost everyone on the planet, if we're serious about preserving lives into the future then we have to consider other options, especially ones that are morally and socially acceptable to most of humanity.  Pushed by my own need to find an alternative to cryonics, I began trying to think of ways I could be restored after physical death.

If I am unable to preserve the physical components that currently make me up, the next best thing I can do is record, in as much detail as possible, how those physical components function.  Since we don't yet have the brain emulation technology that would make cryonics irrelevant for the still living, I need a lower-tech way of making a record of myself.  And of all the ways I might try to record myself, none seems to better balance robustness, cost, and detail than writing.

Writing myself into the future—now we're on to something.

At first this plan didn't feel like such a winner, though:  How can I continue myself just through writing?  Even if I write down everything I can about myself—memories, medical history, everything—how can that really be all that's needed to restore me (or even most of me)?  But when we begin to break down what writing everything we can about ourselves really gives us, writing ourselves into the future begins to make more sense.

Most of what makes you who you are is the same across all people.  Since percentages would make it seem that I have too precise an idea of how much, let's put it like this:  up to your eyebrows, all humans (except those with extreme abnormalities) are essentially the same.  Because we share the same evolutionary past as all of our conspecifics, the biology and psychology of our brains are statistically the same.  We each have our quirks of genetics and development, but even those are statistically similar among people who share them.  Thus with just a few bits of data we can already record most of what makes you who you are.

Most people find this idea unsettling when they first encounter it and have an urge to look away or disagree.  "How can I, the very unique me, be almost completely the same as everyone else?"  Since this is Less Wrong and not a more general forum, though, I'll assume you're still with me at this point.  If not, I recommend reading some of the introductory sequences on the site.

So if we begin with a human template and add a few modifiers for particular genetic and developmental quirks, we get a sort of blank human that takes us most of the way to restoring you after physical death.  To complete the restoration, we need to inject the stuff that sets you apart even from your fellow humans who share your statistically regular quirks:  your memories.  If the record of your memories is good enough, this should effectively create a person so much like you as to be indistinguishable from the original, i.e. restore you.

But, you may ask, is this restoration of you from writing really still you in the same way that the you restored from cryonics is you?  Maybe.  To me, it is.  Despite what subjective experience feels like, there doesn't seem to be anything in the brain that makes you who you are besides the general process of your brain and its memories.  Transferring yourself from your current brain to another brain or a brain emulation via writing doesn't seem much different from transferring yourself via neuron replacement or some other technique, except that writing introduces a lossy compression step, necessitated only by lack of access to better technology.  Writing yourself into the future isn't the best solution, but it does seem to be an effective stopgap against death.


If you're still with me, we have a few nagging questions to answer.  Consider this an FAQ for writing yourself into the future.

How good a record is good enough?  In truth, I don't think we even know enough to get the order of magnitude right.  The best I can offer is that you need to record as much as you are willing to.  The more you record, the more there will be to work with, and the less chance there will be of insufficient data.  It may turn out that you simply can't record enough to create a good restoration of a person from writing, but this is little different from the risk in cryonics of not being well preserved enough to restore despite best efforts.  If you're willing to take the risk that cryonics won't work as well as you hope, you should be willing to accept that writing yourself into the future might not work as well as you hope.

How is writing yourself into the future more socially acceptable than cryonics?  Basically, because people already do this all the time, although not with an eye toward their eventual restoration.  People regularly keep journals, write blogs, write autobiographies, and pass on stories of their lives, even if only orally.  You can write a record of yourself, fully intending for it to be used to restore you at some future time, without ever doing anything morally or socially unacceptable to other people (at least in most societies), other than perhaps specifying in the record that you want it used to restore you after you die.

How is writing yourself into the future more accessible to the poor?  If a person is literate and has access to some form of durable writing material and reliable storage, they can write themselves into the future.  Of course, many people are not literate, but the cost of teaching literacy is far lower than the cost of cryonics, and literacy has benefits well beyond writing yourself into the future, so increasing literacy is an easy sell even to people who are opposed to the idea of life extension.

Will the restoration really be me?  Let me address this another way.  You, like everything else, are part of the universe.  Unlike what we believe to be true of most of the stuff in the universe, though, the stuff that makes up what we call you is aware of its own existence.  As best we can tell, you are aware of your existence because you have a way of recalling previous events in your existence.  If we take away the storage and recall of experience, we're left with stuff that can do essentially everything it could when it had memory, but has no concept of existing outside the current moment.  Put the storage and recall back in, though, and what we would recognize as self-awareness suddenly returns.

Other questions?  Post them and I'll try to address them.  I expect strong pushback from people who disagree with me about what self-awareness means and how the brain works, and I'll explain my position to them as best I can, but I'm also interested in any other questions people might have, since there are likely many issues I haven't even considered yet.

147 comments

Comments sorted by top scores.

comment by gwern · 2010-06-23T16:09:43.274Z · LW(p) · GW(p)

Writing is extremely low-bandwidth. If I recall correctly, Shannon did some experimentation and found that English carries no more than about a bit per letter, and I've seen other estimates putting it below a bit. (In comparison, depending on language and encoding, a character can take up to 32 bits to store uncompressed; even ASCII requires 8 bits/1 byte per character.) And given the difficulty of producing a megabyte of personal information, and the vast space of potential selves...
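
A minimal sketch of that gap, assuming a short English sample and an order-0 (unigram) model; Shannon's ~1 bit/letter figure sits well below even this estimate, since human predictors exploit long-range context that a unigram model ignores:

```python
# A minimal sketch, assuming a short English sample and an order-0
# (unigram) model. Shannon's ~1 bit/letter comes from exploiting
# longer context, so it sits well below this estimate.
from collections import Counter
from math import log2

def unigram_entropy(text: str) -> float:
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

sample = ("it was the best of times it was the worst of times "
          "it was the age of wisdom it was the age of foolishness")
print(f"order-0 entropy: {unigram_entropy(sample):.2f} bits/char "
      f"(raw ASCII spends 8 bits/char)")
```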

If we're going to try to preserve ourselves through recorded information, wouldn't it make much more sense to instead spend a few hundred/thousand dollars on lifelogging? If you really do record your waking hours, then preservation of your writings is automatically included - as well as all the other stuff. Plus, this solves the issue of mundane experiences.

EDIT: I've put up some notes at https://www.gwern.net/Differences

Replies from: groupuscule, gwern, jimrandomh, Kaj_Sotala, jaimeastorga2000
comment by groupuscule · 2010-06-24T07:37:11.363Z · LW(p) · GW(p)

Writing might be inferior to lifelogging as a way of preserving yourself, but it might actually be better than lifelogging as a way of having a specific type of impact on the future. Since neither form of reconstruction is going to provide the same type of experiential immortality as cryonics potentially would, why not attempt to reincarnate your ideal self?

(As far as general anthropological data goes, there's going to be plenty of footage of average schmucks doing random stuff.)

comment by gwern · 2012-06-29T04:19:20.205Z · LW(p) · GW(p)

Curiously, there seem to be people interested in actually doing this sort of thing.

For example, it is argued that one could never record enough sensory experiences and actions to produce brain emulation [Gemmell, Bell et al., 2006]. However, this is the right answer to the wrong question. The question is not whether one can replicate the 10 trillion synaptic strengths and yet greater number of connectivities of the human brain in a software matrix. This would be like trying to replicate human flight by building an airplane out of a trillion micro-widgets in the exact same configuration as found in an eagle or a sparrow. Instead, the goal here is to replicate the functionality of a specific human consciousness in software. There is no more reason to assume a priori that the only way to do this is to replicate a human brain than there is to assume a priori that the only way to fly is to replicate a bird. Instead, we reason that software emulation of a human mind via analysis of a set of mindfiles is achievable because there are but a limited number of personality traits [Costa & McCrae, 1990], a finite set of universal facial expressions and emotions [Brown, 1991], a diminished repertoire of remembered thoughts from day to day [Ebbinghaus, 1885], and not more than a few gigabytes of remembered information [Landauer, 1986].

In general, dozens (n) of mannerism, personality and feeling types (m) yield many thousands of unique human combinations via (n!)/(m!*(n-m)!). Once you add to these thousands of personality and worldview templates differential recollections, beliefs, attitudes and values (the gigabytes of remembered information) there are many billions of unique possible combinations of human psyches, one of which will be a best-match for digitally stored mindfiles on a predecessor biological person. Mindware best fits one of the “m” compound mannerisms, personality and feeling types to that analyzed from stored mindfiles, and then populates it with the recollections, beliefs, attitudes and values evident from the stored mindfiles.

From "The Terasem Mind Uploading Experiment". Not sure how seriously to take them, but my own readings have been inclining me to the point of view that personal identities are just not that information-rich.

Also relevant is another paper in that issue, on very large scale use of personality questionnaires: http://www.worldscinet.com/ijmc/04/0401/S1793843012400082.html

Replies from: MathieuRoy, CarlShulman
comment by Mati_Roy (MathieuRoy) · 2020-05-13T20:48:26.264Z · LW(p) · GW(p)

first link is broken, but available on the Wayback Machine: The Terasem Mind Uploading Experiment

second link is broken; was it linking to this article?: How Accurate Are Personality Tests?

comment by CarlShulman · 2012-06-29T04:56:26.613Z · LW(p) · GW(p)

but my own readings have been inclining me to the point of view that personal identities are just not that information-rich.

Could you say more about said research? My sense is that people can be flexible about preserving a tiny portion of their unique information, e.g. that many people would be very happy to forget most of the daily experiences from their lives so far (replacing them with brief text files, records of major relationships and their emotional intensities, that sort of thing) in exchange for greatly extended life in the same body. But the "mindfile" backup method inevitably involves a chance for the original to diverge, evoking intuitions that "the thing over there, which I perceive as separate" doesn't provide continuity.

Replies from: gwern
comment by gwern · 2012-06-29T15:21:53.002Z · LW(p) · GW(p)

Could you say more about said research?

It's basically as above. Traits like IQ offer remarkable predictive power; Big Five on top of IQ allows more prediction, and the second paper's Small 100 seems like it'd add nontrivial data if anyone runs it on a suitable database to establish what each trait does. Much of the remaining variation can be traced to the environment, which obviously doesn't help in establishing that human personalities are extremely rich & complex.

Long-term memory is much smaller than most would guess when talking about 'galaxies of galaxies of neurons', and autobiographical memory is famously malleable and more symbolic than sensory. (Like dreams: they seem like lifelike, detailed, amazing computational feats, but if you try to actually test the detail, say by reading a book in your dream, you'll usually fail.) Skills don't involve very much personality either, since there are so many ways to be a bad amateur and so few to be an expert (consider how few items it takes to make a decent expert system - not millions and billions!), and they are measurable anyway. "The mind is flat", one might say. Once you get past the (very difficult) tasks of perceiving and modeling the world and controlling our bodies, which we all have in common, is there really that much there?

More generally, people tend to think that they do things for complex and subtle reasons, while outsiders see them doing things for few reasons and transparent self-serving ones at that, IIRC; why do we believe the Inside view that we're so very complex and unique special snowflakes, while ignoring our Outside view that everyone else seems to be fairly simple?

(More personally, I have more than once had the experience of reading something, composing a comment in my head, and going to make a comment - only to see that a past me had already posted that exact same comment. This is not conducive to thinking of myself as a complex unpredictable person, as opposed to a fairly simple predictable set of mechanisms.)

If you know of any essays or papers arguing something like the above but more rigorously, I'd appreciate pointers.

So for beta uploading, I would say as a general principle you want to go in order of variance explained: start with your whole genome (plus near relatives & tissue samples in order to catch rare/de novo variants & childhood influences), then use psychology tests & surveys designed for predictive power and precise measurement of important factors, then move on to recordings of interactive/adversarial material (like chat logs or speech), then record writings in general, then record quotidian details like your daily environment.

comment by jimrandomh · 2010-06-23T23:25:11.639Z · LW(p) · GW(p)

Unfortunately, lifelogging is illegal in my home state, and in many other places. Specifically, it is illegal to record audio here without informing all the parties being recorded, which is prohibitively impractical when you want to record 24/7. (There is no similar restriction for video, but video is likely to be less useful for reconstruction purposes than audio.)

Replies from: gwern, thomblake
comment by gwern · 2010-06-24T00:56:51.162Z · LW(p) · GW(p)

That's unfortunate. I guess you would want a discreet camera until the laws become more sensible.

comment by thomblake · 2010-06-24T14:32:49.048Z · LW(p) · GW(p)

Unfortunately, lifelogging is illegal in my home state

That's terrible - it's clearly a rights violation to disallow recording in public. Based on this guide it looks like only a few states require consent of all parties, and Vermont is the only one with basically no restrictions on recording.

Of course, having a camera/recorder in plain view tends to entail that consent is assumed, so maybe lifelogging sans the hidden camera is in order.

comment by Kaj_Sotala · 2010-06-23T20:53:34.871Z · LW(p) · GW(p)

What's the situation with commercially available lifelogging software/hardware? Can I just put in some money to get a recorder and start using that, or does it still involve a lot of customization to get something that might or might not work very well for the purpose?

Replies from: gwern, gwern
comment by gwern · 2010-06-24T00:59:15.660Z · LW(p) · GW(p)

I'm afraid I couldn't really say. I have seen the specs of enough small digital cameras and surveillance devices to know that decent-quality 8- or 16-hour products using flash storage are perfectly possible (and hard drive space is now so cheap as to not be worth discussing).

But I have yet to hear of anything that strikes me as ideal. Perhaps some other commentators know more.

comment by jaimeastorga2000 · 2011-11-09T02:36:44.198Z · LW(p) · GW(p)

If we're going to try to preserve ourselves through recorded information, wouldn't it make much more sense to instead spend a few hundred/thousand dollars on lifelogging?

But that does not have the advantages over cryonics that gworley uses to argue for writing. Cryonics at its cheapest costs $1250 for a lifetime CI membership plus the recurring life insurance payments. An initial investment in lifelogging combined with periodically maintaining and/or replacing equipment ought to be comparable (although you could count on technological advancement to bring those costs down as time goes on). And I don't think lifelogging is significantly more socially acceptable than cryonics.

Replies from: gwern
comment by gwern · 2011-11-09T14:35:02.298Z · LW(p) · GW(p)

Cryonics at its cheapest costs $1250 for a lifetime CI membership plus the recurring life insurance payments.

The membership is not even the costliest part. The insurance costs you a hard drive a month or more, and per Kryder's law and general consumer-electronics trends, the disparity gets worse every year as the cost of lifelogging drops like a stone. Alcor runs at an annual loss and by definition is underpricing its services; I suspect CI is too. Inflation is a major issue which will push prices much higher than they currently are, which is what the current wrangling over 'grandfathering' is about - people have bought far too little insurance.

tl;dr: cryonics is more expensive than lifelogging. Cryonics will only get more expensive; lifelogging will only get cheaper. You do the math.
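
Taking that invitation literally, a minimal sketch with placeholder numbers (the monthly premium, first-year storage cost, and Kryder-style halving time below are assumptions, not Alcor or CI figures):

```python
# Placeholder assumptions: a flat $60/month insurance premium versus
# lifelogging storage starting at $100/year and halving every 3 years.
def cumulative_costs(years, insurance_per_month=60.0,
                     storage_year1=100.0, halving_years=3.0):
    insurance = insurance_per_month * 12 * years
    storage = sum(storage_year1 * 0.5 ** (y / halving_years)
                  for y in range(years))
    return insurance, storage

ins, sto = cumulative_costs(30)
print(f"30 years: insurance ~${ins:,.0f}, storage ~${sto:,.0f}")
# Under these assumptions: ~$21,600 versus ~$480.
```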

comment by wedrifid · 2010-06-23T16:01:41.638Z · LW(p) · GW(p)

Dear Diary,

My doctor told me today that of all the Elven Jedi he has ever treated I have by far the largest penis...

(Words we write down are only very loosely correlated with who we are.)

Replies from: Douglas_Knight, kodos96
comment by Douglas_Knight · 2010-06-23T18:44:25.394Z · LW(p) · GW(p)

(Words we write down are only very loosely correlated with who we are.)

No, it says something about you that you made that joke and Morendil made the other. (this is also a response to nickernst's comment about not knowing himself)

Replies from: Blueberry
comment by Blueberry · 2010-06-23T20:17:58.226Z · LW(p) · GW(p)

Can you elaborate on what you think it means about them?

Replies from: Douglas_Knight, wedrifid
comment by Douglas_Knight · 2010-06-23T20:54:12.297Z · LW(p) · GW(p)

No, I cannot, just like nickernst cannot do a core dump. But a lot of information leaks out that might be exploitable by a superintelligence. Cryonics has the advantage of not requiring a superintelligence, probably only nanotech.

comment by wedrifid · 2010-06-23T21:05:40.517Z · LW(p) · GW(p)

Well, for a start, that we're funny guys. ;)

But it also hints at my general cognitive processes. When I encounter a concept I understand it by exploring the extremes, the boundary cases and exceptions (even in my own thoughts). The exploits. That I chose to make a joke rather than a criticism in this context is also fairly indicative of my personality, although the inference there is somewhat more difficult. It would require looking at my social responses to various other situations.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-06-25T06:11:36.790Z · LW(p) · GW(p)

Also, using the terms "Elven Jedi" is a pretty clear indicator of you being more affiliated than average with the scifi/fantasy geekdom. Choosing to include "sexual" content such as the size of one's penis also says something - there are lots of people who'd be too prudish to go even that far. Some weak inferences could probably be also drawn from the fact that you used the expression "Dear Diary".

None of this would be enough to justify any firm conclusions by itself, of course. But combined with a large enough number of other weak pieces of evidence...

Replies from: Alicorn
comment by Alicorn · 2010-06-25T06:12:36.565Z · LW(p) · GW(p)

I will never write an epistolary work of fiction again!

Replies from: gwern
comment by gwern · 2010-10-15T00:46:34.578Z · LW(p) · GW(p)

I'm sorry, the 'I will never X again!' snowclone just tagged you as a white 21st century American in the top decile of intelligence who has spent a great deal of time reading webcomics in the style of or affiliated to Dinosaur Comics, with all that implies.

Replies from: jimrandomh
comment by jimrandomh · 2010-10-15T02:38:32.290Z · LW(p) · GW(p)

What's wrong with being a white, 21st century American in the upper decile of intelligence who has read Dinosaur Comics?

Replies from: gwern
comment by gwern · 2010-10-15T02:44:15.483Z · LW(p) · GW(p)

Alicorn's disavowal is due to fear of something learning about her through a series of weak inferences; however, ironically, her disavowal/vow to avoid revealing data useful for weak inference is itself grist for the weak inference mill. (And hopefully my example inferences show that the weak inferences may not be all that weak.)

That something learning about her might be entirely neutral. On the other hand, it might be an unFriendly AI programmed by Black Panthers who were picked on as kids by nerds and are irritated by Ryan North's longboard stylings.

comment by kodos96 · 2010-06-23T18:17:00.921Z · LW(p) · GW(p)

This made me literally LOL. Uncontrollably. For about a minute. My coworkers are looking at me funny now.

comment by ewbrownv · 2010-06-23T16:32:17.848Z · LW(p) · GW(p)

Unfortunately people who can't afford cryonics are unlikely to have the time or resources to create meticulous records of themselves. When you consider the opportunity cost of creating such records, the actual materials needed, and the cost of preserving them reliably for at least several decades, it isn't obvious that this is much cheaper than cryonics.

There's also the problem that most people don't consider 'make a perfect copy of me' and 'bring me back to life' to be equivalent operations, and the ones who do are almost all Western intellectual types who could easily afford cryonics if they actually wanted to. The world’s poor almost all see their personal identity as tied to their physical body, so this kind of approach would seem pointless to them.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2010-06-23T20:08:30.370Z · LW(p) · GW(p)

I agree with you here in that almost no one, especially the world's poor, will consider this a valid means of coming back to life. But, then, that's sort of the point. Depending on how you present it, you can potentially get people to keep these kinds of writings even if they don't believe doing so will extend their lives in any meaningful way, so the records won't be completely lost just because their authors didn't believe it was possible to come back from a biological death. And it lets those who do believe it can bring them back pursue that interest without hitting social backlash.

Replies from: ewbrownv, ocr-fork
comment by ewbrownv · 2010-06-24T20:44:20.647Z · LW(p) · GW(p)

What are you going to tell an illiterate subsistence farmer in Bangladesh that will convince him to put an hour a week into recording his life instead of feeding his family?

I think you greatly underestimate the difficulty of implementing a scheme like this, and overestimate the chance that the effort will actually accomplish anything. If you really want to save lives in the Third World you'd have a bigger impact donating to traditional charity efforts.

comment by ocr-fork · 2010-06-24T01:20:32.604Z · LW(p) · GW(p)

Depending on how you present it you can potentially get people to keep these kinds of writings even if they don't believe it will extend their lives in any meaningful way,

Writing isn't feasible, but lifelogging might be (see gwern's thread). The government could hand out wearable cameras that double as driving licenses, credit cards, etc. If anyone objects, all they have to do is rip out the right wires.

Replies from: DSimon
comment by DSimon · 2010-06-24T17:16:57.703Z · LW(p) · GW(p)

I object a great deal! Once we're all carrying around wearable cameras, the political possibility of making it illegal to rip out the wires would seem much less extreme than a proposal today to introduce both the cameras and the anti-tampering laws. Introducing these cameras would be greasing a slippery slope.

I'd rather keep the future probability for total Orwellian surveillance low, thanks.

Replies from: gwern, Unknowns, Vladimir_M
comment by gwern · 2010-06-24T19:55:50.956Z · LW(p) · GW(p)

Have you read David Brin's The Transparent Society? Surveillance societies are already here (look at London and its million-plus cameras), and purely on the side of the authorities. Personal cameras at least may help even the scales.

Replies from: Vladimir_M, ewbrownv
comment by Vladimir_M · 2010-06-24T22:43:19.903Z · LW(p) · GW(p)

I find most of the public debates on these issues rather myopic, in that they focus on the issue of surveillance by governments as the main problem. What I find to be a much more depressing prospect, however, are the consequences of a low-privacy society that may well come to pass through purely private institutions and transactions.

Even with the most non-intrusive and fair government imaginable, if lots of information about your life is easily available online, it means that a single stupid mistake in life that would earlier have only mild consequences can ruin your reputation forever and render you permanently unemployable and shunned socially. Instead of fading memories and ever more remote records about your past mistakes, they will forever be thrown right into the face of anyone who just types your name into a computer (and not to even mention the future more advanced pattern-matching and cross-referencing search technologies). This of course applies not just to mistakes, but also to any disreputable opinions and interests you might have that happen to be noted online.

Moreover, the social norms may develop to the point where it's expected that you constantly log the details of your life online. We do seem to be going in that direction, if the "social networking" sites are any indication. In such a situation, even if you had the option of reducing your online profile, it would send off a powerful signal that would make you look weird and suspicious.

I am worried about these developments much more than about what our sclerotic governments might do with their new surveillance capabilities. After all, even today, they can find out whatever they want about you if they really care for some reason -- they just need to put some effort into cross-referencing the already abundant information you leave behind at every step. However, as long as you pay your taxes and don't misbehave in those particular ways they care about, you'll be comfortably under their radar, and I see no reason why it wouldn't stay that way. Even nowadays, if I were to express some opinions that aren't very respectable, I'd be much more worried about the prospect of these words forever coming up whenever someone searches for my name online than about the much more remote possibility that the government might take active interest in what I said.

Replies from: Kaj_Sotala, gwern, Unknowns
comment by Kaj_Sotala · 2010-06-25T06:28:54.335Z · LW(p) · GW(p)

Even with the most non-intrusive and fair government imaginable, if lots of information about your life is easily available online, it means that a single stupid mistake in life that would earlier have only mild consequences can ruin your reputation forever and render you permanently unemployable and shunned socially.

I've heard this opinion expressed frequently, but it always seems to kind of contradict itself. If there's lots of information available about everyone, and all kinds of stupid mistakes will easily become permanently recorded...

...then wouldn't that lead to just about everyone's reputation being ruined in the eyes of everyone? But that doesn't make any sense - if almost everyone's going to have some stupid mistakes of theirs caught permanently on file, then all that will happen is that you'll find out you're not the only one who has made stupid mistakes. Big deal.

In fact, this to me seems potentially preferable to our current society. Right now, people's past mistakes get lost in the past. As a result, we construct an unrealistic image where most people seem far more perfect than they actually are. Some past mistake coming out might ruin someone's reputation, and people who have made perfectly normal and reasonable mistakes will feel a lot more guilty about them than is warranted. If the mistakes everyone had made were available, then we wouldn't have these unrealistic unconscious conceptions of how perfect people must be. Society might be far healthier as a result.

Replies from: Vladimir_M, reaver121, NancyLebovitz
comment by Vladimir_M · 2010-06-25T16:09:16.188Z · LW(p) · GW(p)

Kaj_Sotala:

But that doesn't make any sense - if almost everyone's going to have some stupid mistakes of theirs caught permanently on file, then all that will happen is that you'll find out you're not the only one who has made stupid mistakes.

There are at least three important problems with this view:

  • First, this is only one possible equilibrium. Another possibility is a society where everyone is extremely cautious to the point of paranoia, so that very few people ever commit a faux pas of any sort -- and although most people would like things to be more relaxed, they're faced with a problem of collective action. I don't think this is at all unrealistic -- people living under repression quickly develop the instinct to watch their mouth and behavior obsessively.

  • Second, even under the most optimistic "good" equilibrium, this argument applies only to those behaviors and opinions that are actually widespread. Those whose unconventional opinions and preferences are in a small minority, let alone lone-wolf contrarians, will have to censor themselves 24/7 or suffer very bad consequences.

  • Third, many things people dare say or do only in private are not dangerous because of laws or widespread social norms, but because of the local and private relations of power and status in which they are entangled. You need look no further than the workplace: if your bosses can examine all the details of your life to determine how docile, obedient, and conformist you are, then clearly, having such traits 24/7 is going to become necessary to prosper economically (except for the minority of self-employed folks, of course). Not to mention what happens if you wish to criticize your employers, even in your own free time! (Again, there's a collective action problem of sorts here: if everyone were mouthing off against their bosses and couldn't help but do it, it would lead to a "good" equilibrium, but the obedient and docile will outcompete the rest, making such traits more valuable and desirable.)

Replies from: None
comment by [deleted] · 2010-06-25T21:02:53.997Z · LW(p) · GW(p)

Second, even under the most optimistic "good" equilibrium, this argument applies only to those behaviors and opinions that are actually widespread. Those whose unconventional opinions and preferences are in a small minority, let alone lone-wolf contrarians, will have to censor themselves 24/7 or suffer very bad consequences.

I think it can apply even to minority opinions, because the minority opinions add up. Even if only 1% of the population has a given minority opinion, significantly more than 1% of the population is probably going to have at least one minority opinion about something. If people choose to be super-intolerant of 1% opinions, and if 70% of the population has at least one 1% opinion, then it's not 1% of the population that people will have to be super-intolerant of, but 70% of the population.

Or if 70% seems too extreme a possibility, try 30%. The point is that the sum total of small minorities adds up to a total that is less small, and this total will determine what happens. Take the extreme case: suppose the total adds up to 100%, so that 100% of the population holds at least one extreme-minority opinion. Can a person afford to ostracize close to 100% of the population (consisting of everybody who has at least one extreme-minority opinion that he does not share)? I think not. Therefore he will have to learn to be much more tolerant of extreme-minority opinions.

While that is only the extreme case, and 30% is not 100%, I think the point is made, that the accumulated total of all people who have minority opinions matters, and not merely the total for each minority opinion.
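
The accumulation is easy to quantify; a minimal sketch, assuming k independent opinions each held by 1% of the population:

```python
# Assuming k independent 1%-minority opinions, the fraction of people
# holding at least one is 1 - 0.99**k, which climbs quickly with k.
for k in (10, 30, 70, 120):
    print(f"{k:>3} independent 1% opinions -> "
          f"{1 - 0.99 ** k:.0%} hold at least one")
# ~26% at k=30 and ~70% at k=120, matching the 30% and 70% cases above.
```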

Replies from: Sticky
comment by Sticky · 2010-06-26T02:19:49.409Z · LW(p) · GW(p)

It seems unlikely that people would think that way. Taking myself as an example, I favor an extensive reworking of the powers, internal organization, and mode of election of the U.S. House of Representatives. I know that I'm the only person in the world who favors my program, because I invented it and haven't yet described it completely. I've described parts of it in online venues, each of which has a rather narrow, specialist audience, so there might possibly be two or three people out there who agree with me on a major portion of it, but certainly no one who agrees on the whole. That makes me an extreme minority.

There are plenty of extreme minorities I feel no sympathy for at all. Frankly, I think moon-hoax theorists should be shunned.

Replies from: None
comment by [deleted] · 2010-06-26T05:31:13.995Z · LW(p) · GW(p)

You are not facing the situation I'm describing, because it hasn't happened yet. It is a future speculation that would occur in a sufficiently transparent society. As long as you are unaware of most people's odd opinions, you can afford to shun the tiny minority of odd thinkers whose odd thoughts you are aware of, because in doing so you are only isolating yourself socially from that tiny minority, which is no skin off your nose. However, in a sufficiently transparent society you may, hypothetically, discover that 99% of everyone has at least one opinion which (previously) you were ready to shun a person for. In that hypothetical case, if you continue your policy of shunning those people, you will find yourself socially isolated to a degree that a homeless guy living under a bridge might feel sorry for. In that hypothetical situation, then, you may find yourself with no choice but to relax your standards about whether to shun people with odd opinions.

On second thought, in a sense it has happened. I happen to live in that world now, because I happen to think that pretty much everybody has views about as batty as moon-hoax theories. In reaction to finding myself in this situation, I am not inclined to shun people who espouse moon-hoax-theory-level idiocy, because I would rather have at least one or two friends.

comment by reaver121 · 2010-06-25T11:22:59.145Z · LW(p) · GW(p)

You're assuming that because someone has made mistakes themselves they will judge others less harshly. That is not necessarily the case.

Besides, most people do indeed make mistakes, but not the same mistakes. If your boss is a teetotaler and you are a careful driver, you are not going to think well of each other if you get drunk and your boss gets into a car accident.

Even I have the same problem. I tend to procrastinate, so if a coworker is past his deadline I don't really care. But I dislike sloppy thinking and try to eradicate it in myself, so it really gets on my nerves if someone goes all irrational on me. (Although as I get older I seem to be getting better at accepting that most people don't think like me.)

Replies from: None
comment by [deleted] · 2010-06-25T14:48:30.435Z · LW(p) · GW(p)

You're assuming that because someone has made mistakes themselves they will judge others less harshly.

Actually, I don't think that's the only or most important factor. People who learn about the skeletons in your closet will compare you, not only to themselves, but to other people. If everyone has skeletons in their closet and everyone knows about them, then your prospective employer Bob (say) will be comparing the skeletons in your closet not only to the skeletons in his own closet, but more importantly to the skeletons in the closets of the other people who are competing with you for the same job.

As for people not making equal mistakes, to put it in simple binary terms merely for the purpose of making the point, divide people into "major offenders" and "minor offenders" and suppose major offenders are all equally major and minor all equally minor. If major offenders outnumber minor offenders, then being a major offender is not such a big deal since you're part of the majority. But if minor offenders outnumber major offenders, then only a minority of people will be major offenders and therefore only a minority will have to worry about a transparent society. So either way, the transparent society is not that big a thing to fear for the average person. It's a self-limiting danger. The more probable it is that the average person will be revealed to have Pervert Type A, the greater the fraction of people who will be revealed to be Pervert Type A, and therefore the harder it will be for other people to ostracize them, since to do so would reduce the size of their own social network.

Replies from: reaver121
comment by reaver121 · 2010-06-28T08:25:48.848Z · LW(p) · GW(p)

Good points. Just read the whole conversation between you and Vladimir_M and I agree it could go both ways.

comment by NancyLebovitz · 2010-06-25T11:20:21.287Z · LW(p) · GW(p)

...then wouldn't that lead to just about everyone's reputation being ruined in the eyes of everyone? But that doesn't make any sense - if almost everyone's going to have some stupid mistakes of theirs caught permanently on file, then all that will happen is that you'll find out you're not the only one who has made stupid mistakes. Big deal.

One part of how that plays out depends on whether there's a group that can enforce "it's different when we do it."

comment by gwern · 2010-06-25T00:08:23.511Z · LW(p) · GW(p)

An overall view of the 20th century would note that one's own government is a major threat to one's life. I don't especially see why one would think this has ceased to be true in the 21st; history has seen many sclerotic regimes pass and be replaced by fresher ones, and a one-way surveillance society would only enhance government power.

Why do you think social norms are a greater threat?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-25T01:23:03.670Z · LW(p) · GW(p)

Well, that's a complex topic that can't possibly be done justice to in a brief comment. But to put it as succinctly as possible, modern governments are already so powerful that given the existing means at their disposal, additional surveillance won't change things much. Your argument can in fact be used to argue against its relevance -- all the sundry 20th century totalitarians had no problem doing what they did without any surveillance technology to speak of.

My view, which would take much more space than is available here to support by solid arguments, is that the modern Western system of government will continue sliding gradually along the same path as now, determined by bureaucratic inertia and the opinions fashionable among high-status groups; both these things are fairly predictable, as far as any large-scale predictions about human affairs go. Whether these developments should be counted as good or bad, depends on many difficult, controversial, and/or subjective judgments, but realistically, even though I'm inclined towards the latter view, I think anyone with a little prudence will be able to continue living fairly comfortably under the government's radar for the foreseeable future. Even in the conceivable scenarios that might end up in major instability and uncertain outcomes, I don't think surveillance technology will matter much when it comes to the trouble that awaits us in such cases.

On the other hand, I see a very realistic prospect of social norms developing towards a zero-privacy world, where there would be no Orwellian thought police coming after you, but you would be expected to maintain a detailed public log of your life -- theoretically voluntarily, but under the threat of shunning and unemployability in case you refuse it. Already, employers, school admissions bureaucrats, etc. are routinely searching through people's trails left on Facebook and Google. What happens when an even greater portion of one's life will be customarily posted online? How long before not having a rich online trail is considered weird and suspect by itself?

Already, an easily googlable faux pas will be a horrible millstone around your neck for the rest of your life, even if the government couldn't care less about it. What will happen when far more stuff is online, and searchable in far more powerful ways?

Replies from: gwern, NancyLebovitz
comment by gwern · 2010-06-25T01:43:31.091Z · LW(p) · GW(p)

Already, employers, school admissions bureaucrats, etc. are routinely searching through people's trails left on Facebook and Google. What happens when an even greater portion of one's life will be customarily posted online? How long before not having a rich online trail is considered weird and suspect by itself?

While we're simply stating our beliefs...

I view this as merely a transition period. You say we cannot keep both our old puritanical public standards and ever-increasing public disclosure. I agree.

However, the latter is driven by powerful and deep economic & technological & social trends, and the former is a weak creature of habit and tradition which has demonstrated in the 20th century its extreme malleability (just look at homosexuality!).

It is a case of a movable object meeting an unstoppable force; the standards will be forced to change. A 10-year-old growing up now would not judge an old faux pas online harshly, even if the 30- and 40-year-olds currently in charge would and do. Those 30- and 40-year-olds' days are numbered.

Replies from: Vladimir_M, NancyLebovitz
comment by Vladimir_M · 2010-06-25T02:43:26.583Z · LW(p) · GW(p)

On the whole, I don't think that people are becoming more tolerant of disreputable behaviors and opinions, or that they are likely to become so in the future -- or even that the set of disreputable traits will become significantly smaller, though its composition undoubtedly will change. Every human society has its taboos and strong status markers attached to various behaviors and opinions in a manner that seems whimsical to outsiders; it's just that for the last few decades, the set of behaviors and opinions considered disreputable has changed a lot in Western societies. (The situation is also confused by the fact that, similar to its inconsistent idealization of selectively applied "free-thinking," our culture has developed a strange inconsistent fondness for selectively applied "tolerance" as a virtue in its own right.)

Of course, those whose opinions, preferences, and abilities are more in line with the new norms have every reason to be happy, and to them, it will look as if things have become more free and tolerant indeed. Trouble is, this is also why it's usually futile to argue the opposite: even by merely pointing out those things where you are now under greater constraint by social norms than before, you can't avoid the automatic status-lowering association with such things and the resulting derision and/or condemnation.

Realistically, the new generations will react to reduced privacy by instinctively increasing conformity, not tolerance. Ultimately, I would speculate that in a world populated by folks who lack the very idea of having a private sphere where you can allow yourself to do or say something that you wouldn't want to be broadcast publicly, the level of tolerance would in fact go way down, since typical people would be brought up with an unrelenting focus on watching their mouth and their behavior, and lack any personal experience of the satisfaction of breaking a norm when no one untrusted is watching.

Replies from: None
comment by [deleted] · 2010-06-25T15:18:42.171Z · LW(p) · GW(p)

On the whole, I don't think that people are becoming more tolerant of disreputable behaviors and opinions, or that they are likely to become so in the future -- or even that the set of disreputable traits will become significantly smaller, though its composition undoubtedly will change.

It is commonly said that status competition is zero-sum. This seems a more certain invariant than what you just wrote above. If that's the case, then any change in the degree of tolerance will be perfectly matched by a corresponding change in the degree of conformity - and vice versa.

The picture you paint, however, is of the average person becoming more of a pariah, more unemployable, fewer friends, because they are haunted by that one ineradicable disreputable behavior in their past. This picture violates the assumption that status competition is zero-sum - an assumption which I have a stronger confidence in than I do in your claim that we are not going to become "more tolerant". In fact your claim is ambiguous, because there is surely no canonical way to compare different sets of taboo behavior so that the degree of tolerance of different cultures can be compared. It is a similar problem to the problem of adjusting for inflation with price indexes. I have more confidence in our ability to measure, and compare, the fraction of the population relegated to low status (eg unemployability), than I do in our ability to measure, and compare, the magnitudes of the sets of taboos of different societies.

Replies from: NancyLebovitz, Vladimir_M
comment by NancyLebovitz · 2010-06-25T15:22:52.769Z · LW(p) · GW(p)

If there are a lot of pariahs in a connected world, then they will form their own subcultures.

comment by Vladimir_M · 2010-06-26T03:34:11.557Z · LW(p) · GW(p)

Constant:

The picture you paint, however, is of the average person becoming more of a pariah, more unemployable, fewer friends, because they are haunted by that one ineradicable disreputable behavior in their past.

Maybe I failed to make my point clearly, but that is not what I had in mind. The picture I paint is of the average person becoming far more cautious and conformist, and of a society where various contrarians and others with unconventional opinions and preferences have no outlet at all for speaking their mind or indulging their preferences.

Average folks would presumably remain functioning normally (within whatever the definition of normality will be), only in a constant and unceasing state of far greater caution, hiding any dangerous thoughts they might have at all times and places. The number of people who actually ruin their lives by making a mistake that will haunt them forever won't necessarily be that high; the unceasing suffocating control of everyone's life will be the main problem.

What the society might end up looking like after everyone has grown up in a no-privacy world, we can only speculate. It would certainly not involve anything similar to the relations between people we know nowadays. (For example, you speak of friends -- but at least for me, a key part of the definition of a close friend vs. friend vs. mere acquaintance is the level of confidentiality I can practice with the person in question. I'm not sure if the concept can exist in any meaningful form in a world without privacy.)

In fact your claim is ambiguous, because there is surely no canonical way to compare different sets of taboo behavior so that the degree of tolerance of different cultures can be compared. It is a similar problem to the problem of adjusting for inflation with price indexes.

That's a very good analogy! But note that none of my claims depend on any exact comparison of levels of tolerance. Ultimately, the important question is whether, in a future Brinesque transparent society, there would exist taboo opinions and preferences whose inevitable suppression would be undesirable by some reasonable criteria. I believe the answer is yes, and that it is unreasonably optimistic to believe that such a society would become so tolerant and libertarian that nothing would get suppressed except things that rightfully should be, like violent crime. (And ultimately, I believe that such unwarranted optimism typically has its roots in the same biases that commonly make people believe that the modern world is on an unprecedented path of increasing freedom and tolerance.)

There is of course also the issue of thoughts and words that are dangerous due to people's specific personal circumstances, which is more or less orthogonal to the problem of social norms and taboos (as discussed in the third point of this comment).

Replies from: None
comment by [deleted] · 2010-06-26T12:08:11.572Z · LW(p) · GW(p)

Thanks - I have nothing specifically in reply. Just to be clear about where I'm coming from, while I am not convinced that the future will unfold as you describe, neither am I convinced that it will not. So, I agree with you that popular failure to devote any attention to the scenario is myopic.

comment by NancyLebovitz · 2010-06-25T02:19:04.818Z · LW(p) · GW(p)

The interesting thing is that it isn't just going to be reasonable individual choices.

I assume there will be serious social pressure to take some faux pas seriously and ignore others.

comment by NancyLebovitz · 2010-06-25T01:49:12.899Z · LW(p) · GW(p)

I'll throw some complexity in-- those social standards change, sometimes as a result of deliberate action, sometimes as a matter of random factors.

The most notable recent example is prejudice against homosexuality getting considerably toned down.

I agree that there's a chance that just not having a public record of oneself might be considered suspicious.

I'm hoping that the loss of privacy will lead to a more accurate understanding of what people are really like, and more reasonable standards, but I'm not counting on it.

comment by Unknowns · 2010-06-25T06:16:36.918Z · LW(p) · GW(p)

Once it becomes sufficiently obvious that everyone frequently does or says "not very respectable" things, people will begin to just laugh when someone brings them up as a criticism. It will no longer be possible to pretend that such things apply only to the people you criticize.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-25T15:31:01.688Z · LW(p) · GW(p)

That is only one possible equilibrium. The other one is that as the sphere of privacy shrinks, people become more and more careful and conformist, until ultimately, everyone is behaving with extreme caution. In this equilibrium, people are locked in a problem of collective action -- nobody dares to say or do what's on his mind, even though most people would like to.

Moreover, even in the "good" equilibrium, the impossibility of hypocrisy protects only those behaviors and opinions that are actually characteristic of a majority. If your opinions and preferences are in a small minority, there is nothing at all to stop you from suffering condemnation, shunning, low status, and perhaps even outright persecution from the overwhelming majority.

comment by ewbrownv · 2010-06-24T21:04:37.723Z · LW(p) · GW(p)

I found it stunningly naive. So far the actual response of governments to citizen surveillance has been to make it illegal whenever it becomes inconvenient, and of course government systems are always fenced in with 'protections' to prevent private individuals from 'misusing' the data they collect. In an actual surveillance state the agency with control of the surveillance system would have the ability to imprison anyone at any time while being nearly immune to retaliation, a situation that ensures it will quickly mutate into an oppressive autocracy no matter what it started out as.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-25T02:09:58.571Z · LW(p) · GW(p)

This has the cheering implication that surveillance by citizens makes a difference when it does happen, and it's important to push to make sure it's legal.

comment by Unknowns · 2010-06-25T06:14:20.541Z · LW(p) · GW(p)

Here's one vote for total Orwellian surveillance.

comment by Vladimir_M · 2010-06-24T18:44:57.506Z · LW(p) · GW(p)

DSimon:

I'd rather keep the future probability for total Orwellian surveillance low, thanks.

Sadly, that horse has long left the barn -- and in any case, it seems to me that privacy is even in principle incompatible with highly developed digital technology.

What I find to be a much more realistic danger than the prospect of Orwellian government are the social and market implications of a low-privacy world. If a lot of information about your life is easily accessible online, this means that embarrassing mistakes that would cause only mild consequences in the past can now render you permanently unemployable, and perhaps even socially ostracized. In such a world, once you do anything disreputable, it's bound to haunt you forever, throwing itself into the face of anyone who just types your name into a computer (and not to even mention the future technologies for other sorts of pattern-matching and cross-referencing search).

To make things even worse, in a society where you're expected to place a detailed log of your private life online by social convention -- and it seems like we are going towards this, if the "social networking" websites are any indication -- refusal to do so will send off a thunderous signal of weirdness and suspiciousness. Thus, the combination of technology and social trends can result in a suffocatingly controlling society even with the most libertarian government imaginable.

comment by [deleted] · 2010-06-23T15:02:25.863Z · LW(p) · GW(p)

I don't have direct access to a large percent of my memories. Many cannot be put into words, and I don't just mean music and imagery. The knot between these memories is utterly complex. In self-reflection, I am dishonest with myself, and I don't feel like it is so. My mother has the idea that poetry is a means of most honestly recording some of the difficult-to-explain thoughts, but I think that the scope, inexpressibility and interconnection of the memories makes this infeasible.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-06-25T06:16:39.685Z · LW(p) · GW(p)

Presumably those memories affect your behavior somehow, though. A superintelligence might be able to re-create, if not the same memories, then functionally equivalent ones. Whether it was capable of doing so depends on how much information of your behavior is retained.

On the other hand, if those memories don't affect your behavior, then that implies that they're not essential for rebuilding something we could call "you".

comment by Vladimir_Nesov · 2010-06-23T15:21:40.736Z · LW(p) · GW(p)

How good a record is good enough? In truth, I don't think we even know enough to get the order of magnitude right. The best I can offer is that you need to record as much as you are willing to.

But this estimate is essential. By deciding to pursue this course of action, you in effect state that you estimate sufficient probability of it being enough to justify the additional effort. You can't say "I don't know" and act on this knowledge at the same time.

Replies from: gworley, Douglas_Knight
comment by Gordon Seidoh Worley (gworley) · 2010-06-23T20:32:27.689Z · LW(p) · GW(p)

Perhaps I made a mistake in using the LW taboo words "I don't know". Really, how much is probably a function of how fine-grained you want the restoration from writing to be. Since I think it's reasonable to assume decreasing marginal utility from additional writing, a good estimate is that something like the first 10 pages of an autobiography are worth about the same as the following 100 pages (assuming a uniform distribution of information, so not the first 10 pages of a typical autobiography that might go in chronological order). The more you write, the better the restoration will be. How good that restoration will actually be compared to, say, a cryonic restoration is hard to know, since we can't be sure how cryonic restoration will turn out either, but obviously I think it will turn out to be pretty good.
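To make that diminishing-returns estimate concrete, here is a toy sketch (my own; the logarithmic value function is an assumption, not something argued for above): if the marginal value of page n falls off like 1/n, total value grows like log(n), and the first 10 pages come out worth about as much as the next 100.

```python
import math

# Assumed toy model: marginal value of page n ~ 1/n, so the total value
# of the first n pages grows like log(n).
def restoration_value(pages):
    return math.log(pages)

first_ten = restoration_value(10)                              # pages 1-10
next_hundred = restoration_value(110) - restoration_value(10)  # pages 11-110
print(round(first_ten, 2), round(next_hundred, 2))             # 2.3 vs 2.4 -- roughly equal
```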

comment by Douglas_Knight · 2010-06-23T18:03:25.299Z · LW(p) · GW(p)

In other words, if we don't know the necessary order of magnitude, we should only bother if we can increase our output by orders of magnitude.

comment by HughRistik · 2010-06-23T20:40:41.736Z · LW(p) · GW(p)

This post addresses the subject of the appropriate human data compression format. Though it's an interesting idea, I think that the proposed method is too low in resolution. You acknowledge the lossiness, but I think it's just going to be too much.

Although the method you advocate is probably the best thing short of cryonics, I doubt there is any satisfactory compression method that can make anyone that's more similar to you than a best friend or a family member who gets stuck with your memories. It's better to have too much data than too little.

Because we share the same evolutionary past as all of our conspecifics, the biology and psychology of our brains is statistically the same. We each have our quirks of genetics and development, but even those are statistically similar among people who share our quirks. Thus with just a few bits of data we can already record most of what makes you who you are.

I'm not confident in this part. Although a large percentage of human biology and psychology is identical, the devil is in the details. From a statistical perspective, aren't humans and chimps practically identical also? Percentage similarity of traits is probably the wrong metric, since small quantitative differences can have large qualitative impact.

Your idea of a generic human template, with various subtemplates for quirks, is also interesting, but still too low resolution.

Under what metric do we say that you and I have the "same" quirk, even if our phenotypes look superficially similar? How much data is discarded by the aggregation?

Even if we assume that the notion of a generic human template is meaningful, there are almost as many ways that people can deviate from the generic human template as there are people, and there would have to be that many quirky subtemplates. It's possible that we could compress human phenotypic deviation into a smaller number of templates than the number of people, but I don't think we are anywhere near having a satisfactory way to do so. At the least, storing the deltas from a shared template might cut down on the data we all have in common.

The problem with lossy measures of phenotype such as memories and our current ability to measure quirky deviations from the average is that they discard too much information: the genotype, and other low-level aspects of phenotype.

Let's start with the genotype problem. In the future, we synthesize a human with the same phenotype as our crude records of you (your memories, and your quirks according to the best psychometric indexes currently available). We will call this phenotype set X. Yet since multiple genotypes can converge towards the same phenotype (particularly for crude measures of phenotype), the individual created is not guaranteed to have the same genotype as you, and probably won't. Due to having a different genotype, this individual could end up having traits outside X that you didn't have. They will have the same set of recorded phenotypic traits as you, but they may lack phenotypic traits that weren't recorded (because your method of recording discarded the data), and they may have extra phenotypic traits that you didn't have, because keeping those traits out wasn't in the spec.

Fundamentally, I think it's problematic to try to reverse-engineer a mind based on a record of its outputs. That seems like trying to reverse-engineer a computer and its operating system based on a recording of everything it displayed on its monitor while being used. Even if you know how computers and operating systems work in general, you will still be unable to eliminate lots of guesswork in choosing between multiple constructions that would lead to the same output.

If you know a system's construction, you may be able to predict its outputs, but knowing the outputs of a system doesn't necessarily allow you to work backwards to its construction, even when you know the principles by which systems of that type are constructed.
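A minimal illustration of that underdetermination (a made-up toy example, not anyone's actual proposal): two internally different constructions can agree on every output that happened to be recorded, and diverge everywhere else.

```python
# Two different "constructions" whose outputs agree on all observed inputs.
def construction_a(x):
    return 2 * x

def construction_b(x):
    # built to agree with construction_a at x = 0, 1, 2, 3 by design
    return 2 * x + x * (x - 1) * (x - 2) * (x - 3)

observed = [0, 1, 2, 3]  # the recorded "monitor output"
assert [construction_a(x) for x in observed] == [construction_b(x) for x in observed]

# ...but the record can't distinguish them, and they diverge off-record:
print(construction_a(4), construction_b(4))  # 8 vs 32
```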

I think the best reconstruction you will get through this method won't be substantially more similar to you than taking your best friend (who has highly similar personality and interests) and implanting him with your memories. At best, you would get someone similar enough to be a relative of yours.

We really need to preserve your genotype; otherwise, future scientists could make an individual with all your memories and crudely measured personality traits, but a different personality and value system (in ways that weren't measured), who wakes up and wonders what the heck you were doing all your life. We would need a solution that has all the phenotypic traits recorded for you, with the constraint that the individual created has the same genotype as you.

Yet even such a solution would still be discarding too much information about biological traits that influence your phenotype yet are not recorded in your genetic code, such as your prenatal development. It's been shown that prenatal factors influence your brain structure, personality, and interests. So we need to record your prenatal environment to be able to create a meaningful facsimile of you. Otherwise, we could end up constructing someone with the same genotype, memories, and psychometric measures as you, who nevertheless had a different brain structure; such a person would probably be less like you than a twin of yours who was implanted with your memories, because your twin shares a similar prenatal environment with yours, while your copy does not. A different brain structure would create a similar problem to having a different genotype: a different brain that has the same recorded phenotype as you will differ from you in unrecorded aspects of phenotype.

I would worry that a record of every single thought and behavior you have, both from yourself and observers, would still not be enough to reverse-engineer "you" in any meaningful way. Adding the constraint of matching your genotype would help, but we still don't have a way to compress biological factors other than genotype, such as prenatal development. We have no MP3 format for humans.

The best record of your prenatal development that we have is your body and brain structure, so these would have to be stored along with your memories. Preferably at a rather cold temperature.

Replies from: apophenia, gworley
comment by apophenia · 2010-06-25T03:32:54.424Z · LW(p) · GW(p)

How good a record is good enough? In truth, I don't think we even know enough to get the order of magnitude right. The best I can offer is that you need to record as much as you are willing to. The more you record, the more there will be to work with, and the less chance there will be of insufficient data. It may turn out that you simply can't record enough to create a good restoration of a person from writing, but this is little different from the risk in cryonics of not being well preserved enough to restore despite best efforts. If you're willing to take the risk that cryonics won't work as well as you hope, you should be willing to accept that writing yourself into the future might not work as well as you hope.

I think you're wrongly equating the relative risks here. If I wrote about myself for an hour a day for the rest of my life, I would rate the chances that I could be reconstructed as very low, in contrast to cryonics. Your chances of being reconstructed increase with the amount of information present. One of the arguments for the safety of vitrifying the brain is that, because the brain has a lot of redundancy (structure), we might be able to reconstruct damage.
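That redundancy point is the same intuition behind error-correcting codes; a minimal sketch (my analogy, using a threefold repetition code rather than anything brain-specific):

```python
from collections import Counter

# 3x repetition code: store each bit three times.
def encode(bits):
    return [b for b in bits for _ in range(3)]

# Majority vote over each group of three repairs isolated damage.
def decode(coded):
    return [Counter(coded[i:i + 3]).most_common(1)[0][0]
            for i in range(0, len(coded), 3)]

msg = [1, 0, 1]
stored = encode(msg)
stored[1] = 0                  # one copy gets damaged
assert decode(stored) == msg   # redundant structure recovers the original
```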

I worked for a while in cryptography, where we try to recover original data wholly or partially from encrypted data. Based on those experiences and talking with Peter de Blanc, I looked into this reconstruction problem for a couple of hours at one point. Off the top of my head, here are some tips which might make it easier for me to reconstruct your body, brain and memories if I'm alive:

  • Record lots of data. Speech is better than writing. Include at least one photo. Videoblogging should record at a rate at least ten to a hundred times faster than writing (see the ballpark figures after this list), and if storage stays as cheap as it is now, the recordings will survive.
  • Freeze a DNA sample (cheap, but riskier) or record one (expensive because you have to scan it, but less likely to be destroyed). This should allow one to reconstruct a physical twin at minimum.
  • I'm going to stick my neck out here and say don't censor yourself if you're recording audio. If I want to reconstruct your brain, the first step is probably to reverse-engineer your thoughts, so the more freely you talk, the easier it is to deduce how you arrive at what you say. For example, I would learn more about someone by watching them solve a cryptogram for ten seconds than if they just gave me the answer. I have no idea if free association is equally as likely to work as essays.
  • In general, the closer to the source the better in reverse-engineering: I've heard an estimate I buy that a minute of high-quality video could replace DNA in a pinch, but I would still pay to freeze a sample myself.
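A back-of-envelope check on that ten-to-a-hundred-times figure (all rates below are my own ballpark assumptions):

```python
# Rough data-rate comparison: journaling vs. speech vs. raw video.
WRITING_WPM = 20          # assumed rate for reflective journaling
SPEECH_WPM = 150          # assumed conversational speaking rate
BYTES_PER_WORD = 6        # ~5 characters plus a space

writing_bpm = WRITING_WPM * BYTES_PER_WORD   # ~120 bytes/min of text
speech_bpm = SPEECH_WPM * BYTES_PER_WORD     # ~900 bytes/min, transcribed
video_bpm = 1_000_000 // 8 * 60              # ~7.5 MB/min at an assumed 1 Mbps

print(speech_bpm / writing_bpm)   # ~7.5x from word rate alone
print(video_bpm / writing_bpm)    # ~62,500x in raw bytes; mostly redundant,
                                  # but tone, face, and gesture live in the surplus
```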

A friend of mine made a design for a $50 EEG cap, but the bandwidth is low enough that I doubt it's worth the cash for most people. If you have money to spare it can't hurt.

I wouldn't guess I would be able to reconstruct you so exactly that you'd notice no difference, which is my personal standard and why I'm not doing any of this myself.

Ciphergoth, I'd be interested in what you have to say about this.

comment by Gordon Seidoh Worley (gworley) · 2010-06-24T16:05:32.923Z · LW(p) · GW(p)

At best, you would get someone similar enough to be a relative of yours.

Even if that's the best we can do, that's much better than the nothing that will befall those who would otherwise have been totally lost because they didn't sign up for cryonics.

Replies from: GreenRoot
comment by GreenRoot · 2010-06-24T18:01:26.530Z · LW(p) · GW(p)

that's much better than the nothing that will befall those who would otherwise have been totally lost

I'm curious to know why you make this judgment. I imagine future people choosing between making a new person and making an as-similar-as-a-relative copy of a preserved person. In both cases, one additional person gets to exist. In both cases, that person is not somebody who has ever existed before. In neither case does a future person get to revive a loved one, because the result will only be somebody similar to that loved one. Reviving the preserved person is better for the preserved person, I guess, but making a new person is better for the new person. Once you've lost continuity of identity, you've lost any reason why basing new people on recordings is better than making new people the old-fashioned way.

Put another way, the nothing that will befall the totally lost feels exactly as bad to me as the nothing that will befall the future unborn whom they displace.

I know that ethical reasoning about potentially-existing people is hard, and I'm not too clear on this, so I'd like to know why you feel the way you do.

comment by Morendil · 2010-06-23T15:40:34.863Z · LW(p) · GW(p)

How many hours do you estimate you'll be putting into your autobiography for the resulting record to be "good enough"?

Next question, what is your hourly pay rate?

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2010-06-23T20:44:10.716Z · LW(p) · GW(p)

I see where this is going, so I'll go ahead and let you run an economic analysis on me. But keep in mind that cost is not the only factor, just the main one for most of the world's population. For me it has far more to do with the social costs I would have to pay to sign up for cryonics.

That said, I estimate I'll be putting about 1 hour a week into writing myself into the future. I am currently paid at a rate of approximately $18 an hour. I'm not sure what my lifetime average pay rate will be, but let's go ahead and estimate it at $60 per hour in 2010 USD (I have two M.S. degrees, one in computer science and one in mathematics, and I'm willing to do work with questionable ethical outcomes, like "defense" contracting).

Replies from: wedrifid
comment by wedrifid · 2010-06-25T08:49:19.276Z · LW(p) · GW(p)

The figures given put the break-even point at about 10 years of writing at 1 hour per week, versus the cost of a cryonic preservation.
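The arithmetic behind that break-even, reconstructed from the figures quoted upthread:

```python
hourly_rate = 60         # gworley's estimated lifetime pay, USD/hour
hours_per_week = 1       # time devoted to writing
cryonics_cost = 30_000   # the CI full-body figure mentioned elsewhere in the thread

weeks = cryonics_cost / (hourly_rate * hours_per_week)
print(weeks / 52)        # ~9.6 years of weekly writing equals one preservation
```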

comment by WrongBot · 2010-06-24T18:29:31.462Z · LW(p) · GW(p)

How good a record is good enough?

No record in English (and I'm using English as a shorthand for any human language) can ever be good enough. English is not a technology for transmitting information.

English is a compression format, and a very lossy and somewhat inaccurate compression format at that. But it has a stupendously high compression rate and compression algorithms with reasonable running speeds on specially adapted hardware (i.e. brains), so for the particular purposes of human communication English is a pretty decent option.

I own a t-shirt with this graphic printed on it. If you possess a mostly correct compression algorithm (that is, you speak modern English), the ~5kb of data on that shirt contains sufficient information to reproduce ~30 major scientific or technological discoveries. I don't know exactly how much space you could fit that information into if you encoded it in a way that wasn't very heavily optimized for very specific types of human brains, but I suspect it's many orders of magnitude greater than 5kb.

On the surface this seems like it could be an argument for reproducing a specific human from their preserved written material, what with the amazing information density of English. But using a standard English decompression algorithm to analyze what you've written is worthless, because we're not trying to recreate the meaning of what you've written. We're trying to recreate the compression algorithm used to create your writings, which would be approximately isomorphic with your brain. But because the English data format is lossy and imprecise, reconstructing that algorithm from only its output is impossible.
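A toy sketch of that many-to-one collapse (an invented example, with made-up names): once distinct internal states compress to the same words, no decompressor can recover which state produced them.

```python
# Lossy "English-like" encoder: rich internal states collapse to one word.
def lossy_encode(mood_vector):
    return "happy" if sum(mood_vector) > 0 else "sad"

state_1 = [0.9, -0.2, 0.4]   # two quite different internal states...
state_2 = [0.1, 0.1, 0.1]
assert lossy_encode(state_1) == lossy_encode(state_2) == "happy"
# ...leave the identical record, so the record underdetermines the source.
```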

If you could preserve a copy of the decompressed version of what you were trying to write along with your writings, that might be enough to reverse-engineer your brain('s compression algorithm). But I don't think that's possible for any human, much less most of them.

Replies from: sfb, NancyLebovitz
comment by sfb · 2010-07-20T14:44:21.927Z · LW(p) · GW(p)

How is your description of English as a compression format different from the idea of the detached lever, where one puts the characters a p p l e into a computer and hopes it will have crunchy, juicy properties?

I believe I speak Modern English, and could probably, with much hand-waving, look for penicillin mold or coil wires around magnets, but how does "atoms can be split" help me reproduce a major scientific/engineering discovery? It's not a compressed instruction, it's a teacher's password I can say to other people who know that atoms can be split, so we can be comfortably "scientific" in each other's presence. I don't know what it means in terms of equations, machinery, or testable predictions -- and, more to the point, I still don't know what it means after reading the t-shirt.

I could probably grope about in a corpse and find a heart or a lung, but how do I tell when I have a pancreas instead of a phlogistondix? And which bit of it is the pancreatic duct? And how do I tell if the fluid that comes out of the unknown lump of creature that I have is insulin? Or after injection, how to tell if it's working?

The only constrained anticipation I have for 'insulin' is that it helps diabetic people - although I now note that I have no real idea what 'diabetic people' means in medical terms or how, if I were thrust back in time, I would be able to reliably identify them.

I suggest that the t-shirt is not a compressed guide; it's a memory aid for people who already know the details behind it and who could, if their memory were entirely under their command, manage exactly the same without it.

Replies from: WrongBot, JoshuaZ
comment by WrongBot · 2010-07-20T15:57:58.395Z · LW(p) · GW(p)

I agree with JoshuaZ, but would add:

English is a compression algorithm, but most of the information required for that algorithm is stored in your brain. Your brain hears the word "apple" and expands it to represent everything that you know about apples. If your brain can't expand "pancreas" as far, that is a characteristic of your brain and not the word.

As is true of software compression algorithms, the purpose of your brain's compression algorithm is to allow you to shrink the size of your knowledge and messages, at the cost of computing time and accuracy.

comment by JoshuaZ · 2010-07-20T14:58:03.484Z · LW(p) · GW(p)

The only constrained anticipation I have for 'insulin' is that it helps diabetic people - although I now note that I have no real idea what 'diabetic people' means in medical terms or how, if I were thrust back in time, I would be able to reliably identify them.

I suggest that the t-shirt is not a compressed guide; it's a memory aid for people who already know the details behind it and who could, if their memory were entirely under their command, manage exactly the same without it.

But these terms don't exist in complete isolation. Say for example I'm sent back to 1850. Then I don't know what the different parts of a pancreas look like, but doctors will know. So I can bootstrap my knowledge based on that (and presumably they know what a diabetic is and how to recognize them). Some of these (like using quartz crystals to make clocks) are difficult due to infrastructural problems, but most of them have large amounts of associated ideas that connect to the terms.

By analogy with the issue being discussed, the terms being used don't function completely as detached levers, since when we have a written record of you saying "I like to eat apples but not oranges" we have a specific idea of what "apple" means.

Replies from: sfb
comment by sfb · 2010-07-21T18:43:05.971Z · LW(p) · GW(p)

Are you saying that once you have a written record of me mentioning apples, then you can talk to me about 'apples' with no explanation, but before that you would have to talk to me about 'apples (which are ...)' with an explanation?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-21T19:41:34.713Z · LW(p) · GW(p)

Are you saying that once you have a written record of me mentioning apples, then you can talk to me about 'apples' with no explanation, but before that you would have to talk to me about 'apples (which are ...)' with an explanation?

Hmm, ok. That can't be right when phrased that way. So something is wrong with my notions. It may be that the point about time-travel holds but generalizing it to the lever issue fails.

comment by NancyLebovitz · 2010-06-25T02:06:21.760Z · LW(p) · GW(p)

I believe that language is for communicating the shared part of experience, or sometimes for creating the illusion of shared experience. Whatever is unique about a person's experience is going to get lost if you try to communicate it through language.

Ok, that's maybe a little too harsh-sounding. I think some people are relatively similar to each other, so that language can resonate fairly well between them.

Still, I believe in tacit knowledge. And even if a skillful person can find words for some of it -- "turn the bike wheel towards the direction you're falling" is sound advice -- how would you convey exactly what it's like to be you riding a bike on a particular day, or what it's like to know how to ride a bike before you have words for it?

comment by [deleted] · 2010-06-23T15:41:52.722Z · LW(p) · GW(p)

A few thoughts:

-This would require both an enormous amount of time spent meticulously documenting your experiences (most of which would be mundane), and incredible writing skill to be able to capture various nuances of emotion. The number of people who are able to satisfy both these conditions may be less than the number who will sign up for cryonics.

-It's not clear to me that there's any consistent way to translate the written expression of a memory (particularly an emotional memory) into a mental state, partially because...

-I'm not sure writing is a fine-grained enough tool to capture a person's mental state.

-As nickernst stated, a large part (perhaps most) of what makes up my mental state isn't stored in the form of memories I can access.

comment by RobinZ · 2010-06-23T15:09:03.915Z · LW(p) · GW(p)

One way to help test the feasibility of this plan is to both write prolifically and undergo cryonic preservation.

Replies from: thomblake
comment by thomblake · 2010-06-23T15:14:54.784Z · LW(p) · GW(p)

Not a very useful test though; it's generally assumed that once we know whether cryonics works, we won't really need it anymore.

comment by orthonormal · 2010-06-23T23:32:02.585Z · LW(p) · GW(p)

On the one hand, I do expect a society after a positive Singularity to be interested in, say, reconstructing Feynman from the evidence he left, and of course the result would be indistinguishable from the original recipe to anyone who knew him or knew his writings, etc. It goes without saying that I expect this to be awesome, and look forward to talking with reconstructed historical figures as if they were the originals.

However, I do suspect that there's a deep structure to an individual human's experience and thinking which might be essential to the continuation of subjective experience, and which might be underspecified by the records left by a dead person. Thus the most probable reconstruction may not be a continuation of the original person's experience. And I think I have a much stronger preference for "seeing the future myself" than "having someone very like me see the future".

To make this concrete, let's say that we had access to both a nondestructive brain scanner and to a superintelligence capable of doing such reconstructions, and that I'll either be scanned and uploaded, or else a reconstruction of me from my writings (and other relevant data of the sort accessible today, leading up to the point of entering the chamber) will be uploaded. In the first case, upon walking into the chamber I'd anticipate a 50% chance of suddenly finding myself an upload. In the second case, I'd expect a much smaller chance of that being the case.

I know that we don't have the relevant data yet on the variance between human mind-states, but given this uncertainty, I find I'd much prefer cryonics to reconstruction.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-24T10:16:43.171Z · LW(p) · GW(p)

I agree, provided "the future myself" is understood as a particular concept describing the structure of the future, and not a magical carrier of subjective experience. The terminology of continuation of subjective experience can be decoded through this concept whenever an instance of it is found in the environment, but the terminological connection starts to break down when it's not -- for example, when there are multiple copies. Such cases reveal the problems with subjective-experience ontology, its limited applicability.

It's really interesting to read an argument that uses subjective experience terminology, through this lens. For example, take this phrase:

Thus the most probable reconstruction may not be a continuation of the original person's experience. And I think I have a much stronger preference for "seeing the future myself" than "having someone very like me see the future".

This translates thusly: "The most probable reconstruction may not have the property of having the structure of "original person". And I have a much stronger preference for the future containing "future myself" than for the future containing "someone very like me but still significantly different"".

Replies from: orthonormal
comment by orthonormal · 2010-06-24T16:08:51.074Z · LW(p) · GW(p)

I agree with your expansion of the concept.

comment by CarlShulman · 2010-06-23T18:15:38.491Z · LW(p) · GW(p)

Martine Rothblatt has written a lot about this idea, using the term 'mindfiles.'

comment by Roko · 2010-06-23T14:50:23.329Z · LW(p) · GW(p)

You should probably give a number for the cost of cryo.

As far as I know, $9,000 is the cheapest possibility, which is cheaper than many cars, and there are a lot of those in the world.

Replies from: Kevin
comment by Kevin · 2010-06-23T15:27:38.825Z · LW(p) · GW(p)

Where can you get $9,000 cryonics?

Replies from: Roko
comment by Roko · 2010-06-23T15:54:38.237Z · LW(p) · GW(p)

CI does full body for $30,000, but for a young person the actual payments to a life insurance policy would only be $9000 thanks to compound interest.

Somebody who is already 75 years old wouldn't get that benefit, but we're talking about the cheapest possibility, which would be for a young person.

Replies from: CronoDAS, gworley
comment by CronoDAS · 2010-06-23T17:44:54.838Z · LW(p) · GW(p)

Is that $9000 figure for a term life insurance policy that becomes worthless after a number of years have passed, or for "whole life insurance" that pays off when you die, regardless of when that happens to be?

Replies from: Roko
comment by Roko · 2010-06-24T11:02:21.198Z · LW(p) · GW(p)

I don't know, but I suspect the latter, since 9k compounded for 50 years ~ 30k.
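That works out to a modest implied rate of return (my arithmetic, just sanity-checking the figure):

```python
principal, target, years = 9_000, 30_000, 50
implied_rate = (target / principal) ** (1 / years) - 1
print(f"{implied_rate:.1%}")   # ~2.4% annual growth turns $9k into $30k over 50 years
```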

comment by Gordon Seidoh Worley (gworley) · 2010-06-23T20:35:52.609Z · LW(p) · GW(p)

Note, though, that you're talking about costs for people living in the First World. If you live in Sudan, for example, I doubt you can get access to cryonics short of paying for it all upfront in full: after all, who would want to insure the life of someone living in such a deadly country?

Replies from: Roko
comment by Roko · 2010-06-23T21:07:42.772Z · LW(p) · GW(p)

You could put the money in a bank account, but $9000 is probably more money than the average Sudanese person earns in a year, so it's a moot point.

comment by Kaj_Sotala · 2010-06-23T20:50:00.774Z · LW(p) · GW(p)

Part of the reason why I make available records of e.g. the books I own, the music I listen to and the board games I've played (though this last list is horribly incomplete) is to make it possible for someone to reconstruct me in the future. There's a lot of stuff about me available online, and if you add non-public information like the contents of my hard drive with many years worth of IRC and IM logs, an intelligent enough entity should be able to produce a relatively good reconstruction. A lot would be missing, of course, but it's still better than nothing.

I don't put that big of a priority on this, though -- I haven't made an effort to make sure that the contents of my hard drive will remain available somewhere after my death, for instance. It's more of an entertaining thought I like to play with.

Replies from: ocr-fork
comment by ocr-fork · 2010-06-24T06:05:06.331Z · LW(p) · GW(p)

There's a lot of stuff about me available online, and if you add non-public information like the contents of my hard drive with many years worth of IRC and IM logs, an intelligent enough entity should be able to produce a relatively good reconstruction.

That's orders of magnitude less than the information content of your brain. The reconstructed version would be like an identical twin leading his own life who coincidentally reenacts your IRC chats and reads your books.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-06-24T08:48:40.334Z · LW(p) · GW(p)

Sure. What about it?

Replies from: ocr-fork
comment by ocr-fork · 2010-06-24T15:58:28.896Z · LW(p) · GW(p)

Your surviving friends would find it extremely creepy and frustrating. Nobody would want to bring you back.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-06-24T18:36:56.307Z · LW(p) · GW(p)

If I had surviving friends, then optimally the process would also extract their memories for the purpose. If we have the technology to reconstruct people like that, then surely we also have the technology to read memories off someone's brain, though it might require their permission which might not be available.

If they gave their permission, though, they wouldn't be able to tell the difference, since all their memories of me were used in building that copy.

comment by [deleted] · 2010-06-23T18:28:53.790Z · LW(p) · GW(p)

You don't want a written diary, you want a highly efficient miniature camera that's always on. And maybe an option to annotate it in real time.

Replies from: gwern
comment by gwern · 2010-06-23T18:59:37.696Z · LW(p) · GW(p)

As I suggest, lifelogs.

comment by Blueberry · 2010-06-23T20:27:25.661Z · LW(p) · GW(p)

we need to inject the stuff that sets you uniquely apart even from your fellow humans who share your statistically regular quirks: your memories. If the record of your memories is good enough, this should effectively create a person who is so much like you as to be indistinguishable from the original, i.e. restore you.

I put a lot less importance on memory than you do. For instance, if I suffered amnesia and was not conscious of any of my previous experiences, I would still be me. In fact, given the choice between (A) someone who had a completely different past but had my memories transplanted, and (B) me with permanent amnesia, so that I'm not consciously aware of any of my past experiences, I care much more about preserving B than A. It's not clear to me why I should care about my memories put into a 'blank body', if that's even meaningful to talk about without all the lost resonances and connections.

Besides your memory, you have all sorts of unconscious connections that your brain has made over your lifetime, and all sorts of skills, thoughts, and beliefs for which you no longer have the memories. I've forgotten a lot more than I know right now, but all those things that I've forgotten have distilled down into who I am in a way that's not necessarily reflected in my memories.

comment by AlexMennen · 2010-06-24T19:38:45.883Z · LW(p) · GW(p)

I don't think that writing yourself into the future would work very well, but I've got another idea for a cheap cryonics-substitute: get your brain frozen in plain old ice. By the time we get whole brain emulation, a brain frozen in ice may contain enough information to replicate on a computer, even if it cannot be biologically revived like a cryogenically frozen brain.

Replies from: khafra, Chroma, Roko
comment by khafra · 2010-06-24T19:58:13.854Z · LW(p) · GW(p)

Permafrost burial has been explored, but is generally considered an inferior option. If I were going for a cheap cryonics substitute, I'd try plastination. A lab can do a head for a couple thousand bucks, it preserves enough microstructure for a scanning electron microscope, and there are no worries about staying cool.

comment by Chroma · 2010-06-24T20:04:44.304Z · LW(p) · GW(p)

A couple of things:

The cost of cryonics is more than just the liquid nitrogen. You need to mobilize a team to properly preserve the brain, then keep it in a refrigeration unit indefinitely.

If you keep tissue at temperatures slightly below 0ºC, it's not really frozen. Tiny pockets of concentrated ions will lower the freezing temperature of water in those areas, keeping portions of the tissue liquid. I think the effect is similar to salting roads in the wintertime. Anyway, the tissue degrades over timescales we care about.
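The salting analogy is ordinary colligative freezing-point depression; a rough sketch using the ideal dilute-solution formula (concentrated brines deviate from it, so treat this as a ballpark):

```python
K_F_WATER = 1.86   # cryoscopic constant of water, degC * kg / mol

# Ideal-solution freezing-point depression: dT = i * Kf * m
def depression(molality, van_t_hoff_factor=2):   # i = 2 for dissolved NaCl
    return van_t_hoff_factor * K_F_WATER * molality

print(depression(1.0))   # ~3.7 degC lower freezing point at 1 mol/kg NaCl
```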

comment by Roko · 2010-06-25T18:26:25.397Z · LW(p) · GW(p)

Cryo is not expensive. Cryo is not expensive. Cryo is not expensive. Cryo is not expensive. Cryo is not expensive. Cryo is not expensive. Cryo is not expensive. Cryo is not expensive.

Repeat 5000 times until it sticks...

Seriously, the cost of reliably getting people to bury you in ice somewhere would be more than the cheapest cryo.

  • It has to be ice that never melts
  • You have to find a group of people who are willing to "move a body" for you without freaking out. They will probably also have to trek somewhere remote and then break the law. They have to not "chicken out" at the end.
  • It has to not be found out by the authorities and exhumed
  • It has to be found again at the other end (perhaps when your associates are themselves dead)
  • It has to be super-cold ice (south pole, perhaps?) and even then bacteria would be a problem.

EDIT: apparently cryonics societies will do the work for you, but it'll still cost $5000+. Why not get the real deal for only a little more?

comment by PhilGoetz · 2010-06-24T04:17:55.634Z · LW(p) · GW(p)

I find it deeply weird that nobody has pointed out that the information describing you, written as prose, is not conscious. This is a major drawback. The OP mentioned it, and dared people to take him/her up on it, and nobody did.

I attribute this to a majority of people on LW taking Dennett's position on consciousness, which is basically to try to pretend that it doesn't exist, and that being a materialist means believing that there is no "qualia problem".

Replies from: Kaj_Sotala, ocr-fork
comment by Kaj_Sotala · 2010-06-24T05:25:11.345Z · LW(p) · GW(p)

I don't follow. The OP didn't claim that just having the written information would be enough. They were saying that the information could be used to build a copy of you. The prose might not be conscious, but the copy would be.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-06-27T05:09:39.497Z · LW(p) · GW(p)

Oops, you're right.

comment by ocr-fork · 2010-06-24T04:37:09.098Z · LW(p) · GW(p)

Is a vitrified brain conscious?

Replies from: cousin_it
comment by cousin_it · 2010-06-24T10:51:17.253Z · LW(p) · GW(p)

No idea. We haven't yet revived any vitrified brains and asked them whether they experience personal continuity with their pre-vitrification selves. The answer could turn out either way.

Replies from: ocr-fork, Vladimir_Nesov
comment by ocr-fork · 2010-06-24T16:35:24.367Z · LW(p) · GW(p)

They remember being themselves, so they'd say "yes."

I think the OP thinks being cryogenically frozen is like taking a long nap, and being reconstructed from your writings is like being replaced. This is true, but only because the reconstruction would be very inaccurate, not because a lump of cold fat in a jar is intrinsically more conscious than a book. A perfect reconstruction would be just as good as being frozen. When I asked if a vitrified brain was conscious I meant "why do you think a vitrified brain is conscious if a book isn't."

Replies from: cousin_it
comment by cousin_it · 2010-06-25T12:55:49.476Z · LW(p) · GW(p)

They remember being themselves

You don't know that until you've actually done the experiment. Some parts of memory may be "passive" - encoded in the configuration of neurons and synapses - while other parts may be "active", dynamically encoded in the electrical stuff and requiring constant maintenance by a living brain. To take an example we understand well, turning a computer off and on again loses all sorts of information, including its "thread of consciousness".

EDIT: I just looked it up and it seems this comment has a high chance of being wrong. People have been known to regain consciousness after having a (mostly) flat EEG for hours, e.g. during deep anaesthesia. Sorry.

comment by Vladimir_Nesov · 2010-06-24T10:58:04.988Z · LW(p) · GW(p)

We haven't yet revived any vitrified brains and asked them whether they experience personal continuity with their pre-vitrification selves. The answer could turn out either way.

Joke probably? As if the above experiment has any connection to this confusion of a hypothesis.

comment by Roko · 2010-06-23T16:37:54.120Z · LW(p) · GW(p)

With cryonics at $9000, you have to ask which method is getting you the most utility per unit effort. $9000 equates to about 200-600 hours of work for most people reading this, but if the writing takes an hour a day for the rest of your life, that's 10,000+ hours.
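The arithmetic behind those figures (reconstructed with an assumed wage range):

```python
cost = 9_000
for wage in (15, 45):            # USD/hour, a guess at the readership's range
    print(cost / wage)           # 600.0 and 200.0 hours of work

writing_hours = 1 * 365 * 30     # an hour a day over ~30 remaining years
print(writing_hours)             # 10,950 -- the "10,000+ hours" figure
```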

Of course the best protection would be to do both.

Replies from: Nisan
comment by Nisan · 2010-06-24T08:22:40.975Z · LW(p) · GW(p)

Is that really how much it costs? Can you give me a link to a reference?

ETA: Ah, Roko's reasoning is here

comment by Morendil · 2010-06-23T15:35:16.047Z · LW(p) · GW(p)

This post is in dire need of a reference to Hofstadter's I am a Strange Loop.

Also maybe Halperin's The First Immortal which explicitly considers the possibility raised here.

Also maybe Lion Kimbro.

Replies from: gwern, JoshuaZ
comment by gwern · 2010-06-23T16:06:24.735Z · LW(p) · GW(p)

While we're going with fictional examples, the John Keats cybrid in Dan Simmons's Hyperion and Fall of Hyperion is pretty much exactly this suggestion.

Replies from: chronophasiac
comment by chronophasiac · 2010-06-23T16:41:33.672Z · LW(p) · GW(p)

Spoiler warning for a Greg Egan short story...

Steve Fever is this suggestion, exactly. It is a fairly disturbing account of an unFriendly AI attempting to resurrect a dead man using this method. Recommended.

Replies from: None
comment by [deleted] · 2010-06-23T19:11:51.974Z · LW(p) · GW(p)

This is also a primary plot point of the Battlestar Galactica prequel Caprica. It also comes up in Charles Stross's Accelerando, when some evil AIs decide to do this to most of humanity based on our historical records. The full text of the novel is at http://www.antipope.org/charlie/blog-static/fiction/accelerando/accelerando.html ; search for "Frequently Asked Questions" to find the relevant section.

There is also another similarly interesting plot thread in the story which can be summed up by this excerpt:

The Church of Latter-Day Saints believes that you can't get into the Promised Land unless it's baptized you – but it can do so if it knows your name and parentage, even after you're dead. Its genealogical databases are among the most impressive artifacts of historical research ever prepared. And it likes to make converts.

The Franklin Collective believes that you can't get into the future unless it's digitized your neural state vector, or at least acquired as complete a snapshot of your sensory inputs and genome as current technology permits. You don't need to be alive for it to do this. Its society of mind is among the most impressive artifacts of computer science. And it likes to make converts.

Replies from: anon895, gworley
comment by anon895 · 2010-06-24T03:16:12.496Z · LW(p) · GW(p)

It might be time to take this thread to TV Tropes.

comment by Gordon Seidoh Worley (gworley) · 2010-06-23T20:46:01.571Z · LW(p) · GW(p)

I have no doubt that this sort of thing has been occasionally explored in fiction. That said, there's a big difference between considering an idea in fiction and considering acting on an idea in real life.

comment by JoshuaZ · 2010-06-23T19:29:05.878Z · LW(p) · GW(p)

And in Alastair Reynolds' Revelation Space universe there are two major types of simulations of people, the alphas are a very accurate model from a fast, destructive scan of the brain, while the betas are essentially this.

comment by Democritus · 2010-06-23T21:32:14.919Z · LW(p) · GW(p)

This doesn't seem particularly useful to me. Even if the written copy could be identical to me in every way, I would place a much lower value on the creation of such a copy than on the extension of my current life. You're right that this might be slightly preferable to death, but I certainly wouldn't position it as a real alternative even to cryonics.

Replies from: ata
comment by ata · 2010-06-23T21:40:58.275Z · LW(p) · GW(p)

What do you mean by "identical to me in every way"? Does that mean that it actually contains all the same information as your brain, or something less exact or complete than that?

Replies from: Democritus
comment by Democritus · 2010-06-30T04:04:54.674Z · LW(p) · GW(p)

I am referring to a copy that contains exactly the same information as the current "me".

comment by KrisC · 2010-06-23T18:27:07.801Z · LW(p) · GW(p)

We are going to have to rely on simulations of the deceased for the foreseeable future. Individuals who have not left extensive records will make for relatively lower-quality simulations.

Hopefully at some point a sufficiently advanced simulation will exist which can interpolate the remainder of humanity, but even then we are left looking for a reason to do so.

comment by Oscar_Cunningham · 2010-06-23T15:47:40.433Z · LW(p) · GW(p)

Would you also write about the large percentage of your time spent writing?

Replies from: Morendil
comment by Morendil · 2010-06-23T15:55:42.618Z · LW(p) · GW(p)

Evidently. :)

This brings up a related point. How do you write your skills into the future? You can't just write "As of 2010 I was an excellent piano player".

But wait - maybe you can. If you're assuming a reconstruction technology which can uncompress verbal descriptions of behaviours into the much more complicated expression of such behaviours in terms of the neural substrate, then quite possibly this technology will also have massive general knowledge about human skills allowing it to uncompress such a statement into its equivalent in neural and muscular organization.

But then, what a temptation! As of 2010 I am not, in fact, able to play the piano, but if this record for the future can also serve as my letter to Santa, why not? It's not as if any of it is readily verifiable. I could say I like the taste of lemon when actually I hate it.

This line of thought isn't to ridicule the idea of writing yourself into the future - just to bring out some consequences the OP may not have thought about.

Replies from: gworley, JamesAndrix
comment by Gordon Seidoh Worley (gworley) · 2010-06-23T16:27:22.271Z · LW(p) · GW(p)

Of course this is a possibility. Even with cryonics, presumably if we have the technology to restore you then we'll have the technology to restore you with whatever modifications you'd like. The person you write into the future will be like you only insofar as you make them like you. If you choose to write someone like yourself but who is an excellent piano player into the future, so be it.

comment by JamesAndrix · 2010-06-25T05:26:28.697Z · LW(p) · GW(p)

Piano playing is easy to record.

comment by jaimeastorga2000 · 2011-05-18T06:50:31.507Z · LW(p) · GW(p)

I've had a similar idea for a while. It involves reconstructing people from the memories of those who knew them, like Kaj_Sotala describes. So, for instance, my maternal grandfather died a few years ago. But if a bunch of us who knew him lived into a future where our memories of him could be scanned and analyzed, a copy of him could be built that we would find indistinguishable from the original as far as those of us who remember him are concerned. I thought it might be useful from a comfort standpoint.

comment by lsparrish · 2010-07-01T22:11:51.062Z · LW(p) · GW(p)

How much is gained by writing about yourself? Aside from personal development aspects, less than by video-logging -- but likely different information than video-logging. Possibly stuff buried in your childhood. Logging biometrics like heartbeat could be of more total benefit than video-logging, but again different information. Having a combination could prove more useful in a synergistic manner than any one information source on its own, because comparing them against each other allows you to infer more information.

The same goes for cryonics. At a guess I'd say preserving the nanostructure of your brain to the highest technologically feasible level is going to be thousands of times more effective than all the digital and plain-english data in the world. But combined they would be significantly more effective, because of the data you can gain by cross-referencing different types of information.

comment by Lightwave · 2010-06-25T18:03:48.645Z · LW(p) · GW(p)

This is an idea Paul Almond suggested a while ago in his article Indirect Mind Uploading.

Also, he has quite a few new AI related articles on his website http://www.paul-almond.com. I haven't read any of them yet, so I can't comment.

comment by JamesAndrix · 2010-06-25T05:50:03.295Z · LW(p) · GW(p)

I think this is missing out on a lot of other, higher-bandwidth sources of information about us. Part of the problem is a focus on output, as if creating a pale imitation of the process of scanning a brain. But you could also reconstruct much of a brain by looking at the things that went into it: DNA, the environment. Tack on the records that are already automatically made of its choices, and you isolate a very small part of the potential mind space.

Any person with my DNA, my grades, my bookshelf, my pantry, my bank statements and my web VIEWING history would be quite a bit like me. Tack on the things I've posted and my Gmail account, memories from people who know me, and a hi-res visual scan of places I go, and you've got some pretty narrow criteria for rebuilding.

From here: the phobia that this was the last post of my original, and my simulated life has reproduced all the artifacts left by my original. (Or that my artifacts have begun to deviate, and I am no longer a candidate for a true copy.)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-25T06:14:25.894Z · LW(p) · GW(p)

There may be a difference of temperament here-- to my mind, a lot of what's most distinctive about being me is the feeling I chase when I'm being creative-- the sense of rightness that I use to adjust what I'm doing.

It's conceivable that a new person which was producing calligraphy and webposts similar to mine would be trying to make things in consonance with that feeling, but it's not obvious that they would be.

comment by Psychohistorian · 2010-06-24T05:13:08.443Z · LW(p) · GW(p)

I'm very curious as to your theory of what happens if you do both. That is, suppose you're cryogenically frozen and then revived, while someone also makes a top-notch copy of you based on the recorded memories you left behind. It seems rather obvious that you can't have double-you, so what happens?

This hypothetical suggests to me that one or both are doomed - and if it's just one, I'd think it's this method you've suggested that wouldn't work. But I really haven't thought too hard on this issue, so I'm curious as to what others think the solution/outcome is.

Replies from: Kaj_Sotala, orthonormal
comment by Kaj_Sotala · 2010-06-24T05:23:19.292Z · LW(p) · GW(p)

It seems rather obvious that you can't have double-you

Why not?

Replies from: Psychohistorian
comment by Psychohistorian · 2010-06-24T05:41:13.417Z · LW(p) · GW(p)

There's no mechanism linking the two entities, so it seems necessary that each entity has a distinct first-person experience. Whoever "you" are, then, you can't experience being both entities. I think that's the cleanest way to express what I mean, and thank you for calling me on using "obvious."

Another way of thinking about this: Suppose someone offers to make 1 million essentially perfect copies of you and subject them to the best life they can engineer for them, which you get to confirm fits perfectly with your own values. The catch: prior to the copying, they'll paint a 1 on your forehead, which will not be copied. They'll then find "you" and subject the "original" to endless torture. I, for one, would not hesitate to reject this offer for largely self-interested reasons. I can understand an altruist taking it, though that makes the fact that the million people are copies rather irrelevant. If I understand the stance of many people here (RH, for example), they'd take the deal out of self-interest (at least for some number of copies, which could be greater than a million), because they don't distinguish between copies. This seems like severely flawed reasoning, though too complex to properly address in a sub-sub-comment. I'd like to know if this is a straw man.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-06-24T08:46:45.086Z · LW(p) · GW(p)

In general, I find that continuity of consciousness is an illusion that's hard-wired into us for self-preservational purposes. We can explain the mind without needing to define some sort of entity that remains the same from our birth to death, and any attempted definition for such an entity gets more and more convoluted as you try to consistently answer questions like "if you lose all your memories, is it still you", "if you get disassembled and then rebuilt, is that still you" and "how can you at 5 be the same person as you at 50". It's a bit like believing in a soul.

Still, the concept of a 'you' has various uses, e.g. legal and social ones, and it's still a relatively well-defined concept for as long as you don't try to consider various weird cases. Once we have a world where people can be copied, however, the folk-psychological concept of "you" pretty much becomes incoherent and arbitrary. Which still doesn't force you to completely abandon the concept, of course - you can arbitrarily define it however you wish.

As for your thought experiment, there are at least two interpretations that make sense to me. One is that since every copy will have experiences and memories identical to being me, and there are a million of them, there's a 1/1,000,000 chance for me to "become" any particular one of the copies. Correspondingly, there's a 1/1,000,000 chance that I'll be tortured. The other interpretation is that there is a 100% chance that I will "become" each of the copies, so a 100% chance that I'll become the one that is eternally tortured and a 100% chance that I'll also become the 999,999 others.

Alternatively, you could also say that there's a 100% chance that I'll remain the one who had "1" painted on his forehead. Or that I'll become all of the copies whose number happens to be a prime. Or whatever. Identity is arbitrary in such a scenario, so which one is "correct" depends pretty much only on your taste.

comment by orthonormal · 2010-06-24T05:35:31.104Z · LW(p) · GW(p)

Your question is a more complicated version of "what happens if I'm non-destructively copied", and the answer to that one is that both of them are you, and so before the copying is done you should assign equal probability to "ending up as" the original or as the copy. (It should work the same as Everett branching.)

In this case, I don't fully expect the "reconstructed from writings" self to be as connected to my current subjective experience as a cryopreserved self would be. But the mere fact of there being "two selves" doesn't present an inherent problem.

Replies from: Vladimir_Nesov, Psychohistorian, Psychohistorian
comment by Vladimir_Nesov · 2010-06-24T10:33:17.493Z · LW(p) · GW(p)

It's not a given that building this kind of probabilistic model is helpful. (The absent-minded driver and Sleeping Beauty again.)

comment by Psychohistorian · 2010-06-24T06:19:33.524Z · LW(p) · GW(p)

If I understand the physics and the link even a little bit correctly, those copies would have to be identical to an arbitrarily high degree of specification. That identicalness would end soon (I'd imagine something like nanoseconds) after the new brain was generated (and I think it's extremely charitable to posit that such a replication is meaningfully possible); it seems like even variations in local gravity would break the identity. Certainly, within a few seconds, processing necessarily different sensory data (as both copies can't be observing from the exact same location) would make the two different. What happens to double-me at that point, or is that somehow not material?

Replies from: orthonormal
comment by orthonormal · 2010-06-24T06:33:02.046Z · LW(p) · GW(p)

Well, ISTM that only the gross structure (the cells, the strength of their connections, and the state of firing) is really essential to the relevant pattern. Advanced nanotechnology is theoretically more than capable of recording such data and constructing a copy, to within the accuracy of body-temperature thermal noise. (So if you really wanted to be careful, you'd put the brain in suspended animation at low temperature, copy it there, and warm both copies back up to normal; but I don't think that would be necessary in practice.)

What happens to double-me at that point, or is that somehow not material?

Yup, the copies diverge. Just as there are different quantum versions of me branching as life goes along (see here for a relevant parable), my experience would branch there, with two people who once were "me". When I observe a quantum random coinflip, half of future mes are in worlds where they observe heads and half are in worlds where they observe tails; they quickly become different people from each other, both of them remembering having been me-before-the-flip, and so it's quite coherent for me to say before the flip that I expect to see heads with 1/2 probability and tails with 1/2 probability. The duplication experiment is no different, except that this time my branched copies have the chance to play chess against each other afterwards. I expect 1/2 probability of finding myself to be the one who remained in the scanning room (and who gets to play White), and 1/2 chance of finding myself to be the one who wakes in the construction room (and who gets to play Black).

comment by Psychohistorian · 2010-06-24T05:47:31.980Z · LW(p) · GW(p)

This is somewhat redundant with my previous response, but suppose we have some superficial way to distinguish the two - e.g. you're marked with something that doesn't get copied. Why would you not expect to continue to have the experience associated with the physical object that is your brain, i.e. not wake up as the copy?

It's also interesting that this assumes it's meaningfully possible to replicate a brain, which is an unanswered empirical question. Even granted that the world is perfectly materialistic, it does not seem to follow that one can make a copy of a brain so perfect that one's experience could jump from one to the other, so to speak. Sort of like Heisenberg's uncertainty principle, but for brain replication.

...unless you're referring to the situation where you wake up after an individual has been copied. In that case, it does seem like the odds you're the original are 50/50. But if you're the original going to the copying-lab, it seems like you should be virtually guaranteed to wake up in your own body, which will be confirmable if you give it some identifying mark beforehand (or ensure that it's directed to a red room and the copy to a blue one, or whatever).

Replies from: orthonormal
comment by orthonormal · 2010-06-24T06:11:08.899Z · LW(p) · GW(p)

OK, so we do disagree on this fundamental level. I apologize for the following infodump, especially when it's late at night for me...

I assign high probability to the patternist theory of consciousness: the thesis that the subjective thread of consciousness is not maintained by material continuity or a metaphysical soul, but by the patterned relations between the different complicated brain-states (or mind-moments, if we want to be less brain-chauvinistic). That is, you can identify the configuration that is my-brain-right-now (A1), and the configuration that is my-brain-a-millisecond-later (A2), and they're connected in a way similar to the way that successive states of Conway's Game of Life are connected. (Of course, there are multiple ways A1 could go, so throw in A2', A2'', etc, but only a small subset of possible brain-configurations have a nonnegligible connection of this sort to A1.) Anyway, my estimate for "what I'll experience next after A1" should just be a matter of counting all the A2s and variants in the multiverse, and comparing the measures of each.
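
As a toy version of that counting rule (my own illustration with made-up measures, purely to show the normalization, not a real physical model):

```python
# Anticipation probability for each candidate successor state of A1 is
# its measure, normalized over all successors with a nonnegligible
# connection to A1.  The measures below are hypothetical.

successor_measures = {
    "A2":   0.70,  # the "ordinary" continuation
    "A2'":  0.25,  # a slightly different continuation
    "A2''": 0.05,  # a rare continuation
}

total = sum(successor_measures.values())
anticipation = {state: m / total for state, m in successor_measures.items()}

for state, p in anticipation.items():
    print(f"P(next experience after A1 is {state}) = {p:.2f}")
```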

This sounds weird to our evolved intuitions, but it appears to be the simplest theory of subjective experience which doesn't involve extra metaphysical entities or new, heretofore unobserved, laws of physics. As noted in the link above, the notion of "material continuity" is a practical aggregate consequence which doesn't carve the universe at its joints. Reality is made of configurations, not objects, and it would be unnatural to introduce a basic property for a substructure of a configuration (like A2) which wouldn't hold for an identical substructure placed elsewhere in space and time. (Trivial properties like "location" obviously excepted, and completely historical-social properties like "the first nanotube of this length ever constructed" should be in a different category as well.)

The patternist theory of consciousness, incidentally, is basically assumed in the OP and in a good deal of the LW discussion of uploading and other such technologies.

Replies from: Psychohistorian
comment by Psychohistorian · 2010-06-24T06:18:42.661Z · LW(p) · GW(p)

I follow this general theory and mostly agree with it, though I admit it isn't fully integrated into my thinking about consciousness generally.

What I don't see, exactly, is how "good enough" copies could work. (I also don't see how identical copies could work, but that's a practical issue, not a conceptual one.) Recreating someone who's significantly more like me than most people are seems categorically different from freezing and later reactivating my brain, particularly since people who are significantly more like me than most probably already exist to some degree. At what degree does similarity cross some relevance threshold, if ever? Or have I misconstrued the issue?

Replies from: orthonormal
comment by orthonormal · 2010-06-24T06:39:18.303Z · LW(p) · GW(p)

At what degree does similarity cross some relevance threshold, if ever?

That's precisely the issue at the heart of the current discussion, as I see it. And it's on that issue that I'm uncertain. A copy of the cellular structure and activity of my brain is definitely good enough to carry on my conscious experience. Is a best-guess reconstruction of that structure from my written records good enough? I strongly suspect not, but it's always dicey to say what a superintelligence couldn't figure out from limited evidence.

comment by timtyler · 2010-06-24T19:33:14.998Z · LW(p) · GW(p)

Cryopreservation works pretty well for embryos, eggs and sperm. Or, if you are feeling optimistic, you could sequence your DNA and store that. That is not preserving everything - but it should be enough to make some identical twins, which is pretty close for most purposes.