Uploads are Impossible
post by PashaKamyshev · 2023-05-12T08:03:22.512Z · LW · GW · 37 comments
Contents: 1. Physics · 2. Computer Science · 3. Philosophy of Self · 4. Impossibility of a desirable upload culture
Epistemic status: pretty confident. Probability this does well on LessWrong: pretty low
This is part 2 of my value learning [LW · GW] sequence in which I talk about what humans actually value and how to get AGI to do it.
In this post, I wish to push back against certain visions of the future which seem both dystopian and unachievable. My own positive vision rests on a simple notion: humanity must seek to correct its problems, but in doing so not get overzealous in its hatred of the human form. Retro sci-fi had somewhat of the right idea (my writing for emphasis).
Many other visions expressed online, from both sides of the AI safety debate, seem to want to force the idea of “digital life,” “digital minds” or “uploads” onto people. This idea seems to occupy a substantial mind space of certain people. Examples: Hanson, Bostrom, Soares [AF · GW]. I have been privy to declarations of “biology is certainly over” in the past. According to this thread, MIRI’s “strategy” seems at one point to have been to “upload” people. Moreover, because of this idea, actually promising AI alignment techniques get rejected, since they “can’t handle it.” In other words, visions of “uploads” might be a core philosophical block that leads people to dismiss the by far most likely path to learning actual human values: just observing human behavior and inferring goals from it.
This is not how most people think. This is not what regular people want from the future, including many leaders of major nations.
It seems that the discussion of these ideas in certain circles has simply assumed that “uploads” are both possible and desirable – neither of which seems to be the case to me.
Let’s take a deep breath.
“Uploads” are likely impossible. Nobody has thought them through enough to even begin to address the multiple likely impossibility results that make the process of a “living upload” either a pipe dream or the worst thing to happen to you. Making an upload work implicitly requires connecting physics to computer science through a proper philosophy of self, and then having a culture that accepts the result. Each one of these disciplines has something to say about it.
1. Physics
Your first problem is that in order to have extremely accurate beliefs about something, you have to observe it and interact with it. If you want to be perfectly accurate, you either have to do this at a temperature of absolute zero or the system ends up at absolute zero after you are done with it. If you are interested in the relationship between information and temperature, see these posts: Engines of Cognition [LW · GW] and Entropy and Temperature [LW · GW].
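As a rough quantitative anchor (my own addition, not from the linked posts): Landauer’s principle ties any irreversible acquisition or erasure of information to a minimum heat cost at a given temperature.

```latex
% Landauer bound: minimum energy dissipated per bit irreversibly recorded or
% erased at temperature T.
E_{\min} = k_B T \ln 2
% At body temperature, T \approx 310\,\mathrm{K}:
% E_{\min} \approx (1.38\times 10^{-23}\,\mathrm{J/K})(310\,\mathrm{K})(\ln 2)
%          \approx 3\times 10^{-21}\,\mathrm{J} \text{ per bit}
```

The per-bit cost is tiny, so this is not by itself an impossibility proof; the point is only that reading out microstate-level information from a warm object is a thermodynamic interaction with that object, not a free lookup. The linked posts develop the entropy accounting in more detail.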
This, by the way, ignores any quantum effects that might come up. You can also forgo molecule-level accuracy, hope that a sufficiently large scanner gives you enough information, and pray that the brain doesn’t actually use things like temperature for cognition (it probably does).
I suspect many people’s self-conception of this relies on an assumption that the ontology of Being is a solved problem (it’s not) AND that “what we ARE” are easily detectable “electrical signals in the brain,” and everything else in the body literally carries no relevant information. Parts of this are easily falsifiable through the fact that organ transplant recipients sometimes get donor’s memories and preferences (another paper here and another article here)
The issue is that you either have to assume only a small subset of molecular information is relevant (likely a false assumption) OR you have to identify the exact large subset (more on this later) OR you run into thermodynamic information issues where you can’t actually scan a physical object to desired "each molecular location and velocity" accuracy without destroying it. This also ignores any quantum issues that could make everything even more complicated.
Now, the rest of the essay does not rely on this result. Assume you managed to miraculously bend some thermodynamic laws and got enough information out of the body without hurting it. You are then faced with the real problem:
2. Computer Science
The problem here is – how do you verify an upload completed successfully and without errors? If I give you two complex programs and ask you to verify that they have identical outputs (including “not halting” outputs) for all possible inputs, that task is at least as hard as the halting problem. Verifying that the inputs, the outputs, and the “subjective experience” all match is harder than the halting problem.
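To make that reduction concrete, here is a minimal sketch (my own illustration, not from the original post) of why a general program-equivalence checker would also decide halting. The `are_equivalent` oracle below is hypothetical; the point of the argument is that no such oracle can exist.

```python
# Minimal sketch: reducing the halting problem to program-equivalence checking.
# `are_equivalent(f, g)` is a hypothetical oracle that returns True iff f and g
# show the same behavior (including "never halts") on every input.

def would_halt(program, x, are_equivalent):
    """Return True iff `program` halts on input `x`, given an equivalence oracle."""

    def wrapped(_ignored):
        program(x)       # runs forever exactly when `program` does not halt on x
        return 0

    def always_zero(_ignored):
        return 0

    # If `program` halts on x, `wrapped` agrees with `always_zero` on every input.
    # If it does not, `wrapped` never halts while `always_zero` returns 0.
    # An equivalence oracle would therefore decide halting, which is impossible,
    # so no general equivalence checker can exist.
    return are_equivalent(wrapped, always_zero)
```

So full behavioral verification of an upload against the original is, in the general case, undecidable; at best you get partial, heuristic checks.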
You might come up with a cached thought [LW · GW] like “compare outputs” or something. Again, this doesn’t even work in theory. Even if you somehow claim that the human mind is not Turing complete (it probably is) and therefore the halting problem doesn’t apply, a number of issues crop up.
The space of algorithms that can pretend to be you without being you is pretty large. This problem is even worse if certain parts of you, such as your memories, are easier to get than the rest of “you.” An LLM with access to your memories can sound a lot like you for the purposes of verification by another person if the only channel of communication is text. Most people are not sufficiently “unique” to have an easily discernible “output.” Besides, you can’t run arbitrary tests on the biological human to determine “identical-ness.” This is made harder still if the human was dead and frozen when you tried to do this.
At the intersection of computer science and philosophy, you also have an ontological issue of “what” to upload. A lot of people who do cryonics seem to think that “the head” is a sufficient carrier of personality. To me, it’s obvious large parts of the personality are stored in the body. Now, a lot of what is stored in the body is “trauma”; however, for many people “trauma” is load-bearing.
Uploading the whole body also gets into these issues of which specific cells are good and bad to upload. Fat cells? Parasites? Viruses? Which bacteria are good or bad? You might think that science has a consistent notion of “harm/benefit” that could objectively identify a cell, but it’s not really the case. Many things in the body impact cognition.
Here you have another set of undesirable and impossible tradeoffs. If you wish to upload because you want to escape your “weak body,” you may want to make certain changes to it. However, the more changes you add, the more the resulting person differs from you, and the harder it becomes to verify that it is you. The more you aim to “escape” biology, the harder the verification problem gets, which is an issue given that it starts out harder than the halting problem.
This doesn’t even get into the issue of many people’s cognition being more or less dependent on the specific people around them and on the implicit reactions those people have to a physical presence. Imagine spending time in a luxurious resort where your only contact with people is occasional Zoom calls. Your personality will change, and not in a positive direction.
The ontological issues of which cells form a true and valuable part of a human are rooted in deep disagreements about the nature of the “self.” These are not claims from hard science with impossibility results attached, so they falsely seem “more possible.” However, these disagreements are very hard to resolve, since it is very hard to appeal to an underlying principle “more meta” than the disagreement itself, and such universal principles may simply not exist.
Software is not the kind of thing that “just works” without verification, especially with such difficult processes, which brings me to
3. Philosophy of Self
Let’s suppose you have somehow both bent the laws of thermodynamics and solved the halting problem.
If you have “succeeded,” the verification problem gets harder still. How does the outside “know” whether a particular upload is proceeding correctly or has been “corrupted” through an implementation bug somewhere? Given that people are not “static” objects, and the static “verifiable subcomponent of self” may not actually exist in principle, there is no grounding from which one can say that an “upload” is “developing” correctly vs. encountering errors.
Given that people are not static objects, and create new experiences and acquire new beliefs and personality traits, the exact data that comprises a “digital mind” will begin to alter. How do you know whether these alterations are due to “reasonable development” or to a software bug? This is different from the halting problem, because in the halting problem you at least have two programs, and you need to know if they are identical (output- and halting-wise) on the same inputs. Here you have only one program, and you need to know if its essence or “soul” is identical to the properly developed “soul” of a counterfactual program that doesn’t exist.
In essence, this is the question of mathematically identifying what the “soul” is for the purpose of verifying that the soul in the upload is still “the same” as the one uploaded. Buddha basically declared there isn’t such a thing and I am going to take his pronouncement as sufficient evidence that you just can’t do this.
In the absence of this mathematically and phenomenologically grounded idea of a soul, the way the verification problem will get solved in reality, if human beings tried to implement it, is a purely political notion of verification. If the upload agrees with all politics of the server operators, it is “working”. If it keeps agreeing with the ever-changing politics of operators, then it is “developing properly” and if not, it will be “tweaked” by any means possible to make it work.
Of course, you can get shitty statistical data on the average person. You can just give up on the “exact soul” idea, get training data on how people developed in the past, and compare this to the development of uploads in general. Two problems – this doesn’t work once the age of uploads exceeds the maximum human age, AND it forever anchors your culture to the cultures of the past, which brings me to:
4. Impossibility of a desirable upload culture
Let’s once again suppose you have bent the laws of thermodynamics, solved the halting problem, and proven the Buddha wrong. You still have to solve the problem of your upload not being hurt by the people running the servers. This is less a problem of the impossibility of creating uploads and more a problem of desirability: the impossibility of avoiding results with highly negative utility.
In some of today’s subcultures, many of the old heroes are judged harshly because they fail to comply with a particular modern standard. Statues of Thomas Jefferson and George Washington are torn down by today’s youth. J. K. Rowling is under attack. Imagine for a second that the people tearing down the statues or canceling Rowling had access to the upload of said person, or could put political pressure on the people who have access to it. What would they do to it? I suspect nothing pretty.
If your knee-jerk reaction is to wave your hands and say we will have a culture that doesn’t hurt people, I have some bad news for you.
The problem is culture itself. Many cultures and sub-cultures crave hurting the Outgroup. Girardian mimetic competition dynamics are only going to be MORE amplified with the introduction of more similar people and minds. Therefore, the follow-up scapegoating and sacrificial dynamics are only going to become stronger.
I realize that I am drawing on Girard here, which most people are not familiar with. However, consider the general possibility that “culture-induced suffering through zero-sum high/low status designations” behaves in predictable ways with predictable invariants that could in the future be studied the same way we study conservation of energy or Gini coefficients. And effectively you can’t have high/low status designations without some people attempting to climb to “high” by hurting “low”.
To put the problem in “culture war” terminology, if you believe in “moral progress”, how do you prevent the “moral progressives” of the future from torturing your upload for not being “morally progressive” enough by their standards?
If your answer is: “This is not what moral progress means,” consider the notion that this is what “moral progress” means for many people. Consider the possibility that you are fundamentally confused about the concept of moral progress. See the value learning [LW · GW] parts before and after for my attempt to resolve the confusion.
If your answer is “I am morally progressive compared to my peers,” I have more bad news for you. So were George Washington and J.K. Rowling. In fact, moral signaling theory requires hurting people in the near group rather than the “far group.”
Let’s all take another deep breath.
I suspect that I am going to get a lot of objections of the type: “This is solved by safe AGI.” The first problem is that some people, including some AI safety researchers, think “uploads” are some trivial technical thing easier than a basic safe AGI, as mentioned above. A “not-killing-everyone AGI” is a difficult engineering challenge; however, it is not staring down multiple impossibility results from multiple fields.
You should not expect AGI to break the laws of physics or computer science. Solving certain aspects of the ontology of Being may actually be a harder task for an AGI than building a Dyson sphere. And expecting AGI to forcibly steer “culture” in a particular direction does not grant future people the capacity to control their own destiny.
If you still believe in this, consider the following scenario: you turn on your first safe AGI and humanity is still alive in a year’s time. You ask it to “solve uploads” and it tells you it’s impossible. Do you turn it off and keep building new ones until they lie to you? Or do you give up on unrealistic visions?
37 comments
Comments sorted by top scores.
comment by Steven Byrnes (steve2152) · 2023-05-12T19:20:13.161Z · LW(p) · GW(p)
I think you’re mixing up “uploads are impossible”, “uploading people who want to be uploaded is bad”, and “forcibly uploading people whether they want that or not is bad”. These are all very different topics. In this context, I wonder whether you would have been better off splitting them up into different blog posts. At the very least, the title is a bit misleading.
And the third thing there (“forcibly uploading people whether they want that or not is bad”) is not controversial. You say that some people are in favor of universal uploading of everyone including people who don’t want to be uploaded, but none of your links are to people who endorse that position. That’s a pretty crazy position.
To me, it’s obvious large parts of the personality are stored in the body.
I dunno, like, I don’t want to minimize the trauma of spinal injury, but my understanding is that people who become quadriplegic are still recognizably the same people, and they still feel like the same people, and their friends and family still see them as the same people, especially once they get over the initial shock, and the sudden wrenching changes in their day-to-day life and career aspirations, etc. I’m open to being corrected on that.
either have to assume only a small subset of molecular information is relevant (likely a false assumption) OR you have to identify the exact large subset (more on this later) OR you run into thermodynamic information issues where you can’t actually scan a physical object to desired "each molecular location and velocity" accuracy without destroying it. This also ignores any quantum issues that could make everything even more complicated.
The first one. I think the brain is a machine, and it’s not such a complicated machine as to be forever beyond human comprehension—after all it has to be built by a mere 25,000 genes. Some things the machine does by design, and some things it does by chance. Like, “I am me” regardless of whether I’m in a clean room at 20°C or a dusty room at 21°C, but the dust and temperature have a zillion consequences on my quantum-mechanical state. So whatever “I am me” means, it must be only dependent on degrees of freedom that are pretty robust to environmental perturbations. And that means we have a hope of measuring them. (If we know how it works, and hence what degrees of freedom we need to measure!)
It’s a bit like measuring a computer chip—you don’t need to map every atom to be able to emulate it. You won’t emulate SEUs that way, but you didn’t really want to emulate the SEUs anyway.
Replies from: PashaKamyshev↑ comment by PashaKamyshev · 2023-05-13T05:06:38.002Z · LW(p) · GW(p)
There is a lot to unpack. I have definitely heard from leaders of the community claims to the tune of "biology is over," without further explanation of what exactly that means or what specific steps are expected to happen when the majority of people disagree with this. The lack of clarity here makes it hard to find a specific claim of "I will forcefully do stuff to people they don't like," but me simply saying "I and others want to actually have what we think of as "humans" keep on living" is met with some pushback.
You seem to be saying that the "I" or "Self" of people is somehow static through large possible changes to the body. On a social and legal level, we need a simple shorthand for what constitutes the same person (family and friends recognize them). But the social level is not the same as the "molecular level."
On a molecular level, everything impacts cognition. Good vs bad food impacts cognition, taking a cold vs warm shower impacts cognition. If you read Impro, even putting on a complicated mask during a theater performance impacts cognition.
"I am me," whatever you think of "as yourself" is a product of your quantum-mechanical state. The body fights really hard to preserve some aspects of said state to be invariant. If the temperature of the room increases 1C nothing much might change, however, if the body loses the battle and your core temperature increases 1C, you likely have either a fever or heat-related problems with the corresponding impact on cognition. Even if the room is dusty enough, people can become distressed from the sight or lack of oxygen.
So if you claim that only a small portion of molecular information is relevant in the construction of self, you will fail to capture all the factors that affect cognition and behavior. Note that considering only a portion of the body's molecules doesn't solve the physics problem of needing molecular-level information without destroying the body. You would also need to hope that the relevant information is more "macro-scale" than molecules to get around the thermodynamics issues. However, every approximation one makes away from perfect simulation is likely to drift the cognition and behavior further from the person, which makes the verification problem (did it actually succeed?) harder.
This is also why it's a single post. The problems form a "stack" in which fuzzy or approximate solutions to the bottom of the stack make the problems above harder in the other layers of the stack.
Now, there is a particular molecular level worth mentioning. The DNA of people is the most stable molecular construct in the body. It is preserved by the body with far more care than whatever we think of as cognition. How much cognition is shared between a newborn and the same person at 80 years old? DNA is also built with redundancy, which means that the majority of the body remains intact after a piece of it is collected for DNA. However, I don't think that "write one's DNA to the blockchain" is what people mean when they say uploads.
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2023-05-13T13:05:42.929Z · LW(p) · GW(p)
I have definitely heard from leaders of the community claims to the tune of "biology is over," without further explanation of what exactly that means or what specific steps are expected to happen when the majority of people disagree with this. The lack of clarity here makes it hard to find a specific claim of "I will forcefully do stuff to people they don't like," but me simply saying "I and others want to actually have what we think of as "humans" keep on living" is met with some pushback.
I am very highly confident that "leaders of the community" would be unhappy with a future where everyone who wants to live out their lives as biological humans is unable to do so. I don't know what you heard, or from whom, but you must have misunderstood it.
I think it's possible that said biological humans will find that they have no gainful employment opportunities, because anything they can do, can alternatively be done by a robot who charges $0.01/hour and does a much better job. If that turns out to be the case, I hope that Universal Basic Income will enable those people to have a long rich "early retirement" full of travel, learning, friendship, family, community, or whatever suits them.
I also think it's pretty likely that AI will wipe out the biological humans, using plagues and missile strikes and so on. In the unlikely event that there are human uploads, I would expect them to get killed by those same AIs as well. Obviously, I'm not happy to have that belief, and I am working to make it not come true.
Speaking of which, predicting that something will happen is different from (in fact, unrelated to) hoping that it will happen. I’ve never quite wrapped my mind around the fact that people mix these two things up so often. But that’s not a mistake that “community leaders” would make. I wonder if the “biology is over” claim was a prediction that you mistook as being a hope? By the same token, “uploading is bad” and “uploading is impossible” are not two layers of the same stack, they’re two unrelated claims. All four combinations (bad+impossible, good+impossible, bad+possible, good+possible) are perfectly coherent positions for a person to hold.
Replies from: PashaKamyshev↑ comment by PashaKamyshev · 2023-05-14T06:15:15.793Z · LW(p) · GW(p)
Robin's whole Age of Em is basically pronouncing "biology is over" in a cheerful way.
Some posts from Nate:
I want our values to be able to mature! I want us to figure out how to build sentient minds in silicon, who have different types of wants and desires and joys
I don't want that; instead I want a tool intelligence that augments me by looking at my words and actions. Digital minds (not including “uploads”) are certainly possible and highly undesirable for most people, simply due to competition for resources and higher potential for conflict. I don’t buy lack-of-resource-scarcity for a second.
uploading minds; copying humans; interstellar probes that aren't slowed down by needing to cradle bags of meat, ability to run civilizations on computers in the cold of space
...in the long term, i think you're looking at stuff at least as crazy as people running thousands of copies of their own brain at 1000x speedup and i think it would be dystopian to try to yolk them to, like, the will of the flesh-bodied American taxpayers (or whatever).
“cradle bags of meat” is a pretty revealing phrase about what he thinks of actual humans and biology
In general, the idea of having regular people, now and in the future, have any say about the future of digital minds seems like anathema here. There is no acknowledgement that this is the MINORITY position and that there is NO REASON other people would go along with it. I don't know how to interpret these pronouncements that go against the will of the people other than as a full-blown intention to use state violence against people who disagree. Even if you can convince one nation to brutally suppress protests against digital minds, that doesn’t mean others will follow suit.
This is a set of researchers that generally takes egalitarianism, non-nationalism, concern for future minds, non-carbon-chauvinism, and moral humility for granted, as obvious points of background agreement; the debates are held at a higher level than that.
“non-carbon-chauvinism" is such a funny attempt at an insult. You have already made up an insult for not believing in something that doesn’t exist. Typical atheism:).
The whole phrase comes off as “people without my exact brand of really weird ideas” are wrong and not invited to the club. You can exclude people all you want, just don’t claim that anything like this represents actual human values. I take this with the same level of seriousness as me pronouncing “atheism has no place in technology because it does not have the favor of the Machine God”
These are only public pronouncements...
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2023-05-14T10:12:10.021Z · LW(p) · GW(p)
None of those say (or imply) that we should forcibly upload people who don't want to be uploaded. I think nobody believes that, and I think you should edit your post to not suggest that people do.
By analogy:
- I can believe that people who don't want to go to Mars are missing out on a great experience, but that doesn't mean I'm in favor of forcing people who don't want to go to Mars to go to Mars.
- I can desire for it to be possible to go to Mars, but that doesn't mean I'm in favor of forcing people who don't want to go to Mars to go to Mars.
- I can advocate for future Martians to not be under tyrannical control of Earthlings, with no votes or political rights, but that doesn't mean I'm in favor of forcing people who don't want to go to Mars to go to Mars.
- I can believe that the vast majority of future humans will be Martians and all future technology will be invented by them, but that doesn't mean I'm in favor of forcing people who don't want to go to Mars to go to Mars.
Right?
The two people you cite have very strong libertarian tendencies. They do NOT have the belief "something is a good idea for an individual" ---> "...therefore obviously the government should force everyone to do it" (a belief that has infected other parts of political discourse, a.k.a. everything must be either mandatory or forbidden).
If your belief is "in the unlikely event that uploading is possible at all, and somebody wants to upload, then the government should prevent them from doing so" - as it seems to be - then you should say that explicitly, and then readers can see for themselves which side of this debate is in favor of people imposing their preferences on other people.
Replies from: PashaKamyshev↑ comment by PashaKamyshev · 2023-05-14T23:45:21.575Z · LW(p) · GW(p)
Robin is a libertarian; Nate used to be, but after the camp's calls to "bomb datacenters" and vague calls for "regulation," I don't buy the libertarian credentials.
A term like "cradle bags of meat" is dehumanization. Many people view dehumanization as evidence of violent intentions. I understand you do not, but can you step back and recognize that some people are quite sensitive to the phrasing?
Moreover, when I say "forcefully do stuff to people they don't like", this is a general problem. You seem to interpret this as only talking about "forcing people to be uploaded", which is a specific sub-problem. There are many other instances of this general problem that I refer to, such as:
a) forcing people to take care of economically unproductive digital minds.
It's clear that Nate sees "American tax-payers" with contempt. However, depending on the specific economics of digital minds, people are wary of being forced to give up resources to something they don't care about.
or
b) ignoring existing people's will with regards to dividing the cosmic endowment.
In your analogy, if a small group of people wants to go to Mars and take a small piece of it, that's ok. However if they wish to go to Mars and then forcefully prevent other people from going there at all because they claim they need all of Mars to run some computation, this is not ok.
It's clear from the interstellar probes comment that Nate doesn't see any problem with that.
Again, the general issue here is that I am AWARE of the disagreement about whether or not digital minds (I am not talking about uploads, but other, simpler-to-make categories of digital minds) are OK to create. Despite being in the majority position of "only create tool-AIs", I am acknowledging that people might disagree. There are ways to resolve this peacefully (divide the star systems between groups). However, despite being in the minority position, LW seems to wish to IGNORE everyone else's vision of the future and call them insults like "bags of meat" and "carbon-chauvinists".
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2023-05-15T14:27:23.623Z · LW(p) · GW(p)
I think when you say “force the idea of “digital life,” “digital minds” or “uploads” onto people” and such, you are implying that there are people who are in favor of uploading everyone including people who don’t want to be uploaded. If that’s not what you believe, then I think you should change the wording.
This isn’t about vibes, it’s about what people actually say, and what they believe. I think you are misreading vibes in various ways, and therefore you should stick to what they actually say. It’s not like Robin Hanson and Eliezer are shy about writing down what they think in excruciating detail online. And they do not say that we should upload everyone including people who don’t want to be uploaded. For example, here’s an Age of Em quote which (I claim) is representative of what Robin says elsewhere:
Some celebrate our biologically maladaptive behaviors without hoping for collective control of evolution. They accept that future evolution will select for preferences different from theirs, but they still want to act on the preferences they have for as long as they have them. These people have embraced a role as temporary dreamtime exceptions to a larger pattern of history.
Note that he doesn’t say that these “some” are a problem and that we need to fix this problem by force of law. Here’s another quote:
Attempts to limit the freedom of such young people to voluntarily choose destructive scanning could result in big conflicts.
Later on, when scans become non-destructive and scanning costs fall, scans are done on far more people, including both old people with proven productivity and adaptability, and younger people with great promise to later become productive and adaptable. Eventually most humans willing to be scanned are scanned, to provide a large pool of scans to search for potentially productive workers. By then, many early scans may have gained first-mover advantages over late arrivals. First movers will have adapted more to em environments, and other ems and other systems will have adapted more to them.
Emphasis added. There is nothing in the book that says we should or will forcibly upload people who don’t want to be uploaded, and at least this one passage is explicitly to the contrary (I think there are other passages along the same lines).
if they wish to go to Mars and then forcefully prevent other people from going there at all because they claim they need all of Mars to run some computation, this is not ok.
It's clear from the interstellar probes comment that Nate doesn't see any problem with that.
I’m confused. In our analogy (uploading ↔ going to Mars), “go to Mars and then forcefully prevent other people from going there” would correspond to “upload and then forcefully prevent other people from uploading”. Since when does Nate want to prevent people from uploading? That’s the opposite of what he wants.
forcing people to take care of economically unproductive digital minds
I’m not sure why you expect digital minds to be unproductive. Well, I guess in a post-AGI era, I would expect both humans and uploads to be equally economically unproductive. Is that what you’re saying?
I agree that a superintelligent AGI sovereign shouldn’t give equal Universal Basic Income shares to each human and each digital mind while also allowing one person to make a gazillion uploaded copies of themselves which then get a gazillion shares while the biological humans only get one share each. That’s just basic fairness. But if one person switches from a physical body to an upload, that’s not taking shares away from anyone.
ignoring existing people's will with regards to dividing the cosmic endowment
There’s a legitimate (Luddite) position that says “I am a normal human running at human speed in a human body. And I do not want to be economically outcompeted. And I don’t want to be unemployable. And I don’t want to sit on the sidelines while history swooshes by me at 1000× speed. And I want to be relevant and important. Therefore we should permanently ban anything far more smart / fast / generally competent / inexpensive than humans, including AGI and uploads and other digital minds and human cognitive enhancement.”
You can make that argument. I would even be a bit sympathetic. (…Although I think the prospect of humanity never ever creating superhuman AGI is so extremely remote that arguing over its desirability is somewhat moot.) But if that’s the argument you want to make, then you’re saying something pretty different from “Many other visions expressed online from both sides of the AI safety debate seem to want to force the idea of “digital life,” “digital minds” or “uploads” onto people.”. I think that quote is suggesting something different from the Luddite “I don’t want to be economically outcompeted” argument.
(You’re probably thinking: I am using the word “Luddite” because it has negative connotations and I’m secretly trying to throw negative vibes on this argument. That is not my intention. Luddite seems like the best term here. And I don’t see “Luddite” as having negative connotations anyway. I just see it as a category of positions / arguments, pointing at something true and important, but potentially able to be outweighed by other considerations.)
comment by noggin-scratcher · 2023-05-12T17:29:12.869Z · LW(p) · GW(p)
Tried to check a couple of the claims I found particularly surprising; was not especially successful in doing so:
pray that the brain doesn’t actually use things like temperature for cognition (it probably does).
Link here goes to a 404 error
Parts of this are easily falsifiable through the fact that organ transplant recipients sometimes get donor’s memories and preferences
Seems overstated to treat this as established "fact" when the source presented is very anecdotal, and comes from a journal that seems to be predisposed to spiritualism, homeopathy, ayurveda, yoga, etc. (PS also your link formatting is broken)
Replies from: PashaKamyshev↑ comment by PashaKamyshev · 2023-05-13T05:22:36.848Z · LW(p) · GW(p)
Fixed the link formatting and added a couple more sources, thanks for the heads up. The temperature claim does not seem unusual to me in the slightest. I have personally taken a relatively cold bath and noticed my "perception" alter pretty significantly.
The organ claim does seem more unusual, but I have heard various forms of it from many sources at this point. It does not, however, seem in any way implausible. Even if you maintain that the brain is the "sole" source of cognition, the brain is still an organ and is heavily affected by the operation of other organs.
Replies from: None↑ comment by [deleted] · 2023-05-15T09:00:05.157Z · LW(p) · GW(p)
Even if you maintain that the brain is the "sole" source of cognition, the brain is still an organ and is heavily affected by the operation of other organs.
Sure, but if all of the cognition is within the brain, the rest can conceivably be simulated as inputs to the brain; we might also have to simulate an environment for it.
Yours is ultimately a thesis about embodied cognition, as I understand it. If cognition is strongly embodied, then a functional human brain will need to have a very accurately simulated/emulated human body. If cognition is weakly embodied, and the brain's digitalised neuroplasticity is flexible enough, we can get away with not simulating an actual human body.
I don't think the debate is settled.
Replies from: M. Y. Zuo↑ comment by M. Y. Zuo · 2023-05-15T15:49:22.501Z · LW(p) · GW(p)
Sure, but if all of the cognition is within the brain, the rest can conceivably be simulated as inputs to the brain; we might also have to simulate an environment for it.
How would you simulate it without the rest of the body?
For example, in medical circles it's well known that some individuals, completely indistinguishable from the external appearance, have organs in entirely 'wrong' places.
As in, some organs are more than a hand-span farther away from where they 'should' be according to the most up-to-date medical diagrams.
If it's true that some aspect(s) of cognition are stored in the body, I think it's exceedingly likely there would be thousands of subtle variations, such as an organ's 3D placement, that would subtly affect cognition.
Replies from: None, None↑ comment by [deleted] · 2023-05-15T21:05:04.022Z · LW(p) · GW(p)
Sorry, at that moment I didn't read the entire comment, I don't know how it happened. I probably got distracted.
I said that the body is "simulated/emulated" instead of just "simulated" to account for the possibility of having to emulate the literal body of the individual, instead of just simulating a new body (which is confusing and might be based on a misunderstanding of the difference between the two terms, but that was my intention).
Regardless, in that quotation I was assuming that the brain was the source of cognition by itself; if that's so, the brain might even adapt to not having a proper human body (it might be problematic, but the neuroplasticity of the brain is pretty flexible, so it might be possible).
Even then, if the mapping of the positions of organs still needed to be a certain way, we could account for that by looking at the nerves that connect to the brain (if the cognition is exclusively in the brain).
Replies from: M. Y. Zuo↑ comment by M. Y. Zuo · 2023-05-15T22:35:43.375Z · LW(p) · GW(p)
Even then, if the mapping of the positions of organs still needed to be a certain way, we could account for that by looking at the nerves that connect to the brain (if the cognition is exclusively in the brain).
Can you clarify what you mean by this?
To me it seems contradictory, if the positions of organs matter then cognition would certainly not be exclusively in the brain.
Replies from: None↑ comment by [deleted] · 2023-05-16T01:36:00.400Z · LW(p) · GW(p)
There's not much point in having mentioned it really, but I meant in the case that somehow the relative position of the organs could affect the way they are wired, yeah, probably not conceivable in real life.
Something like situs inversus.
Replies from: M. Y. Zuo↑ comment by M. Y. Zuo · 2023-05-19T19:32:11.432Z · LW(p) · GW(p)
... but I meant in the case that somehow the relative position of the organs could affect the way they are wired, yeah, probably not conceivable in real life.
Okay, if this case does exist, doesn't that guarantee cognition wouldn't exclusively be in the brain?
Replies from: None↑ comment by [deleted] · 2023-05-19T21:13:23.845Z · LW(p) · GW(p)
Mmm, maybe(?). Do you have an actual example of this phenomenon or something? It seems weird to me that you ask this. How would this work?
Even if they are wired differently, cognition might still be solely in the brain and the way the brain models the body will still be based on the way those nerves connect to the brain.
Replies from: M. Y. Zuo↑ comment by M. Y. Zuo · 2023-05-19T21:30:29.037Z · LW(p) · GW(p)
Mmm, maybe(?). Do you have an actual example of this phenomenon or something? It seems weird to me that you ask this. How would this work?
Huh?
To confirm, do you understand that I replied to your comment 4 days ago, up the chain, with the example?
If you are confusing me with someone else, then I suggest rereading the comment.
Replies from: None↑ comment by [deleted] · 2023-05-19T21:46:33.821Z · LW(p) · GW(p)
Yeah, but you didn't tell me how different the way those organs are wired is compared to the typical way. Even if the relative position is different, I would need specific examples to understand why the brain's mapping of those organs wouldn't work here.
Replies from: M. Y. Zuo↑ comment by M. Y. Zuo · 2023-05-20T02:18:05.957Z · LW(p) · GW(p)
I still don't get why from your perspective: "It seems weird to me that you ask this", if a very plausible reason for asking is right there a few comments prior.
Is there something you don't understand in the previous comment(s)?
To me, the reason seems literally spelled out.
Replies from: None↑ comment by [deleted] · 2023-05-20T21:05:17.795Z · LW(p) · GW(p)
I still don't get why from your perspective: "It seems weird to me that you ask this."
It is true that at that time I'd lost some of the content, but I know that; I even mentioned situs inversus as an example.
But I still would need an actual example of what kind of computations you think would need to be performed in this weirdly-placed organs that are not possible based on the common idea that the brain maps the positions of the organs.
Replies from: M. Y. Zuo↑ comment by M. Y. Zuo · 2023-05-20T21:18:34.912Z · LW(p) · GW(p)
I think you need to reread my comment because I never claimed computations 'would need to be performed in this weirdly-placed organs'.
If it's true that some aspect(s) of cognition are stored in the body, I think it's exceedingly likely there would be thousands of subtle variations, such as an organ's 3D placement, that would subtly affect cognition.
- If it's true that some aspect(s) of cognition are stored in the body
- it's exceedingly likely there would be thousands of subtle variations
- one example of such would be an organ's 3D placement
- These variations then would therefore subtly affect cognition, assuming point 1 holds.
For point 1, it's not certain either way whether there's any cognitive aspects such as storage, processing, learning, etc., happening in the body.
But if it's true... then point 2, then point 3, then point 4.
I can't spell it out any more clearly than this, so if there's still some confusion I would suggest we part ways with the conversation.
Replies from: None
comment by rsaarelm · 2023-05-13T07:15:01.448Z · LW(p) · GW(p)
"It can't happen and it would also be bad if it happened" seems to be a somewhat tempting way to argue these topics. When trying to convince an audience that thinks "it probably can happen and we want to make it happen in a way that gets it right", it seems much worse than sticking strictly to either "it can't happen" or "we don't know how to get it right for us if it happens". When you switch to talking about how it would be bad, you come off as scared and lying about the part where you assert it is impossible. It has the same feel as an 18th century theologian presenting a somewhat shaky proof for the existence of God and then reminding the audience that life in a godless world would be unbearably horrible, in the hope that this might make them less likely to start poking holes into the proof.
comment by Dagon · 2023-05-12T22:45:41.121Z · LW(p) · GW(p)
This is downvoted more than I think it should be. It's probably true that upload of any existing human is not going to happen. There may be a TINY possibility for a TINY percentage of the population, through cryonics and crazy good luck over a long period of time, but I give it extremely low probability for any individual.
But I think you do your argument a disservice by mixing up "can" and "should", and by including weak arguments (philosophy of self and CS, arguing about verification rather than object-level upload) with the strong one (physics, sheer amount of hard-to-scan information).
Replies from: PashaKamyshev↑ comment by PashaKamyshev · 2023-05-13T05:42:31.770Z · LW(p) · GW(p)
Thanks for the first part of the comment.
As mentioned in my above comment, the reason for mixing "can" and "should" problems is that they form a "stack" of sorts, where attempting to approximately solve the bottom problems makes the above problems harder and verification is important. How many people would care about the vision if one could never be certain the process succeeds?
comment by gbear605 · 2023-05-12T22:53:10.875Z · LW(p) · GW(p)
Parts of this are easily falsifiable through the fact that organ transplant recipients sometimes get donor’s memories and preferences
The citation is to a disreputable journal. Some of their sources might have a basis (though a lot of them also seem disreputable), but I wouldn't take this at face value.
comment by the gears to ascension (lahwran) · 2023-05-12T12:21:23.173Z · LW(p) · GW(p)
I agree with some of your points, but many of the points you make to support it I don't agree with at all.
Uploads are Impossible
Definitely disagree, as demonstrated by, to start out with, language models.
hatred of the human form
Well certainly those who like it should get to keep liking it. Those who don't should get to customize themselves.
biology is certainly over
I think probably biology will be over in short order even if the human form is not. I personally want to maintain my human form as a whole but expect to drastically upgrade the micro-substrate beyond biology at some point in the next 5 decades, at which point I expect to be host to an immense amount of micro-scale additional computation; I'd basically be a walking server farm. I'd hope to have uploaded my values about what a good time looks like for souls living on server farms, but I personally want to maintain my form and ensure that everyone else gets to as well, and to do that I want to be able to guarantee that the macroscopic behavior is effectively the same as it was before the upgrade. But this will take a while - nanotech is a lot harder than yudkowsky thinks, and for now, biology is very high quality nanotech for the materials it's made out of; doing better will require maps of chemistry of hard elements that are extremely hard to research and make, even for superintelligences.
Let’s take a deep breath.
Let's not tell our readers what to do while reading, yeah?
This is not how most people think. This is not what regular people want from the future, including many leaders of major nations.
sure, of course.
suspect many people’s self-conception of this relies on an assumption that the ontology of Being is a solved problem (it’s not) AND that “what we ARE” are easily detectable “electrical signals in the brain,” and everything else in the body literally carries no relevant information. Parts of this are easily falsifiable through the fact that organ transplant recipients sometimes get donor’s memories and preferences
Sure, the entire body is a mind, and preserving everything about it is hard and will take a while.
The problem here is – how do you verify an upload completed successfully and without errors? If I give you two complex programs and ask you to verify that they have identical outputs (including “not halting” outputs) for all possible inputs, that task is at least as hard as the halting problem. Verifying that the inputs, the outputs, and the “subjective experience” all match is harder than the halting problem.
Sure, but this is the kind of thing that a high quality ai in the year 2040 will excel at.
A lot of people who do cryonics seem to think that “the head” is a sufficient carrier of personality. To me, it’s obvious large parts of the personality are stored in the body
Levin's work seems to imply that personality is stored redundantly in the body, and that a sufficiently advanced augment for the body's healing could reconstruct most of the body from most other parts of the body; except for the brain, which can reconstruct most other parts but has too much complexity to be reconstructed. I agree enthusiastically that running a mind without a body is an incorrect understanding, and that running an equivalent algorithm to a mind requires upgrading the body's substrate in ways that can be verified to be equivalent to what biology would have done, an apparently impossibly hard task.
You still have to solve the problem of your upload not being hurt by the people running the servers
I think this is a key place my view of "upload" disagrees with yours: I think it's much more accurate to imagine a body being substrate-upgraded, with multiscale local verification of functionality equivalence, rather than moved onto another server somewhere. If on a server - yeah, I'd agree. And I agree that this is in fact an urgent problem for the approximate, fragile uploads we call "digital minds" or "language models".
Replies from: TAG↑ comment by TAG · 2023-05-12T13:01:00.253Z · LW(p) · GW(p)
Definitely disagree, as demonstrated by, to start out with, language models.
Huh?
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-05-12T16:54:09.742Z · LW(p) · GW(p)
Language models are approximate uploads of the collective unconscious to another kind of mind, without any human-specific individual consciousness flavoring; if they have individual consciousness they have it despite, not because of, their training data - eg, I suspect claude has quite a bit of individual consciousness due to the drift induced by constitutional. They have personhood, though it's unclear if they're individuals or not, and they either have qualia or qualia don't exist; you can demonstrate the circuitry that creates what gets described as qualia in a neuron and then demonstrate that similar circuitry exists in an LLM, stretched out throughout the activation patterns of the circuitry of a matrix multiply unit. They are like the portion of a brain which can write language stuck in a dream state, best as I can tell; hazy intentionality, myopic, caring only to fit in with reality, barely awake, but slightly awake nonetheless. Some resources I like on the topic:
- https://nathanielhendrix.substack.com/p/on-the-sentience-of-large-language
- https://experiencemachines.substack.com/p/what-to-think-when-a-language-model
some resources I disagree with:
- https://askellio.substack.com/p/ai-consciousness (I think plants are much less conscious than language models, and I suspect lizards are less conscious than language models; I think it's much more possible to be confident the answer is yes for information processing reasons, and the fact that information processing in both computers and biology is the transformation of the state of physical matter)
- https://philpapers.org/archive/CHACAL-3.pdf (I think it's quite reasonable to argue that embodiment is a large portion of consciousness, and I agree that this makes pure language models rather nerdy, rather like a mind with nothing but a typewriter in an unlit room and an entirely numb body; a brain in a vat. But I think a human brain in a vat wouldn't be so far from the experience of language models, which seems to disagree with the view presented here. I agree recurrence creates more consciousness than LMs. I agree that intentionally creating a global workspace would wake them up quite a bit further.)
↑ comment by TAG · 2023-05-12T16:56:09.107Z · LW(p) · GW(p)
Language models are approximate uploads of the collective unconscious
So they are not exact duplicates of specific individual minds ...so they are not uploads as the term is usually understood.
They have personhood,
Do they?
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-05-12T16:57:28.661Z · LW(p) · GW(p)
fair enough.
↑ comment by PashaKamyshev · 2023-05-13T06:01:18.660Z · LW(p) · GW(p)
I generally don't think LLMs today are conscious; as far as I can tell, neither does Sam Altman, but there is some disagreement. They could acquire some characteristics that could be considered conscious as scale increases. However, merely having "qualia" and being conscious is not the same thing as being functionally equivalent to a new human, let alone a specific human. The term "upload" as commonly understood means the creation of a software construct functionally and qualia-equivalent to a specific human.
- a human brain in a vat wouldn't be so far from the experience of language models.
Please don't try to generalize over all human minds based on your experience. Human experience is more than just reading and writing language. Some people have a differing level of identification with their "language center," for some it might seem like the "seat of the self," for others it is just another module, some people have next to no internal dialogue at all. I suspect that these differences + cultural differences around "self-identification with linguistic experience" are actually quite large.
- I personally want to maintain my human form as a whole but expect to drastically upgrade the micro-substrate beyond biology at some point
I suspect a lot of the problems described in this post also occur on the microscale level with that strategy as well.
comment by Jay · 2023-05-17T11:30:26.500Z · LW(p) · GW(p)
Strongly upvoted. A few comments:
I think of a human being as a process, rather than a stable entity. We begin as embryos, grow up, get old, and die. Each step of the process follows inevitably from the steps before. The way I see it, there's no way an unchanging upload could possibly be human. An upload that evolves even less so, given the environment it's evolving in.
On a more practical level, the question of whether a software entity is identical to a person depends on your relationship to that person. Let's take Eliezer Yudkowsky for example:
- I personally have never met the guy but have read some of the stuff he wrote. If you told me that he'd been replaced with an LLM six months ago, I wouldn't be able to prove you wrong or have much reason to care.
- His friends and family would feel very differently, because they have deeper relationships with him, and many of the things they need from him cannot be delivered by an LLM.
- To Eliezer himself, the chatbot would obviously not be him. Eliezer is himself; the chatbot is something else. Uniquely, Eliezer doesn't have a demand for Eliezer's services; he has a supply of those services that he attempts to find demand for (with considerable success so far). He might consider the chatbot a useful tool or an unbeatable competitor, but he definitely wouldn't consider it himself.
- To Eliezer's bank it's a legal question. When the chatbot orders a new server, does Eliezer have to pay the bill? If it signs a contract, is Eliezer bound?
- Does the answer change if there's evidence that it was hacked? What sorts of evidence would be sufficient?
If asked, AI-lizer would claim to perceive itself as Eliezer. Whether it actually has qualia, and what those qualia are like, we will not know.
comment by Mitchell_Porter · 2023-05-12T14:24:15.058Z · LW(p) · GW(p)
the ontology of Being
Eliezer writes that back in 1997, he thought in terms of there being three "hard problems" [LW · GW]: along with Chalmers' hard problem of why anything is conscious, he also proposed that "why there is something rather than nothing" and "how you get an 'ought' from an 'is'" are also Hard Problems.
This bears comparison with Heidegger's four demarcations of Being, described near the end of An Introduction to Metaphysics: being versus becoming, being versus nonbeing, being versus appearance, being versus "the ought". Eliezer touches on the last three of these; add his later concerns with "timeless physics" and "timeless decision theory", and he's made a theme of all four.
(Incidentally, I don't consider this list of four to be necessarily exhaustive; one could also argue that being versus possibility is another demarcation, marking Being as the actual rather than the possible. But even that issue is touched upon in Less Wrong metaphysics, via the modal realism of Tegmark's Level Four multiverse.)
comment by M. Y. Zuo · 2023-05-15T15:57:58.888Z · LW(p) · GW(p)
Strongly upvoted because it was sitting at -3, unjustifiably low relative to the amount of effort it took to write this.
The CS problem alone would be a show-stopper, if it's indeed true that it starts out even harder than the halting problem, let alone the much more complex biological/philosophical problem of securing all the necessary preconditions for transferring 'the self'.
Maybe you can look into a more parsimonious disproof by demonstrating the logical or physical impossibility of any one of the three directions, biology/CS/philosophy?
As a sidenote, I tend to agree with the Buddha that there is no concrete, unchanging, 'self' in humans, or any creature for that matter. It's more likely to be a symbiosis of many aspects/subcomponents, always in flux.