Recreational Cryonics
post by topynate · 2014-01-15T20:21:36.011Z · score: 1 (14 votes) · LW · GW · Legacy · 6 comments
We recently saw a post in Discussion by ChrisHallquist, asking to be talked out of cryonics. It so happened that I'd just read a new short story by Greg Egan which gave me the inspiration to write the following:
It is likely that you would not wish for your brain-state to be available to all and sundry, subjecting you to the possibility of being simulated according to their whims. However, you know nothing about the ethics of the society that will exist when the technology to extract and run your brain-state is developed. Thus you risk an outcome that you may find worse than mere non-existence.
I had little expectation of this actually convincing anyone, but thought it was a fairly novel contribution. When jowen's plea for a refutation went unanswered, I began attempting one myself. What I ended up with closes the door on the scenario I outlined, but opens one I find rather more disturbing.
I think I'd better start by explaining why I wrote my comment the way that I did.
Normally, when being simulated is raised as a negative possibility (referred to in SF as being 'deathcubed', and carrying the implication not so much of torture as of arbitrariness), it's in the context of an AI doing so. Now there's a pretty good argument against being deathcubed by an AI, as follows:
Any AI that would do this is unFriendly. The vast majority of uFAIs have goals incompatible with human life but not in any way concerned with it. Humans are just matter that can be better used for something else; likewise simulations use computational resources better used for something else. Therefore there is little to fear in the way of being tortured by an AI.
I sidestepped that entire argument (but I'll return to it in a minute) by referring to "the ethics of the society that will exist". In other words, without making it explicit, I created the image of a community of agents, probably human-like agents, each with ownership of resources that they could use according to a moral code and subject to some degree of enforcement. I assumed a naturalistic polity, rather than a hegemony.
With the assumptions behind my scenario laid bare, it should now be apparent that it is no more stable than a world in which everyone owns nuclear weapons. If the resources needed for such simulations are dispersed among that many people, someone will eventually use them to brute-force an AI, which will then hegemonize the universe.
If you accept the above, then you need only worry about a hegemon that permits such a society to exist. Such a hegemon would probably be classified as uFAI, and so we go back to the already refuted situation of a perversely evil AI.
Thus far I believe myself to have argued according to what passes for orthodoxy on LW. Note, though, that everything hinges on predictions about uFAI. These tend to be based on Steve Omohundro's The Basic AI Drives, which if taken seriously imply that an uFAI would convert the universe to utilons and not give a fig for human beings.
One of the drives AIs are predicted to have is the desire to be rational. I claim that a key behaviour that is eminently rational in humans has been neglected in considering AIs, particularly uFAIs. Namely, play. We humans take pleasure in play, quite aside from whatever productive gains we get out of it. Nevertheless, intellectual play has contributed to innumerable advances in science, mathematics, philosophy, etc. Dour, incurious people demonstrably fail to satisfy their values in dimensions that ostensibly are completely unrelated to leisure.

An AI, you may say, need not play in order to create utility; it can structure its thoughts more rationally than humans, without resorting to undirected intellectual activity. I grant that a hyperrational agent will not allocate as great a proportion of resources to such undirected activity - that follows from holding more accurate beliefs about what it is productive to do. But no agent can have perfect knowledge of this nature. Thus all sufficiently rational agents will devote some proportion of their resources, no matter how small, to play.
What is the nature of play for a superintelligence? For Friendly AI, by definition it will not involve deathcubing. For an unFriendly AI, this is not the case. We then need to assess how likely it is that play for such an AI would involve such atrocities. First, consider the sum of resources available to a hegemonic AI. Computationally speaking, doing sims of humans, indeed of human civilizations, would be a relative drop in the ocean. In a universe containing billions of stars in a single galaxy, how many tonnes of computronium would it take? Not many, I'd wager. Yet no matter how easy deathcubing is for an AI, perhaps it is simply so irrelevant or uninteresting that it would not arise even in undirected intellectual activity.
Perhaps. Yet it's rather easy to devise side-projects for an uFAI that are very simple to describe, take minimal resources and include untold human suffering. It begins to strain belief that of all the compactly specified tasks that refer to the universe in which an AI finds itself, it will try none of those which fall into this category.
An example seems in order. One would be simply to run every thought experiment devised by mankind. Some of these would be computationally intractable even for a superintelligence, but those tend to be confined to computer science and number theory. Let's assume that 10 billion human beings record an average of a thousand unique and tractable ideas of note, each requiring about 10^30 operations - numbers deliberately on the high side of plausible. Then it would take 10^43 operations to do all of them. A maximally efficient computer weighing one kilogram and occupying one litre appears capable of ~10^50 operations a second. Thus a superintelligence achieving only one ten-millionth of that performance in each dimension - speed, mass and volume - would finish the job in one second on a computer weighing 10,000 tonnes and occupying 10,000 m3 (imagine a zeppelin filled with water rather than hydrogen gas). It is at least plausible for an uFAI to do such a thing even before beginning wholesale matter-conversion. In such a context, where humans actually exist, play involving humans looks very natural.
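The back-of-envelope arithmetic above can be checked in a few lines. Every figure here - the 10^30 operations per idea, the ~10^50 ops/s limit for a one-kilogram computer, the one-ten-millionth efficiency factor - is an assumption taken from the paragraph, not an established fact:

```python
# Sanity-check of the post's back-of-envelope figures. All numbers are
# the post's own assumptions, not measured quantities.
people = 1e10                 # 10 billion humans
ideas_per_person = 1e3        # a thousand tractable ideas of note each
ops_per_idea = 1e30           # operations to run one thought experiment

total_ops = people * ideas_per_person * ops_per_idea   # 1e43 operations

ideal_ops_per_sec = 1e50      # quoted limit for a 1 kg, 1 litre computer
efficiency = 1e-7             # one ten-millionth of the ideal, per the text

# Paying for the speed loss with 10^7 times the mass and volume:
mass_tonnes = (1 / efficiency) / 1e3       # 1e7 kg = 10,000 tonnes
volume_m3 = (1 / efficiency) / 1e3         # 1e7 litres = 10,000 m3

achieved_ops_per_sec = ideal_ops_per_sec * efficiency  # 1e43 ops/s
runtime_s = total_ops / achieved_ops_per_sec           # ~1 second

print(f"{total_ops:.0e} ops on a {mass_tonnes:,.0f}-tonne, "
      f"{volume_m3:,.0f} m3 computer: {runtime_s:.1f} s")
```

Under these assumptions the sums do come out as claimed: 10^43 operations, dispatched in about one second by a 10,000-tonne machine.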
It hardly needs mentioning that almost everything ever discussed on this site would be included in the example above.