Waterfall Ethics
post by calef · 2012-01-30T21:14:28.774Z · LW · GW · Legacy · 30 comments
I recently read Scott Aaronson's "Why Philosophers Should Care About Computational Complexity" (http://arxiv.org/abs/1108.1791), which has a wealth of interesting thought-food. Having chewed on it for a while, I've been thinking through some of the implications and commitments of a computationalist worldview, which I don't think is terribly controversial around here. (There's a brief discussion in the paper about the Waterfall Argument, and it's worth reading if you're unfamiliar with either it or the Chinese room thought experiment.)
That said, suppose we subscribe to a computationalist worldview. Further suppose that we have a simulation of a human running on some machine. Even further, suppose that this simulation is torturing the human through some grisly means.
By our supposed worldview, our torture simulation is reducible to a computation on some machine, say a one-tape Turing machine. This one-tape Turing machine representation, then, must have some initial state.
My first question: Is more 'harm' done in actually carrying out the computation of the torture simulation on our one-tape Turing machine than in simply writing out the initial state of the torture simulation on the Turing machine's tape?
The computation, and thus the simulation itself, is uniquely specified by that initial encoding. My gut feeling here is that no, no more harm is done in actually carrying out the computation, because the 'torture' that occurs is a structural property of the encoding. This might lead to ill-formed questions like "But when does the 'torture' actually 'occur'?" for some definition of those words. But, as I said, I don't think that question makes sense; it is more indicative of the difficulty of thinking about our subjective experience as something reducible to deterministic processes than it is a criticism of my answer.
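To make the determinism concrete, here is a minimal sketch of a deterministic one-tape Turing machine in Python. The transition table is a hypothetical toy (a unary incrementer), standing in only for the idea that the full trace is a function of the rules and the initial tape; it is not meant to model anything from the post.

```python
# A minimal sketch (purely hypothetical toy rules) of a deterministic one-tape
# Turing machine: the entire sequence of configurations is a pure function of
# the transition table and the initial tape.

def run(transitions, initial_tape, max_steps=100):
    """Return the list of every configuration reached from the initial one."""
    tape = dict(enumerate(initial_tape))    # sparse tape: position -> symbol
    state, head, trace = "start", 0, []
    for _ in range(max_steps):
        trace.append((state, head, dict(tape)))
        if state == "halt":
            break
        symbol = tape.get(head, "_")        # "_" is the blank symbol
        state, tape[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    return trace

# Toy rules: scan right over 1s, append one more 1, halt.
toy_rules = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}

# Running it twice (or on seashells, or by hand) yields exactly the same trace:
assert run(toy_rules, "111") == run(toy_rules, "111")
```

However the steps are physically realized, the trace that comes out is fixed once the rules and the initial tape are fixed.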
If one thinks more harm is done in carrying out the simulation, then is twice as much harm done by carrying out the simulation twice? Does the representation of the simulation matter? If I go out to the beach and arrange sea shells in a way that mimics the computation of the torture, has the torture 'occurred'?
My second question: If the 'harm' occurring in the simulation is uniquely specified by the initial state of the Turing machine, how are we to assign moral weight (or positive/negative utility, if you prefer) to actually carrying out this computation, or even to the existence of the initial state?
As computationalists, we agree that the human being represented by the one-tape Turing machine feels pain just as real as ours. But (correct me if I'm wrong), it seems we're committed to the idea that the 'harm' occurring in the torture simulation is a property of the initial state, and this initial state exists independently of us actually enumerating it. That is, there is some space of all possible simulations of a human as represented by encodings on a one-tape Turing machine.
Is the act of specifying one of those states 'wrong'? Does the act of recognizing such a possible space of encodings realize all of them, and thus cause a vast number of tortures and pleasures?
I don't think so. That just seems silly. But this also seems to rob a simulated human of any moral worth, which is somewhat contradictory: we recognize that the pain a simulated human feels is real, yet we assign no utility to it. Again, I don't think my answers are *right*; they were just my initial reactions. Regardless of how we answer either of my questions, we seem committed to strange positions.
Initially, this whole exercise was an attempt to find a way to dodge the threat of some superintelligent malevolent AI simulating the torture of copies of me. I don't think I've actually dodged that threat, but it was interesting to think about.
30 comments
Comments sorted by top scores.
comment by Vaniver · 2012-01-31T00:44:34.167Z · LW(p) · GW(p)
As computationalists, we agree that the human being represented by the one-tape Turing machine feels pain just as real as ours.
Ah, a proof by contradiction that we are not computationalists!
↑ comment by [deleted] · 2012-01-31T01:48:36.840Z · LW(p) · GW(p)
To the OP: By the quoted statement, do you mean that the human being so represented feels pain as the simulation is being conducted, or merely as a result of the simulation's initial state existing?
To Vaniver: I'm certainly of the opinion that actually simulating a human (or any other person) under the given circumstances would cause said human to (1) actually exist and (2) actually experience pain. Of course, by "we" you might really mean "you" or some other subset of the site's population.
I'm not anywhere near so certain about the "initial state" situation, primarily because I don't seem to have a coherent model of what it fundamentally means to "exist" yet.
↑ comment by calef · 2012-01-31T02:40:39.291Z · LW(p) · GW(p)
Well, I suppose we have to be clearer about what it even means to feel pain "as the simulation is being conducted". If we accept that pain does occur, then that pain occurring is independent of the way we carry out the simulation. So let's pretend our Turing machine tape is a bunch of sea shells on a sea shore, and I've devised a way to carry out an isomorphic set of operations on the sea shells that performs the computation.
We could select the obvious point in time that corresponds to the instant before the simulated human actually starts feeling pain, as represented by pain receptors firing, as encoded in some configuration of the sea shells. Does the "pain" begin in the next timestep? Is it only a property of completed timesteps (as there is some time it takes me to actually perform the operations on the sea shells)? What if I just stop, mid-timestep?
To me, it seems much more reasonable that the "pain" is just a property of the 'time' evolution of the system. It seems strange to imbue the act of carrying out the computation with 'causing pain'.
↑ comment by [deleted] · 2012-01-31T04:46:29.004Z · LW(p) · GW(p)
I agree with that argument. That is to say, I find it intuitively appealing. But I also find the converse argument (things shouldn't be said to exist just because someone wrote down a mathematical equation that might give rise to them) equally intuitively appealing.
(This line of reasoning has occurred here before, by the way. See also that post's predecessors.)
All things considered, my current position is that I don't understand what's going on well enough to come to any certain conclusion. I draw the distinction at actually conducting the simulation purely because it seems the slightly more appealing of the two options.
↑ comment by Lightwave · 2012-01-31T09:24:07.678Z · LW(p) · GW(p)
To me, it seems much more reasonable that the "pain" is just a property of the 'time' evolution of the system. It seems strange to imbue the act of carrying out the computation with 'causing pain'.
"the act of carrying out the computation" is "the 'time' evolution of the system". And the system needs to have the causal organization that implements the computation.
↑ comment by calef · 2012-01-31T12:50:27.966Z · LW(p) · GW(p)
To actually know what occurred, we must carry out the computation.
Does the act of carrying out the computation change anything about the 'time' evolution of the system? Calling it a 'time' evolution perhaps puts the wrong emphasis on what I think is important here. As another poster has said, "2+2=4" is a good analogue. We can devise a computation that results in the answer to the query "What is 2+2?", but I don't think one can argue that actually performing that computation can be equated with the result.
When I say 'time' evolution, I really mean the thing floating in idea space that is the decidable answer (in a formal sense) to the question "What is the sequence of subsequent 'time' steps of this initial configuration?"
↑ comment by Lightwave · 2012-01-31T13:38:43.233Z · LW(p) · GW(p)
I'm not quite sure what you mean. Computation is a process, not a state (or a configuration of matter). For a physical system to implement a certain computation, it needs to "evolve over time" in a very specific way. You could probably say it's a series of states that are causally connected in a specific way.
↑ comment by calef · 2012-02-01T01:21:23.880Z · LW(p) · GW(p)
I think the process of computation matters only insofar as we do not know the result of any given computation before performing it.
So say I have performed the torture sim, and say that I have every configuration of the tape listed on a corresponding page of some really long book. Is the computation performed once again if I flip through the book? Or must I physically carry out the computation using some medium (e.g. sea shells)?
To me, it seems that the only difference between the universe before I ran the simulation and the universe after is that I know what occurred in that simulation. The simulation itself, and all of its content (that is, the sequence of states following from the initial state), was already a fact of the universe before I knew about it.
↑ comment by Lightwave · 2012-02-01T09:10:46.530Z · LW(p) · GW(p)
Is the computation performed once again if I flip through the book? Or must I physically carry out the computation using some medium (e.g. sea shells)?
So what is your answer to these questions? Does flipping through the book create torture? And what if you have the algorithm / list of steps / list of tape configurations described in a book before you implement them and run the Turing machine?
↑ comment by calef · 2012-02-01T15:29:07.158Z · LW(p) · GW(p)
I don't think it "creates torture" any more than saying 2+2=4 "creates" the number 4. Or, at least, that's what I think a computationalist is committed to.
If I have some enumeration of the torture sim in hand, but I haven't performed the computation myself, I have no way of trusting that this enumeration actually corresponds to the torture sim without "checking" the computation. If one thinks that performing the torture sim on a Turing machine is equivalent to torture, one must also be committed to thinking that checking the validity of the enumeration one already has is equivalent to torture.
But this line of thought seems to imply that the reality of the torture is entirely determined by our state of knowledge about any given step of the Turing machine, which strikes me as absurd. What if one person has checked the computation and another hasn't? It's essentially the same position as saying that '4' doesn't exist unless we compute it somehow (which, admittedly, isn't a new idea).
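To illustrate the "checking" point, here is a small sketch, assuming the same hypothetical configuration format as the Turing machine sketch near the top of the post: verifying a claimed list of configurations means recomputing every step, which is exactly the work of running the machine.

```python
# A minimal sketch of the "checking" point: verifying a claimed book of
# configurations means recomputing each step, which is the same work as running
# the machine. The configuration format (state, head, tape dict) and the
# transition-table format are hypothetical, matching the earlier sketch.

def step(transitions, config):
    """Compute the successor of one configuration."""
    state, head, tape = config
    if state == "halt":
        return config
    new_state, written, move = transitions[(state, tape.get(head, "_"))]
    return new_state, head + (1 if move == "R" else -1), {**tape, head: written}

def verify(transitions, claimed_trace):
    """True iff every 'page' of the book follows from the previous page."""
    return all(step(transitions, before) == after
               for before, after in zip(claimed_trace, claimed_trace[1:]))

# Flipping through the book while calling verify() re-performs each step of the
# computation; flipping through it without checking does not.
```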
comment by [deleted] · 2012-01-30T21:42:00.685Z · LW(p) · GW(p)
This has been mentioned before, for anyone who is interested.
I think the conclusion was that the sims become morally important members of our universe (as opposed to just calculations to see what happens) when there is enough two-way causal interaction.
I think there's still some confusion there.
↑ comment by Viliam_Bur · 2012-01-31T09:39:02.972Z · LW(p) · GW(p)
Thanks for the link! It is also my intuition that universes exist in possibility-space, not in computation-space.
If a Turing machine simulation is torturing some poor guy, then he is already being tortured in possibility-space, whether we run the simulation or not, and regardless of how we run it.
Simulating a universe is simply a window that allows us to see what happens in that universe. It is not a cause of that universe. Running 3^^^3 simulations of torture is simply looking 3^^^3 times at a universe whose inhabitant is (always was, always will be; the possibility-space is timeless) tortured as a by-product of the laws of that universe. Seeing it may hurt our feelings, but it does not do any additional harm to the person being simulated. Also, running 3^^^3 simulations with modified parameters is looking into 3^^^3 different universes.
If I repeat a hundred times that "2+2=4", it does not make "2+2=4" more real in any sense.
↑ comment by khafra · 2012-02-01T13:20:12.941Z · LW(p) · GW(p)
This perspective seems an elegant resolution, in many ways. But it also seems to presuppose either some variant of Tegmark's level IV multiverse, or at least that all logical possibilities have moral weight. I'm not sure either of these has been adequately established.
↑ comment by Viliam_Bur · 2012-02-01T17:41:01.542Z · LW(p) · GW(p)
I have trouble understanding some aspects of this "multiverse" stuff, so my reasoning may be very confused.
My biggest problem is what makes some universes "more real" than others. For example, why is our universe more real than a universe that is exactly like our universe until now, but in which some magic starts happening right now? It seems to me that the official answer is "Solomonoff induction", which says that 'our universe' has a shorter description than 'our universe + magic starting right now', but that answer seems to me like passing the buck. Why should the Tegmark multiverse care about the length of a description?
But the question relevant to this topic is this: Let there be a universe A to which Solomonoff induction gives prior probability X, and a universe B with probability Y. Universe B contains a computer that runs a simulation of universe A... now is the probability of universe A still X, or is it X+Y? I don't even know what this question means (maybe nothing), but it seems to me that it bears on whether the people running the simulation in universe B are somehow responsible for what happens in universe A.
comment by Alejandro1 · 2012-01-31T02:14:51.346Z · LW(p) · GW(p)
Probably most LWers are familiar with it, but Hofstadter's classic "A Conversation with Einstein's Brain" is a good read on these same questions.
I am always confused, too, when I think about these questions. The only way I can think of to solve the puzzles without rejecting computationalism is to reject an ultimate distinction between a computation existing abstractly and a computation being run, and go in the Tegmarkian direction of a fully Platonic ontology of abstract forms, among them computations, among them us. But since I don't find this believable (a gut reaction, which I acknowledge is irrational but can't remedy), I go back in practice to either rejecting computationalism, at least in its strong forms, or assuming there is another solution to the puzzles which I cannot think of.
↑ comment by torekp · 2012-02-05T22:47:59.913Z · LW(p) · GW(p)
While I accept computationalism only for intentionality (i.e. semantics, knowledge, etc.) and not qualia, I don't see why computationalists of all stripes shouldn't insist that a computation must actually run. EY's Timeless Causality post looks relevant, offering a redefinition of running a computation in terms of causal relations, rather than, necessarily, requiring extension in time.
comment by Pavitra · 2012-01-31T18:31:05.642Z · LW(p) · GW(p)
Is more 'harm' done in actually carrying out the computation of the torture simulation on our one-tape Turing machine than in simply writing out the initial state of the torture simulation on the Turing machine's tape?
The tape does not contain all the information necessary. You also need the machine that interprets the tape. A different machine would perform a different computation.
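A small sketch of this point, using a hypothetical single-step function and two made-up transition tables: the same tape, interpreted by different machines, yields different computations.

```python
# A minimal sketch of the point above: the tape alone underdetermines the
# computation. The same (hypothetical) tape contents, interpreted by two
# different transition tables, already diverge after a single step.

def step(rules, state, head, tape):
    """One step of a one-tape machine; returns the successor configuration."""
    new_state, written, move = rules[(state, tape.get(head, "_"))]
    return new_state, head + (1 if move == "R" else -1), {**tape, head: written}

tape = {0: "1", 1: "1"}                               # identical tape contents

copy_rules  = {("start", "1"): ("start", "1", "R")}   # machine A: leave 1s alone
erase_rules = {("start", "1"): ("start", "_", "R")}   # machine B: blank them out

# Same tape, different machines, different computations:
assert step(copy_rules, "start", 0, tape) != step(erase_rules, "start", 0, tape)
```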
Loading the tape into the machine, but not powering the machine, is also insufficient. You don't then have a hypothetical Turing machine; rather, you have a machine that performs the function of doing nothing with the tape. Reality doesn't care about human-intuitive hypotheticals and near-misses.
Representation is irrelevant; the seashells are torture because working out the right arrangement of seashells necessarily involves actually performing the computation.
↑ comment by moridinamael · 2012-01-31T19:08:37.307Z · LW(p) · GW(p)
Why privilege the physical movement of the seashells? What if I move the seashells into position for timestep 35469391 and then mentally imagine the position of the seashells at timestep 35469392? You could say I am "performing the calculation," but you could also say I am "discovering the result of propagating forward the initial conditions."
I don't think our intuitions about what "really happens" are useful. I think we have to zoom out at least one level and realize that our moral and ethical intuitions only mean anything within our particular instantiation of our causal framework. We can't be morally responsible for the notional space of computable torture simulations because they exist whether or not we "carry them out." But perhaps we are morally responsible for particular instantiations of those algorithms.
I don't know the answer, but I don't think the answer is that performing mechanical operations with seashells reifies torture while writing down the algorithm does not.
comment by lessdazed · 2012-01-30T21:35:55.005Z · LW(p) · GW(p)
Shit and Bullshit Rationalists Don't Say:
"I've read more papers by Scott Aaronson than just the one." "Which one?" (Both of these.)
Quantity of Experience: Brain-Duplication and Degrees of Consciousness, by Nick Bostrom
↑ comment by ShardPhoenix · 2012-01-30T22:19:53.862Z · LW(p) · GW(p)
Do you mean to say that Aaronson's paper is bad, or that everyone's already read it?
↑ comment by lessdazed · 2012-01-31T00:57:00.467Z · LW(p) · GW(p)
Not that it's bad, for that would be confusing levels, even if "shit" were being used in its usual figurative sense. For example, I would consider some true things said that are self-harmful violations of social norms "shit."
Like others, I read it from a link on LW, I think... thanks for posting.
comment by Shmi (shminux) · 2012-01-30T21:47:02.642Z · LW(p) · GW(p)
As computationalists, we agree that the human being represented by the one-tape Turing machine feels pain just as real as ours.
If so, then the future SPCA (Society for the Prevention of Cruelty to Automata, a name shamelessly stolen from Stanislaw Lem) will fight for the prohibition of all AI testing, thus bringing all progress to a grinding halt.
comment by bogus · 2012-01-31T06:36:24.234Z · LW(p) · GW(p)
As computationalists, we agree that the human being represented by the one-tape Turing machine feels pain just as real as ours.
This is a deeply confused argument. I can simulate a spring-mass harmonic oscillator, and argue that the simulated spring has the same elasticity as some particular spring in the real world. But that elasticity is a property of the simulation, not the physical substrate. The computer chip running the simulation is not an elastic spring. Similarly, the Turing machine simulating a human being does not experience any physical pain.
↑ comment by Multipartite · 2012-01-31T13:23:37.635Z · LW(p) · GW(p)
The Turing machine doing the simulating does not experience pain, but the simulated human being does.
Similarly, the waterfall argument found in the linked paper seems as though it could as easily be used to argue that none of the humans in the solar system have intelligence unless there's an external observer to impose meaning on the neural patterns.
A lone mathematical equation is meaningless without a mind able to read it and understand what its squiggles can represent, but functioning neural patterns which respond to available stimuli causally (that is, through reliable cause and effect) are the same whether embodied in cell weights or in tape states. (So, unless one wishes to ignore one's own subjective consciousness and declare oneself a zombie...)
For the actual-versus-potential question, I am doubtful regarding the answer, but for the moment I imagine a group of people in a closed system (say, an experiment room), suddenly (non-lethally) frozen in ice by a scientist overseeing the experiment. If the scientist were to later unfreeze the room, then, provided the system remained closed, certain things would happen. However, if it were never unfrozen, then they would never happen. Also, if they were frozen yet the scientist decided to interfere in the experiment and make the system no longer closed, then different things would happen. As with the timestream in normal life, 'pain' (etc.) is only said to take place at the moment it is actually carried out. (And if one had all states laid out simultaneously, like a 4D person looking at all events in one glance from past to present, then 'pain' would only be relevant for the one point/section in which it was being carried out, rather than for the entire thing.)
Now though, the question of the pain undergone by the models in the predicting scientist's mind (perhaps using his/her/its own pain-feeling systems for maximum simulation accuracy) by contrast... hmm.