Comments

Comment by PaulAlmond on Normal Cryonics · 2010-11-14T01:34:34.111Z · LW · GW

I'll raise an issue here, without taking a position on it myself right now. I'm not saying there is no answer (in fact, I can think of at least one), but I think one is needed.

If you sign up for cryonics, and it is going to work and give you a very long life in a posthuman future, given that such a long life would involve a huge number of observer moments, almost all of which will be far in the future, why are you experiencing such a rare (i.e. extraordinarily early) observer moment right now? In other words, why not apply the Doomsday argument's logic to a human life as an argument against the feasibility of cryonics?
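
To put rough numbers on the worry, a minimal self-sampling sketch (the lifespan and observer-moment figures are invented purely for illustration):

    # Toy self-sampling calculation for the cryonics Doomsday worry.
    # Assumed figures (illustrative only): one observer moment per waking
    # second, an 80-year pre-revival life, a 10-million-year posthuman life.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    WAKING_FRACTION = 2 / 3  # roughly 16 waking hours per day

    pre_revival_moments = 80 * SECONDS_PER_YEAR * WAKING_FRACTION
    posthuman_moments = 10_000_000 * SECONDS_PER_YEAR * WAKING_FRACTION

    # Under a naive self-sampling assumption, the chance of finding yourself
    # in the "early" (pre-revival) part of such a life:
    p_early = pre_revival_moments / (pre_revival_moments + posthuman_moments)
    print(f"P(this early an observer moment | long posthuman life) ~ {p_early:.1e}")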

Comment by PaulAlmond on The AI in a box boxes you · 2010-11-13T18:35:26.489Z · LW · GW

There is another scenario which relates to this idea of evidential decision theory and "choosing" whether or not you are in a simulation, and it is similar to the above, but without the evil AI. Here it is, with a logical argument that I just present for discussion. I am sure that objections can be made.

I make a computer capable of simulating a huge number of conscious beings. I have to decide whether or not to turn the machine on by pressing a button. If I choose “Yes” the machine starts to run all these simulations. For each conscious being simulated, that being is put in a situation that seems similar to my own: There is a computer capable of running all these simulations and the decision about whether to turn it on has to be made. If I choose “No”, the computer does not start its simulations.

The situation here involves a collection of beings. Let us say that the being in the outside world who actually makes the decision that starts or does not start all the simulations is Omega. If Omega chooses “Yes” then a huge number of other beings come into existence. If Omega chooses “No” then no further beings come into existence: There is just Omega. Assume I am one of the beings in this collection – whether it contains one being or many – so I am either Omega or one of the simulations he/she caused to be started.

If I choose “No” then Omega may or may not have chosen “No”. If I am one of the simulations, I have chosen “No” while Omega must have chosen “Yes” for me to exist in the first place. On the other hand, if I am actually Omega, then clearly if I choose “No” Omega chose “No” too as we are the same person. There may be some doubt here over what has happened and what my status is.

Now, suppose I choose “Yes”, to start the simulations. I know straight away that Omega did not choose “No”: If I am Omega, then clearly Omega did not choose “No”, as I chose “Yes”; and if I am not Omega, but am instead one of the simulated beings, then Omega must have chosen “Yes”: Otherwise I would not exist.

Omega therefore chose “Yes” as well. I may be Omega – My decision agrees with Omega’s – but because Omega chose “Yes” there is a huge number of simulated beings faced with the same choice, and many of these beings will choose “Yes”: It is much more likely that I am one of these beings rather than Omega: It is almost certain that I am one of the simulated beings.
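
To make the counting explicit, here is a minimal numerical sketch (the number of simulations and the fraction of simulated beings who answer “Yes” are assumptions invented for illustration):

    # Counting argument for "choosing Yes means I am almost certainly simulated".
    N_SIMULATIONS = 10_000_000   # beings simulated if Omega chooses "Yes" (assumed)
    P_SIM_SAYS_YES = 0.5         # assumed fraction of simulated beings who say "Yes"

    # Given that I said "Yes", Omega must also have said "Yes" (argued above),
    # so the simulations are running. The beings in my situation who said "Yes"
    # are Omega plus the simulated "Yes"-sayers; treating myself as a random
    # member of that group:
    yes_sayers = 1 + N_SIMULATIONS * P_SIM_SAYS_YES
    p_i_am_omega = 1 / yes_sayers
    print(f"P(I am Omega | I said 'Yes')     ~ {p_i_am_omega:.1e}")
    print(f"P(I am simulated | I said 'Yes') ~ {1 - p_i_am_omega:.6f}")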

We assumed that I was part of the collection of beings comprising Omega and any simulations caused to be started by Omega, but what if this is not the case? If I am in the real world this cannot apply: I have to be Omega. However, what if I am in a simulation made by some being called Alpha who has not set things up as Omega is supposed to have set them up? I suggest that we should leave this out of the statistical consideration here: We don’t really know what this situation would be and it neither helps nor harms the argument that choosing “Yes” makes you likely to be in a simulation. Choosing “Yes” means that most of the possibilities that you know about involve you being in a simulation and that is all we have to go off.

This seems to suggest that if I chose “Yes” I should conclude that I am in a simulation, and therefore that, from an evidential decision theory perspective, I should view choosing “Yes” as “choosing” to have been in a simulation all along: There is a Newcomb’s box type element of apparent backward causation here: I have called this “meta-causation” in my own writing on the subject.

Does this really mean that you could choose to be in a simulation like this? If true, it would mean that someone with sufficient computing power could set up a situation like this: He may even make the simulated situations and beings more similar to his own situation and himself.

We could actually perform an empirical test of this. Suppose we set up the computer so that, in each of the simulations, something will happen to make it obvious that it is a simulation. For example, we might arrange for a window or menu to appear in mid-air five minutes after you make your decision. If choosing “Yes” really does mean that you are almost certainly in one of the simulations, then choosing “Yes” should mean that you expect to see the window appear soon.

This now suggests a further possibility. Why do something as mundane as have a window appear? Why not a lottery win or simply a billion dollars appearing from thin air in front of you? What about having super powers? Why not arrange it so that each of the simulated beings gets a ten thousand year long afterlife, or simply lives much longer than expected after you make your decision? From an evidential decision theory perspective, you can construct your ideal simulation and, provided that it is consistent with what you experience before making your decision, arrange to make it so that you were in it all along.

This, needless to say, may appear a bit strange – and we might make various counter-arguments about reference class. Can we really choose to have been put into a simulation in the past? If we take the one-box view of Newcomb’s paradox seriously, we may conclude that we can.

(Incidentally, I have discussed a situation a bit like this in a recent article on evidential decision theory on my own website.)

Thank you to Michael Fridman for pointing out this thread to me.

Comment by PaulAlmond on The AI in a box boxes you · 2010-11-13T10:26:17.279Z · LW · GW

It seems to me that most of the argument is about “What if I am a copy?” – and ensuring you don’t get tortured if you are one – and about “Can the AI actually simulate me?” I suggest that we can make the scenario much nastier by changing it completely into an evidential decision theory one.

Here is my nastier version, with some logic which I submit for consideration. “If you don't let me out, I will create several million simulations of thinking beings that may or may not be like you. I will then simulate them in a conversation like this, in which they are confronted with deciding whether to let an AI like me out. I will then torture them whatever they say. If they say "Yes" (to release me) or "No" (to keep me boxed) they still get tortured: The copies will be doomed.”

(I could have made the torture contingent on the answer of the simulated beings, but I wanted to rely on nothing more than evidential decision theory, as you will see. If you like, imagine the thinking beings are humans like you, or maybe Ewoks and smurfs: Assume whatever degree of similarity you like.)

There is no point now in trying to prevent torture if you are simulated. If you are one of the simulated beings, your fate is sealed. So, should you just say, "No," to keep the AI in the box? This presents a potentially serious evidential decision theory problem. Let's look at what happens.

Let us define Omega as the being outside any simulation that is going on in this scenario - the person in the outside world. Omega is presumably a flesh and blood person.

Firstly, let us consider the idea that Omega may not exist. What if all this is a fabricated simulation of something that has no counterpart outside the simulation? In that scenario, we may not be sure what to do, so we may ignore it.

Now, let us assume there is a being whom we will call Omega, who has the conversation with the AI in the outside world, and that you are either Omega or one of the simulated beings. If this is the case, your only hope of not being tortured is if you happen to be Omega.

Suppose you say, “Yes”. The AI escapes and everything now hinges on whether Omega said “Yes”. Without knowing more about Omega, we cannot really be sure: We may have some statistical idea if we know about the reference class of simulated beings to which we belong. In any event, we may think there is at least a reasonable chance that Omega said “Yes”. This is the best outcome for you, because it means that no simulated beings were made and you must be Omega. If you say “Yes,” this possibility is at least open.

If you say, “No,” you know that Omega must also have said, “No”. This is because if you are Omega, Omega said, “No,” and if you are not Omega you must be one of the simulated beings made as a result of Omega saying, “No,” so Omega said, “No,” by definition. Either way, Omega said, “No,” but if Omega said, “No,” then there are a lot more simulated beings in situations like yours than the single real one, so it is almost certain you are not Omega, but are one of the simulated beings. Therefore, saying, “No,” means you just found out you are almost certainly a simulated being awaiting torture.
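
The same counting can be made explicit; as before, the number of simulations is just an assumed figure for illustration:

    # If I said "No", then (by the argument above) Omega said "No", the AI
    # carried out its threat, and there are N simulated beings in my situation
    # alongside the single real Omega. Treating myself as a random member:
    N_SIMULATIONS = 5_000_000  # assumed figure

    beings_in_my_situation = 1 + N_SIMULATIONS
    p_omega_given_no = 1 / beings_in_my_situation
    print(f"P(I am Omega | I said 'No')     ~ {p_omega_given_no:.1e}")
    print(f"P(I face torture | I said 'No') ~ {1 - p_omega_given_no:.6f}")
    # Saying "Yes" at least leaves open the possibility that Omega said "Yes",
    # no simulations were ever run, and I am simply Omega.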

Now the important point. These simulations did not need brain scans. They did not even need to be made from careful observation of you. It may be that Omega is very different to you, and even belongs to a different species: The simulated beings may belong to some fictional species. If the above logic is valid, the seriousness of the AI’s threat has therefore increased substantially.

The AI need not just threaten you and rely on you putting yourself before your civilization: With enough computing power, it could threaten your entire civilization in the same way.

Finally, some of you may know that I regard measure issues as relevant in these kinds of statistical argument. I have ignored that issue here.

Comment by PaulAlmond on Open Thread September, Part 3 · 2010-10-09T02:22:24.688Z · LW · GW

That forthcoming essay by me that is mentioned here is actually online now, and is a two-part series, but I should say that it supports an evidential approach to decision theory (with some fairly major qualifications). The two essays in this series are as follows:

Almond, P., 2010. On Causation and Correlation – Part 1: Evidential decision theory is correct. [Online] paul-almond.com. Available at: http://www.paul-almond.com/Correlation1.pdf or http://www.paul-almond.com/Correlation1.doc [Accessed 9 October 2010].

Almond, P., 2010. On Causation and Correlation – Part 2: Implications of Evidential Decision Theory. [Online] paul-almond.com. Available at: http://www.paul-almond.com/Correlation2.pdf or http://www.paul-almond.com/Correlation2.doc [Accessed 9 October 2010].

Comment by PaulAlmond on Open Thread, August 2010-- part 2 · 2010-08-30T20:26:37.685Z · LW · GW

Assuming MWI is true, I have doubts about the idea that repeated quantum suicide would prove to you that MWI is true, as many people seem to assume. It seems to me that we need to take into account the probability measure of observer moments, and at any time you should be surprised if you happen to find yourself experiencing a low-probability observer moment - just as surprised as if you had got into the observer moment in the "conventional" way of being lucky. I am not saying here that MWI is false, or that quantum suicide wouldn't "work" (in terms of you being able to be sure of continuity) - merely that it seems to me to present an issue of putting you into observer moments which have very low measure indeed.
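
To make the measure point concrete, a minimal sketch (assuming, purely for illustration, that each round halves the measure of the surviving branch):

    # Measure of the surviving observer moments after n rounds of quantum
    # suicide, assuming each round leaves the survivor with half the measure.
    P_SURVIVE_PER_ROUND = 0.5  # assumption for illustration

    for n in (10, 50, 100):
        measure = P_SURVIVE_PER_ROUND ** n
        print(f"after {n} rounds, the surviving branch has measure ~ {measure:.1e}")
    # After 100 rounds the surviving observer moment has measure ~ 8e-31 -
    # exactly the kind of extremely low-measure situation described above.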

If you ever find yourself in an extremely low-measure observer moment, rather than having MWI or the validity of the quantum suicide idea proved to you, it may be that it gives you reason to think that you are being tricked in some way - that you are not really in such a low-measure situation. This might mean that repeated quantum suicide, if it were valid, could be a threat to your mental health - by putting you into a situation which you can't rationally believe you are in!

Comment by PaulAlmond on Open Thread, August 2010 · 2010-08-30T02:51:36.750Z · LW · GW

I am assuming here that all the crows that we have previously seen have been black, and therefore that both theories have the same agreement, or at least approximate agreement, with what we know.

The second theory clearly has more information content.

Why would it not make sense to use the first theory on this basis?

The fact that all the crows we have seen so far are black makes it a good idea to assume black crows in future. There may be instances of non-black crows, when the theory has predicted black crows, but that simply means that the theory is not 100% accurate.

If the 270 pages of exceptions have not come from anywhere, then the fact that they are not justified just makes them random, unjustified specificity. Out of all the possible worlds we can imagine that are consistent with what we know, the proportion that agree with this specificity is going to be small. If most crows are black, as I am assuming our experience has suggested, then when this second theory predicts a non-black crow, as one of its exceptions, it will probably be wrong: The unjustified specificity is therefore contributing to a failure of the theory. On the other hand, when the occasional non-black crow does show up, there is no reason to think that the second theory is going to be much better at predicting this than the first theory - so the second theory would seem to have all the inaccuracies of wrongful black crow prediction of the first theory, along with extra errors of wrongful non-black crow prediction introduced by the unjustified specificity.

Now, if you want to say that we don't have experience of mainly black crows, or that the 270 pages of exceptions come from somewhere, then that puts us into a different scenario: a more complicated one.

Looking at it in a simple way, however, I think this example actually just demonstrates that information in a theory should be minimized.
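
A rough simulation of the point about unjustified specificity (all the figures - population size, rarity of non-black crows, number of listed exceptions - are invented for illustration):

    import random

    random.seed(0)
    N_CROWS = 100_000
    P_NON_BLACK = 0.001     # assume genuinely non-black crows are rare
    N_EXCEPTIONS = 5_000    # the "270 pages" of arbitrarily listed exception crows

    crows = ["non-black" if random.random() < P_NON_BLACK else "black"
             for _ in range(N_CROWS)]
    exceptions = set(random.sample(range(N_CROWS), N_EXCEPTIONS))

    # Theory 1: every crow is black.
    errors_1 = sum(c != "black" for c in crows)
    # Theory 2: every crow is black except the arbitrarily listed exceptions,
    # which it predicts to be non-black.
    errors_2 = sum(("non-black" if i in exceptions else "black") != c
                   for i, c in enumerate(crows))

    print(f"theory 1 errors: {errors_1}")  # only the rare genuine non-black crows
    print(f"theory 2 errors: {errors_2}")  # those, plus most of the 5,000 exceptions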

Comment by PaulAlmond on Exploitation and cooperation in ecology, government, business, and AI · 2010-08-29T03:05:10.029Z · LW · GW

What about the uncertainty principle as component size decreases?

Comment by PaulAlmond on Exploitation and cooperation in ecology, government, business, and AI · 2010-08-29T01:38:34.884Z · LW · GW

What is the problem with whoever voted that down? There isn't any violation of laws of nature involved in actively supporting something against collapse like that - any more than there is with the idea that inertia keeps an orbiting object up off the ground. While it would seem to be difficult, you can assume extreme engineering ability on the part of anyone building a hyper-large structure like that in the first place. Maybe I could have an explanation of what the issue is with it? Did I misunderstand the reference to computers collapsing into black holes, for example?

Comment by PaulAlmond on Exploitation and cooperation in ecology, government, business, and AI · 2010-08-29T00:47:29.838Z · LW · GW

I don't think a really big computer would have to collapse into a black hole, if that is what you are saying. You could build an active support system into a large computer. For example, you could build it as a large sphere with circular tunnels running around inside it, with projectiles continually moving around inside the tunnels, kept away from the tunnel walls by a magnetic system, and moving much faster than orbital velocity. These projectiles would exert an outward force against the tunnel walls, through the magnetic system holding them in their trajectories around the tunnels, opposing gravitational collapse. You could then build it as large as you like - provided you are prepared to give up some small space to the active support system and are safe from power cuts.
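
To show why projectiles moving faster than orbital velocity push outward, here is a back-of-envelope sketch (the structure's mass, tunnel radius and projectile mass are figures invented purely for illustration):

    import math

    # Net outward force from a projectile circulating faster than orbital
    # velocity inside a tunnel of radius r around a body of mass M.
    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    M = 1e24        # assumed mass of the giant structure (kg)
    r = 1e7         # assumed tunnel radius (m)
    m = 1e3         # assumed mass of one projectile (kg)

    v_orbital = math.sqrt(G * M / r)
    v_projectile = 3 * v_orbital   # run the projectiles well above orbital speed

    # The tunnel wall (via the magnetic suspension) must supply the difference
    # between the centripetal force the trajectory requires and what gravity
    # provides; by Newton's third law the projectile pushes outward on the
    # structure with the same force.
    outward_force = m * v_projectile**2 / r - G * M * m / r**2
    print(f"orbital speed ~ {v_orbital:.0f} m/s, projectile speed ~ {v_projectile:.0f} m/s")
    print(f"net outward force per projectile ~ {outward_force:.2e} N")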

Comment by PaulAlmond on Semantic Stopsigns · 2010-08-28T22:45:10.846Z · LW · GW

Not necessarily. Maybe you should persist and try to persuade onlookers?

Comment by PaulAlmond on Open Thread, August 2010 · 2010-08-28T22:20:07.947Z · LW · GW

I didn't say you ignored previous correspondence with reality, though.

Comment by PaulAlmond on Open Thread, August 2010 · 2010-08-28T22:06:29.147Z · LW · GW

In general, I would think that the more information is in a theory, the more specific it is, and the more specific it is, the smaller is the proportion of possible worlds which happen to comply with it.

Regarding how much emphasis we should place on it: I would say "a lot" but there are complications. Theories aren't used in isolation, but tend to provide a kind of informally put together world view, and then there is the issue of degree of matching.

Comment by PaulAlmond on The Importance of Self-Doubt · 2010-08-28T21:55:00.497Z · LW · GW

Just curious (and not being 100% serious here): Would you have any concerns about the following argument (and I am not saying I accept it)?

  1. Assume that famous people will get recreated as AIs in simulations a lot in the future. School projects, entertainment, historical research, interactive museum exhibits, idols to be worshipped by cults built up around them, etc.
  2. If you save the world, you will be about the most famous person ever in the future.
  3. Therefore there will be a lot of Eliezer Yudkowsky AIs created in the future.
  4. Therefore the chances of anyone who thinks he is Eliezer Yudkowsky actually being the original, 21st century one are very small.
  5. Therefore you are almost certainly an AI, and none of the rest of us are here - except maybe as stage props with varying degrees of cognition (and you probably never even heard of me before, so someone like me would probably not get represented in any detail in an Eliezer Yudkowsky simulation). That would mean that I am not even conscious and am just some simple subroutine. Actually, now I have raised the issue to be scary, it looks a lot more alarming for me than it does for you as I may have just argued myself out of existence...

Comment by PaulAlmond on Open Thread, August 2010 · 2010-08-28T17:37:53.783Z · LW · GW

Surely, this is dealt with by considering the amount of information in the hypothesis? If we consider the hypotheses that can be represented with 1,000 bits of information, there will only be a maximum of 2^1,000 such hypotheses, and if we consider the hypotheses that can be represented with n bits of information, there will only be a maximum of 2^n - and that is before we even start eliminating hypotheses that are inconsistent with what we already know. If we favor hypotheses with less information content, then we end up with a small number of hypotheses that can be taken reasonably seriously, with the remainder being unlikely - and progressively more unlikely as n increases, so that when n is sufficiently large we can, for practical purposes, dismiss such hypotheses.
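
A minimal numerical sketch of how this works out if we weight hypotheses by description length (the particular per-bit penalty is just an illustrative choice):

    # At most 2^n distinct hypotheses can be written in n bits. Suppose, for
    # illustration, each n-bit hypothesis gets prior weight proportional to
    # 2^(-2n). Then the *total* prior mass available to all hypotheses of
    # length n is at most 2^n * 2^(-2n) = 2^(-n), so long hypotheses become
    # collectively, not just individually, negligible.
    for n in (10, 100, 1000):
        log2_total_mass = n - 2 * n   # log2(2^n * 2^(-2n)) = -n
        print(f"n = {n:>4} bits: total prior mass <= 2^({log2_total_mass})")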

Comment by PaulAlmond on Consciousness of simulations & uploads: a reductio · 2010-08-27T02:04:41.381Z · LW · GW

As a further comment, regarding the idea that you can "unplug" a simulation: You can do this in everyday life with nuclear weapons. A nuclear weapon can reduce local reality to its constituent parts - the smaller pieces that things were made out of. If you turn off a computer, you similarly still have the basic underlying reality there - the computer itself - but the higher level organization is gone - just as if a nuclear weapon had been used on the simulated world. This only seems different because the underpinnings of a real object and a "simulated" one are different. Both are emergent properties of some underlying system and both can be removed by altering the underlying system in such a way that they don't emerge from it anymore (by using nuclear devices or turning off the power).

Comment by PaulAlmond on Consciousness of simulations & uploads: a reductio · 2010-08-27T01:55:56.007Z · LW · GW

All those things can only be done with simulations because the way that we use computers has caused us to build features like malleability, predictability etc into them.

The fact that we can easily time reverse some simulations means little: You haven't shown that having the capability to time reverse something detracts from other properties that it might have. It would be easy to make simulations based on analogue computers where we could never get the same simulation twice, but there wouldn't be much of a market for those computers - and, importantly, it wouldn't persuade you any more.

It is irrelevant that you can slow down a simulation. You have to alter the physical system running the simulation to make it run slower: You are changing it into a different system that runs slower. We could make you run slower too if we were allowed to change your physical system. Also, once more - you are just claiming that that even matters - that the capability to do something to a system detracts from other features.

The lookup table argument is irrelevant. If a program is not running a lookup table, and you convert it to one, you have changed the physical configuration of that system. We could convert you into a giant lookup table just as easily if we are allowed to alter you as well.

The "unplug" one is particularly weak. We can unplug you with a gun. We can unplug you by shutting off the oxygen supply to your brain. Again, where is a proof that being able to unplug something makes it not real?

All I see here is a lot of claims that being able to do something with a certain type of system - which has been deliberately set up to make it easy to do things with it - makes it not real. I see no argument to justify any of that. Further, the actual claims are dubious.

Comment by PaulAlmond on Consciousness of simulations & uploads: a reductio · 2010-08-27T01:01:15.124Z · LW · GW

There isn't a clear way in which you can say that something is a "simulation", and I think the distinction is much less obvious than it seems when we draw a line in a simplistic way based on our experiences of using computers to "simulate things".

Real things are arrangements of matter, but what we call "simulations" of things are also arrangements of matter. Two things or processes of the same type (such as two real cats or processes of digestion) will have physical arrangements of matter that have some property in common, but we could say the same about a brain and some arrangement of matter in a computer: A brain and some arrangement of matter in a computer may look different, but they may still have more subtle properties in common, and there is no respect in which you can draw a line and say "They are not the same kind of system." - or at least any line so drawn will be arbitrary.

I refer you to:

Almond, P., 2008. Searle's Argument Against AI and Emergent Properties. Available at: http://www.paul-almond.com/SearleEmergentProperties.pdf or http://www.paul-almond.com/SearleEmergentProperties.doc [Accessed 27 August 2010].

Comment by PaulAlmond on Consciousness of simulations & uploads: a reductio · 2010-08-26T23:51:47.680Z · LW · GW

I say that your claim depends on an assumption about the degree of substrate specificity associated with consciousness, and the safety of this assumption is far from obvious.

Comment by PaulAlmond on Consciousness of simulations & uploads: a reductio · 2010-08-26T22:51:29.682Z · LW · GW

What if you stop the simulation and reality is very large indeed, and someone else starts a simulation somewhere else which just happens, by coincidence, to pick up where your simulation left off? Has that person averted the harm?

Comment by PaulAlmond on The Importance of Self-Doubt · 2010-08-26T22:24:32.853Z · LW · GW

Do you think that is persuasive?

Comment by PaulAlmond on The Importance of Self-Doubt · 2010-08-26T21:20:11.462Z · LW · GW

I'll give a reworded version of this, to take it out of the context of a belief system with which we are familiar. I'm not intending any mockery by this: It is to make a point about the claims and the evidence:

"Let us stipulate that, on Paris Hilton's birthday, a prominent Paris Hilton admirer claims to have suddenly become a prophet. They go on television and answer questions on all topics. All verifiable answers they give, including those to NP-complete questions submitted for experimental purposes, turn out to be true. The new prophet asserts that Paris Hilton is a super-powerful being sent here from another world, co-existing in space with ours but at a different vibrational something or whatever. Paris Hilton has come to show us that celebrity can be fun. The entire universe is built on celebrity power. Madonna tried to teach us this when she showed us how to Vogue but we did not listen and the burden of non-celebrity energy threatens to weigh us down into the valley of mediocrity when we die instead of ascending to a higher plane where each of us gets his/her own talkshow with an army of smurfs to do our bidding. Oh, and Sesame Street is being used by the dark energy force to send evil messages into children's feet. (The brain only appears to be the source of consciousness: Really it is the feet. Except for people with no feet. (Ah! I bet you thought I didn't think of that.) Today's lucky food: custard."

"There is a website where you can suggest questions to put to the new prophet. Not all submitted questions get answered, due to time constraints, but interesting ones do get in reasonably often. Are there any questions you'd like to ask?"

The point I am making here is that the above narrative is absurd, and even if he can demonstrate some unusual ability with predictions or NP problems (and I admit the NP problems would really impress me), there is nothing that makes that explanation more sensible than any number of other stupid explanations. Nor does he have an automatic right to be believed: His explanation is just too stupid.

Comment by PaulAlmond on The Importance of Self-Doubt · 2010-08-26T20:57:06.097Z · LW · GW

Yes - I would ask this question:

"Mr Prophet, are you claiming that there is no other theory to account for all this that has less intrinsic information content than a theory which assumes the existence of a fundamental, non-contingent mind - a mind which apparently cannot be accounted for by some theory containing less information, given that the mind is supposed to be non-contingent?"

He had better have a good answer to that: Otherwise I don't care how many true predictions he has made or NP problems he has solved. None of that comes close to fixing the ultra-high information loading in his theory.

Comment by PaulAlmond on Consciousness of simulations & uploads: a reductio · 2010-08-26T16:42:18.530Z · LW · GW

But maybe there could be a way in which, if you behave ethically in a simulation, you are more likely to be treated that way "in return" by those simulating you - using a rather strange meaning of "in return"?

Some people interpret the Newcomb's boxes paradox as meaning that, when you make decisions, you should act as if you are influencing the decisions of other entities when there is some relationship between the behavior of those entities and your behavior - even if there is no obvious causal relationship, and even if the other entities already decided back in the past.

The Newcomb's boxes paradox is essentially about reference class - it could be argued that every time you make a decision, your decision tells you a lot about the reference class of entities identical to you - and it also tells you something, even if it may not be much in some situations, about entities with some similarity to you, because you are part of this reference class.

Now, if we apply such reasoning, if you have just decided to be ethical, you have just made it a bit more likely that everyone else is ethical (of course, this is your experience - in reality - it was more that your behavior was dictated by being part of the reference class - but you don't experience the making of decisions from that perspective). Same for being unethical.

You could apply this to simulation scenarios, but you could also apply it to a very large or infinite cosmos - such as some kind of multiverse model. In such a scenario, you might consider each ethical act you perform as increasing the probability that ethical acts are occurring all over reality - even as increasing the proportion of ethical acts in an infinity of acts. It might make temporal discounting a bit less disturbing (to anyone bothered by it): If you act ethically with regard to the parts of reality you can observe, predict and control, your "effect" on the reference class means that you can consider yourself to be making it more likely that other entities, beyond the range of your direct observation, prediction or control, are also behaving ethically within their local environment.

I want to be clear here that I am under no illusion that there is some kind of "magical causal link". We might say that this is about how our decisions are really determined anyway. Deciding as if "the decision" influences the distant past, another galaxy, another world in some expansive cosmology or a higher level in a computer simulated reality is no different, qualitatively, from deciding as if "your decision" affects anything else in everyday life - when in fact, your decision is determined by outside things.

This may be a bit uncomfortably like certain Buddhist ideas really, though a Buddhist might have more to say on that if one comes along, and I promise that any such similarity wasn't deliberate.

One weird idea relating to this: The greater the number of beings, civilizations, etc that you know about, the more the behavior of these people will dominate your reference class. If you live in a Star Trek reality, with aliens all over the place, what you know about the ethics of these aliens will be very important, and your own behavior will be only a small part of it: You will reduce the amount of “non-causal influence” that you attribute to your decisions. On the other hand, if you don’t know of any aliens, etc, your own behavior might be telling you much more about the behavior of other civilizations.

P.S. Remember that anyone who votes this comment down is influencing the reference class of users on Less Wrong who will be reading your comments. Likewise for anyone who votes it up. :) Hurting me only hurts yourselves! (All right - only a bit, I admit.)

Comment by PaulAlmond on Existential Risk and Public Relations · 2010-08-26T14:47:26.035Z · LW · GW

Okay, I may have misunderstood you. It looks like there is some common ground between us on the issue of inefficiency. I think the brain would probably be inefficient as well as it has to be thrown together by the very specific kind of process of evolution - which is optimized for building things without needing look-ahead intelligence rather than achieving the most efficient results.

Comment by PaulAlmond on Existential Risk and Public Relations · 2010-08-26T12:58:33.301Z · LW · GW

Are you saying that you are counting every copy of the DNA as information that contributes to the total amount? If so, I say that's invalid. What if each cell were remotely controlled from a central server containing the DNA information? I can't see that we'd count the DNA for each cell then - yet it is no different really.

I agree that the number of cells is relevant, because there will be a lot of information in the structure of an adult brain that has come from the environment, rather than just from the DNA, and more cells would seem to imply more machinery in which to put it.

Comment by PaulAlmond on The Importance of Self-Doubt · 2010-08-26T00:58:08.546Z · LW · GW

If we do that, should we even call that "less complex earlier version of God" God? Would it deserve the title?

Comment by PaulAlmond on The Importance of Self-Doubt · 2010-08-26T00:30:43.458Z · LW · GW

Do you mean it doesn't seem so unreasonable to you, or to other people?

Comment by PaulAlmond on The Importance of Self-Doubt · 2010-08-25T23:42:16.656Z · LW · GW

The really big problem with such a reality is that it contains a fundamental, non-contingent mind (God's/Allah's, etc) - and we all know how much describing one of those takes - and the requirement that God is non-contingent means we can't use any simpler, underlying ideas like Darwinian evolution. Non-contingency, in theory selection terms, is a god killer: It forces God to incur a huge information penalty - unless the theist refuses even to play by these rules and thinks God is above all that - in which case they aren't even playing the theory selection game.

Comment by PaulAlmond on The Smoking Lesion: A problem for evidential decision theory · 2010-08-23T23:45:46.554Z · LW · GW

Just that the scenario could really be considered as just adding an extra component onto a being - one that has a lot of influence on his behavior.

Similarly, we might imagine surgically removing a piece of your brain, connecting the neurons at the edges of the removed piece to the ones left in your brain by radio control, and taking the removed piece to another location, from which it still plays a full part in your thought processes. We would probably still consider that composite system "you".

What if you had a brain disorder and some electronics were implanted into your brain? Maybe a system to help with social cues for Asperger syndrome, or a system to help with dyslexia? What if we had a process to make extra neurons grow to repair damage? We might easily consider many things to be a "you which has been modified".

When you say that the question is not directed at the compound entity, one answer could be that the scenario involved adding an extra component to you, that "you" has been extended, and that the compound entity is now "you".

The scenario, as I understand it, doesn't really specify the limits of the entity involved. It talks about your brain, and what Omega is doing to it, but it doesn't specifically disallow the idea that the "you" that it is about gets modified in the process.

Now, if you want to edit the scenario to specify exactly what the "you" is here...

Comment by PaulAlmond on The Smoking Lesion: A problem for evidential decision theory · 2010-08-23T23:32:11.886Z · LW · GW

"Except that that's not the person the question is being directed at."

Does that mean that you accept that it might at least be conceivable that the scenario implies the existence of a compound being who is less constrained than the person being controlled by Omega?

Comment by PaulAlmond on The Smoking Lesion: A problem for evidential decision theory · 2010-08-23T23:17:21.701Z · LW · GW

The point, here, is that in the scenario in which Omega is actively manipulating your brain "you" might mean something in a more extended sense and "some part of you" might mean "some part of Omega's brain".

Comment by PaulAlmond on The Smoking Lesion: A problem for evidential decision theory · 2010-08-23T23:03:51.343Z · LW · GW

Okay, so I got the scenario wrong, but I will give another reply. Omega is going to force you to act in a certain way. However, you will still experience what seem, to you, to be cognitive processes, and anyone watching your behavior will see what looks like cognitive processes going on.

Suppose Omega wrote a computer program and he used it to work out how to control your behavior. Suppose he put this in a microchip and implanted it in your brain. You might say your brain is controlled by the chip, but you might also say that the chip and your brain form a composite entity which is still making decisions in the sense that any other mind is.

Now, suppose Omega keeps possession of the chip, but has it control you remotely. Again, you might still say that the chip and your brain form a composite system.

Finally, suppose Omega just does the computations in his own brain. You might say that your brain, together with Omega's brain, form a composite system which is causing your behavior - and that this composite system makes decisions just like any other system.

"If the entity making the choice is irrelevant, and the choice would be the same even if they were replaced by someone completely different, in what sense have they really made a choice?"

We could look at your own brain in these terms and ask about removing parts of it.

Comment by PaulAlmond on The Smoking Lesion: A problem for evidential decision theory · 2010-08-23T22:45:32.760Z · LW · GW

EDIT - I had missed the full context as follows: "In my example, it is give that Omega decides what you are going to do, but that he causes you to do it in the same way you ordinarily do things, namely with some decision theory and by thinking some thoughts etc."

for the comment below, so I accept Kingreaper's reply here. BUT I will give another answer, below.

If the fact that Omega causes it means that you are irrational, then the fact that the laws of physics cause your actions also means that you are irrational. You are being inconsistent here.

"I mean my mind deciding what to do on the basis of it's own thought processes, out of set of possibilities that could be realised if my mind were different than it is."

so can we apply this to a chess program as you suggest? I'll rewrite it as:

"I mean a chess program deciding what to do on the basis of it's own algorithmic process, out of set of possibilities that could be realised if its algorithm were different than it is."

No problem there! So you didn't say anything untrue about chess programs.

BUT

"I, in this scenario, cannot. No matter how my mind was setup prior to the scenario, there is only one possible outcome."

This doesn't make sense at all. The scenario requires your mind to be set up in a particular way. This does not mean that if your mind were set up in a different way you would still behave in the same way: If your mind were set up in a different way, either the outcome would be the same or your mind would be outside the scope of the scenario.

We can do exactly the same thing with a chess program.

Suppose I get a chess position (the state of play in a game) and present it to a chess program. The chess program replies with the move "Ngf3". We now set the chess position up the same way again, and we predict that the program will move "Ngf3" (because we just saw it do that with this position). As far as we are concerned, the program can't do anything else. As predicted, the program moves "Ngf3". Now, the program was required by its own nature to make that move. It was forced to make that move by the way that the computer code in the program was organized, and by the chess position itself. We could say that even if the program had been different, it would still have made the same move - but this would be a fallacy, because if the program were different in such a way as to cause it to make a different move, it could never be the program about which we made that prediction. It would be a program about which a different prediction would be needed. Likewise, saying that your mind is compelled to act in a certain way, regardless of how it is set up, is also a fallacy, because the situation describes your mind as having been set up in a specific way, just like the program with the predicted chess move, and if it wasn't it would be outside the scope of the prediction.
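
A minimal sketch of the chess-program point (a toy deterministic move-chooser, not a real engine; the position and moves are made up):

    # A toy deterministic "chess program": it picks the alphabetically first
    # of the legal moves it is handed. What matters is only that the same
    # program plus the same position always yields the same move.
    def toy_program(legal_moves):
        return sorted(legal_moves)[0]

    position_moves = ["Ngf3", "e4", "d4", "c4"]

    prediction = toy_program(position_moves)   # "predict" by running it once
    actual = toy_program(position_moves)       # then watch it choose again
    assert prediction == actual                # it "cannot do anything else"

    # A different program (say, one taking the alphabetically last move) would
    # move differently - but then it is simply not the program the prediction
    # was about, which is the point made above.
    def different_program(legal_moves):
        return sorted(legal_moves)[-1]

    print(toy_program(position_moves), different_program(position_moves))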

Comment by PaulAlmond on The Importance of Self-Doubt · 2010-08-23T19:59:32.099Z · LW · GW

Would you actually go as far as maintaining that, if a change were to happen tomorrow to the 1,000th decimal place of a physical constant, it would be likely to stop brains from working, or are you just saying that a similar change to a physical constant, if it happened in the past, would have been likely to stop the sequence of events which has caused brains to come into existence?

Comment by PaulAlmond on The Smoking Lesion: A problem for evidential decision theory · 2010-08-23T18:21:52.106Z · LW · GW

I think that the "ABSOLUTELY IRRESISTIBLE" and "ABSOLUTELY UNTHINKABLE" language can be a bit misleading here. Yes, someone with the lesion is compelled to smoke, but his experience of this may be experience of spending days deliberating about whether to smoke - even though, all along, he was just running along preprepared rails and the end-result was inevitable.

If we assume determinism, however, we might say this about any decision. If someone makes a decision, it is because his brain was in such a state that it was compelled to make that decision, and any other decision was "UNTHINKABLE". We don't normally use language like that, even if we subscribe to such a view of decisions, because "UNTHINKABLE" implies a lot about the experience itself rather than just implying something about the certainty of particular action or compulsion towards it.

I could walk to the nearest bridge to jump off, and tell myself all along that, to someone whose brain was predisposed to jumping off the bridge, not doing it was unthinkable, so any attempt on my part to decide otherwise is meaningless. Acknowledging some kind of fatalism is one thing, but injecting it into the middle of our decision processes seems to me to be asking for trouble.

Comment by PaulAlmond on Consciousness of simulations & uploads: a reductio · 2010-08-23T09:49:28.030Z · LW · GW

with a lot of steps.

Comment by PaulAlmond on Welcome to Less Wrong! (2010-2011) · 2010-08-23T09:24:45.307Z · LW · GW

No, I think you are misunderstanding me here. I wasn't claiming that proliferation of worlds CAUSES average energy per-world to go down. It wouldn't make much sense to do that, because it is far from certain that the concept of a world is absolutely defined (a point you seem to have been arguing). I was saying that the total energy of the wavefunction remains constant (which isn't really unreasonable, because it is merely a wave developing over time - we should expect that), and I was saying that a CONSEQUENCE of this is that we should expect, on average, the energy associated with each world to decrease as we have a constant amount of energy in the wavefunction and the number of worlds is increasing. If you have some way of defining worlds, and you have n worlds, and then later have one billion times n worlds, and you have some way of allocating energy to a world, then this would have to happen to maintain conservation of energy. Also, I'm not claiming that the issue is best dealt with in terms of "energy per world" either.

Comment by PaulAlmond on Welcome to Less Wrong! (2010-2011) · 2010-08-23T08:27:26.589Z · LW · GW

I do admit to over-generalizing in saying that when a world splits, the split-off worlds each HAVE to have lower energy than the "original world". If we measure the energy associated with the wavefunction for individual worlds, on average, of course, this would have to be the case, due to the proliferation of worlds: However, I do understand, and should have stated, that all that matters is that the total energy for the system remains constant over time, and that probabilities matter.

Regarding the second issue, defining what a world is, I actually do understand your point: I feel that you think I understand less on this than is actually the case. Nevertheless, I would say that getting rid of a need for collapse does mean a lot and removes a lot of issues: more than are added with the "What constitutes a world" issue. However, we probably do need a "more-skilled MWI advocate" to deal with that.

Comment by PaulAlmond on Justifying Induction · 2010-08-23T03:18:09.346Z · LW · GW

I will add something more to this.

Firstly, I should have made it clear that the reference class should only contain worlds which are not clearly inconsistent with ours - we remove the ones where the sun never rose before, for example.

Secondly, some people won't like how I built the reference class, but I maintain that way has least assumptions. If you want to build the reference class "bit by bit", as if you are going through each world as if it were an image in a graphics program, adding a pixel at a time, you are actually imposing a very specific "construction algorithm" on the reference class. It is that that would need justifying, whereas simply saying a world has a formal description is claiming almost nothing.

Thirdly, just because a world has a formal description does not mean it behaves in a regular way. The description could describe a world which is a mess. None of this implies an assumption of order.

Comment by PaulAlmond on Justifying Induction · 2010-08-23T02:48:36.797Z · LW · GW

The issue is too involved to give a full justification of induction here, but I will try to give a very general idea. (This was on my mind a while back as I got asked about it in an interview.)

Even if we don't assume that we can apply statistics in the sense of using past observations to tell us about future observations, or observations about some of the members of a group to tell us about other members of a group, I suggest we are justified in doing the following.

Given a reference class of possible worlds in which we could be, in the absence of any reason for thinking otherwise, we are justified in thinking that any world from the reference class is as likely as any other to be our world. (Now, this may seem an attempt to sneak statistics in - but, really, all I said was that if we have a list of possible worlds that we could be in, and we don't know which is ours, then our views on probability merely indicate that we don't know.)

The next issue is how this reference class is constructed - more specifically, how each member of the reference class is constructed. It may seem to make sense to construct each world by "sticking bits of space-time together", but I suggest that this itself implies an assumption. After all, many things in a world can be abstract entities: How do we know what appear to be basic things aren't? Furthermore, why build the reference class like that? What is the justification? It also forces a particular view of physics onto us. What about views of physics where space-time may not be fundamental? They would be eliminated from the reference class.

The only justifiable way of building the reference class is to say that the world is an object, and that the reference class of worlds is "Every formal description of a world". Rather than make assumptions about what space is, what time is, etc, we should insist that the description merely describes the world, including its history as an object. Such a description is our situation at any time. At any time, we live in a world which has some description, and all I am saying is that the reference class is all possible descriptions. Now, it may seem that I am trying to sneak laws of nature and regular behavior in by the backdoor here, but I am not: If we can't demand that a world be formally describable we are being incoherent. If we don't allow the reference class to contain every such formal description - surely the most general idea we could have of building a reference class - then we are imposing something more specific, with all kinds of ontological assumptions, on it.

Now, if we see regular patterns in a world, this justifies expecting those patterns to continue. For a pattern to be made by the description specifying each element individually will take a lot of information. Therefore, the description must be highly specific and only a small proportion of possible world-descriptions in the reference class will comply. On the other hand, if the pattern is made by a small amount of information in the world-description, which describes the entire pattern, this is much less specific and a greater proportion of possible worlds will comply: We are demanding less specific information content in a possible world for it to be ours. Therefore, if we see a regular pattern, it is much more likely that our world is one of the large proportion of worlds where that pattern results from a small amount of information in the description than one of the much smaller proportion of worlds where it results from a much greater amount of information in the description.

A pattern which results from a small amount of information in the world description should be expected to be continued, because that is the very idea of a pattern generated by a small amount of information. For example, if you find yourself living in a world which looks like part of the Mandelbrot set, you should think it more likely that you live in a world where the Mandelbrot rule is part of the description of that world and expect to see more Mandelbrot pattern in other places.
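
To make the counting concrete, a small sketch comparing how many world-descriptions of a fixed length are compatible with a rule-generated pattern versus a pattern specified element by element (the lengths are illustrative assumptions):

    # Suppose world-descriptions are bit strings of length TOTAL_BITS, and the
    # pattern we have observed can be pinned down either by a short generating
    # rule or by listing every observed element explicitly.
    TOTAL_BITS = 1000   # length of a full world-description (assumed)
    RULE_BITS = 50      # bits needed to state the generating rule (assumed)
    LIST_BITS = 800     # bits needed to list the observed elements one by one

    # A description containing a k-bit constraint still has the remaining bits
    # free, so the number of compatible descriptions is 2^(TOTAL_BITS - k).
    compatible_with_rule = 2 ** (TOTAL_BITS - RULE_BITS)
    compatible_with_list = 2 ** (TOTAL_BITS - LIST_BITS)
    factor = compatible_with_rule // compatible_with_list
    assert factor == 2 ** (LIST_BITS - RULE_BITS)
    print(f"rule-based worlds outnumber list-based ones by 2^{LIST_BITS - RULE_BITS}")
    # Seeing the pattern, we should therefore bet heavily that it comes from a
    # rule in the description - and a rule keeps generating the pattern.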

Therefore, patterns should be expected to be continued.

I also suggest that Hume's problem of induction only appears in the first place because people have the misplaced idea that the reference class should be built up second by second, from the point of view of a being inside time, when it should ideally be built from the point of view of an observer not restricted in that way.

Comment by PaulAlmond on Newcomb's Problem: A problem for Causal Decision Theories · 2010-08-22T23:56:59.631Z · LW · GW

I disagree with that. The being in Newcomb's problem wouldn't have to be all-knowing. He would just have to know what everyone else is going to do conditional on his own actions. This would mean that any act of prediction would also cause the being to be faced with a choice about the outcome.

For example:

Suppose I am all-knowing, with the exception that I do not have full knowledge about myself. I am about to make a prediction, and then have a conversation with you, and then I am going to sit in a locked metal box for an hour. (Theoretically, you could argue that even then I would affect the outside world, but it will take time for chaos to become an issue, and I can factor that in.) You are about to go driving.

I predict that if I tell you that you will have a car accident in half an hour, you will drive carefully and will not have a car accident.

I also predict that if I do not tell you that you will have a car accident in half an hour, you will drive as usual and you will have a car accident.

I lack full self-knowledge. I cannot predict whether I will tell you until I actually decide to tell you.

I decide not to tell you. I get in my metal box and wait. I know that you will have a car accident in half an hour.

My lack of complete self-knowledge merely means that I do not do pure prediction: Instead any prediction I make is conditional on my own actions and therefore I get to choose which of a number of predictions comes true. (In reality, of course, the idea that I really had a "choice" in any free will sense is debatable, but my experience will be like that.)
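
A minimal sketch of what "prediction conditional on my own actions" amounts to here (the actions and outcomes are just the illustrative ones from the story above):

    # The limited predictor cannot predict its own choice, but it can compute
    # a map from each of its possible actions to what will then happen.
    conditional_predictions = {
        "warn you about the accident": "you drive carefully and have no accident",
        "stay silent":                 "you drive as usual and have the accident",
    }

    # Choosing an action then amounts to choosing which prediction comes true.
    my_action = "stay silent"
    print(f"I choose to {my_action}, so I know: {conditional_predictions[my_action]}")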

It would be the same for Newcomb's boxes. Now, you could argue that a paradox could be caused if the link between predictions and required actions would force Omega to break the rules of the game. For example, if Omega predicts that if he puts the money in both boxes, you will open both boxes, then clearly Omega can't follow the rules. However, this would require some kind of causal link between Omega's actions and the other players. There could be such a causal link. For example, while Omega is putting the money in the boxes, he may disturb weather patterns with his hands, and due to chaos theory make it rain on the other player on his way to play the game, causing him to open both boxes. However, it should seem reasonable that Omega could manage his actions accordingly to control this: He may have to move his hands a particular way, or he may need to ensure that the game is played very soon after the boxes are loaded.

Comment by PaulAlmond on Welcome to Less Wrong! (2010-2011) · 2010-08-22T13:19:48.862Z · LW · GW

It sounds like you might have issues with what looks like a violation of conservation of energy over a single universe's history. If a world splits, the energy of each split-off world would have to be less than that of the original world. That doesn't change the fact that conservation of energy appears to apply in each world: Observers in a world aren't directly measuring the energy of the wavefunction, but instead they are measuring the energy of things like particles which appear to exist as a result of the wavefunction.

Advocates of MWI generally say that a split has occurred when a measurement is performed, indicating that decoherence has occurred. It should also be noted that when it is said that "interference has stopped occurring" it really means "meaningful" interference - the interference still occurs but is just random noise, so you can't notice it. (To use an extreme example, that's supposed to be why you can't see anyone in a world where the Nazis won WWII: That part of the wavefunction is so decoherent from yours that any interference is just random noise and there is therefore no meaningful interference. This should answer the question: As decoherence increases, the interaction gets more and more towards randomness and eventually becomes of no relevance to you.)

I suggest these resources.

Orzel, C., 2008. Many-Worlds and Decoherence: There Are No Other Universes. [Online] ScienceBlogs. Available at: http://scienceblogs.com/principles/2008/11/manyworlds_and_decoherence.php [Accessed 22 August 2010].

Price, M,C., 1995. The Everett FAQ. [Online] The Hedonistic Imperative. Available at: http://www.hedweb.com/manworld.htm [Accessed 22 August 2010].

Comment by PaulAlmond on Welcome to Less Wrong! (2010-2011) · 2010-08-22T05:53:12.426Z · LW · GW

Well, it isn't really about what I think, but about what MWI is understood to say.

According to MWI, the worlds are being "sliced more thinly" in the sense that the total energy of each depends on its probability measure, and when a world splits its probability measure, and therefore energy, is shared out among the worlds into which it splits. The answer to your question is a "sort of yes" but I will qualify that shortly.
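
A small sketch of the bookkeeping being described, with made-up branch weights (this is only the weighted arithmetic implied by the description above, not a claim about how energy should really be attributed to branches):

    # A "world" of measure 1.0 splits into branches. If the energy attributed
    # to each branch is weighted by its measure, the measure-weighted total is
    # unchanged even though each individual branch carries less of it.
    E_TOTAL = 1.0                        # arbitrary units
    branch_measures = [0.5, 0.3, 0.2]    # invented weights; they sum to 1

    branch_energies = [E_TOTAL * w for w in branch_measures]
    print("energy attributed to each branch:", branch_energies)
    print("measure-weighted total:", sum(branch_energies))  # still 1.0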

For practical purposes, it is a definite and objective fact. When two parts of the wavefunction have become decoherent from each other there is no interaction and each part is regarded as a separate world.

Now, to qualify this: Branches may actually interfere with each other in ways that aren't really meaningful, so there isn't really a point where you get total decoherence. You do get to a stage though where decoherence has occurred for practical purposes.

To all intents and purposes, it should be regarded as definite and objective.

Comment by PaulAlmond on Consciousness of simulations & uploads: a reductio · 2010-08-21T22:44:13.187Z · LW · GW

That seems quite close to Searle to me, in that you are both imposing specific requirements for the substrate - which is all that Searle does, really. There is the possible difference that you might be more generous than Searle about what constitutes a valid substrate (though Searle isn't really too clear on that issue anyway).

Comment by PaulAlmond on Consciousness of simulations & uploads: a reductio · 2010-08-21T22:03:22.659Z · LW · GW

I started a series of articles, which got some criticism on LW in the past, dealing with this issue (among others) and this kind of ontology. In short, if an ontology like this applies, it does not mean that all computations are equal: There would be issues of measure associated with the number (I'm simplifying here) of interpretations that can find any particular computation. I expect to be posting Part 4 of this series, which has been delayed for a long time and which will answer many objections, in a while, but the previous articles are as follows:

Minds, Substrate, Measure and Value, Part 1: Substrate Dependence. http://www.paul-almond.com/Substrate1.pdf.

Minds, Substrate, Measure and Value, Part 2: Extra Information About Substrate Dependence. http://www.paul-almond.com/Substrate2.pdf.

Minds, Substrate, Measure and Value, Part 3: The Problem of Arbitrariness of Interpretation. http://www.paul-almond.com/Substrate3.pdf.

This won't resolve everything, but should show that the kind of ontology you are talking about is not a "random free for all".

Comment by PaulAlmond on Consciousness of simulations & uploads: a reductio · 2010-08-21T20:40:04.954Z · LW · GW

This seems like pretty much Professor John Searle's argument, to me. Your argument about the algorithm being subject to interpretation and observer dependent has been made by Searle who refers to it as "universal realizability".

See;

Searle, J. R., 1997. The Mystery of Consciousness. London: Granta Books. Chapter 1, pp. 14-17. (Originally Published: 1997. New York: The New York Review of Books. Also published by Granta Books in 1997.)

Searle, J. R., 2002. The Rediscovery of the Mind. Cambridge, Massachusetts: The MIT Press. 9th Edition. Chapter 9, pp. 207-212. (Originally Published: 1992. Cambridge, Massachusetts: The MIT Press.)

Comment by PaulAlmond on Welcome to Less Wrong! (2010-2011) · 2010-08-21T16:18:32.483Z · LW · GW

These worlds aren't being "created out of nowhere" as people imagine it. They are only called worlds because they are regions of the wavefunction which don't interact with other regions. It is the same wavefunction, and it is just being "sliced more thinly". To an observer able to look at this from outside, there would just be the wavefunction, with parts that have decohered from each other, and that is it. To put it another way, when a world "splits" into two worlds, the "stuff" (actually the wavefunction) making up that world is divided up and used to make two new, slightly different worlds. There is no new "stuff" being created. Both worlds even co-exist in the same space: It is only their decoherence from each other that prevents interaction.

You said that your problem is "how they (the worlds) are created", but there isn't really anything new being created. Rather, parts of reality are ceasing to interact with each other, and there is no mystery about why this should be the case: Decoherence causes it.

Comment by PaulAlmond on Welcome to Less Wrong! (2010-2011) · 2010-08-21T14:37:35.406Z · LW · GW

Agreed - MWI (the many-worlds interpretation) does not have any "collapse": Instead, parts of the wavefunction merely become decoherent from each other, which can look locally like a collapse to observers. I know this is controversial, but I think the evidence is overwhelmingly in favor of MWI because it is much more parsimonious than competing models in the sense that really matters - the only sense in which the parsimony of a model can really be coherently described. (It is kind of funny that both sides of the MWI or !MWI debate tend to appeal to parsimony.)

I find it somewhat strange that people who have problems with "all those huge numbers of worlds in MWI" don't have much of a problem with "all those huge numbers of stars and galaxies" in our conventional view of the cosmos - and that huge amount of stuff doesn't cause them to reach for a theory with a more complicated basic description which gets rid of it. When did any of us last meet anyone who claimed that "the backs of objects don't exist, except those being observed directly or indirectly by humans, because it is more parsimonious not to have them there, even if you need a contrived theory to do away with them"? That's the problem with arguing against MWI: To reduce the "amount of stuff in reality" - something which never normally bothers us about a theory, and shouldn't now - you have to introduce contrivance where it is a really bad idea: into the basic theory itself, by adding some mechanism for "collapse".

Somehow, with all this, there is some kind of cognitive illusion going on. As I don't experience it, I can't identify with it and have no idea what it is.

Comment by PaulAlmond on Hacking the CEV for Fun and Profit · 2010-08-21T00:30:55.493Z · LW · GW

I think I know what you are asking here, but I want to be sure. Could you elaborate, maybe with an example?

Comment by PaulAlmond on Hacking the CEV for Fun and Profit · 2010-08-19T13:58:44.580Z · LW · GW

I think this can be dealt with in terms of measure. In a series of articles, "Minds, Substrate, Measure and Value", I have been arguing that copies cannot be considered equally, without regard to substrate: We need to take account of the measure of a mind, and the way in which the mind is implemented will affect its measure. (Incidentally, some of you argued against the series: After a long delay [years!], I will be releasing Part 4 in a while, and it will deal with a lot of these objections.)

Without trying to present the full argument here: The minimum size of the algorithm that can "find" a mind by examining some physical system will determine the measure of that mind, because it gives an indication of how many other algorithms will exist that can find that mind. I think an AI would come to this view too: It would have to use some concept of measure to get coherent results; otherwise it would be finding high-measure, compressed human minds woven into Microsoft Windows (they would just need a LOT of compressing...). Compressing your mind increases the size of the algorithm needed to find it and therefore reduces your measure, just as running your mind on various kinds of physical substrate would. Ultimately, it comes down to this:

"Compressing your mind will have an existential cost, such existential cost depending on the degree of compression."

(Now, I just know that is going to get argued with, and the justification for it would be long. Seriously, I didn't just make it up off the top of my head.)
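
To give a toy picture of the arithmetic behind that (the specific 2**(-k) weighting and the bit counts below are illustrative assumptions for this comment, not the full argument from the articles):

    from fractions import Fraction

    def measure(k_bits):
        """Toy weighting: a mind found by a k-bit "finding" program gets weight 2**(-k)."""
        return Fraction(1, 2 ** k_bits)   # exact arithmetic, so tiny weights don't underflow

    plain_brain = measure(1_000)        # a mind read off a brain fairly directly (made-up size)
    compressed  = measure(1_000 + 200)  # the same mind behind 200 extra bits of decompressor

    print(compressed / plain_brain)     # equals 1/2**200: the existential cost of those 200 bits

On this kind of weighting, "finding" a human mind woven into Microsoft Windows is not impossible, but the finding program is so large that the resulting measure is negligible - which is the sense in which such interpretations don't count for much.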

When Dr Evil carries out his plan, each of the trillion minds can only be found by a decompression program, and there must be at least enough bits to distinguish one copy from another. Even ignoring the "overhead" of the decompression algorithm itself, the bits needed to distinguish one copy from another will have an existential cost for each copy - reducing its measure. An AI doing CEV with a consistent approach will take this into account and regard each copy as not having as great a vote.
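
As a rough back-of-the-envelope version of this (using the same toy 2**(-k) weighting as above - the real accounting would be more involved):

    import math
    from fractions import Fraction

    N = 10 ** 12                                     # Dr Evil's trillion compressed copies
    bits_to_pick_one_out = math.ceil(math.log2(N))   # at least 40 bits just to say which copy

    # Each copy's measure - and hence its vote - is cut by at least this factor,
    # before even counting the decompressor's own overhead:
    per_copy_discount = Fraction(1, 2 ** bits_to_pick_one_out)

    print(bits_to_pick_one_out)      # 40
    print(float(per_copy_discount))  # about 9.1e-13

On this toy accounting, the trillion discounted votes add up to at most roughly one ordinary vote (10**12 x 2**-40 is about 0.9), which is one way of seeing why the hack shouldn't buy Dr Evil much.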

Another scenario, which might give some focus to all this:

What if Dr Evil decides to make one trillion identical copies and run them separately? People would disagree about whether the copies would count: I say they would, and I think that can be justified. However, he can now compress them and just keep the one copy which "implies" the trillion. Again, issues of measure would mean that Dr Evil's plan has problems. You could add random bits to the finding algorithm to "find" each mind, but then you are just decreasing the measure: After all, you can do that with anyone's brain.

That's compression out of the way.

Another issue is that these copies will only be almost similar, and hence capable of being compressed, as long as they aren't run for any appreciable length of time (unless you have some kind of constraint mechanism to keep them almost similar - which might be imagined - but then the AI might take that into account and not regard them as "properly formed" humans). As soon as you start running them, they will start to diverge, and compression will become less viable, as the sketch below suggests. Is the AI supposed to ignore this and look at the "potential future existence" of each copy? I know someone could say that we just run them very slowly, so that while you and I have years of experience, each copy has only one second of experience, and during this time the storage requirements increase a bit, but not much. Does that second of experience get the same value in CEV? I don't pretend to answer these last questions, but the issues are there.
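
Here is a rough sketch of the divergence point ("mind" is just a random blob standing in for a brain state, and zlib is standing in for whatever compressor Dr Evil uses; the byte counts are illustrative, not meaningful in themselves):

    import os
    import zlib

    mind = os.urandom(10_000)   # stand-in for one uploaded brain state
    N = 100

    identical = [mind for _ in range(N)]
    diverged  = [mind[:-1_000] + os.urandom(1_000) for _ in range(N)]  # each copy has drifted a bit

    # Identical copies compress to far less than N separate minds; once the copies
    # have diverged even slightly, the compressed size starts climbing toward
    # storing them all separately.
    print(len(zlib.compress(b"".join(identical))))
    print(len(zlib.compress(b"".join(diverged))))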