Posts

Comments

Comment by MockTurtle on Stupid Questions February 2017 · 2017-02-14T16:43:46.969Z · LW · GW

Interesting questions to think about. Seeing if everyone independently describes the clothes the same way (as suggested by others) might work, unless the information is leaked. Personally, my mind went straight to the physics of the thing, 'going all science on it' as you say. As emperor, I'd claim that the clothes should have some minimum strength, lest I rip them the moment I put them on. If a piece of the fabric, stretched between the two tailors, can at least support the weight of my hand (or some other light object, if you're not too paranoid about the tailors' abilities as illusionists), then it should be suitable.

Then, when your hand (or whatever) goes straight through, either they'll admit that the clothes aren't real, or they'll come up with some excuse about the cloth being so fine that it ripped, or that things pass straight through it - at which point you can say that these clothes are useless to you if they'll rip at the slightest movement or somehow phase through flesh, etc.

Incidentally, that's one of my approaches to other things invisible to me that others believe in. Does it have practical uses or create a physical effect in the world? If not, then even if it's really there, there's not much point in acknowledging it...

Comment by MockTurtle on Open Thread, Aug. 15. - Aug 21. 2016 · 2016-08-18T12:34:06.136Z · LW · GW

Even though it's been quite a few years since I attended any quantum mechanics courses, I did do a talk as an undergraduate on this very experiment, so I'm hoping that what I write below will not be complete rubbish. I'll quickly go through the double slit experiment, and then try to explain what's happening in the delayed choice quantum eraser and why it happens. Disclaimer: I know (or knew) the maths, but our professors did not go to great lengths explaining what 'really' happens, let alone what happens according to the MWI, so my explanation comes from my understanding of the maths and my admittedly more shoddy understanding of the MWI. So take the following with a grain of salt, and I would welcome comments and corrections from better informed people! (Also, the names for the different detectors in the delayed choice explanation are taken from the wikipedia article)

In the normal double slit experiment, letting through one photon at a time, the slit through which the photon went cannot be determined, as the world-state when the photon has landed could have come from either trajectory (so it's still within the same Everett branch), and so both paths of the photon were able to interfere, affecting where it landed. As more photons are sent through, we see evidence of this through the interference pattern created. However, if we measure which slit the photon goes through, the world states when the photon lands are different for each slit the photon went through (in one branch, a measurement exists which says it went through slit A, and in the other, through slit B). Because the end world states are different, the two branch-versions of the photon did not interfere with each other. I think of it like this: starting at a world state at point A, and ending at a world state at point B, if multiple paths of a photon could have led from A to B, then the different paths could interfere with each other. In the case where the slit the photon went through is known, the different paths could not both lead to the same world state (B), and so existed in separate Everett branches, unable to interfere with each other.
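(As a side note, here's a minimal sketch of the amplitude arithmetic I have in mind - my own notation, not taken from any course notes - for anyone who prefers to see the maths:)

```latex
% Paths indistinguishable (same final world state): amplitudes add,
% and the cross term is what produces the interference pattern
P_{\text{interference}} = |A_1 + A_2|^2 = |A_1|^2 + |A_2|^2 + 2\,\mathrm{Re}(A_1^* A_2)

% Which-path information recorded (different final world states):
% only the probabilities add, and the interference term vanishes
P_{\text{no interference}} = |A_1|^2 + |A_2|^2
```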

Now, with the delayed choice: the key is to resist the temptation to take the state "signal photon has landed, but idler photon has yet to land" as point B in my above analogy. If you did, you'd see that the world state can be reached by the photon going through either slit, and conclude that interference inside this single branch must have occurred. But time doesn't work that way, it turns out: the true final world states are those that take into account where the idler photon went. So in the world state where the idler photon landed at D1 or D2, that state could have been reached whichever slit the photon went through, and we end up seeing interference patterns both at D0 (for those photons) and at D1/D2, as we're still within a single branch, so to speak (when it comes to this limited interaction, that is). Whereas in the case where the idler photon reaches D3, that world state could only have been reached via one particular slit, and so the trajectory of the photon did not interfere with any other trajectory (since the other trajectory led to a world state where the idler photon was detected at D4 - a separate branch).

So going back to my point A/B analogy: imagine three world states A, B and C as points on a page, with STRAIGHT lines representing different hypothetical paths a photon could take. If two paths lead from point A to point B, the lines lie on top of each other, meaning a single branch, and the paths interfere. But if one of the paths led from A to point B and the other from A to point C, they would not be on top of each other - they go into different branches, and so the paths would not interfere.

Comment by MockTurtle on Zombies Redacted · 2016-07-04T12:17:58.118Z · LW · GW

I wonder what probability epiphenomenalists assign to the theory that they are themselves conscious, if they admit that belief in consciousness isn't caused by the experiences that consciousness brings.

The more I think about it, the more absurdly self-defeating it sounds, and I have trouble believing that ANYONE could hold such views after having thought about it for a few minutes. The only reason I continue to think about it is because it's very easy to believe that some people, no matter how an AI acted and for how long, would never believe the AI to be conscious. And that bothers me a lot, if it affects their moral stance on that AI.

Comment by MockTurtle on Open Thread May 16 - May 22, 2016 · 2016-05-18T10:00:16.753Z · LW · GW

I really enjoyed this, it was very well written! Lots of fun new concepts, and plenty of fun old ones being used well.

Looking forward to reading more! Even if there aren't too many new weird things in whatever follows, I really want to see where the story goes.

Comment by MockTurtle on Your transhuman copy is of questionable value to your meat self. · 2016-01-08T14:31:10.378Z · LW · GW

I very much like bringing these concepts of unambiguous past and ambiguous future to this problem.

As a pattern theorist, I agree that only memory (and the other parts of my brain's patterns which establish my values, personality, etc.) matter when it comes to who I am. If I were to wake up tomorrow with Britney Spears's memories, values, and personality, 'I' would have ceased to exist in any important sense, even if that brain still had the same 'consciousness' that Usul describes at the bottom of his post.

Once one links personal identity to one's memories, values and personality, the same kind of thinking about uploading/copying can be applied to future Everett branches of one's current self, and the unambiguous past/ambiguous future concepts are even more obviously important.

In a similar way to Usul not caring about his copy, one might 'not care' about a version of oneself in a different Everett branch, but it would still make sense to care about both future instances of yourself BEFORE the split happens, because you are uncertain which future instance will be 'you' (and of course, in the Everett branch case, you will experience being both, so I guess both will be 'you'). And to bring home the main point regarding uploading/copying, I would much prefer that an entity with my memories/values/personality continue to exist in at least one Everett branch, even if such entities cease existing in other branches.

Even though I don't have a strong belief in quantum multiverse theory, thinking about Everett branches helped me resolve the is-the-copy-really-me? dilemma for myself, at least. Of course, the main difference (for me) is that with Everett branches, the different versions of me will never interact. With copies of me existing in the same world, I would consider my copy as a maximally close kin and my most trusted ally (as you explain elsewhere in this thread).

Comment by MockTurtle on Your transhuman copy is of questionable value to your meat self. · 2016-01-08T13:37:26.442Z · LW · GW

Surely there is a difference in kind here. Deleting a copy of a person because it is no longer useful is very different from deleting the LAST existing copy of a person for any reason.

Comment by MockTurtle on Is simplicity truth indicative? · 2015-08-13T12:03:37.048Z · LW · GW

Does the fact that naive neural nets almost always fail when applied to out of sample data constitute a strong general argument against the anti-universalizing approach?

I think this demonstrates the problem rather well. In the end, the phenomenon you are trying to model has a level of complexity N. You want your model (neural network or theory or whatever) to have the same level of complexity - no more, no less. So the fact that naive neural nets fail on out of sample data for a given problem shows that the neural network did not reach sufficient complexity. That most naive neural networks fail shows that most problems have at least a bit more complexity than that embodied in the simplest neural networks.

As for how to approach the problem in view of all this... Consider this: for any particular problem of complexity N, there are N - 1 levels of complexity below it, which may fail to make accurate predictions due to oversimplification. And then there's an infinity of complexity levels above N, which may fail to make accurate predictions due to overfitting. So it makes sense to start with simple theories, add complexity as new observations arrive, and gradually improve the predictions we make, until we have the simplest theory that still produces low errors when predicting new observations.

I say low errors because to truly match all observations would certainly be overfitting! So there at the end we have the same problem again, where we trade off accuracy on current data against overfitting errors on future data... Simple (higher errors) versus complex (higher overfitting)... At the end of the process, only empiricism can help us find the theory that produces the lowest error on future data!
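(To make the trade-off concrete, here's a minimal sketch - entirely made-up data, with polynomial degree standing in for model complexity - showing how an under-complex model gets high errors everywhere while an over-complex one gets low errors on current data but tends to do worse on new observations:)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical phenomenon of fixed complexity: a cubic, observed with noise
def observe(x):
    return 0.5 * x**3 - x + rng.normal(scale=0.3, size=x.shape)

x_train = np.linspace(-2, 2, 20)   # current observations
y_train = observe(x_train)
x_new = np.linspace(-2, 2, 200)    # future observations
y_new = observe(x_new)

for degree in (1, 3, 12):  # too simple, about right, too complex
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    new_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree:2d}: error on current data {train_err:.3f}, on new data {new_err:.3f}")
```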

Comment by MockTurtle on Is simplicity truth indicative? · 2015-08-11T15:07:43.150Z · LW · GW

The first paper he mentions in the machine learning section can be found here, if you'd like to take a look: Murphy and Pazzani 1994. I had more trouble finding the others he briefly mentions, and so relied on his summary for those.

As for the 'complexity of phenomena rather than theories' bit I was talking about, your reminder of Solomonoff induction has made me change my mind, and perhaps we can talk about 'complexity' when it comes to the phenomena themselves after all.

My initial mindset (reworded with Solomonoff induction in mind) was this: Given an algorithm (phenomenon) and the data it generates (observations), we are trying to come up with algorithms (theories) that create the same set of data. In that situation, Occam's Razor is saying "the shorter the algorithm you create which generates the data, the more likely it is to be the same as the original data-generating algorithm". So, as I said before, the theories are judged on their complexity. But the essay is saying, "Given a set of observations, there are many algorithms that could have originally generated it. Some algorithms are simpler than others, but nature does not necessarily choose the simplest algorithm that could generate those observations."
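(For reference, the weighting I have in mind from Solomonoff induction - my paraphrase, not something quoted from the essay - is roughly:)

```latex
% Prior probability of a set of observations x: sum over all programs p
% that produce x on a universal machine U, each weighted by its length |p| in bits,
% so shorter data-generating algorithms dominate the prior
M(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}
```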

So then it would follow that when searching for a theory, the simplest ones will not always be the correct ones, since the observation-generating phenomenon was not chosen by nature to necessarily be the simplest phenomenon that could generate those observations. I think that may be what the essay is really getting at.

Someone please correct me if I'm wrong, but isn't the above only really valid when our observations are incomplete? Intuitively, it would seem to me that given the FULL set of possible observations from a phenomenon, if you believe any theory but the simplest one that generates all of them, surely you're making demonstrably unnecessary assumptions? The only reason you'd ever doubt the simplest theory is if you think there are extra observations you could make which would warrant extra assumptions and a more complex theory...

Comment by MockTurtle on Is simplicity truth indicative? · 2015-08-06T09:38:25.411Z · LW · GW

Looking at the machine learning section of the essay, and the paper it mentions, I believe the author to be making a bit too strong a claim based on the data. When he says:

"In some cases the simpler hypotheses were not the best predictors of the out-of-sample data. This is evidence that on real world data series and formal models simplicity is not necessarily truth-indicative."

... he fails to take into account that many more of the complex hypotheses get high error rates than the simpler hypotheses do (even though a few of the more complex hypotheses achieve the smallest error rates in some cases). That still means that, given a whole range of hypotheses, you're more likely to end up with high error rates by choosing a single complex one than a single simple one. It sounds like he is saying Occam's Razor is not useful merely because the simplest hypothesis isn't ALWAYS the most likely to be true.

Similarly, when he says:

"In a following study on artificial data generated by an ideal fixed 'answer', (Murphy 1995), it was found that a simplicity bias was useful, but only when the 'answer' was also simple. If the answer was complex a bias towards complexity aided the search."

This is not actually relevant to the question of whether, for a given phenomenon, simple answers are more likely to be correct than complex ones. Saying "it turns out you're more likely to be wrong with a simple hypothesis when the true answer is complex" does not affect, one way or the other, the claim that simple answers may be more common than complex answers - and thus that simple hypotheses may be, all else being equal, more likely to be true than complex hypotheses when both match the observations.

That being said, I am sympathetic to the author's general argument. While complexity (elaboration) in human-devised theories tends to just mean more things which can be wrong when further observations are made, this does not necessarily tell us whether natural phenomena are generally 'simple' or not. If you observe only a small (not perfectly representative) fraction of a phenomenon, then a simple hypothesis produced at that point is likely to be proven wrong in the end. I'm not sure if this is really an interesting thing to say, however - the actual phenomena themselves are neither really simple nor complex. They have a single true explanation. It's only when humans are trying to establish that explanation from limited observations that simplicity and complexity come into it.

Comment by MockTurtle on Crazy Ideas Thread · 2015-07-10T10:45:03.119Z · LW · GW

If the many-worlds interpretation is truly how the world is, and if having multiple copies of myself as an upload is more valuable than just having one copy on more powerful (or distributed) hardware, then...

I could bid for a job asking for a price which could be adequate if I were working by myself. I could create N copies of myself to help complete the job. Then, assuming there's no easy way to meld my copies back into one, I could create a simple quantum lottery that deletes all but one copy.

Each copy is guaranteed to live on in its own Everett branch, able to enjoy the full reward from completing the job.

Comment by MockTurtle on Debunking Fallacies in the Theory of AI Motivation · 2015-05-07T14:05:53.579Z · LW · GW

Firstly, thank you for creating this well-written and thoughtful post. I have a question, but I would like to start by summarising the article. My initial draft of the summary was too verbose for a comment, so I condensed it further - I hope I have still captured the main essence of the text, despite this rather extreme summarisation. Please let me know if I have misinterpreted anything.

People who predict doomsday scenarios are making one main assumption: that once the AI reaches a conclusion or plan, it will act as if that conclusion is certain, EVEN if it has assigned a measure of probability to that conclusion or plan. That is, it will mindlessly carry out the action, even if its human instructors say it is the wrong action, and even if the checking code in place to double-check actions says that it's wrong (because the instructors say it's wrong).

You state that this all comes down to a 'Doctrine of Infallibility': if an AI cannot reassess or in some way take into account the uncertainty of its conclusions, then it might act as the doomsayers fear. But you also state that an AI containing such a Doctrine would be unable to be very intelligent, because it would contain a contradiction, a logical inconsistency: it would both have a full understanding of the uncertainty in its knowledge and reasoning methods, AND still be programmed to act as if it were certain of its conclusions. Such an AI would never reach human-level intelligence, let alone be intelligent enough to be a threat to us.

Any assumption that an AGI would take our commands literally and instantly implement the naive (and catastrophic) actions to fulfil them is based on the assumption that the AI is sticking to the Doctrine of Infallibility, and is thus flawed.

Now, my question to you: do you think it would be possible in theory to create an AI which does not abide by the Doctrine of Infallibility (something like a Swarm Relaxation Intelligence), yet is STILL programmed to perform actions purely based on the consideration of all the types of uncertainty in its knowledge and reasoning methods? So for example, it considers all relevant facts, reaches a plan to fulfil its goal, continues to investigate relevant facts until its probability that this is the best plan reaches a certain threshold, and implements it? I know that in practice, we would have 'checking code' which makes it pass the plan by humans before doing it, in order for us to assess the sensibleness and safety of the plan. But in theory, would you consider it possible for the process to work without human input, maybe once the AI has reached a certain level of intelligence?

Comment by MockTurtle on Stupid Questions May 2015 · 2015-05-05T13:49:43.471Z · LW · GW

From these examples, I might guess that these mistakes fall into a variety of already existing categories, unlike something like the typical mind fallacy, which tends to come down to just forgetting that other people may have different information, aims and thought patterns.

Assuming you're different from others, and making systematic mistakes caused by this misconception, could be attributed to anything from low self-esteem (which is more to do with judgments of one's own mind, not necessarily a difference between one's mind and other people's), to the Fundamental Attribution Error (which could lead you to think people are different from you by failing to realise that you might behave the same way if you were in the same situation as they are, due to your current ignorance of what that situation is). Also, I don't know if there is a name for this fallacy, but regarding your second example, it sounds like the kind of mistake one makes when one forgets that other people are agents too. When all you can observe is your own mind, and the internal causes from your side which contribute to something in the outside world, it can be easy to forget to consider the other brains contributing to it. So, again, I'm not sure I would really put it down to something as precise as 'assuming one's mind is different from that of other people'.

(Edit: The top comment in this post by Yvain seems to expand a little on what you're talking about.)

Comment by MockTurtle on Stupid Questions May 2015 · 2015-05-05T10:09:16.340Z · LW · GW

I would say that it has to do with the consequences of each mistake. When you subconsciously assume that others think the way you do, you might see someone's action and immediately assume they have done it for the reason you would have done it (or, if you can't conceive of a reason you would do it, you might assume they are stupid or insane).

On the other hand, assuming people's minds differ from yours may not lead to particular assumptions in the same way. When you see someone do something, it doesn't push you towards thinking that the person couldn't possibly have done it for a reason you would have. I don't think it will have that same kind of effect on your subconscious assumptions. I might be missing something, though. How do you see the atypical mind fallacy affecting your behaviour/thoughts in general?

Comment by MockTurtle on Why I Reject the Correspondence Theory of Truth · 2015-03-24T16:48:51.407Z · LW · GW

I think I may be a little confused about your exact reason to reject the correspondence theory of truth. From my reading, it seems to me that you reject it because it cannot justify any truth claim, since any attempt to do so is simply comparing one model to another - since we have no unmediated access to 'reality'. Instead, you seem to claim that pragmatism is more justified when claiming that something is true, using something along the lines of "it's true if it works in helping me achieve my goals".

There are two things that confuse me:

1) I don't see why the inability to justify a truth statement based on the correspondence theory would cause you to reject that theory as a valid definition of truth. In your post, you seem to accept that there IS a world which exists independently of us, in some way or other. If I say, "I believe that 'this snow is white' is true, which is to say, I believe that there exists a reality independent from my model of it where such objects in some way exist and are behaving in a manner corresponding to my statement"... That is what I understand by the correspondence theory of truth, so even if I cannot ever prove it (this could all be hallucinations for all I know), it still is a meaningful statement, surely? At least philosophically speaking? To me, there is a difference between 'if the statement that snow is white is true, it is because I am successful in my actions if I act as if snow is white' and 'if the statement that snow is white is true, it is because there exists an actual reality (which I have no unmediated access to), independent of my thoughts and senses, which has corresponding objects acting in corresponding ways to the statement, which somehow affect my observations'. When people argue about what truth really means, I don't see how it is only meaningful to advocate for the former definition over the latter, even if the latter is admittedly not particularly useful in a non-philosophical way.

2) Isn't acting on the world to achieve your goals a type of experiment, establishing correspondence between one model (if I do this, I will achieve my goal) and another model (my model of reality as relayed by the success or failure of my actions)? I don't see how, just because there is a goal other than finding out about an underlying reality, it would be any more correct or meaningful to say that this experiment reveals more truth than experiments whose only goal is to try to get the least mediated view of reality possible.

As far as I can see, if we assume even the smallest probability that our actions (whether they be pragmatic-goal-achieving or pure observation) are affected by some underlying, unmediated reality which we have no direct access to, then the more such actions we take, the more is revealed about this thing which actually affects our model.

Comment by MockTurtle on Open thread, Dec. 1 - Dec. 7, 2014 · 2014-12-02T10:31:29.482Z · LW · GW

Thinking about it this way also makes me realise how weird it feels to have different preferences for myself as opposed to other people. It feels obvious to me that I would prefer to have other humans not cease to exist in the ways you described. And yet for myself, because of the lack of a personal utility function when I'm unconscious, it seems like the answer could be different - if I cease to exist, others might care, but I won't (at the time!).

Maybe one way to think about it more realistically is not to focus on what my preferences will be then (since I won't exist), but on what my preferences are now, and somehow extend that into the future regardless of the existence of a personal utility function at that future time...

Thanks for the help!

Comment by MockTurtle on Open thread, Dec. 1 - Dec. 7, 2014 · 2014-12-02T10:22:57.186Z · LW · GW

I think you've helped me see that I'm even more confused than I realised! It's true that I can't go down the road of 'if I do not currently care about something, does it matter?' since this applies when I am awake as well. I'm still not sure how to resolve this, though. Do I say to myself 'the thing I care about continues to exist (or potentially exist) even when I do not actively care about it, and I should therefore act right now as if I will still care about it even when I stop due to inattention/unconsciousness'?

I think that seems like a pretty solid thing to think, and is useful, but when I say it to myself right now, it doesn't feel quite right. For now I'll meditate on it and see if I can internalise that message. Thanks for the help!

Comment by MockTurtle on Open thread, Dec. 1 - Dec. 7, 2014 · 2014-12-02T10:12:30.988Z · LW · GW

I remember going through a similar change in my sense of self after reading through particular sections of the sequences - specifically thinking that logically, I have to identify with spatially (or temporally) separated 'copies' of me. Unfortunately it doesn't seem to help me in quite the same way it helps you deal with this dilemma. To me, it seems that if I am willing to press a button that will destroy me here and recreate me at my desired destination (which I believe I would be willing to do), the question of 'what if the teleporter malfunctions and you don't get recreated at your destination? Is that a bad thing?' is almost without meaning, as there would no longer be a 'me' to evaluate the utility of such an event. I guess the core confusion is that I find it hard to evaluate states of the universe where I am not conscious.

As pointed out by Richard, this is probably even more absurd than I realise, as I am not 'conscious' of all my desires at all times, and thus I cannot go on this road of 'if I do not currently care about something, does it matter?'. I have to reflect on this some more and see if I can internalise a more useful sense of what matters and when.

Thanks a lot for the fiction examples, I hope to read them and see if the ideas therein cause me to have one of those 'click' moments...

Comment by MockTurtle on Open thread, Dec. 1 - Dec. 7, 2014 · 2014-12-01T11:09:03.015Z · LW · GW

How do people who sign up to cryonics, or want to sign up to cryonics, get over the fact that if they died, there would no longer be a mind there to care about being revived at a later date? I don't know how much of it is morbid rationalisation on my part just because signing up to cryonics in the UK seems not quite as reliable/easy as in the US somehow, but it still seems like a real issue to me.

Obviously, when I'm awake, I enjoy life, and want to keep enjoying life. I make plans for tomorrow, and want to be alive tomorrow, despite the fact that in between, there will be a time (during sleep) when I will no longer care about being alive tomorrow. But if I were killed in my sleep, at no point would I be upset - I would be unaware of it beforehand, and my mind would no longer be active to care about anything afterwards.

I'm definitely confused about this. I think the central confusion is something like: why should I be willing to spend effort and money at time A to ensure I am alive at time C, when I know that I will not care at all about this at an intermediate time B?

I'm pretty sure I'd be willing to pay a certain amount of money every evening to lower some artificial probability of being killed while I slept. So why am I not similarly willing to pay a certain amount to increase the chance I will awaken from the Dreamless Sleep? Does anyone else think about this before signing up for cryonics?

Comment by MockTurtle on Stupid Questions (10/27/2014) · 2014-11-20T11:43:47.801Z · LW · GW

How do people who sign up to cryonics, or want to sign up to cryonics, get over the fact that if they died, there would no longer be a mind there to care about being revived at a later date? I don't know how much of it is morbid rationalisation on my part just because signing up to cryonics in the UK seems not quite as reliable/easy as in the US somehow, but it still seems like a real issue to me.

Obviously, when I'm awake, I enjoy life, and want to keep enjoying life. I make plans for tomorrow, and want to be alive tomorrow, despite the fact that in between, there will be a time (during sleep) when I will no longer care about being alive tomorrow. But if I were killed in my sleep, at no point would I be upset - I would be unaware of it beforehand, and my mind would no longer be active to care about anything afterwards.

I'm definitely confused about this. I think the central confusion is something like: why should I be willing to spend effort and money at time A to ensure I am alive at time C, when I know that I will not care at all about this at an intermediate time B?

I'm pretty sure I'd be willing to pay a certain amount of money every evening to lower some artificial probability of being killed while I slept. So why am I not similarly willing to pay a certain amount to increase the chance I will awaken from the Dreamless Sleep? Does anyone else think about this before signing up for cryonics?

Comment by MockTurtle on Bayes Academy: Development report 1 · 2014-11-20T10:33:06.144Z · LW · GW

This is a really brilliant idea. Somehow I feel that using the Bayesian network system on simple trivial things at first (like the student encounter and the monster fight) is great for getting the player into the spirit of using evidence to update on particular beliefs, but I can imagine that as you go further with the game, the system would be applied to more and more 'big picture' mysteries of the story itself, such as where the main character's brother is.

Whenever I play conversation-based adventure games or mystery-solving games such as Phoenix Wright, I can see how the player is intended to guess certain things from clues and ask the right questions to gain more crucial information. Having the Bayesian network be explicitly represented in the game makes it a lot simpler in some ways (you don't have to do all the updating in your head), but it also introduces a different kind of challenge: the player can be shot down if ve tries to guess the answer to the mystery right away with too little data, and the game becomes much more about which of the available pieces of evidence would provide the most information. A growing vision in my mind of what a game like this would look like is making me quite excited to play it!
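(Just to illustrate the kind of updating I imagine the network doing for the player - this is a toy sketch of my own, not anything from the actual game:)

```python
# Toy Bayes-rule update: revising a belief about a single hypothesis
# (e.g. "this character is the culprit") as individual clues come in.
def update(prior, p_clue_if_true, p_clue_if_false):
    """Return P(hypothesis | clue) given the prior and the two likelihoods."""
    numerator = p_clue_if_true * prior
    return numerator / (numerator + p_clue_if_false * (1 - prior))

belief = 0.1  # prior: 10% that the hypothesis is true
clues = [(0.8, 0.3), (0.6, 0.5), (0.9, 0.2)]  # (P(clue|true), P(clue|false)) per clue
for p_true, p_false in clues:
    belief = update(belief, p_true, p_false)
    print(f"belief after clue: {belief:.2f}")
```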

But I think I'm getting a little carried away. The game as an educational tool would probably be quite different from a game which tries to make mystery-solving a challenge. Getting the balance right, to make it fun, might still be pretty challenging, I think.

As a side note, it'd be pretty awesome to use this system to show particular logical fallacies that people (either other characters, or the main character before applying proper probability theory) in the game could make.

Comment by MockTurtle on Any LWers in UK West Midlands? · 2012-01-19T13:24:27.024Z · LW · GW

I know this post is a little old now, but I found myself wondering the same thing (and was a little disappointed that I am the only one to comment) and found this. I must say that it's hard to find anyone in my social groups who has heard of LessWrong or even just cares about rationality, so it'd be great to meet up with other LWers! I'm currently attending the University of Birmingham, and live near the university.