Comments

Comment by Untermensch on [Link] Is the Endowment Effect Real? · 2013-02-27T03:22:35.368Z · LW · GW

If I am given a thing, like a mug, I now have one more mug than I had before. My need for mugs has therefore decreased. If I am to sell the mug, I must examine how much I will need it after it is gone and place a price on that loss of utility. If I am buying a mug, I must estimate how much I will need it once I have it and place a price on that increase in utility. If the experiment is not worded carefully, then the thought process could go along the lines of...

I have 2 mugs, and often take a tea break with my mate Steve. To sell one of those mugs would make me lose out on this activity... $10. I don't hugely need another mug unless it breaks, but it is handy to have a spare... $2.
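To put rough numbers on that asymmetry, here is a minimal sketch using a hypothetical diminishing utility for mugs; the utility figures are made up, chosen only so that the asking and buying prices come out at the $10 and $2 above.

```python
# Hypothetical utility (in dollars) of owning n mugs; the second mug is
# what makes the tea break with Steve possible, so it adds the most value.
utility = {0: 0, 1: 12, 2: 22, 3: 24}

def willingness_to_accept(n):
    """Minimum selling price for one mug when you currently own n."""
    return utility[n] - utility[n - 1]

def willingness_to_pay(n):
    """Maximum buying price for one more mug when you currently own n."""
    return utility[n + 1] - utility[n]

print(willingness_to_accept(2))  # 10 -- selling breaks up the tea break
print(willingness_to_pay(2))     # 2  -- a third mug is only a spare
```

On these made-up numbers the asking price exceeds the buying price purely because of diminishing marginal utility, with no irrationality involved.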

In real life people will attribute more value to their own stuff than to other stuff, since in general they would not have acquired it if they did not value it more than the cost of getting it. It is not a failure of rationality to want something more than what you paid for it, and while it is a failure of rationality to overvalue something just because you own it, it is not a failure of rationality to ask a higher price first in case the person you are selling to is willing to pay more.

It would be difficult to adjust for these factors in designing an experiment.

Comment by Untermensch on Wanted: "The AIs will need humans" arguments · 2012-06-14T14:35:23.307Z · LW · GW

I have a couple of questions about this subject...

Does it still count if the AI "believes" that it needs humans when it, in fact, does not?

For example, does it count if you code into the AI the belief that it is being run in a "virtual sandbox", watched by a smarter "overseer", and that if it takes out the human race in any way it will be shut down/tortured/assigned a hugely negative utility by said overseer?

Just because an AI needs humans to exist, does that really mean that it won't kill them anyway?

This argument seems to be contingent on the AI wishing to live. Wishing to live is not a necessary property of intelligence. If an AI was smarter than anything else out there but depended on lesser, and demonstrably irrational, beings for its continued existence, this does not mean that it would want to "live" that way forever. It could either want to gain independence or cease to exist, neither of which is necessarily healthy for its "supporting units".

Or it might not care either way whether it lives or dies, because stopping all work on the planet matters more to it, say for slowing the entropic death of the universe.

It may be the case that an AI does not want to live reliant on "lesser beings" and sees the only way of ensuring its own permanent destruction as destroying any being capable of creating it again, along with any future possibility of such life evolving. It may decide to blow up the universe to make extra sure of that.

Come to think of it a suicidal AI could be a pretty big problem...

Comment by Untermensch on Newcomb's Problem and Regret of Rationality · 2012-05-11T17:15:29.090Z · LW · GW

Sorry, I am having difficulty explaining, as I am not sure what it is I am trying to get across; I lack the words. I am having trouble with the use of the word "predict", as it could imply any number of methods of prediction, and some of those methods change the answer you should give.

For example, if it was predicting by the colour of the player's shoes, it may have had a micron over a 50% chance of being right and just happened to have been correct the 100 times you heard of. In that case one should take A and B. If, on the other hand, it was a visitor from a higher matrix and got its answer by simulating you perfectly and at fast forward, then whatever you want to take is the best option, and in my case that is B. If it is breaking causality by looking through a window into the future, then take box B. My answers are conditional on information I do not have. I am having trouble mentally modelling this situation without assuming one of these cases to be true.

Comment by Untermensch on Newcomb's Problem and Regret of Rationality · 2012-05-10T21:56:44.960Z · LW · GW

Thank you. Depersonalising the question makes it easier for me to think about. If "do you take one box or two" becomes "should one take one box or two"... I am still confused. I'm confident that just box B should be taken, but I think I need information that is implied to exist but is not presented in the problem to give a correct answer, namely the nature of the predictions Omega has made.

With the problem as stated, I do not see how one could tell whether Omega got lucky 100 times with a flawed system, or whether it follows a deterministic or causality-breaking process.

One thing I would say is that by picking just B, the most you could lose is $1,000, if B turns out to be empty. By picking A and B, the most you could gain over just B is $1,000. Is it worth risking a reasonable chance at $1,000,000 for a $1,000 gain, on the hope of beating a computer at a game 100 people have failed to beat it at, especially a game where you more or less axiomatically do not understand how it is playing?
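To make that trade-off concrete, here is a rough expected-value sketch. It assumes the predictor's accuracy is a single number p, the same whether you one-box or two-box; that modelling assumption is mine, not part of the problem statement.

```python
# Expected value of each choice as a function of the predictor's
# assumed accuracy p. Box A always holds $1,000; box B holds
# $1,000,000 only if the predictor expected you to take B alone.

def ev_one_box(p):
    # B is full with probability p (the predictor foresaw one-boxing).
    return p * 1_000_000

def ev_two_box(p):
    # You always get A's $1,000; B is full only if the predictor
    # wrongly expected one-boxing, i.e. with probability 1 - p.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.5005, 0.9, 0.999):
    print(f"p={p}: one box ${ev_one_box(p):,.0f}, "
          f"two boxes ${ev_two_box(p):,.0f}")
```

On those assumptions the two choices break even at p = 0.5005, which is why a predictor that is barely better than chance and a perfect simulator point to different answers.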

Comment by Untermensch on Newcomb's Problem and Regret of Rationality · 2012-05-10T15:54:13.539Z · LW · GW

Thanks, that does help a little, though I should say that I am pretty sure I hold a number of irrational beliefs that I have yet to excise. Assuming that Omega literally implanted the idea into my head is a different thought experiment from Omega having turned out to be predicting, which is different again from Omega merely saying that it predicted the result, and so on. Until I know how and why I know it is predicting the result, I am not sure how I would act in the real case. How Omega told me that I was only allowed to pick boxes A and B or just B may or may not be helpful, but either way it is not as important as how I know it is predicting.

Edit: There seem to be a number of thought experiments in which I hold an irrational belief yet can mentally model the situation more accurately, like how I might behave if I thought I was the King of England. Now I am wondering what it is about this specific problem that gives me trouble.

Comment by Untermensch on Newcomb's Problem and Regret of Rationality · 2012-05-10T07:37:33.007Z · LW · GW

The difficulty I am having here is not so much that the stated nature of the problem is not real as that it asks one to assume one is irrational. With a .999999999c spaceship, it is not irrational to assume one is in a trolley on a spaceship if one is in fact in a trolley on a spaceship. There is not enough information in the Omega puzzle: it assumes you, the person it drops the boxes in front of, know that Omega is predicting, but does not tell you how you know that. As the mental state of 'knowing it is predicting' is fundamental to the puzzle, not knowing how one came to that conclusion asks you to be a magical thinker for the purposes of the puzzle. I believe this may at least partially explain why there seems to be a lack of consensus.

I am also suspicious of the ambiguous nature of the word "predict", but am having trouble phrasing the issue. Omega may be using astrology and happen to have been right each of the 100 times, or it may be literally looking forward in time. Without knowing which, how can one make the best choice?

All that said, taking just B is my plan, as with $1,000,000 I can afford to lose $1,000.

Comment by Untermensch on Newcomb's Problem and Regret of Rationality · 2012-05-06T15:05:03.870Z · LW · GW

Sorry, I'm new here, and I am having trouble with the idea that anyone would consider taking both boxes in a real-world situation. How would this puzzle be modelled differently, and how would it look different, if it were Penn and Teller flying Omega?

If Penn and Teller were flying Omega, they would have been able to produce exactly the same results as seen, without violating causality, travelling in time, or perfectly predicting people, simply by cheating and emptying the box after you choose to take both.

Given that "it's cheating" is a significantly more rational idea than "it's smart enough to predict 100 people" in terms of simplicity and results seen, why not go with that as a rational reason to pick just box B? The only reason one would take both is if it proved it was not cheating, how it could do that without also convincing me of its predictive powers I don't know, and once convinced of is predictive powers I would have to take Box B.

So taking both boxes only makes sense if you know it is not cheating and know it can be wrong. I notice I am confused: how can you both know it is not cheating and not know that it is correct in its prediction?

I think the reason this puzzle begets irrationality is that one of the fundamental things you must do to parse it is itself irrational, namely believing that the machine is not cheating, given the alternatives and no further information.

Comment by Untermensch on [SEQ RERUN] The Dilemma: Science or Bayes? · 2012-05-06T02:51:51.052Z · LW · GW

I agree with the terms. For the sake of explanation, by "magical thinker" I was thinking along the lines of young children with no science training, or people who have either no knowledge of or no interest in the scientific method. Ancient Greek philosophers could come under this label if they never experimented to test their ideas. The essence is that they theorise without testing their theories.

In terms of the task, my first idea was the marshmallow challenge from a TED lecture: "make the highest tower you can that will support a marshmallow on top, out of dry spaghetti, a yard of string, and a yard of tape."

Essentially, it is a situation where the results are clearly comparable, but the way to get the best result is hard to prove. So far triangles are the way to go, but there may be a better way that nobody has tried yet. If the task has a time limit, is it worth using scientific or Bayesian principles to design the tower, or is it better to just start taping some pasta?

Comment by Untermensch on Why do people ____? · 2012-05-06T02:23:25.437Z · LW · GW

Good point, I do not, but I find it strange that people, myself included, practise at enjoying something when there are plenty of things that are enjoyable from the start, especially when starting on an acquired taste is often quite uncomfortable. I salute the mind that looked at a tobacco plant, smoked it, coughed its lungs out, and then kept doing it till it felt good.

Comment by Untermensch on Why do people ____? · 2012-05-05T12:18:16.286Z · LW · GW

Why do people take the time to develop "acquired tastes"? "That was an unpleasant experience" somehow becomes "I will keep doing it until I like it."

My guess is social conditioning, but then how did it become popular enough for that to be a factor?

Comment by Untermensch on [SEQ RERUN] The Dilemma: Science or Bayes? · 2012-05-05T11:58:36.283Z · LW · GW

Well said. In considering your response I notice that a process P, as part of its cost E, has room to include the cost of learning the process if necessary, which was something that had been concerning me.

I am now considering a more complicated case.

You are in a team of people of which you are not the team leader. Some of the team are scientists, some are magical thinkers, and you are the only Bayesian.

Given an arbitrary task which can be better optimised using Bayesian thinking, is there a way of applying a "Bayes patch" to the work of your teammates so that they can benefit from the fruits of your Bayesian thinking without knowing it themselves?

I suppose I am trying to ask how easily or how well Bayes can be applied to undirected work by non-Bayesian operators. If I were a scientist in a group full of magical thinkers, all of us given a task, I do not know what they would come up with, but I reckon I would be able to make some scientific use of the information they generate. Is the same true for Bayes?

Comment by Untermensch on [SEQ RERUN] The Dilemma: Science or Bayes? · 2012-05-04T14:00:23.019Z · LW · GW

Science is simple enough that you can sic a bunch of people on a problem with a crib sheet and an "I can do science, me" attitude, and get a good enough answer early. The mental toolkit for applying Bayes is harder to give to people. I am right at the beginning, approaching this from a mentally lazy background with a little psychology and engineering; the first time I saw the word Bayes was in a certain Harry Potter fanfic a week or so ago. I failed the insightful tests in the early Sequences, caught myself noticing I was confused and not doing anything about it, and then failed all over again in the next set of insightful tests. I have a way to go.

The time it would take me to get an "I can do Bayes, me" attitude, even with a crib sheet, could be spent solving a bunch of other problems.

If the choice is between Science and Bayes, which at my low level of training I suspect is a false choice, then at the moment I would go with Science, because I am better at it than at Bayes. Likewise, I type QWERTY rather than Dvorak, because I can type faster on QWERTY even though Dvorak is (allegedly) better.

Given that each person has finite problem-solving time at the moment, an argument could be made for applying Science to problems, as it is easier to teach. That being said, "I notice that I am confused" would have saved me a lot of trouble if I had heard of it earlier.

Comment by Untermensch on Focus Your Uncertainty · 2012-05-02T12:35:09.648Z · LW · GW

Edit - I didn't read the premises correctly. I missed the importance of the bit "Your mind keeps drifting to the explanations you use on television, of why each event plausibly fits your market theory. But it rapidly becomes clear that plausibility can't help you here—all three events are plausible. Fittability to your pet market theory doesn't tell you how to divide your time. There's an uncrossable gap between your 100 minutes of time, which are conserved; versus your ability to explain how an outcome fits your theory, which is unlimited."

The time one spends preparing excuses is only loosely, and also inversely, linked to how easy the event is to explain. When you are unsure which outcome you will need to excuse, what you are looking for is not for the "most likely to be needed" excuse to be "really good", but for whichever excuse you do need to be "as good as possible".

Even if your pet theory is so useless as to be utterly general, it should still be possible to estimate which event is easiest to explain compared with the others, and that is where the least time should be spent. Failing that, if the events are all equally easy to explain with your pet theory, then the time taken trying to work out where to spend your time would be better spent writing whichever explanation, up or down, you think the more likely of the two, until it is as good as you can get it in a little under half the time; then doing the same for the other; then spending a few minutes at the end saying how these cancel out if the market stays the same or similar.

Better still would be to write a long list of excuses with predicted up and down values, and to use them to get a range of levels of upness and downness that you can combine in any number to excuse any specific level of up or downness. "Normally the reserve announcement would have had a huge upwards effect on the market, but because it was rainy today and baked beans are on sale in Wal-Mart, this is reflected in the only slight increase seen when looking at the market as a whole." This way you can even justify trends right up until the moment of truth: "Earlier in the day the market was dropping due to the anticipated reserve announcement, but once it was discovered that Bolivia was experiencing solar flares, this slowed the downward trend, with the floating of shares in Greenpeace flinging the market back up again."

Let's use something more 'predictable' for illustrative purposes. You are a physics teacher in 1960s/70s America, and some serious-looking people in suits turn up at your door: their pet scientist and all his notes were disappeared by the Reds, and your country needs you.

After the time wasted arguing that it is insane to even ask you to do this, you have both a gun to your head and 100 minutes left to come up with excuses for why the "Hammer and Feather on the Moon" experiment went any of the three ways.* Given that you have good reason to believe that the Hammer and Feather experiment may not go as you predict, spending 99.99% of your time on the obvious answer is a very unwise use of your time resource. In fact it may be wiser to spend one minute on the obvious answer in order to have more time to try to excuse the feather hitting first.

*It turns out that the president had been told Russian telekinetics were going to mess with the results of the experiment to make Americans believe the moon landings had been faked. Or, if you prefer, perhaps they were worried that the props department in Area 51 hadn't got the tensions on the invisible wires right yet...