Comments

Comment by kybernetikos on On Terminal Goals and Virtue Ethics · 2014-06-19T00:08:31.909Z · LW · GW

And the Archangel has decided to take some general principles (which are rules) and implant them in the habit and instinct of the children. I suppose you could argue that the system implanted is a deontological one from the Archangel's point of view, and merely instinctual behaviour from the children's point of view. I'd still feel that calling instinctual behaviour 'virtue ethics' is a bit strange.

Comment by kybernetikos on [Paper] On the 'Simulation Argument' and Selective Scepticism · 2014-06-19T00:02:28.652Z · LW · GW

It seems as if your argument rests on the assertion that my access to facts about my physical condition is at least as good as my access to facts about the limitations of computation/simulation. You say 'physical limitations', but I'm not sure why the physical limitations of my universe are particularly relevant - what we care about is whether it's reasonable for there to be many simulations of someone like me over time or not.

I don't think this assertion is correct. I can make a statement about the limits of computation/simulation - i.e. that there is at least enough simulation power in the universe to simulate me and everything I am aware of - which is true whether I am in a simulation or in a top-level universe, or even whether I believe in matter and physics at all.

I believe that this assertion, that the top-level universe contains at least enough simulation power to simulate someone like myself and everything of which they are aware, is something that I have better evidence for than the assertion that I have physical hands.

Have I misunderstood the argument, or do you disagree that I have better evidence for a minimum bound to simulation power than for any specific physical attribute?

Comment by kybernetikos on On Terminal Goals and Virtue Ethics · 2014-06-18T23:12:47.790Z · LW · GW

That's very interesting, but isn't the level-1 thinking closer to deontological ethics than virtue ethics, since it is based on rules rather than on the character of the moral agent?

Comment by kybernetikos on Normal Ending: Last Tears (6/8) · 2013-07-16T21:51:37.460Z · LW · GW

The point is that they are the kind of species to deal with situations like this in a more or less fair-minded way. That will stand them in good stead in future difficult negotiations with other aliens.

Comment by kybernetikos on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-08T12:17:14.742Z · LW · GW

It's likely that anything around today has a huge impact on the state of the future universe. As I understood the article, the leverage penalty requires considering how unique your opportunity to have the impact would be, too. Archimedes had a massive impact, but there have also been a massive number of people through history who would have had the chance to come up with the same theories had they not already been discovered, so you have to offset Archimedes' leverage penalty by the fact that he wasn't uniquely capable of having that leverage.

Comment by kybernetikos on [Poll] Less Wrong and Mainstream Philosophy: How Different are We? · 2013-05-02T15:24:22.200Z · LW · GW

I tend to think death, but then I'm not sure that we genuinely survive from one second to another.

I don't have a good way to meaningfully define the kind of continuity that most people intuitively think we have, and so I conclude that it could easily just be an illusion.

Comment by kybernetikos on Probability is in the Mind · 2012-11-21T22:42:03.214Z · LW · GW

Thinking of probabilities as levels of uncertainty became very obvious to me when thinking about the Monty Hall problem. After the host has revealed that one of the three doors has a booby prize behind it, you're left with two doors, with a good prize behind one of them.

If someone walks into the room at that stage, and you tell them that there's a good prize behind one door and a booby prize behind another, they will say that it's a 50/50 chance of selecting the door with the prize behind it. They're right for themselves; however, the person who had been in the room originally and selected a door knows more, and therefore can assign different probabilities - i.e. 1/3 for the door they'd selected and 2/3 for the other door.

If you thought that the probabilities were 'out there' rather than descriptions of the state of knowledge of the individuals, you'd be very confused about how the probability of choosing correctly could be 2/3 and 1/2 at the same time.

The most natural way of thinking about the Monty Hall problem is as a way for part of the information in the host's head to be communicated to the contestant.
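
As a minimal sketch of this (my own illustration, assuming the standard three-door setup; the trial count is arbitrary), a short simulation reproduces both numbers directly:

```python
import random

def monty_hall_trial():
    """One round: the contestant picks a door, the host opens a different losing door."""
    prize = random.randrange(3)
    pick = random.randrange(3)
    # The host opens a door that is neither the contestant's pick nor the prize.
    opened = random.choice([d for d in range(3) if d != pick and d != prize])
    # Switching means taking the one remaining unopened door.
    switched = next(d for d in range(3) if d != pick and d != opened)
    return pick == prize, switched == prize

trials = 100_000
stick_wins = switch_wins = 0
for _ in range(trials):
    stick, switch = monty_hall_trial()
    stick_wins += stick
    switch_wins += switch

print(f"stick with first pick: {stick_wins / trials:.3f}")   # ~0.333
print(f"switch doors:          {switch_wins / trials:.3f}")  # ~0.667
# A latecomer who only knows "the prize is behind one of the two closed doors"
# and picks between them at random wins about half the time: a different
# probability for the same door, because they hold different information.
```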

Comment by kybernetikos on Problematic Problems for TDT · 2012-06-19T07:46:29.046Z · LW · GW

The complaints about Omega being untrustworthy are weak. Just reformulate the problem so Omega says to all agents, simulated or otherwise, "You are participating in a game that involves simulated agents and you may or may not be one of the simulated agents yourself. The agents involved in the game are the following: <describes agents' roles in third person>".

Good point.

That clears up the point about summing utility across possible worlds, but it still doesn't address the fact that the TDT agent is being asked to (potentially) make two decisions while the non-TDT agent is being asked to make only one. That seems to me to make the scenario unfair (it's what I was trying to get at in the 'very different problems' statement).

Comment by kybernetikos on Problematic Problems for TDT · 2012-06-19T07:30:10.111Z · LW · GW

Yes. I think that as long as there is any chance of you being the simulated agent, then you need to one box. So you one box if Omega tells you 'I simulated some agent', and one box if Omega tells you 'I simulated an agent that uses the same decision procedure as you', but two box if Omega tells you 'I simulated an agent that had a different copyright comment in its source code to the comment in your source code'.

This is just a variant of the 'detect if I'm in a simulation' function that others mention: if Omega gives you access to that information in any way, you can two box. Of course, I'm a bit stuck on what Omega has told the simulation in that case. Has Omega done an infinite regress?

Comment by kybernetikos on Rational Toothpaste: A Case Study · 2012-06-06T12:49:40.653Z · LW · GW

3x sounds really scary, but I have no knowledge of whether a 4 micron extra loss of sound dentin is something to be concerned about or not.

Comment by kybernetikos on Problematic Problems for TDT · 2012-06-06T12:12:51.481Z · LW · GW

In a prisoner's dilemma Alice and Bob affect each other's outcomes. In the Newcomb problem, Alice affects Bob's outcome, but Bob doesn't affect Alice's outcome. That's why it's OK for Bob to consider himself different in the second case as long as he knows he is definitely not Alice (because otherwise he might actually be in a simulation), but not OK for him to consider himself different in the prisoner's dilemma.

Comment by kybernetikos on Problematic Problems for TDT · 2012-06-06T12:05:55.465Z · LW · GW

The key thing is the question of whether it could have been you that was simulated. If all you know is that you're a TDT agent and what Omega simulated is a TDT agent, then it could have been you. Therefore you have to act as if your decision now may be either real or simulated. If you know you are not what Omega simulated (for any reason), then you know that you only have to worry about the 'real' decision.

Comment by kybernetikos on Problematic Problems for TDT · 2012-06-06T11:54:52.360Z · LW · GW

I don't think that the ability to simulate without rewarding the simulation is what pushes it over the threshold of "unfair".

It only seems that way because you're thinking from the non-simulated agent's point of view. How do you think you'd feel if you were a simulated agent, and after you made your decision Omega said 'OK, cheers for solving that complicated puzzle, I'm shutting this reality down now because you were just a simulation I needed to set a problem in another reality'. That sounds pretty unfair to me. Wouldn't you be saying 'give me my money you cheating scum'?

And as has already been pointed out, they're very different problems. If Omega actually is trustworthy, integrating across all the simulations gives infinite utility for all the (simulated) TDT agents and a total $1,001,000 utility for the (supposedly non-simulated) CDT agent.

Comment by kybernetikos on Problematic Problems for TDT · 2012-06-06T11:51:02.463Z · LW · GW

Omega (who experience has shown is always truthful)

Omega doesn't need to simulate the agent actually getting the reward. After the agent has made its choice, the simulation can just end.

If we are assuming that Omega is trustworthy, then Omega needs to be assumed to be trustworthy in the simulation too. If they didn't allow the simulated version of the agent to enjoy the fruits of their choice, then they would not be trustworthy.

Comment by kybernetikos on Problematic Problems for TDT · 2012-06-01T22:01:33.243Z · LW · GW

Actually, I'm not sure this matters. If the simulated agent knows he's not getting a reward, he'd still want to choose so that the non-simulated version of himself gets the best reward.

So the problem is that the best answer is unavailable to the simulated agent: in the simulation you should one box and in the 'real' problem you'd like to two box, but you have no way of knowing whether you're in the simulation or the real problem.

Agents that Omega didn't simulate don't have the problem of worrying whether they're making the decision in a simulation or not, so two boxing is the correct answer for them.

An agent that has to make the decision twice, where the first decision affects the payoff of the second, faces a very different decision from an agent that has to make the decision only once. So I think that in reality the problem perhaps does collapse down to an 'unfair' one, because the TDT agent is presented with an essentially different problem to a non-TDT agent.

Comment by kybernetikos on Using degrees of freedom to change the past for fun and profit · 2012-03-14T09:26:02.228Z · LW · GW

The success is said to be by a researcher who has previously studied the effect of "geomagnetic pulsations" on ESP, but I could not locate it online.

Can we have a prejudicial summary of the previous studies of the 6 researchers who failed to replicate the effect too?

Comment by kybernetikos on My Algorithm for Beating Procrastination · 2012-02-10T17:46:48.169Z · LW · GW

I noticed that if I'm apathetic about doing a task, then I also tend to be apathetic about thinking about doing the task, whereas the tasks that I get done tend to be ones I'm so enthusiastic about that I have planned them and done them in my head long before I do them physically. My conclusion: apathy starts in the mind, and the cure for it starts in the mind too.

Comment by kybernetikos on Consequentialism Need Not Be Nearsighted · 2011-09-02T08:46:01.812Z · LW · GW

But what if the doctor is confident of keeping it a secret? Well, then causal decision theory would indeed tell her to harvest his organs, but TDT (and also UDT) would strongly advise her against it. Because if TDT endorsed the action, then other people would be able to deduce that TDT endorsed the action, and that (whether or not it had happened in any particular case) their lives would be in danger in any hospital run by a timeless decision theorist, and then we'd be in much the same boat. Therefore TDT calculates that the correct thing for TDT to output in order to maximize utility is "Don't kill the traveler," and thus the doctor doesn't kill the traveler.

This doesn't follow the spirit of the 'keeping it secret' part of the setup. If we knew the exact mechanism that the doctor uses to make decisions, then we would be able to deduce that he probably saved those five patients with the organs from the missing traveller, so it's no longer secret. To fairly accept the thought experiment, the doctor has to be certain that nobody will be able to deduce what he's done.

It seems to me that you haven't really denied the central point, which is that under consequentialism the doctor should harvest the organs if he is certain that nobody will be able to deduce what he has done.

Comment by kybernetikos on Ask LW: What questions to test in our rationality questionnaire? · 2011-08-04T19:25:11.414Z · LW · GW

Set up questions that require you to assume something odd in the preamble, and then conclude with something unpalatable (and quite possibly false). This tests whether people can apply rationality even when it goes against their emotional involvement and current beliefs. As well as checking that they reach the conclusion demanded (logic), also give them an opportunity, as part of a later question, to flag up the premise that they feel caused the odd conclusion.

Something Bayesian - like the medical test questions where the incidence in the general population is really low, but that specific one has been used so much that loads of people know it. Maybe take some stats from newspaper reports and see if appropriate conclusions can be drawn.
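
As a rough worked version of that kind of question (the prevalence and error rates below are invented purely for illustration, not taken from any real test), Bayes' theorem shows how the low base rate dominates:

```python
# Hypothetical numbers, chosen only to illustrate the base-rate effect.
prevalence = 0.001           # 1 in 1,000 people have the condition
sensitivity = 0.99           # P(positive test | condition)
false_positive_rate = 0.05   # P(positive test | no condition)

p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))
p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"P(condition | positive) = {p_condition_given_positive:.3f}")  # ~0.019
# Even with a 99%-sensitive test, a positive result here means only about a
# 2% chance of having the condition, because true cases are so rare.
```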

"When was the last time you changed your mind about something you believed?" tests peoples ability to apply their rationality.

Comment by kybernetikos on Secrets of the eliminati · 2011-07-22T09:14:51.937Z · LW · GW

I agree. In particular I often find these discussions very frustrating because people arguing for elimination seem to think they are arguing about the 'reality' of things when in fact they're arguing about the scale of things. (And sometimes about the specificity of the underlying structures that the higher level systems are implemented on). I don't think anyone ever expected to be able to locate anything important in a single neuron or atom. Nearly everything interesting in the universe is found in the interactions of the parts not the parts themselves. (Also - why would we expect any biological system to do one thing and one thing only?).

I regard almost all these questions as very similar to the demarcation problem. A higher level abstraction is real if it provides predictions that often turn out to be true. It's acceptable for it to be an incomplete / imperfect model, although generally speaking if there is another that provides better predictions we should adopt it instead.

This is what would convince me that preferences were not real: At the moment I model other people by imagining that they have preferences. Most of the time this works. The eliminativist needs to provide me with an alternate model that reliably provides better predictions. Arguments about theory will not sway me. Show me the model.

Comment by kybernetikos on Secrets of the eliminati · 2011-07-21T05:10:39.768Z · LW · GW

eliminativists want to prove that humans, like the blue-minimizing robot, don't have anything of the sort until you start looking at high level abstractions.

Just because something only exists at high levels of abstraction doesn't mean it's not real or explanatory. Surely the important question is whether humans genuinely have preferences that explain their behaviour (or at least whether a preference system can occasionally explain their behaviour - even if their behaviour is truly explained by the interaction of numerous systems) rather than how these preferences are encoded.

The information in a JPEG file that indicates a particular pixel should be red cannot be analysed down to a single bit that doesn't do anything else, but that doesn't mean that there isn't a sense in which the red pixel genuinely exists. Preferences could exist and be encoded holographically in the brain. Whether you can find a specific neuron or not is completely irrelevant to their reality.

Comment by kybernetikos on Teachable Rationality Skills · 2011-05-29T13:43:58.067Z · LW · GW

I sneeze quite often. When someone says 'bless you', my usual response is 'and may you also be blessed'. I've heard a number of people who had apparently never wondered before say 'why do we say that?' after receiving that response.

Comment by kybernetikos on No One Can Exempt You From Rationality's Laws · 2011-01-19T17:39:47.899Z · LW · GW

I have all of the English Wikipedia available for offline searching on my phone. It's big, sure, but it doesn't fill the memory card by any means (and this is just the default one that came with the phone).

For offline access on a windows computer, WikiTaxi is a reasonable solution.

I'd recommend that everyone who can should carry around offline versions of Wikipedia. I consider it part of my disaster preparedness, not to mention the fun of learning new things by hitting the 'random article' button.

Comment by kybernetikos on Statistical Prediction Rules Out-Perform Expert Human Judgments · 2011-01-19T17:32:18.107Z · LW · GW

A parole board considers the release of a prisoner: Will he be violent again?

I think this is the kind of question that Miller is talking about. Just because a system is correct more often, doesn't necessarily mean it's better.

For example, if the human experts allowed more people out who went on to commit relatively minor violent offences, and the SPRs do this less often but are more likely to release prisoners who go on to commit murder, then there would be legitimate discussion over whether the SPR is actually better.

I think this is exactly what he is talking about when he says

Where AI's compete well generally they beat trained humans fairly marginally on easy (or even most) cases, and then fail miserably at border or novel cases. This can make it dangerous to use them if the extreme failures are dangerous.

I don't know whether there is evidence that this is a real effect, but to address it what you really need to measure is the total utility of outcomes rather than accuracy.

Comment by kybernetikos on The Santa deception: how did it affect you? · 2010-12-25T21:46:00.534Z · LW · GW

that I started to wonder why adults would want children to believe in Santa Claus, and whether their reasons for it were actually good.

I think that lots of people have a kind of compulsion to lie to anyone they care about who is credulous, particularly children, about things that don't matter very much. I assume it's adaptive behaviour, to try to toughen up their reasoning skills on matters that aren't so important - to teach them that they can't rely on even good people to tell them stuff that is true.

Comment by kybernetikos on Efficient Charity: Do Unto Others... · 2010-12-25T07:07:26.168Z · LW · GW

The good you do can compound too. If you save a child's life at $500, that child might go on to save other children's lives. I think you might well get a higher rate of interest on the good you do than 5%. There will be a savings rate at which you should save instead of give, but I don't think we're near it at the moment.
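
As a toy version of that trade-off (the rates and time horizon are assumptions made up for illustration, not claims about real charities or markets): give now whenever the good done compounds faster than saved money grows.

```python
# Toy comparison: donate $500 now vs. invest it and donate the proceeds later.
# All rates are illustrative assumptions.
donation = 500.0
investment_rate = 0.05    # annual return if the money is saved instead
good_growth_rate = 0.07   # assumed annual compounding of the good done by giving now
years = 20

good_if_given_now = donation * (1 + good_growth_rate) ** years
good_if_saved_then_given = donation * (1 + investment_rate) ** years

print(f"give now:         {good_if_given_now:,.0f}")          # ~1,935
print(f"save, give later: {good_if_saved_then_given:,.0f}")   # ~1,327
# Whenever the good's compounding rate exceeds the savings rate,
# giving earlier comes out ahead.
```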

Comment by kybernetikos on Efficient Charity: Do Unto Others... · 2010-12-25T05:58:08.208Z · LW · GW

Most of us allocate a particular percentage to charity, despite the fact that most people would say that nearly nothing we spend money on is as important as saving children's lives.

I don't know whether you think it's that we overestimate how much we value saving children's lives, or underestimate how important Xbox games, social events, large TVs and eating tasty food are to us. Or perhaps you think it's none of that, and that we're being simply irrational.

I doubt that anyone could consistently live as if the difference between renting a nice flat and renting a dive was one life per month, or as if halving normal grocery consumption for a month was a child's life that month, etc. If that's really the aim, we're going to have to do a significant amount of emotional engineering.

I also want to stick up for the necessity of analysing the way that a charity works, not just what they do. For example, charities that employ local people and local equipment may save fewer people per dollar in the short term, but may be less likely to create a culture of dependence, and may be more sustainable in the long term. These considerations are important too.

Comment by kybernetikos on Defecting by Accident - A Flaw Common to Analytical People · 2010-12-01T15:01:26.237Z · LW · GW

I have to admire the cunning of your last sentence.

Or have I accidentally defected? I can't tell.

EDIT: I think the 'wizened' correction was intended to be a joke. When I read your piece originally the idea of you 'wizening up' made me smile, and I suspect that the corrector just wanted to share that idea with others who may have missed it.

Comment by kybernetikos on Belief in Belief vs. Internalization · 2010-11-29T21:12:50.764Z · LW · GW

I suppose the goal you were going to spend the money on would have to be of sufficient utility if achieved to offset that in order to make the scenario work. Maybe saving the world, or creating lots of happy simulations of yourself, or finding a way to communicate between them.

Comment by kybernetikos on Belief in Belief vs. Internalization · 2010-11-29T20:54:47.162Z · LW · GW

Imagine a raffle where the winner is chosen by some quantum process. Presumably under the many worlds interpretation you can see it as a way of shifting money from lots of your potential selves to just one of them. If you have a goal you are absolutely determined to achieve and a large sum of money would help towards it, then it might make a lot of sense to take part, since the self that wins will also have that desire, and could be trusted to make good use of that money.

Now, I wonder if anyone would take part in such a raffle if all the entrants who didn't win were killed on the spot. That would mean that everyone would win in some universe, and cease to exist in the other universes where they entered. Could that be a kind of intellectual assent vs belief test for Many Worlds?

Comment by kybernetikos on Belief in Belief vs. Internalization · 2010-11-29T16:59:49.670Z · LW · GW

Yeah, that is a problem with the illustration. However, I don't think it's completely devoid of use.

Taking a risk based on some knowledge is a very strong sign of having internalised that knowledge.

Comment by kybernetikos on Belief in Belief vs. Internalization · 2010-11-29T14:41:30.788Z · LW · GW

I've heard this contrasted as 'knowledge', where you intellectually assent to something and can make predictions from it, and 'belief', where you order your life according to that knowledge, but this distinction is certainly not made in normal speech.

A common illustration of this distinction (often told by preachers) is that Blondin the tightrope walker asked the crowd if they believed he could safely carry someone across Niagara Falls on a tightrope, and almost the whole crowd shouted 'yes'. Then he asked for a volunteer to become the first man ever so carried, at which point the crowd shut up. In the end the only person he could find to accept was his manager.

Comment by kybernetikos on Harry Potter and the Methods of Rationality discussion thread · 2010-11-28T18:38:20.466Z · LW · GW

If you want to read the full thing, rather than just the description, you can download the ebook here. I certainly enjoyed it.

Comment by kybernetikos on Harry Potter and the Methods of Rationality discussion thread · 2010-11-28T15:32:33.942Z · LW · GW

There's a fairly obvious answer to that stuff in my opinion. Ventus by Schroeder (sci-fi) covers it nicely. It would be a structure set up by the Atlanteans for control of nature, probably before they ascended and left Earth for the stars.

Edit: It occurs to me that the other possibility would be a simulation, originally invented by the Atlanteans for them to upload themselves into, or perhaps Muggles were supposed to be NPCs.