Comments

Comment by CynicalOptimist on Confidence levels inside and outside an argument · 2016-11-18T00:15:12.816Z · LW · GW

This is good, but I feel like we'd represent human psychology better if we said:

Most people don't make a distinction between the concepts of "x has probability <0.1%" and "x is impossible".

I say this because I think there's an important difference between the times when people have a precise meaning in mind, which they've expressed poorly, and the times when people's actual concepts are vague and fuzzy. (Often, people don't realise how fuzzy their concepts are).

Comment by CynicalOptimist on Timeless Identity · 2016-11-17T20:29:45.541Z · LW · GW

This seems to me like an orthogonal question. (A question that can be separated entirely from the cryonics question.)

You're talking about whether you are a valuable enough individual that you can justify resources being spent on maintaining your existence. That's a question that can be asked just as easily even if you have no concept of cryonics. For instance: if your life depends on getting medical treatment that costs a million dollars, is it worth it? Or should you prefer that the money be spent on saving other lives more efficiently?

(Incidentally, I know that utilitarianism generally favours the second option. But I would never blame anyone for choosing the first option if the money was offered to them.)

I would accept an end to my existence if it allowed everyone else on earth to live for as long as they wished, and experience an existentially fulfilling form of happiness. I wouldn't accept an end to my existence if it allowed one stranger to enjoy an ice cream. There are scenarios where I would think it was worth using resources to maintain my existence, and scenarios where I would accept that the resources should be used differently. I think this is true when we consider cryonics, and equally true if we don't.

The cryonics question is quite different.

For the sake of argument, I'll assume that you're alive and that you intend to keep on living, for at least the next 5 years. I'll assume that if you experienced a life-threatening situation tomorrow, and someone was able to intervene medically and grant you (at least) 5 more years of life, then you would want them to.

There are many different life-threatening scenarios, and many different possible interventions. But for decision making purposes, you could probably group them into "interventions which extend my life in a meaningful way" and interventions that don't. For instance, an intervention that kept your body alive but left you completely brain-dead would probably go in the second category. Coronary bypass surgery would probably go in the first.

The cryonics question here is simply: if a doctor offered to freeze you and then revive you 50 years later, would you put this in the same category as other "life-saving" interventions? Would you consider it an extension of your life, in the same way as a heart transplant would be? And would you value it similarly in your considerations?

And of course, we can ask the same question for a different intervention, where you are frozen, then scanned, then recreated years later in one (or more) simulations.

Comment by CynicalOptimist on Timeless Identity · 2016-11-17T20:08:01.244Z · LW · GW

I think I've got a good response for this one.

My non-episodic memory contains the "facts" that Buffy the Vampire Slayer was one of the best television shows ever made, and that Pink Floyd aren't an interesting band. My boyfriend's non-episodic memory contains the facts that Buffy was boring, unoriginal, and repetitive (and that Pink Floyd's music is transcendentally good).

Objectively, these are opinions, not facts. But we experience them as facts. If I want to preserve my sense of identity, then I would need to retain the facts that were in my non-episodic memory. More than that, I would also lose my sense of self if I gained contradictory memories. I would need to have my non-episodic memories and not have the facts from my boyfriend's memory.

That's the reason why "off the shelf" doesn't sound suitable in this context.

Comment by CynicalOptimist on The Optimizer's Curse and How to Beat It · 2016-11-17T19:26:48.675Z · LW · GW

Very interesting. I'm going to try my hand at a short summary:

Assume you have a number of different options to choose from, you want to estimate the value of each option, and you have to make your best guess as to which option is most valuable. In step one, you generate individual estimates using whatever procedure you think is best. In step two, you make the final decision by choosing the option that had the highest estimate in step one.

The point is: even if you have unbiased procedures for creating the individual estimates in step one (i.e. procedures that are equally likely to overestimate as to underestimate), biases will still be introduced in step two, when you're looking at the list of all the different estimates. Specifically, the highest estimate(s) are more likely to be overestimates, and the lowest estimate(s) are more likely to be underestimates.
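A quick simulation sketch of that selection effect (my own illustration, not part of the original summary; the option count, noise level, and trial count are arbitrary):

```python
# Minimal optimizer's-curse simulation: unbiased estimates in step one,
# pick-the-maximum in step two, then measure the error of the winner.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_options = 100_000, 10

# Step one: each option has a true value, and we form an unbiased estimate
# of it (true value plus zero-mean noise).
true_values = rng.normal(0.0, 1.0, size=(n_trials, n_options))
estimates = true_values + rng.normal(0.0, 1.0, size=(n_trials, n_options))

# Step two: choose the option with the highest estimate.
chosen = estimates.argmax(axis=1)
rows = np.arange(n_trials)
error = estimates[rows, chosen] - true_values[rows, chosen]

print(f"average (estimate - true value) for the chosen option: {error.mean():+.3f}")
```

Every individual estimator is unbiased, but conditioning on "this estimate happened to be the highest" makes the average error clearly positive: the winning estimate tends to be an overestimate.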

Comment by CynicalOptimist on The Optimizer's Curse and How to Beat It · 2016-11-17T18:28:11.531Z · LW · GW

Well, in some circumstances, this kind of reasoning would actually change the decision you make. For example, you might have one option with a high estimate and very high confidence, and another option with an even higher estimate but lower confidence. After applying the approach described in the article, those two options might end up switching position in the rankings.

BUT: Most of the time, I don't think this approach will make you choose a different option. If all other factors are equal, then you'll probably still pick the option that has the highest expected value. I think that what we learn from this article is more about something else: It's about understanding that the final result will probably be lower than your supposedly "unbiased" estimate. And when you understand that, you can budget accordingly.
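A rough sketch of the ranking switch mentioned above (the numbers and the shared prior are invented here, assuming simple normal-normal shrinkage rather than anything specific from the article):

```python
# Shrink each estimate toward a common prior in proportion to its uncertainty.
def posterior_mean(estimate, est_sd, prior_mean=50.0, prior_sd=20.0):
    w = (1 / est_sd**2) / (1 / est_sd**2 + 1 / prior_sd**2)
    return w * estimate + (1 - w) * prior_mean

print(posterior_mean(90, 5))    # ~87.6: high confidence, barely shrunk
print(posterior_mean(100, 40))  # 60.0: low confidence, shrunk a long way
```

On the raw estimates the noisy option (100) ranks first; after adjustment the confident option (90) wins. And both adjusted figures sit below the raw estimates, which is the budgeting point above.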

Comment by CynicalOptimist on The Optimizer's Curse and How to Beat It · 2016-11-17T17:27:56.951Z · LW · GW

I think there's some value in the observation that "the all 45 thing makes it feel like a trick". I believe that's a big part of why this feels like a paradox.

If you have a box with the numbers "60" and "20" as described above, then I can see two main ways that you could interpret the numbers:

A: The number of coins in this box was drawn from a probability distribution with a mean of 60, and a range of 20.

B: The number of coins in this box was drawn from an unknown probability distribution. Our best estimate of the number of coins in this box is 60, based on certain information that we have available. We are certain that the actual value is within 20 gold coins of this.

With regards to understanding the example, and understanding how to apply the kind of Bayesian reasoning that the article recommends, it's important to understand that the example was based on B. And in real life, B describes situations that we're far more likely to encounter.

With regards to understanding human psychology, human biases, and why this feels like a paradox, it's important to understand that we instinctively tend towards "A". I don't know if all humans would tend to think in terms of A rather than B, but I suspect the bias applies widely amongst people who've studied any kind of formal probability. "A" is much closer to the kind of questions that would be set as exercises in a probability class.

Comment by CynicalOptimist on Magical Categories · 2016-11-10T12:34:15.568Z · LW · GW

I think that RobbBB has already done a great job of responding to this, but I'd like to have a try at it too. I'd like to explore the math/morality analogy a bit more. I think I can make a better comparison.

Math is an enormous field of study. Even if we limited our concept of "math" to drawing graphs of mathematical functions, we would still have an enormous range of different kinds of functions: hyperbolic, exponential, polynomial, all the trigonometric functions, and so on.

Instead of comparing math to morality, I think it's more illustrative to compare math to the wider topic of "value-driven-behaviour".

An intelligent creature could have all sorts of different values. Even within the realm of modern, western, democratic morality we still disagree about whether it is just and proper to execute murderers. We disagree about the extent to which a state is obligated to protect its citizens and provide a safety net. We disagree about the importance of honesty, of freedom vs. safety, of freedom of speech vs. protection from hate speech.

If you look at the wider world, and at cultures through history, you'll find a much wider range of moralities. People who thought it was not just permitted, but morally required that they enslave people, restrict the freedoms of their own families, and execute people for religious transgressions.

You might think that these are all better or worse approximations of the "one true morality", and that a superintelligence could work out what that true morality is. But we don't think so. We believe that these are different moralities. Fundamentally, these people have different values.

Then we can step further out, and look at the "insane" value systems that a person could hold. Perhaps we could believe that all people are so flawed that they must be killed. Or we could believe that no one should ever be allowed to die, and so we extend life indefinitely, even for people in agony. Or we might believe everyone should be lobotomised for our own safety.

And then there are the truly inhuman value systems: the paperclip maximisers, the prime pebble sorters, and the baby eaters. The idea is that a superintelligence could comprehend any and all of these. It would be able to optimise for any one of them, and foresee results and possible consequences for all of them. The question is: which one would it actually use?

A superintelligence might be able to understand all of human math and more besides, but we wouldn't build one to simply "do all of maths". We would build it with a particular goal and purpose in mind. For instance (to pick an arbitrary example) we might need it to create graphs of hyperbolic functions. It's a bad example, I know, but I hope it serves to help make the point.

Likewise, we would want the intelligence to adopt a specific set of values. Perhaps we would want them to be modern, western, democratic liberal values.

I wouldn't expect a superintelligence to start generating hyperbolic functions, despite the fact that it's smart enough to do so. The AI would have no reason to start doing that particular task. It might be smart enough to work out that that's what we want, of course, but that doesn't mean it'll do it (unless we've already solved the problem of getting it to do "what humans want it to do"). If we want hyperbolic functions, we'll have to program the machine with enough focus to make it do that.

Likewise, a computer could have any arbitrary utility function, any arbitrary set of values. We can't make sure that a computer has the "right" values unless we know how to clearly define the values we want.

With hyperbolic functions, it's relatively easy to describe exactly, unambiguously, what we want. But morality is much harder to pin down.

Comment by CynicalOptimist on The Hidden Complexity of Wishes · 2016-11-09T00:29:20.700Z · LW · GW

But if you do care about your wishes being fulfilled safely, then safety will be one of the things that you want, and so you will get it.

So long as your preferences are coherent, stable, and self-consistent then you should be fine. If you care about something that's relevant to the wish then it will be incorporated into the wish. If you don't care about something then it may not be incorporated into the wish, but you shouldn't mind that: because it's something you don't care about.

Unfortunately, people's preferences often aren't coherent and stable. For instance an alcoholic may throw away a bottle of wine because they don't want to be tempted by it. Right now, they don't want their future selves to drink it. And yet they know that their future selves might have different priorities.

Is this the sort of thing you were concerned about?

Comment by CynicalOptimist on The Hidden Complexity of Wishes · 2016-11-08T23:46:28.419Z · LW · GW

I like this style of reasoning.

Rather than taking some arbitrary definition of black boxes and then arguing about whether they apply, you've recognised that a phrase can be understood in many ways, and we should use the word in whatever way most helps us in this discussion. That's exactly the sort of rationality technique we should be learning.

A different way of thinking about it though, is that we can remove the confusing term altogether. Rather than defining the term "black box", we can try to remember why it was originally used, and look for another way to express the intended concept.

In this case, I'd say the point was: "Sometimes, we will use a tool expecting to get one result, and instead we will get a completely different, unexpected result. Often we can explain these results later. They may even have been predictable in advance, and yet they weren't predicted."

Computer programming is especially prone to this. The computer will faithfully execute the instructions that you gave it, but those instructions might not have the net result that you wanted.
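A tiny invented example of that gap (nothing from the original discussion, just an illustration):

```python
# The computer faithfully executes what was written, not what was meant:
# "//" is integer division, so the intended average of 1.67 comes out as 1.
def average(xs):
    total = 0
    for x in xs:
        total += x
    return total // len(xs)   # executed exactly as written

print(average([1, 2, 2]))  # prints 1, not 1.666...
```

The instructions were carried out perfectly; the net result still isn't what the author wanted.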

Comment by CynicalOptimist on The Hidden Complexity of Wishes · 2016-11-08T23:30:09.765Z · LW · GW

"if the Pump could just be made to sense the proper (implied) parameters."

You're right, this would be an essential step. I'd say the main point of the post was to talk about the importance, and especially the difficulty, of achieving this.

Re optimisation for use: remember that this involves a certain amount of trial and error. In the case of dangerous technologies like explosives, firearms, or high speed vehicles, the process can often involve human beings dying, usually in the "error" part of trial and error.

If the technology in question was a super-intelligent AI, smart enough to fool us and engineer whatever outcome best matched its utility function? Then potentially we could find ourselves unable to fix the "error".

Please excuse the cheesy line, but sometimes you can't put the genie back in the bottle.

Re the workings of the human brain? I have to admit that I don't know the meaning of ceteris paribus, but I think that the brain mostly works by pattern recognition. In a "burning house" scenario, people would mostly contemplate the options that they thought were "normal" for the situation, or that they had previously imagined, heard about, or seen on TV.

Generating a lot of different options and then comparing them for expected utility isn't the sort of thing that humans do naturally. It's the sort of behaviour that we have to be trained for, if you want us to apply it.

Comment by CynicalOptimist on The Hidden Complexity of Wishes · 2016-11-08T23:12:51.964Z · LW · GW

I agree, just because something MIGHT backfire, it doesn't mean we automatically shouldn't try it. We should weigh up the potential benefits and the potential costs as best we can predict them, along with our best guesses about the likelihood of each.

In this example, of course, the lessons we learn about "genies" are supposed to be applied to artificial intelligences.

One of the central concepts that Eliezer tries to express about AI is that when we get an AI that's as smart as humans, we will very quickly get an AI that's very much smarter than humans. At that point, the AI can probably trick us into letting it loose, and it may be able to devise a plan to achieve almost anything.

In this scenario, the potential costs are almost unlimited. And the probability is hard to work out. Therefore figuring out the best way to program it is very very important.

Because that's a genie... {CSI sunglasses moment} ... that we can't put back in the bottle.

Comment by CynicalOptimist on The Hidden Complexity of Wishes · 2016-11-08T22:53:45.196Z · LW · GW

I see where you're coming from on this one.

I'd only add this: if a genie is to be capable of granting this wish, it would need to know what your judgements were. It would need to understand them, at least as well as you do. This pretty much resolves to the same problem that Eliezer already discussed.

To create such a genie, you would either need to explain to the genie how you would feel about every possible circumstance, or you would need to program the genie so as to be able to correctly figure it out. Both of these tasks are probably a lot harder than they sound.

Comment by CynicalOptimist on Magical Categories · 2016-11-07T23:57:38.690Z · LW · GW

Can't agree with this enough.

Comment by CynicalOptimist on Ethics Notes · 2016-11-05T09:32:10.648Z · LW · GW

Alternate answer:

If the Kremlin publicly announces a policy, saying that they may reward some soldiers who disobey orders in a nuclear scenario? Then this raises the odds that a Russian official will refuse to launch a nuke - even when they have evidence that enemy nukes have already been fired on Russia.

(So far, so good. However...)

The problem is that it doesn't just raise the odds of disobedience, it also raises the perceived odds, i.e. it will make Americans think that they have a better chance of launching a first strike and "getting away with it".

A publicly announced policy like this would have weakened the USSR's nuclear deterrent. Arguably, this raises everyone's chances of dying in a nuclear war, even the Americans.

Comment by CynicalOptimist on Ethics Notes · 2016-11-05T09:20:45.939Z · LW · GW

It may be an uncommon scenario, but it's the scenario that's under discussion. We're talking about situations where a soldier has orders to do one thing, and believes that moral or tactical considerations require them to do something else - and we're asking what ethical injunctions should apply in that scenario.

To be fair, Jubilee wasn't very specific about that.

Comment by CynicalOptimist on Humans in Funny Suits · 2016-10-07T02:29:57.810Z · LW · GW

Yup! I agree completely.

If you were modeling an octopus-based sentient species, for the purposes of writing some interesting fiction, then this would be a nice detail to add.

Comment by CynicalOptimist on Talking Snakes: A Cautionary Tale · 2016-10-07T02:06:44.517Z · LW · GW

Thank you. :)

Comment by CynicalOptimist on Conservation of Expected Evidence · 2016-08-19T15:18:21.705Z · LW · GW

I believe the idea was to ask "hypothetically, if I found out that this hypothesis was true, how much new information would that give me?"

You'll have two or more hypotheses, and one of them is the one that would (hypothetically) give you the least amount of new information. The one that would give you the least amount of new information should be considered the "simplest" hypothesis. (assuming a certain definition of "simplest", and a certain definition of "information")
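One standard way to make "amount of new information" precise is Shannon self-information (a formalization I'm adding here; the original comment doesn't commit to it):

$$I(H) = -\log_2 P(H)$$

A hypothesis with prior probability 1/2 would give you 1 bit of new information if confirmed; one with prior 1/1024 would give you 10 bits. So the hypothesis whose truth would carry the least new information is exactly the one you already considered most probable.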

Comment by CynicalOptimist on When (Not) To Use Probabilities · 2016-05-12T21:57:03.230Z · LW · GW

This is excellent advice.

I'd like to add though, that the original phrase was "algorithms that make use of gut feelings... ". This isn't the same as saying "a policy of always submitting to your gut feelings".

I'm picturing a decision tree here: something that tells you how to behave when your gut feeling is "I'm utterly convinced" {Act on the feeling immediately}, vs how you might act if you had feelings of "vague unease" {continue cautiously, delay taking any steps that constitute a major commitment, while you try to identify the source of the unease}. Your algorithm might also involve assessing the reliability of your gut feeling; experience and reason might allow you to know that your gut is very reliable in certain matters, and much less reliable in others.

The details of the algorithm are up for debate of course. For the purposes of this discussion, I place no importance on the details of the algorithm I described. The point is just that these procedures are helpful for rational thinking, they aren't numerical procedures, and a numerical procedure wouldn't automatically be better just because it's numerical.

Comment by CynicalOptimist on Talking Snakes: A Cautionary Tale · 2016-05-10T21:14:13.641Z · LW · GW

I think this is the basis of good Business Analysis. A field I'm intending to move into.

It's the very essence of "Hold off on proposing solutions".

Comment by CynicalOptimist on Talking Snakes: A Cautionary Tale · 2016-05-10T21:07:14.643Z · LW · GW

This is perfectly true. But it doesn't much matter, because the point here is that when these people reject the idea of evolution, for these kinds of reasons, they use feelings of "absurdity" as a metric - without critically assessing the reasons why they feel that way.

The point here isn't that the lady was using sound and rational reasoning skills. The contention is that her style of reasoning was something a rationalist shouldn't want to use - and that it was something the author no longer wants to use in their own thinking.

Comment by CynicalOptimist on Talking Snakes: A Cautionary Tale · 2016-05-10T20:48:04.630Z · LW · GW

Oh absolutely. We don't have time to thoroughly investigate the case for every idea we come across. There comes a time when you say that you're not interested in exploring an idea any further.

But there is an intellectual honesty to admitting that you haven't heard all of the evidence, and acknowledging that you might conceivably have changed your mind (or least significantly changed your probability estimates) if you had done more research.

And there's a value to it as well. Some ideas have been thoroughly researched and should be labelled in our minds as "debunked". Others should be labelled as "not yet disproven". Later, if we happen to encounter more evidence on the topic, we might take this into account when we decide how seriously to take it.

The lady in the story might have sounded much more sensible to us if she had said "Evolution still sounds absurd to me, but I'll admit that I haven't yet given the pro-evolution argument a proper opportunity to change my mind".

And I think we should try to be that sensible ourselves.

Comment by CynicalOptimist on Talking Snakes: A Cautionary Tale · 2016-05-10T20:26:37.495Z · LW · GW

I think that absurdity, in this sense, is just an example of Occam's Razor / Bayesian rationality in practice. If something has a low prior, and we've no evidence that would make us raise our probability estimates, then we should believe that the idea probably isn't true.

I've always assumed that the absurdity bias was a tendency to do something slightly different. In this context, absurdity is a measure of how closely an idea conforms to our usual experiences. It's a measure of how plausible an idea feels to our gut. By this definition, absurdity is being used as a proxy for "low probability estimate, rationally assigned".

It's often a good proxy, but not always.

Or perhaps another way to put it: when evidence seems to point to an extremely unlikely conclusion, we tend to doubt the accuracy of the evidence. And the absurdity bias is a tendency to doubt the evidence more thoroughly than ideal rationality would demand.

(Admission: I've noticed that I've had some trouble defining the bias, and now I'm considering the possibility that "absurdity bias" is a less useful concept than I thought it was).

Comment by CynicalOptimist on Talking Snakes: A Cautionary Tale · 2016-05-05T22:00:37.602Z · LW · GW

Incidentally, does this prime number have to be expressed in Base 10?

Comment by CynicalOptimist on Talking Snakes: A Cautionary Tale · 2016-05-05T21:59:31.299Z · LW · GW

I think the original poster would have agreed to this even before they had the realisation. The point here is that, even when you do listen to an explanation, the absurdity bias can still mislead you.

The lady in the story had an entire conversation about evolution and still rejected it as absurd. Some ideas simply take more than 20 minutes to digest, understand, and learn about. Therefore, after 20 minutes of conversation, you cannot reasonably conclude that you've heard everything there is. You cannot reasonably conclude that you wouldn't be convinced by more evidence.

It's just like any bias really. Even when you know about it and you think you've adjusted sufficiently, you probably haven't.

Comment by CynicalOptimist on Talking Snakes: A Cautionary Tale · 2016-05-05T21:41:39.613Z · LW · GW

I think this just underscores the original post's point.

The lesson here isn't that Christians are probably right or that Christians are probably wrong. The lesson here is that you can go very wrong by relying on the absurdity heuristic. And that that's true even when the claim seems really absurd.

Let's take a hypothetical atheist who really does think that all Christians believe in the literal word of the Bible. This atheist might reject the whole of Christianity because of the absurdity of talking snakes. Having rejected the entire school of thought that all of Christianity represents, he never has the opportunity to find out that he was wrong (about all Christians taking the Bible literally). Therefore he never realises that he had flawed reasons for rejecting religion.

The woman in the story has a similarly inaccurate understanding of what (many) evolutionists believe. The flawed understanding is part of the issue.

This bias applies to people who reject an idea on the grounds that it seems absurd, but their assessment of 'absurdity' is based on their limited, probably inaccurate, understanding of the topic.

Comment by CynicalOptimist on Talking Snakes: A Cautionary Tale · 2016-05-05T21:35:57.518Z · LW · GW

I think you might be deflecting the main point here. Possibly without realising it.

You have a better opportunity to practice your skills as a rationalist if you respond to the [least convenient](http://tinyurl.com/LWleastconvenient) possible interpretation of this comment.

I would propose that the "experts" being referred to are experts in debating the existence of God, i.e. of all the arguments that have ever been put forward for the existence of God, these are the people who know the most compelling ones. The most rationally compelling, logically coherent arguments.

Perhaps you mean to say that no such people exist, or no such arguments exist. It is possible that that's true. But it is almost certain that having brief conversations with garden-variety theists won't expose us to these arguments.

If you happen to have gone looking for these arguments, with an open mind and a willingness to genuinely consider their merits, and you remain unconvinced, then that's fine. I'm pretty sure that if I were to go looking for the most compelling arguments, with a genuinely open mind, I would remain unconvinced too. But I think it's important to acknowledge that I haven't actually done so. I haven't done the research and I haven't given myself the best possible opportunity to change my mind - there were other things that I was more interested in doing.

For those of us who haven't heard the most compelling arguments: I honestly think that's fine. But I think the original poster (and Psycho) are describing an important bias, one that we should be aware of and careful about in our own thinking: the tendency to reason as if we have already seen the most compelling evidence for something, even when there's no reason to believe that we have.

When you realise that you've not yet seen the most convincing version of an argument, there's no reason to raise your probability estimates. But you also shouldn't lower them in the same way that you would if you were sure you'd seen all the evidence that there was.

Comment by CynicalOptimist on Talking Snakes: A Cautionary Tale · 2016-05-05T20:52:05.932Z · LW · GW

Exactly!

To demonstrate in this way that the absurdity heuristic is useful, you would have to claim something like:

The ratio of false absurd claims (that you are likely to encounter) to true absurd claims (that you are likely to encounter) is much higher than the ratio of false non-absurd claims (that you are likely to encounter) to true non-absurd claims (that you are likely to encounter).

EDIT: Wow. I'm the person who wrote that, and I still find it hard to read. This is one of the reasons why rationality is hard. Even when you have a good intuition for the concepts, it's still hard to express the ideas in a concrete way.
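One way to state the claim above compactly (a formalization added here, not in the original comment), with all probabilities taken over claims you are likely to encounter:

$$\frac{P(\text{false}\mid\text{absurd})}{P(\text{true}\mid\text{absurd})} \;\gg\; \frac{P(\text{false}\mid\text{not absurd})}{P(\text{true}\mid\text{not absurd})}$$

Only if this inequality holds does treating absurdity as strong evidence against a claim actually pay its way.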

Comment by CynicalOptimist on Humans in Funny Suits · 2016-04-25T11:07:00.151Z · LW · GW

I agree with AlexanderRM.

You stated that some of the autistic people you know are significantly different from most humans. That's in line with the original content, not a counter-argument to it.

And with that said, I'm not sure I'm happy being in a conversation about how "different" a group of people is from normal people. It's hard to know how that will be taken by the people involved, and it may not be a nice feeling to read it.

Comment by CynicalOptimist on Humans in Funny Suits · 2016-04-25T10:41:10.119Z · LW · GW

I think you're right. That squeamishness is very much a product of you having grown up as not-an-octopus.

Most creatures taste with an organ that's at the top of their digestive tract, so it's fairly sensible that they have an aversion to tasting anything that would be unhealthy for them to consume.

A species that had always had a chemical-composition-sense on all of its limbs? Would almost certainly have a very different relationship with that sense than we have with taste.

Comment by CynicalOptimist on Humans in Funny Suits · 2016-04-25T10:34:53.839Z · LW · GW

I think this might be the bias in action yet again.

Our idea of an alien experience is to taste with a different part of our bodies? That's certainly more different-from-human than most rubber-forehead aliens, but "taste" is still a pretty human-familiar experience. There are species with senses that we don't have at all, like a sensitivity to magnetism or electric fields.

Comment by CynicalOptimist on Humans in Funny Suits · 2016-04-25T10:26:44.094Z · LW · GW

TV Tropes calls that the "Planet of Hats". (Visit TV Tropes at your own peril; it's a notorious time sink.)

I think it represents a different fallacy: to assume that an unfamiliar group of things (or people) is much more homogeneous than it really is. And more specifically: to assume that a culture or group of things is entirely defined by the things that make them different from us.

Comment by CynicalOptimist on Humans in Funny Suits · 2016-04-25T10:17:05.777Z · LW · GW

Yes, of course there are many good reasons why writers do this. Reasons why, for a writer, it can be good to do this, in addition to just being difficult to avoid.

But I don't think that's really the point. We're not here to critique science fiction. We're not TV critics. We're trying to learn rationality techniques to help us "win", whatever we're trying to win. And this is a fairly good description of a certain kind of bias.

You're right though. Sci-fi is a good example to demonstrate what the bias is, but not a great example to demonstrate why it's important.

Comment by CynicalOptimist on The Least Convenient Possible World · 2016-04-24T12:56:48.212Z · LW · GW

This is fair, because you're using the technique to redirect us back to the original morality issue.

But I don't think that MBlume was completely evading the question either. The question was about ethical principles, and his response does represent an exploration of ethical principles. MBlume suggests that it's more ethical to sacrifice one of the lives that was already in danger than to sacrifice an uninvolved stranger. (Remember, from a strict utilitarian view, both solutions leave one person dead, so this is definitely a different moral principle.)

This technique is good for stopping people from evading the question. But some evasions are more appropriate than others.

Comment by CynicalOptimist on The Least Convenient Possible World · 2016-04-24T12:45:50.429Z · LW · GW

Okay, well let's apply exactly the technique discussed above:

If the hypothetical Omega tells you that there is indeed a maximum value for happiness, and that you will certainly be maximally happy inside the box: do you step into the box then?

Note: I'm asking that in order to give another example of the technique in action. But still feel free to give a real answer if you'd choose to.

Since you didn't answer the question one way or another, I can't apply the second technique here. I can't ask what would have to change in order for you to change your answer.

Comment by CynicalOptimist on How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience? · 2016-04-22T22:24:32.080Z · LW · GW

Yes, that's definitely true. If you know a little, or a lot, about genetics, then the theory is falsifiable.

I think it still works just fine as an example though. The goal was to explain the meaning and the importance of falsifiability. Spottiswood's theory, as presented and as it was being used, wasn't making any useful predictions. No one was looking at familial comparisons, and I implied that Spottiswood wasn't making any effort to identify the gene, so the only observations that were coming in were "person lives" or "person dies". Within that context, Spottiswood's theory can explain any observation, and makes no useful predictions.

If that's not an example of an unfalsifiable theory, then it's still an example that helps explain the key elements of unfalsifiability, and helps explain why they're important.

If an audience member should then point out what you pointed out? Then that's brilliant. We can agree with the audience member, and talk about how this new consideration shows that the theory can be falsifiable after all.

But then we also get to point out how this falsifiability is what makes a theory much more useful... and the example still works because (QED) that's exactly the point we were trying to demonstrate.

Comment by CynicalOptimist on The True Prisoner's Dilemma · 2016-04-17T16:02:22.622Z · LW · GW

It's an appealing notion, but I think the logic doesn't hold up.

In simplest terms: if you apply this logic and choose to cooperate, then the machine can still defect. That will net more paperclips for the machine, so it's hard to claim that the machine's actions are irrational.

Although your logic is appealing, it doesn't explain why the machine can't defect while you cooperate.

You said that if both agents are rational, then option (C,D) isn't possible. The corollary is that if option (C,D) is selected, then one of the agents isn't being rational. If this happens, then the machine hasn't been irrational (it receives its best possible result). The conclusion is that when you choose to cooperate, you were being irrational.

You've successfully explained that (C, D) and (D, C) are impossible for rational agents, but you seem to have implicitly assumed that (C, C) was possible for rational agents. That's actually the point that we're hoping to prove, so it's a case of circular logic.
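For concreteness, here is the standard prisoner's dilemma payoff ordering from the machine's point of view, writing u(its move, your move) - standard notation, not something from the original comment:

$$u(D, C) > u(C, C) > u(D, D) > u(C, D)$$

Whichever move you make, the machine's payoff is higher if it defects, which is why your choosing to cooperate can't by itself rule out the (C, D) outcome.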

Comment by CynicalOptimist on The Parable of the Dagger · 2016-04-17T14:46:00.051Z · LW · GW

There's a lot of value in that. Sometimes it's best not to go down the rabbit hole.

Whatever the technicalities might be, the jester definitely followed the normal, reasonable rules of this kind of puzzle, and by those rules he got the right answer. The king set it up that way, and set the jester up to fail.

If he'd done it to teach the jester a valuable lesson about the difference between abstract logic and real life, then it might have been justified. But he's going to have the jester executed, so that argument disappears.

I think we can all agree, The King is definitely a dick.

Comment by CynicalOptimist on How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience? · 2016-04-17T14:08:43.509Z · LW · GW

I don't really recommend talking to a bunch of children and deliberately spreading the message "some of you just suck at most things".

There are positive and valuable ways to teach the lesson that people aren't all equally "good at stuff", but it's a tough one to communicate well. It's not a good thing to bring up casually as an example when you're talking about something else.

Comment by CynicalOptimist on How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience? · 2016-04-17T13:58:03.928Z · LW · GW

Incidentally, I think that you're proposing a test for susceptibility to the medicine. The relevant theory here is that any person who would be killed by a full dose would also be harmed, but not killed, by a much smaller dose. That's a perfectly testable, falsifiable theory, but I don't think it would directly test the claim that the cause is genetic.

A better test for genetic causes is to look at family relationships. If we believe the cause is genetic, then we predict that people who are more closely related to each other are more likely to have the same reaction to the medicine. And we predict that identical twins would always have the exact same reaction to the medicine.

The original poster was looking for a very easy example that children could follow, without needing to understand any maths or probability theory, so I wanted to keep it simple. That's why I didn't mention the idea of improving the original scientist's theory.

Comment by CynicalOptimist on How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience? · 2016-04-17T13:48:52.417Z · LW · GW

Absolutely.

If the first scientist can come up with a way to test his theory, then it would probably make his theory more useful. It would also make it more falsifiable.

Comment by CynicalOptimist on How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience? · 2016-04-13T19:30:41.955Z · LW · GW

I think it would be great to start with a theory that sounds very scientific, but is unfalsifiable, and therefore useless. Then we modify the theory to include an element that is falsifiable, and the theory becomes much more useful.

For example, we have a new kind of medicine, and it is very good for some people, but when other people take the medicine it kills them. Naturally, we want to know who would be killed by the medicine, and who would be helped by it.

A scientist has a theory. He believes there is a gene that he calls the "Spottiswood gene". Anyone who has the proper form of the Spottiswood gene will be safe; they can take the medicine freely. But some people have a broken version of the Spottiswood gene, and they die when they take the medicine. Unfortunately the scientist has no way of detecting the Spottiswood gene, so he can't tell you whether you have the gene or not.

Now this theory sounds very scientific and it's got lots of scientific words in it, but it isn't very useful. The scientist doesn't know how to detect the gene, so he can't tell you whether you are going to live or whether you are going to die. He can't tell you whether it is safe to take the medicine. If you take the pill and you survive, then the scientist will say that you had the working version of the gene. If you take the pill and you die, the scientist will say that you have the broken version of the gene. But he cannot say what will happen to you until after it has already happened, so his theory is useless. He can explain anything, but he can't make predictions in advance.

Now another scientist has a different theory. She thinks that the medicine is related to eye color. She thinks anyone with blue eyes will die if they take the medicine, and she thinks that anyone with brown eyes will be okay. She's not sure why this happens, but she plans to do more research and find out. Even if she doesn't do any more research, her theory is much more useful than the first scientist's theory. If she's right, then blue-eyed people will know that they should avoid the medicine, and brown-eyed people will know that they can take the medicine safely. She has made predictions. She predicts that no brown-eyed person will die after taking the medicine, and she predicts that no blue-eyed person will live after taking it.

Of course, the second scientist might be wrong. But the interesting thing is that if she's wrong, then we can prove that she's wrong. She predicted that no one with brown eyes will die after taking the medicine, so if lots of people with brown eyes die, then we will know that she's wrong.

If her theory is wrong, then we should be able to prove that it's wrong. And then if the results don't prove that she's wrong, we accept that she's probably right. That's called falsifiability.

But the first scientist doesn't have falsifiability. We know that even if he's wrong, we'll never be able to prove it - and that means we'll never know if he's wrong or right. More importantly, even if he is right, his theory still wouldn't do anybody any good.

Comment by CynicalOptimist on The Sally-Anne fallacy · 2016-04-13T18:46:09.004Z · LW · GW

I think you're saying that all the cases described above could be expressed as a mix of other fallacies, and therefore it's not a distinct fallacy in its own right?

I think a better question is: "If we think of this class of mistake as a specific named fallacy, will it help us to spot errors of reasoning that we would otherwise have missed? Or alternatively, will it help us to talk about errors of reasoning that we've noticed?"

If it can be expressed in terms of other fallacies, but these mistakes aren't immediately obvious as examples of those fallacies, then it can be worth giving them their own label as philh suggests.

Ultimately, different people will find that different tools and explanations work well for them. While two explanations might be logically equivalent, some people will find that one makes more sense to them, and some people will find that the other makes more sense.

It seems like a useful fallacy to me (so to speak), and I intend to keep an eye out for it.

Comment by CynicalOptimist on Attention! Financial scam targeting Less Wrong users · 2016-04-13T18:19:42.634Z · LW · GW

"We have a pretty stupid banking system if you can..."

Yes, we do.

It's a complicated system that developed slowly, piece by piece, influenced by legislation, commercial pressures, other (contradictory) commercial pressures, and customers' needs. The need for backwards compatibility makes it impossible to rip up the old system and start again, and no one person is in charge of designing it. Naturally it's messed up and has inconsistencies.

---Meta comment: At first I was writing this with the intention of saying, basically: "Duh! isn't that obvious?". Now I realize that that's really unkind and unfair.

You've encountered something that you hadn't known before, and you "noticed you were surprised". That's a good thing, and it's good that you expressed it so that other people can realize the same thing.

Comment by CynicalOptimist on Attention! Financial scam targeting Less Wrong users · 2016-04-13T18:10:13.343Z · LW · GW

Suffice to say: There are many different methods for sending money. Some of them will involve paper forms, some will not. Some of them involve the internet, some will not. And each one has its own rules.

"Maybe the scammer wants the part of their money returned using a different method (one that does not allow cancelling, or has shorter deadlines)"

This is essentially correct. I've read about similar scams, and I believe this was how they worked.

Comment by CynicalOptimist on Attention! Financial scam targeting Less Wrong users · 2016-04-13T18:04:03.429Z · LW · GW

"I think it can be taken for granted that people on this site have an elevated sense of skepticism"

I disagree. Being a participant on this site implies that one has accepted some or all of the central premises of the community: that we can significantly improve our lives by thinking differently, that we should be willing to think and behave in ways that are very counter-intuitive to the average person, and that we can learn to do all of this by reading and talking on a website.

A great many 'normal' people would dismiss Less Wrong as a silly venture. Likewise, they'd be willing to dismiss something that looks like a scam immediately, without any thought at all. Those of us who pride ourselves on being clever, on being willing to embrace ideas that other people would reject, and on wanting to exploit loopholes and inefficiencies that other people have missed? I suspect we're less skeptical than the average person.

Being on the site only signals that we want to be rational, and like to think that we are. It doesn't necessarily mean that we're good at it.

Comment by CynicalOptimist on The Parable of the Dagger · 2016-04-13T13:54:06.545Z · LW · GW

I can't speak for Eliezer's intentions when he wrote this story, but I can see an incredibly simple moral to take away from this. And I can't shake the feeling that most of the commenters have completely missed the point.

For me, the striking part of this story is that the Jester is shocked and confused when they drag him away. "How?!" he says. "It's logically impossible!" The Jester seems not to understand how it is possible for the dagger to be in the second box. My explanation goes as follows, and I think I'm just paraphrasing the king here.

1. If a king has two boxes and a means to write on them, then he can write any damn thing on them that he wants to.
2. If a king also has a dagger, then he can place that dagger inside one of the two boxes, and he can place it in whichever box he decides to place it in.

That's it. That's the entire explanation for how the dagger could "possibly" be inside the second box. It's a very simple argument, that a five year old could understand, and no amount of detailed consideration by a logician is going to stop this simple argument from being true.

The jester, however, thought it was impossible for the dagger to be in the second box. Not just that it wasn't there, but that it was IMPOSSIBLE. That's how I read the story, anyway. He used significantly more complicated logic, and he thought that he'd proven it impossible. But it only takes a moment's reflection to see that he's wrong.

Some of the comments above have tried to work out what was wrong with the Jester's logic, and they've explained the detailed and subtle flaws in his reasoning. That's great - if you want to develop a deep understanding of logic, self-referential statements, and mathematical truth values (and let's be fair, I suppose most of us do). But in the context of the sequences on rationality, I think there's a much better lesson to learn.

Remember: rationalists are supposed to WIN. We're supposed to develop reasoning skills that give us a better and more useful understanding of reality. So the lesson is this: don't be seduced by complex and detailed logic, if that logic is taking you further and further away from an accurate description of reality. If something is already true, or already false, then no amount of reasoning will change it.

Reality is NOT required to conform to your understanding or your reasoning. It is your reasoning that should be required to conform to reality.

Comment by CynicalOptimist on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-11-08T00:09:19.613Z · LW · GW

Now I want to try having a watch that randomly speeds up and slows down, within preset limits, so that at any point I could be as much as 5 minutes ahead or 5 minutes behind.

That would probably get me used to showing up a few minutes early to everything.
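A hedged sketch of how such a watch could work (the step size and tick count are invented; this is just the bounded random walk the idea describes):

```python
# Displayed time wanders within +/- 5 minutes of real time via a small,
# clamped random walk.
import random
from datetime import datetime, timedelta

MAX_DRIFT = timedelta(minutes=5)

def next_offset(offset: timedelta) -> timedelta:
    step = timedelta(seconds=random.uniform(-5, 5))
    # Clamp so the watch never strays more than 5 minutes either way.
    return max(-MAX_DRIFT, min(MAX_DRIFT, offset + step))

offset = timedelta(0)
for _ in range(10):                      # ten simulated ticks
    offset = next_offset(offset)
    print("displayed time:", datetime.now() + offset)
```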

Comment by CynicalOptimist on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-11-08T00:05:41.956Z · LW · GW

I might be missing something here.

These seem to be application forms to lease or purchase land that belongs to a railway-related organization?

Land that belongs to a railway-related organization isn't necessarily part of a railway. The land could be disused office-space, parking lots, or warehouses.

Comment by CynicalOptimist on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-11-07T23:58:51.169Z · LW · GW

Completely Ad-Hoc proposal:

Ethics are very very heavily influenced by one consideration: other people's opinions. It may not be consciously admitted, but when people are faced with an ethical conundrum, I think they make a decision based on the question "What will people think of me?". (The internalized version is: "What will I think of myself?" / "Will I be able to look at myself in the mirror?")

The question here relates to letting 5 people die (by inaction) or killing one person (by taking action). If you pick the second one, then you're actively responsible for that death. You were the killer. And that's the sort of action that will get you judged by other people. That's the sort of action that will make other people label you as a killer, as a betrayer, as an untrustworthy person. Therefore, we're very heavily biased against certain things, and those biases don't allow for utilitarian ethics.

It's very often true that drunk people care less than sober people about what others think of them.