Comment by Wes_W on Is Willpower a Finite Resource, or a Myth? · 2017-02-14T18:48:40.892Z · LW · GW

"Willpower is not exhaustible" is not necessarily the same claim as "willpower is infallible". If, for example, you have a flat 75% chance of turning down sweets, then avoiding sweets still makes you more likely to not eat them. You're not spending willpower, it's just inherently unreliable.

Comment by Wes_W on Counterfactual Mugging · 2016-11-29T03:20:24.468Z · LW · GW

> I'm pretty sure that decision theories are not designed on that basis.

You are wrong. In fact, this is a totally standard thing to consider, and "avoid back-chaining defection in games of fixed length" is a known problem, with various known strategies.

Comment by Wes_W on Counterfactual Mugging · 2016-10-31T20:18:06.545Z · LW · GW

Yes, that is the problem in question!

If you want the payoff, you have to be the kind of person who will pay the counterfactual mugger, even once you can no longer benefit from doing so. Is that a reasonable feature for a decision theory to have? It's not clear that it is; it seems strange to pay out, even though the expected value of becoming that kind of person is clearly positive before you see the coin. That's what the counterfactual mugging is about.

If you're asking "why care" rhetorically, and you believe the answer is "you shouldn't be that kind of person", then your decision theory prefers lower expected values, which is also pathological. How do you resolve that tension? This is, once again, literally the entire problem.

Comment by Wes_W on Counterfactual Mugging · 2016-10-28T05:30:06.735Z · LW · GW

Your decision is a result of your decision theory, and your decision theory is a fact about you, not just something that happens in that moment.

You can say - I'm not making the decision ahead of time, I'm waiting until after I see that Omega has flipped tails. In which case, when Omega predicts your behavior ahead of time, he predicts that you won't decide until after the coin flip, resulting in hypothetically refusing to pay given tails, so - although the coin flip hasn't happened yet and could still come up heads - your yet-unmade decision has the same effect as if you had loudly precommitted to it.

You're trying to reason in temporal order, but that doesn't work in the presence of predictors.

Comment by Wes_W on Counterfactual Mugging · 2016-10-16T06:49:57.682Z · LW · GW

You're fundamentally failing to address the problem.

For one, your examples just plain omit the "Omega is a predictor" part, which is key to the situation. Since Omega is a predictor, there is no distinction between making the decision ahead of time or not.

For another, unless you can prove that your proposed alternative doesn't have pathologies just as bad as the Counterfactual Mugging, you're at best back to square one.

It's very easy to say "look, just don't do the pathological thing". It's very hard to formalize that into an actual decision theory, without creating new pathologies. I feel obnoxious to keep repeating this, but that is the entire problem in the first place.

Comment by Wes_W on Counterfactual Mugging · 2016-09-14T23:29:34.382Z · LW · GW

> But in the single-shot scenario, after it comes down tails, what motivation does an ideal game theorist have to stick to the decision theory?

That's what the problem is asking!

This is a decision-theoretical problem. Nobody cares about it for immediate practical purposes. "Stick to your decision theory, except when you non-rigorously decide not to" isn't a resolution to the problem, any more than "ignore the calculations since they're wrong" was a resolution to the ultraviolet catastrophe.

Again, the point of this experiment is that we want a rigorous, formal explanation of exactly how, when, and why you should or should not stick to your precommitment. The original motivation is almost certainly in the context of AI design, where you don't HAVE a human homunculus implementing a decision theory, the agent just is its decision theory.

Comment by Wes_W on Counterfactual Mugging · 2016-08-18T17:07:00.779Z · LW · GW

> There will never be any more 10K; there is no motivation any more to give 100. Following my precommitment, unless it is externally enforced, no longer makes any sense.

This is the point of the thought experiment.

Omega is a predictor. His actions aren't just based on what you decide, but on what he predicts that you will decide.

If your decision theory says "nah, I'm not paying you" when you aren't given advance warning or repeated trials, then that is a fact about your decision theory even before Omega flips his coin. He flips his coin, gets heads, examines your decision theory, and gives you no money.

But if your decision theory pays up, then if he flips tails, you pay $100 for no possible benefit.

Neither of these seems entirely satisfactory. Is this a reasonable feature for a decision theory to have? Or is it pathological? If it's pathological, how do we fix it without creating other pathologies?
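To make the tension explicit, here's the ex-ante arithmetic in a few lines. It assumes a fair coin and a perfectly accurate predictor, and uses the $10,000 and $100 figures from the thought experiment:

```python
# Ex-ante expected value of each policy against a perfect predictor.
# On heads, Omega pays the reward only if you *would* have paid on tails.
def expected_value(pays_on_tails, p_heads=0.5, reward=10_000, cost=100):
    if pays_on_tails:
        return p_heads * reward - (1 - p_heads) * cost
    return 0.0  # refusers get nothing on heads and pay nothing on tails

print(expected_value(True))   # 4950.0: paying is better before the flip
print(expected_value(False))  # 0.0
```

Before the flip, the paying policy dominates; after tails comes up, the $100 buys nothing. That gap is the whole puzzle.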

Comment by Wes_W on Counterfactual Mugging · 2016-08-17T15:36:27.101Z · LW · GW

Decision theory is an attempt to formalize the human decision process. The point isn't that we really are unsure whether you should leave people to die of thirst, but how we can encode that in an actual decision theory. Like so many discussions on Less Wrong, this implicitly comes back to AI design: an AI needs a decision theory, and that decision theory needs to not have major failure modes, or at least the failure modes should be well-understood.

If your AI somehow assigns a nonzero probability to "I will face a massive penalty unless I do this really weird action", that ideally shouldn't derail its entire decision process.

The beggars-and-gods formulation is the same problem. "Omega" is just a handy abstraction for "don't focus on how you got into this decision-theoretic situation". Admittedly, this abstraction sometimes obscures the issue.

Comment by Wes_W on Counterfactual Mugging · 2016-08-15T23:49:46.159Z · LW · GW

Precommitments are used in decision-theoretic problems. Some people have proposed that a good decision theory should take the action that it would have precommitted to, if it had known in advance to do such a thing. This is an attempt to examine the consequences of that.

Comment by Wes_W on Fake Explanations · 2016-07-11T02:52:08.334Z · LW · GW

I'm not sure you've described a different mistake than Eliezer has?

Certainly, a student with a sufficiently incomplete understanding of heat conduction is going to have lots of lines of thought that terminate in question marks. The thesis of the post, as I read it, is that we want to be able to recognize when our thoughts terminate in question marks, rather than assuming we're doing something valid because our words sound like things the professor might say.

Comment by Wes_W on Morality of Doing Simulations Is Not Coherent [SOLVED, INVALID] · 2016-06-08T20:31:24.755Z · LW · GW

No part of his objection hinged on reversibility, only the same linearity assumption you rely on to get a result at all.

Comment by Wes_W on Morality of Doing Simulations Is Not Coherent [SOLVED, INVALID] · 2016-06-07T21:44:06.259Z · LW · GW

OK. I think I see what you are getting at.

First, one could simply reject your conclusion:

> However at no point did I do anything that could be described as "simulating you".

The argument here is something like "just because you did the calculations differently doesn't mean your calculations failed to simulate a consciousness". Without a real model of how computation gives rise to consciousness (assuming it does), this is hard to resolve.

Second, one could simply accept it: there are some ways to do a given calculation which are ethical, and some ways that aren't.

I don't particularly endorse either of these, by the way (I hold no strong position on simulation ethics in general). I just don't see how your argument establishes that simulation morality is incoherent.

Comment by Wes_W on Morality of Doing Simulations Is Not Coherent [SOLVED, INVALID] · 2016-06-07T06:24:42.827Z · LW · GW

> From the point of view of physics, it contains garbage,

But a miracle occurs, and your physics simulation still works accurately for the individual components...?

I get that your assumption of "linear physics" gives you this. But I don't see any reason to believe that physics is "linear" in this very weird sense. In general, when you do calculations with garbage, you get garbage. If I time-evolve a simulation of (my house plus a bomb) for an hour, then remove all the bomb components at the end, I definitely do not get the same result as running a simulation with no bomb.

Comment by Wes_W on What makes buying insurance rational? · 2016-04-02T05:28:57.232Z · LW · GW

> And apparently insurance companies can make money because the expected utility of buying insurance is lower than it's price.

No, the expected monetary value of insurance is lower than its price. (Assuming that the insurance company's assessment of your risk level is accurate.) You're equivocating between money and utility, which is the source of your confusion.

Suppose I offered a simple wager: we flip a coin, and if it comes up heads, I give you a million dollars. But if it comes up tails, you owe me a million dollars, and I get every cent you earn until that debt is paid. Is this bet fair?

Monetarily, yes. But even if I skew the odds in your favor a little, maybe 60/40, I'll bet you still don't want to take it. Why not? Isn't an expected return of $200,000 wildly in your favor?

Yeah, but that doesn't matter. An extra million dollars would make your life somewhat better; spending the next twenty years flat broke would make your life drastically worse. Expected utility is very negative.

The utility of money is sometimes claimed to be logarithmic. For small amounts of money you can use a linear approximation, but if the outcome can shift you to a totally different region of the curve, the concavity becomes very important.
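To illustrate with made-up numbers (the wealth figures below are hypothetical; the structural assumptions are log utility and that losing the bet means near-total ruin):

```python
import math

wealth = 50_000          # hypothetical current wealth
p_win = 0.6              # the skewed 60/40 odds from above
stake = 1_000_000

# Expected monetary value: clearly positive.
ev_money = p_win * stake - (1 - p_win) * stake          # +200,000

# Expected log-utility: losing leaves ~$100 of effective wealth while
# the debt is garnished, and log() punishes that outcome severely.
eu_take = p_win * math.log(wealth + stake) + (1 - p_win) * math.log(100)
eu_decline = math.log(wealth)

print(ev_money)              # 200000.0
print(eu_take < eu_decline)  # True: decline the bet despite positive EV
```

Shrink the stake to $100 and the log curve is locally almost linear, so expected money and expected utility agree; it's only the ruin-sized outcome that pulls them apart.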

Comment by Wes_W on Reflexive self-processing is literally infinitely simpler than a many world interpretation · 2015-11-19T04:40:09.071Z · LW · GW

Non-locality, surely? Or "would violate locality"?

Comment by Wes_W on Reflexive self-processing is literally infinitely simpler than a many world interpretation · 2015-11-15T02:27:55.881Z · LW · GW

Because we can't actually get infinite information, but we still want to calculate things.

And in practice, we can in fact calculate things to some level of precision, using a less-than-infinite amount of information.

Comment by Wes_W on The mystery of Brahms · 2015-10-22T15:41:00.338Z · LW · GW

You're right, I missed that line.

Comment by Wes_W on The mystery of Brahms · 2015-10-22T00:56:59.699Z · LW · GW

If I were making music in the style of someone who died six years before I was born, people would probably think I was out of style. I'm not sure if this is the historical fallacy I don't have a name for, where we gloss over differences in a few decades because they're less salient to us than the differences between the 1990s and the 1960s, or if musical styles just change more quickly now.

Comment by Wes_W on The Infinity Project · 2015-10-03T18:52:33.705Z · LW · GW

I spent a long time associating Amazon with "something in South America, so it's probably not accessible to me" before the company was as ultra-famous as it is now.

Comment by Wes_W on Help Build a Landing Page for Existential Risk? · 2015-08-21T00:55:27.523Z · LW · GW

On the other hand, asteroid mining technologies have some risks of their own, although this only reaches "existential" if somebody starts mining the big ones.

The largest nuclear weapon was the Tsar Bomba: 50 megatonnes of TNT, roughly equivalent to a 3.3-million-tonne impactor. Asteroids larger than this are thought to number in the tens of millions, and at the time of writing only 1.1 million had been provisionally identified. Asteroid shunting at or beyond this scale is by definition a trans-nuclear technology, which means a point comes where the necessary level of trust is unprecedented.
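As a back-of-envelope check on that equivalence (my own arithmetic, assuming the slowest possible impact, at roughly Earth escape velocity, and the standard ~4.184e15 J per megatonne of TNT):

```python
J_PER_MEGATONNE = 4.184e15   # joules per megatonne of TNT
TSAR_BOMBA_MT = 50
ESCAPE_VELOCITY = 11_186.0   # m/s, minimum speed for a body falling in from afar

energy = TSAR_BOMBA_MT * J_PER_MEGATONNE
mass_kg = 2 * energy / ESCAPE_VELOCITY**2    # from E = (1/2) m v^2
mass_tonnes = mass_kg / 1000

print(f"{mass_tonnes:.2e}")  # roughly 3.3 million tonnes
```

Faster impacts carry more energy per unit mass, so this is the heaviest impactor the Tsar Bomba's yield corresponds to.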

Comment by Wes_W on 0 And 1 Are Not Probabilities · 2015-08-20T18:05:37.414Z · LW · GW

You're right, I misread your sentence about "his personal preferences" as referring to the whole claim, rather than specifically the part about what's "mentally healthy". I don't think we disagree on the object level here.

Comment by Wes_W on 0 And 1 Are Not Probabilities · 2015-08-20T17:27:16.390Z · LW · GW

Cromwell's Rule is not EY's invention, and relatively uncontroversial for empirical propositions (as opposed to tautologies or the like).

If you don't accept treating probabilities as beliefs and vice versa, then this whole conversation is just a really long and unnecessarily circuitous way to say "remember that you can be wrong about stuff".

Comment by Wes_W on 0 And 1 Are Not Probabilities · 2015-08-20T17:02:33.234Z · LW · GW

If we're asking what the author "really meant" rather than just what would be correct, it's on record.

> The argument for why zero and one are not probabilities is not, "All objects which are special cases should be cast out of mathematics, so get rid of the real zero because it requires a special case in the field axioms", it is, "ceteris paribus, can we do this without the special case?" and a bit of further intuition about how 0 and 1 are the equivalents of infinite probabilities, where doing our calculations without infinities when possible is ceteris paribus regarded as a good idea by certain sorts of mathematicians. E.T. Jaynes in "Probability Theory: The Logic of Science" shows how many probability-theoretic errors are committed by people who assume limits directly into their calculations, without first showing the finite calculation and then finally taking its limit. It is not unreasonable to wonder when we might get into trouble by using infinite odds ratios. Furthermore, real human beings do seem to often do very badly on account of claiming to be infinitely certain of things so it may be pragmatically important to be wary of them.

I... can't really recommend reading the entire thread at the link, it's kind of flame-war-y and not very illuminating.

Comment by Wes_W on Open thread, Aug. 17 - Aug. 23, 2015 · 2015-08-19T22:26:36.112Z · LW · GW

> What do the following have in common?

You focused on akrasia, and obviously this is a component.

My guess was: they're all wildly underdetermined. "Cheer up" isn't a primitive op. "Don't have sex" or "eat less and exercise more" sound like they might be primitive ops, but can be cashed out in many different ways. "Eat less and exercise more, without excessively disrupting your career/social life/general health/etc" is not a primitive op at all, and may require many non-obvious steps.

Comment by Wes_W on An Alien God · 2015-08-12T00:46:18.898Z · LW · GW

I am the downvoter, although another one seems to have found you since. I found your comment to be a mixture of "true, but irrelevant in the context of the quote", and a restatement of non-novel ideas. This is admittedly a harsh standard to apply to a first comment (particularly since you may not have yet even read the other stuff that duplicates your point about human designers being able to avoid local optima!), so I have retracted my downvote.

Welcome to the site, I hope I haven't turned you off.

Comment by Wes_W on The horrifying importance of domain knowledge · 2015-08-08T21:42:52.128Z · LW · GW

> Good I'm glad we agree on this. Now, why are you trying to defend positions that rely on denying this claim?

I'm not. I entered this discussion mostly to point out that you were equating "corresponds to social behavior" with "does not correspond to anything", which is silly.

It's worse than gender not corresponding to anything. Like in the standard example, it corresponds to multiple things, which don't necessarily agree.


> Yes, and creeps, or example, want to be treated as a woman with respect to which bathroom they enter.

Do they? I mean, as a theoretical problem, sure. But to my knowledge this is a vanishingly rare event.

Comment by Wes_W on Your Strength as a Rationalist · 2015-08-08T06:39:39.544Z · LW · GW

No. Two possibilities, not just one:


Comment by Wes_W on The horrifying importance of domain knowledge · 2015-08-08T02:45:30.608Z · LW · GW

> So a man getting an ID card with a typo in the gender field makes him female?

Legally, maybe so, at least until the error is corrected. You'd have to ask a lawyer to be sure.

ID cards are physical objects, and their contents are not determined by biological sex: as a question of legal fact, one can get an ID card matching one's self-identified gender by jumping through the appropriate hoops, even without sex reassignment surgery. (At least, that's how it works here in California. I have no idea how it works in other states or countries.)

This seems to me a counterexample to the claim that gender, as distinct from sex, doesn't correspond to anything. Social interaction is another: for example, women are much more likely to ask each other if they want old clothes before giving/throwing them away, and much less likely to get asked to be someone's Best Man at a wedding.

How about not "every five minutes", but whenever he feels like going to the women's bathroom to ogle/be generally creepy?

By far the dominant hypothesis here would be "you're lying", but failing that probably yes, gender identities aren't supposed to be able to work that way.

"Your gender is whatever you say it is" is a social norm, not a factual claim. Saying you're a woman doesn't make you a woman. People just don't generally assert it unless they actually want to be treated as a woman. Creeps, or other people lying for personal gain, seem exceptionally rare - probably because it's a giant hassle, and the institutions they'd want to take advantage of don't obey that norm anyway.

If transition ever became socially easy and stigma-free, we probably would need a different anti-creep mechanism.

I agree that genderfluid people might break gjm's model, although he seems to have some wiggle room as written. Of course, I don't know if this is a deliberate result of accounting for their existence, or a lucky accident.

Comment by Wes_W on The horrifying importance of domain knowledge · 2015-08-08T01:10:20.349Z · LW · GW

> So you agree that "gender" as distinct from "sex" doesn't correspond to anything,

I'm pretty sure that ID cards and human interaction are territory, not map. Please don't do the "social constructs basically don't exist" thing, it's very silly.

The discussion of a hypothetical person who wants to change gender (but nothing else) every five minutes is giving me a vibe similar to when someone asks "how does evolution explain a monkey giving birth to a human?" It doesn't. That would falsify the model, much like our hypothetical person would falsify the "gender identity" model.

There exists a group of people who explicitly claim to have gender identities that are not stable over time, but this usually includes behaviors beyond requested pronouns.

> Of course, if you do observe internal mind state, e.g., by using sufficiently good brain scans or personality tests, you'd like find that most of the people claiming to be "trans" are clustered with their birth gender.

Hey, an empirical disagreement! I think this research has in fact been done, I'll go digging for it later this evening.

Comment by Wes_W on Catastrophe Engines: A possible resolution to the Fermi Paradox · 2015-07-28T22:04:21.704Z · LW · GW

So instead of every civ filling its galaxy, we get every civ building one in every galaxy. For this to not result in an Engine on every star, you still have to fine-tune the argument such that new civs are somehow very rare.

There are some hypotheticals where the details are largely irrelevant, and you can back up and say "there are many possibilities of this form, so the unlikeliness of my easy-to-present example isn't the point". "Alien civs exist, but prefer to spread out a lot" does not appear to be such a solution. As such, the requirement for fine-tuning and multiple kinds of exotic physics seem to me like sufficiently burdensome details that this makes a bad candidate.

Comment by Wes_W on Catastrophe Engines: A possible resolution to the Fermi Paradox · 2015-07-28T18:19:25.731Z · LW · GW

> The second civilization would just go ahead and build them anyways, since doing so maximizes their own utility function.

Then why isn't there an Engine on every star?

Comment by Wes_W on Catastrophe Engines: A possible resolution to the Fermi Paradox · 2015-07-28T07:38:35.200Z · LW · GW

Implausible premises aside, I'm not convinced this actually resolves the paradox.

> The first spacefaring civilization fills the galaxy/universe with Catastrophe Engines at the maximum usable density.

But now the second spacefaring civilization doesn't have any room to build Catastrophe Engines, so they colonize space the regular way. And we're right back at the original problem: either life has to be rare enough that everybody has room to build Engines, or there's lots of life out there that had to expand the non-Engine way but we somehow can't see them.

Comment by Wes_W on Policy Debates Should Not Appear One-Sided · 2015-07-21T20:59:59.404Z · LW · GW

But... you can already buy many items that are lethal if forcefully shoved down someone's throat. Knives, for example. It's not obvious to me that a lack of lethal drugs is currently preventing anyone from hurting people, especially since many already-legal substances are very dangerous to pour down someone's throat.

From the Overcoming Bias link, "risky buildings" seem to me the clearest example of endangering people other than the buyer.

Comment by Wes_W on Crazy Ideas Thread · 2015-07-15T00:38:48.988Z · LW · GW

> No one does it for the fun of it.

A friend of mine does. Not "fun" per se, but she derives enjoyment and satisfaction from it.

Comment by Wes_W on A Parable On Obsolete Ideologies · 2015-07-12T06:28:11.538Z · LW · GW

I can see in the Recent Comments sidebar that your comment starts with "[There is no such thing as ", but the actual text of your comment appears blank. Something might be screwy with your link syntax?

Comment by Wes_W on There is no such thing as strength: a parody · 2015-07-11T05:29:28.635Z · LW · GW

> Interesting. Can I ask you to unpack this statement? I'm curious what exactly you're comparing.

The difference between "has practiced a movement to mastery" and "has never performed a movement before" can be very large, like my powerlifter/snatch example in the other comment. But this is comparing zero practice to a very large amount of practice over a very long period of time. I would find it easy to believe that IQ tests see much greater returns from small amounts of practice.

Comment by Wes_W on There is no such thing as strength: a parody · 2015-07-11T03:55:15.602Z · LW · GW

It does! It's pretty reasonable to say that I'm much stronger than the average non-athlete, and Dan Green is much stronger than me, and all the fiddly caveats don't really change that analysis.

Does this work better or worse than IQ? I'm not sure.

Comment by Wes_W on There is no such thing as strength: a parody · 2015-07-09T09:28:10.797Z · LW · GW

> Also, you bias IQ tests if you repeatedly take them, but you don't do likewise with strength tests so it's much easier to track changes in an individual's strength over time and most anyone whose lifts weights can objectively verify that he has become stronger.

Strength tests are absolutely biased by taking them repeatedly. Athletes call this "specificity".

Comment by Wes_W on There is no such thing as strength: a parody · 2015-07-09T09:28:08.536Z · LW · GW

> How do you define/determine this?

The standard definition of strength, which the post cleverly avoided ever stating, is "the ability to produce force against external resistance" or some variant thereof. Force is a well-defined physics term, and can be measured pretty directly in a variety of ways.

Isn't there an "obvious" causal relationship between brain mass and intelligence?

No. Whales aren't smarter than humans.

If by "obvious" you mean "the sort of thing you might guess from first principles", then both are obvious. But the muscle-strength relationship is obvious in another sense: in actual data, it will leap out at you as a very large factor. For example, 97% of variance in strength between sexes is accounted for by muscle mass, and one of the strongest predictors of performance in powerlifters is muscle mass per unit height.

Comment by Wes_W on Argument Screens Off Authority · 2015-07-08T06:54:47.239Z · LW · GW

Given a field with no expert consensus, where you can't just check things yourself, shouldn't the rational response be uncertainty?

I don't think global warming fits this description, though. AFAIK domain experts almost all broadly agree.

Comment by Wes_W on There is no such thing as strength: a parody · 2015-07-07T00:39:59.732Z · LW · GW

Some subcomponents aren't skills - or at least, it seems odd to label e.g. "unusually long arms" as a skill - but this is a nitpick.

Comment by Wes_W on There is no such thing as strength: a parody · 2015-07-06T06:30:32.443Z · LW · GW

> It is interesting, though, how non-general strength is.

There is indeed a widely (unwittingly) held idea that "strength" is a one-dimensional thing: consider, say, superhero comics where the Hulk is stronger than anybody else, which means he's stronger at everything. You never read a comic where the Hulk is stronger at lifting things but Thor is stronger at throwing; that would feel really weird to most people. If the Marvel universe had a comic about strength sports, the Hulk would be the best at every sport.

But this isn't at all how strength works in the real world: there is a pretty large component of specificity. Very, very few athletes are competitive at high levels in even two strength sports, never mind all of them. Giant male powerlifters frequently have a snatch weaker than tiny female weightlifters, despite having dramatically more lean body mass, and naturally higher testosterone, and (usually) the benefit of performance-enhancing drugs. And if you put a powerlifter in the Highland Games - a contest of strength via various throwing events - well, they'd be hopeless!

To a strength athlete, this is obvious. Of course powerlifters have a lousy snatch! Most powerlifters don't even train the snatch! Strength isn't just about raw muscle mass; there is a very large component of skill, technique, and even neural adaptation to specific movement patterns.

But under the folk one-dimensional model of strength, this is a strange and surprising fact.

Comment by Wes_W on Top 9+2 myths about AI risk · 2015-06-30T23:04:37.206Z · LW · GW

If we could build a working AGI that required a billion dollars of hardware for world-changing results, why would Google not throw a billion dollars of hardware at it?

Comment by Wes_W on The Joy of Bias · 2015-06-12T21:41:19.667Z · LW · GW

> Given that I am wrong, I would prefer being proven wrong to not being proven wrong.

Yours is probably the central case, but "prove me wrong" and "I hope I'm wrong" aren't unheard-of sentiments. For example, a doctor giving a grim diagnosis. I think this can only (?) happen when the (perceived) value on the object level outweighs concerns about ego.

Comment by Wes_W on Debunking Fallacies in the Theory of AI Motivation · 2015-05-11T04:34:22.940Z · LW · GW

It appears to me that ChristianKI just listed four. Did you have something specific in mind?

Comment by Wes_W on A sense of logic · 2015-05-09T04:30:06.009Z · LW · GW

> In the above explained situations I would say that in that case simply put their are multiple answers each of which can in the eyes of a different person he true or false.

Yes, except often it really is important to nail down which question we're asking, rather than just accepting that different interpretations yield different answers.

> Like he killed a man so its bad BUT that man who was killed had also killed a man so it was good. Choose one it cant be both and the judge of any court knows that.

In logic, we have the law of excluded middle, which states that truth and falsehood are the only possibilities, and they are mutually exclusive.

There is no such law for "good" and "bad". There is no reason whatsoever that a single action can't have two (or more) consequences which, in isolation, would be unmitigated good or bad. I once took a medication which successfully treated a medical problem (good), but gave me constant nausea (bad), which incidentally made me lose weight (good), but caused me to develop dysregulated eating habits (bad), which eventually prompted me to eat healthier (good), which sometimes causes me stress in social situations (bad), which...

Now you can, in principle, sum up and compare the goodness and the badness and reach an overall verdict (assuming you subscribe to something like utilitarianism), and then you can say that on balance a certain thing was good or bad. But in practice, this is very often prohibitively difficult. Sometimes, the best answer is to just admit that you don't know, that there are points in both columns but you can't be sure which outweighs the other.

There are also lots of ways to get this wrong, and I certainly agree that dealing with sloppy reasoning is frustrating.

Comment by Wes_W on A sense of logic · 2015-05-09T03:23:14.735Z · LW · GW

Not all statements are precise enough to be nailed down as definitely true or false. If there's any leeway or ambiguity in exactly what is being stated, there might also be ambiguity in whether it's true or false.

As a trivial example, consider this statement: "If a tree falls in the forest, and there's nobody around to hear it, it doesn't make a sound". Is the statement true or false? Well, it depends on what you mean by "sound": if you mean acoustic vibrations in the air, the tree does make a sound and the statement is false; if you mean auditory experiences induced in a brain, the tree does not make a sound and the statement is true.

Much more complicated cases are possible, and come up pretty regularly. Politics and the sciences very frequently have debates where nobody has quite nailed down precisely what proposition is being debated. For example, Slate Star Codex has an ongoing series of posts about disagreements over what "growth mindset" even is, which is very relevant to whether or not claims about growth mindset are true.

You might enjoy the sequence on 37 Ways That Words Can Be Wrong, from which I have shamelessly stolen the above example.

Comment by Wes_W on Is there a list of cognitive illusions? · 2015-05-06T21:40:09.254Z · LW · GW

I don't think this is carving reality at the joints.

The free will illusion, at least as presented by Yudkowsky, is that we don't know our own planning algorithm, and understanding how it (probably) works dissolves the illusion, so that "do I have free will" stops even seeming like a question to ask. The illusion is that there was a question at all. The relevant category to watch for is when lots of people want an answer even though nobody can nail down exactly what the question is, or how to tell when you have an answer.

This is a much more specific phenomenon than "elaborate structures", which includes pretty much everything except fundamental particles or the like.

Comment by Wes_W on Is there a list of cognitive illusions? · 2015-05-06T08:06:08.958Z · LW · GW

Can you clarify what you mean by "cognitive illusion"? I don't see why your other three examples should be grouped in that category with free will.

Comment by Wes_W on Torture vs. Dust Specks · 2015-04-11T05:46:10.709Z · LW · GW

Yes, this is a failure mode of (some forms of?) utilitarianism, but not the specific weirdness I was trying to get at, which was that if you aggregate by min(), then it's completely morally OK to do very bad things to huge numbers of people - in fact, it's no worse than radically improving huge numbers of lives - as long as you avoid affecting the one person who is worst-off. This is a very silly property for a moral system to have.

You can attempt to mitigate this property with too-clever objections, like "aha, but if you kill a happy person, then in the moment of their death they are temporarily the most unhappy person, so you have affected the metric after all". I don't think that actually works, but didn't want it to obscure the point, so I picked "kill their dog" as an example, because it's a clearly bad thing which definitely doesn't bump anyone to the bottom.