Comment by Kindly on Too Much Effort | Too Little Evidence · 2017-01-25T22:02:06.081Z · LW · GW

Based on our rational approach we are at a disadvantage for discovering these truths.

Is that a bad thing?

Because lotteries cost more to play than the chance of winning is worth, someone who understands basic probability will not buy lottery tickets. That puts them at a disadvantage for winning the lottery. But it gives them an overall advantage in having more money, so I don't see it as a problem.

The situation you're describing is similar. If you dismiss beliefs that have no evidence from a reference class of mostly-false beliefs, you're at a disadvantage in knowing about unlikely-but-true facts that have yet to become mainstream. But you're also not paying the opportunity cost of trying out many unlikely ideas, most of which don't pan out. Overall, you're better off, because you have more time to pursue more promising ways to satisfy your goals.

(And if you're not better off overall, there's a different problem. Are you consistently underestimating how useful unlikely fringe beliefs that take lots of effort to test might be, if they were true? Then yes, that's a problem that can be solved by trying out more fringe beliefs that take lots of effort to test. But it's a separate problem from the problem of "you don't try things that look like they aren't worth the opportunity cost.")

Comment by Kindly on Too Much Effort | Too Little Evidence · 2017-01-25T17:40:17.394Z · LW · GW

When lacking evidence, the testing process is difficult, weird and lengthy - and in light of the 'saturation' mentioned in [5.1] - I claim that, in most cases, the cost-benefit analysis will result in the decision to ignore the claim.

And I think that this is inarguably the correct thing to do, unless you have some way of filtering out the false claims.

From the point of view of someone who has a true claim but doesn't have evidence for it and can't easily convince someone else, you're right that this approach is frustrating. But if I were to relax my standards, the odds are that I wouldn't start with your true claim, but start working my way through a bunch of other false claims instead.

Evidence, in the general sense of "some way of filtering out the false claims", can take on many forms. For example, I can choose to try out lucid dreaming, not because I've found scientific evidence that it works, but because it's presented to me by someone from a community with a good track record of finding weird things that work. Or maybe the person explaining lucid dreaming to me is scrupulously honest and knows me very well, so that when they tell me "this is a real effect and has effects you'll find worth the cost of trying it out", I believe them.

Comment by Kindly on Too Much Effort | Too Little Evidence · 2017-01-25T05:35:48.850Z · LW · GW

Rational assessment can be misleading when dealing with experiential knowledge that is not yet scientifically proven, has no obvious external function but is, nevertheless, experientially accessible.

So, uh, is the typical claim that has an equal lack of scientific evidence true, or false? (Maybe if we condition on how difficult it is to prove.)

If true - then the rational assessment would be to believe such claims, and not wait for them to be scientifically proven.

If false - then the rational assessment would be to disbelieve such claims. But for most such claims, this is the right thing to do! It's true that person A has actually got hold of a true claim that there's no evidence for. But there's many more people making false claims with equal evidence; why should B believe A, and not believe those other people?

(More precisely, we'd want to do a cost-benefit analysis of believing/disbelieving a true claim vs. a comparably difficult-to-test false claim.)

Comment by Kindly on Infinite Summations: A Rationality Litmus Test · 2017-01-23T04:31:43.866Z · LW · GW

I think that in the interests of being fair to the creators of the video, you should link to the explanation written by (at least one of) the creators of the video, which addresses some of the complaints.

In particular, let me quote the final paragraph:

There is an enduring debate about how far we should deviate from the rigorous academic approach in order to engage the wider public. From what I can tell, our video has engaged huge numbers of people, with and without mathematical backgrounds, and got them debating divergent sums in internet forums and in the office. That cannot be a bad thing and I'm sure the simplicity of the presentation contributed enormously to that. In fact, if I may return to the original question, "what do we get if we sum the natural numbers?", I think another answer might be the following: we get people talking about Mathematics.

In light of this paragraph, I think a cynical answer to the litmus test is this. Faced with such a ridiculous claim, it's wrong to engage with it only on the subject level, where your options are "Yes, I will accept this mathematical fact, even though I don't understand it" or "No, I will not accept this fact, because it flies in the face of everything I know." Instead, you have to at least consider the goals of the person making the claim. Why are they saying something that seems obviously false? What reaction are they hoping to get?

Comment by Kindly on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2017-01-07T16:49:53.241Z · LW · GW

No, I think I meant what I said. I think that this song lyric can in fact only make a difference given a large pre-existing weight, and I think the distribution of being weirded out by Solstices is bimodal: there aren't people who are moderately weirded out, but not quite enough to leave.

Comment by Kindly on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2016-12-22T02:33:57.027Z · LW · GW

It's extremely unlikely that people exist who aren't weirded out by Solstices in general, but for whom one song lyric is the straw that breaks the camel's back.

Comment by Kindly on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-30T05:08:29.801Z · LW · GW

Not quite. I outlined the things that have to be going on for me to be making a decision.

Comment by Kindly on Two-boxing, smoking and chewing gum in Medical Newcomb problems · 2015-06-29T19:21:54.009Z · LW · GW

In the classic problem, Omega cannot influence my decision; it can only figure out what it is before I do. It is as though I am solving a math problem, and Omega solves it first; the only confusing bit is that the problem in question is self-referential.

If there is a gene that determines what my decision is, then I am not making the decision at all. Any true attempt to figure out what to do is going to depend on my understanding of logic, my familiarity with common mistakes in similar problems, my experience with all the arguments made about Newcomb's problem, and so on; if, despite all that, the box I choose has been determined since my birth, then none of these things (none of the things that make up me!) are a factor at all. Either my reasoning process is overridden in one specific case, or it is irreparably flawed to begin with.

Comment by Kindly on Open Thread, Jun. 8 - Jun. 14, 2015 · 2015-06-12T23:53:52.081Z · LW · GW

Let's assume that every test has the same probability of returning the correct result, regardless of what it is (e.g., if + is correct, then Pr[A returns +] = 12/20, and if - is correct, then Pr[A returns +] = 8/20).

The key statistic for each test is the ratio Pr[X is positive|disease] : Pr[X is positive|healthy]. This ratio is 3:2 for test A, 4:1 for test B, and 5:3 for test C. If we assume independence, we can multiply these together, getting a ratio of 10:1.

If your prior is Pr[disease]=1/20, then Pr[disease] : Pr[healthy] = 1:19, so your posterior odds are 10:19. This means that Pr[disease|+++] = 10/29, just over 1/3.

You may have obtained 1/2 by a double confusion between odds and probabilities. If your prior had been Pr[disease]=1/21, then we'd have prior odds of 1:20 and posterior odds of 1:2 (which is a probability of 1/3, not of 1/2).
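The odds arithmetic above can be checked mechanically. Here's a short sketch using exact fractions; the 4:1 and 5:3 ratios for tests B and C are taken as given, since their underlying probabilities aren't stated:

```python
from fractions import Fraction

# Likelihood ratios Pr[+ | disease] : Pr[+ | healthy] for the three tests.
ratio_a = Fraction(12, 20) / Fraction(8, 20)  # test A: 3:2
ratio_b = Fraction(4, 1)                      # test B: given as 4:1
ratio_c = Fraction(5, 3)                      # test C: given as 5:3

combined = ratio_a * ratio_b * ratio_c        # 10:1, assuming independence

prior_odds = Fraction(1, 19)                  # Pr[disease] = 1/20, i.e. 1:19
posterior_odds = prior_odds * combined        # 10:19
posterior_prob = posterior_odds / (1 + posterior_odds)

print(posterior_odds, posterior_prob)  # 10/19 10/29
```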

Comment by Kindly on What Would You Do If You Only Had Six Months To Live? · 2015-05-20T04:11:47.741Z · LW · GW

If you're looking for high-risk activities that pay well, why are you limiting yourself to legal options?

Comment by Kindly on Open Thread, May 11 - May 17, 2015 · 2015-05-12T15:21:36.879Z · LW · GW

On the subject of Arimaa, I've noted a general feeling of "This game is hard for computers to play -- and that makes it a much better game!"

Progress of AI research aside, why should I care if I choose a game in which the top computer beats the top human, or one in which the top human beats the top computer? (Presumably both the top human and the top computer can beat me, in either case.)

Is it that in go, you can aspire (unrealistically, perhaps) to be the top player in the world, while in chess, the highest you can ever go is a top human that will still be defeated by computers?

Or is it that chess, which computers are good at, feels like a solved problem, while go still feels mysterious and exciting? Not that we've solved either game in the sense of having solved tic-tac-toe or checkers. And I don't think we should care too much about having solved checkers either, for the purposes of actually playing the game.

Comment by Kindly on Open Thread, May 11 - May 17, 2015 · 2015-05-11T23:37:06.328Z · LW · GW

This ought to be verified by someone to whom the ideas are genuinely unfamiliar.

Comment by Kindly on Logical Pinpointing · 2015-05-07T14:53:17.116Z · LW · GW

I know that's what you're trying to say because I would like to be able to say that, too. But here are the problems we run into.

  1. Try writing down "For all x, some number of subtract 1's cause it to equal 0". We can write "∀x. ∃y. F(x,y) = 0", but in place of F(x,y) we want "y iterations of subtract 1's from x". This is not something we can write down in first-order logic.

  2. We could write down sub(x,y,0) (in your notation) in place of F(x,y)=0 on the grounds that it ought to mean the same thing as "y iterations of subtract 1's from x cause it to equal 0". Unfortunately, it doesn't actually mean that because even in the model where pi is a number, the resulting axiom "∀x. ∃y. sub(x,y,0)" is true. If x=pi, we just set y=pi as well.

The best you can do is to add an infinitely long axiom "x=0 or x = S(0) or x = S(S(0)) or x = S(S(S(0))) or ..."

Comment by Kindly on Logical Pinpointing · 2015-05-05T14:40:28.259Z · LW · GW

Repeating S n times is not addition: addition is the thing defined by those axioms, no more, and no less. You can prove the statements:

∀x. plus(x, 1, S(x))

∀x. plus(x, 2, S(S(x)))

∀x. plus(x, 3, S(S(S(x))))

and so on, but you can't write "∀x. plus(x, n, S(S(...n...S(x))))" because that doesn't make any sense. Neither can you prove "For every x, x+n is reached from x by applying S to x some number of times" because we don't have a way to say that formally.
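A minimal formal sketch (in Lean 4, with names of my own choosing) makes this concrete: defining plus as an inductive relation, each numbered instance follows by finitely many applications of the step rule, but "finitely many applications" is exactly the notion the first-order theory itself cannot quantify over.

```lean
inductive MyNat where
  | zero
  | succ (n : MyNat)

-- plus(x, y, z) as an inductively defined relation: the analogue of the
-- axioms plus(x, 0, x) and plus(x, y, z) → plus(x, S(y), S(z)).
inductive Plus : MyNat → MyNat → MyNat → Prop where
  | base (x : MyNat) : Plus x .zero x
  | step : Plus x y z → Plus x (.succ y) (.succ z)

-- Each concrete instance needs one more application of `step`:
example (x : MyNat) : Plus x (.succ .zero) (.succ x) :=
  .step (.base x)

example (x : MyNat) : Plus x (.succ (.succ .zero)) (.succ (.succ x)) :=
  .step (.step (.base x))
```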

From outside the Peano Axioms, where we have our own notion of "number", we can say that "Adding N to x is the same as taking the successor of x N times", where N is a real-honest-to-god-natural-number. But even from the outside of the Peano Axioms, we cannot convince the Peano Axioms that there is no number called "pi". If pi happens to exist in our model, then all the values ..., pi-2, pi-1, pi, pi+1, pi+2, ... exist, and together they can be used to satisfy any theorem about the natural numbers you concoct. (For instance, sub(pi, pi, 0) is a true statement about subtraction, so the statement "∀x. sub(x, x, 0)" can be proven but does not rule out pi.)

Comment by Kindly on Is Determinism A Special Case Of Randomness? · 2015-05-05T05:51:36.668Z · LW · GW

What makes you think that decision making in our brains is free of "regular certainty in physics"? Deterministic systems such as weather patterns can be unpredictable enough.

To be fair, if there's some butterfly-effect nonsense going on where the exact position of a single neuron ends up determining your decision, that's not too different from randomness in the mechanics of physics. But I hope that when I make important decisions, the outcome is stable enough that it wouldn't be influenced by either of those.

Comment by Kindly on Disputing Definitions · 2015-05-03T16:49:15.934Z · LW · GW

I'd say this is not needed, when people say "Snow is white" we know that it really means "Snow seems white to me", so saying it as "Snow seems white to me" adds length without adding information.

Ah, but imagine we're all-powerful reformists that can change absolutely anything! In that case, we can add a really simple verb that means "seems-to-me" (let's say "smee" for short) and then ask people to say "Snow smee white".

Of course, this doesn't make sense unless we provide alternatives. For instance, "er" for "I have heard that", as in "Snow er white, though I haven't seen it myself" or "The dress er gold, but smee blue."

Comment by Kindly on Stupid Questions May 2015 · 2015-05-02T20:18:55.652Z · LW · GW

Insurance makes a profit in expectation, but an insurance salesman does have some tiny chance of bankruptcy, though I agree that this is not important. What is important, however, is that an insurance buyer is not guaranteed a loss, which is what distinguishes it from other Dutch books for me.

Prospect theory and similar ideas are close to an explanation of why the Allais Paradox occurs. (That is, why humans pick gambles 1A and 2B, even though this is inconsistent.) But, to my knowledge, while utility theory is both a (bad) model of humans and a guide to how decisions should be made, prospect theory is a better model of humans but often describes errors in reasoning.

(That is, I'm sure it prevents people from doing really stupid things in some cases. But for small bets, it's probably a bad idea; Kahneman suggests teaching yourself out of it by making yourself think ahead to how many such bets you'll make over a lifetime. This is a frame of mind in which the risk thing is less of a factor.)

Comment by Kindly on Stupid Questions May 2015 · 2015-05-02T19:36:43.206Z · LW · GW

Yyyyes and no. Our utility functions are nonlinear, especially with respect to infinitesimal risk, but this is not inherently bad. There's no reason for our utility to be everywhere linear with wealth: in fact, it would be very strange for someone to equally value "Having $1 million" and "Having $2 million with 50% probability, and having no money at all (and starving on the street) otherwise".

Insurance does take advantage of this, and it's weird in that both the insurance salesman and the buyers of insurance end up better off in expected utility, but it's not a Dutch Book in the usual sense: it doesn't guarantee either side a profit.

The Allais paradox points out that people are not only averse to risk, but also inconsistent about how they are averse to it. The utility function U(X cents) = X is not risk-averse, and it picks gambles 1B and 2B (in Wikipedia's notation). The utility function U(X cents) = log X is extremely risk-averse, and it picks gambles 1A and 2A. Picking gambles 1A and 2B, on the other hand, cannot be described by any utility function.
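A quick sketch of the arithmetic, using Wikipedia's gambles with payoffs in cents. Flooring log's argument at one cent to keep U($0) finite is my own device, not part of the paradox:

```python
import math

M = 100 * 10**6  # $1 million in cents

# Wikipedia's Allais gambles as (probability, payoff-in-cents) lists.
gambles = {
    "1A": [(1.00, M)],
    "1B": [(0.89, M), (0.10, 5 * M), (0.01, 0)],
    "2A": [(0.11, M), (0.89, 0)],
    "2B": [(0.10, 5 * M), (0.90, 0)],
}

def expected_utility(gamble, u):
    return sum(p * u(x) for p, x in gamble)

def choices(u):
    # The gamble with the larger expected utility in each experiment.
    pick = lambda a, b: max((a, b), key=lambda g: expected_utility(gambles[g], u))
    return pick("1A", "1B"), pick("2A", "2B")

linear = lambda x: x
log_util = lambda x: math.log(max(x, 1))  # floor at one cent, so log(0) never occurs

print(choices(linear))    # ('1B', '2B'): risk-neutral takes both gambles
print(choices(log_util))  # ('1A', '2A'): extreme risk aversion refuses both
```

The reason no utility function can pick 1A and 2B: for any U, both EU(1A) − EU(1B) and EU(2A) − EU(2B) equal 0.11·U($1M) − 0.10·U($5M) − 0.01·U($0), so the two choices must always go the same way.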

There's a Dutch book for the Allais paradox in this post, in the part after "money pump".

Comment by Kindly on Mental Context for Model Theory · 2015-05-01T03:45:20.163Z · LW · GW

When it comes to neutral geometry, nobody's ever defined "parallel lines" in any way other than "lines that don't intersect". You can talk about slopes in the context of the Cartesian model, but the assumptions you're making to get there are far too strong.

As a consequence, no mathematicians ever tried to "prove that parallel lines don't intersect". Instead, mathematicians tried to prove the parallel postulate in one of its equivalent forms, of which some of the more compelling or simple are:

  • The sum of the angles in a triangle is 180 degrees. (Defined to equal two right angles.)

  • There exists a quadrilateral with four right angles.

  • If two lines are parallel to the same line, they are parallel to each other.

It's also somewhat misleading to say that mathematicians were mainly motivated by the inelegance of the parallel postulate. Though this was true for some mathematicians, it's hard to say that the third form of the parallel postulate which I gave is any less elegant, as an axiom, than "If two line segments are congruent to the same line segment, then they are congruent to each other". Some form of the latter was assumed both by Euclid (his first Common Notion) and by all of his successors.

A stronger motivation for avoiding the parallel postulate is that so much can be done without it that one begins to suspect it might be unnecessary.

Comment by Kindly on Mental Context for Model Theory · 2015-04-30T23:50:05.575Z · LW · GW

Understandable; perhaps. In mathematics, it is very easy to say understandable things that are simply false. In this case, those false things become nonsense when you realize that the meaning of "parallel lines" is "lines that do not intersect".

You might say that even if an explanation gets these facts completely wrong, it is still a good explanation if it makes you think the right things. I say that such an explanation goes against the spirit of all mathematics. It is not enough that your argument is understandable, for many understandable arguments have later turned out to be incoherent. It is not enough that your argument is believable, for many believable arguments have later turned out to be false.

If you want to do good mathematics, the statements you make must be true.

Comment by Kindly on What's in a name? That which we call a rationalist… · 2015-04-30T02:04:40.074Z · LW · GW

Only a single mile to the mile? I've seen maps in biology textbooks that were much larger than that.

Comment by Kindly on Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" · 2015-04-27T20:14:07.816Z · LW · GW

Okay, then interpret my answer as "rape and murder are bad because they make others sad, and making others sad is bad by definition".

Comment by Kindly on Mental Context for Model Theory · 2015-04-27T19:04:46.877Z · LW · GW

The tone is well-deserved. This is a serious mistake that renders all further discussion of geometry in the post nonsensical.

Comment by Kindly on Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" · 2015-04-27T18:09:46.506Z · LW · GW

You can always keep asking why. That's not particularly interesting.

Comment by Kindly on Memory is Everything · 2015-04-27T12:13:27.232Z · LW · GW

It occurs to me that we can express this problem in the following isomorphic way:

  1. Omega makes an identical copy of you.

  2. One copy exists for a week. You get to pick whether that week is torture or nirvana.

  3. The other copy continues to exist as normal, or maybe is unconscious for a week first, and depending on what you picked for step 2, it may lose or receive lots of money.

I'm not sure how enlightening this is. But we can now tie this to the following questions, which we also don't have answers to: is an existence of torture better than no existence at all? And is an existence of nirvana good when it does not have any effect on the universe?

Comment by Kindly on How to sign up for Alcor cryo · 2015-04-27T03:44:57.122Z · LW · GW

Yes, this, exactly.

I do nice things for myself not because I have deep-seated beliefs that doing nice things for myself is the right thing to do, but because I feel motivated to do nice things for myself.

I'm not sure that I could avoid doing those things for myself (it might require willpower I do not have) or that I should (it might make me less effective at doing other things), or that I would want to if I could and should (doing nice things for myself feels nice).

But if we invent a new nice thing to do for myself that I don't currently feel motivated to do, I don't see any reason to try to make myself do it. If it's instrumentally useful, then sure: learning to like playing chess means that my brain gets exercise while I'm having fun.

With cryonics, though? I could try to convince myself that I want it, and then I will want it, and then I will spend money on it. I could also leave things as they are, and spend that money on things I currently want. Why should I want to want something I don't want?

Comment by Kindly on Open Thread, Apr. 20 - Apr. 26, 2015 · 2015-04-26T05:42:26.318Z · LW · GW

I did say what I would do, given the premise that I know Omega is right with certainty. Perhaps I was insufficiently clear about this?

I am not trying to fight the hypothetical, I am trying to explain why one's intuition cannot resist fighting it. This makes the answer I give seem unintuitive.

Comment by Kindly on Open Thread, Apr. 20 - Apr. 26, 2015 · 2015-04-25T21:25:30.254Z · LW · GW

So the standard formulation of a Newcomb-like paradox continues to work if you assume that Omega has a merely 99% accuracy.

Your formulation, however, doesn't work that way. If you precommit to suicide when Omega asks, but Omega is sometimes wrong, then you commit suicide with 1% probability (in exchange for having $990 expected winnings). If you don't precommit, then with a 1% chance you might get $1000 for free. In most cases, the second option is better.

Thus, the suicide strategy requires very strong faith in Omega, which is hard to imagine in practice. Even if Omega actually is infallible, it's hard to imagine evidence extraordinary enough to convince us that Omega is sufficiently infallible.

(I think I am willing to bite the suicide bullet as long as we're clear that I would require truly extraordinary evidence.)

Comment by Kindly on LessWrong Experience of Flavours · 2015-04-25T16:27:25.503Z · LW · GW

Result spoilers: Fb sne, yvxvat nypbuby nccrnef gb or yvaxrq gb yvxvat pbssrr be pnssrvar, naq gb yvxvat ovggre naq fbhe gnfgrf. (Fbzr artngvir pbeeryngvba orgjrra yvxvat nypbuby naq yvxvat gb qevax ybgf bs jngre.)

I haven't done the responsible thing and plotted these (or, indeed, done anything else besides take whatever correlation coefficient my software has seen fit to provide me with), so take with a grain of salt.

Comment by Kindly on LessWrong Experience of Flavours · 2015-04-24T18:32:14.477Z · LW · GW

I believe editing polls resets them, so there's no reason to do it if it's just an aesthetically unpleasant mistake that doesn't hurt the accuracy of the results.

Comment by Kindly on Torture vs. Dust Specks · 2015-04-23T14:34:33.518Z · LW · GW

Absolutely. We're bad at anything that we can't easily imagine. Probably, for many people, intuition for "torture vs. dust specks" imagines a guy with a broken arm on one side, and a hundred people saying 'ow' on the other.

The consequences of our poor imagination for large numbers of people (i.e. scope insensitivity) are well-studied. We have trouble doing charity effectively because our intuition doesn't take the number of people saved by an intervention into account; we just picture the typical effect on a single person.

What, I wonder, are the consequences of our poor imagination for extremity of suffering? For me, the prison system comes to mind: I don't know how bad being in prison is, but it probably becomes much worse than I imagine if you're there for 50 years, and we don't think about that at all when arguing (or voting) about prison sentences.

Comment by Kindly on Open Thread, Apr. 20 - Apr. 26, 2015 · 2015-04-22T04:18:45.957Z · LW · GW

That wasn't obvious to me. It's certainly false that "people who use the strategy of always paying have the same odds of losing $1000 as people who use the strategy of never paying". This means that the oracle's prediction takes its own effect into account. When asking about my future, the oracle doesn't ask "Will Kindly give me $1000 or die in the next week?" but "If he hears a prophecy about it, will Kindly give me $1000 or die in the next week?"

Hearing the prediction certainly changes the odds that the first clause will come true; it's not obvious to me (and may not be obvious to the oracle, either) that it doesn't change the odds of the second clause.

It's true that if I precommit to the strategy of not giving money in this specific case, then as long as many other people do not so precommit, I'm probably safe. But if nobody gives the oracle money, the oracle probably just switches to a different strategy that some people are vulnerable to. There is certainly some prophecy-driven exploit that the oracle can use that will succeed against me; it's just a question of whether that strategy is sufficiently general that an oracle will use it on people. Unless an oracle is out to get me in particular.

Comment by Kindly on Open Thread, Apr. 20 - Apr. 26, 2015 · 2015-04-22T00:19:56.286Z · LW · GW

You're saying that it's common knowledge that the oracle is, in fact, predicting the future; is this part of the thought experiment?

If so, there's another issue. Presumably I wouldn't be giving the oracle $1000 if the oracle hadn't approached me first; it's only a true prediction of the future because it was made. In a world where actual predictions of the future are common, there should be laws against this, similar to laws against blackmail (even though it's not blackmail).

(I obviously hand over the $1000 first, before trying to appeal to the law.)

Comment by Kindly on Open Thread, Apr. 20 - Apr. 26, 2015 · 2015-04-21T21:32:44.526Z · LW · GW

Given that I remember spending a year of AP statistics only doing calculations with things we assumed to be normally distributed, it's not an unreasonable objection to at least some forms of teaching statistics.

Hopefully people with statistics degrees move beyond that stage, though.

Comment by Kindly on LessWrong experience on Alcohol · 2015-04-21T11:56:33.008Z · LW · GW

There are varieties of strawberries that are not sour at all, so I suppose it's possible that you simply have limited experience with strawberries. (Well, you probably must, since you don't like them, but maybe that's the reason you don't think they're sour, as opposed to some fundamental difference in how you taste things.)

I actually don't like the taste of purely-sweet strawberries; the slightly-sour ones are better. A very unripe strawberry would taste very sour, but not at all sweet, and its flesh would also be very hard.

Comment by Kindly on Self-verification · 2015-04-20T01:30:40.016Z · LW · GW

Do you have access to the memory wiping mechanism prior to getting your memory wiped tomorrow?

If so, wipe your memory, leaving yourself a note: "Think of the most unlikely place where you can hide a message, and leave this envelope there." The envelope contains the information you want to pass on.

Then, before your memory is wiped tomorrow, leave yourself a note: "Think of the most unlikely place where you can hide a message, and open the envelope hidden there."

Hopefully, your two memory-wiped selves should be sufficiently similar that the unlikely places they think of will coincide. At the same time, the fact that there is an envelope in the unlikely place you just thought of should be evidence that it came from you.

Comment by Kindly on Self-verification · 2015-04-20T01:23:53.767Z · LW · GW

Wouldn't you forget the password once your memories are wiped?

Comment by Kindly on Open Thread, Apr. 13 - Apr. 19, 2015 · 2015-04-18T15:29:42.752Z · LW · GW

In an alternate universe, Peter and Sarah could have had the following conversation instead:

P: I don't know the numbers.

S: I knew you didn't know the numbers.

P: I knew that you knew that I didn't know the numbers.

S: I still don't know the numbers.

P: Now I know the numbers.

S: Now I also know the numbers.

But I'm worried that my version of the puzzle can no longer be solved without brute force.
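For what it's worth, the brute force itself is short. A sketch, assuming the classic setup of the "impossible puzzle" (2 ≤ x ≤ y, x + y ≤ 100, Peter knows the product, Sarah knows the sum); those bounds and roles are my assumption, not part of the comment:

```python
from collections import defaultdict
from itertools import combinations_with_replacement

# All candidate pairs under the assumed bounds.
pairs = [p for p in combinations_with_replacement(range(2, 100), 2)
         if sum(p) <= 100]
prod = lambda p: p[0] * p[1]
add = lambda p: p[0] + p[1]

def groups(candidates, key):
    g = defaultdict(list)
    for p in candidates:
        g[key(p)].append(p)
    return g

def speaker_knows(candidates, key, knows):
    # Pairs whose key value is (un)ambiguous among the current candidates.
    g = groups(candidates, key)
    return [p for p in candidates if (len(g[key(p)]) == 1) == knows]

def speaker_knew(candidates, key, earlier):
    # Pairs whose key value guaranteed, over the full pair set, that the
    # earlier statement held: every pair sharing the key survived it.
    g = groups(pairs, key)
    keep = set(earlier)
    return [p for p in candidates if all(q in keep for q in g[key(p)])]

c1 = speaker_knows(pairs, prod, knows=False)  # P: I don't know
c2 = speaker_knew(c1, add, c1)                # S: I knew you didn't
c3 = speaker_knew(c2, prod, c2)               # P: I knew that you knew that
c4 = speaker_knows(c3, add, knows=False)      # S: I still don't know
c5 = speaker_knows(c4, prod, knows=True)      # P: Now I know
c6 = speaker_knows(c5, add, knows=True)       # S: Now I also know
print(len(c6), sorted(c6)[:10])
```

Each "I knew" statement filters on the speaker's full information set, while each "now I know" statement filters only among the candidates that survive the conversation so far.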

Comment by Kindly on Open Thread, Apr. 13 - Apr. 19, 2015 · 2015-04-18T15:01:14.833Z · LW · GW

I believe I have it. rot13:

Sbyq naq hasbyq gur cncre ubevmbagnyyl, gura qb gur fnzr iregvpnyyl, gb znex gur zvqcbvag bs rnpu fvqr. Arkg, sbyq naq hasbyq gb znex sbhe yvarf: vs gur pbearef bs n cncre ner N, O, P, Q va beqre nebhaq gur crevzrgre, gura gur yvarf tb sebz N gb gur zvqcbvag bs O naq P, sebz O gb gur zvqcbvag bs P naq Q, sebz P gb gur zvqcbvag bs N naq Q, naq sebz Q gb gur zvqcbvag bs N naq O.

Gurfr cnegvgvba gur erpgnatyr vagb avar cvrprf: sbhe gevnatyrf, sbhe gencrmbvqf, naq bar cnenyyrybtenz. Yrg gur cnenyyrybtenz or bar cneg, naq tebhc rnpu gencrmbvq jvgu vgf bja nqwnprag gevnatyr gb znxr gur sbhe bgure cnegf.

Obahf: vs jr phg bhg nyy avar cvrprf, n gencrmbvq naq n gevnatyr pna or chg onpx gbtrgure va gur rknpg funcr bs gur cnenyyrybtenz.

Comment by Kindly on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-13T15:17:17.969Z · LW · GW

Desensitization training is great if it (a) works and (b) is less bad than the problem it's meant to solve.

(I'm now imagining Alice and Carol's conversation: "So, alright, I'll turn my music down this time, but there's this great program I can point you to that teaches you to be okay with loud noise. It really works, I swear! Um, I think if you did that, we'd both be happier.")

Treating thin-skinned people (in all senses of the word) as though they were already thick-skinned is not the same, I think. It fails criterion (a) horribly, and does not satisfy (b) by definition: it is the problem desensitization training ought to solve.

Comment by Kindly on Is my theory on why censorship is wrong correct? · 2015-04-12T14:50:39.665Z · LW · GW

I wanted to upvote you for amusing me, but I changed my vote to one I think you would prefer.

Comment by Kindly on On immortality · 2015-04-10T23:53:52.989Z · LW · GW

What if we assume a finite universe instead? Contrary to what the post we're discussing might suggest, this actually makes recurrence more reasonable. To show that every state of a finite universe recurs infinitely often, we only need to know one thing: that every state of the universe can be eventually reached from every other state.

Is this plausible? I'm not sure. The first objection that comes to mind is entropy: if entropy always increases, then we can never get back to where we started. But I seem to recall a claim that entropy is a statistical law: it's not that it cannot decrease, but that it is extremely unlikely to do so. Extremely low probabilities do not frighten us here: if the universe is finite, then all such probabilities can be lower-bounded by some extremely tiny constant, which will eventually be defeated by infinite time.
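A toy simulation of the finite case; the five states and the 0.2 "entropy decrease" probability are illustrative choices of mine, not physics:

```python
import random

random.seed(0)
EPS = 0.2  # stand-in for the "extremely tiny" probability of an entropy decrease
TOP = 4    # states 0..TOP; higher number = higher entropy

def step(s):
    # Entropy usually increases, but a decrease is merely improbable, so every
    # state can eventually be reached from every other: the chain is irreducible.
    if s > 0 and random.random() < EPS:
        return s - 1
    return min(s + 1, TOP)

visits = [0] * (TOP + 1)
s = 0
for _ in range(10**6):
    s = step(s)
    visits[s] += 1

print(visits)  # every state is visited; low-entropy states are just rare
```

Because the chain is finite and irreducible, every state is recurrent: the tiny downward probability makes the low-entropy states rare, not unreachable.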

But if the universe is infinite, this does not work: not even if the universe is merely potentially infinite, by which I mean that it can grow to an arbitrarily large finite size. This is already enough for the Markov chain in question to have infinitely many states, and my intuition tells me that in such a case it is almost certainly transient.

Comment by Kindly on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-10T16:06:12.704Z · LW · GW

To Bob, I would point out that:

  1. Contrary to C, it is easy to prove that you have an ear or mental condition that makes you sensitive to noise; a note from a doctor or something suffices.

  2. Contrary to D, in case such a condition exists, "toughening up and growing a thicker skin" is not actually a possible response. In some cases, it appears that loud noises make the condition worse. Even when this is not the case, random exposure to noises at the whim of the environment doesn't help.

I realize that you are appealing to a metaphor, but I think that these points often apply to the unmetaphored things as well.

Comment by Kindly on If You Like This Orange... · 2015-04-06T21:37:11.393Z · LW · GW

Regarding my style: many philosophies have both a function and a form. In writing, some philosophies have a message to convey and a style that it is often conveyed in. There is a style to objectivist essays, Maoist essays, Buddhist essays, and often there is a style to Less Wrong essays. I wrote my egoist essay in the egoist style, in honor of those egoists who led to me, including Max Stirner, Dora Marsden, Apio Ludd and especially Malfew Seklew. Egoism - it's not for everybody.

The things that make your writing style unapproachable are not features of "the egoist style", at least according to what my superficial inspection of "the egoist style" discovered. What makes your writing style unapproachable is the lack of indication you give of what you're trying to prove.

I decided to investigate the first name on your list, Max Stirner, who has the admirable character trait of being long dead and therefore available to read on Google Books for free. I skimmed the bit of The Ego and His Own which was under the heading "All Things are Nothing to Me". Here is what I found.

Stirner begins by saying "People want me to care about everything--God, country, and so on--except myself. Is this reasonable? Let us look at what God and country have to say about it." He then fulfills his promise by explaining, in the next few paragraphs, how those causes are selfish; addressing, in turn, "God", "country", and "and so on". He ends by giving his own answer to what he thinks he should care about.

You, on the other hand, begin with oranges. I follow along with this game for a few paragraphs, and eventually discover that you did not mean oranges when you said oranges. I consider re-reading those paragraphs to see what you did mean, but I get bored and skip to the end, where you tell me that it's okay to like things I like. Well, okay. This doesn't seem like a controversial conclusion; if you were arguing for this all along, then maybe I was right to skip to the end. Maybe I skipped the bit where you explained how some people disagree, so I can believe that your conclusion is interesting. Oh well.

Stirner signposts. Stirner makes promises about what he will talk about and then keeps them. If I had been interested in engaging with the substance of Stirner, rather than his style, I would have read carefully the paragraphs where he explains why God's cause is a selfish cause. Not having done that, I can still point to those paragraphs, because Stirner told me where he would explain this. I can summarize Stirner's argument, not because I am good at summarizing, but because Stirner gave me several summaries.

If you don't tell me where you are and where you're going, I have no means or inclination to follow along with you.

Comment by Kindly on Nash Equilibria and Schelling Points · 2015-04-06T19:12:21.644Z · LW · GW

That's true, but I think I agree with TheOtherDave that the things that should make you start reconsidering your strategy are not bad outcomes but surprising outcomes.

In many cases, of course, bad outcomes should be surprising. But not always: sometimes you choose options you expect to lose, because the payoff is sufficiently high. Plus, of course, you should reconsider your strategy when it succeeds for reasons you did not expect: if I make a bad move in chess, and my opponent does not notice, I still need to work on not making such a move again.

I also worry that relying on regret to change your strategy is vulnerable to loss aversion and similar bugs in human reasoning. Betting and losing $100 feels much worse than betting and winning $100 feels good, to the extent that we can compare them. If you let your regret of the outcome decide your strategy, then you end up teaching yourself to use this buggy feeling when you make decisions.

Comment by Kindly on Desire is the direction, rationality is the magnitude · 2015-04-06T18:03:26.006Z · LW · GW

Part of it might just be the order. Compare that paragraph to the following alternative:

The rationality of Rationality: AI to Zombies isn't about using cold logic to choose what to care about. Reasoning well has little to do with what you're reasoning towards. If your goal is to annihilate as many puppies as possible, then this kind of rationality will help you annihilate more puppies. But if your goal is to enjoy life to the fullest and love without restraint, then better reasoning (whether hot or cold, whether rushed or relaxed) will also help you do so.

Comment by Kindly on Nash Equilibria and Schelling Points · 2015-04-06T04:36:05.134Z · LW · GW

I'm not sure that regretting correct choices is a terrible downside, depending on how you think of regret and its effects.

If regret is just "feeling bad", then you should just not feel bad for no reason. So don't regret anything. Yeah.

If regret is "feeling bad as negative reinforcement", then regretting things that are mistakes in hindsight (as opposed to correct choices that turned out bad) teaches you not to make such mistakes. Regretting all choices that led to bad outcomes hopefully will also teach this, if you correctly identify mistakes in hindsight, but this is a noisier (and slower) strategy.

If regret is "feeling bad, which makes you reconsider your strategy", then you should regret everything that leads to a bad outcome, whether or not you think you made a mistake, because that is the only kind of strategy that can lead you to identify new kinds of mistakes you might be making.

Comment by Kindly on Open thread, Apr. 01 - Apr. 05, 2015 · 2015-04-03T16:09:51.449Z · LW · GW

My proposal: the idea that goodness or evil are substances and can be formed into magic objects, such as a sword made of pure evil.

Of course, some novels also subvert this delightfully. Patricia Wrede's The Seven Towers, for instance, is all about exactly what goes wrong when you try to make a magical object out of pure good.

(Edit: that is, Wrede does not literally spend the whole book talking about this problem. It is merely mentioned as backstory. But still.)

Comment by Kindly on Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" · 2015-03-31T20:46:51.093Z · LW · GW

What changes is that I would like to have a million dollars as much as Joe would. Similarly, if I had to trade between Joe's desire to live and my own, the latter would win.

In another comment you claim that I do not believe my own argument. This is false. I know this because if we suppose that Joe would like to be killed, and Joe's friends would not be sad if he died, then I am okay with Joe's death. So there is no other hidden factor that moves me.

I'm not sure what the observation that I do not give all of my money away to charity has to do with anything.

Comment by Kindly on Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" · 2015-03-30T04:35:38.563Z · LW · GW

I don't think that's true in any important way.

I might say: "Killing Joe is bad because Joe would like not to be killed, and enjoys continuing to live. Also, Joe's friends would be sad if Joe died." This is not a sophisticated argument. If an atheist would have a hard time making it, it's only because one feels awkward making such an unsophisticated argument in a debate about morality.