Meetup : Stanford THINK starts weekly meetups this Sunday 2014-09-20T20:00:16.749Z
Theory of Knowledge (rationality outreach) 2011-08-09T21:36:07.057Z
Teenage Rationalists and Changing Your Mind 2011-08-05T18:19:30.993Z


Comment by KPier on MIRI location optimization (and related topics) discussion · 2021-05-11T02:18:13.612Z · LW · GW

Given your described desiderata, I would think that a slightly more rural location along the coast of California ought to be up there. Large properties in Orinda are not that expensive (there are gorgeous 16-30 acre lots for about $1 million on Zillow right now), and right now, for better and for worse, the Bay is the locus of the rationalist and EA communities and of the tech industry; convincing people to move to a pastoral retreat an hour from the city everyone already lives in is a much easier sell and smoother transition than convincing them to move across the country. (I recognize that MIRI is doing this in part because of thinking that it's bad for the Bay to be that locus, but I think the Bay community already has at least four distinctive sub-communities with different values, norms, and priorities, and a campus in more-rural California could form a distinctive one while not disrupting all existing social bonds.) I know Bay zoning is notorious, but that's much less true as soon as you're out of the Bay proper, and all of those properties emphasize in the listings that you have total flexibility about what to build on the land. Other nearby properties are often also for sale.

I worry that if MIRI moves to a place with no local rationalists or rationalist-inclined people, they'll be less likely to make new friends and more likely to become very insular, as the people who valued their non-MIRI relationships most fall away; it seems like a huge advantage if a move is either to a place with a preexisting rationalist community or doesn't require severing ties with the current ones.

The big downside of this, to my mind, would be fire, and it's a substantial downside, but on the whole I anticipate-success much more strongly for a rural-California enclave than for the locations you describe. (Disclaimer: this may be because I have strong roots in the Bay and am not personally likely to move.)

Comment by KPier on Cash transfers are not necessarily wealth transfers · 2017-12-02T00:38:26.432Z · LW · GW

That is, of course, consistent with it being net neutral to give people money which they spend on school fees, if the mechanism here is 'there are X good jobs, all of which go to people who've had formal education, but formal education adds no value here'. In that scenario it's in each individual's interest to send their kid to school, but all of the kids being sent to school does not net improve anything.

It seems kind of unlikely to me that primary school teaches nothing - and even just teaching English and basic literacy and numeracy seems really valuable - but if it does, that wouldn't make this woman irrational, though it would make cash transfers spent on schooling poorly spent overall.

Comment by KPier on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-04T23:24:28.219Z · LW · GW

Thanks for answering this. It sounds like the things in the 'maybe concerns, insufficient info' categories are largely not concerns, which is encouraging. I'd be happy to privately contribute salary and CoL numbers to someone's effort to figure out how much people would save. The salary comparison is a little discouraging; there are Lead Java Developer roles listed for £30-50k with no equity, which would pay $150,000-$180,000 base in SF and might well see more than $300k in total compensation. Even if you did want to buy a house, which again Bay rationalists largely just don't, a house costs three to four years' salary in both cases; in one case you own a million-dollar property which will (unfortunately for the city) probably appreciate significantly, and in the other you own a £125k property not expected to appreciate at all. It might be better to target people who want to retire early to Manchester, and people not in tech.

I don't think any amount of gender-related recruiting is more predictive of gender balance than 'how similar is this to the parts of the community which have gender balance'. So it actually would surprise me if, even throwing everything under the bus to achieve this goal, it worked. Of course, I'd say a thriving Manchester community with a lousy gender ratio would still be an amazing accomplishment. A reasonable way to estimate gender balance in the Bay might be to count at Solstice, excluding anyone who flew in for Solstice. (On the Facebook page so far, 12 of the 29 people attending are women, but Facebook pages are very noisy estimates of attendance, and actual attendance will be an order of magnitude higher than that, so I wouldn't put much weight on it.)

Come to think of it, you've got an uphill battle on gender ratios for another reason, which is that women are on average less likely to do weird things, less likely to be underemployed in their twenties, and likelier to have close social ties preventing moving. I still am confident in my prediction but this general factor might be a stronger contributor than culture-specific ones.

Comment by KPier on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-04T05:03:35.915Z · LW · GW

Are you disagreeing with my prediction? I'd be happy to bet on it and learning that two of the four initial residents are trans women does not change it.

Comment by KPier on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-04T03:36:47.748Z · LW · GW

I wrote a post listing reasons why I would not move to Manchester. Since writing it I've gotten more confident about the 'bad culture fit' conclusion by reading bendini's blog. I would also add that the part of the community with the best gender ratio (rationalist tumblr) and the adjacent community with the best gender ratio (Alicorn's fan community) are also the ones with the norms that the founders of this project seem to find most objectionable, and the ones who seem to be the worst culture fit for the project. I think things like 'culture fit with existing parts of the community that are gender-balanced' end up predicting gender ratio much more than degree of prioritization of attracting women, so I predict Manchester will have significantly (10% or greater) worse gender balance than the Bay in five years, and less strongly expect it to have worse gender balance than the community as a whole.

Comment by KPier on I Want To Live In A Baugruppe · 2017-03-17T01:54:09.623Z · LW · GW

I would live in this if it existed. Buying an apartment building or hotel seems like the most feasible version of this, and (based on very very minimal research) maybe not totally intractable; the price-per-unit on some hotels/apartments for sale is like $150,000, which is a whole lot less than the price of independently purchasing an SF apartment and a pretty reasonable monthly mortgage payment.
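To make "pretty reasonable monthly mortgage payment" concrete, here is a rough sketch of the arithmetic using the standard annuity formula. The loan terms (30-year fixed at 4%, no down payment) are invented assumptions for illustration, not figures from any listing:

```python
# Rough mortgage arithmetic for one $150,000 unit.
# Assumed terms (hypothetical): 30-year fixed loan at 4% annual interest.
principal = 150_000
monthly_rate = 0.04 / 12
n_payments = 30 * 12

# Standard amortized-loan formula: M = P * r / (1 - (1 + r)^-n)
payment = principal * monthly_rate / (1 - (1 + monthly_rate) ** -n_payments)
print(f"monthly payment per unit: ${payment:,.0f}")  # roughly $700/month at these terms
```

Even before splitting costs among residents, that is far below a typical SF rent, which is the sense in which the per-unit price looks tractable.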

Comment by KPier on Can we talk about mental illness? · 2015-03-10T04:54:48.889Z · LW · GW

Brienne's Löb's Theorem Defeated My Social Anxiety deserves to be among your resources.

Comment by KPier on 2014 Survey Results · 2015-01-06T02:46:13.462Z · LW · GW

I am suspicious of this as an explanation. Most straight-identified women I know who will dance with/jokingly flirt with other women are in fact straight and not 'implicitly bisexual'; plenty of them live in environments where there'd be no social cost to being bisexual, and they are introspective enough that 'they are actually just straight and don't interpret those behaviors as sexual/romantic' seems most likely.

Men face higher social penalties for being gay or bisexual (and presumably for being thought to be gay or bisexual) which seems a more likely explanation for why they don't do things that could be perceived as showing romantic interest toward men (like dancing or 'joking' flirting) than that women are borderline bisexual by nature.

Comment by KPier on Open thread, Sept. 29 - Oct.5, 2014 · 2014-10-03T18:50:10.517Z · LW · GW

I am not sure that we're communicating meaningfully here. I said that there's a place to set a threshold that weighs the expense against the lives. All that is required for this to be true is that we assign value to both money and lives. Where the threshold is depends on how much we value each, and obviously this will be different across situations, times, and cultures.

You're conflating a practical concern (which behaviors should society condemn?) and an ethical concern (how do we decide the relative value of money and lives?) which isn't even a particularly interesting ethical concern (governments have standard figures for the value of a human life; they'd need to have such to conduct any interventions at all.) And I am less certain than I was at the start of this conversation of what sort of answer you are even interested in.

Comment by KPier on Open thread, Sept. 29 - Oct.5, 2014 · 2014-10-03T04:30:37.580Z · LW · GW

Sorry, I am unwilling to assume any such thing. I would prefer a bit more realistic scenario where there is no well-known and universally accepted threshold. The condition of ships is uncertain, different people can give different estimates of that condition, and different people would choose different actions even on the basis of the same estimate.

It doesn't have to be well-known. Morally there's a threshold. Everyone who is trying to act morally is trying to ascertain where it should be, and everyone who isn't acting morally is taking advantage of the uncertainty about where the threshold is to avoid spending money. That doesn't change that there is a threshold.

Consider doctors sending patients in for surgery after a cancer screening. It is hard to estimate whether someone has cancer, and different doctors might recommend different actions on the basis of the same estimate. This does not change the fact that, in fact, there's a place to put the threshold that balances the risk of sending in patients for unnecessary surgery and the risk of letting cancer spread. On any ethical question this threshold exists. We don't have to be certain about it to acknowledge that judging where it is and where cases fall with respect to it is basically always what we're doing.

Mr. Doc's actions are morally right to the extent he's right (given the evidence he could reasonably have acquired) about the threshold.

Comment by KPier on Open thread, Sept. 29 - Oct.5, 2014 · 2014-10-01T23:28:23.333Z · LW · GW

Assume there's a threshold at which sending the ship for repairs is morally obligatory (if we're utilitarians, that is the point at which the cost of the repairs is less than the expected cost of the ship sinking - the probability of sinking times the loss, taking into account the lives aboard - but the threshold needn't be utilitarian for this to work).

Let's say that the threshold is 5% - if there's more than a 5% chance the ship will go down, you should get it repaired.

Mr. Grumpy's thought process seems to be 'I alieve that my ship will sink, but this alief is harmful and I should avoid it'. He is morally justified in quelling his nightmares, but he'd be morally unjustified if in doing so he rationalized away his belief 'there's a 10% chance my ship will sink' to arrive at 'there's a 3% chance my ship will sink' and thereby did not do the repairs.

Likewise, it's great that Mr. Happy doesn't want to worry, but if you asked him to bet on the ship going down, what odds would he demand? If he thinks that the probability of his ship going down is greater than 5%, then he should have gotten it refitted. If he knows he has a bias toward neglecting negative events, and he knows that his estimate of 1% is probably the result of rationalization rather than reasoning, he should get someone else to estimate or he should correct his estimate for this known bias of his.

Mr. Doc looks at this probability and deems it acceptable (so, presumably, below our action threshold). He is not guilty of anything.
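The utilitarian version of the threshold above can be made concrete. This is only a sketch; the dollar figures are invented for illustration, chosen so the threshold comes out at the 5% used above:

```python
# Toy model of the repair threshold described above.
# All numbers are invented for illustration.
repair_cost = 500_000        # cost of overhauling and refitting the ship
loss_if_sinks = 10_000_000   # ship plus the (statistical) value of the lives aboard

# Repair is obligatory once p * loss_if_sinks > repair_cost,
# i.e. once p exceeds repair_cost / loss_if_sinks.
threshold = repair_cost / loss_if_sinks
print(threshold)  # 0.05 -- with these numbers, repair at any p above 5%

def must_repair(p_sinking: float) -> bool:
    return p_sinking * loss_if_sinks > repair_cost

print(must_repair(0.10))  # Mr. Grumpy's honest 10% estimate: repair is obligatory
print(must_repair(0.03))  # his rationalized 3% estimate conveniently falls below
```

The moral failure in the rationalization case is not in the arithmetic but in the input: feeding the formula a probability you arrived at by motivated reasoning.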

Comment by KPier on Rationality Quotes September 2014 · 2014-09-30T21:50:19.358Z · LW · GW

The next passage confirms that this is the author's interpretation as well:

Let us alter the case a little, and suppose that the ship was not unsound after all; that she made her voyage safely, and many others after it. Will that diminish the guilt of her owner? Not one jot. When an action is once done, it is right or wrong for ever; no accidental failure of its good or evil fruits can possibly alter that. The man would not have been innocent, he would only have been not found out.

And clearly what he is guilty of (or if you prefer, blameworthy) is rationalizing away doubts that he was obligated to act on. Given the evidence available to him, he should have believed the ship might sink, and he should have acted on that belief (either to collect more information which might change it, or to fix the ship). Even if he'd gotten lucky, he would have acted in a way that, had he been updating on evidence reasonably, he would have believed would lead to the deaths of innocents.

The Ethics of Belief is an argument that it is a moral obligation to seek accuracy in beliefs, to be uncertain when the evidence does not justify certainty, to avoid rationalization, and to help other people in the same endeavor. One of his key points is that 'real' beliefs are necessarily entangled with reality. I am actually surprised he isn't quoted here more.

Comment by KPier on Rationality Quotes September 2014 · 2014-09-27T02:51:51.544Z · LW · GW

A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not over-well built at the first; that she had seen many seas and climes, and often had needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him to great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections. He said to himself that she had gone safely through so many voyages and weathered so many storms, that it was idle to suppose she would not come safely home from this trip also. He would put his trust in Providence, which could hardly fail to protect all these unhappy families that were leaving their fatherland to seek for better times elsewhere. He would dismiss from his mind all ungenerous suspicions about the honesty of builders and contractors. In such a way he acquired a sincere and comfortable conviction that his vessel was thoroughly safe and seaworthy; he watched her departure with a light heart, and benevolent wishes for the success of the exiles in their strange new home that was to be; and he got his insurance-money when she went down in mid-ocean and told no tales.

What shall we say of him? Surely this, that he was verily guilty of the death of those men. It is admitted that he did sincerely believe in the soundness of his ship, but the sincerity of his conviction can in nowise help him, because he had no right to believe on such evidence as was before him. He had acquired his belief not by honestly earning it in patient investigation, but by stifling his doubts.

  • W.K. Clifford, The Ethics of Belief

Comment by KPier on PROPOSAL: LessWrong for Teenagers · 2012-11-12T03:09:53.931Z · LW · GW

From the upvotes I'm concluding it's worthwhile to go ahead and write it: I agree it serves as a pretty decent example of applying rationality concepts for long-term decision making. It'll have to wait a week until Thanksgiving Break, though.

Comment by KPier on PROPOSAL: LessWrong for Teenagers · 2012-11-09T20:36:16.090Z · LW · GW

I'm a freshman in college now, but a post or two analyzing the reasons for choosing an (expensive, high status) private college versus an (essentially free, low status) state college, or going to school in America versus Europe versus somewhere else, would have been immensely valuable to me a year ago.

This would belong on LessWrong because typical advice on this topic is either "follow your dreams, do what you love, everything will work out", or "you're an idiot to take on debt, if you can't pay your own way through college you're a lazy, entitled brat".

A post describing how to make such a decision based on expected-value calculations, discussing value of information and college visits, and dissecting the research into the income effects of attending top colleges would be very nice.

(I could write such a post, if others think it would be of enough general interest).

Comment by KPier on My experience with dieting and exercise · 2012-11-01T03:26:12.230Z · LW · GW

give 300 bucks to the Against Malaria Foundation, saving the lives of 1-3 children.

Source? The most recent estimate I've seen was that saving a life costs around $2000.

Comment by KPier on The Problem With Rational Wiki · 2012-10-28T20:34:45.281Z · LW · GW

Fixed, sorry! (I'm female and that mistake doesn't bother me at all, but I know it really annoys some people. I'll be more careful in future.)

I completely agree that characterizing RW as contributing to existential risk is absurd.

Comment by KPier on The Problem With Rational Wiki · 2012-10-28T09:35:14.251Z · LW · GW

Thanks for linking to the context! In fairness, though, if people are citing RationalWiki as proof that LessWrong has a "reputation", then devoting a discussion-level post to it doesn't strike me as excessive.

(On a related note: I hadn't read Jade's comments, but I did after you flagged them as interesting; they struck me as totally devoid of value. Would you mind explaining what you think the valid concern he/she's expressing is?)

Comment by KPier on The Problem With Rational Wiki · 2012-10-28T06:38:32.720Z · LW · GW

LW paying RW this much attention while also claiming that the entire future of human value itself is at stake looks on the surface like a failure of apportionment of cognitive resources, but perhaps I've missed something.

What do you mean by "this much attention"? If Konkvistador's links at the top are reasonably comprehensive (and a quick search doesn't turn up much more), there have been 2 barely-upvoted discussion posts about RW in four years, which hardly seems like much attention. For comparison, LW has devoted several times as much energy to dating advice.

Is there a lot of discussion of RW that I'm missing, or are you claiming that even two posts in Discussion is totally excessive?

Comment by KPier on Is Omega Impossible? Can we even ask? · 2012-10-24T23:29:48.131Z · LW · GW

... and if your utility scales linearly with money up to $1,001,000, right?

Comment by KPier on [Link] One in five American adults say they are atheist, agnostic or "nothing in particular" · 2012-10-11T02:04:49.915Z · LW · GW

I don't think there's anything wrong with the topic, if it comes with a little bit of discussion along the lines of palladius's comment below, or along the lines of "What evidence would convince us that the sanity waterline is actually rising, as opposed to just more people being raised non-religious?"

It would be very interesting to see this study in the context of trendlines for other popular sanity-correlated topics, such as belief in evolution, disbelief in ghosts, non-identification with a political party, knowledge about GMOs, etcetera, even though there are lots and lots of confounding variables.

A study like this alone, though, without commentary about rationality, probably does not belong on LessWrong.

Comment by KPier on [SEQ RERUN] Fighting a Rearguard Action Against the Truth · 2012-09-07T01:48:53.265Z · LW · GW

I don't think he's saying that motives are morally irrelevant - I think he's saying that they are irrelevant to the point he is trying to make with that blog post.

Comment by KPier on Open Thread, September 1-15, 2012 · 2012-09-03T22:35:00.950Z · LW · GW

I just want to experience being wrong sometimes.

Your comments are consistent with wanting to be proved wrong. No one experiences "being wrong" - from the inside, it feels exactly like "being right". We do experience "realizing we were wrong", which is hopefully followed by updating so that we once again believe ourselves to be right. Have you never changed your mind about something? Realized on your own that you were mistaken? Because you don't need to "lose" or to have other people "beat you" to experience that.

And if you go around challenging other people about miscellaneous points in the hopes that they will prove you wrong, this will annoy the other people and is unlikely to give you the experience you hoped for.

I also think that your definition of "being wrong" might be skewed. If you try to make comments which you think will be well-received, then every comment that has been heavily downvoted is an instance in which you were wrong about the community reaction. You apparently thought most people were concerned about an Eternal September; you've already realized that this belief was wrong. I'm not sure why being wrong about these does not have the same impact on you as being wrong about the relative fighting skills of programmers and fruit-pickers, but it probably should have a bigger impact, since it's a more important question.

Comment by KPier on Open Thread, September 1-15, 2012 · 2012-09-03T20:04:28.460Z · LW · GW

It looks like I won here, but I thought of some reasons why I may still have lost:

You should stop thinking about discussions in these terms.

Comment by KPier on Dealing with trolling and the signal to noise ratio · 2012-09-02T07:03:25.534Z · LW · GW

My estimate of the general intelligence of the subset of LWers who replied to this post has gone way down.

It seems like it's your estimate of the programming knowledge of the commenters that should go down. Most of the proposed solutions have in common that they sound really simple to implement, but would in fact be complicated - which someone with high general intelligence and rationality, but limited domain-specific knowledge, might not know.

Should people who can't program refrain from suggesting programming fixes? Maybe. But maybe it's worth the time to reply to some of the highly-rated suggestions and explain why they're much harder than they look.

(I agree with your proposed solution to attempt simplifications.)

Comment by KPier on The noncentral fallacy - the worst argument in the world? · 2012-09-01T17:43:04.431Z · LW · GW

Generally speaking, there are fewer upvotes later in a thread, since fewer people read that far. If the children of your comment have more karma than your comment, it's reasonable to assume that people saw both comments and chose to upvote theirs, but if a parent of your comment has more karma, you can't really draw any inference from that at all.

Comment by KPier on [Link] Reddit, help me find some peace I'm dying young · 2012-08-18T17:12:25.464Z · LW · GW

Not to fall into the "trap" of buying warm fuzzies? Do you advocate a policy of never buying yourself any warm fuzzies, or just of never buying warm fuzzies specifically through donating to charity (because it's easy to trick your brain into believing it just did good)?

Comment by KPier on Admissions Essay Help? · 2012-08-03T18:33:41.363Z · LW · GW

Looks like PMing is down, actually. You can email me at kelseyp [at] (not written out to avoid spambots).

Comment by KPier on Admissions Essay Help? · 2012-08-01T23:21:36.831Z · LW · GW

I was accepted to Stanford this spring. At the welcome weekend, we talked a lot with the admissions representatives about what they're looking for - I'd be happy to share tips and my own essays. PM me.

Comment by KPier on Revisiting SI's 2011 strategic plan: How are we doing? · 2012-07-17T03:24:50.463Z · LW · GW

The July matching drive was news to me; I wonder how many other readers hadn't even heard about it.

Is there a reason this hasn't been published on LessWrong, i.e. with the usual public-commitment thread?

Also, if a donation is earmarked for CFAR, does the "matching" donation also go to CFAR?

Comment by KPier on Rational Ethics · 2012-07-13T02:43:58.544Z · LW · GW

Instrumental rationality is doing whatever has the best expected outcome. So spending a ton of time thinking about metaethics may or may not be instrumentally rational, but saying "thinking rationally about metaethics is not rational" is using the word in two different ways, and is the reason your post is so confusing to me.

On your example of a witch, I don't actually see why believing that would be rational. But if you take a more straightforward example, say, "Not knowing that your boss is engaging in insider trading, and not looking, could be rational," then I agree. You might rationally choose to not check if a belief is false.

Why is it necessary to muddy the waters by saying "You might rationally have an irrational belief?"

you can in fact use a good-decision making process (rationally conclude) that a bad-decision making process (an irrational one) is sufficient for a particular task.

Of course. You can decide that learning something has negative expected consequences, and choose not to learn it. Or decide that learning it would have positive expected consequences, but that the value of information is low. Why use the "rational" and "irrational" labels?

Something like half of women will consider an abortion; their support or lack thereof has an enormous impact on whether that particular abortion is carried out. And if you're proposing this as a general policy, the relevant question is whether people adopting your heuristic is good overall, meaning that the question of whether any given one of them can impact politics is less relevant. If lots of people adopt your heuristic, it matters.

For effective charities, everyone who gives to the religious organization selected by their church is orders of magnitude less effective than they could be. Thinking for themselves would allow them to save hundreds of lives over their lifetime.

Comment by KPier on Rational Ethics · 2012-07-12T19:49:37.753Z · LW · GW

most people might encounter one or two serious Moral Questions in their entire -lives-; whether or not to leave grandma on life support, for example. Societal ethics are more than sufficient for day-to-day decisions; don't shoplift that candy bar, don't drink yourself into a stupor, don't cheat on your math test.


For most people, a rational ethics system costs far more than it provides in benefits.

I don't think this follows. Calculating every decision costs far more than it provides in benefits, sure. But having a moral system for when serious questions do arise is definitely worth it, and I think they arise more often than you realize (donating to effective/efficient charity, choosing a career, supporting/opposing gay marriage or abortion or universal health care).

We are, in fact, a -part- of society; relying on society therefore doesn't mean leaving Moral Questions unaddressed, but rather means leaving the expensive calculation to others, and evaluating the results (listening to the arguments), a considerably cheaper operation.

So are you saying that you agree people ought to spend time considering arguments for various moral systems, but that they shouldn't all bother with metaethics? Agreed. Or are you saying they shouldn't bother with thinking about "morality" at all, and should just consider the arguments for and against (for example) abortion independent of a bigger system?

And one note: I think you're misusing "rational". Spending an hour puzzling over the optimal purchase of chips is not rational; spending an hour puzzling over whether to shoplift the chips is also not rational. You're only getting the counterintuitive result "rationality is not always rational" because you're treating "rational" as synonymous with "logical" or "optimized" or "thought-through".

I think you could improve the post - and make your point clearer - by replacing "rational" with one of these words.

Comment by KPier on Rational Ethics · 2012-07-12T15:51:52.517Z · LW · GW

I think what you're trying to say is:

"Morally as computation" is expensive, and you get pretty much the same results from "morality as doing what everyone else is doing." So it's not really rational to try to arrive at a moral system through precise logical reasoning, for the same reasons it's not a good idea to spend an hour evaluating which brand of chips to buy. Yeah, you might get a slightly better result - but the costs are too high.

If that's right, here are my thoughts:

Obviously you don't need to do all moral reasoning from scratch. There aren't many people (on LessWrong or off) who think that you should. The whole point of Created Already in Motion is that you can't do all moral reasoning from scratch. Or, as Yvain put it in his Consequentialism FAQ, you don't need a complete theory of ballistics to avoid shooting yourself in the foot.

That said, "rely on society" is a flawed enough heuristic that almost everyone ought to do some moral reasoning for themselves. The majority of people tend to reject consequentialism in surveys, but there are compelling logical reasons to accept it. Death is widely consideed to be good, and seeking immortality to be immoral, but doing a bit of ethical reasoning tends to turn up different answers.

Moral questions have far greater consequences than day-to-day decisions; they're probably worth a little more of our attention.

(My main goal here is identifying points of disagreement, if any. Let me know if I've interpreted your post correctly.)

Comment by KPier on Interlude for Behavioral Economics · 2012-07-07T04:14:18.816Z · LW · GW

He also says:

As in so many other areas, our most important information comes from reality television.

I'm guessing both are a joke.

Comment by KPier on Open Thread, June 1-15, 2012 · 2012-06-02T21:00:12.751Z · LW · GW

Your article describes the consequences of being perceived as "right-wing" on American campuses. Is pick-up considered "right wing"? Or is your point more generally that students do not have as much freedom of speech on campus as they think?

I'm specifically curious about the claim that most professors would consider what you are doing to be evil. Is that based on personal experience with this issue?

Comment by KPier on Open Thread, March 16-31, 2012 · 2012-03-19T04:04:48.837Z · LW · GW

My favorite explanation of Bayes' Theorem barely requires algebra. (If you don't need the extended explanation, just scroll to the bottom, where the problem is solved.)
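For reference, here is the worked problem at the heart of that kind of explanation, using the standard textbook screening numbers (1% prevalence, 80% sensitivity, 9.6% false-positive rate) - it really is just arithmetic:

```python
# Bayes' Theorem on the classic mammography screening problem.
# Standard textbook numbers: 1% prevalence, 80% sensitivity, 9.6% false-positive rate.
p_cancer = 0.01
p_pos_given_cancer = 0.80
p_pos_given_healthy = 0.096

# P(cancer | positive) = P(pos | cancer) * P(cancer) / P(pos)
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(f"P(cancer | positive test) = {p_cancer_given_pos:.1%}")  # about 7.8%
```

The counterintuitive part - that a positive result on a fairly accurate test still leaves the probability under 10% - falls out of the false positives among the healthy 99% swamping the true positives.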

Comment by KPier on Harry Potter and the Methods of Rationality discussion thread, part 10 · 2012-03-16T03:32:43.482Z · LW · GW

Chapter 79:

I think we're supposed to be able to figure this one out. My mental model of Eliezer says he thinks he's given us more than enough hints, and we have a week to wait despite it being a short, high-tension chapter. He makes a big deal out of how Harry only has thirty hours, which isn't enough; he gives us a week, and a lot of information Harry doesn't have.

Who benefits from isolating Harry from both of his friends, and/or making him do something stupid to protect Hermione in front of the most powerful people in the Wizarding World?

Evidence against Quirrell as Hat-and-Cloak: Apart from everything that's already been discussed, he's been trying to strengthen Harry. He chose Draco and Hermione for the armies knowing that the likely outcome would be them getting closer (especially when he set them up against Harry).

Evidence for Quirrell as Hat-and-Cloak: Apart from what has already been discussed, he seemed very interested when Harry mentioned Lucius's threat to set aside everything to protect Draco. And there's that line in the most recent author's note:

anything you think won’t confuse the readers, will.

Which implies we're overthinking this and the obvious answer is the right one.

Quirrell conveniently rescuing Draco after seven hours makes sense if we assume he's also the one who almost killed him.

Evidence I can't sort: Quirrell's admission during interrogation can't have been an accident, and doesn't seem to serve his interests whether he's Hat-and-Cloak or not. If he is, he presumably wants to isolate Harry so he can talk him into stage 2 of the plan - but for that, he needs to be at Hogwarts or otherwise have access to Harry. If he's not Hat-and-Cloak, there's not much reason for him to tie himself up in the Ministry.

Unless he doesn't want Harry to be able to contact him and he wants to have a plausible reason for being unreachable?

I think this makes me update more toward "Quirrell is Hat-and-Cloak," but I'm not convinced.

Comment by KPier on Harry Potter and the Methods of Rationality discussion thread, part 10 · 2012-03-13T00:32:51.823Z · LW · GW

It's also mentioned in Circular Altruism.

> This matches research showing that there are "sacred values", like human lives, and "unsacred values", like money. When you try to trade off a sacred value against an unsacred value, subjects express great indignation (sometimes they want to punish the person who made the suggestion).
>
> My favorite anecdote along these lines - though my books are packed at the moment, so no citation for now - comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the cost per life saved, and recommended to the government that the project be implemented because it was cost-effective. The governmental agency rejected the report because, they said, you couldn't put a dollar value on human life. After rejecting the report, the agency decided not to implement the measure.
>
> Trading off a sacred value (like refraining from torture) against an unsacred value (like dust specks) feels really awful. To merely multiply utilities would be too cold-blooded - it would be following rationality off a cliff...

I'm sure there's a hint in there, but I don't know what it is.

Comment by KPier on Open Thread, February 1-14, 2012 · 2012-02-04T02:57:50.922Z · LW · GW

An egoist is generally someone who cares only about their own self-interest; that should be distinct from someone who has a utility function over experiences, not over outcomes.

But a rational agent with a utility function only over experiences would commit quantum suicide, if we also assume there's minimal risk of the suicide attempt failing, of the lottery not really being random, and so on.

In short, it's an argument that works in the LCPW but not in the world we actually live in, so the absence of suiciding rationalists doesn't imply MWI is a belief-in-belief.

Comment by KPier on Open Thread, February 1-14, 2012 · 2012-02-03T18:48:28.529Z · LW · GW

I believe that my death has negative utility. (Not just because my family and friends will be upset; also because society has wasted a lot of resources on me and I am at the point of being able to pay them back, I anticipate being able to use my life to generate lots of resources for good causes, etc.)

Therefore, I believe that the outcome (I win the lottery ticket in one world; I die in all other worlds) is worse than the outcome (I win the lottery in one world; I live in all other worlds) which is itself worse than (I don't waste money on a lottery ticket in any world).

Least Convenient Possible World, I assume, would be believing that my life has negative utility unless I won the lottery, in which case, sure, I'd try quantum suicide.

> thus creating an outcome pump for the subset of the branches where you survive (the only one that matters).

What? No! All of the worlds matter just as much, assuming your utility function is over outcomes, not experiences.

Comment by KPier on HPMOR: What could've been done better? · 2012-01-30T04:55:33.598Z · LW · GW

In the original books, Harry's cohort was born ten years into an extremely bloody civil war. I always assumed birth rates were extremely low for Harry's age group, which would imply that the overall population is much larger than what you'd extrapolate from class sizes.

Of course, the numbers still don't work. There are 40 kids in canon!Harry's class; at a steady cohort of 40 a year, even assuming the average person lives to 150, you get a wizarding population of about 6,000, and even a ten-fold correction for war-depressed birthrates only gets you to 60,000.

In MoR, class sizes are around 120 (more than half the kids are in the armies, and armies are 24 each), which is still problematic: with the generous assumptions above, you get a population of about 18,000. But MoR does hint at other magical schools. Daphne at one point wonders whether it's worth going to the same school as Harry just to attend school with everybody important, which suggests that other schools exist, but that almost everyone influential went through Hogwarts.
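For what it's worth, the back-of-envelope formula behind both estimates is just population ≈ yearly cohort size × average lifespan. A quick sketch, using the lifespan figure assumed above and setting aside any birthrate correction:

```python
# Steady-state population estimate: one cohort per year, each surviving
# for the full assumed lifespan.
def population_estimate(yearly_cohort: int, avg_lifespan: int) -> int:
    return yearly_cohort * avg_lifespan

AVG_LIFESPAN = 150  # generous assumption from above

canon = population_estimate(40, AVG_LIFESPAN)   # canon class of 40
mor = population_estimate(120, AVG_LIFESPAN)    # MoR class of ~120
print(canon, mor)  # 6000 18000
```

Either way the numbers come out tiny for a society that maintains its own government, economy, and sports league, which is the point of the original complaint.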

Comment by KPier on Stupid Questions Open Thread · 2011-12-31T21:52:23.254Z · LW · GW

Kolmogorov Complexity/Solomonoff Induction and Minimum Message Length have been proven equivalent in their most-developed forms. Essentially, correct mathematical formalizations of Occam's Razor are all the same thing.

Comment by KPier on Welcome to Less Wrong! · 2011-12-30T07:36:23.511Z · LW · GW

You not being Will_Newsome. (I can't imagine how bizarre it must be to be watching this conversation from your perspective.)

Comment by KPier on Stupid Questions Open Thread · 2011-12-30T06:12:47.181Z · LW · GW

I think 1) should probably be split into two arguments, then. One of them is that Many Worlds is strictly simpler (by any mathematical formalization of Occam's Razor). The other one is that collapse postulates are problematic (which could itself be split into sub-arguments, but that's probably unnecessary).

Grouping those makes no sense. They can stand (or fall) independently, they aren't really connected to each other, and they look at the problem from different angles.

Comment by KPier on Why would a free human society be in agreement on how to alter itself? · 2011-12-30T00:48:01.263Z · LW · GW

Eliezer said in the comments that it was in fact a fully fleshed out idea, but taken from a different story, and that it didn't seem right in the context of this story because it belonged to a different universe.

But yes, the out-of-placeness is noticeable.

Comment by KPier on Less Wrong mentoring thread · 2011-12-29T02:49:00.331Z · LW · GW

I'm 17 and just got into a top U.S. college, where I want to major in math and economics. I am a bit worried that I haven't learned good work habits and that I waste too much time on the internet, since high school was mostly a breeze for me. I've heard from a lot of people that kids like me get hit hard in college when they have to work hard for the first time, and while I can think of lots of reasons I'm different, this is probably a good situation to take the outside view.

So in short, I'd love a mentor. How does this work, exactly?

Comment by KPier on If You Were Brilliant When You Were Ten... · 2011-12-27T20:47:54.996Z · LW · GW

> I once read that some people don't vote because they believe that they can't influence the outcome enough to outweigh the time it takes to vote (decide who to vote for etc.). Other reasons include the perceived inability to judge which candidate will be better. That line of reasoning seems to be even more relevant when it comes to existential risk charities. Not only might your impact turn out to be negligible but it seems even more difficult to judge the best charity. Are people who contribute money to existential risk charities also voting on presidential elections?

The obvious difference between voting in an election and giving money to the best charity is that voting is zero-sum. If you vote for Candidate A and it turns out that Candidate B was a better candidate (by your standards, whatever they are), then your vote actually had a negative impact. But if you give money to Charity A and it turns out Charity B was slightly more efficient, you've still had a dramatically bigger impact than if you spent it on yourself.

Even if you have no idea which charity is better, the only cases in which you'd be justified in not donating to either are a) there's a relatively simple way to figure out which is better (see the Value of Information material), in which case you should do that first, or b) you think that giving money to charity is likely enough to be counterproductive that the expected value is negative. That seems plausible for some forms of African aid, possible for FAI, and demonstrably false for "charity in general."

It's also worth noting that the expected value of donating to a good charity is a lot higher than the expected value of voting, since the vast majority of people don't direct their giving thoughtfully and there's a lot of low-hanging fruit. (GiveWell has plenty of articles on this.)

> Second stupid question: There is a lot of talk about ethics on lesswrong. I still don't understand why people talk about ethics and not just about what they want. Whatever morality is or is not, shouldn't it be implied by what we want and the laws of thought?

Yes, it should. That's what people are talking about, for the most part, when they talk about ethics. Note that even though ethics is (probably) implied by what we want, it isn't equal to what we want, so it's worth having a separate word to distinguish between what we should want if we were better informed, etc., and what we actually want right now. This strikes me as so obvious I think I might be missing the point of your question. Do you want to clarify?

> Third stupid question: I still don't get how expected utility maximization doesn't lead to the destruction of complex values. Even if your utility-function is complex, some goals will yield more utility than others and don't hit diminishing marginal returns. Bodily sensations like happiness for example don't seem to run into diminishing returns.

Well, since I value all that complex stuff, happiness has negative marginal returns as soon as it starts to interfere with my ability to have novelty, challenge, etc. I would rather be generally happier, but I would not rather be a wirehead, so somewhere between my current happiness state and wireheading, the return on happiness turns negative (assuming for a moment that my preferences now are a good guide to my extrapolated preferences). If your utility function is complex, and you value preserving all of its components, then maximizing one aspect can't maximize your utility.

As for the second part of your question: hadn't thought of that. I'll let my smarter post-Singularity self evaluate my options and make the best decision it can, and if the utility-maximizing choice is to devote all resources to trying to beat entropy or something, then that's what I'll do. My current instinct, though, is that preserving existing lives is more important than creating new ones, so I don't particularly care to get as many resources as possible to create as many humans as possible. I also don't really understand what you are trying to get at. Is this an argument-from-consequences opposing x-risk prevention? Or are you arguing that utility-maximization generally is bad?

These aren't stupid questions, by the way; they're relevant and thought-provoking, and the fact that you did extremely poorly on an IQ test is some of the strongest evidence I've encountered that IQ tests don't matter.

Comment by KPier on Welcome to Less Wrong! (2012) · 2011-12-27T18:45:06.206Z · LW · GW

Welcome to LessWrong! There's an email list and occasional online meetups for LessWrong teenagers; you can join here.

Comment by KPier on Summary of "The Straw Vulcan" · 2011-12-26T23:35:45.135Z · LW · GW


The example didn't bother me, but when it switched to second person ("you need to factor in...") the continued gendering seemed unnecessary.

Comment by KPier on Welcome to Less Wrong! (2012) · 2011-12-26T19:55:45.707Z · LW · GW

There's an email list and occasional online meetups for LessWrong teenagers; you can join here. Welcome aboard!