Posts

Chess and cheap ways to check day to day variance in cognition 2021-07-07T00:08:46.668Z
Meetup : Stanford THINK starts weekly meetups this Sunday 2014-09-20T20:00:16.749Z
Theory of Knowledge (rationality outreach) 2011-08-09T21:36:07.057Z
Teenage Rationalists and Changing Your Mind 2011-08-05T18:19:30.993Z

Comments

Comment by KPier on Non-Disparagement Canaries for OpenAI · 2024-06-04T00:48:15.524Z · LW · GW

I have been in touch with around a half dozen former OpenAI employees whom I spoke to before former employees were released. All of them later informed me that they were released, and they were not in any identifiable reference class such that I'd expect OpenAI to have been able to selectively release them while not releasing most people. I have since been in touch with many other former employees who confirmed they were released as well. I have not heard from anyone who wasn't released, and I think it is reasonably likely I would have heard from them anonymously on Signal. Also, not releasing a bunch of people after saying they would seems like an enormously unpopular, hard-to-keep-secret, and not very advantageous move for OpenAI, which is already taking a lot of flak for this. Finally, I have a model of how people choose whether or not to make public statements under which it's extremely unsurprising that most people would choose not to.

I would indeed guess that all of the people you listed have been released, if they were even subject to such agreements in the first place, which I do not know (and the fact that Geoffrey Irving was not offered such an agreement is some basis for thinking they were not uniformly imposed during some of the relevant time periods, imo).

Comment by KPier on Non-Disparagement Canaries for OpenAI · 2024-06-03T18:13:42.668Z · LW · GW

(This is Kelsey Piper.) I am quite confident the contract has been widely retracted. The overwhelming majority of people who received an email did not make an immediate public comment. I am unaware of anyone who signed the agreement after 2019 and did not receive the email, outside cases where the nondisparagement agreement was mutual (which includes Sutskever and likely also Anthropic leadership). People who signed before 2019 did not reliably receive an email, but in every case I am aware of they were able to get released if they emailed OpenAI HR.

If you signed such an agreement and have not been released, you can of course contact me on Signal: 303 261 2769. 
 

Comment by KPier on Sharing Information About Nonlinear · 2023-09-07T20:56:45.957Z · LW · GW

Cross-posting from the EA Forum:

It could be that I am misreading or misunderstanding these screenshots, but having read through them a couple of times trying to parse what happened, here's what I came away with:

On December 15, Alice states that she'd had very little to eat all day and that she'd repeatedly tried and failed to find a way to order takeout to their location, and asks whether people can go to Burger King and get her an Impossible Burger, which in the linked screenshots they decline to do because they don't want to get fast food. She asks again about Burger King and is told it's inconvenient to get there. Instead, they go to a different restaurant and offer to get her something from there. Alice looks at the menu online and sees that there are no vegan options. Drew confirms that 'they have some salads' but nothing else for her. She assures him that it's fine not to get her anything.


It seems completely reasonable that Alice remembers this as 'she was barely eating, and no one in the house was willing to go out and get her vegan food' - after all, the end result of all of those message exchanges was no food being obtained for Alice, and her requests for Burger King being repeatedly deflected with 'we are down to get anything that isn't fast food' and 'we are down to go anywhere within a 12 min drive' and 'our only criteria is decent vibe + not fast food', after which she fails to find a restaurant meeting those (I note, kind of restrictive if not in a highly dense area) criteria, and they go somewhere without vegan options and don't get her anything to eat.

It also seems totally reasonable that no one at Nonlinear understood there was a problem. Alice's language throughout emphasizes how she'll be fine, it's no big deal, she's so grateful that they tried (even though they failed and she didn't get any food out of the 12/15 trip, if I understand correctly). I do not think that these exchanges depict the people at Nonlinear as being cruel, insane, or unusual as people. But it doesn't seem to me that Alice is lying in remembering this as 'she had covid, was barely eating, told people she was barely eating, and they declined to pick up Burger King for her because they didn't want to go to a fast food restaurant, and instead gave her very limiting criteria and went somewhere that didn't have any options she could eat'.

On December 16th it does look like they successfully purchased food for her. 

My big takeaway from these exchanges is not that the Nonlinear team are heartless or insane people, but that this degree of professional and personal entanglement and dependence, in a foreign country, with a young person, is simply a recipe for disaster. Alice's needs in the 12/15 chat logs are acutely not being met. She's hungry, she's sick, she conveys that she has barely eaten, she evidently really wants someone to go to BK and get an Impossible Burger for her, but (speculatively) because of this professional/personal entanglement, she lobbies for this only by asking a few times why they ruled out Burger King, and ultimately doesn't protest when they instead go somewhere without food she can eat, assuring them it's completely fine. This is also how I relate to my coworkers, tbh - but luckily, I don't live with them, socialize exclusively with them, or depend on them completely when sick!!

Given my experience talking with people about strongly emotional events, I am inclined towards the interpretation where Alice remembers the 15th with acute distress, as 'not getting her needs met despite trying quite hard to do so', and the Nonlinear team remembers that they went out of their way that week to get Alice food - which is, based on the logs from the 16th, clearly true! But I don't think I'd call Alice a liar based on reading this, because she did express that she'd barely eaten and apologetically request that they go somewhere she could get vegan food (with BK the only option she'd been able to find), only for them to refuse BK because of the vibes/inconvenience.

Comment by KPier on What is the best day to celebrate Smallpox Eradication Day? · 2022-05-09T18:05:54.315Z · LW · GW

We celebrate the May date because May is a good time for a holiday (not close to other major holidays, good weather in our part of the world) and December is very close to the date of Solstice and also close to Christmas, Thanksgiving, etc. 

Comment by KPier on Frame Control · 2021-11-28T03:19:52.110Z · LW · GW

I appreciate this post. I get the sense that the author is trying to do something incredibly complicated and is aware of exactly how hard it is, and the post does it as well as it can be done. 

I want to try to contribute by describing a characteristic thing I've noticed from people who I later realized were doing a lot of frame control on me: 

Comments like 'almost no one is actually trying but you, you're actually trying', 'most people don't actually want to hear this, and I'm hoping you're different', 'I can only tell you this if you want to hear it', 'it feels like you're already getting it, no one gets that far on their own', 'almost everyone is too locked into the system to actually listen to what I'm about to say', 'I've been wanting to find the right person to say this to, but no one wants to listen, but I think you might actually be ready to hear it'. The common thread is that you, the listener, are special, and the speaker is the person who gets to recognize you as special, and the proof of your specialness is that you're going to try / going to listen / going to hear them out / not going to instantly jump to conclusions.

Counterexamples: 'you're the only Political Affiliation X I've ever found worth listening to' does not at all seem to come from the same kinds of motivations as the above. Some people have said "[x writing] demonstrated a rare ability to Actually Get It" and weren't doing weird manipulative shit at all; in fact, I think the people who said it publicly have in every case just been sincere/being nice/recommending a thinker they think highly of. The frame-control people all said it privately or semi-privately - possibly because that way they can reuse the compliment on lots of people, possibly I'm just overgeneralizing from a small number of data points.

Comment by KPier on How much should you update on a COVID test result? · 2021-10-19T14:48:17.951Z · LW · GW

Were the positive tests from the same batch/purchased all together?

Comment by KPier on How much should you update on a COVID test result? · 2021-10-17T21:32:41.235Z · LW · GW

And same question for a positive test: if you get a positive and then retest and get a negative, do you have a sense of how much of an overall update you should make? I've been treating that as 'well, it was probably a false positive then', but multiplying the two updates together would imply it's probably legit?
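To make the 'multiply the two updates together' arithmetic concrete, here's a minimal sketch assuming the two tests err independently (the prior and the sensitivity/specificity figures are made up for illustration):

```python
# Naive independent-errors model: multiply the likelihood ratios of the
# positive and the negative result (all numbers are illustrative).
prior = 0.30                              # assumed prior probability of infection
sensitivity, specificity = 0.85, 0.97     # hypothetical test accuracy

lr_pos = sensitivity / (1 - specificity)  # ~28.3: a positive is a strong update up
lr_neg = (1 - sensitivity) / specificity  # ~0.15: a negative is a weaker update down

posterior_odds = (prior / (1 - prior)) * lr_pos * lr_neg
posterior = posterior_odds / (1 + posterior_odds)
print(round(posterior, 2))                # 0.65 -- the positive mostly survives the retest
```

Under the independence assumption the positive dominates, because positives typically carry much more evidence than negatives; if the two tests share failure modes, the combined update is weaker than this product suggests.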

Comment by KPier on How much should you update on a COVID test result? · 2021-10-17T21:04:31.120Z · LW · GW

Are test errors going to be highly correlated? If you take two tests (either of the same type or of different types) and both come back negative, how much of an update is the second test?
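A minimal sketch of why the correlation question matters, with hypothetical numbers: at one extreme the tests err independently and their likelihood ratios multiply; at the other they share a failure mode and the second negative adds nothing.

```python
# Two negative results under two extreme error models (illustrative numbers).
prior_odds = 0.30 / 0.70                # assumed 30% prior probability of infection
lr_neg = (1 - 0.85) / 0.97              # ~0.15 per informative negative result

independent = prior_odds * lr_neg ** 2  # errors independent: both negatives count
correlated = prior_odds * lr_neg        # errors perfectly correlated: only one counts

print(round(independent / (1 + independent), 3))  # 0.01
print(round(correlated / (1 + correlated), 3))    # 0.062 -- second test added nothing
```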

Comment by KPier on MIRI location optimization (and related topics) discussion · 2021-05-11T02:18:13.612Z · LW · GW

Given your described desiderata, I would think that a slightly more rural location along the coast of California ought to be up there. Large properties in Orinda are not that expensive (there are gorgeous 16-30 acre lots for about $1 million on Zillow right now), and right now, for better and for worse, the Bay is the locus of the rationalist and EA communities and of the tech industry; convincing people to move to a pastoral retreat an hour from the city everyone already lives in is a much easier sell and smoother transition than convincing them to move across the country. (I recognize that MIRI is doing this in part because it thinks it's bad for the Bay to be that locus, but I think the Bay community already has at least four distinctive subcommunities with different values and norms and priorities, and a campus in more-rural California could form a distinctive one while not disrupting all existing social bonds.) I know Bay zoning is notorious, but that's much less true as soon as you're out of the Bay proper, and all of those properties emphasize in the listings that you have total flexibility about what to build on the land. Other nearby properties are often also for sale.

I worry that if MIRI moves to a place with no local rationalists or rationalist-inclined people, they'll be less likely to make new friends and more likely to become very insular, as the people who valued their non-MIRI relationships most fall away; it seems like a huge advantage if a move is either to a place with a preexisting rationalist community or doesn't require severing ties with the current ones.

The big downside of this, to my mind, would be fire, and it's a substantial downside, but on the whole I anticipate-success much more strongly for a rural-California enclave than for the locations you describe. (Disclaimer: this may be because I have strong roots in the Bay and am not personally likely to move.)

Comment by KPier on Cash transfers are not necessarily wealth transfers · 2017-12-02T00:38:26.432Z · LW · GW

That is, of course, consistent with it being net neutral to give people money which they spend on school fees, if the mechanism here is 'there are X good jobs, all of which go to people who've had formal education, but formal education adds no value here'. In that scenario it's in each individual's interest to send their kid to school, but all of the kids being sent to school does not net improve anything.

It seems kind of unlikely to me that primary school teaches nothing - even just teaching English and basic literacy and numeracy seems really valuable - but if it does, that wouldn't make this woman irrational, though it would mean cash transfers spent on schooling are poorly spent overall.

Comment by KPier on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-04T23:24:28.219Z · LW · GW

Thanks for answering this. It sounds like the things in the 'maybe concerns, insufficient info' categories are largely not concerns, which is encouraging. I'd be happy to privately contribute salary and CoL numbers to someone's effort to figure out how much people would save.

https://angel.co/manchester/jobs is a little discouraging; there are Lead Java Developer roles listed for £30-50k with no equity that would pay $150,000-$180,000 base in SF and might well come with more than $300k in total compensation. Even if you did want to buy a house, which again Bay rationalists largely just don't, a house costs three to four years' salary in both cases - and in one case you own a million-dollar property which will (unfortunately for the city) probably appreciate significantly, while in the other you own a £125k property not expected to appreciate at all. It might be better to target people who want to retire early to Manchester, and people not in tech.
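A quick sketch of that house-price-to-pay comparison, using the figures quoted above (taking the ~$300k SF total-compensation case and a salary near the top of the Manchester range):

```python
# House cost measured in years of pay, under the figures from the comment.
sf_house, sf_total_comp = 1_000_000, 300_000   # $ (SF)
manc_house, manc_salary = 125_000, 40_000      # £ (Manchester)

print(round(sf_house / sf_total_comp, 1))      # 3.3 years of pay
print(round(manc_house / manc_salary, 1))      # 3.1 years of pay -- about the same multiple
```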

I don't think any amount of gender-related recruiting is more predictive of gender balance than 'how similar is this to the parts of the community which have gender balance', so it would actually surprise me if this worked even with everything else thrown under the bus to achieve it. Of course, I'd say a thriving Manchester community with a lousy gender ratio would still be an amazing accomplishment. A reasonable way to estimate gender balance in the Bay might be to count at Solstice, excluding anyone who flew in for it. (On the Facebook page so far, 12 of the 29 people attending are women, but Facebook pages are very noisy estimates of attendance, and actual attendance will be an order of magnitude higher than that, so I wouldn't put much weight on it.)

Come to think of it, you've got an uphill battle on gender ratios for another reason, which is that women are on average less likely to do weird things, less likely to be underemployed in their twenties, and likelier to have close social ties that make moving hard. I'm still confident in my prediction, but this general factor might be a stronger contributor than culture-specific ones.

Comment by KPier on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-04T05:03:35.915Z · LW · GW

Are you disagreeing with my prediction? I'd be happy to bet on it and learning that two of the four initial residents are trans women does not change it.

Comment by KPier on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-04T03:36:47.748Z · LW · GW

I wrote a post listing reasons why I would not move to Manchester. Since writing it I've gotten more confident about the 'bad culture fit' conclusion by reading bendini's blog. I would also add that the part of the community with the best gender ratio (rationalist tumblr) and the adjacent community with the best gender ratio (Alicorn's fan community) are also the ones with the norms that the founders of this project seem to find most objectionable, and the ones who seem to be the worst culture fit for the project. I think things like 'culture fit with existing parts of the community that are gender-balanced' end up predicting gender ratio much more than degree of prioritization of attracting women, so I predict Manchester will have significantly (10% or greater) worse gender balance than the Bay in five years, and less strongly expect it to have worse gender balance than the community as a whole.

Comment by KPier on I Want To Live In A Baugruppe · 2017-03-17T01:54:09.623Z · LW · GW

I would live in this if it existed. Buying an apartment building or hotel seems like the most feasible version of this, and (based on very, very minimal research) maybe not totally intractable; the price-per-unit on some hotels/apartments for sale is like $150,000, which is a whole lot less than the price of independently purchasing an SF apartment, and a pretty reasonable monthly mortgage payment.
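For the mortgage claim, a rough sketch of the standard amortization arithmetic; the interest rate and term here are illustrative assumptions, not from the comment:

```python
# Monthly payment on one $150,000 unit (rate and term are assumptions).
principal, annual_rate, years = 150_000, 0.045, 30
r, n = annual_rate / 12, years * 12
monthly = principal * r / (1 - (1 + r) ** -n)   # standard amortization formula
print(round(monthly))                           # ~760 dollars/month, before taxes/upkeep
```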

Comment by KPier on Can we talk about mental illness? · 2015-03-10T04:54:48.889Z · LW · GW

Brienne's "Löb's Theorem Defeated My Social Anxiety" deserves to be among your resources.

Comment by KPier on 2014 Survey Results · 2015-01-06T02:46:13.462Z · LW · GW

I am suspicious of this as an explanation. Most straight-identified women I know who will dance with/jokingly flirt with other women are in fact straight and not 'implicitly bisexual'; plenty of them live in environments where there'd be no social cost to being bisexual, and they are introspective enough that 'they are actually just straight and don't interpret those behaviors as sexual/romantic' seems most likely.

Men face higher social penalties for being gay or bisexual (and presumably for being thought to be gay or bisexual) which seems a more likely explanation for why they don't do things that could be perceived as showing romantic interest toward men (like dancing or 'joking' flirting) than that women are borderline bisexual by nature.

Comment by KPier on Open thread, Sept. 29 - Oct.5, 2014 · 2014-10-03T18:50:10.517Z · LW · GW

I am not sure that we're communicating meaningfully here. I said that there's a place to set a threshold that weighs the expense against the lives. All that is required for this to be true is that we assign value to both money and lives. Where the threshold is depends on how much we value each, and obviously this will be different across situations, times, and cultures.

You're conflating a practical concern (which behaviors should society condemn?) with an ethical concern (how do we decide the relative value of money and lives?), which isn't even a particularly interesting ethical concern (governments have standard figures for the value of a human life; they'd need such figures to conduct any interventions at all). And I am less certain than I was at the start of this conversation of what sort of answer you are even interested in.

Comment by KPier on Open thread, Sept. 29 - Oct.5, 2014 · 2014-10-03T04:30:37.580Z · LW · GW

Sorry, I am unwilling to assume any such thing. I would prefer a bit more realistic scenario where there is no well-known and universally accepted threshold. The condition of ships is uncertain, different people can give different estimates of that condition, and different people would choose different actions even on the basis of the same estimate.

It doesn't have to be well-known. Morally, there's a threshold. Everyone who is trying to act morally is trying to ascertain where it is, and everyone who isn't acting morally is taking advantage of the uncertainty about where it is to avoid spending money. None of that changes the fact that there is a threshold.

Consider doctors sending patients in for surgery after a cancer screening. It is hard to estimate whether someone has cancer, and different doctors might recommend different actions on the basis of the same estimate. This does not change the fact that there is, in fact, a place to put the threshold that balances the risk of sending patients in for unnecessary surgery against the risk of letting cancer spread. On any ethical question this threshold exists. We don't have to be certain about where it is to acknowledge that judging where it is, and where cases fall with respect to it, is basically always what we're doing.

Mr. Doc's actions are morally right to the extent he's right (given the evidence he could reasonably have acquired) about the threshold.

Comment by KPier on Open thread, Sept. 29 - Oct.5, 2014 · 2014-10-01T23:28:23.333Z · LW · GW

Assume there's a threshold at which sending the ship for repairs is morally obligatory (if we're utilitarians, that is the point at which the cost of the repairs is less than the expected cost of the ship sinking - the probability of sinking times the loss if it does, taking into account the lives aboard - but the threshold needn't be utilitarian for this to work).

Let's say that the threshold is 5% - if there's more than a 5% chance the ship will go down, you should get it repaired.

Mr. Grumpy's thought process seems to be 'I alieve that my ship will sink, but this alief is harmful and I should avoid it'. He is morally justified in quelling his nightmares, but he'd be morally unjustified if in doing so he rationalized away his belief 'there's a 10% chance my ship will sink' to arrive at 'there's a 3% chance my ship will sink' and thereby did not do the repairs.

Likewise, it's great that Mr. Happy doesn't want to worry, but if you asked him to bet on the ship going down, what odds would he demand? If he thinks that the probability of his ship going down is greater than 5%, then he should have gotten it refitted. If he knows he has a bias toward neglecting negative events, and he knows that his estimate of 1% is probably the result of rationalization rather than reasoning, he should get someone else to estimate or he should correct his estimate for this known bias of his.

Mr. Doc looks at this probability and deems it acceptable (so, presumably, below our action threshold). He is not guilty of anything.
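To make the threshold concrete, here's a minimal sketch of the utilitarian version; the repair-cost and loss figures are made up to reproduce the 5% threshold used above:

```python
# Repair is obligatory once the expected loss from sinking exceeds the repair cost.
def should_repair(p_sink: float, loss_if_sinks: float, repair_cost: float) -> bool:
    return p_sink * loss_if_sinks > repair_cost

# A 500k refit against a 10M loss if she sinks (cargo plus a monetized value
# for the lives aboard) implies a threshold of 500k / 10M = 5%.
print(should_repair(0.10, 10_000_000, 500_000))  # True:  Mr. Grumpy's honest 10% obligates repair
print(should_repair(0.03, 10_000_000, 500_000))  # False: his rationalized 3% dodges the obligation
print(should_repair(0.01, 10_000_000, 500_000))  # False: Mr. Happy's 1% falls below the threshold
```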

Comment by KPier on Rationality Quotes September 2014 · 2014-09-30T21:50:19.358Z · LW · GW

The next passage confirms that this is the author's interpretation as well:

Let us alter the case a little, and suppose that the ship was not unsound after all; that she made her voyage safely, and many others after it. Will that diminish the guilt of her owner? Not one jot. When an action is once done, it is right or wrong for ever; no accidental failure of its good or evil fruits can possibly alter that. The man would not have been innocent, he would only have been not found out.

And clearly what he is guilty of (or if you prefer, blameworthy) is rationalizing away doubts that he was obligated to act on. Given the evidence available to him, he should have believed the ship might sink, and he should have acted on that belief (either to collect more information which might change it, or to fix the ship). Even if he'd gotten lucky, he would have acted in a way that, had he been updating on evidence reasonably, he would have believed would lead to the deaths of innocents.

The Ethics of Belief is an argument that it is a moral obligation to seek accuracy in beliefs, to be uncertain when the evidence does not justify certainty, to avoid rationalization, and to help other people in the same endeavor. One of his key points is that 'real' beliefs are necessarily entangled with reality. I am actually surprised he isn't quoted here more.

Comment by KPier on Rationality Quotes September 2014 · 2014-09-27T02:51:51.544Z · LW · GW

A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not over-well built at the first; that she had seen many seas and climes, and often had needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him to great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections. He said to himself that she had gone safely through so many voyages and weathered so many storms, that it was idle to suppose she would not come safely home from this trip also. He would put his trust in Providence, which could hardly fail to protect all these unhappy families that were leaving their fatherland to seek for better times elsewhere. He would dismiss from his mind all ungenerous suspicions about the honesty of builders and contractors. In such a way he acquired a sincere and comfortable conviction that his vessel was thoroughly safe and seaworthy; he watched her departure with a light heart, and benevolent wishes for the success of the exiles in their strange new home that was to be; and he got his insurance-money when she went down in mid-ocean and told no tales.

What shall we say of him? Surely this, that he was verily guilty of the death of those men. It is admitted that he did sincerely believe in the soundness of his ship, but the sincerity of his conviction can in nowise help him, because he had no right to believe on such evidence as was before him. He had acquired his belief not by honestly earning it in patient investigation, but by stifling his doubts.

  • W.K. Clifford, The Ethics of Belief

Comment by KPier on PROPOSAL: LessWrong for Teenagers · 2012-11-12T03:09:53.931Z · LW · GW

From the upvotes I'm concluding it's worthwhile to go ahead and write it: I agree it serves as a pretty decent example of applying rationality concepts for long-term decision making. It'll have to wait a week until Thanksgiving Break, though.

Comment by KPier on PROPOSAL: LessWrong for Teenagers · 2012-11-09T20:36:16.090Z · LW · GW

I'm a freshman in college now, but a post or two analyzing the reasons for choosing an (expensive, high-status) private college versus an (essentially free, low-status) state college, or going to school in America versus Europe versus somewhere else, would have been immensely valuable to me a year ago.

This would belong on LessWrong because typical advice on this topic is either "follow your dreams, do what you love, everything will work out", or "you're an idiot to take on debt, if you can't pay your own way through college you're a lazy, entitled brat".

A post describing how to make such a decision based on expected-value calculations, discussing value of information and college visits, and dissecting the research into the income effects of attending top colleges would be very nice.

(I could write such a post, if others think it would be of enough general interest).

Comment by KPier on My experience with dieting and exercise · 2012-11-01T03:26:12.230Z · LW · GW

give 300 bucks to the Against Malaria Foundation, saving the lives of 1-3 children.

Source? The most recent estimate I've seen was that saving a life costs around $2000.

Comment by KPier on The Problem With Rational Wiki · 2012-10-28T20:34:45.281Z · LW · GW

Fixed, sorry! (I'm female and that mistake doesn't bother me at all, but I know it really annoys some people. I'll be more careful in future.)

I completely agree that characterizing RW as contributing to existential risk is absurd.

Comment by KPier on The Problem With Rational Wiki · 2012-10-28T09:35:14.251Z · LW · GW

Thanks for linking to the context! In fairness, though, if people are citing RationalWiki as proof that LessWrong has a "reputation", then devoting a discussion-level post to it doesn't strike me as excessive.

(On a related note: I hadn't read Jade's comments, but I did after you flagged them as interesting; they struck me as totally devoid of value. Would you mind explaining what you think the valid concern he/she's expressing is?)

Comment by KPier on The Problem With Rational Wiki · 2012-10-28T06:38:32.720Z · LW · GW

LW paying RW this much attention while also claiming that the entire future of human value itself is at stake looks on the surface like a failure of apportionment of cognitive resources, but perhaps I've missed something.

What do you mean by "this much attention"? If Konkvistador's links at the top are reasonably comprehensive (and a quick search doesn't turn up much more), there have been 2 barely-upvoted discussion posts about RW in four years, which hardly seems like much attention. For comparison, LW has devoted several times as much energy to dating advice.

Is there a lot of discussion of RW that I'm missing, or are you claiming that even two posts in Discussion is totally excessive?

Comment by KPier on Is Omega Impossible? Can we even ask? · 2012-10-24T23:29:48.131Z · LW · GW

... and if your utility scales linearly with money up to $1,001,000, right?

Comment by KPier on [Link] One in five American adults say they are atheist, agnostic or "nothing in particular" · 2012-10-11T02:04:49.915Z · LW · GW

I don't think there's anything wrong with the topic, if it comes with a little bit of discussion along the lines of palladius's comment below, or along the lines of "What evidence would convince us that the sanity waterline is actually rising, as opposed to just more people being raised non-religious?"

It would be very interesting to see this study in the context of trendlines for other popular sanity-correlated topics, such as belief in evolution, disbelief in ghosts, non-identification with a political party, knowledge about GMOs, etcetera, even though there are lots and lots of confounding variables.

One alone, though, without commentary about rationality, probably does not belong on LessWrong.

Comment by KPier on [SEQ RERUN] Fighting a Rearguard Action Against the Truth · 2012-09-07T01:48:53.265Z · LW · GW

I don't think he's saying that motives are morally irrelevant - I think he's saying that they are irrelevant to the point he is trying to make with that blog post.

Comment by KPier on Open Thread, September 1-15, 2012 · 2012-09-03T22:35:00.950Z · LW · GW

I just want to experience being wrong sometimes.

Your comments are consistent with wanting to be proved wrong. No one experiences "being wrong" - from the inside, it feels exactly like "being right". We do experience "realizing we were wrong", which is hopefully followed by updating so that we once again believe ourselves to be right. Have you never changed your mind about something? Realized on your own that you were mistaken? Because you don't need to "lose" or to have other people "beat you" to experience that.

And if you go around challenging other people about miscellaneous points in the hopes that they will prove you wrong, this will annoy the other people and is unlikely to give you the experience you hoped for.

I also think that your definition of "being wrong" might be skewed. If you try to make comments which you think will be well-received, then every comment that has been heavily downvoted is an instance in which you were wrong about the community reaction. You apparently thought most people were concerned about an Eternal September; you've already realized that this belief was wrong. I'm not sure why being wrong about these does not have the same impact on you as being wrong about the relative fighting skills of programmers and fruit-pickers, but it probably should have a bigger impact, since it's a more important question.

Comment by KPier on Open Thread, September 1-15, 2012 · 2012-09-03T20:04:28.460Z · LW · GW

It looks like I won here, but I thought of some reasons why I may still have lost:

You should stop thinking about discussions in these terms.

Comment by KPier on Dealing with trolling and the signal to noise ratio · 2012-09-02T07:03:25.534Z · LW · GW

My estimate of the general intelligence of the subset of LWers who replied to this post has gone way down.

It seems like it's your estimate of the programming knowledge of the commenters that should go down. Most of the proposed solutions have in common that they sound really simple to implement, but would in fact be complicated - which someone with high general intelligence and rationality, but limited domain-specific knowledge, might not know.

Should people who can't program refrain from suggesting programming fixes? Maybe. But maybe it's worth the time to reply to some of the highly-rated suggestions and explain why they're much harder than they look.

(I agree with your proposed solution to attempt simplifications.)

Comment by KPier on The noncentral fallacy - the worst argument in the world? · 2012-09-01T17:43:04.431Z · LW · GW

Generally speaking, there are fewer upvotes later in a thread, since fewer people read that far. If the children of your comment have more karma than your comment, it's reasonable to assume that people saw both comments and chose to upvote theirs; but if a parent of your comment has more karma, you can't really draw any inference from that at all.

Comment by KPier on [Link] Reddit, help me find some peace I'm dying young · 2012-08-18T17:12:25.464Z · LW · GW

Not to fall into the "trap" of buying warm fuzzies? Do you advocate a policy of never buying yourself any warm fuzzies, or just of never buying warm fuzzies specifically through donating to charity (because it's easy to trick your brain into believing it just did good)?

Comment by KPier on Admissions Essay Help? · 2012-08-03T18:33:41.363Z · LW · GW

Looks like PMing is down, actually. You can email me at kelseyp [at] stanford.edu (not written out to avoid spambots).

Comment by KPier on Admissions Essay Help? · 2012-08-01T23:21:36.831Z · LW · GW

I was accepted to Stanford this spring. At the welcome weekend, we talked a lot with the admissions representatives about what they're looking for - I'd be happy to share tips and my own essays. PM me.

Comment by KPier on Revisiting SI's 2011 strategic plan: How are we doing? · 2012-07-17T03:24:50.463Z · LW · GW

The July matching drive was news to me; I wonder how many other readers hadn't even heard about it.

Is there a reason this hasn't been published on LessWrong, i.e. with the usual public-commitment thread?

Also, if a donation is earmarked for CFAR, does the "matching" donation also go to CFAR?

Comment by KPier on Rational Ethics · 2012-07-13T02:43:58.544Z · LW · GW

Instrumental rationality is doing whatever has the best expected outcome. So spending a ton of time thinking about metaethics may or may not be instrumentally rational, but saying "thinking rationally about metaethics is not rational" is using the word two different ways, and is the reason your post is so confusing to me.

On your example of a witch, I don't actually see why believing that would be rational. But if you take a more straightforward example, say, "Not knowing that your boss is engaging in insider trading, and not looking, could be rational," then I agree. You might rationally choose not to check whether a belief is false.

Why is it necessary to muddy the waters by saying "You might rationally have an irrational belief?"

you can in fact use a good-decision making process (rationally conclude) that a bad-decision making process (an irrational one) is sufficient for a particular task.

Of course. You can decide that learning something has negative expected consequences, and choose not to learn it. Or decide that learning it would have positive expected consequences, but that the value of information is low. Why use the "rational" and "irrational" labels?

Something like half of women will consider an abortion; their support or lack thereof has an enormous impact on whether that particular abortion happens. And if you're proposing this as a general policy, the relevant question is whether it is good for people overall to adopt your heuristic, meaning that the question of whether any given one of them can impact politics is less relevant. If lots of people adopt your heuristic, it matters.

For effective charities, everyone who gives to the religious organization selected by their church is orders of magnitude less effective than they could be. Thinking for themselves would allow them to save hundreds of lives over their lifetime.

Comment by KPier on Rational Ethics · 2012-07-12T19:49:37.753Z · LW · GW

most people might encounter one or two serious Moral Questions in their entire -lives-; whether or not to leave grandma on life support, for example. Societal ethics are more than sufficient for day-to-day decisions; don't shoplift that candy bar, don't drink yourself into a stupor, don't cheat on your math test.

Agree.

For most people, a rational ethics system costs far more than it provides in benefits.

I don't think this follows. Calculating every decision costs far more than it provides in benefits, sure. But having a moral system for when serious questions do arise is definitely worth it, and I think they arise more often than you realize (donating to effective/efficient charity, choosing a career, supporting/opposing gay marriage or abortion or universal health care).

We are, in fact, a -part- of society; relying on society therefore doesn't mean leaving Moral Questions unaddressed, but rather means leaving the expensive calculation to others, and evaluating the results (listening to the arguments), a considerably cheaper operation.

So are you saying that you agree people ought to spend time considering arguments for various moral systems, but that they shouldn't all bother with metaethics? Agreed. Or are you saying they shouldn't bother with thinking about "morality" at all, and should just consider the arguments for and against (for example) abortion independent of a bigger system?

And one note: I think you're misusing "rational". Spending an hour puzzling over the optimal purchase of chips is not rational; spending an hour puzzling over whether to shoplift the chips is also not rational. You're only getting the counterintuitive result "rationality is not always rational" because you're treating "rational" as synonymous with "logical" or "optimized" or "thought-through".

I think you could improve the post - and make your point clearer - by replacing "rational" with one of these words.

Comment by KPier on Rational Ethics · 2012-07-12T15:51:52.517Z · LW · GW

I think what you're trying to say is:

"Morally as computation" is expensive, and you get pretty much the same results from "morality as doing what everyone else is doing." So it's not really rational to try to arrive at a moral system through precise logical reasoning, for the same reasons it's not a good idea to spend an hour evaluating which brand of chips to buy. Yeah, you might get a slightly better result - but the costs are too high.

If that's right, here are my thoughts:

Obviously you don't need to do all moral reasoning from scratch. There aren't many people (on LessWrong or off) who think that you should. The whole point of Created Already in Motion is that you can't do all moral reasoning from scratch. Or, as Yvain put it in his Consequentialism FAQ, you don't need a complete theory of ballistics to avoid shooting yourself in the foot.

That said, "rely on society" is a flawed enough heuristic that almost everyone ought to do some moral reasoning for themselves. The majority of people tend to reject consequentialism in surveys, but there are compelling logical reasons to accept it. Death is widely consideed to be good, and seeking immortality to be immoral, but doing a bit of ethical reasoning tends to turn up different answers.

Moral questions have far greater consequences than day-to-day decisions; they're probably worth a little more of our attention.

(My main goal here is identifying points of disagreement, if any. Let me know if I've interpreted your post correctly.)

Comment by KPier on Interlude for Behavioral Economics · 2012-07-07T04:14:18.816Z · LW · GW

He also says:

As in so many other areas, our most important information comes from reality television.

I'm guessing both are a joke.

Comment by KPier on Open Thread, June 1-15, 2012 · 2012-06-02T21:00:12.751Z · LW · GW

Your article describes the consequences of being perceived as "right-wing" on American campuses. Is pick-up considered "right wing"? Or is your point more generally that students do not have as much freedom of speech on campus as they think?

I'm specifically curious about the claim that most professors would consider what you are doing to be evil. Is that based on personal experience with this issue?

Comment by KPier on Open Thread, March 16-31, 2012 · 2012-03-19T04:04:48.837Z · LW · GW

My favorite explanation of Bayes' Theorem barely requires algebra. (If you don't need the extended explanation, just scroll to the bottom, where the problem is solved.)
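The worked problem such explanations usually build on is the classic mammography example; here's the arithmetic with the standard illustrative numbers:

```python
# P(cancer | positive mammogram) by Bayes' Theorem, classic textbook numbers:
# 1% prevalence, 80% sensitivity, 9.6% false-positive rate.
p_cancer, sens, false_pos = 0.01, 0.80, 0.096

p_positive = p_cancer * sens + (1 - p_cancer) * false_pos
print(round(p_cancer * sens / p_positive, 3))   # 0.078 -- far lower than most people guess
```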

Comment by KPier on Harry Potter and the Methods of Rationality discussion thread, part 10 · 2012-03-16T03:32:43.482Z · LW · GW

Chapter 79:

I think we're supposed to be able to figure this one out. My mental model of Eliezer says he thinks he's given us more than enough hints, and we have a week to wait despite it being a short, high-tension chapter. He makes a big deal out of how Harry only has thirty hours, which isn't enough; he gives us a week, and a lot of information Harry doesn't have.

Who benefits from isolating Harry from both of his friends, and/or making him do something stupid to protect Hermione in front of the most powerful people in the Wizarding World?

Evidence against Quirrell as Hat-and-Cloak: Apart from everything that's already been discussed, he's been trying to strengthen Harry. He chose Draco and Hermione for the armies knowing that the likely outcome would be them getting closer (especially when he set them up against Harry).

Evidence for Quirrell as Hat-and-Cloak: Apart from what has already been discussed, he seemed very interested when Harry mentioned Lucius's threat to set aside everything to protect Draco. And there's that line in the most recent author's note:

anything you think won’t confuse the readers, will.

Which implies we're overthinking this and the obvious answer is the right one.

Quirrell conveniently rescuing Draco after seven hours makes sense if we assume he's also the one who almost killed him.

Evidence I can't sort: Quirrell's admission during interrogation can't have been an accident, and doesn't seem to serve his interests whether he's Hat-and-Cloak or not. If he is, he presumably wants to isolate Harry so he can talk him into stage 2 of the plan - but for that, he needs to be at Hogwarts or otherwise have access to Harry. If he's not Hat-and-Cloak, there's not much reason for him to tie himself up in the Ministry.

Unless he doesn't want Harry to be able to contact him and he wants to have a plausible reason for being unreachable?

I think this makes me update more toward "Quirrell is Hat-and-Cloak," but I'm not convinced.

Comment by KPier on Harry Potter and the Methods of Rationality discussion thread, part 10 · 2012-03-13T00:32:51.823Z · LW · GW

It's also mentioned in Circular Altruism.

This matches research showing that there are "sacred values", like human lives, and "unsacred values", like money. When you try to trade off a sacred value against an unsacred value, subjects express great indignation (sometimes they want to punish the person who made the suggestion).

My favorite anecdote along these lines - though my books are packed at the moment, so no citation for now - comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the cost per life saved, and recommended to the government that the project be implemented because it was cost-effective. The governmental agency rejected the report because, they said, you couldn't put a dollar value on human life. After rejecting the report, the agency decided not to implement the measure.

Trading off a sacred value (like refraining from torture) against an unsacred value (like dust specks) feels really awful. To merely multiply utilities would be too cold-blooded - it would be following rationality off a cliff...

I'm sure there's a hint in there, but I don't know what it is.

Comment by KPier on Open Thread, February 1-14, 2012 · 2012-02-04T02:57:50.922Z · LW · GW

An egoist is generally someone who cares only about their own self-interest; that should be distinct from someone who has a utility function over experiences, not over outcomes.

But a rational agent with a utility function only over experiences would commit quantum suicide if we also assume there's minimal risk of the suicide attempt failing/ the lottery not really being random, etc.

In short, it's an argument that works in the LCPW but not in the world we actually live in, so the absence of suiciding rationalists doesn't imply MWI is a belief-in-belief.

Comment by KPier on Open Thread, February 1-14, 2012 · 2012-02-03T18:48:28.529Z · LW · GW

I believe that my death has negative utility. (Not just because my family and friends will be upset; also because society has wasted a lot of resources on me and I am at the point of being able to pay them back, I anticipate being able to use my life to generate lots of resources for good causes, etc.)

Therefore, I believe that the outcome (I win the lottery in one world; I die in all other worlds) is worse than the outcome (I win the lottery in one world; I live in all other worlds), which is itself worse than (I don't waste money on a lottery ticket in any world).

Least Convenient Possible World, I assume, would be believing that my life has negative utility unless I won the lottery, in which case, sure, I'd try quantum suicide.

thus creating an outcome pump for the subset of the branches where you survive (the only one that matters).

What? No! All of the worlds matter just as much, assuming your utility function is over outcomes, not experiences.

Comment by KPier on HPMOR: What could've been done better? · 2012-01-30T04:55:33.598Z · LW · GW

In the original books, Harry's cohort was born ten years into an extremely bloody civil war. I always assumed birth rates were extremely low for Harry's age group, which would imply that the overall population is much larger than what you'd extrapolate from class sizes.

Of course, the numbers still don't work. There are 40 kids in canon!Harry's class; at 40 births a year, even assuming the average person lives to 150, you get a wizarding population of just 6,000, and assuming that cohort is a tenth of the normal birthrate only raises it to 60,000.

In MoR, class sizes are around 120 (more than half the kids are in the armies, and armies are 24 each), which is still problematic - the same arithmetic gives a population of 18,000, or 180,000 with the depressed-birthrate adjustment. But MoR does seem to hint that there are other magical schools: Daphne at one point wonders whether attending Hogwarts is worth it just to be at the same school as everybody important, which supports the theory that there are other magic schools, but that almost everyone influential went through Hogwarts.
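The steady-state extrapolation behind these figures, as a one-function sketch:

```python
# For a stable population: population ≈ births per year × average lifespan.
def population(births_per_year: int, lifespan: int = 150) -> int:
    return births_per_year * lifespan

print(population(40))    #  6,000 from canon class sizes
print(population(120))   # 18,000 from MoR class sizes
print(population(400))   # 60,000 if Harry's cohort is a tenth of the normal rate
```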

Comment by KPier on Stupid Questions Open Thread · 2011-12-31T21:52:23.254Z · LW · GW

Kolmogorov complexity/Solomonoff induction and Minimum Message Length have been proven equivalent in their most-developed forms. Essentially, correct mathematical formalizations of Occam's Razor are all the same thing.
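A compact way to state the correspondence (standard notation, not from the comment): Solomonoff induction weights each hypothesis by a universal prior based on its shortest description, while MML selects the hypothesis minimizing a two-part message; minimizing message length is just maximizing posterior probability under that prior.

$$P(H) \propto 2^{-K(H)}, \qquad \arg\min_H \big[\ell(H) + \ell(D \mid H)\big] = \arg\min_H \big[-\log_2 P(H) - \log_2 P(D \mid H)\big] = \arg\max_H P(H \mid D),$$

since $-\log_2 P(H) - \log_2 P(D \mid H) = -\log_2 P(H \mid D) - \log_2 P(D)$ and $P(D)$ does not depend on $H$.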