Posts

Young Americans believe they have the best health in the world... 2013-02-04T06:03:22.611Z
Course recommendations for Friendliness researchers 2013-01-09T14:33:50.300Z
Bounding the impact of AGI 2012-12-18T19:47:32.327Z
LessWrong podcasts 2012-12-03T08:44:38.125Z
New WBE implementation 2012-11-30T11:16:27.096Z
Sign up to be notified about new LW meetups in your area 2012-11-03T22:38:38.971Z
The Singularity Institute is hiring an executive assistant near Berkeley 2012-01-22T07:47:30.432Z
Singularity Institute $100,000 end-of-year fundraiser only 20% filled so far 2011-12-27T21:24:30.416Z
$100 off for Less Wrong: Singularity Summit 2011 on Oct 15 - 16 in New York 2011-09-28T02:01:20.633Z
Last day to register for Foresight@Google - Mountain View, California 2011-06-21T20:33:55.159Z
96 Bad Links in the Sequences [fixed] 2011-04-27T19:01:24.288Z
Official Less Wrong Redesign: Call for Suggestions 2011-04-20T17:56:53.152Z
Singularity Institute featured on Philanthroper 2011-04-01T05:46:36.658Z
LessWrong search traffic doubles 2011-03-25T22:01:46.130Z
Verifying Rationality via RationalPoker.com 2011-03-25T16:32:00.315Z
Optimal Employment 2011-01-31T12:50:17.783Z
$295 bounty for new Singularity Institute logo design (crowd-sourced competition) 2011-01-28T06:01:46.672Z
Applied Optimal Philanthropy: How to Donate $100 to SIAI for Free 2011-01-04T06:14:31.288Z
Is there a guide somewhere for how to setup a Less Wrong Meetup? 2010-12-28T02:07:33.475Z
How to Save the World 2010-12-01T17:17:48.713Z
What I've learned from Less Wrong 2010-11-20T12:47:42.727Z
"Target audience" size for the Less Wrong sequences 2010-11-18T12:21:09.504Z
Theoretical "Target Audience" size of Less Wrong 2010-11-16T21:27:20.317Z
Currently Buying AdWords for LessWrong 2010-10-30T05:31:43.667Z

Comments

Comment by Louie on Calling all MIRI supporters for unique May 6 giving opportunity! · 2014-05-06T22:44:30.464Z · LW · GW

That's cool. Where did you hear that?

Comment by Louie on Truth: It's Not That Great · 2014-05-05T11:14:44.545Z · LW · GW

2009: "Extreme Rationality: It's Not That Great"

2010: "Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality"

2013: "How about testing our ideas?"

2014: "Truth: It's Not That Great"

2015: "Meta-Countersignaling Equilibria Drift: Can We Accelerate It?"

2016: "In Defense Of Putting Babies In Wood Chippers"

Comment by Louie on Request for concrete AI takeover mechanisms · 2014-04-28T03:10:32.843Z · LW · GW

Yes. I assume this is why she's collecting these ideas.

Katja doesn't speak for all of MIRI when she says above what "MIRI is interested in".

In general MIRI isn't in favor of soliciting storytelling about the singularity. It's a waste of time and gives people a false sense that they understand things better than they do, by focusing their attention on highly salient but ultimately unlikely scenarios.

Comment by Louie on Request for concrete AI takeover mechanisms · 2014-04-28T02:38:03.432Z · LW · GW

Then you should reduce your confidence in what you consider obvious.

Comment by Louie on Request for concrete AI takeover mechanisms · 2014-04-28T01:48:07.232Z · LW · GW

So MIRI is interested in making a better list of possible concrete routes to AI taking over the world.

I wouldn't characterize this as something that MIRI wants.

Comment by Louie on One Medical? Expansion of MIRI? · 2014-03-21T22:28:33.444Z · LW · GW

To clarify, One Medical partnered with us on this event... but they are not materially involved with expanding MIRI. They're simply an innovative business near us in Berkeley that wanted to support our work. I know it's somewhat unprecedented to see MIRI with strong corporate support, but trust me, it's a good thing. One Medical's people did a ton of legwork and made it super easy to host over 100 guests at that event with almost no planning needed on our part. They took care of everything so we could just focus on our work. A perfect partnership, in our opinion.

Also, we still have $149 credits for the free 1-year memberships to One Medical's service. If you live in Berkeley, SF, NY, Boston, Chicago, LA, or DC and are looking for a good primary care doctor, check out their website and if you think it's a good fit for you, take them up on their promotional offer with this link: http://bit.ly/1fnRHrH (expires 4/9/14).

Comment by Louie on Book Review: Naïve Set Theory (MIRI course list) · 2013-10-05T08:36:42.199Z · LW · GW

Thanks. That was what I thought, but I haven't read Causality yet.

Comment by Louie on Book Review: Naïve Set Theory (MIRI course list) · 2013-10-03T19:52:49.703Z · LW · GW

Do you think Causality is a superior recommendation to Probabilistic Graphical Models?

Comment by Louie on MIRI's 2013 Summer Matching Challenge · 2013-07-22T22:58:21.714Z · LW · GW

Fixed. Thanks.

Comment by Louie on Effective Altruism Through Advertising Vegetarianism? · 2013-06-16T10:24:35.060Z · LW · GW

The unreasonably low estimates would suggest things like "I'm net reducing factory-farming suffering if I eat meat and donate a few bucks, so I should eat meat if it makes me sufficiently happier or healthier to earn and donate an extra indulgence of $5." There are some people going around making that claim, based on the extreme low-ball cost estimates.

Correct. I make this claim. If vegetarianism is that cheap, it's reasonable to bin it with other wastefully low-value virtues like recycling paper, taking shorter showers, turning off lights, voting, "staying informed", volunteering at food banks, and commenting on less wrong.

Comment by Louie on Young Americans believe they have the best health in the world... · 2013-02-04T23:04:44.109Z · LW · GW

Yep, you're right. I've never used the Open Threads so I didn't know that. Thanks.

Comment by Louie on Young Americans believe they have the best health in the world... · 2013-02-04T07:01:22.611Z · LW · GW

Americans can only report their health derivative (dx/dt) :)

Comment by Louie on Young Americans believe they have the best health in the world... · 2013-02-04T06:25:30.413Z · LW · GW

A lot of the most unhealthy groups in the US are also poor and somewhat outside the reach of casual academic sampling.

I assumed that at first too. It turns out even removing the poor or minorities from the sample doesn't fix this gap.

Comment by Louie on Young Americans believe they have the best health in the world... · 2013-02-04T06:24:38.190Z · LW · GW

I guess the study used the modifier "wealthy" along with "developed" to explain their choice of reference class. I looked at the list and it didn't seem obviously cherry-picked. What countries would you add?

Comment by Louie on Young Americans believe they have the best health in the world... · 2013-02-04T06:12:47.347Z · LW · GW

The guts of the study lists one (of many) possible causes:

"getting health care depends more on the market and on each person’s financial resources in the U.S. than elsewhere".

Insurance companies should point out to their detractors that they provide a valuable service by making healthcare so inaccessible that Americans no longer have any idea how they're doing. And that given this absence of knowledge, Americans assume they're doing great.

Comment by Louie on Generic Modafinil sales begin? · 2013-01-17T20:57:29.147Z · LW · GW

I received a letter telling me in no uncertain terms that if [US Customs] found another shipment of modafinil addressed to me, they would prosecute me as a drug smuggler.

You mean something like this? That's not really as meaningful as it seems. There is always some legal risk associated with doing anything, since there are so many US laws that no one has even managed to count them, but a pretty serious search through legal databases turns up no records of anyone being prosecuted for modafinil importation, ever. So that letter is 100% posturing by US Customs.

You should probably conclude from this that you're more likely to be prosecuted for illegally downloading music or jaywalking.

And it's obviously everyone's personal choice to decide what level of legal risk they are comfortable with. But a rational person who wanted modafinil should be willing to order it from an online pharmacy at least as often as they're willing to pirate music or jaywalk without fear of prosecution. Otherwise their preferences for assuming legal risk are inconsistent.

Comment by Louie on 'Life exists beyond 50' · 2013-01-15T11:11:15.355Z · LW · GW

Yeah, don't be discouraged. LW is just like that sometimes. If you link to something with little or no commentary, it really needs to be directly about rationality itself or be using lots of LW-style rationality in the piece. This was a bit too mainstream to be appreciated widely here (even in discussion).

Glad to see you're posting though! You still in ATL and learning about FAI? I made a post you might like. :)

Comment by Louie on Course recommendations for Friendliness researchers · 2013-01-14T20:47:54.073Z · LW · GW

Just to clarify, I recommend the book "Probability and Computing" but the course I'm recommending is normally called something along the lines of "Combinatorics and Discrete Probability" at most universities. So the online course isn't as far off base as it may have looked. However, I agree there are better choices that cover more exactly what I want. So I've updated it with a more on-point Harvard Extension course.

The MIT and CMU courses both cover combinatorics and discrete probability. They are probably the right thing to take or very close to it if you're at those particular schools.

Thanks again for the feedback Klao.

Comment by Louie on Course recommendations for Friendliness researchers · 2013-01-11T16:09:00.487Z · LW · GW

Fixed. Thanks.

Comment by Louie on Course recommendations for Friendliness researchers · 2013-01-10T06:14:42.426Z · LW · GW

Yep, SI has summer internships. You're already in Berkeley, right?

Drop me an email with the dates you're available and what you'd want out of an internship. My email and Malo's are both on our internship page:

http://singularity.org/interns/

Look forward to hearing from you.

Comment by Louie on Course recommendations for Friendliness researchers · 2013-01-09T19:23:51.793Z · LW · GW

Well, I figure I don't really want to recommend a ton of programming courses anyway. I'm already recommending what I presume is more than a bachelor's degree worth of courses once pre-reqs and outside requirements at these universities are taken into account.

So if someone takes one course, they can learn so much more that helps them later in this curriculum from the applied functional programming course than from its imperative counterpart. And the normal number of functional programming courses that people take in a traditional math or CS program is 0. So I have to make a positive recommendation here to correct this. I couldn't make people avoid imperative programming courses anyway, even if I tried. So people will oversample them (and follow your implied recommendation) relative to my core recommendations anyway.

So in practice, most people will end up following your advice by following mine: they'll actually study some functional programming instead of none, and then study a ton of imperative programming no matter what anyone says.

Comment by Louie on Course recommendations for Friendliness researchers · 2013-01-09T18:40:25.758Z · LW · GW

Ahh. Yeah, I'd expect that kind of content is way too specific to be built into initial FAI designs. There are multiple reasons for this, but off the top of my head,

  • I expect design considerations for Seed AI to favor smaller designs that include only essential components, both to make desirable provability criteria easier to establish and to improve design timelines.

  • All else equal, I expect that the fewer arbitrary decisions and the less content the human programmers provide to influence the initial dynamic of FAI, the better.

  • And my broadest answer is that it's not a core Friendliness problem, so it's not on the critical path to solving FAI. Even if an initial FAI design did need medical content or other things along those lines, that would be something we could hire an expert to create toward the end of solving the more fundamental Friendliness and AI portions of FAI.

Comment by Louie on Course recommendations for Friendliness researchers · 2013-01-09T18:04:35.523Z · LW · GW

I don't think those courses would impoverish anyone's mind. I expect people to take courses that aren't on this list without me having to tell them to. But I wouldn't expect courses drawn from these subjects to be mainstream recommendations for Friendliness researchers who were doing things like formalizing and solving problems relating to self-referencing mathematical structures and things along those lines.

Comment by Louie on Course recommendations for Friendliness researchers · 2013-01-09T18:00:05.658Z · LW · GW

Good question. If I remember correctly, Berkeley teaches from it and one person I respect agreed it was good. I think the impenetrability was considered more of a feature than a bug by the person doing the recommending. IOW, he was assuming that people taking my recommendations would be geniuses by-and-large and that the harder book would be better in the long run for the brightest people who studied from it.

Part of my motivation for posting this here was to improve my recommendations. So I'm happy to change the rec to something more accessible if we can crowd-source something like a consensus best choice here on LW that's still good for the smartest readers.

Comment by Louie on Course recommendations for Friendliness researchers · 2013-01-09T16:41:19.534Z · LW · GW

Fixed. Thanks.

Comment by Louie on Course recommendations for Friendliness researchers · 2013-01-09T16:37:56.668Z · LW · GW

The functional/imperative distinction is not a real one

How is the distinction between functional and imperative programming languages "not a real one"? I suppose you mean that there's a continuum of language designs between purely functional and purely imperative. And I've seen people argue that you can program functionally in Python or emulate imperative programming in Haskell. Sure. That's all true. It doesn't change the fact that functional-style programming is manifestly more machine checkable in the average (and best) case.

it's less important to provability than languages' complexity, the quality of their type systems and the amount of stupid lurking in their dark corners.

Agreed. The most poorly programmed functional programs will be harder to machine check than the most skillfully designed imperative programs. But I think for typical or best-case programming scenarios, functional-style programming makes it hands-down more natural to write the kinds of structures that can be reliably machine checked, while imperative programming languages just don't.

The entry-level functional programming course is going to focus on all the right things: type theory, model theory, deep structure. The first imperative programming course at most universities is going to teach you how to leverage side-effects, leverage side-effects more, and generally design your code in a way that makes it less tractable for verification and validation later on.
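
To make the machine-checkability point concrete, here's a minimal sketch of my own (not from any of the recommended courses), using QuickCheck as a lightweight stand-in for heavier verification machinery: because a pure function's output depends only on its arguments, a property about it can be stated directly and checked mechanically.

```haskell
-- Minimal illustrative sketch: a property of a pure function can be
-- stated as a plain predicate and checked mechanically, because there
-- is no hidden state or side effect to reason about.
import Data.List (sort)
import Test.QuickCheck (quickCheck)

-- "Sorting twice gives the same result as sorting once" -- a checkable claim.
prop_sortIdempotent :: [Int] -> Bool
prop_sortIdempotent xs = sort (sort xs) == sort xs

main :: IO ()
main = quickCheck prop_sortIdempotent
```

The analogous claim about an imperative routine that sorts an array in place has to quantify over the whole program state before and after the call, which is exactly the extra machinery that makes mechanical checking harder.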

Comment by Louie on Course recommendations for Friendliness researchers · 2013-01-09T16:21:22.629Z · LW · GW

But I'm not sure where that is best covered.

Yeah, universities don't reliably teach a lot of things that I'd want people to learn to be Friendliness researchers. Heuristics and Biases is about the closest most universities get to the kind of course you recommend... and most barely have a course on even that.

I'd obviously be recommending lots of Philosophy and Psychology courses as well if most of those courses weren't so horribly wrong. I looked through the course handbooks and scoured them for courses I could recommend in this area that wouldn't steer people too wrong. As Luke has mentioned (partially from being part of this search with me), you can still profitably take a minority of philosophy courses at CMU without destroying your mind, a few at MIT, and maybe two or three at Oxford. And there are no respectable, mainstream textbooks to recommend yet.

Believe me, Luke and I are sad beyond words every day of our lives that we have to continue recommending people read a blog to learn philosophy and a ton of other things that colleges don't know how to teach yet. We don't particularly enjoy looking crazy to everyone outside of the LW bubble.

Comment by Louie on Course recommendations for Friendliness researchers · 2013-01-09T14:39:20.458Z · LW · GW

PS - I had some initial trouble formatting my table's appearance. It seems to be mostly fixed now. But if an admin wants to tweak it somehow so the text isn't justified or it's otherwise more readable, I won't complain! :)

Comment by Louie on Bounding the impact of AGI · 2012-12-19T14:13:49.481Z · LW · GW

I believe Coq's core is already short and has been proven using other proof checkers that are themselves short and validated. So I believe the tower of formal validation that exists for these techniques is pretty well secured. I could be wrong about that though... I'd be curious to know the answer.

Relatedly, there are a lot of levels you can take this to. For instance, I wish someone would create more tools along the lines of CompCert (the formally verified C compiler) for writing formally validated programs.

Comment by Louie on Bounding the impact of AGI · 2012-12-19T02:55:53.907Z · LW · GW

Martel (1997) estimates a considerably higher annualized death rate of 3,500 from meteorite impacts alone (she doesn’t consider continental drift or gamma-ray bursts), but the internal logic of safety engineering demands we seek a lower bound, one that we must put up with no matter what strides we make in redistribution of food, global peace, or healthcare.

Is this correct? I'd expect that this lower-bound was superior to the above (10 deaths / year) for the purpose of calculating our present safety factor... unless we're currently able to destroy earth-threatening meteorites and no one told me.

Comment by Louie on Bounding the impact of AGI · 2012-12-19T02:26:35.845Z · LW · GW

http://kornai.com/Drafts/agi12.pdf

Comment by Louie on Bounding the impact of AGI · 2012-12-19T02:20:49.338Z · LW · GW

To paraphrase Kornai's best idea (which he's importing from outside the field):

A reasonable guideline is limiting human-caused xrisk to several orders of magnitude below the natural background xrisk level, so that human-caused dangers are lost in the noise compared to the pre-existing threat we must live with anyway.

I like this idea (as opposed to foolish proposals like driving risks from human-made tech down to zero), but I expect someone here could sharpen the xrisk level that Kornai suggests. Here's a disturbing note from the appendix where he does his calculation:

Here we take the “big five” extinction events that occurred within the past half billion years as background. Assuming a mean time of 10^8 years between mass extinctions and 10^9 victims in the next one yields an annualized death rate of 10, comparing quite favorably to the reported global death rate of  ~500 for contact with hornets, wasps, and bees (ICD-9-CM E905.3). [emphasis added]
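
Spelling out the arithmetic behind that quote (my own restatement of Kornai's numbers):

$$\text{annualized death rate} \approx \frac{10^9 \ \text{victims}}{10^8 \ \text{years}} = 10 \ \text{deaths/year},$$

versus Martel's ~3,500 deaths/year from impacts and the ~500 deaths/year reported for hornets, wasps, and bees.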

Obviously, this is a gross misunderstanding of xrisks and why they matter. No one values human lives linearly straight down to 0 or assumes no expansion factors for future generations.

A motivated internet researcher could probably just look up the proper citations from Bostrom's "Global Catastrophic Risks" and create a decomposed model that estimated the background xrisk level from only nature (and then only nature + human risks w/o AI), and develop a better safety margin that would be lower than the one in this paper (implying that AGI could afford to be a few orders of magnitude riskier than Kornai's rough estimates).

Comment by Louie on Thoughts on the Singularity Institute (SI) · 2012-11-18T10:04:40.950Z · LW · GW

Note that this was most of the purpose of the Fellows program in the first place -- [was] to help sort/develop those people into useful roles, including replacing existing management

FWIW, I never knew the purpose of the VF program was to replace existing SI management. And I somewhat doubt that you knew this at the time, either. I think you're just imagining this retroactively given that that's what ended up happening. For instance, the internal point system used to score people in the VFs program had no points for correctly identifying organizational improvements and implementing them. It had no points for doing administrative work (besides cleaning up the physical house or giving others car rides). And it had no points for rising to management roles. It was all about getting karma on LW or writing conference papers. When I first offered to help with the organization directly, I was told I was "too competent" and that I should go do something more useful with my talent, like start another business... not "waste my time working directly at SI."

Comment by Louie on Popular media coverage of Singularity Summit -the Verge [link] · 2012-11-04T03:58:01.266Z · LW · GW

I didn't notice any factual inaccuracies

Although, multiple quotes were manufactured and misattributed.

Comment by Louie on Cleaning up the "Worst Argument" essay · 2012-09-06T01:28:38.174Z · LW · GW

I preferred the original version that appeared on your private website.

Once you sanitized it for LW by making it more abstract and pedantic, it lost many of the biting, hilarious asides that made it fun and entertaining to read.

Comment by Louie on Alan Carter on the Complexity of Value · 2012-05-14T16:41:19.310Z · LW · GW

Nope, I was wrong. It is the case that agents require equal priors for AAT to hold. AAT is like proving that mixing the same two colors of paint will always result in the same shade, or that two equal numbers multiplied by another number will still be equal.

What a worthless theorem!

I guess when I read that AAT required "common priors" I assumed Aumann must mean known priors or knowledge of each other's priors, since equal priors would constitute both 1) an asinine assumption and 2) a result not worth reporting. Hanson's assumption that humans should have a shared prior by virtue of having evolved together is interesting, but more creative than informative.
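
For reference, the theorem roughly as it's usually stated (my paraphrase): if two Bayesian agents share a common prior $P$, and their posteriors $q_1 = P(A \mid \mathcal{I}_1)$ and $q_2 = P(A \mid \mathcal{I}_2)$ for an event $A$ are common knowledge between them, then $q_1 = q_2$. The "common prior" really does mean the same prior, not merely mutually known priors.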

Good thing I don't rely on AAT for anything. It's obvious that disagreeing with most people is rational, so updating on people's posteriors without evidence would be pretty unreasonable. I'm not surprised that AAT would turn out to be so meaningless.

Comment by Louie on Alan Carter on the Complexity of Value · 2012-05-11T13:18:52.824Z · LW · GW

They need to have the same priors? Wouldn't that make AAT trivial and vacuous?

I thought the requirement was that priors just weren't pathologically tuned.

Comment by Louie on Generic Modafinil sales begin? · 2012-04-04T06:07:00.204Z · LW · GW

Those "generics" you're talking about are ordered by your friends from overseas. The average American won't take advantage of Modafinil until they can pay x10 as much to buy it in a pharmacy in their neighborhood.

People are too risk-averse to try things that work. Hmm... if only there were some sort of drug they could take to make them smarter?

Comment by Louie on SotW: Be Specific · 2012-04-03T06:57:17.574Z · LW · GW

I think the bigger difference between CBT and psychoanalysis is something like, CBT: "Your feelings are the residue of your thoughts, many of which are totally wrong and should be countered by your therapist and you because human brains are horribly biased." vs, Psychoanalysis: "Your feelings are a true reflection of what an awful, corrupt, contemptible, morally bankrupt human being you are. As your therapist, I will agree with and validate anything you believe about yourself since anything you report about yourself must be true by definition."

CBT still works with specific past instances of your emotions to trace feelings back to thoughts. It's good to do that so you can see clearly that thoughts always preceded your feelings about a matter... and also to see what the content of those thoughts is if they're sneaky, "automatic" thoughts.

For example, "Jill made me sad." might be examined and reframed as "My automatic thought that hearing I was wrong about what day the garbage was picked up made me think: I'm wrong, therefore, I'm stupid, therefore, I'm worthless, therefore I'm sad. Those were all my highly-optimized and compressed thoughts which executed so fast... in such well-worn pathways... that I didn't even notice them. So my thoughts about that made me feel sad, not Jill."

Comment by Louie on Send me your photos of LessWrongers having fun! · 2012-03-21T05:25:58.685Z · LW · GW

;)

http://www.facebook.com/photo.php?fbid=10100815853330260&set=o.144017955332&type=1&theater

Comment by Louie on The Singularity Institute is hiring an executive assistant near Berkeley · 2012-01-22T12:22:52.165Z · LW · GW

Thanks for doing the research on this. It actually makes me feel a lot better knowing how low these base rates are.

Comment by Louie on Leveling Up in Rationality: A Personal Journey · 2012-01-19T07:55:15.766Z · LW · GW

I know lukeprog personally, but I suppose I should call him lukeprog on LW for other people's benefit. Thanks for the reminder.

Comment by Louie on Leveling Up in Rationality: A Personal Journey · 2012-01-18T22:38:17.555Z · LW · GW

I'm concerned with the overuse of the term "applause light" here.

An applause light is not as simple as "any statement that pleases an in-group". The way I read it, a charge of applause lights requires all of the following to hold:

1) There are no supporting details to provide the statement with any substance.

2) The statement is a semantic stopsign.

3) The statement exists purely to curry favor with an in-group.

4) No policy recommendations follow from that statement.

I don't see a bunch of applause lights when I read this post. I see a post overflowing with supporting details, policy recommendations, and the opposite of semantic stopsigns -- Luke actually bent over backwards and went to the trouble of linking to as many useful posts on the topic as he could find. By doing so, he's giving the curious reader a number of pointers to what others have said about the subject he's discussing -- so that they can go learn more if they're actually curious.

Really, what more could he have done? How was he supposed to discuss the massive utility he's gained from rationality without mentioning rationality? To make his post shorter, Luke had to use several terms that most people around here feel good about. Yay for Luke! He saved me from having to read a longer, less information-dense post by writing it this way. I understand the sanity benefits of guarding yourself against blatant applause lights, but at the same time, it would be rather perverse of me to automatically feel unhappy in response to Luke mentioning something that makes me happy.

It's not an affective death spiral for me to feel happy when someone tells me an inspiring life-success story that involves terms that I happen to have a positive affect for. It's having a reaction that fits the facts. I'm happy Luke is having a good life. It's relevant for him to tell me about it here on Less Wrong because rationality played a big part in his success. And I'm even more overjoyed and grateful to Luke that he's leaving a trail of sign-posts behind him, pointing the way forward as he levels up in rationality. Now is the time for him to be documenting this... while it's still fresh... so that one day when he forgets how he got to where he is, there will still be an overly detailed record to point people to.

Comment by Louie on The Savage theorem and the Ellsberg paradox · 2012-01-14T23:25:54.262Z · LW · GW

This post has put me to sleep 3 times while trying to read it. I'm done.

Comment by Louie on What Curiosity Looks Like · 2012-01-09T23:56:12.316Z · LW · GW

Thanks MinibearRex.

I've added ads on Google AdWords that will start coming up for this in a couple days when the new ads get approved so that anyone searching for something even vaguely like "How to think better" or "How to figure out what's true" will get pointed at Less Wrong. Not as good as owning the top 3 spots in the organic results, but some folks click on ads, especially when it's in the top spot. And we do need to make landing on the path towards rationality less of a stroke of luck and more a matter of certainty for those who are looking.

Comment by Louie on The bias shield · 2012-01-03T00:54:17.896Z · LW · GW

I also thought you meant that Bill O'Reilly had (surprisingly) written the best book ever on the Lincoln shooting when you said "But I was wrong."

Comment by Louie on Singularity Institute $100,000 end-of-year fundraiser only 20% filled so far · 2011-12-28T21:44:59.055Z · LW · GW

Thanks for the helpful comments! I was uninformed about all those details above.

These posts are not about GiveWell's process.

One of the posts has the sub-heading "The GiveWell approach" and all of the analysis in both posts uses examples of charities you're comparing. I agree you weren't just talking about the GiveWell process... you were talking about a larger philosophy of science you have that informs things like the GiveWell process.

I recognize that you're making sophisticated arguments for your points -- especially the assumptions that you claim simply must be true to satisfy your intuition that charities should be rewarded for transparency and punished otherwise. Those seem wise from a "getting things done" point of view for an org like GiveWell -- even though there is no mathematical reason those assumptions should be true, only a human-level tit-for-tat shame/enforcement mechanism you hope eventually makes this circularly "true" through repeated application. Seems fair enough.

But adding regression adjustments to cancel out the effectiveness of any charity which looks too effective to be believed (based on the common sense of the evaluator) seems like a pretty big finger on the scale. Why do so much analysis in the beginning if the last step of the algorithm is just "re-adjust effectiveness and expected value to equal what feels right"? Your adjustment factor amounts to a kind of Egalitarian Effectiveness Assumption: we are all created equal at turning money into goodness. Or perhaps it's more of a negative statement, like "None of us is any better than the best of us at turning money into goodness" -- where the upper limit on the best is something like 1000x or whatever the evaluator has encountered in the past. Any claims made above that limit get adjusted back down -- those guys were trying to Pascal's Mug us! That's the way in which there's a blinding effect. You disbelieve the claims of any group that claims to be more effective per capita than you think is possible.

Comment by Louie on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) · 2011-12-28T20:35:59.645Z · LW · GW

Your comments are a cruel reminder that I'm in a world where some of the very best people I know are taken from me.

Comment by Louie on Singularity Institute $100,000 end-of-year fundraiser only 20% filled so far · 2011-12-28T05:14:02.061Z · LW · GW

A few corrections.

  • I know that Holden interviewed two other supporters of ours... but I don't think he interviewed 2 other employees. If he did, why did he only publish the unhelpful notes from the one employee he spoke to who didn't know anything?

  • SIAI didn't give Jasen to GiveWell to be interviewed -- Holden chose him unilaterally -- not because he was a good choice, but because Jasen is from New York (just like Holden).

  • I'm unaware of Holden sending his notes to anyone at SIAI prior to publication. Who did he send them to? I never saw them.

  • My guess is Holden sent his notes back to Jasen and called that "sending them to SIAI for feedback". In other words, no one at SIAI who is a leader, or a board member, or someone who understands the plans/finances of the organization saw the notes prior to publication. If Holden had sent the notes to any of the board members of Singularity Institute, they would have sent him tons of corrections.

  • To clarify, I didn't say the interview itself was a lie. I said calling it an interview with SIAI was a lie. I stick by that characterization.

Comment by Louie on Singularity Institute $100,000 end-of-year fundraiser only 20% filled so far · 2011-12-28T03:02:06.527Z · LW · GW

I agree with Grognor -- that interview is beyond unhelpful. Even calling it an interview of SIAI is incredibly misleading. (I would say a complete lie.) Holden interviewed the only visitor at SI last summer who wouldn't have known anything about the organization's funding needs. Jasen was running a student summer program -- not SIAI. I would liken it to Holden interviewing a random Boy Scout somewhere and then publishing a report complaining that he couldn't understand the organizational funding needs of the Boy Scouts of America.

Also, keep in mind that GiveWell is certainly a good service (and I support them), but their process is limited and unable to evaluate the value of research. In fact, if a donation opportunity as good as the Singularity Institute existed, GiveWell's methodology would blind them to the possibility of discovering it.

Carl Shulman pointed out how absurd this was: If GiveWell had existed 100 years ago, they would have argued against funding the eradication of smallpox. Their process forces them to reject the possibility that an intervention could be that effective.

I'm curious about the new GiveWell Labs initiative though. Singularity Institute does meet all of that program's criteria for inclusion... perhaps that's why they started this program... so that they aren't forced to overlook so many extraordinary donation opportunities forever.