Posts

How Much Do Different Users Really Care About Upvotes? 2019-07-22T02:31:47.273Z · score: 14 (9 votes)
How Can Rationalists Join Other Communities Interested in Truth-Seeking? 2019-07-16T03:29:02.830Z · score: 23 (8 votes)
Evidence for Connection Theory 2019-05-28T17:06:52.635Z · score: 14 (4 votes)
Was CFAR always intended to be a distinct organization from MIRI? 2019-05-27T16:58:05.216Z · score: 8 (2 votes)
Getting Out of the Filter Bubble Outside Your Filter Bubble 2019-05-20T00:15:52.373Z · score: 21 (11 votes)
What kind of information would serve as the best evidence for resolving the debate of whether a centrist or leftist Democratic nominee is likelier to take the White House in 2020? 2019-02-01T18:40:12.571Z · score: 10 (5 votes)
Effective Altruists Go to Storm Crow Tavern 2018-10-03T22:32:45.940Z · score: 6 (1 votes)
Wild Animal Welfare Task Force: Intro to Welfare Biology 2018-10-03T22:30:09.072Z · score: 6 (1 votes)
HPS Reading Group, Session II (Do-Over) 2018-10-03T22:27:18.998Z · score: 6 (1 votes)
Tabletop Game Night 2018-10-03T22:21:49.331Z · score: 14 (2 votes)
HPS Reading Group, Session I 2018-09-11T22:04:46.753Z · score: 13 (5 votes)
Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why 2018-09-07T21:49:35.564Z · score: -27 (25 votes)
Why I Don't Like Scotty Slate Star and His Codex and So Can You 2018-09-01T07:45:42.155Z · score: -17 (8 votes)
Walk the Seawall with Christine Peterson 2018-08-29T07:28:23.861Z · score: 6 (1 votes)
2 Events with Christine Peterson in Vancouver This Week! 2018-08-29T07:04:51.863Z · score: 11 (4 votes)
Discussion with Christine Peterson of the Foresight Institute 2018-08-29T06:37:13.385Z · score: 6 (1 votes)
Effective Altruism as Global Catastrophe Mitigation 2018-06-08T04:17:03.925Z · score: 8 (5 votes)
Bug Report 2018-06-06T21:41:15.182Z · score: 10 (3 votes)
The Berkeley Community & The Rest Of Us: A Response to Zvi & Benquo 2018-05-20T07:19:12.924Z · score: 113 (32 votes)
Welcome to the Vancouver Rationality Community 2018-05-11T06:37:34.491Z · score: 4 (1 votes)
Ten Commandments for Aspiring Superforecasters 2018-04-25T04:55:37.642Z · score: 71 (17 votes)
Appropriateness of Discussing Rationalist Discourse of a Political Nature on LW? 2018-03-12T23:21:52.844Z · score: 38 (10 votes)
Update from Vancouver 2018-02-03T00:35:19.172Z · score: 23 (7 votes)
Be Like Stanislov Petrov 2016-11-28T06:04:27.335Z · score: 3 (3 votes)
Room For More Funding In AI Safety Is Highly Uncertain 2016-05-12T13:57:49.977Z · score: 12 (13 votes)
Don't Be Afraid of Asking Personally Important Questions of Less Wrong 2015-03-17T06:54:52.923Z · score: 52 (53 votes)
[LINK] Author's Note 119: Shameless Begging 2015-03-11T00:14:31.181Z · score: 7 (10 votes)
Announcing LessWrong Digest 2015-02-23T10:41:43.194Z · score: 27 (28 votes)
Has LessWrong Ever Backfired On You? 2014-12-15T05:44:32.795Z · score: 25 (25 votes)

Comments

Comment by evan_gaensbauer on Dialogue on Appeals to Consequences · 2019-07-25T09:22:44.074Z · score: 6 (5 votes) · LW · GW

Summary: I'm aware of a lot of the real debates that inspired this dialogue. In those real cases, disagreement with or criticism of public claims or accusations that various professional organizations in effective altruism or AI risk are lying has repeatedly been interpreted, wholesale, as a refusal to honestly engage with the claims being made. Instead of a good-faith effort to resolve the disputes those public accusations of lying provoke, the accusations and their justifications are repeated and elaborated into long, complicated theories. These theories don't appear to respond at all to the content of the disagreements with the public accusations of lying and dishonesty, and that's why the repeated accusations and their justifications are poorly received.

These complicated theories don't provide what people actually want when public accusations of dishonesty or lying are made: what is typically called 'hard' (i.e., robust, empirical) evidence. If you made narrow claims of dishonesty in more modest language, based on just the best evidence you have, and were willing to defend those claims on that basis, instead of making broad claims of dishonesty in ambiguous language backed by complicated theories, they would be received better. That doesn't mean the theories of how dishonesty functions in communities, as explorations of social epistemology, shouldn't be written. It's just that they don't come across as the most compelling evidence to substantiate public accusations of dishonesty.

For me it's never been so complicated as to require decision theory. The problem is as simple as basic claims being inflated into much larger, more exaggerated or hyperbolic ones. The posts also assume that readers, presumably a general audience in the effective altruism or rationality communities, have prior knowledge of a bunch of things they may not be familiar with. They can only parse the claims being made by reading a series of long, dense blog posts that don't really emphasize the thing these communities should be most concerned about.

Sometimes the claim being made is that Givewell is being dishonest, and sometimes it's something like: because of this, the entire effective altruism movement has been totally compromised and is incorrigibly dishonest. There is disagreement, some of it disputing how the numbers were used in the counterpoint to Givewell, and some of it about hyperbolic claims that appear intended to smear more people than whoever at Givewell, or elsewhere in the EA community, is actually responsible. It appears as though people like you or Ben don't sort through, parse, and work through these different disagreements or criticisms. It appears as though you just take them at face value as confirmation that the rest of the EA community doesn't want to hear the truth, and that people worship Givewell at the expense of any honesty, or something.

It's also been my experience that, in these discussions of complicated subjects that appear very truncated to those unfamiliar with them, the instruction is just to go read some much larger body of writing or theory to understand why and how people are deceiving themselves, each other, and the public in the ways you're claiming. This is said as if it's completely reasonable to make it the responsibility of people with criticisms of or disagreements with what you're saying to go read tons of other content, while you are calling people liars, instead of you finding a different way to say what you're trying to say.

I'm not even saying you shouldn't publicly accuse people of being liars if you really think they're lying. If you believe Givewell or other actors in effective altruism have failed to change their public messaging after being correctly shown, by their own standards, to be wrong, then just say that. It's not necessary to claim that the entire effective altruism community is therefore also dishonest. That's especially true of members of the EA community who disagree with you not because they dishonestly refused the facts they were confronted with, but because they disputed the claims being made and their interlocutor refused to engage, or deflected all kinds of disagreements.

I'm sure there are lots of responses to criticisms of EA which have been needlessly hostile. Yet reacting, and writing strings of posts, as though the whole body of responses were uniformly garbage is just not an accurate picture of the responses you and Ben have received. Again, if you want to write long essays about what how people react to public accusations of dishonesty implies for social epistemology, that's fine. It would just suit most people better if that were done entirely separately from the accusations themselves. If you're publicly accusing some people of being dishonest, accuse those and only those people, very specifically. Stop tarring so many other people with such a broad brush.

I haven't read your recent article accusing some actors in AI alignment of being liars. This dialogue seems to be both about that and a response to other examples; I'm mostly going off those other examples. If you want to say someone is being dishonest, just say that, and substantiate it with the closest thing you have to hard or empirical evidence that some kind of dishonesty is going on. It's not going to work with an idiosyncratic theory of how what someone is saying meets some technical definition of dishonesty that defies common sense. I'm very critical of a lot of things that happen in effective altruism myself. It's just that the way you and Ben have gone about it is so poorly executed, and backfires so much, that I don't think there is any chance of you resolving the problems you're trying to resolve with your typical approaches.

So, I've given up on keeping up with the articles you write criticizing things happening in effective altruism, at least on a regular basis. Sometimes others nudge me to look at them, and I might get around to them eventually. Honestly, though, it's at the point where the pattern I've learned to follow is to not be open-minded that the criticisms being made of effective altruism are worth taking seriously.

The problem I have isn't the problems being pointed out, or that different organizations are being criticized for their alleged mistakes. It's that the presentation of the problem, and of the criticism being made, is often so convoluted I can't understand it, and that's before I can figure out whether I agree. I find I am generally more open-minded than most people in effective altruism about taking seriously criticisms of the community or of related organizations. Yet I've learned to suspend that for the criticisms you and Ben make, for the reasons I gave, because it's just not worth the time and effort to do so.

Comment by evan_gaensbauer on How Much Do Different Users Really Care About Upvotes? · 2019-07-25T07:23:44.405Z · score: 2 (1 votes) · LW · GW
BTW, it might be worth separating out the case where controversial topics are being discussed vs boring everyday stuff. If you say something on a controversial topic, you are likely to get downvotes regardless of your position. "strong, consistent, vocal support" for a position which is controversial in society at large typically only happens if the forum has become an echo chamber, in my observation.

On a society-wide scale, "boring everyday stuff" is uncontroversial by definition. Conversely, articles that have a high total number of votes, but a close-to-even upvote:downvote ratio, are by definition controversial to at least several people. If wrong-headed views of boring everyday stuff aren't heavily downvoted, and are instead "controversial" to the point that half or more of the readers supported someone spreading supposedly universally recognizable nonsense, that's a serious problem.

Also, on the EA Forum and LW at least, "controversial topics" vs. "boring everyday stuff" is a false dichotomy. These are fora for all kinds of "weird" stuff by societal standards. Some popular positions on the EA Forum and LW are also controversial in society at large, but that's normal for EA and LW. Going by societal standards doesn't reflect which positions are or aren't controversial on the EA Forum or LW, or why. There are heated disagreements in EA, or on LW, where most people outside those fora don't care about any side of the debate. Of the examples I have in mind, some articles were on topics that were controversial in society at large, and some were only controversial in a more limited sense on the EA Forum or LW.

Comment by evan_gaensbauer on How Much Do Different Users Really Care About Upvotes? · 2019-07-25T07:07:57.890Z · score: 4 (2 votes) · LW · GW

You make a good point I forgot to add: karma on an article or comment also serves to provide information to other users, not just to the submitting user. That's something people should keep in mind.

Comment by evan_gaensbauer on How Much Do Different Users Really Care About Upvotes? · 2019-07-25T06:58:45.282Z · score: 5 (2 votes) · LW · GW

What bugs me is when people who ostensibly aspire to understand reality better let their sensitivity get in the way, and let their feelings colour their perception of how their ideas are actually being received. It seems to me this should be a basic debiasing skill people would employ if they were as serious about being effective or rational thinkers as they claim to be. If there is anything that bugs me that you're suspicious of, it's that.

Typically, I agree with an OP who is upset about the low quality of negative comments, but I disagree with how upset they get about it. The things they say as a result are often inaccurate. For example, because of a few comments' worth of low-quality negative feedback on a post that's otherwise decently upvoted, people will say a negative reception is typical of LW or the EA Forum. They may not be satisfied with the reception an article received, but that's a different claim than that the reception was extremely negative.

I don't agree with how upset people are getting, though I do think they're typically correct that the quality of some responses to their posts is disappointingly low. I wasn't looking for a solution to a problem. I was asking an open-ended question to seek answers that would explain some behaviour on others' part that doesn't fully make sense to me. Some other answers I've gotten are just people speaking from their own experience, like G Gordon, and that's fine by me too.

Comment by evan_gaensbauer on How Can Rationalists Join Other Communities Interested in Truth-Seeking? · 2019-07-23T06:17:22.164Z · score: 2 (1 votes) · LW · GW

Some but not all academics also seek truth in terms of their own beliefs about the world, and their own processes (including hidden ones) for selecting the best model for any given decision. From a Hansonian perspective, that's at least what scientists and philosophers tell themselves. Yet from the same perspective, that's what everyone tells themselves about their ability to seek truth, especially if a lot of their ego is bound up in 'truth-seeking', and that includes rationalists. So the Hansonian argument here appears to be a perfectly symmetrical one.

I don't have a survey on hand for what proportion of academics seek truth both in a theoretical sense and in the more pragmatic sense rationalists aspire to. Yet "academia", considered as a population, is much larger than the rationality community, or a lot of other intellectual communities. So, even if the relative proportion of academics who could be considered a "truth-seeking community" in the eyes of rationalists is small, the absolute number of academics who would be considered part of a "genuine truth-seeking community" in those same eyes would be large enough to take seriously.

To be fair, the friends I have in mind who are more academically minded, and are critical of the rationality community and LessWrong, are also critical of much of academia as well. For them it's more about aspiring to a greater and ever more critical intellectualism than about sticking to academic norms. Philosophy tends to be more like this than most other academic fields, because philosophy has a tradition of being the most willing to criticize the epistemic practices of other academic fields; indeed, this is a primary application of philosophy. There are branches and specializations of philosophy devoted to it, like the philosophies of physics, biology, economics, art (i.e., aesthetics), psychology, politics, morality (i.e., ethics), and more.

The practice of philosophy at its most elementary level is a practice of 'going meta', which is an art many rationalists seek to master. So I think truth-seekers in philosophy, and in academia more broadly, are the ones rationalists should seek to interact with more, even if finding academics like that is hard. Of course, the easiest way for rationalists to find such academics is to look to the academics already in the rationality community (there are plenty), and ask them if they know other people or communities they enjoy interacting with for reasons similar to why they enjoy interacting with rationalists.

There is more I could say about how learning from philosophy, academia, and other communities in a more charitable way could benefit the rationality community. It's really only applicable if you are part of an in-person/'irl' local rationalist community, or if you're intellectually and emotionally open to criticisms of, and recommendations for improving, the culture of the rationality community. If one or both of those conditions apply to you, I can go on.

Comment by evan_gaensbauer on How Can Rationalists Join Other Communities Interested in Truth-Seeking? · 2019-07-19T06:53:04.375Z · score: 2 (1 votes) · LW · GW

One thing about this comment that really sticks out to me is that I know several people who think LessWrong and/or the rationality community aren't that great at truth-seeking. There are a lot of specific domains where rationalists aren't reported to be particularly good at it. Presumably, that could be excused by the fact that rationalists are generalists. However, I still know people who think the rationality community is generally bad at truth-seeking.

Those people tend to hail from philosophy. To be fair, 'philosophy', as a community, is one of the only other communities I can think of that is interested in truth-seeking in as generalized a way as the rationality community. You can ask the mods about it, but they've got some thoughts on how 'LessWrong' is a project strongly tied to, but distinct from, the 'rationality community'. I'd associate LessWrong more with truth-seeking than 'the rationality community', since if you ask a lot of rationalists, truth-seeking isn't nearly all of what the community is about these days, and it isn't even a primary draw for a lot of people.

Anyway, most philosophers don't tend to think LessWrong is very good at seeking truth much of the time either. Again, to be fair, philosophers think lots of different kinds of people aren't nearly as good at truth-seeking as they make themselves out to be, including all kinds of scientists. Doing that kind of thing comes with the territory of philosophy, but I digress.

The thing about 'philosophy' as a human community is that, unlike the rationality that originated on LessWrong, it is so blended into the rest of the culture that 'philosophers' don't congregate outside of academia the way 'rationalists' do. 'Scientists' seem to do that more than philosophers, but not more than rationalists. Yet not everyone who wants to surround themselves with a whole community of like-minded others would want to join academia to get that. Even for rationalists who have worked in academia, truth-seeking there is more a part of the profession than something woven into the fabric of their lifestyles.

Of course, the whole point of this question was to figure out what truth-seeking communities are out there that rationalists would get along with. If rationalists aren't perceived as good enough at truth-seeking for others to want to get along with them, which oftentimes appears to be the case, I don't know what a rationalist should do about that. Of course, you didn't mention truth-seeking, and I mentioned there are plenty of things rationalists are interested in other than truth-seeking. So, the solution I would suggest is for rationalists to route around that, and see if they can't get along with people who share something in common with rationalists, and who appreciate something about rationalists, other than truth-seeking.

Comment by evan_gaensbauer on How Can Rationalists Join Other Communities Interested in Truth-Seeking? · 2019-07-18T05:07:36.324Z · score: 4 (3 votes) · LW · GW

Hi Dayne. I'd like to join the Facebook group. How do I join?

Comment by evan_gaensbauer on Diversify Your Friendship Portfolio · 2019-07-14T18:16:14.008Z · score: 2 (1 votes) · LW · GW

The first thing I would look at to solve this problem is the cultural gaps between rationality and adjacent communities, like effective altruism, startup culture, transhumanism, etc., especially based on how they interact in person.

Comment by evan_gaensbauer on Schism Begets Schism · 2019-07-11T08:25:05.848Z · score: 10 (5 votes) · LW · GW

One thing I find interesting, as an example that may be particularly pertinent to some rationalists, is how effective altruism has, in spite of everything else, been robust to the kinds of schisms you're talking about. In spite of all the differences between different factions of EA, it remains a grand coalition/alliance (of a sort). Each of the following subgroups of EA, usually built around a specific, preferred cause, has at least a few hundred if not a couple thousand adherents within EA, and I expect each would be able to command millions of dollars in donations to its preferred charities each year:

  • high-impact/evidence-based global poverty alleviation (aka global health and development)
  • AI risk/alignment/safety
  • existential risk reduction (inclusive of AI risk as a distinct and primary subgroup, but focused on other potential x-risks as well)
  • effective animal advocacy (focused on farm animal welfare)
  • reducing wild animal suffering (focused on wild animal welfare)
  • rationality
  • transhumanism

While none of these subgroups is wholly within EA, it's very possible that the majority of members of each of these communities also identifies as part of the EA community. An easy explanation is that everyone is sticking around for the Open Phil bucks, or the chance of receiving Open Phil bucks in the future, since a cause area's increased prominence in EA is moderately-to-highly correlated with it receiving >= $10^7/year within a few years, where before each area's annual funding was probably <= $10^5. Yet there is no guarantee, and the barriers to accessing these resources have been such that I've seen multiple of these subgroups openly and seriously consider splitting with EA. Any or all of these causes might sustain and grow themselves to the point where one or more of them would do better by investing its own resources into growing outside of EA and securing its independence. However, as far as I can tell, no single, whole cause area of EA has ever 'exited' the community. As the movement has existed for ~10 years, that seems unlikely to be the case if there weren't other factors contributing to the cohesion of such otherwise disparate groups.

Comment by evan_gaensbauer on Schism Begets Schism · 2019-07-11T08:08:10.834Z · score: 2 (1 votes) · LW · GW

I was thinking about something similar the other day. I was wondering if, from a historical perspective, it would be valid to look at not just specific sects, but all Abrahamic religions, as 'schisms' from the original Judaism. One caveat is that religious studies scholars and historians may see the transformation of one sect into an unambiguously distinct religion as more of an 'evolution', like speciation in biology, than as a 'schism' as we typically think of them in human societies.

Comment by evan_gaensbauer on Diversify Your Friendship Portfolio · 2019-07-11T08:01:09.800Z · score: 14 (5 votes) · LW · GW

The one thing I think this post is most missing, if it's primarily aimed at rationalists, is how introverted rationalists can go about making new friends. I've met a lot of people drawn to the rationality community because they don't know how else to join a group of people to befriend who they also have enough in common with that they would want to befriend them. I'm not saying this isn't a good article (I strongly upvoted it, based alone on how important I think this signal/message is), nor that I know the best way to write about "how to make more friends (outside the rationality community)". I'm just saying that if you have it in you, I think that would also be a post worth writing.

"Making new friends" or "Joining a New Group of Friends" or "Joining a New Community" might seem so obvious that it doesn't merit writing up how rationalists can do that. Yet, again, I've met rationalists who before they joined the rationality community that thought themselves so unable to make new friends in adulthood, they consider themselves lucky to even have fallen ass-backwards into the rationality community.

Comment by evan_gaensbauer on What kind of thing is logic in an ontological sense? · 2019-06-19T21:55:43.806Z · score: 6 (3 votes) · LW · GW

I think this is an interesting question. I know some friends on social media who know a lot more about philosophy than I do. A lot of people who aren't well-read in philosophy only come across notions of what the ontology of things like logic and mathematics might be through Platonism. I'm not that familiar with the alternatives myself, yet I'm aware there is a much wider variety of options for what constitutes the ontology of logic to explore beyond Platonism. I haven't read Plato, so I couldn't rightly say what, if anything, is wrong with his view. Yet learning about things like the ontology of logic has led me to think that more recent and obscure options for explaining such things are better than the Platonic realm. I don't think they're the kind of views someone with only a cursory understanding of academic philosophy would have heard of. I've been saying 'things like the ontology of logic' because I've actually thought more specifically about the ontology of mathematics, and I've also talked to some friends who know much more than me about maths, logic, and philosophy. I would suggest looking into the following fields for a much greener and greater garden of potential answers to your question:

  • Philosophy of Mathematics
  • Metamathematics
  • Philosophy of Mind
  • Philosophy of Logic

I will also ask my friends what answers they would give to this question, and then I will report them back here.

Comment by evan_gaensbauer on Drowning children are rare · 2019-06-10T08:11:19.735Z · score: 6 (3 votes) · LW · GW

So, I've read the two posts on Benquo's blog you've linked to. The first one, "Bad Intent Is a Disposition, Not a Feeling", depended on his claim that mens rea is not a real thing. As was pointed out in the comments, and as he himself acknowledged, those comments made some good points that would cause him to rethink the theme he was trying to impart with the original post. I searched his blog for both the title of that post and 'mens rea' to see if he had posted any updated thoughts on the subject. There were no results on either topic from the date of publication of that post onward, so it doesn't appear he has publicly updated his thoughts. That was over 2 years ago.

The second post on the topic was more abstract and figurative, using analogy and metaphor to get its conclusion across. So, I didn't totally understand the relevance of all that to the first post, even though the second was intended as a sequel to the first. It seemed to me the crux of resolving the problem was:

Sadly, being honest about your sense that someone else is arguing in bad faith is Officially Not OK. It is read as a grave and inappropriate attack. And as long as that is the case, he could reasonably expect that bringing it up would lead to getting yelled at by everyone and losing the interaction. So maybe he felt and feels like he has no good options here.

Benquo's conclusion is that, for public discourse and social epistemology, at least in his experience, being honest about your sense that someone else is arguing in bad faith is Officially Not OK, because it is always construed as a grave and inappropriate personal attack, so resolving the issue appears socially or practically impossible. My experience is that that just isn't the case. Such honesty can lend itself to better modes of public discourse, and it can move communities to states of discourse much different from where the EA and rationality communities currently are. One problem is I'm not sure even those rationalists and EAs who are aware of such problems would prefer the options available, which would mean hopping onto different platforms with very different discourse norms. I would think that would be the most practical option, since the other viable alternative would be for these communities to adopt other communities' discourse norms wholesale, replacing their own. That seems extremely unlikely to happen.

Part of the problem is that Benquo seems to construe 'bad faith' with an overly reductionistic definition. This was fleshed out in the comments on the original post on his blog by commenters AGB and Res. That makes it hard for me to accept the frame Benquo bases his eventual conclusions on. Another problem for me is that the inferential distance gaps between myself, Benquo, and the EA and rationality communities, respectively, are now so large that it would take a lot of effort to write them up and explain them all. Since it isn't a super high priority for me, I'm not sure I will get around to it. However, there is enough material in Benquo's posts, and in the discussion in the comments, that I can work with it to explain some of what I think is wrong with how he construes bad faith in these posts. If I write something like that up, I will post it on LW.

I don't know if the EA community largely disagrees with the OP for the same reasons I do. Based on some of the material I've been provided in the comments here, I think I have more to work with to find the cruxes of disagreement I have with how some people are thinking, whether critically or not, about the EA and rationality communities.

Comment by evan_gaensbauer on Logic, Buddhism, and the Dialetheia · 2019-06-10T07:35:40.501Z · score: 2 (1 votes) · LW · GW

This is a post I, funnily enough, found both useful and, in other comments, intend to 'tear to shreds', so to speak. The first thing I would say is this article could be edited and formatted better. This is a relatively long post for LW that nonetheless covers a great breadth of material rather briefly relative to the scope of its topics. An introduction that summarizes the different sections of your post would be helpful for readers. You could also use the formatting options available on LW, like subheadings and options for presenting formal logic or philosophy, to make this article more readable on this site. I'd also say that you move through a lot of subjects so fast that it would be unrealistic to expect most readers to know enough about them to put them all together in the way you intend, so as to understand your conclusion. It would help to provide some links as resources to learn more about the subjects, or to expand on how the central theme(s) of this article relate to the different topics you bring up (e.g., theoretical physics, quantum computing, AI, Bayesian epistemology). I think editing this article to make it more readable is what would get more people to read it to the end, and thus understand the message you're trying to impart.

Comment by evan_gaensbauer on All knowledge is circularly justified · 2019-06-10T07:01:24.198Z · score: 2 (1 votes) · LW · GW

Have you looked at cognitive science before? I haven't looked at it extremely deeply, but I think it can offer routes to empirically based insights into how the human mind-brain operates and functions. However, to unify the insights cognitive science can offer with human consciousness and other difficult issues like a longing for meaning is a whole other set of very hard problems.

Comment by evan_gaensbauer on All knowledge is circularly justified · 2019-06-10T06:58:21.310Z · score: 2 (1 votes) · LW · GW

I haven't read a lot about it, but this seems related to a kind of problem in philosophy that I know as 'grounding problems', e.g., the question of 'how do we ground truth?' On Wikipedia, the article I found describing it calls it the symbol grounding problem. On the Stanford Encyclopedia of Philosophy, this kind of problem is known as a problem of metaphysical grounding. For rationalists, one application of the question of metaphysical grounding is to what makes propositions true. That constitutes my reading on the subject, but those links should provide further reading resources. Anyway, the connection between the question of how to ground knowledge and this post is that if knowledge can't be grounded, it seems by default it can only be circularly justified. Another way to describe this issue is as the proposition that all worldviews entail some kind of dogma to justify their own knowledge claims.

Comment by evan_gaensbauer on Drowning children are rare · 2019-06-09T21:40:46.770Z · score: 4 (2 votes) · LW · GW

When he wrote:

Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense.

In most contexts when language like this is used, it's usually pretty clear that you are implying someone is doing something closer to deliberately lying than some softer kind of deception. I am aware Ben might have some model of how Givewell or others in EA are acting in bad faith in some other manner, involving self-deception. If that is what he is implying Givewell or Good Ventures are doing instead of deliberately lying, that isn't clear from the OP. He could also have stated that the organizations in question are not fully aware they're just marketing obvious nonsense, and have been immune to his attempts to point this out to them. If that is the case, he didn't state it in the OP either.

So, based on their prior experience, I believe it would appear to many people that he was implying Givewell, Good Ventures, and EA are deliberately lying. Deliberate lying is generally seen as a bad thing, so to imply someone is deliberately lying is clearly an attribution of bad motives to others. If Ben didn't expect or think that is how people would construe part of what he was trying to say, I don't know what he was going for.

Comment by evan_gaensbauer on Drowning children are rare · 2019-06-09T21:22:18.112Z · score: 2 (1 votes) · LW · GW

I will take a look at them. Thanks.

Comment by evan_gaensbauer on Drowning children are rare · 2019-06-09T21:20:47.799Z · score: 2 (1 votes) · LW · GW

I'll take a look at these links. Thanks.

Comment by evan_gaensbauer on Drowning children are rare · 2019-06-09T21:18:00.316Z · score: 2 (1 votes) · LW · GW

Well, this was a question more about your past activity than your present activity, and also about the broader activity of the same kind by some people you seem to know well, but I thought I would take the opportunity to ask you about it now. At any rate, thanks for taking the time to humour me.

Comment by evan_gaensbauer on Drowning children are rare · 2019-06-09T20:15:12.059Z · score: 3 (2 votes) · LW · GW

Thanks for acknowledging my point about the rationality community. However, I was trying to get across more generally that I think the 'aesthetic identity movement' model might be lacking. If a theory makes the same predictions everywhere, it's useless. I feel like the 'aesthetic identity movement' model might be one of those theories that is too general and not specific enough for me to understand what I'm supposed to take away from its use. For example:

So, the United States of America largely isn't actually about being a land of freedom to which the world's people may flock (which requires having everyone's civil liberties consistently upheld, e.g., robust support for the rule of law, and not adding noise to conversations about these), it's an aesthetic identity movement around the Founding Fathers as a central node, similar to, e.g., most popular environmentalism (which, for example, opposes nuclear power despite it being good for the environment, because nuclear power is discordant with the environmentalism identity/aesthetics, and Greenpeace is against it). This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated values of America, America ought to be replaced with something very, very different.

Maybe if I knew what I'm supposed to do with the information that all kinds of things are aesthetic identity movements, instead of being what they actually say they are, I wouldn't be as confused.


Comment by evan_gaensbauer on Drowning children are rare · 2019-06-09T19:46:37.751Z · score: 2 (1 votes) · LW · GW
Yes, this is true, and also implies that the rationality community should be replaced with something very different, according to its stated goals. (Did you think I didn't think that?)
Geeks, Mops, Sociopaths happened to the rationality community, not just EA.

I don't think you didn't think that. My question was to challenge you to answer why you, and the others if you feel comfortable speaking to their perspectives, focus so much of your attention on EA instead of the rationality community (or other communities perhaps presenting the same kind and degree of problems), if you indeed understand they share similar problems and pose similarly high stakes (e.g., failure modes of x-risk reduction).

I asked because it's frustrating to me how inconsistent it is with your own efforts here to put way more pressure on EA than on rationality. I'm guessing part of the reason for your trepidation about the rationality community is that you sense how much disruption criticism could cause, and how much risk there is that nothing would change anyway. The same thing has happened when, not so much you, but some of your friends have criticized EA in the past. I was thinking it was because you are socially closer to the rationality community that you wouldn't be as willing to criticize it.

I am not as invested in rationality as a community as I was in the past. So, while I feel some personal responsibility to analyze the intellectual failure modes of rationality, I don't feel much of a moral urge anymore to correct its social failure modes. I therefore lack the motivation to think through whether it would be "good" or not for you to do it.

Comment by evan_gaensbauer on Drowning children are rare · 2019-06-09T05:49:37.590Z · score: 7 (3 votes) · LW · GW
But if attempts to analyze how we're collectively failing to live up to our ideals are construed as an attack, that just makes us even worse than we already are at living up to our own ideals!

On Ben's criticisms of EA, my opinion is that while I agree with many of his conclusions, I don't agree with some of the strongest conclusions he reaches, or with how he argues for them, simply because I believe the arguments are not good. This is common for interactions between EA and Ben these days, though Ben doesn't respond to counter-arguments, as he often seems to be under the impression that when a counter-argument disagrees with him in a way he doesn't himself accept, his interlocutors are persistently acting in bad faith. I hadn't interacted directly with Ben much for a while until he wrote the OP this week. So, I haven't been following as closely how Ben construes 'bad faith', and I haven't taken the opportunity to discover, if he were willing to relay it, what his model of bad faith is. I currently find his sense that some of the EAs he discusses with are acting in bad faith confusing. At least, I don't find it a compelling account of people's real motivations in discourse.

Comment by evan_gaensbauer on Drowning children are rare · 2019-06-09T05:42:52.740Z · score: 5 (2 votes) · LW · GW

I understand the "Vassar Crowd" to be a group of Michael Vassar's friends who:

  • were highly critical of EA.
  • were critical, though somewhat less so, of the rationality community.
  • were partly at odds with the bulk of the rationality community for not being as hostile to EA as they thought it should have been.

Maybe you meet those qualifications, but as I understand it the "Vassar Crowd" started publishing blog posts on LessWrong and their own personal blogs, as well as on social media, over the course of a few months starting in the latter half of 2016. It was part of a semi-coordinated effort. While I wouldn't posit a conspiracy, it seems like a lot of these criticisms of EA were developed in conversations within this group, and, given the name of the group, I assume the different people involved were primarily nudged by Vassar. This also precipitated the creation of Alyssa Vance's Long-Term World Improvement mailing list.

It doesn't seem to have continued as a crowd to the present: the lives of the people involved have obviously changed a lot, and from the outside it doesn't appear as cohesive anymore, I assume in large part because of Vassar's decreased participation in the community. Ben seems to be one of the only people still sustaining the effort to criticize EA as the others did before.

So while I appreciate the disclosure, I don't know if my previous comment was precise enough: as far as I understand, the Vassar Crowd was more of a limited clique that manifested much more in the past than in the present.

Comment by evan_gaensbauer on Drowning children are rare · 2019-06-08T21:24:09.471Z · score: 6 (3 votes) · LW · GW

I'm going to flip this comment on you, so you can understand how I'm seeing it, and why I fail to see why the point you're trying to make matters.

So, rationality largely isn't actually about thinking clearly (which requires having correct information about what things actually work, e.g., well-calibrated priors, and not adding noise to conversations about these), it's an aesthetic identity movement around HPMoR as a central node, similar to, e.g., most popular environmentalism (which, for example, opposes nuclear power despite it being good for the environment, because nuclear power is discordant with the environmentalism identity/aesthetics, and Greenpeace is against it). This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated values of rationality, rationality-as-it-is ought to be replaced with something very, very different.

One could nitpick about how HPMoR has done much more to save lives through AI alignment than Givewell has ever done through developing-world interventions, and I'll go share that info, attributed to Jessica Taylor, in defence of (at least some of) what Ben Hoffman is trying to achieve, perhaps among other places on the public internet, and we'll see how that goes. The point I was trying to make is that much of the rationality community has nothing to do with the community's stated values. So, in stating your personal impression of EA, based on Sarah's blog post, as though it were fact, as if it means something unique about EA that isn't true of other human communities, you've argued for too much.

Also, in this comment I indicated my awareness of what was once known as the "Vassar crowd", which I recall you were a part of:

Shall I point out to all the communities of x-risk reduction, long-term world improvement, EA, and rationality that Michael Arc/Vassar and some of his friends formed a "Vassar crowd", a cell aimed at unilaterally driving a wedge between x-risk/rationality and EA, which included you, Sarah Constantin, Michael Arc, and Alyssa Vance, among others? Should I not hold you or Michael Arc individually responsible for the things you've done since then that have caused you to have a mixed reputation, or should I castigate all of you and Michael's friends in the bunch too, along with as much of the rationality community as I feel like? After all, you're all friends, and you decided to make the effort together, even though you each made your own individual contributions.

While we're here, would you mind explaining to me what your beef was with the EA community, as being misleading in myriad ways to the point of menacing x-risk reduction efforts and other pursuits of what is true and good, without applying the same pressure to parts of the rationality community that pose the same threat, or, for that matter, to any other group of people that does the same? What makes EA special?


Comment by evan_gaensbauer on Drowning children are rare · 2019-06-06T23:53:58.995Z · score: 2 (1 votes) · LW · GW

I believe it's because people get their identities very caught up in EA, and, for EAs focused on global poverty alleviation, in Givewell and its recommended charities. So, when someone like you criticizes Givewell, a lot of them react in primarily emotional ways, creating a noisy space where messages like yours get lost. So, the points you're trying to make about Givewell, or the similar points many others have tried making about Givewell, don't stick for enough of the EA community, or whoever else the relevant groups of people are. Thus, in the collective memory of the community, these things are forgotten or not noticed. Then the cycle repeats itself each time you write another post like this.

Comment by evan_gaensbauer on Drowning children are rare · 2019-06-06T23:49:38.183Z · score: -13 (3 votes) · LW · GW
I'm not going to bother addressing comments this long in depth when they're full of basic errors like this.

While there is what you see as at least one error in my post, there are many items I see as errors in your post that I will bring to everyone's attention. My post will be revised, edited, and polished so it doesn't contain the errors you see in it, or at least so that what I am and am not saying won't be ambiguous. It will be a top-level article on both the EA Forum and LW. A large part of it is going to be that you are at best using extremely sloppy arguments, and at worst making blatant attempts to use misleading info to convince others to do what you want, just as you accuse Good Ventures, Open Phil, and Givewell of doing. One theme will be that you're still in the x-risk space, employed in AI alignment, and willing to do this to your former employers, who are also involved in the x-risk/AI alignment space. So, while you may not want to bother addressing these points, I imagine you will have to eventually for the sake of your reputation.

Comment by evan_gaensbauer on Drowning children are rare · 2019-06-06T23:44:27.805Z · score: 2 (1 votes) · LW · GW

Otherwise, here is what I was trying to say:

1. Givewell focuses on developing-world interventions, not AI alignment or any other focus area of Open Phil, which means they aren't responsible for anything to do with OpenAI.

2. It's unclear from what you write what role, if any, Open Phil plays in the relationship between Givewell and Good Ventures with regard to Givewell's annual recommendations to Good Ventures. If it were clear Open Phil was somehow an intermediary in that regard, then your treating all 3 projects under 1 umbrella as 1 project with no independence between any of them might make sense. You didn't establish that, so it doesn't make sense.

3. Good Ventures signs off on all the decisions Givewell and Open Phil make, and they should be held responsible for the decisions of both Givewell and Open Phil. Yet you know there are people who work for Givewell and Open Phil who make decisions that are completed before Good Ventures signs off on them. Or I assume you do, since you worked for Givewell. If you somehow know it's all top-down both ways, that Good Ventures tells Open Phil and Givewell each what it wants from them, and Open Phil and Givewell just deliver the package, then say so.

Yes, they do share the same physical office. Yes, Good Ventures pays for it. Shall I point to mistakes made by one of MIRI, CFAR, or LW, but not more than one, and then link the mistake, whenever it was made and however tenuously, to all of those organizations?

Should I do the same to any two or more other AI alignment/x-risk organizations you favour, who share offices or budgets in some way?

Shall I point out to all the communities of x-risk reduction, long-term world improvement, EA, and rationality that Michael Arc/Vassar and some of his friends formed a "Vassar crowd", a cell aimed at unilaterally driving a wedge between x-risk/rationality and EA, which included you, Sarah Constantin, Michael Arc, and Alyssa Vance, among others? Should I not hold you or Michael Arc individually responsible for the things you've done since then that have caused you to have a mixed reputation, or should I castigate all of you and Michael's friends in the bunch too, along with as much of the rationality community as I feel like? After all, you're all friends, and you decided to make the effort together, even though you each made your own individual contributions.

I won't do those things. Yet that is what it would be for me to behave as you are behaving. I'll ask you one more question about what you might do: when can I expect you to publicly condemn FHI, on the grounds that doing so is justified because FHI is right next door to CEA, yet Nick Bostrom lacks the decency to go over there and demand the CEA stop posting misleading stats, lest FHI break with the EA community forevermore?

Comment by evan_gaensbauer on Drowning children are rare · 2019-06-06T23:13:27.702Z · score: 2 (1 votes) · LW · GW
Then why do Singer and CEA keep making those exaggerated claims?

I don't know. Why don't you ask Singer and/or the CEA?

I don't see why they'd do that if they didn't think it was responsible for persuading at least some people.

They probably believe it is responsible for persuading at least some people. I imagine the CEA does it through some combination of revering Singer, thinking it's good for optics, and not thinking the level of imprecision involved is so grievous as to be objectionable in the context the claims are presented in.

Comment by evan_gaensbauer on Drowning children are rare · 2019-06-01T16:24:56.444Z · score: 15 (5 votes) · LW · GW

So, first of all, when you write this:

Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated.

It seems like what you're trying to accomplish, for rhetorical effect but not irrationally, is to demonstrate that the only alternative to "wildly exaggerated" cost-effectiveness estimates is that foundations like these are doing something even worse: hoarding money. There are a few problems with this.

  • You're not distinguishing who the specific cost-effectiveness estimates you're talking about come from. It's a bit of a nitpick to point out that it's Givewell rather than Good Ventures that makes the estimates, when the 2 organizations are so closely connected and Good Ventures can be held responsible for the grants it makes on the basis of the estimates, if not for the original analysis that informed them.
  • At least in the case of Good Ventures, there is a third alternative: they are reserving billions of dollars, not at the price of millions of preventable deaths, but because, for a variety of reasons, they intend in the present and future to give that money to a diverse portfolio of causes they believe present just as much, if not more, opportunity to prevent millions of deaths, or to otherwise do good. Thus, in the case of Good Ventures, you knew as well as anyone that the idea that only one of your two posed conclusions could be true is wildly misleading.

So, what might have worked better is something like:

Foundations like Good Ventures apportion a significant amount of their endowment to developing-world interventions. If the low cost-per-life-saved numbers Good Ventures is basing this giving on are not wildly exaggerated, then Good Ventures is saving millions fewer lives than they could with this money.

The differences in my phrasing are:

  • it doesn't imply foundations like Good Ventures or the Gates Foundation are the only ones to be held responsible for the fact the cost-effectiveness estimates are wildly exaggerated.
  • it doesn't create the impression that Good Ventures and the Gates Foundation, contrary to common knowledge, ever intended to use their respective endowments exclusively to save lives with developing-world interventions, an impression which sets up a false dichotomy in which the organizations are necessarily either highly deceptive or doing something even more morally indictable.

You say a couple sentences later:

Either scenario clearly implies that these estimates are severely distorted and have to be interpreted as marketing copy designed to control your behavior, not unbiased estimates designed to improve the quality of your decisionmaking process.

As you've covered in discussions elsewhere, the implication of the fact that, based on the numbers they're using, these foundations could be saving more lives than they are with the money they've intended for saving lives through developing-world interventions, is that the estimates are clearly distorted. You don't need an "either scenario", one branch of which you wrote about in a way that implies something you know to be false, to get that implication across clearly.

There aren't 2 scenarios, one which makes Good Ventures look worse than they actually are, and one in which the actual quality of their mission is less than the impression people have of it. There is just 1 scenario, in which the ethical quality of these foundations' progress on their goals is less than the impression much of the public has gotten of it.

As far as I can see, this pretty much destroys the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent. Insofar as there's a way to fix these problems as a low-info donor, there's already enough money. Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense.

Here, you do the same thing of conflating multiple, admittedly related actors. When you say "the same people", you could be referring to any or all of the following, and it isn't clear who you are holding responsible for what:

  • Good Ventures
  • The Open Philanthropy Project
  • Givewell
  • The Gates Foundation
  • 'effective altruism' as a movement/community, independent of individual, officially aligned or affiliated non-profit organizations

In treating each of these actors as part and parcel with the others, you appear to hold each of them equally culpable for all the mistakes you've listed here, which, as I've covered in this and my other, longer comment, is false in myriad ways. Had you made clear in your conclusion who you are holding responsible for each respective factor in the negative consequences of the exaggerated cost-effectiveness estimates, your injunctions for how people should change their behaviour and respond to these actors differently would have rung more true.


Comment by evan_gaensbauer on Drowning children are rare · 2019-05-30T17:59:46.808Z · score: 3 (2 votes) · LW · GW

Here is the link.

Comment by evan_gaensbauer on Drowning children are rare · 2019-05-30T16:58:53.374Z · score: 2 (1 votes) · LW · GW

Based on how you write, it's clear you understand the mistake may be made more by others referencing Givewell's numbers than by Givewell itself. Yet the tone of your post seems to hold Givewell, and not others, culpable for how others use Givewell's numbers. Making an ethical appeal directly to the effective altruists who draw unjustified conclusions from Givewell's numbers, telling them that what they're doing is misleading, dishonest, or wrong, may be less savoury than a merely rational appeal explaining how or why what they're doing is misleading, dishonest, or wrong, on the expectation that people are not fully cognizant of their own dishonesty. Yet your approach thus far doesn't appear to be working. You've been trying it for a few years now, so I'd suggest trying some new tactics or strategy.

I think at this point it would be fair for you to be somewhat less charitable to those who make wildly exaggerated claims using Givewell's numbers, and to write as though your audience is Givewell and others, explaining that the people using Givewell's numbers are being dishonest, rather than trying to explain to people who aren't acting entirely in good faith that they are being dishonest.

The way you write makes it seem as though you believe Givewell itself is the most dishonest actor in this whole affair, which I think readers find senseless enough that they're less inclined to take the rest of what you're trying to say seriously. I think you should try talking more about the motives of the actors you're referring to other than Givewell, in addition to Givewell's motives.

Comment by evan_gaensbauer on Drowning children are rare · 2019-05-30T15:36:55.828Z · score: 2 (1 votes) · LW · GW

I can't find the link right now, but I've asked others for it, so hopefully it'll turn up. If I come across it again, I'll reply here with it.

Comment by evan_gaensbauer on Drowning children are rare · 2019-05-30T02:31:45.872Z · score: 7 (3 votes) · LW · GW

As an aside, 'high bar for terror' is the best new phrase I've come across in a long while.

Comment by evan_gaensbauer on Drowning children are rare · 2019-05-30T01:05:13.688Z · score: 22 (10 votes) · LW · GW

I haven't read your entire series of posts on Givewell and effective altruism, so I'm basing this comment mostly on this post alone. It seems to jump all over the place.

You say:

Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated. My former employer GiveWell in particular stands out as a problem here, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell's top charities; they were worried that this would be an unfair way to save lives.

This sets up a false dichotomy. Both the Gates Foundation and Good Ventures are focused on areas in addition to funding interventions in the developing world. Obviously, they both believe those other areas, e.g., in Good Ventures' case, existential risk reduction, present them with the opportunity to prevent just as many, if not more, deaths than interventions in the developing world. Of course, a lot of people disagree with the idea that something like AI alignment, which Good Ventures funds, is in any way comparable to cost-effective interventions in the developing world in terms of how many deaths it prevents, its cost-effectiveness, or its moral value. Yet given that you used to work for Givewell and are now much more focused on AI alignment, it doesn't seem like you're one of those people.

If you were one of those people, you would find it quite objectionable that Good Ventures is not spending all its money on developing-world interventions and is instead spreading its grants over time to shape the longer-term future through AI safety and other focus areas. If you are that kind of person, i.e., if you do believe it is objectionable for Good Ventures to 'hoard' money for other focus areas like AI alignment rather than treating developing-world interventions as its top priority, you haven't made that at all clear.

Unless you believe that, then right here there is a third option besides "the Gates Foundation and Good Ventures are hoarding money at the price of millions of deaths" and "the numbers are wildly exaggerated". That is, both foundations believe the money they are reserving for focus areas other than developing-world interventions isn't being hoarded at the expense of millions of lives. Presumably, this is because both foundations also believe the counterfactual expected value of those other focus areas is at least comparable to the expected value of developing-world interventions.

If, across the proportions of their endowments they've respectively allotted to developing-world interventions and other focus areas, the Gates Foundation and Good Ventures appear not to be giving away their money as quickly as they could while still being as effective as possible, then objecting to that would make sense. However, that would be a separate thesis you haven't covered in this post. Were you to put forward such a thesis, you've already laid out the case for what's wrong with a foundation like Good Ventures not fully funding Givewell's recommended developing-world charities each year.

Yet you would still need to make additional arguments for what Good Ventures is doing wrong in granting only as much as it currently does each year to another focus area like AI alignment, instead of grantmaking at a much higher annual rate or volume. Were you to do that, it would be appropriate to point out what is wrong with the reasons an organization like the Open Philanthropy Project (Open Phil) doesn't grant much more to its other focus areas each year.

For example, one reason it wouldn't make sense for Open Phil to start granting 100x as much to AI risk each year as it does now is that it's not clear AI risk as a field currently has that much room for more funding. At the very least, it's not clear AI risk organizations could sustain such a high growth rate if their grants from Open Phil were 100x bigger than they are now. That's an entirely different point from any you made in this post, and as far as I'm aware, it isn't an argument you've made anywhere else.

Given that you are presumably familiar with these considerations, it seems to me you should have been able to anticipate the possibility of the third option. In other words, unless you're going to make the case that either:

  • it is objectionable for a foundation like Good Ventures to reserve some of its endowment for the long-term development of a focus area like AI risk instead of using it all to fund cost-effective developing-world interventions, and/or;
  • it is objectionable that Good Ventures isn't funding AI alignment more than it currently is, and why;

you should have been able to tell in advance that the dichotomy you presented is indeed a false one. Of the two options in that dichotomy, it seems you believe cost-effectiveness estimates like Givewell's are wildly exaggerated. I don't know why you presented it as though either scenario might just as easily be true, but the fact that you're exactly the kind of person who should have been able to anticipate a plausible third scenario, and didn't, undermines the point you're trying to make.

Either scenario clearly implies that these estimates are severely distorted and have to be interpreted as marketing copy designed to control your behavior, not unbiased estimates designed to improve the quality of your decisionmaking process.

One thing that falls out of my commentary above is that, since it is not clearly the case that one of the two scenarios you presented must be true, it is not necessarily the case either that the cost-effectiveness estimates "have to be interpreted as marketing copy designed to control your behaviour". What's more, you've presented another false dichotomy here. It is not the case that Givewell's cost-effectiveness estimates must be exclusively one of the following:

  • severely distorted marketing copy designed for behavioural control.
  • unbiased estimates designed to improve the quality of your decision-making process.

Obviously, Givewell's estimates aren't unbiased. I don't recall Givewell ever claiming to be unbiased, although it is a problem for other actors in EA to treat Givewell's cost-effectiveness estimates as if they were. I recall from reading a couple of posts from your series on Givewell that you seemed to be trying to hold Givewell responsible for the exaggerated rhetoric of others in EA who use Givewell's cost-effectiveness estimates. It seems like you're doing that again now. I never understood then, and I don't understand now, why you've tried explaining all this as if Givewell is responsible for how other people misuse their numbers. Perhaps Givewell should do more to discourage a culture of exaggeration and bluster in EA built on people using their cost-effectiveness estimates and prestige as a charity evaluator to make claims about developing-world interventions that aren't actually backed up by Givewell's research and analysis.

Yet that is another, different argument you would have to make, and one that you didn't. Holding Givewell exclusively culpable for how their cost-effectiveness estimates and analyses have been misused, as you have in the past and do now, would only be justified by some kind of evidence that Givewell is actively trying to cultivate a culture of exaggeration, bluster, and shiny-distraction-via-prestige around itself. I'm not saying no such evidence exists, but if it does, you haven't presented any of it.

We should be more skeptical, not less, of vague claims by the same parties to even more spectacular returns on investment for speculative, hard to evaluate interventions, especially ones that promise to do the opposite of what the argument justifying the intervention recommends.

You make this claim as though it might be the exact same people in the organizations of Givewell, Open Phil, and Good Ventures who are responsible for all the following decisions:

  • presenting Givewell's cost-effectiveness estimates in the way they do.
  • making recommendations to Good Ventures via Givewell about how much Good Ventures should grant to each of Givewell's recommended charities.
  • Good Ventures' stake in OpenAI.

However, it isn't the same people making all of these decisions across these 3 organizations.

  • Dustin Moskovitz and Cari Tuna are ultimately responsible for what kinds of grants Good Ventures makes, regardless of focus area, but they obviously delegate much decision-making to Open Phil.
  • Good Ventures obviously has tremendous influence over how Givewell conducts the research and analysis behind particular cost-effectiveness estimates, but by all appearances Good Ventures has let Givewell operate with a great deal of autonomy, and hasn't been trying to get Givewell to dramatically alter how it conducts its research and analysis. Thus, it makes sense to look to Givewell, not Good Ventures, for what to make of that research and analysis.
  • Elie Hassenfeld is the current executive director of Givewell, and thus is the one to be held ultimately accountable for Givewell's cost-effectiveness estimates, and recommendations to Good Ventures. Holden Karnofsky is a co-founder of Givewell, but for a long time has been focusing full-time on his role as executive director of Open Phil. Holden no longer co-directs Givewell with Elie.
  • As ED of Open Phil, Holden has spearheaded Open Phil's work in, and Good Ventures' funding of, AI risk research.
  • That there is a division of labour whereby Holden leads Open Phil's work and Elie leads Givewell's has been common knowledge in the effective altruism movement for a long time.

What many people disagreed with about Open Phil recommending Good Ventures take a stake in OpenAI, and Holden Karnofsky consequently being made a board member of OpenAI, is based on the particular roles played by the people involved in the grant investigation, which I won't go through here. It is also based, as in your case, on the expectation that OpenAI may make the state of AI risk worse rather than better, owing to OpenAI's ignorance or misunderstanding of how AI alignment research should be conducted, at least in the eyes of many people in the rationality and x-risk reduction communities.

The assertion that Givewell is wildly exaggerating its cost-effectiveness estimates is an assertion that the numbers are being fudged at a different organization than Open Phil. The common denominator is, of course, that Good Ventures made grants on recommendations from both Open Phil and Givewell, and that Holden and Elie are co-founders of both organizations. However, the two separate cases, Givewell's cost-effectiveness estimates and Open Phil's process for recommending Good Ventures take a stake in OpenAI, involve two separate organizations, run by two separate teams, led separately by Elie and Holden respectively. If something wrong has been done in each of the cases you present, of Givewell and of Open Phil's support for OpenAI, they are two very different kinds of mistakes made for very different reasons.

Again, Good Ventures is ultimately accountable for the grants made in both cases. You could hold each organization accountable separately, but when you refer to them as the "same parties", you're making it out as though Good Ventures and its satellite organizations are, generically, either incompetent or dishonest. I say "generically" because, while you set it up that way, you know as well as anyone the specific ways in which the two cases of Givewell's estimates and of Open Phil's/Good Ventures' relationship with OpenAI differ. You know this because you have been one of the most prominent individual critics, if not the most prominent, in both cases for the last few years.

Yet when you call them all the "same parties", you're treating both cases as if the 'family' of Good Ventures and its surrounding organizations generally can't be trusted, because it's opaque to us how they come to make the decisions that, as you've alleged, lead to dishonest or mistaken outcomes. Yet you're one of the people who made clear to everyone else how those decisions were made, which people and organizations made them, and what one might find objectionable about them.

To substantiate the claim that the two different cases of Givewell's estimates and Open Phil's relationship to OpenAI are sufficient grounds for concluding that none of these organizations, nor their parent foundation Good Ventures, can generally be trusted, you could have held Good Ventures accountable for not being diligent enough in monitoring the fidelity of the recommendations it receives from either Givewell or Open Phil. Yet you didn't do that. You could also, now or in the past, have argued that Givewell and Open Phil should each separately be held accountable for what you see as their mistakes in the two separate cases. Yet you didn't do that either.

Making any of those arguments would have made sense. Instead, you treated it as though Givewell, Open Phil, and Good Ventures all play the same kind of role in both cases, when not even all three organizations are involved in both cases. To summarize: the two cases of Givewell's estimates and Open Phil's relationship to OpenAI, if they are problematic at all, are not the same kinds of problems caused by Good Ventures for the same reasons. Yet you're making it out as though they are.

This might make more sense coming from someone else who just saw the common connection to Good Ventures and didn't know how to criticize them other than to point out they were sloppy in both cases. Yet you know everything I've mentioned about who the different people are in each case, which kinds of decisions each organization is responsible for, and how they differ in how they make those decisions. So you know how to hold each organization separately accountable for what you see as its separate mistakes. You know these things because you:

  • identified as an effective altruist for several years.
  • have been a member of the rationality community for several years.
  • are a former employee of Givewell.
  • have, since leaving Givewell, transitioned to focusing more of your time on AI alignment.

Yet you make it out as though Good Ventures, Givewell, and Open Phil are a unitary blob that makes poor decisions. Had you made any one, or even all, of the specific alternative arguments I suggested for holding each of the three organizations individually accountable, it would have been much easier to make a solid and convincing case than the one you've actually made about these organizations. Because you didn't, this is another instance of you undermining what you yourself are trying to accomplish with a post like this.

As far as I can see, this pretty much destroys the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent.

You started this post off with what's wrong with Peter Singer's cost-effectiveness estimates from his 1997 essay. Then you pointed out how you see specific EA-aligned organizations making similar mistakes today. Then you bridged to how, because the funding gaps are illusory given the erroneous cost-effectiveness estimates, the Gates Foundation and Good Ventures are doing much less than they should with regard to developing-world interventions.

Then you zoom in on what you see as a common pattern of bad recommendations being given to Good Ventures by Open Phil and Givewell. Yet the two cases of recommendations you've provided come from two separate organizations that make their decisions and recommendations in very different ways and are run by two different teams of staff, as I pointed out above. And since, as I've established, you've known all this in intimate detail for years, you're making arguments that make much less sense than the ones you could have made with the information available to you.

None of that has anything to do with the Gates Foundation. You told me in response to another comment I made on this post that it was another recent discussion on LW where the Gates Foundation came up that inspired you to write this post. You made your point about the Gates Foundation. Then that thread didn't go anywhere, because you moved on to unrelated points about unrelated organizations.

For the record, when you said:

If you give based on mass-marketed high-cost-effectiveness representations, you're buying mass-marketed high-cost-effectiveness representations, not lives saved. Doing a little good is better than buying a symbolic representation of a large amount of good. There's no substitute for developing and acting on your own models of the world.

and

Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits.

none of that applies to the Gates Foundation, because the Gates Foundation isn't an EA-aligned organization "mass-marketing high-cost-effectiveness representations" in a bid to get small, individual donors to build a mass movement of effective charitable giving to fill illusory funding gaps it could easily fill itself. Other things being equal, the Gates Foundation could obviously fill the funding gap. But none of the rest applies to the Gates Foundation, and it would have to for it to make sense that this post and its thesis were inspired by mistakes made by the Gates Foundation rather than just by EA-aligned organizations.

However, going back to "the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent", it seems like you're claiming the thesis of Singer's 1997 essay, and the basis for effective altruism as a movement(?), are predicated exclusively on reliably nonsensical cost-effectiveness estimates from Givewell/Open Phil, not just for developing-world interventions but in general. None of that is true, because Singer's thesis is not based exclusively on a specific set of cost-effectiveness estimates about specific causes from specific organizations, and Singer's thesis isn't the exclusive basis for the effective altruism movement. Even if your argument were logically valid, your conclusion would not be sound either way, because, as I've pointed out above, the premise that it makes sense to treat Givewell, Open Phil, and Good Ventures as a unitary actor is false.

In other words, because "mass-marketed high-cost-effectiveness representations" are not the foundation of "the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent" in general, and certainly aren't some kind of primary basis for effective altruism, if that was something you were suggesting, your conclusion destroys nothing.

To summarize:

  • you knowingly presented a false dichotomy about why the Gates Foundation and Good Ventures don't donate their entire endowments to developing-world interventions.
  • you knowingly set up a false dichotomy whereby either everyone has been acting the whole time as if Givewell's and Open Phil's cost-effectiveness estimates are unbiased, or the estimates are wildly exaggerated because those organizations are deliberately trying to manipulate people's behaviour.
  • you cannot plausibly claim you weren't cognizant of the fact that these dichotomies are false, because the evidence with which you present them consists of your own prior conclusions, drawn in part from your personal and professional experience.
  • you said this post was inspired by the point you made about the Gates Foundation, but that has nothing to do with the broader arguments you've made about Good Ventures, Open Phil, or Givewell, and those arguments don't back the conclusion you've consequently drawn about utilitarianism and effective altruism.

In this post, you've raised some broad concerns about things happening in the effective altruism movement that I think are worth serious consideration.

My former employer GiveWell in particular stands out as a problem here, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell's top charities; they were worried that this would be an unfair way to save lives.

I don't believe the rationale for why Givewell doesn't recommend that Good Ventures fully fund Givewell's top charities totally holds up, and I'd like to understand better why they don't. I think Givewell should perhaps recommend that Good Ventures fully fund its top charities each year.

Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits.

The concern that EA has a tendency to move people too far away from these more ordinary and concrete aspects of their lives is a valid one.

I am also unhappy with much of what has happened relating to OpenAI.

All of these are valid concerns that would be much easier to take seriously if you presented arguments for them on their own, rather than as a few of many different assertions that relate to each other only tenuously, in a big soup of an argument against effective altruism that doesn't logically hold up, given the litany of unresolved issues I've pointed out above. It's also not clear why you wouldn't have realized this before you made this post, given that all the knowledge serving as evidence for your premises was information you yourself had already published on the internet.

Even if the apparent leaps of logic in this post are artifacts of its being a truncated summary of your entire, extensive series of posts on Givewell and EA, the structure of this one post undermines the points you're trying to make with it.

Comment by evan_gaensbauer on Drowning children are rare · 2019-05-29T19:16:41.302Z · score: 12 (3 votes) · LW · GW

Okay, thanks. Sorry for the paranoia. I just haven't commented on any LW posts with the 'reign of terror' commenting guidelines before, so I didn't know what to expect. That gives me enough context to feel confident my comment won't be like that one you deleted.


Comment by evan_gaensbauer on Drowning children are rare · 2019-05-29T18:00:01.189Z · score: 2 (1 votes) · LW · GW

That's surprising. Have you been exposed to the other aforementioned critique of Givewell? I ask because it falls along very similar lines to yours, but it appears to have been written without reference to yours whatsoever.

Comment by evan_gaensbauer on Drowning children are rare · 2019-05-29T15:18:41.055Z · score: 2 (1 votes) · LW · GW

Strongly upvoted.

Comment by evan_gaensbauer on Drowning children are rare · 2019-05-29T15:18:17.304Z · score: 2 (1 votes) · LW · GW

Just to add user feedback, I did indeed have no idea this was what happens when comments are deleted.

Comment by evan_gaensbauer on Drowning children are rare · 2019-05-29T15:17:20.425Z · score: 16 (5 votes) · LW · GW

I didn't know that, but neither do I mind the experience of having a comment deleted. I would mind:

  • that Benquo might moderate this thread stringently according to a standard he fails to disclose, and thus use moderation as a means to move the goalposts, under the social auspices of claiming to delete my comment because he saw it as wilfully belligerent, without substantiating that claim.
  • that Benquo would be more motivated to do this than he otherwise would be in other discussions he moderates on LW, as he has initiated this discussion with an adversarial frame, and it is one he feels personally quite strongly about (e.g., it is based on a long-running public dispute with his former employer, and Benquo is not shy here about his hostility to at least large portions of the EA movement).
  • that, were he to delete my comment on such grounds, there would be no record by which anyone reading this discussion could hold Benquo accountable to the standard he used to delete it, unduly stacking the deck against any appeal I could make that in deleting my comment Benquo had been inconsistent in his moderation.

Were this actually to happen, of course I would take my comment and re-post it as its own article. However, I would object to how Benquo had deleted my comment in that case, not the fact that he did it, on the grounds that I'd see it as legitimately bad for the state of discourse LW should aspire to. By checking what specific form Benquo's moderation standard takes, beyond a reign of terror against any comments he sees as vaguely annoying or counterproductive, I am trying to:

1. externalize a moderation standard to which Benquo could be held accountable.

2. figure out how I can write my comment so it meets Benquo's expectations for quality, so as to minimize unnecessary friction.


Comment by evan_gaensbauer on Drowning children are rare · 2019-05-28T22:20:40.188Z · score: 4 (2 votes) · LW · GW

Was your posting this inspired by another criticism of Givewell recently published by another former staffer at Givewell?

Comment by evan_gaensbauer on Drowning children are rare · 2019-05-28T21:54:25.659Z · score: 7 (7 votes) · LW · GW

I have one or more comments I'd like to make, but I'd like to know what sorts of comments you consider either 'annoying' or 'counterproductive' before I make them. I agree with some aspects of this article, but I disagree with others. I've checked, and I think my disagreements will be greater in both number and degree than those in the other comments here. I wouldn't expect you to find critical engagement based on strong disagreement to "be annoying or counterproductive", but I'd like to get a sense of whether you think my coming out of the gate disagreeing or criticizing would be too annoying or counterproductive.

I ask because I wouldn't like to write a lengthy response only to have it deleted. If you think that might end up being the case, then I will respond to this article with one of my own.

Comment by evan_gaensbauer on Evidence for Connection Theory · 2019-05-28T21:40:32.327Z · score: 6 (3 votes) · LW · GW

I'm writing an article that will be cross-posted on LW that will cover the following:

  • What Leverage might be doing.
  • The reasons why it might be hard to figure out what they're doing.

By that I mean there are a variety of reasons Leverage is apparently not much of a public-facing organization (some of those reasons seem either truer or better than others, as a lot of it is based on rumours about Leverage). I'll lay those out. I will try to figure out what Leverage is currently doing, and try to communicate it. I'm not confident I'll succeed at this.

CT was always Geoff Anders' baby, so I don't think it mattered as much whether the rest of Leverage endorsed it or not. I wouldn't bet on Geoff still endorsing it, but as far as I can tell, while the philosophy Leverage is working off of isn't called "Connection Theory", it is something that evolved out of it. So I expect CT is still at least somewhat representative of Leverage's current philosophy. I'm aware this info is unclear and illegible, and I'm sorry; those are just the weeds one has to get used to wading through in pursuit of information about Leverage.

Comment by evan_gaensbauer on Evidence for Connection Theory · 2019-05-28T19:31:20.039Z · score: 5 (3 votes) · LW · GW

For context, finding information online from Leverage Research about Leverage Research, or their, you know, research, has historically been difficult for those who have gone looking for it. It has a tendency to seemingly disappear from the internet, or at least from its prior web address, after a couple of years. So, while I don't know where one is right now, I intend to scrounge up a link to what CT actually claims. This document was one I hadn't seen before. Since I think I might write something up about CT later, I thought I'd throw up this link now. If and when I find a link to a description of CT, I'll come back here and ping you. I'll probably post such a link as another link post.

Comment by evan_gaensbauer on Evidence for Connection Theory · 2019-05-28T19:26:29.598Z · score: 6 (3 votes) · LW · GW

Thanks. I was expecting a big block with the word 'Link Post' somewhere, instead of just the chain link. I'm only used to looking for that icon in the comments section to link to comments. It's just that in the comments the icon is so small I could never tell if it was supposed to be an image of a chain link.

Comment by evan_gaensbauer on Getting Out of the Filter Bubble Outside Your Filter Bubble · 2019-05-22T12:06:19.617Z · score: 2 (1 votes) · LW · GW

Well, before, you were using diplomats as an example, and now you're specifically talking about the POTUS, which changes everything, especially with regard to the Overton window. Suggesting the POTUS do something is of course much more sensitive than suggesting a generic or random diplomat do the same, regardless of what it is.

Comment by evan_gaensbauer on Getting Out of the Filter Bubble Outside Your Filter Bubble · 2019-05-22T00:26:35.808Z · score: 4 (2 votes) · LW · GW

Upvoted. Those are good points. I still don't know enough about how to evaluate the quality of the 'Kegan levels' framework, so I'm not sure what I should make of its application in this context.

Comment by evan_gaensbauer on Getting Out of the Filter Bubble Outside Your Filter Bubble · 2019-05-22T00:17:48.593Z · score: 2 (1 votes) · LW · GW
You can use this kind of definition of politics but you have to be careful as it means that political power has nothing to do with politics. In that frame Robert Moses can have the power to get the parliament in New York to pass whatever bill he wants without him doing anything political.

This is a good point. It's not the definition of politics I use for everything, just for the purposes of the OP.

Getting diplomats to do exercises to get in touch with their body on the other hand can be a means to get them to come to a peaceful resolution instead of waging war.

I guess it's fine as an example of how far outside the norm thinking can be. I don't think you're using the concept of "Overton window" entirely correctly with your example, though. An Overton window isn't just what is normal for people in a country to talk about in politics; it's specifically the window of acceptable political discourse. That doesn't mean just 'politically correct', because while a lot of cultural institutions in, e.g., North America are in the thrall of political correctness, the fact of the matter is there are multiple political factions who disagree with political correctness. Altogether, this leads to what might be a majority of people opposing (different kinds of) political correctness, but for different reasons. So there is wide opposition to political correctness; it's just not centralized the way the support for it is. And I'd say that in the last few years, ideologies like socialism and nationalism that weren't in the Overton window 20 years ago are back inside it in the United States.

The example of diplomats doing exercises doesn't seem to be outside the Overton window, because it doesn't strike me as politically unacceptable. It just seems like something most people wouldn't bring up because, for whatever reason, they wouldn't see the relevance of it. Or at least they may not be able to affect the personal behaviour of diplomats, so they wouldn't see the point of discussing it.

Comment by evan_gaensbauer on Getting Out of the Filter Bubble Outside Your Filter Bubble · 2019-05-22T00:02:26.987Z · score: 5 (3 votes) · LW · GW

That has been my experience as well. I didn't have a term for it, which is why 'bubble-hopping' doesn't appear in the OP, but that term captures a lot of my experience. A tacit part of the process I didn't include in the OP is what the experience of actually exposing yourself to different political worldviews is like. I was mostly trying to get the basic concept out. I might write another post about that.

I spend my time in new, different bubbles for about as long as it takes to get a sense or feeling that I understand the basics of how they see the world. In the past that has meant immersing myself in the communications and culture of a specific political niche, online and through other sources, for a month or two, but not longer. There are some groups I will dip in and out of.

I would have captured all this if I had written about how to do what I was talking about in the OP. There are a couple of reasons I didn't:

  • Like I said above, I wanted to introduce a basic concept without extra clutter.
  • A lot of the sources I consumed to understand political perspectives radically different from the norm aren't of great quality. Neither are lots of sources for more mainstream political perspectives, but the problem is that political groups seen as extremist or fringe are controversial, so I didn't want to rouse unnecessary backlash by recommending controversial sources I think are at best mediocre. I figured people could easily find sources that suit them by themselves.
  • I didn't want my post to run overly long.

If the discussion here prompts interesting insights or questions, I might write a follow-up post.

Comment by evan_gaensbauer on Getting Out of the Filter Bubble Outside Your Filter Bubble · 2019-05-21T00:07:41.101Z · score: 4 (2 votes) · LW · GW
Awww how kind of you <3. But no, I was referring to agorism. Economic action as political action, criticizing when you implied that violence and voting are the main ways to change the world.

I'm familiar with agorism. When I said there were 'liberal' ideologies and 'illiberal' ideologies, I didn't mean to imply that voting and violence, respectively, are the only ways people change the world. It's not a great framing, I admit, but you're making mountains out of molehills, positing that I believe a bunch of things I don't, and assuming I am much more ignorant than I actually am.

All actions have an opportunity cost, thats an exterme example as its dealing with tiny details.

You're being pedantic. The fact that, economically, all actions have opportunity costs has nothing to do with what we're talking about here. The time you took to write the above comment imposed an opportunity cost of time you couldn't spend spreading agorism, or whatever.

I'm not going to engage with your comment further, because it's not worth my time. Another reason I'm saying things like this in a public comment is that I want it to serve as a signal to others on LW that this is how you tend to engage with people, and how you persist in writing poor responses even when someone is willing to engage you. Your comments make bogus assumptions, are full of grammar and spelling errors that render sections of your writing incoherent, and you're setting things up as some kind of petty debate when nobody else was reading that into the conversation.