A few questions about recent developments in EA

post by Peter Berggren (peter-berggren) · 2024-11-23T02:36:25.728Z · LW · GW · 6 comments


Having noticed that I've asked these same questions repeatedly across a wide range of channels without ever getting satisfying answers, I'm compiling them here so that they can be discussed by a wide range of people in an ongoing way.

  1. Why has EV made many moves in the direction of decentralizing EA, rather than in the direction of centralizing it? In my non-expert assessment, there are pros and cons to each option; what made EV conclude that the balance favored decentralization?
  2. Why has Open Philanthropy decided not to invest in genetic engineering and reproductive technology, despite many notable figures (especially within the MIRI ecosystem) saying that this would be a good avenue to work in to improve the quality of AI safety research?
  3. Why, as an organization aiming to ensure the health of a community that is majority male and includes many people of color, does the CEA Community Health team consist of seven white women, no men, and no people of color?
  4. Has anyone considered possible perverse incentives that the aforementioned CEA Community Health team may experience, in that they may have incentives to exaggerate problems in the community to justify their own existence? If so, what makes CEA as a whole think that their continued existence is worth the cost?
  5. Why do very few EA organizations do large mainstream fundraising campaigns outside the EA community, when the vast majority of outside charities do?
  6. Why have so few people, both within EA and within popular discourse more broadly, drawn parallels between the "TESCREAL" conspiracy theory and antisemitic conspiracy theories?
  7. Why do university EA groups appear, at least upon initial examination, to focus so much on recruiting, to the exclusion of training students and connecting them with interested people?
  8. Why is there a pattern of EA organizations renaming themselves (e.g. Effective Altruism MIT renaming to Impact@MIT)? What were seen as the pros and cons, and why did these organizations decide that the pros outweighed the cons?
  9. When they did rename, why did they choose to rename to relatively "boring" names that potentially aren't as good for SEO as one that more clearly references Effective Altruism?
  10. Why aren't there more organizations within EA that are trying to be extremely hardcore and totalizing, to the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up? It seems like that is the kind of organization you would want to join, if you truly internalize the stakes here.
  11. When EAs talk about the "unilateralist's curse," why don't they qualify those claims with the fact that Arkhipov and Petrov were unilateralists who likely saved the world from nuclear war?
  12. Why hasn't AI safety as a field made an active effort to build large hubs outside the Bay, rather than the current state of affairs in which outside groups basically just function as recruiting channels to get people to move to the Bay?

I'm sorry if this is a bit disorganized, but I wanted to have them all in one place, as many of them seem related to each other.

6 comments

Comments sorted by top scores.

comment by Cole Wyeth (Amyr) · 2024-11-23T04:23:33.801Z · LW(p) · GW(p)

I think a really hardcore rationality monastery would be awesome. Seems less useful on the EA side - EAs have to interact with Overton-window-occupying institutions and are probably better off not totalizing too much.

Replies from: peter-berggren
comment by Peter Berggren (peter-berggren) · 2024-11-23T05:13:45.941Z · LW(p) · GW(p)

Very much agree on this one, as do many other people that I know of. However, the key counterargument as to why this may be better as an EA project than a rationality one is that "rationality" is vague on what you're applying it to, while "EA" is at least slightly more clear, and a community like this benefits from having clear goals. Nevertheless, it may make sense to market it as a "rationality" project and just have EA be part of the work it does.

So the question becomes: how would one go about building it?

comment by MichaelDickens · 2024-11-23T03:36:54.331Z · LW(p) · GW(p)

I will attempt to answer a few of these.

  1. Why has EV made many moves in the direction of decentralizing EA, rather than in the direction of centralizing it?

Power within EA is currently highly centralized. It seems very likely that the correct amount of centralization is less than the current amount.

  3. Why, as an organization aiming to ensure the health of a community that is majority male and includes many people of color, does the CEA Community Health team consist of seven white women, no men, and no people of color?

This sounds like a rhetorical question. The non-rhetorical answer is that women are much more likely than men to join a Community Health team, for approximately the same reason that most companies' HR teams are mostly women; and nobody has bothered to counteract this.

  4. Has anyone considered possible perverse incentives that the aforementioned CEA Community Health team may experience, in that they may have incentives to exaggerate problems in the community to justify their own existence?

I had never considered that, but I don't think it's a strong incentive, and it doesn't look like the Community Health team is doing this. If anything, I think they're incentivized to give themselves less work, not more.

  5. Why do very few EA organizations do large mainstream fundraising campaigns outside the EA community, when the vast majority of outside charities do?

That's not correct. Lots of EA orgs fundraise outside of the EA community.

  6. Why have so few people, both within EA and within popular discourse more broadly, drawn parallels between the "TESCREAL" conspiracy theory and antisemitic conspiracy theories?

Because guilt-by-association is a very weak form of argument. (And it's not even obvious to me that there are relevant parallels there.) And FWIW I don't respond to the sorts of people who use the word "TESCREAL" because I don't think they're worth taking seriously.

  7. Why do university EA groups appear, at least upon initial examination, to focus so much on recruiting, to the exclusion of training students and connecting them with interested people?

University groups do do those other things. But they do those things internally so you don't notice. Recruiting is the only thing they do externally, so that's what you notice.

  10. Why aren't there more organizations within EA that are trying to be extremely hardcore and totalizing, to the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up?

Some orgs did try that, and it generally didn't go well (e.g. Leverage Research). I think most people believe that totalizing jobs are bad for mental health, create bad epistemics, and aren't worth it.

  11. When EAs talk about the "unilateralist's curse," why don't they qualify those claims with the fact that Arkhipov and Petrov were unilateralists who likely saved the world from nuclear war?

Those are not examples of the unilateralist's curse. I don't want to explain it in this short comment but I would suggest re-reading some materials that explain the unilateralist's curse, e.g. the original paper.

  12. Why hasn't AI safety as a field made an active effort to build large hubs outside the Bay, rather than the current state of affairs in which outside groups basically just function as recruiting channels to get people to move to the Bay?

Because doing so would be a lot of work, which would take time away from doing other important things. I think people agree that having a second hub would be good, but not good enough to justify the effort.

Replies from: mateusz-baginski, peter-berggren
comment by Mateusz Bagiński (mateusz-baginski) · 2024-11-23T04:51:33.789Z · LW(p) · GW(p)

guilt-by-association

Not necessarily guilt-by-association, but maybe rather pointing out that the two arguments/conspiracy theories share a similar flawed structure, so if you discredit one, you should discredit the other.

Still, I'm unsure how much structure they actually share, and even if they do, I don't think pointing it out would be discursively effective, because I don't think most people care that much about that kind of consistency (happy to be updated in the direction of most people caring about it).

comment by Peter Berggren (peter-berggren) · 2024-11-23T04:10:17.796Z · LW(p) · GW(p)

Thanks for giving some answers here to these questions; it was really helpful to have them laid out like this.

1. In hindsight, I was probably talking more about decentralization of leadership than decentralization of funding. I agree that greater decentralization of funding is a good thing, but within the organizations funded by a given funder, decentralization of leadership seems likely to be either useless (if leadership decisions are still being made through informal networks between orgs rather than formal ones) or a source of unclear goals and direction.

3. I understand the dynamics that may cause the overrepresentation of women. However, that still doesn't completely explain why there is an overrepresentation of white women, even when compared to racial demographics within EA at large. It also doesn't explain why the overrepresentation of women here isn't seen as a problem on CEA's part, if even just from an optics perspective.

4. Makes sense, but I'm still concerned that, say, if CEA had an anti-Stalinism team, they'd be reluctant to ever say "Stalinism isn't a problem in EA."

5. Again, this was a question that was badly worded on my end. I was referring more specifically to organizations within AI safety, more than EA at large. I know that AMF, GiveDirectly, The Humane League, etc. fundraise outside EA.

6. I was asking a descriptive question here, not a normative one. Guilt by association, even if weak, is a very commonly used form of argument, and so I would expect it to be used in this case.

7. That makes sense. That was one of my hypotheses (hence my phrase "at least upon initial examination"), and I guess in hindsight it's probably the best one.

10. Starting an AI capabilities company that does AI safety as a side project generally hasn't gone well either, and yet people keep doing it. The fact that something hasn't gone well in the past doesn't seem to me to be a sufficient explanation for why people stop doing it, especially because Leverage largely seems to have failed for Leverage-specific reasons (e.g. too much engagement with woo). Additionally, your argument seems to prove too much: the Manhattan Project was a large scientific project operating under an intense structure, and yet it was able to maintain good epistemics (e.g. not fixating on designs that wouldn't work) under those conditions. Same with a lot of really intense start-ups.

11. They may not be examples of the unilateralist's curse in the original sense, but the term seems to have been expanded well past its original meaning, and they're examples of that expanded meaning.

12. It seems to me like this is work of a different type than technical alignment work, and could likely be accomplished by hiring different people than the people already working on technical alignment, so it's not directly trading off against that.

Replies from: MichaelDickens
comment by MichaelDickens · 2024-11-23T04:35:33.606Z · LW(p) · GW(p)
  6. I was asking a descriptive question here, not a normative one. Guilt by association, even if weak, is a very commonly used form of argument, and so I would expect it to be used in this case.

I intended my answer to be descriptive. EAs generally avoid making weak arguments (or at least I like to think we do).