A few questions about recent developments in EA
post by Peter Berggren (peter-berggren) · 2024-11-23T02:36:25.728Z · LW · GW · 12 comments
I have asked these same questions repeatedly across a wide range of channels and have never gotten satisfying answers to them, so I'm compiling them here so that they can be discussed by a wide range of people in an ongoing way.
- Why has EV made many moves in the direction of decentralizing EA, rather than in the direction of centralizing it? In my non-expert assessment, there are pros and cons to each option; what made EV conclude that the balance tipped toward decentralization?
- Why has Open Philanthropy decided not to invest in genetic engineering and reproductive technology, despite many notable figures (especially within the MIRI ecosystem) saying that this would be a good avenue to work in to improve the quality of AI safety research?
- Why, as an organization aiming to ensure the health of a community that is majority male and includes many people of color, does the CEA Community Health team consist of seven white women, no men, and no people of color?
- Has anyone considered possible perverse incentives that the aforementioned CEA Community Health team may experience, in that they may have incentives to exaggerate problems in the community to justify their own existence? If so, what makes CEA as a whole think that their continued existence is worth the cost?
- Why do very few EA organizations do large mainstream fundraising campaigns outside the EA community, when the vast majority of outside charities do?
- Why have so few people, both within EA and within popular discourse more broadly, drawn parallels between the "TESCREAL" conspiracy theory and antisemitic conspiracy theories?
- Why do university EA groups appear, at least upon initial examination, to focus so much on recruiting, to the exclusion of training students and connecting them with interested people?
- Why is there a pattern of EA organizations renaming themselves (e.g. Effective Altruism MIT renaming to Impact@MIT)? What were seen as the pros and cons, and why did these organizations decide that the pros outweighed the cons?
- When they did rename, why did they choose relatively "boring" names that potentially aren't as good for SEO as ones that more clearly reference Effective Altruism?
- Why aren't there more organizations within EA that are trying to be extremely hardcore and totalizing, to the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up? It seems like that is the kind of organization you would want to join, if you truly internalize the stakes here.
- When EAs talk about the "unilateralist's curse," why don't they qualify those claims with the fact that Arkhipov and Petrov were unilateralists who likely saved the world from nuclear war?
- Why hasn't AI safety as a field made an active effort to build large hubs outside the Bay, rather than the current state of affairs in which outside groups basically just function as recruiting channels to get people to move to the Bay?
I'm sorry if this is a bit disorganized, but I wanted to have them all in one place, as many of them seem related to each other.
Comments sorted by top scores.
comment by MichaelDickens · 2024-11-23T03:36:54.331Z · LW(p) · GW(p)
I will attempt to answer a few of these.
- Why has EV made many moves in the direction of decentralizing EA, rather than in the direction of centralizing it?
Power within EA is currently highly centralized. It seems very likely that the correct amount of centralization is less than the current amount.
- Why, as an organization aiming to ensure the health of a community that is majority male and includes many people of color, does the CEA Community Health team consist of seven white women, no men, and no people of color?
This sounds like a rhetorical question. The non-rhetorical answer is that women are much more likely than men to join a Community Health team, for approximately the same reason that most companies' HR teams are mostly women; and nobody has bothered to counteract this.
- Has anyone considered possible perverse incentives that the aforementioned CEA Community Health team may experience, in that they may have incentives to exaggerate problems in the community to justify their own existence?
I had never considered that but I don't think it's a strong incentive. It doesn't look like the Community Health team is doing this. If anything, I think they're incentivized to give themselves less work, not more.
- Why do very few EA organizations do large mainstream fundraising campaigns outside the EA community, when the vast majority of outside charities do?
That's not correct. Lots of EA orgs fundraise outside of the EA community.
- Why have so few people, both within EA and within popular discourse more broadly, drawn parallels between the "TESCREAL" conspiracy theory and antisemitic conspiracy theories?
Because guilt-by-association is a very weak form of argument. (And it's not even obvious to me that there are relevant parallels there.) And FWIW I don't respond to the sorts of people who use the word "TESCREAL" because I don't think they're worth taking seriously.
- Why do university EA groups appear, at least upon initial examination, to focus so much on recruiting, to the exclusion of training students and connecting them with interested people?
University groups do, in fact, do those other things, but they do them internally, so you don't notice. Recruiting is the only thing they do externally, so that's what you notice.
- Why aren't there more organizations within EA that are trying to be extremely hardcore and totalizing, to the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up?
Some orgs did that and it generally didn't go well (eg Leverage Research). I think most people believe that totalizing jobs are bad for mental health and create bad epistemics and it's not worth it.
- When EAs talk about the "unilateralist's curse," why don't they qualify those claims with the fact that Arkhipov and Petrov were unilateralists who likely saved the world from nuclear war?
Those are not examples of the unilateralist's curse. I don't want to explain it in this short comment but I would suggest re-reading some materials that explain the unilateralist's curse, e.g. the original paper.
- Why hasn't AI safety as a field made an active effort to build large hubs outside the Bay, rather than the current state of affairs in which outside groups basically just function as recruiting channels to get people to move to the Bay?
Because doing so would be a lot of work, which would take time away from doing other important things. I think people agree that having a second hub would be good, but not good enough to justify the effort.
↑ comment by Viliam · 2024-11-24T16:15:46.677Z · LW(p) · GW(p)
Some orgs did that and it generally didn't go well (eg Leverage Research). I think most people believe that totalizing jobs are bad for mental health and create bad epistemics and it's not worth it.
Working hard together with similarly minded people seems great. Never taking a break, and isolating yourself from the world, is not.
People working at startups usually get at least free weekends, and often have a partner at home who is not a member of the startup. If you never take a break, I suspect that you are optimizing for appearing to work hard, rather than for actually being productive.
↑ comment by Peter Berggren (peter-berggren) · 2024-11-25T05:54:14.401Z · LW(p) · GW(p)
I'm not proposing to never take breaks. I'm proposing something more along the lines of "find the precisely calibrated number of breaks that maximizes productivity and take exactly those."
↑ comment by Mateusz Bagiński (mateusz-baginski) · 2024-11-23T04:51:33.789Z · LW(p) · GW(p)
guilt-by-association
Not necessarily guilt-by-association, but maybe rather pointing out that the two arguments/conspiracy theories share a similar flawed structure, so if you discredit one, you should discredit the other.
Still, I'm also unsure how much structure they share, and even if they do, I don't think pointing it out would be discursively effective, because I don't think most people care that much about (that kind of) consistency (happy to be updated in the direction of most people caring about it).
↑ comment by Viliam · 2024-11-24T16:05:54.394Z · LW(p) · GW(p)
I have read the "TESCREAL" paper recently, and wrote some thoughts about it in an ACX Open Thread.
It also gave me conspiracy theory vibes, as it tried too hard to connect together various groups and people that are parts of the sinister-sounding "TESCREAL" (including a table of individuals and organizations involved in various parts), trace their roots back to eugenicists (but also Plato and Aristotle), and warn about their wealth and influence.
It reminded me how some people in my country love to compile lists of people working at various non-profits to prove how this is all linked to Soros and how they are all servants of American propaganda trying to destroy our independence. Because apparently you cannot volunteer in a shelter for abandoned puppies without being a part of some larger sinister plot.
From the Dark Arts perspective, I think it would be useful to sigh and say "oh, this conspiracy theory again?" to signal that you consider the authors low-status. But then focus on the object-level objections.
The actual objection, from my perspective, is that the thing connecting the parts of "TESCREAL" is simply "nerds who care, and think that technology is the answer". Some parts are more strongly related; if you believe in technological progress, then longtermism and transhumanism and extropianism and cosmism are more or less the same thing: the belief that in the future, humans will overcome their current limitations using technology. That should not really come as a huge surprise to anyone.
The connection with EA is cherry-picking; yes, there are some longtermist projects, but most of it is stuff like curing malaria. But of course, you can't say that, if your agenda is to call them Nazis eugenicists.
And the connection with eugenicists is mostly "you know who else worried about the future of humanity?" (I find it difficult to think of a more appropriate response than "fuck you!") But also, speaking about intelligence is a taboo, which means that it is a taboo to worry about artificial intelligences becoming potentially smarter than humans. -- Here, I think a potential solution would be to push the authors towards making some object-level statements. Not just "people who say X are like Hitler eugenicists", but state your opinion clearly, whether it is "X" or "not X"; make a falsifiable statement.
But I think it is not too uncharitable to summarize the paper as "a conspiracy theory claiming that people who donate money to African charities that cure malaria are secretly eugenicists", because that is an important part of the "TESCR-EA-L" construct.
↑ comment by Peter Berggren (peter-berggren) · 2024-11-23T04:10:17.796Z · LW(p) · GW(p)
Thanks for giving some answers here to these questions; it was really helpful to have them laid out like this.
1. In hindsight, I was probably talking more about moves towards decentralization of leadership, rather than decentralization of funding. I agree that greater decentralization of funding is a good thing, but it seems to me like, within the organizations funded by a given funder, decentralization of leadership is likely useless (if leadership decisions are still being made by informal networks between orgs rather than formal ones), or it may lead to a lack of clarity and direction.
3. I understand the dynamics that may cause the overrepresentation of women. However, that still doesn't completely explain why there is an overrepresentation of white women, even when compared to racial demographics within EA at large. Additionally, this also doesn't explain why the overrepresentation of women here isn't seen as a problem on CEA's part, if even just from an optics perspective.
4. Makes sense, but I'm still concerned that, say, if CEA had an anti-Stalinism team, they'd be reluctant to ever say "Stalinism isn't a problem in EA."
5. Again, this was a question that was badly worded on my end. I was referring more specifically to organizations within AI safety, more than EA at large. I know that AMF, GiveDirectly, The Humane League, etc. fundraise outside EA.
6. I was asking a descriptive question here, not a normative one. Guilt by association, even if weak, is a very commonly used form of argument, and so I would expect it to be used in this case.
7. That makes sense. That was one of my hypotheses (hence my phrase "at least upon initial examination"), and I guess in hindsight it's probably the best one.
10. Starting an AI capabilities company that does AI safety as a side project generally hasn't gone well, and yet people keep doing it. The fact that something hasn't gone well in the past doesn't seem to me to be a sufficient explanation for why people don't keep doing it, especially because it largely seems like Leverage failed for Leverage-specific reasons (i.e. too much engagement with woo). Additionally, your argument here seems to prove too much; the Manhattan Project was a large scientific project operating under an intense structure, and yet it was able to maintain good epistemics (i.e. not fixating too hard on designs that wouldn't work) under those conditions. Same with a lot of really intense start-ups.
11. They may not be examples of the unilateralist's curse in the original sense, but the term seems to have been expanded well past its original meaning, and they're examples of that expanded meaning.
12. It seems to me like this is work of a different type than technical alignment work, and could likely be accomplished by hiring different people than the people already working on technical alignment, so it's not directly trading off against that.
↑ comment by MichaelDickens · 2024-11-23T04:35:33.606Z · LW(p) · GW(p)
- I was asking a descriptive question here, not a normative one. Guilt by association, even if weak, is a very commonly used form of argument, and so I would expect it to be used in this case.
I intended my answer to be descriptive. EAs generally avoid making weak arguments (or at least I like to think we do).
comment by Cole Wyeth (Amyr) · 2024-11-23T04:23:33.801Z · LW(p) · GW(p)
I think a really hardcore rationality monastery would be awesome. Seems less useful on the EA side: EAs have to interact with Overton-window-occupying institutions and are probably better off not totalizing too much.
↑ comment by Peter Berggren (peter-berggren) · 2024-11-23T05:13:45.941Z · LW(p) · GW(p)
Very much agree on this one, as do many other people that I know of. However, the key counterargument as to why this may be better as an EA project than a rationality one is that "rationality" is vague on what you're applying it to, while "EA" is at least slightly more clear, and a community like this benefits from having clear goals. Nevertheless, it may make sense to market it as a "rationality" project and just have EA be part of the work it does.
So the question now becomes: how would one go about building it?
↑ comment by Cole Wyeth (Amyr) · 2024-11-23T16:25:52.486Z · LW(p) · GW(p)
My intuition is kind of the opposite: I think EA has a less coherent purpose. It's actually kind of a large tent for animal welfare, longtermism, and global poverty. I think some of the divergence in priorities between EAs is about impact assessment and fact-finding, and a lot of ink is spilled on this, but some is probably about values too. I think of EA as very outward-facing, coalitional, and ideally a little pragmatic, so I don't think it's a good basis for an organized totalizing worldview.
The study of human rationality is a more universal project. It makes sense to have a monastic class that (at least for some years of their life) sets aside politics and refines the craft, perhaps functioning as an impersonal interface when they go out into the world - almost like Bene Gesserit advisors (or a Confessor).
I have thought about building it. The physical building itself would be quite expensive, since the monastery would need to meet many psychological requirements: it would have to be both isolated and starkly beautiful, and also well-provisioned. So this part would be expensive, and it's an expense that EA organizations probably couldn't justify (that is, larger and more extravagant than buying a castle). Of course, most of the difficulty would be in creating the culture, but I think that building the monastery properly would go a long way (if you build it, they will come).
↑ comment by Peter Berggren (peter-berggren) · 2024-11-23T17:56:50.115Z · LW(p) · GW(p)
OK then, so how would one go about making an organization that is capable of funding and building this? Are there any interested donors yet?
↑ comment by Cole Wyeth (Amyr) · 2024-11-23T18:45:13.041Z · LW(p) · GW(p)
Hmmm, my long-term strategy is to build wealth and then do it myself, but I suppose that would require me to leave academia eventually :)
I wonder if MIRI would fund it? Doesn't seem likely.