Survey: Help Us Research Coordination Problems In The Rationalist/EA Community
post by namespace (ingres)
I think by this point we're all aware that our world is in an ongoing state of mundane quasi-apocalypse on several dimensions. Climate change, insect population decline, people dying around the world at a relentless pace, AI risk, global thermonuclear war. The question is not what's wrong with the world; you probably have a good understanding of that already. The question is what we're going to do about it. And for too many of us so far the answer is "well, nothing".
Some of the lowest-hanging fruit for changing that centers around coordination. I've been creating lists to map out the LessWrong Diaspora, what active projects exist in the community, tools to help people make their projects more credible, and more, because I believed it would help create common knowledge of what resources exist. Eventually I realized that trivial inconveniences eat most of the gains. It's nice that I can show newbie organizers a list of writeups and reports, but it becomes less impressive when you consider it would take an impractical amount of time for them to absorb that knowledge. It doesn't matter how many resources I put together if they can't answer people's questions without a ton of effort.
I'm prepared to put together a team that will run a service solving exactly this problem. A query about what the community has learned so far about meetups could be answered by a summary from someone who has done the research. But before doing that I want to be sure this is a thing people actually want. I'd also like to confirm that the necessary resources to coordinate exist. For this reason I invite you to take a survey looking at the combination of talent, project ideas, donor capital, popularity of cause areas and willingness to work within the wider rationalist and effective altruist communities.
You should especially take this survey if:
- You are looking for projects to support, either to help the projects succeed or as opportunities for you to level up
- You have ideas for projects related to EA/Rationality/X-Risk things for which you want collaborators or financial support
- You are available to help mentor, assist and guide other people with their projects
- You want other people to create projects you would be interested in giving money to, or want to discover existing projects you can support financially.
Survey Link: https://goo.gl/forms/sKUk3YTLvhV12CpS2
In any case I will post a write up of the results after the survey closes on May 1st.
comment by ozymandias ·
2018-04-08T01:28:26.193Z
Potentially confusing aspect of the survey: "animal rights" and "animal welfare" refer to different things within the animal activism space. In general, animal rights activism seeks to end all human exploitation of animals, while animal welfare activism seeks to make sure all animals have a high quality of life. Since PETA is a very prominent animal rights organization and not particularly associated with effective animal activism, it's unclear to me whether you intended to specify animal rights activism in particular or whether you intended to include all animal activism but accidentally made a misleading question. If the former, I'd suggest adding "animal welfare (HSUS)" as a category; if the latter, I'd suggest making a single "animal activism" category and using an effective-animal-activism-associated charity such as the Humane League as an example.
It's unclear to me whether "mundane societal issues" is intended to be solely political issues or to include apolitical issues such as mentoring talented students. Regardless, I'd suggest replacing the Cato Institute with a less partisan organization, to prevent non-libertarians from feeling excluded.
There is no place to clarify which of the potentially three projects we're talking about when we say what skills are needed.
↑ comment by namespace (ingres) ·
2018-04-08T03:24:30.223Z
Oh weird, Google Forms clearly glitched on me. I fixed both the Cato Institute and PETA issues on earlier versions of the survey. I will go through and fix them again, thanks for pointing them out.
EDIT: I think I see what happened, I fixed it on one part of the survey but not the other. Thanks again for the bug report.
As for your last point, that's entirely fair and I'll have to think of a way to handle it.
comment by Ben Pace (Benito) ·
2018-04-07T04:19:50.554Z
Filled it out - thanks :-)
- In a few places I said things like I wasn't interested in finding new projects or contributing more time. This isn't because I don't want to contribute time, just that I already contribute it to my main rationalist project.
- There was a page about how much I'd give to orgs at different levels of accomplishment, and I didn't give any numbers because the questions just weren't stratified by the variables that I care about - I'm happy to give money to people I trust to do new projects, but 'outside' metrics very rarely would persuade me.
- Similarly, in the various skills lists, I wanted to say that the main determinant of whether someone would succeed, or whether I'd want to fund them, was sort of vague, the same way that it's not simple to write down the necessary skills to be a very successful startup founder. (Well, you can write them down, but rarely in a sufficiently concrete way that you can determine whether or not someone has them (bar actually observing them make a billion dollars).)
comment by PeterMcCluskey ·
2018-04-08T02:14:55.276Z
The survey lists CFAR under "Raising The Sanity Waterline". I donate to CFAR because it's an AI risk charity. I don't donate to charities that aim at "Raising The Sanity Waterline".
↑ comment by namespace (ingres) ·
2018-04-08T03:23:18.925Z
CFAR changed its mission to AI risk within the last handful of years. Their original mission was raising the sanity waterline, which is why that line comes with a date marker.
↑ comment by PeterMcCluskey ·
2018-04-10T14:47:24.767Z
It was clear to me as a donor in 2013 that CFAR was primarily motivated by AI risk, but I got that impression mainly from talking to the people involved.
The 2013 date marker was only on one of the two references to CFAR when I took the survey. That was confusing.
comment by Chris_Leong ·
2018-04-07T10:15:23.890Z
I recently bought the domain ea.link. In the next few weeks, I'll have a service up that allows you to request shortlists, which should help reduce some of these trivial co-ordination issues. After all, people will be more likely to use these resources if they are easy to find.