Announcing a contest: EA Criticism and Red Teaming

post by fin · 2022-06-02T20:27:18.860Z · LW · GW · 1 comment

This is a link post for https://forum.effectivealtruism.org/posts/8hvmvrgcxJJ2pYR4X/announcing-a-contest-ea-criticism-and-red-teaming

Contents

  Introduction
      tl;dr: We're running a writing contest for critically engaging with theory or work in effective altruism (EA). 
  How to apply
      The deadline is September 1, 2022.
  Prizes
  Criteria
  What to submit
    Formats
  Additional resources
  The judging panel
  Rationale
    What this is not about
  Contact us
  Q&A
    Submissions and how they’ll be judged
    About the contest
    Other
1 comment

Cross-posted from the EA Forum [EA · GW]. I would add that we are open to and positively interested in critiques of any aspect of effective altruism from users of this Forum and the rationalist community. We plan to respond to comments on the EA Forum, but may check this post less often.


Introduction

tl;dr: We're running a writing contest for critically engaging with theory or work in effective altruism (EA). 

Submissions can take a range of formats [EA(p) · GW(p)] (from fact-checking to philosophical critiques or major project evaluations) and can focus on a range of subjects [EA · GW] (from assessing empirical or normative claims to evaluating organizations and practices).

We plan on distributing $100,000, and we may end up awarding more than this amount if we get many excellent submissions. 

The deadline is September 1, 2022. You can find the submission instructions below [EA · GW]. You don't need any formal or significant affiliation with effective altruism to enter the contest.

We are: Lizka Vaintrob [EA · GW] (the Content Specialist [EA · GW] at the Centre for Effective Altruism), Fin Moorhouse [EA · GW] (researcher at the Future of Humanity Institute), and Joshua Teperowski Monrad [EA · GW] (biosecurity program associate at Effective Giving). The contest is funded via the FTX Future Fund [? · GW] Regranting Program, with organizational support from the Centre for Effective Altruism [? · GW].

We ‘pre-announced’ this contest in March [EA · GW]. 

The rest of this post gives more details, outlines the kinds of critical work we think are especially valuable, and explains our rationale. We’re also sharing a companion resource for criticisms and red teams [EA · GW]. 

How to apply

Submit by posting on the EA Forum[1] [EA(p) · GW(p)] and tagging the post[2] [EA(p) · GW(p)] with the contest’s tag [? · GW], or by filling out this form.

If you post on the Forum, you don't need to do anything except tag your post[2] [EA(p) · GW(p)] with the “Criticism and Red Teaming Contest [? · GW]” topic, and we’ll consider your post for the contest. If you’d prefer to post your writing outside the Forum, you can submit it via this form — we’d still encourage you to cross-post [EA · GW] it to the Forum (although please be mindful of copyright issues). 

We also encourage you to refer other people’s work to the contest if you think more people should know about it. To refer someone else’s work, please submit it via this form. If the work you refer wins, you may receive a referral bounty (explained below).

The deadline is September 1, 2022.

Please contact us [EA · GW] with any questions. You can also comment here.

Prizes

We have $100,000 currently set aside for prizes, which we plan on fully distributing.

Prizes will fall under three main tiers:

In addition, we may award a prize of $100,000 for outstanding work that looks likely to cause a very significant course adjustment in effective altruism.

Therefore, we’re prepared to award (perhaps significantly) more than $100,000 if we’re impressed by the quality and volume of submissions. 

We’re also offering a bounty for referring winning submissions: if you’re the first person to refer [EA · GW] a winning submission, and the author never entered the contest themselves, you’ll get a referral bounty of 5% of the award.

We will also consider helping you find proactive funding for your work if you require the security of guaranteed financial support to enable a large project (though if you win a prize, we may deduct the proactive funding from your prize money). See the FAQ [EA · GW] for more details.

Submissions must be posted or submitted no later than 11:59 pm BST on September 1st, and we’ll announce winners by the end of September.

Criteria

Overall, we want to reward critical work according to a question like: “to what extent did this cause me to change my mind about something important?” — where “change my mind” can mean “change my best guess about whether some claim is true”, or just “become significantly more or less confident in this important thing.”

Below are some virtues of the kind of work we expect to be most valuable. We’ll look out for these features in the judging process, but we’re aware it can be difficult or impossible to live up to all of them:

We don't expect every winning piece to do well on every one of these criteria, but we do think each of them can help you most effectively change people’s minds with your work.

We also want to reward clarity of writing, avoiding ‘punching down’, awareness of context, and a scout mindset [EA · GW]. We don’t want to encourage personal attacks, or diatribes that are likely to produce much more heat than light. And we hope that subject-matter experts who don’t typically associate with EA find out about this, and share insights we haven’t yet heard.

What to submit

We’re looking for critical work that you think is important or useful for EA. That’s a broad remit, so we’ve suggested some topics and kinds of critiques below.

If you’re looking for more detail, we’ve collaborated on a separate post [EA · GW] that collects resources for red teaming and criticisms, including guides to different kinds of criticisms, and examples. If you’re interested in participating in this contest, we highly recommend that you take a look. (We’d also love help updating and improving it.)

It’s helpful — but not required — to also suggest 1–3 people you think most need to heed your critique. For many topics, this nomination is better done privately (contact us [EA · GW], or submit through the form). We’ll send it their way where possible. (If you don’t know who needs to see it most, we’ll work it out.)

Formats

You might consider framing your submission as one of the following:

Again, for more detail on topic ideas, kinds of critiques, and examples: visit our longer post with resources for critiques and red teams [EA · GW]. 

We don’t want to give an analogous list for topic ideas, because any list is necessarily going to leave things out. However, you might take a look at Joshua’s post outlining four categories of effective altruism critiques [EA · GW]: normative and moral questions, empirical questions, institutions & organizations, and social norms & practices.

Browsing this Forum (especially curated lists like the Decade Review prizewinners [EA · GW], the EA Wiki [? · GW], and the EA Handbook [? · GW]) could be a good way to get ideas if you are new to effective altruism.

If you’re unsure whether something you plan on writing could count for this contest, feel free to ask us.

Additional resources

We’ve compiled a companion post [EA · GW], in which we’ve collected some resources for criticisms and red teaming. 

We’re also tentatively planning on running (or helping with) several workshops on criticisms and red teaming, which will be open to anyone who is interested, including people who are new to effective altruism. We hope that the first two will be in June. If you’d like to hear about dates when they’re decided, you can fill out this form.

The judging panel

The judging panel is:

No one on the judging panel will be able to “veto” winners, and every submission will be read by at least two people. If submissions are technical and outside the panelists’ fields of expertise, we will consult domain experts.

If we get many submissions or if we find that the current panel doesn’t have enough bandwidth, we may invite more people to the panel. 

Rationale

Why do we think this matters? In short, we think there are some reasons to expect good criticism to be undersupplied relative to its real value. And that matters: as EA grows, it’s going to become increasingly important that we scrutinize the ideas and assumptions behind key decisions — and that we welcome outside experts to do the same.

Encouraging criticism is also a way to encourage a culture of independent thinking, and openness to criticism and scrutiny within the EA community. Part of what made and continues to make EA so special is its epistemic culture: a willingness to question and be questioned, and freedom to take contrarian or unusual ideas seriously. As EA continues to grow, one failure mode we anticipate is that this culture may give way to a culture of over-deference.

We also really care about raising the average quality of criticism. Perhaps you can recall some criticisms of effective altruism that you think were made in bad faith, or that otherwise misrepresented their target in an unhelpful and frustrating way. If we don’t make an effort to encourage more careful, well-informed critical work, then we’ll have less standing to complain about the harms that poor-quality work can cause, such as misinforming people who are learning about effective altruism. Crucially, we’d also miss out on the real benefits of higher-quality, good-faith criticism.

In his opening talk for EA Global this year, Will MacAskill considered how a major risk to the success of effective altruism is the risk of degrading its quality of thinking: “if you look at other social movements, you get this club where there are certain beliefs that everyone holds, and it becomes an indicator of in-group mentality; and that can get strengthened if it’s the case that if you want to get funding and achieve very big things you have to believe certain things — I think that would be very bad indeed. Looking at other social movements should make us worried about that as a failure mode for us as well.”

It’s also possible that some of the most useful critical work goes relatively unrewarded because it is less attention-grabbing or narrower in its conclusions. Conducting really high-quality criticism is sometimes thankless work: as the blogger Dynomight points out, there’s rarely much glory in fact-checking someone else’s work. We want to set up incentives to attract this kind of work, as well as more broadly attention-grabbing work.

Ultimately, critiques have an impact by bringing about actual changes. The ultimate goal of this contest is to facilitate those positive changes, not just to spot what we’re currently getting wrong.

In sum, we think and hope: 

  1. Criticism will help us form truer beliefs, and that will help people with the project of doing good effectively. People and institutions in effective altruism might be wrong in significant ways — we want to catch that and correct our course.
    1. This is especially important in the non-profit context, since it lacks many of the signals present in the for-profit world (like prices). For-profit companies have a strong signal of success: if they fail to make a profit, they eventually fail. One insight of effective altruism is that there are weaker pressures for nonprofits to be effective — to achieve the goals that really matter — because their ability to fundraise isn’t necessarily tied to their effectiveness. Charity evaluators like GiveWell do an excellent job at evaluating nonprofits, but we should also try to be comparably rigorous and impartial in assessing EA organizations and projects, including in areas where outputs are harder to measure. Where natural feedback loops don’t exist, it’s our responsibility to try to create them!
    2. It’s also especially important for effective altruism, given that so many of the ideas are relatively new and untested. We think this is especially true of longtermist work.
  2. Stress-testing important ideas is crucial even when the result is that the ideas are confirmed; this allows us to rely more freely on the ideas.
  3. We want to sustain a culture of intellectual openness, open disagreement, and critical thinking. We hope that this contest will contribute to reinforcing that culture.
  4. Highlighting especially good examples of criticism may create more templates for future critical work, and may make the broader community more appreciative of critical work.
  5. We also think that people in the effective altruism network tend to hear more from other people in the network, and hope that this contest might bring in outside experts and voices. (You can see more discussion of this phenomenon in "The motivated reasoning critique of effective altruism [EA · GW]".)
  6. We want to break patterns of pluralistic ignorance where people underrate how sceptical or uncertain others (including ‘experts’) are about some claim.

Finally, we want to frame this contest as one step towards generating high-quality criticism, not the final one. For instance, we’re interested in following up with winning submissions, such as by meeting with winning entrants to discuss ways to translate their work into concrete changes and to communicate it to the relevant stakeholders.

What this is not about

Note that critical work is not automatically valuable just by virtue of being critical: it can be attention-grabbing in a negative way. It can be stressful and time-consuming to engage with bad-faith or ill-considered criticism. We have a responsibility to be especially careful here.

This contest isn’t about making EA look open-minded or self-scrutinizing in a performative way: we want to award work that actually strikes us as useful, even if it isn’t likely to be especially popular or legible for a general audience.

We’re not going to privilege arguments for more caution about projects over arguments for urgency or haste. Scrutinizing projects in their early stages is a good way to avoid errors of commission; but errors of omission [EA · GW] (not going ahead with an ambitious project because of an unjustified amount of risk aversion, or oversensitivity to downsides over upsides) can be just as bad.

Similarly, we don’t want this initiative to only result in writing that one-directionally worries about EA ideas or projects being too ‘weird’ or too different from some consensus or intuitions. We’re just as interested to hear why some aspect of EA is being insufficiently weird — perhaps not taking certain ideas seriously enough. Relatedly, this isn’t just about being more epistemically modest: we are likely being both overconfident in some spots, and overly modest in others. What matters is being well calibrated in our beliefs!

We would also caution against criticizing the actions or questioning the motivations of a specific individual, especially without first asking them. We urge you to focus on the ideas or ‘artefacts’ individuals produce, without speculating about personal motivations or character — this is rarely helpful.

Contact us

Email criticism-contest@effectivealtruism.com, message [EA · GW] any of the authors of this post via the Forum, or leave a comment on this post. 

Q&A

Submissions and how they’ll be judged

About the contest

Other

We're extremely grateful to everyone who helped us kick this off, including the many people who gave feedback following our pre-announcement [EA · GW] of the contest.

1 comment

Comments sorted by top scores.

comment by trevor (TrevorWiesinger) · 2022-06-02T23:26:48.525Z · LW(p) · GW(p)

If you’d prefer to post your writing outside the Forum, you can submit it via this form — we’d still encourage you to cross-post [EA · GW] it to the Forum (although please be mindful of copyright issues).

  1. I want to make sure that everyone is aware of this option. Nobody should miss this sentence.
  2. This should be standard. Even with the form submission, I'd worry about it getting intercepted by hackers. AI, pandemics, and even global poverty have a significant geopolitical element, and even without that element the vested interests would still be very large.
  3. Public submissions can also be very good, like in the AGI rhetoric contest from last month. For example, I'd like EY to stop using the specific phrase "dying with dignity", because it's heavily associated with assisted suicide activism, and many people will remember that those words were also used by Jim Jones, the leader of an SF-area communist cult, in the final speech that manipulated 900 people into killing themselves. If EY unwittingly keeps using this super cursed phrase between now and the end of this contest in a few months, that would be bad, so I'd like him to stop now instead of waiting until after the entries are evaluated. That's not even an entry in the contest, I'm getting zero net money from this, please just stop.