MIRI's April 2024 Newsletter
post by Harlan · 2024-04-12
This is a link post for https://intelligence.org/2024/04/12/april-2024-newsletter/
The MIRI Newsletter is back in action after a hiatus that began in July 2022. To recap some of the biggest MIRI developments since then:
- MIRI released its 2024 Mission and Strategy Update, announcing a major shift in focus: While we’re continuing to support various technical research programs at MIRI, our new top priority is broad public communication and policy change.
- In short, we’ve become increasingly pessimistic that humanity will be able to solve the alignment problem in time, while we’ve become more hopeful (relatively speaking) about the prospect of intergovernmental agreements to hit the brakes on frontier AI development for a very long time—long enough for the world to find some realistic path forward.
- Coinciding with this strategy change, Malo Bourgon transitioned from MIRI COO to CEO, and Nate Soares transitioned from CEO to President. We also made two new senior staff hires: Lisa Thiergart, who manages our research program; and Gretta Duleba, who manages our communications and media engagement.
- In keeping with our new strategy pivot, we’re growing our comms team: I (Harlan Stewart) recently joined the team, and will be spearheading the MIRI Newsletter and a number of other projects alongside Rob Bensinger. I’m a former math and programming instructor and a former researcher at AI Impacts, and I’m excited to contribute to MIRI’s new outreach efforts.
- The comms team is at the tail end of another hiring round, and we expect to scale up significantly over the coming year. Our Careers page and the MIRI Newsletter will announce when our next comms hiring round begins.
- We are launching a new research team to work on technical AI governance, and we’re currently accepting applications for researcher and technical writer roles. The team currently consists of Lisa Thiergart and Peter Barnett, and we’re looking to scale to 5–8 people by the end of the year.
- The team will focus on researching and designing technical aspects of regulation and policy that could lead to safe AI, with attention given to proposals that can continue to function as we move toward smarter-than-human AI. This work will include: investigating limitations in current proposals such as Responsible Scaling Policies (RSPs); responding to requests for comment from policy bodies such as NIST, the EU, and the UN; researching possible amendments to RSPs and alternative safety standards; and communicating with and consulting for policymakers.
- Now that the MIRI team is growing again, we also plan to do some fundraising this year, including potentially running an end-of-year fundraiser—our first fundraiser since 2019. We’ll have more updates about that later this year.
As part of our post-2022 strategy shift, we’ve been putting far more time into writing up our thoughts and making media appearances. In addition to announcing these in the MIRI Newsletter again going forward, we now have a Media page that will collect our latest writings and appearances in one place. Some highlights since our last newsletter in 2022:
- MIRI senior researcher Eliezer Yudkowsky kicked off our new wave of public outreach in early 2023 with a very candid TIME magazine op-ed and a follow-up TED Talk, both of which appear to have had a big impact. The TIME article was the most viewed page on the TIME website for a week, and prompted some concerned questioning at a White House press briefing.
- Eliezer and Nate have done a number of podcast appearances since then, attempting to share our concerns and policy recommendations with a variety of audiences. Of these, we think the best appearance on substance was Eliezer’s multi-hour conversation with Logan Bartlett.
- This past December, Malo was one of sixteen attendees invited by Leader Schumer and Senators Young, Rounds, and Heinrich to participate in a bipartisan forum on “Risk, Alignment, and Guarding Against Doomsday Scenarios.” Malo’s written statement is the best current write-up of MIRI’s policy recommendations. At the event, Malo found it heartening to see how far the discourse has come in a very short time—Leader Schumer opened the event by asking attendees for their probability that AI could lead to a doomsday scenario, using the term “p(doom)”.
- Nate has written several particularly important essays pertaining to AI risk; these are collected on our Media page.
- In a new report, MIRI researchers Peter Barnett and Jeremy Gillen argue that without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI.
- Other unusually good podcast appearances and write-ups include Eliezer’s appearances on Bankless, Bloomberg, and the David Pakman Show, Nate’s comments on an OpenAI strategy document, and Rob Bensinger’s take on ten relatively basic reasons to expect AGI ruin. See the Media page for a fuller list.
In next month’s newsletter, we’ll discuss some of the biggest developments in the world at large since the MIRI Newsletter went on pause, and we’ll return to form with a more detailed discussion of MIRI’s most recent activities and write-ups. You can subscribe to the MIRI Newsletter here or by following my account.
Thanks to Rob Bensinger for extensively helping with this edition of the newsletter.