MIRI's July 2024 newsletter
post by Harlan · 2024-07-15T21:28:17.343Z · 2 comments
This is a link post for https://intelligence.org/2024/07/10/july-2024-newsletter/
MIRI updates
- Rob Bensinger suggests that AI risk discourse could be improved by adopting a new set of labels for different perspectives on existential risk from AI. One drawback of “AI doomer” (a label sometimes used in online discussions) is that it does not have a consistent meaning.
- AI researcher John Wentworth guesses that a central difference between his and Eliezer Yudkowsky’s views might be that Eliezer expects AI not to use abstractions similar to those used by humans. Eliezer clarifies that he expects similar abstractions for predictive parts of the natural world, but larger differences for utility-laden concepts like human values and reflectivity-laden concepts like corrigibility. He adds that he is still concerned about the difficulty of reliably steering advanced ML systems even if they do use similar abstractions internally.
- Eliezer joins Joscha Bach, Liv Boeree, and Scott Aaronson for a panel discussion (partly paywalled) on the nature and risks of advanced AI.
News and links
- In an excellent new video essay, Rob Miles discusses major developments in AI capabilities, policy, and discourse over the last year, how those developments have shaped the strategic situation, and what individuals can do to help.
- A report from Convergence Analysis gives a high-level overview of the current state of AI regulation. The report summarizes legislative text and gives short analyses for a range of topics.
- A group of current and former employees of OpenAI and Google DeepMind released an open letter calling on frontier AI companies to adopt policies that allow whistleblowers to share risk-related information. This comes in the wake of news that former OpenAI employees were subject to an extreme non-disclosure policy that the company has since apologized for and seems to have removed.
You can subscribe to the MIRI Newsletter here.
2 comments
comment by Vladimir_Nesov · 2024-07-15T21:37:04.606Z
> Eliezer also speaks with Bloomberg’s Nate Lanxon and Jackie Davalos, making the case for international coordination to shut down frontier AI development.
This happened in July 2023, a year ago.