Announcing Convergence Analysis: An Institute for AI Scenario & Governance Research
post by David_Kristoffersson, Deric Cheng · 2024-03-07
Cross-posted on the EA Forum.
Executive Summary
We’re excited to introduce Convergence Analysis - a research non-profit & think-tank with the mission of designing a safe and flourishing future for humanity in a world with transformative AI. In the past year, we’ve brought together an interdisciplinary team of 10 academics and professionals, spanning expertise in technical AI alignment, ethics, AI governance, hardware, computer science, philosophy, and mathematics. Together, we’re launching three initiatives focused on conducting Scenario Research, Governance Recommendations Research, and AI Awareness.
Our programs embody three key elements of our Theory of Change and reflect what we see as essential components of reducing AI risk: (1) understanding the problem, (2) describing concretely what people can do, and (3) disseminating information widely and precisely. In more detail:
- Scenario Research: Explore and define potential AI scenarios - the landscape of relevant pathways that the future of AI development might take.
- Governance Recommendations Research: Provide concrete, detailed analyses of specific AI governance proposals that lack comprehensive research.
- AI Awareness: Inform the general public and policymakers by disseminating important research via books, podcasts, and more.
In the next three months, you can expect to see the following outputs:
- Convergence’s Theory of Change: A report detailing an outcome-based, high-level strategic plan for mitigating existential risk from transformative AI.
- Research Agendas for our Scenario Research and Governance Recommendations initiatives.
- 2024 State of the AI Regulatory Landscape: A review summarizing governmental regulations for AI safety in 2024.
- Evaluating A US AI Chip Registration Policy: A research paper evaluating the global context, implementation, feasibility, and negative externalities of a potential U.S. AI chip registry.
- A series of articles on AI scenarios highlighting results from our ongoing research.
- All Thinks Considered: A podcast series on critical thinking and open dialogue, featuring interviews with AI thought leaders.
Learn more on our new website.
History
Convergence originally emerged as a research collaboration in existential risk strategy between David Kristoffersson and Justin Shovelain from 2017 to 2021, engaging a diverse group of collaborators. Throughout this period, they worked steadily on building a body of foundational research on reducing existential risk, publishing some findings on the EA Forum and LessWrong, and advising individuals and organizations such as Lionheart Ventures. From 2021 to 2023, we laid the foundation for a research institution and built a larger team.
We are now launching Convergence as a strong team of 10 researchers and professionals with a revamped research and impact vision. Timelines to advanced AI have shortened, and our society urgently needs clarity on the paths ahead and on the right courses of action to take.
Programs
Scenario Research
There are large uncertainties about the future of AI and its impacts on society. Potential scenarios range from flourishing post-work futures to existential catastrophes such as the total collapse of societal structures. Currently, there’s a serious dearth of research on these scenarios - their likelihood, causes, and societal outcomes.
Scenario planning is an analytical tool used by policymakers, strategists, and academics to explore and prepare for the landscape of possible outcomes in domains defined by uncertainty. Such research typically identifies the key parameters that drive particular scenarios and the outcomes those scenarios are likely to produce.
Our research program will conduct the following investigations:
- Clarifying Scenarios: We’ll identify pathways to existential hazards, review proposed AI scenarios, select key parameters across which AI scenarios vary, and generate additional scenarios that arise from combinations of those parameters.
- Evaluating Strategies: We’ll collect and review strategies for AI governance and other actions, evaluate them for their performance across scenarios, and recommend those that best mitigate existential risk across all plausible scenarios.
As an initial focus, we will analyze scenarios where AI scales to transformative AI in fewer than 15 years. We will publish our work as it develops and compile it into two major technical reports in 2024. You can find our first article here: Scenario planning for AI x-risk.
Governance Recommendations Research
Because of the rapid pace of recent AI development, there are few existing regulations around AI technologies, and there is wide consensus that more comprehensive and effective policies are needed. As a result, there have been dozens of public calls to action around implementing various policies concerning AI. But for many of these proposed policies, there is a lack of detailed analysis of key questions such as feasibility, effectiveness, and negative externalities.
We believe that the gap between high-level policy proposals and specific, concrete research is one of the major challenges of implementing effective AI governance. Currently, interested parties (such as policymakers or CEOs) must consult dozens of scattered resources over many weeks before arriving at an informed position. This often leaves them with highly divergent vocabularies, priorities, and areas of knowledge, producing confusion and difficulty aligning around the most effective AI safety proposals.
Our first two key efforts in AI governance recommendations will be:
- 2024 State of the AI Regulatory Landscape: We are producing a comprehensive review intended to serve as a broad primer for researchers, policymakers, and individuals new to AI governance.
- Governance Recommendation Reports: We'll launch a series of deep-dive analyses of specific upcoming regulatory proposals (e.g. AI chip registration policies or incident reporting databases). These reports will consider the geopolitical context, feasibility, effectiveness at reducing risk, and negative externalities of such proposals.
AI Awareness
The public is becoming increasingly aware of the potential risks of AI, but there’s limited understanding of how these dangers may manifest in the near future and of what society can do to prevent them. Notably, practical solutions for governing AI remain largely unknown to the broader public. We are working to help bridge this gap by informing the public and policymakers about realistic AI scenarios and governance solutions.
Three projects we’re currently working on:
- The Oxford Handbook of AI Governance: A volume compiling the views of over 20 AI experts on the theoretical, practical, and policy-driven aspects of governing artificial intelligence. The handbook is currently in publication, and was produced and edited by Justin Bullock, a senior researcher at Convergence.
- Building A God: An upcoming book exploring the consequences of humanity's progress toward developing an agentic, superintelligent being via machine learning. The book is being written by Christopher DiCarlo, a senior researcher at Convergence.
- All Thinks Considered: A podcast hosted by Christopher DiCarlo, inviting global thought leaders, politicians, and celebrity guests to explore pressing issues through critical thinking and open dialogue. New episodes are published biweekly.
Learn more and follow our work
Keep up with our 2024 roadmap and learn more about Convergence here:
- Visit our website
- Subscribe to research updates from Convergence
- Follow our new account on X / Twitter
- Browse existing publications from Convergence
We welcome your inquiries - if you’d like to chat with us, please reach out here.