An International Manhattan Project for Artificial Intelligence

post by Glenn Clayton (glenn-clayton) · 2023-04-27T17:34:13.242Z · LW · GW · 2 comments

Contents

  The Manhattan Project - A Brief History
  A Call for an International, Interdisciplinary Project for AI
  How Might This Actually Work?
  Prospects for a Brighter Future

The Manhattan Project - A Brief History

Before I delve into the discussion on AI, I would like to take a moment to reflect on a historical event that showcases the power of interdisciplinary collaboration: the Manhattan Project. During the height of World War II, thousands of scientists, engineers, and policymakers from diverse backgrounds came together under a cloak of secrecy, driven by a singular mission—to develop the world's first atomic bomb.

While the consequences of this invention have been the subject of much justifiable debate, the Manhattan Project does serve as a powerful reminder of the magnitude of what humanity can achieve when we unite our collective talents and resources to tackle complex challenges. 

As we navigate the ever-changing landscape of human civilization, we find ourselves amid an AI revolution that is reshaping our world. With systems like ChatGPT gaining extraordinary capabilities, we must contemplate how to balance artificial intelligence's potential benefits and risks. 

Once again, the comparison to the Manhattan Project is apropos, as some of our brightest minds have repeatedly cautioned that AI could pose a far greater threat to humanity than nuclear weapons. From Stephen Hawking to Nick Bostrom to the recent open letter from the Future of Life Institute, many highly intelligent people have for decades warned us, as a global society, to proceed with caution.

 

A Call for an International, Interdisciplinary Project for AI

There has been a lot of debate about how, or even whether, to proceed with AI development, but little in the way of actionable recommendations that policy and business leaders can feasibly support and implement. In my judgment, advocating that AI simply not be pursued [LW · GW] is not a realistically viable plan. I agree that "shutting it all down" would be preferable to taking an existential risk; I just don't think it's realistic that such a plan will be implemented. So I worry that advocating for it as the only option forgoes an opportunity to call for less optimal but more realistic options that could produce positive outcomes.

So, despite my incredible lack of qualifications, I’d like to propose an alternative plan to address the AI challenges we collectively face. The only thing I’m certain of regarding this proposal is that it is insufficient, naive, and full of holes. But maybe it will inspire more capable minds to propose a better plan that we can realistically encourage global political and business leaders to support. 

 

We require an international, interdisciplinary collaboration focused on developing the necessary scientific theories and regulatory frameworks to govern the development of beneficial AI. 

 

Such a collaboration would have the scale and urgency of the Manhattan Project but with a sense of international collaboration and a humanitarian focus more reminiscent of the Human Genome Project. This global endeavor would assemble the best and brightest minds in AI, cognitive science, physics, and social sciences, as well as policymakers and representatives from around the world. Additionally, leading world governments would supply the resources and budgets commensurate to the existential importance of the mission. By pooling our collective wisdom and resources, we could endeavor to create a foundation for developing and integrating safe AI into our societies and economies. 

This ambitious collaboration could potentially focus on:

  1. Developing a unified theory of general intelligence: By combining insights from neuroscience, physics, and AI research, we could develop a unified scientific theory of general intelligence that explains how complex systems (biologically derived or engineered) produce goal-directed behavior. It would need both to explain the mechanisms of our own brains and to illuminate the underlying laws that govern any intelligent, goal-directed system. In the same manner that relativity underpins nuclear science and gene theory underpins biotechnology, such an accomplishment would perhaps be the most powerful contribution toward engineering safe and ethical intelligent systems.
  2. Addressing relevant philosophical questions: With a strong scientific understanding, we can collectively delve into the philosophical questions about life, consciousness, and sentience with renewed vigor. Developing insights here will help us navigate the ethical quandaries of AI technologies, helping to ensure their responsible integration into our society.
  3. Crafting international policy and regulatory recommendations: Developing globally coordinated policy recommendations for world governments could harmonize AI development across nations and ensure that AI's benefits and potential risks are fairly distributed among Earth's inhabitants.
  4. Encouraging public participation: Engaging the international community and people from diverse backgrounds in discussions about AI development will ensure that their concerns and values are taken into account, fostering a sense of shared responsibility and trust in AI technologies.

This collaborative initiative would need to operate with a sense of urgency and shared purpose, recognizing the potential global consequences of uncontrolled AI development. By uniting the brightest minds and key stakeholders from around the world, we could create an unprecedented global effort to address the complex challenges posed by AI, ensuring a safer and more ethical future for all.

 

How Might This Actually Work?

Given the current geopolitical landscape, I understand that many might consider such a global collaboration improbable. However, there are likely ways to pursue this initiative that could overcome current challenges and result in positive outcomes aligned with national interests while protecting humanity's overall flourishing.

For a collaboration such as this to be effective, it would require significant forethought in planning. This would include establishing clear objectives and milestones, addressing potential conflicts of interest, encouraging cultural diversity and inclusion, developing policies for equitable knowledge and technology transfers, and crafting a long-term plan for global AI governance that is realistic and enforceable.

Again, my proposal is naive and incomplete. Bright minds will immediately see many flaws in the feasibility of such a plan (as I myself can see). I only aim to illustrate that such a plan could be conceived and to encourage more capable and experienced minds to propose an actionable alternative. 
 

Prospects for a Brighter Future

The cynicism and negativity of our public discourse have never been higher, and the global political and socio-economic order seems to be on the brink of dark times. I believe that educated individuals and citizens of an increasingly interdependent global civilization have a responsibility to engage in these critical discussions. That said, if we wish to contribute our voices to the debate, do we not also have a responsibility to advocate for a constructive, collaborative, and realistic approach to resolving it?

As we embark on this AI journey, let us harness the power of interdisciplinary collaboration to ensure the responsible development and implementation of artificial intelligence. By doing so, we can create AI technologies that align with human values and contribute to a brighter future for all.

Then again, perhaps all of this is a silly pipe dream. If that is the case, I would still prefer to be one of those dreamers who advocated that we at least attempt to make AI the next giant leap for humankind rather than the last petty squabble.

2 comments


comment by ChristianKl · 2023-04-29T01:43:36.120Z · LW(p) · GW(p)

Again, my proposal is naive and incomplete. Bright minds will immediately see many flaws in the feasibility of such a plan (as I myself can see). I only aim to illustrate that such a plan could be conceived and to encourage more capable and experienced minds to propose an actionable alternative. 

LessWrong isn't really a place for naive thought. If you see flaws, don't leave their analysis unsaid. 

Replies from: glenn-clayton
comment by Glenn Clayton (glenn-clayton) · 2023-05-04T12:54:31.425Z · LW(p) · GW(p)

Good point. Perhaps a better way to say this would be: I'm sure that my thoughts could be better and that there are gaps in my proposed plan based on my lack of expertise in relevant areas such as geopolitics, etc. 
 

I think we'd likely be a stronger society if more people openly confessed their intellectual shortcomings and asked more experienced experts to improve on their ideas. ;)