We might need to rethink the Hard Reset, aka the AI Pause.

post by Jonas Kgomo (jonas-kgomo) · 2023-03-30T21:38:44.564Z


 

Last month Viktoria joined me for a talk at Cohere for AI. It was perfect timing, as she told us about AGI Safety. A few days ago, Viktoria, along with Elon Musk and others, asked all AI labs to pause the training of superintelligent AI.

 

The Future of Life Institute is one of the few organisations thinking about the existential risks of such technologies. The open letter stated:

"We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

 

I agree with Viktoria on this: for the future of humanity, we need to think about these things, so sign the petition if you are working on AI and feel the need to. However, there is another point to consider when it comes to pausing technological development.

 

A principle of differential technology development may inform government research funding priorities and technology regulation, as well as philanthropic research and development funders and corporate social responsibility measures. A differential approach is a constant, recurring pause taken at each stage of development.

I created the journal on Progress Studies to address these kinds of technological developments.

 

Jonas Sandbrink and Anders Sandberg wrote a paper titled "Differential Technology Development: A Responsible Innovation Principle for Navigating Technology Risks". I think this approach is more calculated and should be employed by governments and research labs developing superintelligent technology, such as the UK Government's Taskforce led by Matt Clifford, which focuses on safety, security, and robustness.

I am advocating that we develop a general form of differential technological development: quantum safety, nuclear safety, and AI safety should all be given the same weight, without bias.

 

The hyperscalers and accelerationists are spending tens of billions of dollars on GPUs to run billion-dollar training runs, throwing more and more compute at the problem without retreat. ChatGPT and then GPT-4 are now so good that they have vindicated the predictions of scaling in a way that is undeniable. It is sensible to intervene here the same way we do with biosecurity, nuclear research, and similar fields.

 

You do not have to be a longtermist to care about AGI Safety.

If you’d like to contribute to alignment research, here is a list of research agendas in this space and a good course to get up to speed on the fundamentals of AGI alignment:

 

progress studies: https://progress-studies.org

resources: https://vkrakovna.wordpress.com/ai-safety-resources/#research-agendas
