Some reasons to start a project to stop harmful AI
post by Remmelt (remmelt-ellen) · 2024-08-22T16:23:34.132Z
Hey, I’m a coordinator of AI Safety Camp. In the past, our program has supported many projects aimed at finding technical solutions, and my two colleagues still do!
Below is my view on safety, and what made me want to support pause/stop AI projects.
By safety, I mean constraining a system’s potential for harm.
To prevent harms, we must ensure that future AI systems are safe:
- Safety is context-dependent. Harms are the result of the system’s interactions with a more complex surrounding world.
- Safety must be comprehensive. Safety engineering is about protecting users and, from there, our society and ecosystem at large. If one cannot even design an AI product to not harm current users, there is no sound basis to believe that scaling up that design will not also deeply harm future generations.
Today, companies recklessly scale the designs and uses of AI models, with disregard for human safety. To hide this, companies utilise researchers [LW · GW] to give users the impression [LW · GW] of safety rather than actual safety. Safety researchers chase after the companies, trying new methods to somehow safely contain the growing combinatorial complexity (and outside connectivity) of models already in use. Billionaires sympathetic to the cause even fund [? · GW] companies so they can start up ‘safely’. This is a losing game [LW · GW].
Sincere researchers strove to solve lethal risks. Instead, they discovered deeper problems that they could at best solve partially, relying on fragile assumptions that other researchers then questioned. No one has found a method to control the machinery once it starts scaling itself (to keep it from converging on changes to our environment that serve its self-preservation but are deadly to us, and so on). Some researchers are in fact discovering sharp limits to controlling AI.
There is dignity in informing the public: ‘We did our best to solve safety for autonomous open-ended AI systems. Sadly, we discovered that this problem is intractable.’
Therefore, it is not on us to solve all the risks that accelerationist CEOs and their engineers introduce by releasing unscoped designs. It is on us to hold firm: ‘You shall not pass. No longer shall we allow your reckless behaviour to put our world in true peril.’
We are not alone [EA · GW]. Many communities want to prevent companies from harmfully scaling AI. Creatives and privacy advocates aim to stop AI freely feeding on personal data. Workers and whistleblowers aim to stop cheap but shoddy automation. Consumer organisations and auditors aim to stop unsafe but profitable uses. Environmentalists and local country folk aim to stop the polluting energy-slurping data centres.
Let’s ally to end the careless pursuit of ‘powerful AI’, a pursuit that comes at the cost of everything we hold dear in life.
Some reasons to start a project:
- AI companies are causing increasing harms.
- We are not on track [LW · GW] to solve safe control of ‘AGI’.
- There are fundamental limits to control [LW · GW]. Machinery that autonomously reprograms and reproduces its own internals could not be made to stay safe.
Email me if you are taking initiative and looking for collaborators. I’m limited on time, but I would gladly share my connections and offer my thoughts on your questions.
As an AISC organiser, I take on projects that are well-scoped around the aim of robustly helping to pause/stop AI, and that are considerate of other communities’ concerns about AI. For examples of projects, see last round.
I’m excited about:
- Research projects that inquire into or explicate an underexplored consideration for restricting AI. (Eg. an upcoming project to formalise a model of AGI uncontainability.)
- Engineering projects that demonstrate how model functionality is already unsafe, or that define a design scope for engineering a comprehensively safe model. (Eg. last round’s project on operational design domains.)
- Outreach projects involving continued inquiry and discussion with specific concerned stakeholders. (Eg. last round’s project on congressional messaging campaigns [EA · GW].)
- Direct action projects that bring to light the mass violation of a civil right, and offer legal means to restrict AI companies from going further. (Eg. last round’s policy paper.)
You can submit a project proposal here using the template. More info about the next round of AI Safety Camp will follow soon!