Leverage points for a pause
post by Remmelt (remmelt-ellen) · 2024-08-28T09:21:17.593Z · LW · GW
What are ways to prevent development of dangerous AI?
When I started on this question two years ago, I expected that passing laws to ban dangerous architectures was the way to go. Then I learned about many new ways from other concerned communities. It was overwhelming.
Here’s a four-level framework I found helpful for maintaining an overview.
Four things need to be available to scale AI:
- Data (inputs received from the world)
- Work (functioning between domains)
- Uses (outputs expressed to the world)
- Hardware (computation of inputs into outputs)
At each level, AI gets scaled from extracted resources:
- Machines program searched-for data into code to predict more data.
- Workers design this machine to cheaply automate out more workers.
- Corporations sink profit into working machines for more profitable uses.
- Markets produce infrastructure for the production of more machines.
At each level, AI scaling is increasingly harming people:
- Disconnected persons: bots feed on our online data to spread fake posts between persons.
- Dehumanised workplaces: bots act as coworkers until robots sloppily automate our workplace.
- Destabilised society: robot products are hyped up and misused everywhere across society.
- Destroyed environment: robots build more machines that slurp energy and pollute nature.
Communities are stepping up now to stop harmful AI, and you can support their actions. For example, you can fund lawsuits by creatives and privacy advocates to protect their data rights. Or give media support to unions negotiating contracts so workers aren’t forced to use AI. Or advocate for auditors having the power to block unsafe AI products.
Over the long term, our communities can work towards comprehensive restrictions:
- Digital surveillance ban: no machine takes input data from us, or from any spaces we are in, without our free express consent.
- Multi-job robot ban: no machine learns more than one job function, and even one only with workers’ free express consent.
- Autonomous use ban: no machine outputs into where we live unless tested and steered by local humans in the loop.
- Excess hardware ban: no machine can process more than the data humans curate for scoped uses.
I noticed there are ways to prevent present harms and future risks at the same time. Communities with diverse worldviews can act in parallel – to restrict how much data, work, uses, and hardware are available for scaling AI. While hard, it's possible to pause AI indefinitely.