Avoiding AI Races Through Self-Regulation

post by G Gordon Worley III (gworley) · 2018-03-12T20:53:45.465Z · score: 6 (3 votes) · LW · GW · 2 comments

This is a link post for https://mapandterritory.org/avoiding-ai-races-through-self-regulation-1b815fca6b06

Summary:

The first group to build artificial general intelligence or AGI stands to gain a significant strategic and market advantage over competitors, so companies, universities, militaries, and other actors have strong incentives to race to build AGI first. An AGI race would be dangerous, though, because it would prioritize capabilities over safety and increase the risk of existential catastrophe. A self-regulatory organization (SRO) for AGI may be able to change incentives to favor safety over capabilities and encourage cooperation rather than racing.


comment by Dagon · 2018-03-12T21:43:28.712Z · score: 5 (2 votes) · LW · GW

Often, regulation ends up reshaping the race to favor those who most influence the regulators, rather than actually slowing or removing the race.

Can you describe some mechanisms that a (voluntary, I presume) SRO could use to have any effect on safety or cooperation?

comment by G Gordon Worley III (gworley) · 2018-03-13T19:07:45.981Z · score: 5 (1 votes) · LW · GW

Quoting myself from the link:

  • inspections to demonstrate to the other company that they are cooperating
  • contractual financial penalties that would offset any gains from defecting
  • social sanctions via public outreach that would reduce gains from defecting
  • sharing discoveries between companies
  • required shutdown of any uncooperatively built AGI
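One way to see how the second mechanism (contractual financial penalties) could change incentives is as a modification to a prisoner's-dilemma payoff matrix: if the SRO-enforced penalty for defecting exceeds the gain from racing, cooperation becomes each lab's best response. This is a toy sketch with entirely hypothetical payoff numbers, not a model from the post:

```python
# Toy payoff-matrix sketch (all numbers hypothetical): two labs each
# choose to cooperate on safety or defect (race to build AGI). An
# SRO-enforced contractual penalty for defecting can flip the
# dominant strategy from defect to cooperate.

def best_response(payoffs, opponent_action):
    """Return the action maximizing a lab's payoff against a fixed opponent."""
    return max(("cooperate", "defect"), key=lambda a: payoffs[(a, opponent_action)])

def race_payoffs(penalty=0):
    # (my_action, their_action) -> my payoff. Defecting while the other
    # cooperates gains 5; mutual cooperation yields 3; mutual racing 1.
    return {
        ("cooperate", "cooperate"): 3,
        ("cooperate", "defect"): 0,
        ("defect", "cooperate"): 5 - penalty,
        ("defect", "defect"): 1 - penalty,
    }

# Without a penalty, defecting dominates; with a penalty of 3 it no longer does.
assert best_response(race_payoffs(0), "cooperate") == "defect"
assert best_response(race_payoffs(3), "cooperate") == "cooperate"
assert best_response(race_payoffs(3), "defect") == "cooperate"
```

The penalty only needs to exceed the marginal gain from defecting (here, 5 − 3 = 2) to make cooperation dominant, which is the sense in which it "offsets any gains from defecting."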