Self-regulation of safety in AI research
post by Gordon Seidoh Worley (gworley) · 2018-02-25T23:17:44.720Z · 6 comments
In many industries, but especially those with a potentially adversarial relationship to society like advertising and arms, self-regulatory organizations (SROs) exist to provide voluntary regulation of actors in those industries and to assure society of their good intentions. For example:
- TrustArc (formerly TRUSTe) has long provided voluntary certification services to web companies, helping them assure the public that they follow basic practices that allow consumers to protect their privacy. They have been successful enough that, outside the EU, governments have largely refrained from regulating online businesses.
- The US Green Building Council offers multiple levels of LEED certification to provide both targets and proof to the public that real estate developers are protecting environmental commons.
- The American Medical Association, The American Bar Association, and the National Association of Realtors are SROs that function as de facto official regulators of their industries despite being non-governmental organizations (NGOs).
- Financial regulation was formerly, and in some cases still is, carried out via SROs, although governments have taken a progressively stronger hand in the industry over the last 100 years.
AI, especially AGI, is an area with many incentives to violate societal preferences and damage the commons, and it is currently unregulated except where it comes into contact with existing regulations in its areas of application. Consequently, there may be reason to form an AGI SRO. Some reasons in favor:
- An SRO could offer certification of safety and alignment efforts being taken by AGI researchers.
- An SRO may be well positioned to reduce the risk of an AI race by coordinating efforts that would otherwise result in competition.
- An SRO could encourage AI safety in industry and academia while being politically neutral (not tied to a single university or company).
- An SRO may allow AI safety experts to manage the industry rather than letting it fall to other actors who may be less qualified or whose concerns place less weight on preventing existential risks.
- An SRO could act as a "clearinghouse" for AI safety research funding.
- An SRO could give greater legitimacy to prioritizing AI safety efforts among capabilities researchers.
Some reasons against:
- An SRO might form a de facto "guild" and keep out qualified researchers.
- An SRO could create the appearance that more is being done than really is.
- Relatedly, an SRO could promote the wrong incentives and actually result in less safe AI.
- An SRO might divert funding and effort from technical research in AI safety.
I'm just beginning to consider the idea of assembling an SRO for AI safety, and I'm especially interested in discussing the idea further to see if it's worth pursuing. Feedback is very welcome!
6 comments
Comments sorted by top scores.
comment by Qiaochu_Yuan · 2018-02-26T03:14:12.088Z
Would something like this have substantially impacted nuclear weapons research in any way?
comment by Gordon Seidoh Worley (gworley) · 2018-02-26T18:48:54.274Z
To the best of my knowledge, no, because atomic weapons were created under circumstances an SRO could do little about, namely total war. That being said, an SRO might be able to spread ideas prior to total war that would increase the chance of researchers intentionally avoiding success, the way (as I recall, though I may be totally wrong about this) some German scientists working on nuclear weapons appear to have intentionally sabotaged their own efforts.
comment by John_Maxwell (John_Maxwell_IV) · 2018-02-26T10:34:19.530Z
This sounds a lot like the Partnership on AI. I wonder what they could learn from the history of SROs.
comment by Gordon Seidoh Worley (gworley) · 2018-02-26T18:51:57.719Z
Is the Partnership on AI actually doing anything, though? As far as I can tell right now it's just a vanity group designed to generate positive press for these companies rather than meaningfully self-regulate their actions, though maybe I'm mistaken about this.
comment by John_Maxwell (John_Maxwell_IV) · 2018-02-27T04:55:35.780Z
This seems rather uncharitable to me. Since the initial announcement, they have brought on a bunch more companies & organizations, which seems like a good first step and would not make sense for a vanity group, because positive press will be diluted across a larger number of groups. You can read some info about their initiatives here. If you're not excited, maybe you should apply for one of their open positions and give them your ideas.
FYI, CSER was announced in 2012, and they averaged less than 2 publications per year through the end of 2015.
comment by Gordon Seidoh Worley (gworley) · 2018-02-27T06:12:31.003Z
Oh interesting. Thanks for telling me!