Open Agency model can solve the AI regulation dilemma

post by Roman Leventov · 2023-11-08T20:00:56.395Z · LW · GW · 1 comments


It seems to me that most people concerned about AI regulation (and about calls for a "CERN for AI", or proposals such as the OAA [LW · GW]) are worried about the monopolisation of AI, not about regulation per se. And within AI monopolisation (or oligopolisation), they are mostly worried about the concentration of power, and perhaps somewhat about the bias that may creep into a monopolistic AI (in the recommendations it makes to the user, the answers it gives on contested questions, its ethical worldview, or even the language it works best in and the vocabulary it prefers).

Most of these people are probably fine with regulatory boundaries of the kind set by law: AI shouldn't give instructions for making bioweapons, plan terror attacks, and so on.

The key question, of course, is how to prevent AIs from breaking the law in these ways without effectively enacting an AI oligopoly through a stringent regulatory approval regime.

The only approach to solving this conundrum that at least has a chance of working, it seems to me, is an Open Agency [LW · GW], where each AI service is dedicated to some part of the generative world model (materials science, rocketry, macroeconomics, virology, etc.), and there are also some "glue AIs", like LLMs, that solve problems by calling these services (but are not exceedingly smart themselves and don't internalise much specialised knowledge).
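To make the division of labour concrete, here is a minimal Python sketch of a glue AI routing a task to an approved domain service rather than answering from its own specialised knowledge. All the names (DomainService, ServiceRegistry, GlueAgent, the example domain and regulator) are hypothetical illustrations of the architecture, not an existing API.

```python
# Minimal sketch of the glue-AI / specialised-service split.
# Everything here is hypothetical and illustrative, not an existing API.

from dataclasses import dataclass
from typing import Callable


@dataclass
class DomainService:
    """An approved, domain-limited AI service (e.g. rocketry)."""
    domain: str
    approved_by: str                    # hypothetical regulator identifier
    query: Callable[[str], str]         # the service's inference endpoint


class ServiceRegistry:
    """Registry of independently developed, approved domain services."""
    def __init__(self) -> None:
        self._services: dict[str, DomainService] = {}

    def register(self, service: DomainService) -> None:
        self._services[service.domain] = service

    def get(self, domain: str) -> DomainService:
        return self._services[domain]


class GlueAgent:
    """A 'glue AI': routes sub-problems to domain services instead of
    answering from deeply specialised internal knowledge."""
    def __init__(self, registry: ServiceRegistry) -> None:
        self.registry = registry

    def solve(self, task: str, domain: str) -> str:
        service = self.registry.get(domain)   # KeyError if no approved service exists
        return service.query(task)


# Usage: the glue agent itself knows nothing about rocketry; it only delegates.
registry = ServiceRegistry()
registry.register(DomainService(
    domain="rocketry",
    approved_by="hypothetical-regulator",
    query=lambda task: f"[rocketry service answer to: {task}]",
))
agent = GlueAgent(registry)
print(agent.solve("estimate delta-v for a LEO transfer", domain="rocketry"))
```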

All the specialised services are approved (and therefore oligopolised within a domain), with dangerous knowledge erased from them (or available only to users with security clearance), and inference on these models is constrained to conform to other regulatory and legal requirements. Antitrust agencies would force all services to be developed by independent business or non-profit entities, to prevent the concentration of power.

Glue AIs could be independently developed or open-source, on the condition that they didn't use any deeply specialised knowledge during training (apart from fine-tuning with the specialised services as tools). This could somehow be checked semi-automatically, perhaps through the use of approved datasets (scrubbed of sensitive specialised data) and zero-knowledge proofs of training.
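As a rough illustration of what the "approved datasets" part of such a check might look like, here is a sketch that treats the training corpus as content-addressable shards and verifies each shard against a regulator-published hash list. The registry and the proof-of-training placeholder are hypothetical assumptions of mine; the zero-knowledge component is represented only by a stub.

```python
# Sketch of the "approved datasets" check, assuming training data is split
# into content-addressable shards. The registry and the proof-of-training
# placeholder are hypothetical; a real scheme would pair this with a
# zero-knowledge proof that training used only these shards.

import hashlib

# Hypothetically populated from a regulator-published list of SHA-256 digests
# of dataset shards scrubbed of sensitive specialised data.
APPROVED_SHARD_HASHES: set[str] = set()


def shard_digest(shard_bytes: bytes) -> str:
    return hashlib.sha256(shard_bytes).hexdigest()


def training_data_is_approved(shards: list[bytes]) -> bool:
    """Semi-automatic check: every training shard must be on the approved list."""
    return all(shard_digest(s) in APPROVED_SHARD_HASHES for s in shards)


def verify_training_proof(proof: bytes) -> bool:
    """Placeholder for a zero-knowledge proof that the published weights were
    produced by training on the approved shards only."""
    raise NotImplementedError("ZK proof-of-training is an open research problem")


# Usage: with an empty registry, any shard fails the check.
print(training_data_is_approved([b"example shard"]))   # False until the registry is populated
```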

I think this model addresses the core concerns of the anti-AI-regulation folks: the concentration of power and the freedom of general political and ethical views.

In this world, there would still have to be a lot of nasty compute surveillance and restrictions to prevent people from unilaterally developing AIs that don't conform to the above model, or from running inference on them (perhaps new GPU models would have to verify that the matrix weights belong to an approved or self-approved AI before doing the computation). Some people who are against AI regulation would probably be pissed off by such surveillance, too. But I don't see a way to remove surveillance from the picture and maintain an acceptable level of risk, per the Vulnerable World Hypothesis.
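Here is a minimal sketch of the weight-verification idea, assuming a (hypothetical) driver- or hardware-level hook that can compare a digest of the loaded weights against a signed allowlist before launching any computation. The allowlist, digest scheme, and inference stub are illustrative assumptions, not a real GPU interface.

```python
# Sketch of weight attestation before inference. The allowlist, digest scheme,
# and run_inference stub are hypothetical; a real mechanism would live in the
# driver or hardware, not in user-level Python.

import hashlib

# Digests of approved (or self-approved and registered) model weights,
# hypothetically distributed to the device as a signed allowlist.
APPROVED_WEIGHT_DIGESTS: set[str] = set()


def weights_digest(weight_bytes: bytes) -> str:
    return hashlib.sha256(weight_bytes).hexdigest()


def guarded_inference(weight_bytes: bytes, prompt: str) -> str:
    """Refuse computation unless the weights are on the approved list."""
    if weights_digest(weight_bytes) not in APPROVED_WEIGHT_DIGESTS:
        raise PermissionError("weights are not registered as an approved AI")
    return run_inference(weight_bytes, prompt)   # stand-in for the actual kernel launch


def run_inference(weight_bytes: bytes, prompt: str) -> str:
    return f"[model output for: {prompt}]"       # illustrative stub


# Usage: with an empty allowlist, inference on unregistered weights is refused.
try:
    guarded_inference(b"\x00" * 16, "hello")
except PermissionError as e:
    print(e)
```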


Cross-posted on the Effective Altruism Forum [EA · GW].

1 comment


comment by Artyom Karpov (artkpv) · 2023-11-13T15:58:47.963Z · LW(p) · GW(p)

Antitrust agencies would force all services to be developed by independent business or non-profit entities, to prevent the concentration of power.

What do you think are realistic ways to enforce this at a global level? It seems the UN can't enforce regulations worldwide, and the USA and the EU regulate only within their own jurisdictions. Other countries could catch up, but it seems somewhat unlikely that they will.