Cap Model Size for AI Safety
post by research_prime_space · 2023-03-06T01:11:59.617Z · LW · GW · 4 comments
There are diminishing marginal returns to intelligence -- an AI with an IQ of 150 can perform almost all human tasks flawlessly. The only exception may be conducting scientific research.
So why don't we lobby for capping model size at, perhaps, a couple hundred billion parameters? This cap could be strictly enforced if it were encoded into deep learning software packages (e.g. PyTorch, NVIDIA's CUDA libraries, etc.).
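To make the "encoded into the software" idea concrete, here is a minimal sketch of what such a check might look like in PyTorch. The `PARAM_CAP` value and the `enforce_param_cap` helper are hypothetical, not part of any existing package:

```python
import torch
import torch.nn as nn

# Hypothetical cap: a couple hundred billion parameters.
PARAM_CAP = 200_000_000_000

def enforce_param_cap(model: nn.Module) -> None:
    """Refuse to proceed if the model exceeds the hypothetical parameter cap."""
    n_params = sum(p.numel() for p in model.parameters())
    if n_params > PARAM_CAP:
        raise RuntimeError(
            f"Model has {n_params:,} parameters, exceeding the cap of {PARAM_CAP:,}."
        )

# Example usage before training:
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))
enforce_param_cap(model)  # passes; a 200B+ parameter model would raise
```

Of course, a check like this only binds people running unmodified builds of the library, which is part of why enforcement comes up in the comments below.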
I think this may be the most tractable approach for AI safety.
Comments sorted by top scores.
comment by baturinsky · 2023-03-06T06:36:13.323Z · LW(p) · GW(p)
I think it would be more enforceable to cap the size of GPUs that are produced and/or are available for unrestricted use.
↑ comment by research_prime_space · 2023-03-06T23:35:41.917Z · LW(p) · GW(p)
I feel like capping the memory of GPUs would also affect normal folk who just want to train simple models, so it may be less likely to be implemented. It also doesn't really cap the model size, which is the main problem.
But I agree it would be easier to enforce, and certainly much better than the status quo.
comment by JBlack · 2023-03-06T04:54:23.321Z · LW(p) · GW(p)
We don't know that there are diminishing marginal returns to the capability of models (I prefer not to use the word "intelligence" for this). There are many, many applications where increasing accuracy on some tasks from 99% to 99.9% would be enormously better. When such models are deployed in earnest, proponents of AI development could very well justify the claim that regulatory limits are actually killing people, e.g. through industrial accidents that would have been prevented by a more capable model, bad medical advice, road crashes, and so on. It would take serious and unwavering counter-pressure to keep such regulations from crumbling.
The other problem is that we don't know the real limits on capabilities for models of a given size, and an advance in efficiency may lead to generally superhuman capabilities anyway. It is unlikely that the earliest models will be nearly the most efficient that are possible, or even within a factor of 10 (or 100).
Bad actors would probably run models larger than the allowed size, and this would become rapidly easier over time. Much of the software is open-source, or the checks could be relatively easily worked around -- for example, by running a deep model a few layers at a time so that any given running fragment has fewer parameters than the regulated maximum (see the sketch below).
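A minimal sketch of that workaround, assuming a hypothetical per-fragment cap: the model is executed piecewise so that no single fragment ever exceeds the limit. The fragment sizes and the `FRAGMENT_PARAM_CAP` value here are purely illustrative.

```python
import torch
import torch.nn as nn

# Hypothetical per-fragment cap imposed by a regulated framework.
FRAGMENT_PARAM_CAP = 50_000

# A model split into small layer groups ("fragments"); each fragment is
# small enough to pass a per-model parameter check on its own.
fragments = [
    nn.Sequential(nn.Linear(128, 128), nn.ReLU()),
    nn.Sequential(nn.Linear(128, 128), nn.ReLU()),
    nn.Sequential(nn.Linear(128, 10)),
]

def run_in_fragments(x: torch.Tensor) -> torch.Tensor:
    """Run the full model by executing one fragment at a time."""
    for frag in fragments:
        n = sum(p.numel() for p in frag.parameters())
        assert n <= FRAGMENT_PARAM_CAP  # each fragment individually stays under the cap
        x = frag(x)  # only this fragment's parameters are in use at this step
    return x

out = run_in_fragments(torch.randn(1, 128))
```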
It would likely also push the development of AI models that aren't based on parameter counts at all, for which such regulation would be completely ineffective.
↑ comment by research_prime_space · 2023-03-06T23:33:04.600Z · LW(p) · GW(p)
I think you make a lot of great points.
I think some sort of cap is one of the highest-impact things we can do from a safety perspective. I agree that imposing the cap effectively and getting buy-in from broader society is a challenge; however, these problems are a lot more tractable than AI safety itself.
I haven't heard anybody else propose this, so I wanted to float it out there.