"AGI soon, but Narrow works Better"
post by AnthonyRepetto · 2022-10-14T21:35:37.815Z · LW · GW · 9 comments
This is my offering to the Future Fund's "A.I. Worldview Prize" - and I stand as an outsider who is unlikely to move the needle; I hope this angle of approach is noticed there, at least.
Our Last Limitations?
The AGI with feelings and motives is irrelevant to my concerns; I worry about a machine that can add to its skill-set without bounds. The key functional constraints on this are 'catastrophic forgetting' and the difficulty of extracting causal relationships & formulae from observations. Once those twin pillars fall, a single AGI can successively add to its competencies, while finding symbolic parallels which further compress and unify its understanding. Even such a 'Vulcan' AGI could be a mis-aligned monster.
I don't see those 'forgetting & formulating' problems taking very long to solve; my personal timeline is 5-10 years, with greater than half likelihood (bad stuff can happen, but this industry is not about to winter). After those hurdles, the issue of 'getting to brain-scale' will be found irrelevant - because human brains are bloatware. To explain:
We worry that, because each neuron in our own brains has complexity that takes ~200 artificial neurons to simulate, and because we have 100 trillion synapses or so, 'getting to human brain performance' will also require getting to some 20 million billion weights and biases. Yet all of the evidence so far throws down a gauntlet against this concern. Consider that language models are able to out-perform the average human (we are an AGI) while containing some 1,000 times fewer connections than us. Even driverless vehicles have a better safety record than the average human, and last I checked, those were not running an AGI with trillions of parameters.
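For concreteness, here is the back-of-envelope arithmetic behind those figures as a short Python sketch; the 200-to-1 and 100-trillion numbers are the rough estimates from the paragraph above, and the language-model parameter count is an assumed order of magnitude rather than any specific model.

```python
# Back-of-envelope arithmetic (order-of-magnitude only; figures are the post's rough estimates).
biological_synapses = 100e12           # ~100 trillion synapses in a human brain
artificial_per_biological = 200        # ~200 artificial neurons to emulate one biological neuron

naive_brain_scale_parameters = biological_synapses * artificial_per_biological
print(f"Naive 'brain-scale' parameter count: {naive_brain_scale_parameters:.0e}")  # ~2e16, i.e. ~20 million billion

llm_parameters = 1e11                  # assumed: a large language model on the order of a hundred billion weights
print(f"Synapse count vs. LLM weights: ~{biological_synapses / llm_parameters:,.0f}x")  # ~1,000x
```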
Realistically, we should expect the human brain to be hyper-redundant - look at our plasticity! And the rate of loss and damage over the course of our reality-bonked lives! And our brains learn from startlingly few examples, which I suspect is related to their immense over-parameterization. (This has an impact on AGI policy, too: we won't want AGI zero-shotting situations that we ourselves haven't grasped yet; "over-parameterizing an AGI with sparse training data to zero-shot the unexpected" is the definition of insane risk!)
So, near-term compute capacity will be plenty to run super-human AGI... and it'll be crap.
The Narrow Revolution
Consider that an AGI will need more training time than a narrow intelligence, with more tedious checks upon its behaviors, given the myriad tasks it can be asked to perform; every query will also take longer to run through its enormous network, leading to longer lag before it answers while requiring thousands of times more compute. Here is my bet:
"If we have equal training data and compute, memory, etc. - and we have a real-world business or military objective, then my 1,000 distinct instances of differentiated narrow networks will outperform your bloated, slow, single instance of an AGI. If you pick AGI, you lose."
General intelligence was only called upon once, during a period of repeated ice ages that gave advantage to the fastest adaptations, while narrow intelligence is preferred across the natural world. Industry is showing the same persistent pattern: narrow wins.
With narrow AI, there are still immense risks; yet we can ensure safety from 'take-over' by restricting ourselves to non-temporal intelligence. As per standard practice already, a network's weights are frozen once it is deployed, which ensures that it cannot form new memories. A frozen network cannot execute a multi-stage plan, because it doesn't know which steps it has already completed. It can only respond to each prompt; it cannot form a strategy to keep you from turning it off. No Robo-Overlords.
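As a minimal sketch of what that 'non-temporal' deployment looks like in code: the model is a pure function of frozen weights and the current prompt, so nothing carries over between calls. `NarrowModel` and its methods are hypothetical placeholders, not any real library's API.

```python
# Minimal sketch of a 'non-temporal' deployment: weights are frozen and no state is carried
# between prompts, so each answer is independent of the last one.
# `NarrowModel` is an illustrative placeholder, not a real library's API.

class NarrowModel:
    def __init__(self, weights: str):
        self.weights = weights                    # fixed at deployment; never updated

    def respond(self, prompt: str) -> str:
        # A pure function of (frozen weights, current prompt): nothing from earlier calls
        # survives, so the model cannot track which steps of a multi-stage plan are done.
        return f"[output of frozen network {self.weights!r} on: {prompt!r}]"

model = NarrowModel(weights="deployed_checkpoint")    # stands in for loading frozen parameters
print(model.respond("Summarize this report."))
print(model.respond("Now do step two."))              # step two of what? The first call left no trace.
```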
That is my second bet: "We will realize that prompted, task-focused narrow intelligences do almost everything we feel comfortable with them doing, while AGI provides no performance benefit and exposes us to known catastrophic risk. Therefore, we never bother with AGI (after proving that we can do it). The real safety issues are from poor alignment and mis-use."
9 comments
Comments sorted by top scores.
comment by SD Marlow (sd-marlow) · 2022-10-15T03:06:40.141Z · LW(p) · GW(p)
True enough that an AGI won't have the same "emotional loop" as humans, and that could be grounds for risk of some kind. Not clear if such "feelings" at that level are actually needed, and no one seems to have concerns about loss of such an ability from mind uploading (so perhaps it's just a bias against machines?).
Also true that current levels of compute are enough for an AGI, and you at least hint at a change in architecture.
However, for the rest of the post, your descriptions are strictly talking about machine learning. It's my continued contention that we don't reach AGI under current paradigms, making such arguments about AGI risk moot.
comment by the gears to ascension (lahwran) · 2022-10-15T00:56:44.663Z · LW(p) · GW(p)
not sure I agree, but I love the post. what are your thoughts on this paper? https://www.semanticscholar.org/paper/The-Alberta-Plan-for-AI-Research-Sutton-Bowling/f3829d2f1de5c735c7767322bf742746dc682d4b
↑ comment by AnthonyRepetto · 2022-10-15T05:55:58.083Z · LW(p) · GW(p)
Thank you! I have a feeling Sutton will succeed, without having to make too many huge architectural leaps - we already have steady progress in generalization, and extracting formulae which fit observations is getting better. It will probably be some embarrassing moment where a researcher says "well, what if we just try it like this?"
And, with that 'generalized-concept-extractor' in hand... we'll find that we get better performance with the narrow AI that was AutoML'd into being in a few minutes. AGI research will grind to a halt as soon as it succeeds.
comment by [deleted] · 2022-10-15T00:10:49.589Z · LW(p) · GW(p)
So the problem with your second point is that for some capabilities - in fact, many of them - the system is going to need some memory just so it can do obvious things like object permanence, etc. So while its weights may be frozen, it will have state from prior frames.
This does make complex plans possible.
The other aspect is that generality may prove to be cheaper and more robust. Assuming it has the ability to use knowledge from multiple learned tasks on the present task, you might deploy a general AI by taking some existing model, plumbing it to some collection of cheap robotics hardware, and giving it a JSON file with the rules you want it to adhere to.
Still, yes, in a way this machine is a narrow AI. It's the smallest and simplest model that scores well on a benchmark of tasks and measurements of generality, where it needs to apply multiple skills it learned in training to a test task it has never seen.
I would call that machine an AGI, but it doesn't have elements that don't help it score more points - no emotions or goals of its own, unless a machine needs these properties to score well.
↑ comment by AnthonyRepetto · 2022-10-29T18:01:05.842Z · LW(p) · GW(p)
I have an analogy that may help distinguish the two AI systems we are talking about: 1) the RETRO-type narrow language AIs we already possess, which are able to modify a cache of memory to aid in their operations while their weights and biases are frozen, and 2) a temporal, life-long learning AGI which is able to add new functionalities without limit, such that we cannot contain it and it is guaranteed to overwhelm us.
That second, life-long learner is the 'AGI domination-risk', because it can add new capabilities without bounds. RETRO and its ilk are certainly not able to add capabilities without bounds, so they are not a risk in the same ways (their risk is mostly mis-use).
The Analogy:
RETRO and other memory-cache equipped narrow AIs are like a piece of software that can have multiple global and local state variables. These state variables allow the software to act along numerous 'paths' within the totality of what its program can already do; yet these state variables do NOT allow the program to acquire new capabilities "at-will and without bounds". You have to write a new piece of software for that. The risk is AGI writing more capabilities onto itself, not a "narrow AI switching between pre-existing settings."
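To make the analogy concrete, here is a toy Python sketch (all names hypothetical): the state variables only route the program among behaviours it already has, and nothing in the program can add a new behaviour for itself.

```python
# Toy sketch of the distinction: state variables select among pre-existing capabilities;
# nothing here lets the program grow a new capability at will.

CAPABILITIES = {
    "summarize": lambda text: text[:60] + "...",
    "translate": lambda text: f"[translation of: {text!r}]",
}

state = {"mode": "summarize"}    # the 'memory cache': selects among pre-existing paths

def respond(text: str) -> str:
    return CAPABILITIES[state["mode"]](text)     # can only dispatch to an existing entry

# The AGI-risk case would be a system that writes new entries into CAPABILITIES for itself,
# without a human adding them; no line of this program can do that.
print(respond("Quarterly results were mixed, with revenue up and margins down."))
```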
Do you have a rebuttal to my rebuttal, or any further critique? Yours is the same, singular response I've heard from the AI safety folks I've spoken to in Berkeley, and none of you have had a follow-up when I point out the distinction... Don't I stand provisionally correct, until I stand corrected?
↑ comment by [deleted] · 2022-11-06T19:52:17.492Z · LW(p) · GW(p)
Couple of comments here.
In practice, the obvious way to construct an AI stack allows the AI itself to just be a product of another optimization engine. The model and its architecture are generated from a higher-level system that allocates a design to satisfy some higher-level constraint. So the AI itself is pretty mutable; it can't add capabilities without limit, because the capabilities must be in pursuit of the system's design heuristic written by humans, but humans aren't needed to add capabilities.
In practice, from a human perspective, a system that has some complex internal state variables can fail in many, many ways. It's inherently quite unreliable. It's why in your own life you have seen many system failures - they almost all failed from complex internal state. It's why your router fails, why a laptop fails, a game console fails to update, a car infotainment system fails to update, a system at the DMV or a school or hospital goes down, and so on.
↑ comment by AnthonyRepetto · 2022-10-15T14:36:40.395Z · LW(p) · GW(p)
[[Tangent: I am also of the perspective that we are already FOOMing, via human+bot collaborations... what else can you call it, when we & narrow AI together are able to speed up matrix multiplication, of all things! Sweet Lord, any CS major in 2015 would have called that FOOMing... and we are already doing it, using 'dumb' narrow AI, and we are stumbling forward barely conscious of what's next! Essentially, "AGI was created accidentally, when we drunkenly FOOMed so hard that capabilities came out..." No sane person was predicting how to find trimmed networks using the Lottery Ticket Hypothesis, so perhaps No-Sane Person will be right, again!]]
↑ comment by AnthonyRepetto · 2022-11-01T07:25:03.014Z · LW(p) · GW(p)
Well, narrow AI just FOOMed in its pants a little more: "Large Language Models can Self-Improve"
The researchers let PaLM parse, prompt, and filter its own outputs, to get a 'chain-of-thought' that is a more reliable epistemic methodology for the AI to follow, compared to its own once-through assumption. I stand by my claims - "AGI soon, but narrow works better", and "prompted, non-temporal narrow AI with frozen weights will be able to do almost everything we feel comfortable letting them do."
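As I understand that paper, the loop is roughly: sample several chains of thought per question, keep the answers the model agrees with itself about, and use those rationales as new training data. A rough sketch of that loop follows, with stub functions standing in for the actual PaLM calls; none of this is the paper's own code or API.

```python
# Rough sketch of the self-improvement loop as I understand it: sample several chains of
# thought per question, keep the answers the model agrees with itself about, and use those
# rationales as new training data. Every function here is an illustrative stub.
import random
from collections import Counter

def sample_chain_of_thought(question: str) -> tuple:
    """Stub for one sampled reasoning path; returns (rationale, final_answer)."""
    answer = random.choice(["A", "A", "A", "B"])   # pretend the model usually converges on "A"
    return (f"reasoning toward {answer}", answer)

def self_generate_training_data(questions, samples_per_question=8, agreement_threshold=0.6):
    kept = []
    for question in questions:
        paths = [sample_chain_of_thought(question) for _ in range(samples_per_question)]
        votes = Counter(answer for _, answer in paths)
        best_answer, count = votes.most_common(1)[0]
        if count / samples_per_question >= agreement_threshold:    # self-consistency filter
            rationales = [r for r, a in paths if a == best_answer]
            kept.append((question, rationales, best_answer))       # would be fine-tuned on next
    return kept

print(self_generate_training_data(["toy question 1", "toy question 2"]))
```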
↑ comment by AnthonyRepetto · 2022-10-15T06:12:42.276Z · LW(p) · GW(p)
Thank you for the critique!
True, there is memory for RETRO, as an example, which allowed that language model to perform well with fewer parameters - yet that sort of memory is distinct in impact from the sort of 'temporal awareness' that the above commenter mentioned with Sutton's Alberta Plan. Those folks want a learning agent that exists in time the way we do, responding in real-time to events and forming a concept of its world. That's the sort of AI which can continually up-skill - I'd mentioned 'unbounded up-skilling' as the core criterion for AGI domination-risk - unbounded potential for potency. RETRO is still solidly a narrow intelligence, despite having a memory cache for internal processes; that cache can't add features about missile defense systems, specifically, so we're safe from it! :3
The idea that generalization is cheaper by 'hitting everything at once', while narrow is 'more work for each specific task', was only true when humans had to munge all the data and do the hyperparameter searches themselves. AutoML ensures that narrow has the same 'reach' as a single, general AI; there is also definitely less work and lag to train a narrow AI on any particular task than to train a general AI that eventually learns that task as well. The general AI won't be 'faster to train' for each specific task; it's likely to be locked out of the value chain, by each narrow AI eating its cake first.
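A schematic sketch of that claim: an AutoML-style loop can stamp out one narrow model per task, so covering many tasks doesn't require a single general model. The config space and `train_and_score` are illustrative stubs, not any real AutoML library's API.

```python
# Schematic sketch: one AutoML search per task, yielding one differentiated narrow model per task.
import random

def train_and_score(task: str, config: dict) -> float:
    """Stub: pretend to train a small model for `task` with `config` and return a validation score."""
    return random.random()

def automl_for_task(task: str, trials: int = 20) -> dict:
    candidates = [{"layers": random.choice([2, 4, 8]),
                   "learning_rate": random.choice([1e-2, 1e-3, 1e-4])}
                  for _ in range(trials)]
    return max(candidates, key=lambda config: train_and_score(task, config))

# One narrow network per task, rather than one general model for all of them.
tasks = ["invoice-routing", "defect-detection", "demand-forecasting"]
narrow_models = {task: automl_for_task(task) for task in tasks}
print(narrow_models)
```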
For generalization to be more robust, we have to trust it more... and that verification process, again, will take many more resources than deploying a narrow AI. I guarantee that the elites in China, who spent decades clawing their way to power, are not going to research an AGI that is untested just so that they can hand it the reins of their compan- er, country. They're working on surveillance, military, and factory task automation, and they'd want to stop AGI as much as we do.
I, too, don't regard machine-emotions as relevant to the AGI-risk calculus; just 'unbounded up-skilling' by itself means it'll have capabilities we can't bottle, which is risk enough!