[Prediction] We are in an Algorithmic Overhang, Part 2
post by lsusr · 2021-10-17T07:48:20.305Z · LW · GW · 29 comments
In [Prediction] We are in an Algorithmic Overhang [LW · GW] I made technical predictions without much explanation. In this post I explain my reasoning. This prediction is contingent on there not being a WWIII or equivalent disaster disrupting semiconductor fabrication.
I wouldn't be surprised if an AI takes over the world in my lifetime. The idea makes me uncomfortable. I question my own sanity. At first I think "no way could the world change that quickly". Then I remember that technology is advancing exponentially. The world is changing faster than it ever has before, and the pace is accelerating.
Superintelligence is possible. The laws of physics demand it. If superintelligence is possible then it is inevitable. Why haven't we built one yet? There are four[1] candidate limitations:
- Data. We lack sufficient training data.
- Hardware. We lack the ability to push atoms around.
- Software. The core algorithms are too complicated for human beings to code.
- Theoretical. We're missing one or more major technical insights.
We're not limited by data
There is more data available on the Internet than in the genetic code of a human being plus the life experience of a single human being.
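A back-of-the-envelope sketch of that comparison (every magnitude below is an assumed round number, and it only counts language exposure, not raw sensory input):

```python
# Rough comparison of information budgets (all figures are assumed round numbers).

GENOME_BASE_PAIRS = 3.1e9        # human genome, ~3.1 billion base pairs
BITS_PER_BASE = 2                # four possible bases -> 2 bits each
genome_bytes = GENOME_BASE_PAIRS * BITS_PER_BASE / 8        # ~0.8 GB upper bound

WORDS_PER_DAY = 3e4              # assumed order of magnitude of daily language exposure
LIFETIME_DAYS = 30 * 365         # ~30 years of exposure
BYTES_PER_WORD = 6               # ~5 characters plus a space
lifetime_language_bytes = WORDS_PER_DAY * LIFETIME_DAYS * BYTES_PER_WORD   # ~2 GB

WEB_TEXT_BYTES = 1e13            # assumed: filtered web-text corpora run to ~10 TB

print(f"genome:            {genome_bytes / 1e9:>9.1f} GB")
print(f"lifetime language: {lifetime_language_bytes / 1e9:>9.1f} GB")
print(f"web text corpora:  {WEB_TEXT_BYTES / 1e9:>9.0f} GB")
```

Even with generous estimates for the first two quantities, the Internet side of the comparison wins by orders of magnitude.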
We're not (yet) limited by hardware
This is controversial, but I believe throwing more hardware at existing algorithms won't bring them to human level [LW · GW].
I don't think we're limited by our ability to write software
I suspect that the core learning algorithm of human beings could be written in a handful of scientific papers, comparable in length and complexity to Einstein's Annus Mirabilis papers. I can't prove this. It's just gut instinct. If I'm wrong and the core learning algorithm(s) of human beings is too complicated to write in a handful of scientific papers then superintelligence will not be built by 2121.
Porting a mathematical algorithm to a digital computer is straightforward. Individual inputs like snake detector circuits can be learned by existing machine learning algorithms and fed into the core learning algorithm.
We are definitely limited theoretically
We don't know how mammalian brains work.
I don't think there's a big difference in fundamental architecture between human brains and e.g. mouse brains. Humans do have specialized brain regions for language, like Broca's area, but I expect language comprehension would be easy to solve if we had an artificial mouse brain running on a computer.
Figuring out how mammalian brains work would constitute a disruptive innovation. It would rewrite the rules of machine learning overnight. The instant this algorithm becomes public, it would start a race to a superintelligent AI.
What happens next depends on the algorithm. If it can be scaled efficiently on CPUs and GPUs then a small group could build the first superintelligence. If a sufficiently large amount of hardware is required then it might be possible to restrict AGI to nation-states the way private ownership of nuclear weapons is regulated. I think such a future is possible but unlikely. More precisely, I predict with >50% confidence that the algorithm will run efficiently enough on CPUs or GPUs (or whatever we have on the shelf) for a venture-backed startup to build a superintelligence on off-the-shelf hardware, even though specialized hardware would be far more efficient.
A fifth explanation is that we're good at pushing atoms around but our universal computers are too inefficient to run a superintelligence, because the algorithms behind superintelligence run badly on the von Neumann architecture. This is a variant on the idea of being hardware limited. While plausible, I don't think it's very likely, because universal computers are universal. ANNs may not (always) run efficiently on them, but ANNs do run on them. ↩︎
29 comments
Comments sorted by top scores.
comment by Donald Hobson (donald-hobson) · 2021-10-17T15:45:28.958Z · LW(p) · GW(p)
If I'm wrong and the core learning algorithm(s) of human beings is too complicated to write in a handful of scientific papers then superintelligence will not be built by 2121.
It is possible for evolution to have stumbled upon a really complicated algorithm for humans. Deep learning is fairly simple. AIXI is simple. Evolution is simple. If the human brain is incredibly complicated, even in its core learning algorithm, we could make something else. (Or possibly copy lots of data with little understanding)
Replies from: Tofly, quintin-pope
↑ comment by Tofly · 2021-10-17T17:35:13.716Z · LW(p) · GW(p)
The brain may also be excessively complicated to defend against parasites.
↑ comment by Quintin Pope (quintin-pope) · 2021-10-17T20:02:44.216Z · LW(p) · GW(p)
You could also likely build superintelligence by wiring up human brains with brain-computer interfaces, then using reinforcement learning to generate some pattern of synchronized activations and brain-to-brain communication that prompts the brains to collectively solve problems more effectively than a single brain is able to - a sort of AI-guided super-collaboration. That would bypass both the algorithmic complexity and the hardware issues.
The main constraints here are the bandwidths of brain computer interfaces (I saw a publication that derived a Moore’s law-like trend for this, but now can’t find it. If anyone knows where to find such a result, please let me know.) and the difficulty of human experiments.
Replies from: gwern, donald-hobson, lsusr
↑ comment by gwern · 2021-10-17T20:48:44.712Z · LW(p) · GW(p)
"Accelerating progress in brain recording tech"? One reason to be optimistic about brain imitation learning: we may just be in the knee of the curve, well before the curves cross.
Replies from: quintin-pope
↑ comment by Quintin Pope (quintin-pope) · 2021-10-17T21:12:20.428Z · LW(p) · GW(p)
Thanks! I’m pretty sure this isn’t the one I saw, but it works even better for my purposes.
Edit: I'm working on an AI timeline / risk scenario where BCIs and neuro-imitative AI play a big role. I've sent you the draft if you're interested.
↑ comment by Donald Hobson (donald-hobson) · 2021-10-17T23:59:14.690Z · LW(p) · GW(p)
The set of designs that look like "Human brains + BCI + Reinforcement learning" is large. There is almost certainly something superintelligent in that design space, and a lot of things that aren't. Finding a superintelligence in this design space is not obviously much easier than finding a superintelligence in the space of all computer programs.
I am unsure how this bypasses algorithmic complexity and hardware issues. I would not expect human brains to be totally plug-and-play compatible. It may be that the results of wiring 100 human brains together (with little external compute) are no better than the same 100 people just talking. It may be you need difficult algorithms and/or lots of hardware as well as BCIs.
Replies from: quintin-pope
↑ comment by Quintin Pope (quintin-pope) · 2021-10-18T06:41:19.838Z · LW(p) · GW(p)
I think using AI + BCI + human brains will be easier than straight AI for the same reason that it’s easier to finetune pretrained models for a specific task than it is to create a pretrained model. The brain must have pretty general information processing structure, and I expect it’s easier to learn the interface / input encoding for such structures than it is to build human level AI.
Part of that intuition comes from how adaptable the brain is to injury, new sensory modalities, controlling robotic limbs, etc. Another part of the intuition comes from how much success we’ve seen even with relatively unsophisticated efforts to manipulate brains, such as curing depression.
Replies from: donald-hobson
↑ comment by Donald Hobson (donald-hobson) · 2021-10-18T19:31:45.210Z · LW(p) · GW(p)
It's easier to couple a cart to a horse than to build an internal combustion engine.
It's easier to build a modern car than to cybernetically enhance a horse to be that fast and strong.
Humans plus BCI are not too hard. If keyboards count as crude BCIs, it's easy. Making something substantially superhuman? That's harder than building an ASI from scratch.
Replies from: quintin-pope
↑ comment by Quintin Pope (quintin-pope) · 2021-10-18T23:22:03.616Z · LW(p) · GW(p)
You can easily combine multiple horses into a “super-equine” transport system by arranging for fresh horses to be available periodically across the journey and pushing each horse to unsustainable speeds.
Also, I don’t think it’s very hard to reach somewhat superhuman performance with BCIs. The difference between keyboards and the BCIs I’m thinking of is that my BCIs can directly modify neurology to increase performance. E.g., modifying motivation/reward to make the brains really value learning about/accomplishing assigned tasks. Consider a company where every employee/manager is completely devoted to company success, fully trusts their colleagues, and engages in very little internal politicking/empire building. Even without anything like brain-level, BCI-enabled parallel problem solving or direct intelligence augmentation, I’m pretty sure such a company would perform far better than any pure-human company of comparable size and resources.
Replies from: donald-hobson
↑ comment by Donald Hobson (donald-hobson) · 2021-10-21T08:58:20.059Z · LW(p) · GW(p)
Firstly, we already have humans working together.
Secondly, do BCIs mean brainwashing for the good of the company? I think most people wouldn't want to work for such a company. I mean, companies probably could substantially increase productivity with psychoactive substances. But that's illegal and a good way to lose all your employees.
Also, something Moloch-like has a tendency to pop up in a lot of unexpected ways. I wouldn't be surprised if you get direct brain-to-brain politicking.
Also this is less relevant for AI safety research, where there is already little empire building because most of the people working on it already really value success.
Replies from: quintin-pope
↑ comment by Quintin Pope (quintin-pope) · 2021-10-21T18:47:44.453Z · LW(p) · GW(p)
“… do BCI's mean brainwashing for the good of the company? I think most people wouldn't want to work for such a company.”
I think this is a mistake lots of people make when considering potentially dystopian technology: assuming that dangerous developments can only happen if they’re imposed on people by some outside force. Most people in the US carry tracking devices with them wherever they go, not because of government mandate, but simply because phones are very useful.
Adderall use is very common in tech companies, esports gaming, and other highly competitive environments. Directly manipulating reward/motivation circuits is almost certainly far more effective than Adderall. I expect the potential employees of the sort of company I discussed would already be using BCIs to enhance their own productivities, and it’s a relatively small step to enhancing collaborative efficiency with BCIs.
The subjective experience for workers using such BCIs is probably positive. Many of the straightforward ways to increase workers’ productivity seem fairly desirable. They’d be part of an organisation they completely trust and that completely trusts them. They’d find their work incredibly fulfilling and motivating. They’d have a great relationship with their co-workers, etc.
Brain to brain politicking is of course possible, depending on the implementation. The difference is that there’s an RL model directly influencing the prevalence of such behaviour. I expect most unproductive forms of politicking to be removed eventually.
Finally, such concerns are very relevant to AI safety. A group of humans coordinated via BCI with unaligned AI is not much more aligned than the standard paper-clipper AI. If such systems arise before superhuman pure AI, then I expect them to represent a large part of AI risk. I’m working on a draft timeline where this is the case.
↑ comment by lsusr · 2021-10-17T20:13:07.620Z · LW(p) · GW(p)
This makes sense. I like that you brought the topic up.
I predict that brain-computer interfaces will advance too slowly to matter much in the race to a superintelligence but I'd be excited to be proven wrong. A world where brain-computer interfaces advance faster than AI would be extremely interesting.
Replies from: quintin-pope
↑ comment by Quintin Pope (quintin-pope) · 2021-10-17T21:37:07.842Z · LW(p) · GW(p)
I’m actually working on an AI progress timeline / alignment failure story where the big risk comes from BCI-enabled coordination tech (I've sent you the draft if you're interested). I.e., instead of developing superintelligence, the timeline develops models that can manipulate mood/behavior through a BCI, initially as a cure for depression, then gradually spreading through society as a general mood booster / productivity enhancer, and finally being used to enhance coordination (e.g., make everyone super dedicated to improving company profits without destructive internal politics). The end result is that coordination models are trained via reinforcement learning to maximize profits or other simple metrics and gradually remove non-optimal behaviors in pursuit of those metrics.
This timeline makes the case that AI doesn’t need to be superhuman to pose a risk. The behavior modifying models manipulate brains through BCIs with far fewer electrodes than the brain has neurons and are much less generally capable than human brains. We already have a proof of concept that a similar approach can cure depression, so I think more complex modifications like loyalty/motivation enhancement are possible in the not too distant future.
You may also find the section of my timeline addressing standard progress in AI interesting:
My rough mental model for AI capabilities is that they depend on three inputs:
- Compute per dollar. This increases at a somewhat sub-exponential rate. The time between 10x increases is increasing. We were initially at ~10x increase every four years, but recently slowed to ~10x increase every 10-16 years (source).
- Algorithmic progress in AI. Each year, the compute required to reach a given performance level drops by a constant factor (so far, a factor of 2 every ~16 months) (source). I think improvements to training efficiency drive most of the current gains in AI capabilities, but they'll eventually begin falling off as we exhaust low hanging fruit.
- The money people are willing to invest in AI. This increases as the return on investment in AI increases. There was a time when money invested in AI rose exponentially and very fast, but it’s pretty much flattened off since GPT-3. My guess is this quantity follows a sort of stutter-stop pattern where it spikes as people realize algorithmic/hardware improvements make higher investments in AI more worthwhile, then flattens once the new investments exhaust whatever new opportunities progress in hardware/algorithms allowed.
When you combine these somewhat sub-exponentially increasing inputs with the power-law scaling laws so far discovered (see here), you probably get something roughly linear, but with occasional jumps in capability as willingness to invest jumps.
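A minimal sketch of that mental model (the functional forms and constants below are my own assumed round numbers, loosely matching the figures quoted above rather than anything taken from the sources):

```python
def compute_per_dollar(year):
    """Hardware price-performance: assume 10x every ~12 years after 2020 (the slower, recent trend)."""
    return 10 ** ((year - 2020) / 12)

def algorithmic_multiplier(year):
    """Training-efficiency gains: assume effective compute doubles every ~16 months."""
    return 2 ** ((year - 2020) * 12 / 16)

def dollars_invested(year):
    """Willingness to spend: assume a stutter-stop pattern, flat between occasional 10x jumps."""
    return 1e7 * 10 ** int((year - 2020) / 5)

def capability_proxy(year, alpha=0.05):
    """Power-law scaling: performance improves as a small power of effective training compute."""
    effective_compute = compute_per_dollar(year) * algorithmic_multiplier(year) * dollars_invested(year)
    return effective_compute ** alpha

for year in range(2020, 2045, 4):
    print(year, round(capability_proxy(year), 2))
```

With a small scaling exponent, the resulting curve rises gently and close to linearly over a couple of decades, with visible bumps whenever the investment term jumps.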
I think there's a reasonable case that AI progress will continue at approximately the same trajectory as it has over the last ~50 years.
Replies from: None
↑ comment by [deleted] · 2021-10-18T15:13:00.293Z · LW(p) · GW(p)
What metric would you use to capture the trajectory of AI progress over the last 50 years? And would such a metric be able to bridge the transition from GOFAI to deep learning?
Replies from: quintin-pope
↑ comment by Quintin Pope (quintin-pope) · 2021-10-18T23:32:21.529Z · LW(p) · GW(p)
My preferred algorithmic metric would be compute required to reach a certain performance level. This doesn’t really work for hand-crafted expert systems. However, I don’t think those are very informative of future AI trajectories.
comment by Donald Hobson (donald-hobson) · 2021-10-17T15:42:40.135Z · LW(p) · GW(p)
We're not (yet) limited by hardware
There are 2 questions here, the intelligence of existing algorithms with new hardware, and the intelligence of new algorithms with existing hardware.
We could be (and probably are) in a world where existing algorithms + more hardware and existing hardware + better algorithms can both lead to superintelligence. In which case the question is how much progress is needed, and how long it will take.
Replies from: lsusr
↑ comment by lsusr · 2021-10-17T18:13:43.085Z · LW(p) · GW(p)
When you say "[w]e could be (and probably are) in a world where existing algorithms + more hardware…can…lead to superintelligence" are you referring to popular algorithms like GPT or obscure algorithms buried in a research paper somewhere?
Replies from: donald-hobson
↑ comment by Donald Hobson (donald-hobson) · 2021-10-18T00:03:56.963Z · LW(p) · GW(p)
Possibly GPT3 x 100. Or RL of similar scale.
Very likely Evolution (with enough compute, though you might need a lot of it).
AIXI. You will need a lot of compute.
I was kind of referring to the disjunction.
comment by Gunnar_Zarncke · 2021-10-17T10:02:10.216Z · LW(p) · GW(p)
- Theoretical. We're one or more major technical insights.
"missing" missing?
Replies from: lsusr
comment by Jsevillamol · 2021-12-20T16:36:48.364Z · LW(p) · GW(p)
If I'm wrong and the core learning algorithm(s) of human beings is too complicated to write in a handful of scientific papers then superintelligence will not be built by 2121.
Note that even if it is complicated it might be an attractor, so that a not-yet-AGI meta-learning algorithm might stumble upon it.
comment by YimbyGeorge (mardukofbabylon) · 2021-10-17T11:12:04.293Z · LW(p) · GW(p)
What do you think of Roger Penrose's idea that these algorithms will never really understand anything? Consciousness, and true understanding in the way a human understands, require new physics that can explain consciousness.
Replies from: Harmless, lsusr
↑ comment by Harmless · 2021-10-17T14:18:35.203Z · LW(p) · GW(p)
When you say 'require new physics that can explain consciousness', are you imagining:
"New insight shows human brain neuron connections have hundreds of tiny side channels that run much faster than the main connections, leading scientists to conclude that the human brain's processing apacity is much greater than previously thought"
or
"New insight reveals thin threads that run through all connections and allow neurons to undergo quantum superposition, allowing much faster and more complex pattern-matching and conscious thought than previously thougt possible, while still remaining overall deterministic"
or
"New insight shows that the human mind is fundamentally nondeterministic and this somehow involves quantum mechanics"
or
"New insight shows souls are fundamental"
What do you (or your interpretation of Roger Penrose) think a new physics insight that would make consciousness go from mysterious to non-mysterious would look like?
↑ comment by [deleted] · 2021-10-18T14:57:14.248Z · LW(p) · GW(p)
"New insight shows that the human mind is fundamentally nondeterministic and this somehow involves quantum mechanics"
When you say "nondeterministic" do you mean the human brain works akin to a Nondeterministic Turing Machine (and thereby can solve NP-complete problems in polynomial time), or simply that there is some randomness in the brain, or something else?
Replies from: Harmless
↑ comment by Harmless · 2021-10-18T21:37:39.713Z · LW(p) · GW(p)
I don't have a specific mental image for what I mean when I say 'non-deterministic'. I was placing a bet on the assumption that YimbyGeorge was hypothesizing that consciousness was somehow fundamentally mysterious and therefore couldn't be 'merely' deterministic, based on pattern-matching this view rather than any specific mental image of what it would mean for consciousness to only be possible in non-deterministic systems.
↑ comment by YimbyGeorge (mardukofbabylon) · 2021-10-18T09:42:33.999Z · LW(p) · GW(p)
3rd or 4th options
Replies from: Harmless
↑ comment by Harmless · 2021-10-18T21:49:42.355Z · LW(p) · GW(p)
Do you think it will ever be possible to simulate a human mind (or analogous conscious mind) on a deterministic computer?
Do you think it possible in principle that a 'non-deterministic' human mind can be simulated on a non-deterministic substrate analogous to our current flesh substrate, such as a quantum computer?
If yes to either, do you think that it is necessary to simulate the mind at the lowest level of physics (e.g. on a true simulated spacetime indistinguishable from the original) or are higher-level abstractions (like building a mathematical model of one neuron and then using this simple equation as a building block) permissible?
(Also, are you just asking about Roger Penrose's view or is this also your view?)
↑ comment by YimbyGeorge (mardukofbabylon) · 2021-10-19T09:51:50.266Z · LW(p) · GW(p)
My intuitive view is that human minds cannot be simulated by Turing machines because qualia are not explained by computation. I believe Roger Penrose's views are similar.
However, useful philosophical-zombie-like AI can be created by Turing computation. I think the novel Blindsight https://en.wikipedia.org/wiki/Blindsight_(Watts_novel) describes a world where conscious and non-conscious intelligences both exist.
↑ comment by lsusr · 2021-10-17T18:08:28.210Z · LW(p) · GW(p)
Brains cause consciousness. Brains are made out of protons, neutrons, electrons and photons. Our knowledge of physics is more than adequate to model protons, neutrons, electrons and photons under Earthlike conditions for all purposes relevant to biology. If you believe, as I do, that duplicating a physical brain would duplicate a consciousness then our knowledge of fundamental physics is adequate to explain consciousness.
I do agree that we don't know how consciousness works but I don't think advancements in fundamental physics are necessary to solve the puzzle. (Advancements in fundamental physics are necessary to solve things like "why is there a universe" and "why is time" but I expect it's possible to figure out what consciousness is without solving these bits of physics.)