AGI Impossible due to Energy Constraints

post by TheKlaus · 2022-11-30T18:48:43.011Z · LW · GW · 13 comments


Hi all, interesting forum, I came across it via the Future Fund's AI Worldview Prize. I would like to draw your attention to a manuscript my friend Jay Coggan and I wrote, which is relevant to the question of whether AGI, or even an artificial superintelligence, will emerge soon, or even within the next 100 years. We think it's completely unlikely, on energetic grounds. We use the processing needs of the Blue Brain Project to extrapolate how much energy would be needed to simulate a whole human brain. The Blue Brain Project is to date the most detailed brain simulation, but it STILL leaves out many details, so the extrapolated energy need is in fact a vast UNDERestimate of the energy actually required to simulate a human brain.

Using the extrapolated value from the Blue Brain Project, and making some very generous assumptions about brain simulation versus emulation, we arrive at an energy requirement for human-level or super-human intelligence that is orders of magnitude above the total electricity the US produces.
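
To make the shape of this extrapolation concrete, here is a minimal back-of-envelope sketch in Python. The simulation-side numbers (power draw, neurons simulated, slowdown factor) are placeholders for illustration only, not the values we derive in the manuscript; the brain neuron count and US electricity figure are rough public estimates.

```python
# Back-of-envelope structure of the extrapolation (illustrative only).
# The three simulation-side figures are placeholders, NOT the values
# derived in the manuscript.

SIM_POWER_W = 1.0e6        # placeholder: power draw of the partial simulation
SIM_NEURONS = 1.0e6        # placeholder: number of neurons it simulates
SLOWDOWN_FACTOR = 100      # placeholder: how much slower than real time it runs

NEURONS_HUMAN_BRAIN = 8.6e10        # ~86 billion neurons (standard estimate)
US_ELECTRICITY_TWH_PER_YR = 4.0e3   # ~4,000 TWh generated per year (rough figure)

# Linear scale-up to a whole brain running in real time.
whole_brain_power_w = SIM_POWER_W * (NEURONS_HUMAN_BRAIN / SIM_NEURONS) * SLOWDOWN_FACTOR
whole_brain_twh_per_yr = whole_brain_power_w * 8760 / 1e12   # W -> TWh over one year

print(f"Whole-brain simulation: ~{whole_brain_power_w:.1e} W "
      f"(~{whole_brain_twh_per_yr:.1e} TWh/yr)")
print(f"Ratio to US electricity generation: "
      f"{whole_brain_twh_per_yr / US_ELECTRICITY_TWH_PER_YR:.1e}x")
```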

Unless a way of computing emerges that is completely different from silicon-semiconductor-based computing and closer to biology, AGI or a superintelligence is completely unlikely.

Hence we estimate that:

AGI will be developed by January 1, 2043 (about 20 years from now): absolutely unlikely, <<1%

Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI: highly unlikely, <1%        

AGI will be developed by January 1, 2100: highly unlikely, <2%

The link to the full manuscript on a preprint server is here:

https://www.techrxiv.org/articles/preprint/A_Hard_Energy_Use_Limit_of_Artificial_Superintelligence/21588612

(note: I already submitted this directly to the AI Worldview Prize, but I think the community here would be interested, too).

13 comments

Comments sorted by top scores.

comment by Cervera · 2022-11-30T19:17:37.435Z · LW(p) · GW(p)

Hey, interesting post.

Artificial General Intelligence has nothing to do with simulating brains.

The approaches are different, the math formulations are different. We're slowly moving towards sparsity for some things (which is similar to how a brain works), but still.

I don't think you are calibrated properly about the ideas that are most commonly shared in the LW community.

Nobody is saying "we will get a brain simulator so good that it will kill us." That's not the point.

The point is that we can create agents in other ways, and those agents can still kill us, no brain simulation included.

Replies from: TheKlaus, TAG, weverka
comment by TheKlaus · 2022-11-30T21:10:05.196Z · LW(p) · GW(p)

We address this argument. In our opinion AGI has a lot to do with simulating brains, since an agent of similar or higher complexity has to be created. There can be no shortcut.

A deep learning network with 10^7 nodes will not outperform a brain with 10^11 neurons, especially if each neuron is highly complex.
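
To put rough numbers on that gap, here is an illustrative sketch; the per-neuron complexity factor is an assumption, not a measured quantity.

```python
# Order-of-magnitude gap behind the claim above (illustrative only).
ANN_NODES = 1e7            # nodes in the hypothetical deep learning network
BRAIN_NEURONS = 1e11       # neurons in a human brain (order of magnitude)
STATES_PER_NEURON = 1e3    # assumed complexity factor per biological neuron

gap = (BRAIN_NEURONS * STATES_PER_NEURON) / ANN_NODES
print(f"Effective gap under these assumptions: ~{gap:.0e}x")   # ~1e+07x
```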

We are not arguing that a brain simulation will or will not take over, but that an agent capable of it would have to use a similar amount of energy, or at best a few orders of magnitude less. And that's unrealistic.

Replies from: Viliam
comment by Viliam · 2022-12-01T08:03:07.537Z · LW(p) · GW(p)

Haven't read the paper, so sorry if this is explained there, but I disagree with the assumption that the human brain is the minimum possible size for an agent. The human brain has constraints that do not apply to electronic, non-evolved agents.

As an example, my external hard disk drive has a capacity of 1.5 TB. How many bytes of information can a human brain store reliably? How many human-style neurons would we need to simulate in order to create a human-like agent capable of memorizing 1.5 TB of arbitrary data reliably? My point is that simply building the 1.5 TB external HDD, plus some interface with the rest of the brain if necessary, is several orders of magnitude cheaper than trying to use a human-like neuron architecture for the same purpose.
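
A rough sketch of that comparison in Python; the bits-per-synapse and synapses-per-neuron figures below are illustrative assumptions (published estimates vary widely).

```python
# How many human-style neurons would it take just to match a 1.5 TB drive?
# (Illustrative; storage-per-synapse estimates vary widely.)
HDD_BYTES = 1.5e12           # 1.5 TB external drive
BITS_PER_SYNAPSE = 4.7       # assumed information capacity per synapse
SYNAPSES_PER_NEURON = 1e4    # typical order-of-magnitude estimate

neurons_needed = HDD_BYTES * 8 / (BITS_PER_SYNAPSE * SYNAPSES_PER_NEURON)
print(f"Neurons needed to match 1.5 TB: ~{neurons_needed:.1e}")   # ~2.6e+08
# And that is before any redundancy needed to store the data *reliably*.
```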

Replies from: Erich_Grunewald, weverka
comment by Erich_Grunewald · 2022-12-01T08:51:30.708Z · LW(p) · GW(p)

Possible additional advantages for a silicon intelligence (besides storage):

  • It can view its own neurons and edit their weights or configuration.
  • It can be copied, and plausibly copy itself.
  • Its memory/storage can be shared/copied/backed up.
  • It may have access to better, higher-fidelity sensors.
  • We evolved to perform a set of tasks suitable to a hunter-gatherer environment. It can be trained or configured to perform a set of tasks that is more optimised for today's world.
  • It has access to more energy.
  • It can perform operations faster (Bostrom writes that biological neurons operate at 200 Hz).
  • It can send signals internally faster (Bostrom writes that axons carry action potentials at 120 m/s, which is 6 OOMs slower than the speed of light). A quick check of these two ratios is sketched below.
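
For concreteness, a quick check of the two speed ratios in the last bullets; the 2 GHz clock is just a commodity-processor figure assumed for comparison.

```python
# Quick check of the speed ratios cited above (illustrative).
NEURON_RATE_HZ = 200       # firing rate Bostrom cites for biological neurons
CHIP_CLOCK_HZ = 2e9        # a commodity ~2 GHz processor, for comparison
AXON_SPEED_M_S = 120       # action-potential conduction speed Bostrom cites
LIGHT_SPEED_M_S = 3e8

print(f"Clock-rate ratio:   ~{CHIP_CLOCK_HZ / NEURON_RATE_HZ:.1e}x")    # ~1.0e+07x
print(f"Signal-speed ratio: ~{LIGHT_SPEED_M_S / AXON_SPEED_M_S:.1e}x")  # ~2.5e+06x, i.e. ~6 OOMs
```
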
comment by weverka · 2022-12-01T13:17:15.059Z · LW(p) · GW(p)

Reliable?  Your hard disk will be unreadable before long, while the human brain has developed ways to pass information down over generations.

comment by TAG · 2022-12-01T15:49:25.220Z · LW(p) · GW(p)

AGI doesn't necessarily have anything to do with simulating brains, but it would count if you could do it.

comment by weverka · 2022-12-01T13:14:18.101Z · LW(p) · GW(p)

>I don't think you are calibrated properly about the ideas that are most commonly shared in the LW community.

This is chastising him for failure to abide by groupthink.
The rest of your comment makes a point that is undermined by this statement.

Replies from: Cervera
comment by Cervera · 2022-12-01T13:27:52.182Z · LW(p) · GW(p)

I don't think I wrote that statement with that particular intention in mind.

I'm not trying to imply he is wrong because he doesn't know our "groupthink". I was just generally annoyed at how he started the post, so I wanted to be reasonably civil, but a bit mean.

Thanks for noticing. I'm not convinced I should have refrained from that particular comment, though.

What would you have said? 

Replies from: lorenzo-rex, weverka
comment by lorepieri (lorenzo-rex) · 2022-12-01T15:09:51.399Z · LW(p) · GW(p)

I would suggest removing "I don't think you are calibrated properly about the ideas that are most commonly shared in the LW community" and presenting your argument without speaking for the whole community.

comment by weverka · 2022-12-02T13:04:17.529Z · LW(p) · GW(p)

>What would you have said? 

Your comment is stronger without this sentence.

comment by Gunnar_Zarncke · 2022-12-01T19:16:14.892Z · LW(p) · GW(p)

You may want to have a look at the Reply to Eliezer on Biological Anchors [LW · GW] post, which itself refers to Forecasting transformative AI timelines using biological anchors [LW · GW]. I think your writeup falls into this wider category, and you may see which of the discussed estimates (which get weighted together in the post) is closest to your approach or whether it is systematically different.

comment by TheKlaus · 2022-12-14T23:46:59.645Z · LW(p) · GW(p)

Thank you all for the very useful contributions! Apologies for my delayed response, the end of the semester etc. caught up with me.

I think a main issue that emerges is that there are no good measures for computation which apply equally well to brains and algorithms/computers. Can I memorize 1 TB worth of numbers? Of course not. But how many TB of data do my (mediocre) squash playing skills take up? I think there are no good answers to many related questions yet.

comment by delton137 · 2022-12-01T00:09:13.610Z · LW(p) · GW(p)

On current hardware, sure.

It does look like scaling will hit a wall soon if hardware doesn't improve; see this paper: https://arxiv.org/abs/2007.05558

But Gwern has responded to this paper pointing out several flaws... (having trouble finding his response right now..ugh)

However, we have lots of reasons to think Moore's law will continue; in particular, future AI will run on custom ASICs / TPUs / neuromorphic chips, which is a very different story. I wrote about this long ago, in 2015. Such chips, especially asynchronous and analog ones, can be vastly more energy efficient.