Thoughts On Computronium
post by Darklight · 2021-03-03
While it's widely accepted that computers are considerably faster and more powerful than the human brain, it's arguable that evolution optimized the human brain not for raw speed but for energy efficiency. This matters because, in the limit, the theoretical computronium needs to be not only powerful, but an efficient use of resources. Assuming, reasonably, that entropy cannot be reversed, energy is the main practical limiting factor for a universe filled with computronium, or more specifically utilitronium.
Furthermore, if natural selection has already provided a close-to-optimal solution, it may make sense for Omega to fill the universe with human brain matter, perhaps even with human beings enjoying lives worth living, rather than simply taking humanity's atoms and reconfiguring them into some other, as yet unknown, form of computronium. This idea of the human form already being optimal has some obviously desirable characteristics for humans imagining possible futures where the universe is filled with trillions upon trillions of happy humans.
So, practically speaking, how realistic is it to assume that the human brain is anywhere close to optimal, given that the theoretical limits of physics seem to leave considerable leeway for a much higher upper bound on the efficiency of computronium? As an interesting exercise, let's look at real-world supercomputers.
As of this writing, the world's fastest supercomputer is Fugaku, which achieves an impressive 1000 PetaFlops in single precision mode. In comparison, the human brain is estimated at about 20 PetaFlops. In terms of energy efficiency, however, the brain achieves that with about 20 watts of power, for an effective 1 PetaFlops/watt, whereas Fugaku is listed at roughly 0.000015 PetaFlops/watt, or six orders of magnitude less.
Due to mass-energy equivalence, even if we close the gap on energy efficiency in terms of wattage, another possibly dominant term in the equation is the energy bound up in the mass of the matter that makes up the computronium. Here the gap is similar. The human brain masses about 1.5 kg, while Fugaku weighs 700 tons, or over 600,000 kg. The brain thus has an effective mass efficiency of about 13 PetaFlops/kg, while our best existing computer system stands at about 0.0017 PetaFlops/kg. That is roughly four orders of magnitude less efficient.
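As a concrete check, here is a minimal Python sketch of these comparisons, using only the rough figures quoted above (none of these are authoritative benchmarks):

```python
# Rough efficiency comparison, using the ballpark figures quoted in this post.
brain_pflops = 20       # estimated brain compute (PetaFlops)
brain_watts = 20        # approximate brain power budget
brain_kg = 1.5          # approximate brain mass

fugaku_pflops = 1000               # single-precision figure quoted above
fugaku_kg = 635_000                # "700 tons", read as short tons
fugaku_pflops_per_watt = 0.000015  # listed Green500-style efficiency

print(brain_pflops / brain_watts)  # 1.0 PetaFlops/watt
print(brain_pflops / brain_kg)     # ~13.3 PetaFlops/kg
print(fugaku_pflops / fugaku_kg)   # ~0.0016 PetaFlops/kg

# Ratios: how many times more efficient the brain comes out.
print((brain_pflops / brain_watts) / fugaku_pflops_per_watt)    # ~67,000x
print((brain_pflops / brain_kg) / (fugaku_pflops / fugaku_kg))  # ~8,500x
```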
Granted, if we assume exponential technological growth, these orders of magnitude of difference could go away. But is the growth rate actually exponential?
If we look at the numbers: in 2014, the top of the Green500 was 5 GigaFlops/watt; in 2017, it was 17 GigaFlops/watt; in 2020, it was 26 GigaFlops/watt. That is a roughly linear rate of growth of about 3 GigaFlops/watt per year. At that pace it would take over 300,000 years, until roughly the year 335,000 AD, to reach human levels of efficiency.
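As a sketch, the extrapolation arithmetic, assuming (strongly) that the linear trend continues:

```python
# Linear extrapolation of the Green500 trend to brain-level efficiency.
brain = 1_000_000   # 1 PetaFlops/watt, expressed in GigaFlops/watt
current = 26        # Green500 leader in 2020 (GigaFlops/watt)
slope = 3           # approximate gain per year (GigaFlops/watt)

years = (brain - current) / slope
print(years)         # ~333,000 years
print(2020 + years)  # ~year 335,000
```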
What about mass efficiency? Again, Fugaku's is about 0.0017 PetaFlops/kg. IBM Summit, the previous Top500 record holder before Fugaku, has a speed of 200 PetaFlops and weighs half as much at 340 tons, or roughly 300,000 kg, which works out to 0.00067 PetaFlops/kg, and it held the top spot two years in a row. If we go further back to 2011 (the most recent earlier machine I could find with a listed weight), the K computer had the lead with 10 PetaFlops in about 1,200,000 kg worth of hardware, which works out to roughly 0.0000083 PetaFlops/kg. Note that the proportional improvement over the last two years (about 1.6x per year) is noticeably lower than over the previous seven (about 1.9x per year), so the rate of growth appears to be slowing. Even if growth continued linearly at the rate of the last two years, it would still take until roughly the year 27,000 AD to reach parity with the human brain.
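The same kind of sketch for mass efficiency, using the figures quoted above, including a check on how the growth rate compares between the two periods:

```python
# Mass-efficiency data points quoted above (PetaFlops/kg).
k_2011 = 0.0000083
summit_2018 = 0.00067
fugaku_2020 = 0.0017
brain = 13

# Multiplicative improvement per year in each period.
print((summit_2018 / k_2011) ** (1 / 7))       # ~1.9x/year, 2011-2018
print((fugaku_2020 / summit_2018) ** (1 / 2))  # ~1.6x/year, 2018-2020

# Linear extrapolation at the most recent slope.
slope = (fugaku_2020 - summit_2018) / 2      # PetaFlops/kg per year
print(2020 + (brain - fugaku_2020) / slope)  # ~year 27,000
```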
Now, these are admittedly very rough approximations that assume current trends continue normally, and they don't account for what effects something like a singularity or the appearance of artificial superintelligence could have. In theory, we already have enough compute power to be comparable to one human brain, so if we don't care about the efficiency of it, we could conceivably emulate a human brain by sheer brute-force computation.
Nevertheless, the several orders of magnitude of difference between our existing technology and what biology has achieved through billions of years of evolutionary refinement mean that human brain matter could serve as a strong candidate for computronium for the foreseeable future, assuming it is possible to devise programs that can run on neural matter. Given the relatively low cost in energy, it may even make sense for a friendly artificial superintelligence to treat multiplying humanity and ensuring they live desirable lives as a reasonable default option for deploying utilitronium efficiently, given uncertainty about whether, and how long before, a more efficient form can be found and mass-produced.
Comments
comment by [deleted] · 2021-03-05
According to this post, computers today are only three orders of magnitude away from the Landauer limit. So it ought to be literally impossible for the human brain to be six orders of magnitude more efficient. Also, how the hell is the brain supposed to carry out 20 PetaFlops with only 100 billion neurons and a firing rate of a few dozen Hertz? The estimate seems way off to me.
↑ comment by Darklight · 2022-06-17
I'm using the number calculated by Ray Kurzweil for his 1999 book The Age of Spiritual Machines. To get that figure, you need 100 billion neurons, each with on the order of 1,000 connections, firing every 5 ms, i.e. at 200 Hz. That is based on the maximum firing rate given refractory periods. In actuality, average firing rates are usually lower than that, so in all likelihood the difference isn't actually six orders of magnitude. In particular, I should point out that six orders of magnitude refers to the difference between this hypothetical maximum-firing brain and the most powerful supercomputer, not the most energy efficient supercomputer.
The difference between the hypothetical maximum-firing brain and the most energy efficient supercomputer (at 26 GigaFlops/watt) is only three orders of magnitude. For the average brain firing at the speed you suggest, it's probably closer to two orders of magnitude, which would mean the average human brain is probably one order of magnitude away from the Landauer limit.
This also assumes that synapses, rather than neurons alone, are the relevant multiplier.
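For concreteness, a sketch of the arithmetic behind that estimate, assuming the figures from Kurzweil's calculation (100 billion neurons, on the order of 1,000 connections each, 200 Hz maximum firing rate):

```python
neurons = 100e9      # ~10^11 neurons
connections = 1000   # assumed ~10^3 connections per neuron
rate_hz = 200        # one firing per 5 ms refractory period

ops = neurons * connections * rate_hz
print(ops / 1e15)       # 20.0 PetaFlops
print(ops / 1e15 / 20)  # 1.0 PetaFlops/watt at ~20 watts
```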
comment by Randomized, Controlled (BossSleepy) · 2021-03-04
Are supercomputers the right target to benchmark against? My naive model is that they're heavily optimized for things like FLOPS and bandwidth, and not particularly concerned with power usage or weight. What about systems that are more concerned with power efficiency or weight?
↑ comment by Darklight · 2021-03-04
So, I did some more research, and the general view is that GPUs are more power efficient in terms of Flops/watt than CPUs. The most power efficient of those right now is the Nvidia GTX 1660 Ti, which comes to 11 TeraFlops at 120 watts, or 0.000092 PetaFlops/watt, about 6x more efficient than Fugaku. It also weighs about 0.87 kg, which works out to 0.0126 PetaFlops/kg, about 7x more efficient than Fugaku. These numbers are still within an order of magnitude, and they don't take into account overhead like cooling, the case, and the CPU/memory one would presumably need to coordinate the GPUs in a server rack.
I used the supercomputers because the numbers were easier to get from the Top500 and Green500 lists, and because their figures presumably include the various overhead costs of running the full system, already packaged into neat numbers.
↑ comment by Darklight · 2021-03-04
Even further research shows the more recent Nvidia RTX 3090 is actually slightly more efficient than the 1660 Ti: at 36 TeraFlops, 350 watts, and 2.2 kg, it works out to 0.0001 PetaFlops/watt and 0.016 PetaFlops/kg. Once again, within an order of magnitude of the supercomputers.
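A quick sketch pulling together the GPU figures quoted in these two comments (spec-sheet numbers, not measured throughput):

```python
# (name, TeraFlops, watts, kg), as quoted in the comments above.
gpus = [("GTX 1660 Ti", 11, 120, 0.87), ("RTX 3090", 36, 350, 2.2)]

fugaku_pflops_per_watt = 0.000015
fugaku_pflops_per_kg = 0.0017

for name, tflops, watts, kg in gpus:
    pflops = tflops / 1000
    watt_ratio = (pflops / watts) / fugaku_pflops_per_watt
    kg_ratio = (pflops / kg) / fugaku_pflops_per_kg
    print(f"{name}: {watt_ratio:.1f}x Fugaku per watt, {kg_ratio:.1f}x per kg")
```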
↑ comment by theme_arrow · 2021-03-04
The new Apple M1-based Mac mini appears to be able to do 2.6 TeraFlops on a power consumption of 39 W. That comes out to 0.000066 PetaFlops/watt, or ~4x the efficiency of Fugaku.