Compute and size limits on AI are the actual danger

post by Shmi (shminux) · 2024-11-23T21:29:37.433Z · LW · GW · 1 comment

Epistemic status: rather controversial and not very well researched :) Not super novel, I assume, but a cursory look did not bring up any earlier posts; please feel free to link any.

Intuition pump: a bigger brain does not necessarily imply a smarter creature. Apes appear smarter than elephants, and dolphins appear smarter than blue whales. There is definitely a correlation between brain size and intelligence, but the relationship is far from strict.

Starting point: intelligence is roughly equivalent to the degree of abstraction of one's world models (detecting Dennett's "real patterns" at increasingly higher levels). Humans are much better at abstract thought than other animals, and throughout the natural and artificial world, a creature's ability to find higher-level patterns in its environment (including itself) tracks its intelligence.

A non-novel point: abstraction is compression. Specifically, abstraction is nothing but a lossy compression of the world model, be it of the actual physical world or of the world of ideas.
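A minimal toy sketch of this point (my own illustration, with made-up numbers): a thousand noisy observations can be "abstracted" into the two parameters of a linear rule, trading away the residual detail for compactness.

```python
import numpy as np

# "World": 1000 raw observations of some quantity over time.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
observations = 3.0 * t + 1.0 + rng.normal(scale=0.5, size=t.size)  # hidden rule + noise

# "Abstraction": lossily compress 1000 numbers into a 2-parameter rule.
slope, intercept = np.polyfit(t, observations, deg=1)
reconstruction = slope * t + intercept

print(f"rule: x(t) ~ {slope:.2f}*t + {intercept:.2f}")
print(f"compression: {observations.size} values -> 2 parameters")
# The mean residual is the detail that the lossy step threw away.
print(f"mean absolute residual: {np.abs(observations - reconstruction).mean():.3f}")
```

The rule predicts the data well while discarding the noise; the discarded residual is exactly the "lossy" part of the compression.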

An obvious point: generating good abstractions is expensive. If your existing mental capacity is enough to get by, there is no reason to expend resources on generating better abstractions. And if you have room to grow your brain and add more same-level patterns, doing so is cheaper than building better abstractions within the same brain size.

A less obvious point: building good abstractions is hard. This is what theoretical research is, and it is what separates the likes of Einstein, Dawkins, and Nash from the rest of us.

An implication: size and compute restrictions, combined with the need to cope with novel situations, facilitate abstraction building.

A just-so story: human brain size is (currently) constrained by head size, which is constrained by hip size due to walking upright, which is in turn constrained by body mass due to resource availability and, well, gravity. The result: abstraction building became a good way to deal with a changing environment.

Current AI state: LLMs now get smarter by getting larger and training longer. There are always compute and size pressures, but they are soft costs rather than hard constraints. Growing in order to succeed, the elephant way rather than the human way, looks like a winning strategy at this point.

Absolute constraints spark abstraction building: the vetoed California bill SB 1047 "covers AI models with training compute over 10^26 integer or floating-point operations and a cost of over $100 million. If a covered model is fine-tuned using more than $10 million, the resulting model is also covered," according to Wikipedia. Had the bill been signed, it would have created enough pressure to do more with less, i.e., to focus on building better and better abstractions once the limits were hit.
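For a sense of scale, here is a back-of-envelope sketch using the common C ≈ 6·N·D approximation for the training compute of dense transformers (N = parameters, D = training tokens); the model sizes below are hypothetical, chosen only to bracket the threshold.

```python
# Back-of-envelope: when does training cross SB 1047's 10^26 FLOP threshold?
# Uses the standard approximation C ~ 6 * N * D for dense transformers,
# where N = parameter count and D = number of training tokens.
# The (N, D) pairs below are hypothetical, not tied to any real model.

THRESHOLD_FLOP = 1e26

def training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * n_params * n_tokens

for n_params, n_tokens in [(7e10, 1.5e13),   # ~70B params, ~15T tokens
                           (1e12, 2.0e13)]:  # ~1T params, ~20T tokens
    c = training_flop(n_params, n_tokens)
    status = "over" if c > THRESHOLD_FLOP else "under"
    print(f"N={n_params:.0e}, D={n_tokens:.0e}: C~{c:.1e} FLOP ({status} the threshold)")
```

Crude public estimates put today's frontier models within roughly an order of magnitude below this line, so the threshold would have started to bind soon.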

A speculation: much better abstractions would smooth out the "jagged frontier" and reduce or eliminate the current models' weak spots, that is, enable the jump from "rule interpolation" to "rule invention" (in François Chollet's terms), the missing capability that he and other skeptics point to as the weakness of current models.

The danger: once the jagged frontier is smooth enough to enable "rule invention", we enter the "foom"-like zone Eliezer has been cautioning about.

Conclusion: currently there do not appear to be skull-and-hip-size restrictions on AI, so even with the next few frontier models we are probably not at the point where the emerging level of abstraction matches that of the (smartest) humans. But this may not last.

1 comment

comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-11-23T23:16:39.312Z · LW(p) · GW(p)

Plus, we're already well into the misuse danger zone... And heading deeper fast.