Intelligence Is Jagged

post by Adam Train (aetrain) · 2025-02-19


In ethics, there is an argument called name the trait. It is deployed in many contexts, such as veganism ("name the trait that justifies our poor treatment of animals") and theology ("name the trait that grants humanity dominion over the Earth"), among others. The idea is to challenge your interlocutor to specify a trait, and then to show that the trait is non-unique: "Well, cats have that trait too!"

It seems to me that we are in a similar situation when discussing AI capabilities, but with a different result: as of now, you absolutely can point to traits that frontier models lack to differentiate them from humans. There is no grey area here, not yet. In some domains, like coding, AI models' capabilities have clearly progressed to the point where even skeptics find it hard to deny that they could feasibly replace humans in the future. In other domains, there has been no such progress, at least none visible; I imagine it will be a while yet before I read an excellent AI-generated novel, for example, and the Overton window among my less situationally aware friends still doesn't include the claim that AI literature will ever happen. At the proverbial dinner party, I am still proverbially laughed out of the proverbial room.

Of course, I would answer that you can just as easily reverse the question and point to things that LLMs are flatly better at than we are. I cannot, much as I try, think in JSON quickly enough to serve an API, for example. Tasks like that feel distinctly unrealistic for a human to do, of course, but that's my point: there are tasks where we obviously excel and tasks where the LLMs obviously excel. At present, we are complementary systems, much as humans and computers have always been.

Now, I don't (yet) believe the set of things LLMs are good at is nearly as large as the set that humans are good at, but at this stage of development there's already an interesting lack of overlap in those sets. If this weren't so, then LLMs would remain as useless for practical work as they were before GPT-3.

From Introduction to AI Safety, Ethics, and Society by Dan Hendrycks:

While some would argue that an intelligence based on silicon or other materials will be unable to match one built on biological cells, we see no compelling reason to believe that particular materials are required. Such statements seem uncomfortably similar to the claims of vitalists, who argued that living beings are fundamentally different from non-living entities due to containing some non-physical components or having other special properties. Another objection is that copying a biological brain in silicon will be a huge scientific challenge. However, there is no need for researchers looking to create HLAI to create an exact copy or "whole brain emulation". Airplanes are able to fly but do not flap their wings like birds - nonetheless they function because their creators have understood some key underlying principles.

Indeed, we are not building human brains. Whatever vectors are in my head, I do not anticipate that an AI will ever exist that shares my neural architecture exactly. We sit at some point along one optimization gradient, and the AIs sit along another; or, at least, we are climbing the same landscape toward general intelligence from very different directions.[1]
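
To make the climbing metaphor concrete, here is a minimal sketch of two optimizers ascending the same objective from very different starting points. The landscape, step size, and starting points are all invented for illustration; nothing here models real evolution or real training runs.

```python
# A toy version of "same landscape, different directions": two climbers
# start far apart but ascend the same objective.

def objective(x, y):
    """A single smooth peak at (0, 0), standing in for general intelligence."""
    return -(x ** 2 + y ** 2)

def gradient(x, y):
    """Analytic gradient of the objective above."""
    return (-2 * x, -2 * y)

for name, (x, y) in [("evolution-like climber", (-4.0, 1.0)),
                     ("SGD-like climber", (3.0, -2.0))]:
    for _ in range(50):
        gx, gy = gradient(x, y)
        x, y = x + 0.1 * gx, y + 0.1 * gy  # gradient ascent step
    print(f"{name}: ends near ({x:.4f}, {y:.4f})")
```

Both climbers end up near the same peak by entirely different paths, which is all the metaphor claims.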

In the general population, a lot of self-anchoring seems to be going on. The idea of a system that can be obviously terrible at things we humans take for granted, and yet still be intelligent in some meaningful sense, is foreign to most of us. But it seems that we will live out the rest of our time on this planet with such systems and their descendants. My fun term for this is the jaggedness of intelligence, which is just a visual way of imagining a fact we all already know: that intelligence is far more complex than a position along an axis. It is not as though a human has a certain IQ and an LLM has another; you would need many more dimensions of comparison than that. If you were to visualize the peaks and valleys of our cognitive abilities, we humans are certainly jagged too. Our topography just differs immensely from that of the AI systems we're building.
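
As a toy illustration of why a single axis fails, consider two hypothetical capability profiles. Every dimension and score below is invented for the sketch, not measured from any real human or model:

```python
# Two "jagged" capability profiles, scored 0-100 on invented dimensions.

capabilities = ["arithmetic", "novel-writing", "JSON generation",
                "motor control", "code completion"]

human = [60, 85, 10, 95, 50]
llm   = [75, 40, 99,  0, 80]

def dominates(a, b):
    """True if profile a is at least as good as b on every dimension."""
    return all(x >= y for x, y in zip(a, b))

# Neither profile dominates the other: each has peaks where the other
# has valleys, so no single "IQ"-style scalar can rank them faithfully.
print(dominates(human, llm))  # False
print(dominates(llm, human))  # False

# A scalar summary (e.g., the mean) scores the two as near-equals while
# hiding everything interesting about where each one peaks.
print(sum(human) / len(human), sum(llm) / len(llm))
```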

I argue that, until a certain point of AI advancement (e.g., recursive self-improvement), things are almost certain to remain this way: with conspicuously uneven progress. We are crafting a new type of cognitive structure, and we are seeing what that structure can do at various levels of its development as it builds its own psychology. This is very different from watching a child grow up, but perhaps we are watching something similar, at least enough to draw the analogy, when we see emergent capabilities emerge emergently along the scaling curve.
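
As a toy picture of what uneven emergence along a scaling curve might look like, here is a sketch in which different capabilities "switch on" at different scales. Every capability, onset point, and curve shape is invented for illustration and describes no real model family:

```python
import math

# Hypothetical onset scales for a few capabilities.
emergence_point = {
    "arithmetic": 3,
    "code completion": 5,
    "long-form fiction": 9,  # hypothetical: beyond current systems
}

def skill(scale, onset, sharpness=2.0):
    """Logistic curve: near 0 below the onset scale, near 1 above it."""
    return 1 / (1 + math.exp(-sharpness * (scale - onset)))

# Each capability climbs its own curve, so at any given scale the system
# looks superb at some tasks and hopeless at others.
for scale in range(1, 11):
    row = "  ".join(
        f"{name}={skill(scale, onset):.2f}"
        for name, onset in emergence_point.items()
    )
    print(f"scale={scale:2d}  {row}")
```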

As this continues, we have no reason to expect that AI systems' capabilities will develop in a way that maps cleanly onto how human capabilities develop, or that they will develop in the same order, or result in the same set. All the same, the natural response from most people is to deny that we could possibly take seriously the intelligence of what we're creating ("you're telling me you think ChatGPT is smart? It can't even do X!"), and I fear that unless and until people see uniformly human-level AI,[2] they will keep saying things like that even as the possible values of X shrink precipitously in number and perhaps reach zero.

But perhaps finding a more visceral way to guide people out of thinking that intelligence has a purely human shape, or that it would mean anything for an LLM to take an IQ test, would help them understand what's happening, where capabilities are probably headed, and what risks are involved.

  1. ^

    It just so happens that evolution and natural selection work a lot more slowly and less directionally than deep learning training runs do.

  2. ^

    By which I mean "an AI that is at least human-level at all tasks, such that there are no longer any gaps in capability relative to us."
