This post feels way, way too verbose, and for no good reason. Like it could be crunched down to half the size without losing any substance.
Too much of the text is spent meandering, and it feels like every point it's trying to make gets made at least four times over in different places, in only slightly different ways. It's at the point where it genuinely hurts readability.
It's a shame, because the topic of AI-neurobiology overlap is so intriguing. Intuitively, modern AI seems extremely biosimilar - too many properties of large neural networks map extremely poorly to what's expected from traditional programming, and far better to what I know of the human brain. But "intuitive" is a very poor substitute for "correct", so I'd love to read something that explores the topic - written by someone who actually understands neurobiology rather than just having a general vibe of it. But it would need to be, you know. Readable.
I agree that "general intelligence" is a concept that already applies to modern LLMs, which are often quite capable across different domains. I definitely agree that LLMs are, in certain areas, already capable of matching or outperforming a (non-expert) human.
There is some value in talking about just that alone, I think. There seems to be a bias in play that prevents many from recognizing AI as capable. A lot of people are all too eager to dismiss AI capabilities - whether out of some belief in human exceptionalism, some degree of insecurity, some manner of "uncanny valley" response, something like "it seems too sci-fi to be true", or something else entirely.
But I don't agree that the systems we have are "human level", and I'm against using "AGI", which implies a human or superhuman level of intelligence, to refer to systems like GPT-4.
Those AIs are very capable. But there are a few glaring, massive deficiencies that prevent them from being broadly "human level". Off the top of my head, they are deficient in:
- Long-term memory
- Learning capabilities
- Goal-oriented behavior
I like the term "subhuman AGI" for systems like GPT-4 though. It's a concise way of removing the implication of "human-level" from "AGI", and refocusing on the "general intelligence" part of the term.