What is the nature of humans' general intelligence and its implications for AGI?

post by Will_Pearson · 2024-03-26T15:20:39.860Z · LW · GW · No comments

This is a question post.


Humans seem to have some form of generality. We seem capable of solving a large range of problems, and people who are capable in one area seem more capable in general. However, the nature of this generality matters. There are at least two options that I've thought of:
1) A general intelligence is intrinsically better at solving problems.
2) A general intelligence is better at solving problems in general because it is capable of absorbing social information about problems, and society has accumulated information about solving lots of different problems.

Option 2 is the one I lean towards, as it fits the evidence: humans spent a long time in the Stone Age with the same general cognitive architecture, but can now solve a much larger set of problems because of education and general access to information.

The difference is important because it has implications for solving novel problems (ones not solved by society today). If the form of generality we can build is all about absorbing social information, there is no guarantee that it can go beyond existing social knowledge in a principled way. Conceptual leaps to new understanding might require immense amounts of luck and so be slow to accumulate; ASIs might be the equivalent of us stuck in the Stone Age, at least to start with.

Are people thinking about these kinds of issues when considering timelines?

Answers

answer by johnswentworth · 2024-03-26T16:57:05.597Z · LW(p) · GW(p)

This is definitely a split which I think underlies a lot of differing intuitions about AGI and timelines. That said, the versions of each which are compatible with evidence/constraints generally have similar implications for at least the basics of AI risk (though they differ in predictions about what AI looks like "later on", once it's already far past eclipsing the capabilities of the human species).

Key relevant evidence/constraints, under my usual framing:

  • We live in a very high-dimensional environment. When doing science [LW · GW]/optimization [LW · GW] in such an environment, brute-force search is exponentially intractable, so having e.g. ten billion humans running the same basic brute-force algorithm will not be qualitatively better than one human running a brute-force algorithm (see the toy calculation after this list). The fact that less-than-exponentially-large numbers of humans are able to perform as well as we do implies that there's some real "general intelligence" going on in there somewhere.
    • That said, it's still possible-in-principle for whatever general intelligence we have to be importantly distributed across humans. What the dimensionality argument rules out is a model in which humans' capabilities are just about brute-force trying lots of stuff, and then memetic spread of whatever works. The "trying stuff" step has to be doing "most of the work", in some sense, of finding good models/techniques/etc; but whatever process is doing that work could itself be load-bearingly spread across humans.
    • Also, memetic spread could still be a bottleneck in practice, even if it's not "doing most of the work" in an algorithmic sense.
  • A lower bound for what AI can do is "run lots of human-equivalent minds, and cheaply copy them". Even under a model where memetic spread is the main bottlenecking step for humans, AI will still be ridiculously better at that. You know that problem humans have where we spend tons of effort accumulating "tacit knowledge" which is hard to convey to the next generation? For AI, cheap copy means that problem is just completely gone.
  • Humans' own historical progress/experience puts an upper bound on how hard it is to solve novel problems (not solved by society today). Humans have done a rather ridiculous amount of that over the past 250 years. That, in turn, lower-bounds what AIs will be capable of.
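To make the dimensionality argument concrete, here is a toy calculation (my own illustrative model, with an assumed binary search space; nothing here is a measurement): brute-force search over configuration strings of length d has a 2^d-sized search space, and N independent searchers get at best a linear speedup.

```python
import math

# Toy model (illustrative assumption): brute-force search over binary
# configuration strings of length d. The search space has 2**d points,
# and N independent searchers get at best a linear (factor-of-N) speedup.

def log2_expected_evals(d: int, n_searchers: int) -> float:
    """log2 of the expected number of evaluations each searcher performs."""
    return d - math.log2(n_searchers)

for d in (50, 100, 300):
    solo = log2_expected_evals(d, 1)
    crowd = log2_expected_evals(d, 10_000_000_000)  # ten billion humans
    print(f"d={d:>3}: solo ~2^{solo:.1f} evals, crowd ~2^{crowd:.1f} evals each")

# Ten billion searchers subtract only log2(1e10) ~= 33.2 from the exponent:
# a linear speedup is negligible against an exponentially large search space.
```

A population-sized speedup buys only a constant number of bits off the exponent; closing the gap requires an algorithm that is qualitatively better than brute force.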
comment by Will_Pearson · 2024-03-26T19:35:50.507Z · LW(p) · GW(p)

I'd like to see more discussion of this. I read some of the FOOM debate, but I'm assuming there has been more discussion of this important issue since?

I suppose the key question is about recursive self-improvement. We can grant hardware improvement (improved hardware allows the design of more complex and better hardware) because we are on that treadmill already. But how likely is algorithmic self-improvement? For an intelligence to be able to improve itself algorithmically, the following seem to need to hold:

  1. The system needs to understand itself.
  2. There has to be some capacity that can be improved without detriment to some other capacity (otherwise you are trading one capacity off against another, not necessarily improving overall).

If it is the memeplex that gives us our generality (as is suggested by our flowering of discovery over the past 250 years, compared to the previous 300,000 years of Homo sapiens), it might not be understandable: it would live in the weights, or their equivalents in whatever the AI uses, and no human would understand it either.

Fiddling about with the weights without that knowledge would likely lead to trade-offs, so the second condition might not hold; the sketch below illustrates the intuition.
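As a toy illustration (a hypothetical least-squares model of my own, not anything specific to how a real AI's weights are organized): once a model sits at an optimum, blind random edits to its weights leave the loss unchanged or worse on average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: fit least-squares weights for y = X @ w_true + noise,
# then "fiddle about with the weights" by adding random perturbations.
n, d = 1000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w_fit, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(w: np.ndarray) -> float:
    return float(np.mean((X @ w - y) ** 2))

print(f"trained loss: {mse(w_fit):.4f}")
for sigma in (0.01, 0.1, 1.0):
    losses = [mse(w_fit + sigma * rng.normal(size=d)) for _ in range(100)]
    print(f"perturbed (sigma={sigma}): mean loss {np.mean(losses):.4f}")

# At a (local) optimum, any perturbation can only keep the loss equal or
# increase it, so uninformed edits trade existing performance away.
```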

I'm not saying AI won't change history, but we need an accurate view of how it will change things.

Replies from: johnswentworth
comment by johnswentworth · 2024-03-26T20:45:37.721Z · LW(p) · GW(p)

On the matter of software improvements potentially available during recursive self-improvement, we can look at the current pace of algorithmic improvement [LW · GW], which has probably been faster than scaling for some time now. So that's another lower bound on what AI will be capable of, assuming the extrapolation holds up.
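As a back-of-the-envelope illustration of how such gains compound (the doubling times below are assumed purely for illustration, not measured figures from the linked post):

```python
# Hypothetical doubling times, chosen only to illustrate the compounding:
ALGO_DOUBLING_MONTHS = 16  # assumed algorithmic-efficiency doubling time
HW_DOUBLING_MONTHS = 24    # assumed hardware/compute doubling time

for years in (2, 5, 10):
    months = 12 * years
    algo = 2 ** (months / ALGO_DOUBLING_MONTHS)
    hw = 2 ** (months / HW_DOUBLING_MONTHS)
    print(f"{years:>2}y: algorithms x{algo:,.0f}, hardware x{hw:,.0f}, "
          f"combined x{algo * hw:,.0f}")
```

Under any assumptions where the algorithmic doubling time is shorter, the software term eventually dominates the hardware term.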

Replies from: Will_Pearson
comment by Will_Pearson · 2024-03-26T20:57:42.009Z · LW(p) · GW(p)

I'm wary of that one, because deep neural networks aren't a known "general" intelligence architecture, so we can expect AIs to make better learning algorithms for deep neural networks, but not necessarily to improve themselves.
