What is the probability that a superintelligent, sentient AGI is actually infeasible?

post by Nathan1123 · 2022-08-14T22:41:43.268Z · LW · GW · No comments

This is a question post.

Hello,

I am not at all averse to discussing contingencies for things that are either uncertain or unlikely, but I'm curious what the general consensus is on how likely a superintelligent AGI scenario actually is.

To be clear, I am certainly aware that AI has advanced by leaps and bounds within my own lifetime alone, with public feats such as AlphaGo far exceeding the expectations of its creators.

But a trend that holds over a certain period of time won't necessarily continue forever, or develop in the way one intuitively expects. For example, population models tend to grow quickly at first, but flatten out over time. It is possible that computer technology (or technology in general) has some natural, undiscovered limit, such that the graph will flatten towards a logistic asymptote or similar curve.
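To make the flattening concrete, here is a minimal sketch comparing unbounded exponential growth with logistic growth; the growth rate and carrying capacity are arbitrary values chosen purely for illustration, not estimates about technology:

```python
import math

def exponential(t, x0=1.0, r=0.5):
    """Unbounded exponential growth: x(t) = x0 * e^(r*t)."""
    return x0 * math.exp(r * t)

def logistic(t, K=100.0, x0=1.0, r=0.5):
    """Logistic growth: nearly exponential at first, then it
    flattens toward the carrying capacity K (the asymptote)."""
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

for t in range(0, 21, 5):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):6.1f}")
```

Early on the two curves are nearly indistinguishable, which is exactly the worry raised above: an impressive recent trend does not by itself reveal which curve we are on.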

Or, alternatively, civilization may discover that there is a certain trade-off in computer science, analogous to Heisenberg Uncertainty, such that either of the two scenarios are possible, but not in combination:

  1. Superintelligent, non-sentient AI or Tool AI [? · GW]: Machines maximized for computational power and autonomous problem-solving, but with none of the self-awareness or autonomy associated with consciousness.
  2. Unintelligent AGI or Infant AI [LW · GW]: An artificial consciousness with full autonomy and self-awareness, but relatively poor computational power, such that it is no smarter than a real human.

In this scenario, a superintelligent AGI isn't possible because of the trade-off: it's either a superintelligent tool or an unintelligent consciousness. How probable do you think such a scenario might be?
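Purely to make the hypothesized trade-off concrete (the constant, the units, and the function below are invented for illustration; nothing here claims such a constraint actually exists), one could picture it as a cap on the product of the two capacities:

```python
# Hypothetical trade-off, loosely analogous to the uncertainty principle:
# computational_power * consciousness <= K for some fixed constant K.
# All numbers and units here are made up purely for illustration.

K = 1.0  # hypothetical ceiling on the product, with 1.0 ~ "human scale"

def max_computational_power(consciousness: float) -> float:
    """Largest computational power the imagined trade-off would permit
    for a given degree of 'consciousness' in (0, 1]."""
    return K / consciousness

# Scenario 1: superintelligent tool AI -- negligible consciousness,
# so enormous computational power is still allowed.
print(max_computational_power(0.001))  # 1000.0

# Scenario 2: "infant AI" -- full consciousness, so computational power
# is capped at roughly human scale in these made-up units.
print(max_computational_power(1.0))    # 1.0
```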

Answers

answer by JBlack · 2022-08-15T05:13:33.862Z · LW(p) · GW(p)

I don't associate autonomy with consciousness at all, so I'm unsure how to deal with the option "no self-awareness or autonomy associated with consciousness". But let's just take it as an unexplained counterfactual and go from there.

I don't see anything in that scenario that prevents a human-level AGI from using a collection of superintelligent tool AIs with a better interface to achieve feats of intelligence that humans cannot, even with the same tool AIs. I'm not sure how much this differs from simply calling the combined system "superintelligent AGI".
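A toy sketch of the combined system described here, a human-level controller delegating to narrow superintelligent tools; every class, tool, and domain name below is hypothetical and invented only to make the structure concrete:

```python
from typing import Callable, Dict

# Stand-ins for narrow "tool AIs": each superhuman in one domain only.
ToolAI = Callable[[str], str]

class HumanLevelController:
    """A merely human-level general agent whose only 'general' skills are
    choosing which tool to call and integrating the results. The composite
    system can reach answers the controller could never reach unaided."""

    def __init__(self, tools: Dict[str, ToolAI]):
        self.tools = tools

    def solve(self, domain: str, problem: str) -> str:
        tool = self.tools.get(domain)
        if tool is None:
            return f"(controller works on '{problem}' unaided, at human level)"
        return tool(problem)

# Toy stand-ins for the superintelligent tools.
agent = HumanLevelController({
    "protein_folding": lambda p: f"optimal fold for {p}",
    "theorem_proving": lambda p: f"machine-checked proof of {p}",
})
print(agent.solve("theorem_proving", "the conjecture"))
```

Whether one calls the composite system "superintelligent AGI" is, as the answer notes, largely a matter of labelling.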

But even that much restriction seems extremely unlikely to me. There does not seem to be any physical or information-theoretic reason why general intelligence cannot go faster, broader, and deeper than some arbitrary biological life-form that merely happened to be the first at achieving some useful level of flexibility in world modelling.

What fundamental law of the universe would set a limit right there, out of all possible capacities across every possible form of computing substrate? Even a priori it seems unlikely on the order of 1%, by considering a logarithmic scale of where such a bar could be set. Given what we can deduce about possible other computing substrates and the limited size of human brains compared with the universe, it seems much less likely still.
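A rough version of that a priori estimate; the 100-orders-of-magnitude span and the one-order-of-magnitude width of "roughly human level" are illustrative assumptions, not figures taken from the answer:

```python
# Toy prior: if a ceiling on general intelligence exists at all, assume it
# falls uniformly at random on a log scale of possible capacities.
# The specific numbers below are illustrative assumptions.

orders_of_magnitude_available = 100  # plausible span across all computing substrates
orders_near_human_level = 1          # width of "roughly human level" on that scale

p_ceiling_at_human_level = orders_near_human_level / orders_of_magnitude_available
print(f"{p_ceiling_at_human_level:.0%}")  # -> 1%
```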

The only thing that looks like it could possibly interfere is something like an unknown nonphysical principle of consciousness that somehow, despite all evidence to the contrary, actually turns out to be required for general intelligence and has fundamental reasons why it can't get any faster or more capable no matter what physical system it might be associated with. I give that very poor odds indeed, and much less still that it is due to "a certain trade-off in computer science".

comment by Nathan1123 · 2022-08-15T15:19:29.928Z · LW(p) · GW(p)

I don't see anything in that scenario that prevents a human-level AGI from using a collection of superintelligent tool AIs with a better interface to achieve feats of intelligence that humans cannot, even with the same tool AIs.

At that point, it wouldn't be functionally different from a series of tool AIs being controlled directly by a human operator. If that poses a risk, then mitigations could be extrapolated to the combined-system scenario.

What fundamental law of the universe would set a limit right there, out of all possible capacities across every possible form of computing substrate?

I'm not trying to imply that there is something about the human mind specifically that forces a limit on computing power; I just used it as a benchmark because it is the only frame of reference we have. If the AI ends up dumber or slightly smarter than a human, but on the same order of magnitude, that doesn't really matter.

The concept of a trade-off simply says that the more complexity a system devotes to imitating consciousness, the more computational ability it sacrifices, tending towards some lower bound of computational substrate that one might not count as superintelligent. I'm not saying I have any physical or information-theoretic law in mind for that currently, though.

answer by Konstantinos Sgantzos · 2022-08-25T10:36:33.120Z · LW(p) · GW(p)

It is an excellent question which we have tried to answer in our latest manuscript here: https://www.mdpi.com/1911-8074/15/8/360 

We hope you will find it interesting and we welcome any feedback.

K. Sgantzos

answer by deepthoughtlife · 2022-08-15T18:30:01.006Z · LW(p) · GW(p)

It's highly probable that superintelligent AI is not possible at the same level of tech at which general AI first becomes possible (this could even max out below human level, if humans are above the minimum intelligence needed for generality, which seems obviously true). Said general AI would then have to improve in the same old manner as computer chip fabrication processes currently do, with a lot of time, effort, and research. Under this scenario, your dichotomy might eventually hold for a while as tech improves.

answer by Charlie Steiner · 2022-08-14T23:23:25.325Z · LW(p) · GW(p)

About 25 nines. So 0.00000000000000000000001%

answer by grantcastillou@gmail.com · 2022-08-15T17:29:19.045Z · LW(p) · GW(p)

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, perhaps by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
