Why an Intelligence Explosion might be a Low-Priority Global Risk
post by XiXiDu · 2011-11-14T11:40:38.917Z · LW · GW · Legacy · 97 comments
(The following is a summary of some of my previous submissions that I originally created for my personal blog.)
As we know,
There are known knowns.
There are things
We know we know.
We also know
There are known unknowns.
That is to say
We know there are some things
We do not know.
But there are also unknown unknowns,
The ones we don’t know
We don’t know.
— Donald Rumsfeld, Feb. 12, 2002, Department of Defense news briefing
Intelligence, a cornucopia?
It seems to me that those who believe in the possibility of catastrophic risks from artificial intelligence act on the unquestioned assumption that intelligence is a kind of black box, a cornucopia that can sprout an abundance of novelty. But this implicitly assumes that increasing intelligence also decreases the distance between discoveries.
Intelligence is no solution in itself; it is merely an effective searchlight for unknown unknowns, and who says that the brightness of the light increases in proportion to the distance between unknown unknowns? To enable an intelligence explosion, the light would have to reach out much farther with each increase in intelligence than the distance between unknown unknowns grows. I just don't see that as a reasonable assumption.
Intelligence amplification, is it worth it?
It seems that if you increase intelligence you also increase the computational cost of its further improvement and the distance to the discovery of some unknown unknown that could enable another quantum leap. It seems that you need to apply a lot more energy to get a bit more complexity.
If any increase in intelligence is vastly outweighed by its computational cost and the time needed to discover it, then it might not be instrumental for a perfectly rational agent (such as an artificial general intelligence), as imagined by game theorists, to increase its intelligence. It might instead use its existing intelligence to pursue its terminal goals directly, or invest its given resources to acquire other means of self-improvement, e.g. more efficient sensors.
What evidence do we have that the payoff of intelligent, goal-oriented experimentation yields enormous advantages (enough to enable an intelligence explosion) over evolutionary discovery relative to its cost?
We simply don’t know if intelligence is instrumental or quickly hits diminishing returns.
Can intelligence be effectively applied to itself at all? How do we know that any given level of intelligence is capable of handling its own complexity efficiently? Many humans are not even capable of handling the complexity of the brain of a worm.
Humans and the importance of discovery
There is a significant difference between intelligence and evolution if you apply intelligence to the improvement of evolutionary designs:
- Intelligence is goal-oriented.
- Intelligence can think ahead.
- Intelligence can jump fitness gaps.
- Intelligence can engage in direct experimentation.
- Intelligence can observe and incorporate solutions of other optimizing agents.
But when it comes to unknown unknowns, what difference is there between intelligence and evolution? The critical similarity is that both rely on dumb luck when it comes to genuine novelty. And where, if not in the dramatic improvement of intelligence itself, would the discovery of novel unknown unknowns be required?
We have no idea about the nature of discovery and its importance when it comes to what is necessary to reach a level of intelligence above our own, by ourselves. How much of what we know was actually the result of people thinking quantitatively and attending to scope, probability, and marginal impacts? How much of what we know today is the result of dumb luck versus goal-oriented, intelligent problem solving?
Our “irrationality” and the patchwork-architecture of the human brain might constitute an actual feature. The noisiness and patchwork architecture of the human brain might play a significant role in the discovery of unknown unknowns because it allows us to become distracted, to leave the path of evidence based exploration.
A lot of discoveries were made by people who were not explicitly trying to maximize expected utility. A lot of progress is due to luck, in the form of the discovery of unknown unknowns.
A basic argument in support of risks from superhuman intelligence is that we don't know what it could possibly come up with. That is also why it is called a “Singularity”. But why does nobody ask how a superhuman intelligence knows what it could possibly come up with?
It is not intelligence in and of itself that allows humans to accomplish great feats. Even people like Einstein, geniuses who were apparently able to come up with great insights on their own, were simply lucky to be born into the right circumstances: the time was ripe for great discoveries, thanks to previous discoveries of unknown unknowns.
Evolution versus Intelligence
It is argued that the mind-design space must be large if evolution could stumble upon general intelligence, and that there are low-hanging fruits that are much more efficient at general intelligence than humans are; evolution simply went with the first design that came along. It is further argued that evolution is not limitlessly creative, since each step must increase the fitness of its host, and that there are therefore artificial mind designs that can do what no product of natural selection could accomplish.
I agree with the above, yet given all of the apparent disadvantages of the blind idiot God, evolution was able to come up with altruism, something that works two levels above the individual and one level above society. So far we haven’t been able to show such ingenuity by incorporating successes that are not evident from an individual or even societal position.
The example of altruism provides evidence that intelligence isn’t many levels above evolution. Therefore the crucial question is, how great is the performance advantage? Is it large enough to justify the conclusion that the probability of an intelligence explosion is easily larger than 1%? I don’t think so. To answer this definitively we would have to fathom the significance of the discovery (“random mutations”) of unknown unknowns in the dramatic amplification of intelligence versus the invention (goal-oriented “research and development”) of an improvement within known conceptual bounds.
Another example is flight. Artificial flight is not even close to the energy efficiency and maneuverability of birds or insects. We didn't go straight from no artificial flight to flight that is generally superior to the natural flight produced by biological evolution.
Take for example a dragonfly. Even if we were handed the complete design for an artificial dragonfly, minus the design of its flight, we wouldn't be able to build a dragonfly that could take over the world of dragonflies, all else equal, by means of superior flight characteristics.
It is true that a Harpy Eagle can lift more than three-quarters of its body weight while the Boeing 747 Large Cargo Freighter has a maximum take-off weight of almost double its operating empty weight (I suspect that insects can do better). My whole point is that we have never achieved artificial flight that is strongly above the level of natural flight. An eagle can, after all, catch its cargo under various circumstances, on the slope of a mountain or from beneath the surface of the sea, thanks to its superior maneuverability.
Humans are biased and irrational
It is obviously true that our expert systems are better than we are at their narrow range of expertise. But that expert systems are better at certain tasks does not imply that you can effectively and efficiently combine them into a coherent agency.
The noisiness of the human brain might be one of the important features that allows it to exhibit general intelligence. Yet the same noise might be the reason that each task a human can accomplish is not executed with maximal efficiency. An expert system that features a single stand-alone ability is able to reach the unique equilibrium for that ability, whereas systems that have not fully relaxed to equilibrium retain the characteristics required to exhibit general intelligence. In this sense a decrease in efficiency is a side effect of general intelligence. If you externalize a certain ability into a coherent framework of agency, you decrease its efficiency dramatically. That is the difference between a tool and the ability of the agent that uses the tool.
In the above sense, our tendency to be biased and act irrationally might partly be a trade off between plasticity, efficiency and the necessity of goal-stability.
Embodied cognition and the environment
Another problem is that general intelligence is largely a result of the interaction between an agent and its environment. It might in principle be possible to arrive at various capabilities by means of induction, but that is only a theoretical possibility given unlimited computational resources. To achieve real-world efficiency you need to rely on slow environmental feedback and make decisions under uncertainty.
AIXI is often quoted as a proof of concept that it is possible for a simple algorithm to improve itself to such an extent that it could in principle reach superhuman intelligence. AIXI proves that there is a general theory of intelligence. But there is a problem: AIXI is as far from real-world human-level general intelligence as an abstract notion of a Turing machine with an infinite tape is from a supercomputer with the computational capacity of the human brain. An abstract notion of intelligence doesn't get you anywhere in terms of real-world general intelligence, just as you won't be able to upload yourself to a non-biological substrate merely because you have shown that in some abstract sense you can simulate every physical process.
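For reference, a rough sketch of what that "general theory of intelligence" looks like: Hutter's AIXI chooses actions by an expectimax over all programs q on a universal Turing machine U that are consistent with the interaction history, weighted by their length (this is the standard textbook form; details such as the horizon m are omitted):

$$a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left[ r_k + \cdots + r_m \right] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

The sum over all programs is exactly what makes AIXI incomputable, which is the gap between the abstract notion and real-world general intelligence pointed at above.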
Just imagine you emulated a grown-up human mind and it wanted to become a pick-up artist. How would it do that with just an Internet connection? It would need some sort of avatar, at least, and then wait for the environment to provide a lot of feedback.
Therefore, even if we're talking about the emulation of a grown-up mind, it will be really hard for it to acquire some capabilities. How, then, is the emulation of a human toddler going to acquire those skills? Even worse, how is some sort of abstract AGI going to do it when it misses all of the hard-coded capabilities of a human toddler?
Can we even attempt to imagine what is wrong with a boxed emulation of a human toddler that makes it unable to become a master of social engineering in a very short time?
Can we imagine what is missing that would enable one of the existing expert systems to quickly evolve vastly superhuman capabilities in its narrow area of expertise? Why haven’t we seen a learning algorithm teaching itself chess intelligence starting with nothing but the rules?
In a sense an intelligent agent is similar to a stone rolling down a hill: both are moving towards a sort of equilibrium. The difference is that intelligence follows more complex trajectories, as its ability to read and respond to environmental cues is vastly greater than that of a stone. Yet, intelligent or not, the environment in which an agent is embedded plays a crucial role. There exists a fundamental dependency on unintelligent processes. Our environment is structured in such a way that we use information within it as an extension of our minds. The environment enables us to learn and improve our predictions by providing a testbed and a constant stream of data.
Necessary resources for an intelligence explosion
If an artificial general intelligence is unable to seize the resources necessary to undergo explosive recursive self-improvement, then the ability and cognitive flexibility of superhuman intelligence in and of itself would have to be sufficient for it to self-modify its way up to massive superhuman intelligence within a very short time.
Without advanced real-world nanotechnology it will be considerably more difficult for an AGI to undergo quick self-improvement. It will have to make use of existing infrastructure, e.g. buy stock in chip manufacturers and get them to create more or better CPUs. It will have to rely on puny humans for a lot of tasks. It won't be able to create new computational substrate without the whole economy of the world supporting it. It won't be able to create an army of robot drones overnight without it either.
To do so it would have to make use of considerable amounts of social engineering without its creators noticing. But, more importantly, it will have to make use of its existing intelligence to do all of that. The AGI would have to acquire new resources slowly, as it couldn't just self-improve to come up with faster and more efficient solutions. In other words, self-improvement would demand resources. The AGI could not profit from its ability to self-improve regarding the necessary acquisition of resources to be able to self-improve in the first place.
Therefore the absence of advanced nanotechnology constitutes an immense blow to the possibility of explosive recursive self-improvement and risks from AI in general.
One might argue that an AGI will solve nanotechnology on its own and find some way to trick humans into manufacturing a molecular assembler and grant it access to it. But this might be very difficult.
There is a strong interdependence of resources and manufacturers. The AGI won't be able to simply trick some humans into building a high-end factory to create computational substrate, let alone a molecular assembler. People will ask questions and soon get suspicious. Remember, it won't be able to coordinate a world-conspiracy: it hasn't been able to self-improve to that point yet, because it is still trying to acquire enough resources, which it has to do the hard way without nanotech.
Anyhow, you’d probably need a brain the size of the moon to effectively run and coordinate a whole world of irrational humans by intercepting their communications and altering them on the fly without anyone freaking out.
People associated with the SIAI would at this point claim that if the AI can’t make use of nanotechnology it might make use of something we haven’t even thought about. But what, magic?
Artificial general intelligence, a single break-through?
Another point to consider when talking about risks from AI is how quickly the invention of artificial general intelligence will take place. What evidence do we have that there is some principle that, once discovered, allows us to grow superhuman intelligence overnight?
If the development of AGI takes place slowly, as a gradual and controllable development, we might be able to learn from small-scale mistakes while having to face other risks in the meantime. This might for example be the case if intelligence cannot be captured by a discrete algorithm, or is modular, and therefore never allows us to reach a point where we can suddenly build the smartest thing ever, which then just extends itself indefinitely.
To me it doesn't look like we will come up with artificial general intelligence quickly, but rather that we will have to painstakingly optimize our expert systems step by step over long periods of time.
Paperclip maximizers
It is claimed that an artificial general intelligence might wipe us out inadvertently while undergoing explosive recursive self-improvement to more effectively pursue its terminal goals. I think it is unlikely that most AI designs will fail to hold, that is, to stop at some point.
I agree with the argument that any AGI that isn't made to care about humans won't care about humans. But I also think that the same argument applies to spatio-temporal scope boundaries and resource limits. Even if the AGI is not told to hold (e.g. when told to compute as many digits of Pi as possible), I consider it a far-fetched assumption that any AGI intrinsically cares to take over the universe as fast as possible to compute as many digits of Pi as possible. Sure, if all of that is presupposed then it will happen, but I don't see that most AGI designs are like that. Most designs that have the potential for superhuman intelligence, but which are given simple goals, will in my opinion just bob up and down as slowly as possible.
Complex goals need complex optimization parameters (the design specifications of the subject of the optimization process against which it will measure its success of self-improvement).
Even the creation of paperclips is a much more complex goal than telling an AI to compute as many digits of Pi as possible.
For an AGI that was designed to make paperclips to pose an existential risk, its creators would have to be capable enough to enable it to take over the universe on its own, yet forget, or fail, to define time, space and energy bounds as part of its optimization parameters. Therefore, given the large number of restrictions that are inevitably part of any advanced general intelligence, the nonhazardous subset of all possible outcomes might be much larger than the subset where the AGI works perfectly yet fails to hold before it could wreak havoc.
Fermi paradox
The Fermi paradox provides the only data we can analyze that amounts to empirical criticism of concepts like the paperclip maximizer, and of risks from superhuman AIs with non-human values in general, without working directly on AGI to test those hypotheses ourselves.
If you accept the premise that life is not unique and special then one other technological civilisation in the observable universe should be sufficient to leave potentially observable traces of technological tinkering.
Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about.
Summary
In principle we could build antimatter weapons capable of destroying worlds, but in practice it is much harder to accomplish.
There are many question marks when it comes to the possibility of superhuman intelligence, and many more about the possibility of recursive self-improvement. Most of the arguments in favor of those possibilities solely derive their appeal from being vague.
Further reading
- Intelligence Explosion - A Disjunctive or Conjunctive Event?
- The Hanson-Yudkowsky AI-Foom Debate
- The Betterness Explosion
- Is The City-ularity Near?
- How far can AI jump?
- Why I’m Not Afraid of the Singularity
- What’s the Likelihood of the Singularity? Part One: Artificial Intelligence
- When Exactly Will Computers Go Ape-Shi* and Take Over?
- The slowdown hypothesis (extended abstract)
- The singularity as faith (extended abstract)
97 comments
comment by prase · 2011-11-14T11:53:23.321Z · LW(p) · GW(p)
It is true that a Harpy Eagle can lift more than three-quarters of its body weight while the Boeing 747 Large Cargo Freighter has a maximum take-off weight of almost double its operating empty weight (I suspect that insects can do better).
Which doesn't (automatically) mean that the 747 has a worse design than the eagle. Smaller things (constructions, machines, animals) are relatively stronger than bigger things not because of their superior design, but because the physics of materials is not scale-invariant (unless you somehow managed to scale the size of atoms too). If you made a 1000:1 scaled copy of an ant, it wouldn't be able to lift objects twenty times heavier.
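A minimal numerical sketch of that square-cube point (my own illustration; the 20x figure for an ant is only a rough folk number used for scale):

```python
# Strength scales roughly with cross-sectional area (~L^2), weight with volume (~L^3),
# so strength-to-weight falls off as 1/L when you scale a creature up.

def strength_to_weight(scale, base_ratio=20.0):
    """Relative lifting ability (in body weights) of a creature scaled by `scale`
    in linear size, assuming base_ratio body weights at scale 1."""
    strength = scale ** 2     # muscle cross-section grows as L^2
    weight = scale ** 3       # mass grows as L^3
    return base_ratio * strength / weight

print(strength_to_weight(1))      # ~20 body weights at ant size
print(strength_to_weight(1000))   # ~0.02 body weights for a 1000:1 scaled ant
```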
Replies from: Logos01
↑ comment by Logos01 · 2011-11-14T12:07:31.943Z · LW(p) · GW(p)
If you made a 1000:1 scaled copy of an ant, it wouldn't be able to lift objects twenty times heavier.
It wouldn't be able to survive the crush of gravity, in fact.
Replies from: wedrifid, wedrifid
↑ comment by wedrifid · 2011-11-14T19:53:59.029Z · LW(p) · GW(p)
If you made a 1000:1 scaled copy of an ant, it wouldn't be able to lift objects twenty times heavier.
It wouldn't be able to survive the crush of gravity, in fact.
It wouldn't even be able to breathe, for that matter. They don't have lungs and just absorb and release gases through the exoskeleton. That change in the surface area to volume ratio would kill them.
comment by TheWakalix · 2018-11-22T21:24:22.116Z · LW(p) · GW(p)
Why haven’t we seen a learning algorithm teaching itself chess intelligence starting with nothing but the rules?
We have now, depending on how you interpret "teach itself". It wasn't given anything but the rules and how to play against itself.
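For intuition only, here is a minimal sketch of what "given nothing but the rules and self-play" can look like, far simpler than the chess case: tabular learning on a toy subtraction game (take 1 to 3 sticks from a pile of 21; whoever takes the last stick wins). The game, the constants and the update rule are all assumptions chosen for brevity.

```python
import random
from collections import defaultdict

Q = defaultdict(float)   # Q[(pile, action)] -> estimated value for the player to move
ALPHA, EPSILON, EPISODES = 0.1, 0.2, 50_000

def legal(pile):
    return [a for a in (1, 2, 3) if a <= pile]

def choose(pile, greedy=False):
    acts = legal(pile)
    if not greedy and random.random() < EPSILON:
        return random.choice(acts)          # explore
    return max(acts, key=lambda a: Q[(pile, a)])

for _ in range(EPISODES):
    pile, history = 21, []
    while pile > 0:                         # both "players" use the same policy
        a = choose(pile)
        history.append((pile, a))
        pile -= a
    reward = 1.0                            # whoever took the last stick won
    for (p, a) in reversed(history):
        Q[(p, a)] += ALPHA * (reward - Q[(p, a)])
        reward = -reward                    # moves alternate, so the sign flips

# With enough episodes the greedy policy usually recovers the optimal rule of
# leaving the opponent a multiple of four sticks whenever possible.
print([(p, choose(p, greedy=True)) for p in (21, 10, 7, 6, 5)])
```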
comment by JoshuaZ · 2011-11-14T16:52:30.107Z · LW(p) · GW(p)
One issue I've mentioned before and I think is worth addressing is how much of the ability to quickly self-improve might depend on strict computational limits from theoretical computer science (especially complexity theory). If P != NP in a strong sense then recursive self-improvement may be very difficult.
More explicitly, many problems that are relevant for recursive self-improvement (circuit design and memory management for example) explicitly involve graph coloring and traveling salesman variants which are NP-hard or NP-complete. In that context, it could well be that designing new hardware and software will quickly hit diminishing marginal returns. If P, NP, coNP, PSPACE, and EXP are all distinct in a strong sense, then this sort of result is plausible.
There are problems with this sort of argument. One major one is that the standard distinctions between complexity classes are all in terms of Big-Os. So it could well be that the various classes are distinct but that the constants are small enough that for all practical purposes one can do whatever one wants. There are also possible loopholes. For example, Scott Aaronson has shown that if one has access to closed time-like curves then one can quickly solve any problem in PSPACE. This is one of the more exotic loopholes. Quantum computers may also allow for more efficient computation. At this point, while almost everyone believes that P != NP, the notion that BQP doesn't contain NP seems substantially more uncertain. Finally, and most mundanely, it may be that even as the general problems are tough, the average cases of NP problems may not be difficult, and the specific examples that actually come up may have enough regularity that they can be solved more efficiently. Indeed, in one sense, part of why Deolalikar's attempted proof that P != NP failed is that statistically, NP looks easy on average (more particularly, k-SAT is NP-complete for k >= 3, but k-SAT for general k looks statistically a lot like 2-SAT).
It would seem to me that at this point that a lot more attention should be paid to computational complexity and what it has to say about the plausibility of quick recursive self-improvement.
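To make the memory-management example concrete: register allocation is commonly modeled as coloring an "interference graph", and finding the minimum number of colors is NP-hard, so compilers settle for heuristics. A small sketch (my illustration, not from the comment):

```python
# Greedy coloring of an interference graph (register-allocation style).
# Computing the chromatic number exactly is NP-hard; this heuristic is fast
# but may use more "registers" than strictly necessary.

def greedy_coloring(graph):
    """graph: dict mapping each variable to the set of variables live at the same time."""
    color = {}
    for node in graph:                              # visit order affects quality
        used = {color[n] for n in graph[node] if n in color}
        c = 0
        while c in used:                            # smallest color not used by a neighbor
            c += 1
        color[node] = c
    return color

interference = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(greedy_coloring(interference))   # e.g. {'a': 0, 'b': 1, 'c': 2, 'd': 0}
```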
Replies from: lessdazed, CarlShulman, amcknight, timtyler
↑ comment by lessdazed · 2011-11-14T17:32:36.011Z · LW(p) · GW(p)
It would seem to me that at this point that a lot more attention should be paid to computational complexity and the plausibility of quick recursive self-improvement.
That sounds suspiciously like hard work rather than speculation from unfalsifiable assumptions in natural language that can't actually be cashed out even in theory.
↑ comment by CarlShulman · 2011-11-15T04:29:11.483Z · LW(p) · GW(p)
Eric Horvitz at Microsoft Research has expressed interest in finding complexity results for self-improvement in an intelligence explosion context. I don't know if much has come of it.
↑ comment by amcknight · 2011-11-15T19:25:59.421Z · LW(p) · GW(p)
I don't really see why solving these kinds of difficult problems is relevant. A system could still recursively self-improve to solve a vast number of easier problems. That being said, I'd probably still be interested in anything relating complexity classes to intelligence.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2011-11-16T16:14:05.053Z · LW(p) · GW(p)
A system could still recursively self-improve to solve a vast number of easier problems.
Well, the point here is that it looks like the problems involved with recursive self-improvement themselves fall into the difficult classes. For example, designing circuit boards involves a version of the traveling salesman problem, which is NP-complete. Similarly, memory management and design involves graph coloring, which is also NP-complete.
↑ comment by timtyler · 2011-11-22T22:14:37.933Z · LW(p) · GW(p)
More explicitly, many problems that are relevant for recursive self-improvement (circuit design and memory management for example) explicitly involve graph coloring and traveling salesman variants which are NP-hard or NP-complete. In that context, it could well be that designing new hardware and software will quickly hit diminishing marginal returns. If P, NP. coNP, PSPACE, and EXP are all distinct in a strong sense, then this sort of result is plausible.
We have already seen quite a bit of software and hardware improvement. We already know that it goes pretty fast.
It would seem to me that at this point that a lot more attention should be paid to computational complexity and what it has to say about the plausibility of quick recursive self-improvement.
Maybe. Speed limits for technological evolution seem far off to me. The paucity of results in this area so far may mean that bounding progress rates is not an easy problem.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2011-11-23T00:28:08.346Z · LW(p) · GW(p)
For many practical problems we actually have pretty decent limits. For example, the best algorithms to find gcds of integers are provably very close to best possible. Of course, the actual upper bounds for many problems may be far off or they may be near. That's why this is an argument that we need to do more research into this question, not that it is a slam dunk against runaway self-improvement.
Replies from: timtyler
↑ comment by timtyler · 2011-11-23T13:49:32.869Z · LW(p) · GW(p)
In practice, much is down to how fast scientific and technological progress can accelerate. It seems fairly clear that progress is autocatalytic - and that the rate of progress ramps up with the number of scientists, which does not have hard limits.
Algorithm limits seem to apply more to the question of how smart a computer program can become in an isolated virtual world.
Matt Mahoney has looked at that area - though his results so far do not seem terribly interesting to me.
I think one math problem is much more important to progress than all the other ones: inductive inference.
We can see a long history of progress in solving that problem - and I think we can see that the problem extends far above the human level.
One possible issue is whether progress will slow down as we head towards human capabilities. It seems possible (though not very likely) that we are making progress simply by coding our own inductive inference skills into the machines.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2011-11-25T18:54:40.094Z · LW(p) · GW(p)
In practice, much is down to how fast scientific and technological progress can accelerate. It seems fairly clear that progress is autocatalytic - and that the rate of progress ramps up with the number of scientists, which does not have hard limits.
It might ramp up with increasing the number of scientists, but there are clear diminishing marginal returns. There are more scientists today at some major research universities than there were at any point in the 19th century. Yet we don't have people constantly coming up with ideas as big as say evolution or Maxwell's equations. The low hanging fruit gets picked quickly.
Algorithm limits seem to apply more to the question of how smart a computer program can become in an isolated virtual world.
Matt Mahoney has looked at that area - though his results so far do not seem terribly interesting to me.
I agree that Mahoney's work isn't so far very impressive. The models used are simplistic and weak.
I think one math problem is much more important to progress than all the other ones: inductive inference.
Many forms of induction are NP-hard and some versions are NP-complete, so these sorts of limits are clearly relevant. Some other forms are closely related, where one models things in terms of recognizing pseudorandom number generators. But it seems to me to be incorrect to identify this as the only issue or even that it is necessarily more important. If for example one could factor large numbers more efficiently, an AI could do a lot with that if it got minimal internet access.
Replies from: XiXiDu, timtyler
↑ comment by XiXiDu · 2011-11-25T19:56:41.982Z · LW(p) · GW(p)
You are far more knowledgeable than me and a lot better at expressing possible problems with an intelligence explosion.
Since the very beginning I wondered why nobody has written down what speaks against that possibility. That is one of the reasons why I even bothered to start arguing against it myself -- the trigger was the deletion of a certain post, which made me realize that there is a lot more to it (socially and psychologically) than the average research project -- even though I knew very well that I don't have the necessary background, nor the patience, to do so in a precise and elaborate manner.
Do people think that a skeptical inquiry of, and counterarguments against an intelligence explosion are not valuable?
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2011-11-28T19:50:57.131Z · LW(p) · GW(p)
You are far more knowledgeable than me and a lot better at expressing possible problems with an intelligence explosion.
I don't know about that. The primary issue I've talked about limiting an intelligence explosion is computational complexity. That's a necessarily technical area. Moreover, almost all the major boundaries are conjectural. If P = NP in a practical way, then an intelligence explosion may be quite easy. There's also a major danger that in thinking/arguing that this is relevant, I may be engaging in motivated cognition, in that there's an obvious bias towards thinking that things close to one's own field are somehow relevant.
↑ comment by timtyler · 2011-11-26T02:09:39.228Z · LW(p) · GW(p)
It might ramp up with increasing the number of scientists, but there are clear diminishing marginal returns.
Perhaps eventually - but much depends on how you measure it. In dollar terms, scientists are doing fairly well - there are a lot of them and they command reasonable salaries. They may not be Newtons or Einsteins, but society still seems to be prepared to pay them in considerable numbers at the moment. I figure that means there is still important stuff that needs discovering.
[re: inductive inference] it seems to me to be incorrect to identify this as the only issue or even that it is necessarily more important. If for example one could factor large numbers more efficiently, an AI could do a lot with that if it got minimal internet access.
As Eray Özkural once said: "Every algorithm encodes a bit of intelligence". However, some algorithms do more so than others. A powerful inductive inference engine could be used to solve factoring problems - but also, a huge number of other problems.
comment by gwern · 2011-11-14T14:58:49.286Z · LW(p) · GW(p)
Why haven’t we seen a learning algorithm teaching itself chess intelligence starting with nothing but the rules?
AIXI-MC can teach itself Pacman; Pacman by default is single-player, so the game implementation has to already be done for AIXI-MC. I suppose you could set up a pair of AIXI-MCs, as is sometimes done for training chess programs, and the two would gradually teach each other chess.
Replies from: magfrump
comment by RobertLumley · 2011-11-14T18:03:30.767Z · LW(p) · GW(p)
Evolution versus Intelligence
I didn't read all of this post, unfortunately, as I didn't have time before class. But I wanted to mention my thoughts on this section. This seemed like a very unfortunate analogy. For one, the specific example of flying is wildly biased against humans, who are handicapped by orders of magnitude by the square-cube law. Secondly, you can think of any number of arbitrary counterexamples where the advantage is on the side of intelligence. For a similar counterexample, intelligence has invented cars, trains, and boats which allow humans to travel at velocities far superior to those of evolution's animals, and for longer periods of time.
Replies from: RobertLumley
↑ comment by RobertLumley · 2011-11-15T02:06:17.965Z · LW(p) · GW(p)
Guess someone beat me to this point.
comment by timtyler · 2011-11-14T13:57:11.869Z · LW(p) · GW(p)
Even if the AGI is not told to hold (e.g. when told to compute as many digits of Pi as possible), I consider it a far-fetched assumption that any AGI intrinsically cares to take over the universe as fast as possible to compute as many digits of Pi as possible. Sure, if all of that is presupposed then it will happen, but I don't see that most AGI designs are like that. Most designs that have the potential for superhuman intelligence, but which are given simple goals, will in my opinion just bob up and down as slowly as possible.
It seems to be a kind-of irrelevant argument, since the stock market machines, query answering machines, etc. that humans actually build mostly try to perform their tasks as quickly as they can. There is not much idle thumb-twiddling in the real world of intelligent machines.
It doesn't much matter what machines that are not told to act quickly will do - we want machines to do things fast, and will build them that way.
comment by Grognor · 2011-11-14T12:48:46.967Z · LW(p) · GW(p)
Before I dive into this material in depth, a few thoughts:
First, I want to sincerely congratulate you on being (it seems to me) the first in our tribe to dissent.
Second, it seems your problem isn't with an intelligence explosion as a risk all on its own, but rather as a risk among other risks, one that is farther from being solved (both in terms of work done and in resolvability), and so this post could use a better title, i.e., "Why an Intelligence Explosion is a Low-Priority Global Risk", which does not a priori exclude SIAI from potential donation targets. If I'm wrong about this, and you would consider it a low-priority thing to get rid of the global risk from an intelligence explosion even aside from other global risks, I'll have to ask for an explanation.
Edit: It seems my comment has been noted and the title of the post changed.
comment by jhuffman · 2011-11-14T16:36:09.752Z · LW(p) · GW(p)
I can't really accept innovation as random noise. That doesn't seem to account for the incredible growth in the rate of new technology development. I think a lot of developments are in fact based on sophisticated analysis of known physical laws - e.g. a lot of innovation is engineering versus discovery. Many foundational steps do seem to be products of luck; such as the acceptance of the scientific method.
Replies from: CarlShulman, timtyler
↑ comment by CarlShulman · 2011-11-15T03:39:58.290Z · LW(p) · GW(p)
Trivially, we observe in the world that more innovations happen where there are more scientists, scientists with higher IQs, scientists spending more time on research, densely connected scientists, subjective time for scientists to think, etc. These are inputs that could be greatly boosted with whole brain emulations or not-even-superhuman AGI.
↑ comment by timtyler · 2011-11-14T16:57:58.384Z · LW(p) · GW(p)
This sounds like Campbell's:
In going beyond what is already known, one cannot but go blindly. If one can go wisely, this indicates already achieved wisdom of some general sort [...] which limits the range of trials.
As I have argued here it is a rather misleading idea.
There may be a random component. As Steven Johnson says: "Chance favours the connected mind".
comment by timtyler · 2011-11-14T12:42:01.609Z · LW(p) · GW(p)
given all of the apparent disadvantages of the blind idiot God, evolution was able to come up with altruism, something that works two levels above the individual and one level above society.
Much human altruism is fictional - humans are nice to other humans mostly because being nice pays.
There are low-level selection explanations for most of the genuine forms of human altruism. IMHO, the most promising explanations are:
- Altruism is the result of manipulation by other humans;
- Altruism is the result of manipulation by memes.
- Altruism is the result of overgeneralisation due to cognitive limitations;
- Altruism towards kin (and fictive kin) is explained by kin selection.
comment by billswift · 2011-11-14T16:01:28.865Z · LW(p) · GW(p)
The critical similarity is that both rely on dumb luck when it comes to genuine novelty.
Someone pointed out that a sufficiently powerful intelligence could search all of design space rather than relying on "luck".
I read it on the Web, but can't find it - search really sucks when you don't have a specific keyword or exact phrasing to match.
Replies from: Manfred
↑ comment by Manfred · 2011-11-14T23:07:04.309Z · LW(p) · GW(p)
I don't think that really captures the idea of intelligence. A sufficiently patient calculator can churn out the 10^10th digit of pi by calculating pi, but an intelligent calculator would figure out how to do it in about a minute on my desktop computer.
The point being that the label "dumb luck," while vaguely accurate, views discovery too much as a black box. Which is sort of ironic from this article.
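The kind of trick presumably meant here is a digit-extraction formula such as Bailey-Borwein-Plouffe, which produces a hexadecimal (not decimal) digit of pi at a given position without computing the earlier ones. A rough sketch, assuming double-precision floats are adequate for the positions tried:

```python
# Bailey-Borwein-Plouffe: hexadecimal digit of pi at offset n after the point,
# without computing the preceding digits.

def bbp_hex_digit(n):
    def S(j):
        s = 0.0
        for k in range(n + 1):        # modular exponentiation keeps terms small
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, t = n + 1, 0.0
        while True:                   # rapidly vanishing tail
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            t += term
            k += 1
        return (s + t) % 1.0

    x = (4 * S(1) - 2 * S(4) - S(5) - S(6)) % 1.0
    return "0123456789ABCDEF"[int(x * 16)]

# pi = 3.243F6A88... in hexadecimal
print("".join(bbp_hex_digit(i) for i in range(8)))   # expected: 243F6A88
```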
comment by [deleted] · 2011-11-14T14:34:24.612Z · LW(p) · GW(p)
This should be in Main. I'd much rather have this than "how I broke up with my girlfriend" there.
(Otherwise, I don't have much to say because I basically agree with you. I find your arguments kinda weak and speculative, but much less so than arguments for the other side. So your skepticism is justified.)
Replies from: wedrifid, RobertLumley
↑ comment by RobertLumley · 2011-11-15T02:04:32.160Z · LW(p) · GW(p)
Downvoted for rudeness, but I agree it should be in main.
Replies from: wedrifid, Jayson_Virissimo
↑ comment by Jayson_Virissimo · 2011-11-15T08:46:13.765Z · LW(p) · GW(p)
Downvoted for oversensitivity to emotional tone.
comment by lessdazed · 2011-11-14T13:52:56.506Z · LW(p) · GW(p)
Many humans are not even capable of handling the complexity of the brain of a worm.
I don't think that's the right reference class. We're not asking if something is sufficient, but if something is likely.
Our “irrationality” and the patchwork-architecture of the human brain might constitute an actual feature. The noisiness and patchwork architecture of the human brain might play a significant role in the discovery of unknown unknowns because it allows us to become distracted, to leave the path of evidence based exploration...The noisiness of the human brain might be one of the important features that allows it to exhibit general intelligence.
If you can figure this out, and a superintelligent AI couldn't assign it the probability it deserves and investigate and experiment with it, does that make you supersuperintelligent?
Also, isn't the random noise hypothesis being privileged here? Likewise for "our tendency to be biased and act irrationally might partly be a trade off between plasticity, efficiency and the necessity of goal-stability."
But that expert systems are better at certain tasks does not imply that you can effectively and efficiently combine them into a coherent agency.
Why do these properties of expert systems matter, as no one is discussing combining them?
Why haven’t we seen a learning algorithm teaching itself chess intelligence starting with nothing but the rules?
There's progress along these lines.
It is claimed that an artificial general intelligence might wipe us out inadvertently
"Inadvertently" gives the wrong connotations.
For an AGI that was designed to make paperclips to pose an existential risk, its creators would have to be capable enough to enable it to take over the universe on its own, yet forget, or fail, to define time, space and energy bounds as part of its optimization parameters
What if the AI changed some of its parameters?
It would appear that we have reached the limits of what it is possible to achieve with computer technology, although one should be careful with such statements, as they tend to sound pretty silly in 5 years
--John von Neumann
comment by Donald Hobson (donald-hobson) · 2021-03-04T13:04:25.927Z · LW(p) · GW(p)
A lot of this post sounds like fake ignorance. If you just read over it, you might think the questions asked are genuinely unknown, but if you think for a bit, you can see we have quite a lot of evidence and can give a rough answer.
We simply don’t know if intelligence is instrumental or quickly hits diminishing returns.
Well, humans are doing ok for themselves; it seems to have accelerating returns up to the level of a smart human. What's more, intelligence gets more valuable with increasing scale, and with cheaper compute. When controlling a roomba, you are controlling a few watts. An algorithm that took a 1kW computer cluster to run, and improved efficiency by 5%, wouldn't be worth it. But it would be worth it to control a power station. What's more, the human brain seems a long way from the theoretical limits of compute. So as a lower bound, imagine what a team of smart humans could do running at 1000 times speed, and then imagine that cost you < 1 watt in energy.
Can intelligence be effectively applied to itself at all?
Yes. Hence the fields of psychology and AI research.
It seems that you need to apply a lot more energy to get a bit more complexity.
Doesn't seem to match evolutionary record.
What evidence do we have that the payoff of intelligent, goal-oriented experimentation yields enormous advantages (enough to enable an intelligence explosion) over evolutionary discovery relative to its cost?
The way humans can easily do things that evolution never could. The fact that evolution is a really stupid algorithm: it's generally much faster to make the problem space differentiable, then use gradient descent.
comment by amcknight · 2011-11-15T19:54:06.166Z · LW(p) · GW(p)
Just a reminder that risk from AI can occur without recursive self-improvement. Any AGI with a nice model of our world and some goals could potentially be extremely destructive. Even if intelligence has diminishing returns, there is a huge hardware base to be exploited and a huge number of processors working millions of times faster than brains to be harnessed. Maybe intelligence won't explode in terms of self-improvement, but it can nevertheless explode in terms of pervasiveness and power.
comment by Gedusa · 2011-11-14T12:30:48.918Z · LW(p) · GW(p)
Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about.
I view this as one of the single best arguments against risks from paperclippers. I'm a little concerned that it hasn't been dealt with properly by SIAI folks - aside from a few comments by Carl Shulman on Katja's blog.
I suspect the answer may be something to do with anthropics - but I'm not really certain of exactly what it is.
Replies from: Kaj_Sotala, JoshuaZ, timtyler
↑ comment by Kaj_Sotala · 2011-11-15T12:15:15.576Z · LW(p) · GW(p)
I view this as one of the single best arguments against risks from paperclippers. I'm a little concerned that it hasn't been dealt with properly by SIAI folks - aside from a few comments by Carl Shulman on Katja's blog.
The Fermi Paradox was considered a paradox even before anybody started talking about paperclippers. And even if we knew for certain that superintelligence was impossible, the Fermi Paradox would still remain a mystery - it's not paperclippers (one possible form of colonizer) in particular that are hard to reconcile with the Fermi Paradox, it's the idea of colonizers in general.
Simply the fact that the paradox exists says little about the likelihood of paperclippers, though it does somewhat suggest that we might run into some even worse x-risk before the paperclippers show up. (What value you attach to that "somewhat" depends on whether you think it's reasonable to presume that we've already passed the Great Filter.)
↑ comment by JoshuaZ · 2011-11-15T02:25:25.158Z · LW(p) · GW(p)
One important thing to keep in mind is that although Katja emphasizes this argument in a context of anthropics, the argument goes through even if one hasn't ever heard of anthropic arguments at all simply in terms of the Great Filter.
Replies from: CarlShulman
↑ comment by CarlShulman · 2011-11-15T03:36:09.353Z · LW(p) · GW(p)
the argument goes through even if one hasn't ever heard of anthropic arguments
Only very weakly. Various potential early filters are quite unconstrained by evidence, so that our uncertainty spans many orders of magnitude. Abiogenesis, the evolution of complex life and intelligence, creation of suitable solar systems for life, etc could easily cost many orders of magnitude. Late-filter doomsdays like extinction from nukes or bioweapons would have to be exceedingly convergent (across diverse civilizations) to make a difference of many orders of magnitude for the Filter (essentially certain doom).
Unless you were already confident that the evolution of intelligent life is common, or consider convergent doom (99.9999% of civilizations nuke themselves into extinction, regardless of variation in history or geography or biology) pretty likely the non-anthropic Fermi update seems pretty small.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2011-11-15T03:42:50.672Z · LW(p) · GW(p)
I'm not sure which part of the argument you are referring to. Are you talking about estimates that most of the Great Filter is in front of us? If so, I'd be inclined to tentatively agree. (Although I've been updating more in the direction of more filtration in front for a variety of reasons.) I was talking about the observation that we shouldn't expect AI to be a substantial fraction of the Great Filter. Katja's observation in that context is simply a comment about what our light cone looks like.
Replies from: CarlShulman
↑ comment by CarlShulman · 2011-11-15T04:23:00.552Z · LW(p) · GW(p)
Are you talking about estimates that most of the Great Filter is in front of us? If so, I'd be inclined to tentatively agree.
OK.
I was talking about the observation that we shouldn't expect AI to be a substantial fraction of the Great Filter.
Sure. I was saying that this alone (sans SIA) is much less powerful if we assign much weight to early filters. Say (assuming we're not in a simulation) you assigned 20% probability to intelligence being common and visible (this does invoke observation selection problems inevitably, since colonization could preempt human evolution), 5% to intelligence being common but invisible (environmentalist Von Neumann probes enforce low-visibility; or maybe the interstellar medium shreds even slow starships) 5% to intelligence arising often and self-destructing, and 70% to intelligence being rare. Then you look outside, rule out "common and visible," and update to 6.25% probability of invisible aliens, 6.25% probability of convergent self-destruction in a fertile universe, and 87.5% probability that intelligence is rare. With the SIA (assuming we're not in a simulation, even though the SIA would make us confident that we were) we would also chop off the "intelligence is rare" possibility, and wind up with 50% probability of invisible aliens and 50% probability of convergent self-destruction.
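For readers who want to check the arithmetic, a small sketch that just reproduces the update described above (the prior numbers are the ones stated in the comment, nothing more):

```python
# Condition the stated priors on the observation "no common, visible intelligence out there".
priors = {
    "common and visible": 0.20,
    "common but invisible": 0.05,
    "common but self-destructs": 0.05,
    "intelligence is rare": 0.70,
}

post = {k: v for k, v in priors.items() if k != "common and visible"}
total = sum(post.values())
post = {k: v / total for k, v in post.items()}
print(post)   # invisible 0.0625, self-destruction 0.0625, rare 0.875

# With SIA as described, "intelligence is rare" is also chopped off:
sia = {k: v for k, v in post.items() if k != "intelligence is rare"}
total = sum(sia.values())
print({k: v / total for k, v in sia.items()})   # 0.5 and 0.5
```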
And, as Katja agrees, SIA would make us very confident that AI or similar technologies will allow the production of vast numbers of simulations with our experiences, i.e. if we bought SIA we should think that we were simulations, and in the "outside world" AI was feasible, but not to have strong conclusions about late or early filters (within many orders of magnitude) about the outside world.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2011-11-16T16:12:25.068Z · LW(p) · GW(p)
I agree with most of this. The relevant point is about AI in particular. More specifically, if an AGI is likely to start expanding to control its light cone at a substantial fraction of the speed of light, and this is a major part of the Filter, then we'd expect to see it. This is in contrast to something like nanotech, for example, which, if it destroys civilization on a planet, will be hard for observers to notice. Anthropic approaches (both SIA and SSA) argue for large amounts of filtration in front. The point is that observation suggests that AGI isn't a major part of that filtration, if that's correct.
An example that might help illustrate the point better. Imagine that someone is worried that the filtration of civilizations generally occurs due to them running some sort of physics experiment that causes a false vacuum collapse that expands at less than the speed of light (say c/10,000). We can discount the likelihood of such an event because we would see from basic astronomy the result of the civilizations that have wiped themselves out in how they impact the stars near them.
comment by timtyler · 2011-11-14T12:24:29.389Z · LW(p) · GW(p)
Intelligence is no solution in itself; it is merely an effective searchlight for unknown unknowns, and who says that the brightness of the light increases in proportion to the distance between unknown unknowns? To enable an intelligence explosion, the light would have to reach out much farther with each increase in intelligence than the distance between unknown unknowns grows. I just don't see that as a reasonable assumption.
We do have some data on historical increases in intelligence due to organic and cultural evolution. There's the fossil record of brain sizes, plus data like the Flynn effect. The process has been super-exponential. The intelligence explosion has been going on for about 500 million years so far. As Moravec puts it:
The largest nervous systems doubled in size about every fifteen million years since the Cambrian explosion 550 million years ago. Robot controllers double in complexity (processing power) every year or two.
Machine intelligence looks set to be a straightforward continuation of this long and well-established trend towards bigger brains in the brainiest creatures.
comment by Kaj_Sotala · 2011-11-15T11:52:39.368Z · LW(p) · GW(p)
I agree with the above, yet given all of the apparent disadvantages of the blind idiot God, evolution was able to come up with altruism, something that works two levels above the individual and one level above society. So far we haven’t been able to show such ingenuity by incorporating successes that are not evident from an individual or even societal position.
I don't understand this paragraph. What does "something that works two levels above the individual and one level above society" mean? Or the follow-up sentence?
comment by Curiouskid · 2011-12-06T01:11:53.542Z · LW(p) · GW(p)
It seems that if you increase intelligence you also increase the computational cost of its further improvement and the distance to the discovery of some unknown unknown that could enable another quantum leap. It seems that you need to apply a lot more energy to get a bit more complexity.
I don't necessarily think it's true that you need to know an unknown unknown to reach a "quantum leap". This is very qualitative reasoning about intelligence. You could simply increase the speed. Also, evolution didn't make intelligence by knowing some unknown unknown; it was the result of trial and error. Further intelligence improvement could use the same method, just faster.
comment by Drahflow · 2011-11-16T08:40:07.171Z · LW(p) · GW(p)
The AGI would have to acquire new resources slowly, as it couldn’t just self-improve to come up with faster and more efficient solutions. In other words, self-improvement would demand resources. The AGI could not profit from its ability to self-improve regarding the necessary acquisition of resources to be able to self-improve in the first place.
If the AGI creates a sufficiently convincing business plan / fake company front, it might well be able to command a significant share of the world's resources on credit and either repay after improving or grab power and leave it at that.
comment by magfrump · 2011-11-15T08:42:40.930Z · LW(p) · GW(p)
The first several points you make seem very weak to me, however starting with the section on embodied cognition the post gets better.
Embodied cognition seems to me like a problem for programmers to overcome, not an argument against FOOM. However, it serves as a good basis for your point about constrained resources; I suspect that with sufficient time and leeway and access to AIM an AGI could become an extremely effective social manipulator. However, this seems like the only avenue in which it would obviously be able to get and process responses easily without needing hardware advances that would be difficult to acquire. Pure text messages don't require much memory or much bandwidth, and there are vast numbers of people accessible to interact with at a time, so it would be hard to restrict an AI's ability to learn to talk to people, but this is extremely limited as a way of causing existential risk.
Your paperclip maximizing argument is one that I have thought out before, but I would emphasize a point you seem to neglect. The set of mind designs which would take over the universe seems dense in the set of all generally intelligent mind designs to me, however not necessarily among the set of mind designs which humans would think to program. I don't think that humans would necessarily need to be able to take over the world to create a monster AI, but I do think that a monster AI implies something about the programmer which may or may not be compatible with human psychology.
Overall your post is decent and interesting subject matter. I'm not finding myself persuaded but my disagreements feel more empirical than before the reading which is good!
comment by timtyler · 2011-11-14T12:54:00.209Z · LW(p) · GW(p)
If you accept the premise that life is not unique and special then one other technological civilisation in the observable universe should be sufficient to leave potentially observable traces of technological tinkering.
Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about.
It seems like an argument for DOOM - but what if getting this far is simply very difficult?
Then we could be locally first, without hypothesizing that we are surrounded by the ashes of many other failed technological civilisations.
In which case, machine intelligence might well represent humanity's biggest danger.
Lack of aliens just illustrates the great filter. It doesn't imply that it lies ahead of us. Indeed, we can see from the billions of years of progress and the recent invention of space travel for the first time that much of it lies behind us.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2011-11-14T18:32:28.901Z · LW(p) · GW(p)
Indeed, we can see from the billions of years of progress and the recent invention of space travel for the first time that much of it lies behind us.
How does this imply that there's a lot behind us? It could be that the technology that creates most of the great filter is something that generally arises close to the tech level where one is about to get large-scale space travel. (I think the great filter is likely behind us, but looking just at the fact that it has taken us millions of years to get here is not by itself a decent argument that much of the filter is in fact in the past.)
Replies from: timtyler
↑ comment by timtyler · 2011-11-14T18:42:57.427Z · LW(p) · GW(p)
There's around four billion years' worth of hold-ups behind us - and there's probably not much longer to wait.
That doesn't show that most of the risk isn't ahead - that is a complex speculation based on too many factors to fit into this blog comment. The point is that you can't argue from no aliens to major future risks - since we might well be past the bulk of the filter.
comment by Logos01 · 2011-11-14T12:04:29.016Z · LW(p) · GW(p)
We simply don’t know if intelligence is instrumental or quickly hits diminishing returns.
General intelligence -- defined as the ability to acquire, organize, and apply information -- is definitionally instrumental. Greater magnitudes of intelligence yield greater ability to acquire, organize, and apply said information.
Even if we postulate an increasing difficulty or threshold of "contemplative-productivity" per new "layer" of intelligence, the following remains true: Any AGI which is designed as more "intelligent" than the (A)GI which designed it will be material evidence that GI can be incremented upwards through design, and furthermore that general intelligence can do this. This then implies that any general intelligence that can design an intelligence superior to itself will likely do so in a manner that creates a general intelligence which is superior at designing superior intelligences, as this has already been demonstrated to be a characteristic of general intelligences of the original intelligence's magnitude.
Furthermore, as to the statements about evolution -- evolutionary biology optimizes for specific constraints that we humans, when designing, do not. There is no evolutionary equivalent of the atomic bomb, nor of the Saturn V rocket. Evolution will furthermore typically retain "just enough" of a given trait to "justify" the energy cost of maintaining said trait.
Evolutionarily speaking, abstracting general intelligence has been around less than the blink of an eye.
I don't know that your position is coherent given these points. (Though I do want to point out for re-emphasis that I nowhere in this stated that seed-AGI is either likely or unlikely.)
Replies from: JoshuaZ, Cthulhoo↑ comment by JoshuaZ · 2011-11-14T18:39:59.547Z · LW(p) · GW(p)
General intelligence -- defined as the ability to acquire, organize, and apply information -- is definitionally instrumental. Greater magnitudes of intelligence yield greater ability to acquire, organize, and apply said information.
Intelligence is instrumentally useful, but it comes at a cost. Note that only a few tens of species have developed intelligence. This suggests that intelligence is in general costly. Even if more intelligence helps an AI achieve its goals, that doesn't mean that acquiring more intelligence is easy or worth the effort.
Any AGI which is designed as more "intelligent" than the (A)GI which designed it will be material evidence that GI can be incremented upwards through design: and furthermore that general intelligence can do this.
Yes, but I don't think many people seriously doubt this. Humans will likely do this within a few years even without any substantial AGI work, simply by genetic engineering and/or implants.
This then implies that any general intelligence that can design an intelligence superior to itself will likely do so in a manner that creates a general intelligence which is superior at designing superior intelligences, as this has already been demonstrated to be a characteristic of general intelligences of the original intelligence's magnitude.
This does not follow. It could be that it gets more and more difficult to design a superior intelligence. There may be diminishing marginal returns. (See my comment elsewhere in this thread for one possible example of what could go wrong.)
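To make the diminishing-returns possibility concrete, here is a minimal toy sketch (the numbers and the decay rule are illustrative assumptions, not anything established in this thread):

```python
# Toy model of recursive self-improvement. The parameters are illustrative
# assumptions, not measurements of anything.

def constant_gain(intelligence, gain, generations):
    """Each new design improves on its designer by the same factor: geometric growth."""
    for _ in range(generations):
        intelligence *= (1 + gain)
    return intelligence

def diminishing_gain(intelligence, gain, difficulty, generations):
    """Each new design is proportionally harder to improve on: growth levels off."""
    for _ in range(generations):
        intelligence *= (1 + gain)
        gain /= difficulty  # the next redesign yields a smaller relative gain
    return intelligence

print(constant_gain(1.0, 0.5, 10))        # ~57.7: FOOM-like trajectory
print(diminishing_gain(1.0, 0.5, 2, 10))  # ~2.38: converges toward a ceiling
```

Which of these two trajectories better describes real self-improvement is exactly the empirical question at issue.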
Replies from: Logos01↑ comment by Logos01 · 2011-11-14T20:06:43.702Z · LW(p) · GW(p)
We seem to be talking past one another. Why do you speak in terms of evolution, where I was discussing engineered intelligence?
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-11-14T20:16:34.615Z · LW(p) · GW(p)
I'm only discussing evolved intelligences to make the point that intelligence seems to be costly from a resource perspective.
Replies from: Logos01↑ comment by Logos01 · 2011-11-15T03:25:11.387Z · LW(p) · GW(p)
Certainly. But evolved intelligences do not optimize for intelligence. They optimize for perpetuation of the genome. Constructed intelligence allows for systems that are optimized for intelligence. This is what I was getting at when I mentioned that evolution does not optimize for what we optimize for: there is no evolutionary equivalent of the atom bomb nor the Saturn V rocket.
So mentioning "ways that can go wrong" and reinforcing that point with evolutionary precedent seems to be rather missing the point. It's apples-to-oranges.
After all: even if there are diminishing returns on the energy invested to achieve a more intelligent design, once that new design is achieved it can be replicated essentially indefinitely.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-11-15T03:30:47.837Z · LW(p) · GW(p)
The energy it takes to get there isn't what is relevant in that context. The relevant issue is that being intelligent takes up a lot of resources. This is an important distinction. And the fact that evolution doesn't optimize for intelligence but for other goals isn't really relevant, given that an AGI presumably won't optimize itself for intelligence (a paperclip maximizer, for example, will make itself just as intelligent as it estimates is optimal for making paperclips everywhere). The point is that, based on the data from one very common optimization process, it seems that intelligence is generally so resource intensive that being highly intelligent is simply very rarely worth it. (This evidence is obviously weak. The substrate matters, as do other issues. But the basic point is sound.)
Note incidentally that most of the comment was not about evolved intelligences. This is not an argument occurring in isolation. See especially the other two remarks made.
Replies from: Logos01, Logos01↑ comment by Logos01 · 2011-11-15T04:51:21.761Z · LW(p) · GW(p)
Note incidentally that most of the comment was not about evolved intelligences. This is not an argument occurring in isolation. See especially the other two remarks made.
Quite correct, but you're still making the fundamental error of extrapolating from evolution to non-evolved intelligence without first correcting for the "aims"/"goals" of evolution as compared to designed intelligences when it comes to how designers might approach intelligence.
↑ comment by Logos01 · 2011-11-15T04:49:06.467Z · LW(p) · GW(p)
given that an AGI presumably won't optimize itself for intelligence
By dint of the fact that it is designed for the purpose of being intelligent, any AGI conceivably constructed by men would be optimized for intelligence; this seems a rather routinely heritable phenomenon. While a paperclip optimizer itself might not seek to optimize itself for intelligence, if we postulate that it is in the business of making a 'smarter' paperclip optimizer, it will optimize for intelligence. Of course we cannot know the agency of any given point in the sequence -- whether it will "make the choice" to recurse upwards.
That being said, there's a real non sequitur here in your dialogue, insofar as I can see. "The relevant issue is that being intelligent takes up a lot of resources." -- Compared to what, exactly? Roughly a fifth of our caloric intake goes to our brain, and our brain is not well-optimized for intelligence. "[...] given that an AGI presumably won't optimize itself for intelligence" -- but whatever designed that AGI would have.
The point is that, based on the data from one very common optimization process, it seems that intelligence is generally so resource intensive that being highly intelligent is simply very rarely worth it. (This evidence is obviously weak. The substrate matters, as do other issues. But the basic point is sound.)
I strongly disagree. The basic point is deeply flawed. I've already tried to say this repeatedly: evolution does not optimize for intelligence. Pointing at evolution's history with intelligence and saying, "aha! Optimization finds intelligence expensive!" is missing the point altogether: evolution should find intelligence expensive. It doesn't match what evolution "does". Evolution 'seeks' stable local minima to perpetuate replication of the genome. That is all it does. Intelligence isn't integral to that process; humans didn't need to be any more intelligent than we are in order to reach our local minima of perpetuation, so we didn't evolve any more intelligence.
To attempt to extrapolate from that to what intelligence-seeking designers would achieve is missing the point on a very deep level: to extrapolate correctly from the 'lessons' evolution would 'teach us', one would have to postulate a severe selection pressure favoring intelligence.
I don't see how you're doing that.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-11-16T17:36:52.503Z · LW(p) · GW(p)
I don't understand your remark. No one is going to make an AGI whose goal is to become as intelligent as possible. Evolution is thus in this context one type of optimizer. Whatever one is optimizing for, becoming as intelligent as possible won't generally be the optimal thing to do, even if becoming more intelligent does help it achieve its goals.
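As a throwaway illustration of that point (the payoff function and numbers below are arbitrary assumptions, not a model of any real agent):

```python
# Toy allocation problem: an agent with a fixed resource budget splits it
# between "getting smarter" and directly pursuing its terminal goal. If
# intelligence has diminishing returns, the optimum is an interior split,
# not "maximize intelligence".

import math

BUDGET = 100.0

def payoff(invested_in_intelligence):
    """Goal output = resources spent directly on the goal, multiplied by a
    capability factor that grows only logarithmically with intelligence."""
    direct = BUDGET - invested_in_intelligence
    capability = 1.0 + math.log(1.0 + invested_in_intelligence)
    return direct * capability

best = max(range(0, 101), key=payoff)
print(best, payoff(best))  # interior optimum (19 with these toy numbers), not 100
print(payoff(100))         # spending everything on intelligence leaves nothing for the goal
```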
Replies from: Logos01↑ comment by Logos01 · 2011-11-16T19:50:13.249Z · LW(p) · GW(p)
No one is going to make an AGI whose goal is to become as intelligent as possible.
I would.
Evolution is thus in this context one type of optimizer.
To which intelligence is extraneous.
Consider then an optimizer which focuses on agency rather than perpetuation. To agency, intelligence is instrumental. By dint of being artificial and designed to be intelligent, whatever goalset is integrated into it would have to value that intelligence.
It is not possible to intentionally design a trait into a system without that trait being valuable to the system.
Intelligence is definitionally instrumental to an artificial general intelligence. Given sufficient time, any AGI capable of constructing a superior AGI will do so.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-11-16T19:55:31.097Z · LW(p) · GW(p)
No one is going to make an AGI whose goal is to become as intelligent as possible.
I would.
Are you trying to make sure a bad Singularity happens?
Evolution is thus in this context one type of optimizer.
To which intelligence is extraneous.
No, intelligence is one tool among many which the blind idiot god uses. It is a particularly useful tool for species that can have widely varied environments. Unfortunately, it is a resource intensive tool. That's why Azathoth doesn't use it except in a few very bright species.
Consider then an optimizer which focuses on agency rather than perpetuation. To agency, intelligence is instrumental. By dint of being artificial and designed to be intelligent, whatever goalset is integrated into it would have to value that intelligence.
You seem to be confusing two different notions of intelligence. One is the either/or "is it intelligent" and the other is how intelligent it is.
It is not possible to intentionally design a trait into a system without that trait being valuable to the system.
I'm not sure what you mean here.
Replies from: wedrifid, Logos01↑ comment by wedrifid · 2011-11-16T20:04:21.789Z · LW(p) · GW(p)
Are you trying to make sure a bad Singularity happens?
If Logos is seeking it, then I assume it is not something that he considers bad. Presumably because he thinks intelligence is just that cool. Pursuing the goal necessarily results in human extinction and tiling the universe with computronium, and I call that Bad, but he should still answer "No". (This applies even if I go all cognitive-realism on him and say he is objectively wrong and that he is, in fact, trying to make sure a Bad Singularity happens.)
Replies from: Logos01↑ comment by Logos01 · 2011-11-17T07:11:55.282Z · LW(p) · GW(p)
Pursuing the goal necessarily results in human extinction and tiling the universe with computronium, and I call that Bad, but he should still answer "No".
As I've asked elsewhere to resoundingly negative results:
Why is it automatically "bad" to create an AGI that causes human extinction? If I value ever-increasingly-capable sentience... why must I be anthropocentric about it? If I were to view recursively improving AGI that is sentient as a 'child' or 'inheritor' of "the human will" -- then why should it be so awful if humanity were to be rendered obsolete or even extinct by it?
I do not, furthermore, define "humanity" in so strict terms as to require that it be flesh-and-blood to be "human". If our FOOMing AGI were a "human" one -- I personally would find it in the range of acceptable outcomes if it converted the available carbon and silicon of the earth into computronium.
Sure, it would suck for me -- but those of us currently alive already die, and over a long enough timeline the survival rate for even the clinically immortal drops to zero.
I ask this question because I feel that it is relevant. Why is "inheritor" non-Friendly AGI "bad"?
(This applies even if I go all cognitive-realism on him and say he is objectively wrong and that he is, in fact, trying to make sure a Bad Singularity happens.)
Caveat: It is possible for me to discuss my own motivations using someone else's valuative framework. So context matters. The mere fact that I would say "No" does not mean that I could never say "Yes -- as you see it."
Replies from: wedrifid↑ comment by wedrifid · 2011-11-17T07:20:04.608Z · LW(p) · GW(p)
Why is it automatically "bad" to create an AGI that causes human extinction?
It isn't automatically bad. I just don't want it. This is why I said your answer is legitimately "No".
Replies from: Logos01↑ comment by Logos01 · 2011-11-17T07:26:14.951Z · LW(p) · GW(p)
Fair enough.
Honest question: If our flesh were dissolved overnight and we instead were instantiated inside a simulated environment -- without our permission -- would you consider this a Friendly outcome?
Replies from: wedrifid↑ comment by wedrifid · 2011-11-17T07:58:23.918Z · LW(p) · GW(p)
Potentially, depending on the simulated environment.
Replies from: Logos01↑ comment by Logos01 · 2011-11-17T07:22:27.679Z · LW(p) · GW(p)
No, intelligence is one tool among many which the blind idiot god uses. It is a particularly useful tool for species that can have widely varied environments. Unfortunately, it is a resource intensive tool.
Given the routes to general intelligence available to "the blind idiot god", due to the characteristics it does optimize for. We have a language breakdown here.
The reason I said intelligence is 'extraneous' to evolution is that evolution only 'seeks out' local minima for perpetuation of the genome. What specific configuration a given local minimum happens to be is extraneous to the algorithm. Intelligence is in the available solution space, but it is extraneous to the process. Which is why generalists often lose out to specialists in limited environments. (Read: the pygmy peoples who "went back to the trees".)
Intelligence is not a goal to evolution; it is extraneous to its criteria. Intelligence is not the metric by which evolution judges fitness. Successful perpetuation of the genome is. Nothing else.
You seem to be confusing two different notions of intelligence. One is the either/or "is it intelligent" and the other is how intelligent it is.
Not at all. Not even remotely. I'm stating that any agent -- a construct (biological or synthetic) that can actively select amongst variable results; a thing that makes choices -- inherently values intelligence: the capacity to 'make good choices'. A more intelligent agent is a 'superior' agent, instrumentally speaking.
Any time there is a designed intelligent agent, the presence of said intelligence is a hard indicator of the agent valuing intelligence. Designed intelligences are "designed to be intelligent" (this is tautological). This means that whoever designed that intelligence spent effort and time on making it intelligent. That, in turn, means that its designer valued that intelligence. Whatever goalset the designer imparted to the designed intelligence is thus a goalset that requires intelligence to be effected.
Which in turn means that intelligence is definitionally instrumentally useful to a designed intelligence.
It is not possible to intentionally design a trait into a system without that trait being valuable to the system.
I'm not sure what you mean here.
What about it gives you trouble? Try rephrasing it and I'll 'correct' said rephrasing towards my intended meaning as best I can, perhaps? I want to be understood. :)
↑ comment by Cthulhoo · 2011-11-14T12:49:23.338Z · LW(p) · GW(p)
Even if we postulate an increasing difficulty or threshold of "contemplative-productivity" per new "layer" of intelligence, the following remains true: Any AGI which is designed as more "intelligent" than the (A)GI which designed it will be material evidence that GI can be incremented upwards through design, and furthermore that general intelligence can do this. This then implies that any general intelligence that can design an intelligence superior to itself will likely do so in a manner that creates a general intelligence which is superior at designing superior intelligences, as this has already been demonstrated to be a characteristic of general intelligences of the original intelligence's magnitude.
There's no need for infinite recursion. Even if there is some cap on intelligence (due to resource optimization or something else we have yet to discover), the risk is still there if the cap isn't exceptionally near the human level. If the AI is to us what we are to chimps, it may very well be enough.
Replies from: Logos01↑ comment by Logos01 · 2011-11-14T13:42:59.192Z · LW(p) · GW(p)
There's no need for infinite recursion.
Or, frankly, any recursion at all. Say we can't make anything smarter than humans... but we can make them reliably smart, and smaller than humans. AGI bots as smart as our average "brilliant" guy, with no morals and the ability to accelerate as only solid-state equipment can... that's frankly pretty damned scary all on its own.
(You could also count, under some auspices, "intelligence explosion" as meaning "an explosion in the number of intelligences". Imagine if, for every human being, there were 10,000 AGI minds. Exactly what impact would the average human's mental contributions have? What, then, of 'intellectual labor'? Or manual labor?)
Replies from: Cthulhoo, TimS↑ comment by Cthulhoo · 2011-11-14T13:59:32.417Z · LW(p) · GW(p)
Good point.
In addition, supposing the AI is slightly smarter than humans and can easily replicate itself, Black Team effects could possibly be relevant (just a hypothesis, really, but still interesting to consider).
↑ comment by TimS · 2011-11-14T20:35:52.955Z · LW(p) · GW(p)
Could you expand this a little further? I'm not afraid of amoral, fast-thinking, miniature Isaac Newtons unless they are a substantial EDIT: number (>1000 at the very least) or are not known about by the relevant human policy-makers.
ETA: what it used to say at the edit was "fraction of the human population (>1% at the very least)". TheOtherDave corrected my mis-estimate.
Replies from: RomeoStevens, TheOtherDave↑ comment by RomeoStevens · 2011-11-15T02:40:45.913Z · LW(p) · GW(p)
have you read that alien message? http://lesswrong.com/lw/qk/that_alien_message/
Replies from: TimS↑ comment by TimS · 2011-11-15T04:07:05.877Z · LW(p) · GW(p)
TheOtherDave showed that I mis-estimated the critical number. That said, there are several differences between my hypo and the story.
1) Most importantly, the difference between the average human and Newton is smaller than the difference portrayed between aliens and humans.
2) There is a huge population of humans in the story, and I expressly limited my non-concern to much smaller populations.
3) The super-intelligent humans in the story do not appear to be known about by the relevant policy-makers (i.e. senior military officials). Not that it would matter in the story, but it seems likely to matter if the population of supers were much smaller.
Replies from: RomeoStevens↑ comment by RomeoStevens · 2011-11-15T04:42:56.110Z · LW(p) · GW(p)
I'm not sure I see the point of the details you mention. The main thrust is that humans within the normal range, given a million-fold speedup (as silicon allows) and unlimited collaboration, would be a de facto superintelligence.
Replies from: TimS↑ comment by TimS · 2011-11-15T14:06:20.726Z · LW(p) · GW(p)
The humans were not within the current normal range. The average was explicitly higher. And I think that the aliens' average intelligence was lower than the current human average, although the story is not explicit on that point. And there were billions of super-humans.
Let me put it this way: Google is smarter, wealthier, and more knowledgeable than I. But even if everyone at Google thought millions of times faster than everyone else, I still wouldn't worry about them taking over the world. Unless nobody else important knew about this capacity.
AI is a serious risk, but let's not underestimate how hard it is to be as capable as a Straumli Perversion.
Replies from: RomeoStevens↑ comment by RomeoStevens · 2011-11-15T20:38:36.198Z · LW(p) · GW(p)
The higher average does not mean that they were not within the normal range. They are not individually superhuman.
↑ comment by TheOtherDave · 2011-11-14T21:18:49.761Z · LW(p) · GW(p)
I don't have a clear sense of how dangerous a group of amoral fast-thinking miniature Isaac Newtons might be, but it would surprise me if there were a particularly important risk-evaluation threshold crossed between 70 million amoral fast-thinking miniature Isaac Newtons and a mere, say, 700,000 of them.
Admittedly, I may be being distracted by the image of hundreds of thousands of miniature Isaac Newtons descending on Washington DC or something. It's a far more entertaining idea than those interminable zombie stories.
Replies from: TimS↑ comment by TimS · 2011-11-14T21:58:27.759Z · LW(p) · GW(p)
You are right that 1% of the world population is likely too large. I probably should have said "substantial numbers in existence." I've adjusted my estimate, so amoral Newtons don't worry me unless they are secret or exist in substantial numbers (>1000 at the very least). And the minimum number gets bigger unless there is reason to think amoral Newtons will cooperate amongst themselves to dominate humanity.
Replies from: Logos01
comment by timtyler · 2011-11-14T14:14:53.068Z · LW(p) · GW(p)
There is a significant difference between intelligence and evolution if you apply intelligence to the improvement of evolutionary designs:
- Intelligence is goal-oriented.
- Intelligence can think ahead.
- Intelligence can jump fitness gaps.
- Intelligence can engage in direct experimentation.
- Intelligence can observe and incorporate solutions of other optimizing agents.
Much of this seems pretty inaccurate. The first three points are true, but not really the issue - and explaining why would come uncomfortably close to the topic I am forbidden from talking about - but for the last two points, surely every organism is its own experiment, and mutualistic symbiosis allows creatures to use the results of experiments performed by members of other species.
Intelligence does make some differences, but not really these differences.
Replies from: JoshuaZ, jimrandomh↑ comment by JoshuaZ · 2011-11-14T16:21:57.524Z · LW(p) · GW(p)
The last two points are different though: Intelligence can engage in direct, systematic experimentation. Evolution in contrast only experiments in so far as it happens to stumble across things. Similarly, while you are correct about symbiosis to a limited extent (and there are also examples of horizontal gene transfer), intelligence can look at another solution and, without happening to pick up genes or forming a symbiotic relationship, can simply steal a functional strategy.
Replies from: timtyler↑ comment by timtyler · 2011-11-14T16:39:27.415Z · LW(p) · GW(p)
The last two points are different though: Intelligence can engage in direct, systematic experimentation. Evolution in contrast only experiments in so far as it happens to stumble across things.
So: direct experimentation by animals is part of Skinnerian learning - which is hundreds of millions of years old. The results of the experiments go on to influence the germ line of organisms via selection. Would you claim that that is not part of "evolution"?
Similarly, while you are correct about symbiosis to a limited extent (and there are also examples of horizontal gene transfer), intelligence can look at another solution and, without happening to pick up genes or forming a symbiotic relationship, can simply steal a functional strategy.
Right - so I agree that differences in evolution are developing as a result of the development of engineers - but observing and incorporating the "solutions of other optimizing agents" doesn't really capture the difference in question. Imitating the strategies of others is hardly a new development either - animals have been imitating each other since around our LCA with songbirds - that too is an ordinary part of evolution.
Replies from: JoshuaZ, dlthomas↑ comment by JoshuaZ · 2011-11-14T17:04:43.165Z · LW(p) · GW(p)
So: direct experimentation by animals is part of Skinnerian learning
Very few animals engage in anything that can be described as experimentation in any systematic fashion nor do they actually learn from the results in any fashion other than simple operant conditioning. Thus for example, if cats are placed in a box with a trick latch where they are given food after they get out, they gradually learn to hit the latch to the point where they can do so routinely. But, if one graphs the time it takes them on average to do so, they have a slow decline rather than a steep decline as one would expect if they were actually learning. If by experiment you mean "try lots of random things and not even understand which one actually helped" then yes they experiment. If one means experiment in a more useful fashion then very few species (great apes, corvids, keas, possibly elephants, possibly dolphins) do so.
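A minimal sketch of the two curve shapes being contrasted here (hypothetical numbers only, not the actual cat data):

```python
# Illustrative only: hypothetical escape times (seconds) over repeated trials.

def trial_and_error(initial=160.0, improvement=0.85, trials=15):
    """Operant-conditioning-style learning: a gradual, smooth decline in escape time."""
    return [round(initial * improvement ** t) for t in range(trials)]

def insight(initial=160.0, solved_at=5, solved_time=10.0, trials=15):
    """Insight-style learning: times stay high, then drop sharply once the latch is understood."""
    return [round(initial if t < solved_at else solved_time) for t in range(trials)]

print(trial_and_error())  # e.g. [160, 136, 116, 98, 84, 71, ...] -- slow decline
print(insight())          # e.g. [160, 160, 160, 160, 160, 10, 10, ...] -- steep decline
```

The data described above look like the first list, not the second.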
Right - so I agree that differences in evolution are developing as a result of the development of engineers - but observing and incorporating the "solutions of other optimizing agents" doesn't really capture the difference in question. Imitating the strategies of others is hardly a new development either - animals have been imitating each other since around our LCA with songbirds - that too is an ordinary part of evolution.
This is a much more limited form of imitation, where evolution has simply given them an instinct to imitate sounds around them. In some cases this can be quite impressive (e.g. the lyrebird can imitate other birds, as well as camera clicks, car alarms and chainsaws), but they are essentially just imitating noises around them. The set of species that can watch another entity solve a problem and then learn from that to solve the problem themselves is much tinier (humans, some other great apes, corvids, keas, African Grey Parrots, dolphins, and not much else). There's a real difference in the type of imitation going on here.
Replies from: timtyler↑ comment by timtyler · 2011-11-14T17:33:37.167Z · LW(p) · GW(p)
So: direct experimentation by animals is part of Skinnerian learning
Very few animals engage in anything that can be described as experimentation in any systematic fashion nor do they actually learn from the results in any fashion other than simple operant conditioning. Thus for example, if cats are placed in a box with a trick latch where they are given food after they get out, they gradually learn to hit the latch to the point where they can do so routinely. But, if one graphs the time it takes them on average to do so, they have a slow decline rather than a steep decline as one would expect if they were actually learning. If by experiment you mean "try lots of random things and not even understand which one actually helped" then yes they experiment. If one means experiment in a more useful fashion then very few species (great apes, corvids, keas, possibly elephants, possibly dolphins) do so.
Not one of the replies I expected - denying that animal trial-and-error learning is methodical enough to count as being "experimentation". That is probably true of some trial-and-error learning, but - as you seem to agree - some animal learning can be more methodical. I am still inclined to count experimentation as a "millions-of-years-old" phenomenon and part of conventional evolution - and not something to do with more recent developments involving engineering.
Imitating the strategies of others is hardly a new development either - animals have been imitating each other since around our LCA with songbirds - that too is an ordinary part of evolution.
This is a much more limited form of imitation, where evolution has simply given them an instinct to imitate sounds around them. In some cases this can be quite impressive (e.g. the lyrebird can imitate other birds, as well as camera clicks, car alarms and chainsaws), but they are essentially just imitating noises around them. The set of species that can watch another entity solve a problem and then learn from that to solve the problem themselves is much tinier (humans, some other great apes, corvids, keas, African Grey Parrots, dolphins, and not much else). There's a real difference in the type of imitation going on here.
I'd argue that even "local enhancement" - a very undemanding form of social learning - can result in incorporating the "solutions of other optimizing agents". Anyhow, again you seem to agree that animals have been doing such things for millions of years. So, this is still the domain of pretty conventional evolution.
↑ comment by dlthomas · 2011-11-14T17:07:27.571Z · LW(p) · GW(p)
So: direct experimentation by animals is part of Skinnerian learning - which is hundreds of millions of years old. The results of the experiments go on to influence the germ line of organisms via selection. Would you claim that that is not part of "evolution"?
This smells funny.
Replies from: timtyler↑ comment by timtyler · 2011-11-14T18:25:56.485Z · LW(p) · GW(p)
Sure: it conflicts with the popular misunderstanding of evolution being "blind" - and without foresight.
Replies from: dlthomas↑ comment by dlthomas · 2011-11-14T18:40:52.415Z · LW(p) · GW(p)
Pinning it down a little, the part I object to is:
The results of the experiments go on to influence the germ line of organisms via selection.
It is trivially true that any change will have an influence, chaos theory and all. It seems unlikely that this influence will in any way relate to the results of the experimentation, in terms of production of variance or difference in heritability. Yet this seems to be what you are hand-wavily suggesting, without backing it up with math.
Replies from: timtyler↑ comment by timtyler · 2011-11-14T18:50:16.786Z · LW(p) · GW(p)
It seems unlikely that this influence will in any way relate to the results of the experimentation, in terms of production of variance or difference in heritability. Yet this seems to be what you are hand-wavily suggesting, without backing it up with math.
We do know a fair bit about that. The idea is that it influences the germ line via selection.
If you perform experiments and the result is that you die - or fail to mate - the selective mechanism of information-transfer into the germ line is obvious. Milder outcomes have less dramatic effects - but still result in information transfer. The phenomenon has been studied under these names:
- Genetic assimilation
- The Baldwin effect
- The assimilate-stretch principle
If we go as far as humans, there are also things like biocultural evolution.
That learned information and genetic information were aspects of a more general underlying phenomenon was observed by Semon (1904), and later there were pioneering contributions by B. F. Skinner - for example in Selection by Consequences. Skinner explicitly based his theory of learning on Darwin's theory of evolution. The idea was refined further in Dennett's Tower of Generate-and-Test.
↑ comment by jimrandomh · 2011-11-14T15:26:03.996Z · LW(p) · GW(p)
You won't explain because the explanation is a neighbor of a topic that you were once asked to shut up about, three years ago, because the conversation had gotten repetitive? I think that's being too deferential.
Replies from: wedrifid, timtyler↑ comment by timtyler · 2011-11-14T15:49:51.183Z · LW(p) · GW(p)
Thanks for that - but for now I'll continue to go with what the site's moderator said.
I am not refusing to explain - the explanation I would offer now is much the same as it was back then. Anyone interested can go and look at the topic that I was forbidden from further discussing.