Criticisms of intelligence explosion
post by lukeprog · 2011-11-22T17:42:06.036Z · LW · GW
On this page I will collect criticisms of (1) the claim that intelligence explosion is plausible, (2) the claim that intelligence explosion is likely to occur within the next 150 years, and (3) the claim that intelligence explosion would have a massive impact on civilization. Please suggest your own, citing the original source when possible.
[Under construction.]
"AGI won't be a big deal; we already have 6 billion general intelligences on Earth."
Example: "I see no reason to single out AI as a mould-breaking technology: we already have billions of humans." (Deutsch, The Beginning of Infinity, p. 456.)
Response: The advantages of mere digitality (speed, copyability, goal coordination) alone are transformative, and will increase the odds of rapid recursive self-improvement in intelligence. Meat brains are badly constrained in ways that non-meat brains need not be.
"Intelligence requires experience and learning, so there is a limit to the speed at which even a machine can improve its own intelligence."
Example: "If you define the singularity as a point in time when intelligent machines are designing intelligent machines in such a way that machines get extremely intelligent in a short period of time--an exponential increase in intelligence--then it will never happen. Intelligence is largely defined by experience and training, not just by brain size or algorithms. It isn't a matter of writing software. Intelligent machines, like humans, will need to be trained in particular domains of expertise. This takes time and deliberate attention to the kind of knowledge you want the machine to have." (Hawkins, Tech Luminaries Address Singularity)
Response: Intelligence defined as optimization power doesn't necessarily need experience or learning from the external world. Even if it did, a superintelligent machine spread throughout the internet could gain experience and learning from billions of sub-agents all around the world simultaneously, while near-instantaneously propagating these updates to its other sub-agents.
"There are hard limits to how intelligent a machine can get."
Example: "The term 'singularity' applied to intelligent machines refers to the idea that when intelligent machines can design intelligent machines smarter than themselves, it will cause an exponential growth in machine intelligence leading to a singularity of infinite (or at least extremely large) intelligence. Belief in this idea is based on a naive understanding of what intelligence is. As an analogy, imagine we had a computer that could design new computers (chips, systems, and software) faster than itself. Would such a computer lead to infinitely fast computers or even computers that were faster than anything humans could ever build? No. It might accelerate the rate of improvements for a while, but in the end there are limits to how big and fast computers can run... Exponential growth requires the exponential consumption of resources (matter, energy, and time), and there are always limits to this." (Hawkins, Tech Luminaries Address Singularity)
Response: There are physical limits to how intelligent something can get, but they easily allow the intelligence required to transform the solar system.
"AGI won't be malevolent."
Example: "No intelligent machine will 'wake up' one day and say 'I think I will enslave my creators.'" (Hawkins, Tech Luminaries Address Singularity)
Example: "...it's more likely than not in my view that the two species will comfortably and more or less peacefully coexist--unless human interests start to interfere with those of the machines." (Casti, Tech Luminaries Address Singularity)
Response: True. But most runaway machine superintelligence designs would kill us inadvertently. "The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else."
"If intelligence explosion was possible, we would have seen it by now."
Example: "I don't believe in technological singularities. It's like extraterrestrial life--if it were there, we would have seen it by now." (Rodgers, Tech Luminaries Address Singularity)
Response: Not true. The analogy fails: an intelligence explosion cannot precede the technology that would trigger it, so the fact that we have not yet built AGI says little about what will happen once we do.
"Humanity will destroy itself before AGI arrives."
Example: "the population will destroy itself before the technological singularity." (Bell, Tech Luminaries Address Singularity)
Response: This is plausible, though there are many reasons to think that AGI will arrive before other global catastrophic risks do.
"The Singularity belongs to the genre of science fiction."
Example: "The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived." (Pinker, Tech Luminaries Address Singularity)
Response: This is not an issue of literary genre, but of probability and prediction. Science fiction becomes science fact several times every year. In the case of technological singularity, there are good scientific and philosophical reasons to expect it.
"Intelligence isn't enough; a machine would also need to manipulate objects."
Example: "The development of humans, what evolution has come up with, involves a lot more than just the intellectual capability. You can manipulate your fingers and other parts of your body. I don't see how machines are going to overcome that overall gap, to reach that level of complexity, even if we get them so they're intellectually more capable than humans." (Moore, Tech Luminaries Address Singularity)
Response: Robotics is making strong progress in addition to AI.
"Human intelligence or cognitive ability can never be achieved by a machine."
Example: "Goedel's theorem must apply to cybernetical machines, because it is of the essence of being a machine, that it should be a concrete instantiation of a formal system. It follows that given any machine which is consistent and capable of doing simple arithmetic, there is a formula which it is incapable of producing as being true---i.e., the formula is unprovable-in-the-system-but which we can see to be true. It follows that no machine can be a complete or adequate model of the mind, that minds are essentially different from machines." (Lucas, Minds, Machines and Goedel)
Example: "Instantiating a computer program is never by itself a sufficient condition of [human-liked] intentionality." (Searle, Minds, Brains, and Programs)
Response: "...nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but he allows that it is a mechanical system that relies on certain nonalgorithmic quantum processes. Dreyfus holds that the brain is not a rule-following symbolic system, but he allows that it may nevertheless be a mechanical system that relies on subsymbolic processes (for example, connectionist processes). If so, then these arguments give us no reason to deny that we can build artificial systems that exploit the relevant nonalgorithmic quantum processes, or the relevant subsymbolic processes, and that thereby allow us to simulate the human brain... As for the Searle and Block objections, these rely on the thesis that even if a system duplicates our behaviour, it might be missing important ‘internal’ aspects of mentality: consciousness, understanding, intentionality, and so on.... we can set aside these objections by stipulating that for the purposes of the argument, intelligence is to be measured wholly in terms of behaviour and behavioural dispositions, where behaviour is construed operationally in terms of the physical outputs that a system produces." (Chalmers, The Singularity: A Philosophical Analysis)
"It might make sense in theory, but where's the evidence?"
Example: "Too much theory, not enough empirical evidence." (MileyCyrus, LW comment)
Response: "Papers like How Long Before Superintelligence contain some of the relevant evidence, but it is old and incomplete. Upcoming works currently in progress by Nick Bostrom and by SIAI researchers contain additional argument and evidence, but even this is not enough. More researchers should be assessing the state of the evidence."
"Humans will be able to keep up with AGI by using AGI's advancements themselves."
Example: "...an essential part of what we mean by foom in the first place... is that it involves a small group accelerating in power away from the rest of the world. But the reason why that happened in human evolution is that genetic innovations mostly don't transfer across species. [But] human engineers carry out exactly this sort of technology transfer on a routine basis." (rwallace, The Curve of Capability)
Response: Human engineers cannot take a powerful algorithm from AI and implement it in their own neurobiology. Moreover, once an AGI is improving its own intelligence, it's not clear that it would share the 'secrets' of these improvements with humans.
"A discontinuous break with the past requires lopsided capabilities development."
Example: "a chimpanzee could make an almost discontinuous jump to human level intelligence because it wasn't developing across the board. It was filling in a missing capability - symbolic intelligence - in an otherwise already very highly developed system. In other words, its starting point was staggeringly lopsided... [But] the lopsidedness is not occurring [in computers]. Obviously computer technology hasn't lagged in symbol processing - quite the contrary." (rwallace, The Curve of Capability)
Example: "Some species, such as humans, have mostly taken over the worlds of other species. The seeming reason for this is that there was virtually no sharing of the relevant information between species. In human society there is a lot of information sharing." (Katja Grace, How Far Can AI Jump?)
Response: It doesn't seem that symbol processing was the missing capability that made humans so powerful. Calculators have superior symbol processing, but have no power to rule the world. Also: many kinds of lopsidedness are occurring in computing technology that may allow a sudden discontinuous jump in AI abilities. In particular, we are amassing vast computational capacities without yet understanding the algorithmic keys to general intelligence.
"No small set of insights will lead to massive intelligence boost in AI."
Example: "...if there were a super mind theory that allowed vast mental efficiency gains all at once, but there isn’t. Minds are vast complex structures full of parts that depend intricately on each other, much like the citizens of a city. Minds, like cities, best improve gradually, because you just never know enough to manage a vast redesign of something with such complex inter-dependent adaptations." (Robin Hanson, Is the City-ularity Near?)
Example: "Now if you artificially hobble something so as to simultaneously reduce many of its capacities, then when you take away that limitation you may simultaneously improve a great many of its capabilities... But beyond removing artificial restrictions, it is very hard to simultaneously improve many diverse capacities. Theories that help you improve capabilities are usually focused on a relatively narrow range of abilities – very general and useful theories are quite rare." (Robin Hanson, The Betterness Explosion)
Response: An intelligence explosion doesn't require a breakthrough that improves all capabilities at once. Rather, it requires an AI capable of improving its intelligence in a variety of ways. Then it can use the advantages of mere digitality (speed, copyability, goal coordination, etc.) to improve its intelligence in dozens or thousands of ways relatively quickly.
To be added:
- Massimo Pigliucci on Chalmers' Singularity talk
- XiXiDu on intelligence explosion as a disjunctive or conjunctive event, on intelligence explosion as a low-priority global risk, on basic AI drives
- Diminishing returns from intelligence amplification
123 comments
Comments sorted by top scores.
comment by Vladimir_Nesov · 2011-11-23T00:26:17.525Z · LW(p) · GW(p)
My problem with the focus on the idea of intelligence explosion is that it's too often presented as motivating the problem of FAI, but it really isn't. It's a strategic consideration alongside Hanson's Malthusian ems, killer biotech and cognitive modification: one more thing that makes the problem urgent, but still one among many.
What ultimately matters is implementing humane value (which involves figuring out what that is). The specific manner in which we lose the ability to do so is immaterial. If intelligence explosion is close, humane value will lose control over the future quickly. If instead we change our nature through future cognitive modification tech, or by experimenting on uploads, then the grasp of humane value on the future will fail in an orderly manner, slowly but just as irrevocably yielding control to wherever the winds of value drift blow.
It's incorrect to predicate the importance or urgency of gaining FAI-grade understanding of humane value on the possibility of intelligence explosion. Other technologies that would allow value drift are for all purposes similarly close.
(That said, I do believe AGIs lead to intelligence explosions. This point is important to appreciate the impact and danger of AGI research, if complexity of humane value is understood, and to see one form that implementation of a hypothetical future theory of humane value could take.)
↑ comment by RomeoStevens · 2011-11-24T02:04:47.030Z · LW(p) · GW(p)
The question of "can we rigorously define human values in a reflectively consistent way" doesn't need to have anything to do with AI or technological progress at all.
↑ comment by Giles · 2011-11-24T04:25:45.855Z · LW(p) · GW(p)
This is a good point. I think there's one reason to give special attention to the intelligence explosion concept though... it's part of the proposed solution as well as one of the possible problems.
The two main ideas here are:
- Recursive self-improvement is possible and powerful
- Human values are fragile; "most" recursive self-improvers will very much not do what we want
These ideas seem to be central to the utility-maximizing FAI concept.
comment by MileyCyrus · 2011-11-22T18:25:30.425Z · LW(p) · GW(p)
Too much theory, not enough empirical evidence. In theory, FAI is an urgent problem that demands most of our resources (Eliezer is on the record saying that the only two legitimate occupations are working on FAI, and earning lots of money so you can donate money to other people working on FAI).
In practice, FAI is just another Pascal's mugging / lifespan dilemma / St. Petersburg Paradox. From XiXiDu's blog:
To be clear, extrapolations work and often are the best we can do. But since there are problems such as the above, that we perceive to be undesirable and that lead to absurd consequences, I think it is reasonable to ask for some upper and lower bounds regarding the use and scope of certain heuristics.
[...]
Taking into account considerations of vast utility or low probability quickly leads to chaos theoretic considerations like the butterfly effect. As a computationally bounded and psychical unstable agent I am unable to cope with that. Consequently I see no other way than to neglect the moral impossibility of extreme uncertainty.
Until [various rationality puzzles] are resolved, or sufficiently established, I will continue to put vastly more weight on empirical evidence and my intuition than on logical implications, if only because I still lack the necessary educational background to trust my comprehension and judgement of the various underlying concepts and methods used to arrive at those implications.
↑ comment by djcb · 2011-11-22T19:27:20.937Z · LW(p) · GW(p)
I would also be very interested in seeing some smaller stepping stones implemented -- I imagine that creating an AGI (let alone FAI) will require massive amounts of maths, proofs and the like. It seems very useful to create artificially intelligent mathematics software that can 'discover' and prove interesting theorems (and explain its steps). Of course, there is software that can prove relatively simple theorems, but there's nothing that could prove e.g. Fermat's Last Theorem -- we still need very smart humans for that.
Of course, it's extremely hard to create such software, but it would be much easier than AGI/FAI, and at the same time it could help with constructing those (and help in some other areas, say QM). The difficulty of constructing such software might also give us some understanding of the difficulties of constructing general artificial intelligence.
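To make the gap concrete, here is a minimal, self-contained sketch of the most primitive kind of automated "prover": brute-force truth-table checking of propositional tautologies. It is purely illustrative (the names are mine), and real proof assistants are vastly more capable, yet still nowhere near discovering Fermat-level mathematics unaided.

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Brute-force 'theorem prover' for propositional logic: check every valuation."""
    return all(formula(*values) for values in product([False, True], repeat=num_vars))

implies = lambda a, b: (not a) or b

# Peirce's law: ((p -> q) -> p) -> p
peirce = lambda p, q: implies(implies(implies(p, q), p), p)

print(is_tautology(peirce, 2))  # True: the formula holds under all 4 valuations
```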
comment by Stuart_Armstrong · 2011-11-24T19:17:40.410Z · LW(p) · GW(p)
You should have entitled this post "Criticisms of Criticisms of intelligence explosion" :-)
comment by Shmi (shminux) · 2011-11-22T20:19:16.613Z · LW(p) · GW(p)
(2) the claim that intelligence explosion is likely to occur within the next 150 years
A "scientific" prediction with a time-frame of several decades and no clear milestones along the way is equivalent to a wild guess. From 20 Predictions of the Future (We’re Still Waiting For):
Weather Control: In 1966, a radio documentary, 2000 AD, was aired as a forum for various media and science personalities to discuss what life might be like in the year 2000. The primary theme running through the show concerned a prediction that no one in the year 2000 would have to work more than a day or two a week, and our leisure time would go through the roof. With so much free time, you can imagine that we would not want our vacations or day trips ruined by nasty weather, and therefore we should quickly develop a way to control the weather, shaping it to our needs. Taking the lightning from the clouds or the wind from the tornadoes were among the predictions, yet they were careful to note that we might not take weather control too far because of political reasons. Unfortunately, we here in the 2000’s still work full weeks, and we still get our picnics rained out from time to time.
One could argue that weather control is an easier problem than AGI (e.g. powerful enough equipment could "unwind" storms and/or redirect weather masses).
↑ comment by magfrump · 2011-11-23T04:25:53.611Z · LW(p) · GW(p)
perhaps this is a poor place to begin this, but I'll propose a couple of things I would think count as milestones toward a theory of AGI.
- AIs producing original, valuable work (automated proof systems are an example; I believe there is algorithmically generated music as well that isn't awful though I'm not sure)
- parsing natural language queries (Watson is a huge accomplishment in this direction)
- systems which reference different subroutines as appropriate (this is present in any OS I'm sure) and which are modular in their subroutines
- automated search for new appropriate subroutine (something like, if I get a new iphone and say "start a game of words with friends with danny" the phone automatically downloads the words with friends app and searches for danny's profile--I don't think this exists at present but it seems realistic soon)
- emulation of living beings (i.e. a way of parsing a computation so that it behaves exactly like, for starters, C. Elegans; then more complex beings)
- AI that can learn "practical" skills (i.e. AIXI learning chess against itself)
- robotics that can accomplish practical tasks like all-terrain maneuvering and fine manipulation (existent)
- AI that can learn novel skills (i.e. AIXI learning chess by being placed in a "chess environment" rather than having the rules explained to it)
- Good emulation or API reverse engineering (like WINE) and especially theoretical results about reverse engineering
- automated bug fixing programs (I don't program enough to know how good debugging tools are)
- chatbots winning at Turing tests (iirc there are competitions and humans do not always shut out the chat bots)
These all seem like practical steps which would make me think that AGI was nearer; many of them have come to pass in the past decade, very few came before that, some seem close, some seem far away but achievable. There are certainly many more although I would guess many would be technical and I'm not sufficiently expert to provide opinions.
I'd also make the distinction that the weather machine claim relied on the social structural claim that people would only work a day or two a week; social structures notoriously change much more slowly, and no such assumption is necessary for AI to be studied.
↑ comment by Normal_Anomaly · 2011-11-23T15:38:59.979Z · LW(p) · GW(p)
AIs producing original, valuable work (automated proof systems are an example; I believe there is algorithmically generated music as well that isn't awful though I'm not sure)
Here is some computer-generated music. I don't have particularly refined taste, but I enjoy it. Note: the first link with all the short MP3s is from an earlier version of the program, which was intended only to imitate other composers.
↑ comment by Shmi (shminux) · 2011-11-23T21:28:12.245Z · LW(p) · GW(p)
My guess is that it will be not so much milestones as seemingly unrelated puzzle pieces suddenly and unexpectedly coming together. This is usually how things happen. From VCRs to USB storage to app markets, you name it. Some invisible threshold gets crossed, and different technologies come together to create a killer product. Chances are, some equivalent of an AGI will sneak up on us in a form no one will have foreseen.
↑ comment by magfrump · 2011-11-24T01:29:31.810Z · LW(p) · GW(p)
I agree that the eventual creation of AGI is likely to come from seemingly unrelated puzzle pieces coming together. On the other hand, anything that qualifies as an AGI is necessarily going to have the capabilities of a chat bot, a natural language parser, etc. etc. So these capabilities existing makes the puzzle of how to fit pieces together easier.
My point is simply that if you predict AGI in the next [time frame] you would expect to see [components of AGI] in [time frame], so I listed some things I would expect if I expected AGI soon, and I still expect all of those things soon. This makes it (in my mind) significantly different than just a "wild guess".
↑ comment by orthonormal · 2011-11-25T23:20:36.719Z · LW(p) · GW(p)
On the other hand, anything that qualifies as an AGI is necessarily going to have the capabilities of a chat bot, a natural language parser, etc. etc.
If the "seed AI" idea is right, this claim can't be taken for granted, especially if there's no optimization for Friendliness.
↑ comment by magfrump · 2011-11-26T02:38:24.491Z · LW(p) · GW(p)
I would make the case that anything that qualifies as an AGI would need to have some ability to interact with other agents, which would require an analogue of natural language processing, but I certainly agree that it isn't strictly necessary for an AI to come about. I do still think of it as (weak) positive evidence though.
↑ comment by orthonormal · 2011-11-26T05:40:07.875Z · LW(p) · GW(p)
Two things. First, a seed AI could present an existential risk without ever requiring natural language processing, for example by engineering nanotech.
Second, the absence of good natural language processing isn't great evidence that AI is far off, since even if it's a required component of the full AGI, the seed AI might start without it and then add that functionality after a few iterations of other self-improvements.
↑ comment by magfrump · 2011-11-26T06:28:12.020Z · LW(p) · GW(p)
I don't think that we disagree here very much but we are talking past each other a little bit.
I definitely agree with your first point; I simply wouldn't call such an AI fully general. It could easily destroy the world though.
I also agree with your second point, but I think in terms of a practical plan for people working on AI, natural language processing would be a place to start; having that technology means such a project is likely closer, as well as demonstrating that the technical capabilities aren't extremely far off. I don't think any state of natural language processing would count as strong evidence, but I do think it counts as weak evidence and something of a small milestone.
↑ comment by Logos01 · 2011-11-23T02:58:42.570Z · LW(p) · GW(p)
One could argue that weather control is an easier problem than AGI (e.g. powerful enough equipment could "unwind" storms and/or redirect weather masses).
Such equipment, however, would have to have access to more power than our civilization currently generates. So while it may be more of an engineering problem than a theoretical one, I believe that AGI is more accessible.
comment by XiXiDu · 2011-11-22T18:49:27.467Z · LW(p) · GW(p)
I boldly claim that my criticisms are better than those of the Tech Luminaries:
- Is an Intelligence Explosion a Disjunctive or Conjunctive Event?
- Why an Intelligence Explosion might be a Low-Priority Global Risk (as an addition: No Basic AI Drives and my recent comment here)
Also see:
- The Curve of Capability by rwallace
- The Betterness Explosion by Robin Hanson
- Is The City-ularity Near? by Robin Hanson
- A summary of Robin Hanson's positions by Robin Hanson
- How far can AI jump? by Katja Grace
↑ comment by timtyler · 2011-11-23T18:30:34.652Z · LW(p) · GW(p)
Regarding the "also see" material in the parent:
The Curve of Capability makes only a little sense, IMO. The other articles are mostly about a tiny minority breaking away from the rest of civilization. That seems rather unrealistic to me too - but things become more plausible when we consider the possibility of large coalitions, or most of the planet, winning.
comment by [deleted] · 2011-11-22T23:16:47.560Z · LW(p) · GW(p)
Crocker’s rules declared, because I expect this may agitate some people:
(1) the claim that intelligence explosion is plausible, (2) the claim that intelligence explosion is likely to occur within the next 150 years, and (3) the claim that intelligence explosion would have a massive impact on civilization.
I accept (1) and (3). Where I depart somewhat from the LW consensus is that I doubt anyone is going to accept the idea that the singularity (in its intelligence explosion form) should go ahead without some important intervening stages, stages that are likely to last for longer than 150 years.
CEV is a bad idea. I am sympathetic towards the mindset of the people who advocate it, but even I would be in the pitchfork-wielding gang if it looked like someone was actually going to implement it. Try to imagine that this was actually going to happen next year, rather than being a fun thing discussed on an internet forum – beware far mode bias. To quote Robin Hanson in a recent OB post:
Immersing ourselves in images of things large in space, time, and social distance puts us into a “transcendent” far mode where positive feelings are strong, our basic ideals are more visible than practical constraints, and where analysis takes a back seat to metaphor.
I don’t trust fallible human programmers to implement soundly “knowing more”, “thinking faster” and “growing up together”, and deal with the problems of “muddle”, “spread” and “distance”. The idea of a “last judge” as a safety measure seems like a sticking plaster on a gaping wound. Neither do I accept that including all of humanity is anything other than misplaced idealism. Some people seem to think that even a faulty CEV initial dynamic magically corrects itself into a good one; that might happen, but not with nearly a high enough probability.
Another problem that has been scarcely discussed: what happens if, as Eliezer’s CEV document suggests might happen, the thing shuts itself down or the last judge decides it isn’t safe? And the second time we try it, too?
But the problem remains that a superintelligence needs a full set of human values in order for it to be safe. I don’t see any other tenable proposals for implementing this apart from CEV, therefore I conclude that building a recursively improving superintelligence is basically just unsafe, given present human competence levels. Given that fact, to conclude that because we are likely to obtain the means to bring about a (positive or negative) singularity at some point we cannot prevent it from happening indefinitely is like saying that because we possess nuclear technology we can’t prevent a nuclear extinction event from happening indefinitely. If FAI is an “impossible” challenge and NAI (No AI) is merely very difficult, there is something to recommend NAI.
That doesn’t mean to say that I disapprove of what Eliezer et al are doing. The singularity is definitely an extremely important thing to be discussing. I just think that the end product is likely to be widespread recognition of the peril of playing around with AI, and this (along with appropriately severe action taken to reduce the peril) is just as much a solution to Yudkowsky’s fear that a bunch of above-average AI scientists can “learn from each other’s partial successes and accumulate hacks as a community” as is trying to beat them to the punch by rushing to create a positive singularity.
Although this is unfair, there is probably some truth to the idea that people who devote their lives to studying AI and the intelligence explosion are likely to be biased towards solutions to the problem in which their work achieves something really positive, rather than merely acting as a warning. That is not to pre-judge the issue, but merely to recommend that a little more skepticism than normal is due.
On the other hand there is another tenable approach to the singularity that is less widely recognised here. Wei Dai’s posts here and here seem very sensible to me; he suggests that intelligence enhancement should have priority over FAI research:
Given that there are known ways to significantly increase the number of geniuses (i.e., von Neumann level, or IQ 180 and greater), by cloning or embryo selection, an obvious alternative Singularity strategy is to invest directly or indirectly in these technologies, and to try to mitigate existential risks (for example by attempting to delay all significant AI efforts) until they mature and bear fruit (in the form of adult genius-level FAI researchers). [...]
The chances of success seem higher, and if disaster does occur as a result of the intelligence amplification effort, we're more likely to be left with a future that is at least partly influenced by human values.
He quotes Eliezer as having said this (from pages 31-35 here):
If AI is naturally far more difficult than intelligence enhancement, no harm done; if building a 747 is naturally easier than inflating a bird, then the wait could be fatal. There is a relatively small region of possibility within which deliberately not working on Friendly AI could possibly help, and a large region within which it would be either irrelevant or harmful.
Wei Dai points out that it is worth distinguishing the ease of creating uFAI in comparison to FAI, rather than lumping these together as “AI”.
I also think that the difference in outcomes between “deliberately not working on Friendly AI” and “treating unsupervised AI work as a terrible crime” are worth distinguishing.
Even if human intelligence enhancement is possible, there are real, difficult safety considerations; I would have to seriously ask whether we wanted Friendly AI to precede intelligence enhancement, rather than vice versa.
This depends on the probability one assigns to CEV working. My probability that it would work given present human competence levels is low, and my probability that anyone would actually let it happen is very low.
The benefit of intelligence enhancement is that changes can be as unhurried and incremental as one likes (assuming that the risk of someone building uFAI is not considered to be imminent, due to stringent security measures); CEV is more a leap of faith.
↑ comment by Giles · 2011-11-24T04:04:10.475Z · LW(p) · GW(p)
Seconded - I'd like to see some material from lukeprog or somebody else at SI addressing these kinds of concerns. A "Criticisms of CEV" page maybe?
[Edit: just to clarify, I wasn't seconding the part about the pitchforks and I'm not sure that either IA or an AGI ban is an obviously better strategy. But I agree with everything else here]
↑ comment by falenas108 · 2011-11-22T23:40:41.577Z · LW(p) · GW(p)
Given that fact, to conclude that because we are likely to obtain the means to bring about a (positive or negative) singularity at some point we cannot prevent it from happening indefinitely is like saying that because we possess nuclear technology we can’t prevent a nuclear extinction event from happening indefinitely.
The problem is that for a nuclear explosion to take place, the higher ups of some country have to approve it.
For AGI, all that has to occur is for the code to be leaked or hacked; after that, preventing the AGI from being implemented somewhere is an impossible task. And right now, no major online institution in the entire world is safe from being hacked.
↑ comment by [deleted] · 2011-11-22T23:50:11.262Z · LW(p) · GW(p)
Perhaps that is a motivation to completely (and effectively) prohibit research in the direction of creating superintelligent AI, licensed or otherwise, and concentrate entirely on human intelligence enhancement.
↑ comment by drethelin · 2011-11-22T23:58:30.400Z · LW(p) · GW(p)
Ignoring how difficult this would be (due to the ease of secret development, considering it would largely consist of code rather than easy-to-track hardware), even if every country in the world WANTED to cooperate on it, the real problem comes from the high potential value of defecting. Much like nuclear weapons development, it would take a lot more than sanctions to convince a rogue nation NOT to try and develop AI for its own benefit, should this become a plausible course of action.
↑ comment by [deleted] · 2011-11-23T00:41:36.106Z · LW(p) · GW(p)
the real problem comes from the high potential value of defecting
What would anyone think they stood to gain from creating an AI, if they understood the consequences as described by Yudkowsky et al?
The situation is not "much like nuclear weapons development", because nuclear weapons are actually a practical warfare device, and the comparison was not intended to imply this similarity. I just meant to say that we manage to keep nukes out of the hands of terrorists, so there is reason to be optimistic about our chances of preventing irresponsible or crazy people from successfully developing a recursively self-improving AI - it is difficult, but if creating and successfully implementing a provably safe FAI (without prior intelligence enhancement) is hopelessly difficult - even if only because the large majority of people wouldn't consent to it - then it may still be our best option.
↑ comment by drethelin · 2011-11-23T02:40:50.059Z · LW(p) · GW(p)
The same things WE hope to gain from creating AI. I do not trust North Korea (for example) to properly decide on the relative risks/rewards of any given course of action it can undertake.
↑ comment by [deleted] · 2011-11-23T10:42:10.760Z · LW(p) · GW(p)
OK but it isn't hard (or wouldn't be in the context we are discussing) to come to the understanding that creating an intelligence explosion renders the interests of any particular state irrelevant. I've seen no evidence that the North Koreans are that crazy.
The problem would be people who think that something like CEV, implemented by present-day humans, is actually safe - and the people liable to believe that are more likely to be the type of people found here, not North Koreans or other non-Westerners.
I'd also be interested in hearing your opinion on the security concerns should we attempt to implement CEV, and find that it shut itself down or produced an unacceptable output.
↑ comment by orthonormal · 2011-11-25T23:26:14.197Z · LW(p) · GW(p)
OK but it isn't hard (or wouldn't be in the context we are discussing) to come to the understanding that creating an intelligence explosion renders the interests of any particular state irrelevant.
If you're correct, then the best way to stave off the optimists from trying is to make an indisputable case for pessimism and disseminate it widely. Otherwise, eventually someone else will get optimistic, and won't see why they shouldn't give it a go.
↑ comment by [deleted] · 2011-11-25T23:42:29.863Z · LW(p) · GW(p)
I expect that once recognition of the intelligence explosion as a plausible scenario becomes mainstream, pessimism about the prospects of (unmodified) human programmers safely and successfully implementing CEV or some such thing will be the default, regardless of what certain AI researchers claim.
In that case, optimists are likely to have their activities forcefully curtailed. If this did not turn out to be the case, then I would consider "pro-pessimism" activism to change that state of affairs (assuming nothing happens to change my mind between now and then). At the moment however I support the activities of the Singularity Institute, because they are raising awareness of the problem (which is a prerequisite for state involvement) and they are highly responsible people. The worst state of affairs would be one in which no-one recognised the prospect of an intelligence explosion until it was too late.
ETA: I would be somewhat more supportive of a CEV in which only a select (and widely admired and recognised) group of humans was included. This seems to create an opportunity for the CEV initial dynamic implementation to be a compromise between intelligence enhancement and ordinary CEV, i.e. a small group of humans can be "prepared" and studied very carefully before the initial dynamic is switched on.
So really it's a complex situation, and my post above probably failed to express the degree of ambivalence that I feel regarding this subject.
comment by lavalamp · 2011-11-22T19:35:28.237Z · LW(p) · GW(p)
This is like the kid version of this page. Where are the good opposing arguments? These are all terrible...
↑ comment by Giles · 2011-11-24T04:39:10.317Z · LW(p) · GW(p)
Something about this page bothers me - the responses are included right there with the criticisms. It just gives off the impression that a criticism isn't going to appear until lukeprog has a response to it, or that he is going to write the criticism in a way that makes it easy to respond to, or something.
Maybe it's just me. But if I wanted to write this page, I would try and put myself into the mind of the other side and try to produce the most convincing smackdown of the intelligence explosion concept that I could. I'd think about what the responses would be, but only so that I could get the obvious responses to those responses in first. In other words, aim for DH7.
The responses could be collected and put on another page, or included here when this page is a bit more mature. Does anyone think this approach would help?
comment by antigonus · 2011-11-23T01:11:00.302Z · LW(p) · GW(p)
Distinguish positive and negative criticisms: Those aimed at demonstrating the unlikelihood of an intelligence explosion and those aimed at merely undermining the arguments/evidence for the likelihood of an intelligence explosion (thus moving the posterior probability of the explosion closer to its prior probability).
Here is the most important negative criticism of the intelligence explosion: Possible harsh diminishing returns of intelligence amplification. Let f(x, y) measure the difficulty (perhaps in expected amount of time to complete development) for an intelligence of IQ x to engineer an intelligence of IQ y. The claim that intelligence explodes is roughly equivalent to the thesis that f(x, x+1) decreases relatively quickly. What is the evidence for this claim? I haven't seen a huge amount. Chalmers briefly discusses the issue in his article on the singularity and points to how amplifying a human being's intelligence from average to Alan Turing's level has the effect of amplifying his intelligence-engineering ability from more or less nil to being able to design a basic computer. But "nil" and "basic computer" are strictly stupider than "average human" and "Alan Turing," respectively. It's evidence that a curve like f(x, x-1) - the difficulty of creating a being slightly stupider than yourself given your intelligence level - decreases relatively quickly. But the shapes of f(x, x+1) and f(x, x-1) are unrelated. The one can increase exponentially while the other decays exponentially. (Proof: set f(x, y) = e^(y^2 - x^2).)
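Spelling out that parenthetical example (a sketch; f is the hypothetical difficulty function defined above, not anything measured):

```latex
\text{Take } f(x, y) = e^{\,y^{2} - x^{2}}. \text{ Then:} \\
f(x, x+1) = e^{(x+1)^{2} - x^{2}} = e^{2x+1} \quad \text{(grows exponentially in } x\text{)} \\
f(x, x-1) = e^{(x-1)^{2} - x^{2}} = e^{1-2x} \quad \text{(decays exponentially in } x\text{)}
```

So evidence about the ease of designing minds slightly stupider than oneself does not, by itself, constrain the shape of f(x, x+1).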
See also JoshuaZ's insightful comment here on how some of the concrete problems involved in intelligence amplification are linked to some (very likely) computationally intractable problems from CS.
↑ comment by antigonus · 2011-11-23T06:52:33.748Z · LW(p) · GW(p)
Another thing: We need to distinguish between getting better at designing intelligences vs. getting better at designing intelligences which are in turn better than one's own. The claim that "the smarter you are, the better you are at designing intelligences" can be interpreted as stating that the function f(x, y) outlined above is decreasing for any fixed y. But the claim that the smarter you are, the easier it is to create an intelligence even smarter is totally different and equivalent to the aforementioned thesis about the shape of f(x, x+1).
I see the two claims conflated shockingly often, e.g., in Bostrom's article, where he simply states:
Once artificial intelligence reaches human level, there will be a positive feedback loop that will give the development a further boost. AIs would help constructing better AIs, which in turn would help building better AIs, and so forth.
and concludes that superintelligence inevitably follows with no intermediary reasoning on the software level. (Actually, he doesn't state that outright, but the sentence is at the beginning of the section entitled "Once there is human-level AI there will soon be superintelligence.") That an IQ 180 AI is (much) better at developing an IQ 190 AI than a human is doesn't imply that it can develop an IQ 190 AI faster than the human can develop the IQ 180 AI.
↑ comment by torekp · 2011-11-26T00:50:12.722Z · LW(p) · GW(p)
Here's a line of reasoning that seems to suggest the possibility of an interesting region of decreasing f(x, x+1). It focuses on human evolution and evolutionary algorithms.
Human intelligence appeared relatively recently through an evolutionary process. There doesn't seem to be much reason to believe that if the evolutionary process were allowed to continue (instead of being largely pre-empted by memetic and technological evolution) that future hominids wouldn't be considerably smarter. Suppose that evolutionary algorithms can be used to design a human-equivalent intelligence with minimal supervision/intervention by truly intelligent-design methods. In that case, we would expect with some substantial probability that carrying the evolution forward would lead to more intelligence. Since the evolutionary experiment is largely driven by brute-force computation, any increase in computing power underlying the evolutionary "playing field" would increase the rate of increase of intelligence of the evolving population.
I'm not an expert on or even practitioner of evolutionary design, so please criticize and correct this line of reasoning.
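For concreteness, here is a toy sketch of the kind of compute-driven evolutionary search being described. The bit-counting fitness function is an arbitrary stand-in (nothing like intelligence), and all parameters are illustrative; the only point illustrated is that, holding the algorithm fixed, a larger compute budget (population size times generations) tends to reach higher fitness.

```python
import random

def evolve(pop_size, generations, genome_len=20, seed=0):
    """Toy evolutionary search: fitness = number of 1-bits (a stand-in, not 'intelligence')."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)               # keep the fitter half as parents
        parents = pop[: pop_size // 2]
        children = []
        for parent in parents:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1     # single-bit mutation
            children.append(child)
        pop = parents + children
    return max(sum(individual) for individual in pop)

# More brute-force computation (bigger populations, more generations) reaches
# higher fitness sooner -- the argument above, in miniature.
for pop_size, generations in [(20, 10), (20, 40), (80, 40)]:
    print(pop_size, generations, evolve(pop_size, generations))
```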
↑ comment by antigonus · 2011-11-26T10:38:23.021Z · LW(p) · GW(p)
I agree there's good reason to imagine that, had further selective pressure on increased intelligence been applied in our evolutionary history, we probably would've ended up more intelligent on average. What's substantially less clear is whether we would've ended up much outside the present observed range of intelligence variation had this happened. If current human brain architecture happens to be very close to a local maximum of intelligence, then raising the average IQ by 50 points still may not get us to any IQ 200 individuals. So while there likely is a nearby region of decreasing f(x, x+1), it doesn't seem so obvious that it's wide enough to terminate in superintelligence. Given the notorious complexity of biological systems, it's extremely difficult to extrapolate anything about the theoretical limits of evolutionary optimization.
↑ comment by jacob_cannell · 2011-12-11T00:11:44.554Z · LW(p) · GW(p)
See also JoshuaZ's insightful comment here on how some of the concrete problems involved in intelligence amplification are linked to some (very likely) computationally intractable problems from CS.
Those insights are relevant and interesting for the type of self-improvement feedback loop which assumes unlimited improvement potential in algorithmic efficiency. However, there's the much more basic intelligence explosion which is just hardware driven.
Brain architecture certainly limits maximum practical intelligence, but does not determine it. Just as the relative effectiveness of current chess AI systems is limited by hardware but determined by software, human intelligence is limited by the brain but determined by acquired knowledge.
The hardware is qualitatively important only up to the point where you have something that is turing-complete. Beyond that the differences become quantitative: memory constrains program size, performance limits execution speed.
Even so, having AGIs that are 'just' at human-level IQ can still quickly lead to an intelligence explosion by speeding them up by a factor of a million and then creating trillions of them. IQ is a red herring anyway. It's a baseless anthropocentric measure that doesn't scale to the performance domains of super-intelligences. If you want a hard quantitative measure, simply use standard computational measures: i.e. a human brain is roughly a < 10^15-element circuit and does at most < 10^18 circuit ops per second.
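A back-of-envelope version of that hardware-driven scenario, using the comment's own rough figures purely for illustration (none of these numbers are measurements):

```python
# All inputs are the rough upper bounds quoted in the comment above.
brain_ops_per_sec = 1e18     # "< 10^18 circuit ops per second"
speedup = 1e6                # "speeding them up by a factor of a million"
copies = 1e12                # "creating trillions of them"

aggregate_ops_per_sec = brain_ops_per_sec * speedup * copies
subjective_years_per_day = speedup / 365   # one calendar day of 10^6x-speed thought

print(f"{aggregate_ops_per_sec:.0e} ops/s in aggregate")                      # 1e+36
print(f"~{subjective_years_per_day:,.0f} subjective years per calendar day")  # ~2,740
```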
comment by Morendil · 2011-11-22T18:41:30.761Z · LW(p) · GW(p)
the claim that intelligence explosion is likely to occur within the next 150 years
"We have made little progress toward cross-domain intelligence."
That is, while human-AI comparison is turning to the advantage of AIs, so-called, in an increasing number of narrow domains, the goal of cross-domain generalization of insight seems as elusive as ever, and there doesn't seem to be a hugely promising angle of attack (in the sense that you see AI researchers swarming to explore that angle).
comment by Morendil · 2011-11-22T18:25:42.098Z · LW(p) · GW(p)
Meat brains are badly constrained in ways that non-meat brains need not be.
Agreed; and there's an overbroad reading of this claim, which I'm kind of worried people encountering it (e.g. in the guise of Eliezer's argument on "the space of all possible minds") can inadvertently fall into: assuming that just because we can't imagine them, there are no constraints that apply to any class of non-meat brains.
The movie that runs through our minds when we imagine "AGI recursive self-improvement" goes something like a version of Hollywood hacker movies, except with the AI in the role of the hacker. It's sitting at a desk wearing mirrorshades, and looking for the line in its own code that has the parameter for "number of working memory items". When it finds it, it goes "aha!" and suddenly becomes twice as powerful as before.
That is vivid, but probably not how it works. For instance, "number of working memory items" can be a functional description of the system, without there being an easily identifiable bit of the code where it's determined, just as well in an AI's substrate as in a human mind.
comment by [deleted] · 2011-11-23T10:09:04.046Z · LW(p) · GW(p)
"Response: There are physical limits to how intelligent something can get, but they easily allow the intelligence required to transform the solar system."
How do you know this? I would like some more argument behind this response. In particular, what if some things are impossible? For instance, it might be true that cold fusion is unachievable, we will never travel faster than the speed of light (even by cheating), and nanotech suffers some hard limits.
comment by orthonormal · 2011-11-25T23:13:46.797Z · LW(p) · GW(p)
Science fiction becomes science fact
I grit my teeth whenever someone intentionally writes this cliche; "science fact" isn't a noun phrase one would use in any other context.
comment by lessdazed · 2011-11-22T18:56:36.771Z · LW(p) · GW(p)
"Intelligence requires experience and learning, so there is a limit to the speed at which even a machine can improve its own intelligence."
It's not like pure thought alone could have ruled out, say, Newtonian mechanics.
Replies from: gwern↑ comment by gwern · 2011-11-22T21:18:50.696Z · LW(p) · GW(p)
Close but maybe not quite right (because it does require observation of the night sky or at least noticing the fact that you have not been blasted to plasma) would be Olbers' paradox.
comment by DavidPlumpton · 2011-11-24T02:01:20.261Z · LW(p) · GW(p)
What phrase would you use to describe the failure to produce an AGI over the last 50 years? I suspect that 50 years from now we might be saying "Wow, that was hard, we've learnt a lot, specific kinds of problem solving work well, and computers are really fast now, but we still don't really know how to go about creating an AGI". In other words, the next 50 years might strongly resemble the last 50 from a very high level view.
comment by Morendil · 2011-11-22T22:46:48.619Z · LW(p) · GW(p)
In particular, we are amassing vast computational capacities without yet understanding the algorithmic keys to general intelligence.
We are amassing vast computational capacities without yet understanding much of how to use them effectively, period. The Raspberry Pi exceeds the specs of a Cray-1, and costs a few hundred thousand times less. What will it be used for? Playing games, mostly. And probably not games that make us smarter.
comment by scientism · 2011-11-22T19:26:52.755Z · LW(p) · GW(p)
A common criticism is that intelligence isn't defined, is poorly defined, cannot be defined, can only be defined relative to human beings, etc.
I guess you could lump general criticism of the computationalist approach to cognition in here (dynamicism, embodiment, ecological psychology, etc). Perhaps intelligence explosion scenarios can be constructed for alternative approaches but they seem far from obvious.
comment by Desrtopa · 2011-11-22T19:00:58.996Z · LW(p) · GW(p)
Example: "...it's more likely than not in my view that the two species will comfortably and more or less peacefully coexist--unless human interests start to interfere with those of the machines." (Casti, Tech Luminaries Address Singularity)
Even for someone who hasn't read the sequences, that sounds like a pretty huge "unless." If the two don't have exactly the same interests, why wouldn't their interests interfere?
↑ comment by Shmi (shminux) · 2011-11-22T20:10:29.381Z · LW(p) · GW(p)
Machines might not be interested in the messy human habitat, but would instead decide to go their own way (space, simulations, nicer subbranches of MWI, baby universes, etc.)
comment by faul_sname · 2011-11-24T03:10:56.404Z · LW(p) · GW(p)
"Humans will be able to keep up with AGI by using AGI's advancements themselves."
Response: Human engineers cannot take a powerful algorithm from AI and implement it in their own neurobiology. Moreover, once an AGI is improving its own intelligence, it's not clear that it would share the 'secrets' of these improvements with humans.
Why not? I can think of a couple possible explanations, but none that hold up to scrutiny. I'm probably missing something.
Humans can't alter their neural structure. This strikes me as unlikely. It is possible to create a circuit diagram of a neural structure, and likewise possible to electrically stimulate a neuron. By extension, it should be possible to replace a neuron with a circuit that does something identical to that neuron. This circuit could most certainly be altered. This may not be the most straightforward way of doing things, but it does mean alteration of neural structure is possible.
People would be unable to understand the developments the AI makes, and therefore could not implement them. The first part of this is probably true. I can't understand how some programs I wrote a year ago work, but I can see what the program does. I can't see why useful developments couldn't be implemented, even if we didn't understand how they worked.
People are very good at using tools like dictionaries, computer programs, and even other people to produce a more useful output than they could produce on their own. You can see this is true by looking at any intelligence test. How much better could someone do on a vocabulary test with access to a dictionary? Similarly, calculators break arithmetic tests, and a computer with MatLab would allow for far superior performance on current math/logic tests. This is because the human-tool system can outperform the human alone. Two people working together can likewise outperform one person. So tools of intelligence ≤ humans can be implemented as extensions of the human mind. I don't see any reason this rule would not hold for a tool of higher intelligence than the user.
↑ comment by Vaniver · 2011-11-26T18:47:04.858Z · LW(p) · GW(p)
Why not? I can think of a couple possible explanations, but none that hold up to scrutiny. I'm probably missing something.
This isn't directly related to engineering, but consider the narrow domain of medicine. You have human doctors, who go to medical school, see patients one at a time, and so on.
Then you have something like Doctor Watson, one of IBM's goals for the technology they showcased in the Jeopardy match. By processing human speech and test data, it could diagnose diseases on timescales comparable to human doctors, but with the benefit of seeing every patient in the country / world. With access to that much data, it could gain experience far more quickly, and know what to look for to find rare cases. (Indeed, it would probably notice many connections and correlations current doctors miss.)
The algorithms Watson uses wouldn't be useful for a human doctor: the things they learned in medical school would be more appropriate for them. The benefits Watson has (the ability to accrue experience at a far faster rate, and the ability to interact with its memory on a far more sophisticated level) aren't really things humans can learn.
In creative fields, it seems likely that human/tool hybrids will outperform tools alone, and that's the interesting case for intelligence explosion. (Algorithmic music generation seems to generally be paired with a human curator who chooses the more interesting bits to save.) Many fields are not creative fields, though.
comment by Logos01 · 2011-11-23T02:39:03.137Z · LW(p) · GW(p)
"AGI won't be a big deal; we already have 6 billion general intelligences on Earth."
Problem of scale. AGI would let us have 6 trillion general intelligences. Having a thousand intelligences that like working without pay for every human that exists "won't be a big deal"?
I think otherwise. And that's even assuming these AGIs are 'merely' "roughly human-equivalent".
↑ comment by faul_sname · 2011-11-24T01:55:20.208Z · LW(p) · GW(p)
These intelligences would still require power to run. Right now, even 1 trillion computers running at 100 watts would cost somewhere upwards of 50 billion dollars an hour, which is a far cry from "working without pay". Producing these 6 trillion general intelligences you speak of would also be nontrivial.
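A sketch of the electricity arithmetic behind that figure; the price per kilowatt-hour below is my assumption, and the original estimate presumably bundles in more than raw power (hardware, cooling, and so on):

```python
machines = 1e12        # one trillion computers
watts_each = 100
usd_per_kwh = 0.10     # assumed electricity price; the true figure varies widely

total_kw = machines * watts_each / 1000            # 1e11 kW of continuous draw
electricity_per_hour = total_kw * usd_per_kwh      # ~$10 billion/hour at $0.10/kWh
print(f"${electricity_per_hour:,.0f} per hour for electricity alone")
```

Raw electricity alone already lands in the tens-of-billions-per-hour neighbourhood once capital and cooling are added, which is the scale the comment is pointing at.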
That said, even one "human equivalent" AI could (and almost certainly would) far exceed human capabilities in certain domains. Several of these domains (e.g. self-improvement, energy production, computing power, and finance) would either directly or indirectly allow the AI to improve itself. Others would be impressive, but not particularly dangerous (natural language processing, for example).
Replies from: Logos01↑ comment by Logos01 · 2011-11-24T06:06:40.920Z · LW(p) · GW(p)
These intelligences would still require power to run. Right now, even 1 trillion computers running at 100 watts would cost somewhere upwards of 50 billion dollars an hour, which is a far cry from "working without pay". Producing these 6 trillion general intelligences you speak of would also be nontrivial.
Humans spend roughly 10% of their caloric intake on their brains, and Americans spend roughly the same fraction of their post-tax income on food -- so about 1% of the pay currently spent on an American worker goes towards their cognition, on average. The average American worker also works 46 out of the 168 hours in a week.
We have no way of knowing the material costs of constructing these devices, nor do we know how energy-efficient they will be compared to modern human brains. But given how much energy a biological brain sheds as waste heat, and how far it sits from the theoretical limits on computational efficiency and density, it's fairly safe to say that the operating cost of such machine brains would be essentially negligible compared to the cost of an average worker today. If we treat their electrical-operational costs as equivalent to the energy costs of a human, then AGIs will have 1% the costs of a human -- and they will work 4x as long -- so that's already a roughly 400:1 ratio between cost-per-human and cost-per-AGI on the operational budget. Then factor in the absence of commuting energy expenditures, the absence of plumbing investment, and other human-foible-related elements that machines just don't have, and the picture quickly transitions towards that 1,000 AGIs per person being a "reasonable" number for economic reasons. (Especially since at least a large minority of them will, early on, be applied towards economic expansion purposes.)
So certainly, these intelligences would still require power to run. But they'd require vastly less -- for the same economic output -- than would humans. And all that economic output will be bent towards economic goals ... such as generating energy.
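To spell out the arithmetic above (a sketch using the rough figures stated in this comment, not measured data):

```python
# Rough cost comparison between a human worker and a hypothetical AGI,
# using the figures stated above (all of them rough assumptions).
energy_share_of_pay = 0.10 * 0.10        # ~10% of calories to the brain x ~10% of pay on food ~= 1%
human_hours_per_week = 46
agi_hours_per_week = 168                 # runs around the clock

cost_ratio = 1 / energy_share_of_pay                      # ~100:1 on operating energy alone
hours_ratio = agi_hours_per_week / human_hours_per_week   # ~3.65x, rounded to 4x above
print(cost_ratio * hours_ratio)          # ~365:1, i.e. roughly the 400:1 figure quoted above
```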
That said, even one "human equivalent" AI could (and almost certainly would) far exceed human capabilities in certain domains.
I don't find this to be a given at all. Brain-emulations would possess, most likely, equivalent capacities to human brains. There is no guarantee that any given AGI will be capable of examining its own code and coming up with better solutions than the people who created it. Nor is there any guarantee that an AGI will be better than its creators at accessing computation. Your further claims regarding energy production and finance just make no sense whatsoever.
Certainly, there do exist many models of conceived AGI that would possess many of these traits, but quite frankly it's a bit presumptuous to assume that those models are "almost certainly" the ones that will come about. There are equally many where AGI will start out dumber than people in most ways, or where people will augment themselves routinely before AGI kicks off, etc.
Replies from: faul_sname↑ comment by faul_sname · 2011-11-24T07:21:16.784Z · LW(p) · GW(p)
We have no way of knowing the material costs of constructing these devices, nor do we know how energy-efficient they will be compared to modern human brains.
We can come up with at least a preliminary estimate of cost. The lowest estimate I have seen for the computational power of a brain is 38 pflops. The lowest cost of processing power is currently $1.80/gflops. This puts the cost of a whole-brain emulation at a bit under $70M in the best-case scenario. Assuming Moore's law holds, that number should halve every year. Comparatively speaking, human brains are far more energy-efficient than our computers. The best we have is about 2 gflops/watt, as opposed to roughly 3,800,000 gflops/watt for the human brain (assuming it draws 10 W). So unless there is a truly remarkable decrease (several orders of magnitude) in the cost of computing power, operating the equivalent power of a human brain will be costly.
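Here is the arithmetic behind those figures (a sketch; the 38 pflops estimate, the $1.80/gflops price, and the 10 W brain-power figure are the stated assumptions, not settled numbers):

```python
# Reproducing the cost and efficiency estimates above.
brain_pflops = 38
brain_gflops = brain_pflops * 1e6          # 3.8e7 gflops
dollars_per_gflops = 1.80

hardware_cost = brain_gflops * dollars_per_gflops
print(f"${hardware_cost / 1e6:.0f}M")      # ~$68M, i.e. "a bit under $70M"

best_machine_gflops_per_watt = 2
brain_gflops_per_watt = brain_gflops / 10  # assuming the brain draws ~10 W
print(brain_gflops_per_watt / best_machine_gflops_per_watt)  # brain ~1.9 million times more efficient
```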
That said, even one "human equivalent" AI could (and almost certainly would) far exceed human capabilities in certain domains.
I don't find this to be a given at all. Brain-emulations would possess, most likely, equivalent capacities to human brains.
I was unclear. I consider brain-emulations to be humans, not AIs. The majority of possible AGIs that are considered to be at the human level will almost certainly have different areas of strength and weakness from humans. In particular, they should be far superior in those areas where our specialized artificial intelligences already exceed human ability (math, chess, Jeopardy, etc.).
There are equally many where AGI will start out dumber than people in most ways, or where people will augment themselves routinely before AGI kicks off, etc.
I did stipulate "human-equivalent" AGI. I am well aware of the possibility that people will augment themselves before AGI comes about. We already do, just not through direct neural interfaces. I'm studying neuroscience with the goal of developing tools to augment intelligence.
Replies from: Logos01↑ comment by Logos01 · 2011-11-24T08:02:10.001Z · LW(p) · GW(p)
I did stipulate "human-equivalent" AGI.
Verbal sleight of hand: "human-equivalent" includes Karl Childers just as much as it does Sherlock Holmes.
We can come up with at least a preliminary estimate of cost. The lowest estimate I have seen for the computational power of a brain is 38 pflops. The lowest cost of processing power is currently $1.80/gflops. This puts the cost of a whole-brain emulation at a bit under $70M in the best-case scenario. Assuming Moore's law holds, that number should halve every year.
A couple of points here:
In 1961 that same computation would have cost about 38 × 10^6 gflops × ($1.1 × 10^12 per gflops) ≈ $4 × 10^19 -- roughly 40 million trillion dollars.
The cost per gflops is decreasing exponentially, not linearly -- faster than a straight Moore's Law extrapolation would suggest.
Moore's Law hasn't held for several years now regardless. (See: "Moore's Gap").
This all rests on the notion of silicon as the primary substrate. That's just not likely going forward; a major buzz amongst theoretical computer scientists is the "diamondoid substrate" -- chemical-vapor-deposited graphene doped with various substances to create a computing substrate that is several orders of magnitude 'better' than silicon, thanks to properties such as retaining semiconductive behavior at high temperatures, higher switching frequencies for its logic gates, and higher potential transistor density. (Much of the energy cost of modern computers goes into heat dissipation, by the way.)
If the cost per gflop continues to trend similarly over the next forty years, and if AGI doesn't become 'practicable' until 2050 (a common projection) -- then the cost per gflop may well be so negligible that the 1000:1 ratio would seem conservative.
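A rough extrapolation of that trend (purely illustrative; the starting price is the figure quoted above, and the halving time is an assumption -- historical halving times have varied):

```python
# Extrapolating cost per gflops under an assumed steady halving time.
start_year = 2011
start_price = 1.80        # $/gflops, the figure quoted above
halving_time = 1.5        # years; assumed for illustration

def price_in(year):
    return start_price * 0.5 ** ((year - start_year) / halving_time)

brain_gflops = 38e6
print(price_in(2050) * brain_gflops)  # ~$1 of hardware per brain-equivalent by 2050, under these assumptions
```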
I was unclear. I consider brain-emulations to be humans, not AIs.
Fair enough. I include emulations as a form of AGI, if for no other reason than there being a clear path to the goal.
In particular, they should be far superior in those areas where our specialized artificial intelligences already exceed human ability (math, chess, Jeopardy, etc.).
This does not follow. Fritz -- the 'inheritor' of Deep Blue -- was remarkable not because it was a superior chess-player to Deep Blue ... but because of the way in which it was worse. Fritz initially lost to Kasparov, yet was more interesting. Why? What made it so interesting?
Fritz had the ability to be fuzzy, unclear, and to 'forget' -- to 'make mistakes'. And this made it a superior AI implementation to the perfect monolithic number-cruncher.
I see this sentiment in people in AGI all the time -- that AGIs will be perfect, crystalline, numerical engines of inerrant geometry. I used to believe that myself. I've learned better. :)
We already do, just not through direct neural interfaces. I'm studying neuroscience with the goal of developing tools to augment intelligence.
Sir, in reading this I have only one suggestion: Let's say you and I drop the rest of this B.S. and you explain to me what you mean. Because I'm all guinea-pig-eager over here.
Replies from: faul_sname↑ comment by faul_sname · 2011-11-25T01:28:21.306Z · LW(p) · GW(p)
My original point was that, based on current trends, AGIs would remain prohibitively expensive to run, as power requirements have not been dropping with Moore's law. The graphene transistors look like they could solve the power requirement problem, so it looks like I was wrong.
When I said 'one "human equivalent" AI could (and almost certainly would) far exceed human capabilities in certain domains.' I simply meant that it is unlikely that a (nonhuman) human-level AI would possess exactly the same skillset as a human. If it was better than humans at something valuable, it would be a game changer, regardless of it being "only" human-level.
This idea seems not to be as clear to readers as it is to me, so let me explain. A human with a pocket calculator is far better at arithmetic than a human alone. Likewise, a human with a notebook is better at remembering than an unassisted human. This does not mean notebooks are very good at storing information; it means that people are bad at it. An AI that is as computationally expensive as a human will almost certainly be much better at the things people are phenomenally bad at.
Replies from: Logos01↑ comment by Logos01 · 2011-11-25T07:13:52.611Z · LW(p) · GW(p)
An AI that is as computationally expensive as a human will almost certainly be much better at the things people are phenomenally bad at.
I'm sorry, this is just plain not valid. I've already explained why. An AI that is "as computationally expensive as a human" is no more likely to be "much better at the things people are phenomenally bad at" than is a human. All of the computation that goes on in a human would quite likely need to be replicated by that AGI. And there is simply no guarantee that it would be any better than a human when it comes to how it accesses narrow AI mechanisms (storage methods, calculators, etc., etc..).
I really do wish I knew why you folks always seem to assume this is an inerrant truth of the world. But based on what I have seen -- it's just not very likely at all.
Replies from: faul_sname↑ comment by faul_sname · 2011-11-25T07:45:34.799Z · LW(p) · GW(p)
I'm not sure exactly what part of my statement you disagree with.
1. People are phenomenally bad at some things.
A pocket calculator is far better than a human when it comes to performing basic operations on numbers. Unless you believe that a calculator is amazingly good at arithmetic, it stands to reason that humans are phenomenally bad at it.
2. An AGI would be better than people in the areas where humans suck
I am aware of the many virtues of fuzzy, unclear processes for arriving at answers to complex questions through massively parallel processing. However, there are some processes that are better done serially and logically. I don't see why an AGI wouldn't pick this low-hanging fruit. My reasoning is as follows: please tell me which part is wrong.
I. An emulation (not even talking about nonhuman AGIs at this point) would be able to perform as well as a human with access to a computer with, say, Python.
II. The way humans currently interact with computers is horribly inefficient. We translate our thoughts into a programming language, which we then translate into a series of motor impulses corresponding to keystrokes. We then run the program, which displays the feedback in the form of pixels of different brightness, which are translated by our visual cortex into shapes, which we then process for meaning.
III. There exist more efficient methods that, at a minimum, could bypass the barriers of typing speed and visual processing speed. (I suspect this is the part you disagree with)
What have you seen that makes you think AGIs with some superior skills to humans won't exist?
Replies from: Logos01↑ comment by Logos01 · 2011-11-25T08:01:25.528Z · LW(p) · GW(p)
What have you seen that makes you think AGIs with some superior skills to humans won't exist?
Human-equivalent AGIs. That's a vital element, here. There's no reason to expect that the AGIs in question would be better-able to achieve output in most -- if not all -- areas. There is this ingrained assumption in people that AGIs would be able to interface with devices more directly -- but that just isn't exactly likely. Even if they do possess such interfaces, at the very least the early examples of such devices are quite likely to only be barely adequate to the task of being called "human-equivalent". Karl Childers rather than Sherlock Holmes.
Replies from: faul_sname↑ comment by faul_sname · 2011-11-25T08:17:31.633Z · LW(p) · GW(p)
There's no reason to expect that the AGIs in question would be better-able to achieve output in most -- if not all -- areas.
I said some, not most or all. I expect there to be relatively few of these areas, but large superiority in some particular minor skills can allow for drastically different results. It doesn't take general superiority.
There is this ingrained assumption in people that AGIs would be able to interface with devices more directly -- but that just isn't exactly likely.
There is a reason we have this assumption. Do you think that translating our thoughts into motor nerve impulses that operate a keyboard and processing the output of the system through our visual cortex before assigning meaning is the most efficient system?
Why is a superior interface unlikely?
Replies from: lessdazed, Logos01↑ comment by lessdazed · 2011-11-25T08:28:34.253Z · LW(p) · GW(p)
Do you think that translating our thoughts into motor nerve impulses that operate a keyboard and processing the output of the system through our visual cortex before assigning value is the most efficient system?
Why is a superior interface unlikely?
Humans can improve their interfacing with computers too... though we will likely interact more awkwardly than AGIs will be able to. From The Onion, my favorite prediction of the man-machine interface.
Replies from: faul_sname↑ comment by faul_sname · 2011-11-25T08:40:20.023Z · LW(p) · GW(p)
Is that "Humans can also improve their interfacing with computers" or "Humans can improve their interfacing with computers as well as AGI could"?
Replies from: lessdazed↑ comment by Logos01 · 2011-11-25T08:22:39.796Z · LW(p) · GW(p)
Why is a superior interface unlikely?
Because it will also require translation from one vehicle to another. The output of the original program will require translation into something other than raw log output. Language, and the processes that formulate it, doesn't happen much faster than the act of speaking itself. And we already have plenty of programs that translate speech into text; shorthand typists are able to keep up with multiple conversations in real time, no less.
And, as I have also said, early AGIs are likely to be idiots, not geniuses. (If for no other reason than that whole brain emulations are likely to require far more time per neuronal event than a real brain does. I have justification for this belief; that's how neuron simulations currently operate.)
Replies from: faul_sname↑ comment by faul_sname · 2011-11-25T08:33:37.348Z · LW(p) · GW(p)
Because it will also require translation from one vehicle to another.
Even if this is unavoidable, I find it highly unlikely that we are at or near maximum transmission speed for that information, particularly on the typing/speaking side of things.
And, as I have also said, early AGIs are likely to be idiots, not geniuses.
Yes. Early AGIs may well be fairly useless, even with the processing power of a chimpanzee brain. Around the time it is considered "human equivalent", however, a given AGI is quite likely to be far more formidable than an average human.
Replies from: Logos01↑ comment by Logos01 · 2011-11-25T08:58:03.535Z · LW(p) · GW(p)
Around the time it is considered "human equivalent", however, a given AGI is quite likely to be far more formidable than an average human.
I strongly disagree, and I have given reasons why this is so.
Replies from: faul_sname↑ comment by faul_sname · 2011-11-25T09:17:13.633Z · LW(p) · GW(p)
Basically what you are saying is that any AGI will be functionally identical to a human. I strongly disagree, and find your given reasons fall far short of convincing me.
Replies from: Logos01↑ comment by Logos01 · 2011-11-25T09:42:22.519Z · LW(p) · GW(p)
Basically what you are saying is that any AGI will be functionally identical to a human.
No. What I have said is that "human-equivalent AGI is not especially likely to be better at any given function than a human is." This is nearly tautological. I have explained that the various tasks you've mentioned already have methodologies that allow the function to be performed at or near realtime speeds.
There is this deep myth that AGIs will automatically -- necessarily -- be "hooked into" databases or have their thoughts recorded into terminals which will be able to be directly integrated with programs, and so on.
That is a myth. Could those things be done? Certainly. But is it guaranteed?
By no means. As the example of Fritz shows -- there is just no justification for this belief that merely because it's in a computer it will automatically have access to all of these resources we traditionally ascribe to computers. That's like saying that because a word-processor is on a computer it should be able to beat video games. It just doesn't follow.
So whether you're convinced or not, I really don't especially care at this point. I have given reasons -- plural -- for my position, and you have not justified yours at all. So far as I can tell, you have allowed a myth to get itself cached into your thoughts and are simply refusing to dislodge it.
Replies from: faul_sname↑ comment by faul_sname · 2011-11-25T10:13:11.586Z · LW(p) · GW(p)
No. What I have said is that "human-equivalent AGI is not especially likely to be better at any given function than a human is." This is nearly tautological.
This is nowhere near tautological, unless you define "human-level AGI" as "AGI that has roughly equivalent ability to humans in all domains" in which case the distinction is useless, as it basically specifies humans and possibly whole brain emulations, and the tiny, tiny fraction of nonhuman AGIs that are effectively human.
There is this deep myth that AGIs will automatically -- necessarily -- be "hooked into" databases or have their thoughts recorded into terminals which will be able to be directly integrated with programs, and so on.
Integration is not a binary state of direct or indirect. A pocket calculator is a more direct interface than a system where you mail in a query and receive the result in 4-6 weeks, despite the overall result being the same.
As the example of Fritz shows -- there is just no justification for this belief that merely because it's in a computer it will automatically have access to all of these resources we traditionally ascribe to computers.
I don't hold that belief, and if that's what you were arguing against, you are correct to oppose it. I think humans have access to the same resources, but the access is less direct. A gain in speed can lead to a gain in productivity.
comment by Thomas · 2011-11-22T21:50:19.265Z · LW(p) · GW(p)
There are two kinds of people here: those who think that an intelligence explosion is unlikely, and those who think it is uncontrollable.
I think it is likely AND controllable.
Replies from: TheOtherDave, amcknight↑ comment by TheOtherDave · 2011-11-22T22:04:51.446Z · LW(p) · GW(p)
From which we can infer that you aren't here.
Replies from: Thomas↑ comment by Thomas · 2011-11-23T06:32:40.996Z · LW(p) · GW(p)
Is there a fourth kind also? Those who think the IE is unlikely but controllable?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-11-23T14:44:34.711Z · LW(p) · GW(p)
I suspect that such people would not be terribly motivated to post about the IE in the first place, so available evidence is consistent with both their presence and their absence.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-11-23T15:19:33.290Z · LW(p) · GW(p)
But it's weak evidence of their absence, because them posting would be strong evidence of their presence.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-11-23T15:55:17.932Z · LW(p) · GW(p)
(nods) Certainly. But weak enough to be negligible compared to most people's likely priors.
I sometimes feel like we should simply have a macro that expands to this comment, its parent, and its grandparent.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-11-24T00:35:54.048Z · LW(p) · GW(p)
I sometimes feel like we should simply have a macro that expands to this comment, its parent, and its grandparent.
I'm not sure what you mean here. Is it something like,
There should be some piece of LW jargon encapsulating the idea that "the evidence is consistent with X and ~X, but favors X very weakly because absence of evidence is evidence of absence."
?
The closest thing we currently have is linking to the Absence of Evidence post.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-11-24T01:26:12.559Z · LW(p) · GW(p)
Something like, but more "the evidence is consistent with X and ~X, but favors X very weakly (because absence of evidence is evidence of absence), but sufficiently weakly that the posterior probability of X is roughly equal to the prior probability of X."
But I was mostly joking.
comment by gwern · 2011-11-22T19:37:34.152Z · LW(p) · GW(p)
On this page I will collect criticisms of (1) the claim that intelligence explosion is plausible, (2) the claim that intelligence explosion is likely to occur within the next 150 years, and (3) the claim that intelligence explosion would have a massive impact on civilization. Please suggest your own, citing the original source when possible.
Wouldn't this make more sense on the wiki?
Replies from: shminux↑ comment by Shmi (shminux) · 2011-11-22T20:04:53.029Z · LW(p) · GW(p)
People are more likely to reply to a post than to edit the wiki. Presumably once this post thread settles, its content will be adapted into a wiki page.
Replies from: lukeprog
comment by JoshuaZ · 2011-11-28T20:17:15.507Z · LW(p) · GW(p)
Another criticism that's worth mentioning is an observational rather than a modeling issue: if AI were a major part of the Great Filter, then we'd see it in the sky once the resulting AGIs started to control the space around them at a substantial fraction of c. The fact that we don't should discount the probability of AGIs undergoing intelligence explosions. How much it should do so is a function of how much of the Great Filter one thinks is behind us and how much one thinks is in front of us.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2011-11-29T01:06:01.494Z · LW(p) · GW(p)
No, that's backwards. If something takes over space at c, we never see it. The slower it expands, the easier it is to see, so the more our failure to observe it is evidence that it doesn't exist.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-11-29T01:20:52.185Z · LW(p) · GW(p)
In the hypothetical, it is expanding at a decent fraction of c, not at c. In order for us to see it, it needs to expand at a decent fraction of c. For example, suppose it expands at 1 meter/s. That's fast enough to wipe out a planet before its inhabitants can run away effectively, but how long would it take to have a noticeable effect on even the nearest star? Well, if the planet is the same distance from its sun as the Earth is from ours (8 light-minutes), it would take around 8 × 60 × 3 × 10^8 seconds, or roughly 4,500 years, just to engulf its own star. So at most we'd see something odd about that one star. But it won't ever expand fast enough to reach the next star on any relevant timescale.
The most easily noticeable things are things that travel at a decent fraction of c -- fast enough for us to notice, but not so fast that we couldn't notice before being wiped out. AGIs expanding at a decent fraction of c would fall into that category. If something does expand at c, you are correct that we won't notice.
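To put rough numbers on these timescales (a sketch; the distances are standard values, the speeds are the ones discussed in this thread):

```python
# Years for an expansion front to cross one AU and to reach the nearest star,
# at the speeds discussed above.
C = 3e8                      # speed of light, m/s
AU = 1.5e11                  # Earth-Sun distance, m
LIGHT_YEAR = 9.46e15         # m
NEAREST_STAR = 4.2 * LIGHT_YEAR
SECONDS_PER_YEAR = 3.15e7

for speed in (1.0, 0.0002 * C, 0.01 * C, 0.1 * C):   # 1 m/s, ~Helios probe, 0.01c, 0.1c
    years_to_au = AU / speed / SECONDS_PER_YEAR
    years_to_star = NEAREST_STAR / speed / SECONDS_PER_YEAR
    print(f"{speed:.1e} m/s: {years_to_au:.2g} yr to cross 1 AU, {years_to_star:.2g} yr to the nearest star")
```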
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2011-11-29T01:45:01.383Z · LW(p) · GW(p)
Something that expands at a fixed 1 m/s in all three regimes -- on a planet, within a solar system, and between stars -- qualifies as an artificial stupidity.
Something that expands at 0.1 c can be observed, but has heavy anthropic penalty: we should not be surprised not to see it.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-11-29T01:56:51.783Z · LW(p) · GW(p)
We don't have a good idea how quickly something can expand between stars. The gaps between stars are big and launching things fast is tough. The fastest we've ever launched something is the Helios probes, which at maximum velocity were a little over 0.0002c. I agree that 1 m/s would probably be artificially stupid. There's clearly a sweet range here. If, for example, your AI expanded at .01c, then it would never reach us in time if it started in another galaxy. Even your example of .1c (which is an extremely fast rate of expansion) means that one has to believe that most of the Filtration is not occurring due to AI.
If AI is the general filter and it is expanding at .1c, then we would need to live in an extremely rare light cone to see no sign of it. This argument is of course weak (and nearly useless) if one thinks that the vast majority of filtration is behind us. But either way, it strongly suggests that most of the Filter is not fast-expanding AI.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2011-11-29T02:47:44.471Z · LW(p) · GW(p)
Yes, if things expanding at 0.1c are common, then we should see galaxies containing them, but would we notice them? Would the galaxy look unnatural from this distance?
Not directly relevant, but I'm not sure how you're using filtration. I use it in a Fermi paradox sense: a filter is something that explains the failure to expand. An expanding filter is thus nonsense. I suppose you could use it in a doomsday argument sense - "Where does my reference class end?" - but I don't think that is usual.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-11-29T19:20:39.686Z · LW(p) · GW(p)
Yes, if things expanding at 0.1c are common, then we should see galaxies containing them, but would we notice them? Would the galaxy look unnatural from this distance?
This would depend on what exactly they are doing to those galaxies. If they are doing stellar engineering (e.g. making Dyson spheres, Matrioshka brains, stellar lifting) then we'd probably notice if it were any nearby galaxy. But conceivably something might try to deliberately hide its activity.
Not directly relevant, but I'm not sure how you're using filtration. I use it in a Fermi paradox sense: a filter is something that explains the failure to expand. An expanding filter is thus nonsense. I suppose you could use it in a doomsday argument sense - "Where does my reference class end?" - but I don't think that is usual.
Yes, I think I'm using it in some form closer to the second. In the first, strictly Fermi-problem sense, AGI is simply not a filter at all, which if anything makes the original point stronger.
comment by jacob_cannell · 2011-12-11T23:26:13.380Z · LW(p) · GW(p)
You are missing a train of argument which trumps all of these lines of criticism: the intelligence explosion is already upon us. Creating a modern microprocessor chip is a stupendously complex computational task that is far far beyond the capabilities of any number of un-amplified humans, no matter how intelligent. There was a time when chips were simple enough that a single human could do all of this intellectual work, but we are already decades past that point.
Today new chips are created by modern day super-intelligences (corporations) which in turn are composed of a mix of a variety of specialized meat brains running all kinds of natural language software and all kinds of computers running a complex ecosystem of programs in machine languages. Over time more and more of the total computational work is shifting from the meat brains to the machines. For a more concrete example of just one type of specialized machine that is a node in this overall meta-system, look at the big expensive emulation engines that are used to speed up final testing.
There is a trend to over-emphasize humans' computational/intellectual contributions. When machines get better at particular sub-tasks or domains such as chess, we simply exclude those from our broad notions of 'intelligence'. Thus we tend to define intelligence as whatever humans are currently best at.
From that perspective I can predict a future "intelligence extinction" event: eventually all of the tasks move over to the machine substrate, there are no tasks left favoring humans, and we are forced to conclude that intelligence no longer exists.
More seriously, the intelligence explosion is already well under way, we just aren't seeing it, for the most part. It's too big and abstract, too far from the simpler more visible physical manifestations we would expect.
That being said, there is certainly a further potential phase transition ahead when machine understanding of natural language matches and then exceeds the human level, when all of the existing human knowledge moves onto machines and they can run existing natural-language 'software'. I find it likely this will lead to a further stage in the explosion, perhaps a transition to another S-curve of growth and dislocation, but such a transition would not fundamentally change either the computational problems important for progress (such as creating new chips) or the nature of the solutions.
Computers already do a huge chunk of the important work, thus completely taking humans out of the loop can only lead to limited additional gains in these fields where computers are already important (research, engineering).
comment by JoshuaZ · 2011-11-25T19:05:52.638Z · LW(p) · GW(p)
Computational complexity may place strong limits on how much recursive self-improvement can occur, especially in a software context. See e.g. this prior discussion and this ongoing one. In particular, if P is not equal to NP in a strong sense this may place serious limits on software improvement.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-11-26T18:48:09.085Z · LW(p) · GW(p)
In particular, if P is not equal to NP in a strong sense this may place serious limits on software improvement.
Why oh why do you still believe this? In my mind, this is strongly analogous to pointing out that there are physical limits on how intelligent an AI can get, which is true, but for all practical purposes irrelevant, since these limits are way above what humans can do, given our state of knowledge. This would only make sense if we see a specific reason that all algorithms can't exhibit superintelligent competence in the real world (as opposed to ability to solve randomly generated standard-form problems whose complexity can be analyzed by human mathematicians), but we don't understand intelligence nearly enough to carry out such inferences.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-11-26T20:22:34.921Z · LW(p) · GW(p)
Why oh why do you still believe this? In my mind, this is strongly analogous to pointing out that there are physical limits on how intelligent an AI can get, which is true, but for all practical purposes irrelevant, since these limits are way above what humans can do, given our state of knowledge.
This is not a good analogy at all. The probable scale of the difference is what matters here. In this sort of context we're extremely far from physical limitations mattering, as one can see, for example, from the fact that Koomey's law can continue for about forty years before hitting physical limits. (It will likely break down before then, but that's not the point.) In contrast, our understanding of the limits from computational complexity is in some respects stricter and in other respects weaker. The conjectured limits from, for example, strong versions of the exponential time hypothesis place much more severe bounds on what can occur.
It is important to note here that these sorts of limits are relevant primarily in the context of a software only or primarily software only recursive self-improvement. For essentially the reasons you outline (the large amount of apparent room for physical improvement), it seems likely that this will not matter much for an AGI that has much in the way of ability to discover/construct new physical systems. (This does imply some limits in that form, but they are likely to be comparatively weak).
comment by roland · 2011-11-23T12:45:53.752Z · LW(p) · GW(p)
The main problems that I see are as Eliezer said at the Singularity Summit: there are problems regarding AGI that we don't know how to solve even in principle (I'm not sure if this applies to AGI in general or only to FAGI). So it might well be that we won't solve these problems ever.
The most difficult part will be to ensure the friendliness of the AI. The biggest danger is someone else carelessly making an AGI that is not friendly.
comment by DanielLC · 2011-11-22T23:55:43.849Z · LW(p) · GW(p)
I have a reason to believe it has less than a 50% chance of being possible. Does that count?
I figure that after the low-hanging fruit is taken care of, it simply becomes a question of whether a unit of additional intelligence is enough to add another unit of intelligence. If the feedback constant is less than one, intelligence growth stops. If it is greater, intelligence grows until the constant falls below one. The constant will vary somewhat with intelligence, and it would have to fall below one eventually. We have no way of knowing what the feedback constant is, so we'd have to guess that there's a 50% chance of it being above 1, and a 50% chance of being below. Furthermore, it's unlikely to be right next to 1, so if an explosion is possible, the result will most likely get pretty darn intelligent.
Also, I figure that if the constant is below 1, or even close, humanity will be incapable of creating an AI from scratch, though if they get uploaded they could improve themselves.
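A minimal simulation of this model (the numbers are arbitrary; the only point is the qualitative difference between a constant below 1 and one above it):

```python
# Toy model of recursive self-improvement: each increment of intelligence
# enables a further increment k times as large. Illustrative only.
def total_gain(k, initial_boost=1.0, steps=100):
    gain, increment = 0.0, initial_boost
    for _ in range(steps):
        gain += increment
        increment *= k     # the next round of improvement enabled by this one
    return gain

print(total_gain(0.5))     # converges to ~2x the initial boost: growth fizzles out
print(total_gain(1.1))     # grows without bound (until, eventually, k drops below 1)
```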
Replies from: jsteinhardt, lessdazed↑ comment by jsteinhardt · 2011-11-23T01:45:09.828Z · LW(p) · GW(p)
We have no way of knowing what the feedback constant is, so we'd have to guess that there's a 50% chance of it being above 1, and a 50% chance of being below.
Not being able to determine what the constant is doesn't mean that there is a 50-50 chance that it is larger than 1. In particular, what in your logic prevents one from also concluding that there is also a 50-50 chance of it being larger than 2?
Replies from: DanielLC↑ comment by DanielLC · 2011-11-23T02:28:18.108Z · LW(p) · GW(p)
It can't be less than zero. From what I understand about priors, the maximum entropy prior would be a logarithmic prior. A more reasonable prior would be a log-normal prior with its mean at 1 and a high standard deviation.
Replies from: jsteinhardt↑ comment by jsteinhardt · 2011-11-24T00:48:44.667Z · LW(p) · GW(p)
By logarithmic do you mean p(x) = exp(-x)? That would only have an entropy of 1, I believe, whereas one can easily obtain unboundedly large amounts of entropy, or even infinite entropy (for instance, p(x) = a exp(-a x) has entropy 1-log(a), so letting a go to zero yields arbitrarily large entropy).
Also, as I've noted before, entropy doesn't make that much sense for continuous distributions.
Replies from: DanielLC↑ comment by DanielLC · 2011-11-24T01:18:12.109Z · LW(p) · GW(p)
I mean p(x) = 1/x
I think it's Jeffreys prior or something. Anyway, it seems like a good prior. It doesn't have any arbitrary constants in it like you'd need with p(x) = exp(-x). If you change the scale, the prior stays the same.
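The property being appealed to here (a standard fact about the log-uniform prior for a scale parameter, stated for reference):

```latex
% Scale invariance of p(x) \propto 1/x: rescaling x = c y leaves the prior's form unchanged.
p(x)\,dx \;\propto\; \frac{dx}{x}
\quad\xrightarrow{\;x \,=\, c\,y\;}\quad
\frac{c\,dy}{c\,y} \;=\; \frac{dy}{y}
```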
Replies from: jsteinhardt↑ comment by jsteinhardt · 2011-11-24T02:34:15.301Z · LW(p) · GW(p)
p(x) = 1/x isn't an integrable function (diverges at both 0 and infinity).
(My real objection is more that it's pretty unlikely that we really have so little information that we have to quibble about which prior to use. It's also good to be aware of the mathematical difficulties inherent in trying to be an "objective Bayesian", but the real problem is that it's not very helpful for making more accurate empirical predictions.)
Replies from: DanielLC↑ comment by DanielLC · 2011-11-24T03:22:47.907Z · LW(p) · GW(p)
p(x) = 1/x isn't an integrable function
Which is why I said a log-normal prior would be more reasonable.
My real objection is more that it's pretty unlikely that we really have so little information that we have to quibble about which prior to use.
How much information do we have? We know that we haven't managed to build an AI in 40 years, and that's about it.
We probably have enough information if we can process it right, but because we don't know how, we're best off sticking close to the prior.
Replies from: jsteinhardt↑ comment by jsteinhardt · 2011-11-24T03:39:33.887Z · LW(p) · GW(p)
Which is why I said a log-normal prior would be more reasonable.
Why a log-normal prior with mu = 0? Why not some other value for the location parameter? Log-normal makes pretty strong assumptions, which aren't justified if we for all practical purposes we have no information about the feedback constant.
How much information do we have? We know that we haven't managed to build an AI in 40 years, and that's about it.
We may have little specific information about AIs, but we have tons of information about feedback laws, and some information about self-improving systems in general*. I agree that it can be tricky to convert this information to a probability, but that just seems to be an argument against using probabilities in general. Whatever makes it hard to arrive at a good posterior should also make it hard to arrive at a good prior.
(I'm being slightly vague here for the purpose of exposition. I can make these statements more precise if you prefer.)
(* See for instance the Yudkowsky-Hanson AI Foom Debate.)
↑ comment by lessdazed · 2011-11-23T07:19:16.959Z · LW(p) · GW(p)
If the feedback constant is less than one, intelligence growth stops.
You should distinguish between exponential and linear growth.
First, the feedback constant is different for every level of intelligence.
Whenever the constant is greater than one, the machine is limited by the work of actually making the improvements; its growth is not well characterized as limited by its intelligence and should instead be thought of as limited by its resources.
Whenever the constant is less than one and greater than zero, intelligence growth is only linear, but it is not zero. If the constant remains low enough for long enough, then whole stretches of time -- whole series of iterations -- can show sub-exponential growth even where the constant is locally above one.
The relationship between the AI's growth rate and our assisted intelligence growth rate (including FAIs, paper and pen, computers, etc.) is most of what is important, with the tie-breaker being our starting resources.
An AI with fast linear growth between patches of exponential growth, or even one with only fast linear growth, would quickly outclass humans' thinking.
Replies from: DanielLC↑ comment by DanielLC · 2011-11-23T07:52:39.327Z · LW(p) · GW(p)
First, the feedback constant is different for every level of intelligence.
I meant to mention that, but I didn't. It looks like I didn't so much forget as write an answer so garbled you can't really tell what I'm trying to say. I'll fix that.
Anyway, it will move around as the intelligence changes, but I figure it would be far enough from one that it won't cross it for a while. Either the intelligence is sufficiently advanced before the constant goes below one, or there's no way you'd ever be able to get something intelligent enough to recursively self-improve.
Whenever the constant is less than one and greater than zero, intelligence growth is only linear, but it is not zero.
No, it's zero, or at least asymptotic. If each additional IQ point allows you to work out how to grant yourself half an IQ point, you'll only ever get twice as many extra IQ points as you started with.
Having extra time will be somewhat helpful, but this is limited. If you get extra time, you'd be able to accomplish harder problems, but you won't be able to accomplish all problems. This will mean that the long-term feedback constant is somewhat higher, but if it's nowhere near one to begin with, that won't matter much.
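The arithmetic behind "twice as many extra IQ points" is just a geometric series (stated here for reference, with k the feedback constant and \Delta the initial boost):

```latex
% Total gain from an initial boost \Delta with feedback constant k < 1:
\Delta \left( 1 + k + k^2 + \cdots \right) \;=\; \frac{\Delta}{1 - k},
% which for k = 1/2 equals 2\Delta: the gain is finite, so growth stalls.
```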
Replies from: lessdazed↑ comment by lessdazed · 2011-11-23T10:05:42.342Z · LW(p) · GW(p)
Were you using "feedback constant" to mean the second derivative of intelligence, and assuming each increase in intelligence will be more difficult than the previous one (accounting for size difference)? I took "feedback constant" to mean the first derivative. I shouldn't have used an existing term and should have said what I meant directly.
Replies from: DanielLC↑ comment by DanielLC · 2011-11-23T18:13:26.863Z · LW(p) · GW(p)
I used "feedback constant" to mean the amount of intelligence an additional unit of intelligence would allow you to bring about (before using the additional unit of intelligence). For example, if at an IQ of 1000 you can design a brain with an IQ of 1010, but with an IQ of 1001 you can design a brain with an IQ of 1012, the feedback constant is two.
It's the first derivative of the most intelligent brain you can design in terms of your own intelligence.
Looking at it again, it seems that the feedback constant and whether or not we are capable of designing better brains aren't completely tied together. It may be that someone with an IQ of 100 can design a brain with an IQ of 10, and someone with an IQ of 101 can design a brain with an IQ of 12, so the feedback constant is two, but you can't get enough intelligence in the first place. Similarly, the feedback constant could be less than one, but we could nonetheless be able to make brains more intelligent than us, just without an intelligence explosion. I'm not sure how much the two correlate.
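Restating that definition in symbols (my own notation, not anything standard): write I_d(I) for the highest intelligence a designer of intelligence I can build; then

```latex
k(I) \;=\; \frac{d\, I_d(I)}{d I},
% and a takeoff additionally requires I_d(I) > I at the starting point --
% the two conditions are independent, as the IQ-100/IQ-10 example above shows.
```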
comment by timtyler · 2011-11-22T21:00:48.535Z · LW(p) · GW(p)
Actually, the intelligence explosion has been in progress for a long time now -- and effective intelligence is already increasing with unprecedented rapidity.
It is a combination of bad terminology and misunderstandings that pictures an explosion in intelligence as being a future event.
Replies from: gwern↑ comment by gwern · 2011-11-22T22:10:57.635Z · LW(p) · GW(p)
Increasing? What makes you think the Flynn effect is still operating in well-off populations? http://en.wikipedia.org/wiki/Flynn_effect#Possible_end_of_progression
And if you look at smart populations, they're quite stagnant. Check the graphs in the 2011 paper "The Flynn effect puzzle: A 30-year examination from the right tail of the ability distribution provides some missing pieces" -- the trends are pretty noisy, with large falls during some periods, and while the authors focus on the math results because you can pull an increase out of them, the English/verbal tests fall overall.
Replies from: timtyler↑ comment by timtyler · 2011-11-22T22:38:03.815Z · LW(p) · GW(p)
Well, the Flynn effect will not go on forever. I never claimed anything about elite subpopulations. Effective intelligence has also long been increasing via machine-based intelligence augmentation, though - and that trend will eventually dominate.
Replies from: gwern↑ comment by gwern · 2011-11-22T22:46:59.554Z · LW(p) · GW(p)
Well, the Flynn effect will not go on forever. I never claimed anything about elite subpopulations.
If the explosion has already stopped in some populations, then whence the present tense in "Actually, the intelligence explosion has been in progress for a long time now -- and effective intelligence is already increasing with unprecedented rapidity."
Shouldn't that have read, "Actually, the intelligence explosion was in effect for a long time, and effective intelligence had increased with unprecedented rapidity."?
Replies from: timtyler↑ comment by timtyler · 2011-11-23T00:04:11.585Z · LW(p) · GW(p)
So: by "effective intelligence" I'm mostly talking about man plus computer systems. People are quite a bit smarter if they can use a camera, a net connection and have their test processed by a test-solving sweatshop. Expert systems are rising above human capabilities in many domains - in the form of large data centres.