AI Advantages [Gems from the Wiki]
post by habryka (habryka4), Kaj_Sotala · 2020-09-22T22:44:36.671Z · LW · GW · 7 comments
This is a link post for https://www.lesswrong.com/tag/ai-advantages
Contents
- Hardware advantages
- Self-improvement capabilities
- Co-operative advantages
- Human handicaps
- References
During the LessWrong 1.0 Wiki Import [LW · GW] we (the LessWrong team) discovered a number of great articles that most of the LessWrong team hadn't read before. Since we expect many others also haven't read these, we are creating a series of the best posts from the Wiki to help give those hidden gems some more time to shine.
The original wiki article was written entirely by Kaj Sotala [LW · GW], whom I've added as a coauthor on this post. Thank you for your work on the wiki!
AI advantages are various factors that might favor AIs should a conflict ever arise between them and humans. These can be classified as hardware advantages, self-improvement capabilities, co-operative advantages, and human handicaps.
Hardware advantages
- Superior processing power: Having more serial processing power would let an AI think faster than humans, while having more parallel processing power and more memory would let it think about more things at once.
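As a rough illustration, a mind with more parallel capacity can keep several independent evaluations in flight at once. Here is a minimal Python sketch of that idea, with a toy scoring function standing in for a real train of thought:

```python
from concurrent.futures import ProcessPoolExecutor

def evaluate_option(option):
    # Toy stand-in for one independent "train of thought":
    # a CPU-bound evaluation of a candidate plan.
    return option, sum(i * i for i in range(option)) % 1_000_003

if __name__ == "__main__":
    options = [100_000, 200_000, 300_000, 400_000]
    # A serial mind must consider these one at a time; with more
    # parallel hardware, all four evaluations can run simultaneously.
    with ProcessPoolExecutor() as pool:
        for option, score in pool.map(evaluate_option, options):
            print(f"option {option}: score {score}")
```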
Self-improvement capabilities
An AI with access to its source code may directly modify the way it thinks, or create a modified version of itself. An AI can intentionally be built in a manner that is easy to understand and modify, and may even read its own design documents. Self-improvement capabilities may enable recursive self-improvement [? · GW] to occur, thereby triggering an intelligence explosion [? · GW].
- Improving algorithms: An AI may modify its existing algorithms, e.g. to make them faster, consume less memory, or rely on fewer assumptions (see the sketch after this list).
- Designing new mental modules: A mental module is a part of a mind that specializes in processing a certain kind of information. An AI could create entirely new kinds of modules, custom-tailored for specific problems.
- Modifiable motivation systems: Humans frequently suffer from problems such as procrastination, boredom, mental fatigue, and burnout. A mind which did not become bored or tired with its work would have a clear advantage over humans.
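Here is a minimal sketch of the "improving algorithms" item, assuming the crude heuristic of keeping whichever interchangeable implementation benchmarks fastest; a real self-improving system would also need to verify that the replacement preserves behavior:

```python
import timeit

def has_duplicates_naive(xs):
    # O(n^2): compare every pair of elements.
    return any(xs[i] == xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs)))

def has_duplicates_fast(xs):
    # O(n): trade a little memory for speed using a hash set.
    return len(set(xs)) < len(xs)

data = list(range(2_000))

# Crude "self-improvement" step: benchmark interchangeable implementations
# of the same behavior and rebind the name to the fastest one.
has_duplicates = min(
    [has_duplicates_naive, has_duplicates_fast],
    key=lambda fn: timeit.timeit(lambda: fn(data), number=3),
)
print("selected:", has_duplicates.__name__)  # expected: has_duplicates_fast
```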
Co-operative advantages
- Copyability: A digital mind can be copied very quickly, and doing so has no cost other than access to the hardware required to run it.
- Perfect co-operation: Minds might be constructed to lack any self-interest. Such minds could share the same goal system and co-operate perfectly with one another.
- Superior communication: AIs could communicate with each other at much higher bandwidths than humans, and modify themselves to understand each other better.
- Transfer of skills: To the extent that skills can be modularized, digital minds could create self-contained skill modules to be shared with others.
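To the extent that a skill really can be reduced to a self-contained artifact (a big assumption), transferring it is just serialization. A minimal sketch, with a toy lookup table standing in for learned weights or a trained policy:

```python
import pickle

# Toy "skill module": a self-contained mapping from situations to actions,
# standing in for learned weights or a trained policy.
skill = {"red_light": "stop", "green_light": "go", "yellow_light": "slow"}

blob = pickle.dumps(skill)       # agent A exports the skill...
imported = pickle.loads(blob)    # ...agent B imports a bit-exact copy

assert imported == skill
print(imported["red_light"])     # the receiving agent now "has" the skill
```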
Human handicaps
Humans frequently reason in biased [? · GW] ways. AIs might be built to avoid such biases.
- Biases from computational limitations or false assumptions: Some human biases can be seen as assumptions or heuristics that fail to reason correctly in a modern environment, or as satisficing algorithms that do the best possible job given human computational resources.
- Human-centric biases: People tend to think of the capabilities of non-human minds, such as God or an artificial intelligence, as if the minds in question were human. This tendency persists even if humans are explicitly instructed to act otherwise.
- Biases from socially motivated cognition: It has also been proposed that humans have evolved to acquire beliefs which are socially beneficial, even if those beliefs weren't true.
References
- Kaj Sotala (2012): Advantages of Artificial Intelligences, Uploads, and Digital Minds. International Journal of Machine Consciousness 4 (1), 275-291.
7 comments
Comments sorted by top scores.
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-09-23T08:10:05.437Z · LW(p) · GW(p)
A few more ideas:
- How does "Quality intelligence" fit into this? For example, being disposed to come up with more useful concepts, more accurate predictions and models, more effective strategies, more persuasive arguments, more creative ideas. Perhaps the author meant quality intelligence to be subsumed under human handicaps or something, but I feel like it deserves its own section. It seems to me to be probably the most important of all the advantages AI might have.
- This is a minor one, but: AIs run on servers rather than bodies. They can be downloaded onto hard drives and transported or hidden very easily. (Including over the internet!) In some scenarios AIs have the disadvantage here (e.g. when it takes a supercomputer to run an AI and there is a war, the supercomputers can be targeted and destroyed), but in most scenarios AIs have the advantage, I think. (It's easy to assassinate even well-defended human leaders and scientists, but almost impossible to assassinate even a poorly-defended AI, since all it needs is a backup copy stored somewhere.)
- Another minor one: Humans need to sleep, AIs don't, at least not current designs. In intense competitions and conflicts that take place over a few days, this is most important; in competitions that take place over months it is more like, idk, a 3% efficiency gain from not having to delegate to the night-shift leader (risking occasionally delegating too much or too little). (In conflicts that last only a few days, important decisions are numerous and urgent enough that there's no way to delegate them efficiently between leaders, and you have to either let your best leader go sleepless or accept that your second-best leader will be taking some important decisions.) (This is all my armchair speculation btw, not super trustworthy lol)
EDIT to add more:
- Humans typically come with certain nonconsequentialist behaviors built-in, e.g. a sense of fairness, a tendency to get angry and take revenge under certain conditions, etc. Thus humans have certain somewhat-credible commitments that benefit them in most circumstances. This may be a disadvantage of AI, except that I suspect AIs might be able to make an even wider range of even more credible commitments. At least in principle they could. This is a broadly useful thing which is more powerful than it might seem; it helps you build and maintain coalitions and agreements while also helping you get more of what you want in bargaining and conflict situations. For more on this, see the commitment races problem. [AF · GW]
- Novelty. There is a data asymmetry that probably favors AI over humans in conflicts: there is loads of data on humans and how they behave, but there will be comparatively little on AIs. It's as if an alien landed on Earth and had access to Earth's internet and history books, but no human had access to alien internet or history books. An advantage like this may have been part of the explanation for why the conquistadors were able to conquer so much so easily. [LW · GW]
↑ comment by Kaj_Sotala · 2020-09-23T16:38:18.295Z · LW(p) · GW(p)
How does "Quality intelligence" fit into this? For example, being disposed to come up with more useful concepts, more accurate predictions and models, more effective strategies, more persuasive arguments, more creative ideas.
I think I meant this to be covered by "designing new mental modules", as in "the AI could custom-design a new mental module specialized for some particular domain, and then be better at coming up with more useful concepts etc. in that domain". The original paper has a longer discussion about it:
A mental module, in the sense of functional specialization (Cosmides and Tooby 1994; Barrett and Kurzban 2006), is a part of a mind that specializes in processing a certain kind of information. Specialized modules are much more effective than general-purpose ones, for the number of possible solutions to a problem in the general case is infinite. Research in a variety of fields, including artificial intelligence, developmental psychology, linguistics, perception and semantics has shown that a system must be predisposed to processing information within the domain in the right way or it will be lost in the sea of possibilities (Tooby and Cosmides 1992). Many problems within computer science are intractable in the general case, but can be efficiently solved by algorithms customized for specific special cases with useful properties that are not present in general (Cormen et al. 2009). Correspondingly, many specialized modules have been proposed for humans, including modules for cheater-detection, disgust, face recognition, fear, intuitive mechanics, jealousy, kin detection, language, number, spatial orientation, and theory of mind (Barrett and Kurzban 2006).
Specialization leads to efficiency: to the extent that regularities appear in a problem, an efficient solution to the problem will exploit those regularities (Kurzban 2010). A mind capable of modifying itself and designing new modules customized for specific tasks might eventually outperform biological minds in any domain, even presuming no hardware advantages. In particular, any improvements in a module specialized for creating new modules would have a disproportionate effect.
It is important to understand what specialization means in this context, for several competing interpretations exist. For instance, Bolhuis et al. (2011) argue against functional specialization in nature by citing examples of “domain-general learning rules” in animals. On the other hand, Barrett and Kurzban (2006) argue that even seemingly domain-general rules, such as the modus ponens rule of formal logic, operate in a restricted domain: representations in the form of if-then statements. This paper uses Barrett and Kurzban’s broader interpretation. Thus, in defining the domain of a module, what matters is not the content of the domain, but the formal properties of the processed information and the computational operations performed on the information. Positing functional modules in humans also does not imply genetic determination, nor that the modules could necessarily be localized to a specific part of the brain (Barrett and Kurzban 2006).
A special case of a new mental module is the design of a new sensory modality, such as that of vision or hearing. Yudkowsky (2007) discusses the notion of new modalities, and considers the detection and identification of invariants to be one of the defining features of a modality. In vision, changes in lighting conditions may entirely change the wavelength of light that is reflected off a blue object, but it is still perceived as blue. The sensory modality of vision is then concerned with, among other things, extracting the invariant features that allow an object to be recognized as being of a specific color even under varying lighting.
Brooks (1987) mentions invisibility as an essential difficulty in software engineering. Software cannot be visualized in the same way physical products can be, and any visualization can only cover a small part of the software product. Yudkowsky (2007) suggests a codic cortex designed to model code the same way that the human visual cortex is evolved to model the world around us. Whereas the designer of a visual cortex might ask “what features need to be extracted to perceive both an object illuminated by yellow light and an object illuminated by red light as ‘the color blue’?” the designer of a codic cortex might ask “what features need to be extracted to perceive the recursive algorithm for the Fibonacci sequence and the iterative algorithm for the Fibonacci sequence as ‘the same piece of code’?” Speculatively, new sensory modalities could be designed for various domains for which existing human modalities are not optimally suited.
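As a concrete (if crude) illustration of the codic-cortex question, here are the two Fibonacci programs in a minimal Python sketch. They share almost no surface structure, and the equivalence check at the end is only a behavioral proxy for the invariant such a modality would need to extract:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_recursive(n):
    # Recursive definition, memoized so it runs in reasonable time.
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # Iterative definition: the same function, a very different shape of code.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# A codic cortex would perceive these two programs as "the same piece of
# code"; checking input/output agreement on a finite range is only a crude
# approximation of that invariant.
assert all(fib_recursive(n) == fib_iterative(n) for n in range(30))
print("behaviorally identical on n < 30")
```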
The "hardware advantages" section also has this:
As the human brain works in a massively parallel fashion, at least some highly parallel algorithms must be involved with general intelligence. Extra parallel power might then not allow for a direct improvement in speed, but it could provide something like a greater working memory equivalent. More trains of thought could be pursued at once, and more things could be taken into account when considering a decision. Brain size seems to correlate with intelligence within rats (Anderson 1993), humans (McDaniel 2005), and across species (Deaner et al. 2007), suggesting that increased parallel power could make a mind generally more intelligent.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-09-24T07:40:29.612Z · LW(p) · GW(p)
I feel like neither of those things fully captures quality intelligence though. I agree that being able to design awesome modules is great, but an AI could have a quality intelligence advantage "naturally" without having to design for it, and it could be applied to their general intelligence rather than to skill at specific domains. And I don't think parallelism, working memory, etc. fully captures quality intelligence either. AWS has more of both than me but I am qualitatively smarter than AWS.
To use an analogy, consider chess-playing AI. One can be better than another even if it has less compute, considers fewer possible moves, runs more slowly, etc. Because maybe it has really good intuitions/heuristics that guide its search.
↑ comment by Kaj_Sotala · 2020-09-24T08:33:51.256Z · LW(p) · GW(p)
it could be applied to their general intelligence rather than to skill at specific domains.
Note that in my framing, there is no such thing as general intelligence, but there are specific domains of intelligence that are very general (e.g. reasoning with if-then statements). So under this framing, something having a general quality intelligence advantage means that it has an advantage in some very generally applicable domain.
To use an analogy, consider chess-playing AI. One can be better than another even if it has less compute, considers fewer possible moves, runs more slowly, etc. Because maybe it has really good intuitions/heuristics that guide its search.
Having good intuitions/heuristics for guiding search sounds like a good mental module for search to me.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-09-24T10:04:27.970Z · LW(p) · GW(p)
I think I could define general intelligence even in your framing, as a higher-level property of collections of modules. But anyhow, yes, having good intuitions/heuristics for search is a mental module. But it needn't be one that the AI designed; heck, it needn't be designed at all, or cleanly separate from other modules either. It may just be that we train an artificial neural net and it's qualitatively better than us, and one way of roughly expressing that advantage is to say it has better intuitions/heuristics for search.
comment by FactorialCode · 2020-09-23T06:42:21.306Z · LW(p) · GW(p)
I think the cooperative advantages mentioned here have really been overlooked when it comes to forecasting AI impacts, especially in slow takeoff scenarios. A lot of forecasts, like WFLL [LW · GW], mainly posit AIs competing with each other. Consequently, Molochian dynamics come into play and humans easily lose control of the future. But with these sorts of cooperative advantages, AIs are in an excellent position to avoid being subject to those forces and all the strategic disadvantages they bring with them. This applies even if an AI is "merely" at the human level. I could easily see an outcome that from a human perspective looks like a singleton taking over, but is in reality a collective of similar/identical AIs working together with superhuman coordination capabilities.
I'll also add source-code swapping and greater transparency to the list of cooperative advantages at an AI's disposal. Different AIs that would normally get stuck in multipolar traps might not stay stuck for long if they can do things analogous to source code swap prisoner's dilemmas [LW · GW].
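A minimal sketch of the mechanism, using literal source equality (real proposals involve proving properties of the other program rather than string comparison): a player that cooperates exactly when the opponent is a copy of itself can sustain cooperation among copies without being exploitable by defectors.

```python
import inspect

def clique_bot(opponent):
    # Cooperate iff the opponent's source code is identical to our own.
    # Exact copies therefore cooperate with each other, while any other
    # program gets defected against.
    if inspect.getsource(opponent) == inspect.getsource(clique_bot):
        return "cooperate"
    return "defect"

def defect_bot(opponent):
    return "defect"

print(clique_bot(clique_bot))  # cooperate: recognizes an exact copy
print(clique_bot(defect_bot))  # defect: cannot be exploited
```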
↑ comment by Rohin Shah (rohinmshah) · 2020-09-23T19:06:59.917Z · LW(p) · GW(p)
For the most part, to the extent that we will have these advantages, it still doesn't suggest a discontinuity; it suggests that we will be able to automate tasks with weaker / less intelligent AI systems than you might otherwise have thought.
This applies even if an AI is "merely" at the human level.
I usually think of Part 1 of WFLL happening prior to reaching what I would call human level AI, because of these AI advantages. Though the biggest AI advantage feeding into this is simply that AI systems can be specialized to particular tasks, whereas humans become general reasoners and then apply their general reasoning to particular tasks.