Robin Hanson on Lumpiness of AI Services
post by DanielFilan · 2019-02-17T23:08:36.165Z · LW · GW · 2 comments
This is a link post for http://www.overcomingbias.com/2019/02/how-lumpy-ai-services.html
Basically, in this post Robin Hanson argues that AI systems will consist of multiple separate components rather than one big uniform lump, arguing by analogy with the multiplicity of firms in the economy. Some excerpts:
Long ago people like Marx and Engels predicted that the familiar capitalist economy would naturally lead to the immiseration of workers, huge wealth inequality, and a strong concentration of firms. Each industry would be dominated by a main monopolist, and these monsters would merge into a few big firms that basically run, and ruin, everything. (This is somewhat analogous to common expectations that military conflicts naturally result in one empire ruling the world.)...
Note that many people seem much less concerned about an economy full of small firms populated by people of nearly equal wealth. Actions seem more visible in such a world, and better constrained by competition. With a few big privately-coordinating firms, in contrast, who knows what they could get up to, and they seem to have so many possible ways to screw us...
In the area of AI risk, many express great concern that the world may be taken over by a few big powerful AGI (artificial general intelligence) agents with opaque beliefs and values, who might arise suddenly via a fast local “foom” self-improvement process centered on one initially small system. I’ve argued in the past that such sudden local foom seems unlikely because innovation is rarely that lumpy.
In a new book-length technical report, Reframing Superintelligence: Comprehensive AI Services as General Intelligence, Eric Drexler makes a somewhat similar anti-lumpiness argument. But he talks about task lumpiness, not innovation lumpiness. Powerful AI is safer if it is broken into many specific services, often supplied by separate firms. The task that each service achieves has a narrow enough scope that there’s little risk of it taking over the world and killing everyone in order to achieve that task. In particular, the service of being competent at a task is separate from the service of learning how to become competent at that task...
All these critics seem to agree with Drexler that it is harder to see and control the insides of services, relative to interfaces between them. Where they disagree is in seeing productive efficiency considerations as perhaps creating large natural service “lumps.” A big lumpy service does a large set of tasks with a wide enough scope, where it would be much less efficient to break that up into many services, and where we should be scared of what this lump might do if driven by the wrong values.
Note the strong parallels with the usual concern about large firms in capitalism. The popular prediction that unregulated capitalism would make a few huge firms is based on more than productive efficiencies; people also fear market power, collusion, and corruption of governance. But big size induced by productive efficiencies of scale is definitely one of the standard concerns.
Economics and business have large literatures not only on the many factors that induce large versus small firms, but also on the particular driver of production efficiencies. This often goes under the label “make versus buy”; making something within a firm rather than buying it from other firms tends to make a firm larger. It tends to be better to make things that need to be tightly coordinated with core firm choices, and where it is harder to make useful arm-length contracts. Without such reasons to be big, smaller tends to be more efficient. Because of these effects, most scholars today don’t think unregulated firms would become huge, contrary to Marx, Engels, and popular opinion.
Alas, as seen in the above criticisms [AF · GW] [links in a different spot in the original post], it seems far too common in the AI risk world to presume that past patterns of software and business are largely irrelevant, as AI will be a glorious new shiny unified thing without much internal structure or relation to previous things. (As predicted by far views.) The history of vastly overestimating the ease of making huge firms in capitalism, and the similar typical newbie error of overestimating the ease of making large unstructured software systems, are seen as largely irrelevant.
2 comments
comment by Ofer (ofer) · 2019-02-18T08:36:20.559Z · LW(p) · GW(p)
Alas, as seen in the above criticisms [LW · GW] [links in a different spot in the original post], it seems far too common in the AI risk world to presume that past patterns of software and business are largely irrelevant, as AI will be a glorious new shiny unified thing without much internal structure or relation to previous things. (As predicted by far views.)
The rise of deep learning in recent years seems to be evidence in favor of [AI will be a glorious new shiny thing without much relation to previous things] (assuming "previous things" here is limited to things that affected markets at the time).
The history of vastly overestimating the ease of making huge firms in capitalism, and the similar typical newbie error of overestimating the ease of making large unstructured software systems, are seen as largely irrelevant.
While I see how conventional economic models are obviously useful here, I do not see how they can be useful in predicting the performance of "novel computations" (e.g. a computation that uses 1,000,000 GPU hours and a shiny new neural architecture) or predicting some critical technical properties of the development of transformative systems (e.g. "is there a secret sauce that a top AI lab will suddenly find?").
comment by jedharris · 2019-02-18T06:04:23.690Z · LW(p) · GW(p)
Suppose, as this argues, that effective monopoly on AGI is a necessary factor in AI risk. Then effective anti-monopoly mechanisms (maybe similar to anti-trust?) would be significant mitigators of AI risk.
The AGI equivalent of cartels could contribute to risk as well, so anti-monopoly mechanisms would have to deal with them too. Lacking some dominant institution to enforce cartel agreements, however, cartels should be easier to handle than monopolies.
Aside from the "foom" story, what are the arguments that we are at risk of an effective monopoly on AGI?
And what are the arguments that large numbers of AGIs of roughly equal power still represent a risk comparable to a single monopoly AGI?