We shouldn't fear superintelligence because it already exists
post by Spencer Chubb (spencer-chubb) · 2024-01-07T17:59:55.297Z
In this post I argue that markets are very similar to AI. I then ponder whether AI poses an existential threat to humanity, in light of the fact that we already have superintelligence.
Claim 1: Markets are superintelligent
I define a market to be a collection of humans and corporations who act in their own interest and trade goods, services, and assets.
I define superintelligent to mean "more intelligent than" some baseline. In this case the baseline is unorganized humans: I argue that markets are more intelligent than humans acting without market organization.
The metric by which I measure intelligence is the ability to produce goods and services. Clearly markets are better than unorganized humans at producing goods and services. That is to say, if markets didn't exist, the total output of humanity would be much lower.
Claim 2: Markets are exponentially self-improving
Global GDP increases exponentially
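To make "exponential" concrete: anything that grows at a constant percentage rate grows exponentially. A minimal sketch, assuming a steady growth rate of about 3% per year (a rough stand-in for long-run average real growth, used here only for illustration):

```python
# A minimal sketch of compound growth: a constant percentage rate
# is exponential growth. The 3% figure is an assumption standing in
# for long-run average real growth, not an exact statistic.
growth_rate = 0.03
gdp = 1.0   # normalized starting output
years = 0
while gdp < 2.0:
    gdp *= 1 + growth_rate
    years += 1
print(f"At {growth_rate:.0%}/year, output doubles in about {years} years")
# -> At 3%/year, output doubles in about 24 years
```

Under that assumption, total output doubles roughly every quarter-century, and each doubling takes the same amount of time as the last.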
Claim 3: Markets are hard to align
Even proponents of free markets agree that market failures occur for many reasons, such as information asymmetry, externalities, and monopolies.
Claim 4: Markets are black boxes
At the risk of sounding cliché, I will mention "I, Pencil" as an example.
Markets make pencils, but humans don't know how they are made. We don't know who did the labor, we don't know who supplied the wood, we don't know who supplied the graphite, we don't know who supplied the rubber, we don't know who shipped the pencils. If we go another layer deeper, we don't know who provided supplies to the people who provided supplies to the pencil-maker. And so on.
The point is, we don't know how the pencils were made, but we still benefit from the pencils. Therefore markets are black boxes.
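One way to see the black box concretely: the number of contributors grows geometrically with each layer of the supply chain, so nobody can hold the whole process in their head. A minimal sketch, assuming a made-up fan-out of ten direct suppliers per producer:

```python
# Hypothetical fan-out: assume each producer relies on ~10 direct
# suppliers. The exact number is made up; only the geometric growth
# of the supply graph matters for the black-box point.
fan_out = 10
for layer in range(1, 5):
    print(f"layer {layer}: ~{fan_out ** layer:,} contributors")
# layer 1: ~10 contributors
# layer 2: ~100 contributors
# layer 3: ~1,000 contributors
# layer 4: ~10,000 contributors
```

Even three layers deep, no single buyer tracks the thousands of people involved, yet the pencil still gets made.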
My attempt to summarize AI risk
AI may be an x-risk because:
- AI is superintelligent
- AI may exponentially self-improve
- AI is hard to align
- AI is a black box
Markets also have those 4 qualities! But clearly markets are not an x-risk, because humans have survived centuries with markets. Therefore these 4 conditions are not sufficient to say that something is an x-risk.
Please let me know if you agree or disagree, I am eager to see other arguments.
Comments (sorted by top scores)
comment by Said Achmiz (SaidAchmiz) · 2024-01-07T20:09:19.977Z
https://slatestarcodex.com/2015/12/27/things-that-are-not-superintelligences/
comment by Spencer Chubb (spencer-chubb) · 2024-01-07T21:38:22.612Z
This post is interesting but it does not mention markets, and I argue that markets are intelligent. Not merely a team, or a company, or a civilization. I'm specifically talking about a collection of entities who transact with each other and behave in their own self interest. A market is an organizational technology that allows greater intelligence than could otherwise be achieved.
comment by mesaoptimizer · 2024-01-07T23:23:43.650Z
Please read Nick Bostrom's "Superintelligence"; it would really help you understand what everyone here has in mind when they talk about AI takeover.
comment by the gears to ascension (lahwran) · 2024-01-08T05:06:19.710Z
I agree that markets are themselves mildly intelligent, but I think you are underestimating how weak they are compared to what can exist. E.g., imagine a market between 15 housecats (say they all live nearby behind walls, and they can trade things with each other by dropping them down a chute once the other cat puts in enough payment treats, or something). Cats each have about 10 trillion synapses across 760 million neurons, versus a single human with about 150 trillion synapses across 100 billion neurons. The market is an enormously limiting information bottleneck compared to a higher-bandwidth communication system, and its adversarial nature makes it hard to transmit information usefully. There's no way the cats could implement market mechanisms well enough to make use of their combined neurons to, e.g., learn abstract math efficiently. The human can. Sure, the smartest cat will end up with the most stuff, and the market will thereby have allowed the most intelligent cat to direct the system. If some cats are better at some things, gains from trade will occur and they can each specialize somehow. Maybe one hunts their enclosure and the other maps the area or something; I'm not really imagining this in much detail. But even then, competitive low-bandwidth communication just doesn't compare favorably to an integrated brain.
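The numbers in the thought experiment are chosen so the comparison is about wiring rather than raw hardware: fifteen cats jointly match one human's synapse count. A quick back-of-the-envelope check, using the figures from the comment above:

```python
# Back-of-the-envelope check of the synapse counts quoted above.
cat_synapses = 10e12      # ~10 trillion per cat (figure from the comment)
human_synapses = 150e12   # ~150 trillion per human (figure from the comment)
n_cats = 15

market_total = n_cats * cat_synapses
print(f"15-cat market: {market_total:.1e} synapses")    # 1.5e+14
print(f"one human:     {human_synapses:.1e} synapses")  # 1.5e+14
# Identical raw hardware; any gap in capability has to come from the
# low-bandwidth, adversarial channel connecting the parts.
```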
AI would be able to beat us by being a big integrated thing that communicates far better than we can. In the short term it manipulates us the way TikTok and YouTube do (or more accurately, the market - an intelligent, misaligned system - rewards the humans who deploy AI to manipulate other humans, which is why TikTok and YouTube are big sites). In the long term it has no need of us, because robots can make more robots and AI is better at designing AI - not currently true, but visibly possible now.
comment by M. Y. Zuo · 2024-01-08T20:16:38.447Z
Markets don't need to 'transmit information' at all to the observer in order to be both useful and intelligent. For example, if aliens who had never seen a single human or a single word of human language came by and inspected the Earth, left, and then came back decades later, the changes physically observable to their sensors, and the derivations they could make from those changes, would demonstrate intelligence.
Maybe not in the sense of an active biological intelligence, but certainly in the sense that 'the pyramids of Giza demonstrate their builders' intelligence'.
comment by the gears to ascension (lahwran) · 2024-01-08T22:11:31.837Z
But in order to implement a market, information (trade offers and trades) needs to be transmitted between the participants. That's what I was referring to as an information bottleneck.
comment by M. Y. Zuo · 2024-01-11T01:48:47.501Z
Sure, but that doesn't matter to the alien observers, or to anyone else with the equivalent expertise and sensors. They can still glean a superintelligence-equivalent amount of knowledge.
comment by the gears to ascension (lahwran) · 2024-01-11T02:29:47.666Z
I'm not sure what you're responding to. I don't think I made a claim to which the behavior of aliens would be relevant.
comment by M. Y. Zuo · 2024-01-23T00:29:45.705Z
"or anyone if they had the equivalent expertise and sensors" includes humans on Earth...
e.g. a lone hunter-gatherer living in a cave for a long period, coming out to survey the world with the latest tools, and then going back to their cave and lifestyle with only the results.
comment by the gears to ascension (lahwran) · 2024-01-07T19:56:39.107Z
The only additional ingredient is that AI can win against us in the market, thereby incrementally taking over Earth from humans. Initially it would be winning against poor people, which would seem fine to rich people, but there's no reason to expect it to stop at any point - someone can always make their company win more by making it more AI-driven and less dependent on humans - and eventually, if it can do every job including fighting wars and trading, it wins economically against us and takes over the land we use to feed ourselves; even if we try to fight it, we just lose a war against it. Then we're all dead. We don't yet have the kind of AI that can replace humans like that, but there's no reason to believe the next versions of AI can't be it, and once it can work in the world there's no reason to believe there's any job or role that can't be served more reliably by an AI-powered robot. The owner class will initially be pleased by this, but they'd only be wealthy on paper; the robots will end up wealthier than they are and able to buy up all the farmland to build servers.
comment by Vladimir_Nesov · 2024-01-08T04:52:56.223Z
I wish it were easier to indicate an exploratory engineering mode/frame, with less detailed engineering connotations. What you describe is a good argument showing that at least this "can happen" in some unusual sense, while abstracting away the details of what else can happen, and of what anyone would be motivated to let happen. This is an important form of argument, and it can be seen as much weaker than it is if it's perceived as a forecast of what will actually happen.
comment by the gears to ascension (lahwran) · 2024-01-08T04:55:06.725Z
Hmm, I believe myself to be forecasting what will happen in the majority of timelines. Can you clarify your point with one or several qualitatively different rephrasings, given that context? Or, how would your version of my point look?
comment by Vladimir_Nesov · 2024-01-08T05:14:48.316Z
Sure, the point I'm making applies to what is intended to be actual forecasting just as well, if it has this exploratory engineering form. The argument becomes stronger if it's reframed from forecasting to discussion of technological affordances, if it in fact is a discussion of technological affordances, in a broad sense of "technological" that can include social dynamics.
An exploratory engineering sketch rests on assumptions about what can be done, and about what the world chooses to do, in order to draw conclusions about what happens in that case; it's a study of affordances, of options. Its validity is not affected if in fact more can be done, or if in fact something else will be done. But the validity of a forecast is affected by those things, so forecasting is more difficult, and reframing an exploratory engineering sketch as forecasting unnecessarily damages its validity.
In this particular case, I don't expect this story to play out without superintelligence getting developed early on, which makes the rest of the story stop being something the superintelligence would find worthwhile to let continue. And conversely, I don't expect such a story to start developing more than a few months before a superintelligence is likely to be built. But the story itself is good exploratory engineering: it shows that at least this level of economically unstoppable disempowerment is feasible.
comment by rsaarelm · 2024-01-08T09:04:34.489Z
On the exponentially-self-improving part, see "No Evolutions for Corporations or Nanodevices".