Three Notions of "Power"
post by johnswentworth · 2024-10-30T06:10:08.326Z · 43 comments
We begin with three stories about three people.
First, Zhu Di, emperor of China from 1402 to 1424. In that period, it was traditional for foreign envoys to present gifts to the emperor and make a show of submission, reinforcing the emperor's authority and China's image as the center of civilization. Yet the emperor would send the envoys off with gifts in return, often worth more than the gifts the envoys had given - suggesting that the emperor's authority and dominance did not actually translate into much bargaining power.
Second, Kevin Systrom, one of the founders of Instagram. When Instagram was bought by Facebook for $1B in 2012, it had only 13 employees. Systrom presumably found himself with a great deal of money, the most direct form of bargaining power amongst humans. Yet with only 13 employees, he commanded little of the obedience or authority displayed by an emperor.
Third, Benjamin Jesty. In 1774, a wave of smallpox raged through England, and dairy farmer Jesty was concerned for his wife and children. He knew that milkmaids sometimes contracted cowpox from cows, and that the (much milder) disease would immunize them for life against smallpox. So, he intentionally infected his wife and children with cowpox, 20 years before Edward Jenner popularized the same technique as the first vaccine. That same year, Louis XV, king of France, died of smallpox. Despite both great authority and great wealth, Louis XV had no power to stop smallpox.
These people demonstrate three quite different kinds of “power”.
Emperor of China: Dominance
The emperor of China has power in the form of dominance. In any competition of dominance within the vast Chinese empire, the emperor was the presumed winner. In some sense, the central function of an “emperor” is to create one giant dominance hierarchy over all the people in some territory, and then sit at the top of that dominance hierarchy.
(Much the same could be said of a “king” or “president”.)
Humans have a strong built-in instinct for dominance, as do many other animals [LW · GW]. The ontology of a “dominance ranking” is justified experimentally, by the observation that various animals (including humans) form a consistent order of aggression/backing down - i.e. if Bob backs down in the face of Alice’s aggression, and Carol backs down in the face of Bob’s aggression, then empirically Carol will also back down in the face of Alice’s aggression. A > B and B > C (with “>” indicating dominance) empirically implies A > C.
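To make the ranking claim concrete, here is a minimal sketch in Python (the animals and observations are hypothetical, not data from the post): pairwise backing-down observations can be collapsed into a single linear dominance ranking exactly when they are transitive in this way.

```python
from itertools import permutations

# Hypothetical observations: (a, b) -> True means b backed down in the face of a's aggression, i.e. a > b.
observed = {
    ("Alice", "Bob"): True,
    ("Bob", "Carol"): True,
    ("Alice", "Carol"): True,  # the transitivity prediction: A > B and B > C implies A > C
}

def consistent_ranking(animals, observed):
    """Return a linear dominance order consistent with every observed pair, if one exists."""
    for order in permutations(animals):
        rank = {a: i for i, a in enumerate(order)}  # lower index = more dominant
        if all(rank[a] < rank[b] for (a, b), dominates in observed.items() if dominates):
            return list(order)
    return None

print(consistent_ranking(["Alice", "Bob", "Carol"], observed))
# ['Alice', 'Bob', 'Carol'] - the pairwise observations collapse into one hierarchy
```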
I would guess that most humans intuitively see dominance as the main form of power, because of that built-in instinct. And humans seem to want dominance/submission as roughly-terminal goals. However, though dominance is hard-coded, it seems like something of a simple evolved hack to avoid costly fights among relatively low-cognitive-capability agents; it does not seem like the sort of thing which more capable agents (like e.g. future AI, or even future more-intelligent humans) would rely on very heavily.
One place where this comes up: ask someone to imagine AI “seizing power” or “taking over the world”, and I would guess that most people imagine some kind of display of dominance or creation of a dominance hierarchy. But I would guess that, to such an AI, “power” looks more like the other two categories below.
Instagram Founder: Bargaining Power
The Instagram founders have power in the form of bargaining power, i.e. the ability to get others to give you things in trade. In practice, bargaining power mostly means money… and there's good reason for that.
Bargaining problems are one of those domains where, as the problem gets bigger and has more degrees of freedom, things usually get simpler. With enough players and enough things up for trade, there’s usually some way to net-transfer a little bit of value from any player to any other. And if that’s the case, then (roughly speaking) the whole problem reduces to one dimension: everybody has a one-dimensional budget of bargaining power (which can be thought of in dollars), all the stuff up for trade has a price in bargaining power (which, again, can be thought of in dollars) coming from econ-101-style supply and demand, and players buy stuff with their bargaining power. That’s the ontological justification for a 1-dimensional notion of “bargaining power”, i.e. money.
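As a toy illustration of that reduction (all goods and prices here are hypothetical): once everything up for trade carries a dollar price, whether an agent can obtain a given bundle of stuff depends only on a single number, their budget of bargaining power.

```python
# Hypothetical prices, set by econ-101-style supply and demand.
prices = {"grain": 2.0, "cloth": 5.0, "iron": 9.0}

def can_obtain(bundle, budget, prices):
    """True iff the bundle's total price fits within the agent's one-dimensional budget."""
    return sum(prices[good] * qty for good, qty in bundle.items()) <= budget

wanted = {"grain": 3, "cloth": 1}          # total price: 11.0
print(can_obtain(wanted, 10.0, prices))    # False - not enough bargaining power
print(can_obtain(wanted, 15.0, prices))    # True - more money, strictly more options
```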
This kind of “power” seems very natural in any system where many agents trade with each other.
Insofar as the generalized efficient markets hypothesis holds, bargaining power is basically synonymous with one’s ability to obtain any particular thing one wants - after all, under efficient markets, there’s no advantage to doing something oneself rather than outsourcing it, therefore one should be at-worst indifferent between obtaining X oneself vs doing something else to earn money and then spending that money to obtain X from someone else.
Of course, in reality the generalized efficient markets hypothesis is false, so bargaining power is importantly different from the ability to obtain what one wants - which brings us to our last concept of “power”.
Benjamin Jesty: Getting What You Want
Benjamin Jesty had power in the form of actually getting what he wanted - keeping his wife and kids safe from smallpox.
There are two main things to note about this form of power. First, it is distinct from the previous two. Second, unlike the previous two, it is not even approximately a single one-dimensional notion of power.
When and why is the ability to actually get what you want distinct from dominance or bargaining power? Well, in Benjamin Jesty’s case, there was no market for immunization against smallpox because nobody knew how to do it. Or rather, insofar as anyone did know how to do it, most people couldn’t distinguish the real thing from the many fakes, and the real thing was kinda icky and triggered lots of pushback even after Jenner’s famous public demonstration of vaccination. The hard part was to figure out what needed to be done to immunize against smallpox, and that step could not be outsourced because there were too many charlatans and naysayers; Jesty had to have the right knowledge and put together the pieces himself.
In short: not everything can be outsourced [? · GW]. Insofar as the things you want are bottlenecked by non-outsourceable steps, neither dominance nor bargaining power will solve that bottleneck.
That said, insofar as things can be outsourced, bargaining power has a big advantage over raw ability-to-get-what-you-want: bargaining power is fungible. Money can buy all sorts of different things. But ability-to-immunize-against-smallpox cannot easily be turned into e.g. ability-to-reverse-human-aging or ability-to-prove-the-Riemann-Hypothesis or ability-to-solve-AI-alignment. Those are all different difficult-to-outsource problems; the ability to solve one does not turn into the ability to solve the others. Those abilities are all different dimensions of "power"; there isn't a single underlying one-dimensional notion of "power" here.
… though looking at the apparent bottlenecks to these problems, there are some common themes; one could in-principle specialize in problems we don’t understand [LW · GW] and thereby gain a generalizable skillset targeted at basically the sorts of problems which are hard to outsource. So arguably, skill-in-problems-we-don't-understand could be a single one-dimensional notion of “power”. And if so, it’s a particularly useful type of power to acquire, since it naturally complements bargaining power.
43 comments
Comments sorted by top scores.
comment by tailcalled · 2024-10-30T07:55:17.452Z · LW(p) · GW(p)
However, though dominance is hard-coded, it seems like something of a simple evolved hack to avoid costly fights among relatively low-cognitive-capability agents; it does not seem like the sort of thing which more capable agents (like e.g. future AI, or even future more-intelligent humans) would rely on very heavily.
This seems exactly reversed to me. It seems to me that since dominance underlies defense, law, taxes and public expenditure, it will stay crucial even with more intelligent agents. Conversely, as intelligence becomes "too cheap to meter", "getting what you want" will become less bottlenecked on relevant insights, as those insights are always available.
↑ comment by abramdemski · 2024-10-30T14:50:45.344Z · LW(p) · GW(p)
It seems to me that the importance and interaction of these different types of power in the future depends a lot on our choices now, ie, what kind of future we shape. Hierarchies could get smashed in one way or another, making John's prediction correct, or we could engineer a future that evolves from the present more smoothly, in which case you'd be correct.
↑ comment by tailcalled · 2024-10-30T17:15:58.483Z · LW(p) · GW(p)
Hierarchies getting smashed requires someone to smash them, in which case that someone has the mandate of heaven. That's how it worked with John Wentworth's original example of Zhu Di, who overthrew Zhu Yunwen.
↑ comment by johnswentworth · 2024-10-30T17:21:05.224Z · LW(p) · GW(p)
If they're being smashed in a literal sense, sure. I think the more likely way things would go is that hierarchies just cease to be a stable equilibrium arrangement. For instance, if the bulk of economic activity shifts (either quickly or slowly) to AIs and those AIs coordinate mostly non-hierarchically amongst themselves.
↑ comment by tailcalled · 2024-10-30T17:42:50.365Z · LW(p) · GW(p)
I would expect the AI society to need some sort of monopoly on violence to coordinate this, which is basically the same as a dominance hierarchy.
↑ comment by johnswentworth · 2024-10-30T19:30:40.238Z · LW(p) · GW(p)
A monopoly on violence is not the only way to coordinate such things - even among humans, at small-to-medium scale we often rely on norms and reputation rather than an explicit enforcer with a monopoly on violence. The reason those mechanisms don't scale well for humans seems to be (at least in part) that human cognition is tuned for Dunbar's number.
And even if a monopoly on violence does turn out to be (part of) the convergent way to coordinate, that's not-at-all synonymous with a dominance hierarchy. For instance, one could imagine the prototypical libertarian paradise in which a government with monopoly on violence enforces property rights and contracts but otherwise leaves people to interact as they please. In that world, there's one layer of dominance, but no further hierarchy beneath. That one layer is a useful foundational tool for coordination, but most of the day-to-day work of coordinating can then happen via other mechanisms (like e.g. markets).
(I suspect that a government which was just greedy for resources would converge to an arrangement roughly like that, with moderate taxes on property and/or contracts. The reason we don't see that happen in our world is mostly, I claim, that the humans who run governments usually aren't just greedy for resources, and instead have a strong craving for dominance as an approximately-terminal goal.)
↑ comment by tailcalled · 2024-10-30T19:52:49.811Z · LW(p) · GW(p)
Given a world of humans, I don't think that libertarian society would be good enough at preventing competing powers from overthrowing it. Because there'd be an unexploitable-equilibrium condition where a government that isn't focused on dominance is weaker than a government more focused on dominance, it would generally be held by those who have the strongest focus on dominance. Those who desire resources would be better off putting themselves in situations where the dominant powers can become more dominant by giving them/putting them in charge of resources.
Given a world of AIs, I don't think the dominant AI would need a market; it could just handle everything itself.
As I understand it, libertarian paradises are basically fantasies by people who don't like the government, not realistically-achievable outcomes given the political realities.
↑ comment by johnswentworth · 2024-10-30T22:01:24.616Z · LW(p) · GW(p)
Because there'd be an unexploitable-equilibrium condition where a government that isn't focused on dominance is weaker than a government more focused on dominance, it would generally be held by those who have the strongest focus on dominance.
This argument only works insofar as governments less focused on dominance are, in fact, weaker militarily, which seems basically-false in practice in the long run. For instance, autocratic regimes just can't compete industrially with a market economy like e.g. most Western states today, and that industrial difference turns into a comprehensive military advantage with relatively moderate time and investment. And when countries switch to full autocracy, there's sometimes a short-term military buildup but they tend to end up waaaay behind militarily a few years down the road IIUC.
↑ comment by tailcalled · 2024-10-31T17:04:55.373Z · LW(p) · GW(p)
Maybe one could say the essence of our difference is this:
You see the dominance ranking as defined by the backing-off tendency and assume it to be mainly an evolutionary psychological artifact.
Meanwhile, I see the backing-off tendency as being the primary indicator of dominance, but the core interesting aspect of dominance to be the tendency to leverage credible threats, which of course causes but is not equivalent to the psychological tendency to back off.
Under my model, dominance would then be able to cause bargaining power (e.g. robbing someone by threatening to shoot them), but one could also use bargaining power to purchase dominance (e.g. spending money to purchase a gun).
This leaves dominance and bargaining power independent: on the one hand you have the weak-strong axis, where both increase together, but on the other hand you have the merchant-king axis, where they directly trade off.
↑ comment by tailcalled · 2024-10-31T08:02:08.873Z · LW(p) · GW(p)
I guess to expand, the US military doctrine since the world war has been that there's a need to maintain dominance over countries focused on military strength to the disadvantage of their citizens. Hence while your statement is somewhat-true, it's directly and intentionally the result of a dominance hierarchy maintained by the US.
↑ comment by tailcalled · 2024-10-31T06:47:04.864Z · LW(p) · GW(p)
Western states today use state violence to enforce high taxes and lots of government regulations. In my view they're probably more dominance-oriented than states which just leave rural farmers alone. At least some of this is part of a Keynesian policy to boost economic output, and economic output is closely related to military formidability (due to ability to afford raw resources and advanced technology for the military).
Hm, I guess you would see this as more closely related to bargaining power than to dominance, because in your model dominance is a human-psychology-thing and bargaining power isn't restricted to voluntary transactions?
↑ comment by Garrett Baker (D0TheMath) · 2024-10-30T18:28:21.405Z · LW(p) · GW(p)
I am going to guess that the diff between your and John's models here is that John thinks LDT/FDT solves this, and you don't.
↑ comment by johnswentworth · 2024-10-30T19:14:38.830Z · LW(p) · GW(p)
Good guess, but that's not cruxy for me. Yes, LDT/FDT-style things are one possibility. But even if those fail, I still expect non-hierarchical coordination mechanisms among highly capable agents.
Gesturing more at where the intuition comes from: compare hierarchical management to markets, as a control mechanism. Markets require clean factorization - a production problem needs to be factored into production of standardized, verifiable intermediate goods in order for markets to handle the production pipeline well. If that can be done, then markets scale very well, they pass exactly the information and incentives people need (in the form of prices). Hierarchies, in contrast, scale very poorly. They provide basically-zero built-in mechanisms for passing the right information between agents, or for providing precise incentives to each agent. They're the sort of thing which can work ok at small scale, where the person at the top can track everything going on everywhere, but quickly become extremely bottlenecked on the top person as you scale up. And you can see this pretty clearly at real-world companies: past a very small size, companies are usually extremely bottlenecked on the attention of top executives, because lower-level people lack the incentives/information to coordinate on their own across different parts of the company.
(Now, you might think that an AI in charge of e.g. a company could make the big hierarchy work efficiently by just being capable enough to track everything themselves. But at that point, I wouldn't expect to see a hierarchy at all; the AI can just do everything itself and not have multiple agents in the first place. Unlike humans, AIs will not be limited by their number of hands. If there is to be some arrangement involving multiple agents coordinating in the first place, then it shouldn't be possible for one mind to just do everything itself.)
On the other hand, while dominance relations scale very poorly as a coordination mechanism, they are algorithmically relatively simple. Thus my claim from the post that dominance seems like a hack for low-capability agents, and higher-capability agents will mostly rely on some other coordination mechanism.
↑ comment by tailcalled · 2024-10-30T19:31:23.218Z · LW(p) · GW(p)
(Now, you might think that an AI in charge of e.g. a company could make the big hierarchy work efficiently by just being capable enough to track everything themselves. But at that point, I wouldn't expect to see a hierarchy at all; the AI can just do everything itself and not have multiple agents in the first place. Unlike humans, AIs will not be limited by their number of hands. If there is to be some arrangement involving multiple agents coordinating in the first place, then it shouldn't be possible for one mind to just do everything itself.)
My model probably mostly resembles this situation. Some ~singular AI will maintain a monopoly on violence. Maybe it will use all the resources in the solar system, leaving no space for anyone else. Alternatively (for instance if alignment succeeds), it will leave one or more sources of resources that other agents can use. If the dominant AI fully protects these smaller agents from each other, then they'll handle their basic preferences and mostly withdraw into their own world, ending the hierarchy. If the dominant AI has some preference for who to favor, or leaves some options for aggression/exploitation which don't threaten the dominant AI, then someone is going to win this fight, making the hierarchy repeat fractally down.
Main complication to this model is inertia; if human property rights are preserved well enough, then most resources would start out owned by humans, and it would take some time for the economy to equilibrate to the above.
↑ comment by Noosphere89 (sharmake-farah) · 2024-10-30T18:55:28.556Z · LW(p) · GW(p)
Maybe, but I'm not sure it's even necessary to invoke LDT/FDT/UDT; one could instead argue that coordinating even through solely causal methods is so cheap for AIs that coordination (and, as a side effect, interfaces) becomes much less of a bottleneck than it is today.
In essence, I think the diff between John's models and tailcalled's models is plausibly in how easy coordination in a more general sense can ever be for AIs, and whether AIs will have a much better ability to coordinate than humans do today: John thinks coordination is a taut constraint for humans but not for AI, whereas tailcalled thinks it's hard to coordinate for both AIs and humans due to fundamental limits.
↑ comment by tailcalled · 2024-10-30T18:48:16.941Z · LW(p) · GW(p)
LDT/FDT is a central example of rationalist-Gnostic heresy [LW · GW].
↑ comment by Going Durden (going-durden) · 2024-10-31T10:10:40.885Z · LW(p) · GW(p)
Dominance underlies the things that can be done most efficiently with dominance. The moment dominance is no longer the most efficient force, it collapses, because in the vast majority of cases, dominating others takes a lot of time, energy and effort. This is actually how and why slavery (pretty much the most powerful example of dominance) was abolished: it started to make less economic sense than Bargaining (paid employment of freemen) and just Getting Things Done (through better tools and ultimately machines), so even its most ardent supporters became dispirited.
↑ comment by tailcalled · 2024-10-31T11:13:17.794Z · LW(p) · GW(p)
Slavery was abolished and remains abolished through dominance:
- first by getting outlawed by the Northern US and Great Britain, who drew strong economic benefit from higher labor prices due to them industrializing earlier for geographic reasons,
- secondly by leveraging state dominance during the Great Depression to demand massive increases in quality and quantity of production, to make it feasible to maintain a non-slave-holding society without having excess labor forces being forced to starve,
- thirdly, endless policies that use state violence and reduce the total fertility rate as a side-effect,
Throughout most of history, there has been excess labor, driving the value of work down close to the cost of subsistence; this was only sustainable because landowners see natural fluctuations in their production and therefore desire to keep people around even when it doesn't make short-term economic sense. This naturally creates serfdom and indentured servitude.
It's only really prisoners of war (e.g. African-American chattel slaves) who are slaves due to dominance; ordinary slavery is just poor bargaining power.
↑ comment by ChristianKl · 2024-10-31T11:27:03.996Z · LW(p) · GW(p)
The current number for slaves in the world is something like 50 million. It's a significant number of people. In prostitution, you unfortunately have women who are enslaved with violence and other forms of dominance that are not just about the poor bargaining power of the prostitutes.
↑ comment by tailcalled · 2024-10-31T11:36:40.526Z · LW(p) · GW(p)
Can you expand on the methodology behind this number?
↑ comment by ChristianKl · 2024-10-31T11:46:51.832Z · LW(p) · GW(p)
The number is from the International Labour Organization of the UN.
↑ comment by tailcalled · 2024-10-31T11:53:56.773Z · LW(p) · GW(p)
I didn't ask where the number is from, I asked how they came up with that number. Their official definition is "all work or service which is exacted from any person under the menace of any penalty and for which the said person has not offered himself voluntarily", but taking that literally would seem to imply that additional work to pay taxes is slavery and therefore ~everyone is a slave. This is presumably not what they meant, and indeed it's inconsistent with their actual number, but it's unclear what they mean instead.
↑ comment by ChristianKl · 2024-10-31T12:48:40.993Z · LW(p) · GW(p)
I consider it worthwhile to provide sources so that you can read their methodology yourself. I don't see a good reason for me to give you my interpretation of their methodology.
↑ comment by tailcalled · 2024-10-31T13:00:56.346Z · LW(p) · GW(p)
Your link doesn't contain a detailed description of their methodology or intermediate results. I would have to do a lot of digging to make heads or tails of it.
I guess feel free to opt out of this conversation if you want, but then ultimately I don't see you as having contributed any point that is relevant for me to respond to or update on.
↑ comment by ChristianKl · 2024-10-31T16:32:05.938Z · LW(p) · GW(p)
The key issue I raised was not about the exact number but about prostitutes not being prisoners of war and still better modeled as slaves due to dominance than slaves due to poor bargaining power.
↑ comment by tailcalled · 2024-10-31T17:13:54.316Z · LW(p) · GW(p)
Your sources are not very clear about that, and it contradicts what I've heard elsewhere, but yes I do admit at the boundaries of where society enforces laws, there do exist people who are forced to do things including prostitution via dominance.
↑ comment by ChristianKl · 2024-11-01T11:24:17.433Z · LW(p) · GW(p)
If we take the issue of forced prostitution, the official numbers are estimates, and by their nature estimates are not exact.
https://www.spiegel.de/international/germany/human-trafficking-persists-despite-legality-of-prostitution-in-germany-a-902533.html would be a journalistic story about prostitution in Germany that describes what happens here with legalized prostitution.
I was once talking with someone who in the past was thinking about opening a brothel and who had some insight about how brothels are run in Germany and who said that a lot of coercion is used.
Recently, I read something from a policeman who was complaining that the standard for proving coercion of prostitutes is too high. Proving that a prostitute who's over 21 and who left was beaten was not enough to convince the court that she falls under the criteria for outlawed exploitation of prostitutes.
↑ comment by tailcalled · 2024-11-01T21:27:51.978Z · LW(p) · GW(p)
I don't really have any end-to-end narrative, but here's a bunch of opinion-fragments:
- There's lots of good reason to believe that sometimes there's dominance-based prostitution. However, all the prostitutes I've talked with on these subjects have emphasized that there's a powerful alliance between prudish leftists and religious people who misrepresent what's really going on in order to marginalize prostitution, so I'm still inclined to hold your claims to especially high standards, and I don't really know why you (apparently) trust the organizations that prostitutes oppose so much.
- Der Spiegel does not describe how they sampled the individual stories they ended up with, and it seems very unlikely that they sampled them the same way that the UN number did, so it doesn't seem like the UN number should be assumed to reflect stories like the ones in Der Spiegel.
- The article has multiple mentions of women who left prostitution but then later returned. In one of them, it's for the pay, which seems like a bargaining power issue (unless we go into the mess of counting taxes). In another, it's more complicated as it was out of hope that a customer would fall in love with her. (Which seems unlikely to happen? Would maybe count as something similar to the vaccine situation, insight into how to achieve one's goals.)
- In the case of e.g. Alina, it sounds like the main form of dominance was dominance by proxy: "She says that she was hardly ever beaten, nor were the other women. "They said that they knew enough people in Romania who knew where our families lived. That was enough," says Alina.". This matches my proposal about how the dominance needs to occur at the boundary of where the law can enforce.
- This sounds very suspicious to me: "There are many women from EU countries "whose situation suggests they are the victims of human trafficking, but it is difficult to provide proof that would hold up in court," reads the BKA report. Everything depends on the women's testimony, the authors write, but there is "little willingness to cooperate with the police and assistance agencies, especially in the case of presumed victims from Romania and Bulgaria.""
↑ comment by ChristianKl · 2024-11-02T11:18:04.022Z · LW(p) · GW(p)
I gave you three sources that are influential to my views. A Spiegel article, a conversation with someone who in the past was planning to run a brothel (and spoke with people who actually run brothels in Germany for that reason) and police sources.
I did not link to some activist NGO run by prudish leftists or religious people making claims as the reason for me believing what I believe.
In general, it's hard to know what's actually going on when it comes to crime. If you spoke in the 1950s about the Italian mafia, you had plenty of people calling you racist against Italians and saying that there's no mafia.
↑ comment by khafra · 2024-10-31T12:00:49.927Z · LW(p) · GW(p)
Clarifying question: If A>B on the dominance hierarchy, that doesn't seem to mean that A can always just take all B's stuff, per the Emperor of China example. It also doesn't mean that A can trust B to act faithfully as A's agent, per the cowpox example.
If all dominance hierarchies control is who has to signal submission to whom, dominance seems only marginally useful for defense, law, taxes, and public expenditure; mostly as a way of reducing friction toward the outcome that would have happened anyway.
It seems like, with intelligence too cheap to meter, any dominance hierarchy that doesn't line up well with the bargaining power hierarchy or the getting-what-you-want vector space is going to be populated with nothing but scheming viziers.
But that seems like a silly conclusion, so I think I'm missing something about dominance hierarchies.
↑ comment by tailcalled · 2024-10-31T12:08:16.138Z · LW(p) · GW(p)
I think John Wentworth and I are modelling it in different ways and that may be the root of your confusion. To me, dominance is something like the credible ability and willingness to impose costs targeted at particular agents, whereas John Wentworth is more using the submission signalling definition.
↑ comment by khafra · 2024-11-01T07:13:21.242Z · LW(p) · GW(p)
Your definition seems like it fits the Emperor of China example--by reputation, they had few competitors for being the most willing and able to pessimize another agent's utility function; e.g. 9 Familial Exterminations.
And that seems to be a key to understanding this type of power, because if they were able to pessimize all other agents' utility functions, that would just be an evil mirror of bargaining power. Being able to choose a sharply limited number of unfortunate agents, and punish them severely pour encourager les autres, seems like it might just stop working when the average agent is smart enough to implicitly coordinate around a shared understanding of payoff matrices.
So I think I might have arrived back to the "all dominance hierarchies will be populated solely by scheming viziers" conclusion.
↑ comment by tailcalled · 2024-11-01T07:30:45.583Z · LW(p) · GW(p)
Can you explain what this coordination would look like?
↑ comment by johnswentworth · 2024-10-31T15:53:29.698Z · LW(p) · GW(p)
I think that conclusion is basically correct.
comment by romeostevensit · 2024-10-31T05:58:49.799Z · LW(p) · GW(p)
I have attempted to communicate to ultra-high-net-worth individuals, seemingly to little success so far, that given the reality of limited personal bandwidth, with over 99% of their influence and decision-making typically mediated through others, it’s essential to refine the ability to identify trustworthy advisors in each domain. Expert judgment is an active field of research with valuable, actionable insights.
comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-30T21:32:18.042Z · LW(p) · GW(p)
I don't think the Benjamin Jesty case fully covers the space of 'directly getting what you want'.
That seems like a case of 'intelligent, specialized solution to a problem, which could then be solved with minimal resources.'
There's a different class of power in the category of 'directly getting what you want' which looks less like a clever solution, and more like being in control of a system of physical objects.
This could be as simple as a child holding a lollipop, or a chipmunk with cheeks full of seeds.
Or it can be a more complicated system of physical control, with social implications. For instance, being armed with a powerful gun while facing an enraged charging grizzly bear. It doesn't take a unique or clever or skillful action for the armed human to prevail in that case. And without a gun, there is little hope of the human prevailing.
Another case is a blacksmith with some iron ore and coal and a workshop. So long as nothing interferes, then it seems reasonable to expect that the blacksmith could solve a wide variety of different problems which needed a specific but simple iron tool. The blacksmith has some affordance in this case, and is more bottlenecked on supplies, energy, and time than on intellect or knowledge.
I discuss this here: https://www.lesswrong.com/posts/uPi2YppTEnzKG3nXD/nathan-helm-burger-s-shortform?commentId=ZsDimsgkBNrfRps9r [LW(p) · GW(p)]
I can imagine a variety of future scenarios that don't much look like the AI being in a dominance hierarchy with humanity, or look like it trading with humanity or other AIs.
For instance: if an AI were building a series of sea-floor factories, using material resources that it harvested itself. Some world governments might threaten the AI, tell it to stop using those physical resources. The AI might reply: "if you attack me, I will respond in a hostile manner. Just leave me alone." If a government did attack the AI, the AI might release a deadly bioweapon which wiped out >99% of humanity in a month. That seems less like some kind of dominance conflict between human groups, and more like a human spraying poison on an inconvenient wasp nest.
Similarly, if an AI were industrializing and replicating in the asteroid belt, and some world governments told it to stop, but the AI just said No or said nothing at all. What would the governments do? Would they attack it? Send a rocket with a nuke? Fire a powerful laser? If the AI were sufficiently smart and competent, it would likely survive and counterattack with overwhelming force. For example, by directing a large asteroid at the Earth, and firing lasers at any rockets launched from Earth.
Or perhaps the AI would be secretly building subterranean factories, spreading through the crust of the Earth. We might not even notice until a whole continent started noticeably heating up from all the fission and fusion going on underground powering the AI factories.
If ants were smart enough to trade with, would I accept a deal from the local ants in order to allow them to have access to my kitchen trashcan? Maybe, but the price I would demand would be higher than I expect any ant colony (even a clever one) to be able to pay. If they invaded my kitchen anyway, I'd threaten them. If they continued, I'd destroy their colony (assuming that ants clever enough to bargain with weren't a rare and precious occurrence). This would be even more the case, and even easier for me to do, if the ants moved and thought at 1/100th speed of a normal ant.
I don't think it requires assuming that the AI is super-intelligent for any of these cases. Even current models know plenty of biology to develop a bioweapon capable of wiping out most of humanity, if they also had sufficient agency, planning, motivation, robotic control, and access to equipment / materials. Similarly, directing a big rock and some lasers at Earth doesn't require superintelligence if you already have an industrial base in the asteroid belt.
↑ comment by tailcalled · 2024-10-30T21:41:38.564Z · LW(p) · GW(p)
Except for the child and the blacksmith, all of these seem like dominance conflicts to me. The blacksmith plausibly becomes a dominance conflict too once you consider how he ended up with the resources and what tasks he's likely to face. You contrast these with conflicts between human groups, but I'd compare to e.g. a conflict between a drunk middle-aged loner who is looking for a brawl vs two young policemen and a bar owner.
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-30T21:52:12.156Z · LW(p) · GW(p)
I think we're using different concepts of 'dominance' here. I usually think of 'dominance' as a social relationship between a strong party and a submissive party, a hierarchy. A relationship between a ruler and the ruled, or an abuser and abused. I don't think that a human driving a bulldozer which destroys an anthill without the human even noticing that the anthill existed is the same sort of relationship. I think we need some word other than 'dominant' to describe the human wiping out the ants in an instant without sparing them a thought. It doesn't particularly seem like a conflict even. The human in a bulldozer didn't perceive themselves to be in a conflict, the ants weren't powerful enough to register as an opponent or obstacle at all.
↑ comment by tailcalled · 2024-10-30T21:55:57.443Z · LW(p) · GW(p)
What phenomenon are you modelling where this distinction is relevant?
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-30T22:18:03.023Z · LW(p) · GW(p)
Thinking a bit more about this, I might group types of power into:
Power through relating: Social/economic/government/negotiating/threatening, reshaping the social world and the behavior of others
Power through understanding: having intellect and knowledge affordances, being able to solve clever puzzles in the world to achieve aims
Power through control: having physical affordances that allow for taking potent actions, reshaping the physical world
They all bleed together at the edges and are somewhat fungible in various ways, but I think it makes sense to talk of clusters despite their fuzzy edges.
↑ comment by johnswentworth · 2024-10-30T22:07:09.678Z · LW(p) · GW(p)
Human psychology, mainly. "Dominance"-in-the-human-intuitive-sense was in the original post mainly because I think that's how most humans intuitively understand "power", despite (I claimed) not being particularly natural for more-powerful agents. So I'd expect humans to be confused insofar as they try to apply those dominance-in-the-human-intuitive-sense intuitions to more powerful agents.
And like, sure, one could use a notion of "dominance" which is general enough to encompass all forms of conflict, but at that point we can just talk about "conflict" and the like without the word "dominance"; using the word "dominance" for that is unnecessarily confusing, because most humans' intuitive notion of "dominance" is narrower.
↑ comment by tailcalled · 2024-10-31T07:17:49.024Z · LW(p) · GW(p)
Ah. I would say human psychology is too epiphenomenal, so I'm mainly modelling things that shape (dis)equilibria in complex ecologies.
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-30T22:01:16.411Z · LW(p) · GW(p)
The post seems to me to be about notions of power, and the affordances of intelligent agents. I think this is a relevant kind of power to keep in mind.