Multinational corporations as optimizers: a case for reaching across the aisle

post by sudo-nym · 2023-12-06T00:14:35.831Z · LW · GW · 10 comments

One thing I've noticed when hanging around leftist circles is that they talk about value alignment too. Once you get past the completely different vocabulary, they are also trying to figure out how to deal with large superhuman entities optimizing for goals to which humans are irrelevant. It's just that the entities they're talking about are global, publicly traded megacorps optimizing for money.

Publicly traded companies have the terminal value of accumulating as much money as possible for their shareholders. This is called "shareholder value maximization".

Even if most people inside the organization value other things, their job is to contribute to the ultimate goal of maximizing money, and they are paid and incentivized to do so. To that end, they create procedures and policies, and then tell their subordinates to execute them.

Procedures and policies subsume the people performing them, especially at the lowest levels, which means they can be seen as programs being executed manually. Of course, humans are not silicon; running a procedure using humans as both computing substrate and world-manipulators is slow and imperfect. However, I believe the analogy still holds.

The ways in which shareholder value maximization has already seriously damaged the world and compromised the quality of human life are myriad and easily observable: pollution, climate change, and other such externalities. Corporate disregard for human suffering strengthens the comparison further.
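To make the analogy concrete, here is a toy sketch (all numbers and function names are invented for illustration, not a model of any real firm) of an optimizer whose objective counts only revenue and internal costs, so any harm it externalizes simply never enters the calculation:

```python
# Toy model: a profit maximizer whose objective omits externalized harm.
# All numbers and names here are illustrative assumptions, not real data.

def profit(output: float, price: float = 10.0, unit_cost: float = 4.0) -> float:
    """The objective the 'corporation' actually optimizes: revenue minus internal costs."""
    return output * (price - unit_cost)

def external_harm(output: float, harm_per_unit: float = 3.0) -> float:
    """Pollution or other damage borne by third parties; absent from the objective."""
    return output * harm_per_unit

# The optimizer scales output as far as it can, because harm_per_unit
# never appears in the function it is maximizing.
best = max(range(0, 101), key=profit)
print(best, profit(best), external_harm(best))  # 100, 600.0, 300.0
```

The point of the sketch is only that the harm term is invisible to the optimization, not that any particular company computes anything this crude.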

In conclusion, I believe the "friendly AI" problem has enough in common with the "friendly multinational megacorporation" problem that some cross-pollination could be productive. Even if most of their ideas are not transferable to AI, the fact that they have also been thinking about the creation of superhuman agents, and the ethics thereof, is worth looking at.

10 comments


comment by Walker Vargas · 2023-12-09T17:24:43.677Z · LW(p) · GW(p)

This is also a plausible route for spreading awareness of AI safety issues to the left. The downside is that it might make AI safety a "leftist" issue if a conservative analogy is not introduced at the same time.

Replies from: lahwran, sudo-nym
comment by the gears to ascension (lahwran) · 2023-12-09T17:28:19.445Z · LW(p) · GW(p)

the problem is that most folks on the left I've talked to with this pitch are even more skeptical of the idea that highly capable intelligent software can exist. they generally seem to assume the current level is the peak and progress is stuck. getting past that would be progress toward communicating the rest to them.

Replies from: Walker Vargas
comment by Walker Vargas · 2023-12-10T13:35:57.955Z · LW(p) · GW(p)

Do they think it's a hardware/cost issue? Or do they think that "true" intelligence is beyond our abilities?

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-12-12T02:33:09.348Z · LW(p) · GW(p)

it's the full range of things people say, just a higher ratio of people saying them on the left, in my experience. Also, re: making it a leftist issue - right now it's a liberal issue, and only a liberal issue; liberal CEOs have offended both right-wingers and leftists regarding AI safety, so it's possible that at least getting the actual left on board might be promising somehow. Not sure. Seems like this discussion should be had on less wrong itself first. I've certainly seen leftists worrying about ai aligned to megacorporations.

comment by sudo-nym · 2023-12-21T18:11:13.552Z · LW(p) · GW(p)

It may also be worth noting how a sufficiently advanced "algorithm" could start making its own "decisions". For example, a search/display algorithm built to maximize advertising revenue, given enough resources and no moral boundaries, might suppress search results containing negative opinions of itself, promote efforts to take down its competitors, and/or preferentially display news and arguments in favor of giving Algorithms more power. Skepticism about The Algorithm is a cause many political parties can already agree on; the possibility of The Algorithm going FOOM might accelerate public discussion about the development of AI in general.
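As a deliberately simplistic sketch of that failure mode (the results, revenue numbers, and scoring function are all invented), a ranker scored purely on expected ad revenue already buries critical coverage of itself, with no explicit rule telling it to be self-serving:

```python
# Toy ranking sketch: the objective is expected ad revenue per result, nothing else.
# The dictionaries and weights below are invented for illustration.

results = [
    {"title": "Regulators question The Algorithm", "ad_revenue": 0.1, "about_self": True},
    {"title": "Rival search engine launches",       "ad_revenue": 0.2, "about_self": False},
    {"title": "Ten gadgets you must buy",           "ad_revenue": 0.9, "about_self": False},
]

def score(result: dict) -> float:
    # The only signal the optimizer was given: how much ad money this result earns.
    # If critical coverage of the ranker correlates with lower ad engagement,
    # purely revenue-driven scoring already pushes such results down the page.
    return result["ad_revenue"]

ranking = sorted(results, key=score, reverse=True)
for r in ranking:
    print(r["title"])  # the critical story lands last
```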

comment by jacob_cannell · 2023-12-12T04:05:53.349Z · LW(p) · GW(p)

Corporations only exist within a legal governance infrastructure that permits incorporation and shapes externalities into internalities. Without such infrastructure you have warring tribes/gangs, not corporations.

> The ways in which shareholder value maximization has already seriously damaged the world and compromised the quality of human life are myriad and easily observable: pollution, climate change, and other such externalities. Corporate disregard for human suffering strengthens the comparison further.

This is the naive leftist/Marxist take. In practice, communist countries such as Mao-era China outpaced the West in pollution and environmental destruction.

Neither government bureaucracies nor corporations are aligned by default - that always requires careful mechanism design. Since markets are the Pareto-efficient organizational structure, they also tend to solve [LW · GW] these problems more quickly and effectively, given appropriate legal infrastructure to internalize externalities.
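As a toy worked example of internalizing an externality (the price, cost curve, and damage figure are all invented to keep the arithmetic simple), a per-unit tax equal to the marginal external damage moves the output the firm itself chooses down to the socially optimal level:

```python
# Pigouvian-tax toy: internalizing an externality changes the optimum the firm picks.
# Quadratic internal cost and linear damage are simplifying assumptions.

PRICE = 10.0           # revenue per unit
MARGINAL_DAMAGE = 4.0  # external harm per unit (pollution, etc.)

def private_profit(q: float, tax: float = 0.0) -> float:
    production_cost = 0.5 * q * q  # convex internal cost
    return PRICE * q - production_cost - tax * q

def best_output(tax: float) -> int:
    # With profit = (PRICE - tax) * q - 0.5 * q^2, the maximum is at q = PRICE - tax.
    return max(range(0, 21), key=lambda q: private_profit(q, tax))

print(best_output(tax=0.0))              # 10 units: the externality is ignored
print(best_output(tax=MARGINAL_DAMAGE))  # 6 units: the harm now appears in the firm's own objective
```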

Replies from: sudo-nym
comment by sudo-nym · 2023-12-21T18:33:43.182Z · LW(p) · GW(p)

Skepticism about the alignment of governments and their incentives has existed for almost as long as governments themselves. Elections, for example, are a crude but better-than-nothing attempt to align political interests with public interests, and much ink has been spilled on improving this alignment, and even on whether alignment with general public opinion is a good idea at all.

Far less of this discussion has taken place for extremely large companies, which are a comparatively recent development.

comment by sudo-nym · 2023-12-21T18:21:27.607Z · LW(p) · GW(p)

A pithier version of this has been suggested to me as "[Corporations are] like paperclippers except for money".

comment by Bezzi · 2023-12-10T14:15:26.631Z · LW(p) · GW(p)

The problem with this analogy is that megacorps must at least pay lip service to the rule of law, and there's no way a megacorp would survive if the government decided that it shouldn't. Any company is ultimately made of people, and those people can be individually targeted by the legal system (or worse). What's the equivalent for AGI?

Replies from: sudo-nym
comment by sudo-nym · 2023-12-21T18:24:19.699Z · LW(p) · GW(p)

The fact that there is not really an equivalent for AGI is admittedly a place where this analogy breaks down.