Distinguishing ways AI can be "concentrated"

post by Matthew Barnett (matthew-barnett) · 2024-10-21T22:21:13.666Z · LW · GW · 2 comments


Comments sorted by top scores.

comment by Dana · 2024-10-22T03:26:12.285Z · LW(p) · GW(p)

I do not really understand your framing of these three "dimensions". The way I see it, they form a dependency chain. If either of the first two are concentrated, they can easily cut off access during takeoff (and I would expect this). If both of the first two are diffuse, the third will necessarily also be diffuse.

How could one control AI without access to the hardware/software? What would stop one with access to the hardware/software from controlling AI?

comment by Matthew Barnett (matthew-barnett) · 2024-10-22T04:03:58.046Z · LW(p) · GW(p)

How could one control AI without access to the hardware/software? What would stop one with access to the hardware/software from controlling AI?

One would gain control by renting access to the model, i.e., the same way you can control what an instance of ChatGPT currently does. Here, I am referring to practical control over the actual behavior of the AI: determining what the AI does, such as what tasks it performs, how it is fine-tuned, or what inputs are fed into the model.
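
To make "control via rented access" concrete, here is a minimal sketch of the kind of thing I mean, using the OpenAI Python SDK. The model names, prompts, and file ID are illustrative placeholders; the point is only that the customer, not the host, chooses the task, the inputs, and the fine-tuning data.

```python
# Minimal sketch: a customer directs a hosted model's behavior through the
# rented API surface, without ever touching the host's hardware or weights.
# Model names and the training-file ID are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # authenticates with the customer's own API key

# 1. Choose the task and the inputs fed into the model.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a contract-review assistant."},
        {"role": "user", "content": "Summarize the termination clause below..."},
    ],
)
print(response.choices[0].message.content)

# 2. Shape the model's behavior more durably by fine-tuning on customer data.
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",  # placeholder ID for previously uploaded data
    model="gpt-4o-mini-2024-07-18",
)
print(job.id)
```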

This is not too dissimilar from the high level of practical control one can exercise over, for example, an AWS server that one rents. While Amazon may host these servers, and thereby have the final say over what happens to the hardware in the case of a conflict, the company is nonetheless dependent on customer revenue, which means it cannot feasibly reserve all of its servers for its own internal purposes. As a consequence of this practical constraint, Amazon rents these servers out to the public and does not substantially limit user control over them, giving end-users broad discretion over what software is ultimately run.

In the future, these controls could also be determined by contracts and law, analogously to how one has control over one's own bank account, despite the bank providing the service and hosting the account. Then, even in the case of a conflict, the entity that merely hosts an AI may not have practical control over what happens, as it may have legal obligations to its customers that it cannot breach without incurring enormous costs. The AIs themselves may resist such a breach as well.

In practice, I agree these distinctions may be hard to recognize. There may be a case in which we thought that control over AI was decentralized, but in fact power over the AIs was more concentrated or unified than we believed, as a consequence of centralization in the development or provision of AI services. Indeed, perhaps real control was in the hands of the government all along, as it could always choose to pass a law nationalizing AI and take control away from the companies.

Nonetheless, these cases seem adequately described as a mistake in our perception of who was "really in control" rather than as an error in the framework I provided, which was mostly an attempt to offer careful distinctions rather than to predict how the future will go.

If one actor, such as OpenAI, can feasibly get away with seizing practical control over all the AIs it hosts without incurring high costs to the continuity of its business through loss of customers, then this may indeed surprise someone who assumed that OpenAI was operating under different constraints. However, this scenario still fits nicely within the framework I've provided, as it merely describes a case in which one was mistaken about the true degree of concentration along one axis, rather than a case in which one of my concepts intrinsically fits reality poorly.