How do open AI models affect incentive to race?
post by jessicata (jessica.liu.taylor) · 2024-05-07T00:33:20.658Z · LW · GW · 13 comments

This is a link post for https://unstablerontology.substack.com/p/howdoopensourceaimodelsaffect
I see it said sometimes that open models contribute to AI race dynamics. My guess is that they don't, and if anything, reduce AI race dynamics.
I will consider a simplified model that only takes into account the cost of training a model, not the cost to deploy it (which tends to be small relative to revenue anyway). Let f(x) map a training expense x to a "value per day per customer" of the trained model, under the assumption that the training makes efficient use of the cost. That is, a customer values using an AI model trained with x compute at $f(x) per day.
I assume there are n identical customers here; of course, there are complexities where some customers value AI more than others, incentivizing price discrimination, but I'm abstracting this consideration out. (In general, variation in how much customers value a product will tend to increase consumer surplus while reducing revenue, as it makes it harder to charge customers just under the maximum amount they're willing to pay.)
I'm also assuming there is only one company that trains closed models for profit. This assumption is flawed because there is competition between different companies that train closed models. However, perfect-competition assumptions would tend to reduce the incentive to train models. Suppose two companies have closed models of equivalent expense x. They each want to charge slightly less than the minimum of f(x) and the competitor's price, per customer per day. If each competitor undercuts the other slightly, the price will approach 0. See the Traveler's Dilemma for a comparison. The reasons why this doesn't happen have to do with considerations like differences in models' performance on different tasks, e.g. some models are better for programming than others. If models are sufficiently specialized (allowing this sort of niche monopolization), each specialized type of model can be modeled independently as a monopoly. So I'll analyze the case of a closed-model monopoly, noting that translation to the real world is more complex.
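The undercutting dynamic can be sketched numerically. The valuation f(x) = 10 and the undercut step size are illustrative assumptions, not from the post:

```python
def undercut_to_equilibrium(f_x: float, step: float = 0.01) -> float:
    """Two firms with equivalent closed models (each worth f(x) per
    customer per day) repeatedly charge slightly less than
    min(f(x), rival's price)."""
    price_a = price_b = f_x  # both start at the customers' full valuation
    while min(price_a, price_b) > step:
        price_a = max(min(f_x, price_b) - step, 0.0)  # A undercuts B
        price_b = max(min(f_x, price_a) - step, 0.0)  # B undercuts A
    return min(price_a, price_b)

# The price is competed down to (near) zero, as in the Traveler's Dilemma:
print(undercut_to_equilibrium(10.0))
```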
Suppose the best open model has compute x and a company trains a closed model with compute y > x. Each customer will now spend up to f(y) − f(x) per day for the model; I'll assume the company charges f(y) − f(x) and the customers purchase this, noting that they could charge just below this amount to create a positive incentive for customers. So the company's revenue over m days is nm(f(y) − f(x)). Clearly, this is decreasing in x. So the better the open model is, the less expected revenue there is from training a closed model.
But this is simply comparing doing nothing to training a model of a fixed cost y. So consider instead comparing expected revenue between two different model costs, y and z, both greater than x. The revenue from y is nm(f(y) − f(x)), and from z it is nm(f(z) − f(x)). The difference between the z revenue and the y revenue is nm(f(z) − f(y)). This is unaffected by x.
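Both claims can be checked with a small numeric sketch. The post only assumes some value function f exists; the concave log1p form, customer count, and horizon below are illustrative assumptions:

```python
import math

def revenue(n, m, f, y, x):
    """Revenue n*m*(f(y) - f(x)) from a closed model of cost y
    against an open baseline of cost x."""
    return n * m * (f(y) - f(x))

f = lambda c: math.log1p(c)  # hypothetical concave value function
n, m = 1000, 365             # customers, days (illustrative)

# A better open model (larger x) reduces closed-model revenue:
assert revenue(n, m, f, y=100, x=10) > revenue(n, m, f, y=100, x=50)

# But the marginal revenue of upgrading y -> z is nm*(f(z) - f(y)),
# independent of x:
gain_weak_open = revenue(n, m, f, 200, 10) - revenue(n, m, f, 100, 10)
gain_strong_open = revenue(n, m, f, 200, 50) - revenue(n, m, f, 100, 50)
assert abs(gain_weak_open - gain_strong_open) < 1e-6
```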
This can model a case where the company has already trained a model of cost y and is considering upgrading to z. In this case, the open model doesn't affect the expected additional revenue from the upgrade.
Things get more complex when we assume there will be a future improvement to the open model. Suppose that, for k days, the open model has training cost x, and for the remaining m − k days, it has training cost x' > x.
Now suppose that the closed AI company has already trained a model of cost y, where x < y < x'. They are considering upgrading to a model of cost z, where z > x'.
Suppose they do not upgrade. Then they get nk(f(y) − f(x)) revenue from the first k days and nothing thereafter (since the improved open model surpasses theirs, x' > y).
Suppose they do upgrade, immediately. Then they get nk(f(z) − f(x)) revenue from the first k days, and n(m − k)(f(z) − f(x')) from the remaining days.
The difference between upgrading and not upgrading is thus nk(f(z) − f(y)) + n(m − k)(f(z) − f(x')), which is decreasing in x'. So the announcement of the upgrade of the open model to x' compute will reduce the incentive to race by training a closed model with z compute.
So in this simplified analysis, release of better open models reduces the incentive to race, or does nothing. This is overall not surprising, as intellectual property laws are motivated by incentivizing production of intellectual property, and open content tends to reduce the value of intellectual property.
There are a number of factors that could be taken into account in other analyses, including:

- Effects of open models on ease of training closed models
- Substitution effects between different model niches (e.g. a model with an absolute advantage at mathematics may still be useful for writing essays)
- Effects of uncertainty over open model releases
- Different customers valuing the AI differently, driving price discrimination
- Non-straightforward incentives such as prestige/recruitment from releasing models
- Oligopoly dynamics
- Time discounting
- Changes in customer demand over time
It should go without saying that effects on race dynamics are not the only relevant effect of open model releases. Isolating and estimating different effects, however, will help in making an overall evaluation.
I suggest that someone who still believes that open models increase race dynamics clarify what economic assumptions they are using and how they differ from this model.
13 comments
comment by RobertM (T3t) · 2024-05-07T03:11:08.209Z · LW(p) · GW(p)
I'm not sure I personally endorse the model I'm proposing, but imagine a slightly less spherical AGI lab which has more than one incentive (profit maximization) driving its behavior. Maybe they care at least a little bit about not advancing the capabilities frontier as fast as possible. This can cause a preference ordering like:
1. don't argmax capabilities, because there's no open-source competition making it impossible to profit from current-gen models
2. argmax capabilities, since you need to stay ahead of open-source models nipping at your heels
3. don't argmax capabilities; go bankrupt because open-source catches up to you (or gets "close enough" for enough of your customers)
ETA: But in practice most of my concerns [LW(p) · GW(p)] around open-source AI development are elsewhere.
↑ comment by jessicata (jessica.liu.taylor) · 2024-05-07T03:31:08.753Z · LW(p) · GW(p)
I think you are assuming something like a sublinear utility function in the difference (quality of own closed model − quality of best open model). Which would create an incentive to do just a bit better than the open model.
I think if there is a penalty term for advancing the frontier (say, proportional to the quality of one's released model minus the quality of the open model), it can be modeled as dividing the revenue by a constant factor (since revenue was also proportional to that difference). Which shouldn't change the general conclusion.
↑ comment by RobertM (T3t) · 2024-05-07T04:08:36.899Z · LW(p) · GW(p)
Yeah, there needs to be something like a nonlinearity somewhere. (Or just preference inconsistency, which humans are known for, to say nothing of larger organizations.)
comment by Wei Dai (Wei_Dai) · 2024-05-07T03:04:21.086Z · LW(p) · GW(p)
I think open source models probably reduce profit incentives to race, but can increase strategic (e.g., national security) incentives to race. Consider that if you're the Chinese government, you might think that you're too far behind in AI and can't hope to catch up, and therefore decide to spend your resources on other ways to mitigate the risk of a future transformative AI built by another country. But then an open model is released, and your AI researchers catch up to near state-of-the-art by learning from it, which may well change your (perceived) tradeoffs enough that you start spending a lot more on AI research.
↑ comment by jessicata (jessica.liu.taylor) · 2024-05-07T03:26:46.753Z · LW(p) · GW(p)
It seems this is more about open models making it easier to train closed models than about nations vs corporations? Since this reasoning could also apply to a corporation that is behind.
↑ comment by Wei Dai (Wei_Dai) · 2024-05-07T03:42:14.869Z · LW(p) · GW(p)
Hmm, open models make it easier for a corporation to train closed models, but also make that activity less profitable, whereas for a government the latter consideration doesn't apply or has much less weight, so it seems much clearer that open models increase overall incentive for AI race between nations.
↑ comment by jessicata (jessica.liu.taylor) · 2024-05-07T03:48:13.314Z · LW(p) · GW(p)
For corporations I assume their revenue is proportional to f(y) − f(x), where y is the cost of their model and x is the cost of the open-source model. Do you think governments would have a substantially different utility function from that?
↑ comment by Wei Dai (Wei_Dai) · 2024-05-07T04:38:50.813Z · LW(p) · GW(p)
A government might model the situation as something like "the first country/coalition to open up an AI capabilities gap of size X versus everyone else wins" because it can then easily win a tech/cultural/memetic/military/economic competition against everyone else and take over the world. (Or a fuzzy version of this to take into account various uncertainties.) Seems like a very different kind of utility function.
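The contrast with the corporate revenue function can be made concrete. The sigmoid form, threshold, and sharpness below are illustrative assumptions standing in for the "fuzzy version" of the winner-take-all payoff described:

```python
import math

def govt_utility(own_capability, best_rival, threshold=1.0, sharpness=10.0):
    """(Near-)binary payoff in whether your capability lead over the best
    rival exceeds a winner-take-all threshold X -- a fuzzy step function,
    unlike the smooth per-customer value f(y) - f(x) in the post's model."""
    gap = own_capability - best_rival
    return 1.0 / (1.0 + math.exp(-sharpness * (gap - threshold)))

assert govt_utility(2.5, 1.0) > 0.99   # gap well past the threshold: ~win
assert govt_utility(1.5, 1.0) < 0.01   # gap below the threshold: ~lose
```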
comment by Zach Stein-Perlman · 2024-05-07T02:54:45.061Z · LW(p) · GW(p)
I mostly agree. And I think when people say "race dynamics" they often actually mean speed of progress, and especially "Effects of open models on ease of training closed models [and open models]," which you mention.
But here is a racedynamics story:
Alice has the best open model. She prefers for AI progress to slow down but also prefers to have the best open model (for reasons of prestige or, if different companies' models are not interchangeable, future market share). Bob releases a great open model. This incentivizes Alice to release a new state-of-the-art model sooner.
comment by Chris_Leong · 2024-05-07T03:51:38.224Z · LW(p) · GW(p)
This fails to account for one very important psychological fact: the population of startup founders who get a company off the ground is very heavily biased toward people who strongly believe in their ability to succeed. So it'll take quite a while for "it'll be hard to make money" to flow through and slow down training. And, in the meantime, it'll be acceleratory by pushing companies to stay ahead.
comment by Radford Neal · 2024-05-07T01:45:20.307Z · LW(p) · GW(p)
"Suppose that, for k days, the closed model has training cost x..."
I think you meant to say "open model", not "closed model", here.
↑ comment by jessicata (jessica.liu.taylor) · 2024-05-07T01:45:59.271Z · LW(p) · GW(p)
Thanks, fixed.
comment by Mikhail Samin (mikhailsamin) · 2024-05-09T00:13:45.068Z · LW(p) · GW(p)
- If the new Llama is comparable to GPT-5 in performance, there's much less short-term economic incentive to train GPT-5.
- If an open model provides some of what people would otherwise pay a closed-model developer for, there's less incentive to be a closed-model developer.
- People work on frontier models without trying to get to AGI. Talent is attracted to labs that release models, and then works on random corporate ML instead of building AGI.
But:
- Sharing information on frontier-model architecture and/or training details, which inevitably happens if you release an open-source model, gives the whole field insights that reduce the time until someone knows how to make something that will kill everyone.
- If you know a version of Llama comparable to GPT-4 is going to be released, you want to release a model comparable to GPT-4.5 before your customers stop paying you, as they can switch to open-source.
- People gain experience with frontier models and the talent pool for racing to AGI increases. If people want to continue working on frontier models but their workplace can't continue to spend as much as frontier labs on training runs, they might decide to work for a frontier lab instead.
- Not sure, but maybe some of the infrastructure powered by open models might be switchable to closed models; this might increase profits for closed-source developers if customers become familiar with/integrate open-source models and then want to replace them with more capable systems, when it's cost-effective.
- Mostly less direct: availability of open-source models for irresponsible use might make it harder to put in place regulation that'd reduce the race dynamics (via various destabilizing ways they can be used).