Notes on Potential Future AI Tax Policy

post by Zvi · 2023-04-25T13:30:01.015Z · LW · GW · 6 comments

Contents

  The Ancient Art of Taxation
  The Almost as Ancient Art of the Tax Dodge
  Focus On Training Runs
  Enforcement is a Problem
  What About Minimum Reaction Time?
  Executive Summary by GPT-4 (using system message and lightly edited)
  Conclusion

Response To: Via Marginal Revolution, Brian Slesinsky proposes a tax on language model API calls.

Brian Slesinsky: My preferred AI tax would be a small tax on language model API calls, somewhat like a Tobin tax on currency transactions. This would discourage running language models in a loop or allowing them to “think” while idle.

For now, we mostly use large language models under human supervision, such as with AI chat. This is relatively safe because the AI is frozen most of the time [1]. It means you get as much time as you like to think about your next move, and the AI doesn’t get the same advantage. If you don’t like what the AI is saying, you can simply close the chat and walk away.

Under such conditions, a sorcerer’s apprentice shouldn’t be able to start anything they can’t stop. But many people are experimenting with running AI in fully automatic mode and that seems much more dangerous. It’s not yet as dangerous as experimenting with computer viruses, but that could change.

Such a tax doesn’t seem necessary today because the best language models are very expensive [2]. But making and implementing tax policy takes time, and we should be concerned about what happens when costs drop.

Another limit that would tend to discourage dangerous experiments would be a minimum reaction time. Today, language models are slow. It reminds me of using a dial-up modem in the old days. But we should be concerned about what happens when AIs start reacting to events much quicker than people.

Different language models quickly reacting to each other in a marketplace or forum could cause cascading effects, similar to a “flash crash” in a financial market. On social networks, it’s already the case that volume is far higher than we can keep up with. But it could get worse when conversations between AIs start running at superhuman speeds.

Financial markets don’t have limits on reaction time, but there are trading hours and circuit breakers that give investors time to think about what’s happening in unusual situations. Social networks sometimes have rate limits too, but limiting latency at the language model API seems more comprehensive.

Limits on transaction costs and latency won’t make AI safe, but they should reduce some risks better than attempting to keep AIs from getting smarter. Machine intelligence isn’t defined well enough to regulate. There are many benchmarks and it seems unlikely that researchers will agree on a one-dimensional measurement, like IQ in humans.

This seems like it has even worse versions of all the issues Tyler Cowen raises against regulatory proposals for AI, while also not doing what you want it to do? Why does Tyler suddenly fail to notice these concerns? If we are indeed worried about losing to China, or slowing down progress, or unenforceable rules that imply dystopias once you work out their implications, don’t we need to apply such worries consistently?

As always, an AI regulation can either take existential risk seriously, it can take existential risk non-seriously or in a confused way, or it can ignore existential risk entirely.

This seems like it falls into the second category, for a few reasons.

The Ancient Art of Taxation

One simple reason for this: when do you pay your taxes? And who is making you?

If a dangerous AI were to come into existence, and started setting up API calls to itself or something similar where we would want the tax to apply, does the taxman appear and say ‘No! You shall not compute until you pay’? At best, this happens if you are already paying a third party for the API calls, and that third party will cut you off after a certain point anyway. When one is worried about rapid expansions in capabilities, or about such systems rapidly getting out of control, a tax won’t help.

Let’s say the tax somehow did apply in advance in an enforceable way. How big a tax are we talking, anyway? The price of compute is rapidly declining. Are you going to impose a tax that is most of the marginal cost of usage? If you don’t, at its theoretical best this buys you a small amount of time.

Notice the contrast between a tax on API calls and a tax on tokens; right now you pay for API usage by the token, which reflects real costs. If you put a large tax on API calls, analogous to the limited number of GPT-4 calls you get as a human, what happens? You do more and more increasingly bespoke prompt engineering, turn up the size and complexity of each response, and do more processing outside the LLM.
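To see the incentive concretely, here is a minimal back-of-envelope sketch; all prices and tax levels are made-up placeholders, not figures from the proposal. The same total amount of work is run either as many small calls or as a few huge ones, under a flat per-call tax and under a proportional per-token tax.

```python
# All prices and tax levels here are hypothetical, for illustration only.
PRICE_PER_1K_TOKENS = 0.002   # assumed base API price in dollars per 1K tokens
PER_CALL_TAX = 0.01           # hypothetical flat tax per API call, in dollars
PER_TOKEN_TAX_RATE = 0.25     # hypothetical 25% tax on the token bill

def workload_cost(calls: int, tokens_per_call: int,
                  per_call_tax: float = 0.0, token_tax_rate: float = 0.0) -> float:
    """Cost of a workload under a flat per-call tax and/or a proportional token tax."""
    base = calls * tokens_per_call / 1000 * PRICE_PER_1K_TOKENS
    return base + calls * per_call_tax + base * token_tax_rate

# The same 100,000 tokens of work, split into many small calls vs. a few huge ones.
small_calls = dict(calls=1000, tokens_per_call=100)
large_calls = dict(calls=10, tokens_per_call=10_000)

print("Per-call tax:",
      f"${workload_cost(**small_calls, per_call_tax=PER_CALL_TAX):.2f} (small calls) vs",
      f"${workload_cost(**large_calls, per_call_tax=PER_CALL_TAX):.2f} (large calls)")
print("Per-token tax:",
      f"${workload_cost(**small_calls, token_tax_rate=PER_TOKEN_TAX_RATE):.2f} (small calls) vs",
      f"${workload_cost(**large_calls, token_tax_rate=PER_TOKEN_TAX_RATE):.2f} (large calls)")
```

With these made-up numbers, the fine-grained workflow under the flat per-call tax costs roughly thirty times as much as the identical work batched into giant calls, which is exactly the pressure toward bigger, more bespoke calls and more outside-LLM processing; the proportional per-token tax is indifferent to how the work is split.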

Worst of all, this tax applies at the model usage level, where there is mundane utility and relatively low marginal risk, and ignores the training level. Each API call currently has super low marginal cost, versus a high fixed cost to train the model. Once the model is trained, the API calls aren’t exactly free, but they are quite cheap; unless you are doing industrial-strength things they are basically free, and they will get cheaper over time.

So when you tax API calls, yes you are discouraging some specific types of strange loops of consideration differentially. But effectively most of what you are discouraging is extraction of small amounts of mundane utility per call.

The Almost as Ancient Art of the Tax Dodge

In turn, this moves people away from ‘train a bespoke set of distinct models and call between them’ or ‘use a smaller model that calls itself with scaffolding or does sampling or what not’ and towards exactly the worst possible thing, which is training the most powerful model possible, so that the marginal benefit per API call can afford to pay the taxes. Or even to automatically incorporate strange loops into the API call as a tax dodge, or similar.

Why would one impose a fixed tax per action on an action with variable marginal costs and marginal benefits, rather than a percentage tax on profits or revenue? If you’re going to impose a >100% tax on some actions, and a very low tax on others, you need to know exactly what you are doing and what you want to discourage.

Also, what happens with non-transactional API calls, or things that are not API calls at all? What happens with open source software run on some college student’s computer? What happens when the program itself starts doing operations that use the model’s capabilities without being API calls, a scenario that is obviously quite worrisome? How are you going to get a reasonable definition here that doesn’t actively drive activity to get that much more dangerous?

How are you going to check if every computer in the world is running an LLM? Consider the parallel to taxing humans for thinking. Strange that Tyler Cowen did not point out the obvious issues here.

Thus, as written, this seems like pretty terrible tax policy.

Focus On Training Runs

A much better tax policy would be to focus on a tax on training runs. Training that scales too much is where the biggest danger lies. This is what we want to discourage. So let’s tax that, ideally in super-linear fashion as model size, compute used, and data used go up.

That would encourage more use of less dangerous smaller models for mundane utility, while discouraging the activity with potentially limitless negative externalities.
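As one way to picture what such a super-linear schedule could look like, here is a minimal sketch; the threshold, rate, and exponent are all invented for illustration and are not a concrete proposal.

```python
def training_run_tax(compute_flop: float,
                     threshold_flop: float = 1e24,      # hypothetical tax-free threshold
                     rate_usd_per_flop: float = 1e-18,  # hypothetical rate above the threshold
                     exponent: float = 1.5) -> float:
    """A hypothetical super-linear tax schedule on training compute (illustration only).

    Runs below the threshold pay nothing; above it, the taxed quantity grows as
    (excess compute) ** exponent, so the marginal rate rises with scale.
    """
    excess_ratio = max(0.0, compute_flop - threshold_flop) / threshold_flop
    return rate_usd_per_flop * threshold_flop * excess_ratio ** exponent

for flop in (1e23, 2e24, 4e24, 1e25):
    print(f"{flop:.0e} FLOP -> tax ${training_run_tax(flop):,.0f}")
```

With these invented numbers, doubling a run from 2e24 to 4e24 FLOP roughly quintuples the bill, which is the intended pressure toward smaller models; choosing the actual threshold and exponent is the hard policy question.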

One could then potentially also impose an additional tax on marginal usage of existing models, if one is worried about humans being unfairly taxed relative to computers in addition to being worried about existential risks.

This should be proportional to compute to preserve economic efficiency. Otherwise, you are going to distort LLM use towards use of longer and more detailed API calls, which will be wasteful and create a lot of deadweight loss.

Another potential target would be to directly tax GPU hardware, either one time at manufacture or sale, continuously, or both, which would also create a tracking regime.

What if the fear is the ordinary, non-existential danger of out-of-control AutoGPT-style activities, as opposed to fear of economic competition or existential risk? This is a tough threat model to respond to properly here.

The tax levels required to actually do much discouragement would be prohibitive. It is going to be rare that the API calls are the primary cost of such a process, as opposed to the risks such actions impose (the very risks we want to minimize) and the work required to get it set up correctly. To the extent that marginal costs start to approach marginal benefits, one assumes that is because similar such systems are in economic competition, where a small tax wouldn’t change things much.

Enforcement is a Problem

There’s also still the issue of how to enforce such a tax. If you’re worried about an out-of-control intelligent agent on the internet, are you going to count on the IRS to shut it down before damage is done?

As partly noted above, we have at least four very difficult enforcement issues. All of them are serious problems for the proposed regime.

The first difficulty is internal use: models calling themselves, open source software run on your own computer, or a corporation using its own models. These are relatively dangerous cases. How are we going to detect and tax such usage? How are we going to enforce this reliably and quickly enough to prevent a dangerous situation if one arises?

The second difficulty is rogue use. In many scenarios we wish to guard against, we wish to guard against them exactly because the AI has escaped human control. It is not in any particular location or on any one computer in a way that lets us enforce tax collection upon it.

The third difficulty is timing. Even if we did have the ability to enforce taxes eventually, the time scale on which AI threats develop is orders of magnitude faster than the time scale of IRS enforcement actions. By the time we even notice taxes are not being paid, let alone enforce collection, how is it not already too late?

The fourth difficulty is international agreement. Tax havens are a known problem with any tax regime. If China isn’t willing to slow down its AI development under any circumstances, why would it agree to a prohibitive taxation scheme, especially one requiring such dystopian monitoring? What happens when the Cayman Islands refuses to collect the tax and starts selling bespoke services using AI? If the main marginal cost of AI is tax, and the main cost of many economically valuable actions is the cost of AI, the advantages offered by tax havens would be extreme.

To solve these overlapping problems robustly would, even more so than any rule against AI model development, require that a rather draconian monitoring regime be implemented worldwide, whether or not one views this as dystopian. Otherwise, one is handing the future to whoever is most willing and able to dodge taxes, or to the AI models on their computers, depending on how that interaction proceeds.

Contrast this with a tax on training runs. You still have the issue of detection and monitoring, but that problem becomes far easier, as does enforcement. You have much easier targets to track, and you can ‘look at the results’ as well – as a last resort, if there is a model being used, you can ask whether tax was paid. If someone tries to dodge such a tax, they would do so in ways that we are likely to prefer.

What About Minimum Reaction Time?

Should we, as proposed, ‘be concerned about what happens when AIs start reacting to events much quicker than people’?

Yes, I think we should. Also that time has already come and gone, as anyone familiar with financial markets or AIs knows. GPT-4 happens to be ‘slow’ in some ways right now, but it is far faster than humans in others, most other current AI systems are universally faster than humans, and future AI systems will doubtless be far faster than us in all ways that matter.

Inevitably, AIs will be interacting with each other far faster than humans can process info. It would be wise to grapple with the implications now. Could we meaningfully impose some sort of rate limitation?

Possibly?

If we did impose such a restriction, there would be many places where it did not meaningfully bind and was at most slightly annoying, and other places where it very much did bind.

An exciting potential upside would be if this minimum reaction time applied during training and greatly slowed the training of newer models, though that brings back the enforcement and international agreement issues noted above.

One big problem with this proposal is that there is a trade-off between model capabilities and model speed. The bigger and more capable the model, and thus the more dangerous in most ways, the slower it will run. If you introduce a minimum reaction time, you are pushing people towards developing and using more dangerous models, and towards more complex reactions designed to minimize the number of reaction steps, which could be importantly wasteful and distortionary.
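A toy calculation, with entirely made-up latencies and step counts, shows how the incentive flips: without a floor, the small fast model doing many cheap steps finishes first; with a floor that only binds on the fast model, the big slow model taking fewer, bigger steps wins.

```python
# Hypothetical latencies and step counts, for illustration only.
LATENCY_FLOOR = 2.0  # imagined mandated minimum seconds per model response

def wall_clock(steps: int, seconds_per_step: float, floor: float = 0.0) -> float:
    """Total time for a task decomposed into `steps` sequential model calls."""
    return steps * max(seconds_per_step, floor)

# A small fast model needing many cheap steps vs. a large slow model needing few.
small_fast = dict(steps=50, seconds_per_step=0.2)
large_slow = dict(steps=5, seconds_per_step=3.0)

for name, cfg in (("small/fast", small_fast), ("large/slow", large_slow)):
    print(f"{name}: {wall_clock(**cfg):6.1f}s without floor,"
          f" {wall_clock(**cfg, floor=LATENCY_FLOOR):6.1f}s with floor")
```

Here the small model's 10 seconds becomes 100 seconds under the floor, while the big slow model is untouched at 15 seconds, so the rule rewards exactly the more capable model and the fewer-but-bigger-steps workflow.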

Centrally, I notice that this is another case where all the standard anti-restriction, anti-regulation arguments come right back into play. In the financial markets an unrestricted agent is going to eat the restricted one’s lunch in an eyeblink. Many other such cases, too.

If AIs are constantly interacting with each other at faster-than-human speed, and meaningful restrictions are imposed on ours but not on others, isn’t that a huge strategic issue? If they are imposed on everyone, doesn’t this slow economic growth?

Why doesn’t this fail under the broad ‘lose to China’ objection? If we don’t build the fast AI, someone else will; or, even more easily, if we don’t run our AIs fast, someone else will run them faster. It would be very easy for the open source people to take the wait command out of their function calls, or for some financial firm to cheat on this, and that’s that. Not a very dignified way to die, I’d say.

Thus, we’d once again be talking about a far, far more dystopian and extreme surveillance and electronic monitoring state than one focused on GPU restrictions, if we wanted such restrictions to hold and have teeth. There would be no physical thing one could target; you’d need to be inside the systems of every computer on the planet.

That’s not going to happen. Seems like a bad place to focus.

Executive Summary by GPT-4 (using system message and lightly edited)

– Brian Slesinsky proposes a tax on language model API calls to discourage dangerous AI experiments and reduce risks.

– Issues with Slesinsky’s proposal:
  – Tax enforcement and timing: difficult to enforce on rogue/internal use; international agreement needed.
  – Tax may not be effective in preventing dangerous AI scenarios.
  – Distortion of AI usage: a fixed tax per action could lead to wasteful and dangerous behavior.

– Alternative tax policy suggestions:
  – Best option, a tax on training runs: discourages dangerous large models and is easier to enforce.
  – Proportional tax on compute usage: preserves economic efficiency.
  – Tax on GPU hardware: creates a tracking regime.

– Minimum reaction time proposal:
  – AIs already interact faster than humans; introducing restrictions could create strategic issues.
  – Imposing restrictions would require extreme surveillance and electronic monitoring.

– Key considerations for AI tax policy:
  – Protecting against existential risks and dangerous AIs.
  – Keeping activity legible and within human control.
  – Raising revenue and addressing tax code favoritism.
  – Challenges in defining, enforcing, and agreeing on international tax rules.

Conclusion

There are several distinct reasons to consider some form of an AI tax, including:

  1. Protecting against existential risks and dangerous AIs.
  2. Keeping activity legible and within human control.
  3. Raising revenue.
  4. Stopping the tax code from favoring AI use over humans, as humans are taxed.
  5. Making the tax code favor humans over AIs, to protect jobs and people.

The dangers of such regimes include:

  1. Defining what is being taxed is often tricky.
  2. Tax avoidance that could steer activity towards inefficient or dangerous behavior.
  3. Taxes on model use could drive investment in more dangerous models.
  4. Enforcement of such laws is extremely difficult, implying extreme surveillance.
  5. International agreement on such rules seems necessary, and difficult to get.

6 comments


comment by Dagon · 2023-04-25T15:41:34.910Z · LW(p) · GW(p)

To paraphrase an old adage:

You have a problem that is NOT "how do I collect more money from my subjects", and you've decided to use tax policy to solve it.  Now you have two problems.

Replies from: Thomas Sepulchre
comment by Thomas Sepulchre · 2023-04-26T12:17:51.773Z · LW(p) · GW(p)

I don't know where this "old adage" of yours comes from, but a tax can be a useful tool for solving some problems. A carbon tax, for example, would be a tax not intended to collect money, but instead intended to modify behaviors and correct a market inefficiency. This is one example of a Pigouvian tax.

Replies from: Dagon
comment by Dagon · 2023-04-26T13:49:33.496Z · LW(p) · GW(p)

I was being a bit glib - of course there are some variations of taxation that fit pretty well with non-revenue policy goals.  I do think they're less frequently a good match, and MUCH less often implemented reasonably than wonks and economists (and rationalist chatterers) seem to believe.

I'm not sure how well any real carbon taxes have worked in terms of revenue OR reducing overall emissions.  I haven't studied deeply, so I could easily be wrong (and would love to learn), but my sense is that they're applied unevenly and are somewhat easy to game, so tend not to actually get paid.  Which makes them non-Pigouvian, as the revenue isn't enough to actually mitigate the harm.

Replies from: Thomas Sepulchre
comment by Thomas Sepulchre · 2023-04-26T14:20:47.304Z · LW(p) · GW(p)

I'm not sure how well any real carbon taxes have worked in terms of revenue OR reducing overall emissions.

I don't know either. I know that carbon taxes are widely considered amongst economists to be a good tool, but I don't know if real carbon taxes have been evaluated.

Which makes them non-Pigouvian, as the revenue isn't enough to actually mitigate the harm.

I'm not sure I follow you here. The role of a Pigouvian tax is to correct market inefficiencies, not to produce revenue.

The classical model goes as follows: assume a factory with a production level x produces utility u(x) for itself, but causes a cost c(x) for others. The optimum in terms of total utility would be the x* that maximizes u(x) - c(x), but since the factory doesn't pay for this externality, it can produce much more. This is inefficient.

Now you introduce a tax amounting to c(x). The factory does the math and reduces its production to x*, reaching the utility maximizing solution. Now you also receive c(x*), which you could redistribute to whoever suffers from the factory, but you don't need to. The optimum is reached whenever the tax is applied, regardless of whether or not the "victims" are being compensated. You could introduce a tax of c(x) - c(x*), this way producing no revenue at the equilibrium, and the result would be the same.

If now the factory is very good at dodging taxes and only pays a fraction of c(x) for some reason, it will still have an incentive to reduce its production. The new equilibrium will not be x*, but introducing such a tax will still move the system closer to the optimum.
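To make this concrete, here is a minimal numerical sketch with arbitrary quadratic forms of my own choosing (nothing here comes from the comment itself): the untaxed factory overproduces, and once it faces a tax equal to the external cost, its privately optimal output coincides with the socially optimal one, whether or not the revenue is ever redistributed.

```python
import numpy as np

# Arbitrary illustrative functional forms, chosen only for this example.
def u(x): return 10 * x - x**2   # factory's private utility from producing x
def c(x): return 2 * x           # external cost imposed on everyone else
tax = c                          # Pigouvian tax schedule set equal to the external cost

x = np.linspace(0, 10, 10_001)

x_private = x[np.argmax(u(x))]            # factory ignoring the externality
x_social  = x[np.argmax(u(x) - c(x))]     # production level maximizing total utility
x_taxed   = x[np.argmax(u(x) - tax(x))]   # factory's own choice once it must pay the tax

print(f"ignoring externality: x = {x_private:.2f}")   # 5.00
print(f"social optimum:       x = {x_social:.2f}")    # 4.00
print(f"with Pigouvian tax:   x = {x_taxed:.2f}")     # 4.00
```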

(Although introducing a new tax could also have some negative effects, especially if all actors are not equally good at dodging taxes, so there is surely a level at which, if such a system is too easy to game, it becomes detrimental)

Replies from: Dagon
comment by Dagon · 2023-04-26T15:38:07.560Z · LW(p) · GW(p)

Hmm, I'm apparently misremembering the rationale Pigou used - certainly including the cost in the producer's optimization calculation is one part of it, but I thought it was also calculated to compensate or offset the damage from the externality. You're absolutely right that "tax what you don't want, subsidize what you do" is a core element of tax theory, but I will still argue that it's secondary to the core of tax reality, which is that revenue is the real metric of impact.

Replies from: AnthonyC
comment by AnthonyC · 2024-06-26T15:02:26.603Z · LW(p) · GW(p)

An optimal and consistent application of the idea would presumably also apply a Pigouvian subsidy to actions with positive externalities, which would give you more of those actions in proportion to how good they are. If you did this for everything perfectly then you wouldn't need to explicitly track which taxes pay for which subsidies. Every cost/price would correctly reflect all externalities and the market would handle everything else. In principle, "everything else" could (as with some of Robin Hanson's proposals) even include things like law enforcement and courts, with a government that basically only automatically and formulaically cashes and writes checks. I don't expect any real-world  regime could (or should try to) actually approach this limit, in practice, though.

I would note that, in aggregate, the government's net revenue is not the thing that government, or tax policy, is optimized for. Surplus, deficit, and neutrality can all be optimal at different times. If the government wanted to maximize net revenue in the long run, I doubt the approach would look much like taxation. Maybe more like a sovereign wealth fund.

As an example: If a carbon tax had its desired effect, it would collect money now (though the money might be immediately spent on related subsidies or tax breaks), but in the long run, if it's successful, we'd hope to reach a point where it's never collected again, because non-GHG-emitting options became universally better.