What is the cost difference in processing input vs. output tokens with LLMs?

post by kotrfa · 2024-08-08T10:43:18.049Z · LW · GW · No comments

This is a question post.


Hi,

I am trying to understand the difference in cost between processing a single input token and producing a single output token.

Based on some articles, I came to the following conclusion:

  1. Input token processing scales quadratically; there's no way around it: you have to compute attention for each token against every other token (with K and V obtained by passing the tokens through the model).
  2. Output tokens scale linearly thanks to the KV cache (without it they would be quadratic, like input tokens), which is what everyone seems to do when hosting these models. I.e., you trade compute for memory and try to be clever about how you store and compute all of this (sparsity, cache eviction, ...). I believe this is simply an empirically practical way around the quadratic scaling (see the sketch after this list).
  3. Therefore, realistically, it's really memory capacity and bandwidth that are the problem for output tokens, rather than raw FLOPS. The KV cache grows linearly with the sequence length regardless of the input/output token ratio.
  4. I am confused about why input tokens are almost universally cheaper - 2 to 5 times - than output tokens. Also, is the fact that providers price things linearly (as they do now: you pay the same price for the 11th token as for the 10,001st) evidence that memory is the problem here? If they were FLOPS-bound, I would expect providers to price things quadratically, not linearly. Or is it just for client-pricing simplicity/marketing?
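
To make my mental model from points 1 and 2 concrete, here is a minimal sketch of the scaling I mean. It counts attention FLOPs only, and the hidden size, token counts, and constant factors are made-up illustrative assumptions, not any provider's real numbers:

```python
# Rough attention-only FLOP counts; d_model and token counts are illustrative
# assumptions, and the constant factors are approximate.

def prefill_attention_flops(n_prompt: int, d_model: int) -> float:
    # Scores (Q @ K^T) plus the weighted sum (A @ V): roughly 4 * n^2 * d_model
    # FLOPs, i.e. quadratic in the prompt length (point 1).
    return 4 * n_prompt**2 * d_model

def decode_step_attention_flops(context_len: int, d_model: int) -> float:
    # With a KV cache, the new token attends to `context_len` cached entries:
    # roughly 4 * context_len * d_model FLOPs, linear in context length (point 2).
    return 4 * context_len * d_model

d_model = 4096                  # assumed hidden size
n_in, n_out = 1000, 200         # assumed prompt / completion lengths

prefill = prefill_attention_flops(n_in, d_model)
decode = sum(decode_step_attention_flops(n_in + i, d_model) for i in range(n_out))
print(f"prefill: {prefill:.2e} attention FLOPs, decode: {decode:.2e} attention FLOPs")
```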

I want to be able to estimate the cost of processing a single token, and I cannot wrap my head around this. I made theoretical estimates based on GPU rental price and, separately, based on power consumption (assuming some utilization, such as 10%), and I believe I somehow need to differentiate between input and output tokens here.
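
For concreteness, here is a minimal back-of-envelope version of the GPU-rent estimate; every number in it (GPU price, peak FLOP/s, utilization, model size) is an assumed placeholder rather than a measured figure:

```python
# Back-of-envelope $/token from GPU rental price. All numbers below are assumed
# placeholders, only meant to show the shape of the calculation.

gpu_price_per_hour = 2.0      # assumed $/hour to rent one GPU
gpu_peak_flops = 1.0e15       # assumed peak FLOP/s of that GPU
utilization = 0.10            # assumed utilization, as in the post
params = 70e9                 # assumed model size in parameters

flops_per_token = 2 * params  # ~2 FLOPs per parameter per token for a forward pass
tokens_per_second = gpu_peak_flops * utilization / flops_per_token
cost_per_token = gpu_price_per_hour / 3600 / tokens_per_second
print(f"~${cost_per_token * 1e6:.2f} per 1M tokens")
```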

In one tweet from LeptonAI, who host these LLMs, I also saw that there are usually 3-10 times more input tokens than output tokens. Again, if input tokens dominate the sequence and FLOPS were the issue, I would expect that to be reflected in the pricing. I'm not sure what role this plays in the calculations so far.

Any help is appreciated, thanks!

Answers

answer by leogao · 2024-08-08T15:34:01.180Z · LW(p) · GW(p)

With KV caching, it costs almost exactly as many FLOPs to take 100 input tokens and generate 900 output tokens, as to take 900 input tokens and generate 100 output tokens. However, you need a lot more memory/memory bandwidth to process an output token than an input token, because to process an output token you also need to fit the KV cache in memory.
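
A rough sketch of the FLOPs side of this (the parameter count and hidden size below are arbitrary illustrative assumptions, and only the dominant matmul and attention terms are counted):

```python
# With KV caching, total forward-pass FLOPs depend (to first order) only on the
# total sequence length, not on the input/output split. Model size and hidden
# dimension are assumed illustrative values.

params = 70e9        # assumed parameter count (dense matmul FLOPs ~ 2 * params per token)
d_model = 8192       # assumed hidden size (attention FLOPs ~ 4 * context * d_model per token)

def total_forward_flops(n_in: int, n_out: int) -> float:
    n = n_in + n_out
    dense = 2 * params * n                          # same cost per token, input or output
    attn = sum(4 * t * d_model for t in range(n))   # each position attends to its prefix
    return dense + attn                             # dense term dominates at these lengths

print(f"{total_forward_flops(100, 900):.3e}")   # 100 input / 900 output tokens
print(f"{total_forward_flops(900, 100):.3e}")   # 900 input / 100 output: same total
```

The memory story is different: each decode step has to read the whole (growing) KV cache and the model weights to produce one token, whereas prefill amortizes those reads across many tokens processed at once.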

comment by kotrfa · 2024-08-08T16:17:00.268Z · LW(p) · GW(p)

Got it, thanks!

But to process the 1001st input token, you also need to load all 1000 previous tokens into memory to form the cache (though that happens in a single pass). And for each new output token, you surely don't dump the existing KV cache after each generation only to load it again and append the KV vectors for the last generated token. So isn't the extra work for output tokens just that the KV cache is accessed, generated, and expanded one token at a time, and that's where the "more work" comes from?

Is there any reason why this would imply that the output:input pricing ratio is commonly something like 3:1?

Replies from: kotrfa
comment by kotrfa · 2024-08-08T16:42:43.005Z · LW(p) · GW(p)

Heh, I actually think it's answered here [LW(p) · GW(p)].

answer by mishka · 2024-08-08T13:23:50.347Z · LW(p) · GW(p)

I doubt the actual cost can be linear. It's probably done to make things easier and more predictable for customers.

Intuitively, it seems that output tokens should be more expensive. The autoregressive model has to run once for each output token, and as these runs progress, output tokens gradually become a part of the input (so the last token is generated with context being all input and almost all output).

But exact formulas would depend on the implementation. I think they do amortize their costs among all uses. The number of runs (the number of output tokens) multiplied by a (varying) cost of each run is unlikely to be close to linear.

comment by kotrfa · 2024-08-08T13:59:51.733Z · LW(p) · GW(p)

Thanks for the answer, I appreciate it!

Intuitively, it seems that output tokens should be more expensive. The autoregressive model has to run once for each output token, and as these runs progress, output tokens gradually become a part of the input (so the last token is generated with context being all input and almost all output).

I agree with the intuition, but I think that's where I am confused. Thanks to the KV cache, we do not run the new sequence (previous sequence + last generated token) through the model again (as we do for the input sequence during prefill); it's all cached (from prefill plus from the previous generation steps). So... I don't know - it doesn't feel like the output tokens are more expensive in this case (you run "once", the same way as you run "once" for every input token)?

I think they do amortize their costs among all uses. The number of runs (the number of output tokens) multiplied by a (varying) cost of each run is unlikely to be close to linear.

Do you mind saying more about this? I am not sure what you mean - i.e., that some pay more and some pay less (e.g. heavy hitters pay less while small prompters pay comparatively more per token)?

Replies from: mishka, tjbai
comment by mishka · 2024-08-08T17:08:23.326Z · LW(p) · GW(p)

it doesn't feel like the output tokens are more expensive in this case (you run "once", the same way as you run "once" for every input token)?

One has to run the whole Transformer once for an output token. So if we ignore the difference between runs, the complexity would be the number of output tokens multiplied by the cost of a single run.

Now, what is the cost of the run, and how does it vary?

The context for that run is sometimes all inputs, sometimes all inputs plus almost all outputs, and sometimes something in between. If we disregard the fact that output tokens are only included in the contexts of some of the runs, then input and output tokens should contribute similarly to the cost of the typical run. In reality, output tokens contribute less, because a typical output token only participates in approximately half of those runs (early output tokens participate in almost all runs, late output tokens in almost none). I am not sure how things like KV caching affect that. So a typical output token would contribute about half as much as an input token to the cost of a single run.

(I would assume that a clever caching scheme would eliminate or reduce the difference between input and output tokens in the sense of their contribution to the cost of a single run.)

But the number of runs is exactly the number of output tokens, so the overall cost seems to grow much faster when the number of output tokens grows.
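
A toy version of that argument, assuming each run costs roughly proportionally to its context length and ignoring that prefill can process all input tokens in one batched pass:

```python
# Toy run-count model: the i-th output token's run sees a context of
# n_in + i previous tokens, and there are n_out runs in total.

def total_run_cost(n_in: int, n_out: int) -> int:
    return sum(n_in + i for i in range(n_out))

# Same total length, different splits:
print(total_run_cost(900, 100))   # few runs over long contexts  -> 94950
print(total_run_cost(100, 900))   # many runs                    -> 494550
```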

Do you mind saying more about this? I am not sure what you mean - i.e., that some pay more and some pay less (e.g. heavy hitters pay less while small prompters pay comparatively more per token)?

No, the typical pricing tends to be the same (unless one can score a discounted plan).

But some queries might bring more profit, some queries might bring less profit, and some queries might be at a loss, depending on the query. The provider is OK with non-uniform margin on the queries and with dependency of that margin on the number of input tokens, the number of output tokens, and so on. They care about average payments vs average costs.

(Similarly, with flat monthly fee in interfaces like ChatGPT, they make more money off light users, and might even be OK with taking a loss with the heaviest users as long as there are not too many of them.)

Replies from: kotrfa
comment by kotrfa · 2024-08-09T08:59:40.135Z · LW(p) · GW(p)

Thanks. I think I get it now. (At least one of) my confusions was conflating a "transformer run" with the "number of FLOPS".

And I get the thing about cost, that's what I meant but I articulated it poorly.

Replies from: mishka
comment by mishka · 2024-08-16T00:46:44.490Z · LW(p) · GW(p)

An extra recent data point: currently GPT-4o costs $5.00 / 1M input tokens and $15.00 / 1M output tokens https://openai.com/api/pricing/

They just made an experimental "long output" of up to 64K output tokens per request available to "alpha users", and here is what they did for pricing https://openai.com/gpt-4o-long-output/:

Long completions are more costly from an inference perspective, so the per-token pricing of this model is increased to match the costs.

$6.00 / 1M tokens and $18.00 / 1M tokens

Replies from: kotrfa
comment by kotrfa · 2024-08-16T20:24:38.615Z · LW(p) · GW(p)

Interesting, thanks!

comment by tjbai · 2024-08-08T17:08:46.838Z · LW(p) · GW(p)

Output tokens certainly do not scale linearly, even with a KV cache. The KV cache means you don't need to recompute the K/Q/V vectors for each of the previous tokens, but you still need to compute n query-key dot products for the (n+1)-st token.
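
A minimal sketch of one cached decode step (the dimensions are arbitrary illustrative choices):

```python
# One cached decode step: the KV cache removes the need to recompute K/V for
# old tokens, but the new token's query is still dotted against every cached key.
import numpy as np

d_head, n_cached = 64, 1000
q = np.random.randn(d_head)                    # query for the (n+1)-st token
K_cache = np.random.randn(n_cached, d_head)    # cached keys for the previous n tokens
V_cache = np.random.randn(n_cached, d_head)    # cached values for the previous n tokens

scores = K_cache @ q / np.sqrt(d_head)         # n dot products: linear in context length
weights = np.exp(scores - scores.max())
weights /= weights.sum()
out = weights @ V_cache                        # weighted sum over all cached values
print(out.shape)                               # (64,)
```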
