OpenAI introduce ChatGPT API at 1/10th the previous $/token

post by Arthur Conmy (arthur-conmy) · 2023-03-01T20:48:51.636Z · LW · GW · 4 comments

This is a link post for https://openai.com/blog/introducing-chatgpt-and-whisper-apis


OpenAI add gpt-3.5-turbo to their API, charging $0.002 per 1k tokens, and cite "a series of system-wide optimizations" for the 90% cost reduction.

Another example of the dizzying speed of language model progress.

4 comments

Comments sorted by top scores.

comment by Sheikh Abdur Raheem Ali (sheikh-abdur-raheem-ali) · 2023-03-02T01:11:59.899Z · LW(p) · GW(p)

# Notes:

## Whisper API:

* Whisper API is 4x cheaper than Google's Speech-to-Text API.
* Max file size is 25 MB, rate limit is 50 requests per minute
* I think you would run into problems if you tried uploading 1.25 GB per minute (25 MB × 50 requests), though.
* Whisper pricing is minute-based! 
* That means it is not token or bandwidth based!
* How is accuracy impacted if I preprocess my audio to 2x or 5x speed?
* Trimming long silent pauses would also obviously reduce cost.
* Going to wait for load to increase before I attempt profiling endpoint latency.
* Did they give up on audio generation? Haven't heard anything since MuseNet/Jukebox.
* Barely documented, but `verbose_json` is the `response_format` you'll want (see the sketch after this list).
* Provides fields like duration, avg_logprob, compression_ratio, no_speech_prob, tokens, and transient.
* Huh, the Whisper repo uses GPT2TokenizerFast instead of tiktoken, wonder why.
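
A minimal sketch of the two notes above (speed-up preprocessing and `verbose_json`), assuming ffmpeg on PATH and the 0.27-era `openai` Python package; the filename and API key are placeholders, and the accuracy impact of the 2x speed-up is untested:

```python
# Sketch: halve the billable minutes by speeding the audio up 2x before
# sending it to Whisper, then ask for verbose_json to get segment metadata.
# Assumes ffmpeg is installed and `pip install openai` (0.27-era API);
# "meeting.mp3" is a placeholder file.
import subprocess
import openai

openai.api_key = "sk-..."  # placeholder

# atempo=2.0 doubles playback speed without changing pitch.
subprocess.run(
    ["ffmpeg", "-y", "-i", "meeting.mp3", "-filter:a", "atempo=2.0", "meeting_2x.mp3"],
    check=True,
)

with open("meeting_2x.mp3", "rb") as f:
    result = openai.Audio.transcribe("whisper-1", f, response_format="verbose_json")

# verbose_json exposes per-segment metadata alongside the plain text.
print(result["duration"], result["text"][:80])
for seg in result["segments"]:
    print(seg["avg_logprob"], seg["no_speech_prob"], seg["compression_ratio"])
```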

## ChatGPT API:
* Chat API messages are “role” and “content” pairs.
* Three "roles":
  * System: prompt, can add a `name` field with `example_user` or `example_assistant` (not nested)
  * User: prompt, somehow has more impact on output than the system prompt. (Details?)
  * Assistant: output of the language model.
* Eventually "role" will be a more general header, to no one's surprise.
* Eventually "content" will be multimodal, again to no one's surprise.
* This feels like they shipped the first version they had rather than taking time to refine/iterate.
* Subjectively: `response['choices'][0]['message']['content']` looks very ugly (see the sketch after this list).
* What happened to the OpenAI I knew? Just reread https://blog.gregbrockman.com/my-path-to-openai#initial-spark_1 and it sounds like a completely different company.
* They didn't put the Chat model in the playground. Deliberate omission or not part of launch list?
* Also missing from the [Prompt Comparison tool](https://gpttools.com/comparisontool) by Andrew Mayne (Science Communicator, 2.75 years tenure).
* 12 params in chat vs 16 in completion. No best_of, echo, logprobs, or suffix.
* Won't miss any of them except logprobs. Hope they add them back!
* 4096 max tokens for gpt-3.5-turbo.
* Training data **up to Sep 2021**
* Will receive regular updates. Hopefully they don't do them silently like code-davinci-002.
* Input and output tokens treated equally for billing even though prefill is cheaper than decode.
* Consequence: high margins when conversation history is long and next message is short.
* Feel like there's a difference between this model and what you get at chat.openai.com; need to do more analysis of the model-generated content to be sure.
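
A minimal sketch of the chat format described above, assuming the 0.27-era `openai` Python package; the prompts and few-shot example are made up, and the accessor at the end is the ugly one noted earlier:

```python
# Sketch of the new chat endpoint: messages are "role"/"content" pairs,
# few-shot examples go in system messages via the `name` field, and the
# completion text sits behind the ugly path noted above.
# Assumes `pip install openai` (0.27-era API); prompts are placeholders.
import openai

openai.api_key = "sk-..."  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "system", "name": "example_user", "content": "2 + 2?"},
        {"role": "system", "name": "example_assistant", "content": "4"},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    temperature=0,
    max_tokens=64,
)

print(response["choices"][0]["message"]["content"])

# Prompt and completion tokens are billed at the same $0.002 / 1k rate,
# so long conversation histories dominate the cost of short replies.
usage = response["usage"]
print(usage["prompt_tokens"], usage["completion_tokens"], usage["total_tokens"])
```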

## New Terms of Service

* The only interesting part in the new terms for me was this:

* "Processing of Personal Data. ... If you are governed by the GDPR or CCPA and will be using OpenAI for the processing of “personal data” as defined in the GDPR or “Personal Information,” please fill out this form to request to execute our Data Processing Addendum."
* Also, 3(c) is interesting since it says non-API content will still be used for training; only API content is excluded by default. Retention period is 30 days, and I have no idea how easy it is for any random employee to pull up your content.
* New jobs posted in the last 24 hours:
  * Order Management Specialist
  * Software Engineer, Triton Compiler
  * Security Engineer, Detection and Response
  * Software Engineer, Full-Stack (for Codegen team and Programming Assistant team)
  * Software Engineer, Billing and Monetization
* Feel like there's more "Legal Counsel" on https://openai.com/careers/search than there used to be.

Links:

* Python lib commit by Atty Eleti https://github.com/openai/openai-python/commit/62b73b9bd426d131910534ae6e0d23d7ae4f8fde
  * He joined relatively recently (5 months ago); background is in graphic design. 2017 grad.
* Node lib commit by David Schnurr: https://github.com/openai/openai-node/commit/75f685369dd82be07a13d12828b6128669ee45b8
  * Same guy as usual, 2.75 years of tenure, has a background in data visualization. 2012 grad.
* ChatML.md by Logan Kilpatrick https://github.com/openai/openai-python/commit/75c90a71e88e4194ce22c71edeb3d2dee7f6ac93?short_path=7f2aec2#diff-7f2aec20608b2dd1799a950e8f79c9a16415289e7d195434751e4985c06c2140
  * First developer relations person, 4 months of tenure.
* Walkthrough notebook by Ted Sanders https://github.com/openai/openai-cookbook/commit/73a64ff7da07ce2e90de2f43dfc75cbf68773300?short_path=b335630#diff-2d4485035b3a3469802dbad11d7b4f834df0ea0e2790f418976b303bc82c1874
  * Machine learning engineer, 1 year 4 months of tenure, background in consulting and data science, PhD in Applied Physics 2016.
* This branch of Whisper by Jong Wook Kim: https://github.com/openai/whisper/tree/word-level-timestamps
  * 3 years and 8 months of tenure.
* Transition Guide by Joshua J: https://help.openai.com/en/articles/7042661-chatgpt-api-transition-guide
* Chat API FAQ by Johanna C: https://help.openai.com/en/articles/7039783-chatgpt-api-faq
* Data Usage for Consumer Services FAQ https://help.openai.com/en/articles/7039943-data-usage-for-consumer-services-faq
* API reference for chat endpoint: https://platform.openai.com/docs/api-reference/chat
* Guide for chat endpoint: https://platform.openai.com/docs/guides/chat
* GPT-3.5 Models Page: https://platform.openai.com/docs/models/gpt-3-5
* New terms of use: https://openai.com/policies/terms-of-use
* Blog post: https://openai.com/blog/introducing-chatgpt-and-whisper-apis
* Authors not accounted for: Eli Georges, Joanne Jang, Rachel Lim, Luke Miller, Michelle Pokras.

comment by LawrenceC (LawChan) · 2023-03-01T21:09:31.117Z · LW(p) · GW(p)

Also, you can now use Whisper-v2 Large via API, and it's very fast!

comment by ponkaloupe · 2023-03-01T21:53:52.503Z · LW(p) · GW(p)

further down on that page:

We are also now offering dedicated instances for users who want deeper control over the specific model version and system performance. By default, requests are run on compute infrastructure shared with other users, who pay per request. Our API runs on Azure, and with dedicated instances, developers will pay by time period for an allocation of compute infrastructure that’s reserved for serving their requests.

Developers get full control over the instance’s load (higher load improves throughput but makes each request slower), the option to enable features such as longer context limits, and the ability to pin the model snapshot.

Dedicated instances can make economic sense for developers running beyond ~450M tokens per day.

that suggests one shared “instance” is capable of processing > 450M tokens per day, i.e. $900 of API fees per day at this new rate. i don’t know exactly what their infrastructure looks like, but the marginal cost of the compute here has got to still be an order of magnitude lower than what they’re charging (which is sensible: they have fixed costs to recoup, and they are seeking to profit).
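
(For concreteness, the arithmetic behind that $900 figure as a quick sketch; the 450M-token threshold is from the quoted text above.)

```python
# Back-of-the-envelope: 450M tokens/day priced at the new shared gpt-3.5-turbo rate.
tokens_per_day = 450_000_000
usd_per_1k_tokens = 0.002
print(tokens_per_day / 1_000 * usd_per_1k_tokens)  # -> 900.0 USD per day
```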

comment by sanxiyn · 2023-03-02T03:27:25.088Z · LW(p) · GW(p)

Any idea what those optimizations are? I am drawing a blank.