Photo by Midjourney

Prompt Economics

How Does OpenAI Manage to Price Their Massive LLM So Low?

OneAI · May 1, 2023 · 3 min read

OpenAI's GPT models have taken the world by storm, and rightfully so. We recognize their immense value and have integrated them into our platform. However, while conducting our own pricing exercises, we found ourselves puzzled by OpenAI's pricing, which seemed almost too good to be true. We decided to dig into the details to better understand it.

Methodology & Assumptions

We wanted to get a good grasp of the situation, so we tested hundreds of tasks with different prompt lengths and at various times of the day. We measured response times and estimated compute time to derive the cost of each API call. We also counted the tokens in the prompts and outputs to calculate the average tokens processed per second, the cost per token, and how those figures relate to OpenAI's pricing.

We assumed a cost of $3/hour per GPU, with a 5-GPU cluster running davinci-003 and a quarter of that compute running gpt-3.5-turbo. As for our methodology, we tested tasks with prompt lengths ranging from 34 to 917 words and output lengths between 18 and 1,572 words. The tasks included a variety of activities, such as writing summaries, tweets, LinkedIn posts, and blog articles.
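To make this concrete, here is a minimal sketch of the kind of back-of-the-envelope calculation we ran. The cluster sizes and hourly rate come from the assumptions above; the simplification that each call occupies the full cluster for its measured wall-clock latency, and the example latency of ~14 seconds, are ours and purely illustrative.

```python
# Back-of-the-envelope compute-cost model built on the assumptions above:
# $3/hour per GPU, a 5-GPU cluster for davinci-003, and a quarter of that
# compute for gpt-3.5-turbo. Treating each call as occupying the whole
# cluster for its wall-clock latency is a simplification, for illustration only.

GPU_HOURLY_RATE = 3.0  # assumed $/hour per GPU

CLUSTER_GPUS = {
    "davinci-003": 5.0,
    "gpt-3.5-turbo": 5.0 / 4,
}

def cost_per_call(model: str, latency_seconds: float) -> float:
    """Estimated compute cost of one API call."""
    hourly_cost = CLUSTER_GPUS[model] * GPU_HOURLY_RATE
    return hourly_cost / 3600 * latency_seconds

def cost_per_token(model: str, latency_seconds: float, completion_tokens: int) -> float:
    """Estimated compute cost per generated token."""
    return cost_per_call(model, latency_seconds) / completion_tokens

# Hypothetical example: a davinci-003 call that generates 331 tokens in ~14 seconds
# works out to roughly $0.059 per call, or about $0.00018 per generated token.
print(cost_per_call("davinci-003", 14.1))
print(cost_per_token("davinci-003", 14.1, 331))
```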

Prompt economics

Our research suggests that OpenAI might be losing quite a bit on each API call. Here are the economics of the “average prompt”:

                            gpt-3.5-turbo   davinci-003
Prompt tokens               520             519
Completion tokens           346             331
Cost per token              $0.000056       $0.000178
Cost per prompt             $0.019          $0.059
List price (per 1K tokens)  $0.002          $0.02
Revenue per prompt          $0.0017         $0.02
Margin (loss)               -91%            -71%
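The margins in the table follow from simple per-prompt arithmetic. The sketch below reproduces them under one assumption of ours: compute cost scales with completion tokens (generation time dominates), while the list price is billed on total prompt-plus-completion tokens. Any differences from the table are rounding.

```python
# Reproduce the "average prompt" economics from the table above.
# Assumption (ours): compute cost is driven by completion tokens, while
# list pricing applies to total prompt + completion tokens.

models = {
    "gpt-3.5-turbo": {"prompt": 520, "completion": 346,
                      "cost_per_token": 0.000056, "list_price_per_1k": 0.002},
    "davinci-003":   {"prompt": 519, "completion": 331,
                      "cost_per_token": 0.000178, "list_price_per_1k": 0.02},
}

for name, m in models.items():
    cost = m["completion"] * m["cost_per_token"]
    revenue = (m["prompt"] + m["completion"]) / 1000 * m["list_price_per_1k"]
    margin = (revenue - cost) / cost
    print(f"{name}: cost ${cost:.3f}, revenue ${revenue:.4f}, margin {margin:.0%}")

# gpt-3.5-turbo: cost $0.019, revenue $0.0017, margin -91%
# davinci-003:   cost $0.059, revenue $0.0170, margin -71%
```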

Discussion

These findings are fascinating. Given that cloud businesses typically maintain a gross margin of 50-80%, it's astonishing to discover that OpenAI not only appears to operate at a loss, but incurs such substantial losses on each API call. We presume OpenAI is taking steps to address this, and we are eager to learn about their approach.

Operating at a loss is a common strategy for building a great business (see Salesforce, Amazon, and other tech stars that lost money for years before turning profitable). Moreover, offering the GPT models at a loss helps cultivate a community of developers and spark innovative applications.

Yet, some questions arise:

  1. How long can OpenAI sustain this strategy, especially as both their growth and their losses skyrocket?
  2. Can businesses trust that OpenAI will not dramatically raise prices in the near future, once this technology is so deeply embedded in their products and services that it would be difficult to extricate?
  3. How might the substantial financial resources of large players undermine smaller AI companies and reshape the AI market as a whole?

Get involved!

We warmly invite you to dive into our data and calculations in this spreadsheet, share your thoughts, and help us improve our understanding of OpenAI's pricing strategy. If you're interested in exploring further, just get in touch with us for more information!