Is DeepSeek's Open Source LLM 90-95% Less Expensive To Operate Than OpenAI's o1?
Cost and model transparency are the primary reasons to deploy an open source LLM such as DeepSeek's R1 or Meta's Llama models. Our view is that most products and services built on LLMs will run on open source models such as those developed by DeepSeek and Meta.
We are advocates for open source LLMs. DeepSeek's R1 model is 90-95% cheaper to run than OpenAI's o1 model. Nor are open source models years behind proprietary frontier models; they trail by only about 2-3 months.
Here’s a piece from VB about DeepSeek’s R1 model: HERE
OpenAI o1 costs $15 per million input tokens and $60 per million output tokens. Comparatively, DeepSeek Reasoner costs $0.55 per million input tokens and $2.19 per million output tokens.
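A quick sanity check of those list prices shows the implied savings, which actually come out slightly above the 90-95% headline figure. This sketch just recomputes the gap from the numbers quoted above (prices are subject to change by either provider):

```python
# Cost comparison using the list prices quoted above,
# in USD per million tokens (may change over time).
O1_INPUT, O1_OUTPUT = 15.00, 60.00   # OpenAI o1
R1_INPUT, R1_OUTPUT = 0.55, 2.19     # DeepSeek Reasoner (R1)

def pct_savings(expensive: float, cheap: float) -> float:
    """Percent saved by paying the cheaper rate instead of the pricier one."""
    return (1 - cheap / expensive) * 100

print(f"Input tokens:  {pct_savings(O1_INPUT, R1_INPUT):.1f}% cheaper")
print(f"Output tokens: {pct_savings(O1_OUTPUT, R1_OUTPUT):.1f}% cheaper")
```

Both ratios land near 96%, consistent with the 90-95% claim.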