Meta Still Sees OpenAI as a Competitor, But Not DeepSeek Anymore 

The vibe at LlamaCon 2025—Meta’s first developer summit—was noticeably different. It wasn’t about chasing headlines or claiming dominance in the AI race. Instead, Meta focused on building cost-efficient tools for developers and enterprises. 

At the event, the company launched a standalone Meta AI app powered by Llama 4 to compete with ChatGPT, and introduced the Llama API to help enterprises customise Llama models.

Both announcements reflect Meta’s strategy to go head-to-head with OpenAI, which is reportedly working on a social app.

The Llama API offers one-click key generation and an interactive playground for exploring models like Llama 4 Scout and Llama 4 Maverick. “We provide a lightweight SDK in both Python and TypeScript,” Meta said during the LlamaCon event, adding that the API is also compatible with OpenAI’s SDK for easy migration.
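Because the Llama API follows the OpenAI chat-completions format, existing client code should need little more than a new base URL. The sketch below builds such a request payload with the standard library only; the endpoint URL and model name are assumptions for illustration, not documented values — check Meta’s Llama API docs for the real ones.

```python
import json

# Hypothetical values -- the real base URL and model identifiers
# come from Meta's Llama API documentation, not from this article.
BASE_URL = "https://api.llama.example/v1"  # assumed endpoint

def build_chat_request(prompt: str, model: str = "llama-4-maverick") -> dict:
    """Build an OpenAI-style chat-completions payload.

    Since the Llama API is OpenAI SDK-compatible, the same JSON shape
    (a model name plus a list of role/content messages) should work
    with either client.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Summarise LlamaCon 2025 in one line.")
body = json.dumps(payload)  # this is what an SDK would POST to BASE_URL
```

In practice, migrating from OpenAI’s SDK would mean pointing the client’s base URL at Meta’s endpoint and swapping the model name, with the request body unchanged.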

The company also rolled out tools for model fine-tuning and evaluation. Developers can customise the Llama 3.3 8B model, generate training data, and evaluate results directly through the API. Meta has also partnered with Cerebras and Groq to serve Llama 4 inference through the API.

Meta Moves Past DeepSeek

Meta still positions itself as the open-source torchbearer. Llama recently crossed 1 billion downloads. 

In a conversation with Meta chief Mark Zuckerberg, Databricks CEO Ali Ghodsi said the open-source nature of LLMs has people “super excited to mix and match the different models.”

“DeepSeek is better, Qwen is better at something. As developers, you have the chance to take the best parts of the intelligence from the different models and produce exactly what you need,” said Zuckerberg.

For instance, Alibaba’s latest Qwen3 235B-parameter model outperforms OpenAI’s o1 and o3-mini (medium) reasoning models on benchmarks evaluating mathematical and programming abilities. 

“People are doing crazy things—slicing, combining models, and getting better results. All of this is completely impossible if it wasn’t open source,” said Ali Ghodsi. “When it comes to model API business and serving LLMs, every model will be open source. You might not know it yet.”

Zuckerberg acknowledged this pricing pressure directly: “Every time we do a Llama release, all the other companies drop their API prices,” he said.

Claude 3.7 Sonnet is priced at $3 per million input tokens and $15 per million output tokens. Gemini 2.5 Pro costs $1.25 for input and $10 for output; GPT-4.1 comes in at $2 and $8, respectively.
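To put those per-million-token rates in concrete terms, the snippet below computes the cost of a single request under each listed price. The token counts in the example are arbitrary illustrations, not figures from the article.

```python
# Per-million-token prices (USD) as listed above: (input, output).
PRICES = {
    "Claude 3.7 Sonnet": (3.00, 15.00),
    "Gemini 2.5 Pro":    (1.25, 10.00),
    "GPT-4.1":           (2.00,  8.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed per-million-token rates."""
    inp, out = PRICES[model]
    return (input_tokens * inp + output_tokens * out) / 1_000_000

# Example: a request with 10k input tokens and 2k output tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 2_000):.4f}")
```

At those rates, output tokens dominate the bill for long responses, which is one reason a cheaper open-weight alternative puts pressure on API pricing.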

Moreover, Ghodsi observed two emerging trends among customers. First, there is a shift towards smaller models designed for specific use cases, and second, there is an increased focus on inference-time compute and reasoning models. “The most common model people were using on Databricks was the Llama-distilled DeepSeek ones, where you took the R1 reasoning and distilled it on top of Llama.”

Ghodsi said that most organisations don’t need a model that can do everything—they just need a smaller model that performs well on a specific task they repeat often. He explained that by using distillation, they can retain the intelligence of the larger model but make it smaller, faster, and more cost-effective to run billions of times a day.
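The distillation idea Ghodsi describes — a small student model trained to match a large teacher’s softened output distribution — can be illustrated with a minimal loss function. This is a generic knowledge-distillation sketch in pure Python, not Meta’s or Databricks’ actual training pipeline; the logits are made-up numbers.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the student to the teacher distribution.

    The student is trained to match the teacher's softened outputs;
    a higher temperature exposes more of the teacher's knowledge
    about how similar the non-top answers are to each other.
    """
    p = softmax(teacher_logits, temperature)   # teacher (target)
    q = softmax(student_logits, temperature)   # student (prediction)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# The loss is zero when the student exactly matches the teacher...
matched = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
# ...and positive when it does not.
off = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

Minimising this loss over a task-specific dataset is what lets a small model inherit the larger model’s behaviour on that task while staying cheap enough to run billions of times a day.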

Meta’s Next Model and Strategy

Zuckerberg revealed that Meta is working on a new model, internally referred to as “Little Llama.” However, it is worth noting that Meta hasn’t released any reasoning model yet. 

Meanwhile, OpenAI chief Sam Altman recently confirmed that a powerful new open-weight model, with strong reasoning capabilities, will be shipped soon.

Zuckerberg, in a recent podcast with Dwarkesh Patel, stated that comparing Llama 4 with DeepSeek R1 isn’t fair, as Meta hasn’t yet released its reasoning model. “We’re basically in the same ballpark on all the text stuff that DeepSeek is doing, but with a smaller model. The cost-per-intelligence is lower with what we’re doing for Llama on text,” he said.

Moreover, when Patel pointed out that Llama 4 models, including Maverick, haven’t been that impressive on Chatbot Arena, lagging behind similarly sized models such as Gemini 2.5 Flash and o4-mini, Zuckerberg argued that open benchmarks like Chatbot Arena tend to evaluate language models on narrow or artificial tasks that don’t reflect real-world use cases or how people interact with products. 

“As a result, these benchmarks can give a skewed or misleading view of a model’s usefulness in real products,” he noted. 

On licensing, Zuckerberg acknowledged concerns from open-source purists over the level of openness in Llama’s license. However, he noted that most companies haven’t raised objections, even with the clause requiring companies with more than 700 million users to contact Meta.

He also suggested that it’s reasonable for Meta to want large companies to discuss their needs with them before using a model that costs them billions to train. “I think asking the other companies—the huge ones that are similar in size and can easily afford to have a relationship with us—to talk to us before they use it seems like a pretty reasonable thing,” he said.

The post Meta Still Sees OpenAI as a Competitor, But Not DeepSeek Anymore appeared first on Analytics India Magazine.
