With Llama 4, Meta’s newest series of open-weight AI models, earning only muted critical acclaim and prompting reflection within the company, Zuckerberg seems determined to pull the company back into the AI game.
Word that Meta is back in the AI business has not always been kind. In one post on X, a user identifying as an AI researcher and software engineer commented on Llama 4, sharing a note that called it “a model that shouldn’t have been released”.
Critical feedback has possibly helped Zuckerberg decide to skip over Artificial General Intelligence (AGI) altogether, repositioning Meta as a company chasing superintelligence instead.
He recently stated that superintelligence is now within reach. “Over the last few months, we’ve started to see signs of our AI systems improving themselves. The progress is slow for now, but it’s undeniable. Developing superintelligence is now in sight,” he wrote in his blog post.
When it comes to superintelligence, Zuckerberg is not alone. Last year, OpenAI CEO Sam Altman wrote in a blog post that superintelligence could be just a few thousand days away. While announcing Stargate Norway, Altman said the company is now starting to look ahead to superintelligence.
“This is a technology that will reshape the global economy and really the whole way we live our lives. It’s critical that superintelligence becomes cheap, broadly available, and not that concentrated with any one person, company, or country,” he said.
Even Ilya Sutskever, OpenAI’s former chief scientist, left the company to launch his own venture, Safe Superintelligence, which is developing new methods for building superintelligence safely.
Zuckerberg signals caution
Now, Zuckerberg is signalling caution too. He clarified that Meta’s upcoming model may not be open-sourced. “Superintelligence will raise novel safety concerns. We’ll need to be rigorous about mitigating these risks and careful about what we choose to open source,” he said.
This marks a 180-degree shift from his stance last year, when Zuckerberg wrote a blog post stating that open source was the path forward and calling Llama models the “Linux moment” for AI.
However, during Meta’s latest earnings call, he struck a more confident tone and said the company is making strong progress on Llama 4.1 and 4.2. “We’re also working on our next generation of models that will push the frontier in the next year or so.”
He said AI labs are not open-sourcing frontier models because they are too large to be practical for developers and would primarily benefit competitors. He assured that the company will continue to open-source some models.
Harneet SN, co-founder of Rabbitt AI, told AIM that while Meta’s Llama 4 appears promising on paper with its Mixture-of-Experts architecture and native multimodality, its real-world performance has left some gaps. “Its long-context capabilities don’t quite hit the mark they advertise, and image understanding can sometimes be a bit off, leading to unexpected outputs,” he noted.
For Pluto, their marketing agent, Llama 4 Maverick has felt “competent,” particularly in research and lead enrichment tasks, but Harneet said it “hasn’t really blown us away” and has struggled with more complex UI/UX tasks.
While many AI leaders casually throw around terms like AGI and superintelligence, Anthropic CEO Dario Amodei dismisses them as “totally meaningless.”
He said, “I don’t know what AGI is. I don’t know what superintelligence is… it sounds like a marketing term—something designed to activate people’s dopamine.” Amodei added that although he avoids using such buzzwords in public, he remains one of the most bullish voices on the rapid progress of AI capabilities.
Billions Spent
To support this pivot, Zuckerberg said Meta is building a “talent-dense” team to push the boundaries of AI development. Alexandr Wang is leading the overall effort, with Nat Friedman heading AI products and applied research, and Shengjia Zhao serving as chief scientist.
“They’re all incredibly talented leaders, and I’m excited to work closely with them and the world-class group of AI researchers, and infrastructure and data engineers we’re assembling,” he said.
According to Zuckerberg, the reason so many top-tier people are joining is Meta’s unique position in the AI landscape: “Meta has all the ingredients required to build leading models and deliver them to billions of people.”
Meta has spent billions of dollars to bring new talent on board, the firm said in its Q2 earnings call. “We expect full year 2025 total expenses to be in the range of $114-118 billion, narrowed from our prior outlook of $113-118 billion and reflecting a growth rate of 20-24% year-over-year,” said Susan Li, Meta’s chief financial officer (CFO).
For 2026, Meta expects expenses to rise even more sharply than in 2025. The primary driver will be infrastructure-related costs.
The second major factor behind the projected rise in expenses is employee compensation. Li said the company will continue to hire technical talent in priority areas, and 2026 will include a full year of salary and benefits for employees onboarded throughout 2025.
Despite Meta’s efforts to attract top minds, a recent report claims that not a single person from Mira Murati’s Thinking Machines Lab accepted an offer from Meta.
“We now expect our 2025 capital expenditures, including principal payments on finance leases, to range between $66 billion and $72 billion,” said the CFO.
Zuckerberg said that the new hires will have access to massive computing resources, including several multi-gigawatt clusters under development.
Meta’s Prometheus cluster is expected to go live next year and, Zuckerberg said, “we think it’ll be the world’s first 1GW+ cluster.” Another initiative, Hyperion, is being designed to scale up to 5GW over the next few years. Several additional titan-scale clusters are also in the pipeline.
Is China Already Ahead?
While Meta has offered no update on the status of its open-source models, China is accelerating fast. In July alone, multiple Chinese companies released open-source models across general-purpose, coding, and translation domains.
On July 11, Moonshot AI open-sourced Kimi K2, a mixture-of-experts (MoE) model with 1 trillion total parameters and 32 billion active parameters, built for coding and agentic tasks.
Alibaba followed on July 23 with Qwen3-Coder-480B-A35B, an MoE model tailored for software development workflows like code generation, debugging, and tool use. It features 480 billion total and 35 billion active parameters.
Alibaba also released Qwen3-235B-A22B, a reasoning-focused model with 235 billion total and 22 billion active parameters, supporting a 256,000-token context length—positioning it for complex reasoning workloads.
In addition, Alibaba launched Qwen-MT, a multilingual translation model covering 92 languages, also built on the MoE architecture.
On July 28, Zhipu AI introduced GLM-4.5 and GLM-4.5-Air, both large MoE models. GLM-4.5, with 355 billion total and 32 billion active parameters, is optimised for agentic use cases like reasoning, coding, and even content creation, including PowerPoint generation.
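The “total versus active parameters” figures quoted for these MoE models reflect how a sparse router sends each token to only a few experts, so most of the network sits idle on any given forward pass. The following is a minimal, illustrative sketch of top-k expert routing in NumPy; the dimensions, gating scheme, and variable names are assumptions for demonstration, not the architecture of any specific model mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_forward(x, expert_weights, gate_weights, top_k=2):
    """Route input x to the top_k highest-scoring experts.

    Only top_k experts run per token, which is why a model's
    "active" parameter count is far smaller than its total.
    """
    scores = x @ gate_weights                # one score per expert
    top = np.argsort(scores)[-top_k:]        # indices of the chosen experts
    exp_scores = np.exp(scores[top] - scores[top].max())
    gates = exp_scores / exp_scores.sum()    # normalise over selected experts
    # Weighted sum over only the selected experts' outputs
    return sum(g * (x @ expert_weights[i]) for g, i in zip(gates, top))

d, num_experts = 8, 16                       # toy sizes, assumed for the sketch
experts = rng.standard_normal((num_experts, d, d))
gate = rng.standard_normal((d, num_experts))
x = rng.standard_normal(d)

y = moe_forward(x, experts, gate)
print(y.shape)  # (8,)
```

With 16 experts but only 2 active per token, roughly an eighth of the expert parameters participate in each forward pass, mirroring (at toy scale) ratios like Kimi K2’s 32 billion active out of 1 trillion total.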
Harneet said that Chinese open-source models like DeepSeek and Qwen are more aligned with their needs. They’ve recently started experimenting with Kimi K2 for agentic tasks and are seeing encouraging results. “DeepSeek, for example, has consistently shown stronger performance in areas critical to our work, like coding and mathematical reasoning,” he added.
He also pointed out that Qwen models—particularly QwQ-32B—have performed well in coding tasks. Beyond technical capability, the team appreciates the cost-efficiency of these models, which helps make advanced capabilities more accessible. “Qwen’s extensive multilingual support is a huge benefit for our global reach,” he added. “For the kind of robust, practical applications we’re building, we’re finding these Chinese models to be more consistently reliable and advantageous.”
Talk of superintelligence sounds exciting, but users and developers want results they can use today. Chinese companies are quickly filling that gap with models that deliver value now. Meta needs to prove that its long game doesn’t come at the cost of short-term impact.
The post Meta Just Ghosted AGI appeared first on Analytics India Magazine.