OpenAI’s Open-Weights Could Just Be the Prelude for io

OpenAI has launched two models, gpt-oss-120b and gpt-oss-20b, under the Apache 2.0 licence, soon after the release of several advanced open-source models from China, including DeepSeek V3, Kimi K2, Qwen3-Coder, GLM-4.5 and MiniMax-M1. The move marks a return to the company’s open-source roots.

These models aren’t just symbolic either; they punch in the same league as o4-mini and o3-mini. Clement Delangue, CEO and co-founder of Hugging Face, shared that OpenAI’s gpt-oss is now the number one trending model on the platform, out of nearly 2 million open models.

“China was dominating open source – multiple Qwen releases, Moonshot’s Kimi K2, etc. Now, OpenAI’s gpt-oss has matched them (maybe exceeded), with o4-mini level intelligence running locally,” said LinkedIn co-founder Reid Hoffman in a post on X. “It shows that the United States can compete with China even in open-source races.”

With Llama 4 falling short of expectations, OpenAI’s new models have restored confidence in US contributions to the open-source ecosystem.

The ChatGPT creator has gone open, nearly six years after launching GPT-2 in 2019, and the developer community seems to be loving it. 

“Welcome back, folks, it’s great to have you in the open-source community again. Excited to see what people will build on top of gpt-oss,” said Thomas Wolf, co-founder of Hugging Face, in a post on X. 

While many have been referring to the new release as open-sourcing, the models are, strictly speaking, open-weight. Unlike open-source models, which provide full access to their code, training data and internal parameters, open-weight models offer a more limited form of transparency: the trained weights are published, but the training data and code are not.

Not That Great?

However, early reactions from users of the new open-weight models have been mixed.

“I have been testing GPT-oss-120b for a while. My initial feeling is that the model hallucinates a lot! It’s definitely way worse than gpt-o4-mini,” posted an AI researcher on X. He added that his hunch is that the model was distilled from GPT-5 or o4 using a massive amount of synthetic reasoning tokens, which might be contributing to the excessive hallucination.

Echoing similar sentiments, Jason D Lee, associate professor of EECS and statistics at UC Berkeley, said the model is “complete junk.” 

“It’s a hallucination machine — overfit to reasoning benchmarks and has absolutely zero recall ability.”

“Early indications are that the new GPT open source model is worse than the top Qwen model on LiveBench AI,” said Bindu Reddy, founder of Abacus AI, in a post on X. “Qwen will retain its crown as the king of open source.” 

Building the Smartest Device

At the same time, OpenAI is working with Jony Ive to build its own hardware device through a new entity, io. These open-weight models could be the first step toward creating a developer ecosystem, much like what Android did for smartphones. The startup is currently hiring for multiple positions in the consumer hardware sector, fuelling speculation about what the company might develop.

“Someday soon, something smarter than the smartest person you know will be running on a device in your pocket, helping you with whatever you want,” said OpenAI CEO Sam Altman in a post on X.

On the hardware side, several companies, including Groq, Cerebras Systems and Qualcomm, have stepped forward to run the models on their chips.

“Running on our third-generation Wafer Scale Engine, OpenAI’s gpt-oss-120b runs at up to 3,000 tokens per second – the fastest speed achieved by an OpenAI model in production,” said Andrew Feldman, founder of Cerebras Systems.

Qualcomm described OpenAI’s gpt-oss-20b as a turning point for on-device AI. Through early access and integration with the Qualcomm AI Engine and AI Stack, the company found the 20B parameter model to be highly capable of enabling chain-of-thought reasoning entirely on-device.

“This is a glimpse into the future of AI, where even rich assistant-style reasoning will happen locally,” the company said. Qualcomm noted that this moment reflects the maturity of the AI ecosystem, where contributions from leaders like OpenAI can be rapidly deployed by partners and developers using Snapdragon processors. 

The model’s ability to support on-device inference, Qualcomm added, brings tangible benefits around privacy and latency while still complementing cloud-based AI agents.

For the first time, an OpenAI model can run locally on Windows.

“Starting today, gpt-oss-20B is available through Windows AI Foundry — bringing high-performance, open-weight models directly to your device. No black box. No cloud. Just fast, local inference on modern, high-performance Windows PCs,” said Yusuf Mehdi, corporate vice president and consumer chief marketing officer at Microsoft.
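For readers wondering what “local inference” looks like in practice, the sketch below uses the generic Hugging Face Transformers path rather than Windows AI Foundry itself. It assumes the openai/gpt-oss-20b checkpoint published on Hugging Face and a machine with enough memory for the quantised weights; treat it as an illustration, not the Foundry integration.

```python
# Illustrative only: run gpt-oss-20b locally via Hugging Face Transformers.
# Assumes `pip install transformers torch` and roughly 16 GB of memory for
# the quantised 20B checkpoint (openai/gpt-oss-20b on Hugging Face).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # open-weight checkpoint; no API key, no cloud call
    torch_dtype="auto",          # use the precision the checkpoint ships with
    device_map="auto",           # place layers on available GPU(s) or fall back to CPU
)

messages = [
    {"role": "user",
     "content": "Explain the difference between open-weight and open-source models."},
]

result = generator(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1])  # last entry is the model's reply
```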

Big Tech Joins the OpenAI Party

In a surprising turn, OpenAI’s latest move is breaking down long-standing walls between tech giants. AWS is now hosting OpenAI’s open-weight models on Amazon Bedrock and SageMaker, marking a first-time collaboration and opening new doors for generative AI development.

Notably, AWS is also a key backer of Anthropic, which, on the same day, released Claude Opus 4.1, showcasing upgrades in coding, reasoning, and agentic task performance.

OpenAI’s gpt-oss models are now live on Google’s Vertex AI, with Databricks also making them available. “This is a REALLY good set of models and now all Databricks users can benefit with governance, monitoring, and customisation of the MosaicAI platform,” said Naveen Rao, VP of generative AI at Databricks.

The models are also available in Snowflake Cortex AI. “We believe in the power of community and open innovation, and we’re excited to bring OpenAI’s latest open source models directly to our customers — all from within Snowflake’s secure boundary,” said Dwarak Rajagopal, vice president and head of AI engineering at Snowflake.

OpenAI’s return to open weights isn’t just a nostalgic move. It is a strategic shift that is winning over the ecosystem, from cloud giants to chipmakers. With the gpt-oss models already running on Apple Silicon, it wouldn’t be surprising if Apple itself joins the party and starts using them.

In a world where even your pocket device could soon house top-tier intelligence, OpenAI is quietly setting the stage for what’s next.

