Google has launched Deep Think, a new AI reasoning feature, in the Gemini app for Google AI Ultra subscribers. The model, based on Gemini 2.5, is designed to handle complex reasoning tasks and is now available for broader use after months of internal testing and research improvements.
Subscribers can enable Deep Think in the Gemini app by selecting Gemini 2.5 Pro and toggling the “Deep Think” option. The feature comes with a fixed number of prompts per day and works with tools like code execution and Google Search.
Google also plans to expand access through the Gemini API, allowing developers and enterprise testers to explore its applications more broadly.
Alongside the public release, Google is giving select mathematicians access to the version of Deep Think that performed at a gold-medal level in the 2025 International Mathematical Olympiad (IMO). The current version in the Gemini app offers faster performance and has achieved bronze-level results in internal evaluations.
“We’re also sharing the official version of the Gemini 2.5 Deep Think model that achieved the gold-medal standard with a small group of mathematicians and academics,” Google said. “We look forward to hearing how it could enhance their research and inquiry.”
Deep Think extends Gemini’s reasoning capabilities by using a technique called parallel thinking, where the model generates and evaluates multiple solution paths simultaneously. “By extending the inference time, or ‘thinking time,’ we give Gemini more time to explore different hypotheses, and arrive at creative solutions to complex problems,” the tech giant said.
The company explained that just as people tackle complex problems by taking the time to explore different angles, weigh potential solutions, and refine a final answer, Deep Think pushes the frontier of thinking capabilities by using parallel thinking techniques.
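The idea behind parallel thinking can be illustrated at a very small scale. The sketch below is purely a hypothetical analogy, not Google's implementation: each "reasoning path" is an independent stochastic hill climb on a toy optimization problem, several paths run concurrently, and the best-scoring result is kept — mirroring the generate-multiple-hypotheses-then-select pattern the article describes.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def f(x: float) -> float:
    # Toy objective: minimum value 1.0 at x = 3.0.
    return (x - 3.0) ** 2 + 1.0

def reasoning_path(seed: int, steps: int = 500) -> tuple[float, float]:
    # One independent "line of thought": start from a random hypothesis
    # and refine it step by step, keeping only improvements.
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    for _ in range(steps):
        candidate = x + rng.gauss(0, 0.5)
        if f(candidate) < f(x):
            x = candidate
    return x, f(x)

def parallel_think(n_paths: int = 8) -> tuple[float, float]:
    # Explore several solution paths simultaneously, then select the
    # best one -- the core of the parallel-thinking analogy.
    with ThreadPoolExecutor(max_workers=n_paths) as pool:
        results = list(pool.map(reasoning_path, range(n_paths)))
    return min(results, key=lambda r: r[1])

best_x, best_score = parallel_think()
```

Running more paths (or giving each path more steps, the analogue of extending "thinking time") increases the chance that at least one path lands on a strong solution.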
Deep Think has shown strong performance in areas such as scientific research, mathematical problem-solving, iterative design, and advanced programming. According to internal benchmarks, it leads on LiveCodeBench V6 and Humanity’s Last Exam, outperforming models such as Gemini 2.5 Pro, OpenAI o3, and Grok 4 in reasoning, code, and math categories.
To ensure safe deployment, Google has introduced safety checks and mitigation measures throughout the development lifecycle. The model card for Gemini 2.5 Deep Think includes further details on safety outcomes, including the model’s tendency to cautiously refuse some benign queries.
The post Google Launches Gemini 2.5 Deep Think, Outperforms Grok-4 & OpenAI o3 appeared first on Analytics India Magazine.