
Google DeepMind has released Gemini 2.5 Deep Think. The company says it is its most advanced reasoning model, able to answer questions by exploring and considering multiple ideas simultaneously, then using those outputs to choose the best answer.
Subscribers to Google's $250-a-month AI Ultra plan will be able to use Gemini 2.5 Deep Think in the Gemini app starting Friday.
First unveiled in May at Google I/O 2025, Gemini 2.5 Deep Think is the first publicly available multi-agent model from Google. These systems spin up multiple agents to tackle a question in parallel, a process that uses far more computing resources than a single agent but tends to produce better answers.
Google used a variation of Gemini 2.5 Deep Think to achieve a gold medal-level score at this year's International Mathematical Olympiad (IMO).
Alongside Gemini 2.5 Deep Think, the company says it is releasing the model it used at the IMO to a select group of mathematicians and academics. That model "takes hours to reason," Google says, rather than seconds or minutes like most consumer-facing AI models. The company hopes the IMO model will aid research efforts, and it aims to gather feedback on how to improve the multi-agent system for academic use.
Google says the Gemini 2.5 Deep Think model is a significant improvement over what it announced at I/O. The company also claims to have developed new reinforcement techniques to encourage Gemini 2.5 Deep Think to make better use of its reasoning paths.
"Deep Think can help people tackle problems that require creativity, strategic planning, and step-by-step improvement," Google said in a blog post shared with TechCrunch.
The company says Gemini 2.5 Deep Think achieves state-of-the-art performance on Humanity's Last Exam (HLE), a challenging test that measures an AI's ability to answer thousands of crowdsourced questions across math, the humanities, and science. Google claims its model scored 34.8% on HLE (without tools), compared with xAI's Grok 4, which scored 25.4%, and OpenAI's o3, which scored 20.3%.
Google also says Gemini 2.5 Deep Think outperforms AI models from OpenAI, xAI, and Anthropic on LiveCodeBench 6, a challenging benchmark of competitive coding tasks. Google's model scored 87.6%, while Grok 4 scored 79% and OpenAI's o3 scored 72%.

Gemini 2.5 Deep Think works automatically with tools such as code execution and Google Search, and the company says it can produce "much longer responses" than traditional AI models.
In Google's testing, the model produced more detailed and aesthetically pleasing results on web development tasks compared with other AI models. The company claims the model can assist researchers and may accelerate the pace of discovery.

Many leading AI labs appear to be converging on a multi-agent approach.
Elon Musk's xAI recently released its own multi-agent system, Grok 4 Heavy, which it says achieved industry-leading performance on several benchmarks. OpenAI researcher Noam Brown said on a podcast that the company's unreleased AI model that achieved a gold medal at this year's International Mathematical Olympiad was also a multi-agent system. Meanwhile, Anthropic's Research agent, which generates comprehensive research summaries, is likewise powered by a multi-agent system.
Despite their strong performance, multi-agent systems appear to be more expensive to serve than traditional AI models. That means tech companies may keep these systems gated behind their most expensive subscription plans, as xAI has chosen to do and now Google as well.
In the coming weeks, Google says it plans to share Gemini 2.5 Deep Think with a select group of testers via the Gemini API. The company says it wants to better understand how developers and enterprises can use its multi-agent system.