Google has released Gemini 3.1 Pro in preview, positioning it as an upgraded “core intelligence” model designed for complex problem-solving and advanced reasoning. The rollout spans consumer products like the Gemini app and NotebookLM, as well as developer and enterprise platforms including the Gemini API, Google AI Studio, Vertex AI, and Gemini Enterprise.
In its announcement, Google described Gemini 3.1 Pro as a step forward in “core reasoning” within the Gemini 3 series. The company said the model is meant for situations where a quick response is not enough—tasks that require deeper thinking, step-by-step planning, or synthesizing information into a clear output.
Google also highlighted benchmark results to support its claims of improved performance. On ARC-AGI-2, a benchmark designed to measure the ability to solve previously unseen logic puzzles, Google reported that Gemini 3.1 Pro achieved a verified score of 77.1%. In the same announcement, Google said that score is “more than double” the reasoning performance of Gemini 3 Pro.
Where Google says Gemini 3.1 Pro is rolling out
Google said Gemini 3.1 Pro is shipping across both consumer and professional offerings, with different access paths depending on the user group.
For consumers, Google said Gemini 3.1 Pro is rolling out in the Gemini app and in NotebookLM. Google also stated that, in the Gemini app, Gemini 3.1 Pro is rolling out with higher limits for users on the Google AI Pro and Ultra plans, and that NotebookLM access is exclusive to Pro and Ultra users.
For developers and enterprises, Google said the model is available in preview via the Gemini API in Google AI Studio, and also through Gemini CLI, Android Studio, and Google’s “agentic development platform” called Google Antigravity. Google also said enterprises can access Gemini 3.1 Pro through Vertex AI and Gemini Enterprise.
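For developers trying the preview, access through the Gemini API typically looks like a standard generate-content call with the google-genai Python SDK. The sketch below is illustrative only: the model identifier string is an assumption, since Google's announcement does not state the exact preview id, and the actual value should be checked in Google AI Studio.

```python
# Minimal sketch of calling the preview model through the Gemini API
# with the google-genai Python SDK. The MODEL_ID below is a guess for
# illustration; look up the real preview identifier in Google AI Studio.
import os

MODEL_ID = "gemini-3.1-pro-preview"  # hypothetical id, not confirmed by Google


def ask(prompt: str) -> str:
    """Send a single prompt to the model and return its text reply."""
    # Imported inside the function so the sketch reads fine even
    # without the SDK installed.
    from google import genai

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    response = client.models.generate_content(model=MODEL_ID, contents=prompt)
    return response.text


# Only hit the network when a key is actually configured.
if __name__ == "__main__" and os.environ.get("GEMINI_API_KEY"):
    print(ask("Outline a step-by-step plan to profile a slow web service."))
```

The same model id would be used through Vertex AI and the Gemini CLI, though each surface has its own authentication setup.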
Benchmarks and “agentic” competition
Third-party coverage framed Gemini 3.1 Pro as part of an increasingly competitive AI model landscape, especially for agentic work and multi-step reasoning.
TechCrunch reported that Google shared results from independent benchmarks such as Humanity’s Last Exam, which the publication said showed the new model performing “significantly better” than its previous version. TechCrunch also noted that Brendan Foody, CEO of AI startup Mercor, said in a social media post that Gemini 3.1 Pro was at the top of the “APEX-Agents leaderboard,” a system Mercor uses to measure performance on “real professional tasks.”
Google’s own messaging emphasizes practical uses tied to advanced reasoning. In its product post, Google said the model is designed to help with applications like getting clear visual explanations of complex topics, synthesizing data into a single view, and supporting creative projects.
What Google and DeepMind say the model can do
Google DeepMind’s model card describes Gemini 3.1 Pro as the next iteration in the Gemini 3 series and calls it Google’s “most advanced model for complex tasks” as of the model card’s publication date.
According to the same model card, Gemini 3.1 Pro is a natively multimodal model that can process multiple input types, including text, audio, images, and video. The model card also states that the model supports a context window of up to 1 million tokens, with text output of up to 64,000 tokens.
DeepMind also described intended use cases that include advanced coding, “agentic performance,” long-context or multimodal understanding, and algorithmic development. The model card adds that Gemini 3.1 Pro was evaluated across benchmarks covering areas such as reasoning, multimodal capabilities, agentic tool use, multilingual performance, and long-context performance.
Preview now, broader release later
Google and third-party reporting both characterize Gemini 3.1 Pro as a preview release, with a broader launch planned later.
In Google’s post, the company said it is releasing Gemini 3.1 Pro in preview to validate updates and to continue advancing areas such as “ambitious agentic workflows” before making it generally available “soon.” TechCrunch similarly reported that Google said the model is currently available as a preview and will be generally released soon.
For users watching the rapid pace of model updates, the launch is also notable for the versioning itself. 9to5Google reported that this “.1” increment is a first for Google, contrasting it with previous mid-year updates that used “.5” version bumps.
