Apple is investing heavily in the future of artificial intelligence, with a highly anticipated Gemini-powered Siri upgrade scheduled to arrive early this year. In tandem with this major software shift, Google has officially launched a native Gemini desktop application for the Mac. These parallel developments highlight a deepening collaboration between the two companies, one that stands to change how Apple users interact with their mobile and desktop devices.
The February Rollout for iOS
According to industry reports, Apple plans to introduce its revamped voice assistant in the second half of February. This initial Gemini-powered Siri update will roll out as part of the iOS 26.4 beta testing phase. A broader public release is expected to follow in March or early April.
Internally referred to as Apple Foundation Models version 10, the updated system operates on approximately 1.2 trillion parameters. While this initial February launch will not introduce a fully conversational chatbot, it promises significant functional improvements. Users can expect the assistant to possess enhanced screen awareness and a deeper understanding of personal context. It will also gain the ability to execute multi-step tasks across different applications, such as summarizing messages or pulling specific details from emails, without requiring constant follow-up prompts.
The Full Chatbot Experience in June
The most substantial leap for Apple’s voice assistant is slated for the summer. At the Worldwide Developers Conference beginning on June 8, 2026, Apple is expected to reveal a fully reimagined, chatbot-style version of the assistant alongside iOS 27, iPadOS 27, and macOS 27.
Codenamed Campos, this advanced software architecture relies on Apple Foundation Models version 11 and is designed to handle sustained, back-and-forth conversations. Some of these capabilities may be processed on Google's cloud infrastructure to ensure rapid and accurate responses.
Strategic Shifts and Model Flexibility
Apple’s decision to rely on external intelligence marks a distinct pivot in its corporate strategy. Following performance hurdles and delays with its internal artificial intelligence models, Apple secured a multiyear agreement with Google, reportedly valued at approximately one billion dollars annually.
Notably, the new Siri architecture is designed to be completely model-agnostic, meaning Apple retains the flexibility to swap out Google's intelligence for another provider in the future. By treating artificial intelligence models as interchangeable commodities, Apple avoids the massive capital expenditures required to build and maintain vast data centers, focusing its resources instead on user experience, privacy controls, and software distribution.
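To illustrate the idea behind a model-agnostic design, here is a minimal sketch in Python. None of these class or method names come from Apple's actual software; they are purely hypothetical, showing how an assistant can depend on a fixed interface so the backing model can be swapped without touching call sites.

```python
class ModelProvider:
    """Abstract interface the assistant depends on (illustrative only)."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError

class GeminiBackend(ModelProvider):
    """Stand-in for a cloud-hosted model; a real client would call an API."""
    def complete(self, prompt: str) -> str:
        return f"[gemini] {prompt}"

class OtherBackend(ModelProvider):
    """Stand-in for any alternative provider or an in-house model."""
    def complete(self, prompt: str) -> str:
        return f"[other] {prompt}"

class Assistant:
    """Knows nothing about any specific model, only the interface."""
    def __init__(self, provider: ModelProvider):
        self.provider = provider  # swappable at any time

    def answer(self, prompt: str) -> str:
        return self.provider.complete(prompt)

assistant = Assistant(GeminiBackend())
print(assistant.answer("summarize my messages"))

# Swapping providers requires no changes anywhere else in the code.
assistant.provider = OtherBackend()
print(assistant.answer("summarize my messages"))
```

The design choice is the point: because `Assistant` depends only on the abstract interface, the concrete model behind it becomes a replaceable commodity, which is the flexibility the reported architecture is said to preserve.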
Internal Restructuring and AI Bootcamps
Behind the scenes, Apple is working intensely to meet its upcoming software deadlines. To accelerate development, the company is sending just under two hundred software engineers from the voice assistant team to a multiweek coding bootcamp. This training program focuses on utilizing artificial intelligence-assisted coding tools like Anthropic’s Claude Code and OpenAI’s Codex.
While this large group undergoes training, a core team of sixty engineers remains focused on primary development, and another sixty are dedicated to evaluating performance and safety standards. This internal push follows significant leadership changes. Former artificial intelligence chief John Giannandrea recently departed the company, leaving software engineering head Craig Federighi to oversee the AI initiatives. Meanwhile, Mike Rockwell has taken charge of the core development team for the voice assistant.
Google’s Native Mac Application
As Apple overhauls its mobile ecosystem, Google is simultaneously enhancing the desktop environment. Google recently introduced a dedicated Gemini application for Apple computers running macOS 15 or later. Available to download directly from the web, the native program integrates smoothly into daily workflows.
Users can summon the assistant instantly with an “Option + Space” keyboard shortcut, letting them get help without leaving their active windows. A standout feature of the desktop application is its screen-sharing capability: users can let the assistant view their active screen or local files, giving it useful context for summarizing complex datasets or spotting visual patterns. The application also supports creative tasks, offering image generation and video creation, though generating numerous videos requires a premium subscription plan.
