The Nvidia Thinking Machines deal marks a significant financial investment in the artificial intelligence startup founded by former OpenAI Chief Technology Officer Mira Murati. The two technology companies officially announced a multi-year strategic partnership on March 10, 2026. As part of this comprehensive agreement, Thinking Machines will deploy a minimum of one gigawatt of Nvidia’s next-generation computing hardware to train its models and run its customizable AI products. The startup plans to begin utilizing these advanced resources early next year.
While the exact size of the cash infusion was not disclosed, the Nvidia Thinking Machines deal involves infrastructure at a massive scale. The chip supply arrangement will require the startup to purchase billions of dollars’ worth of hardware. To provide context for this deployment, Nvidia Chief Executive Officer Jensen Huang previously estimated that building a single gigawatt of AI data center capacity costs approximately $50 billion. Within that budget, graphics processing units typically account for roughly two-thirds of the total expense.
A Gigawatt Computing Infrastructure
To reach this gigawatt threshold, Thinking Machines will utilize Nvidia’s forthcoming Vera Rubin systems. The hardware deployment will feature the latest-generation Rubin graphics processing units, including the Rubin CPX accelerator, which is optimized for inference workloads. The startup will also deploy the standard Rubin chip, which supports a broader range of use cases and is built with 336 billion transistors.
Furthermore, the infrastructure will incorporate Nvidia’s Vera central processing units. Each Vera processor contains 88 cores utilizing the Armv9.2 instruction set and has the capacity to run 176 threads simultaneously.
Deep Technical Collaboration
Beyond the massive hardware procurement, the partnership encompasses deep technical collaboration between the two companies. Thinking Machines and Nvidia plan to co-develop training and serving systems specifically tailored for Nvidia architectures. This involves optimizing the startup’s software offerings to run efficiently on Nvidia’s processors.
Serving systems typically manage the software stack that distributes inference-related calculations across multiple graphics processing units. Nvidia currently maintains its own open-source serving system known as Dynamo, highlighting the company’s existing expertise in this area.
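The core job a serving system performs, distributing incoming inference requests across a pool of GPUs, can be illustrated with a toy sketch. This is a minimal, hypothetical example written for this article, not how Dynamo or Thinking Machines' software actually works; real serving systems also handle batching, KV-cache placement, and failover.

```python
from itertools import cycle

class RoundRobinServer:
    """Toy sketch of the dispatch step in a serving system:
    assign each incoming inference request to the next GPU in turn.
    (Illustrative only; not based on any real serving stack.)
    """
    def __init__(self, num_gpus):
        # Endless iterator over GPU indices 0, 1, ..., num_gpus - 1.
        self._gpus = cycle(range(num_gpus))

    def dispatch(self, request):
        gpu = next(self._gpus)
        # A real system would enqueue the request on that GPU's worker
        # process; here we just record the assignment.
        return gpu

server = RoundRobinServer(num_gpus=4)
assignments = [server.dispatch(f"req{i}") for i in range(8)]
print(assignments)  # [0, 1, 2, 3, 0, 1, 2, 3]
```

Even this simplified version shows why serving software matters: the scheduling policy, not the individual chip, determines how evenly a fleet of accelerators is utilized.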
Leadership from both organizations emphasized the strategic importance of this alliance. “Nvidia’s technology serves as the cornerstone for the entire industry,” Murati stated in the official announcement. She noted that the partnership directly accelerates her team’s capacity to build artificial intelligence that individuals can customize and make their own, ultimately expanding human potential.
Huang echoed this enthusiasm, describing artificial intelligence as the most powerful tool for knowledge discovery in human history. He added, “Thinking Machines has assembled an exceptional team to push the boundaries of AI.”
Rapid Growth and Financial Backing
The multi-year agreement provides crucial infrastructure for a startup that has grown rapidly since Murati founded it in February 2025. Over the past year, Thinking Machines has expanded its workforce from approximately 30 employees to roughly 120 staff members. According to a person familiar with the company, the startup is currently attracting more new hires from leading rival AI laboratories than it is losing to competitors, despite experiencing some notable staff exits since its establishment.
Prior to this latest deal, Thinking Machines had already secured $2 billion in seed funding, achieving a substantial valuation of $12 billion. That initial investment consortium included participation from prominent venture capital firms Andreessen Horowitz and Accel. It also featured backing from major technology corporations, including ServiceNow Inc., Nvidia, and, notably, Advanced Micro Devices Inc., Nvidia’s primary competitor in the chip market. It remains unclear whether this newest financial injection from Nvidia has altered the startup’s $12 billion valuation.
Customizable AI and Future Plans
The company’s core mission centers on creating AI systems that are widely understood, generally capable, and highly customizable. This focus on adaptability is intended to distinguish Thinking Machines from competitors like OpenAI and Anthropic, which primarily sell relatively fixed models. The startup has already introduced one cloud service, called Tinker, which allows developers to create fine-tuned versions of open-source large language models, including Meta Platforms Inc.’s Llama series.
Tinker utilizes LoRA, or low-rank adaptation, which attaches a small set of additional trainable weights to an existing open-source language model while leaving the original model’s weights frozen. Because only the small adapter is trained, this method significantly reduces training costs for customers.
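The LoRA idea described above can be sketched in a few lines of NumPy. This is a generic illustration of the technique, not Tinker's implementation; the layer sizes, rank, and scaling choices below are assumptions made for the example.

```python
import numpy as np

class LoRALinear:
    """Minimal sketch of a LoRA-adapted linear layer.

    The frozen base weight matrix W is never modified; only the
    small low-rank factors A and B are trained, adding just
    r * (d_in + d_out) parameters instead of d_in * d_out.
    """
    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                     # frozen base weights, shape (d_out, d_in)
        d_out, d_in = W.shape
        self.A = rng.normal(0.0, 0.01, size=(r, d_in)) # trainable down-projection
        self.B = np.zeros((d_out, r))                  # trainable up-projection, init to zero
        self.scale = alpha / r

    def forward(self, x):
        # Base output plus the low-rank correction (B @ A) @ x.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

# With B initialized to zero, the adapted layer starts out exactly
# matching the frozen base model -- training then only moves the adapter.
W = np.eye(3)
layer = LoRALinear(W)
x = np.array([1.0, 2.0, 3.0])
print(np.allclose(layer.forward(x), W @ x))  # True
```

The key cost property follows directly from the shapes: for a realistic layer, the adapter's `r * (d_in + d_out)` trainable parameters are a tiny fraction of the base layer's `d_in * d_out`, which is why fine-tuning this way is far cheaper than updating the full model.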
Securing a gigawatt of computing power positions Thinking Machines to compete at the cutting edge of the industry. The massive deal reflects a broader trend in the artificial intelligence sector, where frontier labs are racing to secure infrastructure before next-generation hardware even exists.
For Nvidia, the partnership generates substantial revenue from chip sales while securing a long-term strategic relationship. Meanwhile, the massive compute commitment suggests Murati intends to keep her organization independent, having reportedly turned down an acquisition offer from Meta Chief Executive Officer Mark Zuckerberg last year. Recent job postings indicate the startup is now utilizing its resources to develop custom implementations of AI building blocks, alongside new models optimized for audio processing and visual reasoning.
