Alphabet’s Google is in talks with Marvell Technology to develop two new AI chips designed to run artificial intelligence models more efficiently, according to reports published on April 19. The reported effort centers on a memory processing unit that would work alongside Google’s tensor processing units, or TPUs, and a separate TPU built specifically for running AI models. The discussions point to a possible expansion of Google’s custom chip strategy as the company pushes harder into AI infrastructure.
The reported plan is focused on inference, the stage when trained AI models are used to answer requests and power services at scale rather than being trained from scratch. One report said the two chips are meant to reduce latency, improve efficiency, and lower costs for large AI workloads, especially in data centers. Another report said Google and Marvell aim to finalize the design of the memory processing unit as soon as next year before sending it for test production.
Two chips, two jobs
The first chip under discussion is a memory processing unit designed to work alongside Google’s existing TPUs. Reports describe the chip as targeting memory-heavy work and data movement, which can become a major bottleneck when large AI models run. In one account, the chip is intended to pair with existing TPUs rather than replace them.
The second chip is described as a new TPU optimized for inference. Reports say this chip would be designed for running AI models after training, not for building the models themselves. That makes it different from hardware aimed mainly at training workloads, and it shows Google’s interest in tailoring chips to specific parts of the AI pipeline.
One report said the split between a memory-focused chip and an inference-focused TPU would let Google target different problems in AI computing with separate pieces of hardware. Another report said the two-chip approach is intended to improve how efficiently AI models run inside data centers. Taken together, the reports present the project as a push to improve performance while also managing the cost of serving AI at scale.
Google’s TPU strategy
Google has been building its own AI chips, known as TPUs, since 2015. The company has been pushing to make TPUs a viable alternative to Nvidia’s dominant GPUs. That effort has become more important as demand for AI hardware rises and competition over chips intensifies.
Reports also say TPUs have become more than an internal Google tool. They are now a key part of Google’s cloud offering, where customers can rent access to the hardware to run AI workloads. Reuters said TPU sales have become a key driver of growth in Google’s cloud revenue as the company works to show investors that its AI spending is producing returns.
That business angle helps explain why Google would look for more specialized chip designs. Republic World said custom chips give Google more control over performance, cost, and how the hardware fits with its own services. The same report said inference is where much of real-world AI use happens, including applications such as chatbots, search, and recommendation systems.
Why Marvell matters
Marvell’s possible involvement would add another name to Google’s chip design ecosystem. One report said Google has worked closely with Broadcom on TPU development, and bringing Marvell into the picture would suggest a wider supplier and design strategy. LetsDataScience also reported that Marvell would become a third design partner alongside Broadcom and MediaTek if the talks lead to an agreement.
Republic World described Marvell as a company that specializes in custom silicon and data center hardware. That profile fits the kind of highly specialized chips discussed in the reports. At the same time, one source said the discussions have not yet produced a signed contract, indicating the project remains at the talks stage rather than a finalized partnership.
What comes next
The clearest timeline mentioned in the reports concerns the memory processing unit. Reuters said the companies aim to complete that design as soon as next year and then hand it off for test production. No comparable public timeline was detailed for the inference-focused TPU in the reports reviewed here.
The companies have not confirmed the reported talks. Reuters said it could not immediately verify the report, and neither Google nor Marvell immediately responded to requests for comment. Even so, the reported discussions have drawn notice because they suggest Google is still broadening its AI chip ambitions as it tries to strengthen both its internal systems and its cloud business.
