Meta has officially introduced a major expansion of its in-house silicon efforts, unveiling four new Meta Training and Inference Accelerator (MTIA) processors. Announced on March 11, the MTIA 300, 400, 450, and 500 models are designed to power the company's generative AI features and content ranking systems. The two-year roadmap introduces a new chip generation every six months, roughly four times faster than the typical industry cadence of one to two years.
The accelerated rollout of Meta MTIA AI chips comes just weeks after the tech giant signed multibillion-dollar agreements with Nvidia and committed to a massive six-gigawatt power deal with AMD. Facing an unprecedented explosion in inference demand and soaring data center operating costs, Meta is rapidly developing its own custom silicon to gradually reduce its reliance on third-party hardware. To fully support these massive infrastructure upgrades, the company recently raised its 2026 capital spending budget to between $115 billion and $135 billion, effectively doubling its financial outlay from the previous year.
The MTIA Roadmap: Scaling Up for Generative AI
Meta's new hardware strategy pivots heavily toward generative AI operations while continuing to support the recommendation algorithms that drive user engagement across Facebook and Instagram. To enable rapid data center deployment, the MTIA 400, 450, and 500 models will all share the same chassis, server rack, and network infrastructure. This unified design allows Meta to upgrade its computing power without overhauling the surrounding physical hardware.
MTIA 300: Production Ready Base
The MTIA 300 is currently in production and serves as the baseline for the new lineup. Operating with an 800-watt thermal design power, the processor is optimized for training the recommendation and ranking models that dominate Meta's daily workloads. It features 216 gigabytes of High Bandwidth Memory capacity and delivers memory bandwidth of 6.1 terabytes per second.
MTIA 400: Active Data Center Deployment
Moving beyond recommendation systems, the MTIA 400 represents a major architectural upgrade focused on broader generative AI applications. Meta has completed internal testing and is currently deploying these processors into its active data centers, with 72 chips per server rack.
Operating at 1,200 watts, the MTIA 400 features 288 gigabytes of memory capacity and increases memory bandwidth to 9.2 terabytes per second, a 51% improvement over its predecessor. It also delivers 400% higher FP8 FLOPS than the 300 model and reaches 12 PFLOPS of MX4 performance. The chip combines two compute chiplets to maximize computing density and supports the new low-precision data formats essential for efficient AI inference.
MTIA 450: Optimizing Inference Bottlenecks
Driven by surging industry demand for generative AI inference, the upcoming MTIA 450 doubles the memory bandwidth of the 400 series to 18.4 terabytes per second. Running at a higher 1,400 watts, it retains the 288-gigabyte memory capacity but pushes raw computing performance to 21 PFLOPS.
Meta engineered this chip to eliminate the processing bottlenecks that slow down network operations. It provides a 75% boost in MX4 FLOPS to handle complex mixture-of-experts computations, and it includes dedicated hardware acceleration for common data operations such as FlashAttention and Softmax.
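Softmax, one of the operations the article says the MTIA 450 accelerates in hardware, is the function that turns raw attention or classification scores into a probability distribution. As a point of reference for why it is worth dedicated silicon (it sits in the inner loop of every attention layer), here is a minimal, numerically stable sketch in plain Python; the function name and example scores are illustrative, not taken from Meta's software stack:

```python
import math

def softmax(scores):
    """Numerically stable softmax: subtract the max score before
    exponentiating so large inputs cannot overflow exp()."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative input: three raw attention scores.
probs = softmax([2.0, 1.0, 0.1])
# The outputs sum to 1, and larger scores receive larger weights.
```

Fused kernels such as FlashAttention exist largely to avoid materializing these intermediate exponentials in slow memory, which is why pairing the two operations with hardware support targets the same bandwidth bottleneck the chip's doubled memory bandwidth addresses.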
MTIA 500: The Future of Mass Deployment
Scheduled for mass deployment in 2027, the MTIA 500 is the most advanced processor on the current roadmap. It features a 1,700-watt design and increases memory bandwidth by another 50%, to 27.6 terabytes per second. Memory capacity jumps significantly as well, to between 384 and 512 gigabytes.
The MTIA 500 delivers 30 PFLOPS of performance and introduces a more advanced physical architecture: a two-by-two configuration of smaller compute chiplets surrounded by multiple memory stacks and two network chiplets. It also integrates a dedicated System-on-Chip for high-speed connectivity back to host server processors.
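The generation-over-generation claims above (a 51% bandwidth gain for the 400, a doubling for the 450, and another 50% for the 500) can be checked directly from the quoted figures. The short Python sketch below tabulates the headline specs as reported in this article; the lower bound of the MTIA 500's 384-512 GB capacity range is used, and the table layout itself is ours, not Meta's:

```python
# Headline specs as quoted in the article:
# (model, TDP in watts, HBM capacity in GB, memory bandwidth in TB/s).
roadmap = [
    ("MTIA 300",  800, 216,  6.1),
    ("MTIA 400", 1200, 288,  9.2),
    ("MTIA 450", 1400, 288, 18.4),
    ("MTIA 500", 1700, 384, 27.6),  # capacity quoted as 384-512 GB
]

# Compute each generation's bandwidth gain over its predecessor.
gains = []
for (_, _, _, prev_bw), (name, _, _, bw) in zip(roadmap, roadmap[1:]):
    pct = (bw / prev_bw - 1) * 100
    gains.append((name, round(pct)))
    print(f"{name}: {bw} TB/s ({pct:+.0f}% vs previous generation)")
```

Running this reproduces the article's percentages: +51% for the 400, +100% for the 450, and +50% for the 500, confirming the quoted bandwidth figures are internally consistent.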
Shifting the AI Hardware Landscape
This fast-paced development cycle highlights the intense pressure technology companies face to secure sufficient computing power. "It is unusual for a silicon company team to build a new chip every six months. It is a very quick cadence," noted Meta representative Song, explaining that the rapid pace is necessary to continuously expand infrastructure capacity. "At any given time, we want to have the state-of-the-art to deploy."
Meta's aggressive push into custom silicon reflects a broader industry trend among major cloud providers looking to control their own supply chains. The deployment of these processors will also ripple through the hardware supply chain, with reports indicating that each new MTIA chip will utilize roughly 23 Baseboard Management Controller units. According to market analysts, custom application-specific integrated circuits like the MTIA series are expected to account for 27.8% of the global AI server market by 2026. As North American tech giants like Meta and Google expand their proprietary hardware efforts, the competition for data center dominance is moving in-house.
