Samsung Electronics announced on February 12, 2026, that it has begun mass production of HBM4, the fourth generation of High Bandwidth Memory, and has already delivered commercial shipments to customers. The South Korean tech giant claims to be the first in the industry to reach this milestone, a significant turnaround for a company that had previously struggled to keep pace with rivals in supplying earlier generations of high-bandwidth memory for AI applications.
Breaking Speed Records for AI
Samsung’s HBM4 delivers a sustained data rate of 11.7 gigabits per second (Gbps) per pin, roughly 46% faster than the 8Gbps JEDEC industry standard for HBM4 and already surpassing the 9.6Gbps peak of its predecessor, HBM3E. Samsung says the rate can be pushed even further, to 13Gbps, under certain conditions. Total memory bandwidth per stack reaches 3.3 terabytes per second (TB/s), a 2.7-fold improvement over HBM3E.
Context adds real weight to these numbers. When JEDEC — the governing body for computer memory standards — formalized HBM4, it deliberately set a lower per-pin speed than HBM3E while doubling interface pins from 1,024 to 2,048, a tradeoff meant to improve power efficiency and thermal control. Samsung has far exceeded that standard while also delivering a 40% improvement in energy efficiency, 10% lower thermal resistance, and 30% better heat dissipation compared to HBM3E.
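These figures are straightforward to sanity-check: peak stack bandwidth is the interface width multiplied by the per-pin data rate, divided by eight bits per byte. The minimal Python sketch below works through the numbers quoted above. One inference is ours rather than Samsung’s: the 3.3TB/s headline lines up with the boosted 13Gbps rate, since 11.7Gbps across 2,048 pins works out to roughly 3.0TB/s.

```python
# Sanity-check the bandwidth figures quoted above.
# Peak bandwidth (GB/s) = interface pins x per-pin rate (Gbps) / 8 bits per byte

def stack_bandwidth_gbs(pins: int, gbps_per_pin: float) -> float:
    """Peak bandwidth of one HBM stack in gigabytes per second."""
    return pins * gbps_per_pin / 8

hbm3e      = stack_bandwidth_gbs(1024, 9.6)   # ~1228.8 GB/s (~1.2 TB/s)
jedec_hbm4 = stack_bandwidth_gbs(2048, 8.0)   # ~2048.0 GB/s, baseline spec
samsung_lo = stack_bandwidth_gbs(2048, 11.7)  # ~2995.2 GB/s (~3.0 TB/s)
samsung_hi = stack_bandwidth_gbs(2048, 13.0)  # ~3328.0 GB/s (~3.3 TB/s)

print(f"HBM3E:             {hbm3e:7.1f} GB/s")
print(f"JEDEC HBM4 base:   {jedec_hbm4:7.1f} GB/s")
print(f"Samsung @ 11.7:    {samsung_lo:7.1f} GB/s")
print(f"Samsung @ 13.0:    {samsung_hi:7.1f} GB/s")
print(f"Ratio vs HBM3E:    {samsung_hi / hbm3e:.2f}x")  # ~2.71x
```

The same arithmetic shows why JEDEC’s tradeoff was defensible in the first place: even at the slower 8Gbps per-pin baseline, doubling the bus to 2,048 pins yields about 2TB/s per stack, roughly two-thirds more than HBM3E.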
Built on Next-Generation Process Technology
The chips are built on Samsung’s 6th-generation 10 nanometer-class DRAM process, internally called “1c,” and incorporate a 4 nanometer logic base die. Samsung says it achieved stable production yields from the very start, without requiring any redesigns, which the company highlights as a key engineering achievement given the product’s complexity.
“Instead of taking the conventional path of utilizing existing proven designs, Samsung took the leap and adopted the most advanced nodes like the 1c DRAM and 4nm logic process for HBM4,” said Sang Joon Hwang, Executive Vice President and Head of Memory Development at Samsung Electronics. “By leveraging our process competitiveness and design optimization, we are able to secure substantial performance headroom, enabling us to satisfy our customers’ escalating demands for higher performance, when they need them.”
Current products use 12-layer stacking and are available in capacities from 24 gigabytes (GB) to 36GB. Samsung plans to scale that to 48GB with a future 16-layer design as customer needs grow.
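The capacity points follow directly from the stack arithmetic: capacity is die density times layer count. In the quick sketch below, the per-die densities are our inference from the quoted capacities, not something stated above; 24GB and 36GB across 12 layers imply standard 2GB (16-gigabit) and 3GB (24-gigabit) DRAM dies.

```python
# Stack capacity = DRAM die density x number of stacked layers.
# Die densities inferred from the quoted capacities, not confirmed by Samsung.

def stack_capacity_gb(die_gb: int, layers: int) -> int:
    """Total capacity of one HBM stack in gigabytes."""
    return die_gb * layers

print(stack_capacity_gb(2, 12))  # 24 GB (12-high stack, 2 GB dies)
print(stack_capacity_gb(3, 12))  # 36 GB (12-high stack, 3 GB dies)
print(stack_capacity_gb(3, 16))  # 48 GB (planned 16-high stack, 3 GB dies)
```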
What Samsung’s Roadmap Looks Like
Samsung expects its HBM sales to more than triple in 2026 compared to 2025, and is actively expanding production capacity to match that projected demand. After HBM4’s commercial rollout, the company plans to begin sampling its next-generation HBM4E memory to customers in the second half of 2026. Custom HBM products built to individual customer specifications are expected to reach clients in 2027.
Investor confidence in Samsung’s renewed momentum was visible immediately — the company’s shares closed 6.4% higher on the day of the announcement.
Micron Also Joins the Race
Samsung is not alone in reaching this milestone. A day before Samsung’s announcement, Micron CFO Mark Murphy, speaking at an event hosted by Wolfe Research, disclosed that Micron had also begun high-volume HBM4 production and had already shipped units to customers — delivering a full quarter ahead of its earlier forecast.
“Our HBM yield is on track. Our HBM4 yield is on track. Our HBM4 product delivers over 11 gigabits per second speeds, and we’re highly confident in our HBM4 product performance and quality and reliability,” Murphy said. He also confirmed that Micron had pre-sold every HBM4 chip it can produce in 2026. Investors reacted strongly, with Micron’s share price rising by nearly 10% on the news.
With both Samsung and Micron now shipping HBM4, SK Hynix remains the only major memory manufacturer yet to make a formal production announcement.
The Nvidia Vera Rubin Connection
The broader significance of these developments centers on Nvidia. The GPU giant plans to release its Vera Rubin AI accelerators in the second quarter of 2026 and has confirmed that the platform will use HBM4 memory, with Samsung among the expected suppliers. Samsung’s ability to ramp up mass production is therefore a critical enabler for Nvidia’s next-generation AI hardware rollout.
There is a wider market consequence, however. As leading chipmakers shift production capacity toward high-margin HBM products for AI workloads, prices for conventional DRAM are climbing — a side effect that consumers and businesses outside the AI industry are already beginning to feel.
