Samsung Begins Mass Production Of HBM4 Memory
Samsung has officially announced the start of mass production and shipment of its first HBM4 memory units.
These high-performance memory chips will be used in future AI accelerators and advanced graphics solutions, marking a new step forward in a fast-growing market.
Samsung HBM4 Delivers Higher Speed And Bandwidth
Samsung’s new HBM4 memory not only meets the industry standard but exceeds it. While the JEDEC HBM4 specification calls for 8 Gbps per pin, Samsung’s solution reaches transfer speeds of up to 11.7 Gbps per pin.
This yields a bandwidth of up to 3.3 TB/s per stack, roughly 2.4 times that of the previous generation, HBM3E.
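As a rough sanity check, per-stack bandwidth follows directly from the per-pin rate and the interface width. The sketch below assumes JEDEC's 2048-bit HBM4 interface (double HBM3E's 1024-bit bus) and a typical 9.6 Gbps HBM3E pin rate; neither width nor the HBM3E rate is stated in the article.

```python
def stack_bandwidth_tbs(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Per-stack bandwidth in TB/s: pin rate (Gb/s) x number of pins,
    divided by 8 bits per byte and 1000 GB per TB."""
    return pin_rate_gbps * bus_width_bits / 8 / 1000

# Assumed interface widths: HBM4 = 2048 bits, HBM3E = 1024 bits (JEDEC).
hbm4 = stack_bandwidth_tbs(11.7, 2048)   # Samsung's quoted per-pin rate
hbm3e = stack_bandwidth_tbs(9.6, 1024)   # typical HBM3E rate (assumption)

print(f"HBM4:  {hbm4:.2f} TB/s per stack")   # ~3.0 TB/s
print(f"HBM3E: {hbm3e:.2f} TB/s per stack")  # ~1.2 TB/s
print(f"Ratio: {hbm4 / hbm3e:.1f}x")         # ~2.4x, matching the article
```

Under these assumptions the math lands near 3 TB/s and a 2.4x generational gain; the quoted 3.3 TB/s "up to" figure may reflect a different configuration.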
This speed increase is critical for next-generation AI accelerators such as NVIDIA’s Vera Rubin platform, which require a constant, high-throughput flow of data to run large language models and real-time AI workloads without bottlenecks.
To achieve these results, Samsung is using its 4-nanometer process node for the base logic die, combined with sixth-generation 10nm-class DRAM (1c DRAM). This combination improves both performance and efficiency.
After falling behind competitors like SK Hynix during the HBM3E era, Samsung has accelerated its development timeline with HBM4.
The company has already secured orders from major players such as NVIDIA, AMD, and Google. Samsung expects HBM4 to play a key role in its financial recovery in 2026, with projected HBM memory sales growth of more than 180%.