Samsung’s New HBM2E Boasts Up to 16GB Per Stack
Ron Perillo / 1 year ago
Doubles HBM2 Capacity and Has Higher Bandwidth
Samsung used the recent GTC 2019 to unveil HBM2E, an update to the High Bandwidth Memory standard. It improves upon HBM2’s bandwidth-per-pin by approximately 33%, raising it to 3.2Gbps. Furthermore, it doubles the maximum capacity per die, bringing the total per stack from 8GB to 16GB.
The new package, dubbed “Flashbolt”, keeps the same number of dies per stack (8), but the increased per-pin speed brings total bandwidth to 410GB/s per stack, up from the 307.2GB/s of the previous “Aquabolt” package.
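The quoted per-stack figures follow directly from the per-pin rates. As a minimal sketch, assuming the standard 1024-bit interface of an HBM2 stack (not stated in the article, but part of the HBM2 specification):

```python
def stack_bandwidth_gbs(gbps_per_pin: float, pins: int = 1024) -> float:
    """Peak per-stack bandwidth in GB/s: pins x (Gb/s per pin) / 8 bits per byte."""
    return pins * gbps_per_pin / 8

# Aquabolt at 2.4Gbps per pin -> 307.2 GB/s per stack
print(stack_bandwidth_gbs(2.4))
# Flashbolt at 3.2Gbps per pin -> 409.6 GB/s, the ~410GB/s figure quoted
print(stack_bandwidth_gbs(3.2))
```

The roughly 33% bandwidth gain is simply the per-pin jump from 2.4Gbps to 3.2Gbps; the interface width is unchanged.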
“Flashbolt’s industry-leading performance will enable enhanced solutions for next-generation data centers, artificial intelligence, machine learning, and graphics applications,” said Jinman Han, senior vice president of Memory Product Planning and Application Engineering Team at Samsung Electronics.
When Can We See Graphics Cards Using These?
It will likely take a while, considering the new RTX graphics cards only just launched. That said, Samsung announcing this at NVIDIA’s event suggests a deal to use the memory in NVIDIA graphics cards. AMD was the first to use HBM on a desktop gaming card with its Fury line, while NVIDIA has reserved HBM for its Titan V and Tesla GPUs.
More than likely, HBM2E will see enterprise applications before desktop GPUs. The massive increase in bandwidth, for example, is something data centers and machine learning workloads can benefit from. There is also the added cost of incorporating these stacks compared to GDDR6, especially since they require an interposer due to the large number of pins.
Samsung has not yet revealed when volume production of HBM2E will begin.