SK hynix has introduced the first-ever 16-Hi HBM4 memory stack, a significant milestone for the semiconductor industry. The development is particularly notable because these stacks are destined for future AI accelerators and support a per-pin data rate of 10 GT/s, roughly 25% above the JEDEC specification's 8 GT/s baseline, delivered over the standard's 2048-bit interface.
The 16-Hi HBM4 stack offers 48 GB of capacity and, at a per-pin data rate of 11.7 Gbps, a peak bandwidth of roughly 2.9 TB/s per stack. These figures point to a substantial performance boost for AI workloads, potentially accelerating progress in machine learning and large-scale data processing.
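The quoted figures can be cross-checked with simple arithmetic. The sketch below assumes the JEDEC base rate of 8 GT/s that the 25% uplift implies, and the 2048-bit per-stack interface stated above; the ~2.9 TB/s figure follows from the 11.7 Gbps per-pin rate when GB/s are converted to binary terabytes.

```python
# Sanity-check the quoted HBM4 numbers (illustrative only; the 8 GT/s
# JEDEC baseline is inferred from the article's 25% figure).
BUS_WIDTH_BITS = 2048  # HBM4 interface width per stack

def stack_bandwidth_gbs(data_rate_gbps: float, bus_bits: int = BUS_WIDTH_BITS) -> float:
    """Peak per-stack bandwidth in GB/s for a given per-pin rate in Gb/s."""
    return data_rate_gbps * bus_bits / 8  # bits -> bytes

jedec_base_gtps = 8.0
hynix_rate_gtps = 10.0
uplift_pct = (hynix_rate_gtps / jedec_base_gtps - 1) * 100
print(f"Uplift over JEDEC baseline: {uplift_pct:.0f}%")  # 25%

bw = stack_bandwidth_gbs(11.7)
print(f"Bandwidth at 11.7 Gb/s/pin: {bw:.1f} GB/s (~{bw / 1024:.1f} TB/s)")
```

Running this gives 2995.2 GB/s, or about 2.9 TB/s when divided by 1024, matching the per-stack figure cited above.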

HBM4 packages retain roughly the same footprint as HBM3, about 10.5 × 12.0 mm, but the permitted stack height grows to 950 µm for 16-Hi devices, up from 750 µm for 12-Hi HBM3. The extra headroom is critical for meeting the growing capacity and computational demands of modern AI systems.
The technical advance rests on DRAM dies manufactured on the 1b-nm process, the fifth generation of 10 nm-class technology. The newer node not only improves performance but also contributes to energy efficiency, a crucial factor in sustainable tech development.
Semiconductor analysts suggest that SK hynix's move could shift market dynamics by setting a new bar for memory technology, likely pushing competing manufacturers to accelerate their own HBM4 roadmaps.