Memory Market HBM4 and DDR5: What Lies Behind the Sensational Headlines

By Susanne Braun | Translated by AI | 4 min reading time

Data centers not only require vast amounts of water and electricity but also enormous amounts of memory. At the beginning of 2026, two interesting trends are emerging in the market: price explosions for DDR5 and longer waiting times for HBM4. Why?

Memory technologies: A symbolic image. (Image: Dall-E / AI-generated)

The demand for data centers, and for the hardware to equip them for (often AI-driven) workloads, remains consistently high and is putting particular pressure on the memory market at the beginning of 2026. At least two stories from this sector made tech headlines at the start of the year. Here is a brief look at what is actually going on.

Exorbitant Prices for DDR5

DRAM is the main memory in data centers and is used for the ongoing operation of operating systems, applications, virtual machines, and databases. Data center servers are equipped with vast amounts of DRAM to enable many applications to run simultaneously and large amounts of data to be quickly accessible. Unlike specialized solutions such as HBM, DRAM is not designed for maximum speed but for large storage capacities at reasonable costs. It is pluggable (DIMMs), replaceable, and forms the backbone of modern cloud, enterprise, and AI servers.

Recently, headlines emerged that DDR5 has become costly on the Chinese spot market—so much so that a box of 100 high-capacity DDR5 modules currently costs about as much as an apartment in Shanghai. This was reported by editors from Tom's Hardware based on an article from the South China Morning Post. Individual 256 GB server modules are reportedly being traded at prices exceeding 40,000 yuan (~5,700 USD), with some listings even higher.
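As a sanity check on those figures, a quick back-of-envelope calculation can be sketched. The exchange rate of 7 yuan per US dollar is an assumption for illustration; the real rate fluctuates.

```python
# Back-of-envelope check of the reported DDR5 spot-market figures.
YUAN_PER_USD = 7.0              # assumed rough exchange rate

module_price_yuan = 40_000      # reported price of one 256 GB DDR5 server module
modules_per_box = 100           # "a box of 100 high-capacity modules"

box_price_yuan = module_price_yuan * modules_per_box
box_price_usd = box_price_yuan / YUAN_PER_USD

print(f"One module: ~{module_price_yuan / YUAN_PER_USD:,.0f} USD")
print(f"Box of {modules_per_box}: {box_price_yuan:,} yuan (~{box_price_usd:,.0f} USD)")
# One module works out to roughly 5,700 USD, matching the reported figure;
# a full box lands in the half-million-dollar range.
```

At that order of magnitude, the comparison to a Shanghai apartment is less absurd than it first sounds.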

Notably, these prices do not appear to be supported by equally strong demand. Sellers at the Huaqiangbei market in Shenzhen report that many buyers are currently choosing to wait given the high price expectations. Luke James from Tom's Hardware explains why this is particularly significant: “Huaqiangbei plays a special role in China's semiconductor ecosystem. It sits between official contract channels and the gray or secondary market, where prices can quickly fluctuate due to shortages or sudden changes in availability. This makes it a useful early indicator of tensions but also a place where prices can become significantly decoupled from underlying demand. In this case, sellers say that fear of being stuck with costly inventory is freezing activity.”

So what's going on? The global memory industry has shifted from oversupply to shortage. DDR5 RAM is currently so expensive because the memory market as a whole is under pressure: the major suppliers, especially SK hynix and Samsung, are reallocating capacity to server memory and high-bandwidth memory for AI workloads, primarily HBM3e, making production capacity for high-capacity server DDR5 correspondingly scarce. Some of the reasons for this shift also lie behind the HBM4 headlines discussed below.

The DDR5 price spikes among sellers in nervous secondary markets leave room for the unusual comparisons currently making the rounds, such as "a box of memory costs as much as an apartment in Shanghai." They also fuel headlines reporting that companies like Apple, Dell, Amazon, and Google are booking long-term hotel stays near SK hynix and Samsung production facilities in order to secure long-term memory supply contracts.

Design Adjustments for HBM4

HBM (High Bandwidth Memory) is a high-speed specialized memory directly connected to GPUs and AI accelerators in data centers, primarily used for AI training, AI inference, and high-performance computing. It delivers very high data rates with low latency and high energy efficiency, preventing powerful computing chips from being bottlenecked by slow memory. Due to its integration within the chip package, HBM is expensive, capacity-limited, and non-replaceable, yet it is indispensable for modern AI data centers.
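The bandwidth gap the paragraph describes can be made concrete with a rough peak-bandwidth comparison between a single commodity DDR5 DIMM and a single HBM3e stack. The figures used here are nominal spec values (DDR5-4800 on a 64-bit data bus; HBM3e at 9.6 Gbit/s per pin on a 1024-bit interface); real parts vary.

```python
# Rough peak-bandwidth comparison: DDR5 DIMM vs. HBM3e stack.
def peak_bandwidth_gbs(transfers_per_s: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = transfer rate * bus width in bytes."""
    return transfers_per_s * (bus_width_bits / 8) / 1e9

# A single DDR5-4800 DIMM: 4.8 GT/s on a 64-bit data bus.
ddr5_dimm = peak_bandwidth_gbs(4.8e9, 64)

# A single HBM3e stack: ~9.6 Gbit/s per pin on a 1024-bit interface.
hbm3e_stack = peak_bandwidth_gbs(9.6e9, 1024)

print(f"DDR5-4800 DIMM: ~{ddr5_dimm:.1f} GB/s")    # ~38.4 GB/s
print(f"HBM3e stack:    ~{hbm3e_stack:.1f} GB/s")  # ~1228.8 GB/s
```

The roughly 30x difference per device is why accelerators pay the cost and capacity penalty of in-package HBM while bulk capacity stays on pluggable DDR5 DIMMs.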

The current generation is HBM3e, but the major memory manufacturers SK hynix, Samsung, and Micron, which together dominate the market, have been working on HBM4 for some time. The largest customer for it is expected to be Nvidia with its Rubin platform. However, according to research by Trendforce analysts, Nvidia updated the HBM4 specifications for Rubin in the third quarter of 2025: the required speed per pin was raised to over 11 Gbit/s, and this new requirement has led the three major HBM suppliers to adjust their designs. As a result, mass production of HBM4 is expected to fully commence only by the end of the first quarter of 2026.
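Assuming the 2048-bit per-stack interface defined in the JEDEC HBM4 standard (an assumption for this sketch; the article does not state the bus width), the raised pin speed implies roughly the following per-stack bandwidth:

```python
# Back-of-envelope per-stack bandwidth implied by the updated HBM4 spec.
pin_rate_bps = 11.0e9    # > 11 Gbit/s per pin, per the Trendforce report
bus_width_bits = 2048    # assumed: JEDEC HBM4 per-stack interface width

bandwidth_tbs = pin_rate_bps * bus_width_bits / 8 / 1e12
print(f"~{bandwidth_tbs:.2f} TB/s per HBM4 stack")  # ~2.82 TB/s
```

That would more than double the per-stack throughput of today's HBM3e, which helps explain why Nvidia is pushing the spec and why the redesign effort is worth the delay to the suppliers.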

Trendforce notes that "SK hynix, Samsung, and Micron have all resubmitted HBM4 samples and are further refining their designs in response to Nvidia's stricter requirements. Compared to its competitors, Samsung has taken an early lead by introducing a 1cnm process for HBM4 and utilizing advanced in-house foundry technology for the base chip."

The 1cnm process belongs to the sixth generation of the 10-nm class and achieves a line width of approximately 11 to 12 nanometers, making it finer than the previous 1bnm process of the fifth generation with around 12 to 13 nanometers. According to Trendforce, Samsung could achieve a technical lead in the rapid qualification of HBM4 by combining the 1c DRAM process with HBM4 design—potentially translating into a competitive advantage in certain high-end AI segments.

Samsung is therefore expected to be the first supplier to qualify HBM4 for the Rubin platform. Meanwhile, Nvidia has adjusted the supply chain for the Blackwell platform in response to high demand and currently requires HBM3e in particular. Additionally, shipments of H200 chips to China have been permitted again under certain conditions; these AI chips also use HBM3e. This has given the memory producers additional time to adapt their HBM4 designs to the new requirements. (sb)