Data centers not only require vast amounts of water and electricity but also enormous amounts of memory. At the beginning of 2026, two interesting trends are emerging in the market: price explosions for DDR5 and longer waiting times for HBM4. Why?
Memory technologies: a symbolic image. (Image: Dall-E / AI-generated)
Demand for data centers, and for the hardware to equip them for (often AI-driven) workloads, remains consistently high and is evidently putting particular pressure on the memory market at the beginning of 2026. At least two notable news items from this sector made tech headlines at the start of the year. Here is a brief look at what is actually going on.
Exorbitant Prices for DDR5
DRAM is the main memory in data centers and is used for the ongoing operation of operating systems, applications, virtual machines, and databases. Data center servers are equipped with vast amounts of DRAM to enable many applications to run simultaneously and large amounts of data to be quickly accessible. Unlike specialized solutions such as HBM, DRAM is not designed for maximum speed but for large storage capacities at reasonable costs. It is pluggable (DIMMs), replaceable, and forms the backbone of modern cloud, enterprise, and AI servers.
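To make the "large storage capacities" point concrete, here is a minimal sketch of how much pluggable DRAM a typical two-socket server can hold. The slot count and module size are illustrative assumptions, not figures from the article:

```python
# Illustrative only: DIMM slot count and module size are assumed values.
dimm_slots = 32            # e.g. 16 DDR5 slots per CPU socket, two sockets
module_capacity_gb = 256   # one high-capacity DDR5 RDIMM
total_gb = dimm_slots * module_capacity_gb
print(f"Total DRAM: {total_gb} GB ({total_gb / 1024:.0f} TB)")  # 8192 GB (8 TB)
```

Because the modules are pluggable, this capacity can be scaled or replaced without touching the CPUs, which is exactly what HBM (soldered into the chip package) cannot offer.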
Recently, headlines emerged that DDR5 has become costly on the Chinese spot market—so much so that a box of 100 high-capacity DDR5 modules currently costs about as much as an apartment in Shanghai. This was reported by editors from Tom's Hardware based on an article from the South China Morning Post. Individual 256 GB server modules are reportedly being traded at prices exceeding 40,000 yuan (~5,700 USD), with some listings even higher.
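The apartment comparison is easy to sanity-check from the reported per-module figures. The exchange rate of roughly 7 yuan per US dollar is an assumption; the article itself only gives the yuan prices:

```python
modules_per_box = 100
price_per_module_yuan = 40_000   # reported spot price for one 256 GB module
yuan_per_usd = 7.0               # assumed exchange rate

box_price_yuan = modules_per_box * price_per_module_yuan
box_price_usd = box_price_yuan / yuan_per_usd
print(f"Box price: {box_price_yuan:,} yuan (~{box_price_usd:,.0f} USD)")
```

That works out to 4,000,000 yuan, or roughly 570,000 US dollars for a single box, which is indeed in the price range of a Shanghai apartment.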
Notably, these prices do not appear to be supported by equally strong demand. Sellers at the Huaqiangbei market in Shenzhen report that many buyers are currently choosing to wait given the high price expectations. Luke James from Tom's Hardware explains why this is particularly significant: “Huaqiangbei plays a special role in China's semiconductor ecosystem. It sits between official contract channels and the gray or secondary market, where prices can quickly fluctuate due to shortages or sudden changes in availability. This makes it a useful early indicator of tensions but also a place where prices can become significantly decoupled from underlying demand. In this case, sellers say that fear of being stuck with costly inventory is freezing activity.”
So what's going on? The global memory industry has swung from oversupply to shortage, and DDR5 is currently so expensive because the entire memory market is under pressure. The major suppliers, above all SK hynix and Samsung, are reallocating capacity to server memory and to high-bandwidth memory for AI workloads, primarily HBM3e, which leaves correspondingly little production capacity for high-capacity server DDR5. Part of the reason for this shift can be found in the HBM4 developments described below.
These DDR5 price spikes in nervous secondary markets leave room for the unusual comparisons currently making the rounds, such as "a box of memory costs as much as an apartment in Shanghai", as well as for headlines reporting that companies like Apple, Dell, Amazon, and Google are booking long-term hotel stays near SK hynix and Samsung production facilities in order to secure long-term memory supply contracts.
Design Adjustments for HBM4
HBM (High Bandwidth Memory) is a high-speed specialized memory directly connected to GPUs and AI accelerators in data centers, primarily used for AI training, AI inference, and high-performance computing. It delivers very high data rates with low latency and high energy efficiency, preventing powerful computing chips from being bottlenecked by slow memory. Due to its integration within the chip package, HBM is expensive, capacity-limited, and non-replaceable, yet it is indispensable for modern AI data centers.
The current generation is HBM3e, but the major memory manufacturers SK hynix, Samsung, and Micron, which together dominate the market, have been working on HBM4 for some time. Its largest customer is expected to be Nvidia with the Rubin platform. According to research by Trendforce, however, Nvidia updated the HBM4 specifications for Rubin in the third quarter of 2025: the required speed per pin was raised to over 11 Gbit/s, and this new requirement has led all three major HBM suppliers to adjust their designs. As a result, mass production of HBM4 is now expected to fully commence only toward the end of the first quarter of 2026.
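A rough calculation shows why the pin-speed bump matters. HBM4 uses a 2048-bit interface per stack (this width comes from the JEDEC HBM4 specification, not from the article), so at the 11 Gbit/s per-pin rate cited by Trendforce:

```python
interface_width_bits = 2048   # per-stack interface width in the JEDEC HBM4 spec
pin_speed_gbit_s = 11         # raised per-pin data rate cited by Trendforce
bandwidth_gb_s = interface_width_bits * pin_speed_gbit_s / 8  # bits -> bytes
print(f"Peak bandwidth per stack: ~{bandwidth_gb_s / 1000:.1f} TB/s")  # ~2.8 TB/s
```

Close to 3 TB/s from a single stack, several times what a whole bank of DDR5 DIMMs delivers, which is why signal integrity at these pin speeds forces design revisions.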
Date: 08.12.2025
Trendforce notes that "SK hynix, Samsung, and Micron have all resubmitted HBM4 samples and are further refining their designs in response to Nvidia's stricter requirements. Compared to its competitors, Samsung has taken an early lead by introducing a 1cnm process for HBM4 and utilizing advanced in-house foundry technology for the base chip."
The 1cnm process belongs to the sixth generation of the 10-nm class and achieves a line width of approximately 11 to 12 nanometers, making it finer than the previous 1bnm process of the fifth generation with around 12 to 13 nanometers. According to Trendforce, Samsung could achieve a technical lead in the rapid qualification of HBM4 by combining the 1c DRAM process with HBM4 design—potentially translating into a competitive advantage in certain high-end AI segments.
Samsung is therefore expected to be the first to qualify as a supplier for HBM4 for the Rubin platform. Meanwhile, Nvidia has adjusted the supply chain for the Blackwell platform in response to high demand and currently requires HBM3e in particular. Additionally, under certain conditions, shipments of H200 chips to China have been re-enabled; these AI chips also use HBM3e. This has given memory producers additional time to adjust the design of HBM4 products to meet the new requirements. (sb)