AI workloads' hunger for data is pushing hyperscalers back toward the HDD. Western Digital wants to raise capacity to 140 TB and has now presented the roadmap of new technologies meant to get there: dual-pivot actuators and in-house laser technology are intended to make such dimensions possible.
Old form factor, new technologies: Ahmed Shihab, Chief Product Officer at Western Digital, presents the drive, designed to hold up to 140 TB, at the company's Innovation Day.
(Image: Western Digital)
For a long time, the hard disk was regarded as little more than a data dump: cheap, high-capacity, but technologically exhausted. It trailed the established SSD competition in many respects, above all in transfer performance. But in the age of AI, which soaks up SSDs, DRAM and flash in general like a black hole, the good old magnetic disk architecture is undergoing a fundamental reassessment. AI systems need not only fast GPUs for training, but also a massive, economically scalable "memory" for the inference history: every query to an AI model generates context data that has to be stored to keep the system adaptive.
Tape (yes, that still exists) is too slow and flash is too expensive for this "inference history". The HDD occupies the critical position here as the primary exabyte repository. But to survive in modern AI clusters, it has to solve a fundamental problem: The mismatch between increasing capacity and limited interface bandwidth.
The SATA Paradox: Why Flash is Slowed Down
Flash proponents long had a seemingly unassailable argument: performance combined with falling prices. The second part no longer holds. Western Digital (WD) counters with an analysis of the system architecture that can be summed up as a "SATA paradox".
In the huge object storage infrastructures of hyperscalers, drives are attached almost exclusively via the SATA interface, which limits throughput to roughly 530 MB/s net. Internally, a QLC SSD may shuttle data between flash and controller at gigabytes per second, but as soon as it sits in a SATA slot, that speed advantage evaporates at the interface bottleneck. Faster interfaces have of course existed for a long time, but NVMe and the like are still too expensive at these capacity scales.
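Where the roughly 530 MB/s come from can be reconstructed with a quick back-of-the-envelope calculation. The 6 Gbit/s line rate and 8b/10b encoding are standard SATA III parameters; the protocol overhead share below is an assumption for illustration, not a measured value.

```python
# Back-of-the-envelope check on the ~530 MB/s SATA figure.
LINE_RATE_BIT_S = 6e9         # SATA III signalling rate
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line coding: 8 payload bits per 10 transmitted bits
PROTOCOL_OVERHEAD = 0.08      # assumed share lost to FIS framing, handshakes etc.

payload_mb_s = LINE_RATE_BIT_S * ENCODING_EFFICIENCY / 8 / 1e6   # -> 600 MB/s
net_mb_s = payload_mb_s * (1 - PROTOCOL_OVERHEAD)                # -> roughly 550 MB/s

print(f"theoretical SATA III payload: {payload_mb_s:.0f} MB/s")
print(f"net after protocol overhead:  {net_mb_s:.0f} MB/s")
```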
The AI data cycle: HDDs not only serve as "cold storage" in the Object Store (orange), but also cover critical phases such as data ingestion, preparation and storage of the inference history.
(Image: Western Digital)
For data center operators, the result is economically absurd: why pay ten times the price per terabyte for QLC flash when the system architecture (SATA) caps performance at a level that HDDs now reach as well? On top of that, QLC SSDs wear out under the write-intensive ingest workloads of AI systems, while HDDs, at least according to WD, offer consistent reliability.
The logical consequence for WD: it is not the storage medium that needs to change; rather, the HDD must be able to fully utilize the SATA interface.
With increasing capacities (30, 40, 60 TB) comes the risk of "stranded capacity": storage space that is available but cannot be used efficiently due to I/O bottlenecks. In other words: the IOPS per terabyte decrease. In order to prevent this, WD reaches deep into its mechatronics box of tricks:
To fully utilize the SATA bus, WD uses the precision of the proven Triple Stage Actuator to read or write from multiple data tracks simultaneously. The result is a doubling of the sequential throughput and a 1.7-fold increase in random read/write accesses. In the future, it should be possible to address up to eight tracks simultaneously.
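A rough model, with assumed baseline figures rather than vendor specifications, shows why this matters: as capacity grows, both the time needed to read an entire drive and the IOPS available per terabyte deteriorate, and the multi-track factors stated above (2x sequential, 1.7x random) claw part of that back.

```python
# Illustrative only: stranded-capacity effect vs. the claimed multi-track multipliers.
# Baseline figures are typical 7,200 rpm ballpark values, not vendor specs.
BASE_SEQ_MB_S = 270      # assumed single-stream sequential throughput
BASE_RANDOM_IOPS = 170   # assumed random IOPS of a single actuator

for capacity_tb in (30, 40, 60, 140):
    full_read_h = capacity_tb * 1e6 / BASE_SEQ_MB_S / 3600   # hours to read the whole drive
    full_read_h_multi = full_read_h / 2                       # 2x sequential via multi-track
    iops_per_tb = BASE_RANDOM_IOPS / capacity_tb
    iops_per_tb_multi = iops_per_tb * 1.7                     # 1.7x random via multi-track
    print(f"{capacity_tb:>4} TB: full read {full_read_h:6.1f} h -> {full_read_h_multi:6.1f} h, "
          f"IOPS/TB {iops_per_tb:4.1f} -> {iops_per_tb_multi:4.1f}")
```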
The implementation of this actuator is a lesson in control engineering: since the track density (tracks per inch, TPI) of modern drives runs into the hundreds of thousands, a simple voice coil motor (VCM) is no longer enough to compensate for mechanical resonances and vibrations. WD therefore relies on a three-stage cascade (a simplified sketch follows after the list):
VCM (Voice Coil Motor): Handles the coarse positioning of the entire arm across the full swivel range.
Milli actuator: A piezo element on the arm itself, responsible for fine positioning and fast corrections in the mid-frequency range.
Micro actuator: Another piezo element that sits directly on the slider (the carrier of the read/write head). It regulates high-frequency disturbances and provides the nanometer-level precision needed to stay on track at 7,200 rpm. And this now happens in parallel at multiple points of the platter stack.
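The division of labor can be illustrated with a deliberately simplified toy model: one coarse stage with large stroke and low gain, and two piezo stages with small stroke but high gain that clean up the residual error. This is not WD's control loop; gains, strokes and the disturbance are invented values that only demonstrate the cascade principle.

```python
# Toy model of a three-stage cascaded positioner (VCM + milli + micro actuator).
import random

random.seed(0)
TARGET = 1000.0                       # desired head position (arbitrary units)
stages = [                            # (name, gain per step, +/- stroke limit)
    ("VCM",   0.20, float("inf")),    # coarse, slow, unlimited range
    ("milli", 0.50, 50.0),            # piezo on the arm: faster, limited stroke
    ("micro", 0.90, 5.0),             # piezo on the slider: fastest, tiny stroke
]
positions = {name: 0.0 for name, _, _ in stages}

for _ in range(60):
    disturbance = random.uniform(-2.0, 2.0)               # stand-in for vibration/resonance
    error = TARGET + disturbance - sum(positions.values())
    for name, gain, stroke in stages:                      # each stage chases the residual error
        new_pos = max(-stroke, min(stroke, positions[name] + gain * error))
        error -= new_pos - positions[name]
        positions[name] = new_pos

print(f"residual off-track error: {TARGET - sum(positions.values()):.3f}")
```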
Dual Pivot: The IOPS Accelerator
Perhaps the most radical mechanical innovation is the "Dual Pivot" architecture. Instead of moving all read/write heads via a single pivot point, WD uses two separate actuators on two independent pivot points.
The effect: transactions per second (IOPS) are doubled and vibration effects are dampened, since the impulse forces of the opposing arm movements partially cancel each other out.
What's special: Unlike previous multi-actuator attempts in the industry, the dual-pivot design fits into the existing 3.5-inch form factor and requires no changes to the chassis or power supply ("drop-in replacement").
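A quick sanity check with assumed figures (typical 7,200 rpm ballpark values, not vendor data) shows what doubling the actuator count means in practice: for small random I/O the mechanics, not the SATA link, remain the limiting factor, which is exactly why extra actuators pay off.

```python
# Rough illustration: two independent actuators roughly double random IOPS while the
# SATA link stays far from saturated for 4K random I/O. Figures are assumptions.
SINGLE_ACTUATOR_IOPS = 170      # assumed ballpark for one 7,200 rpm actuator
BLOCK_KB = 4

for actuators in (1, 2):
    iops = SINGLE_ACTUATOR_IOPS * actuators
    random_mb_s = iops * BLOCK_KB / 1024
    print(f"{actuators} actuator(s): ~{iops} IOPS, ~{random_mb_s:.1f} MB/s random 4K "
          f"(SATA ceiling ~530 MB/s)")
```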
The Optical Turnaround: VCSEL Lasers And Wafer-Level Testing
The path to 140 TB: The switch from conventional edge emitters (left) to in-house VCSEL technology (center) enables a more thermally efficient writing process. In the future, this will allow designs with up to 14 platters and an areal density of 10 TB per platter (right).
(Image: Western Digital)
While the mechanics ensure the speed, the material science ensures the density. WD relies entirely on HAMR (Heat-Assisted Magnetic Recording), but differentiates itself through a high level of vertical integration: the lasers are manufactured in-house.
VCSEL (Vertical-Cavity Surface-Emitting Laser) technology is used here, which is superior to conventional edge-emitting lasers (edge emitters) in production:
Design & thermals: VCSEL modules are smaller and more thermally efficient. This is crucial in order to keep the "head-media spacing" (the distance between head and platter) to a minimum and to limit thermal expansion effects on the write head.
Wafer-level testing: A decisive advantage for process engineers. As the laser emits vertically, its function can be tested on the wafer before it is cut. With edge emitters, this is only possible after separation. This massively increases the yield and reduces the costs per unit.
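The cost argument can be captured in a toy model. All numbers below are invented; only the structure mirrors the text: if bad dies are identified on the wafer, downstream dicing and packaging cost is spent only on known-good lasers.

```python
# Toy cost model for the wafer-level-test argument (all figures hypothetical).
WAFER_COST = 5000.0          # hypothetical cost of one processed wafer
DIES_PER_WAFER = 20000       # hypothetical VCSEL dies per wafer
YIELD = 0.80                 # hypothetical fraction of functional dies
BACKEND_COST_PER_DIE = 0.40  # hypothetical dicing/packaging/handling cost per die

good = DIES_PER_WAFER * YIELD

# Edge emitter: function only testable after separation -> backend cost paid for every die
edge_cost_per_good = (WAFER_COST + BACKEND_COST_PER_DIE * DIES_PER_WAFER) / good

# VCSEL: tested on the wafer -> backend cost only for dies that already passed
vcsel_cost_per_good = (WAFER_COST + BACKEND_COST_PER_DIE * good) / good

print(f"edge emitter:             {edge_cost_per_good:.3f} per good laser")
print(f"VCSEL (wafer-level test): {vcsel_cost_per_good:.3f} per good laser")
```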
The Mathematics of 140 Terabytes
Anatomy of the HAMR drive: Western Digital demonstrates the current structure with 4 TB per platter. With 11 platters, this results in a total capacity of 44 TB, whereby the basic architecture remains identical to previous ePMR models.
(Image: Western Digital)
Miniaturization through VCSEL allows WD to pack more platters into the 3.5-inch housing (spelled out in the short calculation after the list):
Today: 10 to 11 platters.
Roadmap: Up to 14 platters become possible with the new lasers.
Density: The target is an areal density of 10 TB per platter by 2028.
Capacity: 10 TB × 14 platters = 140 TB per drive.
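The arithmetic, using the figures from the article's roadmap:

```python
# Capacity arithmetic from the roadmap: areal density per platter times platter count.
configs = [
    ("today, HAMR demo drive",  4, 11),   # 4 TB/platter x 11 platters
    ("roadmap target for 2028", 10, 14),  # 10 TB/platter x 14 platters
]
for label, tb_per_platter, platters in configs:
    print(f"{label}: {tb_per_platter} TB x {platters} platters = {tb_per_platter * platters} TB")
```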
Cool Tier: Energy Efficiency Through Optimized Speed
In modern data centers, energy efficiency (keyword: PUE) has long been just as critical a KPI as raw storage capacity. AI databases generate not only "hot" data (inference), but also huge amounts of "warm" or "cool" data that is accessed less frequently, yet cannot be parked on tape because access times must remain in the millisecond range.
WD is addressing this "cool tier" with a new generation of drives that treats energy efficiency as a primary design parameter. By optimizing firmware and motor control (in effect, lowering the spindle speed), energy consumption drops by a substantial 20 percent. The trade-off is deliberate: the sequential transfer rate falls by only 5 to 10 percent. For massive object stores, where network latency often dominates over pure disk mechanics, this is an attractive compromise that further reduces the total cost of ownership (TCO).
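What that trade-off means per terabyte moved can be estimated with a small calculation. The baseline wattage and throughput are assumptions; only the 20 percent power saving and the 5 to 10 percent throughput penalty come from the article.

```python
# Rough energy-per-terabyte estimate for the cool tier.
BASELINE_POWER_W = 10.0     # assumed active power of a nearline HDD
BASELINE_SEQ_MB_S = 270.0   # assumed baseline sequential throughput

baseline_kj_per_tb = BASELINE_POWER_W * (1e6 / BASELINE_SEQ_MB_S) / 1e3

for throughput_penalty in (0.05, 0.10):
    power = BASELINE_POWER_W * 0.80                      # 20 percent lower power draw
    seq = BASELINE_SEQ_MB_S * (1 - throughput_penalty)
    kj_per_tb = power * (1e6 / seq) / 1e3                # energy to stream one terabyte
    saving = 1 - kj_per_tb / baseline_kj_per_tb
    print(f"-{throughput_penalty:.0%} throughput: {kj_per_tb:.1f} kJ/TB "
          f"(baseline {baseline_kj_per_tb:.1f} kJ/TB, {saving:.0%} less energy per TB)")
```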
Interface to the Software: Open API Instead of Black Box
Hardware abstraction: A "Single Open API" is intended to hide the physical differences between flash and HDD (such as zoning or SMR management) from the host system in order to simplify integration into software-defined storage environments.
(Image: Western Digital)
For decades, the hard disk was a "black box" for the operating system: The OS sent logical block addresses (LBA) and the disk controller decided where and how this data was physically stored in a non-transparent manner. In times of highly optimized software-defined storage stacks, this overhead is undesirable.
WD is therefore driving forward the opening of the hardware interface by means of an Open API. The aim is to make technologies such as Zoned Namespaces (ZNS) and host-managed SMR (Shingled Magnetic Recording) accessible to smaller cloud providers ("NeoClouds") without them having to develop proprietary firmware stacks. The principle: the hard disk reports its physical geometry (zones) to the host. The file system (e.g. btrfs or f2fs) then places data directly and sequentially in the appropriate zones. This eliminates the disk's internal garbage collection overhead, reduces write amplification and ensures deterministic latency—a decisive factor for near real-time AI applications.
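What "host-managed" means in practice can be illustrated with a toy model: zones with a write pointer that only accept sequential appends, and a whole-zone reset as the only way to reclaim space. This is not the real ZBC/ZAC or NVMe ZNS command set, just the concept the host-side file system has to deal with.

```python
# Toy model of host-managed zoned storage (the idea behind host-managed SMR / ZNS).
class Zone:
    def __init__(self, zone_id: int, capacity_blocks: int):
        self.zone_id = zone_id
        self.capacity = capacity_blocks
        self.write_pointer = 0               # next writable block inside the zone

    def append(self, n_blocks: int) -> int:
        """Sequential-only write; returns the start block, raises if the zone is full."""
        if self.write_pointer + n_blocks > self.capacity:
            raise ValueError("zone full, host must open another zone")
        start = self.write_pointer
        self.write_pointer += n_blocks       # no in-place overwrite, no device-side GC
        return start

    def reset(self) -> None:
        """Whole-zone reset: the only way to reclaim space (decided by the host)."""
        self.write_pointer = 0


zones = [Zone(i, capacity_blocks=256) for i in range(4)]
current = zones[0]
for extent in (100, 100, 80):                # host places data sequentially, zone by zone
    try:
        lba = current.append(extent)
    except ValueError:
        current = zones[current.zone_id + 1]
        lba = current.append(extent)
    print(f"wrote {extent} blocks to zone {current.zone_id} starting at block {lba}")
```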
The Dual Roadmap: Risk Minimization for Hyperscalers
Despite the progress made in HAMR, the industry is conservative. A change in storage technology often requires months of re-qualification of the software stacks. WD counters this with a dual strategy:
ePMR (Energy-assisted Perpendicular Magnetic Recording): In contrast to HAMR, no laser is used here, but an electric current ("bias current") is applied to the recording head. This generates a supporting magnetic field that stabilizes the writing process ("jitter" reduction) and thus enables narrower tracks. This proven technology is being further developed: WD is currently qualifying the first 40 TB ePMR HDD (11 platters). A 12-platter design is planned for the step up to 60 TB. This serves as a "safety net" for customers who do not yet want to switch to HAMR.
HAMR (Heat-Assisted Magnetic Recording): The ramp-up of HAMR, which technologically paves the way to 100+ TB, is running in parallel. Here, a microscopic laser on the write head heats the medium selectively for nanoseconds. This brief heating drastically reduces the coercivity of the magnetic material, allowing data to be written to much smaller, more thermally stable grains, which is the key to extreme areal densities (the rough estimate after this list illustrates why the detour via heat is needed).
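A back-of-the-envelope estimate makes the physics tangible. The material values below are rough literature ballparks for conventional media versus FePt-type HAMR media, not WD figures: tiny grains are only thermally stable with very high anisotropy, but then the field needed to switch them cold far exceeds what a write head can deliver (roughly 2 to 2.5 tesla), and that is exactly the gap the laser closes.

```python
# The recording "trilemma" in numbers: small grains need high anisotropy Ku to stay
# thermally stable, but high Ku pushes the switching field beyond what a write head
# can generate, unless the grain is briefly heated (HAMR). Ballpark values only.
import math

K_B = 1.38e-23           # Boltzmann constant, J/K
T = 330.0                # assumed operating temperature, K
GRAIN_D = 5e-9           # assumed grain diameter, m
GRAIN_H = 8e-9           # assumed grain height, m

media = [
    # name, Ku (J/m^3), Ms (A/m): rough typical literature values
    ("conventional PMR media", 3e5, 5e5),
    ("FePt-type HAMR media",   5e6, 1.1e6),
]

volume = math.pi * (GRAIN_D / 2) ** 2 * GRAIN_H
for name, ku, ms in media:
    stability = ku * volume / (K_B * T)     # rule of thumb: > ~60 for ~10-year retention
    anisotropy_field_t = 2 * ku / ms        # mu0*Hk = 2Ku/Ms, in tesla
    print(f"{name}: KuV/kBT = {stability:.0f}, switching field ~ {anisotropy_field_t:.1f} T")
```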
As both technologies are based on the same mechanical platform (chassis, firmware base), customers can theoretically mix ePMR and HAMR drives in the same rack without having to adapt their software architecture.
"Stop! HAMR-Time!": New Spring for the Hard Disk?
Technology roadmap: In addition to pure capacity (capacity drives), WD will gradually introduce performance features such as "High Bandwidth" and "Dual Pivot" from 2026/27 in order to keep the IOPS-per-TB ratio stable.
(Image: Western Digital)
The hard disk is by no means a legacy product being phased out in the age of AI; it is becoming a strategic component of scalable AI infrastructures. The innovations around VCSEL lasers and dual-pivot mechanics show that the physical limits of magnetic storage are far from exhausted. More to the point, Western Digital is shifting the focus from raw capacity to usable I/O performance in order to counter the "SATA paradox" in data centers effectively.
While flash memory remains indispensable for hot-tier applications, the HDD is consolidating its role as the economic backbone for the massive data volumes behind inference history and training. GPUs for compute and HDDs for data storage thus become a team in the efficient AI clusters of the future. The path to 140 terabytes is therefore not just a footnote on a roadmap, but a prerequisite for handling the massive data growth of the coming years economically at all. (mc)