Nvidia's Blackwell AI chip: the next-generation AI revolution

By Hendrik Härter | Translated by AI | 3 min reading time


The chip company Nvidia aims to expand its leading role in artificial intelligence technology. CEO Jensen Huang presented the company's latest AI platform, the Blackwell chip, which is even more powerful than the current Grace Hopper generation.

With the Blackwell AI chip, Nvidia aims to usher in the next generation of artificial intelligence. (Image: Nvidia)

Nvidia is benefiting from the current boom in artificial intelligence. To ensure the company continues to play a leading role in AI technology, Nvidia CEO Jensen Huang introduced a new generation of the company's AI computing platform in his keynote last Monday. The system, named Blackwell, is described by Nvidia as "the driving force behind a new industrial revolution." The AI chip is named after the American mathematician David Blackwell.

The company also aims to expand its role in content creation with artificial intelligence. "The Blackwell system is 30 times better at this than Hopper," emphasized Huang on Monday at the company's in-house developer conference GTC in San Jose.

The GB200 Grace Blackwell chip connects two B200 Tensor Core GPUs to the Grace CPU via a chip-to-chip link, specifically the ultra-low-power NVLink-C2C interconnect with a data rate of 900 GB/s. The Blackwell GPU developed by Nvidia is said to handle models with up to 10 trillion parameters; the chip provides 208 billion transistors for this purpose.
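For a sense of scale, here is a rough back-of-the-envelope check of the 10-trillion-parameter figure. The 1-byte-per-parameter (FP8-style) weight format is an assumption for illustration, not something the article states:

```python
import math

# Rough scale check: how much memory do 10 trillion parameters occupy,
# and how many Blackwell GPUs would holding the weights alone require?
params = 10e12           # 10 trillion parameters (article figure)
bytes_per_param = 1      # ASSUMPTION: FP8-style 1-byte weights
hbm_per_gpu_gb = 192     # HBM3E per Blackwell GPU (article figure)

weights_tb = params * bytes_per_param / 1e12                      # 10 TB of weights
gpus_for_weights = math.ceil(weights_tb * 1000 / hbm_per_gpu_gb)  # ceil(10000/192)

print(f"Weights: {weights_tb:.0f} TB -> at least {gpus_for_weights} GPUs")
# -> Weights: 10 TB -> at least 53 GPUs (weights only, ignoring activations)
```

In practice, activations, optimizer state, and redundancy push the real GPU count far higher; the point is only that models of this size necessarily span many GPUs, which is why the interconnect bandwidth matters so much.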

For correspondingly high AI computing power, systems built on the GB200 can be connected via the Quantum-X800 InfiniBand and Spectrum-X800 Ethernet networking platforms, which achieve speeds of up to 800 Gbit/s.

Use of the Blackwell chip

The GB200 is used as a key component in the GB200 NVL72, a high-performance, water-cooled rack solution containing 36 GB200 accelerators. Each GB200 accelerator pairs one Grace CPU with two Blackwell GPUs, whereas the previous GH200 accelerator paired one Grace CPU with one Hopper GPU.

The GPUs no longer sit on the same board as the Grace CPU but on a separate module within the server. The Grace CPU and the Blackwell GPUs communicate via NVLink-C2C with a bidirectional bandwidth of 900 GB/s. Each GB200 accelerator offers a total memory capacity of 864 GB; each of the two Blackwell GPUs is equipped with 192 GB of HBM3E memory and connected to the Grace CPU.
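The memory figures can be sanity-checked with simple arithmetic. The 480 GB difference between the total and the GPU memory would correspond to CPU-attached memory on the Grace side, which the article does not break out explicitly:

```python
# Memory breakdown of one GB200 accelerator (figures from the article).
total_memory_gb = 864      # total capacity per GB200 accelerator
hbm3e_per_gpu_gb = 192     # HBM3E per Blackwell GPU
num_gpus = 2               # Blackwell GPUs per accelerator

gpu_memory_gb = num_gpus * hbm3e_per_gpu_gb      # 2 * 192 = 384 GB
cpu_side_gb = total_memory_gb - gpu_memory_gb    # 864 - 384 = 480 GB

print(f"GPU HBM3E total:   {gpu_memory_gb} GB")
print(f"Remainder (Grace): {cpu_side_gb} GB")
```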

Processing data with the BlueField-3

The GB200 NVL72 also includes BlueField-3 data processing units, which enable cloud network acceleration, composable storage, zero-trust security, and GPU compute elasticity in hyperscale AI clouds.

For Nvidia's LLM inference workloads, this combination delivers up to 30 times the performance of the same number of H100 Tensor Core GPUs, while cutting cost and energy consumption by up to a factor of 25. Integrating BlueField-3 into the GB200 NVL72 thus enables advanced features and significantly improves performance and efficiency in compute-intensive AI applications and cloud environments.

What is possible with the AI chip

Nvidia's hardware dominates data centers for AI training. The company also plans to expand its role in creating content with artificial intelligence; CEO Jensen Huang emphasized that the Blackwell system outperforms Hopper by a factor of up to 30 for such workloads. Beyond this performance gain, Nvidia offers new software that can also be used via cloud interfaces.

"With Grace Hopper, for example, you could have trained the chatbot ChatGPT within three months with 8,000 Nvidia chips and a power consumption of 15 megawatts," said Huang. "With Blackwell, you can do that in the same time with 2,000 chips and 4 megawatts of power."
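Taken at face value, Huang's figures imply a 4-fold reduction in chip count and, over the same three-month run, a 3.75-fold reduction in energy. This is an informal check of the quoted numbers, not an official Nvidia calculation:

```python
# Back-of-the-envelope comparison of Huang's quoted training scenarios,
# assuming the same three-month (~2160 h) training window for both.
hours = 3 * 30 * 24                          # ~3 months in hours

hopper_chips, hopper_mw = 8000, 15           # Grace Hopper scenario
blackwell_chips, blackwell_mw = 2000, 4      # Blackwell scenario

hopper_energy_mwh = hopper_mw * hours        # 15 MW * 2160 h = 32400 MWh
blackwell_energy_mwh = blackwell_mw * hours  # 4 MW * 2160 h = 8640 MWh

print(f"Chip reduction:   {hopper_chips / blackwell_chips:.2f}x")            # 4.00x
print(f"Energy reduction: {hopper_energy_mwh / blackwell_energy_mwh:.2f}x")  # 3.75x
```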

The future of artificial intelligence

Nvidia is building computing systems for a future in which most content is no longer retrieved ready-made from storage but generated on the fly by AI software based on the current situation.

Jensen Huang, CEO of Nvidia, is convinced that this development is imminent. One example: in the future, users could query a building via chatbot instead of looking up data in various locations. This vision illustrates how far AI technology has come and how it will generate personalized, contextual content in real time, taking interactions with technology to a new level.

Techniques originally developed by Nvidia for graphics cards have long since proven themselves in computing work for applications with artificial intelligence. Competitors such as Intel and AMD have so far been unable to catch up, which is driving rapid growth in Nvidia's business and its stock value. Large AI companies like Microsoft, Google, and Amazon are already planning to deploy Blackwell.
