NVIDIA Announces HGX H100, Key GPU Server Building Block
NVIDIA has announced the HGX H100, a powerful AI supercomputing platform built around the H100, a new GPU based on the new Hopper architecture.
Created on April 21|Last edited on April 21
NVIDIA HGX is a supercomputing platform that combines the power of multiple GPUs, optimized for AI computing. With NVLink and InfiniBand support, communication between GPUs in the system is far faster than over common PCIe connections.
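As a rough illustration of why interconnect bandwidth matters, the sketch below estimates how long it would take to move a large set of model weights over PCIe versus NVLink. The bandwidth figures are approximate published peak numbers (real-world throughput is lower), and the model size is an arbitrary assumption for the example:

```python
# Illustrative comparison of interconnect transfer times.
# Bandwidth figures are approximate peaks, not measured throughput.
PCIE_GEN5_X16_GBPS = 128  # PCIe Gen 5 x16, bidirectional, ~GB/s
NVLINK_H100_GBPS = 900    # H100 total NVLink GPU-to-GPU bandwidth, ~GB/s

def transfer_seconds(size_gb: float, bandwidth_gbps: float) -> float:
    """Time to move size_gb gigabytes at bandwidth_gbps GB/s."""
    return size_gb / bandwidth_gbps

# Hypothetical 70 GB of model weights (e.g., a large transformer in FP16).
size_gb = 70
print(f"PCIe:   {transfer_seconds(size_gb, PCIE_GEN5_X16_GBPS):.3f} s")
print(f"NVLink: {transfer_seconds(size_gb, NVLINK_H100_GBPS):.3f} s")
```

At these peak rates the NVLink transfer finishes roughly 7x faster, which is the gap the HGX platform is designed to exploit for multi-GPU training.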

NVIDIA HGX H100
What is the NVIDIA HGX H100?
The NVIDIA HGX H100 is a sequel of sorts to the NVIDIA HGX A100. The big difference between the two is the upgrade from the A100 Tensor Core GPU, which uses the Ampere architecture, to the recently announced Hopper architecture of the H100 Tensor Core GPU.
There are a few options to choose from, including a 4-GPU version, an 8-GPU version, and an 8-GPU version with NVLink-Network support. In each version, every H100 GPU is connected to the others via NVLink, with NVSwitch facilitating connections between GPUs in the 8-GPU models.
The 4-GPU version offers a more compact form factor and additional direct-to-CPU connection options. The 8-GPU version lets you harness the power of 8 H100 GPUs, while the 8-GPU version with NVLink-Network support allows direct bridging between individual HGX H100 systems, scaling up to a maximum of 256 H100 GPUs.
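To make the scale-out numbers concrete, here is a small sketch, using only the GPU counts quoted above, of how many 8-GPU HGX H100 systems an NVLink-Network cluster at the 256-GPU maximum would comprise:

```python
# GPU counts taken from the HGX H100 variants described above.
GPUS_PER_HGX = 8               # 8-GPU HGX H100 variant
MAX_NVLINK_NETWORK_GPUS = 256  # maximum GPUs bridged via NVLink-Network

systems = MAX_NVLINK_NETWORK_GPUS // GPUS_PER_HGX
print(f"{systems} HGX H100 systems at the {MAX_NVLINK_NETWORK_GPUS}-GPU cap")
# → 32 HGX H100 systems at the 256-GPU cap
```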
With the growing requirement for massive models, this networking support positions the HGX H100 for training runs at scales a single system cannot reach.
How does the NVIDIA HGX H100 compare to the A100?
NVIDIA's new Hopper architecture lets the H100 soar above the A100 in performance across the board. Add the new networking option and you've got speeds blazing past those of the A100.

More detailed comparisons between the A100 and H100, including improvement ratios of up to 32x, can be found in the official announcement post.
Find out more
Tags: ML News