An order-of-magnitude leap for accelerated computing.
Tap into unprecedented performance, scalability, and security for every workload with the NVIDIA H100 Tensor Core GPU.
With the NVIDIA® NVLink® Switch System, up to 256 H100s can be connected to accelerate exascale workloads, while a dedicated Transformer Engine accelerates trillion-parameter language models.
H100’s combined technology innovations can speed up large language models by an incredible 30X over the previous generation to deliver industry-leading conversational AI.
Form Factor            H100 SXM            H100 PCIe
FP64                   34 teraFLOPS        26 teraFLOPS
FP64 Tensor Core       67 teraFLOPS        51 teraFLOPS
FP32                   67 teraFLOPS        51 teraFLOPS
TF32 Tensor Core       989 teraFLOPS*      756 teraFLOPS*
BFLOAT16 Tensor Core   1,979 teraFLOPS*    1,513 teraFLOPS*
FP16 Tensor Core       1,979 teraFLOPS*    1,513 teraFLOPS*
FP8 Tensor Core        3,958 teraFLOPS*    3,026 teraFLOPS*
INT8 Tensor Core       3,958 TOPS*         3,026 TOPS*
GPU Memory             80GB                80GB
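To make the table's relative scaling concrete, here is a minimal Python sketch that computes peak-throughput ratios directly from the figures above. The dictionary layout and the `speedup` helper are illustrative, not part of any NVIDIA API; the numbers are taken verbatim from the table (teraFLOPS for floating-point rows, TOPS for INT8).

```python
# Peak throughput figures from the H100 spec table above
# (teraFLOPS for FP rows, TOPS for INT8; * entries include sparsity-marked rates).
H100_SXM = {
    "FP64": 34, "FP64 Tensor Core": 67, "FP32": 67,
    "TF32 Tensor Core": 989, "BFLOAT16 Tensor Core": 1979,
    "FP16 Tensor Core": 1979, "FP8 Tensor Core": 3958, "INT8 Tensor Core": 3958,
}
H100_PCIE = {
    "FP64": 26, "FP64 Tensor Core": 51, "FP32": 51,
    "TF32 Tensor Core": 756, "BFLOAT16 Tensor Core": 1513,
    "FP16 Tensor Core": 1513, "FP8 Tensor Core": 3026, "INT8 Tensor Core": 3026,
}

def speedup(spec: dict, precision: str, baseline: str = "FP64") -> float:
    """Ratio of peak throughput at one precision over a baseline precision."""
    return spec[precision] / spec[baseline]

print(f"SXM FP8 Tensor vs FP64:  {speedup(H100_SXM, 'FP8 Tensor Core'):.0f}x")
print(f"SXM FP8 vs FP16 Tensor:  {speedup(H100_SXM, 'FP8 Tensor Core', 'FP16 Tensor Core'):.1f}x")
print(f"PCIe FP16 Tensor vs FP32: {speedup(H100_PCIE, 'FP16 Tensor Core', 'FP32'):.1f}x")
```

Note the regular doubling down the precision ladder on both form factors: FP8 Tensor Core throughput is exactly 2x FP16/BFLOAT16, which in turn is roughly 2x TF32.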