NVIDIA H200 NVL Graphics Card 141 GB Passive PCIe – 900-21010-0040-000
NVIDIA H200 Tensor Core GPU - PCIe Powerhouse
Breakthrough Performance for AI and Data Center Applications
The NVIDIA H200 Tensor Core GPU in its PCIe form factor delivers groundbreaking performance for AI workloads, featuring 141 GB of HBM3e memory and 4.8 TB/s of memory bandwidth.
This configuration is optimized for large-scale deployments, supporting up to 8 GPUs per server and using NVLink bridges for high-speed GPU-to-GPU data transfer at 900 GB/s.
With advanced Tensor Cores delivering nearly 4,000 TFLOPS of FP8 and nearly 4,000 TOPS of INT8 throughput, the H200 NVL is designed for demanding data center environments and scalable AI, while Multi-Instance GPU (MIG) partitioning lets a single card serve multi-tenant workloads efficiently.
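As a quick post-install sanity check, a minimal CUDA sketch along the following lines (an illustrative example, not NVIDIA-supplied code; it assumes the CUDA Toolkit and driver are installed and compiles with nvcc) enumerates the visible GPUs and prints each card's name, memory capacity, and SM count, so the 141 GB figure can be confirmed directly:

    // check_gpus.cu - sketch: enumerate visible CUDA devices and print key properties.
    // Build: nvcc check_gpus.cu -o check_gpus
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            std::printf("No CUDA devices visible.\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop{};
            cudaGetDeviceProperties(&prop, i);
            // totalGlobalMem is reported in bytes; convert to GiB for readability.
            std::printf("GPU %d: %s, %.1f GiB memory, compute capability %d.%d, %d SMs\n",
                        i, prop.name,
                        prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                        prop.major, prop.minor, prop.multiProcessorCount);
        }
        return 0;
    }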
Specification – H200 NVL (PCIe)

FP64: 34 TFLOPS
FP64 Tensor Core: 67 TFLOPS
FP32: 67 TFLOPS
TF32 Tensor Core²: 989 TFLOPS
BFLOAT16 Tensor Core²: 1,979 TFLOPS
FP16 Tensor Core²: 1,979 TFLOPS
FP8 Tensor Core²: 3,958 TFLOPS
INT8 Tensor Core²: 3,958 TOPS
GPU Memory: 141 GB HBM3e
GPU Memory Bandwidth: 4.8 TB/s
Decoders: 7 NVDEC, 7 JPEG
Confidential Computing: Supported
Max Thermal Design Power (TDP): Up to 600 W (configurable)
Multi-Instance GPUs: Up to 7 MIGs @ 16.5 GB each
Form Factor: PCIe
Interconnect: 2- or 4-way NVIDIA NVLink bridge at 900 GB/s; PCIe Gen5 at 128 GB/s
Server Options: NVIDIA MGX™ H200 NVL partner and NVIDIA-Certified Systems with up to 8 GPUs
NVIDIA AI Enterprise: Add-on

² With sparsity.
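For multi-GPU servers built around the NVLink bridge and PCIe Gen5 interconnects listed above, direct GPU-to-GPU memory access can be checked with the standard CUDA peer-access query. The sketch below is illustrative and assumes at least two GPUs are visible to the driver:

    // p2p_check.cu - sketch: report which GPU pairs support direct peer-to-peer access.
    // Build: nvcc p2p_check.cu -o p2p_check
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        if (count < 2) {
            std::printf("Fewer than two GPUs visible; nothing to check.\n");
            return 0;
        }
        for (int a = 0; a < count; ++a) {
            for (int b = 0; b < count; ++b) {
                if (a == b) continue;
                int canAccess = 0;
                // Queries whether device 'a' can directly address memory on device 'b'.
                cudaDeviceCanAccessPeer(&canAccess, a, b);
                std::printf("GPU %d -> GPU %d: peer access %s\n",
                            a, b, canAccess ? "available" : "not available");
            }
        }
        return 0;
    }

The query only reports whether direct access is possible; whether traffic actually travels over an NVLink bridge or over PCIe depends on how the 2- or 4-way bridges are populated in the chassis, which nvidia-smi topo -m can display.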