Enterprise adoption of AI is now mainstream, and organizations need end-to-end, AI-ready infrastructure that will accelerate them into this new era.
The H800 for mainstream servers comes with a five-year subscription to the NVIDIA AI Enterprise software suite, including enterprise support, simplifying AI adoption while delivering the highest performance.
This ensures organizations have access to the AI frameworks and tools they need to build H800-accelerated AI workflows such as AI chatbots, recommendation engines, vision AI, and more.
Graphics Processor
  GPU Name: GH100
  Architecture: Hopper
  Foundry: TSMC
  Process Size: 4 nm
  Transistors: 80,000 million (80 billion)
  Die Size: 814 mm²
Graphics Card
  Release Date: Mar 2022
  Generation: Tesla Hopper
  Type: Desktop
  Bus Interface: PCIe 5.0 x16
Clock Speeds
  Base Clock: 1095 MHz
  Boost Clock: 1755 MHz
  Memory Clock: 1500 MHz
Memory
  Memory Size: 80 GB
  Memory Type: HBM3
  Memory Bus: 5120 bit
  Bandwidth: 1920 GB/s
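The bandwidth figure follows directly from the bus width and memory clock; a minimal sketch of the arithmetic, assuming HBM3's double data rate (two transfers per clock):

```python
# Theoretical memory bandwidth from bus width and memory clock.
# Assumption: HBM3 transfers data on both clock edges (DDR, 2 transfers/clock).
bus_width_bits = 5120
memory_clock_hz = 1500e6   # 1500 MHz
transfers_per_clock = 2    # double data rate

bytes_per_transfer = bus_width_bits / 8  # 640 bytes across the full bus
bandwidth_gbps = bytes_per_transfer * memory_clock_hz * transfers_per_clock / 1e9

print(f"{bandwidth_gbps:.0f} GB/s")  # 1920 GB/s, matching the table
```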
Render Config
  Shading Units: 8448
  TMUs: 528
  ROPs: 24
  SM Count: 132
  Tensor Cores: 528
  L1 Cache: 192 KB (per SM)
  L2 Cache: 50 MB
Theoretical Performance
  Pixel Rate: 42.12 GPixel/s
  Texture Rate: 926.6 GTexel/s
  FP16 (half): 118.6 TFLOPS
  FP32 (float): 29.65 TFLOPS
  FP64 (double): 14.83 TFLOPS
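These theoretical figures are products of the render configuration and the boost clock; a minimal sketch checking the arithmetic, assuming the usual conventions (pixel rate = ROPs × boost clock, texture rate = TMUs × boost clock, FP32 = shading units × 2 FLOPs/clock for fused multiply-add), with FP16 listed here at 4× and FP64 at 1/2 the FP32 rate:

```python
# Derive the theoretical-performance rows from the render config and boost clock.
boost_clock_hz = 1755e6  # 1755 MHz
rops, tmus, shading_units = 24, 528, 8448

pixel_rate = rops * boost_clock_hz / 1e9          # GPixel/s: one pixel per ROP per clock
texture_rate = tmus * boost_clock_hz / 1e9        # GTexel/s: one texel per TMU per clock
fp32 = shading_units * 2 * boost_clock_hz / 1e12  # TFLOPS: FMA counts as 2 FLOPs
fp16 = fp32 * 4                                   # listed at 4x FP32 for this part
fp64 = fp32 / 2                                   # listed at 1/2 FP32

print(f"Pixel rate:   {pixel_rate:.2f} GPixel/s")   # 42.12
print(f"Texture rate: {texture_rate:.1f} GTexel/s") # 926.6
print(f"FP16: {fp16:.1f}  FP32: {fp32:.2f}  FP64: {fp64:.2f} TFLOPS")
```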
Board Design
  TDP: 700 W
  Suggested PSU: 1100 W
  Outputs: No outputs
  Power Connectors: 8-pin EPS
Graphics Features
  DirectX: N/A
  OpenGL: N/A
  OpenCL: 3.0
  Vulkan: N/A
  CUDA: 9.0 (compute capability)
  Shader Model: N/A