NVIDIA Tesla P100 PCIe 16 GB
The NVIDIA Tesla P100 PCIe 16 GB is a powerhouse graphics processing unit (GPU) built on the Pascal architecture.
With its 16 nm manufacturing process and 3584 CUDA Cores, this GPU delivers exceptional performance for demanding computational workloads.
With native FP16 (half-precision) arithmetic running at twice the FP32 rate, it also excels at deep learning and AI applications.
The GPU boasts a substantial 16 GB of high-bandwidth HBM2 memory with a 4096-bit interface, offering a remarkable memory bandwidth of 732 GB/s.
This ensures rapid data access and transfer, enabling seamless handling of large datasets.
With a thermal design power (TDP) of 250 W, the Tesla P100 PCIe is designed to handle intensive tasks while maintaining efficient power usage.
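These figures can be checked at runtime on any system with the card installed. The minimal sketch below uses the standard CUDA runtime API to query device 0 (the device index is an assumption for illustration) and print the properties that correspond to the specifications listed here; the values noted in the comments are the commonly published GP100 configuration, not output captured from real hardware.

```cpp
// Minimal sketch: read the properties listed above from a live device via the
// CUDA runtime API. Assumes a CUDA toolkit and at least one GPU are present.
// Compile with: nvcc query.cu -o query
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);  // device 0 assumed
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceProperties failed: %s\n",
                     cudaGetErrorString(err));
        return 1;
    }

    int busWidthBits = 0, memClockKHz = 0;
    cudaDeviceGetAttribute(&busWidthBits, cudaDevAttrMemoryBusWidth, 0);
    cudaDeviceGetAttribute(&memClockKHz, cudaDevAttrMemoryClockRate, 0);

    std::printf("Device:             %s\n", prop.name);
    std::printf("Compute capability: %d.%d\n", prop.major, prop.minor); // 6.0 for Pascal GP100
    std::printf("SM count:           %d\n", prop.multiProcessorCount);  // 56 SMs x 64 FP32 cores = 3584
    std::printf("Global memory:      %.1f GB\n", prop.totalGlobalMem / 1e9);
    std::printf("Memory bus width:   %d-bit\n", busWidthBits);          // 4096-bit HBM2 interface
    std::printf("Memory clock:       %d kHz\n", memClockKHz);
    return 0;
}
```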
| Specification | Value |
| --- | --- |
| Graphics Processing Unit (GPU) | NVIDIA Tesla P100 |
| GPU Architecture | Pascal |
| Manufacturing Process | 16 nm |
| CUDA Cores | 3584 |
| Tensor Cores | None (Tensor Cores debuted with the later Volta architecture) |
| Memory Capacity | 16 GB HBM2 |
| Memory Interface | 4096-bit |
| Memory Bandwidth | 732 GB/s |
| Memory Speed | 1.4 Gbps |
| TDP (Thermal Design Power) | 250 W |
| Power Connectors | 1x 8-pin PCIe Power Connector |
| Max Power Consumption | 250 W |
| Form Factor | Dual Slot, Full-Height |
| Max Resolution | 5120×2880 (Digital) |
| Display Outputs | None |
| DirectX Support | 12 |
| OpenGL Support | 4.5 |
| Vulkan Support | 1.0 |
| Compute Performance | 9.3 TFLOPS (FP32), 18.7 TFLOPS (FP16) |
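As a sanity check, the bandwidth and throughput figures in the table follow from simple arithmetic. The sketch below reproduces them under two assumptions that are not stated in the table: an effective HBM2 data rate of about 1.43 Gbps per pin and a boost clock of roughly 1.30 GHz, both commonly published for the GP100-based P100 PCIe.

```cpp
// Back-of-the-envelope cross-check of the spec table.
// The data rate and boost clock below are assumptions (not listed in the table).
#include <cstdio>

int main() {
    // Memory bandwidth = bus width (bits) * per-pin data rate (Gbit/s) / 8 bits per byte
    const double bus_width_bits = 4096.0;
    const double data_rate_gbps = 1.43;   // ~1.4 Gbps effective HBM2 speed (assumed)
    const double bandwidth_gbs  = bus_width_bits * data_rate_gbps / 8.0;
    std::printf("Memory bandwidth ~ %.0f GB/s\n", bandwidth_gbs);    // ~732 GB/s

    // FP32 throughput = CUDA cores * 2 ops per FMA * clock (GHz), in GFLOPS
    const double cuda_cores  = 3584.0;
    const double boost_ghz   = 1.30;      // assumed boost clock
    const double fp32_tflops = cuda_cores * 2.0 * boost_ghz / 1000.0;
    std::printf("FP32 throughput  ~ %.1f TFLOPS\n", fp32_tflops);    // ~9.3 TFLOPS
    // FP16 runs at twice the FP32 rate on GP100, giving ~18.7 TFLOPS.
    return 0;
}
```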