I'd be happy to run any calcs for people, since I have a 24-core server that's pretty much idle. It also has a Tesla P100 GPU, which is meant for compute rather than video output. Specs below:
| Specification | Value |
| --- | --- |
| GPU Architecture | NVIDIA Pascal |
| NVIDIA CUDA® Cores | 3584 |
| Double-Precision Performance | 4.7 TeraFLOPS |
| Single-Precision Performance | 9.3 TeraFLOPS |
| Half-Precision Performance | 18.7 TeraFLOPS |
| GPU Memory | 16 GB CoWoS HBM2 at 732 GB/s or 12 GB CoWoS HBM2 at 549 GB/s |
| System Interface | PCIe Gen3 |
| Max Power Consumption | 250 W |
| ECC | Yes |
| Thermal Solution | Passive |
| Form Factor | PCIe Full Height/Length |
| Compute APIs | CUDA, DirectCompute, OpenCL™, OpenACC |
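For anyone curious what the card actually reports at runtime, here's a minimal CUDA sketch (not from the original post) that queries the device properties with `cudaGetDeviceProperties`; a P100 should show up as a Pascal part with compute capability 6.0:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Tesla P100 (Pascal) reports compute capability 6.0
        printf("Device %d: %s (SM %d.%d)\n", i, prop.name, prop.major, prop.minor);
        printf("  Multiprocessors: %d\n", prop.multiProcessorCount);
        printf("  Global memory:   %.1f GiB\n",
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        printf("  Memory bus width: %d-bit\n", prop.memoryBusWidth);
        printf("  ECC enabled:     %s\n", prop.ECCEnabled ? "yes" : "no");
    }
    return 0;
}
```

Build it with `nvcc -o devinfo devinfo.cu` and run `./devinfo`; the reported global memory and ECC flag should line up with the 16 GB / ECC entries in the table above.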