Raw performance is the most noteworthy point of comparison for graphics processing units, but other GPU specs to consider include the number of cores and memory.
GPU performance is measured in trillions of floating point operations per second (TFLOPS). This number indicates how many calculations a GPU can complete in one second. When running 64-bit floating point operations, a consumer Nvidia GeForce GTX 1080 Ti can reach up to 0.355 TFLOPS, whereas the enterprise Nvidia Tesla P100 is rated for 4.7 to 5.3 TFLOPS and the Tesla V100 is rated for 7 to 7.8 TFLOPS.
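A vendor's TFLOPS rating can be sanity-checked with simple arithmetic: cores multiplied by clock speed multiplied by operations per clock. The sketch below uses illustrative figures, not any vendor's published spec, and assumes two FLOPs per clock from a fused multiply-add unit.

```python
# Rough theoretical-peak estimate: cores x clock x FLOPs per clock.
# All inputs here are hypothetical examples, not official specs.

def peak_tflops(cores: int, clock_ghz: float, flops_per_clock: int = 2) -> float:
    """Theoretical peak in TFLOPS; 2 FLOPs/clock assumes fused multiply-add."""
    return cores * clock_ghz * flops_per_clock / 1000.0

# Hypothetical card: 3,584 single-precision cores at a 1.48 GHz boost clock
print(round(peak_tflops(3584, 1.48), 1))  # -> 10.6
```

Real-world throughput falls short of this ceiling, but the calculation shows why core count and clock speed dominate the headline TFLOPS number.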
There are also GPUs optimized for 16-bit (half-precision) floating point operations, which can yield roughly four times the 64-bit throughput because each half-precision value requires only a quarter of the storage and bandwidth of a double-precision value.
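The storage difference behind that speedup is easy to verify with Python's standard `struct` module, which supports IEEE 754 half, single, and double precision formats:

```python
import struct

# IEEE 754 sizes in bytes: 'e' = half (16-bit), 'f' = single (32-bit), 'd' = double (64-bit)
half = struct.calcsize('e')
single = struct.calcsize('f')
double = struct.calcsize('d')

print(half, single, double)  # -> 2 4 8
print(double // half)        # -> 4: four half floats fit in one double's footprint
```

Because four 16-bit values move through memory for every 64-bit value, the same memory bandwidth feeds roughly four times as many half-precision operations.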
The number of cores is another GPU spec to consider. Consumer GPUs have one GPU chip with hundreds of cores, whereas enterprise-class GPUs offer more GPU chips and thousands of cores for increased parallel processing capabilities. For example, the Nvidia Tesla M10 provides four GPU chips with 640 compute unified device architecture (CUDA) cores per GPU, for a total of 2,560 cores.
The differences in architecture between GPU models and manufacturers make it difficult to directly compare cores. What's important to know for GPU specs is that enterprise-class GPUs have more cores and allow for much greater parallelism.
You must also evaluate graphics memory and memory bandwidth. Enterprise-class GPUs include more -- and faster -- graphics memory, such as GDDR5, than GPUs sold on the consumer market, and they support low-latency RAM and higher-bandwidth memory technologies to avoid processing bottlenecks.
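Memory bandwidth itself follows from two numbers on the spec sheet: the effective data rate and the bus width. The following sketch uses illustrative GDDR5-class figures, not any particular card's rating.

```python
# Peak memory bandwidth (GB/s) = effective data rate (GT/s) x bus width (bits) / 8 bits per byte.
# The example inputs are hypothetical, GDDR5-class values.

def mem_bandwidth_gbps(data_rate_gtps: float, bus_width_bits: int) -> float:
    return data_rate_gtps * bus_width_bits / 8

# Hypothetical: 8 GT/s effective data rate on a 256-bit memory bus
print(mem_bandwidth_gbps(8.0, 256))  # -> 256.0 GB/s
```

A wider bus or a faster effective data rate raises the ceiling on how quickly cores can be fed, which is why enterprise cards emphasize both.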
Considerations beyond GPU specs
The way GPUs are installed in a server also affects performance. PCIe interfaces, where the GPU is an expansion card, offer broad compatibility with any server that has a PCIe slot, but they require the physical space and power to support a large card and are limited to a peak bandwidth of about 16 GBps. A dedicated interconnect fabric such as Nvidia NVLink can pass bidirectional data at up to 300 GBps between GPUs or between GPUs and CPUs.
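The practical impact of that bandwidth gap shows up in data transfer times. This sketch compares moving a hypothetical working set over the two link speeds quoted above, ignoring protocol overhead:

```python
# Time to move a data set at a given link bandwidth, ignoring protocol overhead.
# 16 GBps approximates a PCIe 3.0 x16 slot; 300 GBps is the NVLink aggregate figure above.

def transfer_seconds(gigabytes: float, link_gbps: float) -> float:
    return gigabytes / link_gbps

data_gb = 48  # hypothetical working set size

print(round(transfer_seconds(data_gb, 16), 2))   # -> 3.0 seconds over PCIe-class bandwidth
print(round(transfer_seconds(data_gb, 300), 2))  # -> 0.16 seconds over NVLink-class bandwidth
```

For workloads that shuttle data between GPUs constantly, such as multi-GPU training, that order-of-magnitude difference in transfer time can dominate overall throughput.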
Also, pay attention to GPU OS support. Nvidia does not offer GeForce drivers for Windows Server versions, which means enterprise GPU products, such as Nvidia Tesla and Quadro GPUs, must be used with Windows Server instead. Linux drivers are available for all Nvidia GPUs.
Be sure to assess application compatibility. It may be possible to run an application optimized for one GPU on other GPUs, but it might not achieve the best possible performance, and some applications may not run at all if they aren't coded for the GPU hardware in use.