Enterprise-grade data center GPUs help organizations harness parallel processing capabilities through hardware upgrades. These upgrades accelerate workflows and graphics-intensive applications, and admins are still finding new ways to use GPUs.
Engineers originally designed GPU chips to offload graphics processing from the CPU. GPUs supported graphics applications and found a niche in the gaming industry. Once the technology was established, organizations began using GPUs to support virtualized infrastructure and power users.
In 2009, organizations started to regularly use GPUs for high-performance computing (HPC) applications and AI. Many organizations use GPU-based servers for AI training, but HPC deployments are more common at universities, research labs and companies with specific compute needs. The more traditional HPC use cases are in the scientific sectors, but the technology is also useful for industries such as finance and healthcare.
To meet the requirements of applications such as HPC, virtualization and AI, organizations must have the proper framework to support a large number of GPU-based servers. Effective GPU adoption is challenging and requires technical sophistication; the majority of adoption occurs within enterprises -- not SMBs -- according to Linley Gwennap, president and principal analyst of The Linley Group.
"Both Nvidia and AMD [Advanced Micro Devices] produce high-quality, enterprise-grade cards that are designed for data center processing. Not only are these cards suitable for running in actively cooled servers, they also have error-correcting code memory, which can fix bad data before it ruins data sets. This feature is a crucial requirement for running any sort of data processing and analytics process," Ami Gal, CEO of SQream, said.
Considering GPUs in the data center
Organizations should install a data center GPU architecture that is powerful enough to support multiple users and provide enough bandwidth for everyone on the network.
"It's like, if I have a three-bedroom apartment, I don't want 15 people sharing that," explained Sarosh Irani, director of product management, GPU server group at Supermicro. If too many people must use GPU resources, then they strain the hardware and defeat the purpose of using high-bandwidth processing components.
Providing good performance requires admins to estimate not only the number of users per GPU, but also the requirements for each user; one workflow might need more memory or storage than another.
Optimized GPU usage and management enable admins to offload the more intensive graphics processing workloads and free up the CPU for more consistent performance with low-bandwidth applications.
One of the main challenges that organizations face with GPUs is integrating new devices on existing infrastructure. Irani compared it to the number of gas versus electric vehicles on the road; there are networks in place for electrical charging stations, but they are significantly outnumbered by gas stations. This doesn't make it impossible for electric cars to charge and run, but it's an adoption barrier if there's a smaller network to help electric vehicles stay on the road.
"We cannot turn infrastructure on a dime," Irani said. Current data centers mostly support CPU power structures, so the addition of GPUs changes power, heating and cooling requirements. Retrofitting power delivery in a legacy data center is another consideration, though cost and time are bigger factors than the difficulty of the upgrade itself.
The software side
Software is another major component of data center GPU adoption.
"You can have the most powerful hardware in the world, but if you don't have software [to] smartly use those cores, then they'll just sit idly. You definitely need software to go with the hardware," Irani said.
The software needs code that directs calculations to the GPU rather than relying on the CPU alone for processing. Nvidia's CUDA is seen as the industry standard, Irani noted.
The parallel computing platform and programming model makes it easier for programmers to offload work to the GPU through keyword-based language extensions. Though data center admins don't usually write this code directly, understanding how it works helps with management and workload balancing.
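To illustrate what those keyword extensions look like, here is a minimal, hypothetical vector-add sketch (not from the article): the `__global__` qualifier and the `<<<...>>>` launch syntax are CUDA's additions to C++ that route the computation to the GPU instead of the CPU.

```cuda
#include <cstdio>

// __global__ is a CUDA keyword extension: it marks a function that
// runs on the GPU but is launched from host (CPU) code.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    // Each GPU thread computes one element in parallel.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // cudaMallocManaged allocates memory visible to both CPU and GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // The <<<blocks, threads>>> triple-angle-bracket syntax is another
    // CUDA language extension: it launches the kernel on the GPU.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Without these extensions, the same loop would run serially on the CPU; the keywords are what tell the toolchain which code belongs on the GPU.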
Software development is a challenge that startups face. They have the hardware, but they need to make sure there's a software abstraction layer available, Irani added. The lack of staff members with the skills to create a software layer complicates organizations' ability to adopt data center GPUs.
A look at the market
This past year saw steady GPU adoption, despite the hardware's high cost and integration considerations.
Nvidia reported $1.9 billion in data center revenue for 2017, a figure that grew again in 2018 and is expected to keep growing in 2019, according to Gwennap.
He added that Nvidia's revenue grew approximately 55% in 2018, but that growth is expected to slow to about 10% in 2019. The slowdown appears related to a cyclical reduction in cloud server purchases, as well as price declines driven by increased competition, Gwennap explained. AMD's projected numbers are smaller because Nvidia holds so much of the market.
At the beginning of 2018, Nvidia updated its licensing agreement to prohibit data center-level deployments for GeForce and Titan software drivers. According to a statement from Nvidia, the software is not licensed for data center deployment, with the exception of blockchain use cases.
Gwennap stated that few data centers were using consumer GPUs because of performance and reliability reasons, so the license change had little effect on the market.
This means that if smaller organizations decide they want to run higher-grade GPUs, they will need to evaluate options beyond the GeForce and Titan product lines.
As GPU-based computing grows within organizations, some innovative applications include self-driving cars, AI and cryptocurrency.
"AI is a very hot field, and there's a lot of investment going on right now, not just from large computing companies, but also on the [venture capital] front," Irani said.