Nvidia GTC 24: Are you ready for the future of AI?

The 2024 Nvidia GTC conference focused on the company's new Blackwell GPU Platform. With advancements in AI taking off, companies need to decide how and when they will join the world of AI.

The 2024 Nvidia GTC conference covered topics surrounding the era of AI, which led to the following overall question: Is your organization's cloud and IT infrastructure ready for the future of AI?

During the keynote, Nvidia CEO Jensen Huang spoke for two hours to an audience at the SAP Center in San Jose, Calif. The star of his presentation was the new Blackwell GPU Platform, which is designed to run real-time generative AI on trillion-parameter large language models. Each Blackwell chip offers 208 billion transistors, and the latest iteration of NVLink offers 1.8 TBps of bidirectional throughput.

This recent announcement fuels the growing excitement about the potential of AI. According to TechTarget's Enterprise Strategy Group research, 54% of organizations have or expect to have generative AI in production within the next year.

Even with all this excitement, however, the scale of the technology presented at Nvidia GTC should give pause to any enterprise decision-maker. Before making any decisions, ask yourself: Is there a need for that level of technology? Can the company afford it? How do I rightsize my AI infrastructure investments for my organization and use case?

Although GPU technology can sometimes be in short supply, there's no shortage of available infrastructure options. The three major public cloud providers -- AWS, Google Cloud Platform and Microsoft Azure -- announced plans to leverage the new Blackwell technology during the week of Nvidia GTC.

Those public cloud services offer organizations the option to harness the latest GPU technology without having to procure and deploy infrastructure on premises. While significant cloud adoption is expected, AI and generative AI workloads are helping to fuel a bit of a renaissance for on-premises infrastructure.

A full 78% of organizations said they prefer to keep their high-value, proprietary data in their own data centers, according to Enterprise Strategy Group research. The success of AI initiatives is driven by data, and organizations want to deploy AI workloads closer to where the data resides to reduce cost and accelerate time to value.

As a result, infrastructure providers are actively working to accelerate time to value for AI initiatives by providing integrated and validated infrastructure offerings that combine their technology with Nvidia's.

On the show floor, I saw products from Dell Technologies, DataDirect Networks, Hitachi Vantara, Hammerspace, Liqid, Pure Storage, Vast Data and Weka. Each is designed to simplify the deployment and integration of Nvidia technology while accelerating time to value for AI initiatives. Despite similarities in purpose, these products address different areas of the AI data pipeline, from simplifying data preparation to accelerating training and supporting inference activities. As a result, these options vary in scale, performance and cost.

The diverse infrastructure options now available for AI workloads further reinforce the need to identify high-priority use cases, along with the data sets suited for AI, before going down the infrastructure investment path.

It is imperative to identify a use case that can deliver a quick win for your organization -- that is, one that can validate the investment in AI. According to Enterprise Strategy Group research, "Navigating the Evolving AI Infrastructure Landscape," 72% of organizations said they see value from their AI projects in just three months or less.

Ideally, you should identify a use case that is self-contained and where the cost of hallucinations would be minimal. While techniques such as retrieval-augmented generation can reduce the likelihood of hallucinations with existing models, prior bad experiences can hinder enthusiasm among internal users for AI projects.

Use case identification is critical because infrastructure demands can vary greatly based on the scale of the data used, the number of parameters used in training the models, and whether you plan to develop your own models or augment existing ones with your own data. The use of retrieval-augmented generation with off-the-shelf models can also make the infrastructure investment much lighter than one might expect.
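To illustrate why retrieval-augmented generation with an off-the-shelf model demands so much less infrastructure than training one, the pattern reduces to two lightweight steps: retrieve relevant passages from your own data, then prepend them as context to the prompt sent to an existing model. The Python sketch below is purely illustrative; the naive keyword-overlap retriever and the `call_model` stub are assumptions for demonstration, not any vendor's API.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern:
# ground an existing, off-the-shelf model in your own data at prompt time,
# with no model training required. The keyword-overlap scorer and the
# call_model stub are illustrative placeholders, not a real vendor API.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus passages by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Prepend retrieved passages so the model answers from your data."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def call_model(prompt: str) -> str:
    """Stub standing in for a call to an existing hosted or local model."""
    return "(model response grounded in the supplied context)"

corpus = [
    "Blackwell GPUs target trillion-parameter generative AI workloads.",
    "Our HR policy allows remote work three days per week.",
    "RAG grounds model answers in retrieved enterprise documents.",
]
query = "What does RAG do?"
prompt = build_prompt(query, retrieve(query, corpus))
answer = call_model(prompt)
```

In a production setup the keyword scorer would typically be replaced by embedding-based search over a vector store, but the infrastructure footprint remains retrieval plus inference, not training.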

When it comes to establishing use cases, organizations can tap a growing ecosystem of service partners to identify the right strategy. In addition, infrastructure providers such as Dell Technologies are augmenting their infrastructure options with a complementary portfolio of advisory services to help organizations identify and define use cases, as well as prepare their data for use in AI initiatives.

AI and generative AI are poised to change the entire business landscape. Given the power of the latest Nvidia technology, it's easy to become overwhelmed. Don't panic. AI initiatives can often start with reasonable infrastructure investments. What matters is to identify the right use case and data sets to start, and to engage partners early in the process.

Scott Sinclair is Practice Director with TechTarget's Enterprise Strategy Group, covering the storage industry.

Enterprise Strategy Group is a division of TechTarget. Its analysts have business relationships with technology vendors.
