What Nvidia's $78B quarter tells you about enterprise AI

Nvidia's latest earnings reveal more than impressive revenue figures. They highlight the accelerating adoption of enterprise AI and the growing pressure on infrastructure capacity.

Five companies reported earnings on the same night. Nvidia got most of the attention.

I get it: $68 billion in quarterly revenue and $78 billion in forward guidance are hard numbers to ignore. But I've been watching the coverage roll in, and most of it stops at the headline: Nvidia beat estimates again, and the stock moved.

That's not the story, though. The story is what Nvidia's numbers tell you about the state of enterprise AI adoption, and where the pressure points are building.

The numbers behind the AI bubble debate

Nvidia's data center segment accounted for $62.3 billion in the second quarter of fiscal year 2026, making up 91% of total revenue. That number alone tells you where real enterprise spending is going. When a single company generates that kind of revenue almost entirely from AI infrastructure, the bubble argument starts to require a lot of creative accounting to hold together.

Nvidia's growth is a direct proxy for how much AI compute is being consumed through platforms such as Snowflake, Salesforce, Databricks and the broader cloud ecosystem. When Nvidia guides to $78 billion in revenue for the next quarter, it's because downstream demand for AI workloads is accelerating.

The industry got confirmation of that acceleration on the same earnings night as Nvidia's. Snowflake reported 9,100 accounts now running AI workloads, with Snowflake Intelligence deployments nearly doubling in a single quarter. Salesforce's Agentforce hit $800 million in annual recurring revenue and processed 11.14 trillion tokens in Q4. Numbers like those in a single quarter tell you this is production-grade adoption. All of it runs on compute infrastructure with Nvidia at the center, even if the supply chain extends well beyond Nvidia.

Why inference costs matter more than Nvidia's revenue

The inference economy is where enterprise AI lives now -- every Cortex query inside Snowflake, every Agentforce agent handling customer interactions, and every model serving predictions inside a data pipeline. That volume is growing fast because the number of enterprises deploying AI features is expanding rapidly, while the number of companies training frontier models remains small.

Nvidia knows where the growth is. Its Rubin architecture claims a 10x reduction in inference token costs compared to Blackwell, which is really a price signal for how fast AI features become economically viable inside the platforms that enterprise data leaders use every day. If Rubin delivers on that cost reduction, every enterprise data platform gets more room to embed AI into production workflows. If it doesn't, expect a lot of tough conversations about which use cases justify the spend.
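To make the stakes of that claim concrete, here is a minimal back-of-the-envelope sketch of what a 10x reduction in per-token serving costs does to an AI feature's monthly bill. All dollar figures and token volumes below are hypothetical assumptions for illustration, not Nvidia or vendor pricing:

```python
# Illustrative only: hypothetical per-token serving costs showing how a
# claimed 10x inference cost reduction changes AI feature economics.
# The $2.00-per-million-tokens baseline is an assumption, not real pricing.

BASELINE_COST_PER_M_TOKENS = 2.00                      # assumed current cost
REDUCED_COST_PER_M_TOKENS = BASELINE_COST_PER_M_TOKENS / 10  # the claimed 10x

def monthly_inference_cost(tokens_per_month: float, cost_per_m_tokens: float) -> float:
    """Return the monthly serving cost in dollars for a given token volume."""
    return tokens_per_month / 1_000_000 * cost_per_m_tokens

# A hypothetical enterprise feature serving 5 billion tokens a month:
volume = 5_000_000_000
print(monthly_inference_cost(volume, BASELINE_COST_PER_M_TOKENS))  # 10000.0
print(monthly_inference_cost(volume, REDUCED_COST_PER_M_TOKENS))   # 1000.0
```

The point of the arithmetic: a feature that costs $10,000 a month to serve today would cost $1,000 under the claimed reduction, which is often the difference between a use case that survives budget review and one that doesn't.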

$97B in free cash flow is the number that matters most

Most people fixate on Nvidia's revenue. I think the free cash flow number is more revealing.

Nvidia generated $97 billion in free cash flow in FY 2026, and it's using that cash flow to embed itself deeper in the AI ecosystem, well beyond selling GPUs. Consider its $2 billion CoreWeave investment, its sovereign AI deals and its strategy of funding smaller cloud providers to reduce dependency on AWS, Azure and Google Cloud. Nvidia is actively building the infrastructure layer that every platform vendor I cover depends on.

For companies like Snowflake, Salesforce and Databricks, this creates an interesting dynamic. The compute capacity their customers rely on is increasingly financed and influenced by a single company. That's worth paying attention to, especially as AI workload volumes scale and the negotiating leverage around compute pricing shifts.

Nobody's talking about the supply side

Everyone wants to talk about demand, and the demand numbers are genuinely impressive. But I think it's important not to get lost in that story, because the supply side is flashing warning signs.

Nvidia's $78 billion guidance assumes that supply can scale to meet what appears to be bottomless demand for AI compute. Gross margins held at 75% through the Blackwell production ramp, indicating pricing power is fully intact. That's great for Nvidia's shareholders, but it also means enterprise compute costs aren't declining as quickly as platform vendors need them to.

Component availability constraints and rising input costs are already showing up in earnings calls across the industry. The demand curve has received all the attention, while the supply curve has seen almost none. I think that's a mistake. The pace of enterprise AI adoption over the next 12 to 18 months will be determined as much by compute economics as it will by the quality of the models and platforms themselves.

What this means for data and AI leaders

If you're running an enterprise data or AI strategy right now and you're not already factoring Nvidia's earnings into your planning, you're behind.

The adoption curve is real. The AI bubble narrative keeps getting contradicted by actual revenue numbers across the entire stack, from silicon to software. But the capacity to support that level of AI workload growth is getting tighter, and the cost of running AI in production isn't dropping as fast as most enterprise budgets are counting on.

The organizations that pull ahead will be those that treat efficiency as a core part of their AI strategy from the start. Data quality, workload optimization and smart platform choices matter a lot more when compute resources are constrained and expensive. Early AI deployments could absorb premium pricing; running AI across an entire enterprise at production scale is a different cost conversation entirely.

AI strategy and resource planning need to be the same conversation. Right now, though, in most organizations I talk to, they're still happening in separate rooms.

What I'm watching next

Inference cost curves are the single most important variable for enterprise AI adoption. If Rubin delivers on the 10x claim, we'll see a meaningful acceleration in AI feature deployment across data platforms. If it doesn't, expect harder prioritization conversations inside enterprises about which AI use cases are worth running.

Watch the supply chain signals. Nvidia's ability to deliver on $78 billion depends on the same component ecosystem that other vendors are already flagging as constrained. Any disruption there ripples through the entire AI platform economy.

And stop treating Nvidia earnings as a chip story. It's the clearest barometer we have for where enterprise AI is headed. The numbers tell you about inference economics, platform adoption and infrastructure capacity all at once. That's the lens worth paying attention to.

Mike Leone is practice director for data management, analytics and AI at Omdia.

Omdia is a division of Informa TechTarget. Its analysts have business relationships with technology vendors.
