SUSECON 2026: The push to operationalize AI
At SUSECON 2026, SUSE outlined a sovereignty-driven strategy for AI and infrastructure, pairing agentic operations with partnerships spanning NVIDIA, Switch, Cloudbase and Losant.
"Don't just architect for survival, architect for innovation through choice," said Dr. Thomas Di Giacomo, CTO and CPO at SUSE during his keynote.
This is an excellent summary of the strategy SUSE brought to Prague this year.
Innovation here has a pragmatic definition: give staff room to tinker, then make sure you can actually move the results to production if they succeed. The first half is easy. The second half is where enterprise IT has struggled for decades. It is exciting to see SUSE take on this challenge, as the payoff of an entire organization being able to experiment with readily scalable AI models is hard to overstate.
To get there, SUSE is leading with the foundational concept of sovereignty. More specifically, this means owning and controlling data, infrastructure and AI models independently of where they run or what technology stack they are running on.
"Resilience needs choice," said Frank Feldmann, SUSE's Chief Strategy Officer, repeatedly throughout the week. Based on Feldmann’s definition, choice requires two core components:
- Exit velocity. The ability to get out of a vendor contract quickly once it stops being economically viable.
- Pivot ability. The ability to switch technology platforms without interrupting the business.
These are not abstract principles. They are direct responses to current pressures on the enterprise: sudden licensing changes, growing dependence on a small number of hyperscalers, conflicting regulatory requirements across regions and the risk of getting locked into a certain AI stack.
SUSE's framing of resilience pushes past the traditional reactive understanding. Di Giacomo described infrastructure as the synapse between stimulus and response, arguing that what matters is the system's capacity for 'digital neuroplasticity' -- the ability to reroute, create new pathways and choose its response to disruptions and opportunities rather than merely survive them. This reframes the operator's job from playing defense on uptime to creating capacity for the business. Chris Scott of PepsiCo put it plainly on stage: eliminating the archeological layer of technical debt gave his teams back their most valuable resource -- capacity to partner with the business, experiment and move on opportunities that legacy sprawl had been burning. Fewer incidents is half of it; provisioning platforms predictably and at speed is the other half. Both come out of the same shift to a proactive posture.
If resilience is the outcome, agentic operations is the mechanism SUSE is betting on in 2026 to produce it. The stage framing at SUSECON was that the paradigm is shifting from software-defined -- humans encoding the rules that machines execute -- to agentic, where models choose the actions that meet the intended outcome, and the operator moves from authoring policy to auditing autonomous decisions. SUSE is making that concrete by shipping MCP server interfaces across its infrastructure portfolio -- Multi-Linux Manager and Rancher at the core, with SLES, Rancher Prime's Liz assistant and SUSE's observability and MCP-enabled security tooling alongside. The signal worth extracting is posture, not product count: SUSE is treating MCP as the default way its infrastructure talks to agents rather than as a one-off integration.
The SAP HANA patching scenario SUSE demonstrated in the opening keynote shows why agentic operations matters. Today that workflow is a software-defined nightmare: specialist-maintained playbooks coordinated across change windows, DR replicas, and compliance signoffs, reliably late by a quarter or two. In the MCP-wired version, an ops agent queries Trento for topology and applicable SAP notes, Multi-Linux Manager for patch baselines and drift, checks a business-intent constraint ("Friday 02:00–04:00 UTC only, no simultaneous HANA replica patching"), proposes a rolling plan and stops at an approval gate where Liz, Rancher Prime's AI assistant, presents its reasoning. The interesting part isn't speed -- it's that every step is a named MCP tool call against a named product, which makes the decision process auditable. Auditability is little comfort if the substrate can't survive a rollback, a partial apply or a human overruling the plan at 03:00. This is why the sovereignty-choice-resilience stack matters -- it's what makes the agent trustworthy enough to let into production.
My read: This is coherent, well-differentiated positioning. In a market where every infrastructure vendor is chasing the same agent story, leading with sovereignty is a real differentiator. The open question is whether SUSE can convert positioning into procurement preference against competitors with larger distribution and ecosystem gravity. This is why the SUSECON 2026 announcements lean heavily on partnerships.
SUSE AI Factory with NVIDIA
SUSE AI Factory with NVIDIA, previewed at SUSECON 2026 and slated for GA later this year, bundles SUSE AI and NVIDIA AI Enterprise into a single software factory for building and running agentic AI applications. NVIDIA brings the model and runtime layer -- NIM (prepackaged model endpoints that run as microservices), Nemotron (NVIDIA's open LLM family), NeMo (the framework for building and fine-tuning agents), Run:ai (GPU scheduling), and two agent runtimes, OpenShell and NemoClaw, that execute what those agents decide to do. SUSE brings the delivery and platform layers -- pre-validated blueprints, a Rancher-based control plane, GitOps workflows for operating at scale and the SLES 16 plus Rancher Prime plus K3s stack underneath. The framing is sandbox-to-production -- build locally, ship to production through one interface, run anywhere from air-gapped edge to core datacenter to public cloud.
Two aspects are worth isolating. First, the platform is hardware-neutral -- it runs on any NVIDIA-certified system including Dell, HPE, Lenovo, Cisco, Supermicro or Fujitsu, across GPU tiers from RTX Pro 6000 and 4500 workstations to Blackwell rack systems to the Vera Rubin generation landing this summer. That puts SUSE on a different axis from the OEM-anchored AI factories those same vendors ship under their own brands. Second, SUSE's K3s is embedded inside NVIDIA's OpenShell and NemoClaw runtimes -- a product-level integration, not just a joint go-to-market handshake. The stack above the hardware stays configurable too -- swap Nemotron variants, start from the RAG or AI-Q digital assistant blueprint, or build the pipeline from scratch.
Every major enterprise vendor is shipping an AI factory in 2026. SUSE's version differs in what it refuses to close off: a single vendor owns support across the full stack, but hardware choice and software composition stay with the buyer. Add the sovereignty framing -- data and models stay inside customer infrastructure, which is what mandates like the EU AI Act now require -- and the pitch is coherent. OEM-anchored factories win on velocity by closing the substrate. SUSE's bet is that buyers want the same velocity without that trade-off.
Switch partnership
The partnership that deserves equal airtime with NVIDIA is Switch. Switch is the Las Vegas–based AI infrastructure operator whose campuses host workloads for neoclouds -- the tier of GPU-specialized cloud providers, like CoreWeave, that rent AI capacity to the market and compete with the hyperscalers. In the SUSECON day-two keynote, Corinne Winfield, Switch's VP of Data, Operations and Business Analysis, walked through how Switch uses SUSE AI Factory alongside NVIDIA's Omniverse DSX blueprint -- NVIDIA's reference design for AI-factory datacenters, including physics-based digital twins of the facility itself -- to run Switch's "Living Data Center" control platform and the physics-validated digital twin of its newest EVO AI-factory campus designs.
The integration reframes what SUSE AI Factory is. It is no longer positioned purely as a stack for an enterprise IT shop deploying a RAG chatbot on-prem. In the Switch case, SUSE is embedded inside the operating fabric of the AI factory itself. Winfield explicitly called it a "trusted software supply chain" and "platform governance and shared infrastructure," meaning SLES 16 and Rancher Prime sitting beneath NVIDIA's Blackwell rack systems, NIM microservices, Nemotron models, and the Omniverse blueprint. That is a materially different positioning from the NVIDIA-only announcement. The NVIDIA story speaks to enterprises consuming AI; the Switch story speaks to operators producing it.
The open question for SUSE is whether Switch becomes a template for the neocloud tier or remains a single showcase. If it templates, SUSE moves up a layer, from distribution inside enterprise datacenters to substrate inside the facilities that rent GPU capacity to everyone else.
Cloudbase VM migrations
SUSE has partnered with Cloudbase Solutions to embed the Coriolis migration tool directly into SUSE Virtualization. This addresses what has arguably been the single biggest obstacle to VMware alternatives gaining traction since Broadcom reset vSphere pricing: the operational risk of actually moving the workloads, not the decision to move them. Coriolis itself is not a new piece of engineering. Cloudbase has been developing it for years, with roots in OpenStack migrations. The "warm" migration approach of replicating data while the source VM keeps running and then cutting over is well understood.
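The warm-migration pattern itself is simple to illustrate. The sketch below is purely illustrative and assumes a block-level dirty log; it is not the Coriolis API. The key property it shows is why downtime stays short: each replication pass only re-copies the blocks the still-running VM dirtied since the last pass, so the final delta copied during cutover shrinks to near nothing.

```python
def warm_migrate(source_disk: dict[int, bytes],
                 dirty_log: list[set[int]]) -> dict[int, bytes]:
    """Illustrative warm migration.

    source_disk: block number -> block contents at cutover time.
    dirty_log: for each replication pass, the set of blocks the
    running source VM wrote during that pass.
    """
    target: dict[int, bytes] = {}
    # Pass 0: bulk replication of every block while the VM stays up.
    pending = set(source_disk)
    for dirtied in dirty_log:
        for block in pending:
            target[block] = source_disk[block]
        # The next pass only needs to re-copy what just got dirtied,
        # so each pass is smaller than the last.
        pending = dirtied
    # Cutover: the source is quiesced, the final (small) delta is
    # copied, and the target VM is started from a consistent disk.
    for block in pending:
        target[block] = source_disk[block]
    return target
```

The economics follow directly from the loop: the outage window is the time to copy the last delta, not the whole disk.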
The more interesting move is productizing it as a first-class path into SUSE Virtualization rather than leaving it as a services engagement, which lowers the perceived execution risk for buyers who have already decided to exit vSphere but have stalled on the how. The cloud repatriation angle is worth separating from the VMware exit narrative. Repatriation demand is real and tracks with tighter FinOps scrutiny of hyperscaler bills, but most repatriation conversations still hit a wall on application dependencies and data gravity rather than on hypervisor-to-hypervisor mechanics. The Coriolis integration helps but it is not the binding constraint.
The SAP HANA and SAP NetWeaver support is the more consequential piece. SAP's certified hypervisor list is the gating item for the mission-critical workloads SUSE most wants to retain, and KVM certification under SUSE Linux Enterprise for SAP Applications is where the substantive competitive story against Red Hat OpenShift Virtualization and Nutanix AHV actually plays out. What the announcement does not address is management-plane parity. Coriolis moves the VM, but day-two operations, storage integration, networking overlays, DR orchestration and lifecycle tooling are where vSphere shops feel the drop in maturity, and migration is a necessary but not sufficient condition for that transition to stick.
Industrial edge
The SUSECON announcement worth isolating is SUSE Industrial Edge, the new product built on Losant -- the industrial IoT platform company SUSE acquired in February. Edge in Linux-vendor keynotes has long been a bucket for anything outside the datacenter. Here it gets something sharper: an operational-awareness layer meant to sit on the plant floor, speak industrial protocols, run drag-and-drop workflows and keep live and historical data close enough to act on.
The through-line that makes this land is the one that actually matters: the OT/IT divide is dissolving. OT is the operational technology that runs the physical plant -- programmable logic controllers, sensors, control systems -- and has historically lived outside IT's software discipline. Facilities that used to run on ten services per building now run on more than a hundred, and the people responsible for them can no longer keep their engineering disciplines on opposite sides of a fence. A Linux vendor stepping onto that fence with a containerized, open-source IIoT platform -- and taking a steering seat at Margo, the industrial edge interoperability alliance -- is a meaningful shift in how this space is being argued.
It is also a late entry into a market the big industrial OEMs and the hyperscalers have spent a decade shaping. The graveyard of proprietary IIoT platforms that promised convergence and quietly narrowed their scope is the reason the actual wager here is the open-source commitment, not the acquisition itself.
Conclusion
The four partnerships do the same job in four different domains. NVIDIA extends sovereignty into AI compute. Switch extends it into the operator tier that rents AI capacity. Cloudbase extends it into the VMware exit path. Losant extends it into the plant floor. Each keeps the buyer's substrate choice open while SUSE carries the support contract -- the pattern is substrate, not ceiling.
That is the coherent read of SUSECON 2026: SUSE is not trying to own the application layer, the model layer or the hardware. It is trying to be the neutral ground underneath whichever stack the buyer actually chooses.
The strategy is well differentiated. The execution question is whether sovereignty-first positioning can overcome the distribution and ecosystem gravity of vendors with larger installed bases. The proof points to watch over the next twelve to eighteen months are as follows:
- The NVIDIA AI Factory GA.
- A second neocloud signing after Switch.
- The first brand-name SAP HANA migration through Coriolis.
- The first Margo-governed industrial deployment in production.
SUSECON 2026 made the positioning case. The procurement case gets made in the year that follows.
Torsten Volk is principal analyst at Omdia covering application modernization, cloud-native applications, DevOps, hybrid cloud and observability.
Omdia is a division of Informa TechTarget. Its analysts have business relationships with technology vendors.