This video is part of the appearance, “Mirantis presents at AI Infrastructure Field Day 3”. It was recorded as part of AI Infrastructure Field Day 3 from 8:00 to 10:00 on September 11, 2025.
Watch on YouTube
Watch on Vimeo
Kevin Kamel, VP of Product Management at Mirantis, opened with a wide-ranging overview of the company’s heritage, its evolution, and its current mission to redefine enterprise AI infrastructure. Mirantis began as a private cloud pioneer, gained deep expertise operating some of the world’s largest clouds, and later played a formative role in advancing cloud-native technologies, including early stewardship of Kubernetes and acquisitions such as Docker Enterprise and Lens. Today, Mirantis leverages this pedigree to address the pressing complexity of building and operating GPU-accelerated AI infrastructure at scale.
Kamel highlighted three key challenges driving market demand: the difficulty of transforming single-tenant GPU hardware into multi-tenant services; the talent drain that leaves enterprises and cloud providers without the expertise to operationalize these environments; and the rising expectation among customers for hyperscaler-style experiences, including self-service portals, integrated observability, and efficient resource monetization. Against this backdrop, Mirantis positions its Mirantis k0rdent AI platform as a turnkey solution that enables public clouds, private clouds, and sovereign “NeoClouds” to operationalize and monetize GPU resources quickly.
What sets Mirantis apart, Kamel emphasized, is its composable architecture. Rather than locking customers into vertically integrated stacks, Mirantis k0rdent AI provides configurable building blocks and a service catalog that allows operators to design bespoke offerings—such as proprietary training or inference services—while maintaining efficiency through features like configuration reconciliation and validated GPU support. Customers can launch services internally, expose them to external markets, or blend both models using hybrid deployment approaches that include a unique public-cloud-hosted control plane.
The presentation also introduced Nebul, a sovereign AI cloud provider in the Netherlands, as a case study. Nebul initially struggled with the technical sprawl of standing up GPU services—managing thousands of Kubernetes clusters, enforcing strict multi-tenancy, and avoiding stranded GPU resources. By adopting Mirantis k0rdent AI, Nebul streamlined cluster lifecycle management, enforced tenant isolation, and gained automation capabilities that allowed its small technical team to focus on business growth rather than infrastructure firefighting.
Finally, Kamel discussed flexible pricing models (OPEX consumption-based and CAPEX-aligned licensing), Mirantis’ ability to support highly regulated environments with FedRAMP and air-gapped deployments, and its in-house professional services team that can deliver managed services or bridge skills gaps. He drew parallels to the early OpenStack era, where enterprises faced similar knowledge gaps and relied on Mirantis to deliver production-grade private clouds. That same depth of expertise, combined with long-standing open source and ecosystem relationships, underpins Mirantis’ differentiation in today’s AI infrastructure market.
Personnel: Kevin Kamel, Shaun O’Meara