Mirantis presents at AI Infrastructure Field Day 3



AI Infrastructure Field Day 3

Kevin Kamel presented for Mirantis at AI Infrastructure Field Day 3

This presentation took place on September 11, 2025, from 8:00 to 10:00.

Presenters: Anjelica Ambrosio, Kevin Kamel, Shaun O’Meara


Mirantis Company Overview


Watch on YouTube
Watch on Vimeo

Kevin Kamel, VP of Product Management at Mirantis, opened with a wide-ranging overview of the company’s heritage, its evolution, and its current mission to redefine enterprise AI infrastructure. Mirantis began as a private cloud pioneer, gained deep expertise operating some of the world’s largest clouds, and later played a formative role in advancing cloud-native technologies, including early stewardship of Kubernetes and acquisitions such as Docker Enterprise and Lens. Today, Mirantis leverages this pedigree to address the pressing complexity of building and operating GPU-accelerated AI infrastructure at scale.
Kamel highlighted three key challenges driving market demand: the difficulty of transforming single-tenant GPU hardware into multi-tenant services; the talent drain that leaves enterprises and cloud providers without the expertise to operationalize these environments; and the rising expectation among customers for hyperscaler-style experiences, including self-service portals, integrated observability, and efficient resource monetization. Against this backdrop, Mirantis positions its Mirantis k0rdent AI platform as a turnkey solution that enables public clouds, private clouds, and sovereign “NeoClouds” to operationalize and monetize GPU resources quickly.

What sets Mirantis apart, Kamel emphasized, is its composable architecture. Rather than locking customers into vertically integrated stacks, Mirantis k0rdent AI provides configurable building blocks and a service catalog that allows operators to design bespoke offerings—such as proprietary training or inference services—while maintaining efficiency through features like configuration reconciliation and validated GPU support. Customers can launch services internally, expose them to external markets, or blend both models using hybrid deployment approaches that include a unique public-cloud-hosted control plane.
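For a sense of what this composability can look like in practice, here is a minimal sketch of a catalog-driven cluster request submitted through the Kubernetes API, which k0rdent builds on. The group, kind, and field names are illustrative stand-ins rather than the published k0rdent schema; only the Kubernetes Python client calls are real.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Illustrative manifest: the template-plus-values shape mirrors the talk,
# but the apiVersion, kind, and spec fields are stand-ins, not a real schema.
cluster = {
    "apiVersion": "catalog.example.com/v1",
    "kind": "ClusterDeployment",
    "metadata": {"name": "inference-eu-1", "namespace": "catalog"},
    "spec": {
        "template": "gpu-cluster-aws",                # catalog building block
        "config": {"workers": 4, "gpuType": "h100"},  # operator parameters
        "services": ["observability", "inference"],   # composed add-ons
    },
}

# Submitting the object hands it to a controller that continuously
# reconciles declared and actual state, the reconciliation noted above.
api.create_namespaced_custom_object(
    group="catalog.example.com",
    version="v1",
    namespace="catalog",
    plural="clusterdeployments",
    body=cluster,
)
```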

The section also introduced Nebul, a sovereign AI cloud in the Netherlands, as a case study. Nebul initially struggled with the technical sprawl of standing up GPU services—managing thousands of Kubernetes clusters, enforcing strict multi-tenancy, and avoiding stranded GPU resources. By adopting Mirantis k0rdent AI, Nebul streamlined cluster lifecycle management, enforced tenant isolation, and gained automation capabilities that allowed its small technical team to focus on business growth rather than infrastructure firefighting.

Finally, Kamel discussed flexible pricing models (OPEX consumption-based and CAPEX-aligned licensing), Mirantis’ ability to support highly regulated environments with FedRAMP and air-gapped deployments, and its in-house professional services team that can deliver managed services or bridge skills gaps. He drew parallels to the early OpenStack era, where enterprises faced similar knowledge gaps and relied on Mirantis to deliver production-grade private clouds. That same depth of expertise, combined with long-standing open source and ecosystem relationships, underpins Mirantis’ differentiation in today’s AI infrastructure market.

Personnel: Kevin Kamel, Shaun O’Meara

Mirantis Solution Approach: GPU Cloud in a Box with Shaun O’Meara


Watch on YouTube
Watch on Vimeo

Shaun O’Meara, CTO at Mirantis, presented the company’s approach to simplifying GPU infrastructure with what he described as a “GPU Cloud in a Box.” The concept addresses operational bottlenecks that enterprises and service providers face when deploying GPU environments: fragmented technology stacks, resource scheduling difficulties, and lack of integrated observability. Rather than forcing customers to assemble and maintain a full hyperscaler-style AI platform, Mirantis packages a complete, production-ready system that can be deployed as a single solution and then scaled or customized as requirements evolve.

The design is centered on Mirantis k0rdent AI, a composable platform that converts racks of GPU servers into consumable services. Operators can partition GPU resources into tenant-aware allocations, apply policy-based access, and expose these resources through service catalogs aligned with existing cloud consumption models. Lifecycle automation for Kubernetes clusters, GPU-aware scheduling, and tenant isolation are embedded into the system, reducing the engineering burden that is typically required to make such environments reliable.
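At the Kubernetes layer, a tenant-aware GPU allocation can reduce to familiar primitives. The sketch below, with a hypothetical tenant name and an arbitrary quota value, creates a per-tenant namespace and caps its GPU requests with a stock ResourceQuota; the platform's own partitioning is richer, but this is the underlying mechanism.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

tenant = "tenant-acme"  # hypothetical tenant

# One namespace per tenant gives a policy and isolation boundary.
v1.create_namespace(client.V1Namespace(
    metadata=client.V1ObjectMeta(name=tenant, labels={"tenant": "acme"})
))

# Cap how many GPUs this tenant may request from the shared pool;
# the limit of 8 is an arbitrary example value.
v1.create_namespaced_resource_quota(
    namespace=tenant,
    body=client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="gpu-quota"),
        spec=client.V1ResourceQuotaSpec(hard={"requests.nvidia.com/gpu": "8"}),
    ),
)
```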

A live demonstration was presented by Anjelica Ambrosio, AI Developer Advocate. In the first demo, she walked through the customer experience of using the Product Builder, showing how a user can log into the Mirantis k0rdent AI self-service portal and provision products within minutes by selecting from preconfigured service templates. The demo included creating a new cluster product, setting its parameters, and deploying the product to the marketplace, while real-time observability dashboards displayed GPU utilization, job performance, and service health. The demonstration highlighted how the platform turns what was once a multi-week manual integration process into a repeatable and governed workflow. In the second demo, Ambrosio showed the Product Builder from the operator's perspective, demonstrating how products can be composed from nodes and how their dependencies can be configured in Graph View.

O’Meara explained that the “Cloud in a Box” model is not a closed appliance but a composable building block. It can be deployed in a data center, at an edge location, or within a hybrid model where a public cloud-hosted control plane manages distributed GPU nodes. Customers can adopt the system incrementally, beginning with internal workloads and later extending services to external markets or partners. This flexibility is particularly important for organizations pursuing sovereign cloud strategies, where speed of deployment, transparent governance, and monetization are essential.

The value is both technical and commercial. Technically, operators gain a validated baseline architecture that reduces common failure modes and accelerates time-to-service. Commercially, they can monetize GPU investments by offering consumption-based services that resemble hyperscaler offerings without requiring the same level of capital investment or staffing. O’Meara positioned the solution as a direct response to the core challenge confronting enterprises and service providers: transforming expensive GPU hardware into sustainable and revenue-generating AI infrastructure.

Personnel: Anjelica Ambrosio, Shaun O’Meara

Mirantis IaaS Technology Stack with Shaun O’Meara


Watch on YouTube
Watch on Vimeo

Shaun O’Meara, CTO at Mirantis, described the infrastructure layer that underpins Mirantis k0rdent AI. The IaaS stack is designed to manage bare metal, networking, and storage resources in a way that removes friction from GPU operations. It provides operators with a tested foundation where GPU servers can be rapidly added, tracked, and made available for higher-level orchestration.

O’Meara emphasized that Mirantis has long experience operating infrastructure at scale. This history informed a design that automates many of the tasks that traditionally consume engineering time. The stack handles bare metal provisioning, integrates with heterogeneous server and network vendors, and applies governance for tenancy and workload isolation. It includes validated drivers for GPU hardware, which reduces the risk of incompatibility and lowers the time to get workloads running.

Anjelica Ambrosio demonstrated how the stack works in practice. She created a new GPU cluster through the Mirantis k0rdent AI interface, with the system automatically discovering hardware, configuring network overlays, and assigning compute resources. The demo illustrated how administrators can track GPU usage down to the device level, observing both allocation and health data in real time. What would normally involve manual integration of provisioning tools, firmware updates, and network templates was shown as a guided workflow completed in minutes.
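The portal renders this allocation view, but the basic accounting behind it can be approximated with standard API calls. As a rough sketch, assuming kubeconfig access to one of the managed clusters, the following compares each node's advertised GPU count with the GPUs requested by scheduled pods:

```python
from collections import defaultdict
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

GPU = "nvidia.com/gpu"

# GPUs each node advertises to the scheduler.
capacity = {
    node.metadata.name: int(node.status.allocatable.get(GPU, "0"))
    for node in v1.list_node().items
}

# GPUs currently claimed by scheduled, non-terminated pods.
claimed = defaultdict(int)
for pod in v1.list_pod_for_all_namespaces().items:
    if pod.spec.node_name and pod.status.phase in ("Pending", "Running"):
        for container in pod.spec.containers:
            requests = container.resources.requests or {}
            claimed[pod.spec.node_name] += int(requests.get(GPU, "0"))

for name, total in capacity.items():
    print(f"{name}: {claimed[name]}/{total} GPUs allocated")
```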

O’Meara pointed out that the IaaS stack is not intended as a general-purpose cloud platform. It is narrowly focused on preparing infrastructure for GPU workloads and passing those resources upward into the PaaS layer. This focus reduces complexity but also introduces tradeoffs. Operators who need extensive support for legacy virtualization may need to run separate systems in parallel. However, for organizations intent on scaling AI, the IaaS layer provides a clear and efficient baseline.

By combining automation with vendor neutrality, the Mirantis approach reduces the number of unique integration points that operators must maintain. This lets smaller teams manage environments that previously demanded much larger staff. O’Meara concluded that the IaaS layer is what makes the higher levels of Mirantis k0rdent AI possible, giving enterprises a repeatable way to build secure, observable, and tenant-aware GPU foundations.

Personnel: Anjelica Ambrosio, Shaun O’Meara

Mirantis PaaS Technology Stack with Shaun O’Meara


Watch on YouTube
Watch on Vimeo

Shaun O’Meara, CTO at Mirantis, described the platform services layer that sits above the GPU infrastructure and is delivered through Mirantis k0rdent AI. The PaaS stack is organized around composable service templates that let operators expose training, inference, and data services to tenants. Services can be chained, extended, and validated without requiring custom integration work for every new workload.

A central example in this segment was the use of NVIDIA’s Run.ai as the delivery platform for inference workloads. Anjelica Ambrosio demonstrated the workflow. She deployed an inference cluster template, selected GPU node profiles, and then added Run.ai services as part of the cluster composition. From the Mirantis k0rdent AI portal, she navigated into the Run.ai console to show inference jobs running against the GPU pool. The demonstration highlighted how Mirantis integrates Run.ai into its templated deployment model so that all dependencies, such as cert-manager, GPU operators, and Argo, are automatically provisioned. What would normally require a complex chain of manual installations was shown as a single cluster deployment taking about fifteen minutes on AWS, most of which was machine startup time.
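One way to picture the dependency handling: if the template records which services each component needs, a safe install order falls out of a topological sort. The service names below mirror the demo; the exact edges are assumed for illustration.

```python
from graphlib import TopologicalSorter

# Assumed dependency graph for the services named in the demo; the real
# template encodes these relationships, and the edges here are guesses.
needs = {
    "cert-manager": set(),
    "gpu-operator": {"cert-manager"},
    "argo": {"cert-manager"},
    "runai": {"gpu-operator", "argo"},
}

# A template engine can derive a safe install order with a topological
# sort, so one cluster deployment provisions every prerequisite first.
print(list(TopologicalSorter(needs).static_order()))
# -> ['cert-manager', 'gpu-operator', 'argo', 'runai'] (one valid order)
```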

O’Meara explained that the catalog approach lets operators bring in Run.ai alongside other frameworks like Kubeflow or MLflow depending on customer preference. The system labels GPU nodes during cluster creation, and Run.ai validates those labels to ensure that only GPU-backed nodes run GPU workloads while other tasks are placed on CPU nodes. This improves cost efficiency and prevents GPU starvation.
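This label-based placement maps onto standard Kubernetes scheduling primitives. A minimal sketch, using a hypothetical node label and an example container image: the nodeSelector pins the workload to GPU-labeled nodes, and the nvidia.com/gpu limit claims a device so the scheduler can count it.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Hypothetical label; the platform applies GPU labels at cluster creation.
GPU_NODE_LABEL = {"example.com/gpu-node": "true"}

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="llm-inference"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "llm-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "llm-inference"}),
            spec=client.V1PodSpec(
                node_selector=GPU_NODE_LABEL,  # only GPU-labeled nodes qualify
                containers=[client.V1Container(
                    name="server",
                    image="nvcr.io/nvidia/tritonserver:24.05-py3",  # example image
                    # Claiming the extended resource keeps CPU-only work off
                    # the GPU nodes and lets the scheduler count devices.
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"},
                    ),
                )],
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```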

The PaaS stack makes GPU infrastructure usable in business terms. Enterprises can use the catalog internally to accelerate development or publish services externally for customers. Sovereign operators can keep the Run.ai-based services on local GPU hardware in air-gapped form, while hybrid operators can extend them across public and private GPU footprints. By integrating NVIDIA Run.ai directly into Mirantis k0rdent AI, the platform demonstrates how complex AI services can be delivered quickly, with governance and observability intact, and without the fragile manual integration that normally burdens GPU PaaS environments.

Personnel: Anjelica Ambrosio, Shaun O’Meara
