Mirantis Solution Approach: GPU Cloud in a Box with Shaun O’Meara



AI Infrastructure Field Day 3


This video is part of the appearance “Mirantis presents at AI Infrastructure Field Day 3.” It was recorded during AI Infrastructure Field Day 3, from 8:00-10:00 on September 11, 2025.


Watch on YouTube
Watch on Vimeo

Shaun O’Meara, CTO at Mirantis, presented the company’s approach to simplifying GPU infrastructure with what he described as a “GPU Cloud in a Box.” The concept addresses operational bottlenecks that enterprises and service providers face when deploying GPU environments: fragmented technology stacks, resource scheduling difficulties, and lack of integrated observability. Rather than forcing customers to assemble and maintain a full hyperscaler-style AI platform, Mirantis packages a complete, production-ready system that can be deployed as a single solution and then scaled or customized as requirements evolve.

The design is centered on Mirantis k0rdent AI, a composable platform that converts racks of GPU servers into consumable services. Operators can partition GPU resources into tenant-aware allocations, apply policy-based access, and expose these resources through service catalogs aligned with existing cloud consumption models. Lifecycle automation for Kubernetes clusters, GPU-aware scheduling, and tenant isolation are embedded into the system, reducing the engineering burden that is typically required to make such environments reliable.
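To make the tenant-aware allocation idea concrete, here is a minimal Python sketch of quota-based GPU partitioning. It is not k0rdent AI code; the class names and the admission rule are hypothetical and only illustrate how hard per-tenant quotas can enforce isolation on a fixed pool of GPUs.

```python
# Illustrative sketch only: an in-memory model of tenant-aware GPU
# partitioning and quota enforcement. Not Mirantis k0rdent AI code;
# all names (TenantPool, GpuAllocator) are hypothetical.
from dataclasses import dataclass, field


@dataclass
class TenantPool:
    """A tenant's slice of the rack: a hard quota and current usage."""
    name: str
    quota_gpus: int
    allocated_gpus: int = 0

    def can_allocate(self, requested: int) -> bool:
        return self.allocated_gpus + requested <= self.quota_gpus


@dataclass
class GpuAllocator:
    """Partitions a fixed number of physical GPUs across tenant pools."""
    total_gpus: int
    pools: dict[str, TenantPool] = field(default_factory=dict)

    def add_tenant(self, name: str, quota_gpus: int) -> None:
        committed = sum(p.quota_gpus for p in self.pools.values())
        if committed + quota_gpus > self.total_gpus:
            raise ValueError("tenant quotas would exceed physical GPUs")
        self.pools[name] = TenantPool(name, quota_gpus)

    def schedule(self, tenant: str, requested: int) -> bool:
        """Admit a job only if it fits inside the tenant's allocation."""
        pool = self.pools[tenant]
        if not pool.can_allocate(requested):
            return False  # rejected: would breach the tenant isolation boundary
        pool.allocated_gpus += requested
        return True


if __name__ == "__main__":
    rack = GpuAllocator(total_gpus=16)
    rack.add_tenant("research", quota_gpus=10)
    rack.add_tenant("inference", quota_gpus=6)
    print(rack.schedule("research", 8))   # True: fits the 10-GPU quota
    print(rack.schedule("research", 4))   # False: would exceed the quota
```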

A live demonstration was presented by Anjelica Ambrosio, AI Developer Advocate. In the first demo, she walked through the customer experience with the Product Builder, showing how a user can log into the Mirantis k0rdent AI self-service portal and provision products within minutes by selecting from preconfigured service templates. The demo included creating a new cluster product, setting its parameters, and deploying the product to the marketplace. Real-time observability dashboards displayed GPU utilization, job performance, and service health, highlighting how the platform turns what was once a multi-week manual integration effort into a repeatable, governed workflow. In the second demo, Anjelica showed the Product Builder from the operator's perspective, building products from nodes and configuring their dependencies in Graph View.
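The self-service flow described above can be pictured as a short API interaction. The following Python sketch is purely hypothetical: the endpoint path, field names, and template name are invented for illustration and do not represent the actual Mirantis k0rdent AI portal API.

```python
# Hypothetical sketch of the self-service flow shown in the demo: pick a
# preconfigured service template, set parameters, and publish the resulting
# product. URL, endpoints, and fields are placeholders, not the real API.
import json
from urllib import request

PORTAL = "https://portal.example.com/api/v1"  # placeholder URL


def provision_product(token: str, template: str, params: dict) -> dict:
    """Create a cluster product from a template and deploy it to the marketplace."""
    body = json.dumps({"template": template, "parameters": params}).encode()
    req = request.Request(
        f"{PORTAL}/products",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    product = provision_product(
        token="<api-token>",
        template="gpu-cluster-small",          # hypothetical service template
        params={"gpu_count": 4, "k8s_version": "1.30", "publish": True},
    )
    print(product)
```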

O’Meara explained that the “Cloud in a Box” model is not a closed appliance but a composable building block. It can be deployed in a data center, at an edge location, or within a hybrid model where a public cloud-hosted control plane manages distributed GPU nodes. Customers can adopt the system incrementally, beginning with internal workloads and later extending services to external markets or partners. This flexibility is particularly important for organizations pursuing sovereign cloud strategies, where speed of deployment, transparent governance, and monetization are essential.
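As a rough illustration of that hybrid pattern, the sketch below models a central control plane that holds the desired state for GPU sites in a data center and at the edge, and reconciles each site toward it. The names and reconcile logic are assumptions for illustration, not Mirantis internals.

```python
# Illustrative sketch (not Mirantis code) of a hosted control plane managing
# distributed GPU sites: desired state lives centrally, and a reconcile pass
# drives each site (data center, edge, or public cloud) toward it.
from dataclasses import dataclass


@dataclass
class GpuSite:
    name: str
    location: str            # e.g. "dc", "edge", "public-cloud"
    desired_clusters: int
    actual_clusters: int = 0


def reconcile(sites: list[GpuSite]) -> None:
    """Bring each registered site to its desired number of GPU clusters."""
    for site in sites:
        delta = site.desired_clusters - site.actual_clusters
        if delta > 0:
            print(f"{site.name}: provisioning {delta} cluster(s) at {site.location}")
        elif delta < 0:
            print(f"{site.name}: tearing down {-delta} cluster(s)")
        site.actual_clusters = site.desired_clusters


if __name__ == "__main__":
    fleet = [
        GpuSite("core-dc", "dc", desired_clusters=3),
        GpuSite("factory-edge", "edge", desired_clusters=1),
    ]
    reconcile(fleet)   # one control plane drives all sites
```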

The value is both technical and commercial. Technically, operators gain a validated baseline architecture that reduces common failure modes and accelerates time-to-service. Commercially, they can monetize GPU investments by offering consumption-based services that resemble hyperscaler offerings without requiring the same level of capital investment or staffing. O’Meara positioned the solution as a direct response to the core challenge confronting enterprises and service providers: transforming expensive GPU hardware into sustainable and revenue-generating AI infrastructure.
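As a back-of-the-envelope view of the consumption-based model, the snippet below bills hypothetical tenants by GPU-hours at an assumed rate. The rate and usage figures are invented and carry no pricing information from the presentation.

```python
# Assumed figures for illustration only: bill each tenant for GPU-hours used.
GPU_HOUR_RATE = 2.50  # hypothetical price per GPU-hour, in USD

usage_gpu_hours = {          # hypothetical monthly usage per tenant
    "research": 1_200,
    "inference": 3_400,
}

for tenant, hours in usage_gpu_hours.items():
    print(f"{tenant}: {hours} GPU-hours -> ${hours * GPU_HOUR_RATE:,.2f}")
```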

Personnel: Anjelica Ambrosio, Shaun O’Meara

