Mirantis IaaS Technology Stack with Shaun O’Meara
Event: AI Infrastructure Field Day 3
Appearance: Mirantis presents at AI Infrastructure Field Day 3
Company: Mirantis
Video Links:
- Vimeo: Mirantis IaaS Technology Stack with Shaun O’Meara
- YouTube: Mirantis IaaS Technology Stack with Shaun O’Meara
Personnel: Anjelica Ambrosio, Shaun O’Meara
Shaun O’Meara, CTO at Mirantis, described the infrastructure layer that underpins Mirantis k0rdent AI. The IaaS stack is designed to manage bare metal, networking, and storage resources in a way that removes friction from GPU operations. It provides operators with a tested foundation where GPU servers can be rapidly added, tracked, and made available for higher-level orchestration.
O’Meara emphasized that Mirantis has long experience operating infrastructure at scale. This history informed a design that automates many of the tasks that traditionally consume engineering time. The stack handles bare metal provisioning, integrates with heterogeneous server and network vendors, and applies governance for tenancy and workload isolation. It includes validated drivers for GPU hardware, which reduces the risk of incompatibility and lowers the time to get workloads running.
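The idea of validated GPU drivers can be illustrated with a minimal sketch: a lookup that checks whether a given GPU model and driver version form a tested pairing before a node is admitted. This is purely illustrative; the model names, version strings, and function are hypothetical and not the Mirantis implementation.

```python
# Illustrative sketch only: a minimal driver-validation lookup, not the
# Mirantis implementation. All model names and version strings are hypothetical.

# Map each GPU model to the set of driver versions validated for it.
VALIDATED_DRIVERS = {
    "gpu-model-a": {"535.104", "550.54"},
    "gpu-model-b": {"550.54"},
}

def is_validated(gpu_model: str, driver_version: str) -> bool:
    """Return True only if this driver version is validated for the GPU model."""
    return driver_version in VALIDATED_DRIVERS.get(gpu_model, set())

print(is_validated("gpu-model-a", "535.104"))  # validated pairing -> True
print(is_validated("gpu-model-b", "535.104"))  # not validated -> False
```

Gating provisioning on a check like this is one way to get the effect O’Meara describes: incompatible combinations are rejected before a workload ever lands on the node.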
Anjelica Ambrosio demonstrated how the stack works in practice. She created a new GPU cluster through the Mirantis k0rdent AI interface, with the system automatically discovering hardware, configuring network overlays, and assigning compute resources. The demo illustrated how administrators can track GPU usage down to the device level, observing both allocation and health data in real time. What would normally involve manual integration of provisioning tools, firmware updates, and network templates was shown as a guided workflow completed in minutes.
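Device-level tracking of the kind shown in the demo can be sketched as a simple inventory that records, per GPU, who holds the device and whether it is healthy, and derives utilization from that. This is a toy model under assumed names, not the k0rdent AI API or its data model.

```python
# Illustrative sketch only: a toy model of device-level GPU tracking
# (allocation plus health), not the k0rdent AI API. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GpuDevice:
    device_id: str
    allocated_to: Optional[str] = None  # owning workload, None if free
    healthy: bool = True                # health flag fed by monitoring

@dataclass
class GpuInventory:
    devices: dict = field(default_factory=dict)

    def add(self, device_id: str) -> None:
        """Register a newly discovered GPU."""
        self.devices[device_id] = GpuDevice(device_id)

    def allocate(self, device_id: str, workload: str) -> bool:
        """Assign a free, healthy device to a workload; refuse otherwise."""
        dev = self.devices[device_id]
        if dev.allocated_to is None and dev.healthy:
            dev.allocated_to = workload
            return True
        return False

    def utilization(self) -> float:
        """Fraction of healthy devices currently allocated."""
        healthy = [d for d in self.devices.values() if d.healthy]
        if not healthy:
            return 0.0
        return sum(d.allocated_to is not None for d in healthy) / len(healthy)

inv = GpuInventory()
for i in range(4):
    inv.add(f"gpu-{i}")
inv.allocate("gpu-0", "training-job-a")
inv.allocate("gpu-1", "training-job-a")
print(f"Utilization: {inv.utilization():.0%}")  # prints "Utilization: 50%"
```

An administrator view like the one Ambrosio demonstrated would sit on top of state of this shape, with discovery populating the inventory and monitoring updating the health flags in real time.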
O’Meara pointed out that the IaaS stack is not intended as a general-purpose cloud platform. It is narrowly focused on preparing infrastructure for GPU workloads and passing those resources upward into the PaaS layer. This focus reduces complexity but also introduces tradeoffs. Operators who need extensive support for legacy virtualization may need to run separate systems in parallel. However, for organizations intent on scaling AI, the IaaS layer provides a clear and efficient baseline.
By combining automation with vendor neutrality, the Mirantis approach reduces the number of unique integration points that operators must maintain. This lets smaller teams manage environments that previously demanded much larger staff. O’Meara concluded that the IaaS layer is what makes the higher levels of Mirantis k0rdent AI possible, giving enterprises a repeatable way to build secure, observable, and tenant-aware GPU foundations.