This video is part of the appearance, “Platform9 Presents at Cloud Field Day 21”. It was recorded as part of Cloud Field Day 21 from 8:00 to 10:00 on October 23, 2024.
Watch on YouTube
Watch on Vimeo
Platform9’s presentation at Cloud Field Day 21 focused on the multi-tenancy and self-service capabilities of its Private Cloud Director. Pooja Ghumre, Principal Engineer, explained how Platform9 lets users create multiple tenants for different organizations, with complete isolation between them. Administrators can configure quotas for compute, block storage, and network resources, ensuring that each tenant consumes only the resources allocated to it. The platform also supports SSO integration with external identity providers and offers features like VM leases, which let administrators set time limits on virtual machines, with the option to either power off or shut down VMs after the lease expires.
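The quota controls were demonstrated through the UI; as a rough sketch of what the same configuration could look like when automated, the snippet below uses the openstacksdk Python library, assuming the platform exposes OpenStack-compatible admin APIs. The cloud profile name, project name, and quota values are illustrative assumptions, not details taken from the session.

```python
# Hypothetical sketch: setting per-tenant quotas through an OpenStack-compatible
# API with the openstacksdk library. The cloud profile ("pcd-admin"), project
# name, and quota numbers are illustrative assumptions, not values from the talk.
import openstack

# Authenticate as an administrator (credentials come from clouds.yaml or env vars).
conn = openstack.connect(cloud="pcd-admin")

# Look up the tenant (project) whose resource usage should be capped.
project = conn.identity.find_project("engineering", ignore_missing=False)

# Compute quotas: limit vCPUs, RAM (MB), and instance count for this tenant.
conn.set_compute_quotas(project.id, cores=64, ram=131072, instances=32)

# Block storage quotas: limit volume count and total capacity (GB).
conn.set_volume_quotas(project.id, volumes=50, gigabytes=2048)

# Network quotas: limit networks, ports, and floating IPs.
conn.set_network_quotas(project.id, network=10, port=200, floatingip=20)
```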
The presentation also highlighted the platform’s support for infrastructure as code, enabling users to automate complex application deployments with orchestration templates. These templates can define resources such as VMs, volumes, networks, and security groups, and they support auto-scaling based on CPU utilization. Platform9 also integrates with Terraform for users who prefer that approach. The platform includes virtual machine high availability, which automatically migrates workloads to active nodes when a host fails, and resource rebalancing, which lets administrators either consolidate workloads to optimize power consumption or distribute them across hosts, depending on their needs.
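The orchestration templates were shown on screen rather than walked through line by line; the fragment below is a minimal sketch of the same idea, assuming a Heat-compatible orchestration API reachable through the openstacksdk Python library. The stack name, image, flavor, network, and volume size are illustrative assumptions.

```python
# Hypothetical sketch: launching an orchestration stack that defines a VM with an
# attached volume, assuming the platform exposes a Heat-compatible templating API.
# The stack name, image, flavor, network, and volume size are illustrative assumptions.
import openstack

conn = openstack.connect(cloud="pcd-tenant")

# A minimal template: one server plus an attached volume. The templates shown in
# the presentation also covered networks, security groups, and CPU-based auto-scaling.
template = {
    "heat_template_version": "2018-08-31",
    "resources": {
        "app_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "ubuntu-22.04",
                "flavor": "m1.medium",
                "networks": [{"network": "tenant-net"}],
            },
        },
        "app_volume": {
            "type": "OS::Cinder::Volume",
            "properties": {"size": 20},
        },
        "app_attachment": {
            "type": "OS::Cinder::VolumeAttachment",
            "properties": {
                "instance_uuid": {"get_resource": "app_server"},
                "volume_id": {"get_resource": "app_volume"},
            },
        },
    },
}

# Create the stack and wait until all resources are provisioned.
stack = conn.orchestration.create_stack(name="demo-app", template=template)
conn.orchestration.wait_for_status(
    stack, status="CREATE_COMPLETE", failures=["CREATE_FAILED"]
)
```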
In terms of multi-tenancy, Platform9 offers different roles, such as administrator and self-service user, with varying levels of access. Administrators can manage multiple tenants and configure networking and resource settings, while self-service users are limited to their own tenant. The discussion also touched on support for AI/ML workloads, particularly with NVIDIA GPUs. While Platform9 supports running NVIDIA GPUs in virtualized environments, the team recommended running Kubernetes on bare metal for better GPU utilization and flexibility, especially for containerized applications. This approach allows for more efficient use of resources, such as slicing GPUs with MIG, and is better suited to modern AI/ML workloads.
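To illustrate the Kubernetes-on-bare-metal recommendation, the sketch below requests a single MIG slice for a containerized workload using the official kubernetes Python client. The pod name, container image, and MIG profile (mig-1g.5gb) are assumptions for illustration, and the exact extended resource name depends on how the NVIDIA device plugin is configured.

```python
# Hypothetical sketch: requesting a MIG slice for a containerized workload on a
# bare-metal Kubernetes cluster via the NVIDIA device plugin's extended resource.
# The pod name, image, and MIG profile are illustrative assumptions.
from kubernetes import client, config

# Load cluster credentials from the default kubeconfig (~/.kube/config).
config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="mig-inference-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="inference",
                image="nvcr.io/nvidia/pytorch:24.01-py3",
                command=["nvidia-smi", "-L"],
                resources=client.V1ResourceRequirements(
                    # Request one 1g.5gb MIG slice instead of a whole GPU.
                    limits={"nvidia.com/mig-1g.5gb": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```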
Personnel: Pooja Ghumre