Monitoring Cost, Capacity, and Health of VMware Cloud Foundation

Event: Cloud Field Day 21

Appearance: VMware Presents at Cloud Field Day 21

Company: VMware

Personnel: Kelcey Lemon

VCF Operations offers full-stack visibility into the VMware Cloud Foundation-based private clouds you manage, spanning the cloud components, infrastructure, virtual machines (VMs), containers, and applications. VCF Operations provides continuous health updates, performance optimization, and efficient cost and capacity management.

The demonstration will cover the following:
● Full Stack visibility and health across your VCF clouds
● Application and container monitoring
● Cost analytics, including TCO and potential savings
● Predictive capacity management with actionable rightsizing and reclamation tools

In this presentation, Kelcey Lemon from Broadcom demonstrates how VMware Cloud Foundation (VCF) Operations provides comprehensive monitoring and management tools for private clouds. The platform offers full-stack visibility, allowing users to monitor the health of their cloud infrastructure, including virtual machines (VMs), containers, and applications. VCF Operations consolidates various diagnostic tools into a single interface, enabling users to track performance, identify underutilized resources, and address capacity shortfalls through predictive analytics. The platform also helps with cost management by offering insights into total cost of ownership (TCO), potential savings, and resource reclamation, such as deleting idle VMs or old snapshots. Additionally, it provides compliance monitoring, ensuring that the infrastructure adheres to industry standards like HIPAA and CIS, while also tracking sustainability metrics like power consumption and carbon footprint.

The presentation also highlights the platform’s capacity management features, which use historical data and predictive analytics to forecast future resource demands. VCF Operations offers tools for right-sizing VMs, helping users optimize performance and reduce costs by adjusting CPU and memory allocations. The platform also includes reclamation tools that allow users to reclaim unused resources, further optimizing capacity and reducing operational expenses. The unified dashboard provides a centralized view of the entire cloud environment, enabling users to quickly identify and address issues, such as capacity shortages or performance bottlenecks. Overall, VCF Operations aims to streamline cloud management by offering a comprehensive, user-friendly interface that integrates health monitoring, cost management, and capacity planning.
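The rightsizing flow described above can be reduced to a simple idea: size a VM for its observed peak demand plus headroom. The sketch below is illustrative only; VCF Operations' actual predictive models and thresholds are not disclosed in the session, and `rightsize_vcpus` is a hypothetical name.

```python
import math

# Illustrative sketch of a rightsizing recommendation: not VCF Operations'
# actual algorithm, just the general peak-plus-headroom idea from the talk.
def rightsize_vcpus(allocated_vcpus, cpu_utilization_history, headroom=0.2):
    """Recommend a vCPU count from utilization samples (0.0-1.0 of allocation)."""
    peak = max(cpu_utilization_history)
    needed = peak * allocated_vcpus * (1 + headroom)  # peak demand plus headroom
    return max(1, math.ceil(needed))                  # never recommend zero vCPUs
```

A VM allocated 8 vCPUs that never exceeds 25% utilization would be flagged for downsizing, while one running near saturation would get an upsize recommendation; real tooling would also weigh memory, trend direction, and business constraints.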


Import, Migrate, and Extend VMware Cloud Foundation

Event: Cloud Field Day 21

Appearance: VMware Presents at Cloud Field Day 21

Company: VMware

Personnel: Eric Gray

VMware Cloud Foundation simplifies infrastructure administration by automating lifecycle management workflows and password/certificate rotation at scale. You don’t have to start from scratch to benefit from these capabilities. Discover how the latest release enhancements enable you to manage existing VMware vSphere deployments with VCF, as well as seamlessly migrate to and from VCF environments, whether on-premises or in the public cloud. In this presentation we will review:
● Introduction to VCF Import
● HCX overview and other tools for migrating to VCF
● Extend to the cloud with a focus on Google Cloud VMware Engine

In this presentation, Eric Gray from VMware by Broadcom discusses the technical aspects of importing, migrating, and extending VMware Cloud Foundation (VCF) environments. He explains how VCF simplifies infrastructure lifecycle management by automating tasks like updates, password management, and certificate rotation. One of the key points is that organizations don’t need to start from scratch to benefit from VCF. Existing VMware vSphere environments can be integrated into VCF through a process called “conversion” for management domains and “import” for workload domains. This allows administrators to bring their current infrastructure under VCF management, enabling them to take advantage of VCF’s lifecycle management features without having to rebuild their environments. Gray also highlights the flexibility of storage options, noting that while VCF typically uses vSAN, existing environments with NFS or Fibre Channel storage can still be imported and managed.

The second part of the presentation focuses on migrating workloads to VCF environments, particularly using VMware HCX, a tool that facilitates seamless migration of virtual machines (VMs) from older vSphere environments or even non-vSphere infrastructures like Hyper-V and KVM. HCX allows for live migrations with zero downtime by stretching Layer 2 networks, ensuring that workloads can retain their IP addresses during the move. Gray demonstrates how HCX can be used to migrate workloads between on-premises environments and public cloud platforms like Google Cloud VMware Engine (GCVE). He also touches on other migration options, such as NSX Layer 2 extensions and cross-vCenter migrations, but emphasizes that HCX is included with VCF and is a robust, mature solution for large-scale migrations. The session concludes with a live demo showing the migration of a web application from an on-premises environment to GCVE, illustrating the ease and efficiency of the process.


VMware Cloud Foundation Platform Overview

Event: Cloud Field Day 21

Appearance: VMware Presents at Cloud Field Day 21

Company: VMware

Personnel: Rick Walsworth

This session provides a summary of the VMware Cloud Foundation platform architecture, components and outcomes to build, deploy, operate and consume Private Cloud infrastructure for traditional and modern app workloads. This session will close with a brief overview of the latest VCF 5.2 release.

In this presentation, Rick Walsworth from VMware, now part of Broadcom, provides an overview of VMware Cloud Foundation (VCF) and its evolution since the Broadcom acquisition. He explains how the acquisition has streamlined operations, allowing for faster innovation and a more focused go-to-market strategy. VCF is designed to help organizations build, deploy, and manage private cloud infrastructure using cloud methodologies, while still maintaining the privacy, security, and performance of on-premises systems. Walsworth highlights the challenges customers face when modernizing their infrastructure, particularly when trying to integrate cloud methodologies with traditional three-tier architectures. He notes that many organizations initially view public cloud as a quick solution but often face cost overruns, leading to a trend of repatriating workloads back on-premises. VCF aims to provide a hybrid solution by combining the best of both worlds—on-premises control with cloud-like automation and scalability.

Walsworth also delves into the architecture of VCF, which caters to two main personas: cloud administrators and platform teams. Cloud administrators are provided with tools for capacity management, tenancy management, and fleet management, enabling them to operate infrastructure at scale. Platform teams, on the other hand, focus on delivering infrastructure as a service to developers, often using a combination of traditional VMs and containers orchestrated by Kubernetes. VCF integrates with various advanced services, such as AI workloads, disaster recovery, and security features, which can be added on top of the core platform. The platform also supports automation through infrastructure-as-code methodologies, allowing for seamless integration with DevOps pipelines. Walsworth emphasizes the importance of ongoing education and professional services to help customers fully utilize the platform’s capabilities, especially as they scale and customize their environments.


Private Cloud, Simplified – Introducing VMware Cloud Foundation

Event: Cloud Field Day 21

Appearance: VMware Presents at Cloud Field Day 21

Company: VMware

Personnel: Prashanth Shenoy

Since the completion of Broadcom’s acquisition of VMware, we have been all about change. For the VMware Cloud Foundation division, all of this change was necessary to transform our business to deliver faster innovation with more value to customers, and even better profitability and market opportunity for our partners. So, what’s changed now, post-acquisition, and why will this benefit your organization? In this session we will lay out VMware’s business model transition to subscription licensing which is the standard in the industry, the radical simplification across our portfolio, go-to-market and organizational structure to make it easier to do business with us, and the standardization across our ecosystem.

In this presentation, Prashanth Shenoy from VMware, now part of Broadcom, discusses the significant changes and strategic shifts that have occurred since the acquisition. The primary focus has been on simplifying VMware’s offerings, particularly through the VMware Cloud Foundation (VCF), which integrates various components like vSphere, vSAN, NSX, and Aria Automation into a unified private cloud platform. Shenoy emphasizes that the goal is to provide a consistent and seamless experience for customers, whether they deploy on-premises, at the edge, or through hyperscalers like AWS or Google Cloud. This shift is aimed at addressing customer feedback about the complexity of VMware’s previous offerings, which included thousands of SKUs and multiple independent business entities. By consolidating these into a single business division and product line, VMware aims to accelerate innovation and provide a more streamlined, integrated solution.

Additionally, Shenoy highlights the transition to a subscription-based licensing model, which aligns with industry standards and offers customers greater flexibility in deploying their cloud infrastructure. VMware has also introduced license portability, allowing customers to move their licenses between on-premises and cloud environments without additional costs. The company is also focusing on customer success by offering value-added services, professional support, and training programs to help customers adopt and optimize their private cloud deployments.


Automation Is Overblown – Cloud Field Day 21 Delegate Roundtable Discussion

Event: Cloud Field Day 21

Appearance: Delegate Roundtable at Cloud Field Day 21

Company: Tech Field Day

Personnel: Stephen Foskett

The roundtable discussion at Cloud Field Day 21 delved into the complexities and implications of automation, particularly in the context of cloud computing. The delegates acknowledged that while automation can streamline tedious or dangerous tasks, it also has significant social and economic impacts. Automation has historically displaced workers, and the conversation touched on the ethical considerations of who controls the automation process. The example of CNC milling machines was used to illustrate how automation can shift control from skilled workers to management, often resulting in job losses or reduced wages. The delegates emphasized the need to be mindful of the human element in automation, ensuring that workers are not left behind in the transition.

In the context of cloud computing, automation is seen as essential due to the rapid pace of change and the complexity of managing cloud environments. Automating routine tasks allows cloud professionals to focus on higher-value work, but it also introduces risks. The discussion highlighted the fragility of automated systems, which can fail catastrophically if not properly managed. The delegates shared personal experiences of how automation, when done thoughtfully, can drastically improve efficiency, such as reducing ticket queues for IT teams. However, they also noted that automation should not be implemented blindly; it requires careful consideration of what tasks should be automated and how to maintain quality control.

The conversation also touched on the broader societal implications of automation, including the need for upskilling workers to adapt to new technologies. The delegates agreed that automation is inevitable, but it should be approached with caution and a focus on human oversight. They discussed the importance of maintaining a balance between technological progress and the well-being of workers, suggesting that automation should empower people rather than replace them. Ultimately, the roundtable concluded that while automation offers significant benefits, it also comes with risks that must be carefully managed to avoid unintended consequences.


Cloud Repatriation and Rationalization – Cloud Field Day 21 Delegate Roundtable Discussion

Event: Cloud Field Day 21

Appearance: Delegate Roundtable at Cloud Field Day 21

Company: Tech Field Day

Personnel: Stephen Foskett

The roundtable discussion at Cloud Field Day 21 centered around the growing trend of cloud repatriation, where companies are considering moving workloads from the cloud back to on-premises environments. The delegates discussed the reasons behind this shift, with cost being a primary driver. Many organizations initially moved to the cloud during the COVID-19 pandemic due to the need for remote access and scalability. However, as businesses return to more traditional operations, they are reassessing the financial implications of cloud services, especially when unexpected operational expenses arise. The unpredictability of cloud costs, particularly in OPEX models, has led some companies to reconsider whether certain workloads are better suited for on-premises infrastructure, where they feel they have more control over expenses.

Another key point raised was the evolution of technology and operational models. While many applications were initially lifted and shifted to the cloud, not all of them are optimized for cloud environments, leading to inefficiencies and higher costs. The delegates noted that newer applications, especially those designed to be cloud-native, are likely to remain in the cloud, but legacy applications may be better suited for on-premises environments. Additionally, the conversation highlighted how cloud operational models, such as automation and elasticity, have been adopted in on-premises environments, blurring the lines between cloud and traditional data centers. This shift in operational models has made it easier for companies to repatriate workloads without losing the benefits of cloud-like flexibility.

The discussion also touched on the broader implications of repatriation, including the role of financial models and the importance of aligning technology decisions with business needs. The delegates emphasized that repatriation is not just a technical decision but also a financial one, driven by the need to balance CAPEX and OPEX. They also pointed out that the decision to repatriate should be based on the specific workload and its requirements, rather than a blanket move away from the cloud. Ultimately, the conversation suggested that cloud repatriation is a nuanced topic, with companies needing to carefully evaluate their workloads, costs, and operational models to determine the best approach for their infrastructure.


Cloud Native Qumulo Architecture and Demo

Event: Cloud Field Day 21

Appearance: Qumulo Presents at Cloud Field Day 21

Company: Qumulo

Personnel: Dack Busch

Qumulo’s cloud-native architecture, as presented by Dack Busch, emphasizes elasticity, performance, and cost-efficiency, particularly in AWS environments. The system is designed to scale dynamically, allowing users to adjust the number of nodes in a cluster based on workload demands. This flexibility is crucial for industries like media and entertainment, where workloads can spike unpredictably. Qumulo’s architecture allows users to scale up or down without service disruption, and even change EC2 instance types in real-time to optimize performance and cost. The system’s read cache is stored locally on NVMe drives, while the write cache is stored on EBS, which is more cost-effective and ensures data durability. The architecture also supports multi-AZ deployments, ensuring high availability and durability by spreading clusters across multiple availability zones.

One of the key features of Qumulo’s cloud-native solution is its integration with S3 for persistent storage. The system defaults to S3 Intelligent Tiering, but users can choose other S3 classes based on their needs. The architecture is designed to be highly efficient, with a focus on data consistency and cache coherency. Unlike some cloud systems that are eventually consistent, Qumulo ensures that data is always consistent, which is critical for customers who prioritize data integrity. The system also supports global namespace metadata, allowing users to access their data from anywhere as if it were local. This is particularly useful for scenarios where data needs to be accessed across different regions or environments, such as in disaster recovery or cloud bursting scenarios.

Qumulo’s architecture also offers significant economic advantages. Customers only pay for the capacity they use, and there is no need to provision storage in advance. This pay-as-you-go model aligns with the principles of cloud-native design, where resources are only consumed when needed. The system also supports automated scaling through CloudWatch and Lambda functions, allowing users to add or remove nodes based on real-time performance metrics. Additionally, Qumulo’s integration with third-party tools and its ability to ingest data from existing S3 buckets make it a versatile solution for organizations looking to migrate or manage large datasets in the cloud. The demo showcased the system’s ability to scale from three to 20 nodes in just a few minutes, demonstrating its real-time elasticity and high-performance capabilities.
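The CloudWatch-and-Lambda scaling pattern mentioned above can be sketched as a small handler that reacts to an alarm notification and picks a new node count. This is a hedged illustration: the alarm names, the node bounds, and the commented-out `resize_qumulo_cluster` call are hypothetical, since the session does not show Qumulo's actual cluster-resize API.

```python
import json

MIN_NODES, MAX_NODES = 3, 20  # bounds from the demo's three-to-20-node scale-out

def decide_node_count(current, alarm_name):
    """Scale out on high-throughput alarms, back in when load subsides."""
    if "HighThroughput" in alarm_name:
        return min(current + 1, MAX_NODES)
    if "LowThroughput" in alarm_name:
        return max(current - 1, MIN_NODES)
    return current

def lambda_handler(event, context):
    # CloudWatch alarm notifications delivered via SNS arrive JSON-encoded
    # in the SNS message body.
    alarm = json.loads(event["Records"][0]["Sns"]["Message"])
    current = 3  # in practice: query the cluster for its current node count
    target = decide_node_count(current, alarm["AlarmName"])
    # resize_qumulo_cluster(target)  # hypothetical call to the cluster API
    return {"target_nodes": target}
```

Wiring this up would mean pointing a CloudWatch alarm on a cluster performance metric at an SNS topic that triggers the Lambda function.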


What Qumulo is Hearing from Customers

Event: Cloud Field Day 21

Appearance: Qumulo Presents at Cloud Field Day 21

Company: Qumulo

Personnel: Brandon Whitelaw

In this presentation, Brandon Whitelaw, VP of Cloud at Qumulo, discusses the evolving landscape of data management and the challenges customers face in adopting hybrid and multi-cloud strategies. He highlights that the traditional approach of consolidating disparate file systems into a single scale-out system is no longer sufficient, as most companies now operate across multiple clouds and geographic locations. Whitelaw points out that 94% of companies have adopted multi-cloud strategies, often using different clouds for different workloads, which adds complexity. He emphasizes that file data, once considered secondary, has now become critical, especially with the rise of AI and other next-gen applications. However, many file systems struggle to operate efficiently in the cloud, often offering only a fraction of their on-prem performance at a much higher cost.

Whitelaw explains that one of the key challenges is the inefficiency of moving data between on-prem and cloud environments, particularly when using traditional file systems that are not optimized for cloud performance. He notes that many companies end up creating multiple copies of their data across different systems, which increases costs and complexity. Qumulo aims to address this by providing a unified data fabric that allows seamless access to data across on-prem, cloud, and edge environments. This approach reduces the need for data replication and ensures that data is accessible and performant, regardless of where it resides. Qumulo’s solution also includes real-time file system analytics, which helps optimize data access and performance by preemptively caching frequently accessed data.

The presentation also delves into the technical aspects of Qumulo’s cloud-native file system, which is designed to leverage the scalability and cost-effectiveness of object storage like AWS S3 or Azure Blob, while overcoming the performance limitations typically associated with these storage types. By using advanced data layout techniques and caching mechanisms, Qumulo ensures that data stored in object storage can be accessed with the performance of a traditional file system. This approach allows customers to benefit from the elasticity and cost savings of cloud storage without having to rewrite their applications. Whitelaw concludes by emphasizing the importance of providing a consistent, high-performance data experience across all environments, enabling customers to focus on their workloads rather than managing complex data pipelines.
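The caching idea described above, fronting slow object storage with a fast local cache so reads feel like a traditional file system, can be illustrated with a minimal LRU read cache. This is a sketch only: Qumulo's actual data layout, prefetching, and cache-coherency protocol are proprietary, and `BlockReadCache` is a hypothetical name.

```python
from collections import OrderedDict

class BlockReadCache:
    """Minimal LRU read cache in front of an object store (illustrative only)."""

    def __init__(self, fetch_from_object_store, capacity=1024):
        self._fetch = fetch_from_object_store  # slow path, e.g. one S3 GET per block
        self._capacity = capacity
        self._cache = OrderedDict()

    def read_block(self, block_id):
        if block_id in self._cache:
            self._cache.move_to_end(block_id)   # mark as most recently used
            return self._cache[block_id]
        data = self._fetch(block_id)            # miss: go to object storage
        self._cache[block_id] = data
        if len(self._cache) > self._capacity:
            self._cache.popitem(last=False)     # evict least recently used block
        return data
```

Repeated reads of a hot block hit local memory instead of object storage; a production system would add write handling, prefetching informed by access analytics, and coherency across nodes.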


Douglas Gourlay Introduces the Qumulo Cloud Data Platform

Event: Cloud Field Day 21

Appearance: Qumulo Presents at Cloud Field Day 21

Company: Qumulo

Personnel: Douglas Gourlay

Douglas Gourlay, CEO of Qumulo, introduced the Qumulo Cloud Data Platform by discussing the unprecedented growth in data and the challenges it presents for storage and processing. He highlighted how data growth is outpacing the ability of traditional storage solutions, such as SSDs and HDDs, to keep up, especially in environments where power and space are limited. This has led organizations to explore alternatives like cloud storage or building new data centers. Gourlay emphasized that the data being generated today is not just for storage but is increasingly valuable, feeding into critical applications like medical imaging, AI processing, and research. He shared examples of customers dealing with massive amounts of data, such as research institutions generating hundreds of terabytes weekly, and the need to move this data efficiently to processing centers.

Gourlay also addressed the ongoing debate between cloud and on-premises storage, noting that the industry is moving towards a hybrid model where both options are viable depending on the specific needs of the business. He criticized the myopic views of some industry players who advocate for cloud-only or on-prem-only solutions, arguing that businesses need the freedom to choose the best option for their workloads. Qumulo’s strategy is to eliminate technological barriers, allowing customers to make decisions based on business needs rather than being constrained by the limitations of the technology. By normalizing the cost of cloud storage and making it comparable to on-prem solutions, Qumulo aims to provide flexibility and enable businesses to store and process data wherever it makes the most sense.

The Qumulo Cloud Data Platform is designed to run anywhere, whether on x86, AMD, or ARM architectures, and across multiple cloud providers like Amazon and Azure. The platform’s global namespace feature ensures that data is available everywhere it is needed, with strict consistency to prevent data loss. Gourlay explained how Qumulo’s system optimizes data transfer across wide-area networks, significantly reducing the time it takes to move large datasets between locations. The platform also integrates with AI systems, enabling customers to leverage their data in advanced AI models while protecting their data from being absorbed into the AI’s training process. Looking ahead, Qumulo aims to build a global data fabric that supports both unstructured and structured data, with features like global deduplication and automated data management to ensure data is stored in the most efficient and cost-effective way possible.


Integrated Kubernetes Control Plane in Platform9 Private Cloud Director

Event: Cloud Field Day 21

Appearance: Platform9 Presents at Cloud Field Day 21

Company: Platform9

Personnel: Chris Jones

Platform9’s latest product, Private Cloud Director, introduces a new approach to managing Kubernetes on-premises by eliminating the need for users to manage the control plane. Traditionally, Kubernetes deployments require both control plane nodes and worker nodes, with the control plane being responsible for managing the cluster. In public cloud environments, the control plane is typically managed by the cloud provider, but this is not the case for on-premise deployments. Platform9’s solution moves the control plane services into a management plane, which can be either self-hosted or managed as a SaaS offering by Platform9. This shift allows users to avoid the overhead of managing control plane nodes, which can result in significant resource savings, especially for large-scale deployments with multiple clusters.

The Private Cloud Director is particularly appealing to service providers looking to offer Kubernetes as a service. By offloading the control plane management to Platform9, service providers can focus on their core competencies and go to market faster without the need to build and maintain their own Kubernetes infrastructure. Platform9 also provides backend support for Kubernetes, including Q&A, break-fix, and upgrades, which further reduces the operational burden on service providers. The platform integrates seamlessly with existing infrastructure, allowing users to deploy Kubernetes clusters in a manner similar to public cloud services like EKS, AKS, or GKE, but without the need to manage the control plane.

In terms of security and customization, Platform9 offers a default configuration for Kubernetes clusters, which can be further tailored to meet specific customer needs. The platform supports automated upgrades for both the control plane and worker nodes, ensuring that clusters remain up-to-date without diverging too far between versions. Additionally, Platform9 provides optional add-ons, such as Prometheus for monitoring and load balancing services, which can be easily integrated into the Kubernetes environment. This flexibility, combined with the managed control plane, makes Private Cloud Director a compelling solution for organizations looking to simplify their Kubernetes operations while maintaining control over their on-premise infrastructure.


Multi Tenancy and Self Service in Platform9 Private Cloud Director

Event: Cloud Field Day 21

Appearance: Platform9 Presents at Cloud Field Day 21

Company: Platform9

Personnel: Pooja Ghumre

Platform9’s presentation at Cloud Field Day 21 focused on the multi-tenancy and self-service capabilities of their Private Cloud Director. Pooja Ghumre, Principal Engineer, explained how Platform9 allows users to create multiple tenants for different organizations, providing complete isolation between them. Administrators can configure quotas for compute, block storage, and network resources, ensuring that tenants only use the resources allocated to them. Additionally, the platform supports SSO integration for external identity providers and offers features like VM leases, which allow administrators to set time limits on virtual machines, with options to either power off or shut down VMs after expiration.

The presentation also highlighted the platform’s support for infrastructure as code, enabling users to automate complex application deployments using orchestration templates. These templates can define resources such as VMs, volumes, networks, and security groups, and they support auto-scaling based on CPU utilization. Platform9 also integrates with Terraform providers for users who prefer that approach. The platform includes features like virtual machine high availability and resource rebalancing, which ensure that workloads are automatically migrated to active nodes in case of host failures. Resource rebalancing allows administrators to optimize power consumption or distribute resources across hosts, depending on their needs.

In terms of multi-tenancy, Platform9 offers different roles, such as administrator and self-service user, with varying levels of access. Administrators can manage multiple tenants and configure networking and resource settings, while self-service users are limited to their own tenant. The discussion also touched on support for AI/ML workloads, particularly with NVIDIA GPUs. While Platform9 supports running NVIDIA GPUs in virtualized environments, the team recommended using Kubernetes on bare metal for better GPU utilization and flexibility, especially for containerized applications. This approach allows for more efficient use of resources, such as slicing GPUs with MiG, and is better suited for modern AI/ML workloads.


Virtual Machines, Images, and Volumes in Platform9 Private Cloud Director

Event: Cloud Field Day 21

Appearance: Platform9 Presents at Cloud Field Day 21

Company: Platform9

Personnel: Pooja Ghumre

In this presentation, Pooja Ghumre, Principal Engineer at Platform9, discusses the process of creating virtual machines (VMs) within the Platform9 Private Cloud Director. After the initial onboarding and network setup, users can create VMs either from pre-existing images or volumes. Platform9 allows administrators to upload and manage images, which can be designated as public, private, or shared among tenants. The platform supports various storage protocols, including iSCSI, Fibre Channel, and NFS, providing flexibility in how VMs are deployed. Users can also select from predefined “t-shirt sizes” for VMs, which determine the CPU, memory, and disk requirements, or create custom sizes based on specific needs, such as isolating VMs to certain hardware configurations.

The platform also offers a robust image library, similar to VMware’s vSphere content library, where users can upload new images or use predefined ones like Ubuntu or CentOS. Additionally, users can configure VMs with multiple network interfaces, choose between provider or virtual networks, and apply affinity or anti-affinity rules to control VM placement. Platform9 also supports cloud-init configurations, allowing users to run custom scripts during VM boot-up. Security groups can be applied to filter traffic, and key-value pairs can be added for easier VM management and searchability.

In terms of policy management, Ghumre explains that Platform9 allows users to map VM flavors to host aggregates, which helps in scheduling VMs based on specific performance or resource requirements. This mapping ensures that VMs are placed on the appropriate hosts that meet the defined criteria, such as high-performance storage or specific licensing needs. The platform also supports live migration, enabling users to move VMs between nodes without downtime, further enhancing the flexibility and resilience of the cloud environment.


Software-Defined Networking in Platform9 Private Cloud Director

Event: Cloud Field Day 21

Appearance: Platform9 Presents at Cloud Field Day 21

Company: Platform9

Personnel: Pooja Ghumre

Platform9’s presentation at Cloud Field Day 21 focused on their implementation of software-defined networking (SDN) within their Private Cloud Director, which is built on open-source technologies like Open Virtual Network (OVN) and Open vSwitch (OVS). This SDN solution is comparable to VMware’s distributed virtual switch, providing packet forwarding and enabling the creation of self-service virtual networks and routers. The platform supports advanced enterprise features such as SR-IOV for low-latency applications, IPv6, and dual-stack networking. Security is a key focus, with support for security groups that filter traffic based on IP addresses, ports, and protocols at the L3 and L4 levels. For more advanced use cases, Platform9 offers extensions like DNS, firewall, and load balancer services, with the option to integrate third-party solutions such as Infoblox, FortiGate, and F5.

The demo portion of the presentation showcased how users can create and manage virtual networks within different tenants, such as QA and Dev environments. The demo illustrated the creation of subnets, virtual routers, and the configuration of external networks for north-south traffic. The platform allows for inter-tenant communication through virtual routers, and public IPs can be associated with virtual machines for external access. Platform9 supports multiple underlay network types, including VLAN, VXLAN, and Geneve, with the flexibility to scale beyond the limitations of VLANs. The platform also allows for self-service users to create virtual networks once the blueprint is set up by the administrator.

In terms of routing and traffic management, Platform9 offers both distributed and centralized routing options. Users can configure routers to handle north-south traffic through specific nodes or distribute routing across multiple servers. Security groups can be customized with inbound and outbound rules based on protocols like TCP, UDP, and ICMP, and more advanced firewall capabilities are in development. The platform also supports policies for east-west traffic isolation within tenant networks, with the option to configure external interfaces for north-south traffic. Overall, Platform9’s SDN solution provides a flexible and scalable networking environment with robust security and integration options for enterprise and multi-tenant use cases.
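The L3/L4 security-group behavior described here can be sketched as a default-deny rule match on direction, protocol, remote prefix, and port; this is an illustrative model of the semantics, not Platform9's implementation:

```python
# Hypothetical sketch of L3/L4 security-group evaluation: a packet is allowed
# only if some rule matches its direction, protocol, remote prefix, and port;
# otherwise it is dropped (default deny). Data is illustrative.
import ipaddress

def allowed(packet, rules):
    for r in rules:
        if (r["direction"] == packet["direction"]
                and r["protocol"] in (packet["protocol"], "any")
                and ipaddress.ip_address(packet["remote_ip"]) in ipaddress.ip_network(r["remote_cidr"])
                and r["port_min"] <= packet["port"] <= r["port_max"]):
            return True
    return False  # no rule matched: default deny

rules = [
    {"direction": "ingress", "protocol": "tcp", "remote_cidr": "10.0.0.0/8",
     "port_min": 443, "port_max": 443},
]

print(allowed({"direction": "ingress", "protocol": "tcp",
               "remote_ip": "10.1.2.3", "port": 443}, rules))       # True
print(allowed({"direction": "ingress", "protocol": "tcp",
               "remote_ip": "192.168.0.5", "port": 443}, rules))    # False
```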


Platform9 Cluster Blueprints in Private Cloud Director

Event: Cloud Field Day 21

Appearance: Platform9 Presents at Cloud Field Day 21

Company: Platform9

Video Links:

Personnel: Pooja Ghumre

Platform9’s Private Cloud Director aims to simplify the private cloud experience for users, particularly those transitioning from public cloud environments. The platform introduces the concept of “cluster blueprints,” which allow administrators to define a common template for managing clusters of hypervisors. These clusters are essentially groups of co-located hosts that share similar networking and hardware configurations. The platform uses KVM as the underlying hypervisor, which is a widely adopted virtualization technology. One of the key features of the platform is its ability to over-provision resources, allowing users to maximize hardware utilization by running more virtual machines than the physical cores available, based on the assumption that not all VMs will use their full resources simultaneously.
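The over-provisioning math is simple: given an allocation ratio of N, the scheduler treats each physical core as N schedulable vCPUs, on the bet that most VMs idle most of the time. OpenStack Nova's cpu_allocation_ratio setting works this way; the 4:1 ratio below is just an example value:

```python
# CPU over-provisioning sketch: with an allocation ratio of N, each physical
# core counts as N schedulable vCPUs (cf. Nova's cpu_allocation_ratio).

def schedulable_vcpus(physical_cores, allocation_ratio):
    return int(physical_cores * allocation_ratio)

# A 64-core host with a 4:1 ratio can accept VMs totaling 256 vCPUs.
print(schedulable_vcpus(64, 4.0))  # 256
```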

The platform also supports advanced day-two operations, such as live migration and high availability, which are crucial for minimizing downtime during maintenance or hardware failures. Live migration allows workloads to be moved between hosts without downtime, while high availability ensures that workloads can be redistributed in case of hardware failure. The platform’s multi-tenancy feature is designed for managed service providers (MSPs) and enterprises, allowing them to create isolated environments for different customers or departments. Each tenant can have its own set of resources, such as virtual machines and networks, while sharing the underlying hardware across the region.
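The high-availability behavior described above, redistributing a failed host's workloads onto survivors, can be modeled as a capacity-aware placement problem. A greedy sketch under illustrative names (not Platform9's actual algorithm):

```python
# Illustrative HA evacuation sketch: when a host fails, place its VMs on
# surviving hosts with spare capacity, biggest VMs first, most-free host first.

def evacuate(failed_vms, survivors):
    """failed_vms: list of (vm_name, vcpus_needed);
    survivors: dict of host -> free vcpus (mutated as VMs are placed).
    Returns a vm -> host placement map, or raises if capacity runs out."""
    placement = {}
    for vm, need in sorted(failed_vms, key=lambda x: -x[1]):  # biggest first
        host = max(survivors, key=survivors.get)              # most free space
        if survivors[host] < need:
            raise RuntimeError(f"no capacity for {vm}")
        survivors[host] -= need
        placement[vm] = host
    return placement

print(evacuate([("db", 8), ("web", 2)], {"node2": 10, "node3": 6}))
# {'db': 'node2', 'web': 'node3'}
```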

In terms of storage, Platform9 supports a variety of backends, including NFS, NetApp, and Ceph, and allows administrators to configure storage policies based on the needs of specific workloads. The platform also offers flexibility in managing hosts at scale, with features like host aggregates and metadata tagging, which make it easier to filter and manage large numbers of nodes. For large-scale environments, such as those with thousands of nodes spread across multiple regions, the platform provides search and bulk operation capabilities to streamline management tasks. Additionally, the platform integrates with automation tools like Ansible, making it easier to onboard new hosts and scale the infrastructure efficiently.
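Metadata tagging at fleet scale amounts to selecting the subset of a large inventory that matches every requested tag, then applying one bulk operation to it. A small sketch with hypothetical tag names:

```python
# Sketch of tag-based fleet filtering: pick the hosts matching every requested
# tag, e.g. to target a bulk upgrade or drain. Inventory data is illustrative.

def select_hosts(inventory, **required_tags):
    return [name for name, tags in sorted(inventory.items())
            if all(tags.get(k) == v for k, v in required_tags.items())]

inventory = {
    "node-001": {"region": "us-east", "rack": "r1", "storage": "ceph"},
    "node-002": {"region": "us-east", "rack": "r2", "storage": "nfs"},
    "node-003": {"region": "eu-west", "rack": "r1", "storage": "ceph"},
}

print(select_hosts(inventory, region="us-east", storage="ceph"))  # ['node-001']
```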


Platform9 Private Cloud Director Proactive Operations

Event: Cloud Field Day 21

Appearance: Platform9 Presents at Cloud Field Day 21

Company: Platform9

Video Links:

Personnel: Chris Jones, Tanay Patankar

This Platform9 presentation at Cloud Field Day 21 focuses on their approach to proactive operations and support, particularly in the context of day-two operations after a cloud migration. Chris Jones emphasized how Platform9 flips the traditional support model by proactively monitoring customer environments 24/7 and reaching out to them when issues arise, rather than waiting for customers to submit support tickets. This proactive approach accounts for a significant portion of their support load, with 65% of tickets being generated by Platform9’s monitoring systems. The company uses a centralized management plane, which integrates with various enterprise monitoring tools like Datadog, Splunk, and Grafana, allowing customers to maintain their existing observability stacks while benefiting from Platform9’s oversight.

The discussion also touched on the challenges of self-hosted environments, where customers may not have the same level of integration with Platform9’s management plane. In these cases, Platform9 provides templates and guidance for integrating with third-party monitoring tools, but customers are responsible for configuring their own log aggregation and monitoring systems. The team acknowledged that while they could offer more operational tools as a service, their current focus is on core virtualization and Kubernetes management, leaving observability to specialized vendors like Datadog. The conversation highlighted the importance of meeting enterprise customers where they are, especially those transitioning from VMware environments, which often come with pre-packaged monitoring solutions like vRealize Operations.

Finally, the presentation covered Platform9’s upgrade strategy and migration capabilities. The company offers a structured upgrade process with one major release and three minor releases per year, allowing customers to schedule upgrades at their convenience. They also provide a “Canary Environment” for testing upgrades before applying them to production. The session concluded with a demonstration of a successful live migration from VMware to Platform9’s environment using their open-source tool, vJailbreak, which converts VMware VMDK disks to a raw format for more efficient operation. The tool currently supports VMware as the source and OpenStack as the destination, with potential for future expansion based on customer needs.
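A VMDK-to-raw conversion of the kind described here is the sort of job typically handled by qemu-img. The sketch below only assembles the command line rather than running it; the flags are real qemu-img convert options, while the file paths are illustrative:

```python
# Build (but don't run) a qemu-img command converting a VMDK disk to raw.
# -f gives the source format, -O the output format; paths are illustrative.
import shlex

def qemu_img_convert_cmd(src_vmdk, dst_raw):
    return ["qemu-img", "convert", "-f", "vmdk", "-O", "raw", src_vmdk, dst_raw]

cmd = qemu_img_convert_cmd("app-server.vmdk", "app-server.raw")
print(shlex.join(cmd))
# qemu-img convert -f vmdk -O raw app-server.vmdk app-server.raw
```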


Platform9 Private Cloud Director Architecture

Event: Cloud Field Day 21

Appearance: Platform9 Presents at Cloud Field Day 21

Company: Platform9

Video Links:

Personnel: Chris Jones, Roopak Parikh

Platform9’s Private Cloud Director (PCD) architecture is designed to provide a managed private cloud experience, with distinct components that handle different aspects of the system. The architecture is divided into three main areas: the operations plane, the management plane, and the infrastructure layer. The operations plane is only available for customers who are hosted by Platform9, and it handles tasks like log collection, metrics, and alerting, allowing Platform9 to monitor and manage the infrastructure. For self-hosted customers, these capabilities are not integrated, but they can still use third-party tools like Splunk or Datadog for monitoring. The management plane, which runs on Kubernetes, is responsible for managing APIs, databases, and other core services, and it can be deployed either by Platform9 or by the customer in their own environment.

The process of onboarding servers into PCD is straightforward, thanks to Platform9’s bootstrapping agent, PF9. This agent is deployed on each server and reports back to the management plane, allowing users to configure their infrastructure through the UI or API. The system supports various operating systems like Red Hat, Ubuntu, and CentOS, and Platform9 provides tools like Ansible, PXE, or Ironic to automate the deployment of hundreds of nodes. Once the servers are onboarded, users can configure enterprise features such as high availability, virtual machine rebalancing, and virtual network creation. Additionally, Kubernetes clusters can be easily created on top of the virtual machines, with Platform9 handling the automation of network and load balancer setup.

Platform9’s architecture also draws parallels to VMware’s ecosystem, with features like DRS (Distributed Resource Scheduler) being comparable to Platform9’s rebalancing feature. The company acknowledges that for users familiar with VMware, there may be a need for a “Rosetta stone” to translate between VMware’s terminology and Platform9’s offerings. Throughout the presentation, Platform9 aims to highlight these parallels to make the transition easier for users coming from a VMware environment. The architecture is designed to be flexible and scalable, catering to both hosted and self-hosted environments, while providing a robust set of tools for managing private cloud infrastructure.


Migrating from VMware to Platform9 Private Cloud Director

Event: Cloud Field Day 21

Appearance: Platform9 Presents at Cloud Field Day 21

Company: Platform9

Video Links:

Personnel: Chris Jones

In this presentation, Chris Jones from Platform9 discusses the challenges and solutions involved in migrating from VMware to Platform9’s Private Cloud Director (PCD). He highlights that one of the most common concerns for organizations looking to move away from VMware is the complexity of migrating their virtualized infrastructure. Platform9 has developed a tool, internally named “vJailbreak,” to address these challenges. This tool is designed to help organizations migrate their virtual machines (VMs) from VMware to an OpenStack-based environment, which is the foundation of Platform9’s PCD. The tool is open-source and available on GitHub, and it aims to handle various migration complexities, such as high-performance networking, large data volumes, and minimizing downtime during the migration process.

The presentation also delves into the technical aspects of the migration process. One of the key challenges in migrating VMs between different hypervisors is the compatibility of device drivers, such as storage controllers and network controllers, which may not work seamlessly after migration. Platform9’s solution involves converting virtual disks and ensuring that the necessary drivers are updated during the migration process. The tool also supports both cold and warm migrations, allowing organizations to choose between a quicker migration with downtime or a more seamless migration where the application continues running while the data is being transferred. The flexibility of the tool extends to mapping storage and network configurations between VMware and OpenStack environments, giving administrators control over how resources are allocated in the new environment.

In a live demonstration, Tanay Patankar, a software engineer at Platform9, showcases the migration of a Windows-based e-commerce application from VMware to Platform9’s PCD. The demo highlights the ease of use of the vJailbreak tool, which allows administrators to specify credentials, map networks and storage, and configure migration options such as cutover windows and post-migration scripts. The tool also supports bulk migrations, enabling organizations to migrate multiple VMs simultaneously while minimizing service interruptions. The demo concludes by showing that the application server, after migration, continues to communicate with the database server still running in the VMware environment, demonstrating the ability to operate hybrid environments during the migration process.


Introducing Platform9 Private Cloud Director

Event: Cloud Field Day 21

Appearance: Platform9 Presents at Cloud Field Day 21

Company: Platform9

Video Links:

Personnel: Roopak Parikh

In this presentation, Roopak Parikh, CTO of Platform9, introduces the company’s new product, Private Cloud Director, which aims to address the challenges IT managers face when balancing between public and private cloud infrastructures. He begins by drawing an analogy to the perigean spring tide, where the Earth, moon, and sun align to create extreme tidal forces, likening it to the pressures IT managers face from the rising costs of both public and private cloud solutions. Parikh shares an example of a large corporation that has been using public cloud for five years but found it too expensive, while their private cloud costs are also increasing. This situation, he explains, is common for many organizations that are trying to manage a mix of public cloud, private cloud, and on-premises infrastructure, including Kubernetes, virtual machines, and bare metal servers.

Platform9 has been in the industry for eight years, managing production workloads for various customers, and has developed a deep understanding of the complexities involved in running both virtualized and containerized environments. Parikh highlights that Platform9 has historically offered multiple products to cater to the private cloud market, and with their new Private Cloud Director, they are offering an enterprise-grade, production-ready solution that integrates both containers and virtualization. This solution is designed to be cost-effective and developer-friendly, addressing the needs of organizations that are struggling to manage their hybrid cloud environments. The product is built on years of experience managing large-scale infrastructures, including customers with over 10,000 physical servers and numerous clusters running both virtual machines and containers.

The Private Cloud Director offers three key elements, starting with virtualized clusters, which allow organizations to take their physical servers, virtualize them, and create multiple clusters. These clusters can be used for both virtual machines and containers, providing a unified infrastructure for different workloads. The solution also includes an extension framework that integrates services across both virtualized and containerized environments, offering a seamless experience. Traditionally, Platform9 has provided its solutions as a SaaS-managed service with “always-on assurance,” but they are now also offering a self-hosted version for organizations with specific security or sovereignty requirements. This flexibility allows organizations to choose the deployment model that best fits their needs while maintaining the benefits of a fully integrated cloud management solution.


DevSecOps Built Right

Event: Security Field Day 12

Appearance: Security Field Day 12 Delegate Roundtable

Company: Tech Field Day

Video Links:

Personnel: Alastair Cooke

DevSecOps is the latest trend being embraced in the enterprise. Developers are “shifting left” to incorporate security into their processes, and everyone is told to make security their job. Yet we are still no more secure than we were before. In this delegate roundtable discussion, Alastair Cooke leads the conversation about how security is being incorporated into DevOps but the lessons aren’t being taken to heart. The delegates opine about the path of least resistance and how shortcuts abound in a world where people are pushed to their limits.


It’s Time To Update Your Password Policies

Event: Security Field Day 12

Appearance: Security Field Day 12 Delegate Roundtable

Company: Tech Field Day

Video Links:

Personnel: Tom Hollingsworth

NIST has released new password policy suggestions, and the security community is ready for them. However, businesses are not. The challenge of keeping up with the modern security landscape is balancing the need to keep your users and data safe while also adhering to business policy and regulations. In this roundtable discussion, Tom Hollingsworth leads the Security Field Day delegates in understanding the new NIST guidelines, keeping users current with best practices, and ensuring legal compliance.
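The direction of the NIST guidance (SP 800-63B) can be summarized as: enforce a length floor, accept long passphrases, drop mandatory composition rules and periodic rotation, and screen candidates against known-compromised passwords. A minimal sketch of such a check, with a stand-in blocklist in place of a real breached-password corpus:

```python
# Sketch of a NIST SP 800-63B-style password check: length floor, long
# passphrases allowed, no complexity classes, screened against a breached
# list. BREACHED is a tiny stand-in for a real compromised-password corpus.

BREACHED = {"password", "123456", "qwerty", "letmein"}

def acceptable(password, min_len=8, max_len=64):
    if not (min_len <= len(password) <= max_len):
        return False
    if password.lower() in BREACHED:
        return False
    return True  # no character-class rules, no forced rotation

print(acceptable("correct horse battery staple"))  # True
print(acceptable("Qwerty"))                        # False (too short)
```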