Next-Level Security and Resilience with VMware Cloud Foundation 9.0

Event:

Appearance: VMware Cloud Foundation 9.0 Showcase – Modern Private Cloud

Company: Broadcom

Video Links:

Personnel: Bob Plankers

When you think about cloud infrastructure security, there are three main goals you are trying to achieve. First, you want to be secure quickly and stay that way. Second, you want to drive trust in your infrastructure. Third, you want to be resilient, easily. Broadcom’s Bob Plankers will take you through the latest security innovations in VMware Cloud Foundation 9.0 for providing next-level security, trust, and resilience, empowering IT operations amidst regulatory complexities and geopolitical uncertainty.

The presentation focused on security and trust in VCF 9.0, emphasizing a “security first” approach, prioritizing ongoing security practices over infrequent compliance audits. A key theme was enabling customers to be secure faster, recognizing that security is a means to delivering services and running workloads. Plankers highlighted the importance of resilience, referencing features like vMotion and the EU’s Digital Operational Resilience Act, addressing both tactical and strategic scenarios such as failed application upgrades and disaster recovery.

The core differentiator of VCF 9.0 is inherent trust in the stack, moving towards less trust and more continuous verification. This includes verifying the platform’s security state, data sovereignty, and controlled access. The discussion covered lifecycle patching enhancements with Lifecycle Manager, aiming to simplify updates and manage multi-vendor cluster images. Features like live patching, custom EVC profiles, and improved GPU usage were also discussed as facilitating easier maintenance and patching, reducing friction.

The presentation took a deep dive into hypervisor security enhancements, including code signing, secure boot, and sandboxing. Confidential computing with AMD SEV-ES and Intel SGX technologies was explored, along with the introduction of a user-level monitor to de-privilege VM escapes. Workload security improvements encompassed secure boot, hardened virtual USB, TPM 2.0 updates, and forensic snapshots. Cryptographic enhancements included TLS 1.3 by default, cipher suite selection, and key wrapping. Centralized password management, unified security operations, and standardized APIs for role-based access control further enhance security and automation.
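The "TLS 1.3 by default" posture described above can be illustrated with a small sketch using Python's standard `ssl` module; this is a generic example of enforcing a TLS 1.3 floor, not VMware's implementation:

```python
import ssl

# Illustrative only: build a client-side context that refuses any protocol
# version below TLS 1.3, mirroring the "TLS 1.3 by default" posture.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# A server-side context would apply the same floor via the same attribute,
# so both ends of a connection negotiate only TLS 1.3 cipher suites.
```

In practice a platform would pair this with an allowlist of cipher suites, which is the "cipher suite selection" capability the session mentioned.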


Unpacking Storage In VMware Cloud Foundation 9.0

Event:

Appearance: VMware Cloud Foundation 9.0 Showcase – Modern Private Cloud

Company: Broadcom

Video Links:

Personnel: John Nicholson

VMware vSAN as a part of VMware Cloud Foundation 9.0 brings new functionality that not only extends its capabilities in ways never seen before, but also integrates into VCF in a manner that makes it a natural and cohesive extension. vSAN is clearly the premier storage solution for VMware Cloud Foundation. Broadcom’s John Nicholson will take you through the latest storage innovations, and how they deliver enhanced TCO and flexibility, secure and resilient storage, multi-site operations, and a storage platform for all workloads.

John Nicholson from Broadcom detailed the storage enhancements in VMware Cloud Foundation 9.0, focusing on improvements to operations, disaster recovery, performance, and security. He highlighted new operational consoles and tools for multi-site management, including diagnostics capabilities and IO Insight for workload analysis. The presentation included a demo showcasing the IO Trip Analyzer for end-to-end IO path troubleshooting and discussed the overhead of vSCSI tracing.

A key feature discussed was the new cluster-wide global deduplication for vSAN ESA, which uses a 4K fixed block granularity and is performed asynchronously to minimize impact on write performance. Nicholson addressed concerns about encrypted storage, emphasizing that vSAN offers data-at-rest and data-in-transit encryption to meet compliance requirements while still enabling compression and deduplication where possible. The presentation also covered support for multiple vSAN deployment types, including single-site clusters, disaggregated storage clusters, and imported clusters, along with the ability to split networking for vSAN storage clusters.
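As a rough mental model of fixed-block deduplication (a conceptual toy only; vSAN ESA's actual cluster-wide, asynchronous implementation is far more involved), each 4K block is hashed, and only the first copy of any given block is stored:

```python
import hashlib

BLOCK_SIZE = 4096  # 4K fixed-block granularity, as described for vSAN ESA

def dedup_blocks(data: bytes, store: dict[bytes, bytes]) -> list[bytes]:
    """Split data into fixed 4K blocks, storing each unique block once.

    Returns the list of block hashes that logically reference the data."""
    refs = []
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        digest = hashlib.sha256(block).digest()
        store.setdefault(digest, block)  # keep only the first copy seen
        refs.append(digest)
    return refs

store: dict[bytes, bytes] = {}
refs = dedup_blocks(b"A" * 8192 + b"B" * 4096, store)
# Three logical 4K blocks, but only two unique blocks end up in the store.
```

Performing this hashing and reference counting asynchronously, after the write is acknowledged, is what keeps the deduplication pass off the write path, which is the design tradeoff Nicholson described.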

Nicholson also presented vSAN to vSAN replication, enhancing data protection by integrating with VMware Live Recovery (formerly Site Recovery Manager). He showed how this combined solution supports replication, disaster recovery, and ransomware protection, all managed through a single appliance. He also covered improvements in stretch cluster support, like site-based maintenance mode and forced recovery takeover. The presentation concluded with a discussion about the current state of storage technology, highlighting the cost-effectiveness and scalability of NVMe drives and the benefits of vSAN within the VMware ecosystem.


VMware Cloud Foundation 9.0 – A Unified Platform for All Applications

Event:

Appearance: VMware Cloud Foundation 9.0 Showcase – Modern Private Cloud

Company: Broadcom

Video Links:

Personnel: Katarina Brookfield

As modern applications continue to evolve, so must the platforms that support them. VMware Cloud Foundation (VCF) is uniquely positioned as a single platform that seamlessly runs both VMs and containers – bridging the gap between traditional workloads and modern, cloud-native applications. In this session, Broadcom’s Katarina Brookfield will explore the latest innovations in vSphere Supervisor, the integrated Kubernetes-based declarative API layer that’s become foundational to the private cloud experience in VCF. Learn how these enhancements accelerate Kubernetes operations while preserving the control and consistency enterprises demand. We’ll dive into the latest capabilities that elevate flexibility, isolation, and operational efficiency – highlighting enhancements like Management and Workload Zone separation, modular enablement of the Supervisor, namespace isolation integrated with VPCs, and significant improvements to the VM Service, including support for importing existing VMs. We’ll also showcase updates to the vSphere Kubernetes Service (VKS), offering a powerful, built-in Kubernetes runtime optimized for VCF environments.

The VCF 9 presentation highlighted its unified platform approach, leveraging the vSphere Supervisor declarative API to manage both VMs and containers, providing a cloud-like experience within a private cloud environment. The core idea is extensibility, allowing users to select capabilities from a catalog and introduce new functionalities while abstracting away underlying infrastructure complexities like compute, storage, and networking. Katarina Brookfield demonstrated deploying a virtual machine and a Kubernetes cluster through a single user interface, emphasizing new VCF 9 features such as deploying VMs from ISO images, enhanced network configuration with VPC integration, guided CloudInit inputs, and improved VM customization, all handled through a curated interface by administrators.

A significant portion of the presentation focused on vSphere Kubernetes Service (VKS), showcasing its ease of operation and extensive functionality. Users can customize Kubernetes clusters, mixing operating systems and adding labels. The VCF CLI facilitates managing these clusters, allowing users to register clusters, create contexts, and manage packages, including Istio support. Brookfield demonstrated how cloud admins can update the VKS service version, unlocking new Kubernetes releases for consumer deployment, ensuring governance remains with the cloud admin while empowering consumers with the flexibility to update their clusters.

The presentation concluded with a demonstration of GitOps patterns using Argo CD service, a new addition that enables continuous delivery of applications. Katarina Brookfield showed how to deploy an Argo CD instance and integrate it with a GitHub repository containing YAML files for both Kubernetes clusters and virtual machines. The talk also touched on how the Supervisor layer is decoupled to expedite release of new features. Broadcom emphasized that the latest functionalities are best experienced by making VCF Automation the single point of entry to the whole ecosystem.


VMware Cloud Foundation’s Shift to Self-Service Private Cloud Consumption

Event:

Appearance: VMware Cloud Foundation 9.0 Showcase – Modern Private Cloud

Company: Broadcom

Video Links:

Personnel: Vincent Riccio

Unlock the next era of private cloud innovation and discover how VCF Automation within VMware Cloud Foundation is shaping the future of cloud infrastructure. This session explores how VCF Automation facilitates modern private cloud operations, enabling quick provisioning and simplified scaling in multi-tenant environments through self-service IaaS. Gain the speed to bring applications to market faster—without compromising control, thanks to policy-based governance designed for the modern enterprise. Broadcom’s Vincent Riccio will take you on a technical deep dive into the innovations powering this shift: a Modern Cloud Interface that delivers public cloud-like IaaS straight out-of-the-box, advanced tenant management, centralized content control, policy as code, and more. See how your organization can build, run, and manage diverse workloads—faster, smarter, and more securely—as you step into a future-ready private cloud.

Vincent Riccio’s presentation at the VCF 9.0 Showcase focused on Broadcom’s efforts to automate private cloud investments within VCF9. He emphasized the shift towards a self-service consumption model, enabling business units to deploy applications and services with greater agility. Key components of this automation include improved tenant management through the introduction of “organizations,” centralized content control via content libraries, and policy-as-code capabilities for governance. Riccio also highlighted the integration of vSphere services, such as the VM service and VKS service, into the automation framework, enabling users to deploy VMs and Kubernetes clusters more easily.

The presentation also delved into the architecture of the new solution, emphasizing the importance of the supervisor in vCenter for enabling the “all-apps” experience. Riccio explained how regions, comprised of one or more supervisors, abstract resources across the VCF fleet for consumption. He introduced the concept of projects within organizations, enabling further isolation and management of users and namespaces. The presentation concluded with a demonstration of the new features, including the deployment of VMs and Kubernetes clusters using the services UI and the exploration of the catalog for more curated, “anything as a service” type deployments.


VMware Cloud Foundation 9.0 – The Smarter Way to Operate Private Cloud

Event:

Appearance: VMware Cloud Foundation 9.0 Showcase – Modern Private Cloud

Company: Broadcom

Video Links:

Personnel: Kelcey Lemon, Kyle Gleed

Effectively managing a large-scale private cloud environment demands robust operational strategies. VMware Cloud Foundation Operations delivers these capabilities, enabling IT teams to ensure consistent access, security, and lifecycle management, as well as performance, cost efficiency, resource utilization, and infrastructure and application health. Whether you’re an experienced VCF administrator or beginning your VCF journey, you’ll leave this session equipped to confidently operate and optimize VMware Cloud Foundation at enterprise scale.

The presentation highlights the new features available with VCF Operations in VMware Cloud Foundation 9.0, focusing on fleet management and chargeback capabilities. Fleet management includes a unified single sign-on (SSO) capability via the VCF Identity Broker (VIDB), centralized certificate and password management, configuration drift updates, and simplified lifecycle management for applying patches and upgrades. The goal is to reduce the number of UIs and management points, providing a seamless operating experience from VCF Operations, along with automation and API improvements.

The chargeback feature streamlines FinOps processes by integrating financial management with operational processes, enabling cost transparency and accountability. Key capabilities include defining rate cards for compute, storage, and networking, generating bills on demand or on a schedule, and sharing bills with tenants, who can view detailed cost breakdowns within the VCF Automation console. The chargeback feature complements VCF’s showback capabilities, which provide visibility into the total cost of ownership, potential savings opportunities, and resource optimization. The demonstration illustrated cost-saving opportunities through resource reclamation, rightsizing, and transparent resource costs.
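A rate card in this model is essentially a price per metered unit; the following toy calculation (hypothetical rates, metric names, and usage figures, not VCF's actual schema) shows the shape of a chargeback bill:

```python
# Hypothetical rate card: price per metered unit for compute, storage,
# and networking. Rates and field names are invented for this sketch.
RATE_CARD = {
    "vcpu_hour": 0.02,          # compute
    "gb_storage_hour": 0.0001,  # storage
    "gb_network_egress": 0.01,  # networking
}

def monthly_bill(usage: dict[str, float]) -> float:
    """Sum metered usage against the rate card, rounded to cents."""
    return round(sum(RATE_CARD[k] * qty for k, qty in usage.items()), 2)

# Example tenant: 8 vCPUs and 200 GB of storage for a 730-hour month,
# plus 50 GB of network egress.
bill = monthly_bill({
    "vcpu_hour": 8 * 730,
    "gb_storage_hour": 200 * 730,
    "gb_network_egress": 50,
})
```

Showback uses the same arithmetic but stops at reporting the number to the tenant; chargeback actually issues the bill, which is the distinction the session drew.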


Best Practices for Adopting & Deploying VMware Cloud Foundation 9.0

Event:

Appearance: VMware Cloud Foundation 9.0 Showcase – Modern Private Cloud

Company: Broadcom

Video Links:

Personnel: Jared Burns

VMware Cloud Foundation 9.0 introduces significant architectural enhancements that impact how modern private clouds are built and managed. This session provides some of the real-world upgrade pathways for both existing VCF 5.x users and the broader base of non-VCF customers looking to adopt the platform. From greenfield deployments to brownfield upgrades, Broadcom’s Jared Burns walks through the best practices, key considerations, and deployment strategies that align with diverse IT environments and business needs. This session is designed for IT professionals, cloud architects, and decision-makers who want to understand VCF 9’s transformative architecture and gain actionable insights into a smooth upgrade.

Jared Burns of Broadcom highlights the new VCF 9 architecture, centered on the concept of a VCF fleet. A fleet consists of one or more VCF instances along with the VCF Operations and Automation components that run across them, enabling centralized management across multiple VCF instances, and multiple fleets can be grouped into a broader VMware Cloud Foundation private cloud. Key design considerations include centralized operations management, initial deployment based on the basic fleet deployment design, and flexibility with multiple clusters within a single domain. Four deployment designs are presented: Basic, Site High Availability, Disaster Recovery, and a combined HA/DR approach, with the Basic design serving as the foundation for the others.

A key shift in VCF 9 is the increased flexibility in storage options. While vSAN remains supported, Fibre Channel and NFS are now supported out of the box for the management domain, offering more choices for greenfield deployments. The presentation outlines detailed design decisions for greenfield deployments, including considerations for fault domains, operations placement, scale, and organizational separation. Two deployment models for operations, Simple and High Availability, are discussed, along with scalability options. Additional considerations include vCenter limits, host limits, HCL compliance, and IP/DNS requirements. The upgrade process emphasizes the importance of design planning and performing all prerequisites, due to changes like the removal of Enhanced Linked Mode and VMware Update Manager.

For upgrades from vSphere environments, a nine-step process is outlined, emphasizing the shift to keyless licensing and the move to vSphere Lifecycle Manager. The VCF installer now handles the conversion process, simplifying upgrades compared to previous versions. Customers are expected to be able to perform these upgrades themselves with the help of available upgrade guides. Significant changes include the replacement of Enhanced Linked Mode with VMware Cloud Foundation Operations and VMware Identity Broker, along with new IP address requirements and licensing procedures. Various import scenarios for workload domains are supported, including NSX-attached domains and standalone hosts. Two distinct depots must be configured: the SDDC Manager depot and the VCF Operations Fleet Manager depot.


What’s New in VMware Cloud Foundation 9.0

Event:

Appearance: VMware Cloud Foundation 9.0 Showcase – Modern Private Cloud

Company: Broadcom

Video Links:

Personnel: Sabina Anja

Step into the future of infrastructure modernization with VMware Cloud Foundation 9.0, the next evolution of VCF. In this session, Broadcom’s Sabina Anja will walk you through new innovations that will redefine how your private cloud operates. Explore features in lifecycle management, fleet management, virtual private cloud networking, and hyper-converged infrastructure (HCI) storage. Discover how these advancements will simplify deployment, streamline operational efficiency, and elevate infrastructure performance. Gain insights into the strategic implications of enhanced capabilities and learn how they empower your organization to build and manage a resilient, future-ready private cloud infrastructure.


Edge to Cloud Security: Harnessing NAC, SASE and ZTNA

Event: Networking Field Day 38

Appearance: HPE Aruba Networking Presents at Networking Field Day 38

Company: HPE Aruba Networking

Video Links:

Personnel: Adam Fuoss, Mathew George

See new Central cloud native NAC; SASE with SSE, SD-WAN & NAC; new ZTNA natively in SD-WAN Gateways. Adam Fuoss, VP of Product for EdgeConnect SD-WAN, outlined HPE Aruba Networking’s integrated SASE portfolio, comprising SSE (Security Service Edge) for cloud-based security focused on ZTNA (Zero Trust Network Access), EdgeConnect SD-WAN for connecting diverse locations, and ClearPass/NAC (Network Access Control). He highlighted the challenge of traditional ZTNA connectors, which often rely on virtual machines in data centers, leading to inefficient traffic hair-pinning when applications reside in branches. To address this, HPE Aruba Networking has integrated the SSE connector as a container directly into the EdgeConnect SD-WAN appliance, allowing users to connect to cloud security services and then directly to branch applications without backhauling traffic, significantly improving efficiency for distributed applications and remote contractors.

Mathew George, a Technical Marketing Engineer, then provided an overview of Central NAC, HPE Aruba Networking’s cloud-native NAC offering. This solution aims to simplify user and device connectivity by leveraging cloud-based identity sources like Google Workspace, Microsoft Entra, and Okta for authentication and authorization. Central NAC uses Client Insights for advanced device profiling, combining fingerprints with traffic flow information and AI/ML models for accurate classification. It integrates with third-party systems like MDM and EDR solutions to pull compliance attributes, which are then used in NAC policies. Central NAC also supports certificate-based authentication (including “Bring Your Own Certificate” with external PKI), MPSK (Multi-Pre-Shared Key) for user-based or admin-based device authentication, and various guest workflows. A key feature demonstrated was the real-time re-authentication and policy enforcement based on changes in the Identity Provider (IdP), showcasing true Zero Trust in action.

The presentation underscored HPE Aruba Networking’s commitment to a unified Zero Trust posture across their entire portfolio. The vision is for a single policy engine to enforce security from Wi-Fi and IoT devices all the way through switches, access points, gateways, and the SSE cloud. This includes multi-vendor support, allowing for VLAN enforcement on third-party switches like Cisco. While Central NAC streamlines simpler use cases, ClearPass continues to address more complex, on-premise requirements. The overall message emphasized leveraging telemetry-based networking and AI-driven insights to enhance security, improve endpoint experiences, and provide engineers with the necessary data to maintain optimal network performance, ultimately enabling a truly integrated security and networking approach from edge to cloud.


Expanding Access Points as a Platform Capabilities

Event: Networking Field Day 38

Appearance: HPE Aruba Networking Presents at Networking Field Day 38

Company: HPE Aruba Networking

Video Links:

Personnel: Jerrod Howard, Justin Sergi

This presentation shows the features & benefits of Wi-Fi 7 APs including flex-radios, dynamic antennas, IoT, containers and more. Jerrod Howard, a hardware product manager at Aruba, introduced the concept of the Access Point (AP) as a platform, highlighting how HPE Aruba Networking is building its Wi-Fi 7 portfolio with increasingly flexible radios and complex internal technologies. He explained the drive towards more adaptable radios that can serve different regulatory regions without restriction, allowing deployment as dual band APs to maximize radio utilization. A key innovation is the development of dynamic antennas, which allow a single AP SKU to function as both an omnidirectional and directional access point, configurable via software. This flexibility is particularly beneficial for environments with varying coverage needs, such as warehouses with sloped ceilings or high-density conference rooms that transition from empty to packed.

Justin Sergi, Product Manager covering IoT, further expanded on the “AP as a platform” concept by discussing HPE Aruba Networking’s IoT and containerization strategy. The goal is to consolidate parallel IoT overlays, allowing the AP to serve as a unified IoT gateway with onboard dual IoT radios and extensible USB ports. This evolution is supported by an “App Store” within Aruba Central, enabling customers to deploy various IoT integrations (e.g., electronic shelf labels, asset tracking, access control) as container-based workloads. This decoupling of IoT integrations from the AP’s operating system through a cloud-native microservices architecture significantly accelerates development and deployment. The developer portal further empowers partners to self-publish new applications, managing their own versioning and ensuring security within a container sandbox with limited resource access.

The discussion also touched upon the Smart Antenna Module (SAM), a sensor embedded in outdoor Wi-Fi 6E and Wi-Fi 7 APs. SAM identifies the antenna, carries RPE data (gain, beamwidth), and reports heading and downtilt, providing critical telemetry about the AP’s physical orientation and performance. This data, combined with advancements in software, will become increasingly crucial for optimizing Wi-Fi 7 deployments. The overall strategy underscores HPE Aruba Networking’s commitment to simplifying network management and extending the capabilities of the access point, turning it into a versatile edge device capable of supporting diverse applications, IoT integrations, and advanced AI functionalities, all while offering flexibility in deployment and reducing operational complexity.


Modernize Virtualization Stack with HPE Aruba Networking CX Switches

Event: Networking Field Day 38

Appearance: HPE Aruba Networking Presents at Networking Field Day 38

Company: HPE Aruba Networking

Video Links:

Personnel: Marty Ma

Marty Ma, Director of Product Management for HPE Aruba Networking’s CX switching strategy, presented on modernizing the virtualization stack with HPE Aruba Networking CX Switches. He introduced new products and recent integrations that unify HPE’s offerings. The CX switch portfolio, established in late 2016, now spans from 1 GbE to 400 GbE, all operating on a common software platform and managed by Aruba Central. This broad portfolio supports campus, branch, remote, access, aggregation, and core environments, as well as high-density data center deployments. A key innovation in late 2021 was the introduction of the first smart switch featuring a DPU (Data Processing Unit) from AMD Pensando, designed to bring services closer to the workload at the top of the rack. This was followed by the CX 10040, focused on 100 GbE server connectivity, both offering stateful Layer 4-7 services, East-West firewall inspection, and high-resolution telemetry.

Ma then linked this to the broader HPE GreenLake strategy, emphasizing the HPE Private Cloud Solution and HPE Private AI offerings, all orchestrated by HPE Morpheus software. He specifically highlighted Morpheus VM Essential, a next-generation virtualization management solution that unifies disparate hypervisor environments (like existing VMware ESXi clusters and KVM-based environments), all managed through a single pane of glass: HVM Manager. This aims to provide a more cost-effective alternative for customers concerned about rising hypervisor licensing costs. The overall HPE strategy positions them as a unique IT vendor capable of delivering a full hybrid cloud stack, with Morpheus for management and OpsRamp for comprehensive visibility across all environments.

The presentation underscored how the CX smart switching portfolio integrates with this virtualization strategy. Customers can leverage plugins to connect DPU-equipped switches into their hypervisor environments, enabling macro and micro-segmentation and advanced network services directly at the first-hop switch, even for bare-metal servers. This approach aims to simplify virtual networking across different hypervisors by offloading network policy enforcement to the hardware. The ongoing trend of increasing network speeds, driven by AI workloads, further validates HPE’s decision to integrate DPUs into switches, making the conversation about DPU placement more relevant than ever. The entire solution is designed to simplify complex end-to-end problems by providing a holistic view from compute and storage to networking, orchestrated and managed centrally.


Simplify Network Management with HPE Aruba Networking Central

Event: Networking Field Day 38

Appearance: HPE Aruba Networking Presents at Networking Field Day 38

Company: HPE Aruba Networking

Video Links:

Personnel: Dobias Van Ingen

Learn about AI, deep platform intelligence, self-optimizing, observability, troubleshooting and more. Dobias van Ingen, CTO and VP for System Engineers at HPE Aruba Networking, detailed the evolution of Aruba Central, emphasizing its role in addressing common enterprise challenges like domain fragmentation, policy inconsistency, experience gaps, high operational costs, data sovereignty, and vendor lock-in. Their journey began in 2017 by unifying wireless solutions under a single operating system, followed by unifying wired networks in 2019 to provide consistent role-based policies and application visibility. The latest step involved integrating fabrics with SD-Branch/SD-WAN applications (EVPN, VXLAN) and, crucially, developing a common operational model that supports various consumption models, including public cloud, managed services, Network as a Service (NaaS), and on-premise deployment of the same cloud software.

The presentation highlighted Aruba Central’s unique unified configuration model, allowing users to manage network infrastructure through UI, CLI, or API, with changes reflected consistently across all interfaces. This is powered by a central Yang model, ensuring that configurations for roles, profiles, and services are seamlessly applied across access points, gateways, and switches, regardless of their specific device persona. A key demonstration showcased how a single authentication server profile could be configured once and then assigned to various device functions and scopes (e.g., global, regional, device group), significantly simplifying management and reducing potential errors. Furthermore, the unified architecture ensures that all feature sets are available across on-premise and cloud deployments, with minor exceptions for some AI functionalities.
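The "configure once, assign everywhere" idea behind that unified, Yang-driven model can be sketched abstractly; this is a conceptual toy, not Aruba Central's actual data model:

```python
from dataclasses import dataclass

@dataclass
class AuthServerProfile:
    """A single authentication server definition, owned in one place."""
    server: str
    port: int = 1812

@dataclass
class Scope:
    """A device function or assignment scope (global, region, group)
    that references the shared profile rather than copying it."""
    name: str
    auth_profile: AuthServerProfile  # shared reference, not a copy

# Configure the profile once...
profile = AuthServerProfile(server="10.0.0.5")

# ...and assign it across scopes: APs, gateways, and switches alike.
scopes = [Scope("global", profile),
          Scope("emea-region", profile),
          Scope("branch-switches", profile)]

# One edit to the shared object is reflected everywhere it is assigned.
profile.server = "10.0.0.6"
```

Because every scope holds a reference to the same object, there is no per-device copy to drift out of sync, which is the error class the demonstration showed Central eliminating.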

Dobias also delved into user experience insights (UXI), which leverage sensors (physical or software agents) and network telemetry to provide comprehensive data on client and application performance. This includes detailed path analysis, identifying latency across hops and even into tunnels, offering crucial troubleshooting data that goes beyond traditional trace routes. The discussion then transitioned to Agent AI, the evolution beyond traditional and generative AI. Agent AI focuses on reasoning and autonomous action, allowing the system to combine vast knowledge bases with real-time network data to proactively identify issues and suggest or even schedule automated remediations (e.g., disabling an 802.11r amendment for problematic clients). This intelligence is surfaced through a “Network Copilot” interface, enabling natural language interaction for troubleshooting and automated problem-solving, along with multi-vendor operability (e.g., monitoring Cisco switches within Aruba Central) to ease transitions and prevent vendor lock-in.


HPE Aruba Networking Executive Overview with James Robertson

Event: Networking Field Day 38

Appearance: HPE Aruba Networking Presents at Networking Field Day 38

Company: HPE Aruba Networking

Video Links:

Personnel: James Robertson

James Robertson, VP & GM, kicked off the session by outlining HPE Aruba Networking’s focus on two significant industry shifts: AI for networking (AI-powered NetOps) and networking for AI. The former aims to enhance network efficiency and effectiveness using AI, while the latter is positioned as a new foundational infrastructure for AI workloads in data centers. He emphasized the critical role of data collection as the foundation for AI operations, explaining that a comprehensive data lake, fed by extensive telemetry from across the network, is essential for gaining true visibility and extracting actionable insights. This data-driven approach underpins their strategy to deliver security-first, AI-powered networking, leveraging machine learning to identify anomalies that humans might miss and to create an infrastructure that drives optimal user experiences.

Robertson highlighted three key differentiators for the Aruba portfolio. First, their full-stack integration means that all wired, wireless, and WAN components, whether on campus or in the cloud, are managed through Aruba Central, providing a single pane of glass for comprehensive visibility. Second, HPE Aruba Networking has expanded its deployment options beyond cloud-managed solutions, now supporting on-premise, near-premise, and sovereign (air-gapped) environments to meet diverse organizational needs. Third, they address the challenge of broad observability across the entire IT estate through the integration of OpsRamp, an HPE-acquired company. OpsRamp provides a unified view across security, networking, virtualized platforms, and storage, enriching the telemetry data fed into their AI models for deeper insights.

He further elaborated on HPE Aruba Networking’s AI journey, distinguishing between traditional AI (anomaly detection based on historical data), generative AI (leveraging larger datasets for decision-making and understanding), and the recently announced Agent AI. Agent AI, the latest advancement, focuses on reasoning, allowing the system to combine accumulated knowledge with real-time infrastructure data to proactively identify issues and suggest actions, mimicking human problem-solving. This entire AI framework is underpinned by Aruba Central, which fundamentally aims to connect and protect all infrastructure components, utilizing telemetry for security decisions and automating operations to provide network teams with real-time insights and control.


cPacket Observability for AI

Event: Networking Field Day 38

Appearance: cPacket Presents at Networking Field Day 38

Company: cPacket

Video Links:

Personnel: Erik Rudin, Ron Nevo

Modern AI workloads rely on high-performance, low-latency GPU clusters, but traditional observability tools fall short in diagnosing issues across these dense, distributed environments. In this session, cPacket explored how they augment GPU and storage telemetry (DCGM/NVML/IOPS) with full-fidelity packet insights. They covered how to correlate job scheduling, retransmissions, queue depth, and tensor-core utilization in real time, and how to establish performance baselines, auto-trigger mitigations, integrate with SRE dashboards, and continuously tune topologies for maximum AI throughput and resource efficiency. Erik Rudin and Ron Nevo introduced the emerging challenge of AI factories moving into enterprises, contrasting these inference workloads with the well-understood elephant flows of AI training in hyperscale data centers. Inference presents unique, less-understood traffic patterns, often driven by user or agent interactions and characterized by varying query-response ratios and KV cache management policies, all demanding optimal GPU utilization without sacrificing latency.

The core of cPacket’s solution for AI observability lies in supplementing traditional GPU telemetry with packet-level visibility, particularly on the north-south (front-end) network that connects AI clusters to the rest of the enterprise. This integration is crucial for pinpointing the exact source of latency (whether from the cluster, switch, or storage), identifying microbursts that internal switch telemetry might miss, and understanding session-level characteristics that impact AI workload performance. Unlike traditional network monitoring, which often falls short in these highly dynamic and dense environments, cPacket’s approach aims to provide the granular, real-time data necessary for continuous tuning and optimization of AI infrastructures.

Ultimately, cPacket emphasizes that observability for AI is essential for enterprises making significant investments in GPU workloads at the edge. The rapid evolution of AI necessitates a comprehensive approach that integrates packet insights, session metrics, and AI-driven analytics into existing SRE and NetOps workflows. This allows for proactive identification of anomalies, establishment of performance baselines, and continuous optimization of network topologies to ensure maximum AI throughput and resource efficiency, directly impacting the often high costs associated with AI downtime. The overarching message is to start with the business problem–understanding the specific challenges and desired outcomes for AI workloads–and then leverage cPacket’s integrated, open, and AI-infused platform to drive measurable improvements.


cPacket NOC–SOC Convergence: Compliance

Event: Networking Field Day 38

Appearance: cPacket Presents at Networking Field Day 38

Company: cPacket

Video Links:

Personnel: Erik Rudin, Ron Nevo

At Security Field Day 13, cPacket explored how Network Observability empowers SecOps teams to elevate their threat detection and response. In this session, they shifted the lens to NetOps, examining the growing convergence between NOC (Network Operations Center) and SOC (Security Operations Center) workflows. As performance and security become inseparable in hybrid and zero-trust environments, NetOps teams must adopt tools and practices that support both operational resilience and threat visibility. cPacket demonstrated how packet-based observability bridges this gap, enabling NetOps to detect lateral movement, validate policy compliance, and collaborate more effectively with security teams through shared context and real-time data. They emphasized that security is a top concern for all organizations, and the network provides crucial insights to surface issues like malware and vulnerabilities.

Ron Nevo explained how cPacket’s solution empowers NetOps to contribute significantly to the organization’s security posture. Their Deep Packet Inspection (DPI) engine extracts relevant information from every session, including DNS queries and HTTPS session metadata, even from encrypted traffic (e.g., domain names, TLS certificate validity). This raw data can be used to generate dashboards and reports that feed into security tools. A compelling demonstration involved using an LLM (Large Language Model) to prompt the system to generate a Grafana dashboard tailored to specific HIPAA regulations. This highlights the platform’s ability to create customized compliance reports without requiring deep knowledge of the underlying visualization tools, extending the reach of network observability for security and auditing purposes.

The discussion acknowledged that while AI can create sophisticated reports and highlight suspicious activities (e.g., identifying suspicious domain names by filtering out known benign traffic), human expertise remains crucial for validation and full compliance. The goal is not to replace human operators but to provide them with powerful tools that streamline data analysis, automate report generation, and surface critical insights. By integrating network insights directly into SOC tools and workflows, cPacket enables proactive detection of anomalies and alerts, strengthening the overall security posture and fostering better collaboration between network and security teams. The ultimate aim is to provide the right data to the right person or tool at the right time, enhancing the ability to respond to and prevent security incidents.


cPacket Proactive Service Assurance and Compliance

Event: Networking Field Day 38

Appearance: cPacket Presents at Networking Field Day 38

Company: cPacket

Video Links:

Personnel: Erik Rudin, Ron Nevo

Latency issues don’t always wait for end users to notice and neither should your operations team. In this session, cPacket demonstrated how they enable proactive latency detection using leading indicators, full-path packet monitoring, and anomaly detection. With integrations into LLM-powered workflows and platforms like Slack and ITSM, teams can resolve issues faster, tune alerts more precisely, and continuously improve visibility through real-time data and trend reporting. The core focus was on achieving proactive service assurance, shifting from a reactive “firefighting” model to one where issues are identified and resolved before they impact users, ideally reducing human-created incidents.

Ron Nevo elaborated on this “nirvana” state, where network operators can proactively assess network health using cPacket’s Observability AI. The system processes trillions of packets to distill vast amounts of data into a manageable handful of “insights,” highlighting what’s most important for a specific operator’s responsibilities. A key use case demonstrated this: querying the system for new insights over the past 24 hours. The LLM (Large Language Model) identified client latency issues and resource utilization problems on a core engineering server. While the interaction still requires a certain level of network engineering sophistication to interpret the insights, the goal is to simplify the discovery process and guide operators to critical areas.

cPacket’s approach relies on dynamic baselining, where the AI learns normal network behavior over time across various metrics and services, detecting anomalies that might indicate a problem before an outage occurs. While the presented prompts were complex, the long-term vision is to abstract this complexity, making the system more intuitive and capable of providing precise, actionable guidance. The ultimate value lies in accelerating the triage process, shortening the Mean Time To Detect (MTTD) and Mean Time To Resolve (MTTR) by integrating AI-driven insights with existing workflows and tools like Slack and ServiceNow. This approach aims to augment human operators, providing them with a powerful tool to proactively manage the network and ensure continuous service reliability.
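cPacket did not show the internals of its baselining pipeline, but the core idea of dynamic baselining can be illustrated with a minimal sketch: learn a rolling statistical profile of a metric and flag values that deviate sharply from it. The class name, window size, and z-score threshold below are all illustrative assumptions, not cPacket’s implementation.

```python
from collections import deque
import math

class DynamicBaseline:
    """Toy stand-in for per-service baselining: learn a rolling
    mean/std for one metric and flag large deviations as anomalies.
    Window and threshold values are illustrative."""

    def __init__(self, window=60, z_threshold=3.0):
        self.samples = deque(maxlen=window)  # recent observations
        self.z_threshold = z_threshold

    def observe(self, value):
        """Return True if `value` deviates anomalously from the baseline."""
        anomalous = False
        if len(self.samples) >= 10:  # require some history first
            mean = sum(self.samples) / len(self.samples)
            var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.z_threshold:
                anomalous = True
        self.samples.append(value)
        return anomalous

# Steady ~20 ms latency is learned as normal; a 120 ms spike is flagged.
baseline = DynamicBaseline()
normal = [baseline.observe(20 + (i % 3)) for i in range(30)]
spike = baseline.observe(120)
```

A production system would, as described above, keep separate baselines per service and per time-of-day/day-of-week bucket so that a busy Monday morning is not compared against a quiet Sunday night.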


cPacket Service Assurance: MTTR Reduction

Event: Networking Field Day 38

Appearance: cPacket Presents at Networking Field Day 38

Company: cPacket

Video Links:

Personnel: Erik Rudin, Ron Nevo

When service disruptions or connection issues impact key applications, speed of diagnosis is everything. This session highlighted how cPacket enables real-time monitoring, anomaly detection, and triage using packet-level data. It showcased how IT teams can use LLM-powered interaction, Observability AI baselining, and SIEM integration to accelerate resolution, reduce MTTI/MTTR, and deliver a better user experience across distributed infrastructure and business-critical workflows. Erik Rudin, Field CTO, set the stage by describing a reactive scenario where users are experiencing application issues, and the network appears normal initially. Ron Nevo, CTO, presented a real-world example from a large bank where a specific branch experienced intermittent remote desktop access failures due to a WAN acceleration device adding significant latency. This underscored the challenge of pinpointing issues in complex, multi-hop network paths without pervasive monitoring.

cPacket’s approach to reducing MTTR involves enhancing the user experience through AI-powered interaction. Instead of manually sifting through logs and dashboards, network operators can “chat” with the system, asking natural language questions to gain insights into service performance. The LLM (Large Language Model), in conjunction with AI agents and the MCP (Model Context Protocol), helps to process and contextualize data. A crucial aspect is Observability AI baselining, where cPacket’s machine learning pipeline automatically establishes baselines for various network metrics, accounting for service, time of day, and day of week. This allows the system to identify deviations from normal behavior, even if not immediately surfaced as an alert, and visually present these anomalies against the baseline to the user.

While acknowledging that advanced network engineering knowledge is still valuable, the aim is to simplify the troubleshooting process. The system can identify logical and physical network topologies and pinpoint where latency or other issues reside within the path. This AI-assisted workflow accelerates triage by providing relevant data and insights, shortening the time to detect, understand context, and identify the responsible component or team. cPacket emphasizes that this integration with existing IT workflows–including SIEM, ticketing systems like ServiceNow, and communication platforms like Slack–is critical for achieving measurable outcomes and continuous improvement in service delivery. The ultimate goal is to empower human operators with intelligent tools that streamline diagnostics and decision-making, rather than completely automating the resolution process.


cPacket Service Assurance: Realtime Video Production

Event: Networking Field Day 38

Appearance: cPacket Presents at Networking Field Day 38

Company: cPacket

Video Links:

Personnel: Erik Rudin, Ron Nevo

Real-time video environments demand precision and speed. Troubleshooting can’t wait for decoding or downstream analysis. In this session, cPacket explored how packet-level observability enables immediate detection of transport-layer issues like encoder faults, fiber/switch errors, and edge-to-cloud latency disruptions. They demonstrated how their observability solution, with real-time alerts, dynamic dashboards, and ServiceNow integration, empowers proactive monitoring and MTTR (Mean Time To Resolution) reduction across complex, long-path video delivery networks. Erik Rudin, Field CTO, introduced the scenario of live video streaming, emphasizing the critical importance of video quality for businesses. Ron Nevo, CTO, further detailed the intricate environment of live streaming, involving multiple cameras, production vans, cloud processing, transcoding, and distribution, all of which can introduce potential points of failure.

The core of cPacket’s approach is to deploy monitoring points throughout the video delivery path to quickly determine if an issue is network-related. For real-time video, the presence of even minimal packet loss is a clear indicator of a problem. cPacket’s solution continuously analyzes RTP (Real-time Transport Protocol) streams, triggering real-time alerts (e.g., via Slack) when packet loss increases. These alerts provide direct links to detailed analytics, allowing operators to pinpoint the exact location and nature of the fault, whether it’s a physical cable issue, a video machine problem, or a cloud link disruption. Furthermore, the system automatically creates tickets in existing IT service management tools like ServiceNow, ensuring that identified issues are integrated into the customer’s operational workflows for prompt resolution.
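The session did not show code, but the loss detection it described rests on a property of RTP itself: each packet carries a 16-bit sequence number, so gaps in the sequence directly reveal lost packets. The sketch below is a simplified illustration of that accounting, not cPacket’s implementation; a real monitor would also handle reordering and duplicates, and would fire a webhook (e.g., to Slack) when the loss count crosses a threshold.

```python
def rtp_loss(seq_numbers):
    """Count lost packets in an RTP stream from its 16-bit sequence
    numbers. Simplified sketch: reordering and duplicate packets
    are ignored."""
    lost = 0
    prev = None
    for seq in seq_numbers:
        if prev is not None:
            gap = (seq - prev) % 65536  # sequence space wraps at 2**16
            if gap > 1:
                lost += gap - 1  # packets skipped between prev and seq
        prev = seq
    return lost

# A contiguous stream shows no loss; a jump of 3 means 2 packets were
# lost; the 65535 -> 0 wrap-around is not mistaken for loss.
assert rtp_loss([10, 11, 12, 13]) == 0
assert rtp_loss([10, 11, 14]) == 2
assert rtp_loss([65534, 65535, 0, 1]) == 0
```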

This use case exemplifies cPacket’s broader strategy for service assurance, focusing on delivering actionable insights rather than just raw data. By acquiring and contextualizing packet data at line rate, integrating it into existing ecosystems, and leveraging AI for anomaly detection, cPacket aims to proactively identify and prevent service degradations. The emphasis is on improving the triage process and providing measurable outcomes, such as reduced MTTR and improved customer experience. The session underscored that AI serves as an augmentation to existing analytics, enhancing the ability to predict and prevent outages by identifying subtle patterns like under/overutilized links and their correlation to service degradation or security concerns.


cPacket Introduction with Mark Grodzinsky

Event: Networking Field Day 38

Appearance: cPacket Presents at Networking Field Day 38

Company: cPacket

Video Links:

Personnel: Mark Grodzinsky

cPacket’s presentations kicked off by revisiting highlights from previous Networking Field Day and Security Field Day events, providing an overview of the evolution of cPacket’s Network Observability platform and introducing AI-driven innovations, framed by their Value Equation and Customer Value Journey frameworks. Mark Grodzinsky, Chief Product and Marketing Officer, emphasized that while AI is currently at a peak of hype, cPacket views it as a tool, not a standalone solution. Their focus is on how AI, integrated within their network observability platform, drives tangible business outcomes. This approach is rooted in their belief that packet data remains the “single source of truth” for understanding the “what, where, when, and why” of network events, even as other telemetry data (metrics, events, logs, traces) serve important purposes.

cPacket’s observability platform has evolved significantly since its 2007 inception, highlighted by its role in the 2012 Olympics’ 10 GbE network. Key components include a packet broker with an FPGA on every port for high-precision data delivery, and advanced packet capture and analytics capabilities, supporting up to 200 Gbps concurrent write-to-disk with indexing. Their solutions address challenges like microbursts, which cause packet drops even when overall network capacity seems sufficient. Furthermore, cPacket emphasizes the convergence of network and security operations, advocating for a single source of truth–packet data–to enhance both operational efficiency and security posture, aiding in protection, detection, response, digital forensics, and compliance. AI, in this context, serves as a smart companion for setting deterministic thresholds and identifying anomalies proactively.
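Why microbursts evade averaged counters can be shown with a small sketch: traffic that looks light averaged over a second can still exceed line rate within a sub-millisecond window and overflow switch buffers. The function below is a rough illustration under assumed parameters (100 microsecond windows, a 10 Gbps link); it is not cPacket’s detection logic.

```python
def find_microbursts(packets, window_us=100, link_gbps=10):
    """Flag time windows where the instantaneous rate exceeds link
    capacity. `packets` is a list of (timestamp_us, size_bytes)
    tuples; window size and link speed are illustrative."""
    capacity_bits = link_gbps * 1e9 * (window_us / 1e6)  # bits per window
    buckets = {}
    for ts, size in packets:
        window = ts // window_us
        buckets[window] = buckets.get(window, 0) + size * 8
    return sorted(w for w, bits in buckets.items() if bits > capacity_bits)

# 2000 full-size frames crammed into one 100 us window far exceed
# 10 Gbps line rate for that window, even though sparse background
# traffic keeps the one-second average utilization low.
burst = [(50, 1500)] * 2000                           # all in window 0
background = [(i * 1000, 100) for i in range(1000)]   # sparse traffic
windows = find_microbursts(burst + background)
```

This is why packet-level acquisition with fine-grained timestamps, rather than per-second SNMP-style counters, is needed to see these events at all.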

The company’s core mission is service assurance, achieved through pervasive, independent, open, and scalable observability. Erik Rudin, Field CTO, highlighted the increasing complexity of modern hybrid and multi-cloud environments, stressing the critical need for monitoring key links to ensure mission-critical application performance. cPacket’s solution begins with nanosecond-precision packet acquisition and immediate metric collection, enabling the identification of patterns like microbursts and low-level latency. This rich data is integrated into their own capture devices for session analytics and correlation, and also exposed through open APIs for integration with existing customer tools and data lakes. They introduced the Value Equation, a framework that connects raw data and AI insights to measurable business outcomes, and the Customer Value Journey, which guides customers through understanding their business problems, integrating cPacket’s technology, validating its impact, and achieving continuous improvement in network and security operations.


Hedgehog Gateway Demonstration

Event: Networking Field Day 38

Appearance: Hedgehog Presents at Networking Field Day 38

Company: Hedgehog

Video Links:

Personnel: Manish Vachharajani, Sergei Lukianov

Hedgehog CTO Manish Vachharajani explained how Hedgehog gateway peering functions as a new component to overcome limitations of switch-based VPC peering. While switch-based peering offers full cut-through bandwidth, traditional switches lack the CPU and RAM for stateful network functions like firewalling, NAT, and handling large routing tables or TCP termination. The Hedgehog Gateway addresses this by leveraging a CPU-rich, high-bandwidth server positioned in the traffic flow between VPCs. This commodity hardware, combined with modern NICs featuring hardware offloads for NAT and VXLAN, can achieve significant throughput (initially targeting 40 Gbps, with plans for 100 Gbps and higher). The gateway operates by acting as a VTEP and selectively advertising routes to attract specific traffic, performing necessary network transformations (including implied NAT as demonstrated), and then re-encapsulating and transmitting packets to their destination VPC.

Sergei Lukianov, Chief Architect, demonstrated VPC peering with basic firewall functions that aim to replace Zipline’s existing Palo Alto Firewalls. The demo illustrated how the gateway enables communication between VPCs with overlapping IP addresses by performing NAT. This involves the gateway advertising NAT’d IP prefixes into the VRFs of peered VPCs, allowing traffic to be routed through the gateway. The demonstration highlighted the comprehensive visibility provided by Hedgehog’s data plane on the gateway, offering insights into traffic flow that traditional switches often lack. While introducing a slight latency increase due to the additional hops (though the demo used debug images, exaggerating this), the gateway offers significantly more flexibility and functionality than switch-based peering.

Looking ahead, Hedgehog plans to enhance the gateway’s capabilities by moving the software onto DPUs (Data Processing Units) within the host, such as NVIDIA Bluefield, for improved performance and scalability. This approach would significantly reduce latency and allow for deeper network extension into virtual environments like VMs and containers. The gateway also includes basic security functionalities like ACLs and port forwarding, with a roadmap to add more advanced features like DDoS protection, IDS/IPS, and Layer 7 inspection as per customer demand or open-source contributions. Furthermore, Hedgehog aims to support multi-data center deployments through Kubernetes Federation, allowing independent clusters to connect via gateway tunnels while presenting a unified API to the end-user.


Hedgehog VPC Peering Demonstration

Event: Networking Field Day 38

Appearance: Hedgehog Presents at Networking Field Day 38

Company: Hedgehog

Video Links:

Personnel: Manish Vachharajani, Sergei Lukianov

Hedgehog CTO Manish Vachharajani reviewed how Hedgehog simplifies AI networking with a Virtual Private Cloud (VPC) abstraction used by customers like Zipline, emphasizing the complexities of designing modern GPU training networks with multiple ports and intricate configurations. Hedgehog addresses this by providing two main abstractions: a low-level wiring diagram for defining physical topology (like leaf/spine connections and AI-specific settings for RDMA traffic), and a VPC operational abstraction for partitioning clusters into multi-tenant environments. This approach leverages the Kubernetes API for configuration, offering a well-known interface with a rich ecosystem of tools for role-based access control and extending its capabilities to manage the physical network. Once the wiring diagram is fed into the Kubernetes API, Hedgehog automates the provisioning, booting, and configuration of network operating systems and agents on the switches, ensuring the specified network policies are enforced.

The core of Hedgehog’s multitenancy solution lies in its VPC abstraction, enabling the creation of isolated network environments with configurable DHCP, IP ranges, and host routes, supporting both L2 and L3 modes. This abstraction automates the complexities of BGP EVPN, VLANs, and route leaks, which are typically manual and error-prone configurations. To facilitate communication between these isolated VPCs, Hedgehog introduces VPC peering, a simple Kubernetes object that automatically configures the necessary route leaks, allowing specified subnets to communicate securely. This eliminates the need for manual route maps and ACLs, significantly simplifying inter-VPC connectivity and reducing the risk of misconfigurations.
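Since the peering object is a standard Kubernetes resource, it can be expressed as a short YAML manifest and applied with `kubectl`. The exact CRD fields were not shown in detail during the session, so the shape below is a hypothetical sketch of what such a manifest might look like; the API group, version, and field names are assumptions.

```yaml
# Hypothetical sketch of a Hedgehog VPCPeering manifest -- field names
# are illustrative, not the confirmed CRD schema.
apiVersion: vpc.githedgehog.com/v1alpha2
kind: VPCPeering
metadata:
  name: vpc-1--vpc-2
  namespace: default
spec:
  permit:
    - vpc-1:
        subnets: [default]
      vpc-2:
        subnets: [default]
```

Applying a manifest like this (e.g., `kubectl apply -f peering.yaml`) would hand the object to the fabric controller, which translates it into the route leaks between VRFs described above, with no manual route maps or ACLs.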

Sergei Lukianov, Hedgehog’s Chief Architect, demonstrated the provisioning of tenant VPCs and VPC peering on a three-switch topology (one spine, two leaves). The demo showed that without peering, direct communication between servers in different VPCs (e.g., Server 1 in VPC1 and Server 4 in VPC2) fails. However, by applying a simple peering YAML file to the Kubernetes API, the network automatically reconfigures, enabling successful communication. This process involves the Hedgehog fabric controller translating the peering object into switch configurations, including route leaking between VRFs (Virtual Routing and Forwarding instances). The demonstration also showcased Grafana Cloud integration for collecting and exporting detailed network metrics (counters, queues, logs) from switches and the control node, providing turnkey observability without extensive manual configuration. Manish further explained the limitations of purely switch-based peering for external connectivity, setting the stage for the upcoming discussion on gateway services.