Edge to Cloud Security: Harnessing NAC, SASE and ZTNA

Event: Networking Field Day 38

Appearance: HPE Aruba Networking Presents at Networking Field Day 38

Company: HPE Aruba Networking

Video Links:

Personnel: Adam Fuoss, Mathew George

This presentation covers the new Central cloud-native NAC, SASE combining SSE, SD-WAN, and NAC, and new ZTNA delivered natively in SD-WAN gateways. Adam Fuoss, VP of Product for EdgeConnect SD-WAN, outlined HPE Aruba Networking’s integrated SASE portfolio, comprising SSE (Security Service Edge) for cloud-based security focused on ZTNA (Zero Trust Network Access), EdgeConnect SD-WAN for connecting diverse locations, and ClearPass/NAC (Network Access Control). He highlighted the challenge of traditional ZTNA connectors, which often rely on virtual machines in data centers, leading to inefficient traffic hair-pinning when applications reside in branches. To address this, HPE Aruba Networking has integrated the SSE connector as a container directly into the EdgeConnect SD-WAN appliance, allowing users to connect to cloud security services and then directly to branch applications without backhauling traffic, significantly improving efficiency for distributed applications and remote contractors.

Mathew George, a Technical Marketing Engineer, then provided an overview of Central NAC, HPE Aruba Networking’s cloud-native NAC offering. This solution aims to simplify user and device connectivity by leveraging cloud-based identity sources like Google Workspace, Microsoft Entra, and Okta for authentication and authorization. Central NAC uses Client Insights for advanced device profiling, combining fingerprints with traffic flow information and AI/ML models for accurate classification. It integrates with third-party systems like MDM and EDR solutions to pull compliance attributes, which are then used in NAC policies. Central NAC also supports certificate-based authentication (including “Bring Your Own Certificate” with external PKI), MPSK (Multi-Pre-Shared Key) for user-based or admin-based device authentication, and various guest workflows. A key feature demonstrated was the real-time re-authentication and policy enforcement based on changes in the Identity Provider (IdP), showcasing true Zero Trust in action.
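
As a rough illustration of how such a policy decision could be expressed, the sketch below combines an IdP group lookup with an MDM/EDR compliance attribute to choose a role and VLAN. The group names, attribute keys, and values are invented for this example; it is not HPE Aruba Networking’s policy engine or API.

```python
# Hypothetical illustration of a cloud-NAC authorization decision.
# Group names, attribute keys, and role/VLAN values are invented for this sketch.

def authorize(identity: dict, posture: dict) -> dict:
    """Combine IdP group membership with MDM/EDR compliance to pick a role."""
    groups = set(identity.get("groups", []))
    compliant = posture.get("mdm_compliant", False) and not posture.get("edr_alert", False)

    if "contractors" in groups:
        role, vlan = "contractor", 40
    elif "employees" in groups and compliant:
        role, vlan = "employee", 10
    elif "employees" in groups:
        role, vlan = "quarantine", 99   # non-compliant device lands in a remediation VLAN
    else:
        role, vlan = "guest", 50

    return {"role": role, "vlan": vlan, "reauth_on_idp_change": True}


print(authorize(
    identity={"user": "asha", "groups": ["employees"]},
    posture={"mdm_compliant": False, "edr_alert": False},
))  # -> quarantine / VLAN 99
```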

The presentation underscored HPE Aruba Networking’s commitment to a unified Zero Trust posture across their entire portfolio. The vision is for a single policy engine to enforce security from Wi-Fi and IoT devices all the way through switches, access points, gateways, and the SSE cloud. This includes multi-vendor support, allowing for VLAN enforcement on third-party switches like Cisco. While Central NAC streamlines simpler use cases, ClearPass continues to address more complex, on-premise requirements. The overall message emphasized leveraging telemetry-based networking and AI-driven insights to enhance security, improve endpoint experiences, and provide engineers with the necessary data to maintain optimal network performance, ultimately enabling a truly integrated security and networking approach from edge to cloud.


Expanding Access Points as a Platform Capabilities

Event: Networking Field Day 38

Appearance: HPE Aruba Networking Presents at Networking Field Day 38

Company: HPE Aruba Networking

Video Links:

Personnel: Jerrod Howard, Justin Sergi

This presentation shows the features & benefits of Wi-Fi 7 APs including flex-radios, dynamic antennas, IoT, containers and more. Jerrod Howard, a hardware product manager at Aruba, introduced the concept of the Access Point (AP) as a platform, highlighting how HPE Aruba Networking is building its Wi-Fi 7 portfolio with increasingly flexible radios and complex internal technologies. He explained the drive towards more adaptable radios that can serve different regulatory regions without restriction, allowing deployment as dual band APs to maximize radio utilization. A key innovation is the development of dynamic antennas, which allow a single AP SKU to function as both an omnidirectional and directional access point, configurable via software. This flexibility is particularly beneficial for environments with varying coverage needs, such as warehouses with sloped ceilings or high-density conference rooms that transition from empty to packed.

Justin Sergi, Product Manager covering IoT, further expanded on the “AP as a platform” concept by discussing HPE Aruba Networking’s IoT and containerization strategy. The goal is to consolidate parallel IoT overlays, allowing the AP to serve as a unified IoT gateway with onboard dual IoT radios and extensible USB ports. This evolution is supported by an “App Store” within Aruba Central, enabling customers to deploy various IoT integrations (e.g., electronic shelf labels, asset tracking, access control) as container-based workloads. This decoupling of IoT integrations from the AP’s operating system through a cloud-native microservices architecture significantly accelerates development and deployment. The developer portal further empowers partners to self-publish new applications, managing their own versioning and ensuring security within a container sandbox with limited resource access.
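
To make the sandboxing idea concrete, here is a generic container launch with explicit memory, CPU, filesystem, and capability limits, written against the standard Docker SDK for Python. It is only an analogy for constraining a third-party workload; the image name and limits are invented, and this is not Aruba’s actual AP container runtime or App Store workflow.

```python
# Generic container-sandboxing illustration using the Docker SDK (docker-py).
# This is NOT Aruba's AP container runtime; the image name and limits are examples.
import docker

client = docker.from_env()

container = client.containers.run(
    image="example/esl-gateway:1.0",   # hypothetical IoT integration image
    detach=True,
    mem_limit="128m",                  # cap memory available to the workload
    nano_cpus=250_000_000,             # limit to 0.25 CPU
    read_only=True,                    # no writes to the root filesystem
    cap_drop=["ALL"],                  # drop Linux capabilities
    network_mode="bridge",             # keep it off the host network namespace
)
print(container.short_id, container.status)
```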

The discussion also touched upon the Smart Antenna Module (SAM), a sensor embedded in outdoor Wi-Fi 6E and Wi-Fi 7 APs. SAM identifies the antenna, carries RPE data (gain, beamwidth), and reports heading and downtilt, providing critical telemetry about the AP’s physical orientation and performance. This data, combined with advancements in software, will become increasingly crucial for optimizing Wi-Fi 7 deployments. The overall strategy underscores HPE Aruba Networking’s commitment to simplifying network management and extending the capabilities of the access point, turning it into a versatile edge device capable of supporting diverse applications, IoT integrations, and advanced AI functionalities, all while offering flexibility in deployment and reducing operational complexity.


Modernize Virtualization Stack with HPE Aruba Networking CX Switches

Event: Networking Field Day 38

Appearance: HPE Aruba Networking Presents at Networking Field Day 38

Company: HPE Aruba Networking

Video Links:

Personnel: Marty Ma

Marty Ma, Director of Product Management for HPE Aruba Networking’s CX switching strategy, presented on modernizing the virtualization stack with HPE Aruba Networking CX Switches. He introduced new products and recent integrations that unify HPE’s offerings. The CX switch portfolio, established in late 2016, now spans from 1 GbE to 400 GbE, all operating on a common software platform and managed by Aruba Central. This broad portfolio supports campus, branch, remote, access, aggregation, and core environments, as well as high-density data center deployments. A key innovation in late 2021 was the introduction of the first smart switch featuring a DPU (Data Processing Unit) from AMD Pensando, designed to bring services closer to the workload at the top of the rack. This was followed by the CX 10040, focused on 100 GbE server connectivity, both offering stateful Layer 4-7 services, East-West firewall inspection, and high-resolution telemetry.

Ma then linked this to the broader HPE GreenLake strategy, emphasizing the HPE Private Cloud Solution and HPE Private AI offerings, all orchestrated by HPE Morpheus software. He specifically highlighted Morpheus VM Essentials, a next-generation virtualization management solution that unifies disparate hypervisor environments (like existing VMware ESXi clusters and KVM-based environments), all managed through a single pane of glass: HVM Manager. This aims to provide a more cost-effective alternative for customers concerned about rising hypervisor licensing costs. The overall HPE strategy positions them as a unique IT vendor capable of delivering a full hybrid cloud stack, with Morpheus for management and OpsRamp for comprehensive visibility across all environments.

The presentation underscored how the CX smart switching portfolio integrates with this virtualization strategy. Customers can leverage plugins to connect DPU-equipped switches into their hypervisor environments, enabling macro and micro-segmentation and advanced network services directly at the first-hop switch, even for bare-metal servers. This approach aims to simplify virtual networking across different hypervisors by offloading network policy enforcement to the hardware. The ongoing trend of increasing network speeds, driven by AI workloads, further validates HPE’s decision to integrate DPUs into switches, making the conversation about DPU placement more relevant than ever. The entire solution is designed to simplify complex end-to-end problems by providing a holistic view from compute and storage to networking, orchestrated and managed centrally.


Simplify Network Management with HPE Aruba Networking Central

Event: Networking Field Day 38

Appearance: HPE Aruba Networking Presents at Networking Field Day 38

Company: HPE Aruba Networking

Video Links:

Personnel: Dobias Van Ingen

Learn about AI, deep platform intelligence, self-optimizing, observability, troubleshooting and more. Dobias van Ingen, CTO and VP for System Engineers at HPE Aruba Networking, detailed the evolution of Aruba Central, emphasizing its role in addressing common enterprise challenges like domain fragmentation, policy inconsistency, experience gaps, high operational costs, data sovereignty, and vendor lock-in. Their journey began in 2017 by unifying wireless solutions under a single operating system, followed by unifying wired networks in 2019 to provide consistent role-based policies and application visibility. The latest step involved integrating fabrics with SD-Branch/SD-WAN applications (EVPN, VXLAN) and, crucially, developing a common operational model that supports various consumption models, including public cloud, managed services, Network as a Service (NaaS), and on-premise deployment of the same cloud software.

The presentation highlighted Aruba Central’s unique unified configuration model, allowing users to manage network infrastructure through UI, CLI, or API, with changes reflected consistently across all interfaces. This is powered by a central YANG model, ensuring that configurations for roles, profiles, and services are seamlessly applied across access points, gateways, and switches, regardless of their specific device persona. A key demonstration showcased how a single authentication server profile could be configured once and then assigned to various device functions and scopes (e.g., global, regional, device group), significantly simplifying management and reducing potential errors. Furthermore, the unified architecture ensures that all feature sets are available across on-premise and cloud deployments, with minor exceptions for some AI functionalities.
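
A hypothetical sketch of that “define once, assign to scopes” flow is shown below. The REST endpoints, payload fields, and scope names are placeholders invented for illustration, not HPE Aruba Networking Central’s documented API.

```python
# Hypothetical sketch of "define once, assign to scopes" configuration.
# Endpoints, payload fields, and scope names are illustrative only and are not
# HPE Aruba Networking Central's documented API.
import requests

BASE = "https://central.example.com/api"          # placeholder URL
HEADERS = {"Authorization": "Bearer <token>"}     # placeholder token

# 1. Create a single authentication-server profile.
profile = {
    "name": "corp-radius",
    "type": "radius",
    "host": "10.1.1.10",
    "auth_port": 1812,
    "shared_secret": "<secret>",
}
requests.post(f"{BASE}/profiles/auth-servers", json=profile, headers=HEADERS)

# 2. Assign the same profile to different device functions and scopes.
for scope, persona in [("global", "access-point"), ("emea", "gateway"), ("site-12", "switch")]:
    requests.post(
        f"{BASE}/scopes/{scope}/assignments",
        json={"profile": "corp-radius", "persona": persona},
        headers=HEADERS,
    )
```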

Dobias also delved into User Experience Insight (UXI), which leverages sensors (physical or software agents) and network telemetry to provide comprehensive data on client and application performance. This includes detailed path analysis, identifying latency across hops and even into tunnels, offering crucial troubleshooting data that goes beyond traditional trace routes. The discussion then transitioned to Agent AI, the evolution beyond traditional and generative AI. Agent AI focuses on reasoning and autonomous action, allowing the system to combine vast knowledge bases with real-time network data to proactively identify issues and suggest or even schedule automated remediations (e.g., disabling 802.11r for problematic clients). This intelligence is surfaced through a “Network Copilot” interface, enabling natural language interaction for troubleshooting and automated problem-solving, along with multi-vendor operability (e.g., monitoring Cisco switches within Aruba Central) to ease transitions and prevent vendor lock-in.
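
The hop-by-hop attribution idea behind that path analysis can be illustrated with a few lines of arithmetic: given cumulative latency measurements along a path (values invented here), the per-segment contribution is the difference between adjacent hops, which points at the segment adding the most delay.

```python
# Toy hop-by-hop latency attribution; hop names and values are invented.
path = [
    ("client -> AP", 2.0),          # cumulative round-trip latency in ms
    ("AP -> gateway", 3.5),
    ("gateway -> tunnel head", 4.0),
    ("tunnel head -> app", 42.0),
]

segments = []
previous = 0.0
for hop, cumulative in path:
    segments.append((hop, cumulative - previous))
    previous = cumulative

worst_hop, worst_delta = max(segments, key=lambda s: s[1])
print(f"largest contribution: {worst_hop} (+{worst_delta:.1f} ms)")
```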


HPE Aruba Networking Executive Overview with James Robertson

Event: Networking Field Day 38

Appearance: HPE Aruba Networking Presents at Networking Field Day 38

Company: HPE Aruba Networking

Video Links:

Personnel: James Robertson

James Robertson, VP & GM, kicked off the session by outlining HPE Aruba Networking’s focus on two significant industry shifts: AI for networking (AI-powered NetOps) and networking for AI. The former aims to enhance network efficiency and effectiveness using AI, while the latter is positioned as a new foundational infrastructure for AI workloads in data centers. He emphasized the critical role of data collection as the foundation for AI operations, explaining that a comprehensive data lake, fed by extensive telemetry from across the network, is essential for gaining true visibility and extracting actionable insights. This data-driven approach underpins their strategy to deliver security-first, AI-powered networking, leveraging machine learning to identify anomalies that humans might miss and to create an infrastructure that drives optimal user experiences.

Robertson highlighted three key differentiators for the Aruba portfolio. First, their full-stack integration means that all wired, wireless, and WAN components, whether on campus or in the cloud, are managed through Aruba Central, providing a single pane of glass for comprehensive visibility. Second, HPE Aruba Networking has expanded its deployment options beyond cloud-managed solutions, now supporting on-premise, near-premise, and sovereign (air-gapped) environments to meet diverse organizational needs. Third, they address the challenge of broad observability across the entire IT estate through the integration of OpsRamp, an HPE-acquired company. OpsRamp provides a unified view across security, networking, virtualized platforms, and storage, enriching the telemetry data fed into their AI models for deeper insights.

He further elaborated on HPE Aruba Networking’s AI journey, distinguishing between traditional AI (anomaly detection based on historical data), generative AI (leveraging larger datasets for decision-making and understanding), and the recently announced Agent AI. Agent AI, the latest advancement, focuses on reasoning, allowing the system to combine accumulated knowledge with real-time infrastructure data to proactively identify issues and suggest actions, mimicking human problem-solving. This entire AI framework is underpinned by Aruba Central, which fundamentally aims to connect and protect all infrastructure components, utilizing telemetry for security decisions and automating operations to provide network teams with real-time insights and control.


cPacket Observability for AI

Event: Networking Field Day 38

Appearance: cPacket Presents at Networking Field Day 38

Company: cPacket

Video Links:

Personnel: Erik Rudin, Ron Nevo

Modern AI workloads rely on high-performance, low-latency GPU clusters, but traditional observability tools fall short in diagnosing issues across these dense, distributed environments. In this session, cPacket explored how they augment GPU and storage telemetry (DCGM/NVML/IOPS) with full-fidelity packet insights. They covered how to correlate job scheduling, retransmissions, queue depth, and tensor-core utilization in real time, and how to establish performance baselines, auto-trigger mitigations, integrate with SRE dashboards, and continuously tune topologies for maximum AI throughput and resource efficiency. Erik Rudin and Ron Nevo introduced the emerging challenge of AI factories moving into enterprises, contrasting these inference workloads with the well-understood elephant flows of AI training in hyperscale data centers. Inference presents unique, less-understood traffic patterns, often driven by user or agent interactions and characterized by varying query-response ratios and KV cache management policies, all demanding optimal GPU utilization without sacrificing latency.

The core of cPacket’s solution for AI observability lies in supplementing traditional GPU telemetry with packet-level visibility, particularly on the north-south (front-end) network that connects AI clusters to the rest of the enterprise. This integration is crucial for pinpointing the exact source of latency (whether from the cluster, switch, or storage), identifying microbursts that internal switch telemetry might miss, and understanding session-level characteristics that impact AI workload performance. Unlike traditional network monitoring, which often falls short in these highly dynamic and dense environments, cPacket’s approach aims to provide the granular, real-time data necessary for continuous tuning and optimization of AI infrastructures.
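
One way to picture the correlation of GPU telemetry with packet-derived metrics described above is to align two already-collected time series, GPU utilization in the style of DCGM telemetry and TCP retransmission counts derived from packets, on their timestamps. The data, column names, and thresholds below are invented, and this is not cPacket’s implementation.

```python
# Illustrative correlation of GPU telemetry with packet-derived retransmissions.
# Data, column names, and thresholds are invented; not cPacket's implementation.
import pandas as pd

gpu = pd.DataFrame({
    "ts": pd.to_datetime(["2025-06-10 12:00:00", "2025-06-10 12:00:01", "2025-06-10 12:00:02"]),
    "gpu_util_pct": [96, 54, 41],
})
net = pd.DataFrame({
    "ts": pd.to_datetime(["2025-06-10 12:00:00", "2025-06-10 12:00:01", "2025-06-10 12:00:02"]),
    "tcp_retransmits": [0, 180, 240],
})

# Join the nearest samples within half a second of each other.
merged = pd.merge_asof(gpu.sort_values("ts"), net.sort_values("ts"),
                       on="ts", tolerance=pd.Timedelta("500ms"), direction="nearest")

# Flag intervals where utilization collapses while retransmissions spike.
suspect = merged[(merged["gpu_util_pct"] < 60) & (merged["tcp_retransmits"] > 100)]
print(suspect)
```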

Ultimately, cPacket emphasizes that observability for AI is essential for enterprises making significant investments in GPU workloads at the edge. The rapid evolution of AI necessitates a comprehensive approach that integrates packet insights, session metrics, and AI-driven analytics into existing SRE and NetOps workflows. This allows for proactive identification of anomalies, establishment of performance baselines, and continuous optimization of network topologies to ensure maximum AI throughput and resource efficiency, directly impacting the often high costs associated with AI downtime. The overarching message is to start with the business problem–understanding the specific challenges and desired outcomes for AI workloads–and then leverage cPacket’s integrated, open, and AI-infused platform to drive measurable improvements.


cPacket NOC–SOC Convergence: Compliance

Event: Networking Field Day 38

Appearance: cPacket Presents at Networking Field Day 38

Company: cPacket

Video Links:

Personnel: Erik Rudin, Ron Nevo

At Security Field Day 13, cPacket explored how Network Observability empowers SecOps teams to elevate their threat detection and response. In this session, they shifted the lens to NetOps, examining the growing convergence between NOC (Network Operations Center) and SOC (Security Operations Center) workflows. As performance and security become inseparable in hybrid and zero-trust environments, NetOps teams must adopt tools and practices that support both operational resilience and threat visibility. cPacket demonstrated how packet-based observability bridges this gap, enabling NetOps to detect lateral movement, validate policy compliance, and collaborate more effectively with security teams through shared context and real-time data. They emphasized that security is a top concern for all organizations, and the network provides crucial insights to surface issues like malware and vulnerabilities.

Ron Nevo explained how cPacket’s solution empowers NetOps to contribute significantly to the organization’s security posture. Their Deep Packet Inspection (DPI) engine extracts relevant information from every session, including DNS queries and HTTPS transactions, even from encrypted traffic (e.g., domain names, TLS certificate validity). This raw data can be used to generate dashboards and reports that feed into security tools. A compelling demonstration involved using an LLM (Large Language Model) to prompt the system to generate a Grafana dashboard tailored to specific HIPAA regulations. This highlights the platform’s ability to create customized compliance reports without requiring deep knowledge of the underlying visualization tools, extending the reach of network observability for security and auditing purposes.
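
As a toy illustration of turning DPI-derived DNS records into a security signal, the snippet below filters out known-benign domains and flags high-entropy hostnames. The record format, allowlist, and threshold are invented, not cPacket’s detection logic.

```python
# Toy example: filter DPI-derived DNS records against known-benign domains and
# flag high-entropy names. The record format and threshold are invented.
import math
from collections import Counter

BENIGN = {"office365.com", "windowsupdate.com", "example-hospital.org"}

def entropy(label: str) -> float:
    """Shannon entropy of a hostname label, in bits per character."""
    counts = Counter(label)
    return -sum(c / len(label) * math.log2(c / len(label)) for c in counts.values())

records = [
    {"client": "10.2.3.4", "query": "login.office365.com"},
    {"client": "10.2.3.9", "query": "xk2q9z7fj3vd81.top"},
]

for rec in records:
    domain = ".".join(rec["query"].split(".")[-2:])
    if domain not in BENIGN and entropy(rec["query"].split(".")[0]) > 3.5:
        print("suspicious:", rec)
```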

The discussion acknowledged that while AI can create sophisticated reports and highlight suspicious activities (e.g., identifying suspicious domain names by filtering out known benign traffic), human expertise remains crucial for validation and full compliance. The goal is not to replace human operators but to provide them with powerful tools that streamline data analysis, automate report generation, and surface critical insights. By integrating network insights directly into SOC tools and workflows, cPacket enables proactive detection of anomalies and alerts, strengthening the overall security posture and fostering better collaboration between network and security teams. The ultimate aim is to provide the right data to the right person or tool at the right time, enhancing the ability to respond to and prevent security incidents.


cPacket Proactive Service Assurance and Compliance

Event: Networking Field Day 38

Appearance: cPacket Presents at Networking Field Day 38

Company: cPacket

Video Links:

Personnel: Erik Rudin, Ron Nevo

Latency issues don’t always wait for end users to notice and neither should your operations team. In this session, cPacket demonstrated how they enable proactive latency detection using leading indicators, full-path packet monitoring, and anomaly detection. With integrations into LLM-powered workflows and platforms like Slack and ITSM, teams can resolve issues faster, tune alerts more precisely, and continuously improve visibility through real-time data and trend reporting. The core focus was on achieving proactive service assurance, shifting from a reactive “firefighting” model to one where issues are identified and resolved before they impact users, ideally reducing human-created incidents.

Ron Nevo elaborated on this “nirvana” state, where network operators can proactively assess network health using cPacket’s Observability AI. The system processes trillions of packets to distill vast amounts of data into a manageable handful of “insights,” highlighting what’s most important for a specific operator’s responsibilities. A key use case demonstrated this: querying the system for new insights over the past 24 hours. The LLM (Large Language Model) identified client latency issues and resource utilization problems on a core engineering server. While the interaction still requires a certain level of network engineering sophistication to interpret the insights, the goal is to simplify the discovery process and guide operators to critical areas.

cPacket’s approach relies on dynamic baselining, where the AI learns normal network behavior over time across various metrics and services, detecting anomalies that might indicate a problem before an outage occurs. While the presented prompts were complex, the long-term vision is to abstract this complexity, making the system more intuitive and capable of providing precise, actionable guidance. The ultimate value lies in accelerating the triage process, shortening the Mean Time To Detect (MTTD) and Mean Time To Resolve (MTTR) by integrating AI-driven insights with existing workflows and tools like Slack and ServiceNow. This approach aims to augment human operators, providing them with a powerful tool to proactively manage the network and ensure continuous service reliability.
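
A minimal sketch of dynamic baselining, assuming a single metric stream: an exponentially weighted mean and variance track “normal” and large deviations are flagged. The values and thresholds are invented; this is not cPacket’s Observability AI pipeline.

```python
# Minimal dynamic-baselining sketch: an exponentially weighted mean/variance
# learns "normal" for one metric and flags large deviations. Values are invented.

class Baseline:
    def __init__(self, alpha: float = 0.1, warmup: int = 5, k: float = 4.0):
        self.alpha, self.warmup, self.k = alpha, warmup, k
        self.mean, self.var, self.n = 0.0, 0.0, 0

    def update(self, x: float) -> bool:
        """Return True if x deviates strongly from the learned baseline."""
        self.n += 1
        if self.n == 1:
            self.mean = x
            return False
        dev = x - self.mean
        anomaly = self.n > self.warmup and abs(dev) > self.k * (self.var ** 0.5 + 1e-9)
        self.mean += self.alpha * dev
        self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        return anomaly


latency_ms = Baseline()
for sample in [20, 21, 19, 22, 20, 21, 95]:   # the last sample is a spike
    if latency_ms.update(sample):
        print("anomaly:", sample)              # fires only for 95
```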


cPacket Service Assurance: MTTR Reduction

Event: Networking Field Day 38

Appearance: cPacket Presents at Networking Field Day 38

Company: cPacket

Video Links:

Personnel: Erik Rudin, Ron Nevo

When service disruptions or connection issues impact key applications, speed of diagnosis is everything. This session highlighted how cPacket enables real-time monitoring, anomaly detection, and triage using packet-level data. It showcased how IT teams can use LLM-powered interaction, Observability AI baselining, and SIEM integration to accelerate resolution, reduce MTTI/MTTR, and deliver a better user experience across distributed infrastructure and business-critical workflows. Erik Rudin, Field CTO, set the stage by describing a reactive scenario where users are experiencing application issues, and the network appears normal initially. Ron Nevo, CTO, presented a real-world example from a large bank where a specific branch experienced intermittent remote desktop access failures due to a WAN acceleration device adding significant latency. This underscored the challenge of pinpointing issues in complex, multi-hop network paths without pervasive monitoring.

cPacket’s approach to reducing MTTR involves enhancing the user experience through AI-powered interaction. Instead of manually sifting through logs and dashboards, network operators can “chat” with the system, asking natural language questions to gain insights into service performance. The LLM (Large Language Model), in conjunction with AI agents and the MCP (Model Context Protocol), helps to process and contextualize data. A crucial aspect is Observability AI baselining, where cPacket’s machine learning pipeline automatically establishes baselines for various network metrics, accounting for service, time of day, and day of week. This allows the system to identify deviations from normal behavior, even if not immediately surfaced as an alert, and visually present these anomalies against the baseline to the user.
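
One way to picture baselines that account for service, time of day, and day of week is to key history by (service, weekday, hour), as in the invented sketch below; it is illustrative only and not cPacket’s machine learning pipeline.

```python
# Sketch of per-(service, weekday, hour) baselines, so "normal" latency for
# Monday 09:00 differs from Sunday 03:00. Data and thresholds are invented.
from collections import defaultdict
from datetime import datetime, timedelta
from statistics import mean, pstdev

history = defaultdict(list)   # (service, weekday, hour) -> observed latencies (ms)

def record(service: str, ts: datetime, latency_ms: float) -> None:
    history[(service, ts.weekday(), ts.hour)].append(latency_ms)

def is_anomalous(service: str, ts: datetime, latency_ms: float) -> bool:
    samples = history[(service, ts.weekday(), ts.hour)]
    if len(samples) < 30:                       # not enough history yet
        return False
    mu, sigma = mean(samples), pstdev(samples)
    return sigma > 0 and abs(latency_ms - mu) > 3 * sigma

# Seed Monday-09:00 history with ordinary values, then test a spike.
start = datetime(2025, 6, 9, 9, 0)              # a Monday morning
for i in range(40):
    record("crm", start + timedelta(seconds=i), 20.0 + (i % 3))
print(is_anomalous("crm", start, 21.0))         # False: within the baseline
print(is_anomalous("crm", start, 90.0))         # True: far outside it
```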

While acknowledging that advanced network engineering knowledge is still valuable, the aim is to simplify the troubleshooting process. The system can identify logical and physical network topologies and pinpoint where latency or other issues reside within the path. This AI-assisted workflow accelerates triage by providing relevant data and insights, shortening the time to detect, understand context, and identify the responsible component or team. cPacket emphasizes that this integration with existing IT workflows–including SIEM, ticketing systems like ServiceNow, and communication platforms like Slack–is critical for achieving measurable outcomes and continuous improvement in service delivery. The ultimate goal is to empower human operators with intelligent tools that streamline diagnostics and decision-making, rather than completely automating the resolution process.


cPacket Service Assurance: Realtime Video Production

Event: Networking Field Day 38

Appearance: cPacket Presents at Networking Field Day 38

Company: cPacket

Video Links:

Personnel: Erik Rudin, Ron Nevo

Real-time video environments demand precision and speed. Troubleshooting can’t wait for decoding or downstream analysis. In this session, cPacket explored how packet-level observability enables immediate detection of transport-layer issues like encoder faults, fiber/switch errors, and edge-to-cloud latency disruptions. They demonstrated how their observability solution, with real-time alerts, dynamic dashboards, and ServiceNow integration, empowers proactive monitoring and MTTR (Mean Time To Resolution) reduction across complex, long-path video delivery networks. Erik Rudin, Field CTO, introduced the scenario of live video streaming, emphasizing the critical importance of video quality for businesses. Ron Nevo, CTO, further detailed the intricate environment of live streaming, involving multiple cameras, production vans, cloud processing, transcoding, and distribution, all of which can introduce potential points of failure.

The core of cPacket’s approach is to deploy monitoring points throughout the video delivery path to quickly determine if an issue is network-related. For real-time video, the presence of even minimal packet loss is a clear indicator of a problem. cPacket’s solution continuously analyzes RTP (Real-time Transport Protocol) streams, triggering real-time alerts (e.g., via Slack) when packet loss increases. These alerts provide direct links to detailed analytics, allowing operators to pinpoint the exact location and nature of the fault, whether it’s a physical cable issue, a video machine problem, or a cloud link disruption. Furthermore, the system automatically creates tickets in existing IT service management tools like ServiceNow, ensuring that identified issues are integrated into the customer’s operational workflows for prompt resolution.
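
A toy version of the RTP loss check might count gaps in the 16-bit sequence numbers over a window and post a chat alert when the count crosses a threshold. The webhook URL, stream name, and threshold below are placeholders, not cPacket’s product behavior.

```python
# Toy RTP loss check: count gaps in 16-bit RTP sequence numbers over a window
# and post an alert to a chat webhook. URL and threshold are placeholders.
import requests

def lost_packets(seq_numbers: list[int]) -> int:
    lost = 0
    for prev, cur in zip(seq_numbers, seq_numbers[1:]):
        gap = (cur - prev) % 65536          # sequence numbers wrap at 2**16
        lost += max(gap - 1, 0)
    return lost

window = [65530, 65531, 65535, 0, 1, 5]      # example capture with several gaps
loss = lost_packets(window)
if loss > 2:
    requests.post(
        "https://hooks.slack.example/T000/B000/XXXX",   # placeholder webhook
        json={"text": f"RTP loss detected on stream cam-3: {loss} packets in window"},
    )
```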

This use case exemplifies cPacket’s broader strategy for service assurance, focusing on delivering actionable insights rather than just raw data. By acquiring and contextualizing packet data at line rate, integrating it into existing ecosystems, and leveraging AI for anomaly detection, cPacket aims to proactively identify and prevent service degradations. The emphasis is on improving the triage process and providing measurable outcomes, such as reduced MTTR and improved customer experience. The session underscored that AI serves as an augmentation to existing analytics, enhancing the ability to predict and prevent outages by identifying subtle patterns like under/overutilized links and their correlation to service degradation or security concerns.


cPacket Introduction with Mark Grodzinsky

Event: Networking Field Day 38

Appearance: cPacket Presents at Networking Field Day 38

Company: cPacket

Video Links:

Personnel: Mark Grodzinsky

cPacket’s presentation kicked off by revisiting highlights from previous Networking Field Day and Security Field Day events, providing an overview of the evolution of cPacket’s Network Observability platform and introducing AI-driven innovations, framed by their Value Equation and Customer Value Journey frameworks. Mark Grodzinsky, Chief Product and Marketing Officer, emphasized that while AI is currently at a peak of hype, cPacket views it as a tool, not a standalone solution. Their focus is on how AI, integrated within their network observability platform, drives tangible business outcomes. This approach is rooted in their belief that packet data remains the “single source of truth” for understanding the “what, where, when, and why” of network events, even as other telemetry data (metrics, events, logs, traces) serve important purposes.

cPacket’s observability platform has evolved significantly since its 2007 inception, highlighted by its role in the 2012 Olympics’ 10 GbE network. Key components include a packet broker with an FPGA on every port for high-precision data delivery, and advanced packet capture and analytics capabilities, supporting up to 200 Gbps of concurrent write-to-disk with indexing. Their solutions address challenges like microbursts, which cause packet drops even when overall network capacity seems sufficient. Furthermore, cPacket emphasizes the convergence of network and security operations, advocating for a single source of truth–packet data–to enhance both operational efficiency and security posture, aiding in protection, detection, response, digital forensics, and compliance. AI, in this context, serves as a smart companion for setting deterministic thresholds and identifying anomalies proactively.
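
The microburst problem is easy to illustrate: bucket packet bytes into 1 ms windows and compare each bucket against what the line rate can carry in that window. The packet data and threshold below are invented.

```python
# Microburst illustration: 1 ms buckets of bytes compared against what a
# 10 Gbps link can carry in 1 ms (~1.25 MB). Packet data is invented.
from collections import defaultdict

LINE_RATE_BPS = 10e9
BUCKET_S = 0.001
BUCKET_CAPACITY_BYTES = LINE_RATE_BPS / 8 * BUCKET_S   # 1,250,000 bytes per bucket

packets = [(i * 1e-6, 1500) for i in range(900)]       # 900 frames inside ~0.9 ms
buckets = defaultdict(int)
for ts, size in packets:
    buckets[int(ts / BUCKET_S)] += size                # bytes per 1 ms bucket

for bucket, byte_count in sorted(buckets.items()):
    if byte_count > 0.8 * BUCKET_CAPACITY_BYTES:       # nearing saturation
        print(f"microburst risk in bucket {bucket}: {byte_count} bytes")
```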

The company’s core mission is service assurance, achieved through pervasive, independent, open, and scalable observability. Erik Rudin, Field CTO, highlighted the increasing complexity of modern hybrid and multi-cloud environments, stressing the critical need for monitoring key links to ensure mission-critical application performance. cPacket’s solution begins with nanosecond-precision packet acquisition and immediate metric collection, enabling the identification of patterns like microbursts and low-level latency. This rich data is integrated into their own capture devices for session analytics and correlation, and also exposed through open APIs for integration with existing customer tools and data lakes. They introduced the Value Equation, a framework that connects raw data and AI insights to measurable business outcomes, and the Customer Value Journey, which guides customers through understanding their business problems, integrating cPacket’s technology, validating its impact, and achieving continuous improvement in network and security operations.


Hedgehog Gateway Demonstration

Event: Networking Field Day 38

Appearance: Hedgehog Presents at Networking Field Day 38

Company: Hedgehog

Video Links:

Personnel: Manish Vachharajani, Sergei Lukianov

Hedgehog CTO Manish Vachharajani explained how Hedgehog gateway peering functions as a new component to overcome limitations of switch-based VPC peering. While switch-based peering offers full cut-through bandwidth, traditional switches lack the CPU and RAM for stateful network functions like firewalling, NAT, and handling large routing tables or TCP termination. The Hedgehog Gateway addresses this by leveraging a CPU-rich, high-bandwidth server positioned in the traffic flow between VPCs. This commodity hardware, combined with modern NICs featuring hardware offloads for NAT and VXLAN, can achieve significant throughput (initially targeting 40 Gbps, with plans for 100 Gbps and higher). The gateway operates by acting as a VTEP and selectively advertising routes to attract specific traffic, performing necessary network transformations (including implied NAT as demonstrated), and then re-encapsulating and transmitting packets to their destination VPC.

Sergei Lukianov, Chief Architect, demonstrated VPC peering with basic firewall functions that aim to replace Zipline’s existing Palo Alto firewalls. The demo illustrated how the gateway enables communication between VPCs with overlapping IP addresses by performing NAT. This involves the gateway advertising NAT’d IP prefixes into the VRFs of peered VPCs, allowing traffic to be routed through the gateway. The demonstration highlighted the comprehensive visibility provided by Hedgehog’s data plane on the gateway, offering insights into traffic flow that traditional switches often lack. While the gateway introduces a slight latency increase due to the additional hops (exaggerated in the demo by the use of debug images), it offers significantly more flexibility and functionality than switch-based peering.
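
The overlapping-address case can be illustrated with a simple 1:1 prefix NAT, where each VPC’s real subnet is presented to its peer behind a distinct NAT prefix and only the host offset is preserved. The prefixes are invented, and this is not Hedgehog’s data-plane code.

```python
# Illustration of 1:1 prefix NAT between VPCs with overlapping addressing:
# each VPC's 10.0.1.0/24 is presented to its peer behind a distinct NAT prefix.
# Prefixes are invented; this is not Hedgehog's actual data-plane code.
import ipaddress

def translate(addr: str, inside: str, outside: str) -> str:
    """Map an address in `inside` to the same host offset in `outside`."""
    ip = ipaddress.ip_address(addr)
    inside_net = ipaddress.ip_network(inside)
    outside_net = ipaddress.ip_network(outside)
    offset = int(ip) - int(inside_net.network_address)
    return str(ipaddress.ip_address(int(outside_net.network_address) + offset))

# VPC A and VPC B both use 10.0.1.0/24 internally.
print(translate("10.0.1.7", "10.0.1.0/24", "192.168.101.0/24"))  # A's host as seen by B
print(translate("10.0.1.7", "10.0.1.0/24", "192.168.102.0/24"))  # B's host as seen by A
```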

Looking ahead, Hedgehog plans to enhance the gateway’s capabilities by moving the software onto DPUs (Data Processing Units) within the host, such as NVIDIA BlueField, for improved performance and scalability. This approach would significantly reduce latency and allow for deeper network extension into virtual environments like VMs and containers. The gateway also includes basic security functionalities like ACLs and port forwarding, with a roadmap to add more advanced features like DDoS protection, IDS/IPS, and Layer 7 inspection as per customer demand or open-source contributions. Furthermore, Hedgehog aims to support multi-data center deployments through Kubernetes Federation, allowing independent clusters to connect via gateway tunnels while presenting a unified API to the end-user.


Hedgehog VPC Peering Demonstration

Event: Networking Field Day 38

Appearance: Hedgehog Presents at Networking Field Day 38

Company: Hedgehog

Video Links:

Personnel: Manish Vachharajani, Sergei Lukianov

Hedgehog CTO Manish Vachharajani reviewed how Hedgehog simplifies AI networking with a Virtual Private Cloud (VPC) abstraction used by customers like Zipline, emphasizing the complexities of designing modern GPU training networks with multiple ports and intricate configurations. Hedgehog addresses this by providing two main abstractions: a low-level wiring diagram for defining physical topology (like leaf/spine connections and AI-specific settings for RDMA traffic), and a VPC operational abstraction for partitioning clusters into multi-tenant environments. This approach leverages the Kubernetes API for configuration, offering a well-known interface with a rich ecosystem of tools for role-based access control and extending its capabilities to manage the physical network. Once the wiring diagram is fed into the Kubernetes API, Hedgehog automates the provisioning, booting, and configuration of network operating systems and agents on the switches, ensuring the specified network policies are enforced.

The core of Hedgehog’s multitenancy solution lies in its VPC abstraction, enabling the creation of isolated network environments with configurable DHCP, IP ranges, and host routes, supporting both L2 and L3 modes. This abstraction automates the complexities of BGP EVPN, VLANs, and route leaks, which are typically manual and error-prone configurations. To facilitate communication between these isolated VPCs, Hedgehog introduces VPC peering, a simple Kubernetes object that automatically configures the necessary route leaks, allowing specified subnets to communicate securely. This eliminates the need for manual route maps and ACLs, significantly simplifying inter-VPC connectivity and reducing the risk of misconfigurations.
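
For a sense of how declarative that peering step is, the sketch below creates a peering object with the Kubernetes Python client. The API group, version, and field names are hypothetical stand-ins rather than Hedgehog’s published CRD schema.

```python
# Illustrative use of the Kubernetes Python client to create a VPC peering
# object. The API group, version, and field names below are hypothetical and
# not Hedgehog's published CRD schema; they only sketch the declarative flow.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

peering = {
    "apiVersion": "vpc.example.io/v1alpha1",      # placeholder group/version
    "kind": "VPCPeering",
    "metadata": {"name": "vpc1--vpc2", "namespace": "fabric"},
    "spec": {
        "permit": [
            {"vpc1": {"subnets": ["default"]}, "vpc2": {"subnets": ["default"]}},
        ],
    },
}

api.create_namespaced_custom_object(
    group="vpc.example.io", version="v1alpha1",
    namespace="fabric", plural="vpcpeerings", body=peering,
)
```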

Sergei Lukianov, Hedgehog’s Chief Architect, demonstrated the provisioning of tenant VPCs and VPC peering on a three-switch topology (one spine, two leaves). The demo showed that without peering, direct communication between servers in different VPCs (e.g., Server 1 in VPC1 and Server 4 in VPC2) fails. However, by applying a simple peering YAML file to the Kubernetes API, the network automatically reconfigures, enabling successful communication. This process involves the Hedgehog fabric controller translating the peering object into switch configurations, including route leaking between VRFs (Virtual Routing and Forwarding instances). The demonstration also showcased Grafana Cloud integration for collecting and exporting detailed network metrics (counters, queues, logs) from switches and the control node, providing turnkey observability without extensive manual configuration. Manish further explained the limitations of purely switch-based peering for external connectivity, setting the stage for the upcoming discussion on gateway services.


How Zipline Uses Hedgehog for AI Training

Event: Networking Field Day 38

Appearance: Hedgehog Presents at Networking Field Day 38

Company: Hedgehog

Video Links:

Personnel: Florian Berchtold, Marc Austin

Zipline is a drone delivery company that trains AI on private cloud infrastructure to autonomously fly drones and drop packages in precise delivery locations. Florian Berchtold, Zipline’s Principal Engineer responsible for AI developer productivity, highlighted Hedgehog’s crucial role in their operations. Zipline chose an on-premises strategy for their AI infrastructure due to significant cost efficiencies and enhanced governance compared to public cloud options. Florian, a software engineer rather than a network engineer, sought a high-bandwidth networking solution that didn’t demand extensive network CLI expertise. Hedgehog provided a Kubernetes-native, declarative API, allowing Zipline to describe their infrastructure’s desired state in a familiar language, abstracting away complex networking configurations like port channels.

Previously, with a smaller server footprint, Zipline utilized Hedgehog for collapsed core designs, achieving high availability and high bandwidth on a modest scale without requiring specialized networking knowledge. Now, with over sixty servers across multiple racks, Hedgehog continues to be their preferred solution, supporting the larger spine-leaf topology required for their expanded infrastructure. However, a gap existed: while Hedgehog solved the internal fabric networking, Zipline still needed to connect their private cloud to the public internet, necessitating a firewall/router solution. This interim solution involved expensive legacy firewalls that provided far more capability than Zipline needed for the limited bandwidth they utilized, leading to significant unnecessary costs.

Florian anticipates that Hedgehog’s new Transit Gateway demonstration will fill this crucial gap. He expects the gateway to provide essential routing capabilities, allowing their internal private fabric IPs to access the public internet, along with Network Address Translation (NAT) and basic port forwarding to expose on-premise hosted services. This new functionality from Hedgehog aims to replace their costly existing firewalls, offering a more integrated and cost-effective solution that aligns with their cloud-native infrastructure and declarative management approach.


Networking Field Day Delegate Roundtable: Networking Needs to Evolve

Event: Networking Field Day 38

Appearance: Networking Field Day 38 Delegate Roundtable Discussion

Company: Tech Field Day

Video Links:

Personnel: Tom Hollingsworth

In a roundtable discussion, the delegates at Networking Field Day 38, led by Tom Hollingsworth, explored the evolving role of networking within organizations, moving beyond the traditional “boring is good” approach. The key question revolves around whether the network should remain a commoditized utility, managed by external providers, or become a central, differentiating product of the business. The discussion highlights that while user experience has greatly improved to the point where networks are expected to “just work,” the focus is now shifting towards enhancing the operator experience, primarily through automation. This automation presents a paradox for some, who fear job displacement, while others argue it elevates the value of the networking role by requiring more strategic, software-driven approaches and addressing chronic issues like poor documentation. The key decision for businesses, therefore, isn’t whether to automate, but rather the extent to which their network serves as a differentiator, influencing whether they build in-house automation capabilities or outsource to specialists.

The conversation further looked into the notion of the network as a fundamental utility, akin to electricity or water, but with the unique challenge of constantly escalating bandwidth and latency demands, unlike other static utilities. This constant evolution necessitates a shift in operational paradigms. While many smaller organizations, like law or dental offices, may benefit from “network as a service” models due to their stable growth and commoditized needs, larger enterprises with dynamic requirements, such as those leveraging AI clusters, face a more complex decision. The “middle path,” where organizations try to straddle both approaches, is presented as an illusion of choice, potentially leading to inefficient investments and an inability to adapt to rapidly changing technological landscapes.

Ultimately, the consensus among the delegates is that businesses must consciously choose one of two distinct paths: either treat the network as a utility to be outsourced for efficiency and convenience or elevate it to a core, strategic asset requiring internal investment in specialized skills and automation. This decision is further complicated by a growing skills gap, where newer professionals prefer API-driven interfaces over traditional command-line interfaces. The increasing complexity and demands on modern networks necessitate a multidisciplinary team approach, moving away from the “unicorn” network engineer. The overarching message is that understanding an organization’s specific needs and making a deliberate choice about the network’s role is paramount to avoiding becoming stagnant in the rapidly advancing technological environment.


Aviz Networks Network Copilot Demo

Event: Networking Field Day 38

Appearance: Aviz Networks Presents at Networking Field Day 38

Company: Aviz Networks

Video Links:

Personnel: Madhu Paluru, Thomas Scheibe

In this Networking Field Day session, Aviz Networks introduced Network Copilot (NCP), their private AI platform built for NetOps, emphasizing their vision of “Networks for AI, AI for Networks” and enabling open networking with SONiC and LLMs. The demonstration showcased NCP as a self-hosted software solution that integrates with existing network data sources through various data connectors, such as Cisco Catalyst Center, Nexus Dashboard, IP Fabric, Elastic, Splunk, and Zendesk. The primary goal of NCP is to centralize disparate network data, allowing users to query and correlate information through a natural language chat interface, thereby streamlining operations and reducing the need for manual data aggregation from various tools or complex scripting.

The demo highlighted NCP’s ability to perform tasks like inventory analysis and hostname validation. Users can create projects within NCP to focus on specific troubleshooting sessions or events, inviting collaborators and selectively enabling relevant data connectors to avoid data pollution. A key feature is the ability to upload static contextual information, like naming conventions or CVE lists, as files, which NCP can then use to validate device configurations or generate compliance reports. The presenters stressed that NCP doesn’t “train” on operational data in the traditional sense; instead, it uses a pre-trained LLM (like Llama 70B), fine-tuned for networking, to interpret questions and leverage AI agents to retrieve, process, and summarize data from connected sources.
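
The retrieve-and-summarize pattern described above can be sketched generically: route the question to relevant data connectors, pull live data, and let the model summarize it, with no training on the operational data. Every function and connector below is a placeholder; this is not Aviz NCP’s internal design or API.

```python
# Generic "query -> connectors -> summarize" agent pattern, sketched with
# placeholder functions. This is not Aviz NCP's internal design or API; it only
# illustrates how an LLM can answer from live data without training on it.

def pick_connectors(question: str, available: dict) -> list:
    """Naive routing: choose connectors whose keywords appear in the question."""
    return [c for name, c in available.items() if name in question.lower()]

def answer(question: str, connectors: dict, llm) -> str:
    context = []
    for connector in pick_connectors(question, connectors):
        context.append(connector())              # pull live data, e.g. inventory
    prompt = f"Question: {question}\nData:\n" + "\n".join(map(str, context))
    return llm(prompt)                            # summarization only, no training

# Placeholder connectors and model for illustration.
connectors = {
    "inventory": lambda: [{"host": "leaf-01", "os": "SONiC", "version": "4.1"}],
    "tickets": lambda: [{"id": 1234, "summary": "leaf-01 port flap"}],
}
fake_llm = lambda prompt: "leaf-01 runs SONiC 4.1 and has one open ticket (1234)."
print(answer("check inventory and tickets for leaf-01", connectors, fake_llm))
```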

While acknowledging that some functions could be replicated with scripting, the true value of NCP lies in its abstraction layer, enabling network engineers to manage diverse multi-vendor, multi-NOS environments without needing deep knowledge of every CLI or proprietary system. This empowers junior engineers by providing suggestions for troubleshooting and allows for more efficient audit reporting and capacity planning. Aviz Networks emphasized that NCP is not a CLI replacement, nor does it push configurations, but it can provide insights and facilitate data-driven decisions. The platform’s self-hosted nature with GPU requirements (like NVIDIA A100 or H100) ensures data privacy and offers a quicker ROI by automating tedious tasks, freeing up valuable engineering time.


Aviz Network Copilot with Thomas Scheibe

Event: Networking Field Day 38

Appearance: Aviz Networks Presents at Networking Field Day 38

Company: Aviz Networks

Video Links:

Personnel: Thomas Scheibe

Aviz Networks introduced Network Copilot (NCP), their private AI platform built for NetOps, emphasizing their vision of “Networks for AI, AI for Networks” and enabling open networking with SONiC and LLMs. Aviz Networks, a software networking company founded in 2019, aims to revolutionize networking by separating hardware and software, similar to the server world. Their product portfolio includes Fabric Manager for managing deployments and configurations, an Observability product for deep packet inspection and data correlation, and their flagship Network Copilot. They highlight the widespread issue of data silos and manual workflows in traditional NetOps, which NCP addresses by providing a natural language interface to correlate data faster, automate repetitive tasks, and offer recommendations rather than a self-driving network.

Aviz Networks highlights that NCP is not a new data lake but rather a tool that bridges existing data islands. It doesn’t require training on an organization’s operational data, as this data is constantly changing and proprietary. Instead, NCP uses an LLM to translate user questions, identify relevant data sources via data connectors, and employ AI agents to process information for a comprehensive answer. A critical aspect of NCP is its private AI platform architecture, ensuring that all data remains local to the customer’s environment, addressing security and privacy concerns. This approach also means the customer controls their LLM instance, without contributing to the training of external models.

NCP is designed to be hardware vendor-neutral, working across various operating systems and hardware platforms, a significant advantage in multi-vendor enterprise environments. Aviz Networks emphasizes that they are not just an LLM company but a networking software company leveraging LLMs to solve real-world network operational challenges. The value proposition lies in its ability to quickly pull and correlate data from disparate tools, offering a more intuitive and faster way to gain insights without the need for data scientists. This significantly reduces the time spent on tasks like compliance reporting and troubleshooting, providing a tangible return on investment for customers seeking to streamline their NetOps.


Breaking Barriers in Connectivity – Exploring the Latest Advancements in Cisco High-Density Wi-Fi Solutions with Wi-Fi 7

Event: Tech Field Day Extra at Cisco Live US 2025

Appearance: Cisco Presents at Tech Field Day Extra at Cisco Live US 2025

Company: Cisco

Video Links:

Personnel: Jim Florwick, Matt Swartz

See the latest in high-density Wi-Fi from Cisco with their newest outdoor wireless solution. Cisco is addressing the significant challenges of high-density Wi-Fi deployments in large public venues like stadiums and convention centers. These environments present complex issues, including environmental factors, regulatory compliance (especially for 6 GHz indoors/outdoors with Automatic Frequency Coordination), unique architectural layouts impacting line of sight, and the need for flexible configuration and optimization. The diverse range of client devices and evolving specifications further complicate deployments. To overcome these hurdles, Cisco is releasing the CW9179F, their fourth-generation antenna designed specifically for such venues, which is the first Wi-Fi 7 solution capable of ubiquitous coverage.

The CW9179F boasts several innovative features for flexible and reliable deployment. A key advancement is its ability to operate as either an indoor or outdoor AP using a unique “environment pack” with a specialized chip and gaskets, eliminating the need for separate SKUs and simplifying inventory. This ensures waterproofing for outdoor use while allowing full 1200 MHz spectrum access indoors. The antenna also offers switchable 2.4 GHz and 5 GHz low-band/high-band radio placement on the back for localized coverage, complementing the primary 6 GHz radio directed to the masses. Furthermore, a “Quick Connect” accessory allows for off-ladder installation, improving safety and efficiency for deploying these large antennas at height by enabling technicians to pre-assemble and seal sensitive components on the ground.

Performance-wise, Cisco advocates for line-of-sight (LOS) deployments in high-density environments over under-seat solutions, as LOS offers more consistent performance, lower co-channel interference, and better bandwidth distribution. The CW9179F features improved “hyper-directional” capabilities with minimized side lobes, which significantly increases throw distance and performance in challenging environments by reducing interference. It also offers simplified configurable beam steering with fewer modes (narrow/wide) for ease of deployment, and an accelerometer that provides tilt angles for precise installation and monitoring. Real-world testing at events like BottleRock has demonstrated the CW9179F’s superior performance, especially with 6 GHz, achieving high throughput even in dense crowds, solidifying its role as a robust solution for breaking connectivity barriers in high-density Wi-Fi.

Presented by Matt Swartz, Distinguished Engineer, Cisco Wireless, and Jim Florwick, Principal TME, Cisco Wireless. Recorded live at Tech Field Day Extra at Cisco Live in San Diego, CA on June 10, 2025. Watch the entire presentation at https://techfieldday.com/appearance/cisco-presents-at-tech-field-day-extra-at-cisco-live-us-2025/ or visit https://techfieldday.com/event/clus25/ or https://Cisco.com for more information.


Automatic Frequency Coordination Updates from Cisco

Event: Tech Field Day Extra at Cisco Live US 2025

Appearance: Cisco Presents at Tech Field Day Extra at Cisco Live US 2025

Company: Cisco

Video Links:

Personnel: Manmeet Kaur

Hear the latest updates on 6GHz indoors and outdoors and how to deploy it with Automated Frequency Coordination. The release of the 6 GHz band for unlicensed use was a significant milestone for Wi-Fi, tripling available spectrum with 59 new channels and enabling higher speeds and capacities, while relieving congestion on the 2.4 and 5 GHz bands. However, this band is also occupied by thousands of licensed users, necessitating regulations to protect these incumbents from Wi-Fi interference. Global adoption varies, with North America and South Korea embracing the full band, while Europe and Australia use only the lower half. Additionally, regulations differentiate between indoor and outdoor use: indoor use is permitted at Low Power Indoor (LPI) levels (5 dBm/MHz PSD), but outdoor deployment or higher indoor power requires Standard Power (SP) and Automated Frequency Coordination (AFC).

Automated Frequency Coordination (AFC) is a cloud-based service that facilitates the use of the unlicensed 6 GHz band at Standard Power levels for indoor, outdoor, and external antenna deployments. This service coordinates spectrum sharing with incumbent users. Cisco’s AFC service resides in the cloud, where access points (APs) send requests containing their location (latitude, longitude, height) to the service. Cisco’s service then queries an AFC service provider (e.g., Federated Wireless) which, using a regulatory database, determines and returns the allowed channels and power levels to the AP. This response is valid for 24 hours, requiring APs to periodically send new requests to continue operating at Standard Power.
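
The request/response flow can be sketched as follows, with an AP reporting its location and height and receiving permitted channels and power, to be refreshed within 24 hours. The endpoint and field names are simplified illustrations rather than an exact copy of the AFC specification.

```python
# Simplified AFC-style spectrum inquiry: the AP reports its location and height,
# the service answers with permitted channels and max EIRP, and the inquiry is
# repeated before the 24-hour validity expires. The endpoint and field names are
# illustrative, not an exact copy of the AFC specification.
import requests

request_body = {
    "requests": [{
        "requestId": "ap-42",
        "location": {"latitude": 32.7157, "longitude": -117.1611,
                     "height_m": 8.0, "heightType": "AGL"},
        "inquiredFrequencyRangeMHz": [{"low": 5925, "high": 6425}],
    }]
}

resp = requests.post("https://afc.example.com/availableSpectrumInquiry",
                     json=request_body, timeout=10).json()

for grant in resp.get("responses", []):
    print(grant.get("requestId"), grant.get("channels"), grant.get("maxEirpDbm"))
# The AP must repeat this inquiry within 24 hours or stop using standard power.
```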

Operating at Standard Power offers significant gains, typically 3 to 6 dB (24 to 28 dBm) compared to LPI, extending coverage outdoors. While AFC ensures protection for licensed users, a potential concern is service availability; if the AFC service is unavailable for more than 24 hours, outdoor 6 GHz radios will cease operation, falling back to 5 GHz and 2.4 GHz, while indoor APs can still operate at LPI. Design considerations for Standard Power deployments include identifying appropriate use cases, checking channel availability beforehand, assessing client device penetration, accurately determining AP location (latitude, longitude, and manually entered height), and configuring through Cisco’s management platforms such as Meraki and Catalyst Center.


Beyond Visibility: The Age of Intelligent Assurance with Cisco

Event: Tech Field Day Extra at Cisco Live US 2025

Appearance: Cisco Presents at Tech Field Day Extra at Cisco Live US 2025

Company: Cisco

Video Links:

Personnel: Nikitha Shashidhar

Is your network reliable? Answer the question with Cisco Network Assurance. Cisco’s vision for network assurance is to unify experiences across Catalyst, Meraki, and ThousandEyes platforms, building smarter, end-to-end capabilities. The goal is to provide a consistent troubleshooting experience for IT administrators, regardless of the Cisco networking solutions they employ. While acknowledging current differences in dashboard complexity, the company aims for simplicity at the core, leveraging popular features from each portfolio, like Meraki’s intuitive flows and ThousandEyes’ path visualization. This unification will eventually lead to a single, consistent assurance score that reflects network health across all platforms, even in hybrid environments.

Cisco’s assurance strategy involves a phased approach: baseline and detect, localize and diagnose, mitigate and remediate, and finally, predict and optimize. Significant investments are being made across all these stages, moving beyond mere visibility to provide actionable insights and intelligent remediation. Recent advancements include org-wide assurance visibility, a feature providing a quick, critical analysis of network health across hundreds or thousands of networks based on a dynamically changing proportional weighted average score. This score considers various network components like clients, devices, infrastructure, and applications (with data from ThousandEyes), allowing for quick identification of problematic areas and contextual drill-downs into specific network health details.
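
A proportional weighted average of that kind can be illustrated in a few lines; the categories, weights, and scores below are invented and do not reflect Cisco’s actual scoring model.

```python
# Toy proportional weighted-average health score across network categories.
# Weights and per-category scores are invented, not Cisco's scoring model.
def assurance_score(categories: dict) -> float:
    """categories: name -> (score_0_to_100, item_count). Weight by item count."""
    total_items = sum(count for _, count in categories.values())
    return sum(score * count for score, count in categories.values()) / total_items

example = {
    "clients":        (92, 1800),
    "devices":        (75, 240),
    "infrastructure": (60, 35),
    "applications":   (88, 12),   # e.g., fed by ThousandEyes tests
}
print(round(assurance_score(example), 1))   # -> 89.5
```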

Further enhancements include detailed client visibility, allowing administrators to troubleshoot specific client issues in real-time or historically, identifying connection paths, problems (e.g., DHCP server not responding), and suggested resolutions. The platform leverages root cause analysis frameworks that incorporate knowledge base articles and best practices to guide remediation. Customizable alert profiles help prevent alert fatigue by allowing organizations to set thresholds matching their SLAs. Looking ahead, Cisco is integrating an AI assistant that will enable faster troubleshooting by intelligently processing queries and suggesting actions, streamlining the entire assurance workflow. This AI assistant, along with ongoing improvements to the underlying assurance framework, aims to provide comprehensive and intelligent network management.