Videos

Aviz and the AI NOC

Event: Networking Field Day 40

Appearance: Aviz Networks Presents at Networking Field Day 40

Company: Aviz Networks

Video Links:

Personnel: Cody McCain, Thomas Scheibe

Aviz Networks introduces the AI Networking Operations Center (NOC), a vendor-neutral, agentic AI platform designed to transform traditional network management. Unlike tools that merely append a Large Language Model (LLM) to an existing product, Aviz provides a private, secure, and interoperable framework that integrates with a customer’s existing ecosystem. The platform is built on the FITS principles (Freedom, Integration, Tailorability, and Security), ensuring that enterprises can leverage AI without compromising data privacy or being locked into a specific hardware vendor or software controller.

Thomas Scheibe and Cody McCain emphasize that while the industry often markets AI as a simple natural language interface, the real challenge lies in the complex backend workflows of network operations. Aviz addresses this by acting as a “Red Hat for networking,” providing professional support and software for open-source options like SONiC while maintaining compatibility with legacy systems from Cisco, Arista, and NVIDIA. Their approach focuses on the agentic power of AI, where modular agents can be customized to handle specific organizational workflows, moving beyond vendor-defined constraints to meet the unique operational needs of each customer.

To demonstrate the platform’s practical utility, the speakers showcase a physical lab environment featuring a heterogeneous mix of switches, firewalls from Fortinet and Palo Alto, and various management tools. By interfacing directly with devices and disparate data sources, such as config logs, flow data, and ticketing systems, the AI NOC streamlines root cause analysis and automates responses. This architecture allows companies to transition complex tasks typically reserved for expert engineers during maintenance windows into automated functions that can be managed by Tier 1 or Tier 2 support, significantly increasing operational efficiency.


From 400G BiDi to 1.6T: Cisco Optics for AI Fabrics with Paymon Mogharabi

Event: Networking Field Day 40

Appearance: Cisco Data Center Networking Presents at Networking Field Day 40

Company: Cisco

Video Links:

Personnel: Paymon Mogharabi

As AI training and inference scale, the network must function as an extension of the compute fabric. This session explores the architectural requirements for high-performance AI data centers. We will examine the shift toward deterministic networking to mitigate tail latency and fabric congestion, alongside critical hardware innovations, including advanced cooling and next-generation optics, designed to maximize performance and power efficiency. Attendees will gain technical insights into building a unified, programmable fabric that optimizes performance and scalability for high-density AI environments.

The presentation introduces the third generation of Cisco’s bidirectional (BiDi) technology, specifically the 400G BiDi optic. This innovation addresses fiber infrastructure constraints by enabling fiber reuse, allowing customers to upgrade from 40G or 100G to 400G over existing duplex multi-mode fiber without installing new trunk cables or patch panels. By utilizing four wavelengths at 100G each over a single fiber pair, the 400G BiDi simplifies the physical layer with LC connectors, making it eight times more fiber-efficient than parallel SR8 solutions. This approach offers significant financial and operational benefits for both brownfield and greenfield deployments by reducing installation costs and troubleshooting complexity.
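The fiber-efficiency claim follows directly from the lane and wavelength counts described above; the sketch below sanity-checks it, assuming SR8's eight parallel 50G lanes each occupy their own fiber pair.

```python
# Back-of-the-envelope check of the "eight times more fiber-efficient"
# figure, using the lane/wavelength counts described above (illustrative).

def fibers_for_sr8():
    # 400G-SR8: 8 parallel 50G lanes, each on its own fiber pair.
    lanes = 8
    return lanes * 2          # 16 fibers per link

def fibers_for_bidi():
    # 400G BiDi: 4 x 100G wavelengths, bidirectional over one duplex pair.
    return 2                  # 2 fibers per link

ratio = fibers_for_sr8() // fibers_for_bidi()
print(f"SR8: {fibers_for_sr8()} fibers, BiDi: {fibers_for_bidi()} fibers -> {ratio}x")
```

The same arithmetic is what makes brownfield upgrades attractive: the duplex trunk already in the ceiling carries the new 400G link unchanged.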

A major portion of the session focuses on the critical role of optics reliability and Cisco’s advanced silicon photonics in AI environments. Unlike traditional networks where retransmissions are common, AI workloads are highly synchronized; a single unreliable optical link can cause GPU clusters to stall, potentially reducing performance by 40%. Cisco’s silicon photonics architecture integrates electronics and photonics into a single system, improving stability and power efficiency for 800G and 1.6T scales. Notable highlights include the 1.6T pluggable optic, which supports flexible breakout options, and the 800G Linear Pluggable Optic (LPO). By removing the DSP from the optic and shifting signal conditioning to the switch ASIC, the LPO solution reduces power consumption by 50% per module and lowers overall system latency, providing a more reliable and sustainable foundation for large-scale AI factories.


Cisco Silicon One Powered N9000 Switches with Faraz Taifehesmatian

Event: Networking Field Day 40

Appearance: Cisco Data Center Networking Presents at Networking Field Day 40

Company: Cisco

Video Links:

Personnel: Faraz Taifehesmatian

As AI training and inference scale, the network must function as an extension of the compute fabric. This session explores the architectural requirements for high-performance AI data centers. We will examine the shift toward deterministic networking to mitigate tail latency and fabric congestion, alongside critical hardware innovations, including advanced cooling and next-generation optics, designed to maximize performance and power efficiency. Attendees will gain technical insights into building a unified, programmable fabric that optimizes performance and scalability for high-density AI environments.

The presentation details Cisco’s strategic use of its Silicon One architecture, specifically the G-Series for AI scale-out and the P-Series for “scale-across” data center interconnects. The G-Series, highlighted by the G200 and G300 ASICs, provides high-radix connectivity with up to 512 ports of 200G and fully shared packet buffers to eliminate the performance bottlenecks found in traditional slice-based architectures. A core focus is the Cisco Intelligence Packet Flow (IPF), which enables advanced load balancing techniques such as packet spraying and flowlet switching. These features allow Ethernet to mimic the lossless properties of InfiniBand, improving job completion times for RDMA-heavy AI workloads while maintaining a programmable pipeline that can adapt to evolving standards like Ultra Ethernet mid-cycle.
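The flowlet-switching idea described above can be sketched in a few lines. This is an illustrative model only, not Cisco's IPF implementation: packets of a flow stay pinned to one path, but an idle gap longer than the worst-case path-delay skew marks a "flowlet boundary," after which the next burst can safely move to a less-loaded path without causing reordering. The gap threshold here is an assumed value.

```python
# Minimal flowlet-switching sketch (illustrative only).

FLOWLET_GAP = 0.0005  # 500 us boundary threshold (assumed value)

class FlowletBalancer:
    def __init__(self, paths):
        self.paths = paths
        self.state = {}  # flow_id -> (chosen_path, last_packet_time)

    def pick_path(self, flow_id, now, path_load):
        path, last_seen = self.state.get(flow_id, (None, None))
        if path is None or now - last_seen > FLOWLET_GAP:
            # Flowlet boundary: safe to re-select the least-loaded path.
            path = min(self.paths, key=lambda p: path_load[p])
        self.state[flow_id] = (path, now)
        return path
```

Within a burst the flow is sticky, so packets arrive in order; after an idle gap it can migrate, which is how flowlet switching approximates the load-spreading benefit of per-packet spraying without the reordering cost.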

Hardware innovation is further demonstrated through new form factors and cooling solutions designed for high-density AI environments. Cisco introduced liquid-cooled chassis, such as the N9364F-SG3-L, which achieves 100% liquid cooling to handle the massive power requirements of 100-terabit ASICs without the need for fans. These systems support next-generation optics, including Linear Pluggable Optics (LPO) that reduce power consumption by half and coherent ZR/ZR+ optics for long-haul connectivity up to 1,000 km. Additionally, Cisco’s partnership with NVIDIA was underscored through the N9100 series, which integrates NVIDIA Spectrum-4 and Spectrum-6 silicon into the Cisco ecosystem. This gives customers the choice between a vertically integrated Cisco fabric or an end-to-end NVIDIA Spectrum-X solution, all managed through a consistent operating system and the Nexus Dashboard.


Cisco Scaling AI – Deterministic Fabrics and High-Density Infrastructure with Richard Licon

Event: Networking Field Day 40

Appearance: Cisco Data Center Networking Presents at Networking Field Day 40

Company: Cisco

Video Links:

Personnel: Faraz Taifehesmatian, Richard Licon

As AI training and inference scale, the network must function as an extension of the compute fabric. This session explores the architectural requirements for high-performance AI data centers. We will examine the shift toward deterministic networking to mitigate tail latency and fabric congestion, alongside critical hardware innovations, including advanced cooling and next-generation optics, designed to maximize performance and power efficiency. Attendees will gain technical insights into building a unified, programmable fabric that optimizes performance and scalability for high-density AI environments.

The presentation emphasizes that an AI-ready data center requires simultaneous innovation across five key dimensions: scalability, power efficiency, security, operational management, and silicon diversity. Cisco highlights the rapid transition in networking speeds, moving from 400G and 800G to 1.6T in just two years to keep pace with GPU evolution. A major focus is placed on the shift toward Ethernet for scale-out fabrics, as it offers a consistent operational model across front-end, back-end, and management networks. To achieve performance parity with InfiniBand, Cisco utilizes its Silicon One architecture, featuring deep, fully shared packet buffers and programmable pipelines that allow for the mid-cycle introduction of advanced features like dynamic load balancing and packet spraying to mitigate microbursts and reduce job completion time.

Cisco also detailed its strategic partnership with NVIDIA, which goes beyond simple reselling to include co-engineering systems that integrate Cisco’s NXOS and Nexus Dashboard with NVIDIA’s Spectrum-4 silicon. This collaboration aims to provide repeatable, standardized reference architectures that support high-performance features like adaptive routing and direct data placement. Furthermore, the discussion introduced the concept of “scaling across” geographically distant data centers, necessitating P-series silicon with deeper buffers and advanced optics for long-haul connectivity. By offering a vertically integrated stack encompassing silicon, hardware, operating systems, and optics, Cisco aims to provide a cohesive and programmable fabric that addresses the extreme power and performance demands of modern agentic AI workloads.


Lightyear Demo of Procurement, Network Inventory Manager, and Telecom Expense Management (TEM)

Event: Networking Field Day 40

Appearance: Introducing Lightyear at Networking Field Day 40

Company: Lightyear

Video Links:

Personnel: Dennis Thankachan, Ryan Schrack

Buying and managing enterprise network services has barely changed in decades: carrier portals, spreadsheets, blind renewals, and invoices that never match the contract. Lightyear is building an AI software platform that modernizes the enterprise telecom lifecycle, automating and optimizing everything that carriers make difficult. The platform serves as a telecom operating system designed to digitize the full lifecycle of a service, from procurement and installation to inventory management and expense auditing. By replacing manual workflows with a centralized system of action, Lightyear enables enterprises to configure complex requests, such as internet circuits or high-capacity waves, in a matter of seconds, leveraging location intelligence to identify the best on-net vendors and market-aligned pricing.

The platform significantly simplifies the transition from procurement to active service through dedicated install project management software. This system acts as a “squeaky wheel” to keep carriers on schedule, providing detailed visibility into job steps, estimated completion dates, and escalation paths. Once a service is live, it is automatically cataloged in the Network Inventory Manager, which tracks over 30 unique data points per circuit, including SLAs, static IPs, and physical KMZ route maps. This digital system of record not only visualizes the entire network on a map but also automates lifecycle workflows such as MACD (Move, Add, Change, Disconnect) ticketing and proactive shopping, ensuring that services do not lapse into expensive month-to-month rates or auto-renew without oversight.

The final pillar of the platform is an AI-native expense management engine that uses Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) to audit complex telecom invoices. This tool itemizes every charge, distinguishing contracted service costs from government surcharges and taxes, and automatically flags billing variances such as inactive services still being billed or unexpected fee increases. When discrepancies are found, Lightyear’s team can resolve disputes directly with carriers, providing finance teams with clean, exportable data and the option for a single consolidated monthly bill. Integrated messaging and a global calendar of deadlines further bridge the gap between network engineering and procurement, turning telecom management into a streamlined, automated operation.


Automate Enterprise Network Management with Lightyear’s Telecom Operating System

Event: Networking Field Day 40

Appearance: Introducing Lightyear at Networking Field Day 40

Company: Lightyear

Video Links:

Personnel: Dennis Thankachan, Ryan Schrack

Buying and managing enterprise network services has barely changed in decades: carrier portals, spreadsheets, blind renewals, and invoices that never match the contract. Lightyear is building an AI software platform that modernizes the enterprise telecom lifecycle, automating and optimizing everything that carriers make difficult. Dennis Thankachan, CEO and co-founder, explains that telecom operations are traditionally manual and lack the digitization necessary for objective data-driven decisions. Lightyear addresses this by offering a telecom operating system that integrates procurement, network inventory management, and expense management into a single, automated system of action. This approach allows enterprises to move away from error-prone spreadsheets toward a digital system of record that streamlines RFPs, installation management, and continuous cost optimization.

The platform’s three core pillars work together to handle the entire lifecycle of network services. The procurement module uses proprietary network and pricing intelligence to conduct digital RFPs, ensuring customers find the best vendors and price points across more than a thousand providers globally. Once a service is selected, the inventory manager tracks over 30 unique data points, including contract details, SLAs, and configuration data, while automating lifecycle tasks like renewal shopping and ticketing. For the final stage, the expense management module acts as an AI-native tool for auditing invoices, capturing billing variances, and managing consolidated payments, which typically results in significant time and cost savings for mid-market and enterprise customers.

Lightyear plans to expand its capabilities with new AI-driven features and a circuit monitoring pillar. CTO Ryan Schrack notes that the company is integrating natural language processing to allow users to query their network data intuitively and extract unique insights via custom charts and reports. The roadmap includes “software that’s actually soft,” meaning highly customizable workflows that adapt to an enterprise’s specific needs for quote selection and implementation. By potentially adding low-integration monitoring for uptime and capacity, Lightyear aims to provide a truly comprehensive system that not only buys and tracks services but also provides actionable intelligence on how the network is performing in real-time.


Nokia AI Scale Platforms with Igor Giangrossi

Event: Networking Field Day 40

Appearance: Nokia Presents at Networking Field Day 40

Company: Nokia

Video Links:

Personnel: Igor Giangrossi

AI networks require purpose-built hardware platforms designed for different roles across the infrastructure. This presentation outlines the hardware platforms positioned for these roles, highlighting how each supports performance, bandwidth, and operational needs, with a focus on the scale-out part of the network. It also looks ahead to emerging platforms designed for scale-across architectures, enabling the next phase of large-scale, interconnected AI systems. Igor Giangrossi, lead of hardware product management at Nokia, details the specialized data center portfolio that moves beyond the traditional 7750 SR into platforms specifically optimized for the high-throughput, low-latency demands of AI training and inference.

The presentation focuses heavily on the 7220 IXR series, which utilizes Broadcom Tomahawk chipsets to drive the scale-out portion of the network. Giangrossi introduces the Tomahawk 5 (TH5) generation, offering up to 51.2 Tbps capacity with 800G ports, and the newer Tomahawk 6 (TH6) generation, which doubles density to 128 ports of 800G or provides 1.6T Ethernet capabilities. A notable advancement in the TH6 family is the introduction of liquid-cooled models designed for 21-inch OCP ORV3 racks, addressing the extreme power densities required as AI clusters scale. These platforms integrate advanced features like packet trimming and credit-based flow control into the packet pipeline to manage congestion and improve job completion times.

For scale-across and deep-buffered routing roles, Nokia utilizes the Broadcom Jericho family, including the 7250 IXR-X4 pizza box and the massive IXR-e chassis series. These platforms provide the necessary buffering for geodistributed clusters and long-reach interconnects while maintaining high port density, such as 576 ports of 800G in a single IXR-18e chassis. The hardware design prioritizes operational efficiency and reliability through a mid-plane-less orthogonal architecture, honeycomb meshes for improved airflow, and the deliberate avoidance of retimers to reduce power consumption by up to 30%. This tiered approach ensures that the most appropriate silicon, whether Tomahawk, Jericho, or Nokia’s proprietary FP NPU, is deployed for each specific role in the AI infrastructure.


Nokia Management for AI Data Center Networks

Event: Networking Field Day 40

Appearance: Nokia Presents at Networking Field Day 40

Company: Nokia

Video Links:

Personnel: Zeno Dhaene

Explore the essential management considerations for building and operating multi-tenant AI data center networks. Attendees will learn why abstraction is critical to achieving the scale, speed, and consistency required for AI infrastructure. The presentation will demonstrate how event-driven automation (EDA) simplifies the design, deployment, and operation of backend AI networks, enabling secure and efficient multi-tenancy at scale. Zeno Dhaene, Product Manager for Nokia’s Event-Driven Automation (EDA), emphasizes that while AI data centers may appear uniform, they are uniquely defined by specific physical locations and business needs, such as GPU-as-a-service or shared internal infrastructure. To manage this complexity, EDA utilizes declarative intent and multiple layers of abstraction, allowing operators to treat an entire data center, comprising hundreds of thousands of configuration lines, as a single, manageable resource.

During the demonstration, Dhaene builds a functional AI backend featuring two stripes and a spine connector, capable of hosting approximately 2,000 GPUs, using only high-level labels rather than manual interface assignments. By tagging nodes and interfaces with metadata like “role,” “tenant,” and “data center,” EDA automatically orchestrates the underlying technical requirements, including BGP peering, IP address pooling, and the creation of isolated virtual routers for different tenants. This process is validated through a dry run feature that checks generated configurations against switch YANG models, ensuring accuracy before deployment. The platform’s ability to emit over 2,000 output resources from a single input resource illustrates the efficiency of moving away from traditional, manual configuration methods toward highly automated, intent-based systems.
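As a rough illustration of the label-driven abstraction described above, a single declarative intent that selects devices by labels can fan out into one configuration resource per matching node. The resource shapes, label keys, and function names below are hypothetical, not Nokia EDA's actual schema or API; the point is only the one-input-to-many-outputs expansion.

```python
# Illustrative sketch of label-driven fabric intent (hypothetical schema,
# not Nokia EDA's actual API): one high-level intent selects nodes by
# labels such as "role" and "dc", and expands into one low-level config
# resource per match.

inventory = [
    {"name": "leaf1", "labels": {"role": "leaf", "tenant": "a", "dc": "dc1"}},
    {"name": "leaf2", "labels": {"role": "leaf", "tenant": "b", "dc": "dc1"}},
    {"name": "spine1", "labels": {"role": "spine", "dc": "dc1"}},
]

def expand_intent(intent, inventory):
    """Turn one declarative intent into per-node config resources."""
    selector = intent["selector"]
    outputs = []
    for node in inventory:
        if all(node["labels"].get(k) == v for k, v in selector.items()):
            outputs.append({"node": node["name"], "config": intent["config"]})
    return outputs

# A single input resource: enable the BGP underlay on every leaf in dc1.
intent = {"selector": {"role": "leaf", "dc": "dc1"},
          "config": {"bgp": {"asn_pool": "underlay", "evpn": True}}}
resources = expand_intent(intent, inventory)
```

At data center scale the same pattern is what turns one input resource into the 2,000-plus output resources mentioned in the demo: the operator edits labels and intents, and the system derives every per-device artifact.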

The presentation also highlights EDA’s flexibility and integration capabilities, noting that the platform can serve as a standalone orchestrator or a gateway that pulls data from external sources of truth like Netbox or Nautobot. This allows for zero-touch automation where internal business tools can trigger network changes, such as provisioning GPUs for a customer, without direct operator intervention. Dhaene concludes by showcasing EDA’s real-time telemetry streaming and its digital twin capabilities, which allow engineers to simulate and test entire data center fabrics on a laptop. By providing a scalable framework for both cookie-cutter and highly customized environments, Nokia’s EDA addresses the critical need for speed and reliability in the rapidly evolving AI infrastructure market.


Ethernet and its Evolution to Support Nokia AI Networking

Event: Networking Field Day 40

Appearance: Nokia Presents at Networking Field Day 40

Company: Nokia

Video Links:

Personnel: Alfred Nothaft

Ethernet continues to evolve to meet the performance and scaling demands of modern AI networking architectures, progressing from RoCEv2 toward innovations driven by the Ultra Ethernet Consortium (UEC). This presentation discusses these requirements and introduces UEC Specification 1.0, with a focus on scale-out AI designs and the core philosophies shaping its development. Key Ethernet capabilities defined in UEC 1.0, both already implemented and forthcoming, are highlighted to show how Ethernet is being optimized for large-scale AI workloads. Alfred Nothaft explains that the primary challenge in AI fabrics is congestion management, particularly during the synchronization phases of training where thousands of GPUs simultaneously attempt to share massive amounts of gradient data. While legacy tools like ECN and PFC provide basic notification and pause mechanisms, they are often insufficient for the high-velocity requirements of current AI clusters.

The move toward UEC 1.0 represents a fundamental shift from network-centric congestion control to an end-node-centric philosophy. Under the RoCEv2 model, the network infrastructure is largely responsible for managing traffic flows and reacting to congestion. In contrast, UEC shifts the intelligence to the Network Interface Card (NIC) at the GPU endpoint. This allows for more granular, per-packet load balancing rather than traditional flow-based hashing, enabling the NIC to “spray” traffic across multiple paths and dynamically adjust based on real-time telemetry. Furthermore, the UEC transport (UET) is designed to be connectionless and includes native, hardware-level security and encryption from the outset, addressing data sovereignty and privacy concerns that were previously overlooked in backend fabrics.

UEC 1.0 introduces several sophisticated mechanisms to ensure job completion times are minimized. These include packet trimming, which reduces a packet to its header during congestion to signal the source without losing the stream’s context, and advanced in-band telemetry for precise congestion signaling. The specification also features link-layer retransmission to quickly recover from localized bit errors and credit-based flow control to meter traffic before it ever saturates the fabric. By leveraging Ethernet’s vast ecosystem and rapid bandwidth scaling, doubling speeds every two years toward 1.6 terabits, Nokia and the UEC aim to provide a highly flexible, vendor-neutral alternative to proprietary interconnects, supporting everything from local scale-out clusters to geodistributed scale-across environments.
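The packet-trimming mechanism described above can be modeled in a few lines. This is an illustrative sketch, not the UEC 1.0 wire format, and the queue limit and header size are made-up values: instead of silently dropping a packet when the egress queue is full, the switch forwards just the header, marked as trimmed, so the sender learns exactly which packet to retransmit.

```python
# Sketch of packet trimming under congestion (illustrative model only).

QUEUE_LIMIT = 4      # assumed egress queue depth, in packets
HEADER_BYTES = 64    # assumed header size kept after trimming

signals = []         # trimmed headers delivered back as congestion signals

def enqueue(queue, packet):
    if len(queue) < QUEUE_LIMIT:
        queue.append(packet)
        return "forwarded"
    # Congestion: trim the payload and forward the header (typically on a
    # high-priority queue) as an explicit, per-packet loss signal.
    signals.append({"header": packet["header"], "size": HEADER_BYTES,
                    "trimmed": True})
    return "trimmed"

queue = []
results = [enqueue(queue, {"header": f"seq{i}", "size": 4096}) for i in range(6)]
```

The contrast with a silent tail drop is the key: the source does not have to infer loss from a timeout, so recovery happens at wire speed rather than at retransmission-timer speed.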


AI Data Center Nokia Validated Design (NVD)

Event: Networking Field Day 40

Appearance: Nokia Presents at Networking Field Day 40

Company: Nokia

Video Links:

Personnel: Vivek Venugopal

Get an inside look at how Nokia Validated Designs (NVDs) streamline AI-ready data center and networking deployments through proven architectures, rigorous validation, and real-world performance insights. We’ll highlight several of our latest AI-focused NVDs, show how partners are extending them, and preview what’s coming next as we evolve the portfolio to meet the demands of modern, high-performance networks. Vivek Venugopal explains that Nokia treats network construction with the same intolerance for failure as aeronautical engineering, ensuring that every NVD is pre-tested on physical hardware to guarantee reliability. These designs are developed through an iterative workflow that begins with industry ideation and digital twin modeling in container labs, followed by extensive hardware validation of optics, cables, and protocols. Unlike rigid templates, NVDs serve as documented tech stacks that customers can customize, backed by a four-year support lifecycle that treats the design itself as a managed product.

The presentation highlights several AI-specific architectures, including a rail-only design developed with Lenovo and AMD for small-to-medium clusters and a more complex two-stripe pod design for larger environments using NVIDIA H200 or AMD GPUs. A key innovation discussed is the use of VRFs to emulate multiple leaf switches, allowing customers to scale their GPU clusters accurately without over-provisioning hardware. To ensure these networks are truly lossless, Nokia rigorously validates the interaction between Explicit Congestion Notification (ECN) and Priority Flow Control (PFC). The goal is to ensure ECN triggers first to slow down traffic before PFC pauses frames, preventing the catastrophic tail drops that would force an AI training model to restart from a previous checkpoint.
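The ECN-before-PFC ordering validated in these designs reduces to a simple threshold relationship: the ECN marking threshold must sit below the PFC pause (XOFF) threshold so senders are asked to back off before the switch resorts to pausing the link. The depths below are invented for illustration, not Nokia-validated numbers.

```python
# Illustrative sketch of the ECN-before-PFC ordering (made-up thresholds).

ECN_THRESHOLD = 100   # queue depth (KB) at which packets get ECN-marked
PFC_XOFF = 300        # queue depth (KB) at which PFC pause frames are sent

def queue_action(depth_kb):
    # The whole design constraint in one line: ECN must trigger first.
    assert ECN_THRESHOLD < PFC_XOFF, "ECN must trigger before PFC"
    if depth_kb >= PFC_XOFF:
        return "pfc_pause"    # last resort: pause the upstream link
    if depth_kb >= ECN_THRESHOLD:
        return "ecn_mark"     # ask senders to slow down first
    return "forward"
```

If the thresholds were inverted, PFC would pause links before senders ever saw a congestion signal, spreading head-of-line blocking across the fabric, which is exactly the failure mode the validation aims to rule out.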

To prove the real-world efficacy of these designs, Nokia goes beyond simple network specifications to perform application-level benchmarking using open-source tools like Llama 2 and BERT. By measuring job completion times and tokens per second against MLCommons standards, they provide a full-stack validation that includes the GPU servers, storage fabrics (using partners like VAST Data or DDN), and the backend network fabric. The NVD roadmap continues to expand with upcoming designs for scale-across architectures, multi-plane fabrics, and storage-focused deployments. All automation playbooks, telemetry stacks, and digital twin models are made available on GitHub, allowing engineers to try before they buy and ensuring the designs remain accessible and open for integration with common frameworks like Ansible and Netbox.


Ethernet Networking for the AI Era

Event: Networking Field Day 40

Appearance: Nokia Presents at Networking Field Day 40

Company: Nokia

Video Links:

Personnel: Patrick McCabe

The rise of AI has driven the emergence of multiple new network domains, each with distinct roles, architectures, and performance requirements. This presentation explores these new networks and their roles. Patrick McCabe, representing Nokia, builds on the premise that AI is a permanent fixture in the technological landscape, requiring a non-linear evolution of network architecture. He identifies two primary functions, training and inferencing, as the drivers of this change. Training involves massive GPU clusters that scale geometrically and are highly sensitive to packet loss, while inferencing, often pushed to the network edge, prioritizes low latency to serve end users effectively. Together, these functions demand a move away from traditional statistical averages toward a more deterministic approach to network performance.

Architecturally, the shift from north-south to massive east-west traffic patterns within GPU clusters has rendered traditional leaf-spine designs inadequate for AI data movement. McCabe details the emergence of specialized backend networks categorized as scale-up, scale-out, and scale-across. Scale-up handles communication within a single system or server, while scale-out facilitates high-speed interaction between different systems within a data center, a primary focus for the Ultra Ethernet Consortium (UEC). Scale-across is a particularly challenging new frontier, necessitated by the fragmentation of AI clusters across different physical locations, often due to power constraints, requiring advanced routing and data center interconnects to maintain the illusion of a single compute entity over distances of 10 kilometers or more.

The presentation emphasizes that the center of this new universe is the GPU, supported by essential storage networks that feed vast amounts of data to processing units. While the back end deals with the rigors of scale and reliability, the front end remains more traditional, connecting these specialized environments to the outside world and end users. McCabe concludes with an analogy comparing AI to the printing press, suggesting that while AI lowers the cost and scarcity of production, it does not replace the human creator. Instead, it shifts the premium value toward innovation, ideas, and judgment, allowing for a radical expansion of who can create within this high-performance infrastructure.


Nokia Introduction with Andy Lapteff

Event: Networking Field Day 40

Appearance: Nokia Presents at Networking Field Day 40

Company: Nokia

Video Links:

Personnel: Andy Lapteff

In this presentation, Andy Lapteff, a network engineer turned product marketing manager at Nokia, introduces the company’s shift from its iconic mobile phone legacy to its current role as a leader in mission-critical network infrastructure. He highlights Nokia’s historical reputation for reliability, exemplified by the legendary indestructible 3310 phone, and explains how that same commitment to durability now applies to complex systems like train signaling, power grids, and air traffic control. Lapteff acknowledges the industry-wide fatigue regarding AI hype but emphasizes that Nokia’s objective at Networking Field Day 40 is to move past the noise and provide clear signal on how networking is fundamentally changing to accommodate the unique demands of the AI era.

Lapteff shares his personal evolution from a skeptic of AI networking to a believer, citing the unprecedented rate of adoption and the staggering capital investment in the sector. He notes that while the Apollo space program cost the equivalent of $65 billion in today’s dollars, current global investment in AI infrastructure is reaching approximately $690 billion this year alone. To illustrate the tangible impact of these technologies, he describes how generative AI has transformed him into a software developer capable of automating complex, multi-step workflows that previously took hours. These tools are no longer just fancy Google searches but are functional drivers of productivity that require a completely reimagined underlying network.

The summary concludes by addressing the technical shift required to support these modern workloads, asserting that traditional protocols like TCP are no longer sufficient because the data center has essentially become one massive, interconnected computer. Lapteff argues that if the network is not as reliable as the indestructible hardware Nokia was once famous for, the entire expensive AI system will fail. The presentation sets the stage for a series of technical deep dives covering new networking models, optimized designs, the evolution of Ethernet and transport protocols, and the operational platforms necessary to keep these high-stakes environments running efficiently.


ResOps Powered by Commvault Cloud Unity

Event: Tech Field Day Extra at RSAC 2026

Appearance: Commvault Presents at Tech Field Day Extra at RSAC 2026

Company: Commvault

Video Links:

Personnel: Chris Bevil, David Cunningham, Michael Fasulo

The presentation centers on the critical evolution from traditional disaster recovery to a more robust framework of cyber resilience. Chris Bevil, a recovering CISO, shares his transition from the high-stress frontline of security to Commvault, where he now focuses on the intersection of IT, security, and board-level business objectives. He emphasizes that the modern threat landscape has turned data recovery into a board-level priority, shifting the conversation from technical patching metrics to the fundamental business need for a faster, safer, and more trustworthy recovery process.

A central theme of the session is the introduction of Resilience Operations, or ResOps, a new methodology designed to break down the silos between IT infrastructure, cloud, and security teams. Bevil illustrates the current gap in organizational readiness by noting that many leaders still lack integrated incident response plans, despite the inevitability of compromise. He argues that disaster recovery is no longer sufficient if it cannot guarantee clean recovery. Without the ability to verify that restored data is untainted by ransomware or malware, organizations risk falling into a cycle of reinfection, a point underscored by a cautionary tale of an organization that took nearly 300 days to recover only to be hit again six months later.

The technical core of the session highlights the Commvault Cloud Unity platform and its ResOps methodology, which integrates high-fidelity signals from anomaly detection and deep data discovery. By utilizing a multi-layered defense-in-depth approach (including YARA rules, signatures, and a deep scanning engine capable of detecting polymorphic and zero-day threats), Commvault ensures that recovery is not just possible, but clean. A standout feature discussed is synthetic recovery, an automated process that surgically identifies and skips malware or encrypted files across backup cycles to restore only the last known good versions. This innovation significantly minimizes data loss and eliminates the trial-and-error restore guesswork traditionally required by administrators during an active breach.

The technical demonstration led by David Cunningham highlights Commvault’s Threat Scan dashboard, a multi-layered defense-in-depth system that integrates anomaly detection, signature-based scanning, and machine learning. This platform identifies risks by correlating signals from internal sensors and third-party partners like CrowdStrike, categorizing resources into critical, high, or moderate risk levels. A key feature is the ability for administrators to perform threat hunts by injecting their own Indicators of Compromise (IOCs), such as YARA rules or hashes from the Google Threat Intelligence platform, to scan both current and historical backup data for hidden threats. To assist non-security personnel, the platform utilizes Arlie, an AI-powered assistant that provides real-time context and guidance during investigations.
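Commvault's scan pipeline itself is proprietary; as a minimal illustration of the hash-based IOC hunting the demo describes (the function names and the in-memory file map are hypothetical stand-ins for scanning mounted backup data), a sketch might look like:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte payload."""
    return hashlib.sha256(data).hexdigest()

def hunt_for_iocs(files: dict, ioc_hashes: set) -> list:
    """Return the names of files whose contents match a known-bad hash.

    `files` maps a file name to its raw bytes (e.g. read from a mounted
    backup copy); `ioc_hashes` holds hex SHA-256 digests supplied by the
    analyst, such as hashes exported from a threat-intelligence platform.
    """
    return [name for name, data in files.items()
            if sha256_of(data) in ioc_hashes]
```

The same shape extends to historical backups by iterating the scan over each restore point rather than only the most recent one.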


What’s NEW at Object First

Event: Tech Field Day Extra at RSAC 2026

Appearance: Object First Presents at Tech Field Day Extra at RSAC 2026

Company: Object First

Video Links:

Personnel: Anthony Cusimano

Anthony Cusimano, Director of Solutions Marketing and one of the company’s earliest employees, provides a roadmap of the company’s rapid hardware and software evolution. Since its inception with OOTBI (Out-of-the-Box Immutability), Object First has expanded its portfolio to include the Ootbi 432, a 2U node offering 432 terabytes of RAID 60 storage. A single four-node cluster can reach 1.7 petabytes, and through integration with Veeam’s Scale-Out Backup Repository (SOBR), users can scale beyond seven petabytes. On the opposite end of the spectrum, the company introduced the Ootbi Mini, a compact tower designed for edge locations and small businesses that delivers the same “absolute immutability” and honeypot features as the enterprise nodes but in a smaller, desk-friendly form factor.

A major shift in the company’s business model is the introduction of a consumption-based subscription service alongside the traditional perpetual ownership model. This model is supported by a specialized sizing calculator designed to navigate the complexities of immutable storage retention. To ensure a seamless experience, Object First requires telemetry for subscription customers; this allows the company to proactively monitor usage and ship a larger “Box B” before a customer hits their capacity threshold. The transition is designed to be a white glove migration where data is moved to the new appliance and the old hardware is returned, providing a predictable OpEx cycle that avoids the steep cost jumps typically associated with traditional hardware refreshes.
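Object First has not published its sizing logic; as a hedged sketch of the proactive-shipping idea, the vendor-side telemetry check might reduce to something like the following, where the function name and the 80% alert threshold are assumptions:

```python
import math

def days_until_threshold(used_tb: float, capacity_tb: float,
                         daily_growth_tb: float,
                         threshold_pct: float = 0.8) -> float:
    """Estimate whole days until usage crosses the alert threshold.

    Assumes linear growth; returns 0 if the threshold is already crossed
    and infinity if usage is flat or shrinking (i.e., never crosses).
    """
    threshold_tb = capacity_tb * threshold_pct
    if used_tb >= threshold_tb:
        return 0
    if daily_growth_tb <= 0:
        return math.inf
    return math.ceil((threshold_tb - used_tb) / daily_growth_tb)
```

When the estimate drops below the shipping lead time, the vendor can dispatch the larger “Box B” before the customer ever sees a capacity alarm.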

Looking toward the immediate future, Cusimano provided a sneak preview of the Fleet Manager platform, scheduled for official launch on May 6, 2026. Fleet Manager is a secure, cloud-based single pane of glass designed for managed service providers (MSPs) and large enterprises to monitor multiple Object First clusters across various global sites. Driven by telemetry, the tool provides unified visibility into system health, storage utilization, and honeypot alerts without ever touching or transferring actual backup data, maintaining strict zero-trust principles. Future updates to Fleet Manager aim to include centralized S3 bucket creation and remote firmware updates, further simplifying the management of large-scale immutable storage environments.


Object First Honeypot Demo with Geoff Burke

Event: Tech Field Day Extra at RSAC 2026

Appearance: Object First Presents at Tech Field Day Extra at RSAC 2026

Company: Object First

Video Links:

Personnel: Geoff Burke

Senior Technology Advisor Geoff Burke showcases the integrated honeypot functionality built into the Object First appliance. Designed as a digital tripwire, the honeypot is physically hosted on the appliance but logically segmented to ensure security. It serves as an early warning system to detect lateral movement and reconnaissance efforts by attackers who typically probe the network to identify high-value targets. By mimicking juicy targets like a Veeam Windows Repository or SQL Server, the honeypot lures hackers into interacting with it, allowing the system to trigger immediate alerts before the actual backup data is compromised.

The setup process is intentionally simple, requiring only two clicks within the security settings to enable the honeypot with either a static or DHCP IP address. Once active, the system monitors for unauthorized access attempts and can be configured to send notifications via email or Syslog to a Security Information and Event Management (SIEM) platform or tools like Grafana. In a live demonstration, Burke uses the Zenmap utility to perform an “intense scan” against the honeypot’s IP. The Object First dashboard immediately lights up with events, capturing the attacker’s attempts to probe protocols such as RDP and specialized Veeam services.
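Object First's alerting internals are not shown in the demo; the Syslog notification path it describes can be sketched with Python's standard library (the host name, message format, and function names here are assumptions, not the appliance's actual wire format):

```python
import logging
import logging.handlers

def format_probe_event(src_ip: str, dst_port: int, service: str) -> str:
    """Render a honeypot probe as a one-line message suitable for a SIEM."""
    return f"HONEYPOT-ALERT src={src_ip} dst_port={dst_port} service={service}"

def send_alert(message: str, syslog_host: str = "siem.example.net") -> None:
    """Ship the alert to a SIEM over standard UDP Syslog (port 514)."""
    logger = logging.getLogger("honeypot")
    logger.setLevel(logging.WARNING)
    logger.addHandler(
        logging.handlers.SysLogHandler(address=(syslog_host, 514)))
    logger.warning(message)
```

A port scan like the Zenmap demo would surface as a burst of such messages, one per probed service (RDP, the Veeam service ports, and so on), which a SIEM or Grafana can then correlate into a single intrusion event.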

The honeypot provides both reactive and preventative benefits for organizations. Reactively, it ensures that IT admins are alerted to an intrusion at any hour, specifically targeting the “Friday night at 2:00 AM” window when many ransomware attacks begin. Preventatively, the visibility of these juicy but fake services can act as a deterrent. A sophisticated hacker who recognizes a cluster of high-value services on a single IP may realize they have hit a honeypot and retreat to avoid further detection. By integrating this feature for free, Object First adds a layer of proactive defense to their absolute immutability strategy, ensuring customers have the tools to stop an attack in its early stages.


How Object First Achieves Absolute Immutability

Event: Tech Field Day Extra at RSAC 2026

Appearance: Object First Presents at Tech Field Day Extra at RSAC 2026

Company: Object First

Video Links:

Personnel: Geoff Burke

Geoff Burke, a senior technology advisor at Object First, outlines the architecture of their “Out-of-the-Box Immutability” (OOTBI) solution. Built on Zero Trust principles, the system secures data by assuming breach at every level, from production data and backup software to the primary storage target. The Object First appliance is a hardened Linux-based on-premises storage target that uses the S3 protocol to ensure there is zero access to destructive actions. By eliminating access to the command line and BIOS and strictly enforcing S3 Object Lock in compliance mode, the system ensures that once data hits the disk, it becomes immediately immutable with zero time to immutability, leaving no window for ransomware to alter or delete backup files.
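OOTBI's implementation is proprietary, but the zero-time-to-immutability behavior rests on standard S3 Object Lock semantics. A hedged sketch of the request parameters a client would send on a PutObject call (the parameter names are those of the S3 API; the bucket and key names are hypothetical) follows:

```python
from datetime import datetime, timedelta, timezone

def object_lock_params(bucket: str, key: str, retention_days: int) -> dict:
    """Build the extra parameters an S3 PutObject call needs to write an
    object that is immutable in compliance mode until the retention date.

    In COMPLIANCE mode, no identity (including root or an administrator)
    can shorten the retention period or delete the object version before
    it expires, which is what closes the ransomware window.
    """
    retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)
    return {
        "Bucket": bucket,
        "Key": key,
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate": retain_until,
    }
```

Because the lock is applied on the same write that lands the data, there is no post-processing step during which the object is mutable, matching the “immediately immutable” claim above.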

The secret sauce behind the performance and integration is the Smart Object Storage (SOS) API developed by Veeam. This API allows for deep integration between Veeam and the Object First cluster without the need for complex plugins, providing critical visibility into capacity and space that standard S3 protocols often lack. The SOS API enables smart entities, where Veeam breaks down backup jobs and intelligently allocates them to the best available node for load balancing and optimized throughput. This synergy allows the appliance to support a one-megabyte block size, specifically supercharging Veeam’s Instant Recovery feature, which allows businesses to run virtual machines directly from the backup storage at high speeds during a crisis.

Object First positions its appliance as a simple, powerful alternative to complex DIY or cloud-only storage. While cloud storage is a vital secondary resilience zone, Burke emphasizes that local, on-premises storage is essential for meeting recovery time objectives, as cloud egress and latency can extend recovery windows to unacceptable levels. The appliance is designed to be racked and stacked with minimal configuration, using only three IP addresses and multi-factor authentication to reduce the risk of human error or tech debt. To further support overstretched IT teams, Object First includes a proactive telemetry service that monitors hardware health and storage capacity, ensuring that the last line of defense is always ready when a disaster strikes.


Why Object First is Best for Veeam

Event: Tech Field Day Extra at RSAC 2026

Appearance: Object First Presents at Tech Field Day Extra at RSAC 2026

Company: Object First

Video Links:

Personnel: Anthony Cusimano, Geoff Burke

Object First was founded with the specific mission of creating a backup storage solution that is ransomware-proof. The company focuses on addressing the primary vulnerability in data protection: the storage target. Since 96% of ransomware attacks target backup data to prevent recovery, Object First provides an intentionally hardened, immutable storage appliance designed specifically for Veeam Backup & Replication. As of January 2026, Object First has been officially acquired by Veeam, integrating their technology directly into the Veeam portfolio.

The presentation introduces the concept of Zero Trust Data Resilience (ZTDR), which applies zero-trust principles specifically to the backup ecosystem. This framework emphasizes three core pillars: segmenting backup software from storage to minimize the blast radius of an attack, creating multiple resilient zones for data copies, and utilizing absolute immutability. Unlike standard immutable storage that can often be bypassed by administrative overrides or governance modes, absolute immutability ensures that once data is written, it cannot be altered or deleted by anyone, including the customer or the vendor, until the set retention period expires. This is achieved through the strict enforcement of S3 Object Lock in compliance mode and a hardware-integrated security layer.

Object First offers a physical appliance that is designed to be secure, simple, and powerful. The device can be racked and configured in under 15 minutes because it limits user privileges by default, reducing the human attack surface and preventing accidental or malicious configuration changes. Security is further bolstered by eight-eyes validation for support and regular third-party penetration testing. On the performance side, the appliance leverages Veeam’s Smart Object Storage API to provide high-speed ingest and rapid recovery features like Instant VM Recovery. By focusing solely on being the best storage target for Veeam, Object First eliminates the trade-offs between security and performance found in DIY or general-purpose storage solutions.


Veeam Unleash – Enable AI and Advance Use Cases

Event: Tech Field Day Extra at RSAC 2026

Appearance: Veeam Presents at Tech Field Day Extra at RSAC 2026

Company: Veeam Software

Video Links:

Personnel: Michael Cade

Michael Cade and Emilee Tellez introduce the Unleash pillar, which focuses on empowering administrators to leverage backup data for AI-driven insights and advanced operational use cases. Veeam addresses the common challenge of garbage in, garbage out by providing a framework to ensure data hygiene before it is used to train models or fuel AI agents. The centerpiece of this initiative is Veeam Intelligence, an evolved natural language chatbot that moves beyond simple documentation scraping to interact directly with a customer’s specific backup environment. This allows users to generate complex reports, such as identifying failed jobs or malicious activity, through simple conversational queries, effectively transforming backup data from a dormant insurance policy into an active business asset.

The presentation features a live demonstration of the Model Context Protocol (MCP), a standard that Veeam is utilizing to bridge the gap between disparate IT management tools. By integrating Veeam Intelligence with other MCP-compatible servers, such as ServiceNow, administrators can automate entire workflows, from detecting an anomaly and generating an HTML executive report to opening a prioritized incident ticket, all within a single AI interface like Claude Desktop. While these capabilities are currently in technical preview, Veeam emphasizes that they are built with strict role-based access controls (RBAC) and data privacy guardrails, ensuring that only metadata leaves the customer’s site and that immutable backups remain protected from unauthorized modifications by AI agents.

Looking toward the future of enterprise AI, Veeam is positioning itself to manage “agentic” risks by providing visibility into the “social network” of AI agents across the infrastructure. This includes dynamically discovering agents in platforms like AWS Bedrock and Microsoft Copilot to map their access to sensitive data and implementing LLM firewalls to prevent data leakage. In response to delegate concerns about agents making misinformed decisions, the speakers explain that Veeam is developing specialized internal agents, such as a Backup Admin Agent, to provide contextual guardrails and enforce secondary human approval for critical changes. By allowing customers to “bring their own model” (BYOM) or use integrated options, Veeam aims to provide a flexible, secure foundation for the next era of data-driven innovation.


Veeam Resilience – Protect Everything, Recover Anything

Event: Tech Field Day Extra at RSAC 2026

Appearance: Veeam Presents at Tech Field Day Extra at RSAC 2026

Company: Veeam Software

Video Links:

Personnel: Emilee Tellez, Rick Vanover

Rick Vanover and Emilee Tellez focus on the core of the Veeam portfolio: Resilience. The presenters track the evolution of data protection through three distinct generations of disasters, starting with Operational Resilience (fire, flood, and hardware failure), moving into Cyber Resilience (ransomware and targeted encryption), and arriving at the emerging frontier of AI Resilience. This new phase addresses risks such as over-privileged AI agents and non-human identities that can cause massive data deletion or corruption at hyperspeed. To combat these threats, Veeam introduced Agent Commander, an integration of their recent security acquisitions designed to discover AI agents, monitor their permissions via the Data Command Graph, and provide a surgical undo button for AI-driven mistakes.

The presentation highlights how Veeam has pivoted toward “Left of the Boom” preparedness, specifically through its acquisition of Coveware. This integration provides deep forensic visibility into threat actor TTPs (Tactics, Techniques, and Procedures), allowing Veeam to offer proactive scanning before, during, and after a backup. Emilee Tellez details a comprehensive defensive grid that includes an Incident API for EDR tool integration, Recon Scanners to identify brute-force attempts in production, and Veeam Threat Hunter, a proprietary signature-based detection engine. Furthermore, Veeam addresses the exfiltration trend in modern ransomware by emphasizing Data Sovereignty and immutable storage, claiming that no customer using their 70+ immutable storage options along with encryption has failed to recover from an attack.

To dispel the myth that its software is only for small businesses or bound to Windows, Veeam showcases its enterprise-grade Veeam Software Appliance. Now running on a hardened Rocky Linux distribution, the appliance comes pre-packaged with the DISA STIG security profile at no extra cost, which is a significant benefit for government and high-security sectors. The segment features a demonstration of the appliance’s Day 2 operations, highlighting mandatory, automated updates that cover everything from the operating system to the backup application itself. By combining this hardened infrastructure with Data Labs for testing and a portability engine that facilitates massive migrations between hypervisors like VMware and Hyper-V, Veeam positions itself as the most comprehensive end-to-end resilience platform in the 2026 market.


Veeam Security – Protect and Reduce Risk

Event: Tech Field Day Extra at RSAC 2026

Appearance: Veeam Presents at Tech Field Day Extra at RSAC 2026

Company: Veeam Software

Video Links:

Personnel: Emilee Tellez, Michael Cade

In this presentation, Michael Cade and Emilee Tellez explain how Veeam has expanded its focus from traditional backup to comprehensive Data Security Posture Management (DSPM). By treating an organization’s data ecosystem like a “social network of data,” Veeam’s Data Command Center provides visibility into data lineage, sovereignty, and access rights across structured and unstructured systems. The speakers use a garage analogy to describe how enterprises tend to accumulate vast amounts of unmanaged data, and they highlight how Veeam helps identify ROT (Redundant, Obsolete, and Trivial) data. This not only reduces storage costs but significantly mitigates risk by shrinking the attack surface, ensuring that “God mode” privileges and exposed S3 buckets are flagged before they can be exploited.

The integration between primary data insights and secondary backup data allows Veeam to offer a more sophisticated Security pillar. Emilee Tellez details how the platform now incorporates inline malware detection, YARA rule processing, and file system activity analysis to identify symptoms of encryption or anomalous behavior. This creates a feedback loop with a broad ecosystem of over 60 security partners, including Microsoft Sentinel, Palo Alto Networks, and CrowdStrike. For example, if a storage array from Pure Storage detects an anomaly, it can trigger an API call to Veeam to automatically flag specific backups as infected, preventing them from being used in a restoration and ensuring that security analysts have a correlated view of the threat across the entire infrastructure.

A major theme of the discussion is the shift from simple recovery speed to recovery confidence. The presenters argue that in a cyber-incident scenario, recovering too quickly can lead to re-infection; instead, Veeam advocates for a staged, clean recovery process. This is supported by automated readiness checks and isolated “Data Labs” where users can perform dry runs of their disaster recovery (DR) plans. These tests validate everything from RPO/RTO compliance to the specific boot order of complex applications, such as ensuring a SQL database is online before its dependent application servers. By mapping these technical events to the MITRE ATT&CK framework, Veeam provides security teams with actionable intelligence and automated playbooks, transforming backup data from a passive insurance policy into a proactive component of the security operations center (SOC).


Veeam Understand – Know Your Data

Event: Tech Field Day Extra at RSAC 2026

Appearance: Veeam Presents at Tech Field Day Extra at RSAC 2026

Company: Veeam Software

Video Links:

Personnel: Emilee Tellez, Michael Cade

In this session, Field CTOs Michael Cade and Emilee Tellez dive into the practical application of Veeam’s four-pillar strategy, focusing heavily on the Understand phase. Central to this approach is the recent acquisition of a Data Security Posture Management (DSPM) solution, now integrated as the Data Command Center. This tool acts as a “social network of data,” utilizing a connector framework of over 350 integrations to inventory data systems across platforms like Microsoft 365, Kubernetes, and various cloud environments. By building a comprehensive map of data lineage and access, Veeam helps organizations identify sensitive information, uncover “God mode” privileges, and conduct ROT analysis to eliminate redundant, obsolete, and trivial data, thereby reducing the attack surface and storage costs.

Beyond visibility, the presentation highlights how this intelligence informs smarter backup and recovery workflows. The speakers emphasize that understanding data is the prerequisite for securing it, particularly in the face of agentic AI risks where data might be overshared or mismanaged by automated models. Veeam’s orchestration capabilities, which have evolved since 2018, allow for dynamic documentation and automated readiness checks to ensure compliance with Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). This ensures that disaster recovery plans are not just static documents but living, tested processes that can transition workloads, such as moving VMware backups to Hyper-V or Azure, at scale while maintaining a clear audit trail for cyber insurance and regulatory requirements like GDPR.
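Veeam's orchestration performs these readiness checks natively; as a rough, assumed illustration of what an RPO compliance check reduces to (the function name and inputs are hypothetical):

```python
from datetime import datetime, timedelta, timezone

def rpo_compliant(last_restore_point, rpo, now=None) -> bool:
    """True if the newest restore point is fresh enough to meet the RPO.

    `last_restore_point` is a timezone-aware datetime of the most recent
    successful backup; `rpo` is the maximum tolerable data-loss window
    as a timedelta.
    """
    now = now or datetime.now(timezone.utc)
    return (now - last_restore_point) <= rpo
```

An automated readiness check would run this per protected workload on a schedule and feed failures into the same dynamic documentation and audit trail described above.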

The discussion concludes with a focus on clean recovery, addressing the critical need to prevent the re-infection of environments during restoration. Veeam integrates multiple layers of defense, including inline scanning for anomalies, indicator of compromise (IOC) detection, and the use of YARA rules or antivirus signatures. This process can occur at rest, during backup, or before restoration into isolated sandbox environments for forensic testing. By partnering with an ecosystem of over 60 security providers, such as CrowdStrike, Veeam ensures that if a threat is detected in production, the backup system is immediately informed. This holistic approach transforms backup from a black box into a proactive security asset that validates data integrity and operational resilience in a post-AI world.

