Videos

ResOps Powered by Commvault Cloud Unity

Event: Tech Field Day Extra at RSAC 2026

Appearance: Commvault Presents at Tech Field Day Extra at RSAC 2026

Company: Commvault

Personnel: Chris Bevil, David Cunningham, Michael Fasulo

The presentation centers on the critical evolution from traditional disaster recovery to a more robust framework of cyber resilience. Chris Bevil, a recovering CISO, shares his transition from the high-stress frontline of security to Commvault, where he now focuses on the intersection of IT, security, and board-level business objectives. He emphasizes that the modern threat landscape has turned data recovery into a board-level priority, shifting the conversation from technical patching metrics to the fundamental business need for a faster, safer, and more trustworthy recovery process.

A central theme of the session is the introduction of Resilience Operations, or ResOps, a new methodology designed to break down the silos between IT infrastructure, cloud, and security teams. Bevil illustrates the current gap in organizational readiness by noting that many leaders still lack integrated incident response plans, despite the inevitability of compromise. He argues that disaster recovery is no longer sufficient if it cannot guarantee clean recovery. Without the ability to verify that restored data is untainted by ransomware or malware, organizations risk falling into a cycle of reinfection, a point underscored by a cautionary tale of an organization that took nearly 300 days to recover only to be hit again six months later.

The technical core of the session highlights the Commvault Cloud Unity platform and its sophisticated Resilience Operations (ResOps) methodology, which integrates high-fidelity signals from anomaly detection and deep data discovery. By utilizing a multi-layered, defense-in-depth approach, including YARA rules, signatures, and a deep scanning engine capable of detecting polymorphic and zero-day threats, Commvault ensures that recovery is not just possible, but clean. A standout feature discussed is synthetic recovery, an automated process that surgically identifies and skips malware or encrypted files across backup cycles to restore only the last known good versions. This innovation significantly minimizes data loss and eliminates the manual, trial-and-error restore guesswork traditionally required of administrators during an active breach.
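
To make the selection logic concrete, here is a minimal Python sketch of the "last known good version" idea described above. The data structures and field names are invented for illustration; the actual synthetic recovery engine operates on Commvault's internal backup metadata.

```python
from dataclasses import dataclass

@dataclass
class FileVersion:
    backup_time: str   # ISO 8601 timestamp of the backup cycle
    flagged: bool      # True if scanning marked this version malicious/encrypted

def last_known_good(versions_by_path: dict[str, list[FileVersion]]) -> dict[str, FileVersion]:
    """Pick, per file, the newest version not flagged by scanning.

    Conceptual only: the real engine works against Commvault's backup
    metadata, and files with no clean version are surfaced for review.
    """
    plan = {}
    for path, versions in versions_by_path.items():
        clean = [v for v in sorted(versions, key=lambda v: v.backup_time, reverse=True)
                 if not v.flagged]
        if clean:
            plan[path] = clean[0]  # newest clean copy wins
    return plan
```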

The technical demonstration led by David Cunningham highlights Commvault’s Threat Scan dashboard, a multi-layered defense-in-depth system that integrates anomaly detection, signature-based scanning, and machine learning. This platform identifies risks by correlating signals from internal sensors and third-party partners like CrowdStrike, categorizing resources into critical, high, or moderate risk levels. A key feature is the ability for administrators to perform threat hunts by injecting their own Indicators of Compromise (IOCs), such as YARA rules or hashes from the Google Threat Intelligence platform, to scan both current and historical backup data for hidden threats. To assist non-security personnel, the platform utilizes Arlie, an AI-powered assistant that provides real-time context and guidance during investigations.
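
For readers unfamiliar with YARA-based threat hunting, the sketch below shows the kind of IOC scan the dashboard performs, using the open-source yara-python library. The rule content is illustrative only and is not taken from the demonstration.

```python
# pip install yara-python
import yara

# Illustrative IOC only; real hunts would inject hashes or YARA rules
# sourced from a threat intelligence platform.
RULE = r"""
rule ransom_note_marker {
    strings:
        $note = "YOUR FILES HAVE BEEN ENCRYPTED" ascii nocase
    condition:
        $note
}
"""

rules = yara.compile(source=RULE)

def matches_ioc(path: str) -> bool:
    """Return True if a backed-up file matches the injected IOC."""
    return bool(rules.match(path))
```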


What’s NEW at Object First

Event: Tech Field Day Extra at RSAC 2026

Appearance: Object First Presents at Tech Field Day Extra at RSAC 2026

Company: Object First

Personnel: Anthony Cusimano

Anthony Cusimano, Director of Solutions Marketing and one of the company’s earliest employees, provides a roadmap of the company’s rapid hardware and software evolution. Since its inception with OOTBI (Out-of-the-Box Immutability), Object First has expanded its portfolio to include the Ootbi 432, a 2U node offering 432 terabytes of RAID 60 storage. A single four-node cluster can reach 1.7 petabytes, and through integration with Veeam’s Scale-Out Backup Repository (SOBR), users can scale beyond seven petabytes. On the opposite end of the spectrum, the company introduced the Ootbi Mini, a compact tower designed for edge locations and small businesses that delivers the same “absolute immutability” and honeypot features as the enterprise nodes but in a smaller, desk-friendly form factor.
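
The capacity figures are easy to sanity-check with a little arithmetic, assuming the per-node and per-cluster numbers quoted above (the five-cluster extrapolation is illustrative, not an Object First specification):

```python
NODE_TB = 432                  # usable capacity per Ootbi 432 node (2U, RAID 60)
cluster_tb = 4 * NODE_TB       # a single cluster scales to four nodes
print(f"{cluster_tb / 1000:.2f} PB")      # 1.73 PB: the ~1.7 PB figure above

# Scaling past one cluster uses Veeam SOBR with clusters as extents;
# five such extents (an illustrative count) would pass the 7 PB mark.
print(f"{5 * cluster_tb / 1000:.2f} PB")  # 8.64 PB
```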

A major shift in the company’s business model is the introduction of a consumption-based subscription service alongside the traditional perpetual ownership model. This model is supported by a specialized sizing calculator designed to navigate the complexities of immutable storage retention. To ensure a seamless experience, Object First requires telemetry for subscription customers; this allows the company to proactively monitor usage and ship a larger “Box B” before a customer hits their capacity threshold. The transition is designed to be a white glove migration where data is moved to the new appliance and the old hardware is returned, providing a predictable OpEx cycle that avoids the steep cost jumps typically associated with traditional hardware refreshes.

Looking toward the immediate future, Cusimano provided a sneak preview of the Fleet Manager platform, scheduled for official launch on May 6, 2026. Fleet Manager is a secure, cloud-based single pane of glass designed for managed service providers (MSPs) and large enterprises to monitor multiple Object First clusters across various global sites. Driven by telemetry, the tool provides unified visibility into system health, storage utilization, and honeypot alerts without ever touching or transferring actual backup data, maintaining strict zero-trust principles. Future updates to Fleet Manager aim to include centralized S3 bucket creation and remote firmware updates, further simplifying the management of large-scale immutable storage environments.


Object First Honeypot Demo with Geoff Burke

Event: Tech Field Day Extra at RSAC 2026

Appearance: Object First Presents at Tech Field Day Extra at RSAC 2026

Company: Object First

Personnel: Geoff Burke

Senior Technology Advisor Geoff Burke showcases the integrated honeypot functionality built into the Object First appliance. Designed as a digital tripwire, the honeypot is physically hosted on the appliance but logically segmented to ensure security. It serves as an early warning system to detect lateral movement and reconnaissance efforts by attackers who typically probe the network to identify high-value targets. By mimicking juicy targets like a Veeam Windows Repository or SQL Server, the honeypot lures hackers into interacting with it, allowing the system to trigger immediate alerts before the actual backup data is compromised.

The setup process is intentionally simple, requiring only two clicks within the security settings to enable the honeypot with either a static or DHCP IP address. Once active, the system monitors for unauthorized access attempts and can be configured to send notifications via email or Syslog to a Security Information and Event Management (SIEM) platform or tools like Grafana. In a live demonstration, Burke uses the Zenmap utility to perform an “intense scan” against the honeypot’s IP. The Object First dashboard immediately lights up with events, capturing the attacker’s attempts to probe protocols such as RDP and specialized Veeam services.
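
Zenmap’s “Intense scan” profile corresponds to running nmap with `-T4 -A -v`. To illustrate what such reconnaissance looks like on the wire, here is a minimal Python sketch that probes the kinds of ports a scanner would touch; the address is a placeholder and the port list is illustrative (3389 is RDP, 1433 is SQL Server, 9392 is the default Veeam Backup Service port).

```python
import socket

HONEYPOT_IP = "192.0.2.10"    # placeholder address (TEST-NET-1 range)
PORTS = [3389, 1433, 9392]    # RDP, SQL Server, Veeam Backup Service

# A connection attempt alone is enough for the honeypot to raise an event;
# no banner exchange or authentication is needed.
for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        state = "open" if s.connect_ex((HONEYPOT_IP, port)) == 0 else "closed/filtered"
        print(f"{HONEYPOT_IP}:{port} {state}")
```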

The honeypot provides both reactive and preventative benefits for organizations. Reactively, it ensures that IT admins are alerted to an intrusion at any hour, specifically targeting the “Friday night at 2:00 AM” window when many ransomware attacks begin. Preventatively, the visibility of these juicy but fake services can act as a deterrent. A sophisticated hacker who recognizes a cluster of high-value services on a single IP may realize they have hit a honeypot and retreat to avoid further detection. By integrating this feature for free, Object First adds a layer of proactive defense to their absolute immutability strategy, ensuring customers have the tools to stop an attack in its early stages.


How Object First Achieves Absolute Immutability

Event: Tech Field Day Extra at RSAC 2026

Appearance: Object First Presents at Tech Field Day Extra at RSAC 2026

Company: Object First

Personnel: Geoff Burke

Geoff Burke, a senior technology advisor at Object First, outlines the architecture of their “Out-of-the-Box Immutability” (OOTBI) solution. Built on Zero Trust principles, the system secures data by assuming breach at every level, from production data and backup software to the primary storage target. The Object First appliance is a hardened, Linux-based, on-premises storage target that exposes only the S3 protocol, leaving no path to destructive actions. By eliminating access to the command line and BIOS and strictly enforcing S3 Object Lock in compliance mode, the system ensures that data becomes immutable the instant it hits the disk (zero time to immutability), leaving no window for ransomware to alter or delete backup files.
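
Because the appliance speaks standard S3, writing immutably to it looks like any S3 Object Lock workflow. A minimal sketch with boto3, assuming a placeholder endpoint and credentials, and a bucket created with Object Lock enabled:

```python
# pip install boto3
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ootbi.example.internal",  # placeholder S3 endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

with open("backup.vbk", "rb") as body:
    s3.put_object(
        Bucket="veeam-backups",
        Key="job-42/backup.vbk",
        Body=body,
        # COMPLIANCE mode: nobody, including admins or the vendor, can
        # shorten the retention or delete the object until it expires.
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )
```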

At the core of the performance and integration story is the Smart Object Storage (SOS) API developed by Veeam. This API allows for deep integration between Veeam and the Object First cluster without the need for complex plugins, providing critical visibility into capacity and space that standard S3 protocols often lack. The SOS API enables smart entities, where Veeam breaks down backup jobs and intelligently allocates them to the best available node for load balancing and optimized throughput. This synergy allows the appliance to support a one-megabyte block size, specifically supercharging Veeam’s Instant Recovery feature, which allows businesses to run virtual machines directly from the backup storage at high speeds during a crisis.
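
The SOS API itself is proprietary, but the placement idea is easy to sketch. The following is a conceptual illustration of capacity-aware load balancing, not the actual Veeam algorithm:

```python
def pick_node(nodes):
    """Choose the target node for the next backup slice.

    Conceptual only: the real placement logic sits behind the proprietary
    SOS API; this just illustrates capacity-aware balancing.
    nodes: [{"name": ..., "free_tb": ..., "active_sessions": ...}, ...]
    """
    best = max(nodes, key=lambda n: (n["free_tb"], -n["active_sessions"]))
    return best["name"]

print(pick_node([
    {"name": "ootbi-1", "free_tb": 120.5, "active_sessions": 3},
    {"name": "ootbi-2", "free_tb": 301.0, "active_sessions": 1},
]))  # -> ootbi-2
```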

Object First positions its appliance as a simple, powerful alternative to complex DIY or cloud-only storage. While cloud storage is a vital secondary resilience zone, Burke emphasizes that local, on-premises storage is essential for meeting recovery time objectives, as cloud egress and latency can extend recovery windows to unacceptable levels. The appliance is designed to be racked and stacked with minimal configuration, using only three IP addresses and multi-factor authentication to reduce the risk of human error or tech debt. To further support overstretched IT teams, Object First includes a proactive telemetry service that monitors hardware health and storage capacity, ensuring that the last line of defense is always ready when a disaster strikes.


Why Object First is Best for Veeam

Event: Tech Field Day Extra at RSAC 2026

Appearance: Object First Presents at Tech Field Day Extra at RSAC 2026

Company: Object First

Personnel: Anthony Cusimano, Geoff Burke

Object First was founded with the specific mission of creating a backup storage solution that is ransomware-proof. The company focuses on addressing the primary vulnerability in data protection: the storage target. Since 96% of ransomware attacks target backup data to prevent recovery, Object First provides an intentionally hardened, immutable storage appliance designed specifically for Veeam Backup & Replication. As of January 2026, Object First has been officially acquired by Veeam, integrating its technology directly into the Veeam portfolio.

The presentation introduces the concept of Zero Trust Data Resilience (ZTDR), which applies zero-trust principles specifically to the backup ecosystem. This framework emphasizes three core pillars: segmenting backup software from storage to minimize the blast radius of an attack, creating multiple resilient zones for data copies, and utilizing absolute immutability. Unlike standard immutable storage that can often be bypassed by administrative overrides or governance modes, absolute immutability ensures that once data is written, it cannot be altered or deleted by anyone, including the customer or the vendor, until the set retention period expires. This is achieved through the strict enforcement of S3 Object Lock in compliance mode and a hardware-integrated security layer.

Object First offers a physical appliance that is designed to be secure, simple, and powerful. The device can be racked and configured in under 15 minutes because it limits user privileges by default, reducing the human attack surface and preventing accidental or malicious configuration changes. Security is further bolstered by eight-eyes validation for support and regular third-party penetration testing. On the performance side, the appliance leverages Veeam’s Smart Object Storage API to provide high-speed ingest and rapid recovery features like Instant VM Recovery. By focusing solely on being the best storage target for Veeam, Object First eliminates the trade-offs between security and performance found in DIY or general-purpose storage solutions.


Veeam Unleash – Enable AI and Advance Use Cases

Event: Tech Field Day Extra at RSAC 2026

Appearance: Veeam Presents at Tech Field Day Extra at RSAC 2026

Company: Veeam Software

Personnel: Michael Cade

Michael Cade and Emilee Tellez introduce the Unleash pillar, which focuses on empowering administrators to leverage backup data for AI-driven insights and advanced operational use cases. Veeam addresses the common challenge of garbage in, garbage out by providing a framework to ensure data hygiene before it is used to train models or fuel AI agents. The centerpiece of this initiative is Veeam Intelligence, an evolved natural language chatbot that moves beyond simple documentation scraping to interact directly with a customer’s specific backup environment. This allows users to generate complex reports, such as identifying failed jobs or malicious activity, through simple conversational queries, effectively transforming backup data from a dormant insurance policy into an active business asset.

The presentation features a live demonstration of the Model Context Protocol (MCP), a standard that Veeam is utilizing to bridge the gap between disparate IT management tools. By integrating Veeam Intelligence with other MCP-compatible servers, such as ServiceNow, administrators can automate entire workflows, from detecting an anomaly and generating an HTML executive report to opening a prioritized incident ticket, all within a single AI interface like Claude Desktop. While these capabilities are currently in technical preview, Veeam emphasizes that they are built with strict role-based access controls (RBAC) and data privacy guardrails, ensuring that only metadata leaves the customer’s site and that immutable backups remain protected from unauthorized modifications by AI agents.
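
To show the shape of such an integration, here is a minimal MCP server written with the official Model Context Protocol Python SDK. The tool name, data, and reporting logic are invented for illustration; this is not Veeam’s actual MCP implementation.

```python
# pip install mcp   (official Model Context Protocol Python SDK)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("backup-reports")

@mcp.tool()
def failed_jobs(last_hours: int = 24) -> list[dict]:
    """Return backup jobs that failed within the last N hours.

    Stubbed data: a real server would query the backup platform's
    reporting API (metadata only) under the caller's RBAC scope.
    """
    return [{"job": "SQL-Prod-Nightly", "error": "snapshot timeout"}]

if __name__ == "__main__":
    mcp.run()  # serves MCP over stdio so a host like Claude Desktop can attach
```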

Looking toward the future of enterprise AI, Veeam is positioning itself to manage “agentic” risks by providing visibility into the “social network” of AI agents across the infrastructure. This includes dynamically discovering agents in platforms like AWS Bedrock and Microsoft Copilot to map their access to sensitive data and implementing LLM firewalls to prevent data leakage. In response to delegate concerns about agents making misinformed decisions, the speakers explain that Veeam is developing specialized internal agents, such as a Backup Admin Agent, to provide contextual guardrails and enforce secondary human approval for critical changes. By allowing customers to “bring their own model” (BYOM) or use integrated options, Veeam aims to provide a flexible, secure foundation for the next era of data-driven innovation.


Veeam Resilience – Protect Everything, Recover Anything

Event: Tech Field Day Extra at RSAC 2026

Appearance: Veeam Presents at Tech Field Day Extra at RSAC 2026

Company: Veeam Software

Personnel: Emilee Tellez, Rick Vanover

Rick Vanover and Emilee Tellez focus on the core of the Veeam portfolio: Resilience. The presenters track the evolution of data protection through three distinct generations of disasters, starting with Operational Resilience (fire, flood, and hardware failure), moving into Cyber Resilience (ransomware and targeted encryption), and arriving at the emerging frontier of AI Resilience. This new phase addresses risks such as over-privileged AI agents and non-human identities that can cause massive data deletion or corruption at hyperspeed. To combat these threats, Veeam introduced Agent Commander, an integration of their recent security acquisitions designed to discover AI agents, monitor their permissions via the Data Command Graph, and provide a surgical undo button for AI-driven mistakes.

The presentation highlights how Veeam has pivoted toward “Left of the Boom” preparedness, specifically through its acquisition of Coveware. This integration provides deep forensic visibility into threat actor TTPs (Tactics, Techniques, and Procedures), allowing Veeam to offer proactive scanning before, during, and after a backup. Emilee Tellez details a comprehensive defensive grid that includes an Incident API for EDR tool integration, Recon Scanners to identify brute-force attempts in production, and Veeam Threat Hunter, a proprietary signature-based detection engine. Furthermore, Veeam addresses the exfiltration trend in modern ransomware by emphasizing Data Sovereignty and immutable storage, claiming that no customer using its 70+ immutable storage options and encryption has failed to recover from an attack.
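
As a sketch of what an EDR-to-backup handoff might look like, the snippet below posts a malware-detection event to a Veeam-style REST endpoint. The endpoint path, payload fields, and API version header are illustrative and should be checked against the version-specific Veeam REST API reference before use.

```python
# pip install requests
import requests

BASE = "https://vbr.example.internal:9419"  # placeholder host; 9419 is the default REST port
event = {
    "detectionTimeUtc": "2026-03-01T02:13:00Z",
    "machine": {"fqdn": "sql01.example.internal"},
    "details": "EDR flagged mass file renames",
    "engine": "CrowdStrike",
}

resp = requests.post(
    f"{BASE}/api/v1/malwareDetection/events",  # illustrative Incident API path
    json=event,
    headers={"Authorization": "Bearer <token>", "x-api-version": "1.1-rev1"},
    timeout=30,
)
resp.raise_for_status()
```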

To dispel the myth that its software is only for small businesses or bound to Windows, Veeam showcases its enterprise-grade Veeam Software Appliance. Now running on a hardened Rocky Linux distribution, the appliance comes pre-packaged with the DISA STIG security profile at no extra cost, which is a significant benefit for government and high-security sectors. The segment features a demonstration of the appliance’s Day 2 operations, highlighting mandatory, automated updates that cover everything from the operating system to the backup application itself. By combining this hardened infrastructure with Data Labs for testing and a portability engine that facilitates massive migrations between hypervisors like VMware and Hyper-V, Veeam positions itself as the most comprehensive end-to-end resilience platform in the 2026 market.


Veeam Security – Protect and Reduce Risk

Event: Tech Field Day Extra at RSAC 2026

Appearance: Veeam Presents at Tech Field Day Extra at RSAC 2026

Company: Veeam Software

Personnel: Emilee Tellez, Michael Cade

In this presentation, Michael Cade and Emilee Tellez explain how Veeam has expanded its focus from traditional backup to comprehensive Data Security Posture Management (DSPM). By treating an organization’s data ecosystem like a “social network of data,” Veeam’s Data Command Center provides visibility into data lineage, sovereignty, and access rights across structured and unstructured systems. The speakers use a garage analogy to describe how enterprises tend to accumulate vast amounts of unmanaged data, and they highlight how Veeam helps identify ROT (Redundant, Obsolete, and Trivial) data. This not only reduces storage costs but significantly mitigates risk by shrinking the attack surface, ensuring that “God mode” privileges and exposed S3 buckets are flagged before they can be exploited.

The integration between primary data insights and secondary backup data allows Veeam to offer a more sophisticated secure pillar. Emilee Tellez details how the platform now incorporates inline malware detection, YARA rule processing, and file system activity analysis to identify symptoms of encryption or anomalous behavior. This creates a feedback loop with a broad ecosystem of over 60 security partners, including Microsoft Sentinel, Palo Alto Networks, and CrowdStrike. For example, if a storage array from Pure Storage detects an anomaly, it can trigger an API call to Veeam to automatically flag specific backups as infected, preventing them from being used in a restoration and ensuring that security analysts have a correlated view of the threat across the entire infrastructure.

A major theme of the discussion is the shift from simple recovery speed to recovery confidence. The presenters argue that in a cyber-incident scenario, recovering too quickly can lead to re-infection; instead, Veeam advocates for a staged, clean recovery process. This is supported by automated readiness checks and isolated “Data Labs” where users can perform dry runs of their disaster recovery (DR) plans. These tests validate everything from RPO/RTO compliance to the specific boot order of complex applications, such as ensuring a SQL database is online before its dependent application servers. By mapping these technical events to the MITRE ATT&CK framework, Veeam provides security teams with actionable intelligence and automated playbooks, transforming backup data from a passive insurance policy into a proactive component of the security operations center (SOC).
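
The boot-order problem is a classic dependency ordering, and Python’s standard library solves it directly. A minimal sketch, with an invented four-VM application:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each VM lists what must already be online before it powers on.
deps = {
    "sql01": [],                  # database tier first
    "app01": ["sql01"],           # app servers depend on the database
    "app02": ["sql01"],
    "web01": ["app01", "app02"],  # web tier last
}

# static_order() yields a power-on sequence honoring every dependency,
# e.g. ['sql01', 'app01', 'app02', 'web01'].
print(list(TopologicalSorter(deps).static_order()))
```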


Veeam Understand – Know Your Data

Event: Tech Field Day Extra at RSAC 2026

Appearance: Veeam Presents at Tech Field Day Extra at RSAC 2026

Company: Veeam Software

Personnel: Emilee Tellez, Michael Cade

In this session, Field CTOs Michael Cade and Emilee Tellez dive into the practical application of Veeam’s four-pillar strategy, focusing heavily on the Understand phase. Central to this approach is the recent acquisition of a Data Security Posture Management (DSPM) solution, now integrated as the Data Command Center. This tool acts as a “social network of data,” utilizing a connector framework of over 350 integrations to inventory data systems across platforms like Microsoft 365, Kubernetes, and various cloud environments. By building a comprehensive map of data lineage and access, Veeam helps organizations identify sensitive information, uncover “God mode” privileges, and conduct ROT analysis to eliminate redundant, obsolete, and trivial data, thereby reducing the attack surface and storage costs.

Beyond visibility, the presentation highlights how this intelligence informs smarter backup and recovery workflows. The speakers emphasize that understanding data is the prerequisite for securing it, particularly in the face of agentic AI risks where data might be overshared or mismanaged by automated models. Veeam’s orchestration capabilities, which have evolved since 2018, allow for dynamic documentation and automated readiness checks to ensure compliance with Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). This ensures that disaster recovery plans are not just static documents but living, tested processes that can transition workloads, such as moving VMware backups to Hyper-V or Azure, at scale while maintaining a clear audit trail for cyber insurance and regulatory requirements like GDPR.
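
An RPO readiness check reduces to simple timestamp arithmetic. A minimal Python sketch, with invented values:

```python
from datetime import datetime, timedelta, timezone

def rpo_compliant(newest_restore_point: datetime, rpo: timedelta) -> bool:
    """True if the newest restore point falls inside the RPO window."""
    return datetime.now(timezone.utc) - newest_restore_point <= rpo

# A 4-hour RPO checked against a restore point taken 6 hours ago fails.
six_hours_ago = datetime.now(timezone.utc) - timedelta(hours=6)
print(rpo_compliant(six_hours_ago, timedelta(hours=4)))  # False
```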

The discussion concludes with a focus on clean recovery, addressing the critical need to prevent the re-infection of environments during restoration. Veeam integrates multiple layers of defense, including inline scanning for anomalies, indicator of compromise (IOC) detection, and the use of YARA rules or antivirus signatures. This process can occur at rest, during backup, or before restoration into isolated sandbox environments for forensic testing. By partnering with an ecosystem of over 60 security providers, such as CrowdStrike, Veeam ensures that if a threat is detected in production, the backup system is immediately informed. This holistic approach transforms backup from a black box into a proactive security asset that validates data integrity and operational resilience in a post-AI world.


Veeam in 2026 with Rick Vanover

Event: Tech Field Day Extra at RSAC 2026

Appearance: Veeam Presents at Tech Field Day Extra at RSAC 2026

Company: Veeam Software

Personnel: Rick Vanover

In this presentation, Rick Vanover reintroduces Veeam as a global leader that has evolved far beyond its origins in virtual backup. Now clearing over $2 billion in revenue with a workforce of 6,000 employees, the company has secured the top market share spot and expanded its reach into diverse environments ranging from Antarctica to submarines. While still recognized for its foundational backup capabilities, Veeam now stands as the most deployed backup solution for Microsoft 365 and the most deployed off-the-shelf Kubernetes backup software, reflecting a strategic shift to meet the modern demands of primary and secondary data management.

The core of the presentation focuses on a new strategic framework designed to address the convergence of compliance, security, and data resilience. Vanover outlines a journey through four key pillars: understanding data, securing data, ensuring resilience, and unleashing data potential. This evolution is driven by the necessity of protecting precious data from sophisticated modern threats, moving past traditional “fire and flood” disasters to combat the complexities of ransomware and the risks inherent in the rapid adoption of agentic AI. To manage this landscape, Veeam is integrating Data Security Posture Management (DSPM) with traditional recovery, utilizing the power of AI to provide the necessary scale and oversight.

During a candid discussion with delegates, the presenters emphasized that digital transformation has fundamentally changed the stakes for global CIOs, who must now balance innovation velocity with high-level security. The “Veeam of today” is presented not just as a recovery tool, but as a comprehensive resilience partner that provides technical proof points for an AI-driven world. By addressing the risks of oversharing and automated data workflows, Veeam aims to maintain its reputation for reliable backup while expanding its toolkit to ensure that organizational data remains safe, compliant, and ready to be leveraged for future growth.


Knowledge and culture retained for all by the Internet Archive

Event: Cloud Field Day 25

Appearance: The Internet Archive Presents at Cloud Field Day 25

Company: Internet Archive

Personnel: Joy Chesbrough

The Internet Archive is a non-profit library of millions of free texts, movies, software, music, websites, and more. Joy Chesbrough introduces us to the Internet Archive’s mission and accomplishments before examining how this public-good service is funded and operated. Joy, who leads the organization’s philanthropy efforts, explains that the Internet Archive was founded by technologist Brewster Kahle nearly 30 years ago as a non-profit to ensure knowledge remained open, free, and accessible to everyone, using an open-source platform. As a global public service, it is one of the world’s most frequently visited websites, attracting 2.2 million daily users who access a vast array of content from books and magazines to historical tech manuals.

A cornerstone of the Internet Archive’s work is the Wayback Machine, lauded as a “time machine for the web” that prevents digital content from disappearing. This tool has been critical for journalists, capturing government websites during presidential transitions (the end-of-term crawls), preserving cultural heritage during crises such as the war in Ukraine, and digitizing Aruba’s cultural record. Beyond the Wayback Machine, the Internet Archive’s mission is to provide universal access to all knowledge, much like a modern Library of Alexandria. It houses an astounding 250 petabytes of data, 113 million public media items, and over one trillion web pages, making it ten times larger than the U.S. Library of Congress. Other vital projects include “Archive-It” for institutional digital preservation, “Democracy’s Library” for archiving government documents globally, “Community Webs” to ensure marginalized voices are historically recorded, and “Open Library,” which provides millions of accessible books, working to overcome the statistic that only 7% of published works are in accessible formats. They also combat website “link rot” through partnerships with platforms like Wikipedia and WordPress, ensuring enduring access to linked content.
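
The link-rot lookups described above are served by a public, documented endpoint. A minimal Python example querying the Wayback Machine’s availability API:

```python
# pip install requests
import requests

def closest_snapshot(url: str) -> str | None:
    """Query the Wayback Machine's public availability API for the
    closest archived copy of a URL."""
    resp = requests.get("https://archive.org/wayback/available",
                        params={"url": url}, timeout=30)
    resp.raise_for_status()
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

print(closest_snapshot("example.com"))
```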

The Internet Archive operates as a purpose-driven, independent non-profit, committed to privacy by not tracking users, displaying ads, or monetizing its content. Its $30 million annual operating budget, while a small fraction of the U.S. nonprofit sector’s over $592 billion, is used efficiently without extensive marketing, as its brand recognition often stems from the Wayback Machine. Joy’s philanthropy team has significantly expanded its donor base, attracting nearly 250,000 unique individual donors over the past year. Supporters are deeply loyal, with an average donation higher than most nonprofits, reflecting the perceived value of the Internet Archive in providing a stable foundation for truth and combating misinformation and vanishing culture in an increasingly digital and volatile world. The organization is dedicated to ensuring this invaluable library endures for future generations, preserving the world’s culture and history in perpetuity.


Why VCF Networking NSX Is Essential Even in a VXLAN World with VMware by Broadcom

Event: Cloud Field Day 25

Appearance: VMware by Broadcom Presents at Cloud Field Day 25

Company: VMware by Broadcom

Personnel: Dimitri Desmidt

Physical fabrics may provide VXLAN, but modern private clouds demand far more than basic overlay connectivity. This video explores how VCF Networking (NSX) decouples networking from the physical fabric, enabling automated, policy-driven network services that integrate natively with vCenter and VCF Automation. We also examine Virtual Private Clouds (VPCs), which empower developers to instantly provision secure, multi-tenant environments without deep networking expertise. Discover why VCF Networking is not simply an overlay but the foundational layer that unlocks agility, operational simplicity, and true cloud operating models inside the modern data center. Dimitri Desmidt shows why network virtualization within VMware Cloud Foundation (VCF) is essential, even if the underlying physical network already supports VXLAN. He highlights that while physical networks provide basic overlay connectivity, they fall short in delivering the comprehensive network services – such as switching, routing, load balancing, and firewalling – that modern applications require. Managing these services manually on physical infrastructure for each new application often entails a cumbersome, ticket-driven process spanning multiple teams and interfaces, delaying application deployment by weeks or even months.

VCF Networking, powered by NSX, addresses this by bringing these crucial network services directly into the cloud platform, enabling a self-service, automated consumption model. This shift eliminates the need for manual configuration and inter-team coordination, drastically reducing network provisioning time from weeks to mere seconds. A key innovation in VCF 9.0 is the introduction of Virtual Private Clouds (VPCs), which adopt the familiar industry-standard concept. A VPC is a self-contained “network bubble” that developers or vCenter administrators can instantly provision with subnets and automated IP address management. VCF is pre-configured with an IP block designated for future application networks, ensuring that newly provisioned subnets do not conflict with or overlap existing physical network infrastructure, thereby preventing IP conflicts and maintaining network stability.
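
The no-overlap guarantee is straightforward to reason about with standard CIDR math. A minimal Python sketch, with placeholder address ranges standing in for the reserved VCF block and the existing physical networks:

```python
import ipaddress

# Placeholder ranges: the block reserved for future application networks
# and the ranges already routed on the physical fabric.
vpc_block = ipaddress.ip_network("10.200.0.0/16")
existing = [ipaddress.ip_network("10.0.0.0/16"),
            ipaddress.ip_network("192.168.10.0/24")]

def safe_to_allocate(subnet: str) -> bool:
    """A new VPC subnet must come from the reserved block and must not
    overlap anything already present on the physical network."""
    net = ipaddress.ip_network(subnet)
    return net.subnet_of(vpc_block) and not any(net.overlaps(e) for e in existing)

print(safe_to_allocate("10.200.5.0/24"))  # True: carved from the reserved block
print(safe_to_allocate("10.0.3.0/24"))    # False: would collide with the fabric
```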

VPCs offer granular control over network access, allowing for “public” subnets exposed to the external world, “private transit gateway” subnets for communication within a tenant, and “private VPC” subnets for isolation within a single VPC bubble. While VCF Networking handles basic access control and Network Address Translation (NAT), more advanced security needs, such as protocol-level firewalling, IDS/IPS, and malware inspection, are addressed by vDefend. The VPC gateway is fully distributed, running as a process within each ESX host, making the creation of new subnets completely transparent to the underlying physical fabric. This design means the physical network only sees encapsulated traffic between ESX host IPs, so no changes are required to the physical switches. This approach not only provides exceptional flexibility for dynamically connecting virtual machines but also allows for overlapping private IP address spaces across different VPCs, as all outbound traffic is automatically NAT’d, preventing conflicts. Additionally, VCF enables administrators to set quotas for network resources, ensuring fair usage and resource governance across various tenants or business units.


The Rise and Fall of the Cloud – Again with Tom Lyon

Event: Cloud Field Day 25

Appearance: The Rise and Fall of the Cloud – Again

Company: Tech Field Day

Personnel: Tom Lyon

Tom Lyon begins by suggesting that if cloud computing is defined as outsourcing data processing to a company that owns the equipment, then the concept is nearly a hundred years old. He traces its origins to the 1930s, when IBM established service bureaus where clients could bring data to be processed using punch cards and tabulating machines, an expensive service akin to modern cloud offerings. This early period, marked by the Great Depression, saw basic arithmetic being outsourced, with computing often done by “human computers” before the widespread adoption of machines. The post-World War II era saw advanced punch card computations and a 1956 IBM consent decree that necessitated the creation of the Service Bureau Corporation, highlighting the significance of outsourced data processing even then.

The evolution continued into the 1960s with the proliferation of service bureaus, the birth of timesharing, and the emergence of software as a distinct business. The late 60s witnessed “go-go years” with the concept of a “computer utility” – a direct precursor to modern cloud computing – fueled by remote access, modems, and hard drives, leading to “irrational exuberance” and a subsequent “major depression” in the early 70s. This bust was exacerbated by a shift from services to software and the rise of the mini-computer. The late 70s and 80s brought networking innovations and the desktop era, with the “network is the computer” philosophy solidifying the idea of distributed computing, though general computing wasn’t yet fully within the network “cloud”. The late 90s dot-com boom saw the rise of ISPs and early Infrastructure as a Service (IaaS) providers like Loudcloud and TerraSpring, again characterized by “irrational exuberance” and ambitious data center plans.

However, this boom also led to a significant bust in the early 2000s, which Lyon attributes more to “telecom fraud” than just dot-com speculation. AWS launched in 2006, offering basic cloud services, just before the real estate crash. The 2010s saw AI “get real” with breakthroughs like Watson and AlexNet, propelled by GPU processing and big data. Today, in the 2020s, AI is experiencing “total irrational exuberance,” with an “insane” build-out of data centers, NVIDIA’s dominance, and concerns about creative accounting and fraud. Lyon warns of an impending “AI recession” driven by unsustainable growth expectations, massive infrastructure challenges (especially in energy and water), data sovereignty concerns, and copyright issues. While acknowledging the underlying value of AI, he suggests a period of “normalcy” is five to ten years away, similar to how previous busts eventually paved the way for future growth by leaving behind overbuilt but eventually useful infrastructure.


Database as a Service (DBaaS) with VMware Data Services Manager from VMware by Broadcom

Event: Cloud Field Day 25

Appearance: VMware by Broadcom Presents at Cloud Field Day 25

Company: VMware by Broadcom

Personnel: Eric Gray

Open-source databases like PostgreSQL and MySQL are in high demand, but provisioning them often creates bottlenecks for vSphere admins and DBA teams. Ticket queues grow, governance slips, and “shadow IT” introduces risk. In this video, we show how VMware Data Services Manager (DSM) enables on-demand Database-as-a-Service (DBaaS) on VMware Cloud Foundation. Learn how infrastructure policies and RBAC deliver secure, self-service database deployment while maintaining visibility and control. We also highlight how DSM automates HA deployments, read replicas, backups, and point-in-time recovery, eliminating database sprawl and simplifying Day 2 operations. This addresses the common challenges organizations face with database sprawl, lack of governance, configuration drift, and ticketing bottlenecks when developers arbitrarily spin up VMs with databases without proper oversight.

VMware Data Services Manager (DSM) integrates as an appliance and vCenter plugin within an existing VMware Cloud Foundation (VCF) environment, leveraging management and workload domains. As a vSphere administrator, you retain control over the infrastructure, defining compute resources (clusters, resource pools, supervisor namespaces), storage policies (vSAN, NFS), and networking (VLANs, VPC subnets). DSM handles IP address assignment and allows administrators to define VM classes (e.g., small, medium, large) to provide granular control over resource allocation. Supported databases currently include PostgreSQL, MySQL, and Microsoft SQL Server in tech preview, with the system designed using cloud-native Kubernetes technologies.

The administrative setup involves configuring S3-compatible backup targets (on-prem or cloud), enabling specific database versions, creating DSM namespaces to group resources, and linking directory groups (such as “developers”) to these namespaces with appropriate DSM user roles. Data service policies tie together specific database engines, namespaces, allowed versions, infrastructure policies, and backup locations, providing robust guardrails for self-service. For developers, this translates to a streamlined experience where they can easily provision single or clustered database instances, perform version upgrades, enable read replicas for scaling, and manage backups, all through a simplified UI or API, receiving a ready-to-use connection string for their applications. DSM also offers basic monitoring and integrates with VCF operations or Prometheus for more comprehensive metric collection, ensuring health and resource management while providing flexible point-in-time recovery options.
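
As a sketch of what the developer-facing flow might look like, the snippet below provisions a database through a hypothetical DSM-style REST call. The URL, field names, and response schema are invented for illustration and are not DSM’s actual API:

```python
# pip install requests
import requests

DSM = "https://dsm.example.internal"  # placeholder endpoint
spec = {                              # illustrative fields, not DSM's schema
    "engine": "postgresql",
    "version": "16",
    "vmClass": "medium",              # one of the admin-defined VM classes
    "namespace": "developers",        # DSM namespace the requester belongs to
    "replicas": 1,                    # read replicas can be added later for scale-out
    "backupLocation": "s3-backups-west",
}

resp = requests.post(f"{DSM}/api/databases", json=spec,
                     headers={"Authorization": "Bearer <token>"}, timeout=60)
resp.raise_for_status()
print(resp.json()["connectionString"])  # handed straight to the application
```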


The DRAM Barrier – Why VMware Advanced Memory Tiering is a Data Center Game Changer with VMware

Event: Cloud Field Day 25

Appearance: VMware by Broadcom Presents at Cloud Field Day 25

Company: VMware by Broadcom

Personnel: Dave Morera

Memory is often the most expensive and restrictive bottleneck in modern datacenters. VMware Memory Tiering (an industry exclusive) solves this by automating data placement across high-performance and cost-optimized memory tiers. This session explores how this unique hypervisor integration drives 40%+ TCO savings, improves VM density, and ensures smarter resource consumption. Learn why VMware is the sole leader in transforming memory from a hardware constraint into a strategic advantage. This innovative feature, “VMware Advanced Memory Tiering with NVMe,” addresses the rapidly escalating cost of DRAM, which now accounts for up to 96% of a server’s bill of materials. Presented as a core component of vSphere, and thus included in VMware Cloud Foundation (VCF) and VMware vSphere Foundation (VVF), this technology aims to overcome the “DRAM barrier” by intelligently managing memory resources.

The core of VMware Memory Tiering involves using less expensive NVMe devices as a secondary memory tier, with DRAM remaining the primary, high-performance tier (Tier 0). From a VM’s perspective, this combination appears as a single, logical memory space, making the underlying tiering transparent. VMware employs a proprietary algorithm that constantly monitors memory page activity, classifying pages as hot, warm, or cold based on recent access patterns. When DRAM utilization reaches a configurable threshold (e.g., 70-75% pressure), cold, inactive pages are proactively moved to the NVMe tier, freeing up DRAM for active workloads. This intelligent, proactive approach differs from reactive measures like swapping or ballooning, enabling customers to achieve over 40% reduction in total cost of ownership by purchasing less physical DRAM, and doubling VM density on existing hardware due to more efficient CPU and memory utilization. The NVMe devices must be directly connected, dedicated solely for this purpose, and meet specific endurance and performance requirements, with hardware RAID support for data mirroring and redundancy.
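
The tiering loop is easy to express in outline. A conceptual Python sketch of the threshold-and-demote behavior described above; the real classifier and page mover live inside the ESX hypervisor:

```python
def demote_cold_pages(pages, dram_used_pct, threshold_pct=75):
    """Conceptual sketch of the proactive tiering loop described above.

    pages: list of dicts like {"id": 7, "temp": "cold", "tier": "dram"};
    invented structures standing in for hypervisor-internal state.
    """
    if dram_used_pct < threshold_pct:
        return []  # the proactive path only engages under DRAM pressure
    demoted = [p for p in pages if p["tier"] == "dram" and p["temp"] == "cold"]
    for p in demoted:
        p["tier"] = "nvme"  # frees DRAM for hot and warm pages
    return demoted
```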

For operational flexibility, VMware Memory Tiering offers configurable ratios between DRAM and NVMe, starting with a default 1:1 ratio (providing 100% more memory capacity) and scalable up to 1:4 (a 4X increase), with a maximum partition size of 4TB. This allows administrators to adjust capacity based on workload needs without physical hardware changes. The feature seamlessly integrates with existing vSphere functionalities like HA, DRS, and vMotion, as well as various encryption methods (host, VM, vSAN), with vMotion being “tier-aware” to handle VM migrations between hosts with and without memory tiering. However, certain specialized VMs, such as latency-sensitive applications, monster VMs, and security-hardened VMs (e.g., those using TDX or SEV for memory encryption), are not supported as the hypervisor cannot classify their encrypted memory pages. VMware provides extensive documentation, including performance whitepapers, deployment guides, and Hands-on Labs, to aid in understanding and implementing this transformative technology.



Accelerate cloud and AI workloads with the Hammerspace Data Platform

Event: Cloud Field Day 25

Appearance: Hammerspace Presents at Cloud Field Day 25

Company: Hammerspace

Personnel: Dan Reger

Hammerspace is a data platform for unstructured data that helps customers unify all their data storage and accelerate their workloads, including AI, to deliver results faster – both in the cloud and in their own data centers. This session will introduce Hammerspace and how it helps cloud customers maximize performance, avoid wholesale data migration, and reduce cloud storage costs. Dan Reger, Senior Product Marketing Director at Hammerspace, focused on accelerating cloud and AI workloads using the platform, particularly highlighting its benefits for cloud and hybrid environments. He noted that migrating workloads to the cloud is often complex, especially when data is distributed across multiple regions or subject to regulatory requirements, and that traditional cloud storage isn’t always optimized for modern high-performance demands.

Hammerspace tackles these challenges by providing a unified global file system namespace that spans across on-premises storage, various cloud storage services (block, file, object), and even different cloud regions. This agentless solution allows customers to simplify and speed cloud migrations, accessing data everywhere without wholesale data movement. The platform dynamically orchestrates data, moving only the necessary subsets to the fastest available storage tiers (e.g., local NVMe on bare-metal GPU servers) to maximize workload performance and compute utilization. This objective-based policy engine ensures data is always where it’s needed, preventing bottlenecks and eliminating unnecessary data transfers.
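
As a conceptual illustration of objective-based placement, the sketch below evaluates ordered placement rules against file metadata. The structures and tier names are invented; Hammerspace’s actual policy engine is declarative and operates on its global metadata:

```python
def place(file_meta, objectives):
    """Evaluate ordered placement objectives against file metadata.

    Conceptual only: this merely mimics the shape of the decision.
    """
    for predicate, tier in objectives:
        if predicate(file_meta):
            return tier
    return "object-capacity"  # default low-cost tier

objectives = [
    # Keep active training data on local NVMe next to the GPUs.
    (lambda f: f["tags"].get("dataset") == "training", "local-nvme"),
    # Age out anything untouched for a quarter.
    (lambda f: f["last_access_days"] > 90, "cloud-archive"),
]

print(place({"path": "/ai/x.bin", "tags": {"dataset": "training"},
             "last_access_days": 2}, objectives))  # -> local-nvme
```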

The platform is designed to accelerate AI, HPC, and workloads involving large volumes of unstructured data across diverse environments. Hammerspace’s capabilities, including parallel NFS and intelligent data orchestration, ensure optimal data performance and efficient use of cloud compute resources. This approach also addresses concerns such as rising cloud storage costs and data sovereignty, with Hammerspace approved for deployment in OCI’s dedicated regions. Real-world examples, such as Meta and other unnamed “household name” customers, illustrate successful large-scale deployments involving thousands of servers, tens of thousands of GPUs, and petabytes of data, demonstrating Hammerspace’s ability to seamlessly integrate and enhance existing IT processes without requiring significant changes.


AI is Driving all Infrastructure Change – Delegate Roundtable at AI Infrastructure Field Day 4

Event: AI Infrastructure Field Day 4

Appearance: Delegate Roundtable at AI Infrastructure Field Day

Company: Tech Field Day

Personnel: Alastair Cooke

This roundtable discussion explores how the reality of artificial intelligence is driving profound shifts in infrastructure, moving beyond mere marketing labels to necessitate new, distinct approaches. Participants noted this transformative power in vendor presentations, citing Exite Labs’ massively scalable ARM architectures embedded within network interfaces and VAST’s innovative use of BlueField DPUs, both driven by the evolving demands of AI. The conversation highlighted AI’s role as a significant driver of innovation in networking, pushing traditional Ethernet to the forefront over InfiniBand for HPC, and accelerating the development of smarter NICs and HBAs to support AI workflows.

A significant shift observed was the increasing emphasis on AI inferencing over training. This pivot indicates the practical application of AI in real-world scenarios, with enterprises actively deploying AI solutions. However, delegates recognized that building inference is not the final stage; it requires sophisticated application delivery and load balancing that, while familiar in concept, now demands context switching based on specific AI prompts or models. Parallels were drawn to historical architectural migrations, suggesting that AI is reaching a maturity where it’s integrated into applications for mainstream business value, moving away from being a “solution in search of a problem.” This evolution also sees a mix of large language models for general tasks and specialized, smaller language models (SLMs) for specific business applications, as exemplified by Forward Networks’ approach to distribute intelligence.

The discussion also touched on the critical role of human oversight and trust in AI systems, particularly in regulated environments, likening it to the gradual adoption of automation seen in systems such as VMware vSphere’s Distributed Resource Scheduler (DRS). While AI is undeniably accelerating the scale and speed of innovation in networking and storage, some elements resonate with “everything old is new again,” as past concepts like offload engines and advanced storage architectures are being repurposed at an unprecedented scale. There was a debate on whether AI truly drives innovation or simply provides a compelling use case for existing “cool tech” that previously lacked widespread application. Looking ahead, AI is poised to become the “killer app for the edge,” driven by the high cost and time required to move large datasets, pushing processing closer to data generation. This necessitates new infrastructure designs for smaller, distributed AI clusters, creating opportunities for greenfield builds and challenging architects to bridge the gap between massive data center deployments and efficient, localized AI.


Forward AI – Security Vulnerability Management with Forward Networks

Event: AI Infrastructure Field Day 4

Appearance: Forward Networks Presents at AI Infrastructure Field Day

Company: Forward Networks

Personnel: Nikhil Handigol

The presentation highlighted a security vulnerability management use case that demonstrated a unique way to access Forward AI via Slack. In a common scenario, a CISO asked via Slack which devices were affected by a specific CVE. Forward AI, acting as an agent within the Slack channel, was prompted to investigate. It gathered vulnerability details and responded directly in Slack, identifying affected devices and providing a link to further evidence and details within the Forward Networks platform. The speaker addressed security concerns about Slack integration, emphasizing that specific integrations and channel restrictions are in place to ensure secure communication.

Beyond this demonstration, Forward AI aims to lower the barrier to network understanding by enabling users to ask questions in plain English rather than requiring them to learn complex, network-specific languages. It supercharges efficiency through an agentic architecture that can plan and execute dynamic, multi-step workflows, coordinating actions across multiple systems like ServiceNow and Slack. This capability instantly up-levels teams, enabling non-experts to solve complex network problems using state-of-the-art AI. The foundation of Forward AI’s effectiveness lies in combining the broad general capabilities of modern large language models with the deep, specific knowledge derived from Forward Networks’ mathematically accurate digital twin, which overcomes the challenge of applying AI directly to overwhelmingly complex raw network data.

Looking ahead, Forward AI, built on this robust digital twin, is designed to evolve into an agentic system that can interact with other external systems via the Model Context Protocol (MCP), fostering a thriving ecosystem of interacting agents. The core philosophy underpinning these agentic operations is trust, especially given the critical nature of network infrastructure. While striving for speed and efficiency, the current approach for Forward AI is to guide operators and provide deep insights, avoiding direct network changes to ensure safety and prevent unintended disruptions. The digital twin remains the essential foundation for enabling these trusted agentic operations, delivering measurable ROI.


Forward AI – Config Audit and Compliance with Forward Networks

Event: AI Infrastructure Field Day 4

Appearance: Forward Networks Presents at AI Infrastructure Field Day

Company: Forward Networks

Personnel: Nikhil Handigol

Forward AI aims to revolutionize network configuration audit and compliance, particularly for organizations in regulated industries that dread annual audits. These audits are typically manual, time-consuming, and error-prone, and carry a significant risk of penalties. The traditional approach involves painstakingly listing all devices, understanding vendor-specific configuration syntaxes for each operating system, extracting data, correlating it with standards, and generating reports – a task that can span days and requires specialized expertise across multiple vendors. This complexity underscores the critical need for automation and simplification to achieve and demonstrate compliance.

Forward Networks addresses this challenge with Forward AI, allowing users to express their audit goals in natural language, such as validating consistent NTP server configurations across all devices. An agentic system then kicks in, generating a precise query to extract relevant configuration data from Forward Networks’ normalized data model, which contains configurations from all network devices. Crucially, Forward AI understands the nuances of multi-vendor environments and automatically generates platform-specific configuration templates for devices from Cisco, Arista, Juniper, Fortinet, and Palo Alto, enabling accurate interpretation and analysis of NTP settings.
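
The final comparison step is simple once configurations are normalized. A minimal Python sketch of an NTP compliance check over a vendor-neutral device model; the device records and the audit standard are invented for illustration:

```python
STANDARD_NTP = {"10.1.1.1", "10.1.1.2"}  # the audit standard (illustrative)

# Records as they might come back from a normalized, vendor-neutral model;
# the per-vendor parsing is exactly what the platform abstracts away.
devices = [
    {"name": "core-sw-01", "vendor": "arista",   "ntp": {"10.1.1.1", "10.1.1.2"}},
    {"name": "edge-fw-07", "vendor": "paloalto", "ntp": {"10.9.9.9"}},
]

for d in devices:
    missing = STANDARD_NTP - d["ntp"]
    status = "COMPLIANT" if not missing else f"missing {sorted(missing)}"
    print(f"{d['name']:<12} {status}")
```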

This automated process swiftly assesses hundreds of devices, providing a comprehensive report detailing device names, types, configured NTP servers, and compliance status against the specified standard. For instance, the demo showed an audit of 124 devices completed rapidly, identifying discrepancies and highlighting specific device classes where the target NTP server was absent. This not only streamlines the audit process but also provides solid, verifiable evidence for compliance resolution, dramatically reducing manual effort, improving accuracy, and ensuring organizations can efficiently meet their regulatory obligations.


Forward AI Demo – Risk Mitigation and Security with Forward Networks

Event: AI Infrastructure Field Day 4

Appearance: Forward Networks Presents at AI Infrastructure Field Day

Company: Forward Networks

Personnel: Nikhil Handigol

The presentation by Forward Networks demonstrated how their Forward AI platform addresses the critical security challenge of mitigating risks posed by vulnerable hosts, specifically a host named `batch 01` with unpatchable critical vulnerabilities. Traditionally, blocking internet access for such a host involves a laborious, hop-by-hop network analysis to identify firewalls and their configurations, a process that is time-consuming, prone to errors, and difficult to scale across multiple vulnerable devices. Failure to implement these blocks correctly could leave the network exposed, underscoring the need for an automated, reliable solution.

Forward AI streamlines this process significantly. Upon receiving a natural-language query such as “What firewalls do I have to block in order to remove access to the internet for host batch 01?”, the system first gathers context about the host’s vulnerabilities. It then performs a comprehensive path trace from the vulnerable host’s IP address to the entire internet (`0.0.0.0/0`), identifying all egress paths. The AI pinpoints the specific firewall (e.g., `SJC building one FW01`) and the exact access control rule currently permitting the traffic. It then provides verifiable evidence of these findings, such as showing multiple potential paths and the specific rule, and subsequently suggests precise CLI commands to implement a block, typically by modifying or adding a rule to deny traffic from the vulnerable host, thus offering a critical head start in rapid risk mitigation.
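
A toy version of the egress-path analysis can be written in a few lines. The path and rule structures below are invented for illustration; the real analysis runs against the digital twin’s complete forwarding and ACL state:

```python
# Egress paths as hop lists; structures are invented for illustration.
paths = [
    [{"device": "sjc-bldg1-fw01", "rule": "permit host 10.2.3.4 any", "action": "permit"},
     {"device": "edge-rtr-01", "rule": None, "action": "forward"}],
]

# Collect every firewall rule that currently permits the host outbound;
# each (device, rule) pair is a candidate for a targeted deny.
to_block = {(hop["device"], hop["rule"])
            for path in paths
            for hop in path
            if hop["action"] == "permit" and hop["rule"]}

for device, rule in sorted(to_block):
    print(f"{device}: add a deny above {rule!r}")
```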

The underlying AI architecture uses state-of-the-art, off-the-shelf Large Language Models (LLMs) from providers such as Anthropic (Sonnet and Haiku models via AWS Bedrock) for natural language understanding and task planning. Crucially, these LLMs are not custom-trained or fine-tuned with proprietary networking data. Instead, deep network analysis, the network’s digital twin, and the “guardrails” that ensure the AI’s suggestions are relevant, accurate, and actionable within the network context reside within the Forward Networks platform’s agent. This modular design allows customers to plug in their own hosted LLMs while relying on Forward Networks for authoritative network intelligence and protective logic.


Forward AI Demo Troubleshooting Network Operations with Forward Networks

Event: AI Infrastructure Field Day 4

Appearance: Forward Networks Presents at AI Infrastructure Field Day

Company: Forward Networks

Personnel: Nikhil Handigol

Nikhil Handigol’s presentation showcases how Forward AI revolutionizes network operations troubleshooting. Before diving into the AI capabilities, Handigol provided a concise tour of the foundational Forward Enterprise platform. This robust, deterministic software connects to all network devices across hybrid multi-cloud and multi-vendor environments to create detailed, point-in-time snapshots of network configuration and behavior. The platform offers various analytical views, including graphical topology, inventory dashboards, vulnerability assessments, blast radius analysis, and precise path tracing. A key component is the Network Query Engine (NQE), which transforms raw configuration data into a normalized, hierarchical data model and supports queries via a SQL-like language, enabling users to extract specific network insights and verify compliance against predefined checks, triggering alerts when discrepancies arise.

The core demonstration focused on how Forward AI, as a conversational interface, streamlines resolving common network connectivity issues. By ingesting a service ticket describing a host’s inability to reach a database server over SSH, the AI agent dynamically constructs and executes a diagnostic plan. This plan involves gathering context about the involved hosts and performing a precise path trace through the network’s digital twin. In the scenario presented, Forward AI swiftly identified the issue: SSH traffic was blocked by a specific firewall due to an explicit Access Control List (ACL) deny rule. Crucially, the system provides a clear, “bottom line up front” diagnosis, supported by detailed explanations of the blocking device, the rule, and the full traffic path, all substantiated with direct links to the relevant “evidence” views within the Forward application, enhancing transparency and user trust.

Extending its utility, Forward AI can also generate proposed Command Line Interface (CLI) commands as a starting point for resolving identified issues, such as creating a new firewall security policy. Nikhil strongly emphasized that these generated fixes are for planning purposes only and require human validation and adherence to established operational change procedures, underscoring that the system does not autonomously execute changes. Discussions highlighted essential guardrails, including the AI’s ability to reject unanswerable requests and the enforcement of Role-Based Access Control (RBAC) to restrict data access and command generation based on user permissions. While a feedback mechanism (thumbs up/down) is in place to gather user input for continuous improvement, future iterations may incorporate business policies into AI recommendations and develop simulation capabilities within the digital twin before deploying changes to production, further building trust and enhancing automation.

