Introduction to Nile NaaS for Strengthening Enterprise Security

Event: Security Field Day 14

Appearance: Nile Presents at Security Field Day 14

Company: Nile

Video Links:

Personnel: Shashi Kiran

Nile’s mission is to be the “easy button” for network and security in on-premises deployments. The company was founded by networking industry veterans, including former Cisco executives John Chambers and Pankaj Patel, to address the complexity of enterprise LAN environments. Nile has pioneered a new architectural approach, backed by numerous patents, that has led to its recognition as a Visionary in the Gartner Magic Quadrant for Enterprise Wired and Wireless LAN Infrastructure. The Nile service is deployed globally across various verticals, powering large-scale environments such as a 12 million square-foot warehouse and concurrently supporting over 200,000 users.

In his presentation, Shashi Kiran argues that while the data center and the Wide Area Network (WAN) have seen significant security advancements through unification and automation, the Local Area Network (LAN) has been largely neglected. This is a critical vulnerability, as the LAN is where the most users and a growing number of insecure IoT/OT devices reside, creating the enterprise’s largest attack surface. Kiran identifies a “perfect storm” driving the need for change: return-to-office mandates increasing LAN usage, aging infrastructure from pandemic-deferred refreshes, and IT teams facing resource constraints. He describes the current state of LAN security as a complex stack of point solutions, or “corporate spaghetti,” which makes adopting modern principles like Zero Trust nearly impossible due to operational complexity.

To solve this, Nile proposes a fundamental architectural shift rather than adding another point product. The solution is a Network-as-a-Service (NaaS) model built on three core principles. First, the foundation is a unified Zero Trust fabric that natively integrates wired and wireless networks, IT and OT security, and policy enforcement. Second, the service is managed through an AI-powered cloud that provides autonomous operations, reducing human error and simplifying lifecycle management. Third, Nile delivers the entire stack as a service with a predictable OpEx model, eliminating large capital expenditures. Together, these principles make the LAN a first-class citizen of enterprise security and simplify challenges like guest access, compliance, and microsegmentation.


Growing Government and Industry Adoption of Protective DNS with Infoblox

Event: Security Field Day 14

Appearance: Infoblox Presents at Security Field Day 14

Company: Infoblox

Video Links:

Personnel: Krupa Srivatsan

Protective DNS is rapidly emerging as a trusted layer of defense across industries. Governments, regulators, and enterprises alike are embracing it as a scalable, proactive way to strengthen security posture. Around the world, governments are looking to adopt Protective DNS to safeguard citizens, while updates to NIST SP 800-81 highlight DNS as a foundational control that can stop threats earlier than other systems—supporting Zero Trust and cyber-resiliency strategies. Industry leaders are also moving fast: Microsoft is embracing Zero Trust DNS to protect devices, and Google Cloud DNS Armor applies DNS-based threat detection to natively secure cloud workloads. Speaker Krupa Srivatsan highlighted this growing adoption by citing a key statistic from a former NSA director stating that 92% of cyberattacks use DNS at some point. She provided several examples of governments implementing national Protective DNS (PDNS) services, including CISA in the U.S. for federal agencies, the U.K. for its public and emergency services, and Australia for its public sector. A notable use case is Ukraine, which deployed a national PDNS service that resulted in a 30-40% reduction in reported financial phishing fraud against its citizens amidst the ongoing conflict.

Srivatsan then discussed the influence of regulatory bodies, focusing on the forthcoming NIST Special Publication 800-81, which centers on DNS security. This guidance is built on three pillars: using Protective DNS to block malicious activity, ensuring DNS hygiene and encryption (like DNSSEC and DNS over HTTPS) to prevent spoofing, and hardening DNS servers against denial-of-service attacks. She connected these principles to the Zero Trust framework, arguing that organizations cannot claim to follow Zero Trust if they implicitly trust their DNS resolver. A true Zero Trust architecture requires not only PDNS and encryption but also a comprehensive asset inventory—a capability inherent to DDI platforms—to apply granular, device-aware security policies.
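
To make the encryption pillar concrete, the sketch below performs a DNS-over-HTTPS lookup in Python. It uses Google’s public DoH JSON endpoint purely as an illustrative resolver; an enterprise Protective DNS deployment would point clients at its own resolver, which would also return block verdicts for known-malicious names.

```python
# Minimal illustration of an encrypted (DNS-over-HTTPS) lookup, one of the
# hygiene controls discussed around NIST SP 800-81. Uses Google's public DoH
# JSON endpoint as an example resolver; a Protective DNS service would
# additionally return a block verdict for known-malicious names.
import requests

def doh_lookup(name: str, rtype: str = "A") -> list[str]:
    resp = requests.get(
        "https://dns.google/resolve",
        params={"name": name, "type": rtype},
        timeout=5,
    )
    resp.raise_for_status()
    body = resp.json()
    # "Answer" is absent when the name does not resolve (e.g. NXDOMAIN).
    return [rr["data"] for rr in body.get("Answer", [])]

if __name__ == "__main__":
    print(doh_lookup("example.com"))
```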

Finally, she detailed significant adoption by industry leaders. Microsoft’s new Zero Trust DNS feature for Windows 11, for example, will lock down the operating system to only resolve queries through an approved PDNS provider, effectively blocking resolutions to unauthorized domains and hardcoded IP addresses. Similarly, the Google Cloud DNS Armor service natively integrates Infoblox’s threat detection engine directly into the Google Cloud console. In its initial version, the service analyzes DNS logs to detect threats and reports them to Google’s security tools, providing preemptive security for cloud workloads without requiring customers to deploy a separate solution. These initiatives by Microsoft and Google signal a major industry shift towards embedding Protective DNS as a foundational security control.


Infoblox Threat Intelligence (ITI) with Dave Mitchell

Event: Security Field Day 14

Appearance: Infoblox Presents at Security Field Day 14

Company: Infoblox

Video Links:

Personnel: Dave Mitchell

Dave Mitchell will introduce the Infoblox Threat Intelligence (ITI) team, highlighting its specialized focus and unique capabilities in DNS-based security. He’ll explore the evolving threat landscape, sharing insights into emerging attack vectors and adversary tactics. The session will demonstrate how Infoblox’s deep expertise in DNS enables superior threat detection and protection. Attendees will gain a clear understanding of what sets Infoblox apart in the cybersecurity ecosystem. As a “recovering operator,” Mitchell explained that his team’s sole focus is DNS, a namespace so vast that it offers attackers near-infinite room to operate. He emphasized that Infoblox’s intelligence is entirely original and not repackaged from other sources. Their process involves a reputation system where algorithms analyze newly registered domains, clustering suspicious ones based on shared attributes like registration patterns and name server behavior. Human researchers then investigate these clusters to identify, name, and track threat actors, building robust signatures that can follow adversaries even as they adapt their tactics. This proactive approach results in a “low regret” security posture, blocking domains that users have no legitimate reason to visit.
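
As a rough illustration of the clustering idea Mitchell describes (not Infoblox’s actual algorithm), the Python sketch below groups newly registered domains by shared registration attributes and surfaces unusually large clusters for analyst review; all field names and thresholds are hypothetical.

```python
# Toy sketch of the clustering idea described above: group newly registered
# domains by shared registration attributes (registrar, name server, creation
# date) and flag unusually large clusters for human review. The field names
# and threshold are hypothetical; the production system is far richer.
from collections import defaultdict

new_domains = [
    {"name": "login-verify-001.example", "registrar": "RegistrarX",
     "nameserver": "ns1.bulk-host.example", "created": "2025-09-01"},
    {"name": "login-verify-002.example", "registrar": "RegistrarX",
     "nameserver": "ns1.bulk-host.example", "created": "2025-09-01"},
    {"name": "innocuous-blog.example", "registrar": "RegistrarY",
     "nameserver": "ns.smallhost.example", "created": "2025-09-01"},
]

def cluster(domains, min_size=2):
    groups = defaultdict(list)
    for d in domains:
        key = (d["registrar"], d["nameserver"], d["created"])
        groups[key].append(d["name"])
    # Clusters of look-alike registrations become candidates for analyst review.
    return {k: v for k, v in groups.items() if len(v) >= min_size}

print(cluster(new_domains))
```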

This DNS-centric intelligence allows Infoblox to provide “protection before impact.” Mitchell shared that over a recent 90-day period, their system already contained 75% of malicious domains before a single customer query was ever made to them. This is possible because the team observes threat actor infrastructure as it’s being built. A significant portion of the presentation focused on the growing threat of malicious advertising technology (“malvertising”). He detailed how threat actors operate sophisticated Traffic Distribution Systems (TDS) that function like legitimate ad-tech platforms but serve malicious content. These systems use cloaking techniques to profile visitors, redirecting them to scams, info-stealers, or fake software updates only if they match specific criteria, while sending researchers or bots to harmless decoy sites like Google or Alibaba.

Mitchell provided a deep dive into the malvertising ecosystem, illustrating how criminal affiliate networks push everything from cryptocurrency and dating scams to dangerous malware such as SocGholish, a fake browser update loader. He highlighted a major threat actor his team has been tracking called VexTrio (also known as “Los Pollos”), a sophisticated cartel that runs a massive TDS operation. Beyond malvertising, he also touched on the persistent problem of lookalike domains, which brands cannot practically pre-register across all 1,300+ top-level domains, and an advanced command-and-control technique in which compromised websites use DNS TXT records to covertly fetch and decode malicious redirect URLs. These examples underscore the complexity of modern threats and the critical role of specialized Protective DNS in disrupting the attack chain.
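
The TXT-record technique is easy to illustrate from the defender’s side. The hedged sketch below uses the dnspython library to pull a domain’s TXT records and flag long base64-decodable payloads, the kind of encoded content worth scrutinizing; it is a heuristic for illustration only, not Infoblox’s detection logic.

```python
# Defensive sketch of the TXT-record technique mentioned above: pull a
# domain's TXT records and flag long, decodable base64 payloads that could
# be smuggling redirect URLs. Heuristic only; not Infoblox's detection logic.
# Requires the dnspython package.
import base64
import dns.resolver

def suspicious_txt_records(domain: str, min_len: int = 40):
    findings = []
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return findings
    for rdata in answers:
        txt = b"".join(rdata.strings).decode("ascii", errors="ignore")
        if len(txt) < min_len:
            continue
        try:
            decoded = base64.b64decode(txt, validate=True)
            findings.append((txt, decoded[:60]))
        except ValueError:
            pass  # not valid base64, ignore
    return findings

print(suspicious_txt_records("example.com"))
```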


A Live Demo of Infoblox Threat Defense

Event: Security Field Day 14

Appearance: Infoblox Presents at Security Field Day 14

Company: Infoblox

Video Links:

Personnel: Kevin Zettel

This hands-on session follows the earlier briefings and goes straight into the Infoblox Security Portal. We’ll trace malicious activity from first DNS lookup to automated enforcement, show how verdicts are backed by Infoblox Threat Intelligence, and walk through incident triage and policy tuning. Expect practical coverage of policy creation, exception handling, and integrations that extend protection across endpoint, network, and cloud. You’ll leave with a clear view of day-to-day operations and the metrics that matter. Speaker Kevin Zettel began the demonstration by outlining the five flexible deployment options for Infoblox’s threat defense solution. These include a lightweight endpoint agent for rich user attribution, physical or virtual NIOS appliances, NIOS as a service with IPsec tunnels for cloud and SASE environments, and a simple external resolver configuration. Zettel emphasized that these methods can be mixed and matched, and even without an endpoint agent, the system uses Universal Asset Insights to enrich data, providing crucial context like the specific device, user, and MAC address for every DNS query. He also confirmed that Infoblox provides comprehensive threat feeds for IPs, URLs, and hashes that can be exported to firewalls to counter adversaries who might pivot away from DNS.

Transitioning to the live portal, Zettel showcased the main dashboard, which provides immediate KPIs on the security of the DNS infrastructure. He highlighted the value of “predictive intelligence” and a key metric called “first to detect,” which demonstrates to customers that Infoblox knew about malicious domains on average several weeks before an employee ever clicked on them. The portal offers a detailed, asset-centric view, allowing security teams to identify at-risk devices, trace their entire IP address history across the network, and review all associated security and policy violations. This capability is critical for incident triage, enabling an analyst to quickly understand the scope of an infection and identify other potentially compromised systems by seeing everywhere a device has been.

To demonstrate how security verdicts are backed by intelligence, Zettel navigated to the threat intelligence section, which shows customers which specific threat actor “cartels” are active in their environment and the exact malicious domains their users have accessed. To make the massive volume of DNS data actionable for security operations (SOC) teams, he introduced an AI-powered feature called “Insights,” which automatically correlates millions of individual events into a handful of manageable incidents. For deeper investigation and policy tuning, the integrated “Dossier” research tool allows an analyst to click any indicator (domain, IP, etc.) and receive a consolidated report from over twenty different tools, providing the full context needed to validate a threat and make informed policy decisions.
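
The following simplified Python sketch illustrates the general event-to-incident rollup idea, collapsing many per-query events into a few incidents keyed by asset and threat; the field names are hypothetical, and Infoblox’s Insights feature applies far richer correlation than this.

```python
# Simplified illustration of the event-to-incident rollup idea: collapse a
# stream of per-query DNS security events into incidents keyed by the asset
# and the threat it touched. Field names are hypothetical stand-ins.
from collections import defaultdict

events = [
    {"asset": "laptop-042", "indicator": "bad-domain.example", "threat": "lookalike"},
    {"asset": "laptop-042", "indicator": "bad-domain.example", "threat": "lookalike"},
    {"asset": "printer-07", "indicator": "c2.tds.example", "threat": "malvertising TDS"},
]

def to_incidents(raw_events):
    incidents = defaultdict(lambda: {"count": 0, "indicators": set()})
    for e in raw_events:
        key = (e["asset"], e["threat"])
        incidents[key]["count"] += 1
        incidents[key]["indicators"].add(e["indicator"])
    return incidents

for (asset, threat), detail in to_incidents(events).items():
    print(f"{asset}: {threat} ({detail['count']} events, "
          f"{len(detail['indicators'])} indicators)")
```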


The Ten Year Protective DNS Journey with Infoblox

Event: Security Field Day 14

Appearance: Infoblox Presents at Security Field Day 14

Company: Infoblox

Video Links:

Personnel: Mukesh Gupta

DNS is no longer just infrastructure — it is the frontline of preemptive security. This session highlights Infoblox’s decade-long journey in shaping DNS security, with Protective DNS at the center of defending users against evolving threats. Attendees will see why DNS is uniquely positioned to stop attacks before they spread and how DDI integration delivers powerful visibility, automation, and protection. Speaker Mukesh Gupta detailed Infoblox’s evolution from an enterprise appliance company known for DDI (DNS, DHCP, and IPAM) to a security-focused organization. He explained that as enterprises adopted multiple cloud platforms, they ended up with siloed DNS systems (e.g., on-prem, AWS Route 53, Azure DNS), leading to complexity and outages. Infoblox addressed this by creating “Universal DDI,” a platform that provides a single management layer for all of a customer’s disparate DNS services, whether they are on-premises or in the cloud, and offers a true SaaS-based option for DDI services.

Gupta emphasized that DNS is the first point of detection for nearly all types of cyberattacks—from phishing and malware to data exfiltration—because a DNS query always precedes the malicious action. Blocking threats at this initial DNS layer is highly effective, protecting all devices on the network without deploying new agents and significantly reducing the load on other security tools like firewalls and XDRs. Infoblox’s unique approach, developed by a former NSA expert, focuses on tracking the cybercriminal “cartels” rather than individual attacks. Instead of chasing millions of malicious domains (the “drug dealers”), Infoblox identifies and monitors the infrastructure of organizations like “Prolific Puma” (a malicious URL shortening service) or “VainWiper” (a malicious traffic distribution system) that service thousands of attackers. This “cartel”-focused strategy provides a significant strategic advantage.
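
The core mechanic Gupta describes, blocking at the DNS layer before any connection is made, can be sketched in a few lines. The toy resolver below checks a queried name and its parent domains against a blocklist before resolving; a real Protective DNS service does this at the resolver with curated, continuously updated intelligence rather than a static set.

```python
# Minimal sketch of DNS-layer blocking: before resolving a name, check it and
# its parent domains against a blocklist. Domains here are placeholders.
import socket

BLOCKLIST = {"malicious-shortener.example", "malicious-tds.example"}

def resolve_or_block(hostname):
    labels = hostname.lower().rstrip(".").split(".")
    # Walk promo.bad.tld -> bad.tld looking for a blocklisted suffix.
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in BLOCKLIST:
            print(f"blocked: {hostname}")
            return None
    return socket.gethostbyname(hostname)

print(resolve_or_block("example.com"))
print(resolve_or_block("promo.malicious-tds.example"))
```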

The primary benefits of this unique approach are a massive lead time and incredible accuracy. Infoblox can identify malicious domains an average of 68 days before they are used in a campaign, often right after the cartel registers them, allowing for preemptive blocking without waiting for a “patient zero.” This methodology also results in an extremely low false positive rate (0.0002%). Gupta argued that integrating this protection directly into the DDI platform is more operationally efficient, as it prevents finger-pointing between network and security teams when a domain is blocked. Infoblox is now extending this protection to cloud workloads, either by having customers point their cloud DNS to Infoblox’s service or through native integrations, such as the new Google Cloud DNS Armor service, which is powered by Infoblox’s threat intelligence technology.


HPE SD-WAN Gateways & Advanced Services

Event: Security Field Day 14

Appearance: HPE Presents at Security Field Day 14

Company: HPE

Video Links:

Personnel: Adam Fuoss, Nirmal Rajarathnam

Explore how the HPE secure SD-WAN portfolio helps protect branch locations against cyberthreats while embracing the flexibility of cloud-first architectures. Discover how the new HPE Networking Application Intelligence Engine (AppEngine) strengthens security with real-time defense, leveraging aggregated application security insights such as risk, reputation, vulnerability, and compliance.

In this session, HPE introduced its newly combined SD-WAN portfolio, which includes Aruba SD-Branch, EdgeConnect (formerly Silver Peak), and the Juniper Session Smart Router. The presentation focused on a key security challenge in branch networks: the lateral movement of threats once a bad actor gains entry. Presenters argued that while identity-based segmentation was an improvement over static VLANs, it is insufficient without a deep understanding of the applications traversing the network. To address this gap, HPE unveiled its Application Intelligence Engine (AppEngine), a new service running within the Aruba Central management platform. The engine’s primary goal is to provide a comprehensive application posture, enabling more effective dynamic segmentation to protect against internal threats.

The AppEngine works by ingesting, correlating, and normalizing application data from multiple sources, such as deep packet inspection (DPI) and URL filtering, into a single, unified application catalog. This process creates a rich, contextual profile for each application, complete with security scores, known vulnerabilities, compliance data, and encryption details. From the central dashboard, an administrator can define global, role-based security policies based on this application intelligence. The AppEngine then automatically distributes the appropriate signatures and policies to the relevant enforcement points, like gateways or access points. The demonstration showcased an administrator identifying high-risk applications and creating a policy to block them for specific user roles during business hours, all without touching individual device configurations. Currently, this functionality is available for the SD-Branch solution managed by Aruba Central, with plans to extend its capabilities across the broader portfolio in the future.
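
As a purely hypothetical sketch of the normalize-and-correlate step (none of these field names or structures come from HPE’s AppEngine), the Python below merges per-source application records into one catalog entry with a risk score and derives a role-based block policy from it.

```python
# Hypothetical sketch of the normalize-and-correlate step described above:
# merge per-source application records (DPI, URL filtering) into one catalog
# entry with a risk score, then derive a role-based policy from it. The
# schema is invented for illustration and is not HPE's.
def normalize(dpi_record, url_record):
    return {
        "app": dpi_record["app"],
        "category": url_record.get("category", "unknown"),
        "risk": max(dpi_record.get("risk", 0), url_record.get("reputation_risk", 0)),
        "encrypted": dpi_record.get("tls", False),
    }

def block_policy(catalog_entry, roles, risk_threshold=7):
    if catalog_entry["risk"] < risk_threshold:
        return None
    return {
        "action": "deny",
        "app": catalog_entry["app"],
        "roles": roles,
        "schedule": "business-hours",
    }

entry = normalize(
    {"app": "filesharer", "risk": 8, "tls": True},
    {"category": "file-sharing", "reputation_risk": 9},
)
print(block_policy(entry, roles=["contractor", "guest"]))
```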


HPE SRX Series Next-Generation Firewalls & Threat Prevention

Event: Security Field Day 14

Appearance: HPE Presents at Security Field Day 14

Company: HPE

Video Links:

Personnel: Kedar Dhuru, Mounir Hahad, Pradeep Hattiangadi

Discover how the SRX firewall portfolio secures networks of any size. We’ll dive into AI-Predictive Threat Prevention (AI-PTP), which neutralizes zero-day attacks with a proxy-less, real-time, on-device AI engine. We’ll also cover how a Machine Learning detection pipeline continuously provides automatically generated signatures for emerging threats, delivering stronger security without compromising firewall performance.

The session outlines a security philosophy focused on making security easier to operationalize, from the user edge to the data center. The speakers explain that with the rise of device proliferation, distributed applications, and Gen AI, the threat landscape has become more complex. HPE’s approach is to use a comprehensive threat detection pipeline, heavily leveraging AI and machine learning, directly on their SRX firewalls. This strategy aims for a high detection rate and a very low false positive rate without sacrificing performance. The core of the presentation centers on a feature called AI-Predictive Threat Prevention (AI-PTP), which represents a shift from traditional reactive, signature-based models to a proactive approach for identifying both known and zero-day malware.

The AI-PTP system operates using a two-stage process. First, machine learning models are trained in HPE’s ATP Cloud using vast datasets of malicious and benign files. These trained models are then deployed to the SRX firewalls, where the “inference” or detection happens directly on the device. A key differentiator is its inline, proxy-less architecture, which analyzes just the initial portion of a file as it’s being downloaded to quickly determine if it’s malicious. This allows the firewall to block threats in real-time. This on-box capability is part of a defense-in-depth strategy, augmented by cloud-based analysis, including multiple sandboxing methods. During the demonstration and Q&A, it was clarified that this process has a negligible performance impact, can update threat signatures across all customers in minutes, and can automatically place an infected host on a blocklist that is shared across the entire HPE security ecosystem, including NAC and switching solutions.
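
The two-stage pattern, train centrally and run lightweight inference on only the first bytes of a file inline, can be sketched generically. The toy Python below trains a byte-histogram classifier and then issues a verdict on the first chunk of a download; the features, model, and threshold are illustrative stand-ins and bear no relation to HPE’s actual AI-PTP models.

```python
# Toy sketch of the two-stage pattern described above: train a file classifier
# offline (stage 1, "in the cloud"), then score only the first bytes of a file
# as they stream through (stage 2, "on the firewall"). Everything here is a
# stand-in, trained on random data purely to show the flow.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CHUNK = 4096  # only the initial portion of the file is examined inline

def byte_histogram(data: bytes) -> np.ndarray:
    counts = np.bincount(np.frombuffer(data[:CHUNK], dtype=np.uint8), minlength=256)
    return counts / max(len(data[:CHUNK]), 1)

# Stage 1: offline training on labeled samples (here, random stand-in data).
rng = np.random.default_rng(0)
benign = [rng.integers(0, 128, CHUNK, dtype=np.uint8).tobytes() for _ in range(50)]
malicious = [rng.integers(0, 256, CHUNK, dtype=np.uint8).tobytes() for _ in range(50)]
X = np.array([byte_histogram(b) for b in benign + malicious])
y = np.array([0] * len(benign) + [1] * len(malicious))
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Stage 2: inline verdict on the first chunk of a new download.
def verdict(first_chunk: bytes) -> str:
    score = model.predict_proba([byte_histogram(first_chunk)])[0][1]
    return "block" if score > 0.8 else "allow"

print(verdict(rng.integers(0, 256, CHUNK, dtype=np.uint8).tobytes()))
```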


HPE Networking Security Overview with Madani Adjali

Event: Security Field Day 14

Appearance: HPE Presents at Security Field Day 14

Company: HPE

Video Links:

Personnel: Madani Adjali

This presentation marks a significant moment for HPE, as it’s the first time Aruba Networks, now part of HPE, has presented at Security Field Day since 2018. The recent acquisition of Juniper Networks has further expanded HPE’s security portfolio, leading to the formation of HPE Networking. The presenter, Madani Adjali, highlights the historical context of both Aruba and Juniper’s past presentations at the event, expressing a desire for more frequent participation in the future. The newly formed HPE Networking is structured into several groups, including campus and branch, data center, and WAN, with this presentation focusing specifically on the SASE and security pillar.

The core of the presentation will delve into two main areas: new capabilities within Aruba Central related to application intelligence and advancements in the firewall side of the portfolio, leveraging the SRX platform. The SASE and security pillar, led by Adjali, encompasses a wide range of products, including network access control, SD-WAN, SASE, and firewalls. The audience is given a high-level overview of the comprehensive security offerings now available through HPE, which range from various SD-WAN solutions to a full suite of firewalls, ZTNA, SWG, and CASB. The presenter also mentions ClearPass Policy Manager, a network access control product demonstrated back in 2018, and its new cloud-oriented capabilities.

The presentation aims to be an interactive session, with a team of experts on hand to provide in-depth information and answer questions. The goal is to showcase the power and breadth of the new HPE Networking security portfolio. The speaker emphasizes the significance of this moment for the company, following the recent completion of the Juniper Networks acquisition. The presentation will feature deep dives into the technical aspects of the new security capabilities, with a particular focus on the integration of AI and predictive technologies to enhance threat prevention and application intelligence. The session promises to be informative for anyone interested in the future of network security and the combined strengths of HPE and Juniper Networks.


ZEDEDA Edge AI – Object Recognition Use Case

Event:

Appearance: ZEDEDA Edge Field Day Showcase

Company: ZEDEDA

Video Links:

Personnel: Sérgio Santos

In this ZEDEDA Edge Field Day Showcase, Sérgio Santos, Account Solutions Architect, shows how ZEDEDA manages edge AI for a practical object recognition use case, specifically for computer vision. His presentation shows how to deploy a stack of three applications—an AI inference container, a Prometheus database, and a Grafana dashboard—using the Docker Compose runtime across a fleet of three devices, one equipped with a GPU and two without. The demo highlights the ability to deploy and manage applications at scale from a single control plane, leveraging ZEDEDA’s automated deployment policies. The process starts from a clean slate, moves through provisioning the edge nodes, and automatically pushes the application stack based on predefined policies, including GPU-specific logic.
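
For a sense of what such a three-application stack looks like, the hedged sketch below assembles a Docker Compose document in Python and adds a GPU reservation only when a GPU is detected. The image names are placeholders, and in ZEDEDA’s model this GPU-aware decision is expressed as a deployment policy in the controller rather than in a script like this.

```python
# Illustrative sketch of the stack in the demo (inference + Prometheus +
# Grafana) expressed as a Docker Compose document, with a GPU reservation
# added only when a GPU is present. Image names are placeholders.
import shutil
import yaml

def build_compose(has_gpu: bool) -> dict:
    inference = {"image": "example/object-recognition:1.0", "ports": ["8080:8080"]}
    if has_gpu:
        # Standard Compose syntax for requesting an NVIDIA GPU.
        inference["deploy"] = {"resources": {"reservations": {"devices": [
            {"driver": "nvidia", "count": 1, "capabilities": ["gpu"]}]}}}
    return {"services": {
        "inference": inference,
        "prometheus": {"image": "prom/prometheus:latest", "ports": ["9090:9090"]},
        "grafana": {"image": "grafana/grafana:latest", "ports": ["3000:3000"]},
    }}

if __name__ == "__main__":
    gpu_present = shutil.which("nvidia-smi") is not None
    print(yaml.safe_dump(build_compose(gpu_present), sort_keys=False))
```
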
A key part of the demonstration is the live update and rollback process. Santos shows how to remotely update the inference container to a new version and then roll it back to the original without restarting the runtime. This highlights ZEDEDA’s lightweight, efficient updates and the use of its Zix infrastructure to push configuration changes. The demo also shows the ability to monitor application logs and device metrics (CPU, memory, network traffic) from the central ZEDEDA controller, proving the platform’s comprehensive management capabilities. The session concludes by demonstrating how to easily wipe the entire application stack by simply moving the edge nodes to a different project.


Manage Edge AI Using ZEDEDA Kubernetes Service

Event:

Appearance: ZEDEDA Edge Field Day Showcase

Company: ZEDEDA

Video Links:

Personnel: Hariharasubramanian C. S.

In this Edge Field Day Showcase, ZEDEDA’s Distinguished Engineer, Hariharasubramanian C. S., discusses how ZEDEDA is tackling the growing importance and challenges of deploying AI at the edge. He highlights that factors like insufficient bandwidth, high latency, and data privacy concerns make it impractical to send all sensor data to the cloud for analysis. ZEDEDA’s solution is to bring AI to the edge, closer to the data source. This, however, introduces its own challenges, such as managing a wide range of hardware, ensuring autonomy in disconnected environments, and updating AI models at scale. Hari argues that Kubernetes, with its lightweight nature and robust ecosystem, is the ideal solution for packaging and managing complex AI pipelines at the edge.

This presentation demonstrates how ZEDEDA’s Kubernetes service simplifies the deployment of an Edge AI solution for car classification. Using a Helm chart, he shows how to deploy a multi-component application, including an OpenVINO inference server, a model-pulling sidecar, and a demo client application. The demo showcases how the ZEDEDA platform provides a unified control plane for zero-touch provisioning and lifecycle management of these components, all while keeping models in a private, on-premises network without exposing them to the cloud. He concludes by demonstrating the application’s real-time inference capabilities and encouraging developers to leverage ZEDEDA’s open-source repositories to build their own edge AI solutions.
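
As one hedged example of what the demo client might do, the sketch below sends an inference request to an OpenVINO Model Server over the KServe v2 REST API that OVMS exposes; the host, model name, tensor name, and input shape are placeholders rather than values from ZEDEDA’s actual Helm chart.

```python
# Minimal sketch of a demo client querying an OpenVINO Model Server over its
# KServe v2 REST API. The URL, model name, and input tensor name below are
# placeholders for the car-classification demo, not ZEDEDA's real values.
import numpy as np
import requests

OVMS_URL = "http://edge-node.local:8000/v2/models/car_classifier/infer"  # placeholder

def classify(image: np.ndarray) -> dict:
    payload = {"inputs": [{
        "name": "input",                 # model-specific input tensor name
        "shape": list(image.shape),
        "datatype": "FP32",
        "data": image.flatten().tolist(),
    }]}
    resp = requests.post(OVMS_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Example: a dummy 224x224 RGB frame in NCHW layout.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
print(classify(frame)["outputs"][0]["shape"])
```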


Understanding Containers at the Edge with ZEDEDA

Event:

Appearance: ZEDEDA Edge Field Day Showcase

Company: ZEDEDA

Video Links:

Personnel: Kristopher Clark, Manny Calero

In this Edge Field Day Showcase, ZEDEDA’s Consulting Solutions Architect, Manny Calero, demonstrates how the ZEDEDA platform addresses the diverse needs of edge computing workloads. While Kubernetes is ideal for large, complex, and distributed applications, Docker Compose is often a better fit for smaller, lightweight, and resource-constrained edge sites. The ZEDEDA platform’s key strength lies in its flexibility, allowing users to deploy both legacy VMs and modern containerized applications side-by-side on the same edge node. This provides a unified orchestration and management experience, offering a simple solution for a repeatable, scalable, and secure edge architecture. This presentation includes a demo of the ZEDEDA platform to deploy Docker Compose workloads to multiple edge nodes, highlighting features like zero-touch provisioning and API-driven automation with Terraform.

Solutions Architect Kris Clark presents the ZEDEDA Edge Kubernetes Service. While Kubernetes adds operational complexity, it is essential for highly scalable, distributed applications. Kris provides a brief overview of the Kubernetes service’s architecture, emphasizing its ease of use and its ability to integrate with familiar developer tools like kubectl and Git repositories. The demo shows how to quickly create a Kubernetes cluster and deploy applications from the ZEDEDA marketplace or from a custom Helm chart. This presentation concludes with a discussion about how the ZEDEDA platform provides a cohesive solution for both containerized and VM-based workloads, supporting enterprises in their digital transformation journey at the edge.


ZEDEDA Automated Orchestration for the Distributed Edge

Event:

Appearance: ZEDEDA Edge Field Day Showcase

Company: ZEDEDA

Video Links:

Personnel: Padraig Stapleton

In this Edge Field Day showcase, ZEDEDA’s Padraig Stapleton, SVP and Chief Product Officer, provides a comprehensive overview of ZEDEDA, its origins, and its vision for bringing the cloud experience to the unique and often hostile environment of the edge. The video highlights how ZEDEDA’s platform enables businesses to securely and scalably run their applications at the edge. The discussion covers how the platform addresses the complexities of diverse hardware, environments, and security challenges, allowing customers to focus on their core business applications.

This presentation also introduces the ZEDEDA edge computing platform for visibility, security, and control of edge hardware and applications. The presentation details a unique partnership with OnLogic to provide zero-touch provisioning and discusses various real-world use cases, including container shipping, global automotive manufacturing, and oil and gas.


Unlock AI Cloud Potential with the Rafay Platform

Event: AI Infrastructure Field Day 3

Appearance: Rafay presents at AI Infrastructure Field Day 3

Company: Rafay

Video Links:

Personnel: Haseeb Budhani

Haseeb Budhani, CEO of Rafay Systems, discusses how the Rafay platform can be used to address AI use cases. The platform provides a white-label ready portal that allows end users to self-service provision various compute resources and AI/ML platform services. This enables cloud providers and enterprises to offer services like Kubernetes, bare metal, GPU as a service, and NVIDIA NIM with a simple and standardized experience.

The Rafay platform leverages standardization, infrastructure-as-code (IaC) concepts, and GitOps pipelines to drive consumption for a large number of enterprises. Built on a Git engine for configuration management and capable of handling complex multi-tenancy requirements with integration to various identity providers, the platform allows customers to offer different services, compute functions, and form factors to their end customers through configurable, white-labeled catalogs. Additionally, the platform features a serverless layer for deploying custom code on Kubernetes or VM environments, enabling partners and customers to deliver a wide range of applications and services, from DataRobot to Jupyter notebooks, as part of their offerings.

Rafay addresses security concerns through SOC 2 Type 2 compliance for its SaaS product, providing pentest reports and agent reports for customer assurance. For larger customers, particularly cloud providers, an air-gapped product is offered, allowing them to deploy and manage the Rafay controller within their own secure environments. Furthermore, the platform’s unique Software Defined Perimeter (SDP) architecture enables it to manage Kubernetes clusters remotely, even on edge devices with limited connectivity, by establishing an inside-out connection and a proxy service for secure communication.
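
The “inside-out” pattern is generic enough to sketch: the agent at the edge dials out to the controller and then services requests over that single outbound connection, so no inbound ports have to be opened at the edge. The minimal Python below shows the shape of the idea only; it is not Rafay’s protocol, and the controller address is a placeholder.

```python
# Generic sketch of an "inside-out" (SDP-style) connection: the agent behind
# the firewall opens an outbound connection to the controller and then reads
# commands pushed down over it. Address and message format are placeholders.
import json
import socket

CONTROLLER = ("controller.example.com", 443)  # placeholder address

def run_agent():
    with socket.create_connection(CONTROLLER, timeout=10) as conn:
        conn.sendall(b'{"register": "edge-cluster-01"}\n')
        stream = conn.makefile("r")
        for line in stream:              # controller pushes commands down
            cmd = json.loads(line)
            print("received command:", cmd.get("action"))
            # ...proxy the request to the local Kubernetes API and stream back

if __name__ == "__main__":
    run_agent()
```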


From Infrastructure Chaos to Cloud-Like Control with Rafay

Event: AI Infrastructure Field Day 3

Appearance: Rafay presents at AI Infrastructure Field Day 3

Company: Rafay

Video Links:

Personnel: Haseeb Budhani

Rafay, founded seven years ago, initially focused on Kubernetes but has evolved to address the broader challenge of simplifying compute consumption across various environments. Their solution aims to provide self-service compute to companies across verticals.

Rafay typically engages with companies that already have existing infrastructure, automation, and deployments. The core problem they solve is standardization across diverse environments and users. They help companies build a platform engineering function that enables efficient management of environments, upgrades, and policies. The Rafay platform abstracts the underlying infrastructure, providing an interface for users to request and consume compute resources without needing to understand the complexities of the underlying systems.

Rafay’s platform allows organizations to deliver self-service compute across diverse environments and teams, managing identity, policies, and automation. The goal is to reduce the time developers waste on infrastructure tasks, which, according to Rafay, can be as high as 20% in large enterprises. They offer a comprehensive solution that encompasses inventory management, governance, and control, all while generating the underlying infrastructure as code for versioning and auditability. In summary, Rafay enables companies to move away from custom, in-house solutions to a standardized, automated, and cloud-like compute consumption model.


Bridging the gap from GPU-as-a-Service to AI Cloud with Rafay

Event: AI Infrastructure Field Day 3

Appearance: Rafay presents at AI Infrastructure Field Day 3

Company: Rafay

Video Links:

Personnel: Haseeb Budhani

Rafay CEO Haseeb Budhani argues that to truly be considered a cloud provider, organizations must offer self-service consumption, applications (or tools), and multi-tenancy. He contends that many GPU clouds currently rely on manual processes like spreadsheets and bare metal servers, which don’t qualify as true cloud solutions. Budhani emphasizes that users should be able to access a portal, create an account, and consume services on demand, without requiring backend intervention for tasks like VLAN setup or IP address management.

Budhani elaborates on his definition of multi-tenancy, outlining the technical requirements for supporting diverse customer needs. This includes secure VMs, operating system images with pre-installed tools, public IP addresses, firewall rules, and VPCs. He highlights the difference between customers needing a single GPU versus those requiring 64 GPUs and emphasizes that all necessary networking and security configurations must be automated to provide a true self-service experience.

Ultimately, Budhani argues that the goal is self-service consumption of applications or tools, not just GPUs. He believes the industry is moving beyond the “GPU as a service” concept, with users now focused on consuming models and endpoints rather than managing the underlying GPU infrastructure. He suggests that his company, Rafay, addresses many of the complexities in this space, offering solutions that enable the delivery of applications and tools in a self-service, multi-tenant environment.


Accelerating AI Infrastructure Adoption for GPU Providers and Enterprises with Rafay

Event: AI Infrastructure Field Day 3

Appearance: Rafay presents at AI Infrastructure Field Day 3

Company: Rafay

Video Links:

Personnel: Haseeb Budhani

Haseeb Budhani, CEO of Rafay Systems, begins by highlighting the confusion surrounding Rafay’s classification, noting that people variously describe it as a platform as a service (PaaS), orchestration, or middleware, and he welcomes feedback on which term best fits. He then pivots to discussing the current market dynamics in AI infrastructure, particularly the discrepancy between the cost of renting GPUs from providers like Amazon versus acquiring them independently. He illustrates this with an example of using DeepSeek R1, highlighting that while Amazon charges significantly more for consuming the model via Bedrock, renting the underlying H100 GPU directly is much cheaper.

Budhani argues that many companies renting out GPUs are not true “clouds” and may struggle in the long term because they are not selling services on top of the GPUs. He references an Accenture report suggesting that GPU as a Service (GPaaS) will diminish as the market matures, with more value being derived from services. He emphasizes that hyperscalers like Amazon have understood this for a long time, generating most of their revenue from services rather than infrastructure as a service (IaaS). This presents an opportunity for Rafay to help GPU providers and enterprises deliver these higher-level services, enabling them to compete more effectively with hyperscalers and unlock significant cost savings, citing an example of a telco in Thailand that could save millions by deploying its own AI infrastructure with Rafay’s software.

The speaker concludes by emphasizing the increasing importance of sovereign clouds, especially in regions like Europe and the Middle East. Telcos, which previously lost business to public clouds, now have a renewed opportunity to provide AI infrastructure locally due to sovereignty requirements. He states that Rafay aims to provide these telcos and other regional providers with the necessary software stack to deliver these services, thereby addressing a common problem across various geographic locations. He highlights a telco in Indonesia, Indosat, as an early example of a customer using Rafay to deliver a sovereign AI cloud, underscoring the growing demand for such solutions globally.


The Open Flash Platform Initiative with Hammerspace

Event: AI Infrastructure Field Day 3

Appearance: Hammerspace presents at AI Infrastructure Field Day 3

Company: Hammerspace

Video Links:

Personnel: Kurt Kuckein

The Open Flash Platform (OFP) Initiative is a multi-member industry collaboration founded in July 2025. The initiative’s goal is to redefine flash storage architecture, particularly for high-performance AI and data-centric workloads, by replacing traditional storage servers with an open approach that yields a more efficient, modular, standards-based, and disaggregated model.

The presentation highlights the growing challenges of data storage, power consumption, and cooling in modern data centers, especially with the increasing volume of data generated at the edge. The core idea behind the OFP initiative is to leverage recent advancements in large-capacity flash (QLC), powerful DPUs (Data Processing Units), and Linux kernel enhancements to create a highly dense, low-power storage platform. This platform aims to replace traditional CPU-based storage servers with a modular design, ultimately allowing for exabyte-scale deployments within a single rack.

The proposed architecture consists of sleds containing DPUs, networking, and NVMe storage, fitting into trays that can be modularly deployed. This approach offers significant improvements in density and power efficiency compared to existing solutions. While the initial concept uses U.2 drives, the long-term goal is to leverage an extended E.2 standard for even greater capacity. Hammerspace is leading the initiative, fostering collaboration among industry players, including DPU and SSD partners, and exploring adoption by organizations like the Open Compute Project (OCP).

Hammerspace envisions a future where AI infrastructure relies on open standards and efficient hardware. The OFP initiative aligns with this vision by providing a non-proprietary, high-capacity storage platform optimized for AI workloads. The goal is to let organizations modernize their storage for AI by using the flash capacity that is already available, rather than buying additional proprietary storage systems.


Activating Tier 0 Storage Within GPU and CPU-based Compute Cluster with Hammerspace

Event: AI Infrastructure Field Day 3

Appearance: Hammerspace presents at AI Infrastructure Field Day 3

Company: Hammerspace

Video Links:

Personnel: Floyd Christofferson

The highest performing storage available today is an untapped resource within your server clusters that can be activated by Hammerspace to accelerate AI workloads and increase GPU utilization. This session covers how Hammerspace unifies local NVMe across server clusters as a protected, ultra-fast tier that is part of a unified global namespace. This underutilized capacity can now accelerate AI workloads as shared storage, with data automatically orchestrated by Hammerspace across other tiers and cloud storage to improve time to token while also reducing infrastructure costs.

Floyd Christofferson from Hammerspace introduces Tier 0, focusing on how it accelerates AI workflows in GPU and CPU-based clusters. The core problem addressed is the stranded capacity of local NVMe storage within servers, which, despite its speed, is often underutilized. Accessing data over the network to external storage becomes a bottleneck, especially in AI workflows with growing context lengths and fast token access requirements. While increasing network capacity is an option, it’s expensive and still limited. Tier 0 aggregates this local capacity into a single storage tier, making it the primary storage for workflows and enabling programmatic data orchestration, effectively unlocking petabytes of previously unused storage and eliminating the need to buy additional expensive Tier 1 storage.

Hammerspace’s Tier 0 leverages standards-based environments, with the client-side using standard NFS, SMB, and S3 protocols, eliminating the need for client-side software installations. The technology utilizes parallel NFS v4.2 with flex files, contributed to the Linux kernel, to enhance performance and efficiency. This approach avoids proprietary clients and special server deployments, allowing the system to work with existing infrastructure. The orchestration and unification of capacity across servers are key to the solution, turning compute nodes into storage servers without creating isolated islands, thereby reducing bottlenecks and improving data access speeds.

The presentation highlights the performance benefits of Tier 0, showcasing theoretical results and MLPerf benchmarks that demonstrate superior performance per rack unit. By utilizing local NVMe storage, Hammerspace reduces the reliance on expensive and slower cloud storage networks, leading to greater GPU utilization. Furthermore, Hammerspace contributes enhancements to the Linux kernel, such as local IO, to reduce CPU utilization and accelerate write performance, solidifying its commitment to standard-based solutions and continuous improvement in data accessibility. The architecture is designed to be non-disruptive, allowing for live data mobility behind the scenes, ensuring seamless user experience.


What is AI Ready Storage, with Hammerspace

Event: AI Infrastructure Field Day 3

Appearance: Hammerspace presents at AI Infrastructure Field Day 3

Company: Hammerspace

Video Links:

Personnel: Molly Presley

AI Ready Storage is data infrastructure designed to break down silos and give enterprises seamless, high-performance access to their data wherever it lives. With 73% of enterprise data trapped in silos and 87% of AI projects failing to reach production, the bottleneck isn’t GPUs—it’s data. Traditional environments suffer from visualization challenges, high costs, and data gravity that limits AI flexibility. Hammerspace simplifies the enterprise data estate by unifying silos into a single global namespace and providing instant access to data—without forklift upgrades—so organizations can accelerate AI success.

The presentation focused on leveraging existing infrastructure and data to make it AI-ready, emphasizing simplicity for AI researchers under pressure to deliver high-quality results quickly. Hammerspace simplifies the data readiness process, enabling easy access and utilization of data within infrastructure projects. While the presentation covers technical aspects, the emphasis remains on ease of deployment, workload management, and rapid time to results, aligning with customer priorities. Hammerspace provides a virtual data layer across existing infrastructure, creating a unified data namespace enabling access and mobilization of data across different storage systems, enriching metadata for AI workloads, and facilitating data sharing in collaborative environments.

Hammerspace addresses key AI use cases such as global collaboration, model training, and inferencing, particularly focusing on enterprise customers with existing data infrastructure they wish to leverage. The platform’s ability to assimilate metadata from diverse storage systems into a unified control plane allows for a single interface to data, managed through Hammerspace for I/O control and quality of service. By overcoming data gravity through intelligent data movement and leveraging Linux advancements, Hammerspace enables data access regardless of location, maximizing GPU utilization and reducing costs. This is achieved by focusing on data access, compliance, and governance, ensuring that AI projects align with business objectives and minimizing risks associated with data movement.

Hammerspace aims to unify diverse data sources, from edge data to existing storage systems, enabling seamless access for AI factories and competitive advantages through faster data insights. With enriched metadata and automated workflows, Hammerspace accelerates time to insight and removes manual processes. Hammerspace is available as installable software or as a hardware appliance, and supports various deployment models, offering linear scalability and distributed access to data. A “Tier 0” capability was also discussed, which leverages existing underutilized NVMe storage within GPU nodes to create a fast, low-latency storage pool, showcasing the platform’s flexibility and resourcefulness.


The AI Factory in Action: Basketball play classification with Hewlett Packard Enterprise

Event: AI Infrastructure Field Day 3

Appearance: HPE presents at AI Infrastructure Field Day 3

Company: HPE

Video Links:

Personnel: Mark Seither

This session provides a live demonstration of a practical AI application built on top of HPE Private Cloud AI (PCAI). The speaker, Mark Seither, showcases a basketball play classification application that leverages a machine learning model trained on PCAI. This model accurately recognizes and categorizes various basketball plays, such as pick and roll, isolation, and fast break. The demo highlights how the powerful and predictable infrastructure of PCAI enables the development and deployment of complex, real-world AI solutions. This example illustrates the full lifecycle of an AI project—from training to deployment—on a private cloud platform.

The presentation details the development of an AI application for an NBA team that focuses on video analysis, starting with the specific use case of identifying player fatigue. The initial approach involved using an open-source video classification model called SlowFast, which was trained to recognize basketball plays such as pick-and-rolls and isolations. To create a labeled dataset for training, the presenter manually extracted and labeled video clips from YouTube using tools like QuickTime and Label Studio. The model, trained on a small dataset of labeled plays, demonstrated promising accuracy in identifying these plays; although it had limitations, it illustrated a basic but functional approach.
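
In the spirit of that workflow, the rough sketch below loads a pretrained SlowFast model from the pytorchvideo hub and swaps its Kinetics head for a small basketball-play head. The class list, head-replacement detail (which can vary across pytorchvideo versions), and dummy tensors are assumptions; this is not the presenter’s code.

```python
# Rough sketch of adapting a pretrained SlowFast video model to a handful of
# basketball play classes. The head attribute and batch shapes may differ by
# pytorchvideo version; treat this as an illustrative starting point.
import torch
import torch.nn as nn

PLAYS = ["pick_and_roll", "isolation", "fast_break"]

# Pretrained SlowFast R50 from the pytorchvideo hub (Kinetics-400 weights).
model = torch.hub.load("facebookresearch/pytorchvideo", "slowfast_r50", pretrained=True)

# Swap the 400-class Kinetics head for a small basketball-play head.
head = model.blocks[-1]
head.proj = nn.Linear(head.proj.in_features, len(PLAYS))

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(slow_pathway, fast_pathway, labels):
    """One step over a batch of labeled clips (two pathways, per SlowFast)."""
    model.train()
    logits = model([slow_pathway, fast_pathway])
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch: the slow pathway samples 8 frames, the fast pathway 32.
slow = torch.randn(2, 3, 8, 224, 224)
fast = torch.randn(2, 3, 32, 224, 224)
print(train_step(slow, fast, torch.tensor([0, 2])))
```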

The speaker then discusses the next steps involving HPE’s Machine Learning Inferencing Service (MLIS) to deploy the model as an endpoint. This would allow the team to upload and classify video clips more easily. Furthermore, he plans to integrate the play classification with a video language model (VLM), enabling the team to query their video assets using natural language, such as “Show me every instance of Steph Curry running a pick and roll in the fourth quarter of a game in 2017.” He also showcased the retrieval-augmented generation (RAG) capabilities of the platform, using the NBA collective bargaining agreement to answer specific questions, highlighting the platform’s potential to provide quick, valuable insights to customers.
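
For readers unfamiliar with the pattern, the bare-bones sketch below shows retrieval-augmented generation in miniature: retrieve the most relevant passages (here, stand-in snippets for the collective bargaining agreement) with TF-IDF and prepend them to the question before calling a language model. HPE PCAI’s actual RAG pipeline is not shown and certainly differs.

```python
# Bare-bones illustration of the RAG pattern mentioned above: retrieve the
# most relevant document chunks and build a grounded prompt. The chunks and
# retrieval method are stand-ins, not the PCAI implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

chunks = [
    "Section 1: definitions of salary cap terms ...",
    "Section 7: rules governing two-way contracts ...",
    "Section 11: luxury tax computation and apron rules ...",
]

vectorizer = TfidfVectorizer().fit(chunks)
chunk_vectors = vectorizer.transform(chunks)

def retrieve(question: str, k: int = 2) -> list[str]:
    scores = cosine_similarity(vectorizer.transform([question]), chunk_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [chunks[i] for i in top]

question = "How is the luxury tax calculated?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to the deployed LLM endpoint
```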