The Ten Year Protective DNS Journey with Infoblox

Event: Security Field Day 14

Appearance: Infoblox Presents at Security Field Day 14

Company: Infoblox

Video Links:

Personnel: Mukesh Gupta

DNS is no longer just infrastructure — it is the frontline of preemptive security. This session highlights Infoblox’s decade-long journey in shaping DNS security, with Protective DNS at the center of defending users against evolving threats. Attendees will see why DNS is uniquely positioned to stop attacks before they spread and how DDI integration delivers powerful visibility, automation, and protection.

Speaker Mukesh Gupta detailed Infoblox’s evolution from an enterprise appliance company known for DDI (DNS, DHCP, and IPAM) to a security-focused organization. He explained that as enterprises adopted multiple cloud platforms, they ended up with siloed DNS systems (e.g., on-prem, AWS Route 53, Azure DNS), leading to complexity and outages. Infoblox addressed this by creating “Universal DDI,” a platform that provides a single management layer for all of a customer’s disparate DNS services, whether they are on-premises or in the cloud, and offers a true SaaS-based option for DDI services.

Gupta emphasized that DNS is the first point of detection for nearly all types of cyberattacks—from phishing and malware to data exfiltration—because a DNS query always precedes the malicious action. Blocking threats at this initial DNS layer is highly effective, protecting all devices on the network without deploying new agents and significantly reducing the load on other security tools like firewalls and XDRs. Infoblox’s unique approach, developed by a former NSA expert, focuses on tracking the cybercriminal “cartels” rather than individual attacks. Instead of chasing millions of malicious domains (the “drug dealers”), Infoblox identifies and monitors the infrastructure of organizations like “Prolific Puma” (a malicious URL shortening service) or “VainWiper” (a malicious traffic distribution system) that service thousands of attackers. This “cartel”-focused strategy provides a significant strategic advantage.
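
To make the mechanism concrete, here is a minimal Python sketch of DNS-layer blocking, assuming a hypothetical threat-intelligence feed; real deployments enforce this in the resolver itself (for example via response policy zones) rather than in application code.

```python
# Illustrative sketch only: protective-DNS-style filtering, assuming a
# hypothetical threat-intel feed of domains tied to tracked "cartel" infrastructure.
import dns.resolver  # pip install dnspython

BLOCKED_DOMAINS = {"malicious-shortener.example", "tds-redirector.example"}

def protective_resolve(qname: str) -> list[str]:
    """Refuse to resolve domains on the intel feed; resolve everything else."""
    if qname.rstrip(".").lower() in BLOCKED_DOMAINS:
        raise PermissionError(f"blocked by protective DNS policy: {qname}")
    return [r.address for r in dns.resolver.resolve(qname, "A")]

print(protective_resolve("example.com"))            # resolves normally
# protective_resolve("malicious-shortener.example") # raises before any connection is made
```

Because every device on the network already depends on the resolver, a block at this layer requires no new agents, which is the operational point Gupta makes above.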

The primary benefits of this unique approach are a massive lead time and incredible accuracy. Infoblox can identify malicious domains an average of 68 days before they are used in a campaign, often right after the cartel registers them, allowing for preemptive blocking without waiting for a “patient zero.” This methodology also results in an extremely low false positive rate (0.0002%). Gupta argued that integrating this protection directly into the DDI platform is more operationally efficient, as it prevents finger-pointing between network and security teams when a domain is blocked. Infoblox is now extending this protection to cloud workloads, either by having customers point their cloud DNS to Infoblox’s service or through native integrations, such as the new Google Cloud DNS Armor service, which is powered by Infoblox’s threat intelligence technology.


HPE SD-WAN Gateways & Advanced Services

Event: Security Field Day 14

Appearance: HPE Presents at Security Field Day 14

Company: HPE

Video Links:

Personnel: Adam Fuoss, Nirmal Rajarathnam

Explore how the HPE secure SD-WAN portfolio helps protect branch locations against cyberthreats while embracing the flexibility of cloud-first architectures. Discover how the new HPE Networking Application Intelligence Engine (AppEngine) strengthens security with real-time defense, leveraging aggregated application security insights such as risk, reputation, vulnerability, and compliance.

In this session, HPE introduced its newly combined SD-WAN portfolio, which includes Aruba SD-Branch, EdgeConnect (formerly Silver Peak), and the Juniper Session Smart Router. The presentation focused on a key security challenge in branch networks: the lateral movement of threats once a bad actor gains entry. Presenters argued that while identity-based segmentation was an improvement over static VLANs, it is insufficient without a deep understanding of the applications traversing the network. To address this gap, HPE unveiled its Application Intelligence Engine (AppEngine), a new service running within the Aruba Central management platform. The engine’s primary goal is to provide a comprehensive application posture, enabling more effective dynamic segmentation to protect against internal threats.

The AppEngine works by ingesting, correlating, and normalizing application data from multiple sources, such as deep packet inspection (DPI) and URL filtering, into a single, unified application catalog. This process creates a rich, contextual profile for each application, complete with security scores, known vulnerabilities, compliance data, and encryption details. From the central dashboard, an administrator can define global, role-based security policies based on this application intelligence. The AppEngine then automatically distributes the appropriate signatures and policies to the relevant enforcement points, like gateways or access points. The demonstration showcased an administrator identifying high-risk applications and creating a policy to block them for specific user roles during business hours, all without touching individual device configurations. Currently, this functionality is available for the SD-Branch solution managed by Aruba Central, with plans to extend its capabilities across the broader portfolio in the future.
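
The normalization step can be pictured with a short sketch. The schema, field names, and scoring below are illustrative assumptions, not HPE’s actual AppEngine data model:

```python
# Hypothetical sketch: merge per-source application records (DPI, URL filtering)
# into one unified catalog entry keyed by application name.
from dataclasses import dataclass, field

@dataclass
class AppProfile:
    name: str
    risk: int = 0                          # 0 (safe) .. 100 (high risk)
    categories: set = field(default_factory=set)
    vulnerabilities: list = field(default_factory=list)
    encrypted: bool | None = None

def normalize(dpi_records: list[dict], url_records: list[dict]) -> dict[str, AppProfile]:
    catalog: dict[str, AppProfile] = {}
    for rec in dpi_records:                              # deep packet inspection source
        app = catalog.setdefault(rec["app"], AppProfile(rec["app"]))
        app.encrypted = rec.get("tls", app.encrypted)
        app.vulnerabilities += rec.get("cves", [])
    for rec in url_records:                              # URL-filtering source
        app = catalog.setdefault(rec["app"], AppProfile(rec["app"]))
        app.categories.add(rec["category"])
        app.risk = max(app.risk, rec.get("reputation_risk", 0))  # keep the worst score
    return catalog

catalog = normalize(
    dpi_records=[{"app": "filedrop", "tls": True, "cves": ["CVE-2024-0001"]}],
    url_records=[{"app": "filedrop", "category": "file-sharing", "reputation_risk": 80}],
)
print(catalog["filedrop"])
```

A policy engine can then match on these unified profiles (for example, “block risk above 70 for contractor roles”) instead of on raw per-source signatures.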


HPE SRX Series Next-Generation Firewalls & Threat Prevention

Event: Security Field Day 14

Appearance: HPE Presents at Security Field Day 14

Company: HPE

Video Links:

Personnel: Kedar Dhuru, Mounir Hahad, Pradeep Hattiangadi

Discover how the SRX firewall portfolio secures networks of any size. We’ll dive into AI-Predictive Threat Prevention (AI-PTP), which neutralizes zero-day attacks with a proxy-less, real-time, on-device AI engine. We’ll also cover how a Machine Learning detection pipeline continuously provides automatically generated signatures for emerging threats, delivering stronger security without compromising firewall performance.

The session outlines a security philosophy focused on making security easier to operationalize, from the user edge to the data center. The speakers explain that with the rise of device proliferation, distributed applications, and Gen AI, the threat landscape has become more complex. HPE’s approach is to use a comprehensive threat detection pipeline, heavily leveraging AI and machine learning, directly on their SRX firewalls. This strategy aims for a high detection rate and a very low false positive rate without sacrificing performance. The core of the presentation centers on a feature called AI-Predictive Threat Prevention (AI-PTP), which represents a shift from traditional reactive, signature-based models to a proactive approach for identifying both known and zero-day malware.

The AI-PTP system operates using a two-stage process. First, machine learning models are trained in HPE’s ATP Cloud using vast datasets of malicious and benign files. These trained models are then deployed to the SRX firewalls, where the “inference” or detection happens directly on the device. A key differentiator is its inline, proxy-less architecture, which analyzes just the initial portion of a file as it’s being downloaded to quickly determine if it’s malicious. This allows the firewall to block threats in real-time. This on-box capability is part of a defense-in-depth strategy, augmented by cloud-based analysis, including multiple sandboxing methods. During the demonstration and Q&A, it was clarified that this process has a negligible performance impact, can update threat signatures across all customers in minutes, and can automatically place an infected host on a blocklist that is shared across the entire HPE security ecosystem, including NAC and switching solutions.
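
As a rough illustration of the two-stage idea (train in the cloud, infer on the device against only the head of the file), here is a toy sketch; the byte-histogram features and logistic-regression model are stand-ins, not the actual AI-PTP engine:

```python
# Conceptual sketch of inline, "inspect only the head of the file" detection.
from sklearn.linear_model import LogisticRegression

FIRST_BYTES = 4096  # the verdict is made from the initial portion of the transfer

def byte_histogram(chunk: bytes) -> list[float]:
    """Trivial feature vector: normalized frequency of each byte value."""
    counts = [0] * 256
    for b in chunk:
        counts[b] += 1
    total = max(len(chunk), 1)
    return [c / total for c in counts]

# Stage 1 stand-in (cloud training): fit on labeled malicious/benign samples.
train_chunks = [b"MZ\x90\x00" * 64, b"hello world " * 32]   # toy data
model = LogisticRegression().fit([byte_histogram(c) for c in train_chunks], [1, 0])

def inline_verdict(stream_head: bytes) -> str:
    """Stage 2 stand-in (on-device inference): score the file head during download."""
    score = model.predict_proba([byte_histogram(stream_head[:FIRST_BYTES])])[0][1]
    return "block" if score > 0.9 else "allow"

print(inline_verdict(b"MZ\x90\x00" + b"\x00" * 100))
```

In a proxy-less inline deployment, the scored bytes are observed on the wire, so a block decision lands before the transfer completes.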


HPE Networking Security Overview with Madani Adjali

Event: Security Field Day 14

Appearance: HPE Presents at Security Field Day 14

Company: HPE

Video Links:

Personnel: Madani Adjali

This presentation marks a significant moment for HPE, as it’s the first time Aruba Networks, now part of HPE, has presented at Security Field Day since 2018. The recent acquisition of Juniper Networks has further expanded HPE’s security portfolio, leading to the formation of HPE Networking. The presenter, Madani Adjali, highlights the historical context of both Aruba and Juniper’s past presentations at the event, expressing a desire for more frequent participation in the future. The newly formed HPE Networking is structured into several groups, including campus and branch, data center, and WAN, with this presentation focusing specifically on the SASE and security pillar.

The core of the presentation will delve into two main areas: new capabilities within Aruba Central related to application intelligence and advancements in the firewall side of the portfolio, leveraging the SRX platform. The SASE and security pillar, led by Adjali, encompasses a wide range of products, including network access control, SD-WAN, SASE, and firewalls. The audience is given a high-level overview of the comprehensive security offerings now available through HPE, which range from various SD-WAN solutions to a full suite of firewalls, ZTNA, SWG, and CASB. The presenter also mentions ClearPass Policy Manager, a network access control product demonstrated back in 2018, and its new cloud-oriented capabilities.

The presentation aims to be an interactive session, with a team of experts on hand to provide in-depth information and answer questions, showcasing the power and breadth of the new HPE Networking security portfolio following the recently completed Juniper Networks acquisition. It features deep dives into the technical aspects of the new security capabilities, with a particular focus on the integration of AI and predictive technologies to enhance threat prevention and application intelligence, and should be informative for anyone interested in the future of network security and the combined strengths of HPE and Juniper Networks.


ZEDEDA Edge AI – Object Recognition Use Case

Event:

Appearance: ZEDEDA Edge Field Day Showcase

Company: ZEDEDA

Video Links:

Personnel: Sérgio Santos

In this ZEDEDA Edge Field Day Showcase, Account Solutions Architect Sérgio Santos shows how ZEDEDA manages edge AI for a practical object recognition use case in computer vision. He demonstrates deploying a stack of three applications—an AI inference container, a Prometheus database, and a Grafana dashboard—using the Docker Compose runtime across a fleet of three devices, one equipped with a GPU and two without. The demo highlights the ability to deploy and manage applications at scale from a single control plane, leveraging ZEDEDA’s automated deployment policies. The process starts from a clean slate, moves through provisioning the edge nodes, and automatically pushes the application stack based on predefined policies, including GPU-specific logic.
A key part of the demonstration is the live update and rollback process. Santos shows how to remotely update the inference container to a new version and then roll it back to the original without restarting the runtime. This highlights ZEDEDA’s lightweight, efficient updates and the use of its Zix infrastructure to push configuration changes. The demo also shows the ability to monitor application logs and device metrics (CPU, memory, network traffic) from the central ZEDEDA controller, proving the platform’s comprehensive management capabilities. The session concludes by demonstrating how to easily wipe the entire application stack by simply moving the edge nodes to a different project.
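
For a sense of how the three applications in the stack cooperate, here is a minimal sketch of an inference container exposing Prometheus metrics for Grafana to graph; it uses the real prometheus_client library, but the metric names and detection loop are illustrative, not ZEDEDA’s demo code:

```python
# Sketch: the inference container publishes metrics that Prometheus scrapes
# and Grafana visualizes.
import random, time
from prometheus_client import Counter, Histogram, start_http_server

OBJECTS_DETECTED = Counter("objects_detected_total", "Objects recognized, by label", ["label"])
INFERENCE_SECONDS = Histogram("inference_seconds", "Per-frame inference latency")

def fake_infer_frame() -> str:
    """Stand-in for the real vision model (GPU-accelerated where available)."""
    time.sleep(random.uniform(0.01, 0.05))
    return random.choice(["person", "car", "bicycle"])

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<node>:8000/metrics
    while True:
        with INFERENCE_SECONDS.time():
            label = fake_infer_frame()
        OBJECTS_DETECTED.labels(label=label).inc()
```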


Manage Edge AI Using ZEDEDA Kubernetes Service

Event:

Appearance: ZEDEDA Edge Field Day Showcase

Company: ZEDEDA

Video Links:

Personnel: Hariharasubramanian C. S.

In this Edge Field Day Showcase, ZEDEDA’s Distinguished Engineer, Hariharasubramanian C. S., discusses how ZEDEDA is tackling the growing importance and challenges of deploying AI at the edge. He highlights that factors like insufficient bandwidth, high latency, and data privacy concerns make it impractical to send all sensor data to the cloud for analysis. ZEDEDA’s solution is to bring AI to the edge, closer to the data source. This, however, introduces its own challenges, such as managing a wide range of hardware, ensuring autonomy in disconnected environments, and updating AI models at scale. Hari argues that Kubernetes, with its lightweight nature and robust ecosystem, is the ideal solution for packaging and managing complex AI pipelines at the edge.
This presentation demonstrates how ZEDEDA’s Kubernetes service simplifies the deployment of an Edge AI solution for car classification. Using a Helm chart, he shows how to deploy a multi-component application, including an OpenVINO inference server, a model-pulling sidecar, and a demo client application. The demo showcases how the ZEDEDA platform provides a unified control plane for zero-touch provisioning and lifecycle management of these components, all while keeping models in a private, on-premise network without exposing them to the cloud. He concludes by demonstrating the application’s real-time inference capabilities and encouraging developers to leverage ZEDEDA’s open-source repositories to build their own edge AI solutions.
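
A client of such a deployment might look like the following hedged sketch; the in-cluster service name, route, and payload shape are assumptions rather than the demo’s actual code:

```python
# Sketch: POST a frame to an in-cluster inference service and read the result.
import base64
import requests

# Hypothetical in-cluster endpoint; the model never leaves the private network.
INFER_URL = "http://openvino-server.ai-demo.svc.cluster.local:8000/v2/models/car-classifier/infer"

def classify(image_path: str) -> dict:
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode()
    payload = {"inputs": [{"name": "image", "datatype": "BYTES",
                           "shape": [1], "data": [encoded]}]}
    resp = requests.post(INFER_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()

print(classify("car.jpg"))
```

Because only this in-cluster URL is ever called, the sketch also reflects the “models stay on-premises” property described above.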


Understanding Containers at the Edge with ZEDEDA

Event:

Appearance: ZEDEDA Edge Field Day Showcase

Company: ZEDEDA

Video Links:

Personnel: Kristopher Clark, Manny Calero

In this Edge Field Day Showcase, ZEDEDA’s Consulting Solutions Architect, Manny Calero, demonstrates how the ZEDEDA platform addresses the diverse needs of edge computing workloads. While Kubernetes is ideal for large, complex, and distributed applications, Docker Compose is often a better fit for smaller, lightweight, and resource-constrained edge sites. The ZEDEDA platform’s key strength lies in its flexibility, allowing users to deploy both legacy VMs and modern containerized applications side-by-side on the same edge node. This provides a unified orchestration and management experience, offering a simple solution for a repeatable, scalable, and secure edge architecture. This presentation includes a demo of the ZEDEDA platform to deploy Docker Compose workloads to multiple edge nodes, highlighting features like zero-touch provisioning and API-driven automation with Terraform.
Solutions Architect Kris Clark presents the ZEDEDA Edge Kubernetes Service. While Kubernetes is complex, it is essential for highly scalable, distributed, and sophisticated applications. Kris provides a brief overview of the Kubernetes service’s architecture, emphasizing its ease of use and its ability to integrate with familiar developer tools like kubectl and Git repositories. The demo shows how to quickly create a Kubernetes cluster and deploy applications from the ZEDEDA marketplace or from a custom Helm chart. The presentation concludes with a discussion of how the ZEDEDA platform provides a cohesive solution for both containerized and VM-based workloads, supporting enterprises in their digital transformation journey at the edge.


ZEDEDA Automated Orchestration for the Distributed Edge

Event:

Appearance: ZEDEDA Edge Field Day Showcase

Company: ZEDEDA

Video Links:

Personnel: Padraig Stapleton

In this Edge Field Day showcase, ZEDEDA’s Padraig Stapleton, SVP and Chief Product Officer, provides a comprehensive overview of ZEDEDA, its origins, and its vision for bringing the cloud experience to the unique and often hostile environment of the edge. The video highlights how ZEDEDA’s platform enables businesses to securely and scalably run their applications at the edge. The discussion covers how the platform addresses the complexities of diverse hardware, environments, and security challenges, allowing customers to focus on their core business applications.
This presentation also introduces the ZEDEDA edge computing platform for visibility, security, and control of edge hardware and applications. It details a unique partnership with OnLogic to provide zero-touch provisioning and discusses various real-world use cases, including container shipping, global automotive manufacturing, and oil and gas.


Unlock AI Cloud Potential with the Rafay Platform

Event: AI Infrastructure Field Day 3

Appearance: Rafay presents at AI Infrastructure Field Day 3

Company: Rafay

Video Links:

Personnel: Haseeb Budhani

Haseeb Budhani, CEO of Rafay Systems, discusses how the Rafay platform can be used to address AI use cases. The platform provides a white-label ready portal that allows end users to self-service provision various compute resources and AI/ML platform services. This enables cloud providers and enterprises to offer services like Kubernetes, bare metal, GPU as a service, and NVIDIA NIM with a simple and standardized experience.

The Rafay platform leverages standardization, infrastructure-as-code (IaC) concepts, and GitOps pipelines to drive consumption for a large number of enterprises. Built on a Git engine for configuration management and capable of handling complex multi-tenancy requirements with integration to various identity providers, the platform allows customers to offer different services, compute functions, and form factors to their end customers through configurable, white-labeled catalogs. Additionally, the platform features a serverless layer for deploying custom code on Kubernetes or VM environments, enabling partners and customers to deliver a wide range of applications and services, from DataRobot to Jupyter notebooks, as part of their offerings.

Rafay addresses security concerns through SOC 2 Type 2 compliance for its SaaS product, providing pentest reports and agent reports for customer assurance. For larger customers, particularly cloud providers, an air-gapped product is offered, allowing them to deploy and manage the Rafay controller within their own secure environments. Furthermore, the platform’s unique Software Defined Perimeter (SDP) architecture enables it to manage Kubernetes clusters remotely, even on edge devices with limited connectivity, by establishing an inside-out connection and a proxy service for secure communication.
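
The inside-out pattern can be sketched in a few lines; the controller endpoint, framing, and registration message below are hypothetical stand-ins for Rafay’s SDP protocol, not its actual implementation:

```python
# Sketch: the in-cluster agent dials OUT to the controller, so no inbound
# firewall holes are needed; the controller then proxies requests back over
# that connection.
import json, socket, ssl, subprocess

CONTROLLER = ("controller.example.com", 443)  # hypothetical

def run_agent():
    ctx = ssl.create_default_context()
    with socket.create_connection(CONTROLLER) as raw, \
         ctx.wrap_socket(raw, server_hostname=CONTROLLER[0]) as conn:
        conn.sendall(b'{"register": "edge-cluster-42"}\n')   # agent announces itself
        for line in conn.makefile("r"):                      # controller pushes requests
            req = json.loads(line)
            # Proxy each request to the local Kubernetes API and return the answer.
            out = subprocess.run(["kubectl", *req["args"]], capture_output=True, text=True)
            conn.sendall(json.dumps({"id": req["id"], "stdout": out.stdout}).encode() + b"\n")
```

The key property is the direction of the TCP connection: the edge device initiates it, which is why clusters behind NAT or with intermittent connectivity remain manageable.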


From Infrastructure Chaos to Cloud-Like Control with Rafay

Event: AI Infrastructure Field Day 3

Appearance: Rafay presents at AI Infrastructure Field Day 3

Company: Rafay

Video Links:

Personnel: Haseeb Budhani

Rafay, founded seven years ago, initially focused on Kubernetes but has evolved to address the broader challenge of simplifying compute consumption across various environments. Their solution aims to provide self-service compute to companies across verticals.

Rafay typically engages with companies that already have existing infrastructure, automation, and deployments. The core problem they solve is standardization across diverse environments and users. They help companies build a platform engineering function that enables efficient management of environments, upgrades, and policies. The Rafay platform abstracts the underlying infrastructure, providing an interface for users to request and consume compute resources without needing to understand the complexities of the underlying systems.

Rafay’s platform allows organizations to deliver self-service compute across diverse environments and teams, managing identity, policies, and automation. The goal is to reduce the time developers waste on infrastructure tasks, which, according to Rafay, can be as high as 20% in large enterprises. They offer a comprehensive solution that encompasses inventory management, governance, and control, all while generating the underlying infrastructure as code for versioning and auditability. In summary, Rafay enables companies to move away from custom, in-house solutions to a standardized, automated, and cloud-like compute consumption model.


Bridging the gap from GPU-as-a-Service to AI Cloud with Rafay

Event: AI Infrastructure Field Day 3

Appearance: Rafay presents at AI Infrastructure Field Day 3

Company: Rafay

Video Links:

Personnel: Haseeb Budhani

Rafay CEO Haseeb Budhani argues that to truly be considered a cloud provider, organizations must offer self-service consumption, applications (or tools), and multi-tenancy. He contends that many GPU clouds currently rely on manual processes like spreadsheets and bare metal servers, which don’t qualify as true cloud solutions. Budhani emphasizes that users should be able to access a portal, create an account, and consume services on demand, without requiring backend intervention for tasks like VLAN setup or IP address management.

Budhani elaborates on his definition of multi-tenancy, outlining the technical requirements for supporting diverse customer needs. This includes secure VMs, operating system images with pre-installed tools, public IP addresses, firewall rules, and VPCs. He highlights the difference between customers needing a single GPU versus those requiring 64 GPUs and emphasizes that all necessary networking and security configurations must be automated to provide a true self-service experience.
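
A rough sketch of that automation chain follows, with every provider call stubbed out as a hypothetical placeholder; real systems would hit cloud, IPAM, and firewall APIs:

```python
# Sketch: one self-service request fans out into network, security, and compute
# provisioning with no human in the loop.
from dataclasses import dataclass

@dataclass
class TenantRequest:
    tenant: str
    gpus: int          # 1 for a notebook user, 64 for a training cluster
    image: str         # OS image with drivers and tools pre-installed
    public_ip: bool

# --- Stubs standing in for real provider APIs (hypothetical) -----------------
def create_vpc(tenant):                          return f"vpc-{tenant}"
def apply_firewall_rules(vpc, *, default_deny):  return {"vpc": vpc, "default_deny": default_deny}
def launch_secure_vms(vpc, *, gpus, image):      return [f"{vpc}-vm-{i}" for i in range(max(1, gpus // 8))]
def allocate_public_ip(vpc):                     return "203.0.113.10"

def provision(req: TenantRequest) -> dict:
    vpc = create_vpc(req.tenant)                       # isolated per-tenant network
    fw = apply_firewall_rules(vpc, default_deny=True)  # tenant-scoped security policy
    vms = launch_secure_vms(vpc, gpus=req.gpus, image=req.image)
    ip = allocate_public_ip(vpc) if req.public_ip else None
    return {"vpc": vpc, "firewall": fw, "vms": vms, "public_ip": ip}

print(provision(TenantRequest("acme", gpus=64, image="ubuntu-cuda", public_ip=True)))
```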

Ultimately, Budhani argues that the goal is self-service consumption of applications or tools, not just GPUs. He believes the industry is moving beyond the “GPU as a service” concept, with users now focused on consuming models and endpoints rather than managing the underlying GPU infrastructure. He suggests that his company, Rafay, addresses many of the complexities in this space, offering solutions that enable the delivery of applications and tools in a self-service, multi-tenant environment.


Accelerating AI Infrastructure Adoption for GPU Providers and Enterprises with Rafay

Event: AI Infrastructure Field Day 3

Appearance: Rafay presents at AI Infrastructure Field Day 3

Company: Rafay

Video Links:

Personnel: Haseeb Budhani

Haseeb Budhani, CEO of Rafay Systems, begins by highlighting the confusion surrounding Rafay’s classification, noting that people variously describe it as a platform as a service (PaaS), orchestration, or middleware, and he welcomes feedback on which term best fits. He then pivots to discussing the current market dynamics in AI infrastructure, particularly the discrepancy between the cost of renting GPUs from providers like Amazon versus acquiring them independently. He illustrates this with an example of using DeepSeek R1, highlighting that while Amazon charges significantly more for consuming the model via Bedrock, renting the underlying H100 GPU directly is much cheaper.

Budhani argues that many companies renting out GPUs are not true “clouds” and may struggle in the long term because they are not selling services on top of the GPUs. He references an Accenture report suggesting that GPU as a Service (GPaaS) will diminish as the market matures, with more value being derived from services. He emphasizes that hyperscalers like Amazon have understood this for a long time, generating most of their revenue from services rather than infrastructure as a service (IaaS). This presents an opportunity for Rafay to help GPU providers and enterprises deliver these higher-level services, enabling them to compete more effectively with hyperscalers and unlock significant cost savings, citing an example of a telco in Thailand that could save millions by deploying its own AI infrastructure with Rafay’s software.

The speaker concludes by emphasizing the increasing importance of sovereign clouds, especially in regions like Europe and the Middle East. Telcos, which previously lost business to public clouds, now have a renewed opportunity to provide AI infrastructure locally due to sovereignty requirements. He states that Rafay aims to provide these telcos and other regional providers with the necessary software stack to deliver these services, thereby addressing a common problem across various geographic locations. He highlights a telco in Indonesia, Indosat, as an early example of a customer using Rafay to deliver a sovereign AI cloud, underscoring the growing demand for such solutions globally.


The Open Flash Platform Initiative with Hammerspace

Event: AI Infrastructure Field Day 3

Appearance: Hammerspace presents at AI Infrastructure Field Day 3

Company: Hammerspace

Video Links:

Personnel: Kurt Kuckein

The Open Flash Platform (OFP) Initiative is a multi-member industry collaboration founded in July 2025. The initiative’s goal is to redefine flash storage architecture, particularly for high-performance AI and data-centric workloads, by replacing traditional storage servers with an open, disaggregated approach that yields a more efficient, modular, and standards-based model.

The presentation highlights the growing challenges of data storage, power consumption, and cooling in modern data centers, especially with the increasing volume of data generated at the edge. The core idea behind the OFP initiative is to leverage recent advancements in large-capacity flash (QLC), powerful DPUs (Data Processing Units), and Linux kernel enhancements to create a highly dense, low-power storage platform. This platform aims to replace traditional CPU-based storage servers with a modular design, ultimately allowing for exabyte-scale deployments within a single rack.

The proposed architecture consists of sleds containing DPUs, networking, and NVMe storage, fitting into trays that can be modularly deployed. This approach offers significant improvements in density and power efficiency compared to existing solutions. While the initial concept uses U.2 drives, the long-term goal is to leverage an extended E.2 standard for even greater capacity. Hammerspace is leading the initiative, fostering collaboration among industry players, including DPU and SSD partners, and exploring adoption by organizations like the Open Compute Project (OCP).

Hammerspace envisions a future where AI infrastructure relies on open standards and efficient hardware. The OFP initiative aligns with this vision by providing a non-proprietary, high-capacity storage platform optimized for AI workloads. The goal is to let organizations modernize their storage for AI without buying additional storage systems, instead utilizing the flash capacity that is already available.


Activating Tier 0 Storage Within GPU and CPU-based Compute Cluster with Hammerspace

Event: AI Infrastructure Field Day 3

Appearance: Hammerspace presents at AI Infrastructure Field Day 3

Company: Hammerspace

Video Links:

Personnel: Floyd Christofferson

The highest performing storage available today is an untapped resource within your server clusters that can be activated by Hammerspace to accelerate AI workloads and increase GPU utilization. This session covers how Hammerspace unifies local NVMe across server clusters as a protected, ultra-fast tier that is part of a unified global namespace. This underutilized capacity can now accelerate AI workloads as shared storage, with data automatically orchestrated by Hammerspace across other tiers and cloud storage to improve time to token while also reducing infrastructure costs.

Floyd Christofferson from Hammerspace introduces Tier 0, focusing on how it accelerates AI workflows in GPU and CPU-based clusters. The core problem addressed is the stranded capacity of local NVMe storage within servers, which, despite its speed, is often underutilized. Accessing data over the network to external storage becomes a bottleneck, especially in AI workflows with growing context lengths and fast token access requirements. While increasing network capacity is an option, it’s expensive and still limited. Tier 0 aggregates this local capacity into a single storage tier, making it the primary storage for workflows and enabling programmatic data orchestration, effectively unlocking petabytes of previously unused storage and eliminating the need to buy additional expensive Tier 1 storage.

Hammerspace’s Tier 0 leverages standards-based environments, with the client-side using standard NFS, SMB, and S3 protocols, eliminating the need for client-side software installations. The technology utilizes parallel NFS v4.2 with flex files, contributed to the Linux kernel, to enhance performance and efficiency. This approach avoids proprietary clients and special server deployments, allowing the system to work with existing infrastructure. The orchestration and unification of capacity across servers are key to the solution, turning compute nodes into storage servers without creating isolated islands, thereby reducing bottlenecks and improving data access speeds.

The presentation highlights the performance benefits of Tier 0, showcasing theoretical results and MLPerf benchmarks that demonstrate superior performance per rack unit. By utilizing local NVMe storage, Hammerspace reduces the reliance on expensive and slower cloud storage networks, leading to greater GPU utilization. Furthermore, Hammerspace contributes enhancements to the Linux kernel, such as local IO, to reduce CPU utilization and accelerate write performance, solidifying its commitment to standard-based solutions and continuous improvement in data accessibility. The architecture is designed to be non-disruptive, allowing for live data mobility behind the scenes, ensuring seamless user experience.


What is AI Ready Storage, with Hammerspace

Event: AI Infrastructure Field Day 3

Appearance: Hammerspace presents at AI Infrastructure Field Day 3

Company: Hammerspace

Video Links:

Personnel: Molly Presley

AI Ready Storage is data infrastructure designed to break down silos and give enterprises seamless, high-performance access to their data wherever it lives. With 73% of enterprise data trapped in silos and 87% of AI projects failing to reach production, the bottleneck isn’t GPUs—it’s data. Traditional environments suffer from visualization challenges, high costs, and data gravity that limits AI flexibility. Hammerspace simplifies the enterprise data estate by unifying silos into a single global namespace and providing instant access to data—without forklift upgrades—so organizations can accelerate AI success.

The presentation focused on leveraging existing infrastructure and data to make it AI-ready, emphasizing simplicity for AI researchers under pressure to deliver high-quality results quickly. Hammerspace simplifies the data readiness process, enabling easy access and utilization of data within infrastructure projects. While the presentation covers technical aspects, the emphasis remains on ease of deployment, workload management, and rapid time to results, aligning with customer priorities. Hammerspace provides a virtual data layer across existing infrastructure, creating a unified data namespace enabling access and mobilization of data across different storage systems, enriching metadata for AI workloads, and facilitating data sharing in collaborative environments.

Hammerspace addresses key AI use cases such as global collaboration, model training, and inferencing, particularly focusing on enterprise customers with existing data infrastructure they wish to leverage. The platform’s ability to assimilate metadata from diverse storage systems into a unified control plane allows for a single interface to data, managed through Hammerspace for I/O control and quality of service. By overcoming data gravity through intelligent data movement and leveraging Linux advancements, Hammerspace enables data access regardless of location, maximizing GPU utilization and reducing costs. This is achieved by focusing on data access, compliance, and governance, ensuring that AI projects align with business objectives and minimizing risks associated with data movement.

Hammerspace aims to unify diverse data sources, from edge data to existing storage systems, enabling seamless access for AI factories and competitive advantages through faster data insights. With enriched metadata and automated workflows, Hammerspace accelerates time to insight and removes manual processes. Hammerspace is available as installable software or as a hardware appliance, and supports various deployment models, offering linear scalability and distributed access to data. A “Tier 0” capability was also discussed, which leverages existing underutilized NVMe storage within GPU nodes to create a fast, low-latency storage pool, showcasing the platform’s flexibility and resourcefulness.


The AI Factory in Action: Basketball play classification with Hewlett Packard Enterprise

Event: AI Infrastructure Field Day 3

Appearance: HPE presents at AI Infrastructure Field Day 3

Company: HPE

Video Links:

Personnel: Mark Seither

This session provides a live demonstration of a practical AI application built on top of HPE Private Cloud AI (PCAI). The speaker, Mark Seither, showcases a basketball play classification application that leverages a machine learning model trained on PCAI. This model accurately recognizes and categorizes various basketball plays, such as pick and roll, isolation, and fast break. The demo highlights how the powerful and predictable infrastructure of PCAI enables the development and deployment of complex, real-world AI solutions. This example illustrates the full lifecycle of an AI project—from training to deployment—on a private cloud platform.

The presentation details the development of an AI application for an NBA team that focuses on video analysis, starting with the specific use case of identifying player fatigue. The initial approach involved using an open-source video classification model called SlowFast, which was trained to recognize basketball plays such as pick-and-rolls and isolations. To create a labeled dataset for training, the presenter manually extracted and labeled video clips from YouTube using tools like QuickTime and Label Studio. The model, trained on a small dataset of labeled plays, demonstrated promising accuracy in identifying these plays; although it had limitations, the presentation illustrates a basic but functional model.
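
For readers who want to see roughly what this looks like, here is a hedged sketch of adapting SlowFast for play classification; loading slowfast_r50 via torch.hub is the documented PyTorchVideo route, while the head size, play labels, and dummy clips are illustrative assumptions:

```python
# Sketch: swap the pretrained SlowFast head for a small play-classification head.
import torch
import torch.nn as nn

PLAYS = ["pick_and_roll", "isolation", "fast_break"]

# Open-source SlowFast backbone, pretrained on Kinetics-400.
model = torch.hub.load("facebookresearch/pytorchvideo", "slowfast_r50", pretrained=True)
# Replace the 400-class Kinetics head with one output per play type.
model.blocks[-1].proj = nn.Linear(model.blocks[-1].proj.in_features, len(PLAYS))
model.eval()

# SlowFast reads the same clip at two frame rates: a slow pathway (8 frames)
# and a fast pathway (32 frames). Random tensors stand in for preprocessed video.
slow = torch.randn(1, 3, 8, 224, 224)
fast = torch.randn(1, 3, 32, 224, 224)

with torch.no_grad():
    logits = model([slow, fast])
print(PLAYS[logits.argmax(dim=1).item()])
```

In practice, fine-tuning would train the new head on the labeled clips described above before the model produces meaningful predictions.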

The speaker then discusses the next steps involving HPE’s Machine Learning Inferencing Service (MLIS) to deploy the model as an endpoint. This would allow the team to upload and classify video clips more easily. Furthermore, he plans to integrate the play classification with a video language model (VLM) enabling the team to query their video assets using natural language, such as “Show me every instance of Steph Curry running a pick and roll in the fourth quarter of a game in 2017.” He also showcased the RAG capabilities of the platform using the NBA collective bargaining agreement to answer specific questions, highlighting the platform’s potential to provide quick, valuable insights to customers.
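
The envisioned natural-language query flow might look like the following sketch; the endpoint, parameters, and response shape are hypothetical, not the actual MLIS API:

```python
# Sketch: the play classifier's labels become metadata that a query service
# filters on in response to a natural-language question.
import requests

SEARCH_URL = "http://mlis.example.internal/v1/video-search"  # hypothetical endpoint

def find_clips(question: str) -> list[dict]:
    resp = requests.post(SEARCH_URL, json={"query": question}, timeout=30)
    resp.raise_for_status()
    # Assumed response shape: [{"game": ..., "timestamp": ..., "play": "pick_and_roll"}, ...]
    return resp.json()["clips"]

for clip in find_clips("Steph Curry running a pick and roll in the fourth quarter, 2017"):
    print(clip["game"], clip["timestamp"], clip["play"])
```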


The AI Factory: A strategic overview with Hewlett Packard Enterprise

Event: AI Infrastructure Field Day 3

Appearance: HPE presents at AI Infrastructure Field Day 3

Company: HPE

Video Links:

Personnel: Mark Seither

Many organizations find that their AI initiatives, despite early promise, fail to deliver a positive ROI. This can be traced to “token economics”—the complex and often unpredictable costs associated with consuming AI models, particularly in the public cloud. This session will dissect these hidden costs and the architectural bottlenecks that lead to runaway spending and stalled projects. We’ll then present a comprehensive overview of HPE Private Cloud AI, a full-stack, turnkey solution designed to provide predictable costs, superior performance, and total control. We will explore how its integrated hardware and software—from NVIDIA GPUs and HPE servers to a unified management console—enable a powerful and predictable path to production, turning AI from a financial gamble into a strategic business asset.

The presentation highlights the often-overlooked costs associated with AI initiatives in the public cloud, citing examples like over-provisioning, lack of checkpointing, and inefficient data usage. The speaker emphasizes that many companies experience significantly higher operational costs than initially anticipated, with one example of an oil and gas company spending ten times more than projected. While some companies may not be overly concerned with these cost overruns if the AI models deliver results, HPE contends that this isn’t sustainable for most organizations and that there are cost savings to be found.

HPE’s solution, Private Cloud AI, offers a predictable cost model and significant savings compared to cloud-based alternatives. These cost savings, averaging around 45%, are most pronounced with larger systems managed within the customer’s own data center, though co-location options are also available with slightly higher overhead. Furthermore, HPE’s solution addresses the hidden costs associated with building and managing an AI infrastructure from scratch, including the need for specialized teams and resources for each layer of the technology stack.

Beyond cost considerations, HPE’s Private Cloud AI provides greater control over data, mitigating concerns about data privacy and usage in downstream training cycles, which is important considering inquiries into the training data used for some AI models. The solution offers flexible purchasing options, including both CapEx and OpEx models, with HPE GreenLake enabling reserved capacity and on-demand access to additional resources without upfront costs. This combination of cost-effectiveness, control, and flexibility positions HPE Private Cloud AI as a compelling alternative to the public cloud for AI deployments.


The AI Chasm: Bridging the gap from pilot to production with Hewlett Packard Enterprise

Event: AI Infrastructure Field Day 3

Appearance: HPE presents at AI Infrastructure Field Day 3

Company: HPE

Video Links:

Personnel: Mark Seither

The AI market is booming with innovation, yet a significant and costly gap exists between the proof-of-concept phase and successful production deployment. A staggering number of AI projects fail to deliver on their promise, often stalling in “pilot purgatory” due to fragmented tools, unpredictable costs, and a lack of scalable infrastructure. In this session, we’ll examine why so many promising AI initiatives fall short and detail the key friction points—from data pipeline complexity and integration issues to governance and security concerns—that prevent organizations from translating AI ambition into measurable business value.

Mark Seither from HPE discusses the challenges organizations face in moving AI projects from pilot to production. He highlights the rapid pace of innovation in foundation models and AI services, making it difficult for companies to keep up and choose the right tools. A major concern is data security, with companies fearing data exposure when using AI models. The time and effort required to coordinate different teams and make decisions on building AI solutions also contributes to the delays.

Seither emphasizes that hardware alone is insufficient for successful AI implementation, and the conversation must center on business objectives. HPE offers a composable and extensible platform with a pre-validated stack of tools for data connectivity, analytics, workflow automation, and data science. Customers can also integrate their own preferred tools via Helm charts, though they are responsible for the lifecycle of those tools. The HPE platform is co-engineered with NVIDIA, meaning hardware choices are optimized for cost and performance; it is a fully integrated system rather than a reference architecture.

The HPE Data Lakehouse Gateway provides a single namespace for accessing and managing data assets, regardless of their location. HPE also has an Unleash AI program with validated ISV partners and supports NVIDIA Blueprints for end-to-end customizable reference architectures. Furthermore, HPE offers a private cloud solution with cost savings compared to public cloud alternatives, emphasizing faster time to value, complete control over security and data sovereignty, and predictable costs through both CapEx and OpEx models, including flexible capacity with GreenLake.


Your turnkey AI Factory for Rapid Development with Hewlett Packard Enterprise

Event: AI Infrastructure Field Day 3

Appearance: HPE presents at AI Infrastructure Field Day 3

Company: HPE

Video Links:

Personnel: Mark Seither

The vast majority of enterprise AI initiatives fail to deliver ROI, not because of a lack of innovation, but due to a significant gap between development and production. This session will explore the “token economics” behind these failures and introduce HPE Private Cloud AI, a turnkey AI factory designed to bridge this gap. We’ll show how this solution simplifies the journey from concept to full-scale deployment and demonstrate its power with a real-world use case: a powerful LLM built for the NBA, empowering you to drive measurable business value from your AI investments.

Mark Seither, Solutions Architect at HPE, introduced Private Cloud AI (PCAI), a turnkey AI factory designed to bridge the gap between AI development and production. PCAI is a fully integrated appliance composed of HPE hardware, NVIDIA GPUs and switches, and HPE’s AI Essentials software, along with NVIDIA AI Enterprise (NVAIE). Seither emphasized that this is not a hastily assembled product but the result of long-term development, internal innovation, and strategic acquisitions, positioning PCAI as a unique and compelling solution in the AI market. He highlighted the evolution of AI, noting that current outcomes are so advanced that they are practically indistinguishable from what was once considered far-off science fiction, making it crucial for businesses to embrace and understand its potential.

The speaker also touched on the practical applications of AI, ranging from personalized product recommendations in retail to computer vision for threat detection and anomaly identification. He underscored a key trend he’s observing with his customers: the primary focus is not on replacing employees with AI but on enhancing their capabilities and improving customer experiences. Seither highlighted the challenges companies face in implementing AI, including a lack of enterprise AI strategies and difficulties in scaling AI projects from pilot to production. Data privacy, control, accessibility, and cost-effective deployment methodologies are also significant hurdles.

HPE’s PCAI aims to address these challenges by providing a ready-to-use solution that eliminates the need for companies to grapple with hardware selection, software integration, and driver compatibility. Offered in different “t-shirt” sizes, including a developer system, PCAI is designed to cater to various needs, from inferencing to fine-tuning. The goal is to empower data scientists to start working on AI projects from day one, focusing on differentiated work that directly impacts the business rather than on the complexities of setting up the AI infrastructure.


From Storage to Enterprise Intelligence, Unlock AI Value from Private Unstructured Data with CTERA

Event: AI Infrastructure Field Day 3

Appearance: CTERA presents at AI Infrastructure Field Day 3

Company: CTERA

Video Links:

Personnel: Aron Brand

Discover the obstacles that hinder AI adoption. What matters most? Data quality or quantity? Understand the strategy CTERA uses for curating data to create trustworthy, AI-ready datasets that overcome silos and security challenges, translating raw data into meaningful and actionable insights.

CTERA’s presentation at AI Infrastructure Field Day 3 focused on the transition from traditional storage solutions to “enterprise intelligence,” highlighting the potential of AI to unlock value from unstructured data. While enterprise GenAI represents a massive market opportunity, with projections reaching $401 billion annually by 2028, the speaker, Aron Brand, emphasized that current adoption is hindered by the poor quality of data being fed into AI models. Brand argued that simply pointing AI tools at existing data leads to “convincing nonsense,” as organizations often lack understanding of their own data, resulting in inaccurate and potentially harmful outputs. He identified three main “quality killers”: messy data, data silos, and compliance/security concerns.

To overcome these obstacles, CTERA proposes a strategy centered on data curation, involving several key steps. These include collecting data from various storage silos, unifying data formats, enriching metadata, filtering data based on rules and policies, and finally, vectorizing and indexing the data. CTERA aims to provide a platform that enables users to create high-quality datasets, enforce permissions and guardrails, and deliver precise context to AI tools. The platform is powered by an MCP server for orchestration and an MCP client for invoking external tools, facilitating an open and extensible system.
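
A compressed sketch of those curation steps follows; the embedding function and the source-based policy rule are placeholders, not CTERA’s implementation:

```python
# Sketch: collect, unify, enrich, filter, then vectorize and index.
from dataclasses import dataclass, field

@dataclass
class Doc:
    source: str                 # which storage silo it came from
    text: str
    meta: dict = field(default_factory=dict)

def embed(text: str) -> list[float]:
    """Placeholder embedding; a real system calls an embedding model."""
    return [float(sum(map(ord, text)) % 97), float(len(text))]

def curate(docs: list[Doc], allowed_sources: set[str]) -> list[tuple[list[float], Doc]]:
    index = []
    for doc in docs:
        if doc.source not in allowed_sources:    # filter: compliance/permission rules
            continue
        doc.meta["length"] = len(doc.text)       # enrich: metadata used at retrieval time
        index.append((embed(doc.text), doc))     # vectorize and index
    return index

docs = [Doc("legal-share", "indemnification clause ..."),
        Doc("hr-share", "salary data ...")]          # excluded by the policy below
print(len(curate(docs, allowed_sources={"legal-share"})))  # -> 1
```

The filtering step is where the “permissions and guardrails” described above take effect: documents a user is not entitled to see never reach the vector index that feeds the AI tools.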

CTERA’s vision extends to “virtual employees” or subject matter experts created by users to automate tasks and improve efficiency. The system respects existing access controls and provides verifiable answers grounded in source data. The presented examples demonstrated the potential of the platform in various use cases, including legal research, news analysis, and medical diagnostics. The presentation emphasized that the goal is not to replace human workers but to augment their capabilities with AI-powered assistants that can access and analyze sensitive data in a secure and compliant manner.