Kurt Kuckein presented for Hammerspace at AI Infrastructure Field Day 4
This presentation was delivered on January 29, 2026, from 8:00 AM to 9:00 AM PT.
Presenters: Kurt Kuckein, Sam Newnam, Ted Weatherford
Break free from the constraints of traditional storage infrastructure: Unify your data estate and maximize existing investments
AI is in the midst of a market transformation. Whole industries are moving from research and training to production and inference, yet enterprises still struggle to get their arms around the fundamental challenges of organizing, accessing, and monetizing their data. Infrastructure choices that made sense in previous technology cycles now present barriers to AI adoption. Hammerspace breaks down those barriers by supplying a global namespace across storage technologies, locations, and clouds to accelerate data pipelines and prepare enterprises for AI success. It does this by leveraging data in place and orchestrating data movement tuned to organizational and application requirements, all through a flexible architecture built on open standards.
Taming Data Estate Chaos for AI with Hammerspace
Watch on YouTube
Watch on Vimeo
Hammerspace introduces itself as a “data company,” distinguishing itself from traditional storage vendors by offering a solution that addresses the complex data demands of modern infrastructure, particularly for AI workloads. The core concept behind Hammerspace is an instantly accessible, infinite virtual space that disaggregates data from its underlying infrastructure, enabling it to reside in any location, across any cloud, and on any storage backend, thereby eliminating data silos. This is achieved by assimilating metadata from existing storage systems into a single, global namespace, managed by metadata servers outside the data path. This approach not only accelerates data pipelines but also enhances existing infrastructure and enables rapid, easy integration of new technologies, providing users with visibility and access to all their data within minutes, rather than requiring lengthy, costly data migrations.
Hammerspace extends its capabilities to address critical challenges in AI infrastructure, including a tight supply market and rising flash memory costs. The solution leverages underutilized flash storage within existing environments by aggregating systems and intelligently orchestrating data placement across tiers. It introduces “Tier Zero,” which consumes and aggregates local flash within compute (CPU and GPU) clusters into the global namespace, providing extremely high-performance storage by eliminating network latency. Hammerspace also treats cloud storage as a direct extension of on-premises infrastructure, not just a destination for data, thereby maximizing the use of available flash resources. The software-defined platform ensures data portability and access through a parallel file system (pNFS, standardized in NFS v4.2) and multi-protocol access (S3, NFS, SMB). Importantly, its policy-driven orchestration automates data movement and ensures data durability and availability through redundant metadata nodes and erasure coding across storage systems. It also centralizes privileged access and security policies, allowing permissions to follow data regardless of its physical location, which is critical for cross-border data compliance and auditability, and supports rich custom metadata beyond basic POSIX attributes.
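To make the idea of policy-driven placement concrete, the sketch below shows a toy placement rule keyed on per-file metadata. Everything here is illustrative: the `tier_for` function, the tier names, and metadata fields such as `heat` and `compliance_region` are invented for this example and are not Hammerspace's actual objective or policy syntax.

```python
# Illustrative sketch only: a toy tiering policy driven by file metadata.
# Tier names and metadata fields are hypothetical, not Hammerspace syntax.

def tier_for(meta: dict) -> str:
    """Pick a storage tier from per-file metadata."""
    if meta.get("compliance_region") == "eu":
        return "eu-object-store"       # permissions/placement follow the data
    if meta.get("heat", 0) > 0.8:
        return "tier0-local-nvme"      # hottest data lands next to the GPUs
    if meta.get("heat", 0) > 0.3:
        return "shared-nvme"           # warm data on shared flash
    return "cloud-archive"             # cold data to cheap capacity

files = [
    {"path": "/proj/train/shard-001", "heat": 0.95},
    {"path": "/proj/eu/ledger.db", "heat": 0.9, "compliance_region": "eu"},
    {"path": "/proj/old/logs.tar", "heat": 0.05},
]
placement = {f["path"]: tier_for(f) for f in files}
```

The point of the sketch is that placement decisions become a pure function of metadata, which is what lets an orchestrator apply them uniformly across storage systems and locations.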
Customer examples illustrate these benefits, such as a digital payments company that reduced storage costs by $5 million and simplified workflows for 3,000 data scientists by providing parallel file system access over object storage and enabling hybrid cloud agility. Another customer, facing a 3-4x increase in performance demand from new NVIDIA servers, leveraged Hammerspace to maintain existing NAS systems while deploying high-performance NVMe storage, avoiding significant new infrastructure investments. For inference workloads where latency is critical, Hammerspace can use policies to preload entire projects into local NVMe (Tier Zero) directly connected to GPUs, maintaining high performance and data consistency across globally distributed inference farms. Ultimately, through its integration with platforms like the NVIDIA AI Data Platform, Hammerspace goes beyond merely unifying data access; it truly unlocks the value within data by automating data preparation and orchestration, moving organizations from data chaos to a state of AI-ready data, often allowing interaction with the system via natural language for streamlined management.
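The inference preloading described above can be pictured as a simple staging step: before a job starts, every file under a project's prefix is copied onto a node-local NVMe path. The helper below is a minimal sketch under that assumption; `preload_project` and both path arguments are hypothetical, and a real orchestrator would move data transparently inside the namespace rather than via explicit copies.

```python
import shutil
from pathlib import Path

def preload_project(src_root: str, tier0_root: str) -> int:
    """Stage all files of a project onto local NVMe (hypothetical paths).

    Returns the number of files copied, preserving the directory layout.
    """
    copied = 0
    src = Path(src_root)
    for f in src.rglob("*"):
        if f.is_file():
            dest = Path(tier0_root) / f.relative_to(src)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, dest)  # copy data + metadata to the fast tier
            copied += 1
    return copied
```

A policy engine would trigger this kind of staging automatically when a project is scheduled onto a GPU node, rather than requiring users to script it.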
Personnel: Kurt Kuckein, Sam Newnam
Unifying AI Enterprise Data into a Single Instantly Accessible Global Namespace with Hammerspace
Watch on YouTube
Watch on Vimeo
Hammerspace introduced its AI Data Platform solution to address the pervasive challenge of data fragmentation, a significant inhibitor to AI readiness. The presentation highlighted the complexity of AI tooling and the substantial capital outlay required, leading to enterprise fears of missing out (FOMO) and messing up (FOMU) on AI initiatives. Their solution aims to simplify these challenges by integrating seamlessly with NVIDIA’s reference designs to deliver a comprehensive, outcome-driven platform rather than a complex toolkit of disparate components.
Hammerspace’s AI Data Platform combines its unique global namespace and Tier Zero capabilities with NVIDIA software, including RAG Blueprints and RTX PRO 6000 GPUs, and is often deployed on standard servers such as Cisco C210s. This platform allows enterprises to connect to existing hybrid data through assimilation, whether full or read-only, making vast amounts of legacy data instantly accessible without costly and time-consuming migrations. The core mechanism involves discovering new files and automatically moving them to Tier Zero, a high-performance NVMe flash layer within the servers, for intensive processing such as extraction, embedding, and indexing. This heavy lifting is performed without burdening existing storage systems, with Hammerspace managing the entire process from data ingestion and validation to cleanup, ensuring AI-ready data is available in minutes. The software-defined nature enables flexibility across various hardware platforms and cloud environments, while leveraging protocols such as pNFS and NFS-direct to optimize GPU utilization.
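The discover → extract → embed → index flow described above can be sketched as a minimal loop. This is a conceptual illustration only: the filter, the extraction step, and especially the `embed` function (a stand-in hash, not a real embedding model) are invented for the example.

```python
import hashlib

def discover(new_files):
    """Find newly arrived documents; in practice this would be driven
    by namespace metadata events, not a list filter (illustrative)."""
    return [f for f in new_files if f["name"].endswith(".txt")]

def extract(doc):
    """Extraction: pull normalized raw text out of the document."""
    return doc["body"].strip().lower()

def embed(text):
    """Stand-in 'embedding': a stable hash, NOT a real model."""
    return hashlib.sha256(text.encode()).hexdigest()[:8]

def ingest(new_files):
    """Run the pipeline and build a toy index of name -> vector id."""
    index = {}
    for doc in discover(new_files):
        index[doc["name"]] = embed(extract(doc))
    return index
```

In the platform described here, each of these stages would run against data staged onto Tier Zero flash, so the source NAS systems never see the heavy read traffic.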
The ultimate goal of Hammerspace’s AI Data Platform is to accelerate time-to-value by eliminating data gravity and GPU gravity. By shifting to a data-first strategy, the platform integrates data categorization and tagging, embedding security and performance characteristics directly into the data’s metadata. This enables automated, intelligent decisions about data placement and processing, replacing manual, script-driven workflows with an intuitive agentic system. This approach allows organizations to leverage their existing capital investments, transforming fragmented enterprise data into a unified, instantly accessible global namespace for AI applications within weeks, effectively creating an AI factory that starts where they are.
Personnel: Kurt Kuckein, Sam Newnam
A Leap Forward in Storage Efficiency with the OFP Initiative and Hammerspace
Watch on YouTube
Watch on Vimeo
Hammerspace is driving the Open Flash Platform (OFP) Initiative, an effort to significantly reduce the complexity and cost associated with large-scale flash storage for AI and other demanding workloads. This presentation introduced a reference design for a high-density, low-power flash storage solution that achieves unprecedented capacity and efficiency within data centers. The goal is to deliver one exabyte of storage in a single rack, enabling a new paradigm of “disappearing storage” in which compact 1U systems are distributed throughout a data center, leveraging otherwise unused rack space and minimal power consumption.
The development process involved several design iterations, shifting from a challenging 2U form factor to a more efficient 1U design. This shift addressed issues such as chassis deformation, power/cooling inefficiencies, and wasted space, requiring extensive thermal and pressure analyses to ensure reliable operation in a tightly packed environment. A significant breakthrough was selecting the Xsight DPU, which delivers robust compute capabilities comparable to an x86 server from a few years ago, in a highly power-efficient package that supports Linux and storage services within this compact design. Ted Weatherford highlighted the Xsight E1 chip as the world’s first 800-gigabit DPU, featuring 64 Arm Neoverse cores, a programmable NIC, and an “all fast path” design that eliminates data bottlenecks, achieving 800-gigabit line rates, as independently verified by Keysight.
Looking ahead, Hammerspace and its partners are actively exploring new flash form factors to overcome current E2 limitations and achieve the one exabyte-per-rack goal. The OFP Initiative aims to standardize within the Open Compute Project (OCP) to ensure broad industry adoption and benefits. The versatility of the Xsight chip enables applications beyond shared file storage, including block storage and a homogeneous boot device for hyperscalers, streamlining qualification and management across diverse server infrastructures. The project is currently in prototyping and validation, with early-access customers receiving units this quarter and general availability targeted for the second half of the year, while continually recruiting more industry participants to drive this standard forward.
Personnel: Kurt Kuckein, Ted Weatherford