Floyd Christofferson, Molly Presley, and Kurt Kuckein presented for Hammerspace at AI Infrastructure Field Day 3
Presentation date: September 10, 2025, 10:30-12:30.
Presenters: Floyd Christofferson, Kurt Kuckein, Molly Presley
What is AI Ready Storage, with Hammerspace
Watch on YouTube
Watch on Vimeo
AI Ready Storage is data infrastructure designed to break down silos and give enterprises seamless, high-performance access to their data wherever it lives. With 73% of enterprise data trapped in silos and 87% of AI projects failing to reach production, the bottleneck isn't GPUs; it's data. Traditional environments suffer from poor data visibility, high costs, and data gravity that limits AI flexibility. Hammerspace simplifies the enterprise data estate by unifying silos into a single global namespace and providing instant access to data, without forklift upgrades, so organizations can accelerate AI success.
The presentation focused on leveraging existing infrastructure and data to make it AI-ready, emphasizing simplicity for AI researchers under pressure to deliver high-quality results quickly. Hammerspace simplifies the data readiness process, enabling easy access to and utilization of data within infrastructure projects. While the presentation covers technical aspects, the emphasis remains on ease of deployment, workload management, and rapid time to results, aligning with customer priorities. Hammerspace provides a virtual data layer across existing infrastructure, creating a unified namespace that enables access to and mobilization of data across different storage systems, enriches metadata for AI workloads, and facilitates data sharing in collaborative environments.
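Hammerspace performs this enrichment in its own metadata engine; purely as a conceptual illustration, the sketch below tags files in a mounted namespace using Linux extended attributes. The mount path and the user.ai.* tag names are invented for the example and are not Hammerspace's actual API.

```python
# Conceptual sketch: enriching files in a globally mounted namespace with
# custom metadata tags, using Linux extended attributes (Linux-only) as a
# stand-in for Hammerspace's metadata engine. Paths and tag names are
# hypothetical.
import os

MOUNT = "/mnt/hammerspace"  # hypothetical mount point of the global namespace

def tag_for_ai(path: str, dataset: str, stage: str) -> None:
    """Attach user-namespace xattrs that a downstream pipeline could query."""
    os.setxattr(path, b"user.ai.dataset", dataset.encode())
    os.setxattr(path, b"user.ai.stage", stage.encode())

def find_by_stage(root: str, stage: str):
    """Yield files whose 'user.ai.stage' tag matches, e.g. 'training'."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.getxattr(path, b"user.ai.stage").decode() == stage:
                    yield path
            except OSError:
                continue  # file has no such tag; skip it

if __name__ == "__main__":
    tag_for_ai(os.path.join(MOUNT, "corpus/doc-0001.txt"),
               dataset="corpus-v2", stage="training")
    print(list(find_by_stage(MOUNT, "training")))
```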
Hammerspace addresses key AI use cases such as global collaboration, model training, and inferencing, with a particular focus on enterprise customers who want to leverage their existing data infrastructure. The platform assimilates metadata from diverse storage systems into a unified control plane, providing a single interface to data, with Hammerspace managing I/O control and quality of service. By overcoming data gravity through intelligent data movement and leveraging Linux advancements, Hammerspace enables data access regardless of location, maximizing GPU utilization and reducing costs. This is achieved by focusing on data access, compliance, and governance, ensuring that AI projects align with business objectives while minimizing the risks associated with data movement.
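Hammerspace assimilates metadata in place, keeping the catalog live rather than rescanning; the toy sketch below only illustrates the underlying idea by walking a few hypothetical mount points and merging their file metadata into one queryable catalog.

```python
# Toy illustration of metadata assimilation: build a single catalog over
# several independently mounted storage systems without moving any data.
# Mount points are hypothetical; this is not how Hammerspace is implemented.
import os
from dataclasses import dataclass

@dataclass
class Entry:
    path: str      # full path within the unified view
    source: str    # which backing system the file lives on
    size: int      # bytes
    mtime: float   # last-modified time

def assimilate(mounts: dict[str, str]) -> list[Entry]:
    catalog: list[Entry] = []
    for source, root in mounts.items():
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                p = os.path.join(dirpath, name)
                st = os.stat(p)
                catalog.append(Entry(p, source, st.st_size, st.st_mtime))
    return catalog

if __name__ == "__main__":
    mounts = {"nas-a": "/mnt/nas_a", "nas-b": "/mnt/nas_b"}  # hypothetical
    catalog = assimilate(mounts)
    # One interface over all silos: e.g., find the largest files anywhere.
    for e in sorted(catalog, key=lambda e: e.size, reverse=True)[:10]:
        print(f"{e.source}: {e.path} ({e.size} bytes)")
```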
Hammerspace aims to unify diverse data sources, from edge data to existing storage systems, enabling seamless access for AI factories and competitive advantage through faster data insights. With enriched metadata and automated workflows, Hammerspace accelerates time to insight and removes manual processes. Hammerspace is available as installable software or as a hardware appliance, and supports various deployment models, offering linear scalability and distributed access to data. A "Tier 0" capability was also discussed, which leverages existing underutilized NVMe storage within GPU nodes to create a fast, low-latency storage pool, showcasing the platform's flexibility and resourcefulness.
Personnel: Molly Presley
Activating Tier 0 Storage Within GPU and CPU-based Compute Cluster with Hammerspace
Watch on YouTube
Watch on Vimeo
The highest performing storage available today is an untapped resource within your server clusters that can be activated by Hammerspace to accelerate AI workloads and increase GPU utilization. This session covers how Hammerspace unifies local NVMe across server clusters as a protected, ultra-fast tier that is part of a unified global namespace. This underutilized capacity can now accelerate AI workloads as shared storage, with data automatically orchestrated by Hammerspace across other tiers and cloud storage to improve time to token while also reducing infrastructure costs.
Floyd Christofferson from Hammerspace introduces Tier 0, focusing on how it accelerates AI workflows in GPU and CPU-based clusters. The core problem addressed is the stranded capacity of local NVMe storage within servers, which, despite its speed, is often underutilized. Accessing data over the network to external storage becomes a bottleneck, especially in AI workflows with growing context lengths and fast token access requirements. While increasing network capacity is an option, it is expensive and still limited. Tier 0 aggregates this local capacity into a single storage tier, making it the primary storage for workflows and enabling programmatic data orchestration, effectively unlocking petabytes of previously unused storage and eliminating the need to buy additional expensive Tier 1 storage.
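As a rough way to see how much of this stranded capacity a single node carries, the diagnostic sketch below sums the sizes of local NVMe block devices from Linux sysfs. It is an illustration of the problem, not Hammerspace tooling.

```python
# Rough diagnostic: total the raw capacity of local NVMe block devices on a
# Linux node by reading sysfs. Illustrates the "stranded capacity" that
# Tier 0 aggregates into a shared tier.
import glob

def local_nvme_bytes() -> int:
    total = 0
    for size_file in glob.glob("/sys/block/nvme*/size"):
        with open(size_file) as f:
            sectors = int(f.read().strip())
        total += sectors * 512  # /sys/block sizes are in 512-byte sectors
    return total

if __name__ == "__main__":
    cap = local_nvme_bytes()
    print(f"Local NVMe capacity on this node: {cap / 1e12:.2f} TB")
```

Multiplied across hundreds of GPU nodes, this is the "petabytes of previously unused storage" the presentation describes.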
Hammerspace’s Tier 0 leverages standards-based environments, with the client-side using standard NFS, SMB, and S3 protocols, eliminating the need for client-side software installations. The technology utilizes parallel NFS v4.2 with flex files, contributed to the Linux kernel, to enhance performance and efficiency. This approach avoids proprietary clients and special server deployments, allowing the system to work with existing infrastructure. The orchestration and unification of capacity across servers are key to the solution, turning compute nodes into storage servers without creating isolated islands, thereby reducing bottlenecks and improving data access speeds.
The presentation highlights the performance benefits of Tier 0, showcasing theoretical results and MLPerf benchmarks that demonstrate superior performance per rack unit. By utilizing local NVMe storage, Hammerspace reduces reliance on expensive and slower external storage networks, leading to greater GPU utilization. Furthermore, Hammerspace contributes enhancements to the Linux kernel, such as LOCALIO, to reduce CPU utilization and accelerate write performance, solidifying its commitment to standards-based solutions and continuous improvement in data accessibility. The architecture is designed to be non-disruptive, allowing for live data mobility behind the scenes and ensuring a seamless user experience.
Personnel: Floyd Christofferson
The Open Flash Platform Initiative with Hammerspace
Watch on YouTube
Watch on Vimeo
The Open Flash Platform (OFP) Initiative is a multi-member industry collaboration founded in July 2025. The initiative's goal is to redefine flash storage architecture, particularly for high-performance AI and data-centric workloads, by replacing traditional storage servers with an open, more efficient, modular, standards-based, and disaggregated model.
The presentation highlights the growing challenges of data storage, power consumption, and cooling in modern data centers, especially with the increasing volume of data generated at the edge. The core idea behind the OFP initiative is to leverage recent advancements in large-capacity flash (QLC), powerful DPUs (Data Processing Units), and Linux kernel enhancements to create a highly dense, low-power storage platform. This platform aims to replace traditional CPU-based storage servers with a modular design, ultimately allowing for exabyte-scale deployments within a single rack.
The proposed architecture consists of sleds containing DPUs, networking, and NVMe storage, fitting into trays that can be modularly deployed. This approach offers significant improvements in density and power efficiency compared to existing solutions. While the initial concept uses U.2 drives, the long-term goal is to leverage an extended E.2 standard for even greater capacity. Hammerspace is leading the initiative, fostering collaboration among industry players, including DPU and SSD partners, and exploring adoption by organizations like the Open Compute Project (OCP).
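The density claim ultimately reduces to simple multiplication; the sketch below runs that arithmetic with placeholder values for drive capacity, drives per sled, sleds per tray, and trays per rack. Every number is an illustrative assumption, not a published OFP specification.

```python
# Back-of-the-envelope rack density for an OFP-style design. Every number
# here is an illustrative assumption, not a published OFP specification.
DRIVE_TB = 122            # hypothetical large QLC drive, in terabytes
DRIVES_PER_SLED = 8       # hypothetical drives per DPU-based sled
SLEDS_PER_TRAY = 10       # hypothetical sleds per modular tray
TRAYS_PER_RACK = 40       # hypothetical trays in a standard rack

rack_tb = DRIVE_TB * DRIVES_PER_SLED * SLEDS_PER_TRAY * TRAYS_PER_RACK
print(f"Raw capacity per rack: {rack_tb / 1000:.1f} PB "
      f"({rack_tb / 1_000_000:.2f} EB)")
# With these placeholders: 122 * 8 * 10 * 40 = 390,400 TB, roughly 390 PB
# per rack; reaching exabyte scale depends on larger drives (e.g., the
# extended form factors discussed above) or denser sleds.
```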
Hammerspace envisions a future where AI infrastructure relies on open standards and efficient hardware. The OFP initiative aligns with this vision by providing a non-proprietary, high-capacity storage platform optimized for AI workloads. The goal is to let organizations modernize their storage environments using flash capacity that is already available, rather than buying additional storage systems, providing a modern foundation for AI.
Personnel: Kurt Kuckein