Dan Reger and Chad Smith presented for Hammerspace at Cloud Field Day 25
This presentation took place on March 11, 2026, from 9:00 to 10:00.
Presenters: Chad Smith, Dan Reger
Hammerspace is a data storage platform for unstructured data that helps customers unify all their data storage and accelerate their workloads, including AI, to deliver results faster – both in the cloud and in their own data centers. This session will introduce Hammerspace in the cloud and provide technical background on how Tier 0 leverages compute-local NVMe storage in the cloud to maximize performance, how Assimilation avoids wholesale data migration, and how Data Orchestration helps reduce cloud storage costs.
Accelerate cloud and AI workloads with the Hammerspace Data Platform
Watch on YouTube
Watch on Vimeo
Hammerspace is a data platform for unstructured data that helps customers unify all their data storage and accelerate their workloads, including AI, to deliver results faster – both in the cloud and in their own data centers. This session will introduce Hammerspace and how it helps cloud customers maximize performance, avoid wholesale data migration, and reduce cloud storage costs. Dan Reger, Senior Product Marketing Director at Hammerspace, focused on accelerating cloud and AI workloads using the platform, particularly highlighting its benefits for cloud and hybrid environments. He noted that migrating workloads to the cloud is often complex, especially when data is distributed across multiple regions or subject to regulatory requirements, and that traditional cloud storage isn’t always optimized for modern high-performance demands.
Hammerspace tackles these challenges by providing a unified global file system namespace that spans on-premises storage, various cloud storage services (block, file, object), and even different cloud regions. This agentless solution allows customers to simplify and accelerate cloud migrations, accessing data everywhere without wholesale data movement. The platform dynamically orchestrates data, moving only the necessary subsets to the fastest available storage tiers (e.g., local NVMe on bare-metal GPU servers) to maximize workload performance and compute utilization. An objective-based policy engine ensures data is always where it’s needed, preventing bottlenecks and eliminating unnecessary data transfers.
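The “move only the necessary subsets” idea can be illustrated with a toy placement sketch. Everything below – the tier names, the `FileRecord` fields, and the placement rule – is an illustrative assumption for this article, not Hammerspace’s actual policy syntax or API:

```python
from dataclasses import dataclass

# Toy model of objective-based data placement (illustrative only;
# not Hammerspace's actual policy engine or syntax).

@dataclass
class FileRecord:
    path: str
    hot: bool       # actively read by a workload right now
    size_gb: float

# Hypothetical tiers, fastest first: "tier0-nvme" stands in for
# compute-local NVMe, "object" for low-cost cloud object storage.
TIERS = ["tier0-nvme", "cloud-file", "object"]

def place(f: FileRecord) -> str:
    """Hot data goes to the fastest tier; cold data stays on the
    cheapest tier, so only the active subset is ever moved."""
    return "tier0-nvme" if f.hot else "object"

files = [
    FileRecord("/data/train/shard-001", hot=True, size_gb=12.0),
    FileRecord("/data/archive/2023.tar", hot=False, size_gb=800.0),
]
placement = {f.path: place(f) for f in files}
```

The point of the sketch is the shape of the decision, not the rule itself: placement is computed per file from declared objectives, so the bulk of a cold dataset never has to move.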
The platform is designed to accelerate AI, HPC, and workloads involving large volumes of unstructured data across diverse environments. Hammerspace’s capabilities, including parallel NFS and intelligent data orchestration, ensure optimal data performance and efficient use of cloud compute resources. This approach also addresses concerns such as rising cloud storage costs and data sovereignty, with Hammerspace approved for deployment in OCI’s dedicated regions. Real-world examples, such as Meta and other unnamed “household name” customers, illustrate successful large-scale deployments involving thousands of servers, tens of thousands of GPUs, and petabytes of data, demonstrating Hammerspace’s ability to seamlessly integrate and enhance existing IT processes without requiring significant changes.
Personnel: Dan Reger
There and Back Again – NVMe to Cloud and Everywhere in Between with Hammerspace
Watch on YouTube
Watch on Vimeo
Hammerspace extends on-premises workflows seamlessly to the cloud, automatically orchestrating data wherever GPU resources are available across data centers, cloud providers, and neoclouds. Without interrupting users or workflows, organizations can maximize compute utilization while eliminating idle resources. The presentation by Chad Smith, Field CTO of Alliances, featured live demonstrations showcasing Hammerspace in action, highlighting three key capabilities: Data Assimilation, Service-Level Objectives, and Tier 0. Together, these capabilities transform existing storage into a unified global data environment that can be actively used across sites and clouds, automatically place data where and when compute requires it, and turn NVMe capacity inside compute servers into high-performance shared storage.
The first demonstration detailed Data Assimilation, illustrating how existing third-party storage, such as an NFS server, can be integrated into the Hammerspace global file system. This process begins by collecting metadata in place, allowing users to access existing files through Hammerspace’s virtual file system over multiple protocols, including SMB, even when the source is NFS. Any modified or newly created files adopt Hammerspace’s structured file system. To establish a global file system between an on-premises cluster and a cloud deployment, Hammerspace leverages an S3 object storage bucket as an intermediary for data replication. Once configured, metadata is instantly available at the remote cloud site, while the actual data is initially pulled on demand or relocated by policy.
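A minimal sketch of the metadata-first pattern described above, assuming a simplified two-stage model (the `Namespace` class and its methods are hypothetical illustrations, not Hammerspace’s implementation):

```python
import os
import shutil
import tempfile

# Toy sketch of metadata-first assimilation (illustrative only).

class Namespace:
    def __init__(self):
        self.meta = {}   # name -> metadata, known immediately
        self.local = {}  # name -> bytes, materialized lazily

    def assimilate(self, source_dir):
        """Collect metadata in place: record names and sizes
        without copying any file contents."""
        for name in os.listdir(source_dir):
            full = os.path.join(source_dir, name)
            self.meta[name] = {"size": os.path.getsize(full),
                               "source": full}

    def read(self, name):
        """Pull data on demand the first time a file is read."""
        if name not in self.local:
            with open(self.meta[name]["source"], "rb") as f:
                self.local[name] = f.read()
        return self.local[name]

# Demo against a throwaway directory standing in for an NFS export.
src = tempfile.mkdtemp()
with open(os.path.join(src, "report.txt"), "wb") as f:
    f.write(b"hello")

ns = Namespace()
ns.assimilate(src)
assert "report.txt" in ns.meta and not ns.local  # metadata only so far
data = ns.read("report.txt")                     # first access pulls the data
shutil.rmtree(src)
```

The design point is that the expensive step (moving bytes) is deferred until a client actually touches a file, which is why the namespace can appear “instantly” at a remote site.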
The presentation then introduced Tier 0, focusing on integrating NVMe capacity directly within GPU and compute nodes into the global namespace. This involves treating individual GPU nodes as storage systems, configuring their NVMe drives as volumes, and grouping them into “volume groups” while defining Availability Zones (AZs) to ensure data protection through client-side mirroring across different fault domains. Service-Level Objectives (SLOs), expressed as declarative policies like a “place on” directive, are then applied to the global share. This automatically orchestrates the proactive movement and mirroring of data to these Tier 0 NVMe nodes, transforming previously stranded resources into high-performance shared storage, eliminating “pull on demand” delays, and ensuring local performance for demanding workloads.
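The AZ-aware mirroring described above can be illustrated with a small selection sketch. The volume names, the `mirror_targets` helper, and the selection rule are hypothetical; Hammerspace’s actual SLO syntax (e.g., the “place on” directive) is a declarative policy, not Python:

```python
# Toy sketch of choosing mirror targets across availability zones
# so that the two copies land in different fault domains
# (illustrative only; not Hammerspace's SLO language).

volumes = [
    {"name": "gpu-node-1-nvme", "az": "az1"},
    {"name": "gpu-node-2-nvme", "az": "az2"},
    {"name": "gpu-node-3-nvme", "az": "az1"},
]

def mirror_targets(volumes, copies=2):
    """Pick one volume per AZ until we have the requested number
    of copies, so no two mirrors share a fault domain."""
    chosen, seen_azs = [], set()
    for v in volumes:
        if v["az"] not in seen_azs:
            chosen.append(v["name"])
            seen_azs.add(v["az"])
        if len(chosen) == copies:
            break
    return chosen

targets = mirror_targets(volumes)
```

Under a “place on” style objective, this kind of decision would be made automatically and continuously by the platform rather than scripted by the user.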
Personnel: Chad Smith