This video is part of the appearance, “Hammerspace Presents at Cloud Field Day 25”. It was recorded as part of Cloud Field Day 25 from 9:00 to 10:00 on March 11, 2026.
Watch on YouTube
Watch on Vimeo
Hammerspace extends on-premises workflows seamlessly to the cloud, automatically orchestrating data wherever GPU resources are available across data centers, cloud providers, and neoclouds. Without interrupting users or workflows, organizations can maximize compute utilization and eliminate idle resources. The presentation by Chad Smith, Field CTO of Alliances, featured live demonstrations showcasing Hammerspace in action, highlighting three key capabilities: Data Assimilation, Service-Level Objectives, and Tier 0. Together, these capabilities let organizations transform existing storage into a unified global data environment that can be actively used across sites and clouds, automatically place data where and when compute requires it, and activate the NVMe capacity inside compute servers as high-performance shared storage.
The first demonstration detailed Data Assimilation, illustrating how existing third-party storage, such as an NFS server, can be integrated into the Hammerspace global file system. The process begins by collecting metadata in place, allowing users to access existing files through Hammerspace’s virtual file system over multiple protocols, including SMB, even when the source is NFS. Any modifications or newly created files adopt Hammerspace’s native file layout. To establish a global file system between an on-premises cluster and a cloud deployment, Hammerspace uses an S3 object storage bucket as an intermediary for data replication. Once configured, metadata is immediately available at the remote cloud site, while the actual file data is initially pulled on demand or relocated by policy.
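Conceptually, the assimilation flow can be modeled as a metadata-first catalog in which file payloads move only when they are read or when policy dictates. The sketch below is an illustrative model only; the class and method names are hypothetical and do not reflect Hammerspace’s actual implementation or APIs.

```python
import os
from dataclasses import dataclass


@dataclass
class CatalogEntry:
    """Metadata recorded during assimilation; the payload stays on the source system."""
    source_path: str
    size: int
    mtime: float
    data: bytes | None = None  # populated only after the first read (pull on demand)


class GlobalNamespaceSketch:
    """Toy model of metadata-first assimilation: catalog everything, copy nothing yet."""

    def __init__(self) -> None:
        self.catalog: dict[str, CatalogEntry] = {}

    def assimilate(self, mount_point: str) -> None:
        """Walk an existing export (e.g., an NFS mount) and record metadata only."""
        for root, _dirs, files in os.walk(mount_point):
            for name in files:
                path = os.path.join(root, name)
                st = os.stat(path)
                logical = os.path.relpath(path, mount_point)
                self.catalog[logical] = CatalogEntry(path, st.st_size, st.st_mtime)

    def read(self, logical_path: str) -> bytes:
        """First access pulls the payload from the source; later reads are served locally."""
        entry = self.catalog[logical_path]
        if entry.data is None:
            with open(entry.source_path, "rb") as f:
                entry.data = f.read()
        return entry.data
```

The same metadata-first idea is what makes the remote cloud site usable right away: the full namespace is visible as soon as metadata replicates, even though most of the bytes have not yet moved.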
The presentation then introduced Tier 0, which brings the NVMe capacity inside GPU and compute nodes into the global namespace. This involves treating individual GPU nodes as storage systems, configuring their NVMe drives as volumes, and grouping them into “volume groups” while defining Availability Zones (AZs) so that client-side mirroring across different fault domains protects the data. Service-Level Objectives (SLOs), expressed as declarative policies such as a “place on” directive, are then applied to the global share. This automatically orchestrates the proactive movement and mirroring of data to the Tier 0 NVMe nodes, transforming previously stranded resources into high-performance shared storage, eliminating “pull on demand” delays, and ensuring local performance for demanding workloads.
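A “place on” objective can be thought of as a small constraint-satisfaction step: given NVMe volumes grouped into volume groups and tagged with AZs, the system selects enough volumes in distinct fault domains to hold the mirrored copies. The sketch below is a simplified, hypothetical model of that decision; the class names, volume-group name, and selection logic are illustrative and not Hammerspace’s actual policy engine.

```python
import random
from dataclasses import dataclass
from itertools import groupby


@dataclass(frozen=True)
class Tier0Volume:
    node: str          # GPU/compute node hosting the NVMe drive
    volume_group: str  # grouping used for placement decisions
    az: str            # availability zone / fault domain


@dataclass(frozen=True)
class PlaceOnObjective:
    """Declarative goal: keep N mirrored copies, each in a distinct fault domain."""
    volume_group: str
    copies: int = 2


def plan_placement(objective: PlaceOnObjective, volumes: list[Tier0Volume]) -> list[Tier0Volume]:
    """Pick at most one volume per AZ until the requested copy count is satisfied."""
    candidates = sorted(
        (v for v in volumes if v.volume_group == objective.volume_group),
        key=lambda v: v.az,
    )
    by_az = {az: list(vs) for az, vs in groupby(candidates, key=lambda v: v.az)}
    if len(by_az) < objective.copies:
        raise ValueError("not enough fault domains to satisfy the objective")
    chosen_azs = random.sample(sorted(by_az), objective.copies)
    return [by_az[az][0] for az in chosen_azs]


# Hypothetical example: two-way client-side mirroring across AZ fault domains.
volumes = [
    Tier0Volume("gpu-node-01", "tier0-vg", "az-a"),
    Tier0Volume("gpu-node-02", "tier0-vg", "az-b"),
    Tier0Volume("gpu-node-03", "tier0-vg", "az-b"),
]
print(plan_placement(PlaceOnObjective("tier0-vg", copies=2), volumes))
```

Choosing at most one volume per AZ is what gives client-side mirroring its fault-domain isolation: losing an entire zone still leaves a complete copy on a node in another zone.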
Personnel: Chad Smith








