
Hammerspace presents at AI Infrastructure Field Day 3




Floyd Christofferson, Molly Presley, and Kurt Kuckein presented for Hammerspace at AI Infrastructure Field Day 3

This presentation took place on September 10, 2025, from 10:30 to 12:30.

Presenters: Floyd Christofferson, Kurt Kuckein, Molly Presley


What is AI Ready Storage, with Hammerspace


Watch on YouTube
Watch on Vimeo

AI Ready Storage is data infrastructure designed to break down silos and give enterprises seamless, high-performance access to their data wherever it lives. With 73% of enterprise data trapped in silos and 87% of AI projects failing to reach production, the bottleneck isn't GPUs; it's data. Traditional environments suffer from data visibility challenges, high costs, and data gravity that limits AI flexibility. Hammerspace simplifies the enterprise data estate by unifying silos into a single global namespace and providing instant access to data, without forklift upgrades, so organizations can accelerate AI success.

The presentation focused on leveraging existing infrastructure and data to make it AI-ready, emphasizing simplicity for AI researchers under pressure to deliver high-quality results quickly. Hammerspace simplifies the data readiness process, enabling easy access and utilization of data within infrastructure projects. While the presentation covers technical aspects, the emphasis remains on ease of deployment, workload management, and rapid time to results, aligning with customer priorities. Hammerspace provides a virtual data layer across existing infrastructure, creating a unified data namespace that enables access and mobilization of data across different storage systems, enriches metadata for AI workloads, and facilitates data sharing in collaborative environments.
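To make the unified-namespace idea concrete, here is a minimal Python sketch of what client-side access can look like when the namespace is exposed over standard protocols: files appear as an ordinary POSIX filesystem, and generic metadata can be attached with Linux extended attributes. The mount point and attribute names are hypothetical illustrations for this sketch; Hammerspace's own metadata interfaces are richer and product-specific.

# Minimal sketch: from a client, a global namespace mounted over standard NFS
# looks like an ordinary POSIX filesystem. Paths and attribute names below are
# hypothetical illustrations, not Hammerspace-specific interfaces. Linux only.
import os

MOUNT = "/mnt/global-namespace"   # assumed mount point of the unified namespace

def walk_and_tag(root: str, key: bytes, value: bytes) -> None:
    """Walk the namespace and attach a user extended attribute to each file.

    Extended attributes (user.*) are one generic way to carry enriched metadata
    alongside files; a real deployment may use product-specific metadata APIs.
    """
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                os.setxattr(path, key, value)   # tag the file
                print(f"{path}: {os.listxattr(path)}")
            except OSError as exc:
                print(f"skipped {path}: {exc}")

if __name__ == "__main__":
    walk_and_tag(MOUNT, b"user.ai.dataset", b"training-set-v1")

The point of the sketch is that no special client agent is implied: anything that can read a POSIX path can participate in the namespace.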

Hammerspace addresses key AI use cases such as global collaboration, model training, and inferencing, with a particular focus on enterprise customers that want to leverage their existing data infrastructure. The platform assimilates metadata from diverse storage systems into a unified control plane, providing a single interface to data, with I/O control and quality of service managed through Hammerspace. By overcoming data gravity through intelligent data movement and by leveraging Linux advancements, Hammerspace enables data access regardless of location, maximizing GPU utilization and reducing costs. It does so while keeping data access, compliance, and governance in focus, ensuring that AI projects align with business objectives and minimizing the risks associated with data movement.
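As a rough illustration of what "intelligent data movement" can mean in practice, the sketch below encodes a toy placement policy that maps file metadata to a target tier. The tier names, tags, and thresholds are assumptions invented for this example; they do not reproduce Hammerspace's actual objective or policy engine.

# Conceptual sketch of policy-driven data placement over simple file metadata.
# Tier names, tags, and thresholds are assumptions for illustration only.
from dataclasses import dataclass
import time

@dataclass
class FileMeta:
    path: str
    last_access: float      # epoch seconds
    tags: frozenset[str]    # e.g. {"training"}
    size_bytes: int

def choose_tier(meta: FileMeta, now: float | None = None) -> str:
    """Return a target tier name for a file based on simple, assumed rules."""
    if now is None:
        now = time.time()
    idle_days = (now - meta.last_access) / 86400
    if "training" in meta.tags and idle_days < 1:
        return "tier0-local-nvme"      # keep active training data closest to GPUs
    if idle_days < 30:
        return "tier1-shared-flash"    # warm data on shared flash
    return "object-archive"            # cold data to low-cost object storage

# Example: a recently touched training file lands on the fastest tier.
example = FileMeta("/data/train/shard-0001.tar", time.time() - 3600,
                   frozenset({"training"}), 2 << 30)
print(choose_tier(example))   # -> "tier0-local-nvme"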

Hammerspace aims to unify diverse data sources, from edge data to existing storage systems, enabling seamless access for AI factories and competitive advantages through faster data insights. With enriched metadata and automated workflows, Hammerspace accelerates time to insight and removes manual processes. Hammerspace is available as installable software or as a hardware appliance, supports various deployment models, and offers linear scalability and distributed access to data. A “Tier 0” capability was also discussed, which leverages existing underutilized NVMe storage within GPU nodes to create a fast, low-latency storage pool, showcasing the platform’s flexibility and resourcefulness.

Personnel: Molly Presley

Activating Tier 0 Storage Within GPU and CPU-based Compute Clusters with Hammerspace


Watch on YouTube
Watch on Vimeo

The highest-performing storage available today is an untapped resource within your server clusters that can be activated by Hammerspace to accelerate AI workloads and increase GPU utilization. This session covers how Hammerspace unifies local NVMe across server clusters as a protected, ultra-fast tier that is part of a unified global namespace. This underutilized capacity can now accelerate AI workloads as shared storage, with data automatically orchestrated by Hammerspace across other tiers and cloud storage to improve time to token while also reducing infrastructure costs.

Floyd Christofferson from Hammerspace introduces Tier 0, focusing on how it accelerates AI workflows in GPU and CPU-based clusters. The core problem addressed is the stranded capacity of local NVMe storage within servers, which, despite its speed, is often underutilized. Accessing data over the network from external storage becomes a bottleneck, especially in AI workflows with growing context lengths and fast token access requirements. While increasing network capacity is an option, it is expensive and still limited. Tier 0 aggregates this local capacity into a single storage tier, making it the primary storage for workflows and enabling programmatic data orchestration, effectively unlocking petabytes of previously unused storage and eliminating the need to buy additional expensive Tier 1 storage.
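To put "petabytes of previously unused storage" in perspective, here is a back-of-envelope calculation in Python. Every figure is an assumption chosen for illustration, not a number from the presentation.

# Back-of-envelope illustration of how much local NVMe a GPU cluster strands.
# All figures below are assumptions for illustration, not numbers from the talk.
nodes = 512                      # assumed GPU servers in the cluster
drives_per_node = 8              # assumed local NVMe drives per server
drive_tb = 3.84                  # assumed usable capacity per drive, in TB

raw_tb = nodes * drives_per_node * drive_tb
print(f"Aggregate local NVMe: {raw_tb / 1000:.1f} PB")   # ~15.7 PB

# Even after reserving a share for scratch and protection overhead, pooling this
# capacity as a shared Tier 0 exposes multiple petabytes that would otherwise sit idle.
usable_fraction = 0.7            # assumed fraction usable as shared Tier 0
print(f"Usable as shared Tier 0: {raw_tb * usable_fraction / 1000:.1f} PB")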

Hammerspace’s Tier 0 leverages standards-based environments, with the client-side using standard NFS, SMB, and S3 protocols, eliminating the need for client-side software installations. The technology utilizes parallel NFS v4.2 with flex files, contributed to the Linux kernel, to enhance performance and efficiency. This approach avoids proprietary clients and special server deployments, allowing the system to work with existing infrastructure. The orchestration and unification of capacity across servers are key to the solution, turning compute nodes into storage servers without creating isolated islands, thereby reducing bottlenecks and improving data access speeds.
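Because the client side is plain NFS, one quick sanity check on a deployment is to confirm that a mount actually negotiated NFS v4.2, the version the presentation cites for flex files support. The sketch below parses /proc/self/mountstats on a Linux client; the mount point is a hypothetical placeholder.

# Minimal sketch: confirm from a Linux client which NFS version a mount negotiated.
# The mount point is an assumption; adjust it to wherever the namespace is mounted.
MOUNT_POINT = "/mnt/global-namespace"   # hypothetical mount point

def nfs_version(mount_point: str) -> str | None:
    """Return the negotiated NFS version (e.g. '4.2') for a mount point, or None."""
    current = None
    with open("/proc/self/mountstats") as stats:
        for line in stats:
            if line.startswith("device "):
                # e.g. "device srv:/export mounted on /mnt/x with fstype nfs4 statvers=1.1"
                current = None
                if " with fstype nfs" in line:
                    current = line.split(" mounted on ")[1].split(" with fstype ")[0]
            elif current == mount_point and line.strip().startswith("opts:"):
                for opt in line.split(":", 1)[1].split(","):
                    if opt.strip().startswith("vers="):
                        return opt.strip().split("=", 1)[1]
    return None

if __name__ == "__main__":
    vers = nfs_version(MOUNT_POINT)
    print(f"{MOUNT_POINT}: NFS version {vers}" if vers else "not an NFS mount")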

The presentation highlights the performance benefits of Tier 0, showcasing theoretical results and MLPerf benchmarks that demonstrate superior performance per rack unit. By utilizing local NVMe storage, Hammerspace reduces the reliance on expensive and slower cloud storage networks, leading to greater GPU utilization. Furthermore, Hammerspace contributes enhancements to the Linux kernel, such as local I/O, to reduce CPU utilization and accelerate write performance, solidifying its commitment to standards-based solutions and continuous improvement in data accessibility. The architecture is designed to be non-disruptive, allowing for live data mobility behind the scenes and ensuring a seamless user experience.

Personnel: Floyd Christofferson

The Open Flash Platform Initiative with Hammerspace


Watch on YouTube
Watch on Vimeo

The Open Flash Platform (OFP) Initiative is a multi-member industry collaboration founded in July 2025. The initiative’s goal is to redefine flash storage architecture, particularly for high-performance AI and data-centric workloads, by replacing traditional storage servers with a more efficient, modular, standards-based, and disaggregated model.

The presentation highlights the growing challenges of data storage, power consumption, and cooling in modern data centers, especially with the increasing volume of data generated at the edge. The core idea behind the OFP initiative is to leverage recent advancements in large-capacity flash (QLC), powerful DPUs (Data Processing Units), and Linux kernel enhancements to create a highly dense, low-power storage platform. This platform aims to replace traditional CPU-based storage servers with a modular design, ultimately allowing for exabyte-scale deployments within a single rack.

The proposed architecture consists of sleds containing DPUs, networking, and NVMe storage, fitting into trays that can be modularly deployed. This approach offers significant improvements in density and power efficiency compared to existing solutions. While the initial concept uses U.2 drives, the long-term goal is to leverage an extended E.2 standard for even greater capacity. Hammerspace is leading the initiative, fostering collaboration among industry players, including DPU and SSD partners, and exploring adoption by organizations like the Open Compute Project (OCP).
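For a rough sense of what exabyte-per-rack density implies for a sled-and-tray design, the following Python sketch works backwards from that target. All counts and capacities are assumptions for illustration; the initiative's actual configurations are not specified here.

# Working backwards from an exabyte-per-rack target to per-sled capacity.
# Every figure below is an assumption for illustration, not an OFP specification.
target_pb = 1000.0        # ~1 EB expressed in PB
sleds_per_tray = 8        # assumed sleds per tray
trays_per_rack = 16       # assumed trays per rack
sleds_per_rack = sleds_per_tray * trays_per_rack

pb_per_sled = target_pb / sleds_per_rack
print(f"Capacity needed per sled: {pb_per_sled:.1f} PB")      # ~7.8 PB

# With hypothetical 512 TB-class QLC devices, that is roughly 15-16 drives per
# sled, the kind of density that motivates moving beyond today's U.2 form factor.
drive_tb = 512.0
print(f"Drives per sled at {drive_tb:.0f} TB each: {pb_per_sled * 1000 / drive_tb:.1f}")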

Hammerspace envisions a future where AI infrastructure relies on open standards and efficient hardware. The OFP initiative aligns with this vision by providing a non-proprietary, high-capacity storage platform optimized for AI workloads. The goal is to let organizations modernize their storage without buying additional traditional storage systems, making use of flash that is already available, and in doing so provide a storage foundation suited to modern AI environments.

Personnel: Kurt Kuckein
