Tech Field Day

The Independent IT Influencer Event


Driving Storage Efficiency and the Impacts of AI in 2026 with Solidigm



AI Infrastructure Field Day 4


This video is part of the appearance, “Solidigm Presents at AI Infrastructure Field Day”. It was recorded as part of AI Infrastructure Field Day 4, from 8:00 AM to 9:30 AM PT on January 30, 2026.


Watch on YouTube
Watch on Vimeo

A discussion between Solidigm and Vast Data covers their joint efforts over the past year, from their all-flash TCO collaboration to the ways their technologies have aligned to meet AI market demands, and examines recent context-related developments heading into 2026, the year of inference. With the evolution of DPU-enabled inference platforms, the combined value of Solidigm storage and Vast Data solutions drives even greater customer success. Solidigm’s Scott Shadley opened the presentation by highlighting the immense power and storage demands of future AI infrastructure, using the “1.21 gigawatts” analogy. He projected that in 2025, one gigawatt of power could support 550,000 NVIDIA Grace Blackwell GB300 GPUs and 25 exabytes of storage. This scale requires extremely efficient, high-capacity solid-state drives (SSDs) to stay within power envelopes, making Solidigm’s 122 terabyte drives a key enabler.
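The one-gigawatt figures above lend themselves to a quick back-of-envelope check. The sketch below uses only the numbers cited in the talk (550,000 GPUs, 25 exabytes, 122 TB drives) and assumes decimal units throughout; the derived per-GPU power budget and drive count are illustrative, not Solidigm's own figures.

```python
# Back-of-envelope check of the cited 1 GW scale figures (decimal units assumed).
GIGAWATT_W = 1_000_000_000  # 1 GW expressed in watts
GPUS = 550_000              # GB300 GPUs supportable per gigawatt, as cited
STORAGE_TB = 25_000_000     # 25 exabytes expressed in terabytes
DRIVE_TB = 122              # capacity of one Solidigm 122 TB SSD

watts_per_gpu = GIGAWATT_W / GPUS        # power budget per GPU, incl. overhead
drives_needed = STORAGE_TB // DRIVE_TB   # 122 TB drives needed to reach 25 EB

print(f"~{watts_per_gpu:,.0f} W per GPU")       # ~1,818 W
print(f"~{drives_needed:,} drives for 25 EB")   # ~204,918 drives
```

Even at this crude level, the math shows why drive density matters: reaching 25 EB with smaller drives would multiply the drive count, and with it the storage share of the power envelope.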

In 2026, the presentation introduced NVIDIA’s Vera Rubin platform with Bluefield 4 DPUs, which fundamentally alters AI storage architecture. This new design introduces an “inference context memory storage platform” (ICMSP) layer. Positioned between direct-attached storage and object/data lake storage, this layer is critical for rapid access to KV cache data in AI inference workloads. The new hierarchy distributes the 25 exabytes across high-capacity network-attached storage, 6.4 exabytes of the new context memory storage, and 6.1 exabytes of direct-attached storage. This evolution, while reducing the number of supportable GPUs within the 1-gigawatt limit, requires faster NVMe storage to improve performance and is projected to drive a 5x or greater compound annual growth rate (CAGR) in high-capacity storage demand.
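Taken at face value, the tier sizes above imply the remainder of the 25 exabytes lands on network-attached storage. A minimal sketch of that split, assuming the two explicitly sized tiers are carved out of the same 25 EB budget:

```python
# Hypothetical split of the 25 EB budget across the 2026 storage hierarchy.
TOTAL_EB = 25.0
tiers_eb = {
    "inference context memory storage (ICMSP)": 6.4,
    "direct-attached storage": 6.1,
}
# The remainder falls to high-capacity network-attached storage.
tiers_eb["network-attached storage"] = TOTAL_EB - sum(tiers_eb.values())

for tier, eb in tiers_eb.items():
    print(f"{tier}: {eb:.1f} EB")  # network-attached works out to 12.5 EB
```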

Phil Manez from Vast Data then detailed their role in driving storage efficiency for AI. Vast’s disaggregated shared-everything (DASE) architecture separates compute from storage, utilizing Solidigm SSDs for dense capacity. This design enables global data reduction through a combination of compression, deduplication, and similarity-based reduction, achieving significantly higher data efficiency (often 3-4x more effective capacity) compared to traditional shared-nothing architectures, which is crucial amidst SSD supply constraints. Critically, Vast can deploy its C-node (storage logic) directly on the powerful Bluefield 4 DPUs, creating a highly optimized ICMSP. This approach accelerates time to first token, boosts GPU efficiency by offloading context computation, and dramatically reduces power consumption by eliminating intermediate compute layers, enabling AI inference workloads to operate at unprecedented speed and scale with shared, globally accessible context.
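The effective-capacity claim reduces to a simple multiplication. In the sketch below, the 3-4x ratio is the range quoted in the talk, while the 100-drive cluster size is an arbitrary example chosen for illustration:

```python
def effective_capacity_tb(raw_tb: float, reduction_ratio: float) -> float:
    """Usable capacity after global data reduction at a given ratio."""
    return raw_tb * reduction_ratio

# Example: 100 x 122 TB Solidigm drives at the cited 3-4x reduction range.
raw = 100 * 122  # 12,200 TB raw
low = effective_capacity_tb(raw, 3.0)   # 36,600 TB effective
high = effective_capacity_tb(raw, 4.0)  # 48,800 TB effective
print(f"{raw:,} TB raw -> {low:,.0f}-{high:,.0f} TB effective")
```

The point of the multiplier is that, under SSD supply constraints, a higher global reduction ratio is equivalent to shipping several times more raw flash.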

Personnel: Phil Manez, Scott Shadley
