
Solidigm Presents at AI Infrastructure Field Day



AI Infrastructure Field Day 4

Scott Shadley and Phil Manez presented for Solidigm at AI Infrastructure Field Day 4

This presentation date is January 30, 2026, 8:00 AM – 9:30 AM PT.

Presenters: Phil Manez, Scott Shadley

Redefining Scale and Efficiency for the AI Era

Solidigm is bringing in one of our outstanding partners, VAST Data, to co-present at AI Infrastructure Field Day 4. In these sessions we are going to discuss what is going on in 2026: the markets, the directions, the innovations, and the challenges.

The first session will be Solidigm presenting on the current state of the storage market: a view of which technologies are driving change, the solutions available to overcome some of the challenges, and a look at how the latest innovations for early 2026 affect the role of storage in the AI pipeline and deployment efforts. This includes the scope of high-capacity storage, what is coming, and why the impact of storage is more critical than ever for AI deployments.

The second session will be a collaborative discussion between Solidigm and VAST Data on their efforts over the last year: from the all-flash TCO collaboration to the way the two companies’ technologies have synced to solve AI market demands, along with some recent context-related impacts on 2026 as the year of inference. With the evolution of DPU-enabled inference platforms, the value and capability of Solidigm storage along with VAST Data solutions creates even more customer success.

The final session will be a VAST Data update on plans for 2026, efforts to help customers evolve their existing systems, and a brief overview of the VAST AI OS platform. There are existing, soon-to-be-released, and even future topics in this session that will leave listeners looking for more, some of which could be found at an upcoming VAST Data event.


Redefining Scale and Efficiency for the AI Era with Solidigm


Watch on YouTube
Watch on Vimeo

Solidigm presents on the current state of the storage market: a view of which technologies are driving change, the solutions available to overcome some of the challenges, and a look at how the latest innovations for early 2026 affect the role of storage in the AI pipeline and deployment efforts. This includes the scope of high-capacity storage, what is coming, and why the impact of storage is more critical than ever for AI deployments.

Scott Shadley, representing Solidigm, emphasized the company’s role as a hardware storage provider and highlighted its crucial partnership with software solutions such as VAST Data, represented by Phil Manez. Using a vivid donut analogy, Shadley explained the evolution of storage from traditional hard drives, akin to a “glazed donut with a hole,” to modern SSDs, symbolized by the “maple bar” form factor. The analogy extended to VAST Data as the “jelly” filling the donut, providing essential software solutions for data management and utility. He then delved into the cyclical nature of the semiconductor market, detailing past shifts such as the transition from 2D to 3D NAND and the impact of pandemic-induced hoarding, culminating in the current AI “bubble.” This unprecedented demand, coupled with historical underinvestment in NAND relative to memory, has created significant challenges for storage supply, necessitating long-term agreements and driving the need for Solidigm to innovate beyond building drives.

Solidigm’s strategy for the AI era focuses on delivering both high-performance and high-capacity storage, with products such as the PS1010 for performance and the P5336 (122TB drives) leading in high-capacity shipments. Beyond products, the company is deeply involved in enabling new cooling architectures essential for AI infrastructure. This includes pioneering liquid cold plate designs, contributing to industry standards (SNIA) to ensure vendor compatibility, and validating off-the-shelf products for full-immersion cooling, while addressing practical challenges such as adhesion of stickers in immersion fluids. To further support customers, Solidigm established the AI Central Lab, an independent facility offering remote access to diverse AI architectures, including Hopper, Blackwell, and future Vera Rubin platforms. This lab enables partners and customers to test and optimize solutions, overcoming barriers related to infrastructure availability and cost, and has already demonstrated significant improvements, such as a 27x faster “time to first token” by offloading the KV cache to SSDs, showcasing Solidigm’s deeper involvement in overall AI system functionality.
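
To make the 27x “time to first token” result concrete, the sketch below illustrates the general technique of KV-cache offload in minimal Python. It is not Solidigm’s implementation; the class, the NVMe path, and the block format are all hypothetical. The idea is simply that recent key/value blocks stay in memory while colder blocks spill to an SSD-backed path, from which they can be reloaded far faster than they could be recomputed.

```python
# Conceptual sketch only: offloading inference KV-cache blocks from memory to
# an NVMe SSD path. All names, paths, and formats here are hypothetical.
import os
import pickle
from collections import OrderedDict

class TieredKVCache:
    """Keep the hottest KV blocks in memory; spill older ones to an NVMe path."""

    def __init__(self, mem_budget_blocks, nvme_dir="/mnt/nvme/kv_cache"):
        self.mem_budget_blocks = mem_budget_blocks
        self.nvme_dir = nvme_dir
        self.memory = OrderedDict()  # block_id -> KV block object
        os.makedirs(nvme_dir, exist_ok=True)

    def put(self, block_id, kv_block):
        self.memory[block_id] = kv_block
        self.memory.move_to_end(block_id)
        # Evict least-recently-used blocks to SSD once the memory budget is hit.
        while len(self.memory) > self.mem_budget_blocks:
            old_id, old_block = self.memory.popitem(last=False)
            with open(os.path.join(self.nvme_dir, f"{old_id}.kv"), "wb") as f:
                pickle.dump(old_block, f)

    def get(self, block_id):
        if block_id in self.memory:
            self.memory.move_to_end(block_id)
            return self.memory[block_id]
        # Memory miss: reload the block from SSD instead of recomputing it,
        # which is what shortens "time to first token" for long contexts.
        with open(os.path.join(self.nvme_dir, f"{block_id}.kv"), "rb") as f:
            kv_block = pickle.load(f)
        self.put(block_id, kv_block)
        return kv_block
```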

Personnel: Phil Manez, Scott Shadley

Driving Storage Efficiency and the Impacts of AI in 2026 with Solidigm


Watch on YouTube
Watch on Vimeo

A discussion between Solidigm and VAST Data on their efforts over the last year, from the all-flash TCO collaboration to the way the two companies’ technologies have synced to solve AI market demands, including some recent context-related impacts on 2026 as the year of inference. With the evolution of DPU-enabled inference platforms, the value and capabilities of Solidigm storage and VAST Data solutions drive even greater customer success. Solidigm’s Scott Shadley opened the presentation by highlighting the immense power and storage demands of future AI infrastructure, using the “1.21 gigawatts” analogy. He projected that one gigawatt of power could support 550,000 NVIDIA Grace Blackwell GB300 GPUs and 25 exabytes of storage in 2025. This scale requires extremely efficient, high-capacity solid-state drives (SSDs) to stay within power envelopes, making Solidigm’s 122-terabyte drives a key enabler.

Looking ahead to 2026, the presentation introduced NVIDIA’s Vera Rubin platform with Bluefield 4 DPUs, which fundamentally alters AI storage architecture. This new design introduces an “inference context memory storage platform” (ICMSP) layer. Positioned between direct-attached storage and object/data lake storage, this layer is critical for rapid access to KV cache data in AI inference workloads. The new hierarchy distributes the 25 exabytes across high-capacity network-attached storage, a new 6.4-exabyte tier of context memory storage, and 6.1 exabytes of direct-attached storage. This evolution, while reducing the number of GPUs supportable within the 1-gigawatt limit, requires faster NVMe storage to improve performance and is projected to drive a 5x or greater compound annual growth rate (CAGR) in high-capacity storage demand.
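
As a rough sanity check on the scale described above, the snippet below reruns the arithmetic using only the figures quoted in the session. The decimal exabyte-to-terabyte conversion and the derived per-GPU and drive-count values are illustrative back-of-envelope numbers, not vendor guidance.

```python
# Back-of-envelope arithmetic using only figures quoted in the session; the
# derived per-GPU and per-drive numbers are illustrative, not vendor guidance.
GPUS_PER_GW = 550_000        # GB300 GPUs supportable within 1 GW (as stated)
TOTAL_STORAGE_EB = 25        # exabytes of storage at that scale (as stated)
CONTEXT_TIER_EB = 6.4        # inference context memory storage tier
DAS_TIER_EB = 6.1            # direct-attached storage tier
DRIVE_TB = 122               # Solidigm high-capacity SSD

EB_TO_TB = 1_000_000         # decimal units assumed
total_tb = TOTAL_STORAGE_EB * EB_TO_TB

print(f"Storage per GPU: {total_tb / GPUS_PER_GW:.1f} TB")              # ~45.5 TB
print(f"122 TB drives needed: {total_tb / DRIVE_TB:,.0f}")              # ~204,918
print(f"Context tier share: {CONTEXT_TIER_EB / TOTAL_STORAGE_EB:.0%}")  # 26%
print(f"Direct-attached share: {DAS_TIER_EB / TOTAL_STORAGE_EB:.0%}")   # 24%
```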

Phil Manez from Vast Data then detailed their role in driving storage efficiency for AI. Vast’s disaggregated shared-everything (DASE) architecture separates compute from storage, utilizing Solidigm SSDs for dense capacity. This design enables global data reduction through a combination of compression, deduplication, and similarity-based reduction, achieving significantly higher data efficiency (often 3-4x more effective capacity) compared to traditional shared-nothing architectures, which is crucial amidst SSD supply constraints. Critically, Vast can deploy its C-node (storage logic) directly on the powerful Bluefield 4 DPUs, creating a highly optimized ICMSP. This approach accelerates time to first token, boosts GPU efficiency by offloading context computation, and dramatically reduces power consumption by eliminating intermediate compute layers, enabling AI inference workloads to operate at unprecedented speed and scale with shared, globally accessible context.
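
To get a feel for what the quoted 3-4x global data reduction means in drive terms, here is a quick, hypothetical calculation; the target capacity is an arbitrary example and the ratios are simply the range cited above.

```python
# How much raw QLC flash is needed to present a target effective capacity at
# different global data-reduction ratios. All values are hypothetical examples.
target_effective_pb = 10.0                # effective capacity the cluster must present
for reduction_ratio in (1.0, 3.0, 4.0):   # 1.0x = no-reduction baseline
    raw_needed_pb = target_effective_pb / reduction_ratio
    drives_122tb = raw_needed_pb * 1000 / 122
    print(f"{reduction_ratio:.0f}x reduction: {raw_needed_pb:5.2f} PB raw "
          f"(~{drives_122tb:.0f} x 122 TB drives)")
```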

Personnel: Phil Manez, Scott Shadley

VAST Data Looks at Future Innovations for AI and the AI OS with Solidigm


Watch on YouTube
Watch on Vimeo

An update from Vast Data on plans for 2026, efforts to help customers evolve their existing systems, and an early overview of the Vast AI OS platform. This session covers existing, soon-to-be-released, and future topics that will leave listeners wanting more, with more to come at an upcoming Vast event. Vast Data, founded in 2016, launched its initial storage product in 2019, based on a new shared-everything architecture designed to address the scale and efficiency challenges of migrating legacy systems into the AI era. Since then, the company has expanded to include a database product for structured and unstructured data, and is now integrating compute capabilities to enable customers to execute models and build agents. A significant focus is on addressing the pervasive “shared nothing” architecture, which, beyond storage, creates substantial problems in eventing infrastructure, leading to scaling difficulties, high write amplification from replication, and weak analytics capabilities, often causing significant delays in gaining real-time insights.

Vast Data’s shared-everything architecture aims to address these issues by providing a parallel compute layer that is ACID-compliant, ensuring event order across partitions. By treating eventing topics as tables in the Vast database, with each event as a row, they leverage storage-class memory for rapid data capture in row format, then migrate it to QLC in columnar format for robust analytics. This approach dramatically simplifies eventing infrastructure, boosts scalability, and delivers superior performance, achieving 1.5 million transactions per server and significantly reducing server count compared to legacy systems. The same “shared nothing” paradigm also plagues vector databases, leading to memory-bound systems that require extensive sharding, suffer from slow inserts and updates, and struggle to scale for rich media such as video, where vector counts can reach trillions.
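
The topics-as-tables idea can be pictured with a few lines of plain Python standing in for the real storage-class-memory and QLC tiers: events are appended as rows for fast ingest, then transposed into a columnar layout so analytical scans read only the columns they need. The event fields below are invented for the example.

```python
# Minimal sketch of the row-then-columnar flow described above. Plain Python
# stand-in; the real system captures rows in storage-class memory and migrates
# them to QLC flash in columnar form.
from collections import defaultdict

# Ingest path: each event on a topic is appended as a row (one dict per event).
topic_rows = [
    {"ts": 1706601600, "device": "cam-01", "event": "motion", "score": 0.91},
    {"ts": 1706601601, "device": "cam-02", "event": "motion", "score": 0.42},
]

# Migration path: transpose accumulated rows into columns for analytics.
def rows_to_columns(rows):
    columns = defaultdict(list)
    for row in rows:
        for key, value in row.items():
            columns[key].append(value)
    return dict(columns)

columnar = rows_to_columns(topic_rows)
# An analytics query now scans a single column instead of whole rows:
high_scores = [s for s in columnar["score"] if s > 0.5]
print(high_scores)  # [0.91]
```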

Vast Data’s vector database, built on its unified architecture, addresses these challenges by supporting trillions of vectors within a single, consolidated database, eliminating the need for complex sharding. This enables seamless scalability for vector search and rapid inserts, a critical capability for real-time applications such as analyzing live video feeds, where traditional in-memory vector databases often fail. Furthermore, a key innovation is the unified security model, which applies a consistent permission structure from the original data (documents, images, videos) to their derived vectors. This ensures that large language models only access information authorized for the user, preventing unintended data exposure and maintaining robust data governance. The platform also supports data-driven workflows, automatically triggering processes such as video embedding and vector storage when new data arrives.
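
A toy sketch of that unified security model follows, with hypothetical documents, groups, and embeddings: each vector carries the access list of its source document, and search results are filtered by the querying user’s groups before anything reaches a language model.

```python
# Illustrative permission-aware vector search; documents, vectors, and ACLs are
# invented for the example and are not VAST Data APIs.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Each entry pairs an embedding with the source document's allowed groups.
index = [
    {"doc": "q3-forecast.pdf", "vec": [0.9, 0.1, 0.0], "allowed": {"finance"}},
    {"doc": "lobby-cam.mp4",   "vec": [0.2, 0.8, 0.1], "allowed": {"security", "facilities"}},
]

def search(query_vec, user_groups, top_k=5):
    # Drop vectors whose source document the user cannot read, so an LLM never
    # receives unauthorized context.
    visible = [e for e in index if e["allowed"] & user_groups]
    ranked = sorted(visible, key=lambda e: cosine(query_vec, e["vec"]), reverse=True)
    return [e["doc"] for e in ranked[:top_k]]

print(search([0.85, 0.2, 0.0], {"finance"}))  # ['q3-forecast.pdf']
```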

Personnel: Phil Manez, Scott Shadley

