Tech Field Day

The Independent IT Influencer Event


GPU Memory Offload for LLM fine-tuning and inference with Phison aiDAPTIV+



AI Infrastructure Field Day 2


This video is part of the appearance “Phison Technology Presents at AI Infrastructure Field Day 2”. It was recorded from 08:00 to 11:30 on April 24, 2025.


Watch on YouTube
Watch on Vimeo

With aiDAPTIV+, Phison makes on-premises AI processing more accessible and affordable, especially for small and medium-sized businesses, government entities, and educational institutions. CTO Sebastien Jean explained that the primary goal of Phison’s solution is to enable fine-tuning of large language models (LLMs) on-site. Fine-tuning demands significantly more memory than inference, up to 20 times the memory needed to run an LLM, which drives up costs and puts the approach out of reach for organizations without large budgets or abundant hardware.
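As a rough illustration of why fine-tuning is so much more memory-hungry than inference, consider the common accounting for full fine-tuning with a mixed-precision Adam optimizer. The byte counts below are standard rules of thumb, not figures from the presentation, and the 7B-parameter model is a hypothetical example:

```python
def inference_bytes(params, bytes_per_weight=2):
    """Memory to hold fp16 weights for inference (ignoring KV cache)."""
    return params * bytes_per_weight

def finetune_bytes(params):
    """Full fine-tuning with mixed-precision Adam:
    fp16 weights (2) + fp16 gradients (2) + fp32 master weights (4)
    + Adam first and second moments (4 + 4) = 16 bytes per parameter,
    before counting activations and workspace."""
    return params * (2 + 2 + 4 + 4 + 4)

params = 7_000_000_000  # hypothetical 7B-parameter model
gib = 1024 ** 3
print(f"inference: {inference_bytes(params) / gib:.0f} GiB")
print(f"fine-tune: {finetune_bytes(params) / gib:.0f} GiB")
```

This already gives an 8x gap from optimizer state alone; adding activation memory for long sequences is what pushes the multiplier toward the 20x the presentation cites.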

Phison’s solution addresses this challenge by decoupling compute and memory. Jean described how Phison’s AI-optimized SSDs and middleware enable on-site LLM training and inference. The product combines Phison’s proprietary middleware, Adaptive Link, with its custom-built ProSuite software to manage and extend the memory available to PyTorch, effectively turning an SSD into an extended memory pool. This architecture allows large models to be trained with fewer GPUs: a software layer within PyTorch intercepts calls and offloads slices of the model to the SSD, easing GPU memory pressure.
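The intercept-and-offload idea can be sketched in plain Python. This is an illustrative toy, not Phison’s middleware: the `SliceOffloader` class, its file layout, and the LRU policy are all invented here, and a real system would hook tensor allocation inside PyTorch rather than pickling Python objects to files:

```python
import os
import pickle
import tempfile
from collections import OrderedDict

class SliceOffloader:
    """Toy offload pool: keeps at most `capacity` model slices in RAM
    and spills the rest to files, standing in for an SSD-backed pool."""

    def __init__(self, capacity=2):
        self.capacity = capacity
        self.cache = OrderedDict()           # slices resident "in memory"
        self.spill_dir = tempfile.mkdtemp()  # stand-in for the SSD

    def _path(self, name):
        return os.path.join(self.spill_dir, f"{name}.pkl")

    def put(self, name, weights):
        self.cache[name] = weights
        self.cache.move_to_end(name)
        self._evict()

    def get(self, name):
        if name not in self.cache:           # miss: reload slice from "SSD"
            with open(self._path(name), "rb") as f:
                self.cache[name] = pickle.load(f)
        self.cache.move_to_end(name)
        self._evict()
        return self.cache[name]

    def _evict(self):
        while len(self.cache) > self.capacity:
            # spill the least recently used slice to disk
            name, weights = self.cache.popitem(last=False)
            with open(self._path(name), "wb") as f:
                pickle.dump(weights, f)

pool = SliceOffloader(capacity=2)
for i in range(4):                       # four "layers", only two fit in RAM
    pool.put(f"layer{i}", [float(i)] * 8)
print(sorted(pool.cache))                # → ['layer2', 'layer3']
print(pool.get("layer0")[:3])            # older slice transparently reloaded
```

The point of the sketch is the transparency: the caller asks for a slice by name and never sees whether it came from RAM or storage, which is the same property the intercept layer gives PyTorch.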

By leveraging SSDs and its proprietary controller technology, Phison offers a cost-effective alternative to expensive GPU-heavy setups, targeting the SMB, government, and education markets. The presentation concluded with the financial and sustainability benefits of the solution: more efficient hardware utilization reduces not only cost but also power and cooling demands. In addition, by using repurposed NAND, the solution can increase hardware lifespan, reduce electronic waste, and extend the useful life of data center infrastructure.

Personnel: Sebastien Jean

