GPU Memory Offload for LLM fine-tuning and inference with Phison aiDAPTIV+



AI Infrastructure Field Day 2


This video is part of the appearance, “Phison Technology Presents at AI Infrastructure Field Day 2”. It was recorded as part of AI Infrastructure Field Day 2 from 08:00 to 11:30 on April 24, 2025.


Watch on YouTube
Watch on Vimeo

With aiDAPTIV+, Phison makes on-premises AI processing more accessible and affordable, especially for small and medium-sized businesses, government entities, and educational institutions. CTO Sebastien Jean explained that the primary goal of Phison’s solution is to enable fine-tuning of large language models (LLMs) on-site. Fine-tuning demands significantly more memory than inference, which makes it expensive and difficult for organizations without large budgets and GPU resources. The presentation highlighted these memory requirements, which can be up to 20 times what is needed to run an LLM for inference, driving up costs and putting the approach entirely out of reach for some organizations.
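The scale of that gap can be illustrated with a rough back-of-the-envelope calculation. The sketch below assumes full fine-tuning of a hypothetical 7B-parameter model with mixed-precision Adam; the per-parameter byte counts are common rules of thumb, not figures from the presentation.

```python
# Rough memory estimate for a hypothetical 7B-parameter model, illustrating why
# fine-tuning needs far more memory than inference. Byte counts assume
# mixed-precision training with Adam (FP16 weights/gradients, FP32 master
# weights and optimizer moments); actual frameworks and models vary.
params = 7e9

inference_bytes = params * 2  # FP16 weights only
train_bytes = params * (
    2    # FP16 weights
    + 2  # FP16 gradients
    + 4  # FP32 master copy of weights
    + 4  # Adam first moment (m)
    + 4  # Adam second moment (v)
)

print(f"Inference (weights only): ~{inference_bytes / 1e9:.0f} GB")   # ~14 GB
print(f"Fine-tuning (optimizer state): ~{train_bytes / 1e9:.0f} GB")  # ~112 GB
# Activation memory and longer contexts widen the gap further, toward the
# 10-20x range cited in the presentation.
```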

Phison’s solution addresses this challenge by decoupling compute and memory. Jean described how Phison’s AI-optimized SSDs and middleware enable on-site LLM training and inference. The product combines Phison’s proprietary middleware, Adaptive Link, with its custom-built ProSuite software to manage and extend the memory available to PyTorch, effectively turning an SSD into an extended memory pool. This architecture allows large models to be trained with fewer GPUs. A software layer within PyTorch intercepts calls and offloads slices of the model to the SSD, easing GPU memory pressure.
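To make the idea concrete, here is a minimal, hypothetical sketch of module-level SSD offload in PyTorch. It is not Phison’s Adaptive Link or ProSuite code or API; the cache path and the OffloadedBlock wrapper are illustrative assumptions only, showing how model slices can be paged between an NVMe drive and accelerator memory on demand.

```python
# Illustrative sketch only: page a model block's weights between an SSD-backed
# directory and device memory around each forward pass.
import torch
import torch.nn as nn

CACHE_DIR = "/mnt/nvme_cache"  # hypothetical SSD mount point


def offload_layer(layer: nn.Module, path: str) -> None:
    """Write a layer's weights to the SSD and release them from device memory."""
    torch.save(layer.state_dict(), path)
    layer.to("meta")  # keep only shapes/dtypes; storage is freed


def reload_layer(layer: nn.Module, path: str, device: str) -> None:
    """Page the layer's weights back in from the SSD just before use."""
    layer.to_empty(device=device)
    layer.load_state_dict(torch.load(path, map_location=device))


class OffloadedBlock(nn.Module):
    """Wraps a block so its weights live on the SSD between forward calls."""

    def __init__(self, block: nn.Module, path: str):
        super().__init__()
        self.block = block
        self.path = path
        offload_layer(self.block, self.path)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        reload_layer(self.block, self.path, device=str(x.device))
        out = self.block(x)
        offload_layer(self.block, self.path)  # evict again after use
        return out


# Usage sketch: wrap each transformer layer so only the active slice
# occupies device memory at any time, e.g.
#   model.layers[i] = OffloadedBlock(model.layers[i], f"{CACHE_DIR}/layer_{i}.pt")
```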

By leveraging SSDs and its proprietary controller technology, Phison offers a cost-effective alternative to expensive GPU-heavy setups, targeting the SMB, government, and education markets. The presentation concluded with the financial and sustainability benefits of the solution. More efficient hardware utilization makes it not only a financially smart choice but one with power and cooling benefits as well. In addition, by using repurposed NAND, the solution can increase the lifespan of hardware, reduce electronic waste, and extend the useful life of data center infrastructure.

Personnel: Sebastien Jean

