|
This video is part of the appearance, “Phison Technology Presents at AI Infrastructure Field Day 2”. It was recorded as part of AI Infrastructure Field Day 2 at 08:00 - 11:30 on April 24, 2025.
Watch on YouTube
Watch on Vimeo
Phison showcases its aiDAPTIV+ technology, designed to make on-premises AI processing more affordable and accessible, particularly for small to medium-sized businesses, governments, and universities. The core of the innovation lies in leveraging flash storage to offload the memory demands of large language models (LLMs) from the GPU. This approach addresses the growing challenge of limited GPU memory capacity, which often forces organizations to buy more GPUs than their compute workload requires, purely to gain memory capacity.
Phison’s solution loads LLMs onto high-capacity, cost-effective flash memory, allowing the GPU to access the necessary data in slices for processing. This significantly reduces cost compared to traditional deployments that rely solely on GPU memory. The company is partnering with OEMs to integrate the technology into various platforms, including desktops, laptops, and even IoT devices, with a focus on providing pre-tested solutions for a seamless user experience. Phison is also expanding its partnerships to include storage systems.
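The idea of keeping model weights on high-capacity storage and streaming them to the processor in slices can be illustrated with a minimal sketch. This is not Phison's aiDAPTIV+ implementation, just a toy Python example using a NumPy memory map: the weights live in a file (standing in for flash), and only the layer currently being computed is pulled into working memory (standing in for GPU memory). All sizes and names here are hypothetical.

```python
import os
import tempfile
import numpy as np

# Hypothetical model dimensions, for illustration only.
LAYERS, DIM = 4, 256

# Write the "model weights" to a file, standing in for flash storage.
path = os.path.join(tempfile.mkdtemp(), "weights.bin")
weights = np.random.default_rng(0).standard_normal((LAYERS, DIM, DIM)).astype(np.float32)
weights.tofile(path)

# Memory-map the file: weights stay on "flash"; only the slice in use
# is paged into RAM (standing in for scarce GPU memory in this sketch).
mm = np.memmap(path, dtype=np.float32, mode="r", shape=(LAYERS, DIM, DIM))

def forward(x):
    # Stream one layer (slice) at a time instead of holding
    # every layer resident in fast memory at once.
    for i in range(LAYERS):
        layer = np.asarray(mm[i])  # load only this slice
        x = np.tanh(x @ layer)
    return x

out = forward(np.ones(DIM, dtype=np.float32))
print(out.shape)  # (256,)
```

The trade-off this sketch hints at is the one the presentation addresses: peak fast-memory usage drops to a single slice, at the cost of extra data movement per pass, which is why Phison pairs the approach with high-throughput flash.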
Beyond hardware, Phison also addresses the knowledge gap in LLM training by offering educational programs and working with universities to give students access to affordable AI infrastructure. Its teaching PC, offered in partnership with Newegg, aims to democratize LLM training by making it accessible in classrooms. The company’s efforts focus on fine-tuning existing foundation models with domain-specific data, allowing businesses and institutions to tailor AI to their unique needs while keeping their data private.
Personnel: Brian Cox