Presentation date: February 21, 2024, 11:00-12:30.
Presenters: Ace Stryker, Alan Bumgarner, Paul McLeod, Wendell Wenjen
Why Storage Matters for AI with Solidigm
Watch on YouTube
Watch on Vimeo
In this presentation, Ace Stryker and Alan Bumgarner of Solidigm discuss the importance of storage in AI workloads. They explain that as AI models and datasets grow, efficient, high-performance storage becomes increasingly critical. They introduce their company, Solidigm, which emerged from SK Hynix’s acquisition of Intel’s storage group and which offers a range of SSD products suitable for AI applications.
The discussion covers several key points:
- The growing AI market and the shift from centralized to distributed compute and storage, including the edge.
- The dominance of hard drives for AI data and the opportunity for transitioning to flash storage.
- The role of storage in AI workflows, including data ingestion, preparation, training, and inference.
- The Total Cost of Ownership (TCO) benefits of SSDs over hard drives, considering factors like power consumption, space, and cooling.
- The Solidigm product portfolio, emphasizing different SSDs for various AI tasks, and the importance of choosing the right storage based on workload demands.
- A customer case study from Kingsoft in China, which saw a significant reduction in data processing time by moving to an all-flash array.
- The future potential of AI and the importance of SSDs in enabling efficient AI computing.
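The TCO argument above (power, space, and cooling) comes down to simple arithmetic: reaching a target capacity with fewer, denser drives shrinks both the rack footprint and the energy bill. The sketch below illustrates that calculation; every figure in it (drive capacities, power draws, drives per rack unit, electricity cost, cooling overhead) is a hypothetical assumption for illustration, not Solidigm data.

```python
import math

# Illustrative TCO sketch: drive count, rack space, and energy cost of
# reaching a target raw capacity with one drive type. All numbers passed
# in below are assumptions for illustration only, not vendor figures.

def fleet_tco(target_tb, drive_tb, watts_per_drive, drives_per_ru,
              usd_per_kwh=0.12, years=5, cooling_overhead=0.5):
    """Return (drive_count, rack_units, energy_cost_usd) for one fleet."""
    drives = math.ceil(target_tb / drive_tb)
    rack_units = math.ceil(drives / drives_per_ru)
    # Energy over the period, with a cooling overhead multiplier on top
    # of the drives' own draw (a crude PUE-style factor).
    kwh = drives * watts_per_drive * (1 + cooling_overhead) * 24 * 365 * years / 1000
    return drives, rack_units, kwh * usd_per_kwh

# Hypothetical fleets for 10 PB of raw capacity:
hdd = fleet_tco(10_000, drive_tb=20, watts_per_drive=8, drives_per_ru=15)
ssd = fleet_tco(10_000, drive_tb=61.44, watts_per_drive=20, drives_per_ru=36)

print("HDD fleet: drives=%d, rack units=%d, energy cost=$%.0f" % hdd)
print("SSD fleet: drives=%d, rack units=%d, energy cost=$%.0f" % ssd)
```

Even though each SSD here is assumed to draw more watts than each HDD, the far smaller drive count means the flash fleet wins on total power, cooling, and rack space; the crossover point depends entirely on the capacities and power figures you plug in.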
The session also includes questions from the Field Day delegates covering technical aspects of Solidigm storage products, such as the role of their Cloud Storage Acceleration Layer (CSAL), and a discussion of the importance of consulting with customers to understand their specific AI workload requirements for optimal storage solutions.
Personnel: Ace Stryker, Alan Bumgarner
Optimized Storage from Supermicro and Solidigm to Accelerate Your AI Data Pipeline
Watch on YouTube
Watch on Vimeo
Wendell Wenjen and Paul McLeod from Supermicro discuss challenges and solutions for AI and machine learning data storage. Supermicro is a company that provides servers, storage, GPU-accelerated servers, and networking solutions, with a significant portion of their revenue being AI-related.
They highlight the challenges in AI and machine learning operations, specifically around data management, which includes collecting data, transforming it, and feeding it into GPU clusters for training and inference. They also emphasize the need for large storage capacity to handle the various phases of the AI data pipeline.
Supermicro offers a wide range of products designed to cater to each stage of the AI data pipeline, from data ingestion, which requires a large data lake, to the training phase, which requires retaining large amounts of data for model development and validation. They also discuss the importance of efficient data storage solutions and introduce the concept of an “IO Blender effect,” where multiple data pipelines run concurrently, creating a mix of different IO profiles.
Delving deeper into the storage solutions, they highlight Supermicro’s partnership with WEKA, a software-defined storage company, and how its architecture is optimized for AI workloads. They explain the importance of NVMe flash storage, which can outpace processors, and the challenges of scaling such storage. They also cover Supermicro’s extensive portfolio of storage servers, ranging from multi-node systems to petascale architectures, designed to accommodate different customer needs.
Supermicro’s approach to storage for AI is a two-tiered solution: flash storage for high performance and disk-based storage for high capacity at lower cost. They also touch on the role of GPUDirect Storage in reducing latency and the flexibility of their software-defined storage solutions.
The presentation concludes with an overview of Supermicro’s product offerings for different AI and machine learning workloads, from edge devices to large data center storage solutions.
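The two-tiered idea can be sketched as a simple placement policy: frequently read data lands on the NVMe flash tier, rarely touched data on the disk tier. The threshold, tier names, and datasets below are hypothetical illustrations, not Supermicro's or WEKA's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class DataSet:
    name: str
    size_tb: float
    reads_per_day: int  # crude "heat" metric for illustration

def place(datasets, hot_reads_threshold=100):
    """Assign each dataset to the flash or disk tier by access frequency.

    Hypothetical policy: anything read at least `hot_reads_threshold`
    times per day goes to NVMe flash; everything else goes to disk.
    """
    tiers = {"nvme_flash": [], "disk": []}
    for ds in datasets:
        tier = "nvme_flash" if ds.reads_per_day >= hot_reads_threshold else "disk"
        tiers[tier].append(ds.name)
    return tiers

placement = place([
    DataSet("training-shards", 50.0, 5000),   # re-read every epoch -> flash
    DataSet("raw-ingest-archive", 400.0, 2),  # rarely touched -> disk
])
print(placement)
```

In a real deployment the policy would be driven by observed IO telemetry rather than a static count, but the shape of the decision (performance tier for hot training data, capacity tier for cold archives) is the same.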
Personnel: Paul McLeod, Wendell Wenjen