This video is part of the Field Day Showcase, “VAST Data Tech Field Day Showcase.” It was published on March 12, 2024.
In this discussion, Neeloy Bhattacharyya from VAST Data and Sandeep Brahmarouthu from Run:ai explore the complexities of deploying AI for high-value use cases at scale, focusing on the movement and management of data throughout the AI pipeline. They identify a common challenge in organizations where data preparation is handled separately from model training and inference, leading to inefficiencies. They emphasize the importance of understanding data provenance and lineage to leverage AI effectively, especially for innovative use cases.
VAST Data’s approach simplifies the AI data pipeline by integrating data capture, preparation, training, and model serving more tightly, in contrast to the inefficiencies of traditional data storage and processing methods. Bhattacharyya introduces the concept of “data adjacency,” where certain functions run more efficiently when placed closer to where the data is stored, improving processing times and outcomes.
Brahmarouthu discusses Run:ai’s role in managing GPU resources for AI workloads, addressing the challenge of efficiently scheduling and utilizing GPUs across different teams and projects within an organization. He highlights the importance of Kubernetes in managing these resources, despite its limitations for AI-specific workloads, and how Run:ai enhances Kubernetes to better serve AI applications.
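As context for the Kubernetes point, the sketch below (not from the presentation) shows the baseline mechanism schedulers such as Run:ai build upon: a training pod requesting a GPU through the standard NVIDIA device-plugin resource in Kubernetes, using the official Python client. The namespace, image, and job names are illustrative assumptions.

```python
# Minimal sketch: requesting a GPU for a training pod via the standard
# Kubernetes API with the official Python client. Names (namespace, image,
# command) are illustrative; this is the primitive that AI-aware schedulers
# extend with team quotas, queueing, and fractional GPU sharing.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig with cluster access

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="train-job-example", namespace="ml-team-a"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="pytorch/pytorch:latest",  # illustrative training image
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    # Whole-GPU request via the NVIDIA device plugin;
                    # stock Kubernetes only allots GPUs in integer units.
                    limits={"nvidia.com/gpu": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="ml-team-a", body=pod)
```

The integer-only, first-come allocation shown here is one of the Kubernetes limitations discussed in the session; Run:ai layers its own scheduling on top to share and prioritize GPUs across teams and projects.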
The conversation also touches on the operational challenges of deploying AI within enterprises, including the need for a DevOps model that accommodates the experimental nature of AI. They discuss the importance of infrastructure and technology partnerships, like the one between VAST Data and Run:ai, in creating efficient, scalable AI deployment strategies.
Personnel: Neeloy Bhattacharyya, Sandeep Brahmarouthu