|
This video is part of the appearance “Google Cloud Presents at Cloud Field Day 20”. It was recorded at Cloud Field Day 20, 13:00-15:30 on June 13, 2024.
Victor Moreno, a product manager at Google Cloud, presented the network infrastructure Google Cloud has built to support AI and machine learning (AI/ML) workloads. The exponential growth of AI/ML models means vast amounts of data must move across the network; no single TPU or host can handle a model alone. Instead, thousands of nodes must communicate efficiently, which Google Cloud achieves with a software-defined network (SDN) backed by hardware acceleration. This infrastructure lets GPUs and TPUs communicate at line rate while handling challenges such as load balancing and restructuring data center topology to match traffic patterns.
Google Cloud’s AI/ML network infrastructure comprises two main networks: one for GPU-to-GPU communication and another for connecting to external storage and data sources. The GPU network is designed for the high bandwidth and low latency essential to training large models distributed across many nodes; it combines electrical and optical switching to create flexible topologies that can be reconfigured without physical rewiring. The second network connects the GPU clusters to storage, ensuring periodic checkpoints of the training run are stored efficiently. This dual-network approach supports both high-performance data processing and storage communication within the same data center region.
In addition to the physical network infrastructure, Google Cloud leverages advanced load balancing techniques to optimize AI/ML workloads. By using custom metrics such as queue depth, Google Cloud can significantly improve response times for AI models. This optimization is enabled by the Open Request Cost Aggregation (ORCA) framework, which lets backends report their load so that requests are distributed more intelligently across model instances. These capabilities are integrated into Google Cloud’s Vertex AI service, giving users scalable, efficient AI/ML infrastructure that automatically adjusts to workload demands for high performance and reliability.
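To make the queue-depth idea concrete, here is a minimal Python sketch of the routing decision described above. It is not the actual ORCA API or Vertex AI implementation; the `BackendReport` class and `pick_backend` function are hypothetical stand-ins for the kind of backend-reported custom metrics ORCA conveys, assuming each model replica publishes its current request queue depth.

```python
class BackendReport:
    """Hypothetical per-replica load report, loosely modeled on an
    ORCA-style custom metric (here, just the request queue depth)."""

    def __init__(self, name, queue_depth):
        self.name = name
        self.queue_depth = queue_depth


def pick_backend(reports):
    """Route the next request to the replica with the shallowest queue.

    Round-robin or least-connections ignores how much work is already
    queued on each model server; a backend-reported queue depth avoids
    piling new requests onto an instance still busy with long inferences.
    """
    return min(reports, key=lambda r: r.queue_depth)


reports = [
    BackendReport("model-replica-a", queue_depth=7),
    BackendReport("model-replica-b", queue_depth=2),
    BackendReport("model-replica-c", queue_depth=5),
]
print(pick_backend(reports).name)  # prints "model-replica-b"
```

Queue depth is a better signal than connection count for model serving because inference requests vary widely in cost, which is exactly why a framework for application-reported metrics like ORCA is useful.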
Personnel: Victor Moreno