This video is part of the appearance, “Google Cloud Presents at AI Infrastructure Field Day 2 – Afternoon”. It was recorded as part of AI Infrastructure Field Day 2 from 13:00 to 16:30 on April 22, 2025.
Watch on YouTube
Watch on Vimeo
Rose Zhu, a Product Manager for Google Cloud TPU, presented on TPUs for large-scale training and inference, emphasizing the rapid growth of AI models and the corresponding demands on compute, memory, and networking. Zhu described Google's TPU chips and systems as purpose-built ASICs for machine learning, coupled with innovations in power efficiency, liquid cooling, and networking via Jupiter optical networks and the Inter-Chip Interconnect (ICI). A key focus was co-designing TPU hardware with software so that many chips can operate as a single supercomputer, supported by frameworks such as JAX and PyTorch and by the low-level XLA compiler to maximize performance.
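As a rough illustration of the JAX/XLA workflow Zhu referenced (a minimal sketch, not code from the presentation; the function and array names are assumptions), jax.jit hands a traced computation to XLA, which compiles it for whatever backend is available, TPU included:

# Minimal JAX/XLA sketch (illustrative; not from the presentation).
# jax.jit traces the function and XLA compiles it for the available
# backend -- TPU, GPU, or CPU.
import jax
import jax.numpy as jnp

@jax.jit
def predict(weights, inputs):
    # A single dense layer; XLA fuses the matmul and activation.
    return jax.nn.relu(inputs @ weights)

key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (512, 256))
x = jax.random.normal(key, (8, 512))

print(jax.devices())        # lists TpuDevice entries on a TPU VM
print(predict(w, x).shape)  # (8, 256); compiled once, cached thereafter

The same Python code runs unmodified on CPU, GPU, or TPU, which is the co-design point: the framework stays hardware-neutral while XLA targets the TPU's systolic arrays.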
Zhu showcased real-world TPU usage: TPUs power Google's internal applications such as Gmail and YouTube and serve external cloud customers across segments, including Anthropic, Salesforce, Mercedes, and Kakao. Adoption of Cloud TPUs has grown significantly, with an eightfold increase in chip-hour consumption within 12 months. A major announcement was the upcoming 7th-generation TPU, Ironwood, slated for general availability in Q4 2025, with two configurations, TPU7 and TPU7X, to address diverse data center requirements and customer needs for locality and low latency.
Zhu detailed Ironwood's specifications, including its BF16 and FP8 support, teraflops performance, and high-bandwidth memory (HBM), noting significant performance and power-efficiency gains over previous TPU generations. Zhu also touched on optimizing TPU performance through techniques such as flash attention, host DRAM offload, mixed-precision training (sketched below), and an inference stack for TPU. Google Kubernetes Engine (GKE) orchestrates TPU workloads, and Zhu highlighted its improvements to scheduling goodput and runtime goodput for large-scale training and inference.
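As a generic, hedged sketch of one of the techniques mentioned, mixed-precision training (not code from the talk; all names are illustrative assumptions), the example below keeps float32 master parameters while running the compute-heavy path in bfloat16, the native TPU matmul format:

# Mixed-precision training sketch (illustrative; not from the talk):
# float32 master weights, bfloat16 compute, as is common on TPUs.
import jax
import jax.numpy as jnp

def loss_fn(params_f32, x, y):
    # Cast parameters and inputs to bfloat16 for the matmul,
    # then accumulate the loss in float32.
    w = params_f32.astype(jnp.bfloat16)
    pred = (x.astype(jnp.bfloat16) @ w).astype(jnp.float32)
    return jnp.mean((pred - y) ** 2)

@jax.jit
def train_step(params_f32, x, y, lr=1e-2):
    # Gradients flow back through the bf16 casts; the update is
    # applied to the float32 master copy to preserve precision.
    grads = jax.grad(loss_fn)(params_f32, x, y)
    return params_f32 - lr * grads

key = jax.random.PRNGKey(0)
params = jax.random.normal(key, (64, 1), dtype=jnp.float32)
x = jax.random.normal(key, (32, 64))
y = jnp.ones((32, 1))
params = train_step(params, x, y)

Keeping the master weights in float32 while computing in bfloat16 roughly halves memory traffic for the heavy operations without the loss-scaling machinery that FP16 typically requires.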
Personnel: Rose Zhu