This presentation took place on February 22, 2024, from 10:45 to 12:15.
Presenters: Ameer Abbas, Brandon Royal
Google Cloud AI Platforms and Infrastructure
Watch on YouTube
Watch on Vimeo
In this session, we’ll explore how Vertex AI, Google Kubernetes Engine (GKE), and Google Cloud’s AI infrastructure provide a robust platform for AI development, training, and inference. We’ll discuss hardware choices for inference (CPUs, GPUs, TPUs), showcasing real-world examples. We’ll cover distributed training and inference with GPUs/TPUs and optimizing AI performance on GKE using tools like autoscaling and dynamic workload scheduling.
Brandon Royal, product manager at Google Cloud, discusses deploying AI on Google Cloud’s AI infrastructure. The session focuses on how Google Cloud applies AI to solve customer problems and on current trends in AI, particularly the platform shift toward generative AI. Brandon covers the AI infrastructure designed for generative AI, including inference, serving, training, and fine-tuning, and how each is applied in Google Cloud.
Brandon explains the evolution of AI models, particularly open models, and their importance for flexibility in deployment and optimization. He highlights that many AI startups and unicorns choose Google Cloud for their AI infrastructure and platforms. He also introduces Gemma, a new open model released by Google DeepMind, which is lightweight, state-of-the-art, and built on the same technology as Google’s Gemini model. Gemma is available with open weights on platforms like Hugging Face and Kaggle.
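As a rough illustration of what “open weights” means in practice, the sketch below pulls a Gemma checkpoint from Hugging Face with the transformers library and runs a short generation. The exact model id and settings are assumptions for illustration, and access to the weights requires accepting the model’s license on the hub.

```python
# Minimal sketch: loading Gemma's open weights from Hugging Face.
# The model id and generation settings are illustrative assumptions;
# check the Gemma model card for the published variants.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # assumed variant; larger checkpoints also exist
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Explain Kubernetes in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```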
The session then shifts to a discussion about AI platforms and infrastructure, with a focus on Kubernetes and Google Kubernetes Engine (GKE) as the foundation for open models. Brandon emphasizes the importance of flexibility, performance, and efficiency in AI workloads and how Google provides a managed experience with GKE Autopilot.
He also touches on the hardware choices for inference, including CPUs, GPUs, and TPUs, and how Google Cloud offers the largest selection of AI accelerators in the market. Brandon shares customer stories, such as Palo Alto Networks’ use of CPUs for deep learning models in threat detection systems, and then discusses deploying models on GKE, including autoscaling and dynamic workload scheduling.
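To make the autoscaling piece concrete, here is a minimal sketch that attaches a CPU-based HorizontalPodAutoscaler to a hypothetical inference Deployment using the official Kubernetes Python client. The deployment name, namespace, and thresholds are assumptions; in practice the demo may rely on GKE’s built-in autoscaling features rather than client code like this.

```python
# Minimal sketch: autoscaling a hypothetical inference Deployment on GKE.
# Names, namespace, and thresholds are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="inference-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="model-server"  # assumed name
        ),
        min_replicas=1,
        max_replicas=8,
        target_cpu_utilization_percentage=60,  # scale out when average CPU exceeds 60%
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```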
Finally, Brandon gives a live demo of deploying the Gemma model on GKE, showing how the model generates responses and how retrieval-augmented generation can ground those responses in retrieved context. He also demonstrates Gradio, a chat-based interface for interacting with models, and discusses scaling and managing AI workloads on Google Cloud.
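The Gradio part of the demo can be approximated in a few lines of Python. The in-cluster endpoint URL and response schema below are assumptions for illustration, not the demo’s actual serving stack.

```python
# Minimal sketch: a Gradio chat front end for a Gemma server running on GKE.
# The endpoint URL and JSON schema are hypothetical.
import requests
import gradio as gr

GEMMA_ENDPOINT = "http://gemma-service:8000/generate"  # assumed in-cluster service


def respond(message, history):
    # Forward the user prompt to the model server and return its reply.
    payload = {"prompt": message, "max_tokens": 256}
    reply = requests.post(GEMMA_ENDPOINT, json=payload, timeout=60).json()
    return reply.get("generated_text", "")


# ChatInterface wraps the function in a chat-style web UI.
gr.ChatInterface(respond, title="Gemma on GKE").launch(server_name="0.0.0.0")
```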
Personnel: Brandon Royal
Developer and Operational Productivity in Google Cloud with Duet AI
Watch on YouTube
Watch on Vimeo
In this session, we’ll demonstrate how Duet AI enhances developer and operational productivity. We’ll explore how Google’s state-of-the-art AI is applied to address real-world development and operations challenges. Topics include context-aware code completion, licensing compliance assistance, code explanation, test generation, operational troubleshooting and more. We’ll share customer successes and insights from within Google that inform continuous improvement of AI productivity tools.
Ameer Abbas, Senior Product Manager at Google Cloud, provides a demonstration of Duet AI and its application in enhancing developer and operational productivity. He explains how Google’s state-of-the-art AI is applied to real-world development and operations challenges, emphasizing its role in assisting with context-aware code completion, licensing compliance, code explanation, test generation, operational troubleshooting, and more.
Ameer highlights the division of Google’s AI solutions for consumers and enterprises, mentioning products like Gemini (formerly Bard), MakerSuite, the PaLM API, Workspace, and Vertex AI. Vertex AI is a platform for expert practitioners to build, extend, tune, and serve their own machine learning models, while Duet AI offers ready-to-consume solutions built on top of foundation models.
He discusses the importance of modern applications that are dynamic, scalable, performant, and intelligent, and how they contribute to business outcomes. Ameer references the DevOps Research and Assessment (DORA) community and its four key metrics: lead time for changes, deployment frequency, change failure rate, and time to restore service.
The presentation includes a live demo where Ameer uses Duet AI within the Google Cloud Console and an Integrated Development Environment (IDE) to perform tasks such as generating API specs, creating a Python Flask app, and troubleshooting errors. He demonstrates how Duet AI can understand and generate code based on prompts, interact with existing code files, and provide explanations and suggestions for code improvements. Ameer also shows how Duet AI can help generate unit tests and documentation, fix errors, and learn from user interactions to improve future suggestions.
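For context, the kind of Python Flask app scaffolded in the demo looks roughly like the sketch below; the routes and payloads are illustrative assumptions rather than the code Duet AI generated on screen.

```python
# Minimal sketch of a Flask app like the one created during the demo.
# Routes and payloads are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/health")
def health():
    # Simple liveness endpoint.
    return jsonify(status="ok")


@app.route("/echo", methods=["POST"])
def echo():
    # Return the posted JSON body back to the caller.
    return jsonify(request.get_json(silent=True) or {})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```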
The demo showcases how Duet AI can be integrated into different stages of the software development lifecycle and how it can be a valuable tool for developers and operators in the cloud environment. Ameer concludes by mentioning future features like the ability to have Duet AI perform actions on the user’s behalf and the incorporation of agents for proactive assistance.
Personnel: Ameer Abbas