|  | 
This video is part of the appearance, “Fortinet Presents at Cloud Field Day 24“. It was recorded as part of Cloud Field Day 24 at 13:30-15:30 on October 22, 2025.
Watch on YouTube
Watch on Vimeo
The scalability, GPU access, and managed services of public cloud make it the natural platform for developing and deploying AI and LLM-based applications—and why this changes the architecture of security itself. Fortinet is focusing on securing AI applications in the cloud, a topic that dominates its conversations with customers. They emphasize the cloud’s unique ability to provide the scalability needed to run GPUs and TPUs, simplifying deployment and accelerating the development of agentic services. They are seeing increased reports of model theft and prompt injection attacks, alongside traditional hygiene issues like misconfigurations and stolen credentials, highlighting the growing need for robust security measures in cloud-based AI deployments.
Fortinet’s approach involves a layered security strategy that incorporates tools such as FortiOS for zero-trust access and continuous posture assessment, FortiCNAP for vulnerability scanning throughout the AI workload lifecycle, and FortiWeb for web application and API protection. FortiWeb uses machine learning to detect anomalous activities and sanitize LLM user input, addressing the OWASP Top 10 threats to LLMs. The company also highlights the importance of data protection, implementing data leak prevention measures on endpoints and in-line to control access to sensitive data and training data.
The presentation outlines a demo environment showcasing a segmented network with standard security measures in place. Fortinet will inspect both north-south and east-west traffic between nodes, monitoring the environment with FortiCNAP. The demo will demonstrate how a combination of old and new attacks, such as SQL injection escalating into SSRF and model corruption, can compromise AI applications. The aim is to highlight the importance of securing access, implementing robust data protection measures, and maintaining vigilance against evolving AI-specific threats.
Personnel: Aidan Walden









