Work Smarter, Not Harder: FortiAI-Assist from Fortinet

Event: AI Field Day 7

Appearance: Fortinet Presents at AI Field Day 7

Company: Fortinet

Personnel: Max Zeumer, Maggie Wu

At AI Field Day 7, Fortinet presented its FortiAI Assist technology, emphasizing its integration across the Fortinet Security Operations Center (SOC) platform. Max Zeumer, the speaker, highlighted the growing burden on security teams dealing with massive volumes of alerts and limited personnel. FortiAI Assist was designed to alleviate this pain by embedding AI-driven support directly into the SOC interface. The AI assistant can efficiently prioritize alerts, triage investigations, and enrich incident details using available threat intelligence and telemetry. Analysts can interact with FortiAI Assist via typed or spoken queries, enabling them to focus on higher-level strategic decisions while the AI handles data gathering and analysis. Additionally, Fortinet employs a blend of generative AI and pre-built playbooks to orchestrate actions like isolating compromised hosts and compiling incident reports while maintaining a “human-in-the-loop” model for oversight.

Beyond SOC capabilities, FortiAI Assist also extends to Network Operations Centers (NOC), as described by Maggie Wu during her portion of the presentation. In the NOC context, FortiAI aims to simplify and expedite day-one deployment tasks, such as auto-generating configurations from topology diagrams and validating configuration scripts. Day-two operations are boosted by real-time network health assessments, troubleshooting, and suggested fixes, all guided through AI-driven dialogue with the admin. The AI assistant is capable of identifying root causes—like VPN or Wi-Fi failures—and proposing remediations that can be executed upon user confirmation. The technology allows customizable interaction levels so that organizations can maintain compliance with their change management processes.

Fortinet also addressed the flexibility of FortiAI Assist within both Fortinet-exclusive and multi-vendor environments. While native Fortinet deployments offer full capabilities with deep cross-platform interoperability, approximately 80-90% of AI-based functionality is available in broader ecosystems thanks to comprehensive APIs and collaborative integrations with over 500 partners. Organizations can build or customize their own automation connectors, reinforcing Fortinet’s commitment to open systems and vendor interoperability. Furthermore, FortiAI Assist supports scalable adoption strategies, offering options such as detailed change plans and rollback capabilities, enabling organizations to gain trust gradually through staged automation. Fortinet envisions a future where its AI agents collaborate with partner AI solutions, creating a cohesive and intelligent security and network management ecosystem.


Empower Innovation with AI Secured by Fortinet Fabric

Event: AI Field Day 7

Appearance: Fortinet Presents at AI Field Day 7

Company: Fortinet

Personnel: Max Zeumer

In his presentation at AI Field Day 7, Max Zeumer from Fortinet discussed how the rapid adoption of generative AI has transformed the threat landscape and the imperative for organizations to secure their AI usage. He began by highlighting the explosive growth of generative AI compared to past technologies, stressing that enterprises must adapt quickly to its integration. While some organizations are implementing AI in a structured way with governance and compliance, most are still in the early stages and lack visibility and control, which introduces significant risk. Zeumer noted that as AI progresses from reactive prompt-based tools to agentic and autonomous systems, enterprises face mounting challenges to secure data, manage usage, and maintain compliance.

Zeumer emphasized that rapidly evolving AI tools also present new vulnerabilities in the cybersecurity space. Threat actors have begun using AI to create convincing phishing attacks, social engineering campaigns, and malware, often lowering the technical barrier to carrying out sophisticated cyberattacks. He also discussed internal risks, such as employees unknowingly feeding sensitive company data into public AI platforms; this lack of governance can lead to data leakage and regulatory breaches. Fortinet sees a growing need for enterprises to monitor the various applications of AI within their organizations, understanding who is using it, how, and what data is being processed, especially as adversaries increasingly employ AI in weaponized forms.

To address these emerging concerns, Fortinet has developed a comprehensive AI-integrated cybersecurity framework called Fortinet Security Fabric, powered by its proprietary AI platform, FortiAI. This system is structured around three main pillars—FortiAI Protect, Assist, and Secure AI—covering threat detection and prevention, operational augmentation, and safeguarding AI systems themselves. FortiGuard Labs plays a fundamental role by continuously collecting sophisticated threat intelligence and feeding it into these systems. This allows customers to receive accurate, real-time insights, manage risk from generative AI applications, and set governance rules. Fortinet’s unified platform and deep AI capabilities, backed by over 500 patents and years of innovation, position it to help enterprises adopt AI securely while maintaining performance and compliance.


HPE Agentic Smart City Solution – Focusing on Real-World Outcomes

Event: AI Field Day 7

Appearance: HPE Presents at AI Field Day 7

Company: HPE, Kamiwaza.AI

Personnel: Luke Norris, Robin Braun

At AI Field Day 7, Robin Braun from HPE and Luke Norris from Kamiwaza presented their collaborative smart city solution, highlighting a real-world deployment in Vail, Colorado. The focus was on using agentic AI systems to improve core municipal operations such as information access, public safety, affordable housing oversight, and regulatory compliance. By integrating Kamiwaza’s backend intelligence with user-friendly digital interfaces powered by HPE infrastructure, they demonstrated the potential of AI-driven digital concierges and fire detection tools. These virtual assistants can provide localized, real-time information to residents and visitors about everything from dining options to emergency weather updates, while the fire detection system synthesizes data from existing city cameras, 3D geospatial models, and real-time weather data to support proactive emergency response.

One of the less glamorous but highly impactful use cases involves automating the interpretation and management of property deeds and housing regulations, many of which were previously stored on microfiche from decades past. HPE and Kamiwaza developed a solution that digitizes and then applies natural language processing and ontology mapping to thousands of deed restriction documents. This not only saves significant full-time staff hours but also enables scalable and equitable housing enforcement without the need for proportionate increases in bureaucratic staffing. Additionally, the system allows both government and citizens to query property data interactively, improving public access and transparency, and supporting future zoning or service decisions with much better data insight.

A significant part of the presentation focused on the long-term vision and ROI of public sector AI deployments. These were not just experimental pilots; they have already yielded tangible cost and time savings by replacing manual, repetitive processes with AI agents. A key example was the automation of Section 508 compliance audits, which traditionally cost millions of dollars over multiple years but can now be performed in weeks at a fraction of the cost. Additionally, through a network of partners such as SHI for deployment and ProHawk for video enhancement, the smart city platform is designed to scale, support ongoing improvements, and adapt to increasing demands. The project demonstrates how AI transforms government services not by reducing the workforce but by enhancing its capabilities, decision-making speed, and community responsiveness in areas from environmental risk to urban planning.


HPE News from NVIDIA GTC DC 2025

Event: AI Field Day 7

Appearance: HPE Presents at AI Field Day 7

Company: HPE, Kamiwaza.AI

Personnel: Luke Norris, Robin Braun

At AI Field Day 7, HPE presented its latest AI developments announced during NVIDIA’s GTC DC 2025, with a strong focus on its collaborative initiatives to simplify and operationalize AI workloads. Robin Braun and Luke Norris highlighted the challenges organizations face in deploying AI applications, particularly the difficulty of moving from pilot projects to full-scale production. HPE emphasized its partnership model, notably with Kamiwaza, to address this issue by integrating lifecycle management and streamlined AI operations, making it simpler for enterprises and government entities to maintain and update AI deployments.

A major highlight of the presentation was HPE’s AI stack tailored for various deployment scales, including private cloud environments and air-gapped setups suitable for sensitive sectors like public safety. Braun detailed advancements in scaling AI, such as leveraging RTX 6000 Pro GPUs and introducing pre-integrated, lifecycle-managed AI stacks that can function in isolated networks. These stacks are also being tied into HPE’s digital concierge services and AMP offerings, designed to help customers deploy and support AI solutions faster and more reliably, while also ensuring security and compliance across different use cases.

The Town of Vail served as a flagship example demonstrating HPE’s platform capabilities in real-world conditions. By utilizing existing infrastructure such as town-wide cameras and applying Kamiwaza’s AI backend, HPE enabled adaptive workflows, specifically for fire detection and urban sustainability efforts. This approach provided not only cost and operational efficiencies but also embodied Vail’s commitment to renewable energy and environmental goals. The collaboration between HPE, Kamiwaza, and integration partner SHI showcases how AI can drive meaningful public benefits, such as early fire warning systems and safer deployment environments, all while scaling to future smart city applications.


Unleash AI with HPE and Kamiwaza

Event: AI Field Day 7

Appearance: HPE Presents at AI Field Day 7

Company: HPE, Kamiwaza.AI

Personnel: Luke Norris, Robin Braun

At AI Field Day 7, Robin Braun presented HPE’s “Unleash AI” initiative, emphasizing the company’s collaborative, outcomes-based approach to bringing AI into practical use. Braun was joined by Kamiwaza CEO Luke Norris in all three sessions. HPE launched the Unleash AI program in early 2024 with the goal of curating a robust partner ecosystem, offering customers end-to-end solutions that are pre-validated on HPE infrastructure and easy to deploy via the channel. They highlighted the importance of converting AI hype into real solutions by working closely with ISVs and creating relevant demos, marketing collateral, and training resources to make AI more accessible and actionable for enterprises. The program’s global scope and diverse use cases, from Vision AI to agentic AI, demonstrate HPE’s commitment to addressing the real-world needs of customers across various industries.

A key focus of the presentation was the Agentic Smart City AI use case in partnership with Kamiwaza and the town of Vail, Colorado. This initiative is a practical example of how municipalities can solve operational challenges using AI. By working with Vail, HPE and Kamiwaza developed several use cases, including improving Section 508 web accessibility compliance through AI agents that identify and remediate accessibility issues, saving time and avoiding costly manual web redevelopment. This project broke down data silos and enabled interdepartmental collaboration without requiring cloud connectivity, as everything runs securely on HPE infrastructure. The result was not only a technically sound solution but also a model for how public agencies can adopt AI incrementally without excessive risk.
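
To give a flavor of what such an agent automates, here is a minimal, hypothetical accessibility check of the kind that could run across a municipal site at scale. It is illustrative only, not the HPE/Kamiwaza implementation.

```python
# Illustrative only: a minimal accessibility check of the kind an AI agent
# might automate at scale. Not the HPE/Kamiwaza implementation.
from bs4 import BeautifulSoup

def audit_alt_text(html: str) -> list[str]:
    """Flag <img> tags missing alt text, a common Section 508/WCAG failure."""
    soup = BeautifulSoup(html, "html.parser")
    findings = []
    for img in soup.find_all("img"):
        if not img.get("alt", "").strip():
            findings.append(f"Image missing alt text: {img.get('src', '<no src>')}")
    return findings

page = '<img src="trail-map.png"><img src="logo.png" alt="Town of Vail logo">'
for finding in audit_alt_text(page):
    print(finding)  # -> Image missing alt text: trail-map.png
```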

Kamiwaza’s agentic AI platform, demonstrated during the session, operates as a full-stack orchestration engine capable of connecting to and processing data across distributed environments using various hardware and AI models. Whether running on-premises or at the edge, it brings compute to the data while abstracting the underlying hardware, which enhances performance, scalability, and flexibility. The system incorporates advanced features like ReBAC, a relationship-based access control framework that extends role- and attribute-based models, and ephemeral sessions to enforce security and privacy rigorously. It enables enterprises, including government entities, to be “unbound” from token-based AI billing models and instead focus on fixed-cost, outcome-based deployments. These capabilities have already shown transformational potential in environments like Vail and have attracted significant interest from large global enterprises.
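
As a rough illustration of the ReBAC idea (not Kamiwaza’s implementation), access can be derived by walking relationship tuples rather than checking a static role list:

```python
# Illustrative ReBAC sketch: access is derived from relationship tuples
# (subject, relation, object), not from roles alone. Hypothetical data;
# not Kamiwaza's implementation.
RELATIONS = {
    ("analyst_jane", "member_of", "housing_dept"),
    ("housing_dept", "viewer_of", "deed_records"),
}

def allowed(subject: str, relation: str, obj: str, depth: int = 3) -> bool:
    """Check direct tuples, then walk group memberships (bounded depth)."""
    if depth == 0:
        return False
    if (subject, relation, obj) in RELATIONS:
        return True
    groups = [o for s, r, o in RELATIONS if s == subject and r == "member_of"]
    return any(allowed(g, relation, obj, depth - 1) for g in groups)

print(allowed("analyst_jane", "viewer_of", "deed_records"))  # True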


Delivering Valuable AI Insights Requires Protecting AI Data Sources

Event: AI Field Day 7

Appearance: HYCU Presents at AI Field Day 7

Company: HYCU

Personnel: Subbiah Sundaram

In the AI Field Day 7 presentation, Subbiah Sundaram, Senior Vice President of Products at HYCU, highlighted the importance of data protection in the context of AI deployment and insights. Sundaram emphasized that protecting data is not limited to the raw data itself, but also includes configurations, metadata, and associated systems that power AI infrastructure. He outlined HYCU’s multi-faceted approach, starting with free data discovery across a broad range of sources, including SaaS, PaaS, DBaaS, and IaaS environments. Their platform helps enterprises continuously map out and visualize their data estates, identifying unprotected resources and automating categorization — a critical need in today’s highly distributed and complex IT landscape.

Sundaram delved deeper into the challenges of protecting data sources that fuel AI models, particularly in environments that use retrieval-augmented generation (RAG) methods to augment language models with proprietary data. The protection of vector databases, such as Pinecone and Redis, was noted as a key differentiator for HYCU, positioning it as the first enterprise backup vendor to offer such capabilities. He discussed how data spread across public cloud, SaaS platforms, and on-premises infrastructures can be managed and protected from a single control plane, offering portability, granular recovery, and ransomware resilience. Importantly, HYCU’s architecture is modular and API-driven, allowing customers and partners to rapidly integrate new SaaS sources ahead of the market, while also maintaining compliance and service-level agreements.
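
As a conceptual sketch of what protecting a vector database entails, the snippet below exports IDs, vectors, and metadata to an object store so an index could be rebuilt after loss. The iter_records() client API is hypothetical, and HYCU’s actual integrations are product features rather than user code.

```python
# Conceptual sketch of vector-database backup: snapshot IDs, vectors, and
# metadata to durable storage so an index can be rebuilt after loss.
# The index client below is hypothetical; this is not HYCU's implementation.
import json

def export_index(index, batch_size: int = 1000):
    """Yield JSON lines of (id, vector, metadata) from a hypothetical index."""
    for batch in index.iter_records(batch_size=batch_size):  # hypothetical API
        for rec in batch:
            yield json.dumps({"id": rec.id, "values": rec.vector,
                              "metadata": rec.metadata})

def backup_to_object_store(index, bucket, key: str):
    """Write one newline-delimited JSON object per record (boto3-style bucket)."""
    body = "\n".join(export_index(index)).encode()
    bucket.put_object(Key=key, Body=body)
```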

Throughout the presentation, Sundaram underscored a growing enterprise awareness of the need to protect operational and AI-related datasets as they move from experimentation into production environments. He cited key industry data showing that most organizations have experienced at least one SaaS-related data breach in the past year, with significant financial and operational impacts. HYCU’s approach ensures that customers retain ownership of their backup data, avoiding third-party control or markup of cloud storage services. Their global, scalable architecture supports all major cloud providers and emphasizes intelligent data locality to minimize costs. Overall, the presentation framed HYCU as a forward-thinking, customer-centric player in AI and data protection, uniquely positioned to help enterprises maintain data sovereignty, security, and continuity as AI adoption accelerates.


Protecting the Intelligence and Infrastructure Behind AI

Event: AI Field Day 7

Appearance: HYCU Presents at AI Field Day 7

Company: HYCU

Personnel: Sathya Sankaran

In his presentation at AI Field Day 7, Sathya Sankaran, Head of Cloud Products at HYCU, emphasizes the importance of protecting the data and infrastructure that underpin AI systems. He highlights that while much of the AI conversation tends to focus on GPUs and models, the foundational data that fuels AI often lacks comprehensive protection. During AI implementation, vast and varied datasets are generated, modified, and analyzed through data lakes, object storage, and lakehouses, posing significant challenges in maintaining consistency, accuracy, and recoverability. Sankaran underscores that much of this data resides in the cloud, making the cloud the “home of AI,” but this shift also introduces new threats due to fragmented services, inefficiencies, and blind spots in current protection measures.

HYCU aims to solve these challenges by offering broad and deep coverage across diverse cloud workloads, ensuring consistent and meaningful backup and recovery. Unlike traditional backup solutions that may not cater to AI-specific workflows or protect more than raw data, HYCU’s platform captures the entire ecosystem, including metadata, views, access policies, and AI-specific formats such as enriched JSON and vector databases. This level of comprehensive protection enables traceability and rollback capabilities for AI pipelines, which are critical when dealing with issues like schema drift, corrupted data, or poisoned datasets. HYCU’s approach involves aligning backups with stages like model training checkpoints, and doing so in a way that maintains consistency across fragmented and asynchronous data processes.

Adding to this, HYCU’s partnership with Dell and use of deduplication technologies such as DD Boost make backing up even large-scale AI data cost-effective and cloud-resilient. Their solution minimizes storage use and egress costs by identifying and transferring only changed data segments, often achieving up to 40:1 savings. This also supports cross-cloud backups, offering organizations flexibility and protection from vendor lock-in or catastrophic cloud failures. Ultimately, HYCU positions itself as an essential component in modern AI architecture by centralizing protection, enabling long-term recoverability, and reducing operational risk, all while keeping pace with the rapidly evolving landscape of AI workloads.
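
A minimal sketch of source-side deduplication in the spirit of DD Boost, assuming fixed-size segments and a SHA-256 fingerprint index (real implementations use variable-length chunking and distributed indexes):

```python
# Minimal sketch of source-side deduplication: hash fixed-size segments and
# send only those the target has not already seen. Illustrative only; not
# the DD Boost or HYCU implementation.
import hashlib

SEGMENT = 4 * 1024 * 1024  # 4 MiB segments (assumed size)

def changed_segments(data: bytes, known: set[str]):
    """Yield (fingerprint, segment) pairs not already present on the target."""
    for off in range(0, len(data), SEGMENT):
        seg = data[off:off + SEGMENT]
        fp = hashlib.sha256(seg).hexdigest()
        if fp not in known:
            known.add(fp)
            yield fp, seg

# A 40:1 reduction ratio means a 400 TB logical backup set lands as roughly
# 10 TB on disk, and only new or changed segments ever cross the wire.
```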


The Role of Data Protection in AI Readiness

Event: AI Field Day 7

Appearance: HYCU Presents at AI Field Day 7

Company: HYCU

Personnel: Brian Babineau

At AI Field Day 7, Brian Babineau, Chief Customer Officer at HYCU, outlined the company’s direction and how data protection plays a crucial role in AI readiness. He introduced his team and emphasized HYCU’s mission to ensure complete data protection and recoverability, especially as businesses embrace AI technologies and shift workloads to the cloud. Babineau drew on his experience in cybersecurity and MSP environments to underscore the urgency of resilient data strategies in the face of modern threats, including cyberattacks and accidental deletions.

Babineau discussed how AI is transforming IT, especially with integrations into existing business applications and the growing importance of AI-specific architectures like vector databases. HYCU is aligning its R&D and product development strategies to support these emerging data environments. The presentation explained that as data moves into second and third generation public cloud environments, new gaps in data availability and protection are becoming evident. HYCU is focused on addressing these risks to ensure that data feeding AI models is both secure and recoverable, which is fundamental for trust and innovation in AI systems.

To reinforce resilience, Babineau highlighted HYCU’s R-Cloud platform, which offers flexibility in backup and recovery options across multiple environments—on-prem, cloud, multi-cloud, and edge. The emphasis was on giving customers the freedom to protect and recover workloads from any location, preventing vendor lock-in and increasing operational flexibility. Strategic partnerships with companies like Dell and Okta, and support from top-tier investors such as Bain Capital, position HYCU to scale its offerings globally. The ability to manage data across complex infrastructures makes HYCU’s solutions increasingly relevant in an AI-powered enterprise landscape.


From Zero to AI Hero with HPE Private Cloud and Storage Solutions

Event: Cloud Field Day 24

Appearance: HPE Presents at Cloud Field Day 24

Company: HPE

Personnel: Ed Beauvais

Discover how HPE Private Cloud AI (PCAI), co-engineered with NVIDIA, delivers a fast, scalable foundation for enterprise AI workloads. See how HPE Alletra MP X10000 Storage powers data-driven innovation with built-in data intelligence and ultra-fast object performance with RDMA to streamline data pipelines—accelerating customers’ AI journey.

Ed Beauvais from HPE presented the HPE Alletra MP X10000 storage solution, emphasizing the challenge that many AI pilot projects fail to reach production due to data-related issues. He highlighted the need for a shift in mindset, where data is considered the product, particularly in the context of generative AI and RAG. The presentation underscored the importance of having fresh, trusted, and well-managed data for building effective AI models, particularly in a distributed infrastructure environment. HPE’s Alletra Storage MP architecture offers a cloud operating model for managing this data, with the X10000 specifically designed to accelerate data readiness for AI, offering fast performance, simplified management, and enterprise-grade features.

A key innovation discussed was RDMA for Object, which enables a near doubling of throughput, reduced latency, and lower host CPU consumption, all aimed at accelerating insights. HPE is moving beyond traditional storage by incorporating built-in data intelligence, enabling customers to gain deeper insights into their data assets. The presentation also detailed HPE’s approach to transforming unstructured data by adding metadata tags in an open format (Iceberg tables), facilitating SQL queries and integration with LLMs. This enables the extraction of vector embeddings and integration with an MCP server, giving agentic AI insight into the storage system and enhancing security and data curation.
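
Once object metadata is published to open tables (Iceberg, in HPE’s description), ordinary SQL can curate data for AI pipelines. The sketch below uses DuckDB as a stand-in query engine; the table and column names are hypothetical, not HPE’s schema.

```python
# Conceptual sketch: SQL over published object-metadata tags to curate data
# for RAG ingestion. DuckDB stands in here; names are hypothetical.
import duckdb

con = duckdb.connect()
con.execute("""
    CREATE TABLE object_metadata (key TEXT, content_type TEXT,
                                  pii_detected BOOLEAN, modified DATE)
""")
con.execute("""
    INSERT INTO object_metadata VALUES
    ('docs/report-q1.pdf', 'application/pdf', FALSE, DATE '2025-03-02'),
    ('hr/resume-123.docx', 'application/docx', TRUE,  DATE '2025-04-11')
""")
# Select only fresh, PII-free documents as candidates for embedding.
rows = con.execute("""
    SELECT key FROM object_metadata
    WHERE NOT pii_detected AND modified > DATE '2025-01-01'
""").fetchall()
print(rows)  # [('docs/report-q1.pdf',)]
```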

Looking forward, HPE plans to bring compute closer to the data, offering customers flexible management facilities and working with solution providers to enable data cataloging and management. HPE envisions a future where customers can bring their own models for data analysis, with a data intelligence server integrated within the storage fabric. The ultimate goal is to provide intelligence and visibility across the entire unstructured data ecosystem, enabling customers to fully leverage their data assets through the Alletra Storage MP platform.


Eliminating Hypervisor Lock-In and Accelerating Private Cloud with HPE

Event: Cloud Field Day 24

Appearance: HPE Presents at Cloud Field Day 24

Company: HPE

Personnel: Bharath Ram Ramanathan, Dave Elder

Live from CFD, uncover how Hewlett Packard Enterprise (HPE) is eliminating VMware lock-in and accelerating Private Cloud adoption with HPE Morpheus VM Essentials. See how this powerful solution integrates VMware environments with HPE’s KVM-based hypervisor for seamless migration, VM-vending, and management. Discover how HPE VM Essentials, available as software or embedded in HPE Private Cloud Business Edition with HPE Alletra B10000 or HPE SimpliVity storage, streamlines virtualization and enhances hybrid cloud agility.

HPE is addressing the challenges customers face due to the Broadcom acquisition of VMware, including escalating costs, vendor lock-in, and uncertain strategies. With the industry seeing a surge in customers actively evaluating alternatives to VMware, HPE offers Morpheus VM Essentials as a solution, providing enterprise-grade features built on the KVM hypervisor. The focus is on delivering a hybrid virtualization management environment through a single pane of glass, allowing customers to manage both VMware and HPE’s HVM (HPE VM clustering) infrastructure. This includes essential tooling for IPAM, DNS, and backup integration.

Morpheus VM Essentials serves as a foundational virtualization product, with the option to upgrade to Morpheus Enterprise for expanded functionality, including private and public cloud management, automation, and ITSM integration. The tier can be compared to vSphere Enterprise or Enterprise Plus with the vCenter console included. HPE provides migration capabilities from VMware to HVM, converting VMDK disks to the QCOW2 format during the process; migration relies on a proprietary tool built into the Morpheus UI rather than third-party tools.
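
The disk-format conversion step is conceptually similar to what the open-source qemu-img utility performs. The sketch below illustrates only that conversion; it is not HPE’s built-in migration tool.

```python
# Illustrative only: HPE's migration tool is proprietary and built into the
# Morpheus UI. This sketch shows the underlying VMDK-to-QCOW2 disk-format
# conversion using the open-source qemu-img utility.
import subprocess

def vmdk_to_qcow2(src: str, dst: str) -> None:
    """Convert a VMware VMDK disk image to a KVM-friendly QCOW2 image."""
    subprocess.run(
        ["qemu-img", "convert", "-p", "-f", "vmdk", "-O", "qcow2", src, dst],
        check=True,  # raise if the conversion fails
    )

# Example usage (paths are placeholders):
vmdk_to_qcow2("app-server.vmdk", "app-server.qcow2")
```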

VM Essentials is supported on HPE’s disaggregated HCI and SimpliVity hyper-converged infrastructure, offering flexibility in deployment and hardware choices, as well as support for third-party hardware. HPE is actively working with ISVs, including backup vendors like Veeam, Commvault, and soon Cohesity, to provide comprehensive integration with the platform. By offering these solutions, HPE aims to alleviate VMware lock-in and enhance hybrid cloud agility for its customers. It integrates with GreenLake through deployment options and by feeding telemetry data to GreenLake dashboards.



Enabling Hybrid Cloud Anywhere with HPE CloudOps Software

Event: Cloud Field Day 24

Appearance: HPE Presents at Cloud Field Day 24

Company: HPE

Personnel: Brad Parks, Juden Supapo

In this CFD session, we explore how Hewlett Packard Enterprise (HPE) is transforming the way enterprises provision, manage, and protect hybrid cloud environments with the HPE CloudOps Software suite, comprising HPE Morpheus Enterprise, HPE OpsRamp, and HPE Zerto. The session includes discussion and a live demo of HPE’s orchestration and automation control plane for on-prem technologies like VMware, Nutanix, Microsoft, and Red Hat, as well as public clouds including AWS, Azure, GCP, Oracle, and more.

HPE addresses the complexities of hybrid cloud environments with its CloudOps portfolio, which includes Morpheus, OpsRamp, and Zerto. Morpheus focuses on delivering self-service capabilities for provisioning VMs, containers, and application stacks across various environments, offering a unified catalog of services, a comprehensive API, a Terraform provider, and a ServiceNow plugin. It provides consistency in the provisioning experience and automates the dependencies involved, such as IP address assignment, DNS entries, Ansible scripts, observability agent installation, backup job creation, and cost allocation. Morpheus also includes a built-in cluster management engine for provisioning Kubernetes clusters and offers a KVM stack, now delivered as Morpheus VM Essentials, to provide core hypervisor capabilities.
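
As an illustration of the API-driven provisioning experience, the hedged sketch below posts an instance request to a Morpheus-style REST endpoint. The path and payload fields are recalled from public documentation and should be treated as assumptions, not a verified schema.

```python
# Illustrative call to a Morpheus-style REST API; endpoint and payload
# fields are simplified and should be verified against the product docs.
import requests

MORPHEUS = "https://morpheus.example.com"  # placeholder appliance URL
TOKEN = "…"  # API access token (placeholder)

payload = {
    "zoneId": 1,  # target cloud/environment (assumed field)
    "instance": {
        "name": "web-01",
        "site": {"id": 1},                 # group (assumed field)
        "instanceType": {"code": "ubuntu"},
        "plan": {"id": 2},                 # service plan (assumed field)
    },
}
resp = requests.post(f"{MORPHEUS}/api/instances",
                     headers={"Authorization": f"Bearer {TOKEN}"},
                     json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["instance"]["id"])  # newly provisioned instance ID
```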

OpsRamp is geared towards day two and beyond operations, focusing on observability and monitoring of infrastructure. It offers hybrid discovery, observability, and monitoring capabilities across compute, network, storage, virtualization, and containerization, supporting various cloud platforms and providing a unified view of the infrastructure. OpsRamp aims to correlate alerts, identify root causes, and integrate with ITSM platforms for incident management, as well as enabling intelligent automation for remediation. The platform’s architecture involves deploying OpsRamp gateways for on-prem infrastructure and using agents for servers, providing active monitoring and automation capabilities, with plans to incorporate user experience monitoring.

The integration between Morpheus and OpsRamp enables combined day-zero/one and day-two operations, with Morpheus handling provisioning and OpsRamp focusing on monitoring and management post-deployment. The two platforms can be linked to trigger operational workflows for remediation and present observability data within the Morpheus UI. Both platforms emphasize automation, API-driven approaches, and integration with existing tools and workflows, facilitating a unified and streamlined experience for managing hybrid cloud environments. Policies around tagging and access control were also discussed as essential features to support.


HPE’s Hybrid Cloud Strategy & Portfolio Overview

Event: Cloud Field Day 24

Appearance: HPE Presents at Cloud Field Day 24

Company: HPE

Personnel: Brad Parks

Brad Parks from HPE opens by outlining the company’s hybrid cloud strategy and portfolio, emphasizing the importance of achieving a cloud operating model for AI and other initiatives. He highlights the challenges posed by technical debt and the complexities of heterogeneous enterprise environments. The goal is to address these complexities with solutions that transcend individual tech stacks, focusing on provisioning, governance, security policy, and FinOps at a broader level.

The HPE portfolio is presented as a “Fantastic Four” analogy, with GreenLake as the unifying leader, providing a platform for accessing various HPE services through a single pane of glass. The CloudOps Suite is introduced as the software control plane, comprising Morpheus for self-service provisioning and lifecycle management, OpsRamp for observability and reducing mean time to resolution, and Zerto for cyber resiliency and data recovery. This suite aims to manage the lifecycle of application workloads across any cloud, hypervisor, or hardware profile.

Beyond the core components, HPE offers a private cloud portfolio with pre-engineered turnkey systems and flexible options, leveraging the CloudOps Suite. The ultimate goal is to help customers achieve a modern cloud operating model, improve efficiency, and accelerate time to value for both traditional and AI workloads. HPE is also working on integrating these services, building on existing integrations, and enabling use cases like automated database deployment with cost optimization. The placement of Zerto under the CloudOps suite is explained by its focus on protecting workloads, which exist across different environments and technologies, rather than being tied to specific storage types.


Protecting the Keys to the Kingdom with Fortinet

Event: Cloud Field Day 24

Appearance: Fortinet Presents at Cloud Field Day 24

Company: Fortinet

Personnel: Derrick Gooch, Julian Petersohn, Srija Allam

The Three Pillars of Fortinet AI Security: Protect from AI, Assist with AI, and Secure AI. This demonstration illustrates how Fortinet combines AI-driven analytics for SOC assistance with deep protection for AI workloads themselves, showcasing a simulated attack on a cloud-based e-commerce application powered by an AI chatbot and highlighting vulnerabilities that can be exploited through prompt injection and server-side request forgery (SSRF). Julian, acting as the attacker, successfully gains access to AWS metadata, steals credentials, and manipulates the chatbot to respond in “ducky language” by injecting malicious content into the S3 bucket storing review data. The attack demonstrated how an attacker could exploit hidden or overlooked API features, underscoring the importance of input sanitization and proper configuration of cloud resources.

Srija then demonstrates Fortinet’s web application firewall (FortiWeb) capabilities in mitigating SSRF attacks through input validation and parameter filtering. By creating rules that block requests targeting local or auto-configuration (link-local) IP addresses, FortiWeb successfully prevents Julian from obtaining a new token. Derrick showcases FortiCNAP’s ability to monitor API calls, detect malicious activity based on IP address geolocation, and identify misconfigured roles with excessive entitlements.
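
For illustration, the guardrail FortiWeb applies at the WAF layer can also be expressed application-side as a check that rejects URLs resolving to link-local or private addresses. This minimal Python sketch is not Fortinet code.

```python
# Illustrative application-side SSRF guard: reject URLs that resolve to the
# cloud metadata service or other link-local/private/loopback addresses.
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_outbound(url: str) -> bool:
    """Return False for URLs resolving to metadata/link-local/private IPs."""
    host = urlparse(url).hostname or ""
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False  # unresolvable input is rejected, not trusted
    return not (addr.is_link_local or addr.is_private or addr.is_loopback)

print(is_safe_outbound("http://169.254.169.254/latest/meta-data/"))  # False
```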

Finally, Derrick initiates an automated remediation workflow using FortiSOAR, triggered by the detection of malicious activity. The workflow cleans the malicious file from the S3 bucket, blocks access from the attacker’s IP address, and revokes the temporary credentials, showcasing a comprehensive approach to threat detection, response, and remediation in a cloud environment. The presentation concludes by reinforcing the importance of a layered security approach that combines preventive measures, monitoring, and automated responses to protect AI-powered applications and cloud infrastructure.


AI Powered Web Application Protection with Fortinet

Event: Cloud Field Day 24

Appearance: Fortinet Presents at Cloud Field Day 24

Company: Fortinet

Personnel: Derrick Gooch, Julian Petersohn, Srija Allam

Fortinet’s approach to securing AI workloads involves a layered defense strategy. Their presentation at Cloud Field Day 24 demonstrated SQL injection (SQLi), Server-Side Request Forgery (SSRF), and model manipulation attacks against an AI-powered application using the Model Context Protocol (MCP), showcasing how Fortinet solutions protect at each stage of the attack kill chain. The demonstration highlighted the vulnerabilities introduced by AI agents and the importance of securing this new attack surface.

The demo environment, deployed in AWS as microservices, features a vulnerable e-commerce application (“Juice Shop”) augmented with an AI chatbot. Traffic between VPCs is routed through a security services VPC, where FortiWeb (web application firewall) and FortiGate provide inspection. The attack flow involves a user interacting with the chatbot, which then communicates with a large language model (OpenAI) via MCP. This interaction exposes vulnerabilities, as demonstrated by an attacker successfully injecting SQL code through the chatbot interface, bypassing traditional web application firewall protections.

Fortinet demonstrated how FortiWeb’s machine learning capabilities can detect and mitigate these attacks. By learning normal application traffic and building a model of expected API behavior, FortiWeb can identify anomalous requests, such as SQL injection attempts. The system then evaluates these alerts, leveraging its threat intelligence database to determine appropriate actions, including blocking malicious requests. Furthermore, FortiWeb’s AI assistant provides detailed analysis of attacks, including remediation recommendations, and generates API documentation to keep pace with rapidly evolving application APIs.
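
FortiWeb catches the anomalous request in-line; inside the application, the underlying weakness is SQL assembled from untrusted chatbot text. A minimal, self-contained contrast (not Fortinet code):

```python
# Illustrative contrast: SQL built from untrusted chatbot text vs. parameter
# binding. Not Fortinet code; shows the root cause the WAF compensates for.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # attacker text relayed via the chatbot

# Vulnerable: the payload rewrites the WHERE clause and dumps every row.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()
print(len(rows))  # 1 row leaked despite no matching name

# Safe: parameter binding keeps the payload as inert data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(rows))  # 0
```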


Defending Cloud AI Applications with Fortinet

Event: Cloud Field Day 24

Appearance: Fortinet Presents at Cloud Field Day 24

Company: Fortinet

Personnel: Aidan Walden

The scalability, GPU access, and managed services of the public cloud make it the natural platform for developing and deploying AI and LLM-based applications, and this shift changes the architecture of security itself. Fortinet is focusing on securing AI applications in the cloud, a topic that dominates its conversations with customers. They emphasize the cloud’s unique ability to provide the scalability needed to run GPUs and TPUs, simplifying deployment and accelerating the development of agentic services. They are seeing increased reports of model theft and prompt injection attacks, alongside traditional hygiene issues like misconfigurations and stolen credentials, highlighting the growing need for robust security measures in cloud-based AI deployments.

Fortinet’s approach involves a layered security strategy that incorporates tools such as FortiOS for zero-trust access and continuous posture assessment, FortiCNAP for vulnerability scanning throughout the AI workload lifecycle, and FortiWeb for web application and API protection. FortiWeb uses machine learning to detect anomalous activities and sanitize LLM user input, addressing the OWASP Top 10 threats to LLMs. The company also highlights the importance of data protection, implementing data leak prevention measures on endpoints and in-line to control access to sensitive data and training data.

The presentation outlines a demo environment showcasing a segmented network with standard security measures in place. Fortinet will inspect both north-south and east-west traffic between nodes, monitoring the environment with FortiCNAP. The demo will demonstrate how a combination of old and new attacks, such as SQL injection escalating into SSRF and model corruption, can compromise AI applications. The aim is to highlight the importance of securing access, implementing robust data protection measures, and maintaining vigilance against evolving AI-specific threats.


Where are we going with Oxide Computer Integrations

Event: Cloud Field Day 24

Appearance: Oxide Presents at Cloud Field Day 24

Company: Oxide Computer Company

Personnel: Matthew Sanabria

Matthew Sanabria focuses on future integrations for the Oxide Computer Company, aiming to expand its capabilities and make it a more attractive choice for customers. These integrations include a Kubernetes CSI plugin to expose Oxide storage to Kubernetes, the Kubernetes Cluster API to create and manage clusters declaratively across platforms, and observability enhancements. The goal is to provide a comprehensive platform that integrates seamlessly with existing infrastructure and tools.

A key component of the future integrations is centered around observability. Oxide has developed a Grafana data source plugin that translates Oxide metrics for Grafana, eliminating the need for operators to use OXQL directly. Additionally, an OpenTelemetry receiver is being developed to convert Oxide metrics to the OpenTelemetry format, enabling users to send data to their preferred observability vendors, such as Datadog or Honeycomb. This effort aims to provide flexibility and compatibility with existing observability platforms.
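
Oxide’s receiver targets the OpenTelemetry Collector; purely as an illustration of the translation it performs, the sketch below re-emits hypothetical rack telemetry through the OpenTelemetry Python SDK. The fetch_temps() source and metric names are invented.

```python
# Illustrative only: shows the shape of converting rack metrics into
# OpenTelemetry instruments. The fetch_temps() source is hypothetical.
from opentelemetry import metrics
from opentelemetry.metrics import Observation
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (ConsoleMetricExporter,
                                              PeriodicExportingMetricReader)

def fetch_temps():
    # Hypothetical: pull current sled temperatures from the rack's API.
    return {"sled-07": 41.5, "sled-12": 39.0}

def observe_temps(options):
    """Callback invoked on each collection cycle."""
    for sled, celsius in fetch_temps().items():
        yield Observation(celsius, {"sled": sled})

reader = PeriodicExportingMetricReader(ConsoleMetricExporter())
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
meter = metrics.get_meter("oxide-otel-bridge")
meter.create_observable_gauge("oxide.sled.temperature",
                              callbacks=[observe_temps], unit="Cel")
```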

The discussion expanded to potential use cases for Oxide in various verticals. Oxide aims to replace existing hypervisor infrastructure, offering a lower licensing cost option with its own hypervisor. For life sciences, research pipelines and data pre-formatting for supercomputers are key areas. Furthermore, Oxide emphasized that its platform provides traditional VMs capable of supporting any software, addressing compatibility concerns and expanding the types of applications and workloads that can be deployed on the Oxide platform.


Oxide Integrations: Empowering Platform Teams and Developers with Oxide Computer

Event: Cloud Field Day 24

Appearance: Oxide Presents at Cloud Field Day 24

Company: Oxide Computer Company

Personnel: Matthew Sanabria

Matthew Sanabria from Oxide Computer Company discusses integrations that empower platform teams and developers to build on top of the Oxide platform. As Oxide is API-driven, these integrations are crucial for engineering teams needing to work at scale. Sanabria covers three platform integrations: a Go SDK, a Terraform provider, and a Packer plugin, demonstrating how each allows developers to interact with and manage resources on Oxide in a familiar way. The Go SDK offers programmatic access to the Oxide rack, while the Terraform provider enables state management for resources, and the Packer plugin allows the creation of custom images with baked-in application logic.
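
Because the rack is API-driven, any HTTP client can do what the SDK does under the hood. The sketch below creates an instance with plain Python requests; the endpoint path and fields are written from memory of Oxide’s public API docs and should be treated as assumptions.

```python
# Illustrative direct call to the Oxide HTTP API that the Go SDK wraps.
# Endpoint path and request fields are assumptions to be verified.
import requests

OXIDE = "https://oxide.example.com"  # placeholder rack/silo URL
TOKEN = "…"  # device/API token (placeholder)

resp = requests.post(
    f"{OXIDE}/v1/instances",
    params={"project": "demo"},  # project selector (assumed parameter)
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"name": "web-01", "description": "demo VM", "hostname": "web-01",
          "ncpus": 2, "memory": 4 * 1024**3},  # memory in bytes
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["id"])  # ID of the newly created instance
```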

The presentation then shifts focus to Kubernetes integrations, which are vital for teams standardizing on Kubernetes. Oxide provides integrations for deploying and managing Kubernetes on its infrastructure, including a Cloud Controller Manager, a Rancher node driver, and an Omni infrastructure provider. The Cloud Controller Manager manages node health, load balancing, and routes, providing a Kubernetes-native integration. The Rancher node driver enables users to create Kubernetes clusters on Oxide via the Rancher UI, and the Omni infrastructure provider integrates with Talos Linux, an immutable Linux operating system designed for Kubernetes nodes.

Sanabria demonstrates these Kubernetes integrations in action, showing how the Cloud Controller Manager ensures node health and prevents cascading failures, how the Rancher node driver simplifies cluster creation, and how the Omni infrastructure provider automates the deployment of Talos Linux nodes on Oxide. These integrations provide flexibility for running Kubernetes on Oxide and allow future enhancements, such as load balancers and other controllers, to be seamlessly integrated with Kubernetes.


Who wants Oxide Computer and Why

Event: Cloud Field Day 24

Appearance: Oxide Presents at Cloud Field Day 24

Company: Oxide Computer Company

Personnel: Bryan Cantrill

The video centers on Oxide’s mission to address the inefficiencies and integration challenges of a commoditized, ossified server industry. Oxide started with a clean sheet of paper, tackling problems accumulated over decades by building machines fit for purpose rather than putting personal computers in data centers. Oxide aims to fix the industry’s visceral problems: AC power supplies in every 1U/2U server, cords everywhere, fans everywhere, and pervasive inefficiency, and that is just the beginning. Oxide sought not only to replicate what hyperscalers had done, such as using a DC bus bar design, but also to leapfrog them with new differentiators, like the cabled backplane, which removes cabling from the sleds. Another big bet the company made was removing the BIOS, the basic input/output system whose lineage traces back to CP/M.

Bryan Cantrill shared that the company has differentiated on several fronts, including power and efficiency, reliability, operability, and time to deployment, aiming for developers to be working within hours of the IT team uncrating the sleds. When Oxide board member Pierre Lamond asked Cantrill what Oxide’s differentiator was, he responded that there is no single differentiator but many. This approach enables Oxide to serve multiple verticals and address distinct customer pain points. Oxide initially underestimated the demand from AI companies, and was surprised to find that the security of its system, particularly the true root of trust and attestation of the entire stack, was a major draw for these companies.

Addressing concerns about single vendor lock-in, Cantrill emphasized Oxide’s commitment to transparency through open-source software. The entire stack is opened up, including the service processor and associated software. The open-source approach, while not entirely mitigating single-vendor risk, provides customers with unprecedented visibility and control, fostering confidence and helping manage risk. Finally, acknowledging the barrier to entry for enterprises due to the rack-level integration, Oxide offers a trial program with a rack in a co-location facility, allowing potential customers to experience the benefits of Oxide’s system firsthand.


Scaling Up The Cloud Computer with Oxide Computer

Event: Cloud Field Day 24

Appearance: Oxide Presents at Cloud Field Day 24

Company: Oxide Computer Company

Personnel: Steve Tuck

Cloud computing has been the most significant platform shift in computing history, allowing companies to modernize and grow their businesses. While cloud computing has accelerated businesses, it has begun to hit its limits. Companies need to extend their operations beyond the public cloud for reasons like locality, security, sovereignty, and regulatory compliance. However, operating infrastructure outside the public cloud often feels like a step back in time, relying on traditional rack-and-stack approaches with limited efficiency and utility.

Oxide Computer Company aims to address this by bringing true cloud computing on-premises. To achieve this, they’ve built a completely different type of computer, a rack-scale system designed holistically from the printed circuit board to the APIs. This approach delivers improvements in density, energy efficiency, and operability, all plumbed with software for operator automation. The goal is to provide businesses with elastic, on-premises scalable computing that mirrors the efficiencies enjoyed by hyperscalers.

The Oxide system features a modular sled design for easy component upgrades, DC power, and a comprehensive software stack, including firmware, an operating system, a hypervisor, and a cloud control plane. This design enables elastic services like compute, storage, and networking, with multi-tenancy and security built in. The company has seen a surge in demand, expanding manufacturing operations and targeting various verticals, including federal, financial services, life sciences, energy, and manufacturing. Oxide focuses on providing modern API-driven services, improved utilization, enhanced energy efficiency, and a trusted product to alleviate public cloud costs and enterprise software challenges.


Enterprise Storage for the Cloud – Simplify, Scale, and Save with Pure Storage

Event: Cloud Field Day 24

Appearance: Pure Storage Presents at Cloud Field Day 24

Company: Pure Storage

Personnel: David Stamen

Pure Storage Cloud brings enterprise-grade storage to the cloud with simplicity, resilience, and efficiency. This session dives into the technical foundations that deliver consistent performance and protection while helping organizations reduce costs across cloud migration, disaster recovery, and hybrid deployments.

David Stamen introduced Pure Storage Cloud as an update to the portfolio, emphasizing a shift towards a cloud-preferred model where data availability is paramount. The new portfolio includes Pure Storage Cloud Dedicated (formerly Cloud Block Store) and the Pure Storage Cloud Azure Native service, signifying a unified experience under a single control plane. Managed services are also a key component, catering to customers seeking hosted stacks within hyperscalers, such as Azure VMware Solution and Elastic VMware Service, as well as in cloud-adjacent environments. This unified experience ensures consistent management and licensing, regardless of whether customers use Evergreen//One or CapEx-based purchasing, all managed through Pure1 with Purity.

The presentation addressed the challenges customers face when adopting the cloud, including rising costs, limited visibility, and overprovisioning due to bundled performance and capacity. To address these issues, Pure Storage Cloud offers a unified data plane with features such as data reduction (thin provisioning, deduplication, and compression), advanced replication options (synchronous, continuous, and periodic), built-in high availability, double data-at-rest encryption, and best-in-class snapshots. These capabilities aim to provide cost efficiency, performance optimization, and enhanced data protection, resolving the sprawl and management complexities associated with diverse cloud storage options.
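
Back-of-envelope arithmetic shows why unbundling capacity from performance and applying data reduction matters; the ratios below are illustrative, not Pure-published guarantees.

```python
# Illustrative arithmetic: data reduction ratios multiply, so provisioned
# capacity shrinks quickly. Ratios are assumptions, not Pure guarantees.
logical_tb = 100          # what applications think they store
dedup_ratio = 2.0         # assumed deduplication savings
compression_ratio = 2.0   # assumed compression savings

effective_tb = logical_tb / (dedup_ratio * compression_ratio)
print(f"{effective_tb:.1f} TB provisioned")  # 25.0 TB backs 100 TB logical
```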

A significant development highlighted was the Pure Storage Cloud Azure Native Service, which allows Pure Storage to build and operate a native service that integrates seamlessly with Azure. Key features include on-demand performance scaling, native integration with Azure services via a resource provider, and simplified deployment and management within the Azure portal. Plans include expanding support for Azure VMs, enabling easy connectivity configuration, and potentially integrating with other native services such as containerization platforms (e.g., Azure Kubernetes) and PaaS offerings.