Eliminating Hypervisor Lock In and Accelerating Private Cloud with HPE

Event: Cloud Field Day 24

Appearance: HPE Presents at Cloud Field Day 24

Company: HPE

Video Links:

Personnel: Bharath Ram Ramanathan, Dave Elder

Live from CFD, uncover how Hewlett Packard Enterprise (HPE) is eliminating VMware lock-in and accelerating Private Cloud adoption with HPE Morpheus VM Essentials. See how this powerful solution integrates VMware environments with HPE’s KVM-based hypervisor for seamless migration, VM-vending, and management. Discover how HPE VM Essentials, available as software or embedded in HPE Private Cloud Business Edition with HPE Alletra B10000 or HPE SimpliVity storage, streamlines virtualization and enhances hybrid cloud agility.

HPE is addressing the challenges customers face due to the Broadcom acquisition of VMware, including escalating costs, vendor lock-in, and uncertain strategies. With the industry seeing a surge in customers actively evaluating alternatives to VMware, HPE offers Morpheus VM Essentials as a solution, providing enterprise-grade features built on the KVM hypervisor. The focus is on delivering a hybrid virtualization management environment through a single pane of glass, allowing customers to manage both VMware and HPE’s HVM (HPE VM clustering) infrastructure. This includes essential tooling for IPAM, DNS, and backup integration.

Morpheus VM Essentials serves as a foundational virtualization product, with an upgrade path to Morpheus Enterprise for expanded functionality, including private and public cloud management, automation, and ITSM integration. It is roughly comparable to vSphere Enterprise or Enterprise Plus bundled with a vCenter console. HPE provides migration capabilities from VMware to HVM, converting VMDK disks to the QCOW2 format during the process, using its own proprietary tool built into the Morpheus UI rather than third-party utilities.
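HPE's converter is its own tool built into the Morpheus UI, so the sketch below is illustrative only: it shows the generic VMDK-to-QCOW2 disk conversion step using the open-source qemu-img utility, which performs the same format change a VMware-to-KVM migration requires. File names are placeholders.

```python
# Illustrative sketch of the disk-format conversion underlying a VMware-to-KVM
# migration. HPE's actual migration tooling is built into the Morpheus UI; this
# simply shows the equivalent VMDK -> QCOW2 step with the open-source qemu-img.
import subprocess
from pathlib import Path

def convert_vmdk_to_qcow2(src: Path, dst: Path) -> None:
    """Convert a VMware VMDK disk image into QCOW2 for a KVM-based hypervisor."""
    subprocess.run(
        ["qemu-img", "convert", "-p", "-f", "vmdk", "-O", "qcow2", str(src), str(dst)],
        check=True,  # raise if qemu-img reports an error
    )

if __name__ == "__main__":
    convert_vmdk_to_qcow2(Path("app-server.vmdk"), Path("app-server.qcow2"))
```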

VM Essentials is supported on HPE’s disaggregated HCI and SimpliVity hyper-converged infrastructure, offering flexibility in deployment and hardware choices, as well as support for third-party hardware. HPE is actively working with ISVs, including backup vendors like Veeam, Commvault, and soon Cohesity, to provide comprehensive integration with the platform. By offering these solutions, HPE aims to alleviate VMware lock-in and enhance hybrid cloud agility for its customers. It integrates with GreenLake through deployment options and by feeding telemetry data to GreenLake dashboards.

 

 


Enabling Hybrid Cloud Anywhere with HPE CloudOps Software

Event: Cloud Field Day 24

Appearance: HPE Presents at Cloud Field Day 24

Company: HPE

Video Links:

Personnel: Brad Parks, Juden Supapo

In this CFD session, we explore how Hewlett Packard Enterprise (HPE) is transforming the way enterprises provision, manage, and protect hybrid cloud environments with the HPE CloudOps Software suite, comprising HPE Morpheus Enterprise, HPE OpsRamp, and HPE Zerto. The session includes discussion and a live demo of HPE's orchestration and automation control plane for on-prem technologies such as VMware, Nutanix, Microsoft, and Red Hat, as well as public clouds including AWS, Azure, GCP, Oracle, and more.

HPE addresses the complexities of hybrid cloud environments with its CloudOps portfolio, which includes Morpheus, OpsRamp, and Zerto. Morpheus focuses on delivering self-service capabilities for provisioning VMs, containers, and application stacks across various environments, offering a unified catalog of services, a comprehensive API, a Terraform provider, and a ServiceNow plugin. It provides consistency in the provisioning experience and automates the dependencies involved, such as IP address assignment, DNS entries, Ansible scripts, observability agent installation, backup job creation, and cost allocation. Morpheus also includes a built-in cluster management engine for provisioning Kubernetes clusters and offers a KVM stack, now delivered as Morpheus VM Essentials, to provide core hypervisor capabilities.
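Because Morpheus exposes its self-service catalog through a comprehensive REST API, the same provisioning request shown in the demo can be scripted directly. The sketch below is a minimal illustration using Python's requests library; the /api/instances path follows Morpheus's documented API style, but the payload field names here are simplified assumptions rather than a verified schema, so consult the Morpheus API reference for the exact request shape.

```python
# A minimal sketch of driving self-service provisioning through the Morpheus
# REST API. Endpoint path and payload fields are illustrative assumptions, not
# a verified schema; Morpheus handles IPAM, DNS, agents, and backup jobs server-side.
import os
import requests

MORPHEUS_URL = os.environ["MORPHEUS_URL"]      # e.g. https://morpheus.example.com
API_TOKEN = os.environ["MORPHEUS_API_TOKEN"]   # personal access token

def provision_instance(name: str, cloud_id: int, plan_id: int) -> dict:
    """Request a new VM instance from the Morpheus catalog."""
    payload = {
        "instance": {                 # hypothetical field names for illustration
            "name": name,
            "cloud": {"id": cloud_id},
            "plan": {"id": plan_id},
        }
    }
    resp = requests.post(
        f"{MORPHEUS_URL}/api/instances",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json=payload,
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()
```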

OpsRamp is geared towards day two and beyond operations, focusing on observability and monitoring of infrastructure. It offers hybrid discovery, observability, and monitoring capabilities across compute, network, storage, virtualization, and containerization, supporting various cloud platforms and providing a unified view of the infrastructure. OpsRamp aims to correlate alerts, identify root causes, and integrate with ITSM platforms for incident management, as well as enabling intelligent automation for remediation. The platform’s architecture involves deploying OpsRamp gateways for on-prem infrastructure and using agents for servers, providing active monitoring and automation capabilities, with plans to incorporate user experience monitoring.

The integration between Morpheus and OpsRamp enables combined day-zero/one and day-two operations, with Morpheus handling provisioning and OpsRamp focusing on monitoring and management post-deployment. The two platforms can be linked to trigger operational workflows for remediation and present observability data within the Morpheus UI. Both platforms emphasize automation, API-driven approaches, and integration with existing tools and workflows, facilitating a unified and streamlined experience for managing hybrid cloud environments. Policies around tagging and access control were also discussed as essential features to support.


HPE’s Hybrid Cloud Strategy & Portfolio Overview

Event: Cloud Field Day 24

Appearance: HPE Presents at Cloud Field Day 24

Company: HPE

Video Links:

Personnel: Brad Parks

Brad Parks from HPE opens by outlining the company’s hybrid cloud strategy and portfolio, emphasizing the importance of achieving a cloud operating model for AI and other initiatives. He highlights the challenges posed by technical debt and the complexities of heterogeneous enterprise environments. The goal is to address these complexities with solutions that transcend individual tech stacks, focusing on provisioning, governance, security policy, and FinOps at a broader level.

The HPE portfolio is presented as a “Fantastic Four” analogy, with GreenLake as the unifying leader, providing a platform for accessing various HPE services through a single pane of glass. The CloudOps Suite is introduced as the software control plane, comprising Morpheus for self-service provisioning and lifecycle management, OpsRamp for observability and reducing mean time to resolution, and Zerto for cyber resiliency and data recovery. This suite aims to manage the lifecycle of application workloads across any cloud, hypervisor, or hardware profile.

Beyond the core components, HPE offers a private cloud portfolio with pre-engineered turnkey systems and flexible options, leveraging the CloudOps Suite. The ultimate goal is to help customers achieve a modern cloud operating model, improve efficiency, and accelerate time to value for both traditional and AI workloads. HPE is also working on integrating these services, building on existing integrations, and enabling use cases like automated database deployment with cost optimization. The placement of Zerto under the CloudOps suite is explained by its focus on protecting workloads, which exist across different environments and technologies, rather than being tied to specific storage types.


Protecting the Keys to the Kingdom with Fortinet

Event: Cloud Field Day 24

Appearance: Fortinet Presents at Cloud Field Day 24

Company: Fortinet

Video Links:

Personnel: Derrick Gooch, Julian Petersohn, Srija Allam

The Three Pillars of Fortinet AI Security: Protect from AI, Assist with AI, and Secure AI. This demonstration illustrates how Fortinet combines AI-driven analytics for SOC assistance with deep protection for AI workloads themselves. The session showcases a simulated attack on a cloud-based e-commerce application powered by an AI chatbot, highlighting vulnerabilities that can be exploited through prompt injection and server-side request forgery (SSRF). Julian, acting as the attacker, successfully gains access to AWS metadata, steals credentials, and manipulates the chatbot to respond in “ducky language” by injecting malicious content into the S3 bucket storing review data. The attack demonstrated how an attacker could exploit hidden or overlooked API features, underscoring the importance of input sanitization and proper configuration of cloud resources.
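For context on why the stolen AWS metadata mattered, the sketch below shows what a GET-based SSRF against the EC2 instance metadata service (IMDSv1) effectively retrieves. This is generic, well-documented AWS behavior rather than Fortinet code; the role name is simply whatever the instance profile returns.

```python
# What an SSRF against the EC2 instance metadata service effectively does:
# the vulnerable server is tricked into fetching this link-local URL and
# returning temporary IAM credentials to the attacker.
import requests

IMDS = "http://169.254.169.254/latest/meta-data"

role = requests.get(f"{IMDS}/iam/security-credentials/", timeout=2).text.strip()
creds = requests.get(f"{IMDS}/iam/security-credentials/{role}", timeout=2).json()
print(creds["AccessKeyId"], creds["Expiration"])

# IMDSv2 mitigates this class of SSRF by requiring a session token obtained via
# a PUT request with a TTL header, which a simple GET-based SSRF cannot send:
#   PUT http://169.254.169.254/latest/api/token
#   X-aws-ec2-metadata-token-ttl-seconds: 21600
```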

Srija then demonstrates Fortinet’s web application firewall (FortiWeb) capabilities in mitigating SSRF attacks through input validation and parameter filtering. By creating rules to block requests originating from local or auto-configuration IPs, FortiWeb successfully prevents Julian from obtaining a new token. Derrick showcases FortiCNAP’s ability to monitor API calls, detect malicious activity based on IP address geolocation, and identify misconfigured roles with excessive entitlements.
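The FortiWeb rule Srija created expresses, at the WAF layer, the same check an application could perform itself: refuse to fetch user-supplied URLs that resolve to loopback, link-local (169.254.0.0/16), or otherwise private addresses. The sketch below is that generic validation idea in Python, not FortiWeb rule syntax.

```python
# Generic sketch of the input-validation idea behind the demonstrated rule:
# reject user-supplied URLs whose host resolves to a loopback, link-local,
# or private address before the server ever fetches them.
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_loopback or addr.is_link_local or addr.is_private)

print(is_safe_url("http://169.254.169.254/latest/meta-data/"))  # False
print(is_safe_url("https://example.com/reviews.json"))          # True if it resolves publicly
```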

Finally, Derrick initiates an automated remediation workflow using FortiSOAR, triggered by the detection of malicious activity. The workflow cleans the malicious file from the S3 bucket, blocks access from the attacker’s IP address, and revokes the temporary credentials, showcasing a comprehensive approach to threat detection, response, and remediation in a cloud environment. The presentation concludes by reinforcing the importance of a layered security approach that combines preventive measures, monitoring, and automated responses to protect AI-powered applications and cloud infrastructure.


AI Powered Web Application Protection with Fortinet

Event: Cloud Field Day 24

Appearance: Fortinet Presents at Cloud Field Day 24

Company: Fortinet

Video Links:

Personnel: Derrick Gooch, Julian Petersohn, Srija Allam

Fortinet’s approach to securing AI workloads involves a layered defense strategy. Their presentation at Cloud Field Day 24 demonstrated SQL injection (SQLi), Server-Side Request Forgery (SSRF), and model manipulation attacks against an AI-powered application using the Model Context Protocol (MCP), showcasing how Fortinet solutions protect at each stage of the attack kill chain. The demonstration highlighted the vulnerabilities introduced by AI agents and the importance of securing this new attack surface.

The presented environment, deployed in AWS as microservices, features a vulnerable e-commerce application (“Juice Shop”) augmented with an AI chatbot. Traffic between VPCs is routed through a security services VPC, where FortiWeb (web application firewall) and FortiGate provide inspection. The attack flow involves a user interacting with the chatbot, which then communicates with a large language model (OpenAI) via MCP. This interaction exposes vulnerabilities, as demonstrated by an attacker successfully injecting SQL code through the chatbot interface, bypassing traditional web application firewall protections.
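The root cause of the injection is the familiar one: user text from the chat prompt ends up spliced into a SQL string. The sketch below contrasts the vulnerable pattern with a parameterized query using Python's sqlite3 module; it illustrates the class of bug being demonstrated, not Juice Shop's actual code.

```python
# Illustration of the root cause behind the demonstrated SQL injection, not the
# Juice Shop application's actual code. String concatenation makes the query
# injectable; parameter binding keeps user text as data, never as SQL syntax.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.execute("INSERT INTO products VALUES ('Apple Juice', 1.99)")

user_input = "' OR '1'='1"  # attacker-controlled text from the chat prompt

# Vulnerable: the input changes the meaning of the SQL statement.
unsafe = f"SELECT * FROM products WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())               # returns every row

# Safe: the placeholder binds the input as a literal value.
safe = "SELECT * FROM products WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```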

Fortinet demonstrated how FortiWeb’s machine learning capabilities can detect and mitigate these attacks. By learning normal application traffic and building a model of expected API behavior, FortiWeb can identify anomalous requests, such as SQL injection attempts. The system then evaluates these alerts, leveraging its threat intelligence database to determine appropriate actions, including blocking malicious requests. Furthermore, FortiWeb’s AI assistant provides detailed analysis of attacks, including remediation recommendations, and generates API documentation to keep up with rapidly evolving pre-built APIs.


Defending Cloud AI Applications with Fortinet

Event: Cloud Field Day 24

Appearance: Fortinet Presents at Cloud Field Day 24

Company: Fortinet

Video Links:

Personnel: Aidan Walden

The scalability, GPU access, and managed services of the public cloud make it the natural platform for developing and deploying AI and LLM-based applications, and that shift changes the architecture of security itself. Fortinet is focusing on securing AI applications in the cloud, a topic that dominates its conversations with customers. They emphasize the cloud’s unique ability to provide the scalability needed to run GPUs and TPUs, simplifying deployment and accelerating the development of agentic services. They are seeing increased reports of model theft and prompt injection attacks, alongside traditional hygiene issues like misconfigurations and stolen credentials, highlighting the growing need for robust security measures in cloud-based AI deployments.

Fortinet’s approach involves a layered security strategy that incorporates tools such as FortiOS for zero-trust access and continuous posture assessment, FortiCNAP for vulnerability scanning throughout the AI workload lifecycle, and FortiWeb for web application and API protection. FortiWeb uses machine learning to detect anomalous activities and sanitize LLM user input, addressing the OWASP Top 10 threats to LLMs. The company also highlights the importance of data protection, implementing data leak prevention measures on endpoints and in-line to control access to sensitive data and training data.

The presentation outlines a demo environment showcasing a segmented network with standard security measures in place. Fortinet will inspect both north-south and east-west traffic between nodes, monitoring the environment with FortiCNAP. The demo will demonstrate how a combination of old and new attacks, such as SQL injection escalating into SSRF and model corruption, can compromise AI applications. The aim is to highlight the importance of securing access, implementing robust data protection measures, and maintaining vigilance against evolving AI-specific threats.


Where are we going with Oxide Computer Integrations

Event: Cloud Field Day 24

Appearance: Oxide Presents at Cloud Field Day 24

Company: Oxide Computer Company

Video Links:

Personnel: Matthew Sanabria

Matthew Sanabria focuses on future integrations for the Oxide Computer Company, aiming to expand its capabilities and make it a more attractive choice for customers. These integrations include a Kubernetes CSI plugin to enable Oxide storage with Kubernetes, the Kubernetes Cluster API to create clusters across different platforms using Kubernetes, and observability enhancements. The goal is to provide a comprehensive platform that integrates seamlessly with existing infrastructure and tools.

A key component of the future integrations is centered around observability. Oxide has developed a Grafana data source plugin that translates Oxide metrics for Grafana, eliminating the need for operators to use OXQL directly. Additionally, an OpenTelemetry receiver is being developed to convert Oxide metrics to the OpenTelemetry format, enabling users to send data to their preferred observability vendors, such as Datadog or Honeycomb. This effort aims to provide flexibility and compatibility with existing observability platforms.

The discussion expanded to potential use cases for Oxide in various verticals. Oxide aims to replace existing hypervisor infrastructure, offering a lower licensing cost option with its own hypervisor. For life sciences, research pipelines and data pre-formatting for supercomputers are key areas. Furthermore, Oxide emphasized that its platform provides traditional VMs capable of supporting any software, addressing compatibility concerns and expanding the range of applications and workloads that can be deployed on the Oxide platform.


Oxide Integrations: Empowering Platform Teams and Developers with Oxide Computer

Event: Cloud Field Day 24

Appearance: Oxide Presents at Cloud Field Day 24

Company: Oxide Computer Company

Video Links:

Personnel: Matthew Sanabria

Matthew Sanabria from Oxide Computer Company discusses integrations that empower platform teams and developers to build on top of the Oxide platform. As Oxide is API-driven, these integrations are crucial for engineering teams needing to work at scale. Sanabria covers three platform integrations: a Go SDK, a Terraform provider, and a Packer plugin, demonstrating how each allows developers to interact with and manage resources on Oxide in a familiar way. The Go SDK offers programmatic access to the Oxide rack, while the Terraform provider enables state management for resources, and the Packer plugin allows the creation of custom images with baked-in application logic.
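Because the rack is entirely API-driven, everything the Go SDK, Terraform provider, and Packer plugin do ultimately reduces to authenticated HTTPS calls against the Oxide control plane. The sketch below illustrates that idea with a direct request; the /v1/instances path and project parameter are assumptions modeled on Oxide's versioned API style, not verified endpoints, so treat it as a shape rather than a recipe.

```python
# Minimal sketch of calling the Oxide rack's HTTP API directly. The endpoint
# path, query parameter, and response keys are assumptions for illustration;
# the supported SDKs and Terraform provider wrap calls of this general form.
import os
import requests

OXIDE_HOST = os.environ["OXIDE_HOST"]    # e.g. https://oxide.sys.example.com
OXIDE_TOKEN = os.environ["OXIDE_TOKEN"]  # API/device token

def list_instances(project: str) -> list[dict]:
    """List VM instances in a project on the Oxide rack (assumed endpoint)."""
    resp = requests.get(
        f"{OXIDE_HOST}/v1/instances",
        params={"project": project},
        headers={"Authorization": f"Bearer {OXIDE_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("items", [])

for inst in list_instances("platform-team"):
    print(inst.get("name"), inst.get("run_state"))
```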

The presentation then shifts focus to Kubernetes integrations, which are vital for teams standardizing on Kubernetes. Oxide provides integrations for deploying and managing Kubernetes on its infrastructure, including a Cloud Controller Manager, a Rancher node driver, and an Omni infrastructure provider. The Cloud Controller Manager manages node health, load balancing, and routes, providing a Kubernetes-native integration. The Rancher node driver enables users to create Kubernetes clusters on Oxide via the Rancher UI, and the Omni infrastructure provider integrates with Talos Linux, an immutable Linux operating system designed for Kubernetes nodes.

Sanabria demonstrates these Kubernetes integrations in action, showing how the Cloud Controller Manager ensures node health and prevents cascading failures, how the Rancher node driver simplifies cluster creation, and how the Omni infrastructure provider automates the deployment of Talos Linux nodes on Oxide. These integrations provide flexibility for running Kubernetes on Oxide and allow future enhancements, such as load balancers and other controllers, to be seamlessly integrated with Kubernetes.


Who wants Oxide Computer and Why

Event: Cloud Field Day 24

Appearance: Oxide Presents at Cloud Field Day 24

Company: Oxide Computer Company

Video Links:

Personnel: Bryan Cantrill

The video centers on Oxide’s mission to address the inefficiencies and integration challenges of an industry that has become commoditized and ossified. Oxide started with a clean sheet of paper, tackling problems accumulated over decades by building its own machines fit for purpose rather than putting personal computers in data centers. Oxide aims to fix the visceral problems in the industry, such as per-server AC power supplies in 1U/2U boxes, cords everywhere, fans everywhere, and general inefficiency, and that is just the beginning. Oxide sought not only to replicate what hyperscalers had done, such as using a DC bus bar design, but also to leapfrog them with new differentiators like the cabled backplane, which removes cabling from the sleds. Another big bet the company made was removing the BIOS, the basic input/output system that traces back to CP/M.

Bryan Cantrill shared that the company has differentiated on several fronts, including power and efficiency, reliability, operability, and time to deployment, aiming for developers to be working within hours of the IT team uncrating the sleds. When Oxide’s board member, Pierre Lamond, asked Bryan Cantrill what Oxide’s differentiator was, Bryan responded that there is no single differentiator but many. This approach enables Oxide to serve multiple verticals and address distinct customer pain points. Oxide initially underestimated the demand from AI companies. They were surprised to find that the security of their system, particularly the true root of trust and attestation of the entire stack, was a major draw for these companies.

Addressing concerns about single vendor lock-in, Cantrill emphasized Oxide’s commitment to transparency through open-source software. The entire stack is opened up, including the service processor and associated software. The open-source approach, while not entirely mitigating single-vendor risk, provides customers with unprecedented visibility and control, fostering confidence and helping manage risk. Finally, acknowledging the barrier to entry for enterprises due to the rack-level integration, Oxide offers a trial program with a rack in a co-location facility, allowing potential customers to experience the benefits of Oxide’s system firsthand.


Scaling Up The Cloud Computer with Oxide Computer

Event: Cloud Field Day 24

Appearance: Oxide Presents at Cloud Field Day 24

Company: Oxide Computer Company

Video Links:

Personnel: Steve Tuck

Cloud computing has been the most significant platform shift in computing history, allowing companies to modernize and grow their businesses. While cloud computing has accelerated businesses, it has begun to hit its limits. Companies need to extend their operations beyond the public cloud for reasons like locality, security, sovereignty, and regulatory compliance. However, operating infrastructure outside the public cloud often feels like a step back in time, relying on traditional rack-and-stack approaches with limited efficiency and utility.

Oxide Computer Company aims to address this by bringing true cloud computing on-premises. To achieve this, they’ve built a completely different type of computer, a rack-scale system designed holistically from the printed circuit board to the APIs. This approach delivers improvements in density, energy efficiency, and operability, all plumbed with software for operator automation. The goal is to provide businesses with elastic, on-premises scalable computing that mirrors the efficiencies enjoyed by hyperscalers.

The Oxide system features a modular sled design for easy component upgrades, DC power, and a comprehensive software stack, including firmware, an operating system, a hypervisor, and a cloud control plane. This design enables elastic services like compute, storage, and networking, with multi-tenancy and security built in. The company has seen a surge in demand, expanding manufacturing operations and targeting various verticals, including federal, financial services, life sciences, energy, and manufacturing. Oxide focuses on providing modern API-driven services, improved utilization, enhanced energy efficiency, and a trusted product to alleviate public cloud costs and enterprise software challenges.


Enterprise Storage for the Cloud – Simplify, Scale, and Save with Pure Storage

Event: Cloud Field Day 24

Appearance: Pure Storage Presents at Cloud Field Day 24

Company: Pure Storage

Video Links:

Personnel: David Stamen

Pure Storage Cloud brings enterprise-grade storage to the cloud with simplicity, resilience, and efficiency. This session dives into the technical foundations that deliver consistent performance and protection while helping organizations reduce costs across cloud migration, disaster recovery, and hybrid deployments.

David Stamen introduced Pure Storage Cloud as an update to their portfolio, emphasizing a shift towards a cloud-preferred model where data availability is paramount. The new portfolio includes Pure Storage Cloud Dedicated (formerly Cloud Block Store) and Pure Storage Cloud Azure Native Service, signifying a unified experience under a single control plane. Managed services are also a key component, catering to customers seeking hosted stacks within hyperscalers, such as Azure VMware Solution and Elastic VMware Service, as well as in cloud-adjacent environments. This unified experience ensures consistent management and licensing, regardless of whether customers use Evergreen One or CapEx-based purchasing, all managed through Pure1 with Purity.

The presentation addressed the challenges customers face when adopting the cloud, including rising costs, limited visibility, and overprovisioning due to bundled performance and capacity. To address these issues, Pure Storage Cloud offers a unified data plane with features such as data reduction (thin provisioning, deduplication, and compression), advanced replication options (synchronous, continuous, and periodic), built-in high availability, double data-at-rest encryption, and best-in-class snapshots. These capabilities aim to provide cost efficiency, performance optimization, and enhanced data protection, resolving the sprawl and management complexities associated with diverse cloud storage options.

A significant development highlighted was the Pure Storage Cloud Azure Native Service, which allows Pure Storage to build and operate a native service that integrates seamlessly with Azure. Key features include on-demand performance scaling, native integration with Azure services via a resource provider, and simplified deployment and management within the Azure portal. Plans include expanding support for Azure VMs, enabling easy connectivity configuration, and potentially integrating with other native services such as containerization platforms (e.g., Azure Kubernetes) and PaaS offerings.


Breaking Silos and Managing Data Across On-Premises and Cloud with Pure Storage

Event: Cloud Field Day 24

Appearance: Pure Storage Presents at Cloud Field Day 24

Company: Pure Storage

Video Links:

Personnel: Brent Lim

Pure Fusion, built into the Purity operating environment, is a core enabler of the Enterprise Data Cloud architecture. Fusion federates arrays into a single, unified fleet and uses outcome-driven automation through Presets to ensure consistent provisioning and configuration of workloads across environments. The result: a self-service, API-driven platform that lets users manage data—not storage—across their hybrid cloud.

The presentation details the evolution of Pure Fusion to version 2, emphasizing its integration with Purity and its role in enabling the Enterprise Data Cloud architecture. A key goal is to shift the focus from managing storage infrastructure to managing data across on-premises and cloud environments. Fusion V2 prioritizes backward compatibility, allowing existing customers to leverage its benefits without requiring extensive script rewrites or retraining. Furthermore, it caters to “dark site” customers by integrating the control plane into Purity, eliminating the need for cloud connectivity while offering workload-placement recommendations through Pure1.

Fusion simplifies storage management through presets, which are declarative definitions of desired outcomes. Storage administrators and consumers can define their requirements in these presets, enabling Fusion to automate provisioning, configuration, and monitoring of workloads. The introduction of the “fleet” concept allows multiple arrays, including FlashArray, FlashBlade, and Pure Storage Cloud, to communicate and coordinate, enabling consistent application of presets across the entire data estate. This unified approach facilitates a shift from managing individual arrays to managing the fleet as a whole, streamlining operations and reducing the risk of misconfigurations.

The presentation showcased a demo of workload provisioning using presets, highlighting how junior administrators can easily deploy and configure databases with predefined settings, ensuring consistent, compliant configurations. The ability to tag resources with billing IDs facilitates chargeback and showback processes, while a compliance engine monitors for configuration drift and enables remediation. Also showcased was the potential of using Fusion with AI language models to automatically provision storage for Machine Learning training workloads. Fusion provides a framework for achieving objective-based management across multiple use cases, including MSPs, standardization, and provider-consumer separation.


The Enterprise Data Cloud with Pure Storage

Event: Cloud Field Day 24

Appearance: Pure Storage Presents at Cloud Field Day 24

Company: Pure Storage

Video Links:

Personnel: Brent Lim, David Stamen

The Enterprise Data Cloud reimagines storage as a unified, software-driven environment—enabling organizations to manage data, not hardware. It brings together on-premises, cloud, and hybrid resources under a single intelligent control plane for consistent governance, automated protection, and seamless mobility. With built-in cyber resilience, SLA-driven performance, and real-time analytics, the Enterprise Data Cloud empowers enterprises to simplify operations, scale without disruption, and accelerate data-driven innovation.

Pure Storage presents an updated vision for the Enterprise Data Cloud, focusing on the shift from traditional siloed storage architectures to a more horizontal, virtualized, and automated approach. Legacy systems were characterized by individual arrays provisioned for specific workloads, leading to inefficient resource utilization and manual data governance. The modern data experience, in contrast, emphasizes resource pooling, virtualization, and automation, all managed through a unified control plane. This allows for consistent management of file, block, and object storage across on-premises, cloud, and hybrid environments.

The presentation highlighted key components of this vision, including Evergreen One (a consumption-based service), Purity (a unified data plane for block, file, and object storage), and Pure Fusion (intelligent automation of workflows). Evergreen One offers a scalable consumption model in which Pure Storage takes responsibility for meeting performance and capacity SLAs, including ransomware protection. Purity provides built-in data resilience, cybersecurity, and unified data services across various protocols and environments. Pure Fusion integrates with the Enterprise Data Cloud, delivering workload performance and scalability for enterprise and modern applications.

Ultimately, Pure Storage aims to deliver a unified, self-service, and scalable consumption model that abstracts the underlying storage infrastructure, allowing customers to focus on their data and applications. The Enterprise Data Cloud is designed to pool resources, virtualize data, and provide a consistent environment with built-in cyber resilience, data governance, and global scalability. The speakers emphasized that this approach simplifies virtualization support, provides built-in provisioning and disaster recovery, and offers Kubernetes-aware storage through Portworx.


Delegate Roundtable – AI Workloads Meet Data Operations at NetApp Insight 2025

Event: Tech Field Day Experience at NetApp INSIGHT 2025

Appearance: Tech Field Day Delegate Roundtable at NetApp INSIGHT 2025

Company: NetApp

Video Links:

Personnel: Stephen Foskett

At NetApp Insight 2025, the Tech Field Day delegates gathered to provide their perspectives on the company’s major announcements, primarily focusing on the AI Data Engine (AI/DE) and the new AFX storage platform. Attendees were impressed by NetApp’s clear messaging about returning focus to its long-standing core strength: storage. The company’s positioning of AI as a natural evolution of data operations was well received, especially because it reframed storage as more than a backend necessity—it became central to the AI data pipeline. Delegates praised the strategy of anchoring tokenization and embedding within storage operations and appreciated NetApp’s ability to decouple compute and storage while maintaining ONTAP’s legacy features.

The panelists noted that NetApp appears to be embracing a more coherent and integrated direction after years of broad diversification and numerous acquisitions. While the company’s positioning is not about becoming a full AI solutions provider, its emphasis on data operations—particularly through automated metadata analysis, tagging, and governance—positions it uniquely among storage vendors. NetApp’s recognition that effective AI starts with robust, well-governed data excited the delegates, though there were calls for more practical demonstrations or customer journey stories to showcase how AI/DE is being adopted in the field. The discussion also highlighted NetApp’s exclusive capability of offering first-party storage across all three major hyperscalers as a clear differentiator.

Nonetheless, the delegates had constructive critiques, emphasizing the need for NetApp to elaborate on AI-specific concerns like security, ethics, and governance frameworks. While the company has laid down a strong foundation—from classification and compliance to data mobility across clouds—it was suggested that more clarity around partner integrations and extensibility of the platform would resonate with a broader enterprise audience. Delegates appreciated the modest roll-out scope of AI/DE, as it shows NetApp learned from past missteps when rolling out major platform changes too ambitiously. They expressed a shared hope that by NetApp Insight 2026, there will be concrete examples of AI workload deployments enabled by NetApp’s offerings, providing validation through customer success stories and real-world use cases.

Moderator: Stephen Foskett
Panelists: Becky Elliott, Denny Cherry, Gina Rosenthal, Glenn Dekhayser, Guy Currier, Jason Benedicic, Karen Lopez


NetApp Cloud Building the Most Differentiated AI Era Storage Platforms

Event: Tech Field Day Experience at NetApp INSIGHT 2025

Appearance: NetApp Cloud Storage for the AI Era

Company: NetApp

Video Links:

Personnel: Puneet Dhawan, Sayandeb Saha

In this featured session, gain insights into how customers leverage NetApp Cloud Storage to handle demanding workloads such as HPC, EDA, databases, VMware, and SAP at scale. Discover the reasons behind enterprises’ choice of NetApp, highlighting its exceptional price-performance ratio, relentless innovation, and proven file and block cloud storage capabilities. The session features real demos showcasing AI workloads on hyperscalers, GenAI pipelines accelerated by first-party storage, and Instaclustr’s GenAI-ready services, demonstrating how NetApp is leading the way in shaping the future of Cloud and AI.

The presentation, led by Puneet Dhawan and Sayandeb Saha, delves into the transformative role of NetApp’s cloud storage portfolio in enabling hybrid multi-cloud architectures that support a wide array of enterprise workloads. NetApp’s integration with major hyperscalers—AWS, Azure, and Google Cloud—offers customers native experiences while leveraging powerful features of ONTAP, their unified storage platform. Customers can choose from first-party offerings, Cloud Volumes ONTAP (CVO), or NetApp’s fully managed service Keystone, enabling consistent experience and operational simplicity across environments. The Instaclustr acquisition enriches the data layer with managed open-source services like Kafka, Cassandra, and PostgreSQL, catering to streaming, transactional, and real-time analytics needs with a focus on scalability, openness, and cost efficiency.

A significant aspect of the talk centers on NetApp’s expanding capabilities for AI and analytics use cases. The company is enhancing its storage performance, streamlining data mobility with tools like SnapMirror and FlexCache, and improving integration with hyperscaler AI services. In-place AI data access eliminates the need for redundant data copies while supporting services like SageMaker, Azure AI Studio, and Google Gemini Enterprise. Enhancements include support for S3 protocol, performance boosts on Azure NetApp Files, and enterprise-grade security. The unveiling of new tools like NetApp Data Migrator simplifies cloud transitions, even from non-ONTAP sources, and new additions to Instaclustr such as PGVector and the MCP Gateway project demonstrate NetApp’s commitment to powering modern AI-infused applications through seamless and secure data infrastructure across hybrid multi-cloud ecosystems.


Unleash Innovation with the NetApp AI Data Engine

Event: Tech Field Day Experience at NetApp INSIGHT 2025

Appearance: NetApp’s Platform for AI Data Innovation

Company: NetApp

Video Links:

Personnel: Arindam Banerjee, Tore Sundelin

Data is the fuel that powers AI. Discover how NetApp uniquely empowers AI Innovators to unleash the full potential of GenAI by securely accessing and managing their enterprise data, regardless of location or scale. Gain insights into real-world examples and use cases demonstrating how NetApp is assisting organizations in overcoming data challenges across data centers and multi-cloud environments, ultimately accelerating AI-driven outcomes. Be among the first to learn about groundbreaking innovations that span the best infrastructure for AI, data discovery, data governance, and how to seamlessly integrate AI and data. Simplify Enterprise AI for Inferencing, Retrieval Augmented Generation (RAG), and model training today, paving the way for your Agentic AI future tomorrow.

In their session at the Tech Field Day at NetApp INSIGHT 2025, speakers Tore Sundelin and Arindam Banerjee introduced the NetApp AI Data Engine (AIDE), discussing the state of enterprise AI adoption and the common challenges companies face in scaling AI to production use. Despite the tantalizing promises of AI, studies claim a high rate of failure among enterprise AI projects due to fragmented tools, siloed and duplicated data sets, and complex management needs. NetApp sought to address these issues with a unified platform anchored by ONTAP, its industry-leading data management software, and powerful integration with NVIDIA. The AI Data Engine aims to simplify AI operations across data discovery, governance, transformation, and cost-efficiency, enabling organizations to move from isolated experiments to production AI systems more easily.

Banerjee highlighted how the AI Data Engine integrates compute and storage by introducing dedicated Data Compute Nodes (DCNs) connected via high-speed networks to ONTAP-based AFX clusters. This tight integration, enhanced with NVIDIA GPUs and co-engineered embedding models, enables efficient vectorization and semantic search for AI workloads, especially for use cases like RAG. The system also features rich metadata indexing, resilient snapshot-based lineage tracking, and automated detection and governance tools to protect sensitive data. With support for hybrid and multi-cloud environments, the platform empowers both infrastructure admins and data scientists via distinct interfaces, allowing for flexible, secure, and scalable AI development and deployment processes, all while leveraging NetApp’s proven storage technologies and ecosystem integrations.
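To make “vectorization and semantic search” concrete, the sketch below shows the minimal retrieve-by-similarity step that underpins RAG. The embed() function is a hypothetical placeholder for the co-engineered embedding models the Data Compute Nodes would run; this is a conceptual illustration, not the AI Data Engine’s API.

```python
# Conceptual sketch of the vectorize-then-retrieve step behind RAG.
# embed() is a hypothetical placeholder, not the AI Data Engine's actual API.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: a real deployment would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

documents = {
    "/vol/eng/specs/afx.txt": "AFX disaggregates storage and compute for AI.",
    "/vol/hr/policies/pto.txt": "Employees accrue paid time off monthly.",
}
index = {path: embed(text) for path, text in documents.items()}  # vectorization pass

def retrieve(query: str, k: int = 1) -> list[str]:
    """Semantic search: rank documents by cosine similarity to the query vector."""
    q = embed(query)
    scored = sorted(index.items(), key=lambda kv: float(q @ kv[1]), reverse=True)
    return [path for path, _ in scored[:k]]

print(retrieve("How does the new storage architecture scale for AI?"))
```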

Looking ahead, NetApp’s roadmap for AIDE envisions a decentralized knowledge graph architecture that will further extend its scalability and capability to support complex AI use cases such as Agentic AI. The platform is already compatible with AI tools like Langchain and Domino Data Labs, and plans are underway to accommodate bring-your-own-model scenarios and support advanced AI modalities. Deep collaboration with NVIDIA has resulted in optimized pipelines and hardware compatibility, including support for upcoming GPU generations. Ultimately, AIDE is positioned as a future-ready solution to help enterprises unlock the value of their massive data estates—over 100 exabytes currently under NetApp management—and make them readily usable and governable for advanced AI applications.


Unlocking Innovation with Modern Data Infrastructure from NetApp

Event: Tech Field Day Experience at NetApp INSIGHT 2025

Appearance: Data Infrastructure Modernization with NetApp

Company: NetApp

Video Links:

Personnel: James Kwon, Pranoop Erasani

Infrastructure modernization today goes beyond simply upgrading storage. It’s the cornerstone of breaking down silos and establishing a unified data foundation that drives innovation across your organization. Whether you’re optimizing hybrid operations, enhancing cyber resilience, or accelerating your AI journey, this featured session will demonstrate how an intelligent data infrastructure, such as the NetApp data platform, offers unparalleled simplicity, security, and efficiency for all your workloads.

At Tech Field Day Experience at NetApp INSIGHT 2025, James Kwon and Pranoop Erasani introduced AFX, an advanced architecture within the NetApp data platform, designed to meet the growing demands of AI and other high-performance workloads. They explained the project’s origin, focusing on how traditional ONTAP architectures struggled to keep pace with the rapid computational advancements of GPUs. AFX breaks from the high-availability (HA) pair constraints by disaggregating storage and compute components, thereby allowing independent scaling of capacity and performance. This approach offers greater flexibility to customers, who can now customize infrastructure growth based on workload requirements rather than being locked into synchronized hardware upgrades.

The AFX design introduces a single storage pool architecture, eliminating redundant storage layers such as aggregates and simplifying both the user experience and storage management. It supports ONTAP interoperability and maintains near-complete feature parity, including capabilities like SnapMirror and FlexGroup, while delivering automatic rebalancing and volume re-hosting for seamless operation. While three separate cluster “personalities”—unified, block-only, and disaggregated—are maintained, features like zero-copy volume moves and simplified expansion reinforce the efficiency and adaptability of AFX. The new system is ideal for AI workloads given its throughput optimization, yet flexibility in design paves the way for use cases in EDA, HPC, and other data-intensive sectors. Though not yet a wholesale replacement for unified ONTAP, AFX represents a foundational step toward universally modern, scalable, and intelligent storage solutions.


Redefining Data Infrastructure for AI with NetApp

Event: Tech Field Day Experience at NetApp INSIGHT 2025

Appearance: NetApp AI Vision and Strategy

Company: NetApp

Video Links:

Personnel: Syam Nair

At the Tech Field Day Experience during NetApp INSIGHT 2025, Syam Nair, Chief Product Officer at NetApp, outlines their strategy to redefine data infrastructure for artificial intelligence (AI). The focus is on building intelligent storage systems that unify data across block, file, and object formats, enabling fast and secure access to AI-ready data. By integrating AI capabilities directly into the storage layer, such as metadata enrichment, tokenization, and data governance, NetApp aims to empower storage administrators and extend the usability of data for AI workflows without sacrificing control or security.

In his presentation, Syam Nair emphasized that the AI data landscape is shifting from hype to practical application, with growing unstructured data sources such as machine-generated and generative AI outputs. With the ONTAP platform at its core, NetApp’s vision hinges on making all data AI-ready by embedding intelligence—like security policies, tokenization, and embedding mechanisms—into the data layer itself. This not only ensures data accessibility and governance but also removes the need for complex extraction, transformation, and loading (ETL) processes. NetApp’s AI Data Engine (AIDE) and AFX are designed to streamline this intelligent access while reducing the proliferation of data copies by managing metadata and vectorization in place.

NetApp’s approach aims to elevate the role of storage administrators, transforming them from infrastructure caretakers into enablers of data-centric applications and AI workflows. Instead of pushing AI users to understand the storage backend, NetApp provides APIs and policy-driven data access mechanisms that integrate with tools like Kafka or database systems. Emphasis was placed on security through granular, zero-trust policies and governance over metadata to prevent overhead and sprawl. NetApp aims to support emerging standards such as Apache Iceberg for semantic access and to evolve toward a system where unstructured data can be consumed like structured data—offering semantic reads without altering write formats. Ultimately, NetApp is not attempting to replace databases but rather to unify and enrich data access directly within the intelligent storage infrastructure.


Enterprise Grade Artificial Intelligence with NetApp

Event: Tech Field Day Experience at NetApp INSIGHT 2025

Appearance: NetApp AI Vision and Strategy

Company: NetApp

Video Links:

Personnel: Jeff Baxter

In this presentation, Jeff Baxter, VP of Product Marketing at NetApp, discusses how NetApp is enabling enterprise-grade artificial intelligence through a comprehensive intelligent data infrastructure. By addressing four key imperatives—modernizing data centers, transitioning to the cloud, adopting AI, and ensuring cyber resilience—NetApp aims to help organizations transform how they manage and utilize data.

Baxter underscored NetApp’s singular focus on data infrastructure, highlighting the evolution of the ONTAP operating system as a consistent data plane across environments—from on-premises systems to public and sovereign clouds. A major topic was the unveiling of the NetApp data platform and the importance of AI-ready data. Baxter asserted that many AI projects fail, not due to flawed models or lack of talent, but because the data isn’t prepared for AI—emphasizing issues like accessibility, compliance, and integration. With this challenge in mind, NetApp is repositioning itself as a true data platform company, absorbing years of enterprise experience into a unified, resilient, and highly available backbone for modern workloads.

Baxter then introduced two significant product announcements: the NetApp AFX system and the NetApp AI Data Engine, together named the NetApp AFX AI portfolio. AFX is a disaggregated storage architecture built on enhanced ONTAP, allowing for scalable performance and storage capacity separately—ideal for diverse AI workloads. The AI Data Engine complements this by providing a high-speed, secure data pipeline integrated with a vector database optimized for AI use cases like retrieval-augmented generation (RAG). This engine supports semantic search, data guardrails, and AI-ready APIs, pushing AI workloads into full enterprise territory with the reliability, compliance, and availability expectations that business-critical systems require.


Microsoft Sentinel Delegate Roundtable Discussion

Event: Tech Field Day Exclusive with Microsoft Security

Appearance: Tech Field Day Exclusive Delegate Roundtable Discussion

Company: Tech Field Day

Video Links:

Personnel: Tom Hollingsworth

In this roundtable discussion, the Field Day delegates discuss the current state of Microsoft Sentinel. Currently, there is work to do in bringing together multiple portals like Defender, Entra, and Purview, as well as clarifying responsibilities for analysts whose roles span multiple security personas. There is also a need to clarify the licensing requirements and how each of the tools in the overall suite are integrated into workflows. The consensus is that the platform feels like a collection of separate products from different teams rather than a truly unified, integrated solution. This challenge is magnified for organizations with hybrid or multi-cloud environments, where the high cost of ingesting data from non-Microsoft sources like AWS presents a significant barrier to adoption.

The delegates expressed hesitation about making a strategic investment in a platform that seems so early in its development, concerned that future changes could force them to retool their processes. They stressed the need for greater maturity, transparency, and traceability, especially in reporting, as they cannot present “black box” data to senior leadership. For Sentinel to succeed in the real world, the delegates believe Microsoft must demonstrate a stronger commitment to interoperability by adopting open standards like OCSF more quickly and offering more flexibility in data engineering and routing before data enters the Sentinel lake. The feeling is that Microsoft needs to transition from its traditional license-based, “all-or-nothing” approach to prove it can truly function as an open ecosystem partner.

Despite these criticisms, the delegates are optimistic about Sentinel’s potential. The underlying data platform, with its integrated layer of tabular, graph, and vector data, is considered powerful, especially for advanced data science teams. The graph visualizations were particularly praised as an effective way to communicate pre- and post-breach scenarios and risk to business leaders. The delegates concluded that the platform’s greatest current strength is its flexibility. By providing low-code/no-code interfaces and natural language query capabilities, Microsoft empowers customers to build the specific reports and tools they need. This ability for organizations to create their own solutions is seen as a powerful way to bridge the current maturity gap and extract immediate, tailored value from the platform.