Enterprise Storage for the Cloud – Simplify, Scale, and Save with Pure Storage

Event: Cloud Field Day 24

Appearance: Pure Storage Presents at Cloud Field Day 24

Company: Pure Storage

Video Links:

Personnel: David Stamen

Pure Storage Cloud brings enterprise-grade storage to the cloud with simplicity, resilience, and efficiency. This session dives into the technical foundations that deliver consistent performance and protection while helping organizations reduce costs across cloud migration, disaster recovery, and hybrid deployments.

David Stamen introduced Pure Storage Cloud as an update to the company's portfolio, emphasizing a shift toward a cloud-preferred model where data availability is paramount. The new portfolio includes Pure Storage Cloud Dedicated (formerly Cloud Block Store) and the Pure Storage Cloud Azure Native Service, unified under a single control plane. Managed services are also a key component, catering to customers running hosted stacks within the hyperscalers, such as Azure VMware Solution and Amazon Elastic VMware Service, as well as in cloud-adjacent environments. This unified experience ensures consistent management and licensing, regardless of whether customers buy through Evergreen//One or CapEx-based purchasing, all managed through Pure1 and running Purity.

The presentation addressed the challenges customers face when adopting the cloud, including rising costs, limited visibility, and overprovisioning due to bundled performance and capacity. To address these issues, Pure Storage Cloud offers a unified data plane with features such as data reduction (thin provisioning, deduplication, and compression), advanced replication options (synchronous, continuous, and periodic), built-in high availability, double data-at-rest encryption, and best-in-class snapshots. These capabilities aim to provide cost efficiency, performance optimization, and enhanced data protection, resolving the sprawl and management complexities associated with diverse cloud storage options.
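
To make the provisioning model concrete, here is a minimal sketch using the py-pure-client Python SDK; the array address, token, and volume names are illustrative placeholders, and exact method signatures may vary by SDK version:

    from pypureclient import flasharray

    # Connect with an API token (placeholders, not real credentials).
    client = flasharray.Client(target="array1.example.com", api_token="<api-token>")

    # Purity volumes are thin-provisioned by default: 'provisioned' is the
    # logical size (1 TiB here); physical capacity is consumed only as data
    # lands, after deduplication and compression.
    client.post_volumes(
        names=["oracle-data-01"],
        volume=flasharray.VolumePost(provisioned=1_099_511_627_776),
    )

    # Take a crash-consistent snapshot of the new volume.
    client.post_volume_snapshots(source_names=["oracle-data-01"])

Because Pure Storage Cloud runs the same Purity software, the same calls apply to cloud targets, which is the point of the unified data plane.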

A significant development highlighted was the Pure Storage Cloud Azure Native Service, which Pure Storage builds and operates as a first-class service inside Azure. Key features include on-demand performance scaling, native integration with Azure services via a resource provider, and simplified deployment and management within the Azure portal. Plans include expanding support for Azure VMs, enabling easy connectivity configuration, and potentially integrating with other native services such as container platforms (e.g., Azure Kubernetes Service) and PaaS offerings.
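
Because the service surfaces through an Azure resource provider, provisioning should look like any other first-party resource. A hypothetical sketch with the azure-mgmt-resource SDK follows; the provider namespace, resource type, API version, and properties are stand-ins, not the actual schema:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient
    from azure.mgmt.resource.resources.models import GenericResource

    client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Hypothetical namespace, type, and API version for Pure's resource provider.
    poller = client.resources.begin_create_or_update(
        resource_group_name="rg-data",
        resource_provider_namespace="PureStorage.Block",
        parent_resource_path="",
        resource_type="storagePools",
        resource_name="pool-east-01",
        api_version="2024-01-01",
        parameters=GenericResource(
            location="eastus",
            properties={"provisionedBandwidthMbps": 1024},  # illustrative
        ),
    )
    pool = poller.result()  # ARM handles deployment, billing, and RBAC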


Breaking Silos and Managing Data Across On-Premises and Cloud with Pure Storage

Event: Cloud Field Day 24

Appearance: Pure Storage Presents at Cloud Field Day 24

Company: Pure Storage

Video Links:

Personnel: Brent Lim

Pure Fusion, built into the Purity operating environment, is a core enabler of the Enterprise Data Cloud architecture. Fusion federates arrays into a single, unified fleet and uses outcome-driven automation through Presets to ensure consistent provisioning and configuration of workloads across environments. The result: a self-service, API-driven platform that lets users manage data—not storage—across their hybrid cloud.

The presentation details the evolution of Pure Fusion to version 2, emphasizing its integration with Purity and its role in enabling the Enterprise Data Cloud architecture. A key goal is to shift the focus from managing storage infrastructure to managing data across on-premises and cloud environments. Fusion V2 prioritizes backward compatibility, allowing existing customers to leverage its benefits without requiring extensive script rewrites or retraining. Furthermore, it caters to “dark site” customers by integrating the control plane into Purity, eliminating the need for cloud connectivity, while connected customers still receive workload-placement recommendations through Pure1.

Fusion simplifies storage management through presets, which are declarative definitions of desired outcomes. Storage administrators and consumers can define their requirements in these presets, enabling Fusion to automate provisioning, configuration, and monitoring of workloads. The introduction of the “fleet” concept allows multiple arrays, including FlashArray, FlashBlade, and Pure Storage Cloud, to communicate and coordinate, enabling consistent application of presets across the entire data estate. This unified approach facilitates a shift from managing individual arrays to managing the fleet as a whole, streamlining operations and reducing the risk of misconfigurations.
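
Fusion's exact preset schema was not published in the session, but conceptually a preset is a declarative document that the fleet applies wherever the workload lands. A rough sketch, with a hypothetical payload and endpoint:

    import requests

    # Hypothetical preset: field names and endpoint are illustrative only.
    preset = {
        "name": "gold-postgres",
        "workload_type": "database",
        "storage": {"size": "2T", "qos": {"iops_limit": 50000}},
        "protection": {
            "snapshot_schedule": "every 4h, keep 24",
            "replication": {"type": "async", "rpo": "15m"},
        },
        "tags": {"billing_id": "finance-042"},  # enables chargeback/showback
    }

    resp = requests.post(
        "https://fleet.example.com/api/2.x/presets",  # hypothetical endpoint
        json=preset,
        headers={"x-auth-token": "<token>"},
        timeout=30,
    )
    resp.raise_for_status()

Once defined, a consumer simply requests a "gold-postgres" workload and Fusion decides placement across the fleet, which is what shifts the operator's job from configuring arrays to curating outcomes.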

The presentation showcased a demo of workload provisioning using presets, highlighting how junior administrators can easily deploy and configure databases with predefined settings, ensuring consistent, compliant configurations. The ability to tag resources with billing IDs facilitates chargeback and showback processes, while a compliance engine monitors for configuration drift and enables remediation. Also showcased was the potential of using Fusion with AI language models to automatically provision storage for Machine Learning training workloads. Fusion provides a framework for achieving objective-based management across multiple use cases, including MSPs, standardization, and provider-consumer separation.


The Enterprise Data Cloud with Pure Storage

Event: Cloud Field Day 24

Appearance: Pure Storage Presents at Cloud Field Day 24

Company: Pure Storage

Video Links:

Personnel: Brent Lim, David Stamen

The Enterprise Data Cloud reimagines storage as a unified, software-driven environment—enabling organizations to manage data, not hardware. It brings together on-premises, cloud, and hybrid resources under a single intelligent control plane for consistent governance, automated protection, and seamless mobility. With built-in cyber resilience, SLA-driven performance, and real-time analytics, the Enterprise Data Cloud empowers enterprises to simplify operations, scale without disruption, and accelerate data-driven innovation.

Pure Storage presents an updated vision for the Enterprise Data Cloud, focusing on the shift from traditional siloed storage architectures to a more horizontal, virtualized, and automated approach. Legacy systems were characterized by individual arrays provisioned for specific workloads, leading to inefficient resource utilization and manual data governance. The modern data experience, in contrast, emphasizes resource pooling, virtualization, and automation, all managed through a unified control plane. This allows for consistent management of file, block, and object storage across on-premises, cloud, and hybrid environments.

The presentation highlighted key components of this vision, including Evergreen//One (a consumption-based service), Purity (a unified data plane for block, file, and object storage), and Pure Fusion (intelligent automation of workflows). Evergreen//One offers a scalable consumption model in which Pure Storage takes responsibility for meeting performance and capacity SLAs, including ransomware protection. Purity provides built-in data resilience, cybersecurity, and unified data services across various protocols and environments. Pure Fusion integrates with the Enterprise Data Cloud, delivering workload performance and scalability for enterprise and modern applications.

Ultimately, Pure Storage aims to deliver a unified, self-service, and scalable consumption model that abstracts the underlying storage infrastructure, allowing customers to focus on their data and applications. The Enterprise Data Cloud is designed to pool resources, virtualize data, and provide a consistent environment with built-in cyber resilience, data governance, and global scalability. The speakers emphasized that this approach simplifies virtualization support, provides built-in provisioning and disaster recovery, and offers Kubernetes-aware storage through Portworx.


Delegate Roundtable – AI Workloads Meet Data Operations at NetApp Insight 2025

Event: Tech Field Day Experience at NetApp INSIGHT 2025

Appearance: Tech Field Day Delegate Roundtable at NetApp INSIGHT 2025

Company: NetApp

Video Links:

Personnel: Stephen Foskett

At NetApp Insight 2025, the Tech Field Day delegates gathered to provide their perspectives on the company’s major announcements, primarily focusing on the AI Data Engine (AI/DE) and the new AFX storage platform. Attendees were impressed by NetApp’s clear messaging about returning focus to its long-standing core strength: storage. The company’s positioning of AI as a natural evolution of data operations was well received, especially because it reframed storage as more than a backend necessity—it became central to the AI data pipeline. Delegates praised the strategy of anchoring tokenization and embedding within storage operations and appreciated NetApp’s ability to decouple compute and storage while maintaining ONTAP’s legacy features.

The panelists noted that NetApp appears to be embracing a more coherent and integrated direction after years of broad diversification and numerous acquisitions. While the company’s positioning is not about becoming a full AI solutions provider, its emphasis on data operations—particularly through automated metadata analysis, tagging, and governance—positions it uniquely among storage vendors. NetApp’s recognition that effective AI starts with robust, well-governed data excited the delegates, though there were calls for more practical demonstrations or customer journey stories to showcase how AI/DE is being adopted in the field. The discussion also highlighted NetApp’s exclusive capability of offering first-party storage across all three major hyperscalers as a clear differentiator.

Nonetheless, the delegates had constructive critiques, emphasizing the need for NetApp to elaborate on AI-specific concerns like security, ethics, and governance frameworks. While the company has laid down a strong foundation—from classification and compliance to data mobility across clouds—it was suggested that more clarity around partner integrations and extensibility of the platform would resonate with a broader enterprise audience. Delegates appreciated the modest roll-out scope of AI/DE, as it shows NetApp learned from past missteps when rolling out major platform changes too ambitiously. They expressed a shared hope that by NetApp Insight 2026, there will be concrete examples of AI workload deployments enabled by NetApp’s offerings, providing validation through customer success stories and real-world use cases.

Moderator: Stephen Foskett
Panelists: Becky Elliott, Denny Cherry, Gina Rosenthal, Glenn Dekhayser, Guy Currier, Jason Benedicic, Karen Lopez


NetApp Cloud Building the Most Differentiated AI Era Storage Platforms

Event: Tech Field Day Experience at NetApp INSIGHT 2025

Appearance: NetApp Cloud Storage for the AI Era

Company: NetApp

Video Links:

Personnel: Puneet Dhawan, Sayandeb Saha

In this featured session, gain insights into how customers leverage NetApp Cloud Storage to handle demanding workloads such as HPC, EDA, databases, VMware, and SAP at scale. Discover the reasons behind enterprises’ choice of NetApp, highlighting its exceptional price-performance ratio, relentless innovation, and proven file and block cloud storage capabilities. The session features real demos showcasing AI workloads on hyperscalers, GenAI pipelines accelerated by first-party storage, and Instaclustr’s GenAI-ready services, demonstrating how NetApp is leading the way in shaping the future of Cloud and AI.

The presentation, led by Puneet Dhawan and Sayandeb Saha, delves into the transformative role of NetApp’s cloud storage portfolio in enabling hybrid multi-cloud architectures that support a wide array of enterprise workloads. NetApp’s integration with major hyperscalers—AWS, Azure, and Google Cloud—offers customers native experiences while leveraging powerful features of ONTAP, its unified storage platform. Customers can choose from first-party offerings, Cloud Volumes ONTAP (CVO), or NetApp’s fully managed Keystone service, enabling a consistent experience and operational simplicity across environments. The Instaclustr acquisition enriches the data layer with managed open-source services like Kafka, Cassandra, and PostgreSQL, catering to streaming, transactional, and real-time analytics needs with a focus on scalability, openness, and cost efficiency.

A significant aspect of the talk centers on NetApp’s expanding capabilities for AI and analytics use cases. The company is enhancing its storage performance, streamlining data mobility with tools like SnapMirror and FlexCache, and improving integration with hyperscaler AI services. In-place AI data access eliminates the need for redundant data copies while supporting services like SageMaker, Azure AI Studio, and Google Gemini Enterprise. Enhancements include support for the S3 protocol, performance boosts on Azure NetApp Files, and enterprise-grade security. The unveiling of new tools like NetApp Data Migrator simplifies cloud transitions, even from non-ONTAP sources, and new additions to Instaclustr such as pgvector and the MCP Gateway project demonstrate NetApp’s commitment to powering modern AI-infused applications through seamless and secure data infrastructure across hybrid multi-cloud ecosystems.
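
Since ONTAP is the common layer, data mobility is scriptable the same way on-premises and in the cloud. As a minimal sketch, a SnapMirror relationship can be created through the ONTAP REST API; the cluster address, credentials, and volume paths below are placeholders:

    import requests
    from requests.auth import HTTPBasicAuth

    ONTAP = "https://cluster1.example.com"
    AUTH = HTTPBasicAuth("admin", "<password>")

    body = {
        "source": {"path": "svm_src:vol_projects"},       # svm:volume notation
        "destination": {"path": "svm_dst:vol_projects_dr"},
        "policy": "MirrorAllSnapshots",                    # built-in async policy
    }

    # verify=False is for lab gear with self-signed certificates only.
    r = requests.post(f"{ONTAP}/api/snapmirror/relationships",
                      json=body, auth=AUTH, verify=False, timeout=30)
    r.raise_for_status()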


Unleash Innovation with the NetApp AI Data Engine

Event: Tech Field Day Experience at NetApp INSIGHT 2025

Appearance: NetApp’s Platform for AI Data Innovation

Company: NetApp

Video Links:

Personnel: Arindam Banerjee, Tore Sundelin

Data is the fuel that powers AI. Discover how NetApp uniquely empowers AI Innovators to unleash the full potential of GenAI by securely accessing and managing their enterprise data, regardless of location or scale. Gain insights into real-world examples and use cases demonstrating how NetApp is assisting organizations in overcoming data challenges across data centers and multi-cloud environments, ultimately accelerating AI-driven outcomes. Be among the first to learn about groundbreaking innovations that span the best infrastructure for AI, data discovery, data governance, and how to seamlessly integrate AI and data. Simplify Enterprise AI for Inferencing, Retrieval Augmented Generation (RAG), and model training today, paving the way for your Agentic AI future tomorrow.

In their session at the Tech Field Day at NetApp INSIGHT 2025, speakers Tore Sundelin and Arindam Banerjee introduced the NetApp AI Data Engine (AIDE), discussing the state of enterprise AI adoption and the common challenges companies face in scaling AI to production use. Despite the tantalizing promises of AI, studies claim a high rate of failure among enterprise AI projects due to fragmented tools, siloed and duplicated data sets, and complex management needs. NetApp sought to address these issues with a unified platform anchored by ONTAP, its industry-leading data management software, and powerful integration with NVIDIA. The AI Data Engine aims to simplify AI operations across data discovery, governance, transformation, and cost-efficiency, enabling organizations to move from isolated experiments to production AI systems more easily.

Banerjee highlighted how the AI Data Engine integrates compute and storage by introducing dedicated Data Compute Nodes (DCNs) connected via high-speed networks to ONTAP-based AFX clusters. This tight integration, enhanced with NVIDIA GPUs and co-engineered embedding models, enables efficient vectorization and semantic search for AI workloads, especially for use cases like RAG. The system also features rich metadata indexing, resilient snapshot-based lineage tracking, and automated detection and governance tools to protect sensitive data. With support for hybrid and multi-cloud environments, the platform empowers both infrastructure admins and data scientists via distinct interfaces, allowing for flexible, secure, and scalable AI development and deployment processes, all while leveraging NetApp’s proven storage technologies and ecosystem integrations.
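
NetApp did not publish AIDE's API in this session, so the following is only a conceptual sketch of the RAG retrieval step it enables: data stays in place on ONTAP, and the application queries the engine's vector index rather than copying files into a separate pipeline. The endpoint and payload are hypothetical:

    import requests

    # Hypothetical AIDE semantic-search call; the real interface may differ.
    resp = requests.post(
        "https://aide.example.com/v1/search",
        json={
            "query": "customer churn drivers in Q3",
            "top_k": 5,
            "filters": {"classification": "internal", "pii": False},
        },
        headers={"Authorization": "Bearer <token>"},
        timeout=30,
    )
    resp.raise_for_status()

    # Feed the retrieved chunks to an LLM as grounding context.
    context = "\n\n".join(hit["text"] for hit in resp.json()["results"])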

Looking ahead, NetApp’s roadmap for AIDE envisions a decentralized knowledge graph architecture that will further extend its scalability and capability to support complex AI use cases such as Agentic AI. The platform is already compatible with AI tools like Langchain and Domino Data Labs, and plans are underway to accommodate bring-your-own-model scenarios and support advanced AI modalities. Deep collaboration with NVIDIA has resulted in optimized pipelines and hardware compatibility, including support for upcoming GPU generations. Ultimately, AIDE is positioned as a future-ready solution to help enterprises unlock the value of their massive data estates—over 100 exabytes currently under NetApp management—and make them readily usable and governable for advanced AI applications.


Unlocking Innovation with Modern Data Infrastructure from NetApp

Event: Tech Field Day Experience at NetApp INSIGHT 2025

Appearance: Data Infrastructure Modernization with NetApp

Company: NetApp

Video Links:

Personnel: James Kwon, Pranoop Erasani

Infrastructure modernization today goes beyond simply upgrading storage. It’s the cornerstone of breaking down silos and establishing a unified data foundation that drives innovation across your organization. Whether you’re optimizing hybrid operations, enhancing cyber resilience, or accelerating your AI journey, this featured session will demonstrate how an intelligent data infrastructure, such as the NetApp data platform, offers unparalleled simplicity, security, and efficiency for all your workloads.

At Tech Field Day Experience at NetApp INSIGHT 2025, James Kwon and Pranoop Erasani introduced AFX, an advanced architecture within the NetApp data platform, designed to meet the growing demands of AI and other high-performance workloads. They explained the project’s origin, focusing on how traditional ONTAP architectures struggled to keep pace with the rapid computational advancements of GPUs. AFX breaks from the high-availability (HA) pair constraints by disaggregating storage and compute components, thereby allowing independent scaling of capacity and performance. This approach offers greater flexibility to customers, who can now customize infrastructure growth based on workload requirements rather than being locked into synchronized hardware upgrades.

The AFX design introduces a single storage pool architecture, eliminating redundant storage layers such as aggregates and simplifying both the user experience and storage management. It supports ONTAP interoperability and maintains near-complete feature parity, including capabilities like SnapMirror and FlexGroup, while delivering automatic rebalancing and volume re-hosting for seamless operation. While three separate cluster “personalities”—unified, block-only, and disaggregated—are maintained, features like zero-copy volume moves and simplified expansion reinforce the efficiency and adaptability of AFX. The new system is ideal for AI workloads given its throughput optimization, yet flexibility in design paves the way for use cases in EDA, HPC, and other data-intensive sectors. Though not yet a wholesale replacement for unified ONTAP, AFX represents a foundational step toward universally modern, scalable, and intelligent storage solutions.


Redefining Data Infrastructure for AI with NetApp

Event: Tech Field Day Experience at NetApp INSIGHT 2025

Appearance: NetApp AI Vision and Strategy

Company: NetApp

Video Links:

Personnel: Syam Nair

At the Tech Field Day Experience during NetApp INSIGHT 2025, Syam Nair, Chief Product Officer at NetApp, outlines the company’s strategy to redefine data infrastructure for artificial intelligence (AI). The focus is on building intelligent storage systems that unify data across block, file, and object formats, enabling fast and secure access to AI-ready data. By integrating AI capabilities directly into the storage layer, such as metadata enrichment, tokenization, and data governance, NetApp aims to empower storage administrators and extend the usability of data for AI workflows without sacrificing control or security.

In his presentation, Syam Nair emphasized that the AI data landscape is shifting from hype to practical application, with growing unstructured data sources such as machine-generated and generative AI outputs. With the ONTAP platform at its core, NetApp’s vision hinges on making all data AI-ready by embedding intelligence—like security policies, tokenization, and embedding mechanisms—into the data layer itself. This not only ensures data accessibility and governance but also removes the need for complex extraction, transformation, and loading (ETL) processes. NetApp’s AI Data Engine (AIDE) and AFX are designed to streamline this intelligent access while reducing the proliferation of data copies by managing metadata and vectorization in place.

NetApp’s approach aims to elevate the role of storage administrators, transforming them from infrastructure caretakers into enablers of data-centric applications and AI workflows. Instead of pushing AI users to understand the storage backend, NetApp provides APIs and policy-driven data access mechanisms that integrate with tools like Kafka or database systems. Emphasis was placed on security through granular, zero-trust policies and governance over metadata to prevent overhead and sprawl. NetApp aims to support emerging standards such as Apache Iceberg for semantic access and to evolve toward a system where unstructured data can be consumed like structured data—offering semantic reads without altering write formats. Ultimately, NetApp is not attempting to replace databases but rather to unify and enrich data access directly within the intelligent storage infrastructure.
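
The appeal of Apache Iceberg in this vision is that table metadata, not the storage system, defines the semantic view, so any engine can read the same data. As a generic illustration with the pyiceberg library (the catalog endpoint and table names are illustrative, not NetApp's implementation):

    from pyiceberg.catalog import load_catalog

    # Assumes an Iceberg REST catalog; connection details are placeholders.
    catalog = load_catalog("default", **{
        "type": "rest",
        "uri": "https://catalog.example.com",
    })

    table = catalog.load_table("telemetry.sensor_readings")

    # Semantic, predicate-pushdown read over data sitting in object storage.
    batch = table.scan(row_filter="site = 'fab-1'", limit=100).to_arrow()
    print(batch.num_rows)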


Enterprise Grade Artificial Intelligence with NetApp

Event: Tech Field Day Experience at NetApp INSIGHT 2025

Appearance: NetApp AI Vision and Strategy

Company: NetApp

Video Links:

Personnel: Jeff Baxter

In this presentation, Jeff Baxter, VP of Product Marketing at NetApp, discusses how NetApp is enabling enterprise-grade artificial intelligence through a comprehensive intelligent data infrastructure. By addressing four key imperatives—modernizing data centers, transitioning to the cloud, adopting AI, and ensuring cyber resilience—NetApp aims to help organizations transform how they manage and utilize data.

Baxter underscored NetApp’s singular focus on data infrastructure, highlighting the evolution of the ONTAP operating system as a consistent data plane across environments—from on-premises systems to public and sovereign clouds. A major topic was the unveiling of the NetApp data platform and the importance of AI-ready data. Baxter asserted that many AI projects fail, not due to flawed models or lack of talent, but because the data isn’t prepared for AI—emphasizing issues like accessibility, compliance, and integration. With this challenge in mind, NetApp is repositioning itself as a true data platform company, absorbing years of enterprise experience into a unified, resilient, and highly available backbone for modern workloads.

Baxter then introduced two significant product announcements: the NetApp AFX system and the NetApp AI Data Engine, together named the NetApp AFX AI portfolio. AFX is a disaggregated storage architecture built on enhanced ONTAP, allowing for scalable performance and storage capacity separately—ideal for diverse AI workloads. The AI Data Engine complements this by providing a high-speed, secure data pipeline integrated with a vector database optimized for AI use cases like retrieval-augmented generation (RAG). This engine supports semantic search, data guardrails, and AI-ready APIs, pushing AI workloads into full enterprise territory with the reliability, compliance, and availability expectations that business-critical systems require.


Microsoft Sentinel Delegate Roundtable Discussion

Event: Tech Field Day Exclusive with Microsoft Security

Appearance: Tech Field Day Exclusive Delegate Roundtable Discussion

Company: Tech Field Day

Video Links:

Personnel: Tom Hollingsworth

In this roundtable discussion, the Field Day delegates discuss the current state of Microsoft Sentinel. Currently, there is work to do in bringing together multiple portals like Defender, Entra, and Purview, as well as in supporting analysts whose roles span multiple security personas. There is also a need to clarify the licensing requirements and how each of the tools in the overall suite is integrated into workflows. The consensus is that the platform feels like a collection of separate products from different teams rather than a truly unified, integrated solution. This challenge is magnified for organizations with hybrid or multi-cloud environments, where the high cost of ingesting data from non-Microsoft sources like AWS presents a significant barrier to adoption.

The delegates expressed hesitation about making a strategic investment in a platform that seems so early in its development, concerned that future changes could force them to retool their processes. They stressed the need for greater maturity, transparency, and traceability, especially in reporting, as they cannot present “black box” data to senior leadership. For Sentinel to succeed in the real world, the delegates believe Microsoft must demonstrate a stronger commitment to interoperability by adopting open standards like OCSF more quickly and offering more flexibility in data engineering and routing before data enters the Sentinel lake. The feeling is that Microsoft needs to transition from its traditional license-based, “all-or-nothing” approach to prove it can truly function as an open ecosystem partner.

Despite these criticisms, the delegates are optimistic about Sentinel’s potential. The underlying data platform, with its integrated layer of tabular, graph, and vector data, is considered powerful, especially for advanced data science teams. The graph visualizations were particularly praised as an effective way to communicate pre- and post-breach scenarios and risk to business leaders. The delegates concluded that the platform’s greatest current strength is its flexibility. By providing low-code/no-code interfaces and natural language query capabilities, Microsoft empowers customers to build the specific reports and tools they need. This ability for organizations to create their own solutions is seen as a powerful way to bridge the current maturity gap and extract immediate, tailored value from the platform.


Microsoft Sentinel Capabilities Demo with Abhishek Agrawal

Event: Tech Field Day Exclusive with Microsoft Security

Appearance: Microsoft Security Platform Demos

Company: Microsoft Security

Video Links:

Personnel: Abhishek Agrawal

This presentation demonstrates the capabilities of Microsoft Sentinel’s evolution into a unified security platform, showcasing how a single console empowers security practitioners to manage and investigate threats across their entire digital estate. The core principle is that since “attackers think in graphs” and move across domains, defenders need a consolidated, cross-domain view. This is delivered through the Microsoft Defender console, which brings together tools for identity, endpoints, email, and cloud infrastructure. A key feature is the proactive exposure management capability, powered by the Sentinel Graph. It visualizes attack paths from internet-exposed assets to critical data, allowing teams to prioritize patching the most crucial vulnerabilities first, moving beyond simple vulnerability scanning to understanding true organizational risk.

For post-breach scenarios, the platform offers a unified incident queue that reduces alert fatigue by correlating alerts from both Microsoft and third-party sources into a single “Uber story.” When an incident occurs, the Sentinel Graph is used to stitch together the alerts into a coherent narrative and calculate the potential blast radius, showing analysts where an attacker could pivot next and helping them prioritize response actions. This graph-based approach also transforms threat hunting. While analysts can still run traditional Kusto Query Language (KQL) queries on recent data in the analytics tier, they can now also perform “posture hunting” directly on the graph to proactively find overprivileged access or risky configurations before they can be exploited.
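
The analytics-tier hunts described here are ordinary KQL, and the same queries can be driven programmatically against the Sentinel workspace. A small sketch with the azure-monitor-query SDK; the workspace ID is a placeholder and the query is just an example hunt:

    from datetime import timedelta
    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    client = LogsQueryClient(DefaultAzureCredential())

    # Example hunt: accounts signing in from unusually rare source addresses.
    QUERY = """
    SigninLogs
    | where TimeGenerated > ago(30d)
    | summarize signins = count() by UserPrincipalName, IPAddress
    | order by signins asc
    | take 20
    """

    resp = client.query_workspace(
        workspace_id="<workspace-guid>",
        query=QUERY,
        timespan=timedelta(days=30),
    )
    for row in resp.tables[0].rows:
        print(row)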

These advanced capabilities are powered by the Sentinel Data Lake, which decouples storage and compute to allow for the cost-effective, long-term retention of high-volume data like syslogs and cloud trails. This data is stored in an open Delta Parquet format, enabling multiple forms of analysis on a single copy of the data. Analysts can run KQL queries for retro-hunts spanning years or perform deep, big-data analysis using Spark and Python directly within VS Code. This is further enhanced by AI, where the Sentinel MCP server and GitHub Copilot allow analysts to perform “vibe hunting.” They can use natural language to ask questions, discover relevant data tables in the lake, and even have the AI generate entire Python analysis notebooks, dramatically upskilling the entire SOC and making sophisticated data science accessible to every team member.
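
Because the lake tier is open Delta Parquet, standard big-data tooling works against it. Here is a generic PySpark sketch of the kind of multi-year retro-hunt described; the storage path is a placeholder, and the actual notebook environment supplies its own session and table helpers:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("retro-hunt").getOrCreate()

    # Placeholder path to a lake table retained for years.
    syslog = spark.read.format("delta").load(
        "abfss://lake@account.dfs.core.windows.net/tables/Syslog")

    # Count brute-force indicators per host across the whole retention window.
    (syslog
        .where(F.col("TimeGenerated") >= "2022-01-01")
        .where(F.col("SyslogMessage").contains("Failed password"))
        .groupBy("HostName")
        .count()
        .orderBy(F.desc("count"))
        .show(20))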


Microsoft Sentinel Evolution Executive Session

Event: Tech Field Day Exclusive with Microsoft Security

Appearance: Microsoft Sentinel Evolution Executive Session

Company: Microsoft Security

Video Links:

Personnel: Gideon Bibliowicz, Scott Woodgate

Microsoft Sentinel is evolving from a market-leading Security Information and Event Management (SIEM) tool to a full-fledged, AI-driven security platform for Microsoft Security and its partners. The core of this evolution is to unify security operations within the Microsoft Defender portal, which will remain the primary interface for SOC analysts. Sentinel is being re-architected to serve as the underlying data and analytics engine for all Microsoft security products, including Defender, Entra, and Purview. This shift addresses the need to ingest and analyze massive volumes of security data from diverse sources affordably and efficiently, setting the stage for advanced AI capabilities and automated security agents. The goal is to eliminate the trade-off between comprehensive security coverage and budget constraints by creating a centralized, scalable foundation.

This new platform is built on several key innovations. The Sentinel Data Lake, now generally available, provides a low-cost tier for long-term data storage (up to 12 years), separating storage costs from compute costs. This makes it feasible for organizations to retain voluminous logs from network devices and other third-party sources that were previously cost-prohibitive. On top of this data lake, Microsoft is introducing new ways to interact with data, most notably the Sentinel Graph. This feature allows analysts to visualize relationships between assets, identities, and activities, helping them to understand complex attack paths and blast radiuses in a more intuitive way, because “attackers think in graphs.” The platform also includes a new MCP (Model Context Protocol) server, which enables natural language queries and provides a framework for AI agents to discover and use security tools automatically.

Microsoft emphasizes that this is an open platform designed to support a thriving ecosystem and heterogeneous customer environments. With nearly 400 connectors, the platform is built to ingest and correlate data from third-party tools like CrowdStrike and Zscaler with the same fidelity as Microsoft’s native stack. The vision extends to AI-driven actions, like Attack Disruption, which will be expanded to take actions on third-party systems. This entire stack, from the data platform to the AI capabilities, is brought together in the new Microsoft Security Store. This marketplace allows customers to discover, purchase, and deploy curated security solutions and AI agents from both Microsoft and its partners, completing the transition to a unified, AI-ready security architecture.


Getting Visibility and Control over SaaS Sprawl with 1Password Extended Access Management

Event: Security Field Day 14

Appearance: 1Password Presents at Security Field Day 14

Company: 1Password

Video Links:

Personnel: Jason Meller

SaaS sprawl creates a number of serious issues for companies: wasted budget, the exposure of sensitive data via unsanctioned apps, and disjointed access management for apps outside SSO. Jason Meller walks through how 1Password helps our customers discover, manage, and secure their entire SaaS ecosystem – even non-SSO apps – via 1Password Device Trust and Trelica by 1Password. This problem has exploded as employees have gained more autonomy to choose their own tools, creating a significant visibility challenge for IT and security teams. 1Password addresses this by using its Device Trust agent to discover the full scope of application usage across an organization. The agent provides deep visibility by identifying browser visits, desktop apps, browser extensions, and even IDE plugins across Windows, macOS, and Linux, all while providing users with a privacy center to understand what data is being collected. This is particularly effective for discovering modern AI tools, which often have multiple components; for example, the agent can detect not only the ChatGPT website but also its native desktop app and VS Code extension.
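
1Password Device Trust descends from Kolide, whose agent is built on osquery, so checks like the ChatGPT example boil down to SQL queries over device state. The product's actual query packs are not public; this is only a generic approximation using osqueryi (macOS tables shown):

    import json
    import subprocess

    # Illustrative query: find the ChatGPT desktop app and any VS Code
    # extensions mentioning OpenAI. Table and column names are osquery's.
    SQL = """
    SELECT name, path FROM apps WHERE name LIKE 'ChatGPT%'
    UNION
    SELECT name, path FROM vscode_extensions WHERE name LIKE '%openai%';
    """

    out = subprocess.run(
        ["osqueryi", "--json", SQL],
        capture_output=True, text=True, check=True,
    )
    for row in json.loads(out.stdout):
        print(row["name"], row["path"])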

Once these applications are discovered, 1Password provides nuanced control that goes beyond simple blocking. For a tool like ChatGPT, an administrator can create a policy that doesn’t just ban it but instead ensures employees are using the sanctioned corporate workspace. If a user is detected using a personal account, Device Trust can block them from accessing sensitive company resources until they switch to the approved account, educating the user on the policy in real time. This discovery and control capability is further enhanced by Trelica by 1Password, a SaaS management platform that acts as a single pane of glass for app governance. Trelica integrates with IDPs, financial systems, and its own browser extension to discover shadow IT, manage licenses, and automate complex onboarding and offboarding workflows across hundreds of integrated applications.

Ultimately, these components come together in the 1Password App Launcher, which provides a unified and seamless sign-in experience for end users. The launcher presents all of a user’s applications, whether they are federated through an IDP or require a username and password. When a user clicks an icon, 1Password handles the authentication details in the background—either navigating the SSO flow or autofilling credentials and TOTP codes—while transparently enforcing device trust checks. This creates “experiential uniformity” for the user, allowing IT and security teams to improve security behind the scenes, such as upgrading an app from password-based login to federated SSO, without disrupting the user’s workflow. This holistic approach is central to 1Password’s mission to secure every sign-in to every app from every device.


How 1Password is Building Agentic AI Security and GenAI Discovery

Event: Security Field Day 14

Appearance: 1Password Presents at Security Field Day 14

Company: 1Password

Video Links:

Personnel: Anand Srinivas

Anand Srinivas discusses 1Password’s security-first approach to AI, and shows how our principles inform the AI-related capabilities we’re building. Our first area of focus is ensuring secure access for AI agents via the 1Password SDK, so agents receive timebound, auditable access without the use of hardcoded credentials. In addition, Srinivas shows how our products enable customers to discover and block unapproved genAI tools. This approach is guided by core principles, including adhering to the same zero-knowledge architecture for AI as for user credentials, ensuring authorization is deterministic rather than probabilistic, and never placing raw credentials into an LLM’s context window. 1Password recognizes that agentic AI is fundamentally different from traditional applications; it’s probabilistic, often acts on behalf of a human, and behaves like a hybrid of a user and an application. This unique nature scrambles the traditional, siloed methods of managing secrets for applications versus the workforce, creating a need for a single, unified source of truth for all credentials.
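
The SDK pattern for agents is to hand the process a scoped service-account token and resolve secret references at run time, so no credential ever sits in code or in a prompt. A minimal sketch with the 1Password Python SDK; the vault, item, and integration names are illustrative:

    import asyncio
    import os
    from onepassword.client import Client

    async def main():
        # The service-account token comes from the environment, not source code.
        client = await Client.authenticate(
            auth=os.environ["OP_SERVICE_ACCOUNT_TOKEN"],
            integration_name="research-agent",
            integration_version="0.1.0",
        )
        # Resolve a secret reference; the raw value never enters an LLM prompt.
        api_key = await client.secrets.resolve("op://AI-Agents/backend-api/credential")
        print(f"resolved a {len(api_key)}-character secret")  # never log the value

    asyncio.run(main())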

To address these new challenges, 1Password is developing solutions to secure how AI agents and developers interact with sensitive data. One demonstration showed how their SaaS management tool, Trelica, can connect to an LLM through a Model Context Protocol (MCP) server, allowing an AI like Claude to answer questions about enterprise contracts without ever accessing raw credentials. This highlights a way to leverage AI’s power while maintaining strict data governance. The presentation also previewed a significant security enhancement for developers who often “vibe code” and hardcode secrets. A new feature will allow developers to import secrets from a plain-text environment file directly into a secure 1Password vault with a single click, replacing the vulnerable local file with a securely mounted one that requires authentication to access, thus preventing accidental exposure in code repositories.

1Password is extending its reach to secure emerging AI-native platforms. They announced a partnership with the AI browser Perplexity, becoming the exclusive launch partner for password management to ensure users can interact with these new tools securely from the start. This move, along with their work on securing developer workflows and programmatic AI access, demonstrates 1Password’s strategy to apply its user-friendly, security-first philosophy to the entire AI ecosystem. While specific solutions for providing agentic AI with timebound, auditable access are still forthcoming, the company has clearly identified the core problems and is building a framework to solve them, positioning the password manager as a central component of an enterprise’s AI security strategy.


How 1Password Extended Access Management is Securing the Future of Work

Event: Security Field Day 14

Appearance: 1Password Presents at Security Field Day 14

Company: 1Password

Video Links:

Personnel: Jason Meller, Leya Leydiker

1Password is the leader in Extended Access Management, a new category of security that addresses the gaps in access management created by app, identity, and device sprawl. Our platform is composed of three products: our Enterprise Password Manager, Trelica by 1Password, and 1Password Device Trust. In this presentation, Jason Meller and Leya Leydiker explain the Access-Trust Gap facing modern organizations, and explore how our password manager acts as the foundation for our suite of solutions. This “Access-Trust Gap” is defined as the combination of unmanaged devices, shadow IT applications, and sprawling identities that fall outside the purview of traditional security tools like Identity Providers (IDPs) and Mobile Device Management (MDM). Because 1Password is used to store credentials that these other systems don’t cover (like API keys), the company has unique visibility into this growing problem. Their Extended Access Management platform aims to close this gap by providing unified visibility and complete control. The presentation demonstrated this by showing how 1Password Device Trust could detect an unencrypted SSH key on a developer’s laptop, block access to a sensitive app like GitHub, and then seamlessly guide the user to secure that key within their 1Password vault, thereby fixing the issue and training the user simultaneously.

The foundation of this strategy is 1Password’s Enterprise Password Manager (EPM), which secures every step of the user journey, not just the initial login. The platform’s success is rooted in its user-first design philosophy, which stems from its origins as a consumer application. This focus on making the secure way the easy way drives user adoption and reduces friction, which in turn minimizes help desk tickets for things like password resets. The EPM handles not only passwords but also API keys, SSH keys, passkeys, and one-time passcodes (OTPs), allowing it to serve as a single, secure vault for all types of credentials. This capability enables secure sharing among teams, such as a social media team sharing a single login with MFA. Crucially, all of this is built on a “zero knowledge” security model, meaning user data is encrypted locally on their device, and 1Password itself cannot access it, ensuring credentials remain secure even in the event of a breach.


Security Field Day Delegate Roundtable: Enforcement

Event: Security Field Day 14

Appearance: Security Field Day 14 Delegate Roundtable Discussion

Company: Tech Field Day

Video Links:

Personnel: Tom Hollingsworth

The presentation discusses the best places to enforce security policy, whether that’s on the endpoint, in the network, or in the cloud, while also exploring where security policy enforcement is headed and how it affects practitioners today. The delegates challenge the traditional default of placing enforcement in the network, but quickly acknowledge its necessity in specific situations. For environments with unmanaged devices, such as universities with student BYOD policies or enterprises with a proliferation of IoT devices like cameras and smart appliances, the network remains the only viable enforcement point. These scenarios highlight that a one-size-fits-all approach is impractical; the correct location for enforcement is heavily dependent on the context of the organization, the users, and the types of devices that need protection. The core challenge is applying effective policy without being able to install an agent or directly manage the endpoint.

As the discussion evolves, it addresses how the very structure of the enterprise network has fundamentally changed. The classic three-tier model of core, distribution, and access has been replaced by a modern equivalent for remote work: the cloud, the internet, and the employee’s home. This shift has eliminated the traditional network choke points where security policies were once enforced. In response to this new reality, the conversation shifts to Zero Trust as a necessary paradigm. Rather than defending a perimeter, Zero Trust treats every access request as a distinct transaction. It simplifies security to its core components—a consumer (like a user or service) attempting to access a resource—and mandates authentication for both sides of every interaction. This is a radical departure from simply funneling traffic through a firewall and underscores the need for a new way of thinking about security architecture.

Despite the conceptual advantages, the delegates recognize the immense difficulty of implementing a Zero Trust model in established “brownfield” environments. The primary obstacle is the requirement to understand and map every data flow and application interaction, a task that has historically been nearly impossible. A more pragmatic path forward is to adopt a “protect surface” strategy, applying Zero Trust principles to one critical application or dataset at a time and expanding from there. The roundtable concludes that while emerging technologies like AI may help in mapping these complex environments, they also introduce new risks and regulatory pressures. Ultimately, the key takeaway is that no enforcement strategy—whether it’s network-based, endpoint-based, or Zero Trust—can succeed without first achieving a comprehensive and accurate understanding of the environment being protected.


SquareX Browser Detection and Response Demos

Event: Security Field Day 14

Appearance: Introducing SquareX at Security Field Day 14

Company: SquareX

Video Links:

Personnel: Shourya Pratap Singh

Shourya Pratap Singh, Principal Software Engineer, discusses the architecture of the SquareX Extension, engineered from the ground up with a modular and scalable design to deliver browser security. He explains how it augments existing security setups. Through demos, Shourya showcases use cases such as Browser Attack Detection and Response, Browser DLP, and enterprise browser use cases. He also highlights how the platform enables rapid modeling of protection against new threats, providing organizations with faster and more comprehensive browser security.

Throughout the presentation, Singh demonstrates how attackers exploit the visibility gap in traditional security tools by executing attacks entirely within the browser. He showcases how malicious files can be hidden in plain sight within legitimate web resources like CSS or WebAssembly files, and then reassembled and triggered as a download on the client side, bypassing proxy-based scanners. Similarly, he illustrates an OAuth consent attack where a legitimate link to a service like Salesforce is used to trick a user into granting risky permissions, leading to data exfiltration that email security and EDRs would miss. In both scenarios, the SquareX browser extension provides the necessary “last mile” control, intercepting the file download or the consent-granting action directly within the browser to block the threat before it can be executed.

Singh explains that the SquareX platform complements existing security setups by providing granular control and deep visibility into browser activity. Administrators can create policies using a simple UI, an AI-powered natural language generator, or a flexible Lua script editor, which allows for rapid defense modeling against novel attacks. Detections are enriched with an “AttackGraph” that maps the user’s entire navigation path leading to an incident, providing far more context than traditional logs. The extension-based approach is positioned as superior to dedicated enterprise browsers, as it avoids disrupting user behavior and workflows, enhances reliability, and seamlessly integrates with any browser to fill the critical security gaps in DLP and EDR.


SquareX Browser Detection and Response: Closing the SWG and EDR Visibility Gap

Event: Security Field Day 14

Appearance: Introducing SquareX at Security Field Day 14

Company: SquareX

Video Links:

Personnel: Shourya Pratap Singh

SquareX’s browser extension turns any browser on any device into an enterprise-grade secure browser. SquareX’s industry-first Browser Detection and Response (BDR) solution empowers organizations to proactively defend against browser-native threats including Last Mile Reassembly Attacks, rogue AI agents, malicious extensions, and identity attacks. SquareX is the only solution that provides BDR, enterprise browser, and browser DLP capabilities in a single extension. Unlike dedicated enterprise browsers, SquareX seamlessly integrates with users’ existing consumer browsers, delivering security without compromising user experience.

In the presentation, Shourya Pratap Singh explains that this solution is necessary because the very definition of an endpoint is evolving. Whereas endpoints were once defined by native applications and local storage, today the browser has become the primary application platform where most organizational work occurs. This shift means that the attack surface has also moved to the browser. Singh argues that traditional security tools, which were designed when browsers were simple rendering tools, are no longer sufficient. The modern browser is a complex ecosystem with advanced protocols and capabilities, making it impossible to infer all threats simply by inspecting network traffic, as was possible in the past. This complexity creates a significant visibility gap for existing security stacks.

Singh details how both Endpoint Detection and Response (EDR) and Secure Web Gateway (SWG) solutions fail to close this gap. EDR tools have limited visibility because the browser operates as a “closed box,” preventing them from seeing threats that live and die entirely within it, such as malicious extensions, identity-based consent attacks, or threats delivered via WebAssembly. Likewise, network-based SWG solutions lack the application context to detect advanced evasions. Singh uses the example of “Last Mile Reassembly Attacks,” where a malicious file is broken into individually benign chunks that pass through network security, only to be reassembled into a threat by JavaScript on the client side. By operating as a browser extension, SquareX’s BDR provides the necessary in-browser visibility to detect and respond to these modern, evasive threats that bypass traditional security controls.
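
To make the evasion concrete, here is a deliberately benign Python sketch of the reassembly idea; in the real attack the same logic runs as JavaScript inside the page, after each chunk has individually passed network inspection:

    import base64

    # Each chunk is meaningless (and scanner-clean) on its own.
    chunks = [
        base64.b64encode(b"MZ\x90\x00"),          # file header, split off
        base64.b64encode(b"...rest of file..."),  # remainder of the payload
    ]

    # Client-side reassembly: only here does the original file exist again,
    # which is why inspection at the proxy never sees it.
    payload = b"".join(base64.b64decode(c) for c in chunks)
    with open("reassembled.bin", "wb") as f:
        f.write(payload)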


Customer Spotlight: The Future Takes Shape with JetZero and Nile

Event: Security Field Day 14

Appearance: Nile Presents at Security Field Day 14

Company: Nile

Video Links:

Personnel: Drew Geyer

Nile’s mission is to be the “easy button” for network and security in on-premises deployments. The company was founded by networking industry veterans, including former Cisco executives John Chambers and Pankaj Patel, to address the complexity of enterprise LAN environments. Nile has pioneered a new architectural approach, backed by numerous patents, that has led to its recognition as a Visionary in the Gartner Magic Quadrant for Enterprise Wired and Wireless LAN Infrastructure. The Nile service is deployed globally across various verticals, powering large-scale environments such as a 12 million square-foot warehouse and concurrently supporting over 200,000 users.

Drew Geyer of JetZero explained that as a revolutionary aviation company developing a next-generation blended wing body aircraft, their network requirements for performance and security are exceptionally demanding. With high-value intellectual property and a $44 billion order backlog, their technology is a prime target for adversaries. However, their initial network, built with top-tier legacy vendors, was a “complete disaster” marked by overwhelming complexity. The small IT team was constantly fighting with a fragile and non-cohesive system of VLANs, ACLs, and bolt-on appliances. This resulted in constant issues, including dead spots across their large hangar, unreliable connections that dropped during crucial investor meetings, and abysmal performance ranging from 3 to 20 Mbps.

Initially skeptical of Nile’s claims, Geyer was won over by their unique philosophy of building security directly into the network fabric rather than adding another tool. A proof-of-concept test “went viral” among employees, who were thrilled with speeds jumping to over 800 Mbps. The full deployment was described as “invisible” to the JetZero IT team, as Nile handled the entire process, delivering a simple, reliable, and high-performing network. The result was a transformative shift from constant firefighting to having a network that operates like a utility, giving the team “peace of mind” to focus on strategic initiatives. Geyer concluded that Nile’s Network-as-a-Service provides the essential foundation that allows JetZero to pursue its mission of building the future of aviation without compromising between security and performance.


Security in Action – Top Use-Cases with Nile NaaS

Event: Security Field Day 14

Appearance: Nile Presents at Security Field Day 14

Company: Nile

Video Links:

Personnel: Jaswanth Kongara, Shiv Mehra

Shiv Mehra detailed Nile’s Zero Trust fabric, designed to counter common attack paths by securing the infrastructure, controlling network access, and governing post-access activity. The infrastructure itself is hardened by design; Nile hardware has no direct management interfaces like SSH or Telnet, and all communications between fabric components are mutually authenticated and encrypted with MACsec. Access control operates on a “deny by default” principle where physical ports are “colorless,” meaning access is determined solely by identity, not port configuration. Nile makes identity verification a cornerstone, supporting seamless wired and wireless SSO integrated with IdPs, traditional 802.1X/RADIUS, and a robust system for IoT devices that combines continuous fingerprinting with optional device validation to ensure proper identification and segmentation.

This identity-first approach enables a “segment of one,” where every user and device is isolated by default, preventing lateral movement and network reconnaissance as demonstrated in a live demo. The policy engine, called the Trust Service, enforces granular, least-privilege access by requiring every entity to belong to a group (user, device, or application). Policies are then built by defining rules between these groups, enhanced with contextual attributes like device compliance status from an MDM or EDR. A final demo showcased the ease of this model by creating a policy in a few clicks to allow only a specific video streaming protocol between employees, while all other inter-employee traffic, including pings, remained blocked, illustrating how Nile simplifies the implementation of true microsegmentation.
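
Nile's management API schema was not shown, but the demo's policy amounts to a single allow rule between two groups on top of a default deny. A hypothetical sketch of what that intent might look like as configuration:

    import requests

    # Hypothetical rule: field names and endpoint are illustrative only.
    policy = {
        "name": "employees-video-streaming",
        "source_group": "employees",
        "destination_group": "employees",
        "action": "allow",
        "match": {"application": "video-streaming"},
        # Anything unmatched, including pings between employees, stays
        # blocked by the fabric's deny-by-default posture.
    }

    resp = requests.post(
        "https://api.nile.example.com/v1/policies",  # hypothetical endpoint
        json=policy,
        headers={"Authorization": "Bearer <token>"},
        timeout=30,
    )
    resp.raise_for_status()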