Scaling Autonomous IT: The Real Enterprise Impact with Digitate ignio

Event: AI Field Day 7

Appearance: Digitate Presents at AI Field Day 7

Company: Digitate

Video Links:

Personnel: Rajiv Nayan

Discover how Digitate is transforming enterprise operations through an AI-first go-to-market strategy built for global scale and complexity. In this session, Digitate’s GM and VP, Rajiv Nayan, will dive into real-world customer success stories that showcase how Digitate is scaling innovation across industries, building multi-billion dollar business opportunities, and reshaping how businesses run in the AI era. Learn how your enterprise can scale smarter, faster, and more proactively with Digitate: https://digitate.com/ai-agents/

At AI Field Day 7, Rajiv Nayan, Vice President and General Manager of Digitate, presented “Scaling Autonomous IT: The Real Enterprise Impact with Digitate ignio.” Nayan introduced ignio as an agent-based platform designed to bring autonomy to enterprise IT operations, built from the ground up over the past decade and protected by over 110 patents. Targeting a $31 billion global market across retail, pharma, banking, and manufacturing, ignio leverages machine intelligence and agent-based automation to address repetitive, knowledge-driven IT tasks, aiming to shift enterprises from assisted or augmented operations to true autonomy. According to Nayan, the platform is used by over 250 customers worldwide and has earned a high customer satisfaction rating of 4.4 out of 5 on G2.

Nayan illustrated ignio’s capabilities through detailed customer stories. For luxury retailer Tapestry, ignio integrated with IBM Sterling order management, financial, and logistics systems to monitor and optimize the journey of orders across 37 global webfronts. The platform proactively handled issues ranging from data inconsistencies to job cycle failures, ultimately saving the company millions and managing over 100,000 orders. In another case, a large pharmaceutical company with 70 complex system interfaces used ignio to streamline their prosthetic limb supply chain, reducing critical demand planning processes from weeks to hours. A global consumer goods company also employed ignio to automate order fulfillment across SAP systems and manufacturing plants, avoiding disruptions in a high-volume direct-to-store delivery model and preventing millions in potential losses.

At scale, ignio demonstrated significant operational efficiencies for enterprises such as a major pharmaceutical distributor and a retail pharmacy chain. In the distribution environment, ignio handled over 20,000 configuration items and 220 business-critical applications, achieving 80 percent noise reduction in event management and automating 110,000 hours of annual manual work. For the retail pharmacy chain with 9,000 stores, ignio automated ticket management tied to revenue assurance for promotions, reducing mean time to resolution from nearly three days to under ten minutes and recapturing $17 million in revenue while saving $5 million in support costs. Across its deployments, ignio processed 1.2 billion events last year, achieved 87 percent noise reduction, and executed 300 million automated actions—demonstrating that agentic, autonomous IT platforms can significantly reduce business disruptions and free human talent for higher-value work.


Incident Resolution with Digitate’s ignio AI Agent

Event: AI Field Day 7

Appearance: Digitate Presents at AI Field Day 7

Company: Digitate

Video Links:

Personnel: Rahul Kelkar

At AI Field Day 7, Rahul Kelkar, Chief Product Officer at Digitate, presented the capabilities of ignio, an AI-based incident resolution agent designed to automate, augment, and improve IT operations. Ignio uses a logical reasoning model for incident resolution, leveraging enterprise blueprints to understand situations in a closed loop and applying automation where possible. When a fully automated response is not viable, ignio augments human efforts through assisted resolution, supplying prioritized incident lists based on business impact, providing situational context, and capturing both historical and episodic memory about recurring issues. The product integrates with various data sources to build a formal enterprise IT model, supporting information ingestion via templates or extraction from existing documentation, and includes adapters for common ITSM systems like ServiceNow for seamless change management.

Technically, ignio’s core incident resolution operates via automated root cause analysis, performing real-time health checks across application hierarchies—such as applications running on Oracle databases hosted on Red Hat servers—and comparing the current state to baselines to isolate anomalies. It can autonomously apply prescriptive fixes, such as restarting services, and then validate remediation by rechecking stack health. In more complex scenarios, like SAP HANA environments or intricate batch job dependencies in retail order management, ignio handles non-vertical, multi-layered issues involving middleware, business processes, and interdependent batch jobs. The solution features out-of-the-box knowledge for common technologies and allows continuous augmentation with customer-specific logic. Custom operational models and atomic actions can be enhanced using Ignio Studio, and the system learns from user feedback through reinforcement learning, improving accuracy in prioritizing incidents, suggesting fixes, and predicting service level agreement (SLA) violations before they occur.
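
The baseline-comparison pattern described above can be illustrated with a small sketch. This is not Digitate code: the layer names, metrics, and thresholds are invented for illustration, and a real system would pull live telemetry rather than static dictionaries.

```python
# Hypothetical sketch of baseline-driven root cause analysis across an
# application stack (app -> database -> host). All names and thresholds
# are illustrative assumptions, not ignio internals.

BASELINES = {
    "app":       {"response_ms": 200},
    "oracle_db": {"active_sessions": 500},
    "rhel_host": {"cpu_pct": 85},
}

def check_layer(name, metrics):
    """Return the metrics in one layer that deviate from its baseline."""
    return {k: v for k, v in metrics.items()
            if v > BASELINES[name].get(k, float("inf"))}

def diagnose(stack_metrics):
    """Check every layer and keep only the anomalous ones."""
    anomalies = {name: check_layer(name, m)
                 for name, m in stack_metrics.items()}
    return {name: dev for name, dev in anomalies.items() if dev}

current = {
    "app":       {"response_ms": 950},     # slow responses observed
    "oracle_db": {"active_sessions": 1200},  # session pile-up
    "rhel_host": {"cpu_pct": 40},            # host itself is healthy
}
print(diagnose(current))
# app and oracle_db deviate while the host is fine, pointing at the
# database tier rather than the infrastructure underneath it
```

After a fix (say, restarting the database listener), the same `diagnose` pass would be rerun to validate remediation, mirroring the recheck step described in the paragraph.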

Ignio extends beyond deterministic resolution to assist engineers and SREs via conversational augmentation. For issues not resolved autonomously, ignio provides contextual insights—including previous incidents, typical resolutions, and guidance for next steps—while collaborating via a “resolution assistant” so humans can contribute domain knowledge and validate procedures. The demo showed proactive recommendation capabilities, identifying dominant recurring SLA violations and offering actionable, prioritized problem management insights. Ignio integrates with multiple agent-based platforms for orchestrated, multi-channel incident management flows, including email, Slack, and ticketing systems, using orchestration protocols and adapters. The platform employs advanced anomaly mining and sequence analysis, allowing users to identify root causes not only within vertical stacks but also through complex temporal and conditional relationships across business functions, ultimately supporting predictive, reactive, and continuous improvement use cases in large-scale enterprise IT environments.


Powering Autonomous IT with ignio AI Agents from Digitate

Event: AI Field Day 7

Appearance: Digitate Presents at AI Field Day 7

Company: Digitate

Video Links:

Personnel: Rahul Kelkar

Digitate is a global provider of an agentic AI platform for autonomous IT operations. Powered by ignio™, Digitate combines unified observability, AI-powered insights, and closed-loop automation to deliver resilient, agile, and self-healing IT and business operations. In this presentation, Digitate’s Chief Product Officer, Rahul Kelkar, will introduce Digitate’s vision for an autonomous enterprise, where organizations learn, adapt, and make decisions with minimal human intervention. Through a series of demos, Rahul will also showcase how Digitate’s purpose-built AI agents work seamlessly across observability, cloud operations, and IT for business to boost cloud ROI, predict delays, and ensure long-term stability. To learn more about Digitate and its ignio platform, visit: https://digitate.com/ai-agents/

At AI Field Day 7, Rahul Kelkar, Chief Product Officer at Digitate, presented on powering autonomous IT with ignio, Digitate’s agentic AI platform designed for IT operations. Kelkar began by outlining the industry’s evolution from manual IT operations and cognitive automation toward modern AIOps and agentic AI, framing the journey towards fully autonomous IT as a progression through stages of manual, task-automated, and augmented operations. He described how ignio leverages a unified three-pillar approach: unified (or business) observability for comprehensive monitoring of both technical and business processes, AI-driven insights using traditional and agentic AI including machine learning and generative models, and closed-loop automation that not only provides recommendations but executes prescriptive actions with high confidence. This architecture aims to proactively eliminate business disruptions due to IT, identify issues before they impact business productivity, and reduce incident resolution times.

Ignio operates on what Digitate calls the “Enterprise Blueprint,” essentially a digital twin or knowledge graph that captures both structural and behavioral aspects of enterprise IT. The platform integrates with common monitoring and IT management tools, ingesting metrics, events, logs, and traces to provide a layered view of health across infrastructure, application stacks, and business value streams. Observability data is enriched with AI-based noise filtering, anomaly detection, correlation, and dynamic thresholding, automatically triaging and suppressing redundant alerts. Kelkar highlighted ignio’s “composite AI” approach, combining logical reasoning (rule-based and machine learning models), analogical reasoning (using generative AI and large language models for contextualization where knowledge is incomplete), and assisted reasoning (bringing domain experts into the loop to validate and tune recommendations). The workflow encompasses agent-based management of event and incident handling, automated root cause analysis, and remediation actions, all while learning from human validation to continuously improve performance.
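
The noise-filtering and dynamic-thresholding ideas mentioned above can be sketched in a few lines. The window size, event fields, and three-sigma rule below are generic assumptions for illustration, not a description of ignio's actual algorithms.

```python
# Illustrative event noise reduction: (1) suppress duplicate events from the
# same source within a time window, (2) alert via a dynamic threshold of
# mean + k * stdev over recent history instead of a fixed static limit.
import statistics

def dedupe(events, window=300):
    """Keep an event only if the same (source, type) has not fired
    within the last `window` seconds."""
    seen, kept = {}, []
    for e in sorted(events, key=lambda e: e["ts"]):
        key = (e["source"], e["type"])
        if key not in seen or e["ts"] - seen[key] > window:
            kept.append(e)
        seen[key] = e["ts"]
    return kept

def dynamic_threshold(history, k=3):
    """Threshold adapts to recent behavior rather than a fixed value."""
    return statistics.mean(history) + k * statistics.pstdev(history)

events = [
    {"ts": 0,   "source": "db1", "type": "cpu_high"},
    {"ts": 60,  "source": "db1", "type": "cpu_high"},  # repeat, suppressed
    {"ts": 400, "source": "db1", "type": "cpu_high"},  # outside window, kept
]
print(len(dedupe(events)))  # 2
```

Production systems layer correlation and topology awareness on top of this kind of filtering; the sketch only shows why redundant alerts drop out before a human ever sees them.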

The platform is designed to address complex, applied use cases in large-scale, modern environments, such as event reduction, proactive incident management, observability, patch management, and cost optimization across multi-cloud and containerized workloads. Ignio supports integrations through out-of-the-box adapters for 30-40 major tools (with customization for specialized environments) and specific modules for SAP applications, batch scheduling, digital workspaces, and procurement processes. Its agentic capability is extended with Ignio Studio, which lets SREs and IT operations teams continuously customize and build new workflows. As demonstrated, ignio’s AI agents interact via conversational interfaces, notifications, and dashboards, enabling a shift to smaller, cross-functional SRE teams—supported by autonomous agents handling the bulk of monitoring, triage, and remediation, with humans focusing on governance, validation, and improvement. This supports a vision of truly autonomous, resilient IT operations that adapt rapidly to changing workloads and technologies, minimizing disruptions and keeping business-critical systems running smoothly.


The Technical Foundations of Articul8’s Agentic AI Platform

Event: AI Field Day 7

Appearance: Articul8 Presents at AI Field Day 7

Company: Articul8

Video Links:

Personnel: Arun Subramaniyan, Renato Nascimento

Dr. Renato Nascimento, Head of Technology at Articul8, and Dr. Arun Subramaniyan, Founder and CEO, presented the technical architecture and capabilities of the Articul8 platform at AI Field Day 7. The platform is built to enable orchestration and management of hundreds or thousands of domain- and task-specific AI models and agents at enterprise scale, supporting both cloud and on-premises deployments on all major cloud providers. The core architecture leverages Kubernetes for elasticity, high availability, and robust isolation of components. Key elements include a horizontally scalable API service layer and a proprietary “model mesh orchestrator,” which coordinates dynamic, low-latency, real-time executions across a variety of AI models deployed for customer-specific workloads. Observability, auditability, and compliance features are integrated at the intelligence layer, allowing enterprises to track, validate, and meet regulatory requirements for SOC and other audit demands.

At the heart of the platform is the automated construction and utilization of knowledge graphs, which are generated during data ingestion without manual annotation. For example, Articul8 demonstrated the ingestion and analysis of a 200,000-page aerospace dataset, generating a knowledge graph with over 6 million entities, 160,000 topics, 800,000 images, and 130,000 tables. The system automatically identifies topics, clusters, and semantic relationships, enabling precise search, reasoning, and model flows (e.g., distinguishing charts from images, applying task-specific models for tables including OCR, summary statistics, and understanding content). This knowledge graph forms the substrate for supporting both the training and real-time inference of domain-specific and task-specific AI models. The Model Mesh intelligence layer breaks down incoming data, determines its type, and routes it through appropriate model pipelines for processing, ensuring that the architecture can support both large and small models as appropriate for the data and task complexity.
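
Automated knowledge-graph construction of the kind described above can be illustrated with a deliberately naive sketch: treat capitalized tokens as candidate entities and link entities that co-occur in a sentence. Articul8's pipeline uses trained extraction models and far richer typing; everything here, including the toy extractor and stopword list, is an illustrative assumption.

```python
# Minimal sketch of building a co-occurrence knowledge graph from raw text
# at ingestion time, with no manual annotation. The regex "NER" below is a
# stand-in for real entity-extraction models.
import itertools
import re
from collections import defaultdict

STOP = {"The", "A", "An", "This"}  # tiny illustrative stopword list

def extract_entities(sentence):
    """Very rough entity candidates: capitalized alphanumeric tokens."""
    return [t for t in re.findall(r"\b[A-Z][a-zA-Z0-9]+\b", sentence)
            if t not in STOP]

def build_graph(documents):
    """Link every pair of entities that appear in the same sentence."""
    graph = defaultdict(set)
    for doc in documents:
        for sentence in doc.split("."):
            for a, b in itertools.combinations(set(extract_entities(sentence)), 2):
                graph[a].add(b)
                graph[b].add(a)
    return graph

docs = ["The Falcon9 booster uses Merlin engines.",
        "Merlin engines burn RP1 kerosene."]
g = build_graph(docs)
print(sorted(g["Merlin"]))  # ['Falcon9', 'RP1']
```

Even this toy graph shows the payoff: "Merlin" becomes a hub connecting facts from two separate documents, which is the property that lets a real system answer cross-document questions over a 200,000-page corpus.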

The platform also showcases advanced agentic functionalities such as the creation of digital twins—AI-powered proxies of individuals or departments—which can be quickly spun up from public or private data and progressively improved through feedback and additional data integration. In an illustrative demo, Articul8 built digital twins of AI Field Day participants and orchestrated live, multi-agent discussions on technical topics. The platform supports squad-mode interactions, wherein multiple digital twins can collaborate, offer opinions, revise answers, and converge or diverge in real-time analysis. All these actions are fully tracked and auditable, supporting enterprise security and access controls. The discussion outcomes are summarized and can be exported, making the platform suitable not only for typical enterprise Q&A and knowledge retrieval, but also for scenario planning, decision support, and collaborative agentic workflows in secure, controlled environments.


Inside Articul8’s Domain-Specific Platform and Real-World ROI

Event: AI Field Day 7

Appearance: Articul8 Presents at AI Field Day 7

Company: Articul8

Video Links:

Personnel: Arun Subramaniyan, Parvi Letha

At AI Field Day 7, Parvathi Letha, Head of Product Management at Articul8, and Dr. Arun Subramaniyan, Founder and CEO of Articul8, presented the Articul8 platform, a secure, domain-specific generative AI solution tailored for high-value enterprise use cases. Parvathi detailed the architecture, beginning with its autonomous data perception functionality, which ingests and semantically connects structured and unstructured data—including PDFs, tables, images, and CAD drawings—into a living, schema-aware knowledge graph. This knowledge graph serves as the core institutional memory, continually evolving through user interactions, and supports auditability and traceability by tracking updates and allowing for corrective feedback, ensuring personalized and adaptive intelligence for each enterprise and user.

A cornerstone of Articul8’s approach is its agentic reasoning engine, which utilizes hundreds of domain- and task-specific agents. When a complex enterprise mission is received, the reasoning engine breaks it down into sub-missions and autonomously selects the most appropriate agent for each segment. This collaborative, context-aware architecture emulates expert teams, vastly accelerating tasks such as root cause analysis, compliance checks, and network troubleshooting, and supporting outcomes with fine-grained traceability. Articul8’s hyper-personalization goes further with adaptive user interfaces and the capability to create digital twins, allowing outputs and insights to reflect individual user decision styles and enterprise workflows. Domain-specific models span industries like semiconductor, aerospace, manufacturing, supply chain, and telco, with bespoke agents such as a highly specialized Verilog model for chip design and general-task agents for time series, table, and chart understanding.

For enterprise consumption, Articul8 supports multiple consumption models: companies can subscribe to the full suite as a traditional enterprise software platform or deploy select agents a la carte via the AWS Marketplace, paying per API call for agents like LLM IQ, which benchmarks LLMs for specific use cases, or the Network Topology Agent, which digests network logs and diagrams to support troubleshooting. The platform’s industry-specific models are co-developed with partners such as the Electric Power Research Institute and leading aerospace firms, ensuring both robust data integration and expert validation. Articul8’s agent-of-agent architecture has been in production for more than two years and is already driving measurable ROI in semiconductor root cause analysis and CAD validation, reducing processing times from days to hours and delivering over 93 percent detection accuracy—while maintaining data control and auditability entirely within customer security perimeters.


Why Uniqueness Defines the Future of Enterprise GenAI with Articul8

Event: AI Field Day 7

Appearance: Articul8 Presents at AI Field Day 7

Company: Articul8

Video Links:

Personnel: Arun Subramaniyan

At AI Field Day 7, Dr. Arun Subramaniyan, Founder and CEO of Articul8, presented a compelling vision of the future of enterprise generative AI (GenAI) centered on hyper-personalization and domain specificity. The talk addressed the evolution of AI accessibility, highlighting how general-purpose models have become commoditized, making it easier than ever to generate content but often sacrificing depth and context. Dr. Subramaniyan emphasized the limitations of relying purely on generic large language models (LLMs) for enterprise tasks, especially in specialized domains like healthcare, manufacturing, cybersecurity, and energy. Instead, Articul8 advocates for tailored solutions built using domain-specific models that can factor in both tacit knowledge and contextual nuances that general models may overlook or misinterpret.

Articul8’s technological strategy involves developing proprietary models specifically trained or fine-tuned with datasets relevant to particular industries. These models range in size from a few billion to several hundred billion parameters. They are not designed to be general-purpose but to perform specific tasks within domains such as thermodynamics, aerodynamics, and system diagnostics. Articul8 also builds task-specific models, such as those focused on table understanding in spreadsheets or parsing complex PDFs. These models are augmented with multi-model orchestration capabilities and metadata tagging to construct automated knowledge graphs, enabling richer semantic understanding without moving raw data—only metadata is transferred. This approach allows enterprises to retain control over data security, including air-gapped deployments, while benefiting from AI-powered insights.
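
The "only metadata is transferred" idea above can be made concrete with a small sketch: derive a descriptive record (hash, size, domain tags) locally, and ship only that record outside the security perimeter. The field names and tagging scheme are invented for illustration and are not Articul8's actual schema.

```python
# Sketch of metadata-only transfer: the document body is processed inside
# the enterprise boundary, and only derived metadata leaves it.
import hashlib

def to_metadata(doc_id, content, domain_tags):
    """Build a transferable record that contains no raw content."""
    return {
        "id": doc_id,
        "sha256": hashlib.sha256(content.encode()).hexdigest(),
        "bytes": len(content.encode()),
        "tags": sorted(domain_tags),
        # deliberately no "content" field: raw data stays on premises,
        # which is what makes air-gapped deployments workable
    }

record = to_metadata("spec-001",
                     "Turbine blade tolerances and test procedures ...",
                     {"aerospace", "thermodynamics"})
print(record["tags"])  # ['aerospace', 'thermodynamics']
```

The hash still lets a remote index deduplicate and reference the document, while the tags feed semantic search, so the knowledge graph gains structure without the data ever moving.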

Articul8’s platform architecture supports both on-premises and SaaS-based models, enabling flexibility in deployment. The platform can ingest unstructured data, apply semantic reasoning, and, using a combination of models and tools, generate agents and squads of agents that autonomously execute “missions” within enterprise workflows. This layered AI model architecture culminates in creating “digital twins,” which are dynamically updated representations of business systems for simulation and high-fidelity analysis. Demonstrating measurable performance gains, Articul8 benchmarked its energy domain model against state-of-the-art general-purpose models like LLaMA 3, GPT-4, and others, showing superior results across multiple task dimensions. The company maintains control of intellectual property in models unless customer data is involved in training, in which case the IP resides with the customer. This approach ensures scalability while maintaining rigorous standards for contextual accuracy, enterprise integration, and domain relevance.


Nutanix Data Lens Demo

Event: AI Field Day 7

Appearance: Nutanix Presents at AI Field Day 7

Company: Nutanix

Video Links:

Personnel: Mike McGhee

In the Nutanix presentation at AI Field Day 7, Mike McGhee showcased the capabilities of Nutanix Data Lens, a data analytics and cybersecurity tool designed to provide visibility across storage environments, including Nutanix file servers and third-party object stores like AWS S3. The tool collects and analyzes metadata—for example, file names, sizes, and data age—as well as real-time audit trails that track operations like reads, writes, and permission changes. Data Lens is tightly integrated with Nutanix storage solutions and will soon be bundled with their offerings for on-premises deployments, ensuring seamless operational data ingestion and monitoring without reliance on external scanning requests.

A major feature of Data Lens is its ransomware protection, which includes both signature-based detection using known file extensions from open source communities and behavioral analysis that identifies anomalous events like in-place or out-of-place encryption. This allows Data Lens to detect threats from unknown malware using intelligent algorithms trained on customer activity patterns. When threats are detected, Data Lens can log full user activity, block affected users or clients, and provide options for recovery that include restoring individual affected files or entire shares using recommended snapshots taken before the time of compromise. This detailed approach helps reduce false positives and accelerates recovery in the event of an attack.
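
The two detection layers described above, signature matching on known extensions plus behavioral analysis of activity patterns, can be sketched as follows. The extension list, burst threshold, and window are generic assumptions for illustration, not Data Lens logic.

```python
# Illustrative two-stage ransomware check: a fast signature pass over
# community-sourced bad extensions, then a behavioral pass that flags a
# burst of rename/overwrite operations, a common mass-encryption pattern.

KNOWN_BAD_EXTENSIONS = {".locky", ".crypt", ".encrypted"}  # illustrative list

def signature_hit(filename):
    return any(filename.endswith(ext) for ext in KNOWN_BAD_EXTENSIONS)

def behavioral_hit(audit_events, burst=50, window=60):
    """True if more than `burst` rename/overwrite ops land in `window` seconds."""
    writes = sorted(e["ts"] for e in audit_events
                    if e["op"] in ("rename", "overwrite"))
    return any(i + burst < len(writes) and writes[i + burst] - writes[i] <= window
               for i in range(len(writes)))

def assess(filename, audit_events):
    if signature_hit(filename):
        return "block: known ransomware extension"
    if behavioral_hit(audit_events):
        return "block: anomalous encryption-like activity"
    return "allow"

print(assess("report.docx.locky", []))  # block: known ransomware extension
```

The behavioral pass is what catches unknown malware that avoids known extensions, and it is also why products in this space invest in baselining normal per-user activity to keep false positives down.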

In addition to its security functions, Nutanix emphasized the need for better tooling and data accessibility for platform engineering and end-users, particularly when supporting AI and DevOps workflows. Speakers stressed the importance of giving end-users API-level access to infrastructure components like databases, Kubernetes platforms, and AI services so they can manage and troubleshoot their workloads autonomously, without bottlenecks from IT. Nutanix’s broader ecosystem—including Nutanix Kubernetes Platform (NKP), Nutanix Database Services (NDB), and Nutanix AI (NAI)—was presented as a robust infrastructure solution with integrated observability and automation features designed to empower users and admins alike with actionable insights and streamlined management.


Data Mobility and Security with Nutanix

Event: AI Field Day 7

Appearance: Nutanix Presents at AI Field Day 7

Company: Nutanix

Video Links:

Personnel: Vishal Sinha

At AI Field Day 7, Vishal Sinha of Nutanix presented the company’s comprehensive approach to data mobility and security in the context of AI and enterprise computing. He detailed how Nutanix facilitates seamless data movement across edge, core, and cloud environments by supporting various data operations such as synchronization, replication, and distribution. This data mobility capability allows for use cases such as consolidating data from edge locations to a central data center and then utilizing the cloud for AI model inferencing and tiered storage. The platform maintains a consistent global namespace, simplifying data management across disparate environments and enabling organizations to process, analyze, and store data efficiently regardless of location.

The focus then shifted to security, where Sinha emphasized Nutanix’s architecture as being inherently built with security in mind. The system features cyber-resilience measures that detect both known and novel ransomware threats using built-in file system logic and a behavioral-based detection engine. By continuously monitoring file activities and validating potential threats before alerting users, Nutanix minimizes false positives while maintaining an exposure window currently down to ten minutes, with plans to reduce it further. Integration with third-party security tools like CrowdStrike, Palo Alto Networks, and Splunk enhances the platform’s capabilities within broader enterprise security frameworks. Immutable snapshots and a powerful remediation engine further allow customers to take automated or manual action to protect their data and restore functionality with minimal downtime.

Further elaborating on the benefits to enterprises, especially within AI workflows, Nutanix offers Data Lens—a metadata analytics tool that provides full visibility and governance over data usage. This tool can trace file activity over time, identify anomalous user behaviors, and clarify permissions with bidirectional data access views. These capabilities improve compliance, auditability, and operational transparency. Nutanix’s philosophy centers around four foundational pillars: simplicity, security, ubiquity, and unification of storage formats. By delivering an integrated, software-defined storage platform that supports files, objects, and blocks, Nutanix enables organizations to build and maintain intelligent AI-driven applications while maintaining robust data governance and near real-time threat response.


Nutanix Data Architecture and Safety

Event: AI Field Day 7

Appearance: Nutanix Presents at AI Field Day 7

Company: Nutanix

Video Links:

Personnel: Manoj Naik

In this presentation at AI Field Day 7, Manoj Naik from Nutanix outlined the underlying architecture of Nutanix’s unified data platform, which is designed to support next-generation AI workloads with high performance, security, and scalability. He began by analyzing traditional enterprise storage architectures—shared everything and shared nothing—and explained their inherent trade-offs. To overcome these limitations, Nutanix introduced a “shared flexible” architecture, combining the global accessibility of shared everything with the linear scalability of shared nothing. This allows for disaggregated yet cohesive compute and storage capabilities within a single cluster, enabling organizations to scale their infrastructure flexibly and efficiently.

Naik emphasized that the Nutanix platform is built around a data-first architecture, capable of handling the entire AI lifecycle from edge to core to cloud. It supports diverse protocols (NFS, SMB, S3) and applies smart data placement strategies through fine-grained metadata, intelligent caching, and protocol enhancements like SMB referrals and NFS v4. As data flows through the AI lifecycle, Nutanix ensures consistent operations and flexible mobility via replication, cloud integration, and unified control through Prism Central. The ability to scale both capacity and performance seamlessly—from single-node clusters to multi-petabyte, multi-cluster environments—positions Nutanix as a robust storage solution for AI pipelines and large-scale data lake infrastructures.

To validate its performance claims, Nutanix participated in MLCommons’ MLPerf Storage benchmarks, which simulate real-world AI workloads. Improved architectural paths, such as end-to-end RDMA, fast-path data transfers, and utilization of advanced features like SR-IOV, have allowed Nutanix to double performance while using half the hardware. Alongside performance, the platform includes comprehensive data protection features like snapshots, synchronous Metro replication, asynchronous DR, object locking, and integration with cloud object storage. This ensures both data resilience and compliance across deployment environments. Nutanix’s continued investment in performance optimization and robust data governance makes it well-suited for the demands of modern AI infrastructure.


Enabling Data using Nutanix Unified Storage

Event: AI Field Day 7

Appearance: Nutanix Presents at AI Field Day 7

Company: Nutanix

Video Links:

Personnel: Manoj Naik, Vishal Sinha

In their presentation at AI Field Day 7, Nutanix, represented by Distinguished Engineer Manoj Naik and SVP Vishal Sinha, laid out their vision for supporting AI workflows using Nutanix Unified Storage. They began by walking through a typical AI pipeline—from data ingestion and cleansing to model fine-tuning, inferencing, and archiving—emphasizing the challenges presented by data fragmentation across edge, cloud, and various formats. The Nutanix platform addresses these challenges by helping platform teams build and manage AI-ready data pipelines, ensuring clean, high-performance data is readily available for training large language models (LLMs) and inferencing operations.

The Nutanix solution includes a wide array of components that form a full-stack enterprise AI platform. Key pieces include Nutanix Cloud Infrastructure (NCI), Nutanix Unified Storage, the Nutanix Database Service (NDB), Nutanix Kubernetes Engine, and management layers such as Nutanix Central and Cloud Manager. Particularly noteworthy is their database management layer, which supports vector databases like PGVector and Milvus, enabling customers to manage LLM applications efficiently. The storage capabilities are tailored for each phase of the AI lifecycle—high-performance for training, rich metadata tiering for archiving, and versioning support via object storage and snapshotting.

The platform is designed to operate seamlessly across hybrid multicloud environments, offering data security, cyber resilience, and robust data mobility through features like global namespaces, immutable snapshots, and tiering based on metadata sensitivity. Nutanix Unified Storage supports multiple protocols (NFS, SMB, S3, iSCSI) and enables app and data colocation to optimize performance. The platform also supports cascading disaster recovery setups—including metro, asynchronous, and near-synchronous replication—to meet global compliance standards. With use cases ranging from edge AI inferencing to deep archival storage, Nutanix’s mature and versatile platform is built to handle the full lifecycle of data-intensive, AI-powered applications.


The Enterprise AI Cloud Platform Powered by Nutanix Unified Storage

Event: AI Field Day 7

Appearance: Nutanix Presents at AI Field Day 7

Company: Nutanix

Video Links:

Personnel: Alex Almeida

At AI Field Day 7, Alex Almeida from Nutanix presented the company’s vision for enabling enterprise AI initiatives through a robust, cloud-native infrastructure built on Nutanix Unified Storage. He began by highlighting a critical industry insight: while interest in AI is surging, a reported 95% of enterprise generative AI pilots are failing to reach production. This widespread issue, he explained, is not due to deficiencies in the AI applications themselves but rather due to the lack of a strong, scalable data infrastructure that can support AI at the enterprise level. As companies attempt to scale pilots into production, the complexity of integrating data from edge, core, and cloud environments—along with difficulties in orchestrating networking, compute, and storage—presents a considerable barrier to adoption.

To address this challenge, Nutanix is focused on delivering a seamless platform that simplifies the infrastructure required for AI. Their solution is the Nutanix Cloud Platform, built on a modern server-based architecture with a focus on scalability, manageability, and consistency across compute, networking, and especially storage. This shift from traditional three-tier architectures to cloud-native technologies supports containerized applications and Kubernetes workflows. Nutanix sees its evolution from hyper-converged infrastructure (HCI) into a robust cloud-native platform as a foundational change that sets the stage for the next decade of enterprise computing, particularly for AI and PaaS environments. With components like Nutanix Kubernetes Platform (NKP) and Nutanix Cloud Clusters, organizations can run applications consistently across on-prem, cloud, and edge environments.

A central piece of this AI-ready infrastructure is Nutanix’s Unified Storage offering, which provides file, block, and object storage all integrated into a single platform. Almeida emphasized their data management capabilities through Nutanix Data Lens, which adds analytics and cybersecurity features such as ransomware detection. He also pointed to Nutanix’s strategic partnership with NVIDIA, underscoring their certified support for NVIDIA’s GPU Direct Storage and active involvement in NVIDIA’s Enterprise AI Factory. These collaborations ensure that Nutanix’s solutions are aligned with the latest AI data platform innovations. Overall, Nutanix aims to empower enterprises to overcome infrastructure hurdles and realize real ROI from their AI initiatives by providing a modern, unified data foundation.


Security Using AI with Fortinet

Event: AI Field Day 7

Appearance: Fortinet Presents at AI Field Day 7

Company: Fortinet

Video Links:

Personnel: Keith Choi

At AI Field Day 7, Keith Choi from Fortinet presented an overview of Fortinet’s AI strategy and portfolio, emphasizing the integration of AI within cybersecurity solutions. He explained that the increasing adoption of AI in enterprises is driven by the need for efficiency and innovation, and Fortinet has developed a layered approach categorized into three buckets: Protect AI, Secure AI, and AI-Assisted Operations. These categories are designed to address different aspects of AI-related security, from defending against AI-driven threats, to securing AI systems themselves, and enhancing operational efficiency through AI-powered tools like SOC and NOC support.

Choi detailed various solutions underpinning Fortinet’s AI capabilities. For example, FortiGate, the company’s next-generation firewall, now includes controls for generative AI applications, allowing administrators to manage access and prevent data loss. Meanwhile, FortiNDR provides deep network detection and response capabilities without affecting throughput, acting as an internal magnifying glass to monitor and detect threats. FortiDLP complements these tools by offering endpoint data loss protection with real-time alerts and monitoring, helping organizations prevent sensitive data from leaking through AI platforms like ChatGPT. These tools illustrate Fortinet’s commitment to using AI not only to protect networks but also to secure how AI itself is used within organizations.

The presentation concluded with insights into AI-driven operational tools like FortiAI Assist, which uses generative AI for troubleshooting and management through an interactive chatbot UI. Choi highlighted that Fortinet’s architecture uses region-specific AI proxies, ensures sensitive data masking, and allows deployment flexibility between on-premises and cloud environments depending on client needs. He reinforced the message that AI security solutions need to be architected based on specific organizational requirements, not delivered as a one-size-fits-all model. Fortinet’s approach, with heavy emphasis on secure architecture and user training, positions them as a versatile partner in navigating AI’s growing footprint in enterprise environments.


Secure AI Conversation, Not Just the Data, with Fortinet

Event: AI Field Day 7

Appearance: Fortinet Presents at AI Field Day 7

Company: Fortinet

Video Links:

Personnel: Maggie Wu

In Fortinet’s presentation at AI Field Day 7, Maggie Wu emphasized how the emergence of AI applications has radically altered the cybersecurity landscape, moving beyond traditional web security into more dynamic, conversation-driven AI interactions. The company’s approach integrates threat intelligence from FortiGuard Labs into their systems, providing real-time insights and protection across network environments. Their AI models, developed and maintained within Fortinet’s infrastructure, are further tailored to customer environments by mapping incoming AI interactions to localized context, ensuring outputs are relevant and secure for specific clients. Fortinet utilizes public LLMs for basic interactions while incorporating customer-specific data locally to avoid unnecessary exposure.

Wu elaborated on a multi-layered strategy to secure not just data, but also the AI-driven conversations themselves. Recognizing that modern AI systems are vulnerable to prompt injection, data leakage, and model poisoning, Fortinet introduced an AI orchestration layer and a set of protections designed to sanitize both AI inputs and outputs. AI infrastructure is continuously scanned for vulnerabilities, and user/environment-based access controls are enforced rigorously. They have also integrated security into their CI/CD pipelines, ensuring that AI models are secure even before deployment. This multi-faceted approach helps prevent security flaws from being exploited during any stage of the AI lifecycle.
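The input-sanitization layer Wu describes can be illustrated with a minimal sketch. This is a hypothetical pattern-matching filter, not Fortinet’s implementation; the patterns and function names are illustrative only, and a production system would combine such rules with model-based classifiers.

```python
import re

# Hypothetical patterns that commonly signal prompt-injection attempts.
# A real orchestration layer would use far richer detection than regexes.
INJECTION_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"disregard .* system prompt",
    r"you are now .* (unrestricted|jailbroken)",
]

def sanitize_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason); block prompts matching known injection patterns."""
    lowered = prompt.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            return False, f"blocked: matched injection pattern {pattern!r}"
    return True, "ok"
```

The same idea applies on the output side, where responses are scanned for sensitive data before reaching the user.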

Fortinet differentiates its offering from competitors by embedding AI capabilities directly into its existing unified platform rather than creating separate AI products. This integration enables smarter, context-aware automation across Fortinet’s entire ecosystem of security, networking, SOC, and SASE solutions. While optimal performance is achieved within a Fortinet-centric infrastructure, the company also supports multi-vendor environments by offering modular add-ons like FortiAI Assist, which can integrate with third-party SIEM and SOAR platforms. Their AI governance model includes comprehensive tracking and compliance monitoring of LLM interactions, supporting enterprise needs for regulatory adherence and ethical AI use.


Work Smarter, Not Harder with FortiAI-Assist from Fortinet

Event: AI Field Day 7

Appearance: Fortinet Presents at AI Field Day 7

Company: Fortinet

Video Links:

Personnel: Max Zeumer

At AI Field Day 7, Fortinet presented its FortiAI Assist technology, emphasizing its integration across the Fortinet Security Operations Center (SOC) platform. Max Zeumer, the speaker, highlighted the growing burden on security teams dealing with massive volumes of alerts and limited personnel. FortiAI Assist was designed to alleviate this pain by embedding AI-driven support directly into the SOC interface. The AI assistant can efficiently prioritize alerts, triage investigations, and enrich incident details using available threat intelligence and telemetry. Analysts can interact with FortiAI Assist via typed or spoken queries, enabling them to focus on higher-level strategic decisions while the AI handles data gathering and analysis. Additionally, Fortinet employs a blend of generative AI and pre-built playbooks to orchestrate actions like isolating compromised hosts and compiling incident reports while maintaining a “human-in-the-loop” model for oversight.
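The alert-prioritization step can be sketched in a few lines. This is a simplified, hypothetical scoring model, not FortiAI Assist’s actual logic; the field names and weights are assumptions chosen for illustration.

```python
# Hypothetical triage sketch: score alerts by severity, asset criticality,
# and threat-intelligence enrichment so the riskiest incidents surface first.
def triage_score(alert: dict) -> float:
    severity = {"low": 1, "medium": 2, "high": 3, "critical": 4}[alert["severity"]]
    intel_hits = min(alert.get("intel_hits", 0), 5)  # cap enrichment weight
    asset_weight = 2.0 if alert.get("critical_asset") else 1.0
    return severity * asset_weight + intel_hits

def prioritize(alerts: list[dict]) -> list[dict]:
    """Return alerts ordered from highest to lowest triage score."""
    return sorted(alerts, key=triage_score, reverse=True)
```

In practice the AI assistant layers generative reasoning and playbooks on top of this kind of ranking, with a human approving any response actions.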

Beyond SOC capabilities, FortiAI Assist also extends to Network Operations Centers (NOC), as described by Maggie Wu during her portion of the presentation. In the NOC context, FortiAI aims to simplify and expedite day-one deployment tasks, such as auto-generating configurations from topology diagrams and validating configuration scripts. Day-two operations are boosted by real-time network health assessments, troubleshooting, and suggested fixes, all guided through AI-driven dialogue with the admin. The AI assistant is capable of identifying root causes—like VPN or Wi-Fi failures—and proposing remediations that can be executed upon user confirmation. The technology allows customizable interaction levels so that organizations can maintain compliance with their change management processes.

Fortinet also addressed the flexibility of FortiAI Assist within both Fortinet-exclusive and multi-vendor environments. While native Fortinet deployments offer full capabilities with deep cross-platform interoperability, approximately 80-90% of AI-based functionality is available in broader ecosystems thanks to comprehensive APIs and collaborative integrations with over 500 partners. Organizations can build or customize their own automation connectors, reinforcing Fortinet’s commitment to open systems and vendor interoperability. Furthermore, FortiAI Assist supports scalable adoption strategies, offering options such as detailed change plans and rollback capabilities, enabling organizations to gain trust gradually through staged automation. Fortinet envisions a future where its AI agents collaborate with partner AI solutions, creating a cohesive and intelligent security and network management ecosystem.


Empower Innovation with AI Secured by Fortinet Fabric

Event: AI Field Day 7

Appearance: Fortinet Presents at AI Field Day 7

Company: Fortinet

Video Links:

Personnel: Max Zeumer

In his presentation at AI Field Day 7, Max Zeumer from Fortinet discussed how the rapid adoption of generative AI has transformed the threat landscape and the imperative for organizations to secure their AI usage. He began by highlighting the explosive growth of generative AI compared to past technologies, stressing that enterprises must adapt quickly to its integration. While some organizations are implementing AI in a structured way with governance and compliance, most are still in the early stages and lack visibility and control, which introduces significant risk. Zeumer noted that as AI progresses from reactive prompt-based tools to agentic and autonomous systems, enterprises face mounting challenges to secure data, manage usage, and maintain compliance.

Zeumer emphasized that quickly evolving AI tools also present new vulnerabilities in the cybersecurity space. Threat actors have begun using AI to create convincing phishing attacks, social engineering campaigns, and malware, often lowering the technical barrier for carrying out sophisticated cyberattacks. In addition, internal risks were discussed, such as employees unknowingly feeding sensitive company data into public AI platforms. This lack of governance can lead to data leakage and regulatory breaches. Fortinet sees a growing need for enterprises to monitor the various applications of AI within their organizations, understand who is using it, how, and what data is being processed, especially as adversaries increasingly employ AI in weaponized forms.

To address these emerging concerns, Fortinet has developed a comprehensive AI-integrated cybersecurity framework called Fortinet Security Fabric, powered by its proprietary AI platform, FortiAI. This system is structured around three main pillars—FortiAI Protect, Assist, and Secure AI—covering threat detection and prevention, operational augmentation, and safeguarding AI systems themselves. FortiGuard Labs plays a fundamental role by continuously collecting sophisticated threat intelligence and feeding it into these systems. This allows customers to receive accurate, real-time insights, manage risk from generative AI applications, and set governance rules. Fortinet’s unified platform and deep AI capabilities, backed by over 500 patents and years of innovation, position it to help enterprises adopt AI securely while maintaining performance and compliance.


HPE Agentic Smart City Solution – Focusing on Real-World Outcomes

Event: AI Field Day 7

Appearance: HPE Presents at AI Field Day 7

Company: HPE, Kamiwaza.AI

Video Links:

Personnel: Luke Norris, Robin Braun

At AI Field Day 7, Robin Braun from HPE and Luke Norris from Kamiwaza presented their collaborative smart city solution, highlighting a real-world deployment in Vail, Colorado. The focus was on using agentic AI systems to improve core municipal operations such as information access, public safety, affordable housing oversight, and regulatory compliance. By integrating Kamiwaza’s backend intelligence with user-friendly digital interfaces powered by HPE infrastructure, they demonstrated the potential of AI-driven digital concierges and fire detection tools. These virtual assistants can provide localized, real-time information to residents and visitors about everything from dining options to emergency weather updates, while the fire detection system synthesizes data from existing city cameras, 3D geospatial models, and real-time weather data to support proactive emergency response.

One of the less glamorous but highly impactful use cases involves automating the interpretation and management of property deeds and housing regulations, many of which were previously stored on microfiche from decades past. HPE and Kamiwaza developed a solution that digitizes and then applies natural language processing and ontology mapping to thousands of deed restriction documents. This not only saves significant full-time staff hours but also enables scalable and equitable housing enforcement without the need for proportionate increases in bureaucratic staffing. Additionally, the system allows both government and citizens to query property data interactively, improving public access and transparency, and supporting future zoning or service decisions with much better data insight.
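The ontology-mapping idea can be sketched minimally: map free-text deed language onto a small set of restriction categories. The categories and patterns below are hypothetical illustrations, not the HPE/Kamiwaza model, which applies far richer natural language processing.

```python
import re

# Hypothetical mini-ontology of deed restriction types with trigger phrases.
ONTOLOGY = {
    "occupancy": re.compile(r"\b(owner[- ]occupied|primary residence)\b", re.I),
    "resale_cap": re.compile(r"\b(resale price|appreciation cap|maximum sale price)\b", re.I),
    "employment": re.compile(r"\b(employed within|local employee)\b", re.I),
}

def classify_deed(text: str) -> list[str]:
    """Return the ontology categories whose patterns appear in the deed text."""
    return [label for label, pattern in ONTOLOGY.items() if pattern.search(text)]
```

Once documents are tagged this way, both staff and citizens can query restrictions by category instead of rereading scanned microfiche.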

A significant part of the presentation focused on the long-term vision and ROI of public sector AI deployments. These were not merely experimental pilots; they have already yielded tangible cost and time savings by replacing manual, repetitive processes with AI agents. Critical examples included the automation of Section 508 compliance audits, which traditionally cost millions over years but can now be performed in weeks at a fraction of the cost. Additionally, through a network of partners such as SHI for deployment and ProHawk for video enhancements, the smart city platform is designed to scale, support ongoing improvements, and adapt to increasing demands. The project demonstrates how AI transforms government services not by reducing workforce but by enhancing their capabilities, decision-making speed, and community responsiveness in areas from environmental risk to urban planning.


HPE News from NVIDIA GTC DC 2025

Event: AI Field Day 7

Appearance: HPE Presents at AI Field Day 7

Company: HPE, Kamiwaza.AI

Video Links:

Personnel: Luke Norris, Robin Braun

At AI Field Day 7, HPE presented its latest AI developments announced during NVIDIA’s GTC DC 2025, with a strong focus on its collaborative initiatives to simplify and operationalize AI workloads. Robin Braun and Luke Norris highlighted the challenges organizations face in deploying AI applications, particularly the difficulty of moving from pilot projects to full-scale production. HPE emphasized its partnership model, notably with Kamiwaza, to address this issue by integrating lifecycle management and streamlined AI operations, making it simpler for enterprises and government entities to maintain and update AI deployments.

A major highlight of the presentation was HPE’s AI stack tailored for various deployment scales, including private cloud environments and air-gapped setups suitable for sensitive sectors like public safety. Braun detailed advancements in scaling AI, such as leveraging RTX 6000 Pro GPUs and introducing pre-integrated, lifecycle-managed AI stacks that can function in isolated networks. These stacks are also being tied into HPE’s digital concierge services and AMP offerings, designed to help customers deploy and support AI solutions faster and more reliably, while also ensuring security and compliance across different use cases.

The Town of Vail served as a flagship example demonstrating HPE’s platform capabilities in real-world conditions. By utilizing existing infrastructure such as town-wide cameras and applying Kamiwaza’s AI backend, HPE enabled adaptive workflows, specifically for fire detection and urban sustainability efforts. This approach provided not only cost and operational efficiencies but also embodied Vail’s commitment to renewable energy and environmental goals. The collaboration between HPE, Kamiwaza, and integration partner SHI showcases how AI can drive meaningful public benefits, such as early fire warning systems and safer deployment environments, all while scaling to future smart city applications.


Unleash AI with HPE and Kamiwaza

Event: AI Field Day 7

Appearance: HPE Presents at AI Field Day 7

Company: HPE, Kamiwaza.AI

Video Links:

Personnel: Luke Norris, Robin Braun

At AI Field Day 7, Robin Braun presented HPE’s “Unleash AI” initiative, emphasizing their collaborative and outcomes-based approach to bringing AI to practical use. Braun was joined by Kamiwaza CEO Luke Norris in all three sessions. HPE launched the Unleash AI program in early 2024 with the goal of curating a robust partner ecosystem, offering customers end-to-end solutions that are pre-validated on HPE infrastructure and easy to deploy via the channel. They highlighted the importance of converting AI hype into real solutions by working closely with ISVs, creating relevant demos, marketing collateral, and training resources to make AI more accessible and actionable for enterprises. The program’s global scope and diverse use cases, from Vision AI to agentic AI, demonstrate HPE’s commitment to addressing the real-world needs of customers across various industries.

A key focus of the presentation was the Agentic Smart City AI use case in partnership with Kamiwaza and the town of Vail, Colorado. This initiative is a practical example of how municipalities can solve operational challenges using AI. By working with Vail, HPE and Kamiwaza developed several use cases, including improving ADA Section 508 web compliance through AI agents that identify and remediate accessibility issues, saving time and avoiding costly manual web redevelopment. This project broke down data silos and enabled interdepartmental collaboration without requiring cloud connectivity, as everything runs securely on HPE infrastructure. The result was not only a technically sound solution but also a model for how public agencies can adopt AI incrementally without excessive risk.

Kamiwaza’s agentic AI platform, demonstrated during the session, operates as a full-stack orchestration engine capable of connecting to and processing data across distributed environments using various hardware and AI models. Whether running on-premises or at the edge, it brings compute to the data while abstracting the underlying hardware, which enhances performance, scalability, and flexibility. The system incorporates advanced features like ReBAC, a relationship-based access control framework that extends traditional role- and attribute-based models, and ephemeral sessions to enforce security and privacy rigorously. It enables enterprises, including government entities, to be “unbound” by token-based AI billing models and instead focus on fixed-cost, outcome-based deployments. These capabilities have already shown transformational potential in environments like Vail and attracted significant interest from large global enterprises.
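The core of a relationship-based access control (ReBAC) check can be sketched as a walk over relationship tuples rather than a static role lookup. This is a generic illustration of the ReBAC concept, not Kamiwaza’s implementation; all names below are hypothetical.

```python
from collections import defaultdict

class ReBAC:
    """Toy ReBAC store: access flows from (subject, relation, object) tuples."""

    def __init__(self):
        self.rels = defaultdict(set)  # (relation, object) -> {subjects}

    def add(self, subject, relation, obj):
        self.rels[(relation, obj)].add(subject)

    def check(self, subject, relation, obj, depth=3):
        """True if subject reaches obj via relation, directly or through group membership."""
        if depth == 0:
            return False
        holders = self.rels[(relation, obj)]
        if subject in holders:
            return True
        # Indirect: the subject may be a member of a group that holds the relation.
        return any(self.check(subject, "member", group, depth - 1) for group in holders)

store = ReBAC()
store.add("ops-team", "viewer", "camera-feed")  # the team can view the feed
store.add("alice", "member", "ops-team")        # alice inherits access via membership
```

Combined with ephemeral sessions, this style of check means every access decision is derived from current relationships rather than long-lived role grants.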


Delivering Valuable AI Insights Requires Protecting AI Data Sources

Event: AI Field Day 7

Appearance: HYCU Presents at AI Field Day 7

Company: HYCU

Video Links:

Personnel: Subbiah Sundaram

In the AI Field Day 7 presentation, Subbiah Sundaram, Senior Vice President of Products at HYCU, highlighted the importance of data protection in the context of AI deployment and insights. Sundaram emphasized that protecting data is not limited to the raw data itself, but also includes configurations, metadata, and associated systems that power AI infrastructure. He outlined HYCU’s multi-faceted approach, starting with free data discovery across a broad range of sources, including SaaS, PaaS, DBaaS, and IaaS environments. Their platform helps enterprises continuously map out and visualize their data estates, identifying unprotected resources and automating categorization — a critical need in today’s highly distributed and complex IT landscape.

Sundaram delved deeper into the challenges of protecting data sources that fuel AI models, particularly in environments that use retrieval-augmented generation (RAG) methods to augment language models with proprietary data. The protection of vector databases, such as Pinecone and Redis, was noted as a key differentiator for HYCU, positioning it as the first enterprise backup vendor to offer such capabilities. He discussed how data spread across public cloud, SaaS platforms, and on-premises infrastructures can be managed and protected from a single control plane, offering portability, granular recovery, and ransomware resilience. Importantly, HYCU’s architecture is modular and API-driven, allowing customers and partners to rapidly integrate new SaaS sources ahead of the market, while also maintaining compliance and service-level agreements.
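Why vector-database protection matters can be shown with a minimal snapshot sketch: a usable backup must capture embeddings and their metadata together, with integrity verification on restore. This is a generic illustration of the problem, not HYCU’s product architecture; the format is an assumption for clarity.

```python
import hashlib
import json

def snapshot_index(index: dict) -> bytes:
    """Serialize {doc_id: {"vector": [...], "metadata": {...}}} deterministically."""
    return json.dumps(index, sort_keys=True).encode()

def checksum(blob: bytes) -> str:
    """Integrity fingerprint stored alongside the snapshot."""
    return hashlib.sha256(blob).hexdigest()

def restore_index(blob: bytes, expected: str) -> dict:
    """Refuse to restore a snapshot whose checksum no longer matches."""
    if checksum(blob) != expected:
        raise ValueError("snapshot corrupted")
    return json.loads(blob)
```

If only the raw vectors were backed up without metadata, a restored RAG pipeline would return embeddings it could no longer attribute to source documents.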

Throughout the presentation, Sundaram underscored a growing enterprise awareness of the need to protect operational and AI-related datasets as they move from experimentation into production environments. He revealed key industry data, showing that most organizations have experienced at least one SaaS-related data breach in the past year, with significant financial and operational impacts. HYCU’s approach ensures that customers retain ownership of their backup data, avoiding third-party control or markup of cloud storage services. Their global, scalable architecture supports all major cloud providers and emphasizes intelligent data locality to minimize costs. Overall, the presentation framed HYCU as a forward-thinking, customer-centric player in AI and data protection, uniquely positioned to help enterprises maintain data sovereignty, security, and continuity in accelerating AI adoption.


Protecting the Intelligence and Infrastructure Behind AI

Event: AI Field Day 7

Appearance: HYCU Presents at AI Field Day 7

Company: HYCU

Video Links:

Personnel: Sathya Sankaran

In his presentation at AI Field Day 7, Sathya Sankaran, Head of Cloud Products at HYCU, emphasizes the importance of protecting the data and infrastructure that underpin AI systems. He highlights that while much of the AI conversation tends to focus on GPUs and models, the foundational data that fuels AI often lacks comprehensive protection. During AI implementation, vast and varied datasets are generated, modified, and analyzed—through data lakes, object storage, and lakehouses—posing significant challenges in maintaining consistency, accuracy, and recoverability. Sankaran underscores that much of this data resides in the cloud, making cloud the “home of AI,” but also introduces new threats due to fragmented services, inefficiencies, and blind spots in current protection measures.

HYCU aims to solve these challenges by offering broad and deep coverage across diverse cloud workloads, ensuring consistent and meaningful backup and recovery. Unlike traditional backup solutions that may not cater to AI-specific workflows or protect anything beyond raw data, HYCU’s platform captures the entire ecosystem, including metadata, views, access policies, and AI-specific formats such as enriched JSON and vector databases. This level of comprehensive protection enables traceability and rollback capabilities for AI pipelines, which are critical when dealing with issues like schema drift, corrupted data, or poisoned datasets. HYCU’s approach involves aligning backups with stages like model training checkpoints, and doing so in a way that maintains consistency across fragmented and asynchronous data processes.

Adding to this, HYCU’s partnership with Dell and use of deduplication technologies such as DD Boost make backing up even large-scale AI data cost-effective and cloud-resilient. Their solution minimizes storage use and egress costs by identifying and transferring only changed data segments, often achieving up to 40:1 savings. This also supports cross-cloud backups, offering organizations flexibility and protection from vendor lock-in or catastrophic cloud failures. Ultimately, HYCU positions itself as an essential component in modern AI architecture by centralizing protection, enabling long-term recoverability, and reducing operational risk, all while keeping pace with the rapidly evolving landscape of AI workloads.
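The changed-segment idea behind that kind of savings can be sketched simply: hash the data in segments and transfer only the segments whose hashes differ. This is a generic fixed-size illustration, not DD Boost itself, which uses more sophisticated variable-size deduplication.

```python
import hashlib

CHUNK = 4096  # fixed-size segments; production dedup typically uses variable-size chunking

def segment_hashes(data: bytes) -> list[str]:
    """Fingerprint each fixed-size segment of the data."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def changed_segments(old: bytes, new: bytes) -> list[int]:
    """Indices of segments that differ and would need to be transferred."""
    old_h, new_h = segment_hashes(old), segment_hashes(new)
    return [i for i, h in enumerate(new_h) if i >= len(old_h) or old_h[i] != h]
```

Because only hashes are compared before transfer, unchanged data never leaves the source, which is where the storage and egress savings come from.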