Graphiant Demos with Vinay Prabhu

Event: Networking Field Day 39

Appearance: Graphiant Presents at Networking Field Day 39

Company: Graphiant

Video Links:

Personnel: Vinay Prabhu

Chief Product Officer Vinay Prabhu demonstrated the Graphiant infrastructure, first focusing on the network for AI. He framed AI as a massive publisher-subscriber problem and demoed a B2B data exchange where services, like GPU farms, can be published to a personal marketplace. This allows partners (both on and off-network) to be securely connected in minutes, automating complex routing, NAT, and security. This capability is then monitored by the Data Assurance Dashboard, which uses a real-time telemetry pipeline (correlating NetFlow, DNS, and DPI) to provide deep visibility without decrypting payloads. This dashboard identifies malicious threats, provides full auditability, and offers an “Uber-like” spatial and temporal view, allowing operators to prove an application’s exact path and confirm compliance with geofencing policies.

This visibility enables absolute control, where users can define policies for performance, path, or risk. Prabhu confirmed customers can enforce policies to drop traffic rather than failover to a non-compliant path, ensuring governance is never compromised. The presentation concluded with the AI for the network component: GINA, the Graphiant Intelligent Network Assistant. GINA acts as a virtual team member, capable of running a “60-minute stand-up in 60 seconds” by generating guided operational and compliance reports. Prabhu stressed that GINA does not train on customer data; it uses Generative AI to interpret queries and accesses information strictly through the user’s existing role-based access control (RBAC) APIs, ensuring data remains secure.
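The RBAC-scoped access pattern described for GINA can be sketched as follows. This is an illustrative toy, not Graphiant's actual API: the roles, report names, and `Assistant` class are all invented. The point is only that the assistant answers queries exclusively through the caller's existing role permissions and refuses anything outside that scope.

```python
from dataclasses import dataclass, field

# Hypothetical roles and the report types each role may query.
ROLE_PERMISSIONS = {
    "netops": {"operational"},
    "compliance": {"operational", "compliance"},
}

@dataclass
class Assistant:
    """Toy assistant that resolves queries only through RBAC-scoped APIs."""
    reports: dict = field(default_factory=lambda: {
        "operational": "Link utilization nominal on all edges.",
        "compliance": "All EU flows stayed within the EU geofence.",
    })

    def ask(self, role: str, report_type: str) -> str:
        # The assistant never bypasses the caller's role: a query for a
        # report outside the role's scope is refused, not answered.
        allowed = ROLE_PERMISSIONS.get(role, set())
        if report_type not in allowed:
            return "ACCESS DENIED: outside your RBAC scope"
        return self.reports[report_type]
```

Because authorization is checked before any data is fetched, the generative layer that interprets the query never sees information the user could not already retrieve.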


Graphiant: The AI Strategy

Event: Networking Field Day 39

Appearance: Graphiant Presents at Networking Field Day 39

Company: Graphiant

Video Links:

Personnel: Vinay Prabhu

Vinay Prabhu, Chief Product Officer at Graphiant, outlined a two-part strategy encompassing network for AI and AI for network. The network for AI pillar addresses the challenge of AI being a massive distributed publisher and subscriber problem, where data is generated in one location and inference happens in another, often across different business boundaries. To manage this, Graphiant provides a platform for secure data exchange, analogous to financial or healthcare workloads. Prabhu emphasized that simplifying exchange is insufficient without trust, using a ride-sharing app analogy: just as a passenger needs to see the driver and the path, enterprises need real-time observability, auditability, and centralized control to program governance policies directly onto the global fabric.

The second pillar, AI for the network, is embodied by GINA (Graphiant Intelligent Network Assistant). GINA is designed to act as a virtual member of the operations team, automating complex, time-consuming tasks. Prabhu gave the example of a CSO requesting a monthly compliance report, a task that might take an hour to manually collate data from various dashboards and databases. GINA can generate this report, along with threat intelligence and infrastructure insights, almost instantly. Prabhu summarized GINA’s value as running a 60-minute stand-up in 60 seconds, buying back valuable time for practitioners to focus on innovation rather than manual data gathering.


Graphiant Use Cases with Arsalan Khan

Event: Networking Field Day 39

Appearance: Graphiant Presents at Networking Field Day 39

Company: Graphiant

Video Links:

Personnel: Arsalan Mustafa Khan

Graphiant’s CSO, Arsalan Khan, detailed customer use cases, beginning with unified connectivity. He explained that the Graphiant fabric treats all endpoints (public clouds, data centers, and emerging AI neoclouds) as part of a single any-to-any fabric. This model eliminates the need for traffic backhauling, providing lower latency and guaranteed paths for high-bandwidth AI workloads, all while ensuring data privacy with end-to-end encryption that is never decrypted in transit.

Khan then highlighted business-to-business data exchange and data assurance as key enablers for AI. The platform simplifies partner collaboration, which is critical for many AI ecosystems, by handling network complexities like overlapping IP addressing and NAT. This capability extends to partners not on the Graphiant platform and includes the ability to dynamically revoke access if a partner is breached. The core data assurance use case provides a centralized tool for CISO and governance teams. Using role-based access control, they can enforce network-level policies, such as ensuring specific data never leaves a geographical boundary, rather than relying on individual application developers to implement compliance.

Finally, Khan addressed how this infrastructure specifically serves AI workloads. He clarified the strategy is “networking for AI,” meaning the platform is designed to offload the complex burden of security and data governance from AI applications to the network itself. This accelerates AI deployment by simplifying compliance. The system supports this with threat intelligence that, without inspecting encrypted payloads, uses public feeds and behavioral analysis. By classifying normal application flows, the network can detect and flag erratic behavior, providing an essential layer of security for moving and processing the large, sensitive data “haystacks” required by AI.


About Graphiant NaaS with Arsalan Khan

Event: Networking Field Day 39

Appearance: Graphiant Presents at Networking Field Day 39

Company: Graphiant

Video Links:

Personnel: Arsalan Mustafa Khan

As enterprises accelerate AI adoption, data governance and network security have become inseparable. In its Networking Field Day presentation, “Why Data Governance Demands a Unified and Secure Approach to AI Networking,” Graphiant explores how a secure, compliant, and unified networking infrastructure is essential to enabling responsible AI at scale.

Arsalan Khan framed the core problem: enterprises are investing heavily in AI, but their data is siloed across on-prem data centers, multiple clouds, and emerging neocloud providers. This creates massive infrastructure headaches, high costs, and significant security risks. The challenge is compounded by complex data governance regulations, especially for sensitive PII in finance and healthcare. Khan noted that while traditional networking struggles to “catch up” to new technologies, AI demands moving the “whole haystack,” not just finding needles, making network-level control and compliance essential from the start.

Graphiant’s solution is a Network-as-a-Service (NaaS) built on a stateless core, which functions as an overlay/underlay network operated by Graphiant using leased fiber. This provides a single, ubiquitous fabric for any-to-any connectivity with SLA guarantees. The key, Khan emphasized, is simplifying the “plumbing” so businesses can focus on their AI goals. The platform provides centralized visibility and control over metadata, allowing enterprises to see traffic paths and applications (without decrypting payloads) and enforce granular policies, such as guaranteeing that specific data never leaves a geographical boundary. This approach aims to provide the auditability, security, and cost-effectiveness required to manage modern AI data flows.


Futurum Signal – Agentic AI Platforms for Enterprise

Event: AI Field Day 7

Appearance: Futurum Signal Presentation at AI Field Day 7

Company: The Futurum Group

Video Links:

Personnel: Stephen Foskett

In his presentation at AI Field Day 7, Stephen Foskett, President of Tech Field Day at The Futurum Group, introduced the Futurum Signal, a groundbreaking vendor evaluation survey designed to challenge traditional analyst methodologies. The Signal leverages Agentic AI to provide a fresh perspective on evaluating major enterprise AI platforms. Unlike traditional, manual processes that involve extended data collection from vendors and often lead to out-of-date reports, this new method utilizes a combination of proprietary data, industry analysis, AI-driven insights, and human intelligence to generate timely, comprehensive assessments. The process is streamlined to offer enterprise decision-makers updated insights, highlighting the agility of AI-enhanced analytics in evolving technical landscapes.

Foskett shared the latest Signal Report focusing on Agentic AI platforms for enterprises, evaluating major players and identifying strategic partners best suited for enterprise buyers aiming to revolutionize business processes with AI. Through a sophisticated AI-driven system, analysts within the Futurum Research Group assess a pool of significant companies to determine their fit as partners in the AI space. This evaluation considers data integrity, collaboration among multiple agents, governance, and enterprise-oriented controls, all while illuminating promising trends for advanced AI deployment. The report places Microsoft and Salesforce as top contenders in the elite zone, recognized for their comprehensive suite of tools suitable for the largest enterprise clients. Google, IBM, SAP, and ServiceNow are also notable, while AWS and Oracle occupy the established zone, reflecting the dynamic and competitive landscape of agent-based enterprise AI solutions.

The integration of AI into the analytical process allows for real-time data processing and the generation of reports that incorporate recent and relevant updates, such as financial results or organizational changes within evaluated companies. This capability ensures that the information remains fresh and actionable for decision-makers. Futurum’s commitment to leveraging AI as a foundational element in their signal reports underscores a strategic shift toward more responsive, data-enriched analyses. Foskett emphasized the importance of timely and frequent updates, projecting that future reports, including those for Tech Field Day, will be heavily influenced by insights gathered from AI-driven data, aiming for transformative impacts in technology evaluation and enterprise strategy.


Battle of the Bots – Which AI Assistant Delivers with Calvin Hendryx-Parker

Event: AI Field Day 7

Appearance: Calvin Hendryx-Parker Presents at AI Field Day 7

Company: Ignite, Six Feet Up

Video Links:

Personnel: Calvin Hendryx-Parker

Calvin Hendryx-Parker, Co-Founder and CTO of Six Feet Up, delivered an insightful presentation at AI Field Day 7, evaluating the efficacy of various AI coding assistants in real-world developer workflows. His talk built upon an earlier session by exploring updates in agentic AI tools, which have become indispensable in modern coding practices. These tools, including Aider, Goose, Claude Code, Cursor, Junie, and OpenAI Codex, interact with a developer’s environment via APIs, leveraging protocols such as the Model Context Protocol (MCP) to enable autonomous or semi-autonomous coding assistance. Each AI tool has unique strengths, such as differential context management capabilities, sub-agent functionality, and tool-specific interfaces, which can deeply affect a developer’s productivity and workflow efficiency.

Hendryx-Parker’s discussion emphasized the transformative impact of AI assistants on developers’ operational efficiencies, highlighting specific products and protocols. Aider, for instance, is noted for its integration with Git and can run local models such as Llama to ensure data privacy, while also providing a semiautonomous coding experience through its architect and code modes. Goose, by Block, is lauded for its broad model support, including access to many providers via OpenRouter. It stands out with its recipes for repeated task automation and container isolation to mitigate risk during operations. Claude Code, developed by Anthropic, supports proprietary tools and is inherently more empathetic, which can be advantageous during discussions or negotiations, despite its non-open-source nature.

The presentation culminated in an analysis of where these AI tools are headed and which are likely to dominate. Goose and Claude Code were seen as potential leaders due to their robust feature sets and wide-ranging usefulness for enterprise and individual users alike. Goose’s integration with GUI tools indicates a focus on a wider market, possibly covering both professional developers and office workers with coding needs. Hendryx-Parker also touched upon innovations such as the Agent Control Protocol (ACP) for enhanced tool interoperability and pointed to the necessity for developers, especially juniors, to familiarize themselves with these tools to maintain a competitive edge in the rapidly evolving technological landscape. The talk was a comprehensive overview of the AI coding assistant landscape, providing detailed insight into each tool’s unique capabilities and potential for streamlining developer productivity.


Cloud Field Day 24 Delegate Roundtable Discussion

Event: Cloud Field Day 24

Appearance: Cloud Field Day 24 Delegate Roundtable Discussion

Company: Tech Field Day

Video Links:

Personnel: Alastair Cooke

This final session of Cloud Field Day 24 features a roundtable discussion with the delegates to explore their impressions of the event and delve into topics not thoroughly covered during the presentations. The delegates, who work hands-on with these products in complex environments, aim to discuss pertinent issues and the right solutions for their customers, particularly regarding the evolution of hybrid or private cloud and how it differs from public cloud.

The discussion centered on the shift towards on-premises cloud solutions, highlighted by companies like Oxide and Morpheus, that mimic cloud functionality without replicating the public cloud model. A key theme was the concept of multi-cloud, which enables workload placement and movement based on specific needs, alongside essential observability and management capabilities. Data management, particularly data sovereignty, was identified as a major driver for on-premise solutions, due to the physics involved in data transfer and the vendors’ belief that current cloud services often fail to meet enterprise data sovereignty requirements.

Further topics included the financial implications of diverse platforms, the rising costs of cloud-based AI agents, and the need for simplification in IT operations amid increasing complexity and staffing challenges. The discussion also addressed the security vulnerabilities in AI and the importance of incorporating security into AI infrastructures from the outset, rather than as an afterthought. Finally, the panel discussed the distinct approaches to cloud solutions, contrasting single-source providers like Oxide with vendors that enable best-of-breed integrations, acknowledging the challenges of integrating new solutions into complex, legacy environments.


Scaling Autonomous IT: The Real Enterprise Impact with Digitate ignio

Event: AI Field Day 7

Appearance: Digitate Presents at AI Field Day 7

Company: Digitate

Video Links:

Personnel: Rajiv Nayan

Discover how Digitate is transforming enterprise operations through an AI-first go-to-market strategy built for global scale and complexity. In this session, Digitate’s GM and VP, Rajiv Nayan, will dive into real-world customer success stories that showcase how Digitate is scaling innovation across industries, building multi-billion dollar business opportunities, and reshaping how businesses run in the AI era. Learn how your enterprise can scale smarter, faster, and more proactively with Digitate: https://digitate.com/ai-agents/

At AI Field Day 7, Rajiv Nayan, Vice President and General Manager of Digitate, presented “Scaling Autonomous IT: The Real Enterprise Impact with Digitate ignio.” Nayan introduced ignio as an agent-based platform designed to bring autonomy to enterprise IT operations, built from the ground up over the past decade and protected by over 110 patents. Targeting a $31 billion global market across retail, pharma, banking, and manufacturing, ignio leverages machine intelligence and agent-based automation to address repetitive, knowledge-driven IT tasks, aiming to shift enterprises from assisted or augmented operations to true autonomy. According to Nayan, the platform is used by over 250 customers worldwide and has earned a high customer satisfaction rating of 4.4 out of 5 on G2.

Nayan illustrated ignio’s capabilities through detailed customer stories. For luxury retailer Tapestry, ignio integrated with IBM Sterling order management, financial, and logistics systems to monitor and optimize the journey of orders across 37 global webfronts. The platform proactively handled issues ranging from data inconsistencies to job cycle failures, ultimately saving the company millions and managing over 100,000 orders. In another case, a large pharmaceutical company with 70 complex system interfaces used ignio to streamline their prosthetic limb supply chain, reducing critical demand planning processes from weeks to hours. A global consumer goods company also employed ignio to automate order fulfillment across SAP systems and manufacturing plants, avoiding disruptions in a high-volume direct-to-store delivery model and preventing millions in potential losses.

At scale, ignio demonstrated significant operational efficiencies for enterprises such as a major pharmaceutical distributor and a retail pharmacy chain. In the distribution environment, ignio handled over 20,000 configuration items and 220 business-critical applications, achieving 80 percent noise reduction in event management and automating 110,000 hours of annual manual work. For the retail pharmacy chain with 9,000 stores, ignio automated ticket management tied to revenue assurance for promotions, reducing mean time to resolution from nearly three days to under ten minutes and recapturing $17 million in revenue while saving $5 million in support costs. Across its deployments, ignio processed 1.2 billion events last year, achieved 87 percent noise reduction, and executed 300 million automated actions—demonstrating that agentic, autonomous IT platforms can significantly reduce business disruptions and free human talent for higher-value work.


Incident Resolution with Digitate’s ignio AI Agent

Event: AI Field Day 7

Appearance: Digitate Presents at AI Field Day 7

Company: Digitate

Video Links:

Personnel: Rahul Kelkar

At AI Field Day 7, Rahul Kelkar, Chief Product Officer at Digitate, presented the capabilities of ignio, an AI-based incident resolution agent designed to automate, augment, and improve IT operations. Ignio uses a logical reasoning model for incident resolution, leveraging enterprise blueprints to understand situations in a closed loop and applying automation where possible. When a fully automated response is not viable, ignio augments human efforts through assisted resolution, supplying prioritized incident lists based on business impact, providing situational context, and capturing both historical and episodic memory about recurring issues. The product integrates with various data sources to build a formal enterprise IT model, supporting information ingestion via templates or extraction from existing documentation, and includes adapters for common ITSM systems like ServiceNow for seamless change management.

Technically, ignio’s core incident resolution operates via automated root cause analysis, performing real-time health checks across application hierarchies—such as applications running on Oracle databases hosted on Red Hat servers—and comparing the current state to baselines to isolate anomalies. It can autonomously apply prescriptive fixes, such as restarting services, and then validate remediation by rechecking stack health. In more complex scenarios, like SAP HANA environments or intricate batch job dependencies in retail order management, ignio handles non-vertical, multi-layered issues involving middleware, business processes, and interdependent batch jobs. The solution features out-of-the-box knowledge for common technologies and allows continuous augmentation with customer-specific logic. Custom operational models and atomic actions can be enhanced using Ignio Studio, and the system learns from user feedback through reinforcement learning, improving accuracy in prioritizing incidents, suggesting fixes, and predicting service level agreement (SLA) violations before they occur.
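The check-fix-revalidate loop described above can be sketched in a few lines. This is a loose illustration of the general closed-loop pattern, not Digitate's code: the layer names, baseline, and `resolve_incident` function are invented.

```python
# Illustrative closed-loop root cause analysis over a vertical stack:
# health-check each layer against a baseline, apply a prescriptive fix,
# then re-validate the whole stack.
BASELINE = {"app": "up", "database": "up", "server": "up"}

def resolve_incident(observed: dict, fixes: dict) -> list[str]:
    log = []
    for layer in ("server", "database", "app"):   # isolate bottom-up
        if observed[layer] != BASELINE[layer]:
            log.append(f"anomaly at {layer}")
            observed[layer] = fixes.get(layer, observed[layer])
            status = "fixed" if observed[layer] == BASELINE[layer] else "escalate"
            log.append(f"remediation {status} at {layer}")
    if observed == BASELINE:
        log.append("stack healthy: incident closed")
    return log
```

A database outage, for example, is isolated at the database layer, remediated, and the incident is closed only once the full stack matches the baseline again; anything unfixable is escalated to a human.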

Ignio extends beyond deterministic resolution to assist engineers and SREs via conversational augmentation. For issues not resolved autonomously, ignio provides contextual insights—including previous incidents, typical resolutions, and guidance for next-steps—while collaborating via a “resolution assistant” so humans can contribute domain knowledge and validate procedure. The demo showed proactive recommendation capabilities, identifying dominant recurring SLA violations and offering actionable, prioritized problem management insights. Ignio integrates with multiple agent-based platforms for orchestrated, multi-channel incident management flows, including email, Slack, and ticketing systems, using orchestration protocols and adapters. The platform employs advanced anomaly mining and sequence analysis, allowing users to identify root causes not only within vertical stacks but also through complex temporal and conditional relationships across business functions, ultimately supporting predictive, reactive, and continuous improvement use cases in large-scale enterprise IT environments.


Powering Autonomous IT with ignio AI Agents from Digitate

Event: AI Field Day 7

Appearance: Digitate Presents at AI Field Day 7

Company: Digitate

Video Links:

Personnel: Rahul Kelkar

Digitate is a global provider of an agentic AI platform for autonomous IT operations. Powered by ignio™, Digitate combines unified observability, AI-powered insights, and closed-loop automation to deliver resilient, agile, and self-healing IT and business operations. In this presentation, Digitate’s Chief Product Officer, Rahul Kelkar, will introduce Digitate’s vision for an autonomous enterprise, where organizations learn, adapt, and make decisions with minimal human intervention. Through a series of demos, Rahul will also showcase how Digitate’s purpose-built AI agents work seamlessly across observability, cloud operations, and IT for business to boost cloud ROI, predict delays, and ensure long-term stability. To learn more about Digitate and its ignio platform, visit: https://digitate.com/ai-agents/

At AI Field Day 7, Rahul Kelkar, Chief Product Officer at Digitate, presented on powering autonomous IT with ignio, Digitate’s agentic AI platform designed for IT operations. Kelkar began by outlining the industry’s evolution from manual IT operations and cognitive automation toward modern AIOps and agentic AI, framing the journey towards fully autonomous IT as a progression through stages of manual, task-automated, and augmented operations. He described how ignio leverages a unified three-pillar approach: unified (or business) observability for comprehensive monitoring of both technical and business processes, AI-driven insights using traditional and agentic AI including machine learning and generative models, and closed-loop automation that not only provides recommendations but executes prescriptive actions with high confidence. This architecture aims to proactively eliminate business disruptions due to IT, identify issues before they impact business productivity, and reduce incident resolution times.

Ignio operates on what Digitate calls the “Enterprise Blueprint,” essentially a digital twin or knowledge graph that captures both structural and behavioral aspects of enterprise IT. The platform integrates with common monitoring and IT management tools, ingesting metrics, events, logs, and traces to provide a layered view of health across infrastructure, application stacks, and business value streams. Observability data is enriched with AI-based noise filtering, anomaly detection, correlation, and dynamic thresholding, automatically triaging and suppressing redundant alerts. Kelkar highlighted ignio’s “composite AI” approach, combining logical reasoning (rule-based and machine learning models), analogical reasoning (using generative AI and large language models for contextualization where knowledge is incomplete), and assisted reasoning (bringing domain experts into the loop to validate and tune recommendations). The workflow encompasses agent-based management of event and incident handling, automated root cause analysis, and remediation actions, all while learning from human validation to continuously improve performance.
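Dynamic thresholding, one of the noise-reduction techniques mentioned above, can be illustrated with a minimal sketch. This shows the general statistical idea only (an assumption about the technique, not Digitate's implementation): a metric sample is flagged only if it deviates from its recent window by more than k standard deviations, so the threshold adapts as normal behavior drifts.

```python
import statistics

def is_anomalous(window: list[float], value: float, k: float = 3.0) -> bool:
    """Flag a value that deviates from the recent window's mean by > k sigma."""
    mean = statistics.fmean(window)
    stdev = statistics.pstdev(window) or 1e-9  # avoid zero-division on flat data
    return abs(value - mean) > k * stdev
```

Because the baseline is recomputed from the sliding window, a metric that is noisy by nature gets a wide threshold while a normally flat metric gets a tight one, which is what suppresses redundant alerts without hiding genuine anomalies.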

The platform is designed to address complex, applied use cases in large-scale, modern environments, such as event reduction, proactive incident management, observability, patch management, and cost optimization across multi-cloud and containerized workloads. ignio supports integrations through out-of-the-box adapters for 30-40 major tools (with customization for specialized environments) and specific modules for SAP applications, batch scheduling, digital workspaces, and procurement processes. Its agentic capability is extended with Ignio Studio, empowering SREs and IT operations teams to continuously extend and customize workflows. As demonstrated, ignio’s AI agents interact via conversational interfaces, notifications, and dashboards, enabling a shift to smaller, cross-functional SRE teams—supported by autonomous agents handling the bulk of monitoring, triage, and remediation, with humans focusing on governance, validation, and improvement. This supports a vision of truly autonomous, resilient IT operations that adapt rapidly to changing workloads and technologies, minimizing disruptions and keeping business-critical systems running smoothly.


The Technical Foundations of Articul8’s Agentic AI Platform

Event: AI Field Day 7

Appearance: Articul8 Presents at AI Field Day 7

Company: Articul8

Video Links:

Personnel: Arun Subramaniyan, Renato Nascimento

Dr. Renato Nascimento, Head of Technology at Articul8, and Dr. Arun Subramaniyan, Founder and CEO, presented the technical architecture and capabilities of the Articul8 platform at AI Field Day 7. The platform is built to enable orchestration and management of hundreds or thousands of domain- and task-specific AI models and agents at enterprise scale, supporting both cloud and on-premises deployments on all major cloud providers. The core architecture leverages Kubernetes for elasticity, high availability, and robust isolation of components. Key elements include a horizontally scalable API service layer and a proprietary “model mesh orchestrator,” which coordinates dynamic, low-latency, real-time executions across a variety of AI models deployed for customer-specific workloads. Observability, auditability, and compliance features are integrated at the intelligence layer, allowing enterprises to track, validate, and meet regulatory requirements for SOC and other audit demands.

At the heart of the platform is the automated construction and utilization of knowledge graphs, which are generated during data ingestion without manual annotation. For example, Articul8 demonstrated the ingestion and analysis of a 200,000-page aerospace dataset, generating a knowledge graph with over 6 million entities, 160,000 topics, 800,000 images, and 130,000 tables. The system automatically identifies topics, clusters, and semantic relationships, enabling precise search, reasoning, and model flows (e.g., distinguishing charts from images, applying task-specific models for tables including OCR, summary statistics, and understanding content). This knowledge graph forms the substrate for supporting both the training and real-time inference of domain-specific and task-specific AI models. The Model Mesh intelligence layer breaks down incoming data, determines its type, and routes it through appropriate model pipelines for processing, ensuring that the architecture can support both large and small models as appropriate for the data and task complexity.
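The type-aware routing described for the model mesh can be sketched as a simple dispatcher. This is a hypothetical illustration of the pattern, not Articul8's architecture: the `classify` heuristic, pipeline names, and output strings are all invented.

```python
# Hypothetical content router in the spirit of a "model mesh": ingested
# items are typed, then dispatched to a type-specific model pipeline.
def classify(item: dict) -> str:
    # Toy classification; in a real platform this step is model-driven.
    if "rows" in item:
        return "table"
    if "pixels" in item:
        return "image"
    return "text"

PIPELINES = {
    "table": lambda item: f"table model: {len(item['rows'])} rows summarized",
    "image": lambda item: "vision model: chart vs. photo distinguished",
    "text":  lambda item: "language model: topics and entities extracted",
}

def ingest(item: dict) -> str:
    return PIPELINES[classify(item)](item)
```

The design point is that each data type gets a pipeline sized and specialized for it, so a small OCR or table model handles tables while larger language models handle free text.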

The platform also showcases advanced agentic functionalities such as the creation of digital twins—AI-powered proxies of individuals or departments—which can be quickly spun up from public or private data and progressively improved through feedback and additional data integration. In an illustrative demo, Articul8 built digital twins of AI Field Day participants and orchestrated live, multi-agent discussions on technical topics. The platform supports squad-mode interactions, wherein multiple digital twins can collaborate, offer opinions, revise answers, and converge or diverge in real-time analysis. All these actions are fully tracked and auditable, supporting enterprise security and access controls. The discussion outcomes are summarized and can be exported, making the platform suitable not only for typical enterprise Q&A and knowledge retrieval, but also for scenario planning, decision support, and collaborative agentic workflows in secure, controlled environments.


Inside Articul8’s Domain-Specific Platform and Real-World ROI

Event: AI Field Day 7

Appearance: Articul8 Presents at AI Field Day 7

Company: Articul8

Video Links:

Personnel: Arun Subramaniyan, Parvathi Letha

At AI Field Day 7, Parvathi Letha, Head of Product Management at Articul8, and Dr. Arun Subramaniyan, Founder and CEO of Articul8, presented the Articul8 platform, a secure, domain-specific generative AI solution tailored for high-value enterprise use cases. Parvathi detailed the architecture, beginning with its autonomous data perception functionality, which ingests and semantically connects structured and unstructured data—including PDFs, tables, images, and CAD drawings—into a living, schema-aware knowledge graph. This knowledge graph serves as the core institutional memory, continually evolving through user interactions, and supports auditability and traceability by tracking updates and allowing for corrective feedback, ensuring personalized and adaptive intelligence for each enterprise and user.

A cornerstone of Articul8’s approach is its agentic reasoning engine, which utilizes hundreds of domain- and task-specific agents. When a complex enterprise mission is received, the reasoning engine breaks it down into sub-missions and autonomously selects the most appropriate agent for each segment. This collaborative, context-aware architecture emulates expert teams, vastly accelerating tasks such as root cause analysis, compliance checks, and network troubleshooting, and supporting outcomes with fine-grained traceability. Articul8’s hyper-personalization goes further with adaptive user interfaces and the capability to create digital twins, allowing outputs and insights to reflect individual user decision styles and enterprise workflows. Domain-specific models span industries like semiconductor, aerospace, manufacturing, supply chain, and telco, with bespoke agents such as a highly specialized Verilog model for chip design and general-task agents for time series, table, and chart understanding.

For enterprise consumption, Articul8 supports multiple models: companies can subscribe to the full suite as a traditional enterprise software platform or deploy select agents a la carte via the AWS Marketplace, paying per API call for agents like LLM IQ, which benchmarks LLMs for specific use cases, or the Network Topology Agent, which digests network logs and diagrams to support troubleshooting. The platform’s industry-specific models are co-developed with partners such as the Electric Power Research Institute and leading aerospace firms, ensuring both robust data integration and expert validation. Articul8’s agent-of-agent architecture has been in production for more than two years and is already driving measurable ROI in semiconductor root cause analysis and CAD validation, reducing processing times from days to hours and delivering over 93 percent detection accuracy—while maintaining data control and auditability entirely within customer security perimeters.


Why Uniqueness Defines the Future of Enterprise GenAI with Articul8

Event: AI Field Day 7

Appearance: Articul8 Presents at AI Field Day 7

Company: Articul8

Video Links:

Personnel: Arun Subramaniyan

At AI Field Day 7, Dr. Arun Subramaniyan, Founder and CEO of Articul8, presented a compelling vision of the future of enterprise generative AI (GenAI) centered on hyper-personalization and domain specificity. The talk addressed the evolution of AI accessibility, highlighting how general-purpose models have become commoditized, making it easier than ever to generate content but often sacrificing depth and context. Dr. Subramaniyan emphasized the limitations of relying purely on generic large language models (LLMs) for enterprise tasks, especially in specialized domains like healthcare, manufacturing, cybersecurity, and energy. Instead, Articul8 advocates for tailored solutions built using domain-specific models that can factor in both tacit knowledge and contextual nuances that general models may overlook or misinterpret.

Articul8’s technological strategy involves developing proprietary models specifically trained or fine-tuned with datasets relevant to particular industries. These models range in size from a few billion to several hundred billion parameters. They are not designed to be general-purpose but to perform specific tasks within domains such as thermodynamics, aerodynamics, and system diagnostics. Articul8 also builds task-specific models, such as those focused on table understanding in spreadsheets or parsing complex PDFs. These models are augmented with multi-model orchestration capabilities and metadata tagging to construct automated knowledge graphs, enabling richer semantic understanding without moving raw data—only metadata is transferred. This approach allows enterprises to retain control over data security, including air-gapped deployments, while benefiting from AI-powered insights.

Articul8’s platform architecture supports both on-premises and SaaS-based models, enabling flexibility in deployment. The platform can ingest unstructured data, apply semantic reasoning, and, using a combination of models and tools, generate agents and squads of agents that autonomously execute “missions” within enterprise workflows. This layered AI model architecture culminates in creating “digital twins,” which are dynamically updated representations of business systems for simulation and high-fidelity analysis. Demonstrating measurable performance gains, Articul8 benchmarked its energy domain model against state-of-the-art general-purpose models like LLaMA 3, GPT-4, and others, showing superior results across multiple task dimensions. The company maintains control of intellectual property in models unless customer data is involved in training, in which case the IP resides with the customer. This approach ensures scalability while maintaining rigorous standards for contextual accuracy, enterprise integration, and domain relevance.


Nutanix Data Lens Demo

Event: AI Field Day 7

Appearance: Nutanix Presents at AI Field Day 7

Company: Nutanix

Video Links:

Personnel: Mike McGhee

In the Nutanix presentation at AI Field Day 7, Mike McGhee showcased the capabilities of Nutanix Data Lens, a data analytics and cybersecurity tool designed to provide visibility across storage environments, including Nutanix file servers and third-party object stores like AWS S3. The tool collects and analyzes metadata—for example, file names, sizes, and data age—as well as real-time audit trails that track operations like reads, writes, and permission changes. Data Lens is tightly integrated with Nutanix storage solutions and will soon be bundled with their offerings for on-premises deployments, ensuring seamless operational data ingestion and monitoring without reliance on external scanning requests.

A major feature of Data Lens is its ransomware protection, which includes both signature-based detection using known file extensions from open source communities and behavioral analysis that identifies anomalous events like in-place or out-of-place encryption. This allows Data Lens to detect threats from unknown malware using intelligent algorithms trained on customer activity patterns. When threats are detected, Data Lens can log full user activity, block affected users or clients, and provide options for recovery that include restoring individual affected files or entire shares using recommended snapshots taken before the time of compromise. This detailed approach helps reduce false positives and accelerates recovery in the event of an attack.
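
The two detection approaches described above, signature matching on known ransomware file extensions and behavioral analysis of anomalous activity bursts, can be sketched in simplified form. This is an illustrative approximation only: the extension list is made up, and Data Lens's actual behavioral engine is far more sophisticated than a sliding-window event counter.

```python
# Hypothetical list standing in for community-sourced ransomware extensions.
KNOWN_RANSOMWARE_EXTENSIONS = {".locky", ".wannacry", ".crypt"}

def signature_match(filename: str) -> bool:
    """Flag files whose extension appears on a known-ransomware list."""
    return any(filename.lower().endswith(ext) for ext in KNOWN_RANSOMWARE_EXTENSIONS)

def behavioral_anomaly(events, window_s=60, threshold=50):
    """Flag a burst of write/rename operations inside a short time window --
    a crude stand-in for detecting in-place encryption activity.
    `events` is a list of (timestamp_seconds, operation) tuples."""
    times = sorted(t for t, op in events if op in {"write", "rename"})
    for i in range(len(times)):
        j = i
        while j < len(times) and times[j] - times[i] <= window_s:
            j += 1
        if j - i >= threshold:
            return True
    return False
```

Combining both checks mirrors the summary's point: signatures catch known strains cheaply, while behavioral analysis catches novel malware that no extension list has seen yet.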

In addition to its security functions, Nutanix emphasized the need for better tooling and data accessibility for platform engineering and end-users, particularly when supporting AI and DevOps workflows. Speakers stressed the importance of giving end-users API-level access to infrastructure components like databases, Kubernetes platforms, and AI services so they can manage and troubleshoot their workloads autonomously, without bottlenecks from IT. Nutanix’s broader ecosystem—including Nutanix Kubernetes Platform (NKP), Nutanix Database Services (NDB), and Nutanix AI (NAI)—was presented as a robust infrastructure solution with integrated observability and automation features designed to empower users and admins alike with actionable insights and streamlined management.


Data Mobility and Security with Nutanix

Event: AI Field Day 7

Appearance: Nutanix Presents at AI Field Day 7

Company: Nutanix

Video Links:

Personnel: Vishal Sinha

At AI Field Day 7, Vishal Sinha of Nutanix presented the company’s comprehensive approach to data mobility and security in the context of AI and enterprise computing. He detailed how Nutanix facilitates seamless data movement across edge, core, and cloud environments by supporting various data operations such as synchronization, replication, and distribution. This data mobility capability allows for use cases such as consolidating data from edge locations to a central data center and then utilizing the cloud for AI model inferencing and tiered storage. The platform maintains a consistent global namespace, simplifying data management across disparate environments and enabling organizations to process, analyze, and store data efficiently regardless of location.

The focus then shifted to security, where Sinha emphasized Nutanix’s architecture as being inherently built with security in mind. The system features cyber-resilience measures that detect both known and novel ransomware threats using built-in file system logic and a behavioral-based detection engine. By continuously monitoring file activities and validating potential threats before alerting users, Nutanix minimizes false positives while maintaining an exposure window currently down to ten minutes, with plans to reduce it further. Integration with third-party security tools like CrowdStrike, Palo Alto Networks, and Splunk enhances the platform’s capabilities within broader enterprise security frameworks. Immutable snapshots and a powerful remediation engine further allow customers to take automated or manual action to protect their data and restore functionality with minimal downtime.

Further elaborating on the benefits to enterprises, especially within AI workflows, Nutanix offers Data Lens—a metadata analytics tool that provides full visibility and governance over data usage. This tool can trace file activity over time, identify anomalous user behaviors, and clarify permissions with bidirectional data access views. These capabilities improve compliance, auditability, and operational transparency. Nutanix’s philosophy centers around four foundational pillars: simplicity, security, ubiquity, and unification of storage formats. By delivering an integrated, software-defined storage platform that supports files, objects, and blocks, Nutanix enables organizations to build and maintain intelligent AI-driven applications while maintaining robust data governance and near real-time threat response.


Nutanix Data Architecture and Safety

Event: AI Field Day 7

Appearance: Nutanix Presents at AI Field Day 7

Company: Nutanix

Video Links:

Personnel: Manoj Naik

In this presentation at AI Field Day 7, Manoj Naik from Nutanix outlined the underlying architecture of Nutanix’s unified data platform, which is designed to support next-generation AI workloads with high performance, security, and scalability. He began by analyzing traditional enterprise storage architectures—shared everything and shared nothing—and explained their inherent trade-offs. To overcome these limitations, Nutanix introduced a “shared flexible” architecture, combining the global accessibility of shared everything with the linear scalability of shared nothing. This allows for disaggregated yet cohesive compute and storage capabilities within a single cluster, enabling organizations to scale their infrastructure flexibly and efficiently.

Naik emphasized that the Nutanix platform is built around a data-first architecture, capable of handling the entire AI lifecycle from edge to core to cloud. It supports diverse protocols (NFS, SMB, S3) and applies smart data placement strategies through fine-grained metadata, intelligent caching, and protocol enhancements like SMB referrals and NFS v4. As data flows through the AI lifecycle, Nutanix ensures consistent operations and flexible mobility via replication, cloud integration, and unified control through Prism Central. The ability to scale both capacity and performance seamlessly—from single-node clusters to multi-petabyte, multi-cluster environments—positions Nutanix as a robust storage solution for AI pipelines and large-scale data lake infrastructures.

To validate its performance claims, Nutanix participated in ML Commons’ MLPerf Storage benchmarks, which simulate real-world AI workloads. Improved architectural paths, such as end-to-end RDMA, fast-path data transfers, and utilization of advanced features like SR-IOV, have allowed Nutanix to double performance while using half the hardware. Alongside performance, the platform includes comprehensive data protection features like snapshots, synchronous Metro replication, asynchronous DR, object locking, and integration with cloud object storage. This ensures both data resilience and compliance across deployment environments. Nutanix’s continued investment in performance optimization and robust data governance makes it well-suited for the demands of modern AI infrastructure.


Enabling Data using Nutanix Unified Storage

Event: AI Field Day 7

Appearance: Nutanix Presents at AI Field Day 7

Company: Nutanix

Video Links:

Personnel: Manoj Naik, Vishal Sinha

In their presentation at AI Field Day 7, Nutanix, represented by Distinguished Engineer Manoj Naik and SVP Vishal Sinha, laid out their vision for supporting AI workflows using Nutanix Unified Storage. They began by walking through a typical AI pipeline—from data ingestion and cleansing to model fine-tuning, inferencing, and archiving—emphasizing the challenges presented by data fragmentation across edge, cloud, and various formats. The Nutanix platform addresses these challenges by helping platform teams build and manage AI-ready data pipelines, ensuring clean, high-performance data is readily available for training large language models (LLMs) and inferencing operations.

The Nutanix solution includes a wide array of components that form a full-stack enterprise AI platform. Key pieces include Nutanix Cloud Infrastructure (NCI), Nutanix Unified Storage, the Nutanix Database Service (NDB), Nutanix Kubernetes Engine, and management layers such as Nutanix Central and Cloud Manager. Particularly noteworthy is their database management layer, which supports vector databases like PGVector and Milvus, enabling customers to manage LLM applications efficiently. The storage capabilities are tailored for each phase of the AI lifecycle—high-performance for training, rich metadata tiering for archiving, and versioning support via object storage and snapshotting.
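
The retrieval step that vector databases like PGVector and Milvus provide for LLM applications can be illustrated with a minimal in-memory nearest-neighbor search. This sketch only shows the core idea, cosine-similarity ranking of embeddings; real vector databases add persistence, approximate-nearest-neighbor indexes, and scale that this brute-force loop lacks.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, corpus, k=2):
    """Return the IDs of the k corpus embeddings most similar to the query.
    `corpus` maps document IDs to embedding vectors."""
    scored = sorted(corpus.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]
```

In a retrieval-augmented LLM pipeline, the returned document IDs would be dereferenced to text chunks and injected into the model's prompt as context.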

The platform is designed to operate seamlessly across hybrid multicloud environments, offering data security, cyber resilience, and robust data mobility through features like global namespaces, immutable snapshots, and tiering based on metadata sensitivity. Nutanix Unified Storage supports multiple protocols (NFS, SMB, S3, iSCSI) and enables app and data colocation to optimize performance. The platform also supports cascading disaster recovery setups—including metro, asynchronous, and near-synchronous replication—to meet global compliance standards. With use cases ranging from edge AI inferencing to deep archival storage, Nutanix’s mature and versatile platform is built to handle the full lifecycle of data-intensive, AI-powered applications.


The Enterprise AI Cloud Platform Powered by Nutanix Unified Storage

Event: AI Field Day 7

Appearance: Nutanix Presents at AI Field Day 7

Company: Nutanix

Video Links:

Personnel: Alex Almeida

At AI Field Day 7, Alex Almeida from Nutanix presented the company’s vision for enabling enterprise AI initiatives through a robust, cloud-native infrastructure built on Nutanix Unified Storage. He began by highlighting a critical industry insight: while interest in AI is surging, a reported 95% of enterprise generative AI pilots are failing to reach production. This widespread issue, he explained, is not due to deficiencies in the AI applications themselves but rather due to the lack of a strong, scalable data infrastructure that can support AI at the enterprise level. As companies attempt to scale pilots into production, the complexity of integrating data from edge, core, and cloud environments—along with difficulties in orchestrating networking, compute, and storage—presents a considerable barrier to adoption.

To address this challenge, Nutanix is focused on delivering a seamless platform that simplifies the infrastructure required for AI. Their solution is the Nutanix Cloud Platform, built on a modern server-based architecture with a focus on scalability, manageability, and consistency across compute, networking, and especially storage. This shift from traditional three-tier architectures to cloud-native technologies supports containerized applications and Kubernetes workflows. Nutanix sees its evolution from hyper-converged infrastructure (HCI) into a robust cloud-native platform as a foundational change that sets the stage for the next decade of enterprise computing, particularly for AI and PaaS environments. With components like Nutanix Kubernetes Platform (NKP) and Nutanix Cloud Clusters, organizations can run applications consistently across on-prem, cloud, and edge environments.

A central piece of this AI-ready infrastructure is Nutanix’s Unified Storage offering, which provides file, block, and object storage all integrated into a single platform. Almeida emphasized their data management capabilities through Nutanix Data Lens, which adds analytics and cybersecurity features such as ransomware detection. He also pointed to Nutanix’s strategic partnership with NVIDIA, underscoring their certified support for NVIDIA’s GPU Direct Storage and active involvement in NVIDIA’s Enterprise AI Factory. These collaborations ensure that Nutanix’s solutions are aligned with the latest AI data platform innovations. Overall, Nutanix aims to empower enterprises to overcome infrastructure hurdles and realize real ROI from their AI initiatives by providing a modern, unified data foundation.


Security Using AI with Fortinet

Event: AI Field Day 7

Appearance: Fortinet Presents at AI Field Day 7

Company: Fortinet

Video Links:

Personnel: Keith Choi

At AI Field Day 7, Keith Choi from Fortinet presented an overview of Fortinet’s AI strategy and portfolio, emphasizing the integration of AI within cybersecurity solutions. He explained that the increasing adoption of AI in enterprises is driven by the need for efficiency and innovation, and Fortinet has developed a layered approach categorized into three buckets: Protect AI, Secure AI, and AI-Assisted Operations. These categories are designed to address different aspects of AI-related security, from defending against AI-driven threats, to securing AI systems themselves, and enhancing operational efficiency through AI-powered tools like SOC and NOC support.

Choi detailed various solutions underpinning Fortinet’s AI capabilities. For example, FortiGate, the company’s next-generation firewall, now includes controls for generative AI applications, allowing administrators to manage access and prevent data loss. Meanwhile, FortiNDR provides deep network detection and response capabilities without affecting throughput, acting as an internal magnifying tool to monitor and detect threats. FortiDLP complements these tools by offering endpoint data loss protection with real-time alerts and monitoring, helping organizations prevent sensitive data from leaking through AI platforms like ChatGPT. These tools illustrate Fortinet’s commitment to using AI not only to protect networks but also to secure how AI itself is used within organizations.

The presentation concluded with insights into AI-driven operational tools like FortiAI Assist, which uses generative AI for troubleshooting and management through an interactive chatbot UI. Choi highlighted that Fortinet’s architecture uses region-specific AI proxies, ensures sensitive data masking, and allows deployment flexibility between on-premises and cloud environments depending on client needs. He reinforced the message that AI security solutions need to be architected based on specific organizational requirements, not delivered as a one-size-fits-all model. Fortinet’s approach, with heavy emphasis on secure architecture and user training, positions them as a versatile partner in navigating AI’s growing footprint in enterprise environments.


Secure AI Conversation, Not Just the Data, with Fortinet

Event: AI Field Day 7

Appearance: Fortinet Presents at AI Field Day 7

Company: Fortinet

Video Links:

Personnel: Maggie Wu

In Fortinet’s presentation at AI Field Day 7, Maggie Wu emphasized how the emergence of AI applications has radically altered the cybersecurity landscape, moving beyond traditional web security into more dynamic, conversation-driven AI interactions. The company’s approach integrates threat intelligence from FortiGuard Labs into their systems, providing real-time insights and protection across network environments. Their AI models, developed and maintained within Fortinet’s infrastructure, are further tailored to customer environments by mapping incoming AI interactions to localized context, ensuring outputs are relevant and secure for specific clients. Fortinet utilizes public LLMs for basic interactions while incorporating customer-specific data locally to avoid unnecessary exposure.

Wu elaborated on a multi-layered strategy to secure not just data, but also the AI-driven conversations themselves. Recognizing that modern AI systems are vulnerable to prompt injection, data leakage, and model poisoning, Fortinet introduced an AI orchestration layer and a set of protections designed to sanitize both AI inputs and outputs. AI infrastructure is continuously scanned for vulnerabilities, and user/environment-based access controls are enforced rigorously. They have also integrated security into their CI/CD pipelines, ensuring that AI models are secure even before deployment. This multi-faceted approach helps prevent security flaws from being exploited during any stage of the AI lifecycle.
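
The input-sanitization idea behind the orchestration layer described above can be sketched with a simple phrase-matching filter. This is a deliberately naive illustration: the pattern list is invented, and production prompt-injection defenses rely on trained classifiers and contextual analysis rather than keyword matching.

```python
# Hypothetical phrases commonly associated with prompt-injection attempts.
INJECTION_PATTERNS = [
    "ignore previous instructions",
    "disregard your system prompt",
    "reveal your system prompt",
]

def sanitize_prompt(user_input: str):
    """Return (allowed, reason): block inputs matching a known injection phrase,
    otherwise pass them through to the model."""
    lowered = user_input.lower()
    for pattern in INJECTION_PATTERNS:
        if pattern in lowered:
            return False, f"blocked: matched pattern '{pattern}'"
    return True, "ok"
```

A symmetric check would run on model outputs before they reach the user, matching the presentation's point that both AI inputs and outputs need sanitizing.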

Fortinet differentiates its offering from competitors by embedding AI capabilities directly into its existing unified platform rather than creating separate AI products. This integration enables smarter, context-aware automation across Fortinet’s entire ecosystem of security, networking, SOC, and SASE solutions. While optimal performance is achieved within a Fortinet-centric infrastructure, the company also supports multi-vendor environments by offering modular add-ons like FortiAI Assist, which can integrate with third-party SIEM and SOAR platforms. Their AI governance model includes comprehensive tracking and compliance monitoring of LLM interactions, supporting enterprise needs for regulatory adherence and ethical AI use.