Rached Blili and Shailesh Manjrekar presented for Fabrix.ai at AI Infrastructure Field Day 4
This presentation took place on January 28, 2026, from 2:30 PM to 3:30 PM PT.
Presenters: Rached Blili, Shailesh Manjrekar
Fabrix’s Enterprise-Ready AgentOps Platform powered by Agentic Middleware – Deep dive & demo
Despite a proliferation of experimental demos and siloed implementations, enterprises derive very little value from AI agents and agentic workflows because these efforts lack enterprise readiness for production use cases at scale. This is particularly true for operational use cases, where data is real-time and environments span thousands of assets and devices and several hundred reasoning workflows.
Fabrix’s AgentOps platform is full-stack, purpose-built, and enterprise-grade, designed specifically to overcome these challenges.
- The platform’s unique proposition is its support for multiple operational personas – ITOps, SecOps, NOCOps, and BizOps – who can use out-of-the-box agents or build their own for bespoke use cases.
- Fabrix’s Agentic Middleware generates precise, curated context, eliminating hallucinations and providing the right guardrails, while the universal tooling engine delivers the dynamic tooling and tool wrapping needed to overcome data management challenges.
- Lastly, the AgentOps framework enables operationalizing deployments with trust, governance, security, reliability and performance, observability, and explainability.
Build Reliable, Secure, and Performant Agents using Fabrix.AI AgentOps Platform
Watch on YouTube
Watch on Vimeo
Fabrix.AI addresses the evolving AI operations landscape with an AgentOps platform for building reliable, secure, and high-performance agents. The company, formerly CloudFabrix, rebranded as Fabrix.AI in response to customer demand for agentic functionality, moving beyond traditional AIOps, which relies on manual remediation after correlation and root-cause analysis. The shift was motivated by real-world challenges, such as an eight-hour telco outage caused by inadvertent access control list changes, which highlighted the need for autonomous or semi-autonomous remediation workflows powered by Large Language Models (LLMs). This transition introduces new complexities, however, including the non-deterministic nature of LLMs, context and data management at scale, and the challenge of connecting to diverse data sources, all of which can lead to hallucination and an “agentic value gap,” where experimental demos rarely translate into enterprise value.
Fabrix.AI’s solution centers on proprietary middleware that serves as a critical intermediary between AI agents/LLMs and various data sources. This middleware comprises two main components: the Context Engine and Universal Tooling. The Context Engine ensures “purity of context” by providing only curated, summarized data to the LLM, thereby preventing context corruption and reducing hallucination, while also maintaining state across interactions. The Universal Tooling dynamically connects to over 1,700 disparate data sources, including MCP-enabled endpoints, API-based systems, and raw or legacy data, by creating necessary wrappers and normalizing data schemas for LLM understanding, and can even dynamically generate tools by scraping public APIs. This approach allows the platform to integrate seamlessly with existing IT environments, offering a full-stack solution from data acquisition to automation.
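To make the tool-wrapping idea concrete, here is a minimal Python sketch of how a legacy, non-MCP data source might be wrapped so its vendor-specific fields arrive in a normalized schema before anything is summarized for the model. The class, function, and field names are illustrative assumptions, not Fabrix.AI’s actual interfaces.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List


@dataclass
class NormalizedRecord:
    """A common schema the LLM sees regardless of the underlying source."""
    source: str
    entity: str
    metric: str
    value: float
    unit: str


def wrap_legacy_source(name: str,
                       fetch: Callable[[], List[Dict[str, Any]]],
                       field_map: Dict[str, str]) -> Callable[[], List[NormalizedRecord]]:
    """Build a 'tool wrapper' that converts raw rows from a legacy API
    into the normalized schema before anything reaches the model."""
    def tool() -> List[NormalizedRecord]:
        records = []
        for row in fetch():
            records.append(NormalizedRecord(
                source=name,
                entity=str(row[field_map["entity"]]),
                metric=str(row[field_map["metric"]]),
                value=float(row[field_map["value"]]),
                unit=str(row.get(field_map.get("unit", ""), "count")),
            ))
        return records
    return tool


# Hypothetical legacy endpoint returning vendor-specific field names.
def fetch_router_stats() -> List[Dict[str, Any]]:
    return [{"dev": "edge-rtr-01", "kpi": "cpu_util", "val": 87.5, "uom": "%"}]


router_tool = wrap_legacy_source(
    "legacy-nms",
    fetch_router_stats,
    field_map={"entity": "dev", "metric": "kpi", "value": "val", "unit": "uom"},
)

for rec in router_tool():
    print(rec)
```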
The platform is purpose-built for real-time data environments, differentiating it from generic agentic frameworks that may not meet these requirements. It offers a “co-pilot” for conversational queries and an “Agent Studio” for building custom agents, supplementing its library of 50 out-of-the-box agents across AIOps, Observability, SecOps, and BizOps. Fabrix.AI emphasizes operationalizing agents through its AgentOps model, which incorporates trust via prompt templates and dynamic instructions, governance through FinOps models, security via a “least agency” principle, and comprehensive observability at the agentic layer with audit trails and real-time flow maps. By consolidating tools, reducing Mean Time to Resolution (MTTR) and alert noise, and enabling faster deployments, Fabrix.AI positions itself as a robust, enterprise-grade platform that complements and enhances existing observability and ITOM tools.
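The “least agency” principle can be pictured as a per-persona allow-list enforced in front of every tool call, as in the short sketch below. The policy structure, personas, and audit output are hypothetical stand-ins for whatever enforcement the platform actually performs.

```python
from typing import Callable, Dict, Set

# Hypothetical per-persona tool allow-lists ("least agency": an agent only
# gets the minimum set of actions its role requires).
POLICY: Dict[str, Set[str]] = {
    "NOCOps": {"get_device_config", "run_rca_report"},
    "SecOps": {"list_acl_changes", "run_rca_report"},
    "BizOps": {"get_cost_summary"},
}


class AgencyViolation(Exception):
    pass


def invoke_tool(persona: str, tool_name: str,
                tools: Dict[str, Callable[[], str]]) -> str:
    """Refuse any call outside the persona's allowed tool set,
    and leave an audit-trail entry either way."""
    allowed = POLICY.get(persona, set())
    if tool_name not in allowed:
        print(f"[audit] DENIED {persona} -> {tool_name}")
        raise AgencyViolation(f"{persona} may not call {tool_name}")
    print(f"[audit] ALLOWED {persona} -> {tool_name}")
    return tools[tool_name]()


tools = {"get_cost_summary": lambda: "spend this month: $12,400"}
print(invoke_tool("BizOps", "get_cost_summary", tools))
```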
Personnel: Shailesh Manjrekar
Crossing the Production Gap to Agentic AI with Fabrix.ai
Watch on YouTube
Watch on Vimeo
Fabrix.ai highlights the critical challenges in deploying agentic AI from prototype to production within large enterprises. Rached Blili noted that while agents are quick to prototype, they frequently fail in real-world environments due to dynamic variables. These failures typically stem from issues in context management, such as handling large tool responses and maintaining “context purity,” as well as from operational challenges related to observability and infrastructure, including security and user rights. To overcome these hurdles, Fabrix.ai proposes three core principles: moving as much of the problem as possible to the tooling layer, rigorously curating the context fed to the Large Language Model (LLM), and implementing comprehensive operational controls that monitor for business outcomes rather than just technical errors.
Fabrix.ai’s solution is a middleware built on a “trifabric platform” comprising data, automation, and AI fabrics. This middleware features two primary functional components: the Context Engine and the Tooling and Connectivity Engine. The Context Engine focuses on delivering pure, relevant information to the LLM through intelligent caching of large datasets (making them addressable and providing profiles such as histograms) and sophisticated conversation compaction that tailors summaries to the current user goal, preserving critical information better than traditional summarization. The Tooling and Connectivity Engine serves as an abstraction layer that integrates various enterprise tools, including existing MCP servers and non-MCP tools. It allows tools to exchange data directly, bypassing the LLM and preventing token waste. This engine uses a low-code, YAML-based approach for tool definition and dynamic data discovery to automatically generate robust, specific tools for common enterprise workflows, thereby reducing the LLM’s burden and improving reliability.
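A rough sense of the caching behavior can be conveyed with a simple in-memory sketch, assuming a large tool response is kept out of the prompt and the model instead receives an addressable handle plus a compact profile (here, a histogram over one numeric column), while a downstream tool can pull the raw rows directly without routing them through the LLM. All names and structures below are invented for illustration.

```python
import uuid
from collections import Counter
from typing import Any, Dict, List

_CACHE: Dict[str, List[Dict[str, Any]]] = {}


def cache_tool_output(rows: List[Dict[str, Any]], profile_field: str) -> Dict[str, Any]:
    """Store a large tool response and hand the LLM only a handle and a profile."""
    handle = str(uuid.uuid4())
    _CACHE[handle] = rows
    # Bucket the numeric field into a coarse histogram the model can reason over.
    buckets = Counter((int(r[profile_field]) // 10) * 10 for r in rows)
    return {
        "handle": handle,
        "row_count": len(rows),
        "histogram": {f"{b}-{b + 9}": n for b, n in sorted(buckets.items())},
    }


def fetch_rows(handle: str, limit: int = 5) -> List[Dict[str, Any]]:
    """Let a downstream tool pull raw rows directly, bypassing the model."""
    return _CACHE[handle][:limit]


interface_stats = [{"iface": f"ge-0/0/{i}", "util": (i * 7) % 100} for i in range(500)]
summary = cache_tool_output(interface_stats, profile_field="util")
print(summary)                        # what the LLM sees
print(fetch_rows(summary["handle"]))  # what the next tool consumes directly
```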
Beyond these core components, Fabrix.ai emphasizes advanced operational capabilities. Their platform incorporates qualitative analysis of agentic sessions, generating reports, identifying themes, and suggesting optimizations to improve agent performance over time, effectively placing agents on a “performance improvement plan” (PIP). This outcome-based evaluation contrasts with traditional metrics like token count or latency. Case studies demonstrated Fabrix.ai’s ability to handle queries across vast numbers of large documents, outperforming human teams in efficiency and consistency, and to correlate information across numerous heterogeneous systems without requiring a data lake, thanks to dynamic data discovery. The platform also includes essential spend management and cost controls, recognizing the risk that agents may incur high operational costs if not properly managed.
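The spend-management point can be illustrated with a minimal budget guard that halts an agent once its session cost crosses a threshold; the per-token rate and class names are assumptions, not Fabrix.ai’s actual cost model.

```python
class BudgetExceeded(Exception):
    pass


class SpendTracker:
    """Accumulate per-session LLM spend and stop an agent that blows its budget."""

    def __init__(self, budget_usd: float, usd_per_1k_tokens: float = 0.01):
        self.budget_usd = budget_usd
        self.rate = usd_per_1k_tokens
        self.spent_usd = 0.0

    def record(self, tokens: int) -> None:
        self.spent_usd += tokens / 1000 * self.rate
        if self.spent_usd > self.budget_usd:
            raise BudgetExceeded(
                f"session spend ${self.spent_usd:.2f} exceeds budget ${self.budget_usd:.2f}"
            )


tracker = SpendTracker(budget_usd=0.05)
try:
    for step_tokens in (1200, 1800, 2500):   # hypothetical per-step token counts
        tracker.record(step_tokens)
        print(f"running total: ${tracker.spent_usd:.4f}")
except BudgetExceeded as err:
    print(f"halting agent: {err}")
```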
Personnel: Rached Blili
Fabrix.ai Demo – Building Agentic AI at scale for Production
Fabrix.ai is building agentic AI at scale for production, moving beyond proofs of concept to deliver robust solutions. In the video from the Fabrix.AI channel, Rached Blili demonstrated the Fabrix.ai platform, highlighting its agent catalog, where users can access and manage a variety of agents, both developed by Fabrix.ai and custom-built. The platform offers an AI Storyboard dashboard that provides a comprehensive view of AI operations, enabling agents to be organized into projects with distinct permissions and toolsets. A significant emphasis is placed on observability, including detailed AI cost tracking at both global and project levels, and visibility into individual “conversations” or agentic sessions. Uniquely, Fabrix.ai provides performance evaluation for agents, treating them as digital workers by monitoring their performance over time, identifying top and underperforming agents, and suggesting specific fixes, such as modifying system prompts, to continuously improve their efficacy.
The demonstration showcases two types of agents: autonomous and interactive. Autonomous agents operate in the background, triggered by events, alerts, or schedules, as exemplified by a Network Root Cause Analysis agent. This agent automatically diagnoses network failures, such as router configuration errors, by analyzing logs, incident data, and router configurations. It generates comprehensive reports detailing the root cause, impact assessment, and multiple remediation plans, which a remediation agent can then use for automated implementation and verification. For interactive use, Fabrix.ai’s copilot, Fabio, enables users to converse directly with agents to manage complex tasks, such as verifying VPNs or configuring Netflow in a lab network, significantly reducing manual intervention and saving time.
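One way to picture the autonomous mode is an agent registered against a specific alert type, as in the sketch below; the trigger names, alert payload, and handler are hypothetical and only mirror the event-driven behavior described in the demo.

```python
from typing import Callable, Dict

# Registry mapping alert types to autonomous agents (hypothetical structure).
_TRIGGERS: Dict[str, Callable[[dict], str]] = {}


def on_alert(alert_type: str):
    """Register an agent function to run whenever this alert type fires."""
    def register(agent: Callable[[dict], str]) -> Callable[[dict], str]:
        _TRIGGERS[alert_type] = agent
        return agent
    return register


@on_alert("bgp_session_down")
def network_rca_agent(alert: dict) -> str:
    # In a real workflow this step would pull logs, incident data, and the
    # router configuration before drafting the report.
    return (f"RCA report for {alert['device']}: probable cause is a recent "
            f"configuration change; see remediation plan attached.")


def dispatch(alert: dict) -> str:
    """Simulate the event bus handing an alert to its registered agent."""
    return _TRIGGERS[alert["type"]](alert)


print(dispatch({"type": "bgp_session_down", "device": "edge-rtr-01"}))
```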
Delving into the underlying architecture, the presentation revealed that complex problems are tackled using multi-agent complexes, where an orchestrator agent calls specialized sub-agents, each handling a specific part of the problem with a sequestered context. This approach enhances individual agents’ capabilities while enabling detailed cost management, tracking token usage, time, and expenses, and capturing individual agent contributions within a hierarchical structure. A detailed example illustrated an application root-cause analysis in which the orchestrator agent systematically investigated incident details, application dependency maps, and even interpreted plain-English change requests from a ticketing system. The platform’s advanced context and tooling engines are critical to operating at scale, enabling mass operations across numerous devices in parallel and efficiently processing vast tool outputs by storing them in a context cache for later retrieval and analysis, ensuring effective, secure, and reliable agent deployment.
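The multi-agent structure described above can be sketched as an orchestrator that gives each specialist sub-agent its own sequestered context and rolls per-agent token costs up into a single report. Every name, finding, and token count below is illustrative rather than taken from the platform.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple


@dataclass
class SubAgent:
    name: str
    run: Callable[[str], Tuple[str, int]]             # returns (finding, tokens used)
    context: List[str] = field(default_factory=list)  # sequestered per-agent context


def incident_agent(task: str) -> Tuple[str, int]:
    return ("incident INC-1042 opened 14:02 UTC, severity 1", 900)


def dependency_agent(task: str) -> Tuple[str, int]:
    return ("checkout-service depends on payments-db, which shows errors", 1400)


def change_agent(task: str) -> Tuple[str, int]:
    return ("change request CHG-77 altered payments-db connection pool at 13:55", 1100)


def orchestrate(task: str, agents: List[SubAgent]) -> Dict[str, object]:
    """Fan the task out to specialists, keep their contexts separate,
    and roll findings and token costs up into one report."""
    findings, cost_by_agent = [], {}
    for agent in agents:
        finding, tokens = agent.run(task)
        agent.context.append(finding)            # stays with this agent only
        findings.append(f"{agent.name}: {finding}")
        cost_by_agent[agent.name] = tokens
    return {"root_cause_summary": findings, "token_cost": cost_by_agent}


agents = [SubAgent("incident", incident_agent),
          SubAgent("dependency-map", dependency_agent),
          SubAgent("change-review", change_agent)]
report = orchestrate("application RCA for checkout latency", agents)
print(*report["root_cause_summary"], sep="\n")
print("tokens:", report["token_cost"])
```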
Personnel: Rached Blili









