This video is part of the appearance, “Fabrix.ai Presents at AI Infrastructure Field Day“. It was recorded as part of AI Infrastructure Field Day 4 at 2:30PM – 3:30PM PT on January 28, 2026.
Watch on YouTube
Watch on Vimeo
Fabrix.ai highlights the critical challenges in deploying agentic AI from prototype to production within large enterprises. Presenter Rached Blili noted that while agents are quick to prototype, they frequently fail in real-world environments due to dynamic variables. These failures typically stem from issues in context management, such as handling large tool responses and maintaining “context purity,” as well as from operational challenges related to observability and infrastructure, including security and user rights. To overcome these hurdles, Fabrix.ai proposes three core principles: moving as much of the problem as possible to the tooling layer, rigorously curating the context fed to the Large Language Model (LLM), and implementing comprehensive operational controls that monitor for business outcomes rather than just technical errors.
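To make the context-curation idea concrete, here is a minimal Python sketch of the pattern described: a large tool response is parked outside the prompt, and the LLM receives only an addressable handle plus a compact profile (such as a histogram). The `ContextCache` name and its fields are illustrative assumptions, not Fabrix.ai's actual API.

```python
import statistics

class ContextCache:
    """Hypothetical sketch: store a large tool response out of band and
    hand the LLM only an address plus a compact statistical profile."""

    def __init__(self):
        self._store = {}

    def put(self, key, rows):
        self._store[key] = rows
        # Profile instead of raw data: row count, mean, coarse histogram.
        buckets = {}
        for v in rows:
            bucket = int(v) // 10 * 10          # 10-wide histogram buckets
            buckets[bucket] = buckets.get(bucket, 0) + 1
        return {
            "ref": key,                          # addressable handle, not the data
            "rows": len(rows),
            "mean": round(statistics.mean(rows), 2),
            "histogram": dict(sorted(buckets.items())),
        }

    def fetch(self, key):
        return self._store[key]                  # tools resolve the handle later

cache = ContextCache()
profile = cache.put("latency_ms", [12, 14, 55, 61, 8, 9, 72])
# The LLM sees a handful of short fields instead of thousands of raw rows.
```

The model can still reason about the data's shape from the profile, and any downstream tool can dereference `ref` to get the full payload without the rows ever entering the context window.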
Fabrix.ai’s solution is a middleware built on a “trifabric platform” comprising data, automation, and AI fabrics. This middleware features two primary functional components: the Context Engine and the Tooling and Connectivity Engine. The Context Engine focuses on delivering pure, relevant information to the LLM through intelligent caching of large datasets (making them addressable and providing profiles such as histograms) and sophisticated conversation compaction that tailors summaries to the current user goal, preserving critical information better than traditional summarization. The Tooling and Connectivity Engine serves as an abstraction layer that integrates various enterprise tools, including existing MCP servers and non-MCP tools. It allows tools to exchange data directly, bypassing the LLM and preventing token waste. This engine uses a low-code, YAML-based approach for tool definition and dynamic data discovery to automatically generate robust, specific tools for common enterprise workflows, thereby reducing the LLM’s burden and improving reliability.
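The direct tool-to-tool exchange described above can be sketched as follows, under assumed names (`ToolBus`, `run`) that are illustrative rather than Fabrix.ai's actual interfaces: tool outputs live inside the tooling layer and only short handles travel toward the model, so bulk payloads never consume tokens.

```python
class ToolBus:
    """Hypothetical tooling layer where tools pass data by handle,
    keeping bulk payloads out of the LLM's context."""

    def __init__(self):
        self._payloads = {}      # handle -> bulk data, invisible to the LLM
        self._tools = {}

    def register(self, name, fn):
        self._tools[name] = fn

    def run(self, name, handle_in=None):
        data = self._payloads.get(handle_in)
        result = self._tools[name](data)
        handle_out = f"{name}:{len(self._payloads)}"
        self._payloads[handle_out] = result
        # Only the short handle is returned toward the model.
        return handle_out

bus = ToolBus()
bus.register("query_logs", lambda _: ["ERROR disk full"] * 5000)
bus.register("count_errors", lambda rows: sum("ERROR" in r for r in rows))

h1 = bus.run("query_logs")           # 5000 rows stay inside the bus
h2 = bus.run("count_errors", h1)     # consumes the handle, not the rows
print(bus._payloads[h2])             # → 5000
```

The design choice mirrors the talk's point: the second tool reads the first tool's output directly from the bus, so the LLM only orchestrates with handles and never pays tokens to ferry 5,000 log lines between tools.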
Beyond these core components, Fabrix.ai emphasizes advanced operational capabilities. Their platform incorporates qualitative analysis of agentic sessions, generating reports, identifying themes, and suggesting optimizations to improve agent performance over time, effectively placing agents on a “performance improvement plan” (PIP). This outcome-based evaluation contrasts with traditional metrics like token count or latency. Case studies demonstrated Fabrix.ai’s ability to handle queries across vast numbers of large documents, outperforming human teams in efficiency and consistency, and to correlate information across numerous heterogeneous systems without requiring a data lake, thanks to dynamic data discovery. The platform also includes essential spend management and cost controls, recognizing the risk that agents may incur high operational costs if not properly managed.
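The spend-management point can be illustrated with a minimal budget-guard sketch. The `AgentBudget` class, the per-token rate, and the dollar cap below are all assumptions for illustration, not details from the presentation.

```python
class BudgetExceeded(RuntimeError):
    pass

class AgentBudget:
    """Hypothetical cost control: halt an agent session once its
    accumulated token spend crosses a hard dollar cap."""

    def __init__(self, max_usd, usd_per_1k_tokens=0.01):
        self.max_usd = max_usd
        self.rate = usd_per_1k_tokens
        self.spent = 0.0

    def charge(self, tokens):
        self.spent += tokens / 1000 * self.rate
        if self.spent > self.max_usd:
            raise BudgetExceeded(
                f"session cost ${self.spent:.2f} exceeds ${self.max_usd:.2f} cap"
            )

budget = AgentBudget(max_usd=0.05)
try:
    for step in range(100):          # a runaway agent loop
        budget.charge(tokens=2000)   # each step consumes ~2k tokens
except BudgetExceeded as e:
    print(e)                         # loop is cut off after a few steps
```

A guard like this turns the risk the talk names, agents quietly incurring high operational costs, into a hard stop that surfaces in observability rather than on the invoice.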
Personnel: Rached Blili