As AI training and inference scale, the network must function as an extension of the compute fabric. This session explores the architectural requirements for high-performance AI data centers. We will examine the shift toward deterministic networking to mitigate tail latency and fabric congestion, alongside critical hardware innovations, including advanced cooling and next-generation optics, designed to maximize performance and power efficiency. Attendees will gain technical insights into building a unified, programmable fabric that optimizes performance and scalability for high-density AI environments.
The presentation introduces the third generation of Cisco’s bidirectional (BiDi) technology, specifically the 400G BiDi optic. This innovation addresses fiber infrastructure constraints by enabling fiber reuse, allowing customers to upgrade from 40G or 100G to 400G over existing duplex multi-mode fiber without installing new trunk cables or patch panels. By utilizing four wavelengths at 100G each over a single fiber pair, the 400G BiDi simplifies the physical layer with LC connectors, making it eight times more fiber-efficient than parallel SR8 solutions. This approach offers significant financial and operational benefits for both brownfield and greenfield deployments by reducing installation costs and troubleshooting complexity.
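The fiber-count arithmetic behind the "eight times more fiber-efficient" claim can be sketched as a quick back-of-the-envelope calculation (the per-link figures below follow from the description above; this is an illustration, not a Cisco sizing tool):

```python
# Back-of-the-envelope fiber-count comparison for one 400G link,
# based on the figures quoted in the session description.

# 400G-SR8: 8 parallel lanes, each lane on a dedicated transmit
# fiber plus a dedicated receive fiber -> 16 fibers total.
sr8_fibers = 8 * 2

# 400G BiDi: four 100G wavelengths multiplexed over a single
# duplex pair (one fiber per direction) -> 2 fibers total.
bidi_fibers = 2

print(sr8_fibers // bidi_fibers)  # -> 8, i.e. 8x fewer fibers per link
```

The same duplex pair is what existing 40G and 100G BiDi links already use, which is why the upgrade avoids new trunk cables.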
A major portion of the session focuses on the critical role of optics reliability and Cisco’s advanced silicon photonics in AI environments. Unlike traditional networks, where retransmissions are routine, AI workloads are highly synchronized; a single unreliable optical link can cause GPU clusters to stall, potentially reducing performance by 40%. Cisco’s silicon photonics architecture integrates electronics and photonics into a single system, improving stability and power efficiency at 800G and 1.6T speeds. Notable highlights include the 1.6T pluggable optic, which supports flexible breakout options, and the 800G Linear Pluggable Optic (LPO). By removing the DSP from the optic and shifting signal conditioning to the switch ASIC, the LPO solution reduces power consumption by 50% per module and lowers overall system latency, providing a more reliable and sustainable foundation for large-scale AI factories.
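A simple availability model illustrates why one flaky optic matters so much in a synchronized cluster: a training step stalls if any link in the collective path misbehaves, so per-link reliability compounds across the fabric. This is an illustrative sketch, not Cisco's methodology, and the link count and error rate below are hypothetical:

```python
# Illustrative model: a synchronized AI training step is clean only
# if EVERY optical link in the collective path is error-free, so
# per-link reliability compounds multiplicatively across the fabric.

def p_step_clean(num_links: int, p_link_error: float) -> float:
    """Probability that a synchronized step sees no link errors,
    assuming independent failures (hypothetical inputs below)."""
    return (1.0 - p_link_error) ** num_links

# Hypothetical fabric: 1,000 links, each with a 0.01% chance of an
# error during a given step.
print(f"{p_step_clean(1000, 0.0001):.3f}")  # -> 0.905: ~10% of steps impacted
```

Even a tiny per-link error rate translates into a large fraction of stalled steps at cluster scale, which is the argument the session makes for integrated silicon photonics.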
Personnel: Paymon Mogharabi