AI networks require purpose-built hardware platforms designed for different roles across the infrastructure. This presentation outlines the hardware platforms positioned for these roles, highlighting how each supports performance, bandwidth, and operational needs, with a focus on the scale-out part of the network. It also looks ahead to emerging platforms designed for scale-across architectures, enabling the next phase of large-scale, interconnected AI systems. Igor Giangrossi, lead of hardware product management at Nokia, details the specialized data center portfolio that moves beyond the traditional 7750 SR into platforms specifically optimized for the high-throughput, low-latency demands of AI training and inference.
The presentation focuses heavily on the 7220 IXR series, which utilizes Broadcom Tomahawk chipsets to drive the scale-out portion of the network. Giangrossi introduces the Tomahawk 5 (TH5) generation, offering up to 51.2 Tbps of capacity with 800G ports, and the newer Tomahawk 6 (TH6) generation, which doubles density to 128 ports of 800G or provides 1.6T Ethernet capabilities. A notable advancement in the TH6 family is the introduction of liquid-cooled models designed for 21-inch OCP ORv3 racks, addressing the extreme power densities required as AI clusters scale. These platforms integrate advanced features such as packet trimming and credit-based flow control into the packet pipeline to manage congestion and improve job completion times.
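As a quick sanity check on the cited generations, the port counts follow directly from the ASIC capacities. This sketch assumes the publicly stated Broadcom figures of 51.2 Tbps for Tomahawk 5 and 102.4 Tbps for Tomahawk 6:

```python
# Arithmetic relating switch ASIC capacity to full-rate port counts.
# Assumes public Broadcom figures: Tomahawk 5 = 51.2 Tbps, Tomahawk 6 = 102.4 Tbps.

def port_count(asic_tbps: float, port_gbps: int) -> int:
    """Number of full-rate front-panel ports a given ASIC capacity can serve."""
    return int(asic_tbps * 1000 // port_gbps)

print(port_count(51.2, 800))    # Tomahawk 5: 64 ports of 800G
print(port_count(102.4, 800))   # Tomahawk 6: 128 ports of 800G
print(port_count(102.4, 1600))  # Tomahawk 6: 64 ports of 1.6T
```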
For scale-across and deep-buffered routing roles, Nokia utilizes the Broadcom Jericho family, including the 7250 IXR-X4 pizza box and the massive IXR-e chassis series. These platforms provide the necessary buffering for geo-distributed clusters and long-reach interconnects while maintaining high port density, such as 576 ports of 800G in a single IXR-18e chassis. The hardware design prioritizes operational efficiency and reliability through a midplane-less orthogonal architecture, honeycomb meshes for improved airflow, and the deliberate avoidance of retimers to reduce power consumption by up to 30%. This tiered approach ensures that the most appropriate silicon, whether Tomahawk, Jericho, or Nokia's proprietary FP NPU, is deployed for each specific role in the AI infrastructure.
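To put the chassis density figure in perspective, a quick calculation (using only the 576 × 800G number cited above) gives the aggregate front-panel bandwidth:

```python
# Aggregate front-panel bandwidth implied by the cited chassis density.
ports = 576           # 800G ports in a single chassis, per the figure above
port_speed_gbps = 800
total_tbps = ports * port_speed_gbps / 1000
print(total_tbps)     # 460.8 Tbps of aggregate 800GE capacity
```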
Personnel: Igor Giangrossi