Introduction: Networks for AI – What’s New
AI workloads demand specialized networks — with separate front-end and back-end paths, differentiated QoS, and compute elements integrated directly into the network topology. Aviz presents their solution for configuring and managing AI workloads across the network.
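To make the front-end/back-end split concrete, here is a minimal Python sketch of how such a topology might be modeled, with compute treated as part of the fabric and traffic classes differentiated. The class and field names are illustrative assumptions, not part of any Aviz or NVIDIA tooling.

```python
# Illustrative only: a toy model of an AI fabric where compute is part of the
# topology and traffic classes are differentiated. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class QosClass:
    name: str        # e.g. "rdma-lossless" for GPU-to-GPU collectives
    dscp: int        # marking used to differentiate the class
    lossless: bool   # whether PFC-style lossless treatment is expected

@dataclass
class GpuServer:
    hostname: str
    frontend_nics: list[str] = field(default_factory=list)  # user/storage/mgmt traffic
    backend_nics: list[str] = field(default_factory=list)   # east-west GPU (RDMA) traffic

# A server declares which fabric each NIC attaches to, so the network design
# and the compute inventory are described together.
node = GpuServer(
    hostname="gpu-node-01",
    frontend_nics=["eth0"],
    backend_nics=["rdma0", "rdma1", "rdma2", "rdma3"],
)
qos = [
    QosClass("rdma-lossless", dscp=26, lossless=True),
    QosClass("default", dscp=0, lossless=False),
]
print(node.hostname, [c.name for c in qos])
```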
Enterprise Challenges
Aviz addresses enterprises' need for automation and orchestration to speed up deployment, service activation, and expansion. Aviz presents two common deployment paths: NVIDIA Spectrum-X, based on a reference architecture, or open networking with SONiC.
Customer Workflow
Aviz presents a customer use case, starting with network design and POD sizing, followed by Day 0 setup of fabric and QoS. On Day 1, tenants are onboarded and traffic is segmented. Day 2 brings scaling, alert management, and operational troubleshooting as the environment grows.
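This Day 0/1/2 workflow lends itself to a declarative, intent-style description. The Python sketch below shows one way such intent could be expressed; the schema is invented for illustration and is not the actual ONES data model.

```python
# Hedged sketch: Day 0/1/2 expressed as declarative intent (hypothetical schema).

# Day 0: fabric design and QoS baseline for one POD.
day0_fabric_intent = {
    "pod": {"name": "pod-1", "leafs": 8, "spines": 4, "gpu_servers": 32},
    "underlay": {"protocol": "bgp", "asn_range": [65001, 65012]},
    "qos": {"lossless_priorities": [3, 4], "pfc": True, "ecn": True},
}

# Day 1: onboard tenants and segment their traffic.
day1_tenants = [
    {"name": "team-a", "vrf": "Vrf-team-a", "vlans": [1001, 1002]},
    {"name": "team-b", "vrf": "Vrf-team-b", "vlans": [2001]},
]

# Day 2: scale the POD and tune alerting thresholds as the environment grows.
day2_changes = {
    "add_gpu_servers": 16,
    "alert_thresholds": {"pfc_pause_storm_pct": 5, "link_util_pct": 80},
}

for tenant in day1_tenants:
    print(f"onboarding {tenant['name']} into {tenant['vrf']}")
```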
Demo Walkthrough
- SONiC in ONES: Day 0 covers basic orchestration, BGP control plane, and lossless QoS. Day 2 focuses on monitoring the full stack — network, compute, and GPUs — and managing operational alerts.
- Spectrum-X with NVIDIA AIR + ONES: Aviz will show Day 0 orchestration per the NVIDIA reference architecture, with service unit setup combining compute and network. Day 2 showcases full-stack monitoring and streamlined Day-2 ops (see the monitoring sketch after this list).
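As a rough illustration of the Day 2 story in both demos, the sketch below evaluates telemetry from switches, NICs, and GPUs against simple thresholds and raises alerts. The metric names and limits are assumptions for illustration, not ONES output or its API.

```python
# Minimal monitoring sketch: treat switches, NICs, and GPUs as one monitored
# stack and raise alerts from their telemetry. Metrics/thresholds are invented.

TELEMETRY = [
    {"asset": "leaf-1",      "kind": "switch", "metric": "pfc_pause_rx_pps", "value": 12000},
    {"asset": "gpu-node-01", "kind": "nic",    "metric": "rdma_cnp_rx_pps",  "value": 300},
    {"asset": "gpu-node-01", "kind": "gpu",    "metric": "gpu_util_pct",     "value": 17},
]

THRESHOLDS = {
    "pfc_pause_rx_pps": 10000,  # sustained pauses hint at congestion on lossless classes
    "rdma_cnp_rx_pps": 1000,    # congestion notifications on the back-end fabric
    "gpu_util_pct": None,       # collected for correlation only, no alert threshold
}

def evaluate(samples):
    """Return alert strings for any sample that crosses its threshold."""
    alerts = []
    for s in samples:
        limit = THRESHOLDS.get(s["metric"])
        if limit is not None and s["value"] > limit:
            alerts.append(f"{s['asset']} ({s['kind']}): {s['metric']}={s['value']} exceeds {limit}")
    return alerts

for alert in evaluate(TELEMETRY):
    print(alert)
```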
Wrap-Up: What ONES Adds
ONES delivers centralized, multi-vendor management across SONiC, Cumulus, and Spectrum-X. It supports Day 0–2 workflows, simplifies multi-tenancy planning, aligns with leading architectures, and treats NICs and GPUs as core monitored assets.