This video is part of the appearance, “Keysight Presents at AI Field Day 5”. It was recorded as part of AI Field Day 5 at 8:00-9:30 on September 11, 2024.
Watch on YouTube
Watch on Vimeo
This demonstration of the AI Data Center Test Platform shows how network events affect completion times. The first demo illustrates how congestion and poor fabric utilization degrade performance, and how increasing the parallelism of data transfer improves both utilization and completion times.
In the presentation by Keysight Technologies at AI Field Day 5, Ankur Sheth, Director of AI Test R&D, demonstrated the AI Data Center Test Platform, focusing on how network events impact completion times. The setup emulated a server with eight GPUs connected to a two-tier fabric, using a Keysight AresONE box to stand in for the GPUs and their network interface cards (NICs). The demonstration showed the effects of network congestion on performance and how increasing the parallelism of data transfer can improve fabric utilization and completion times. The first scenario examined the impact of congestion on the network, revealing poor performance caused by misconfigured congestion-control settings.
Sheth explained the configuration and results of running an All Reduce collective operation, which is commonly used during the backward pass of a training job. The initial test showed that the poorly configured network delivered low utilization and high latency, achieving only 25% of theoretical throughput. Detailed flow completion times and their cumulative distribution functions (CDFs) revealed large discrepancies in data transfer times, pointing to a problem in the network configuration. After the network settings, particularly the Priority Flow Control (PFC) settings, were corrected, performance improved dramatically: utilization rose to 95% and completion times dropped significantly.
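The utilization figures above can be reasoned about with the bus-bandwidth convention used by NCCL-style all-reduce benchmarks, where each rank must move 2*(n-1)/n of the data across its link. Below is a minimal sketch of that formula plus the empirical CDF of flow completion times of the kind shown in the demo; the function names and sample values are illustrative, not part of Keysight's platform.

```python
import numpy as np

def allreduce_busbw(size_bytes, time_s, n_ranks):
    """Bus bandwidth for an all-reduce (NCCL benchmark convention):
    each rank moves 2*(n-1)/n of the payload across its link."""
    algbw = size_bytes / time_s              # throughput seen by the application
    return algbw * 2 * (n_ranks - 1) / n_ranks

def completion_cdf(samples_ms):
    """Empirical CDF of flow completion times (x: time, y: fraction done)."""
    xs = np.sort(np.asarray(samples_ms, dtype=float))
    ps = np.arange(1, len(xs) + 1) / len(xs)
    return xs, ps
```

Comparing `allreduce_busbw` against the link's line rate gives the utilization percentage, and a long right tail in `completion_cdf` is the signature of the stragglers the misconfigured PFC settings produced.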
In a second experiment, Sheth demonstrated the impact of using different algorithms and increasing the number of queue pairs (QPs), the connections used by the RDMA over Converged Ethernet (RoCE) protocol. The halving-doubling algorithm initially showed average performance with significant tail latencies. Increasing the queue pairs from one to eight made data transfers more parallel and more consistent, allowing the network to load-balance the traffic across paths and use the fabric more efficiently. The presentation concluded with a demonstration of how the platform's metrics and data can be integrated into automated test cases and analyzed using tools like Jupyter notebooks, providing valuable insights for network designers and engineers.
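To make the halving-doubling algorithm concrete, here is a minimal NumPy simulation of it: a reduce-scatter by recursive halving followed by an all-gather by recursive doubling. This is a generic sketch of the textbook algorithm (assuming power-of-two rank counts and vector lengths), not Keysight's implementation.

```python
import numpy as np

def halving_doubling_allreduce(data):
    """Simulate a halving-doubling all-reduce over len(data) ranks.
    `data` holds one equal-length array per rank; rank count and vector
    length must be powers of two. Every returned buffer equals the sum."""
    n = len(data)
    buf = [np.asarray(d, dtype=float).copy() for d in data]
    lo, hi = [0] * n, [buf[0].size] * n   # segment each rank still reduces

    # Phase 1 -- reduce-scatter by recursive halving: partners exchange
    # halves of their shared segment, so each rank ends up owning the
    # fully reduced sum of a 1/n slice.
    dist = n // 2
    while dist >= 1:
        for r in range(n):
            p = r ^ dist                      # partner at this distance
            if r < p:                         # handle each pair once
                mid = (lo[r] + hi[r]) // 2
                buf[r][lo[r]:mid] += buf[p][lo[r]:mid]
                buf[p][mid:hi[r]] += buf[r][mid:hi[r]]
                hi[r], lo[p] = mid, mid       # r keeps lower, p upper half
        dist //= 2

    # Phase 2 -- all-gather by recursive doubling: partners swap their
    # owned segments, doubling the reduced region each step.
    dist = 1
    while dist < n:
        for r in range(n):
            p = r ^ dist
            if r < p:
                buf[r][lo[p]:hi[p]] = buf[p][lo[p]:hi[p]]
                buf[p][lo[r]:hi[r]] = buf[r][lo[r]:hi[r]]
                lo[r] = lo[p] = min(lo[r], lo[p])
                hi[r] = hi[p] = max(hi[r], hi[p])
        dist *= 2
    return buf
```

Each pairwise exchange in the simulation corresponds to one large RoCE flow when a single queue pair is used; splitting the same exchange across eight queue pairs gives the fabric's hashing more flows to spread over parallel paths, which is the load-balancing improvement Sheth demonstrated.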
Personnel: Ankur Sheth