Fortinet AI-Powered Transformation

Event: Mobility Field Day 13

Appearance: Fortinet Presents at Mobility Field Day 13

Company: Fortinet

Video Links:

Personnel: Alex Vizzari, James Allen

Join us for an in-depth exploration of Fortinet’s AI-driven approach to campus network management. We’ll discuss how FortiAI and FortiAIOps empower organizations to streamline deployment, enhance operational efficiency, and ensure seamless integration of security and networking infrastructure. Discover how artificial intelligence is shaping the future of network security management.


Fortinet Secure Networking Vision & Overview

Event: Mobility Field Day 13

Appearance: Fortinet Presents at Mobility Field Day 13

Company: Fortinet

Video Links:

Personnel: Sumana Mannem

Fortinet is more than just a firewall company; we provide a comprehensive suite of cybersecurity and LAN solutions that help IT and security professionals create a secure, scalable, and high-performance network. This session will offer an overview of Fortinet’s vision for the convergence of security and networking, focusing on how our integrated solutions are transforming the way businesses secure their networks.


Arista CloudVision AGNI, Arista’s Next Generation NAC In Action

Event: Mobility Field Day 13

Appearance: Arista Presents at Mobility Field Day 13

Company: Arista

Video Links:

Personnel: Anubhav Gupta, Parul Sharma

One of the core tenets of CloudVision AGNI is the ability to integrate with other enterprise security tools for profiling, access control, and dynamic network access, delivering a true Zero Trust networking solution. In this video, we will demo how CV AGNI can integrate with CrowdStrike for dynamic access control and also showcase Arista’s patented UPSK solution with segmentation.


Beyond Signal Bars: Optimizing Wi-Fi User Happiness with Arista’s Digital Experience Monitoring

Event: Mobility Field Day 13

Appearance: Arista Presents at Mobility Field Day 13

Company: Arista

Video Links:

Personnel: Robert Ferruolo, Senthil Shanmugavadivel

In this video, we will introduce some of Arista’s latest innovations in AIOps, proactive network assurance, and Digital Experience Monitoring, leveraging Arista’s unique Multi-Function Radio capability. Join us as we provide a live demo of these exciting new features and innovations.


Arista and Wi-Fi 7: What the Real World and Arista Lab Tests are Really Telling Us

Event: Mobility Field Day 13

Appearance: Arista Presents at Mobility Field Day 13

Company: Arista

Video Links:

Personnel: Asvin Kumar Muthrurangam

In this session, Asvin will share some insights into the Multi-Link Operation (MLO) behavior of various Wi-Fi 7 clients based on testing conducted in Arista’s lab. Through a live demo, we’ll see how MLO-enabled clients use the spectrum intelligently compared to non-MLO clients – highlighting the real-world benefits of MLO in action.


Engineering Campus-Wide Mobility: Arista’s Scalable Wi-Fi Roaming Design

Event: Mobility Field Day 13

Appearance: Arista Presents at Mobility Field Day 13

Company: Arista

Video Links:

Personnel: Ken Duda

Designing a large Wi-Fi roaming domain, especially in environments like a university campus, presents many challenges. In this video, Arista’s founder and CTO, Ken Duda, talks about how Arista applied learnings from large-scale data center and AI cluster deployments to solve Wi-Fi roaming, unifying the wired and WLAN data plane fabric.


Arista’s Latest and Greatest: Innovations in Campus Since the Last Mobility Field Day

Event: Mobility Field Day 13

Appearance: Arista Presents at Mobility Field Day 13

Company: Arista

Video Links:

Personnel: Kumar Srikantan

In this video, get an update from Kumar Srikantan on the state of Arista’s Campus solution in 2025. Hear about the new Wi-Fi 7 Access Point portfolio, the expanded switching portfolio, and exciting new features, especially around AIOps and CloudVision AGNI.


Cisco SDN Strategy Discussion

Event:

Appearance: Packet Pushers Discussion of Cisco’s SDN Strategy at Cisco Live US 2012

Company: Cisco

Video Links:

Personnel: Derick Winkworth, Ethan Banks, Greg Ferro, Russ White, Stephen Foskett, Tom Hollingsworth

A group of independent thought leaders from the Networking Field Day/PacketPushers crew gathered at Cisco Live US 2012 to discuss the company’s Open Networking Environment (ONE) announcement. This announcement centered on a strategy for software-defined networking (SDN), and this was the focus of our discussion as well.

This wide-ranging discussion touched on the following topics:

  • Contrasting Cisco’s ONE strategy with SDN and OpenFlow in general
  • APIs, OpenFlow, and XML
  • What will people do with SDN in the future?
  • Distributed and autonomous versus centralized
  • Standards: IEEE vs. IETF, de facto and interoperability
  • VXLAN and the Nexus 1000V – Is 1000V SDN?
  • Operational and organizational impacts
  • Systems engineering
  • Thinking of networks as flows

The conversation will continue in July with two more “Virtual Symposium” discussions with Cisco. We will cover Network Programmability and Virtual Machine Networking. Watch for more!


Demonstration of Day 2 AI network operations, monitoring and anomaly detection with Aviz

Event: AI Infrastructure Field Day 2

Appearance: Aviz Networks Presents at AI Infrastructure Field Day 2

Company: Aviz Networks

Video Links:

Personnel: Ravi Kumar

Aviz Networks’ AI Infrastructure Field Day demonstration focused on Day 2 operations, monitoring, and anomaly detection for AI workloads. The core challenge addressed is the specialized networking requirements of AI, including multiple networks, differentiated QoS, and the need to manage compute as part of the end-to-end network topology. Aviz presented solutions for orchestrating AI fabrics based on SONiC and NVIDIA’s Spectrum-X reference architecture, showcasing a customer workflow that includes network design, Day 0 infrastructure deployment, Day 1 tenant onboarding and traffic isolation, and Day 2 operations like adding Pods, handling alerts, and troubleshooting.

The presentation demonstrated Aviz’s orchestration capabilities for SONiC-based and NVIDIA RA-based AI fabrics. For SONiC, the presenter showed how to orchestrate the fabric using YAML-based intent, validate configurations, and perform operational checks. The demonstration emphasized the ease of use of an industry-standard CLI, built-in validation, and the ability to compare configurations to identify any drift. With the NVIDIA Spectrum-X platform, the presentation highlighted agentless orchestration, the use of NVIDIA AIR for simulating deployments, and configuration comparison.
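The drift-comparison idea above can be sketched in a few lines. The nested-dict schema and key names below are hypothetical stand-ins for parsed YAML, for illustration only, and are not Aviz's actual intent format or tooling:

```python
# Minimal sketch of configuration-drift detection between declared
# intent and running state. The schema below is a hypothetical
# stand-in for parsed YAML, not Aviz's actual format.

def flatten(d, prefix=""):
    """Flatten nested dicts into dotted-key/value pairs for comparison."""
    items = {}
    for k, v in d.items():
        key = f"{prefix}.{k}" if prefix else k
        if isinstance(v, dict):
            items.update(flatten(v, key))
        else:
            items[key] = v
    return items

def config_drift(intended, running):
    """Return the keys whose values differ between intent and running state."""
    a, b = flatten(intended), flatten(running)
    return {
        key: {"intended": a.get(key), "running": b.get(key)}
        for key in sorted(set(a) | set(b))
        if a.get(key) != b.get(key)
    }

# Hypothetical example: the MTU on one fabric port has drifted.
intended = {"leaf1": {"Ethernet0": {"mtu": 9216, "speed": "400G"}}}
running = {"leaf1": {"Ethernet0": {"mtu": 1500, "speed": "400G"}}}
print(config_drift(intended, running))
# → {'leaf1.Ethernet0.mtu': {'intended': 9216, 'running': 1500}}
```

In practice the intended state would be parsed from the YAML intent and the running state pulled from the devices; the comparison step is the same.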

Finally, the presentation detailed Aviz’s monitoring and anomaly detection features. The tool provides comprehensive monitoring with a bottom-up approach for networks, servers, and GPUs. The demo showed how to view various telemetry data, including traffic, queue drops, and GPU health metrics. The presentation also covered Aviz’s built-in anomaly detection system, which allows users to create custom rules and receive notifications through tools like Slack and Zendesk. The system includes curated rules, role-based access control, and configuration comparison capabilities to streamline operations and reduce potential errors.


Design, deploy, and monitor networks for AI with Aviz

Event: AI Infrastructure Field Day 2

Appearance: Aviz Networks Presents at AI Infrastructure Field Day 2

Company: Aviz Networks

Video Links:

Personnel: Thomas Scheibe

Thomas Scheibe, Chief Product Officer, presents Aviz Networks’ solutions for designing, deploying, and monitoring networks for AI workloads. The company focuses on addressing the specialized networking needs of AI, including multiple networks, differentiated Quality of Service (QoS), and the integration of compute into the end-to-end network topology. Aviz aims to provide automation and orchestration for faster deployment, service activation, and infrastructure expansion. Its product, ONES, supports the SONiC and Cumulus network operating systems, streamlining network management through design, modeling, deployment, and monitoring capabilities.

The Aviz presentation highlighted the evolution of networking in AI, emphasizing the shift from a single data center network to multiple networks, particularly the separation between front-end (user access) and back-end (GPU communication) networks. Aviz recognizes the importance of lossless behavior, different methods to address AI application requirements, and the integration of network settings on both the switches and the network interface cards (NICs). The company partners with hardware providers and uses reference architectures like NVIDIA Spectrum-X to automate network configuration. This allows enterprises to define networks and configure network separation.

Aviz offers comprehensive support for SONiC deployments in enterprise data centers and at the edge. They are automating deployment workflows for the NVIDIA Spectrum-X reference architecture, with the ability to configure multi-tenancy and extend the fabric. Aviz simplifies network management for AI, allowing users to deploy and manage their networks quickly and efficiently. They offer a comprehensive suite of solutions to design, deploy, and monitor networks for AI, focusing on automation and orchestration.


Wrapping up and summarizing Nutanix Enterprise AI

Event: AI Infrastructure Field Day 2

Appearance: Nutanix Presents at AI Infrastructure Field Day 2

Company: Nutanix

Video Links:

Personnel: Mike Barmonde

The Nutanix presentation at AI Infrastructure Field Day focused on enterprise AI solutions, emphasizing giving customers a solid technical understanding of Nutanix Enterprise AI (NAI) and its role in addressing key customer challenges. The discussion highlighted the curated model catalog, offering pre-configured and customizable models, and the ability to easily incorporate new cutting-edge models, even within air-gapped environments. This approach provides control over models and data, which is particularly relevant for customers seeking sovereign AI solutions and needing to deploy AI models in their own environments.

Nutanix also emphasized the “deploy once, inference many” model, allowing for the creation of a shared service model where multiple applications can connect to deployed models via endpoints. Furthermore, the session touched upon the simplification of sizing, as NAI streamlines the deployment of models, making the process straightforward. The speaker reiterated the benefits of NAI as an application running on Kubernetes, offering flexibility and portability. The presentation concluded by discussing the future of distributed inference across multiple nodes, acknowledging its importance and status as a planned future development.

A key takeaway from the presentation was the growing demand for sovereign AI, driven by geopolitical factors and specific terms of service that restrict the use of certain models in certain regions. Nutanix recognizes and actively helps its customers address this need by providing the necessary tools and infrastructure to enable control over AI models and data within their own environments. The company’s commitment to adapting and evolving its AI solutions to meet the rapid advancements in the AI landscape was underscored, ensuring that Nutanix remains a relevant player in the enterprise AI space.


AI Inferencing Sizing Considerations on Nutanix Enterprise AI

Event: AI Infrastructure Field Day 2

Appearance: Nutanix Presents at AI Infrastructure Field Day 2

Company: Nutanix

Video Links:

Personnel: Jesse Gonzales

Jesse Gonzales, Staff Solution Architect, offers sizing guidance for AI inferencing based on real-world experience. The presentation focuses on the critical task of appropriately sizing AI infrastructure, particularly for inferencing workloads. Gonzales emphasizes the need to understand model requirements, GPU device types, and the role of inference engines. He walks the audience through considerations like CPU and memory requirements based on the selected inference engine, and how these directly impact the resources needed on Kubernetes worker nodes. The discussion also touches on the importance of accounting for administrative overhead and high availability when deploying LLM endpoints, offering a practical guide to managing resources within a Kubernetes cluster.

The presentation highlights the value of Nutanix Enterprise AI’s pre-validated models, offering recommendations on the specific resources needed to run a model in a production-ready environment. Gonzales discusses the shift in customer focus from proof-of-concept to centralized systems that allow for sharing large models. The discussion also underscores the importance of accounting for factors like planned maintenance and ensuring sufficient capacity for pod migration. Gonzales explains the sizing process, starting with model selection, GPU device identification, and determining GPU count, followed by calculating CPU and memory needs.
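As a rough illustration of the sizing process described above, a back-of-envelope GPU memory estimate can anchor the GPU-count step. The formula and its 1.2x overhead factor (covering KV cache, activations, and runtime) are assumptions of mine, not Nutanix's sizing methodology:

```python
# Back-of-envelope GPU memory estimate for LLM inferencing, of the kind
# that feeds a sizing exercise. The 1.2x overhead factor is an
# illustrative assumption, not Nutanix's sizing methodology.
import math

def estimate_gpu_mem_gb(params_b, bytes_per_param=2, overhead=1.2):
    """Weights-only memory (billions of params x bytes per param),
    scaled by an overhead factor for KV cache and runtime."""
    return params_b * bytes_per_param * overhead

def gpus_needed(params_b, gpu_mem_gb=80, bytes_per_param=2, overhead=1.2):
    """Minimum GPU count for a model on GPUs of the given memory class."""
    need = estimate_gpu_mem_gb(params_b, bytes_per_param, overhead)
    return math.ceil(need / gpu_mem_gb)

# A 70B-parameter model at FP16: ~70 * 2 * 1.2 = 168 GB of GPU memory,
# so at least three GPUs of the 80 GB class.
print(gpus_needed(70))  # → 3
```

CPU and memory for the inference engine, administrative overhead, and headroom for pod migration during maintenance would then be layered on top of this GPU figure.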

Throughout the presentation, Gonzales addresses critical aspects like FinOps and cost management, highlighting the forthcoming integration of metrics for request counts, latency, and eventually, token-based consumption. He also addresses questions about the deployment and licensing options for Nutanix Enterprise AI (NAI), offering different scenarios for on-premises, bare metal, and cloud deployments, depending on the customer’s existing infrastructure. Nutanix’s approach revolves around flexibility, supporting various choices in infrastructure, virtualization, and Kubernetes distributions. The presentation demonstrates how the company streamlines AI deployment and management, making it easier for customers to navigate the complexities of AI infrastructure and scale as needed.


Nutanix Enterprise AI Demonstration

Event: AI Infrastructure Field Day 2

Appearance: Nutanix Presents at AI Infrastructure Field Day 2

Company: Nutanix

Video Links:

Personnel: Laura Jordana

As presented by Laura Jordana, Nutanix Enterprise AI (NAI) is designed to simplify the process of deploying and managing AI models for IT administrators and developers. The presentation begins by demonstrating the NAI interface, a Kubernetes application deployable on various platforms. The primary use case highlighted is enabling IT admins to provide developers with easy access to LLMs by connecting to external model repositories and creating secure endpoints. This allows developers to build and deploy AI workflows while keeping data within the organization’s control.

The demo showcases the dashboard, which offers insights into active endpoints, request metrics, and infrastructure health. This view is crucial for IT admins to monitor model usage and its impact on resources. The process involves importing models from various hubs like Hugging Face and creating endpoints that serve as the connection to the inference engine. The presenter emphasizes the simplicity of this process, with much of the configuration pre-filled to ease the admin workload. They also highlight the platform’s OpenAI compatibility, allowing integration with existing tools.
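Because the endpoints are OpenAI-compatible, existing tooling can talk to them with little more than a base-URL change. A minimal sketch follows; the endpoint URL, model name, and API key are placeholders of mine, and only the request shape follows the OpenAI chat API:

```python
# Minimal sketch of calling an OpenAI-compatible endpoint like the ones
# NAI exposes. The endpoint URL, model name, and API key below are
# placeholders; only the request shape follows the OpenAI chat API.
import json
import urllib.request

def build_chat_request(base_url, api_key, model, prompt):
    """Build an OpenAI-style chat-completion request for an endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "https://nai.example.internal",  # placeholder endpoint URL
    "API_KEY", "llama-3-8b-instruct", "Summarize our Q3 report."
)
print(req.full_url)  # → https://nai.example.internal/v1/chat/completions
# Sending it would then be: urllib.request.urlopen(req)
```

Any client library that accepts a custom base URL, such as the official OpenAI SDKs, would work the same way against such an endpoint.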

While it focuses on inferencing rather than model training, the platform provides a secure and streamlined way to deploy and manage models within the organization’s infrastructure. The key takeaway from the presentation is the simplification of AI model deployment, focusing on Day 2 operations and ease of use. The platform leverages Kubernetes’ ability to run on Nutanix, EKS, and other cloud instances. It also provides API access and monitoring capabilities for IT admins, and easy access to LLMs for AI developers.


Let’s take a look at Nutanix Enterprise AI

Event: AI Infrastructure Field Day 2

Appearance: Nutanix Presents at AI Infrastructure Field Day 2

Company: Nutanix

Video Links:

Personnel: Ashwini Vasanth

Ashwini Vasanth presented Nutanix Enterprise AI, which simplifies the complexities of adopting and deploying GenAI models and addresses common customer challenges. The product, launched in November 2024, focuses on providing a curated and validated approach to model selection, deployment, and security. The presentation highlighted the “cold start” problem, acknowledging the overwhelming number of available models and the need for a user-friendly starting point for IT or AI admins.

Nutanix Enterprise AI offers a curated list of validated models through partnerships with Hugging Face and NVIDIA to address these challenges, providing a “small, medium, and large” selection. This approach aims to simplify model selection and ensure reliable operation. Additionally, the platform handles GPU selection, inference engine choices, and security complexities, incorporating dynamic endpoint creation to streamline the deployment process. Key to Nutanix’s offering is the integrated security, where Nutanix security experts perform scans for vulnerabilities, eliminating the need for customers to manage their security efforts.

Beyond the mechanics of model deployment, Vasanth discussed the need for on-premises deployment, choice of environments, and addressing the “shadow IT” problem through centralized resource management and monitoring dashboards. The presentation underscored Nutanix’s strategic move into the AI space, leveraging its existing infrastructure expertise, including its Kubernetes platform, storage solutions, and the core principles of simplifying infrastructure. The company’s approach has evolved from a solutions-based offering to a full-fledged product based on the need for a pre-integrated AI platform.


Company Overview and AI Challenges we address with Nutanix

Event: AI Infrastructure Field Day 2

Appearance: Nutanix Presents at AI Infrastructure Field Day 2

Company: Nutanix

Video Links:

Personnel: Mike Barmonde

GenAI’s rapid advancement and impact present a significant challenge for enterprises seeking to leverage its potential. Nutanix helps businesses transition from GenAI possibilities to production with its Nutanix Enterprise AI (NAI) solution, a full-stack AI infrastructure designed specifically for IT needs. The NAI offering provides a standardized inferencing solution centered around a model repository, allowing for the creation of secure endpoints with APIs for GenAI applications, spanning from edge to public clouds.

Mike Barmonde, the Sr. Product Marketing Manager for Nutanix AI products, presented an overview of Nutanix and its approach to addressing AI challenges. The presentation focused on how Nutanix simplifies AI inferencing for IT, highlighting that many organizations struggle to scale their AI initiatives. Nutanix Enterprise AI provides a four-step process to deploy AI infrastructure, including Kubernetes selection, hardware choice (with options for public cloud or air-gapped environments), LLM deployment from various sources, and the creation of secure endpoints, all managed from a central location.

The presentation emphasized the comprehensive nature of Nutanix’s AI infrastructure approach, extending from LLMs down to the underlying hardware. Nutanix’s goal is to streamline the entire process, enabling seamless Day 2 operations. This allows IT professionals to centralize their AI infrastructure and provide a better experience for their developers and application owners.


Multi-Tenancy & Network Automation for AI Infrastructure Operators Demonstrated with Netris

Event: AI Infrastructure Field Day 2

Appearance: Netris Presents at AI Infrastructure Field Day 2

Company: Netris

Video Links:

Personnel: Alex Saroyan

Netris CEO Alex Saroyan demonstrated the company’s multi-tenancy and network automation solution for AI infrastructure. The presentation began with a live demonstration of the Netris controller, showcasing how it facilitates the setup and management of AI infrastructure networking. Utilizing Terraform modules and a “CloudSim” simulation, Saroyan illustrated the process of initializing the controller, generating network configurations based on user-defined parameters, and creating a digital twin of the network for validation.

The core of the presentation focused on Day 2 operations, specifically the creation and management of tenants and network isolation. Using templates, Saroyan showed how easy it is to establish isolated clusters (VPCs) for different tenants. These templates translate high-level server assignments into low-level switch port configurations, enabling a cloud-native approach to network management. The demo also highlighted the integration of Elastic IPs to expose the internal clusters to the outside world.
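The template idea, expanding a high-level tenant/server assignment into low-level per-port settings, can be sketched as follows. The inventory, field names, and VLAN-based scheme here are hypothetical illustrations, not Netris's actual template format:

```python
# Illustrative sketch of template-driven tenant onboarding: a high-level
# tenant/server assignment is expanded into low-level switch-port
# settings. Inventory and field names are hypothetical, not Netris's.

SERVER_PORTS = {  # assumed inventory: server -> (switch, port)
    "gpu-01": ("leaf1", "swp1"),
    "gpu-02": ("leaf1", "swp2"),
    "gpu-03": ("leaf2", "swp1"),
}

def render_tenant(tenant, vlan, servers):
    """Translate a tenant definition into per-port configurations."""
    return [
        {"switch": sw, "port": port, "vlan": vlan, "tenant": tenant}
        for server in servers
        for sw, port in [SERVER_PORTS[server]]
    ]

# Isolate two GPU servers into one tenant's cluster.
configs = render_tenant("team-a", 101, ["gpu-01", "gpu-03"])
for c in configs:
    print(c)
```

The operator only names servers and a tenant; the per-switch, per-port details fall out of the expansion, which is what makes the workflow feel cloud-native.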

Finally, Saroyan discussed monitoring features, which automate the configuration of monitoring tools and provide network health checks, including link validation. The presentation also touched on InfiniBand networking, demonstrating Netris’s capability to manage InfiniBand fabrics and integrate them with Ethernet networks. The key takeaways were the automation of network tasks, the simplification of complex configurations through templates, and comprehensive monitoring capabilities, all contributing to a more efficient and manageable AI infrastructure environment.


How it works. Multi-Tenancy & Network Automation for AI Infrastructure Operators with Netris

Event: AI Infrastructure Field Day 2

Appearance: Netris Presents at AI Infrastructure Field Day 2

Company: Netris

Video Links:

Personnel: Alex Saroyan

Netris, as presented by CEO Alex Saroyan, offers cloud-provider-grade network automation and multi-tenancy software tailored for AI infrastructure operators. The core of the solution is the Netris Controller, which acts as the centralized source of truth for network engineers. It allows for modeling and simulating network infrastructure using tools like Terraform and CloudSim, while also providing APIs that can integrate into cloud provider platforms, facilitating the creation of VPCs and managing network functions. A key component of the offering is SoftGate, a gateway running on Linux servers that provides functions such as elastic load balancing and NAT, offering a streamlined, integrated alternative to separate, third-party products.

The presentation details Netris’ approach to day-zero and day-one operations, highlighting the use of Terraform for infrastructure-as-code methodologies and how the controller facilitates the deployment and management of switches from various vendors. The system supports granular multi-tenancy through VXLANs and is designed to integrate with shared storage solutions. Netris facilitates access and isolation by allowing both storage and tenants to reach the network, with the intent to integrate directly with storage vendors via their APIs. This setup allows for a cloud-like experience for AI infrastructure operators, streamlining the onboarding of tenants and the allocation of resources.

Netris differentiates itself by being multi-vendor and providing cloud networking constructs not typically found in traditional network automation platforms. The presentation emphasized the efficiency and integration provided by SoftGate, which eliminates the complexity of connecting firewalls and load balancers while supporting InfiniBand through integration with NVIDIA UFM. Alex expressed confidence in Netris’ position, particularly given the growing demand for cloud-provider-like capabilities in the AI infrastructure space.


Introduction to Multi-Tenancy & Network Automation for AI Infrastructure Operators with Netris

Event: AI Infrastructure Field Day 2

Appearance: Netris Presents at AI Infrastructure Field Day 2

Company: Netris

Video Links:

Personnel: Alex Saroyan

Netris helps GPU-based AI infrastructure operators automate their networks, provide multi-tenancy and isolation, and offer essential cloud networking features like VPCs, internet gateways, and load balancers. Netris focuses on network software designed for AI and cloud infrastructure operators because the growing popularity of AI necessitates specialized networking solutions to handle demanding AI workloads. Netris’s technology is particularly well-aligned with NVIDIA’s networking offerings, which are built on the foundation of Mellanox and Cumulus Networks.

The presentation highlights the importance of dynamic multi-tenancy for maximizing the utilization of expensive GPUs. Netris provides “cloud provider grade network automation software” that allows AI infrastructure operators to achieve security levels comparable to physical isolation while maintaining software-driven speed. This solves the problem of manual network configuration, which is time-consuming, error-prone, and doesn’t scale. Furthermore, Netris supports cloud networking functions like Internet gateways, NAT gateways, and load balancers, offering a complete solution that addresses the need for secure and flexible network management in AI environments.

Netris’s solution is built on three key pillars: VPCs for isolation, cloud networking functions for connectivity, and fabric management for network operations. They manage both Ethernet and InfiniBand fabrics, providing operators with a single pane of glass. For InfiniBand fabrics, Netris integrates with NVIDIA’s UFM controllers. On the Ethernet side, Netris acts as the fabric manager for several vendors, including NVIDIA, Dell, and Arista, automating the management of network switches and streamlining operations. The goal is to offer a comprehensive, integrated network automation platform tailored for the demands of AI infrastructure.


The Pascari SSD portfolio by Phison

Event: AI Infrastructure Field Day 2

Appearance: Phison Technology Presents at AI Infrastructure Field Day 2

Company: Phison

Video Links:

Personnel: Chris Ramseyer

This Phison presentation at AI Infrastructure Field Day showcased their Pascari SSD portfolio, emphasizing innovation in the enterprise SSD market through performance and reliability. The presentation, led by Chris Ramseyer, focused on Phison’s high-performance Pascari Enterprise X series, designed for data-intensive applications like AI and machine learning. The X series boasts impressive speeds, including up to 15 GB/s sequential read performance and 3.2 million IOPS, positioning it as a leader in the market. The discussion highlighted the shift in data center workloads due to AI, particularly the need for higher queue depths and increased read bandwidth.

The presentation delved into the specifics of Phison’s SSDs, highlighting their various form factors and benefits, such as increased storage density and sustainability through improved power efficiency. The Pascari D-series, in particular, was showcased for its high capacity, with the D205V reaching 122.88 TB, and its range of form factors, including E1.S. Phison also demonstrated their customization capabilities, emphasizing collaborations such as the one with VDURA, which allow them to tailor their products to meet specific customer needs. In addition, Phison offers boot drives and SATA drives for the legacy market.

The presentation concluded with three key takeaways: Phison’s broad portfolio and deep expertise, their status as a trusted innovation partner, and their unmatched flexibility, reliability, and performance across hyperscale, AI, and legacy systems. The presentation underscored Phison’s commitment to innovation, evidenced by their vertically integrated approach with in-house IP, controller knowledge, and testing capabilities. The team also highlighted the value of their products used in space and how their technology “trickles down” for use on Earth. Overall, Phison presented itself as a flexible and capable partner for enterprises seeking high-performance, reliable, and customizable SSD solutions to meet the evolving demands of modern data-intensive workloads.


Innovation in the enterprise SSD market driven by Phison

Event: AI Infrastructure Field Day 2

Appearance: Phison Technology Presents at AI Infrastructure Field Day 2

Company: Phison

Video Links:

Personnel: Michael Wu

Phison, a leading innovator in the enterprise SSD market, is driving the future of data-intensive applications with its Pascari SSDs. Michael Wu, GM & President, presented at AI Infrastructure Field Day, highlighting Phison’s commitment to innovation, focusing on performance and reliability in the enterprise space. Wu shared insights into the company’s journey, from its origins as an engineering-focused company to its current status as a $2 billion enterprise, and its evolution from a behind-the-scenes technology provider to a brand recognized for its cutting-edge SSD solutions.

Phison is uniquely positioned to meet the evolving needs of the enterprise market. Their strategy revolves around a vertically integrated approach, encompassing controller, firmware, and hardware design, allowing them to offer customized solutions through their Imagine Plus design service. This focus on customization, coupled with their early adoption of dual-port technology for enterprise applications and a commitment to providing legacy support, sets them apart. They also emphasize their commitment to innovation through world-first achievements and a unique NAND emulator system.

Looking ahead, Phison is strategically expanding into the enterprise market. They are investing heavily in R&D, with 75% of their workforce dedicated to it. Furthermore, Phison is actively establishing regional presences through joint ventures and partnerships, particularly in India and Malaysia, to provide local customers with tailored solutions. Their approach to the market, supported by their proprietary NAND emulator, allows Phison to be first to market with new technologies. As a result, they anticipate substantial growth in enterprise revenue, fueled by the rising demand for high-density storage solutions and their commitment to being the leader.