Introduction to VMware Private AI

Event: AI Field Day 4

Appearance: VMware by Broadcom Presents at AI Field Day 4

Company: VMware by Broadcom

Video Links:

Personnel: Chris Wolf

VMware Private AI brings compute capacity and AI models to where enterprise data is created, processed, and consumed, whether that is in a public cloud, enterprise data center, or at the edge. VMware Private AI consists of both product offerings (VMware Private AI Foundation with NVIDIA) and a VMware Private AI Reference Architecture for Open Source to help customers achieve their desired AI outcomes by supporting best-in-class open source software (OSS) technologies today and in the future. VMware’s interconnected and open ecosystem supports flexibility and choice in customers’ AI strategies.

Chris Wolf, the Global Head of AI and Advanced Services at VMware by Broadcom, discusses VMware’s Private AI initiative, which was announced in August 2023. The goal of Private AI is to democratize generative AI and ignite business innovation across all enterprises while addressing privacy and control concerns. VMware focuses on providing AI infrastructure, optimizations, security, data privacy, and data serving, leaving higher-level AI services to AI independent software vendors (ISVs). This non-competitive approach makes it easier for VMware to partner with ISVs because, unlike the public clouds, VMware does not compete with them by offering top-level AI services.

Wolf shares an example of VMware’s internal code-generation use case, in which a solution based on an open-source model and applied to the ESXi kernel achieved a 92% acceptance rate among software engineers. He discusses the importance of governance and compliance, particularly for AI-generated code, and mentions VMware’s AI council and governance practices.

He highlights use cases such as call center resolution and advanced information retrieval across various industries. VMware’s solution emphasizes flexibility, choice of hardware and software, simplified deployment, and risk mitigation. Wolf also notes VMware’s capability to stand up an AI cluster with preloaded models in about three seconds, which he says is not possible in public clouds or on bare metal.

The discussion covers the advantages of VMware Private AI in managing multiple AI projects within large enterprises, including efficient resource utilization and integration with existing operational tools, leading to lower total cost of ownership.

Wolf touches on the trend of AI adoption at the edge, the importance of security features within VMware’s stack, and the curated ecosystem of partners that VMware is building. He points out that VMware’s Private AI solution can leverage existing IT investments by bringing AI models to where the data already resides, such as on VMware Cloud Foundation (VCF).

Finally, Wolf previews upcoming Tech Field Day sessions that go into detail about VMware’s collaborations with NVIDIA, Intel, and IBM, showcasing solutions like VMware Private AI Foundation with NVIDIA and on-premises deployment of IBM’s watsonx SaaS service. He encourages attendees to participate in these sessions to learn more about VMware’s AI offerings.


VMware by Broadcom Private AI Primer – An Emerging Category

Event: AI Field Day 4

Appearance: VMware by Broadcom Presents at AI Field Day 4

Company: VMware by Broadcom

Video Links:

Personnel: Chris Wolf

Private AI is an architectural approach that aims to balance the business gains from AI with the practical privacy and compliance needs of the organization. What matters most is that privacy and control requirements are satisfied, regardless of where AI models and data are deployed. This session walks through the core tenets of Private AI and the common use cases it addresses.

Chris Wolf, Global Head of AI and Advanced Services at VMware by Broadcom, discusses the evolution of application innovation, highlighting the shift from PC applications to business productivity tools, web applications, and mobile apps, and now the rise of AI applications. He emphasizes that AI is not new, with its use in specialized models for fraud detection being a longstanding practice. Chris notes that financial services firms with existing AI expertise have quickly adapted to generative AI with large language models, and he cites a range of industry use cases, such as VMware’s use of SaaS-based AI services for marketing content creation.

He mentions McKinsey’s projection that generative AI could add around $4.4 trillion in annual economic value, indicating a significant opportunity for industry transformation. Chris discusses the early adoption of AI in various regions, particularly in Japan, where the government invests in AI to compensate for a shrinking population and maintain global competitiveness.

The conversation shifts to privacy concerns in AI, with Chris explaining the concept of Private AI, which is about maintaining business gains from AI while ensuring privacy and compliance needs. He discusses the importance of data sovereignty, control, and not wanting to inadvertently benefit competitors with shared AI services. Chris also highlights the need for access control to prevent unauthorized access to sensitive information through AI models.

He then outlines the importance of choice, cost, performance, and compliance in the AI ecosystem, asserting that organizations should not be locked into a single vertical AI stack. Chris also describes the potential for fine-tuning language models with domain-specific data and the use of technologies like retrieval-augmented generation (RAG) to simplify AI use cases.
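
Wolf’s RAG point is easy to make concrete. Below is a minimal, self-contained Python sketch of the pattern, with TF-IDF similarity standing in for a production embedding model and vector database, and the model call left as a stub; it illustrates the technique, not any VMware product.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# TF-IDF stands in for an embedding model + vector database;
# the final LLM call is a stub.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "vMotion live-migrates a running VM between ESXi hosts.",
    "VMware Cloud Foundation bundles vSphere, vSAN, and NSX.",
    "DRS balances VM workloads across a vSphere cluster.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    vec = TfidfVectorizer().fit(documents + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(documents))[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(query: str) -> str:
    """Ground the model by prepending retrieved context to the question."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# In a Private AI deployment, this prompt would go to a self-hosted model.
print(build_prompt("What does vMotion do?"))
```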

Finally, Chris emphasizes the need for adaptability in AI solutions and mentions VMware’s focus on adding value to the ecosystem through partnerships. He briefly touches on technical implementation, including leveraging virtualization support for GPU resources and partnering with companies like IBM (watsonx) for model serving and management. He concludes by providing resources for further information on VMware’s AI initiatives.


Dell Technologies APEX Cloud Platform Cluster Expansion

Event: Cloud Field Day 19

Appearance: Dell Technologies Presents at Cloud Field Day 19

Company: Dell Technologies

Video Links:

Personnel: Michael Wells

Michael Wells, a Tech Marketing Engineer at Dell Technologies, presents a demonstration on scalability and cluster expansion using the APEX Cloud Platform, specifically focusing on adding worker nodes to an OpenShift cluster. The process involves searching for new nodes, running compatibility checks to ensure they match the existing cluster, and then configuring settings such as the node name, IP address, TPM passphrase, location information, NIC settings, and network settings. The system pre-populates certain values like VLAN IDs based on the existing setup and then validates the configuration before adding the node to the cluster.

He highlights how the APEX Cloud Platform integrates infrastructure management directly into the cloud OS experience, offering a unique solution for different cloud operating models. He also discusses the advantages of installing Red Hat OpenShift on bare metal, which includes better performance due to the absence of a hypervisor, reduced licensing requirements, and a smaller attack surface. Additionally, he explains the benefits of lifecycle management of both OpenShift and hardware together, simplifying the deployment process and providing developers with more direct access to hardware resources.

Wells also touches on the topic of OpenShift virtualization, explaining that running virtual machines inside of OpenShift as pods allows for pod-to-pod networking and avoids the need for routing traffic through an ingress controller. This setup can be more efficient for workloads that need to communicate with other OpenShift services.


Dell Technologies APEX Cloud Platform Lifecycle Management

Event: Cloud Field Day 19

Appearance: Dell Technologies Presents at Cloud Field Day 19

Company: Dell Technologies

Video Links:

Personnel: Michael Wells

Michael Wells, a Tech Marketing Engineer for the APEX Cloud Platform at Dell Technologies, demonstrates the lifecycle management process for updating Red Hat OpenShift and Azure Stack HCI clusters on the platform. The process involves:

  1. Configuring support portal access with a username and password to check for online updates from the Dell support site.
  2. Using a local update process when no online updates are available by uploading and decompressing an update bundle.
  3. Running pre-checks to ensure the cluster is healthy and in a suitable state for updating.
  4. Reviewing the update details, including versions of software to be updated.
  5. Executing the update, which includes hardware (BIOS, firmware, drivers), OpenShift software, core OS, CSI, and APEX Cloud Platform Foundation software, all in a single workflow to optimize efficiency and minimize reboots.
  6. Applying updates to Azure Stack HCI clusters in a similar fashion, including compliance checks and cluster health pre-checks.
  7. Temporarily disabling lockdown mode on servers during the update process and re-enabling it afterward.
  8. Performing a rolling update across nodes, with each node being updated one at a time in a non-disruptive manner.

The update process is designed to be efficient, reducing downtime by controlling the sequence of updates and using parallel staging where possible. The system provides detailed progress information and time estimates throughout the process.
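
The pattern Wells describes, cluster-wide pre-checks followed by a one-node-at-a-time rolling update, is worth sketching. The Python below is a conceptual illustration only; the node list and the check and update functions are hypothetical stubs, not APEX Cloud Platform APIs.

```python
# Conceptual sketch of a rolling, non-disruptive cluster update:
# validate cluster health first, then update nodes one at a time.
# All names here are hypothetical stubs, not Dell APIs.
NODES = ["node-1", "node-2", "node-3", "node-4"]

def cluster_healthy() -> bool:
    return True  # stand-in for the platform's pre-check suite

def update_node(node: str) -> None:
    # Stand-in for: disable lockdown, stage BIOS/firmware/OS/CSI
    # updates together to minimize reboots, reboot, re-enable lockdown.
    print(f"updating {node} in a single combined workflow")

def rolling_update() -> None:
    if not cluster_healthy():
        raise RuntimeError("pre-checks failed; aborting update")
    for node in NODES:             # one node at a time keeps workloads running
        update_node(node)
        if not cluster_healthy():  # re-verify before touching the next node
            raise RuntimeError(f"cluster unhealthy after {node}; halting")

rolling_update()
```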


Dell Technologies APEX Cloud Platform Management Experience

Event: Cloud Field Day 19

Appearance: Dell Technologies Presents at Cloud Field Day 19

Company: Dell Technologies

Video Links:

Personnel: Michael Wells

In this presentation, Michael Wells, Tech Marketing Engineer at Dell Technologies, discusses the management experience of the APEX Cloud Platform. He highlights the platform’s ability to provide a consistent hybrid management experience across different environments without requiring users to leave their usual management interfaces.

Wells demonstrates the integration of Dell APEX Cloud Platform within the OpenShift web console, showing how users can view node information, cluster status, CPU and memory usage, and manage hardware components directly from the console. He mentions that the platform is set to support hosted control planes (formerly HyperShift) and discusses the ability to expand or remove worker nodes within the cluster.

He also covers the platform’s update mechanism, security features (including certificate management), and support capabilities, such as dial-home alerts and integration with Cloud IQ for hardware-related issues. Additionally, Wells touches on how hardware alerts are integrated into OpenShift alerting, allowing users to leverage existing monitoring and notification setups.

Wells then shifts to discussing the Azure side of things, showing similar capabilities within the Windows Admin Center for Azure Stack HCI, including physical views of nodes, detailed component information, and compliance checks.

Finally, he emphasizes the consistency of the Dell APEX Cloud Platform across different cloud operating systems and how it integrates infrastructure management with cluster management tools used by administrators. He notes the upcoming VMware integration and the ability to lock infrastructure settings for security.


Dell Technologies APEX Cloud Platform Hardware Configurations

Event: Cloud Field Day 19

Appearance: Dell Technologies Presents at Cloud Field Day 19

Company: Dell Technologies

Video Links:

Personnel: Michael Wells

Michael Wells, Tech Marketing Engineer for Dell Technologies, discusses the hardware configurations for the APEX Cloud Platform.

  • The APEX Cloud Platform uses specialized configurations of PowerEdge servers called MC nodes, specifically the MC660 (1U 10 drive) and MC760 (2U 24 drive).
  • The nodes support 4th Gen Intel Xeon Scalable processors with 2 to 4 terabytes of memory per node, currently limited by supply chain availability rather than technical constraints.
  • There are options for NVMe and SSD storage configurations, as well as NVIDIA GPU support, with the 1U supporting single-width cards and the 2U supporting both single-width and double-width cards.
  • Michael mentions a white paper released in November of the previous year about implementing OpenShift AI and a generative AI solution on the APEX Cloud Platform, using Llama 2 and retrieval-augmented generation (RAG) to build a chatbot trained on Dell’s technical documentation.

Michael explains that the MC nodes have a subset of components that are continuously validated to ensure support and control over the configurations. This approach excludes the possibility of using existing servers customers may already have, as the solution requires common building blocks for simplicity and manageability.

There’s also a mention of the possibility of connecting to PowerFlex storage, which supports various operating systems and allows for the connection of bare metal, hypervisors, and other systems. This could be a way for customers to use existing hardware and gradually transition to the APEX Cloud Platform.


Dell Technologies APEX Cloud Platform Cluster Deployment

Event: Cloud Field Day 19

Appearance: Dell Technologies Presents at Cloud Field Day 19

Company: Dell Technologies

Video Links:

Personnel: Michael Wells

Michael Wells, a Tech Marketing Engineer at Dell Technologies, discusses the APEX Cloud Platform and its deployment process for Microsoft Azure and Red Hat OpenShift. He explains that the deployment for both platforms involves a similar set of steps, such as node discovery, configuration settings, and network information. APEX Cloud Platform for Azure is built on Microsoft’s Azure Stack HCI OS as part of the new Premier partner tier, which allows for deeper integration and collaboration with Microsoft.

The deployment results in a fully configured cluster, an OpenShift cluster on one side and an Azure Stack HCI cluster on the other. The OpenShift cluster includes Red Hat Core OS, Kubernetes, and Dell SDS storage, while the Azure Stack HCI cluster uses Storage Spaces Direct, Hyper-V, and the Microsoft SDN stack. Both deployments include the APEX Cloud Platform Foundation software, which integrates with the cloud OS management experience.

Michael also discusses licensing, including the advanced cluster management and security entitlements that come with the OpenShift Platform Plus subscription, and the unique capabilities of the APEX Cloud Platform Foundation software. He emphasizes that the APEX Cloud Platform family is designed to offer the same types of results and efficiencies across different cloud OSes.

Lastly, Michael hints at upcoming features, such as the addition of Dell SDS support for the APEX Cloud Platform for Microsoft Azure, which will allow for greater scalability and storage independence.


Dell Technologies APEX Block Storage for Public Cloud Multi-AZ Storage Resilience

Event: Cloud Field Day 19

Appearance: Dell Technologies Presents at Cloud Field Day 19

Company: Dell Technologies

Video Links:

Personnel: Kiruthika Gopal

Kiruthika Gopal, a Product Manager at Dell Technologies, discusses the deployment and resilience of APEX Block Storage, particularly in a scenario where one of the three availability zones (AZs) is taken offline. The demonstration uses SQL Server 2022 to illustrate the process, but the principles apply to any application.

The APEX Block Storage cluster is set up with six storage instances, two in each AZ. Kiruthika emphasizes the importance of verifying the system’s health before simulating an outage by shutting down all instances in one AZ. Using PowerFlex Manager, they browse block volumes and check the health statistics and metadata manager to ensure everything is connected and functioning correctly.

The simulation involves manually stopping two instances in one AZ through the AWS portal and observing the impact on the cluster. Despite a temporary dip in transactions per minute (TPM), the cluster remains online, and the application continues to function. The cluster demonstrates self-healing capabilities, as the rebuild process completes in under 30 seconds, restoring the cluster to a healthy state with four nodes.

Next, Kiruthika restarts the two stopped instances to observe how the cluster rebalances the workload. With all six instances running again, the cluster quickly returns to normal performance after a brief dip during the rebuild. This test confirms the resilience and self-healing nature of the APEX Block Storage cluster in the event of an AZ outage.
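
The same outage simulation can be scripted rather than clicked through the AWS console. A minimal boto3 sketch, assuming you already know the IDs of the two storage instances in the target AZ (the region and instance IDs below are placeholders):

```python
# Simulate an AZ outage by stopping the two APEX Block Storage
# instances in that zone, then restore them. IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
az_instances = ["i-0123456789abcdef0", "i-0fedcba9876543210"]

ec2.stop_instances(InstanceIds=az_instances)
ec2.get_waiter("instance_stopped").wait(InstanceIds=az_instances)
print("AZ down: watch the rebuild and the temporary TPM dip")

ec2.start_instances(InstanceIds=az_instances)
ec2.get_waiter("instance_running").wait(InstanceIds=az_instances)
print("AZ restored: cluster rebalances across all six nodes")
```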


Dell Technologies APEX Navigator for Multicloud Storage APEX Block Storage for AWS Deployment

Event: Cloud Field Day 19

Appearance: Dell Technologies Presents at Cloud Field Day 19

Company: Dell Technologies

Video Links:

Personnel: Chad Gray

In this presentation, Chad Gray from Dell Technologies demonstrates how to deploy APEX Block Storage for AWS using APEX Navigator. He walks through the process in four steps:

  1. Select Product, Cloud, and Region: Chad selects APEX Block Storage for AWS with Navigator, version 4.5.1, and chooses a region available in the US.
  2. Connect Cloud Account: He selects a previously set up cloud account that was added to APEX Navigator.
  3. Deployment Configuration: Here, Chad provides a deployment name, selects a performance tier (balanced or performance optimized), sets the minimum usable capacity and IOPS, chooses the availability level (single AZ or multi AZ), and decides whether to deploy to an existing VPC or create a new one. He opts to create a new VPC and inputs IP ranges. He also names a key pair for SSH access into the storage instances, which will be stored in AWS Secrets Manager.
  4. Review Configuration and Deployment: Chad mentions that there’s a free 90-day evaluation license, and he reviews the AWS resources that will be deployed, noting that they will incur costs.

The deployment can be monitored through APEX Navigator, and the process takes around two hours to complete. The demonstration shows how APEX Navigator simplifies the setup of APEX Block Storage in AWS by automating the deployment, which would be more complex if done manually.


Dell Technologies APEX Navigator for Multicloud Storage AWS Account Connection Demo

Event: Cloud Field Day 19

Appearance: Dell Technologies Presents at Cloud Field Day 19

Company: Dell Technologies

Video Links:

Personnel: Chad Gray

Chad Gray of Dell Technologies presented APEX Navigator, a product designed to simplify the management of multicloud storage, particularly block storage for AWS. He discussed the product’s five focus areas: security, deployment, management, monitoring, and mobility. Gray emphasized the importance of secure access to customer AWS accounts and explained how APEX Navigator uses AWS roles and policies to access these accounts without the need for exchanging access keys.

During the presentation, Gray demonstrated how to connect an AWS account to APEX Navigator using a custom trust policy and permission policy generated by the platform. He also discussed federated login capabilities with identity providers such as Active Directory, allowing for single sign-on across Dell’s APEX and Cloud IQ services.
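
APEX Navigator generates the exact policy documents for you, but the underlying AWS mechanism is an ordinary cross-account IAM role. A hedged boto3 sketch of that mechanism follows; the trusted account ID, external ID, and role name are illustrative placeholders, not Dell’s actual values.

```python
# Cross-account access via an IAM role and a custom trust policy:
# the general mechanism that avoids exchanging long-lived access keys.
# Account ID, external ID, and role name are placeholders.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # The service's AWS account that is allowed to assume this role:
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
        # External ID guards against the confused-deputy problem:
        "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
    }],
}

iam.create_role(
    RoleName="ApexNavigatorAccess",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
# A separate permission policy (also generated by the platform)
# would then be attached to scope what the role can do.
```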

Gray mentioned that all the steps he demonstrated in the UI can be automated through APIs, and that Dell recently released a Terraform provider for APEX. He highlighted the availability of infrastructure as code examples for teams using tools like Terraform.

Lastly, Gray showed how to audit access and account management activities within APEX Navigator and within the AWS account using CloudTrail. He pointed out features like tagging sessions with job names and IDs, and passing the source identity of the user for better traceability of actions taken within the customer’s AWS environment.
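
Those CloudTrail audit trails are queryable with standard tooling as well. A small sketch that lists recent AssumeRole events, which is where session tags and the passed source identity surface:

```python
# Pull recent AssumeRole events from CloudTrail to audit which
# sessions acted in the account; the region is a placeholder.
import boto3

ct = boto3.client("cloudtrail", region_name="us-east-1")
resp = ct.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName",
                       "AttributeValue": "AssumeRole"}],
    MaxResults=10,
)
for event in resp["Events"]:
    print(event["EventTime"], event.get("Username", "-"), event["EventName"])
```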


Dell Technologies APEX Block Storage For Public Cloud Deep Dive

Event: Cloud Field Day 19

Appearance: Dell Technologies Presents at Cloud Field Day 19

Company: Dell Technologies

Video Links:

Personnel: Kiruthika Gopal

Kiruthika Gopal, a product manager, presents the key features and differentiators of APEX Block Storage:

  • Extreme performance, with millions of IOPS for a single volume.
  • Flexibility and scalability, with the ability to scale up to 512 storage nodes and to scale compute and storage independently.
  • Multi-AZ durability, with the option to deploy storage clusters across one or several availability zones without requiring additional capacity.
  • Hybrid cloud mobility, allowing seamless data movement between on-premises and cloud across regions.

APEX Block Storage offers up to one petabyte per volume, multi-AZ deployment without extra capacity penalties, thin provisioning, snapshots and clones without additional fees, and asynchronous replication with a feature called snap mobility.

The product also provides multi-AZ durability by spreading storage nodes across availability zones, offering protection against entire rack failures. This feature has been tested in on-prem environments for over a decade and is now extended to the public cloud.

Questions from the audience cover topics such as the upgrade process for new instance types, the potential for split-brain scenarios in multi-AZ deployments, the testing methodology used to generate performance numbers, and the orchestration of testing and recovery processes.

Kiruthika also discusses cost savings through thin provisioning and snapshot savings, and the scalability of APEX Block Storage, with deployments ranging from 10 terabytes to multiple petabytes. The pricing model is subscription-based, factoring in capacity and the number of storage nodes.

Finally, Kiruthika touches on the ease of deployment using Dell APEX Navigator, which can set up the necessary AWS infrastructure and deploy the software in four simple steps based on inputted IOPS and capacity requirements.


Dell Technologies APEX Block Storage For Public Cloud Overview

Event: Cloud Field Day 19

Appearance: Dell Technologies Presents at Cloud Field Day 19

Company: Dell Technologies

Video Links:

Personnel: Kiruthika Gopal

Kiruthika Gopal, a product manager at Dell Technologies, discusses the features and benefits of Dell APEX Block Storage, which is part of Dell’s universal storage layer initiative aimed at bringing Dell storage to the public cloud. APEX Block Storage is based on PowerFlex IP, a software-defined storage solution that has been around for over a decade and is known for its performance, scalability, and flexibility.

APEX Block Storage is available on both AWS and Azure through their respective marketplaces or directly from Dell. It aims to lower the total cost of ownership (TCO) for customers and allows for the deployment of high-performance, mission-critical applications in the public cloud. It also supports seamless data mobility across different environments and offers a unique Multi-AZ Durability feature to increase resiliency.

Kiruthika highlights the product’s ability to deliver 100x better performance compared to existing public cloud storage, based on internal testing. She explains that APEX Block Storage can scale up to hundreds of nodes, achieving millions of IOPS. The product is designed not to compete with public cloud providers but to enhance the customer experience by addressing needs not currently met in the public cloud.

The discussion also covers the technical aspects of deploying APEX Block Storage, including the use of EC2 instances on AWS and virtual machines on Azure, as well as the integration with Dell Data Domain Virtual Edition for data backup. Additionally, Kiruthika addresses questions about the product’s performance, cost-effectiveness, software client requirements, and the six-nines availability claim.


Policy Assistant and Experience Insights with Cisco Secure Access

Event: Tech Field Day Extra at Cisco Live EMEA 2024

Appearance: Cisco Security Presents at Tech Field Day Extra at Cisco Live EMEA

Company: Cisco

Video Links:

Personnel: Fay Lee, Justin Murphy

This presentation by Fay Lee and Justin Murphy focuses on Cisco Secure Access and the integration of generative AI (Gen AI) and ThousandEyes technology to enhance security and user experience.

Justin Murphy introduces Cisco Secure Access, which is a cloud-provided security solution offering secure access to apps and the internet. It includes features such as proxy capabilities, data loss prevention (DLP), malware inspection, firewall, and intrusion prevention systems (IPS). He focuses on the data loss prevention aspect and the use of AI applications like ChatGPT by employees, emphasizing the need to monitor their use and prevent data leaks. Cisco has added an AI assistant to simplify policy creation, which can automate repeatable processes and reduce deployment time by about 70%. The assistant can create rules for different users and groups, and monitor and control access to AI applications, preventing the upload of sensitive data. Justin demonstrates how the AI assistant works through a live demo and a video, showing how it can block an employee named Jeff from uploading code to ChatGPT.

Fay Lee then takes over to discuss Experience Insights, a new feature in Cisco Secure Access powered by ThousandEyes technology. It aims to reduce the mean-time-to-response for user experience issues and is integrated into the Secure Access dashboard. Experience Insights allows administrators to monitor endpoint performance, network performance, and the performance of the connection from Cisco’s cloud infrastructure to the applications users are accessing. Fay provides a live demo of the Experience Insights feature, showing a map of connected users, details of their connection, and the performance of commonly used SaaS applications. She explains how administrators can drill down into specific user data to troubleshoot issues and how integration with ThousandEyes provides additional insights.

The presentation ends with a Q&A session where the audience asks about the capabilities and integration of the AI assistant and Experience Insights, the limitations of the ThousandEyes integration, and the possibility of integration with other Cisco products like Meraki.


Cisco Event-Driven Automation with Shangxin Du

Event: Tech Field Day Extra at Cisco Live EMEA 2024

Appearance: Cisco Cloud Networking Presents at Tech Field Day Extra at Cisco Live EMEA

Company: Cisco

Video Links:

Personnel: Shangxin Du

Shangxin Du, a technical marketing engineer from Cisco’s data center switching team, discusses Event-Driven Automation (EDA) in network operations. EDA is a method that automates network configuration changes in response to specific events, aiming to streamline repetitive tasks and mitigate risks during network incidents.

Initially, Shangxin outlines how customers currently manage network configuration, using tools like Ansible, Terraform, Python, or SSH to automate tasks individually or through controllers like Cisco’s ACI for more centralized management. He also touches on the concept of Infrastructure as Code (IaC) and CI/CD pipelines for more integrated change management.

Next, he discusses network observability, emphasizing the importance of monitoring the network for operational data, which is vital for understanding the network’s real-time status. He explains how Cisco’s NX-OS supports streaming telemetry, and how ACI uses a centralized controller (APIC) to manage configurations and operational data.
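
As a concrete illustration of pulling operational state from a switch programmatically, here is a hedged sketch using the open-source pygnmi client. The address, credentials, and YANG path are placeholders, and production telemetry would use a streaming subscription rather than this one-shot get.

```python
# One-shot gNMI read of interface counters from a switch.
# Target, credentials, and path are placeholders; streaming
# telemetry would use a subscribe call instead of get().
from pygnmi.client import gNMIclient

with gNMIclient(target=("192.0.2.1", 57400),
                username="admin", password="admin",
                insecure=True) as gc:
    counters = gc.get(path=["/interfaces/interface[name=eth1]/state/counters"])
    print(counters)
```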

Shangxin then introduces the concept of Event-Driven Automation, which combines configuration automation with monitoring to automatically respond to network events. This can help in automating low-risk repetitive tasks, remediating incidents, and enriching support tickets with relevant data for quicker resolution.

He provides a demonstration of EDA using Ansible Rulebooks, which define event sources, rules, and actions based on network events. The demo includes two use cases, with a conceptual sketch after the list:

  1. Auto-segmentation in ACI, where endpoints are automatically moved to the correct Endpoint Group (EPG) based on MAC address mapping.
  2. Auto-remediation in NX-OS, where a leaf switch is removed from the forwarding path if multiple uplinks go down, to prevent it from affecting network traffic.
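
In production these rules live in Ansible Rulebook YAML and run under ansible-rulebook; the Python below is only a conceptual sketch of the first use case’s logic (event source, rule condition, action), with the MAC-to-EPG table and the APIC call stubbed out.

```python
# Conceptual event-driven automation loop: an event arrives from a
# source, a rule condition matches it, and an action fires.
# The MAC->EPG mapping and the APIC call are illustrative stubs.
MAC_TO_EPG = {"00:50:56:aa:bb:cc": "EPG-Web",
              "00:50:56:dd:ee:ff": "EPG-DB"}

def move_endpoint_to_epg(mac: str, epg: str) -> None:
    print(f"APIC call (stub): move {mac} into {epg}")

def handle_event(event: dict) -> None:
    # Rule: a newly learned endpoint in the staging EPG is re-homed
    # according to the mapping table.
    if event["type"] == "endpoint_learned" and event["epg"] == "EPG-Staging":
        target = MAC_TO_EPG.get(event["mac"])
        if target:
            move_endpoint_to_epg(event["mac"], target)

# Example event, as it might arrive from telemetry or a webhook source:
handle_event({"type": "endpoint_learned", "epg": "EPG-Staging",
              "mac": "00:50:56:aa:bb:cc"})
```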

Shangxin concludes that EDA offers limitless possibilities, allowing any source of events to trigger any automation response, depending on the rules defined. He also answers a question about the possibility of implementing a low-code solution for EDA in the Nexus world, similar to what’s available in other Cisco solutions like DNA Center. He suggests that while it’s a good idea, the current approach is to use existing tools and infrastructure for automation due to the diversity of customer preferences and practices.


Cisco Secure Interconnection of Heterogeneous Fabrics (ACI and VXLAN EVPN)

Event: Tech Field Day Extra at Cisco Live EMEA 2024

Appearance: Cisco Cloud Networking Presents at Tech Field Day Extra at Cisco Live EMEA

Company: Cisco

Video Links:

Personnel: Lukas Krattiger, Max Ardica

In this presentation, Lukas Krattiger and Max Ardica from Cisco’s Data Center Business Unit discuss new functionalities for Cisco Data Center networking. They focus on the secure interconnection of heterogeneous fabrics, specifically integrating ACI (Application Centric Infrastructure) and standard VXLAN EVPN (Ethernet VPN) fabrics.

Max introduces the concept of the ACI Border Gateway, which is a device that allows for controlled connectivity between different leaf-spine topologies, enabling the extension of layer 2 and layer 3 connectivity in a controlled manner. The ACI Border Gateway operates in a standard VXLAN EVPN fashion to interconnect with VXLAN EVPN border gateways of other fabrics. This allows for the expansion of a network using either ACI or VXLAN EVPN fabrics within the same multi-fabric domain.

They also introduce the VXLAN Group Policy Option (GPO), which provides secure group segmentation within a VXLAN EVPN fabric, similar to the concept of SGT (Security Group Tag) discussed in a previous session. GPO enables microsegmentation and service chaining, allowing administrators to direct traffic through firewalls or other network services as part of a security policy.

Lukas and Max emphasize the importance of using a control plane to exchange group information, allowing for optimal traffic flow by applying security policies at the ingress leaf. This approach is more efficient as it avoids sending unnecessary traffic across the network only to be dropped at the destination.

The discussion also touches on the need for policy authoring and enforcement, which will be facilitated by software tools like Nexus Dashboard or Ansible playbooks, allowing for consistent policy application across ACI and VXLAN EVPN fabrics.

Throughout the conversation, they address scalability, resource management, and the benefits of using border gateways to abstract network complexity and control inter-fabric connectivity. They also mention the possibility of synchronizing policy across different network domains and the potential integration with third-party security management tools.


NIS2 Compliance with Cisco Industrial Security

Event: Tech Field Day Extra at Cisco Live EMEA 2024

Appearance: Cisco Cloud Networking Presents at Tech Field Day Extra at Cisco Live EMEA

Company: Cisco

Video Links:

Personnel: Andrew McPhee

Andrew McPhee, a solution manager for industrial security at Cisco, discusses how Cisco Cyber Vision and Cisco Secure Equipment Access can assist with NIS2 compliance. NIS2 is a European standard that mandates cybersecurity measures for critical industries. Andrew explains the importance of NIS2 as a forcing factor for industries to implement security measures, which apply to a wide range of industrial verticals.

He highlights the need to understand the risk profile of devices on a network, manage supply chain security, handle vulnerabilities, and implement access control policies, including multi-factor authentication. Andrew emphasizes the role of Cisco Cyber Vision for deep packet inspection and asset visibility in operational technology (OT) environments, which helps assess vulnerabilities and risks. He also discusses Cisco Secure Equipment Access for remote access, moving towards a Zero Trust Network Access (ZTNA) model.

Andrew demonstrates Cisco’s IoT Operations Dashboard, which facilitates secure remote access to network devices and systems. He explains how the dashboard can be used for both clientless and client-based access, with features like session recording and scheduled access for vendors. The demonstration includes an overview of Duo, Cisco’s multi-factor authentication platform, and how it integrates with Secure Equipment Access for identity verification and policy enforcement.

Next, Andrew presents Cisco Cyber Vision, which provides a risk analysis of OT networks through passive monitoring and deep packet inspection. Cyber Vision can detect changes in the network, create baselines, and generate security reports. It can also integrate with Cisco’s Identity Services Engine (ISE) to implement segmentation based on the zones and conduits model from the IEC 62443 standard. He explains how Cyber Vision can share information with ISE to assign devices to security groups and enforce policies.

Throughout the discussion, Andrew addresses questions from the audience regarding the capabilities, integrations, and potential applications of the technologies presented. He clarifies how Cisco’s solutions can be adapted to various network architectures and the benefits of implementing security group tags for macro and micro-segmentation in industrial networks.


IP Fabric and NetBox Cloud – Better Together Demo

Event: Tech Field Day Extra at Cisco Live EMEA 2024

Appearance: IP Fabric and NetBox Labs Present at Tech Field Day Extra at Cisco Live EMEA

Company: IP Fabric, NetBox Labs

Video Links:

Personnel: Alex Gittings

Alex Gittings, a solution architect at IP Fabric, presents a demonstration of a plugin that integrates IP Fabric with NetBox, a source of truth database for network automation. The plugin allows for the automatic synchronization of observed network state data into NetBox, which can be used to maintain an up-to-date source of truth for network automation purposes. This functionality is available for both the open-source and cloud-based versions of NetBox.

During the demo, Alex shows how a network discovered by IP Fabric can be imported into NetBox, including devices, interfaces, VLANs, VRFs, prefixes, and IP addresses. He explains that IP Fabric supports both cloud and on-premises versions of NetBox and demonstrates how to create an ingestion process to synchronize data from IP Fabric into NetBox. The plugin translates data from IP Fabric’s model to NetBox’s model using transform maps.
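
The plugin performs this mapping internally, but the shape of the synchronization is easy to sketch with the two products’ open-source Python clients. A hedged sketch, assuming the ipfabric and pynetbox libraries; the URLs, tokens, and NetBox object IDs are placeholders:

```python
# Hand-rolled version of what the plugin's transform maps automate:
# read observed devices from IP Fabric, upsert them into NetBox.
# URLs, tokens, and device-type/role/site IDs are placeholders.
from ipfabric import IPFClient
import pynetbox

ipf = IPFClient("https://ipfabric.example.com", auth="IPF_API_TOKEN")
nb = pynetbox.api("https://netbox.example.com", token="NETBOX_API_TOKEN")

for dev in ipf.inventory.devices.all():    # observed network state
    name = dev["hostname"]
    if nb.dcim.devices.get(name=name):     # already documented
        continue
    nb.dcim.devices.create(
        name=name,
        device_type=1,  # placeholder IDs; the real plugin resolves these
        role=1,         # from IP Fabric's vendor/model/site fields
        site=1,         # via its transform maps
    )
    print(f"created {name} in NetBox")
```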

Alex also addresses questions regarding the plugin’s capabilities and limitations, such as its focus on the underlay network rather than the overlay, its ability to support various technologies, and how it can be used for compliance and change tracking. He explains that while IP Fabric captures snapshots of network state periodically, it does not support real-time monitoring, which means out-of-band changes may not be immediately reflected.

The demo concludes with a discussion on the potential for integration with other tools like Terraform and the challenges of maintaining a single source of truth for network state. Alex emphasizes the importance of aligning tooling with processes to ensure that the network source of truth remains accurate and effective for automation purposes.


Network Assurance in the Automation Ecosystem with IP Fabric

Event: Tech Field Day Extra at Cisco Live EMEA 2024

Appearance: IP Fabric and NetBox Labs Present at Tech Field Day Extra at Cisco Live EMEA

Company: IP Fabric, NetBox Labs

Video Links:

Personnel: Daren Fulwell

Daren Fulwell, the Product Evangelist for IP Fabric, presented on how IP Fabric’s Automated Network Assurance Platform can transform network management. The platform is designed to proactively manage and measure networks, replacing manual documentation with interactive tools and providing an API for network intelligence. It works in conjunction with NetBox Cloud to enrich the automation ecosystem.

Fulwell discussed the challenges network operators face, such as dealing with complex, multi-vendor environments and the need for up-to-date documentation. He emphasized that traditional tools like SNMP monitoring and manual documentation are insufficient for modern network demands.

IP Fabric’s platform addresses these issues by collecting data on inventory, configuration, and state to provide a comprehensive understanding of the network. It then creates visual topologies and simulates traffic flows to understand network behavior. The platform uses snapshots to track changes over time and can flag issues for remediation based on predefined intent checks.

Fulwell also highlighted the importance of integrating IP Fabric’s API with other systems like monitoring platforms, ticketing systems, CMDBs, chatbots, and network automation tools to ensure up-to-date information and to validate changes in the network.

The presentation included a Q&A session where Fulwell answered questions about integrating network components, defining good and bad configurations, and potential impact analysis for network changes. He concluded by mentioning that while a complete digital twin of the network is difficult to achieve, IP Fabric provides the necessary oversight and intelligence to manage complex networks effectively.


NetBox Cloud as Part of a Modern Network Automation Architecture with NetBox Labs

Event: Tech Field Day Extra at Cisco Live EMEA 2024

Appearance: IP Fabric and NetBox Labs Present at Tech Field Day Extra at Cisco Live EMEA

Company: IP Fabric, NetBox Labs

Video Links:

Personnel: Rich Bibby

In this presentation, Rich Bibby, a technical advocate with NetBox Labs, introduces NetBox Cloud and discusses its importance in network automation architecture. NetBox Labs, founded in 2023 in New York, is the commercial steward of the open-source project NetBox and has developed NetBox Cloud, an enterprise-grade, software-as-a-service version of NetBox.

Rich explains that a network source of truth is a representation of the intended state of a network, including devices, configurations, connections, and services. This intended state is distinct from the actual operational state, which is reported by monitoring and assurance tools. NetBox serves as a structured and cohesive data model, which is essential for network automation at scale. It eliminates the need for spreadsheets and disparate data sources, providing a single source of truth and accelerating network automation through its REST API and GraphQL interface.
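
Both interfaces are straightforward to exercise directly. A minimal sketch with the requests library; the URL and token are placeholders, and the GraphQL endpoint lives at /graphql/ on the same host:

```python
# Query the NetBox source of truth over REST and GraphQL.
# URL and token are placeholders.
import requests

NETBOX = "https://netbox.example.com"
HEADERS = {"Authorization": "Token NETBOX_API_TOKEN"}

# REST: list devices at a given site.
rest = requests.get(f"{NETBOX}/api/dcim/devices/",
                    params={"site": "dc-1"}, headers=HEADERS)
for device in rest.json()["results"]:
    print(device["name"], device["status"]["value"])

# GraphQL: fetch only the fields you need in one round trip.
query = "{ device_list { name site { name } } }"
gql = requests.post(f"{NETBOX}/graphql/",
                    json={"query": query}, headers=HEADERS)
print(gql.json()["data"]["device_list"])
```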

NetBox Cloud offers push-button lifecycle operations, automated backups, single sign-on, and simplified plugin management. It is designed to be secure and compliant, and it allows for easy upgrades and integration with other tools. Rich also briefly describes the customer journey from documenting networks to full automation and presents a modern network automation reference architecture with NetBox Cloud at its center. This architecture includes operations teams, automation tools, and observability tools that together maintain the feedback loop between intended and actual network states.

Throughout the presentation, Rich also demonstrates the NetBox UI, showing how users can view site details, rack elevations, device configurations, and connections. He clarifies that while NetBox does not actively poll devices for their state, it can integrate with plugins and tools like IP Fabric to reconcile intended and actual states. NetBox Cloud does not require direct connectivity to customer networks, as it primarily interacts with other management tools.

Rich concludes by addressing audience questions about compliance, data validation, integration with existing tools, and the process for updating the intended state in NetBox.


Cisco Routed Optical Networking Automation

Event: Tech Field Day Extra at Cisco Live EMEA 2024

Appearance: Cisco Service Provider Presents at Tech Field Day Extra at Cisco Live EMEA

Company: Cisco

Video Links:

Personnel: Jochen van Guyse, Pedro Do Vale Brites

Jochen van Guyse and Pedro Do Vale Brites from the automation team present a demo on routed optical networking (RON) and its automation. They explain the benefits of RON, which include simplifying network architecture by eliminating the need for separate transponders and reducing power consumption, space, and overall operational costs. They discuss the need for automation in RON due to the challenges it poses, such as the management of wavelengths across different teams (optical and IP networking teams).

The hierarchical controller architecture, which facilitates RON automation, is introduced. This architecture includes the top-level hierarchical controller, domain controllers for IP and optical networks, and the potential for integration across multiple vendors.

Pedro then demonstrates the system, highlighting the ease of setting up end-to-end IP links, including the provisioning within optical networks, and how to troubleshoot faults. He shows how the system can help operations teams by providing a single pane of glass view, correlating IP links with their optical paths, and simplifying fault management. The demo also covers inventory management, proactive maintenance planning through failure impact analysis, and historical data analysis for identifying trends in hardware failures.

The system is designed to fix inventory issues by relying on discovered data rather than user input, thereby serving as a single source of truth. The goal is to simplify operations for first-level network operations center (NOC) personnel by providing easy correlation of faults across layers and offering predictive maintenance insights. Some functionalities, like cross-launching into different domain controllers, are still in progress, but the main features demonstrated are generally available (GA) and currently shipping.