JetStream DR Solution Architecture: Achieving both Performance and Cost Savings

Event: Cloud Field Day 17

Appearance: JetStream Software Presents at Cloud Field Day 17

Company: JetStream Software

Video Links:

Personnel: Dennis Bray

JetStream Software Senior Solutions Architect Dennis Bray presents the unique design of the JetStream DR platform, which captures data immediately as it is written to storage and replicates it as objects to a container in any standard object store, including Microsoft Azure Blob Storage. This enables the most cost-efficient near-zero RPO solution for DR in the cloud. Dennis shows that everything needed to fully recover a set of protected virtual machines (VMs) is in the object store. If desired, data can be continuously replicated from the object store to storage in the recovery environment for a near-zero Recovery Time Objective (RTO).

In this discussion, Dennis Bray from JetStream Software provides an overview of their architecture and proceeds to demonstrate its functionality. He begins by describing the components and technologies involved in their system. The core architecture includes a management server appliance that orchestrates and operates the software, which can be accessed through a vSphere client plugin or APIs for automation and integration. They also utilize IO filters to capture storage traffic and DR virtual appliances (DRVAs) to process and replicate data to an object store, such as an Azure storage account. The protected virtual machines, along with their configurations and status information, are stored in the object store, allowing for recovery in the event of a disaster. Dennis explains two recovery options: a standard failover with a longer recovery time objective (RTO) and a near-zero RTO option. The standard failover involves setting up a recovery site, deploying the software, configuring a DRVA, and transferring ownership of the protected domain to the recovery site. Once completed, the failover process can be initiated, enabling the recovery of protected virtual machines. The near-zero RTO option requires a preconfigured and running recovery site, where virtual machines can be quickly recovered using the stored data and configurations from the object store. The discussion also addresses some questions from the audience, clarifying aspects such as the need for a live vCenter environment, the responsibility of customers in preparing the recovery site, and the compatibility of different object storage targets.
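Everything in that standard-failover sequence is driven through the management server's APIs. The sketch below shows how the steps might be scripted; the host name, endpoint paths, and payload fields are assumptions for illustration, not JetStream's documented API.

```python
import requests

# Hypothetical JetStream DR management server endpoint; real paths and field
# names will differ. The calls mirror the sequence described above: attach a
# DRVA at the recovery site, take ownership of the protected domain from the
# object store, then start the failover.
MGMT = "https://jetstream-mgmt.example.com/api"
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credentials


def failover_protected_domain(domain: str, drva: str, container: str) -> None:
    # 1. Point the recovery-site DRVA at the object store container that holds
    #    the replicated data and VM configuration.
    requests.post(f"{MGMT}/drvas/{drva}/attach",
                  json={"container": container}, headers=HEADERS).raise_for_status()

    # 2. Transfer ownership of the protected domain to the recovery site.
    requests.post(f"{MGMT}/protected-domains/{domain}/take-ownership",
                  headers=HEADERS).raise_for_status()

    # 3. Initiate failover; protected VMs are recovered from the object store.
    resp = requests.post(f"{MGMT}/protected-domains/{domain}/failover",
                         headers=HEADERS)
    resp.raise_for_status()
    print("Failover started:", resp.json())


failover_protected_domain("pd-finance", "drva-recovery-01", "dr-container")
```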


Economic and Operational Advantages of Cloud DRaaS with JetStream Software

Event: Cloud Field Day 17

Appearance: JetStream Software Presents at Cloud Field Day 17

Company: JetStream Software

Video Links:

Personnel: Rich Petersen, Serge Shats

Disaster Recovery (DR, or sometimes BC/DR) in the cloud can be more economical than legacy on-premises DR. JetStream Software’s Rich Petersen shows how storing recovery data and metadata in object storage and provisioning cloud-based compute nodes dynamically when needed can reduce the cost of DR infrastructure by 50% to 60%, making cloud a game-changer for DR. At the same time, by using VMware IO Filters (VAIO) to capture data immediately (no snapshots!), JetStream DR delivers a near-zero Recovery Point Objective (RPO). Employing Azure NetApp Files (ANF) to maintain data reduces Recovery Time Objectives (RTOs) to near zero as well.

JetStream Software, led by co-founders Rich Petersen and Serge Shats, discusses the reasons why organizations are increasingly turning to the cloud for their disaster recovery (DR) needs. The cloud offers significant advantages in terms of cost and operational performance for DR strategies. JetStream DR, a cloud-based solution, can protect both on-premises workloads and those already migrated to the cloud. It provides failover capabilities, enables restoration of on-premises environments, and allows for the recovery of earlier points of consistency as required. JetStream’s cloud-native approach includes self-service capabilities, integration with cloud vendors’ marketplaces, and a focus on near-zero Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs). The solution utilizes VMware IO Filters to capture and replicate data in real-time, ensuring minimal data loss. By leveraging economical cloud storage and dynamically provisioning compute nodes, organizations can achieve significant cost savings of around 50% to 60% compared to traditional DR approaches. The ability to maintain high application performance without interrupting operations is another key benefit. Finally, JetStream emphasizes the importance of visibility and control for administrators in a cloud-based DR solution.
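The savings argument rests on a simple structure: object storage is the only component that must run continuously, while recovery compute is provisioned only for tests and actual failovers. The toy model below makes that structure explicit; the unit prices and counts are placeholders for illustration only, and real savings depend on the environment (JetStream cites 50% to 60%).

```python
# Toy comparison of always-on DR infrastructure versus on-demand recovery
# compute plus object storage. All prices and counts are illustrative
# placeholders, not quoted cloud rates.
HOURS_PER_MONTH = 730


def always_on_dr(vms: int, vm_hourly: float, block_gb: int, block_rate: float) -> float:
    # Classic approach: recovery hosts and block storage run 24x7.
    return vms * vm_hourly * HOURS_PER_MONTH + block_gb * block_rate


def on_demand_dr(vms: int, vm_hourly: float, object_gb: int, object_rate: float,
                 active_hours: float) -> float:
    # Cloud DR approach: only object storage is always on; compute is
    # provisioned during DR tests and failovers (active_hours per month).
    return vms * vm_hourly * active_hours + object_gb * object_rate


classic = always_on_dr(vms=20, vm_hourly=0.50, block_gb=10_000, block_rate=0.10)
dynamic = on_demand_dr(vms=20, vm_hourly=0.50, object_gb=10_000,
                       object_rate=0.02, active_hours=24)
print(f"always-on: ${classic:,.0f}/month   on-demand: ${dynamic:,.0f}/month")
```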


Couchbase: The Cloud Database Platform for Modern Applications

Event: Cloud Field Day 17

Appearance: Couchbase Presents at Cloud Field Day 17

Company: Couchbase

Video Links:

Personnel: Jeff Morris

Modern customer experiences need a flexible database platform that can power applications spanning from cloud to edge and everything in between. Couchbase’s mission is to be the premier cloud database platform for organizations building the applications of the future. Join Jeff Morris, VP of Product Marketing, for an introduction to Couchbase and what sets our cloud database platform apart from the rest. He will cover Couchbase’s differentiated architecture, why our DBaaS, Capella, is so unique, primary use cases, and some special announcements on what’s new in the latest Capella release.

During the Couchbase introduction on National Cloud Database Day at Cloud Field Day, Jeff Morris highlights the key aspects of Couchbase’s background and the challenges faced by their customers. He emphasizes the need for delivering personalized and highly available applications with real-time capabilities, which has led to a shift towards cloud deployments. Customers are increasingly concerned about rising cloud costs and the complexity of building and managing complex architectures. Couchbase addresses these issues by offering a high-performance, multi-model NoSQL database with features such as search, eventing, and mobile synchronization. The company’s goal is to help customers build better applications faster, reduce infrastructure complexity, and lower costs, ultimately delivering more features in less time and saving money.
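As a concrete flavor of the multi-model point, the sketch below stores a document by key and then queries the same data with SQL++ using the Couchbase Python SDK (4.x); the Capella endpoint, credentials, and document key are placeholders (travel-sample is Couchbase’s standard sample bucket).

```python
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions, QueryOptions

# Placeholder Capella endpoint and credentials.
cluster = Cluster("couchbases://cb.example.cloud",
                  ClusterOptions(PasswordAuthenticator("app_user", "<password>")))
cluster.wait_until_ready(timedelta(seconds=10))

# Key-value access: upsert a JSON document into a collection.
airlines = cluster.bucket("travel-sample").scope("inventory").collection("airline")
airlines.upsert("airline_demo", {"name": "Example Air", "country": "United States"})

# Query access: the same data is reachable with SQL++ (N1QL).
result = cluster.query(
    "SELECT a.name FROM `travel-sample`.inventory.airline AS a "
    "WHERE a.country = $country LIMIT 5",
    QueryOptions(named_parameters={"country": "United States"}),
)
for row in result.rows():
    print(row)
```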


Introducing Zerto 10 Secure Virtual Manager Appliance

Event: Cloud Field Day 17

Appearance: Zerto Presents at Cloud Field Day 17

Company: Zerto

Video Links:

Personnel: Chris Rogers, Justin Paul

Zerto 10 is the first version of Zerto that is exclusively available via the new Zerto Secure Virtual Appliance, allowing for simpler installs and upgrades. The Zerto Secure Virtual Appliance comes pre-hardened out of the box, so all customers can benefit from increased security without pages of hardening guides to worry about. Learn how to migrate from your legacy Windows ZVM to the new Zerto Virtual Manager Appliance (ZVMA) with the newly released migration tool.

The Zerto secure appliance is a new all-in-one virtual manager appliance that simplifies the deployment, management, and support experience for customers. It has moved away from Windows deployment to Linux, making it easier to troubleshoot and manage. The appliance comes pre-hardened for security, including multi-factor authentication and role-based access control. Zerto has also introduced a seamless migration utility that allows for quick and efficient migration of environments. The appliance is currently delivered as a single virtual machine, but there are plans for future deployments with multiple appliances for redundancy. Zerto aims to provide more frequent updates and move towards a more SaaS-like update process. The architecture has shifted from a monolith to a microservices-based approach, with many components running as web-based services. The appliance communicates with HPE GreenLake and Zerto Analytics containers for data transmission. Keycloak is used for authentication and integration capabilities. The Linux operating system and containers are pre-hardened, although specific details regarding the hardening of the Kubernetes cluster are not mentioned.
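Because the appliance fronts its services with Keycloak, API authentication follows the standard OpenID Connect token flow. A minimal sketch is below; the ZVMA host, realm name, and client credentials are assumptions (and the token path can differ between Keycloak versions), so check the Zerto documentation for the values your appliance actually exposes.

```python
import requests

# Standard Keycloak OpenID Connect token request (client-credentials grant).
# Host, realm, and client details are placeholders.
ZVMA = "https://zvma.example.com"
TOKEN_URL = f"{ZVMA}/auth/realms/zerto/protocol/openid-connect/token"

resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": "api-client",        # placeholder client registered in Keycloak
        "client_secret": "<secret>",
    },
    verify=False,  # lab appliances often present self-signed certificates
)
resp.raise_for_status()
token = resp.json()["access_token"]

# Subsequent ZVMA API calls carry the bearer token.
headers = {"Authorization": f"Bearer {token}"}
print(headers["Authorization"][:24], "...")
```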


Ransomware Resilience with Zerto – Test and Recover

Event: Cloud Field Day 17

Appearance: Zerto Presents at Cloud Field Day 17

Company: Zerto

Video Links:

Personnel: Chris Rogers, Justin Paul

Even the best laid plans can come undone unless frequent and extensive testing can be completed. Utilizing Zerto’s automation and orchestration capabilities, organizations can now test non-disruptively in isolated networks or clean rooms to ensure they are ransomware ready. Once testing has been completed, ready-made compliance reports make passing audits and meeting regulatory requirements easy.

Chris Rogers, senior technology evangelist at Zerto, discusses the importance of ransomware resilience and testing in data recovery. Zerto has been emphasizing the need for simple, non-impactful testing for years, especially in the context of security. Chris highlights the significance of frequent and extensive testing, rather than just checking a single virtual machine or performing a single restore. By using Zerto, customers have significantly reduced their testing time, completing it in less than two hours compared to three and a half days previously. The testing is fully automated, orchestrated, and does not impact production workloads. Zerto customers perform over 18,000 tests per month on average, with an impressive average recovery time objective (RTO) of three minutes and 19 seconds. Chris also mentions the ability to conduct real-time testing and utilize the isolated recovery environment for various purposes, such as patch testing, vulnerability scanning, data analytics, and forensics. While Zerto does not replace antivirus tools or have official partnerships with malware cleanup companies, they provide the infrastructure and availability for recovery, allowing customers to bring their own tooling and layer additional security measures on top. Zerto offers different recovery options for ransomware, including instant file restore, instant VM restore, recovery from multi-VM app infection, recovery from single site infection using the cloud or secondary site, and extended journal copy for multi-site infection recovery. The recently introduced Rapid Air Gap Recovery using the cyber resilience vault provides an additional layer of protection. Chris acknowledges that Zerto’s focus is not on detection but on recovery, and customers still have work to do in removing the malware or encryption. However, the vault allows customers to recover applications into an isolated environment where they can leverage their own tools and scan the recovered VMs for any infections.


Ransomware Resilience with Zerto – Isolate and Lock with the Cyber Resilience Vault

Event: Cloud Field Day 17

Appearance: Zerto Presents at Cloud Field Day 17

Company: Zerto

Video Links:

Personnel: Chris Rogers, Justin Paul

Introducing the new Zerto Cyber Resilience Vault, a complete solution combining the powers of Zerto and the wider HPE family. Organizations can now be confident of ensuring recovery even during the worst attacks. Built upon decentralized management, zero-trust principles, and always-immutable data copies, the Cyber Resilience Vault is the only isolated recovery environment that uses journaling technology as the primary recovery mechanism, rapidly reducing downtime and data loss.

The Zerto Cyber Resilience Vault, also known as Z-Vault, provides organizations with an isolated recovery environment or vault to protect against ransomware attacks. As regulations regarding data protection become stricter, isolated recovery environments are increasingly mandated as the last line of defense and emergency recovery option. Ransomware attacks often target data protection solutions, making it crucial to protect these solutions themselves. The Zerto Cyber Resilience Vault offers a fully isolated and air-gapped environment with immutable data, based on zero-trust principles. It includes components such as HPE ProLiant for compute, HPE Alletra for storage, and HPE Aruba Networking for networking. The vault ensures no network connectivity outside of the replication link between the storage arrays, providing enhanced security. It also supports replication from cloud sources and integration with the HPE Backup and Recovery Service. Z-Vault aims to offer a better, faster, and more cost-effective solution compared to existing cyber vaults on the market, reducing downtime and ransomware impact. By combining the isolated recovery environment and the vault into a single hardware infrastructure, Zerto simplifies the recovery process and ensures data immutability and air-gapped security. The vault helps organizations meet compliance and regulatory requirements while providing enhanced protection against cyber threats.


Enhancing Data Analysis and Anomaly Detection with Zerto’s API and Grafana Integration

Event: Cloud Field Day 17

Appearance: Zerto Presents at Cloud Field Day 17

Company: Zerto

Video Links:

Personnel: Justin Paul

Zerto leverages Grafana to visually represent data extracted through its API, allowing for the analysis of various metrics. The API provides valuable insights into logical blocks, encrypted and unencrypted data, enabling the identification of trends and anomalies. By examining SCSI blocks, Zerto’s algorithms can detect abnormal levels of compression and encryption, alerting users to potential issues like increased encrypted traffic. Notably, Zerto prioritizes real-time analysis over data storage, ensuring efficient processing. The 10.0 API further expands data availability, providing statistics at the volume, VM, and VPG levels. While Zerto currently recognizes all SCSI traffic as encrypted if the volume is encrypted, efforts are being made to differentiate between normal and malicious encryption. Zerto’s dedicated team continuously improves machine learning algorithms, keeping pace with security standards and advancements made by VMware.

Justin Paul discusses the capabilities of Grafana and the data obtained from Zerto’s API. By utilizing the API data, it is possible to rebuild Zerto analytics and visualize it through graphs. The data includes the total number of logical blocks, encrypted data, unencrypted logical blocks, and their combined total. Anomalies in encrypted traffic can be identified, even for applications not intended to be encrypted. However, systems using specific encryption methods like Linux file systems or Windows BitLocker may not show anomalies as they are already encrypted. Zerto’s algorithms analyze the data at the block layer to detect compression or encryption, with plans to refine and improve the algorithms over time. The data is not stored for long, as Zerto aims to retrieve data quickly and not hold onto it due to high data rates. The analyzed stats are sent to ZVM, which triggers alerts and tag checkpoints when sufficient evidence of a security issue is found. Zerto aims to be one layer of security among others and provide real-time alerts without the need for analyzing previous backups. The newer 10.0 API provides additional statistics at the volume, VM, and VPG levels. The discussion also touches on the potential differentiation between normal and malicious encryption and Zerto’s commitment to improving its algorithms and keeping up with security standards.
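To make the dashboard math concrete, the sketch below pulls per-VM block statistics, computes the encrypted share of written blocks, and flags a sudden jump against a recent baseline. The endpoint path, field names, and the fixed threshold are assumptions for illustration; they are not Zerto’s detection algorithm, which is tuned by the team described above.

```python
import statistics

import requests

ZVMA = "https://zvma.example.com"
HEADERS = {"Authorization": "Bearer <token>"}  # token obtained via Keycloak


def encrypted_ratio(sample: dict) -> float:
    # Each sample is assumed to expose encrypted and unencrypted logical block
    # counts, mirroring the series graphed in Grafana.
    total = sample["encryptedLogicalBlocks"] + sample["unencryptedLogicalBlocks"]
    return sample["encryptedLogicalBlocks"] / total if total else 0.0


# Hypothetical statistics endpoint; the 10.0 API exposes similar data at the
# volume, VM, and VPG levels.
samples = requests.get(f"{ZVMA}/v1/statistics/vms/<vm-id>", headers=HEADERS,
                       verify=False).json()

ratios = [encrypted_ratio(s) for s in samples]
baseline = statistics.mean(ratios[:-1])
latest = ratios[-1]

# Flag a sustained jump in encrypted writes; an arbitrary 30-point threshold
# stands in for the tuned algorithm that tags suspect checkpoints in the journal.
if latest > baseline + 0.30:
    print(f"Possible encryption anomaly: {latest:.0%} vs. baseline {baseline:.0%}")
```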


Ransomware Resilience with Zerto – Replicate and Detect

Event: Cloud Field Day 17

Appearance: Zerto Presents at Cloud Field Day 17

Company: Zerto

Video Links:

Personnel: Chris Rogers, Justin Paul

With the new release of Zerto 10 comes added functionality to help you achieve ransomware resilience. Zerto’s real-time encryption detection, built into its continuous data protection, provides the earliest warning sign that ransomware may be impacting your virtual environment. Using the unique Zerto journal, organizations can now be even more confident of rapid RPOs and RTOs, with tagged clean checkpoints for verified recovery.

In this video, Chris Rogers, a senior technology evangelist at Zerto, introduces Zerto 10 and its key enhancements. The tagline for the release is “real-time detection meets real-time protection.” Chris explains that Zerto 10 focuses on four main areas: real-time encryption, ransomware detection, the Zerto cyber resilience vault, and protecting Azure at scale. He highlights the importance of early detection in ransomware attacks and explains how Zerto’s streaming inline detection works, providing real-time alerts and enabling quick recovery. Chris also mentions that Zerto 10 includes additional features like the Zerto secure appliance and emphasizes that the new capabilities are available to existing Zerto 9.7 customers at no additional cost.


Terraform Orchestration Bare Metal Demo via RackN Digital Rebar API

Event: Cloud Field Day 17

Appearance: RackN Presents at Cloud Field Day 17

Company: RackN

Video Links:

Personnel: Rob Hirschfeld, Shane Gibson

Demonstration of declarative APIs in RackN Digital Rebar that make it easy to automate pools of bare metal infrastructure using IaC orchestrators like Terraform to empower developers with available, reliable and fast self-service.

In this RackN demo section, the focus is on showing a Terraform demo for orchestration. The scenario involves an enterprise trying to Terraform bare metal, including setting a target OS, running Ansible, and conducting security audits. The goal is to make infrastructure easily available, reliable, and fast, allowing for the creation and destruction of resources. The architecture relies on a feature called pooling, where machines are checked in and out, and Terraform is used to interact with the platform. The demo showcases how developers can use Terraform to request and use resources, while the infrastructure architect manages and controls the process using the Digital Rebar platform. The integration between Terraform and Digital Rebar simplifies the provisioning and management of resources, allowing for customization and automation of infrastructure.
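Under the hood, the pooling feature amounts to allocating machines from a named pool and releasing them when the Terraform resources are destroyed. The sketch below drives that check-out/check-in cycle against the Digital Rebar REST API directly rather than through the Terraform provider; the endpoint paths, payload shapes, and response handling are assumptions for illustration, so consult the Digital Rebar pooling documentation for the exact API.

```python
import requests

# Placeholder Digital Rebar endpoint and credentials for a demo environment.
DRP = "https://drp.example.com:8092"
AUTH = ("rocketskates", "<password>")


def checkout(pool: str, count: int) -> list[dict]:
    # Request `count` machines from the pool; Digital Rebar moves them from the
    # free list to allocated and can kick off a workflow (target OS, Ansible, audits).
    resp = requests.post(f"{DRP}/api/v3/pools/{pool}/allocateMachines",
                         json={"count": count}, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()  # response shape simplified for this sketch


def checkin(pool: str, machine_ids: list[str]) -> None:
    # Return the machines so the next Terraform apply can claim them again.
    requests.post(f"{DRP}/api/v3/pools/{pool}/releaseMachines",
                  json={"machines": machine_ids}, auth=AUTH,
                  verify=False).raise_for_status()


nodes = checkout("bare-metal-dev", count=2)
print("allocated:", [m.get("Name") for m in nodes])
checkin("bare-metal-dev", [m["Uuid"] for m in nodes])
```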


Generative DevOps Powering the 10x Operator with RackN and Backstage

Event: Cloud Field Day 17

Appearance: RackN Presents at Cloud Field Day 17

Company: RackN

Video Links:

Personnel: Rob Hirschfeld, Shane Gibson

Demos declarative APIs in RackN Digital Rebar that make it easy to automate clusters of bare metal infrastructure using Dev Portals like Backstage.io to empower developers with available, reliable and fast self-service.

In this conversation, Rob and Shane discuss their roles as developers and the integration of Backstage and Digital Rebar. Rob explains that he has set up his own Backstage developer environment and successfully connected it to his Digital Rebar system. However, he encounters an issue where a newly created cluster does not show any machines. Shane suggests looking into the front-end code to ensure that the necessary parameters are passed to the Digital Rebar API. They discuss adding additional information like the context broker and count to make the API work correctly. After making these adjustments, Rob successfully creates clusters and observes the machine resources being generated. They also touch on topics like resizing clusters, job logs for troubleshooting, and cluster cleanup. The conversation ends with a discussion on the on-ramp process for adopting Digital Rebar and the flexibility it offers for multiple teams within an organization. They mention the ability to import existing virtual machines and bare metal servers and the concept of machine objects and runner services in Digital Rebar.


Empowering Platform Engineering with RackN Infrastructure Platform

Event: Cloud Field Day 17

Appearance: RackN Presents at Cloud Field Day 17

Company: RackN

Video Links:

Personnel: Rob Hirschfeld, Shane Gibson

RackN Digital Rebar Infrastructure Platform provides the available, reliable and fast automation needed by operations teams to support enterprise Platform Engineering efforts.

RackN focuses on infrastructure platforms and platform engineering, addressing challenges faced by operations teams through their self-managed software called Digital Rebar. Their goal is to empower companies to independently manage their infrastructure, whether on-premises or in the cloud, by emphasizing infrastructure-as-code pipelines and process automation. They prioritize availability, reliability, and speed as key factors for success in an infrastructure platform. Through customer journey stories, they demonstrate how their solution has reduced equipment onboarding time, improved system availability, and increased operational speed. Automation, integration with various systems, and handling of edge systems are key features. They also address the challenges of managing Dev and Ops tool sprawl and advocate for a consolidated operational experience. RackN’s Digital Rebar is highlighted as a vendor-neutral infrastructure-as-code automation platform that offers customizable workflows and air gap capabilities.


Morpheus Data Workflows: Scaling Automation and IaC via Self-Service

Event: Cloud Field Day 17

Appearance: Morpheus Data Presents at Cloud Field Day 17

Company: Morpheus Data

Video Links:

Personnel: Martez Reed

Going beyond developer self-service and provisioning of application instances, this session will spotlight how Morpheus can be used to help IT simplify operations and scale GitOps style automation initiatives. By decoupling the creation and execution of automation scripts and infrastructure-as-code patterns, organizations can streamline and govern the use of technologies like Terraform, Ansible, Puppet, PowerShell, Python, and more. The session will also spotlight the new dynamic automation target feature which enables use cases like patch management, security remediation, and routine maintenance.

In this session, Martez Reed, Director of Technical Marketing with Morpheus Data, discusses scaling automation in Infrastructure as Code (IaC) through self-service. He talks about the challenges of using YAML and raw automation, such as the lack of user-friendliness and difficulty in understanding available environments. Martez highlights the importance of focusing on the business value rather than getting caught up in the coolest way to implement automation. He explores the concept of click-ops and the use of user interfaces (UIs) and no-code solutions for efficient and quick outcomes. Martez demonstrates how Morpheus provides a platform for creating Terraform Cloud workspaces using UI-based interactions, simplifying the sharing and scaling of automation across organizations. The platform’s role-based access control, workflow automation, and integration capabilities enable users to accomplish their tasks without the complexities of manual scripting or infrastructure management. The discussion also touches on features like dynamic automation targeting and the adoption of immutability in handling system redeployment and patching.
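The dynamic automation target idea can be pictured as a small API call: a saved workflow is executed against whatever set of instances currently matches a label or filter, rather than a hard-coded server list. The sketch below is illustrative only; the endpoint, payload fields, and workflow name are assumptions rather than the documented Morpheus API.

```python
import requests

MORPHEUS = "https://morpheus.example.com"
HEADERS = {"Authorization": "Bearer <api-token>"}  # Morpheus personal access token

# Hypothetical job definition: run a "patch-linux" workflow against whichever
# instances carry the matching labels at execution time (a dynamic target),
# instead of a fixed list of servers.
payload = {
    "job": {
        "name": "monthly-patching",
        "workflow": "patch-linux",
        "targetType": "label",
        "labels": ["env:prod", "os:ubuntu"],
        "schedule": "0 2 1 * *",  # first of the month at 02:00
    }
}

resp = requests.post(f"{MORPHEUS}/api/jobs", json=payload, headers=HEADERS, verify=False)
resp.raise_for_status()
print("Job created:", resp.json())
```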


Morpheus Data Plugins: Extensibility for Hybrid Cloud Platform Operations

Event: Cloud Field Day 17

Appearance: Morpheus Data Presents at Cloud Field Day 17

Company: Morpheus Data

Video Links:

Personnel: Martez Reed

For years, Morpheus has led the hybrid cloud management market when it comes to rapid integration of third-party technologies with dozens of out-of-the-box codeless integrations into clouds, ITSM, IPAM, and backups. This session goes deep on the underlying service delivery framework within Morpheus and showcases how third-party plugins can be developed to expose data for custom reports, pull in third-party observability and AIOps information, extend cloud connectivity, enable custom user dashboards, and more.

In this session, Martez Reed, Director of Technical Marketing at Morpheus Data, introduces the extensibility features of the Morpheus platform for hybrid cloud operations. He explains that Morpheus is available as a Linux package, installable on-premises or in the public cloud, and highlights its highly modular and extensible nature. The platform supports a plugin framework that allows users to extend its capabilities beyond the out-of-the-box features. Reed discusses various areas of extensibility, such as custom reports and UI customization, including the ability to create a custom dashboard. He also mentions the integration of third-party services through plugins, providing examples like DNS, load balancer, and backup integrations. The Morpheus exchange is mentioned as a repository of curated plugins. The session concludes with a discussion on plugin deployment, management, and versioning, as well as the potential for private repositories and security considerations.


Morpheus Foundation: Developer Self-Service and Platform Operations

Event: Cloud Field Day 17

Appearance: Morpheus Data Presents at Cloud Field Day 17

Company: Morpheus Data

Video Links:

Personnel: David Estes

This demo and deep dive will illustrate how enterprises and service providers can enable DevOps speed and agility while also improving control and reducing the cost of delivering IT services. We’ll show how easy it is to integrate technologies like VMware, Nutanix, AWS, Azure, GCP, ServiceNow, Terraform, Ansible, and more while providing a governance framework and policy engine to bring Developer, Security, and Finance Operations teams closer together.

Dave Estes, the co-founder and CTO of Morpheus, gave a presentation on the developer and self-service aspects of Morpheus. The presentation included a demonstration of the platform’s features, starting with the new customizable dashboard and self-service portal. Morpheus is a self-service platform that covers various aspects of infrastructure, such as provisioning, networking, lifecycle management, and costing. The presentation focused on provisioning instances, showcasing the flexibility to create basic virtual machines or more complex clusters. It also highlighted the adaptive configuration options based on different cloud providers and the ability to automate tasks like installing Apache, setting up scaling, backups, replication, and power schedules. The presentation also discussed how Morpheus handles eventual inconsistencies and provides a comprehensive history of actions and diagnostics. Additionally, the platform supports CLI automation, API integration, and can function as a Terraform state management system. The presentation emphasized Morpheus as an internal developer platform that eliminates the need for building such a system from scratch, providing benefits like abstraction, easy transition between infrastructure modules, and compatibility with various environments and architectures.
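The self-service catalog actions shown in the demo are also reachable programmatically. The snippet below sketches an instance-provisioning request of the kind the UI issues; the field values (group, layout, plan, cloud IDs) are placeholders that depend on how a given Morpheus environment is configured, so treat the payload as illustrative rather than a copy-paste example.

```python
import requests

MORPHEUS = "https://morpheus.example.com"
HEADERS = {"Authorization": "Bearer <api-token>"}

# Illustrative provisioning payload; the IDs and codes below are placeholders
# that map to a group, instance type, layout, plan, and cloud configured in a
# particular Morpheus environment.
instance = {
    "zoneId": 3,  # target cloud
    "instance": {
        "name": "web-01",
        "site": {"id": 1},                 # group
        "instanceType": {"code": "apache"},
        "layout": {"id": 101},
        "plan": {"id": 5},
    },
}

resp = requests.post(f"{MORPHEUS}/api/instances", json=instance,
                     headers=HEADERS, verify=False)
resp.raise_for_status()
print("Provisioning started, instance id:", resp.json()["instance"]["id"])
```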


Morpheus Data 101: The Rise of Hybrid Cloud Platform Operations

Event: Cloud Field Day 17

Appearance: Morpheus Data Presents at Cloud Field Day 17

Company: Morpheus Data

Video Links:

Personnel: Brad Parks

For years, Morpheus has led the hybrid cloud management market when it comes to rapid integration of third-party technologies with dozens of out-of-the-box codeless integrations into clouds, ITSM, IPAM, and backups. This session goes deep on the underlying service delivery framework within Morpheus and showcases how third-party plugins can be developed to expose data for custom reports, pull in third-party observability and AIOps information, extend cloud connectivity, enable custom user dashboards, and more.

Brad Parks explains that Morpheus aims to bring teams together and simplify how developers, finance, and operations teams interact with hybrid clouds, containers, and automation tools. He compares the company’s mission to the TV show “Ted Lasso,” highlighting the quote “Taking on a challenge is a lot like riding a horse. If you’re comfortable while you’re doing it, you’re probably doing it wrong.” He emphasizes the importance of automating manual processes and enabling self-service for various roles within an organization. Parks outlines the core use cases Morpheus addresses, including consistent provisioning across different cloud environments, Kubernetes cluster management, scaling automation initiatives, and integration with IT service management and monitoring tools. He also discusses the challenges faced by IT organizations, the security, development, and finance teams, and how Morpheus provides solutions for each. Parks concludes by highlighting the platform’s features, such as integration with existing tools, a fine-grained RBAC model, support for development workflows, cost optimization, and data-driven insights for chargeback and showback. He mentions the platform’s scalability, integration capabilities, and freedom of choice in provisioning across various cloud endpoints.


Roots to Cloud: Cisco Wireless Legacy and Vision

Event: Mobility Field Day 9

Appearance: Cisco Presents at Mobility Field Day 9

Company: Cisco

Video Links:

Personnel: Phal Nanda

The Meraki and Catalyst brands come together to unite Cisco Wireless Solutions built on lessons learned from millions of deployed networks across the globe. The new Unified Cisco Wireless Solutions are transforming the enterprise, supporting the most demanding use cases with ease of deployment, configuration, and management through intuitive centralized platforms that offer on-premises or cloud-managed operation.


Aruba Atmosphere 2023 Roundtable: Discussing Aruba’s Big Announcements from Atmosphere

Event: Networking Field Day Experience at Aruba Atmosphere 2023

Appearance: Aruba Presents at Networking Field Day Experience at Aruba Atmosphere 2023

Company: HPE Aruba Networking

Video Links:

Personnel: Tom Hollingsworth

Join the Networking Field Day Experience delegates as they discuss the announcements from Aruba Atmosphere 2023. Hear how new technologies and acquisitions are building HPE Aruba Networking into a premier company that can address edge connectivity and security needs. Learn about new wireless technologies that are being integrated into the Aruba portfolio and where you might see them deployed in the future.


NetAlly CyberScope Product Walkthrough and Demo

Event: Mobility Field Day 9

Appearance: NetAlly Presents at Mobility Field Day 9

Company: NetAlly

Video Links:

Personnel: James Kahkoska

In Part 2 of NetAlly’s presentation to #MFD9, James Kahkoska (CTO) covers the benefits and techniques to perform an onsite security assessment by demonstrating the use of CyberScope, NetAlly’s new cyber security analyzer, a handheld instrument that provides unprecedented visibility to expose possible site security concerns and interior threats.


Enabling Wi-Fi Site Assessments with NetAlly CyberScope

Event: Mobility Field Day 9

Appearance: NetAlly Presents at Mobility Field Day 9

Company: NetAlly

Video Links:

Personnel: James Kahkoska

In Part 1 of NetAlly’s presentation to #MFD9, James Kahkoska (CTO) – in his classic Field Day storytelling style – talks about the history of innovation, mobility, and the evolving complexity of site networks – and the impact of this complexity on cyber security. He relates how common network analysis capabilities can be utilized for security workflows. James explains why, whether handling numerous remote sites or a single large campus, the first step in Cyber Security is visibility into all the devices that are present, where they are located, how they are connected and what services they are offering or using.


What’s Cooking in WIPS with Arista

Event: Mobility Field Day 9

Appearance: Arista Presents at Mobility Field Day 9

Company: Arista

Video Links:

Personnel: Jatin Parekh, Robert Ferruolo

A viable WIPS offering needs to constantly adapt to mitigate new threats. In this section we will introduce a number of innovations that enable automatic detection and prevention of threats related to newer standards and practices such as WPA3, OWE, 802.11w, and client MAC randomization.
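One of those practices, client MAC randomization, is straightforward to spot in principle because randomized addresses set the locally administered bit in the first octet. The check below illustrates that signal only; it is not Arista’s detection logic, and handling the newer standards (WPA3, OWE, 802.11w) involves far more than an address test.

```python
def is_randomized_mac(mac: str) -> bool:
    """Return True if the address is locally administered (and unicast),
    which is how randomized client MACs present themselves."""
    first_octet = int(mac.split(":")[0], 16)
    locally_administered = bool(first_octet & 0x02)
    unicast = not (first_octet & 0x01)
    return locally_administered and unicast


# Randomized addresses from modern iOS/Android/Windows clients set the 0x02 bit:
print(is_randomized_mac("da:a1:19:23:45:67"))  # True  -- likely randomized
print(is_randomized_mac("3c:22:fb:aa:bb:cc"))  # False -- burned-in OUI
```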