Cloud-Delivered Cisco Catalyst SD-WAN

Event: Tech Field Day Extra at Cisco Live US 2023

Appearance: Cisco Enterprise Networking Presents at Tech Field Day Extra at Cisco Live US 2023

Company: Cisco

Video Links:

Personnel: Anupam Mishra

In this session, you’ll learn how Cloud-delivered Cisco Catalyst SD-WAN offers automated solution delivery, with Cisco overseeing lifecycle management of the SD-WAN fabric, and how we provide rich, actionable, end-to-end insights while lowering TCO.

Anupam Mishra, product management leader for Cisco SD-WAN, introduced a new deployment model called Cloud-delivered Cisco Catalyst SD-WAN. The objective of this model is to streamline the operational aspects of SD-WAN for customers by having Cisco assume much of the responsibility for managing the fabric. The Cloud-delivered option includes integrated analytics, automated software upgrades, certificate installation, and a bundled SKU for convenient ordering. While the Cloud-delivered model is not currently supported in GovCloud, Cisco plans to add support in later phases. Initially targeting enterprise customers, Cisco will expand its focus to include the MSP and federal markets. Ordering through AWS is simplified and can be done directly, while partner involvement is required when ordering through Cisco Commerce Workspace. Cloud-delivered Catalyst SD-WAN simplifies onboarding and deployment by capturing user intent during the ordering process, enabling single sign-on, and bundling the required controllers and licenses. In its initial phase, the Cloud-delivered model supports up to 1,000 devices and aims to give customers a straightforward operational model that lets them concentrate on their network and business outcomes.


Cisco Artificial Intelligence and Machine Learning Data Center Networking Blueprint

Event: Tech Field Day Extra at Cisco Live US 2023

Appearance: Cisco Enterprise Networking Presents at Tech Field Day Extra at Cisco Live US 2023

Company: Cisco

Video Links:

Personnel: Nemanja Kamenica

Artificial Intelligence and Machine Learning are part of many industries and day-to-day life, and their use will only expand in the future. This session shows how Ethernet networks using RoCEv2 transport benefit AI/ML clusters. You’ll also see a demonstration of the congestion management capabilities of Nexus switches that improve AI workload transport.

Nemanja Kamenica, a technical marketing engineer at Cisco, presented an AI/ML data center networking blueprint. The presentation highlighted the diverse applications of AI in sectors such as medical research, financial services, public transport optimization, manufacturing, and retail recommendations. Kamenica discussed the two types of AI clusters: distributed training clusters and production inference clusters. He outlined the network requirements for AI training networks, including non-blocking transport, lossless Ethernet, and RoCEv2 with PFC and ECN for congestion management. A demo showed congestion arising when multiple hosts simultaneously sent data to a storage device, overloading a specific port. To address congestion issues, the Nexus Dashboard Fabric Controller allows the configuration of QoS mechanisms such as WRED, ECN, and PFC. Proper management of bursty all-to-all communication within AI clusters is crucial to prevent congestion and mitigate potential financial losses.
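
The demo centers on how WRED and ECN react as queues build. As a rough illustration only (this is not Cisco NX-OS code, and the threshold and probability values are invented), the Python sketch below shows the standard WRED behavior those knobs configure: no action below the minimum queue threshold, a linearly increasing mark-or-drop probability between the thresholds, and marking or dropping everything above the maximum.

```python
# Illustrative sketch only: generic WRED/ECN behavior, not Cisco NX-OS code.
# Threshold and probability values are invented for the example.
import random


def wred_action(avg_queue_depth: float,
                min_threshold: float = 100.0,
                max_threshold: float = 400.0,
                max_mark_probability: float = 0.10,
                ecn_capable: bool = True) -> str:
    """Decide what happens to an arriving packet at a given average queue depth."""
    if avg_queue_depth < min_threshold:
        return "enqueue"                                # no congestion signal
    if avg_queue_depth >= max_threshold:
        return "mark-CE" if ecn_capable else "drop"     # signal every packet
    # Between thresholds: probability ramps linearly up to max_mark_probability.
    p = max_mark_probability * (avg_queue_depth - min_threshold) / (max_threshold - min_threshold)
    if random.random() < p:
        return "mark-CE" if ecn_capable else "drop"
    return "enqueue"


if __name__ == "__main__":
    for depth in (50, 150, 300, 450):
        print(depth, wred_action(depth))
```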


Empowering Enterprise Disaster Recovery with JetStream Software and Microsoft Azure NetApp Files

Event: Cloud Field Day 17

Appearance: JetStream Software Presents at Cloud Field Day 17

Company: JetStream Software, NetApp

Video Links:

Personnel: Prashant Desai

Prashant Desai, product manager at NetApp for Azure NetApp Files, discusses the partnership between NetApp and JetStream Software, highlighting the value it brings to customers. JetStream’s unique technology, including continuous data protection with near-zero recovery point objectives (RPOs) and recovery time objectives (RTOs), aligns perfectly with the requirements of NetApp’s enterprise customers using Azure NetApp Files (ANF) for business-critical workloads. By integrating JetStream’s solution, customers can leverage ANF for disaster recovery from on-premises environments to Azure VMware Solution (AVS) with stringent RTOs. Additionally, the solution can be used for standard DR scenarios where RTOs are not as critical. ANF’s performance tiers offer flexibility and cost optimization, allowing customers to choose the appropriate tier for continuous rehydration and adjust performance levels as needed during a DR event. This collaboration provides customers with a comprehensive DR solution within Azure, supporting AVS private clouds and minimizing infrastructure costs.


JetStream Enables DR of Virtual Machines to Microsoft Azure

Event: Cloud Field Day 17

Appearance: JetStream Software Presents at Cloud Field Day 17

Company: JetStream Software, Microsoft Azure

Video Links:

Personnel: Justin Jakowski

JetStream DR on AVS allows failover from on-prem VMware clusters to Microsoft’s Azure VMware Solution (AVS), a software-defined data center built on bespoke hardware in Azure data centers. It utilizes VMware’s vSAN for storage, NSX-T for networking, and ESXi on physical servers, with management through vCenter. AVS is primarily used for cloud migration, data center extension, and disaster recovery, and JetStream Software is a critical component of DR to AVS. AVS provides performant all-flash storage with deduplication and compression. Networking is connected to Azure via an internal dedicated ExpressRoute circuit, with Global Reach for on-premises replication. The solution offers high throughput for migration and replication, and can connect to Azure services such as Azure NetApp Files through ExpressRoute gateways.


Future of Databases in the World of Cloud, Edge and AI with Couchbase

Event: Cloud Field Day 17

Appearance: Couchbase Presents at Cloud Field Day 17

Company: Couchbase

Video Links:

Personnel: Ravi Mayuram

Couchbase has reimagined the database with its fast, flexible, and affordable cloud database platform Capella, allowing organizations to quickly and cost-effectively build the applications of the future and deliver premium experiences to their customers. Capella uniquely has built-in application services so developers can easily build always-on, always-reliable apps. With big trends like cloud, AI, edge, and digital transformation all colliding, what does the future of applications look like? And how will the database have to evolve to meet the demands of next-gen apps?

In this conversation, Ravi Mayuram, the CTO of Couchbase, discusses the evolution of Couchbase from a simple key-value cache to a distributed database capable of handling transactions seamlessly. He emphasizes the need to modernize databases and consolidate multiple layers of technology to avoid data sprawl and other related issues. Couchbase’s key capability is its support for flexible JSON data without the need to define a schema upfront. It can function as a cache, key-value store, or a full-fledged SQL database with the ability to perform full-text searches on JSON. The architecture allows for multi-dimensional scaling, analytics, eventing, and distributed replication across geographies, enabling operational, analytical, and translytical use cases. Ravi also mentions the visibility and performance insights available through the administration console and REST APIs. He highlights the platform’s versatility, supporting hybrid cloud deployments and offering the same features and capabilities in both self-hosted and Capella environments. The conversation further touches upon the challenges enterprises face in managing data sources, the importance of data strategy aligning with business goals, and the data mesh concept of organizing data domains. Couchbase and Capella play a crucial role in facilitating data consolidation and serving as systems of truth in the data mesh architecture.


How to Build a Multi-Model Data Platform with Couchbase Capella DBaaS

Event: Cloud Field Day 17

Appearance: Couchbase Presents at Cloud Field Day 17

Company: Couchbase

Video Links:

Personnel: Tom McSpiritt

Capella is a fully managed JSON document and key-value database with SQL access and built-in full-text search, eventing, and analytics. It easily supports a broad range of modern application use cases with multi-model and mobile synchronization capabilities and allows customers to use the programming language of their choice. Capella’s memory-first architecture drives blazingly fast millisecond data responses at scale, resulting in best-in-class price performance of any fully managed document database. In this demo, Tom will show how fast and easy it is to spin up a multi-model data platform with Couchbase’s DBaaS, Capella.

Tom McSpiritt discusses the Couchbase Capella demo. He explains that Couchbase is a multi-model data platform offering different data access patterns through SDKs for building applications. Tom demonstrates various features of Capella, including a sample configuration, document access, index definitions, and full-text search. He also explains that Couchbase allows deployment across different cloud providers and supports multiple regions for flexibility. Tom discusses the control plane and data plane in Couchbase, mentioning that while the control plane is hosted by Couchbase, the data plane can be deployed in the customer’s own VPC with the ability to connect through VPC peering, private links, or public endpoints. He emphasizes that each customer gets their own isolated tenancy for data storage. Tom further explores monitoring tools, SQL queries, indexes, and access settings in Capella. He addresses questions about common connection strings, IP access control, IAM authentication, and automation capabilities. Finally, Tom demonstrates the Couchbase SDKs, showing a Python example of a key-value retrieval.
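
As a minimal sketch of what such a key-value retrieval looks like with the Couchbase Python SDK, the snippet below fetches one JSON document by key. It assumes the bundled travel-sample data set; the connection string and credentials are placeholders, not values from the demo.

```python
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

# Placeholder Capella endpoint and credentials -- substitute your own.
auth = PasswordAuthenticator("app_user", "example-password")
cluster = Cluster("couchbases://cb.example.cloud.couchbase.com", ClusterOptions(auth))
cluster.wait_until_ready(timedelta(seconds=10))

# Key-value access: fetch one JSON document from the travel-sample bucket by key.
collection = cluster.bucket("travel-sample").scope("inventory").collection("airline")
result = collection.get("airline_10")
print(result.content_as[dict])
```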


Couchbase Powers Diverse Industries with Personalized Experiences and Efficient Data Management

Event: Cloud Field Day 17

Appearance: Couchbase Presents at Cloud Field Day 17

Company: Couchbase

Video Links:

Personnel: Jeff Morris

Jeff Morris discusses the wide range of use cases for Couchbase and the benefits it provides. Couchbase is utilized in various industries such as travel, hospitality, telecommunications, retail, financial services, media, and entertainment. It offers features like JSON support, efficient account profile management, and matching dynamic catalogs. The examples mentioned include applications for inventory management, customer loyalty programs, fraud detection, streaming services, customer service improvement, rescue operations, preflight checklists, and more. Couchbase enables personalized experiences, offline-first functionality, and high-speed performance in critical activities.


Couchbase Capella Spring Release: Enhancing Modern Applications with New Features

Event: Cloud Field Day 17

Appearance: Couchbase Presents at Cloud Field Day 17

Company: Couchbase

Video Links:

Personnel: Jeff Morris

Jeff Morris introduces National Cloud Database Day and announces the Couchbase Capella Spring Release and Couchbase Server 7.2, which introduce significant enhancements to support modern applications. The database now includes time series data support in JSON for IoT and financial applications, as well as user-defined functions for transforming and accessing data. Change data capture and cost-based optimization for JSON queries have been added, along with a massively parallel processing analytics service that aggregates data from Couchbase and cloud storage. Additional features include memory-only buckets for caching, integrations with Netlify and Visual Studio Code, dynamic disk expansion, cluster hibernation, and improved support for cloud service providers.


Couchbase in a Cloud NoSQL Database Comparison

Event: Cloud Field Day 17

Appearance: Couchbase Presents at Cloud Field Day 17

Company: Couchbase

Video Links:

Personnel: Jeff Morris

Organizations have a variety of NoSQL databases to choose from – and understanding the differences between them is critical when planning to deploy a NoSQL database in the cloud. Jeff Morris, VP Product Marketing at Couchbase, compares Couchbase’s award-winning Database-as-a-Service (DBaaS) Couchbase Capella against top NoSQL competitors. Find out how Capella stacks up against competing DBaaS document store offerings and see why Capella stands out from the rest for ease of getting started, multicloud deployment, and price/performance – particularly as application needs grow.

Jeff Morris provides an overview of Couchbase, highlighting its key features and advantages over competitors. He emphasizes that customers primarily choose Couchbase for its performance and the flexibility of a multi-model data store. The platform supports mobile application development and helps reduce the cost of cloud operations. Morris discusses Couchbase’s memory-first design, which enables fast data processing, and its ability to scale services independently for optimal performance. He explains how Couchbase achieves active-active clustering and distributed systems through virtual buckets and application awareness. While migrating from a different database to Couchbase may require some work for developers, the portability of query languages makes the transition smoother. Morris compares Couchbase to MongoDB, highlighting Couchbase’s memory-first design, scalability, low latency, and comprehensive capabilities. He emphasizes Couchbase’s SQL++ query language, which offers powerful querying capabilities similar to relational databases. Customer surveys indicate significant cost savings and improved overall total cost of ownership (TCO) when using Couchbase. Additionally, Couchbase’s Capella database as a service further reduces costs and operational complexities. The presentation concludes by providing an overview of Couchbase’s deployment options and its autonomous operator feature, which includes self-healing and auto-scaling capabilities.
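
To give a feel for the relational-style querying Morris describes, here is a hedged sketch of a SQL++ query run through the same Python SDK against the travel-sample data set. The endpoint and credentials are placeholders; the point is that the JOIN and GROUP BY read like ordinary SQL while operating directly on JSON documents.

```python
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

# Placeholder endpoint/credentials; the query targets the travel-sample data set.
cluster = Cluster("couchbases://cb.example.cloud.couchbase.com",
                  ClusterOptions(PasswordAuthenticator("app_user", "example-password")))
cluster.wait_until_ready(timedelta(seconds=10))

# SQL++ joins and aggregates JSON documents with familiar SQL syntax.
query = """
    SELECT a.name, COUNT(1) AS route_count
    FROM `travel-sample`.inventory.route AS r
    JOIN `travel-sample`.inventory.airline AS a ON r.airlineid = META(a).id
    GROUP BY a.name
    ORDER BY route_count DESC
    LIMIT 5
"""
for row in cluster.query(query):
    print(row)
```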


Comprehensive DPaaS for Google Cloud Services with HYCU

Event: Cloud Field Day 17

Appearance: HYCU Presents at Cloud Field Day 17

Company: HYCU

Video Links:

Personnel: Shiva Raja, Shreesha Pai Manoor

As Google Cloud use continues to rise, the need to have all of its IaaS, PaaS, DBaaS, and collaboration services protected and recoverable in a consistent and intuitive manner has never been more critical. Shiva and Shreesha share a compelling demo of HYCU along with the continued innovations that make Google Cloud backup, recovery, and management easy, efficient, and extremely affordable across all of the Google Cloud services, from compute to database to business intelligence to the Marketplace.

Shiva Raja, Technical Solutions Architect, and Shreesha Pai Manoor, Vice President of Customer and Partner Solutions at HYCU, presented HYCU’s solution for Google Cloud, discussing the broad range of services offered by Google Cloud beyond infrastructure as a service. They highlighted the need for data protection and shared customer use cases. They emphasized the automated deployment and discovery processes, as well as the ability to protect and restore data without the need for agents or manual installations. They also addressed security measures such as authentication, multi-tenancy, backup consolidation, and immutability using Google Cloud’s immutable storage. The presenters compared their solution to manual scripting, emphasizing the comprehensive coverage and ease of use provided by their product. They also mentioned the cost-effectiveness and encryption options available to customers.


Addressing Data Protection Challenge for Atlassian Users Moving from Datacenter to Cloud with HYCU

Event: Cloud Field Day 17

Appearance: HYCU Presents at Cloud Field Day 17

Company: HYCU

Video Links:

Personnel: Andy Fernandez, Subbiah Sundaram

With more than 17,000 SaaS apps in use across enterprises in North America today, several of the most widely deployed are from Atlassian. As users of Atlassian products, including Jira and Confluence, look to take advantage of Atlassian Cloud, data needs to be protected and restored by a tightly integrated, enterprise-class solution. Andy shares more in a compelling demo of how HYCU is addressing the Jira backup challenge and makes restoration after a disruption a one-click process.

Andy Fernandez and Subbiah Sundaram discuss the importance of protecting and restoring data within SaaS applications like Jira. They highlighted Jira’s significance for both HYCU and Fortune 500 companies, emphasizing its role in software releases, bug management, customer experience, and revenue generation. HYCU explained that while many organizations assume SaaS applications are fully protected, it is still their responsibility to safeguard data and ensure compliance. They outlined three critical reasons for protecting data in SaaS applications: operational data loss, cyber events and outages, and compliance. The presenters then introduced HYCU’s data protection platform, Protégé, which offers backup and recovery for various SaaS applications, including Jira. They demonstrated the platform’s capabilities, such as granular restoration at the project, attachment, and subtask levels. They also mentioned that HYCU is working on expanding its integrations and aims to have 100 apps in the marketplace by the end of the year.


Modern Data Protection Starts with the Right Architectural Foundation with HYCU

Event: Cloud Field Day 17

Appearance: HYCU Presents at Cloud Field Day 17

Company: HYCU

Video Links:

Personnel: Goran Garevski

To address the challenges raised by HCI and multi-cloud adoption, data protection solutions need to be architected correctly from the ground up. Goran shares how HYCU was developed to solve not only the immediate needs of multi-cloud data protection but also to effectively handle the emergence of as-a-Service and SaaS application use.

Goran Garevski, CTO and co-founder of HYCU, discusses the challenges of data protection in the era of numerous data silos and the company’s approach to simplifying and streamlining the process. The goal is to abstract the problem across different types of data sources, such as classical data centers, file systems, containers, SaaS applications, databases, and service platforms, in order to apply the same policy for efficient data protection. HYCU aims to provide central intelligence for cross-cloud data protection management and standard functionality. They emphasize the importance of abstraction on the policy level to apply it to various sources. The company also focuses on application awareness and offers a marketplace for extending the platform’s functionality. The challenges of discovering and protecting SaaS and DBaaS sources are addressed, and HYCU aims to provide users with information about the protection capabilities of SaaS services. They have developed advanced logic for identification, filtering, and automapping of SaaS instances. The platform visualizes the environment and provides a comprehensive view of data protection and compliance. Additionally, HYCU offers a RESTful API for integration and potential graph-based representation of data.


Modern Data Protection for Modern Applications and the Future of SaaS Backup with HYCU

Event: Cloud Field Day 17

Appearance: HYCU Presents at Cloud Field Day 17

Company: HYCU

Video Links:

Personnel: Simon Taylor

HYCU was founded on the fundamental belief that there is a better way to protect mission-critical data across on-prem, hybrid, and public clouds, as well as across the rising number of SaaS applications in use at companies. Simon shares more on the history of HYCU (Hybrid Cloud Uptime), the company’s roots, and what is driving technology innovation to solve one of the most significant challenges in recent IT history: the proliferation of SaaS application use and the lack of enterprise-class data protection options.

Simon Taylor, CEO and co-founder of HYCU, introduces the company as a modern data protection business named for hybrid cloud uptime, aiming to simplify data management. HYCU is the world’s fastest-growing backup and data protection as a service provider, with over 3,600 customers in 78 countries and $140 million in funding. They have a strong presence in the US government sector, with over 100 agencies as customers. The company boasts a world-class team and board, including industry experts and successful investors. HYCU’s goal is to solve the challenges of the data protection market, including the complexity of multi-cloud environments, the proliferation of data silos, and the lack of protection for SaaS applications. They prioritize customer success, as evidenced by their high Net Promoter Score (NPS) and their commitment to not charging for professional services. HYCU aims to provide comprehensive and unified data protection across on-premises, public cloud, and SaaS platforms. They emphasize the importance of addressing the increasing threat of ransomware attacks and the erosion of trust caused by the lack of data protection. HYCU’s extensible architecture and focus on simplifying the industry position them as a solution to these challenges.


JetStream DR in Action! Live Demonstration of JetStream DR on Microsoft Azure VMware Solution

Event: Cloud Field Day 17

Appearance: JetStream Software Presents at Cloud Field Day 17

Company: JetStream Software

Video Links:

Personnel: Dennis Bray

Get hands-on with JetStream DR on AVS, as JetStream Software Senior Solutions Architect Dennis Bray shows failover of three different VMware clusters from an on-prem Software Defined Data Center (SDDC) in San Jose to a Microsoft AVS data center in Sweden. In just a few minutes, virtual machines (VMs) that have “crashed” in San Jose are up and running in the Microsoft AVS environment. Dennis runs three different failover scenarios to show the range of options for DR using a single DR software platform, including the most cost-effective scenario, in which data is stored exclusively in Microsoft Azure Blob Storage, and the highest-performance scenario, which leverages the performance and scalability of Azure NetApp Files.

In this demo, Dennis Bray of JetStream Software showcases the recovery process using JetStream’s solution. The protected site is located in San Jose, within the Equinix data center, running a small VMware environment. The virtual machines are replicated to an Azure VMware Solution private cloud in the Sweden Central region. The demo shows both the near-zero Recovery Time Objective (RTO) option and the on-demand option. Three domains are set up with different configurations, including continuous rehydration and replication to different storage accounts. During the demo, Dennis imports the third domain and initiates the failover process. The progress of the failover and rehydration is shown, and the virtual machines are restored onto an Azure NetApp Files (ANF) datastore. The demo also includes configuration settings, network mapping, and the use of JetStream’s automation toolkit for managing the recovery process.


JetStream DR Solution Architecture: Achieving both Performance and Cost Savings

Event: Cloud Field Day 17

Appearance: JetStream Software Presents at Cloud Field Day 17

Company: JetStream Software

Video Links:

Personnel: Dennis Bray

JetStream Software Senior Solutions Architect Dennis Bray presents the unique design of the JetStream DR platform, which captures data immediately as it is written to storage and replicates it as objects to a container in any standard object store, including Microsoft Azure Blob Storage. This enables the most cost-efficient near-zero RPO solution for DR in the cloud. Dennis shows that everything needed to fully recover a set of protected virtual machines (VMs) is in the object store. If desired, data can be continuously replicated from the object store to storage in the recovery environment for a near-zero Recovery Time Objective (RTO).

In this discussion, Dennis Bray from JetStream Software provides an overview of their architecture and proceeds to demonstrate its functionality. He begins by describing the components and technologies involved in their system. The core architecture includes a management server appliance that orchestrates and operates the software, which can be accessed through a vSphere client plugin or APIs for automation and integration. They also utilize IO filters to capture storage traffic and DR virtual appliances (DRVAs) for processing and replicating data to an object store, such as an Azure storage account. The protected virtual machines, along with their configurations and status information, are stored in the object store, allowing for recovery in the event of a disaster. Dennis explains two recovery options: a standard failover with a longer recovery time objective (RTO) and a near-zero RTO option. The standard failover involves setting up a recovery site, deploying the software, configuring the DRVA, and transferring ownership of the protected domain to the recovery site. Once completed, the failover process can be initiated, enabling the recovery of protected virtual machines. The near-zero RTO option requires a preconfigured and running recovery site, where virtual machines can be quickly recovered using the stored data and configurations from the object store. The discussion also addresses some questions from the audience, clarifying aspects such as the need for a live vCenter environment, the responsibility of customers in preparing the recovery site, and the compatibility of different object storage targets.
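
To make that data flow easier to follow, here is a highly simplified conceptual sketch in Python of the pattern described above: captured writes are shipped through an appliance to an object store, and a VM is rebuilt from the object store alone. This is not JetStream’s implementation; every name and structure is invented for illustration only.

```python
"""Conceptual sketch of write capture -> DR appliance -> object store -> recovery.
Not JetStream code; all names and structures are invented for illustration."""

import json
from dataclasses import dataclass, field


@dataclass
class ObjectStore:
    """Stand-in for a blob container (e.g. an Azure storage account)."""
    objects: dict = field(default_factory=dict)

    def put(self, key: str, data: bytes) -> None:
        self.objects[key] = data


@dataclass
class DRAppliance:
    """Stand-in for a DRVA: receives captured writes and ships them as objects."""
    store: ObjectStore
    seq: int = 0

    def replicate_write(self, vm: str, offset: int, data: bytes) -> None:
        self.seq += 1
        record = {"vm": vm, "offset": offset, "data": data.hex()}
        self.store.put(f"{vm}/log/{self.seq:012d}", json.dumps(record).encode())

    def replicate_vm_config(self, vm: str, config: dict) -> None:
        self.store.put(f"{vm}/config.json", json.dumps(config).encode())


def recover(store: ObjectStore, vm: str) -> dict:
    """Rebuild a VM's configuration and disk blocks purely from the object store."""
    disk = {}
    for key in sorted(store.objects):
        if key.startswith(f"{vm}/log/"):
            rec = json.loads(store.objects[key])
            disk[rec["offset"]] = bytes.fromhex(rec["data"])
    config = json.loads(store.objects[f"{vm}/config.json"])
    return {"config": config, "disk_blocks": disk}


if __name__ == "__main__":
    store = ObjectStore()
    drva = DRAppliance(store)
    drva.replicate_vm_config("app-vm-01", {"cpus": 4, "memory_gb": 16})
    drva.replicate_write("app-vm-01", 0, b"boot sector")
    drva.replicate_write("app-vm-01", 4096, b"app data")
    print(recover(store, "app-vm-01")["config"])
```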


Economic and Operational Advantages of Cloud DRaaS with JetStream Software

Event: Cloud Field Day 17

Appearance: JetStream Software Presents at Cloud Field Day 17

Company: JetStream Software

Video Links:

Personnel: Rich Petersen, Serge Shats

Disaster Recovery (DR, or sometimes BC/DR) in the cloud can be more economical than legacy on-premises DR. JetStream Software’s Rich Petersen shows how storing recovery data and metadata in object storage and provisioning cloud-based compute nodes dynamically when needed can reduce the cost of DR infrastructure by 50% to 60%, making cloud a game-changer for DR. At the same time, by using VMware IO Filters (VAIO) to capture data immediately (no snapshots!) JetStream DR delivers a near zero Recovery Point Objective (RPO). Employing NetApp ANF to maintain data reduces Recovery Time Objectives (RTOs) to near-zero as well.

JetStream Software, led by co-founders Rich Petersen and Serge Shats, discusses the reasons why organizations are increasingly turning to the cloud for their disaster recovery (DR) needs. The cloud offers significant advantages in terms of cost and operational performance for DR strategies. JetStream DR, a cloud-based solution, can protect both on-premises workloads and those already migrated to the cloud. It provides failover capabilities, enables restoration of on-premises environments, and allows for the recovery of earlier points of consistency as required. JetStream’s cloud-native approach includes self-service capabilities, integration with cloud vendors’ marketplaces, and a focus on near-zero Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs). The solution utilizes VMware IO Filters to capture and replicate data in real-time, ensuring minimal data loss. By leveraging economical cloud storage and dynamically provisioning compute nodes, organizations can achieve significant cost savings of around 50% to 60% compared to traditional DR approaches. The ability to maintain high application performance without interrupting operations is another key benefit. Finally, JetStream emphasizes the importance of visibility and control for administrators in a cloud-based DR solution.
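
As a back-of-the-envelope illustration of where a figure in that 50% to 60% range can come from, the sketch below compares a warm secondary site against a pilot-light cluster plus object storage. Every price, host count, and capacity is invented for the example; none of it is Azure, AVS, or JetStream pricing.

```python
# All numbers below are invented for illustration; they are not real prices.
HOURS = 730  # approximate hours in a month

# Traditional warm DR site: full compute footprint plus primary-tier storage, 24x7.
traditional = 6 * 1.00 * HOURS + 20_000 * 0.10          # 6 hosts/hr + 20 TB block storage

# Cloud DRaaS pattern: recovery data in low-cost object storage, a minimal
# pilot-light cluster, and extra compute provisioned only for tests/failover.
cloud_draas = 3 * 1.00 * HOURS + 20_000 * 0.02 + 3 * 1.00 * 20

savings = 1 - cloud_draas / traditional
print(f"traditional ≈ ${traditional:,.0f}/month")
print(f"cloud DRaaS ≈ ${cloud_draas:,.0f}/month")
print(f"savings ≈ {savings:.0%}")   # lands near 58% with these assumed numbers
```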


Couchbase: The Cloud Database Platform for Modern Applications

Event: Cloud Field Day 17

Appearance: Couchbase Presents at Cloud Field Day 17

Company: Couchbase

Video Links:

Personnel: Jeff Morris

Modern customer experiences need a flexible database platform that can power applications spanning from cloud to edge and everything in between. Couchbase’s mission is to be the premier cloud database platform for organizations building the applications of the future. Join Jeff Morris, VP of Product Marketing, for an introduction to Couchbase and what sets our cloud database platform apart from the rest. He will cover Couchbase’s differentiated architecture, why our DBaaS Capella is so unique, primary use cases, and some special announcements about what’s new in the latest Capella release.

During the Couchbase introduction on National Cloud Database Day at Cloud Field Day, Jeff Morris highlights the key aspects of Couchbase’s background and the challenges faced by their customers. He emphasizes the need for delivering personalized and highly available applications with real-time capabilities, which has led to a shift towards cloud deployments. Customers are increasingly concerned about rising cloud costs and the complexity of building and managing complex architectures. Couchbase addresses these issues by offering a high-performance, multi-model NoSQL database with features such as search, eventing, and mobile synchronization. The company’s goal is to help customers build better applications faster, reduce infrastructure complexity, and lower costs, ultimately delivering more features in less time and saving money.


Introducing Zerto 10 Secure Virtual Manager Appliance

Event: Cloud Field Day 17

Appearance: Zerto Presents at Cloud Field Day 17

Company: Zerto

Video Links:

Personnel: Chris Rogers, Justin Paul

Zerto 10 is the first version of Zerto that is exclusively available via the new Zerto Secure Virtual Appliance, allowing for simpler installs and upgrades. The Zerto Secure Virtual Appliance comes pre-hardened out of the box, so all customers can benefit from increased security without pages of hardening guides to worry about. Learn how to migrate from your legacy Windows ZVM to the new Zerto Virtual Manager Appliance (ZVMA) with the newly released migration tool.

The Zerto secure appliance is a new all-in-one virtual manager appliance that simplifies the deployment, management, and support experience for customers. It has moved away from Windows deployment to Linux, making it easier to troubleshoot and manage. The appliance comes pre-hardened for security, including multi-factor authentication and role-based access control. Zerto has also introduced a seamless migration utility that allows for quick and efficient migration of environments. The appliance is currently delivered as a single virtual machine, but there are plans for future deployments with multiple appliances for redundancy. Zerto aims to provide more frequent updates and move towards a more SaaS-like update process. The architecture has shifted from a monolith to a microservices-based approach, with many components running as web-based services. The appliance communicates with HPE GreenLake and Zerto Analytics containers for data transmission. Keycloak is used for authentication and integration capabilities. The Linux operating system and containers are pre-hardened, although specific details regarding the hardening of the Kubernetes cluster are not mentioned.


Ransomware Resilience with Zerto – Test and Recover

Event: Cloud Field Day 17

Appearance: Zerto Presents at Cloud Field Day 17

Company: Zerto

Video Links:

Personnel: Chris Rogers, Justin Paul

Even the best-laid plans can come undone unless frequent and extensive testing can be completed. Utilizing Zerto’s automation and orchestration capabilities, organizations can now test non-disruptively in isolated networks or clean rooms to ensure they are ransomware ready. Once testing is complete, ready-made compliance reports make passing audits and meeting regulatory requirements easy.

Chris Rogers, senior technology evangelist at Zerto, discusses the importance of ransomware resilience and testing in data recovery. Zerto has been emphasizing the need for simple, non-impactful testing for years, especially in the context of security. Chris highlights the significance of frequent and extensive testing, rather than just checking a single virtual machine or performing a single restore. By using Zerto, customers have significantly reduced their testing time, completing it in less than two hours compared to three and a half days previously. The testing is fully automated, orchestrated, and does not impact production workloads. Zerto customers perform over 18,000 tests per month on average, with an impressive average recovery time objective (RTO) of three minutes and 19 seconds. Chris also mentions the ability to conduct real-time testing and utilize the isolated recovery environment for various purposes, such as patch testing, vulnerability scanning, data analytics, and forensics. While Zerto does not replace antivirus tools or have official partnerships with malware cleanup companies, they provide the infrastructure and availability for recovery, allowing customers to bring their own tooling and layer additional security measures on top. Zerto offers different recovery options for ransomware, including instant file restore, instant VM restore, recovery from multi-VM app infection, recovery from single site infection using the cloud or secondary site, and extended journal copy for multi-site infection recovery. The recently introduced Rapid Air Gap Recovery using the cyber resilience vault provides an additional layer of protection. Chris acknowledges that Zerto’s focus is not on detection but on recovery, and customers still have work to do in removing the malware or encryption. However, the vault allows customers to recover applications into an isolated environment where they can leverage their own tools and scan the recovered VMs for any infections.


Ransomware Resilience with Zerto – Isolate and Lock with the Cyber Resilience Vault

Event: Cloud Field Day 17

Appearance: Zerto Presents at Cloud Field Day 17

Company: Zerto

Video Links:

Personnel: Chris Rogers, Justin Paul

Introducing the new Zerto Cyber Resilience Vault, a complete solution combining the powers of Zerto and the wider HPE family. Organizations can now be confident in ensuring recovery even during the worst attacks. Built upon decentralized management with zero-trust principles and always-immutable data copies, the Cyber Resilience Vault is the only isolated recovery environment that uses journaling technology as the primary recovery mechanism, rapidly reducing downtime and data loss.

The Zerto Cyber Resilience Vault, also known as Z-Vault, provides organizations with an isolated recovery environment or vault to protect against ransomware attacks. As regulations regarding data protection become stricter, isolated recovery environments are increasingly mandated as the last line of defense and emergency recovery option. Ransomware attacks often target data protection solutions, making it crucial to protect these solutions themselves. The Zerto Cyber Resilience Vault offers a fully isolated and air-gapped environment with immutable data, based on zero-trust principles. It includes components such as HPE ProLiant for compute, HPE Alletra for storage, and HPE Aruba Networking for networking. The vault ensures no network connectivity outside of the replication link between the storage arrays, providing enhanced security. It also supports replication from cloud sources and integration with the HPE Backup and Recovery Service. Z-Vault aims to offer a better, faster, and more cost-effective solution compared to existing cyber vaults on the market, reducing downtime and ransomware impact. By combining the isolated recovery environment and the vault into a single hardware infrastructure, Zerto simplifies the recovery process and ensures data immutability and air-gapped security. The vault helps organizations meet compliance and regulatory requirements while providing enhanced protection against cyber threats.