Couchbase Powers Diverse Industries with Personalized Experiences and Efficient Data Management

Event: Cloud Field Day 17

Appearance: Couchbase Presents at Cloud Field Day 17

Company: Couchbase

Personnel: Jeff Morris

Jeff Morris discusses the wide range of use cases for Couchbase and the benefits it provides. Couchbase is utilized in industries such as travel, hospitality, telecommunications, retail, financial services, media, and entertainment. It offers features like JSON support, efficient account profile management, and dynamic catalog matching. The examples mentioned include applications for inventory management, customer loyalty programs, fraud detection, streaming services, customer service improvement, rescue operations, preflight checklists, and more. Couchbase enables personalized experiences, offline-first functionality, and high-speed performance in critical activities.


Couchbase Capella Spring Release: Enhancing Modern Applications with New Features

Event: Cloud Field Day 17

Appearance: Couchbase Presents at Cloud Field Day 17

Company: Couchbase

Personnel: Jeff Morris

Jeff Morris introduces National Cloud Database Day and announces the Couchbase Capella Spring Release and Couchbase Server 7.2, which introduce significant enhancements to support modern applications. The database now includes time series data support in JSON for IoT and financial applications, as well as user-defined functions for transforming and accessing data. Change data capture and cost-based optimization for JSON queries have been added, along with a massively parallel processing analytics service that aggregates data from Couchbase and cloud storage. Additional features include memory-only buckets for caching, integrations with Netlify and Visual Studio Code, dynamic disk expansion, cluster hibernation, and improved support for cloud service providers.
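
Time series JSON is queried through SQL++ like any other document data. Below is a minimal sketch using the Couchbase Python SDK that unnests an array of readings and averages them per device per hour; the endpoint, credentials, bucket name ("iot"), and document shape are hypothetical illustrations, not a schema from the presentation.

```python
# A minimal sketch of querying time-series-style JSON with SQL++ through the
# Couchbase Python SDK. The endpoint, credentials, bucket name ("iot"), and
# document shape ({"device": ..., "readings": [{"t": ..., "v": ...}]}) are
# hypothetical -- adapt them to your own cluster and schema.
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions, QueryOptions

cluster = Cluster(
    "couchbases://cb.example.cloud",  # hypothetical Capella endpoint
    ClusterOptions(PasswordAuthenticator("app_user", "app_password")),
)
cluster.wait_until_ready(timedelta(seconds=10))

# Unnest each document's readings array and average the values per device
# per hour inside the requested window.
rows = cluster.query(
    """
    SELECT d.device,
           DATE_TRUNC_STR(MILLIS_TO_STR(r.t), "hour") AS hour,
           AVG(r.v) AS avg_value
    FROM iot._default._default AS d
    UNNEST d.readings AS r
    WHERE r.t BETWEEN $start AND $end
    GROUP BY d.device, DATE_TRUNC_STR(MILLIS_TO_STR(r.t), "hour")
    """,
    QueryOptions(named_parameters={
        "start": 1672531200000,  # epoch millis, start of window
        "end": 1672617600000,    # epoch millis, end of window
    }),
)
for row in rows:
    print(row)
```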


Couchbase in a Cloud NoSQL Database Comparison

Event: Cloud Field Day 17

Appearance: Couchbase Presents at Cloud Field Day 17

Company: Couchbase

Personnel: Jeff Morris

Organizations have a variety of NoSQL databases to choose from – and understanding the differences between them is critical when planning to deploy a NoSQL database in the cloud. Jeff Morris, VP Product Marketing at Couchbase, compares Couchbase’s award-winning Database-as-a-Service (DBaaS) Couchbase Capella against top NoSQL competitors. Find out how Capella stacks up against competing DBaaS document store offerings and see why Capella stands out from the rest for ease of getting started, multicloud deployment, and price/performance – particularly as application needs grow.

Jeff Morris provides an overview of Couchbase, highlighting its key features and advantages over competitors. He emphasizes that customers primarily choose Couchbase for its performance and the flexibility of a multi-model data store. The platform supports mobile application development and helps reduce the cost of cloud operations. Morris discusses Couchbase’s memory-first design, which enables fast data processing, and its ability to scale services independently for optimal performance. He explains how Couchbase achieves active-active clustering and distributed systems through virtual buckets and application awareness. While migrating from a different database to Couchbase may require some work for developers, the portability of query languages makes the transition smoother. Morris compares Couchbase to MongoDB, highlighting Couchbase’s memory-first design, scalability, low latency, and comprehensive capabilities. He points to Couchbase’s SQL++ query language, which offers powerful querying capabilities similar to relational databases. Customer surveys indicate significant cost savings and improved overall total cost of ownership (TCO) when using Couchbase. Additionally, Couchbase’s Capella Database-as-a-Service further reduces costs and operational complexities. The presentation concludes with an overview of Couchbase’s deployment options and its Autonomous Operator feature, which includes self-healing and auto-scaling capabilities.
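
To make the SQL++ point concrete, here is a small sketch of a relational-style join over JSON collections via the Couchbase Python SDK. The bucket and collection names ("shop", "orders", "customers") and their fields are hypothetical examples, not data from the presentation.

```python
# A minimal sketch of a relational-style SQL++ join over JSON collections,
# run through the Couchbase Python SDK. The bucket/collection names ("shop",
# "orders", "customers") and fields are hypothetical examples.
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

cluster = Cluster(
    "couchbase://localhost",  # hypothetical local cluster
    ClusterOptions(PasswordAuthenticator("Administrator", "password")),
)

# ANSI JOIN over JSON documents, just as you would join tables in SQL.
rows = cluster.query(
    """
    SELECT c.name, COUNT(1) AS order_count, SUM(o.total) AS lifetime_value
    FROM shop._default.orders AS o
    JOIN shop._default.customers AS c ON o.customer_id = META(c).id
    GROUP BY c.name
    ORDER BY lifetime_value DESC
    LIMIT 10
    """
)
for row in rows:
    print(row)
```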


Comprehensive DPaaS for Google Cloud Services with HYCU

Event: Cloud Field Day 17

Appearance: HYCU Presents at Cloud Field Day 17

Company: HYCU

Personnel: Shiva Raja, Shreesha Pai Manoor

As Google Cloud use continues to rise, the need to have all of its IaaS, PaaS, DBaaS, and collaboration services protected and recoverable in a consistent and intuitive manner has never been more critical. Shiva and Shreesha share a compelling demo of HYCU, along with the continued innovations that make Google Cloud backup, recovery, and management easy, efficient, and extremely affordable across all of the Google Cloud services, from compute to database to business intelligence to the Marketplace.

Shiva Raja, Technical Solutions Architect, and Shreesha Pai Manoor, Vice President of Customer and Partner Solutions at HYCU, presented HYCU’s solution for Google Cloud, discussing the broad range of services Google Cloud offers beyond infrastructure as a service. They highlighted the need for data protection and shared customer use cases. They emphasized the automated deployment and discovery processes, as well as the ability to protect and restore data without the need for agents or manual installations. They also addressed security measures such as authentication, multi-tenancy, backup consolidation, and immutability using Google Cloud’s immutable storage. The presenters compared their solution to manual scripting, emphasizing the comprehensive coverage and ease of use provided by their product. They also mentioned the cost-effectiveness and encryption options available to customers.


Addressing the Data Protection Challenge for Atlassian Users Moving from Datacenter to Cloud with HYCU

Event: Cloud Field Day 17

Appearance: HYCU Presents at Cloud Field Day 17

Company: HYCU

Personnel: Andy Fernandez, Subbiah Sundaram

With more than 17,000 SaaS apps in use across enterprises in North America today, several of the most widely deployed are from Atlassian. As users of Atlassian products, including Jira and Confluence, look to take advantage of Atlassian Cloud, data needs to be protected and restored by a tightly integrated, enterprise-class solution. Andy shares more in a compelling demo of how HYCU is addressing the Jira backup challenge, along with making restoration after a disruption a one-click process.

Andy Fernandez and Subbiah Sundaram discuss the importance of protecting and restoring data within SaaS applications like Jira. They highlight Jira’s significance for both HYCU and Fortune 500 companies, emphasizing its role in software releases, bug management, customer experience, and revenue generation. HYCU explains that while many organizations assume SaaS applications are fully protected, it is still their responsibility to safeguard data and ensure compliance. They outline three critical reasons for protecting data in SaaS applications: operational data loss, cyber events and outages, and compliance. The presenters then introduce HYCU’s data protection platform, Protégé, which offers backup and recovery for various SaaS applications, including Jira. They demonstrate the platform’s capabilities, such as granular restoration at the project, attachment, and subtask levels. They also mention that HYCU is working on expanding its integrations and aims to have 100 apps in the marketplace by the end of the year.


Modern Data Protection Starts with the Right Architectural Foundation with HYCU

Event: Cloud Field Day 17

Appearance: HYCU Presents at Cloud Field Day 17

Company: HYCU

Personnel: Goran Garevski

To be able to address the challenges raised by HCI and multi-cloud adoption, data protection solutions need to be architected right from the ground up. Goran shares more on how HYCU was developed to solve not only the immediate needs of multi-cloud data protection but also to effectively handle the emergence of as-a-Service and SaaS application use.

Goran Garevski, CTO and co-founder of HYCU, discusses the challenges of data protection in the era of numerous data silos and the company’s approach to simplifying and streamlining the process. The goal is to abstract the problem across different types of data sources, such as classical data centers, file systems, containers, SaaS applications, databases, and service platforms, so that the same policy can be applied for efficient data protection. HYCU aims to provide central intelligence for cross-cloud data protection management and standard functionality. The company also focuses on application awareness and offers a marketplace for extending the platform’s functionality. The challenges of discovering and protecting SaaS and DBaaS sources are addressed, and HYCU aims to provide users with information about the protection capabilities of SaaS services. They have developed advanced logic for identification, filtering, and automapping of SaaS instances. The platform visualizes the environment and provides a comprehensive view of data protection and compliance. Additionally, HYCU offers a RESTful API for integration and potential graph-based representation of data.


Modern Data Protection for Modern Applications and the Future of SaaS Backup with HYCU

Event: Cloud Field Day 17

Appearance: HYCU Presents at Cloud Field Day 17

Company: HYCU

Personnel: Simon Taylor

HYCU was founded on the fundamental belief that there is a better way to protect mission-critical data across on-prem, hybrid, and public clouds, especially given the rise of SaaS application usage in companies. Simon shares more on the history of HYCU (Hybrid Cloud Uptime), the company’s roots, and what is driving technology innovation to solve the most significant challenge in recent IT history: the proliferation of SaaS application use and the lack of enterprise-class data protection options.

Simon Taylor, CEO and co-founder of HYCU, introduces the company as a modern data protection business named for hybrid cloud uptime, aiming to simplify data management. HYCU is the world’s fastest-growing backup and data protection as a service provider, with over 3,600 customers in 78 countries and $140 million in funding. They have a strong presence in the US government sector, with over 100 agencies as customers. The company boasts a world-class team and board, including industry experts and successful investors. HYCU’s goal is to solve the challenges of the data protection market, including the complexity of multi-cloud environments, the proliferation of data silos, and the lack of protection for SaaS applications. They prioritize customer success, as evidenced by their high Net Promoter Score (NPS) and their commitment to not charging for professional services. HYCU aims to provide comprehensive and unified data protection across on-premises, public cloud, and SaaS platforms. They emphasize the importance of addressing the increasing threat of ransomware attacks and the erosion of trust caused by the lack of data protection. HYCU’s extensible architecture and focus on simplifying the industry position them as a solution to these challenges.


JetStream DR in Action! Live Demonstration of JetStream DR on Microsoft Azure VMware Solution

Event: Cloud Field Day 17

Appearance: JetStream Software Presents at Cloud Field Day 17

Company: JetStream Software

Personnel: Dennis Bray

Get hands-on with JetStream DR on AVS, as JetStream Software Senior Solutions Architect Dennis Bray shows failover of three different VMware clusters from an on-prem Software Defined Data Center (SDDC) in San Jose to a Microsoft AVS data center in Sweden. In just a few minutes, virtual machines (VMs) that have “crashed” in San Jose are up and running in the Microsoft AVS environment. Dennis runs three different failover scenarios to show the range of options for DR using a single DR software platform, including the most cost-effective scenario, in which data is stored exclusively in Microsoft Azure Blob Storage, and the highest-performance scenario, which leverages the performance and scalability of Azure NetApp Files.

In this demo, Dennis Bray of JetStream Software showcases the recovery process using JetStream’s solution. The protected site is located in San Jose, within the Equinix data center, running a small VMware environment. The virtual machines are replicated to an Azure VMware Solution private cloud in the Sweden Central region. The demo shows both the near-zero Recovery Time Objective (RTO) option and the on-demand option. Three domains are set up with different configurations, including continuous rehydration and replication to different storage accounts. During the demo, Dennis imports the third domain and initiates the failover process. The progress of the failover and rehydration is shown, and the virtual machines are restored onto an Azure NetApp Files (ANF) datastore. The demo also includes configuration settings, network mapping, and the use of JetStream’s automation toolkit for managing the recovery process.


JetStream DR Solution Architecture: Achieving both Performance and Cost Savings

Event: Cloud Field Day 17

Appearance: JetStream Software Presents at Cloud Field Day 17

Company: JetStream Software

Personnel: Dennis Bray

JetStream Software Senior Solutions Architect Dennis Bray presents the unique design of the JetStream DR platform, which captures data immediately as it’s written to storage and replicates it as objects to a container in any standard object store, including Microsoft Azure Blob Storage. This enables the most cost-efficient near-zero RPO solution for DR in the cloud. Dennis shows that everything needed to fully recover a set of protected virtual machines (VMs) is in the object store. If desired, data can be continuously replicated from the object store to storage in the recovery environment, for a near-zero Recovery Time Objective (RTO).

In this discussion, Dennis Bray from JetStream Software provides an overview of their architecture and proceeds to demonstrate its functionality. He begins by describing the components and technologies involved in their system. The core architecture includes a management server appliance that orchestrates and operates the software, which can be accessed through a vSphere client plugin or APIs for automation and integration. They also utilize IO filters to capture storage traffic and DR virtual appliances (DRVAs) for processing and replicating data to an object store, such as an Azure storage account. The protected virtual machines, along with their configurations and status information, are stored in the object store, allowing for recovery in the event of a disaster. Dennis explains two recovery options: a standard failover with a longer recovery time objective (RTO) and a near-zero RTO option. The standard failover involves setting up a recovery site, deploying the software, configuring a DRVA, and transferring ownership of the protected domain to the recovery site. Once completed, the failover process can be initiated, enabling the recovery of protected virtual machines. The near-zero RTO option requires a preconfigured and running recovery site, where virtual machines can be quickly recovered using the stored data and configurations from the object store. The discussion also addresses some questions from the audience, clarifying aspects such as the need for a live vCenter environment, the responsibility of customers in preparing the recovery site, and the compatibility of different object storage targets.
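
The key architectural point is that the object store alone holds everything needed for recovery. As a conceptual illustration only (not JetStream’s actual tooling or object layout), the sketch below lists the blobs in an Azure Blob Storage container with the azure-storage-blob SDK; the connection string and container name are hypothetical placeholders.

```python
# A conceptual illustration only -- not JetStream's actual tooling or object
# layout. It simply shows that recovery data and metadata live as ordinary
# blobs in a standard object store (here, Azure Blob Storage). The connection
# string and container name ("jetstream-dr") are hypothetical placeholders.
import os

from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],
    container_name="jetstream-dr",  # hypothetical container name
)

# Everything needed for recovery -- replicated VM data plus configuration and
# status metadata -- is durably stored here as objects.
total_bytes = 0
for blob in container.list_blobs():
    total_bytes += blob.size
    print(f"{blob.name}\t{blob.size} bytes\t{blob.last_modified}")

print(f"Protected-domain footprint: {total_bytes / 1024**3:.2f} GiB")
```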


Economic and Operational Advantages of Cloud DRaaS with JetStream Software

Event: Cloud Field Day 17

Appearance: JetStream Software Presents at Cloud Field Day 17

Company: JetStream Software

Personnel: Rich Petersen, Serge Shats

Disaster Recovery (DR, or sometimes BC/DR) in the cloud can be more economical than legacy on-premises DR. JetStream Software’s Rich Petersen shows how storing recovery data and metadata in object storage and provisioning cloud-based compute nodes dynamically when needed can reduce the cost of DR infrastructure by 50% to 60%, making cloud a game-changer for DR. At the same time, by using VMware IO Filters (VAIO) to capture data immediately (no snapshots!), JetStream DR delivers a near-zero Recovery Point Objective (RPO). Employing Azure NetApp Files (ANF) to maintain data reduces Recovery Time Objectives (RTOs) to near-zero as well.

JetStream Software, led by co-founders Rich Petersen and Serge Shats, discusses the reasons why organizations are increasingly turning to the cloud for their disaster recovery (DR) needs. The cloud offers significant advantages in terms of cost and operational performance for DR strategies. JetStream DR, a cloud-based solution, can protect both on-premises workloads and those already migrated to the cloud. It provides failover capabilities, enables restoration of on-premises environments, and allows for the recovery of earlier points of consistency as required. JetStream’s cloud-native approach includes self-service capabilities, integration with cloud vendors’ marketplaces, and a focus on near-zero Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs). The solution utilizes VMware IO Filters to capture and replicate data in real-time, ensuring minimal data loss. By leveraging economical cloud storage and dynamically provisioning compute nodes, organizations can achieve significant cost savings of around 50% to 60% compared to traditional DR approaches. The ability to maintain high application performance without interrupting operations is another key benefit. Finally, JetStream emphasizes the importance of visibility and control for administrators in a cloud-based DR solution.


Couchbase: The Cloud Database Platform for Modern Applications

Event: Cloud Field Day 17

Appearance: Couchbase Presents at Cloud Field Day 17

Company: Couchbase

Personnel: Jeff Morris

Modern customer experiences need a flexible database platform that can power applications spanning from cloud to edge and everything in between. Couchbase’s mission is to be the premier cloud database platform for organizations building the applications of the future. Join Jeff Morris, VP of Product Marketing, for an introduction to Couchbase and what sets our cloud database platform apart from the rest. He will cover Couchbase’s differentiated architecture, why our DBaaS Capella is so unique, primary use cases, and some special announcements on what’s new in the latest Capella release.

During the Couchbase introduction on National Cloud Database Day at Cloud Field Day, Jeff Morris highlights the key aspects of Couchbase’s background and the challenges faced by their customers. He emphasizes the need for delivering personalized and highly available applications with real-time capabilities, which has led to a shift towards cloud deployments. Customers are increasingly concerned about rising cloud costs and the complexity of building and managing complex architectures. Couchbase addresses these issues by offering a high-performance, multi-model NoSQL database with features such as search, eventing, and mobile synchronization. The company’s goal is to help customers build better applications faster, reduce infrastructure complexity, and lower costs, ultimately delivering more features in less time and saving money.


Introducing Zerto 10 Secure Virtual Manager Appliance

Event: Cloud Field Day 17

Appearance: Zerto Presents at Cloud Field Day 17

Company: Zerto

Personnel: Chris Rogers, Justin Paul

Zerto 10 is the first version of Zerto that is exclusively available via the new Zerto Secure Virtual Appliance, allowing for simpler installs and upgrades. The Zerto Secure Virtual Appliance comes pre-hardened out of the box, so all customers can benefit from increased security without pages of hardening guides to worry about. Learn how to migrate from your legacy Windows ZVM to the new Zerto Virtual Manager Appliance (ZVMA) with the newly released migration tool.

The Zerto secure appliance is a new all-in-one virtual manager appliance that simplifies the deployment, management, and support experience for customers. It has moved away from Windows deployment to Linux, making it easier to troubleshoot and manage. The appliance comes pre-hardened for security, including multi-factor authentication and role-based access control. Zerto has also introduced a seamless migration utility that allows for quick and efficient migration of environments. The appliance is currently delivered as a single virtual machine, but there are plans for future deployments with multiple appliances for redundancy. Zerto aims to provide more frequent updates and move towards a more SaaS-like update process. The architecture has shifted from a monolith to a microservices-based approach, with many components running as web-based services. The appliance communicates with HPE GreenLake and Zerto Analytics containers for data transmission. Keycloak is used for authentication and integration capabilities. The Linux operating system and containers are pre-hardened, although specific details regarding the hardening of the Kubernetes cluster are not mentioned.
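
Because Keycloak fronts authentication, clients obtain tokens through Keycloak’s standard OpenID Connect token endpoint. The sketch below shows a client-credentials token request; the appliance address, realm name, client ID, and secret are hypothetical placeholders, while the endpoint path itself is standard Keycloak.

```python
# A minimal sketch of obtaining an access token from Keycloak, which the
# appliance uses for authentication. The appliance address, realm name
# ("zerto"), client ID, and secret are hypothetical placeholders; the token
# endpoint path is standard Keycloak OpenID Connect (older Keycloak versions
# prefix it with /auth).
import requests

KEYCLOAK_HOST = "https://zvma.example.local"  # hypothetical appliance address
REALM = "zerto"                               # hypothetical realm name

resp = requests.post(
    f"{KEYCLOAK_HOST}/realms/{REALM}/protocol/openid-connect/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "api-client",          # hypothetical client
        "client_secret": "example-secret",  # hypothetical secret
    },
)
resp.raise_for_status()
token = resp.json()["access_token"]

# The bearer token is then presented on subsequent REST calls.
print("token:", token[:24], "...")
```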


Ransomware Resilience with Zerto – Test and Recover

Event: Cloud Field Day 17

Appearance: Zerto Presents at Cloud Field Day 17

Company: Zerto

Personnel: Chris Rogers, Justin Paul

Even the best-laid plans can come undone unless frequent and extensive testing can be completed. Utilizing Zerto’s automation and orchestration capabilities, organizations can now test non-disruptively in isolated networks or clean rooms to ensure they are ransomware ready. Once testing has been completed, ready-made compliance reports make passing audits and meeting regulatory requirements easy.

Chris Rogers, senior technology evangelist at Zerto, discusses the importance of ransomware resilience and testing in data recovery. Zerto has been emphasizing the need for simple, non-impactful testing for years, especially in the context of security. Chris highlights the significance of frequent and extensive testing, rather than just checking a single virtual machine or performing a single restore. By using Zerto, customers have significantly reduced their testing time, completing it in less than two hours compared to three and a half days previously. The testing is fully automated, orchestrated, and does not impact production workloads. Zerto customers perform over 18,000 tests per month on average, with an impressive average recovery time objective (RTO) of three minutes and 19 seconds. Chris also mentions the ability to conduct real-time testing and utilize the isolated recovery environment for various purposes, such as patch testing, vulnerability scanning, data analytics, and forensics. While Zerto does not replace antivirus tools or have official partnerships with malware cleanup companies, they provide the infrastructure and availability for recovery, allowing customers to bring their own tooling and layer additional security measures on top. Zerto offers different recovery options for ransomware, including instant file restore, instant VM restore, recovery from multi-VM app infection, recovery from single site infection using the cloud or secondary site, and extended journal copy for multi-site infection recovery. The recently introduced Rapid Air Gap Recovery using the cyber resilience vault provides an additional layer of protection. Chris acknowledges that Zerto’s focus is not on detection but on recovery, and customers still have work to do in removing the malware or encryption. However, the vault allows customers to recover applications into an isolated environment where they can leverage their own tools and scan the recovered VMs for any infections.


Ransomware Resilience with Zerto – Isolate and Lock with the Cyber Resilience Vault

Event: Cloud Field Day 17

Appearance: Zerto Presents at Cloud Field Day 17

Company: Zerto

Personnel: Chris Rogers, Justin Paul

Introducing the new Zerto Cyber Resilience Vault: a complete solution combining the powers of Zerto and the wider HPE family. Organizations can now be confident in ensuring recovery even during the worst attacks. Built upon decentralized management, zero-trust principles, and always-immutable data copies, the Cyber Resilience Vault is the only isolated recovery environment that uses journaling technology as the primary recovery mechanism, rapidly reducing downtime and data loss.

The Zerto Cyber Resilience Vault, also known as Z-Vault, provides organizations with an isolated recovery environment or vault to protect against ransomware attacks. As regulations regarding data protection become stricter, isolated recovery environments are increasingly mandated as the last line of defense and emergency recovery option. Ransomware attacks often target data protection solutions, making it crucial to protect these solutions themselves. The Zerto Cyber Resilience Vault offers a fully isolated and air-gapped environment with immutable data, based on zero-trust principles. It includes components such as HPE ProLiant for compute, HPE Alletra for storage, and HPE Aruba Networking for networking. The vault ensures no network connectivity outside of the replication link between the storage arrays, providing enhanced security. It also supports replication from cloud sources and integration with the HPE Backup and Recovery Service. Z-Vault aims to offer a better, faster, and more cost-effective solution compared to existing cyber vaults on the market, reducing downtime and ransomware impact. By combining the isolated recovery environment and the vault into a single hardware infrastructure, Zerto simplifies the recovery process and ensures data immutability and air-gapped security. The vault helps organizations meet compliance and regulatory requirements while providing enhanced protection against cyber threats.


Enhancing Data Analysis and Anomaly Detection with Zerto’s API and Grafana Integration

Event: Cloud Field Day 17

Appearance: Zerto Presents at Cloud Field Day 17

Company: Zerto

Personnel: Justin Paul

Zerto leverages Grafana to visually represent data extracted through its API, allowing for the analysis of various metrics. The API provides valuable insights into logical blocks, encrypted and unencrypted data, enabling the identification of trends and anomalies. By examining SCSI blocks, Zerto’s algorithms can detect abnormal levels of compression and encryption, alerting users to potential issues like increased encrypted traffic. Notably, Zerto prioritizes real-time analysis over data storage, ensuring efficient processing. The 10.0 API further expands data availability, providing statistics at the volume, VM, and VPG levels. While Zerto currently recognizes all SCSI traffic as encrypted if the volume is encrypted, efforts are being made to differentiate between normal and malicious encryption. Zerto’s dedicated team continuously improves machine learning algorithms, keeping pace with security standards and advancements made by VMware.

Justin Paul discusses the capabilities of Grafana and the data obtained from Zerto’s API. By utilizing the API data, it is possible to rebuild Zerto analytics and visualize it through graphs. The data includes the total number of logical blocks, encrypted data, unencrypted logical blocks, and their combined total. Anomalies in encrypted traffic can be identified, even for applications not intended to be encrypted. However, systems using specific encryption methods like Linux file systems or Windows BitLocker may not show anomalies as they are already encrypted. Zerto’s algorithms analyze the data at the block layer to detect compression or encryption, with plans to refine and improve the algorithms over time. The data is not stored for long, as Zerto aims to retrieve data quickly and not hold onto it due to high data rates. The analyzed stats are sent to ZVM, which triggers alerts and tag checkpoints when sufficient evidence of a security issue is found. Zerto aims to be one layer of security among others and provide real-time alerts without the need for analyzing previous backups. The newer 10.0 API provides additional statistics at the volume, VM, and VPG levels. The discussion also touches on the potential differentiation between normal and malicious encryption and Zerto’s commitment to improving its algorithms and keeping up with security standards.
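
The kind of analysis described, pulling block-level statistics and flagging a jump in the encrypted share, can be reproduced outside Grafana with a short script. The endpoint path, field names, and threshold below are hypothetical illustrations; consult the Zerto 10.0 API documentation for the actual schema.

```python
# An illustrative sketch of the analysis described above: pull block-level
# statistics from a REST API and flag VMs whose share of encrypted logical
# blocks jumps. The endpoint path, field names, and threshold are hypothetical
# -- consult the Zerto 10.0 API documentation for the real schema.
import requests

BASE = "https://zvm.example.local/api"         # hypothetical API base URL
HEADERS = {"Authorization": "Bearer <token>"}  # token acquisition elided

stats = requests.get(f"{BASE}/v1/encryption-stats", headers=HEADERS).json()

THRESHOLD = 0.30  # hypothetical: alert when >30% of new blocks look encrypted

for vm in stats:  # assumed shape: list of per-VM block counters
    total = vm["encryptedBlocks"] + vm["unencryptedBlocks"]
    if total == 0:
        continue
    ratio = vm["encryptedBlocks"] / total
    if ratio > THRESHOLD:
        print(f"Anomaly: {vm['vmName']} encrypted block ratio {ratio:.0%}")
```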


Ransomware Resilience with Zerto – Replicate and Detect

Event: Cloud Field Day 17

Appearance: Zerto Presents at Cloud Field Day 17

Company: Zerto

Personnel: Chris Rogers, Justin Paul

With the new release of Zerto 10 comes added functionality to help you achieve ransomware resilience: Zerto’s real-time encryption detection, built into its continuous data protection, provides the earliest warning sign that ransomware may be impacting your virtual environment. Using the unique Zerto journal, organizations can now be even more confident, with rapid RPOs and RTOs and tagged clean checkpoints for verified recovery.

In this video, Chris Rogers, a senior technology evangelist at Zerto, introduces Zerto 10 and its key enhancements. The tagline for the release is “real-time detection meets real-time protection.” Chris explains that Zerto 10 focuses on key areas including real-time encryption detection for ransomware, the Zerto Cyber Resilience Vault, and protecting Azure at scale. He highlights the importance of early detection in ransomware attacks and explains how Zerto’s streaming inline detection works, providing real-time alerts and enabling quick recovery. Chris also mentions that Zerto 10 includes additional features like the Zerto Secure Appliance and emphasizes that the new capabilities are available to existing Zerto 9.7 customers at no additional cost.


Terraform Orchestration Bare Metal Demo via RackN Digital Rebar API

Event: Cloud Field Day 17

Appearance: RackN Presents at Cloud Field Day 17

Company: RackN

Personnel: Rob Hirschfeld, Shane Gibson

Demonstration of declarative APIs in RackN Digital Rebar that make it easy to automate pools of bare metal infrastructure using IaC orchestrators like Terraform, empowering developers with available, reliable, and fast self-service.

In this RackN demo section, the focus is on showing a Terraform demo for orchestration. The scenario involves an enterprise trying to terraform bare metal, including setting a target OS, running Ansible, and conducting security audits. The goal is to make infrastructure easily available, reliable, and fast, allowing for the creation and destruction of resources. The architecture relies on a feature called pooling, where machines are checked in and out, and Terraform is used to interact with the platform. The demo showcases how developers can use Terraform to request and use resources, while the infrastructure architect manages and controls the process using the Digital Rebar platform. The integration between Terraform and Digital Rebar simplifies the provisioning and management of resources, allowing for customization and automation of infrastructure.
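
Conceptually, each terraform apply checks a machine out of a pool and each terraform destroy checks it back in. The sketch below expresses that flow as raw REST calls against a Digital Rebar endpoint; the URL, pool name, and exact paths and payloads are illustrative assumptions, since the demo drives this through RackN’s Terraform provider rather than hand-written calls.

```python
# A conceptual sketch of the pooling flow expressed as raw REST calls against
# a Digital Rebar endpoint. The URL, pool name, and exact paths/payloads are
# illustrative assumptions; in the demo this flow is driven through RackN's
# Terraform provider rather than hand-written calls.
import requests

DRP = "https://drp.example.local:8092"  # hypothetical Digital Rebar endpoint
AUTH = ("rocketskates", "r0cketsk8ts")  # default DRP credentials (change them!)

# "terraform apply": check a machine out of a named pool...
alloc = requests.post(
    f"{DRP}/api/v3/pools/app-servers/allocateMachines",  # assumed path
    auth=AUTH, json={}, verify=False,
)
alloc.raise_for_status()
print("checked out:", [m["Name"] for m in alloc.json()])

# ..."terraform destroy": return it to the pool for the next requester.
requests.post(
    f"{DRP}/api/v3/pools/app-servers/releaseMachines",  # assumed path
    auth=AUTH, json={}, verify=False,
).raise_for_status()
```

The design point is the decoupling: developers consume machines as fungible pool members, while the infrastructure architect controls what happens to a machine on check-out and check-in behind the API.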


Generative DevOps Powering the 10x Operator with RackN and Backstage

Event: Cloud Field Day 17

Appearance: RackN Presents at Cloud Field Day 17

Company: RackN

Personnel: Rob Hirschfeld, Shane Gibson

Demonstration of declarative APIs in RackN Digital Rebar that make it easy to automate clusters of bare metal infrastructure using dev portals like Backstage.io, empowering developers with available, reliable, and fast self-service.

In this conversation, Rob and Shane discuss their roles as developers and the integration of Backstage and Digital Rebar. Rob explains that he has set up his own Backstage developer environment and successfully connected it to his Digital Rebar system. However, he encounters an issue where a newly created cluster does not show any machines. Shane suggests looking into the front-end code to ensure that the necessary parameters are passed to the Digital Rebar API. They discuss adding additional information like the context broker and count to make the API work correctly. After making these adjustments, Rob successfully creates clusters and observes the machine resources being generated. They also touch on topics like resizing clusters, job logs for troubleshooting, and cluster cleanup. The conversation ends with a discussion on the on-ramp process for adopting Digital Rebar and the flexibility it offers for multiple teams within an organization. They mention the ability to import existing virtual machines and bare metal servers and the concept of machine objects and runner services in Digital Rebar.


Empowering Platform Engineering with RackN Infrastructure Platform

Event: Cloud Field Day 17

Appearance: RackN Presents at Cloud Field Day 17

Company: RackN

Personnel: Rob Hirschfeld, Shane Gibson

RackN Digital Rebar Infrastructure Platform provides the available, reliable and fast automation needed by operations teams to support enterprise Platform Engineering efforts.

RackN focuses on infrastructure platforms and platform engineering, addressing challenges faced by operations teams through their self-managed software called Digital Rebar. Their goal is to empower companies to independently manage their infrastructure, whether on-premises or in the cloud, by emphasizing infrastructure-as-code pipelines and process automation. They prioritize availability, reliability, and speed as key factors for success in an infrastructure platform. Through customer journey stories, they demonstrate how their solution has reduced equipment onboarding time, improved system availability, and increased operational speed. Automation, integration with various systems, and handling of edge systems are key features. They also address the challenges of managing Dev and Ops tool sprawl and advocate for a consolidated operational experience. RackN’s Digital Rebar is highlighted as a vendor-neutral infrastructure-as-code automation platform that offers customizable workflows and air gap capabilities.


Morpheus Data Workflows: Scaling Automation and IaC via Self-Service

Event: Cloud Field Day 17

Appearance: Morpheus Data Presents at Cloud Field Day 17

Company: Morpheus Data

Personnel: Martez Reed

Going beyond developer self-service and provisioning of application instances, this session will spotlight how Morpheus can be used to help IT simplify operations and scale GitOps style automation initiatives. By decoupling the creation and execution of automation scripts and infrastructure-as-code patterns, organizations can streamline and govern the use of technologies like Terraform, Ansible, Puppet, PowerShell, Python, and more. The session will also spotlight the new dynamic automation target feature which enables use cases like patch management, security remediation, and routine maintenance.

In this session, Martez Reed, Director of Technical Marketing at Morpheus Data, discusses scaling automation in Infrastructure as Code (IaC) through self-service. He talks about the challenges of using YAML and raw automation, such as the lack of user-friendliness and difficulty in understanding available environments. Martez highlights the importance of focusing on the business value rather than getting caught up in the coolest way to implement automation. He explores the concept of ClickOps and the use of user interfaces (UIs) and no-code solutions for efficient and quick outcomes. Martez demonstrates how Morpheus provides a platform for creating Terraform Cloud workspaces using UI-based interactions, simplifying the sharing and scaling of automation across organizations. The platform’s role-based access control, workflow automation, and integration capabilities enable users to accomplish their tasks without the complexities of manual scripting or infrastructure management. The discussion also touches on features like dynamic automation targeting and the adoption of immutability in handling system redeployment and patching.
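
For contrast with Morpheus’s UI-driven flow, creating a Terraform Cloud workspace directly against HashiCorp’s public v2 API looks like the sketch below: the kind of call a platform can make on a user’s behalf. The organization name, workspace name, and token are hypothetical placeholders.

```python
# For contrast with the UI-driven flow, this sketch creates a Terraform Cloud
# workspace directly against HashiCorp's public v2 API -- the kind of call a
# platform like Morpheus can make on a user's behalf. The organization name,
# workspace name, and token are hypothetical placeholders.
import requests

TOKEN = "<terraform-cloud-api-token>"  # placeholder
ORG = "example-org"                    # hypothetical organization

resp = requests.post(
    f"https://app.terraform.io/api/v2/organizations/{ORG}/workspaces",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/vnd.api+json",
    },
    json={
        "data": {
            "type": "workspaces",
            "attributes": {"name": "demo-workspace", "execution-mode": "remote"},
        }
    },
)
resp.raise_for_status()
print("created workspace:", resp.json()["data"]["id"])
```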