AIStor – PromptObject, AIHub, and MCP Demos with MinIO

Event: Cloud Field Day 23

Appearance: MinIO Presents at Cloud Field Day 23

Company: MinIO

Video Links:

Personnel: Dil Radhakrishnan

Dil Radhakrishnan presented MinIO’s AIStor capabilities at Cloud Field Day 23, focusing on how MinIO is adapting to AI workloads. The presentation demonstrated three key features: AI Hub, PromptObject, and Model Context Protocol (MCP) server. AI Hub provides a Hugging Face-compatible repository for securely storing private AI models and datasets within the AIStor environment. This enables developers to manage and deploy fine-tuned models without exposing them to the public, leveraging the familiar Hugging Face ecosystem.
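
Because AI Hub is Hugging Face-compatible, existing tooling should work by simply repointing the client at the private endpoint. As a rough illustration (not MinIO’s documented procedure), here is a minimal Python sketch; the endpoint URL, repository name, and token variable are all assumptions:

```python
# Sketch: pull a private model from an AI Hub-style endpoint using standard
# Hugging Face tooling. Endpoint, repo name, and credential are hypothetical.
import os

# HF_ENDPOINT is the standard Hugging Face client override; set it before
# importing huggingface_hub, which reads the variable at import time.
os.environ["HF_ENDPOINT"] = "https://aistor.example.internal"  # hypothetical

from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="my-org/fine-tuned-llm",          # hypothetical private repo
    token=os.environ.get("AISTOR_HF_TOKEN"),  # assumed credential
)
print(f"Model files downloaded to {local_dir}")
```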

The presentation then introduced PromptObject, which enables interaction with objects in AIStor using large language models (LLMs). By integrating GenAI capabilities directly into the S3 API, developers can use the “prompt” function to have an LLM extract specific data from unstructured objects, transforming it into structured JSON for easier application integration. In many scenarios this eliminates the need for a separate RAG pipeline, since PromptObject simplifies interaction with a single object, though it can also be used in combination with a RAG implementation.
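
The exact request shape was not spelled out in the summary above, so the following is only a hedged sketch of the pattern: a prompt sent against an object URL, returning structured JSON. The URL layout, query string, and auth header are assumptions, not the documented AIStor API:

```python
# Sketch: ask an LLM to extract structured JSON from an unstructured object
# via a PromptObject-style call. URL shape and auth are assumptions.
import requests

resp = requests.post(
    "https://aistor.example.internal/invoices/inv-001.pdf?prompt",  # hypothetical
    json={"prompt": "Extract the invoice number, date, and total as JSON."},
    headers={"Authorization": "Bearer EXAMPLE_TOKEN"},  # real deployments sign requests
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"invoice_number": "...", "date": "...", "total": ...}
```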

Finally, the presentation showcased the AIStor MCP server, which enables agentic workflows. The MCP server allows AI agents to interact with the data stored in MinIO. This was demonstrated using Claude Desktop, showing how an agent can list buckets, extract information from images, automatically tag data, and create visualizations of the AIStor cluster. This approach enhances data accessibility and facilitates automation in managing and analyzing data.
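
For a sense of what the agent side of such a workflow looks like, here is a minimal sketch using the official Python MCP SDK to connect to a stdio server and invoke a bucket-listing tool. The server binary name and tool name are assumptions; only the SDK calls themselves are standard MCP:

```python
# Sketch: connect to an MCP server over stdio and call a tool.
# The server command and the "list_buckets" tool name are hypothetical.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(
        command="aistor-mcp-server",  # hypothetical server binary
        args=["--endpoint", "https://aistor.example.internal"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # discover the tools the agent sees
            result = await session.call_tool("list_buckets", arguments={})
            print(result.content)  # tool output returned to the agent

asyncio.run(main())
```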


Introducing MinIO AIStor – Object Storage for AI and Analytics with MinIO

Event: Cloud Field Day 23

Appearance: MinIO Presents at Cloud Field Day 23

Company: MinIO

Video Links:

Personnel: Jason Nadeau

MinIO’s VP of Product Marketing, Jason Nadeau, introduces the AIStor object storage solution, designed for AI and lakehouse analytics environments, at Cloud Field Day 23. AIStor distinguishes itself from object gateway approaches by being object-native. Nadeau highlights the importance of object storage for AI, as evidenced by its use in building large language models and various data lakehouse tools. In contrast to the complex, multi-layered architecture of retrofit object gateway solutions, AIStor presents a simpler, direct-attached architecture, leading to superior performance, data consistency, and scalability.

Nadeau emphasizes MinIO’s object-native architecture, which provides strict consistency and SIMD acceleration, resulting in significant performance advantages. These architectural benefits translate into tangible storage outcomes, allowing customers to scale from petabytes to exabytes.  AIStor’s architecture facilitates real-time data services even during hardware failures. This object-native approach enables optimal hardware utilization and cost-effectiveness. MinIO offers direct engineer support, bypassing traditional support queues and providing customers with direct access to experts. The company is seeing strong enterprise adoption and growth in headcount.

The presentation features examples of AIStor deployments in various use cases, including generative AI, high-performance computing, data lakehouses, and object-native applications, as well as an autonomous vehicle manufacturer, a cybersecurity company, and a fintech payments provider. These deployments are achieving the desired performance while helping to control costs. MinIO also plans to offer a channel bundle that simplifies the acquisition of AIStor by combining hardware and software into a single SKU.


Optimizing Networking Performance with HPE OpsRamp

Event: Cloud Field Day 23

Appearance: HPE OpsRamp Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Juden Supapo

HPE’s presentation at Cloud Field Day 23, led by Juden Supapo, focused on optimizing networking performance using the HPE OpsRamp software. The presentation centered around demonstrating the platform’s ability to identify and automatically resolve network issues, specifically excessive traffic flooding.

The demonstration showed how OpsRamp’s dashboard provides network observability, allowing users to monitor the health of critical applications and network devices. By simulating a network issue with a script that generated excessive traffic, the presenter demonstrated how OpsRamp identified the problem through its monitoring of switch interfaces and virtual machine (VM) utilization. The system then generated alerts, which, in this case, escalated to show a task that, upon approval, triggered an automation script to block the offending IP address and back up the network configuration.
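
The remediation script itself was not shown line by line; as an illustration of the kind of action such an approved automation might take, here is a hedged sketch that backs up a configuration file and drops traffic from the offending source. The commands, paths, and IP address are all illustrative:

```python
# Sketch of a remediation action: back up a device config, then block an
# offending IP. Paths, commands (Linux nftables), and the IP are examples.
import shutil
import subprocess
from datetime import datetime, timezone

def remediate(offending_ip: str, config_path: str) -> None:
    # Back up the current network configuration with a timestamp suffix.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    shutil.copy2(config_path, f"{config_path}.{stamp}.bak")
    # Drop traffic from the offending source address.
    subprocess.run(
        ["nft", "add", "rule", "inet", "filter", "input",
         "ip", "saddr", offending_ip, "drop"],
        check=True,
    )

remediate("192.0.2.50", "/etc/network/device.conf")  # RFC 5737 example address
```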

Beyond the demonstration, the presentation also touched on the future roadmap for OpsRamp. The key areas of focus are new device integrations (weekly updates), more sophisticated alert correlation, and AI-driven dashboard creation. The platform utilizes AI to analyze metrics and detect anomalies. HPE is also exploring the addition of features such as the ability to recommend dashboard thresholds based on historical data analysis.


Minimizing Application Downtime with HPE OpsRamp

Event: Cloud Field Day 23

Appearance: HPE OpsRamp Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Juden Supapo

HPE’s presentation at Cloud Field Day 23, delivered by Juden Supapo, focused on minimizing application downtime using HPE OpsRamp. The demonstration began by displaying a dashboard that monitored a mission-critical ERP application. The speaker highlighted key performance indicators and the overall health of the application. He then presented a service map, a visual representation of the infrastructure supporting the application, including database nodes, servers, and network devices. The service map enables administrators to quickly identify issues by visualizing the relationships between various infrastructure components.

The presentation then illustrated how OpsRamp handles infrastructure issues. By simulating a database outage, the speaker demonstrated how the dashboard and service map responded in real-time. Alerts were triggered, and the service map indicated the location of the problem. He emphasized the alert correlation feature, which uses machine learning to group related alerts, identify probable root causes, and streamline troubleshooting. This grouping allows administrators to address the primary issue instead of dealing with numerous cascading alerts, thus saving time and improving operational efficiency.

Finally, the presentation concluded by showcasing automation and governance within OpsRamp. When the simulated outage occurred, the system automatically generated a task, including an email notification. Through a low-code, no-code process automation workflow, the speaker demonstrated how the system could trigger a script to attempt to restart the affected service. This showcased the platform’s capability to combine automated remediation with governance through approval processes. This combination of features minimizes downtime.
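
The restart script was likewise not shown; a minimal sketch of the kind of step such a workflow might run after approval, assuming a Linux host with systemd and a hypothetical service name:

```python
# Sketch: restart a failed service and verify it came back up.
# The service name and systemd tooling are assumptions.
import subprocess

SERVICE = "erp-database.service"  # hypothetical

subprocess.run(["systemctl", "restart", SERVICE], check=True)
state = subprocess.run(
    ["systemctl", "is-active", SERVICE],
    capture_output=True, text=True,
).stdout.strip()
print(f"{SERVICE} is {state}")  # expected: "active"
```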


Observe – Analyze – Act. An introduction to HPE OpsRamp

Event: Cloud Field Day 23

Appearance: HPE OpsRamp Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Cato Grace

HPE’s presentation at Cloud Field Day 23 introduced OpsRamp, a SaaS platform designed to address the challenges of modern IT environments. OpsRamp provides a unified approach to managing diverse infrastructure by focusing on observing, analyzing, and acting on collected data. This involves ingesting data from various sources, such as applications, servers, and cloud environments, into a central tool, enabling users to access all their data in one place.

The platform’s key features include robust analytics and automation capabilities. OpsRamp utilizes machine learning to assist users in analyzing data, identifying issues, and automating corrective actions. This automation streamlines issue resolution, potentially reducing resolution times significantly through integrations with over 3,000 systems. Furthermore, OpsRamp offers both agent-based and agentless monitoring options, providing flexibility depending on the type of resources being monitored.

OpsRamp differentiates itself by offering full-stack monitoring and an AI-powered analytics engine that can integrate with existing monitoring tools to correlate alerts across disparate tools. It provides both broad monitoring capabilities and integration of existing tools. The platform’s licensing model is subscription-based, determined by the number of monitored resources and the volume of metrics collected, with data retention policies tailored to different data types.


Customer Discussion – SAP Cloud ERP Customers Prefer CDC with HPE GreenLake SAP

Event: Cloud Field Day 23

Appearance: HPE SAP GreenLake Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Jim Loiacono, Randall Grogan

HPE presented a customer discussion at Cloud Field Day focused on the adoption of SAP Cloud ERP (formerly RISE) with the CDC option, specifically highlighting the preferences of Energy Transfer, a major midstream pipeline company. The company chose the CDC option over the hyperscale cloud for several reasons, including the existing data center infrastructure, which offered lower latency and better control over cybersecurity. Energy Transfer’s decision was also influenced by the need to integrate with dependent applications, such as those used for hydrocarbon management.

The presentation emphasized that the chosen solution provides Energy Transfer with a cost-neutral transition, and it included a discussion of AI capabilities and the benefits of accessing innovation through the SAP Cloud ERP roadmap. The customer shared that they were early adopters of SAP S/4HANA and have a strong commitment to the AI capabilities that will come with Cloud ERP; choosing the CDC option makes those capabilities easier to adopt in the future.

Finally, the presentation emphasized the importance of a robust governance model and the availability of flexible options for customers considering SAP Cloud ERP, including sandbox environments and customized implementation approaches. The discussion also addressed the shared responsibility model employed by the CDC and its approach to managing risks, including cybersecurity threats such as ransomware. Ultimately, HPE’s presentation highlighted the value of a hybrid approach to SAP solutions, enabling customers like Energy Transfer to tailor their deployments according to their specific needs and priorities.


Modern Cloud – SAP Cloud ERP, Customer Data Center CDC with HPE GreenLake SAP

Event: Cloud Field Day 23

Appearance: HPE SAP GreenLake Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Jim Loiacono, Kelly Smith

HPE’s presentation at Cloud Field Day focused on the shift from declining on-premise and public cloud SAP deployments to increasing hybrid and private cloud solutions. Customers are seeking to mitigate risk, maintain control, and address the complex application dependencies inherent in SAP environments. HPE highlights its SAP Cloud ERP, Customer Data Center (CDC) offering as a true “Modern Cloud” solution, recognizing that customers require choices. The presentation highlights that predictable latency is crucial for integrations, making dedicated CDC solutions, with dedicated network paths and firewalls, a compelling option over hyperscale cloud offerings, where resources are shared.

The presentation emphasizes the importance of choice and flexibility when transitioning to cloud ERP.  HPE’s approach caters to various customer needs, offering options such as “lift and shift” and “tailored” methods, which enable customers to transition to cloud ERP without requiring an immediate data center migration. HPE’s strategy is designed to make the transition to cloud ERP as seamless as possible by acknowledging that moving from an existing ERP system to a new one, even with the same vendor, presents a significant project.

A key takeaway from the presentation revolves around the shift to subscription-based models. While it’s the direction most software companies are moving, HPE acknowledges the resistance many customers, like Energy Transfer, have to moving away from perpetual licensing. Jim Loiacono and Kelly Smith highlighted the ability to support those who want to “stay the course” or begin their SAP HANA journey with a lift-and-shift approach, understanding the need to address customers’ concerns about risk and the desire for maximum flexibility.


The evolution of SAP Cloud ERP – HPE GreenLake SAP

Event: Cloud Field Day 23

Appearance: HPE SAP GreenLake Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Kelly Smith

The presentation by HPE at Cloud Field Day 23, led by Kelly Smith from SAP, centered around the evolution of SAP Cloud ERP. The session primarily focused on SAP Cloud ERP, the RISE with SAP methodology, and the SAP Flywheel Effect Strategy. It also covered private cloud ERP transition options.

Smith began by discussing the history of SAP, highlighting its transformation from mainframe-based systems to the current cloud ERP offerings. The core of the presentation revolved around SAP Cloud ERP, which offers a private cloud deployment model, often referred to as RISE. This model bundles the software, infrastructure (hyperscaler resources or HPE GreenLake), and technical managed support under a single commercial agreement. This setup shifts responsibility for day-to-day operations, including upgrades and security, to SAP, enabling customers to focus on their core business strategies. The presentation emphasized SAP’s commitment to security, highlighting dedicated security teams and workshops, and explained how operational responsibilities are divided between SAP and the customer.

The discussion also addressed the flywheel effect and future AI capabilities, with the Business Technology Platform (BTP) handling integrations. The presentation touched upon the infrastructure-as-a-service layer, highlighting the advantages of dedicated hardware and the importance of managing data, particularly through acquisitions and changes in business scale, and noted that additional capacity can be accommodated through change requests. Finally, the presentation highlighted the importance of cybersecurity and the role SAP’s security teams play. SAP manages software upgrades, with customer input, but support is discontinued if upgrades fall too far behind.


Legacy SAP ERP Customers Are Nearly Out of Time – HPE GreenLake SAP

Event: Cloud Field Day 23

Appearance: HPE SAP GreenLake Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Jim Loiacono

It’s been a decade since S/4HANA was released, but less than half of the existing SAP ERP customers have upgraded, facing looming support deadlines. This presentation from HPE at Cloud Field Day 23 highlights the challenges and the novel solutions for those hesitant to make the move. HPE, along with SAP, is offering a more flexible approach, recognizing that a “one size fits all” approach isn’t suitable for all businesses. The presentation highlights that business disruption is a significant factor in customers delaying upgrades, underscoring the need to consider what is important to all stakeholders involved.

The session presents SAP’s cloud ERP offerings, including a “Customer Data Center” option built on HPE GreenLake. This provides a hybrid cloud environment, allowing customers to choose between public and private cloud deployments, and importantly, the flexibility to retain their existing data centers. HPE’s focus is to make the transition to SAP’s cloud offerings smoother, making it easier for customers to move forward. This approach addresses concerns about data sovereignty and control over upgrades. The conversation also highlights the need for integration and the various challenges that it encompasses when migrating to the new systems.

Ultimately, the presentation highlights the importance of the private cloud option, particularly for large enterprises with complex legacy systems. It highlights the need for more flexibility for all stakeholders. The session concludes that the “Customer Data Center” option, with HPE hardware and services, can provide the security, control, and flexibility that many customers require, while ensuring they can continue to receive the necessary support. The presentation emphasized that there are various options available to meet each customer’s needs, making this a comprehensive plan.


Seamless Business Continuity and Disaster Avoidance: Multi-Cloud Demonstration Workflow with Qumulo

Event: Cloud Field Day 23

Appearance: Qumulo Presents at Cloud Field Day 23

Company: Qumulo

Video Links:

Personnel: Brandon Whitelaw, Mike Chmiel

Qumulo presented a demonstration at Cloud Field Day 23 that showcased seamless business continuity and disaster avoidance in a multi-cloud environment.  The core of the presentation centered on simulating a hurricane threat to an on-premises environment, highlighting Qumulo’s ability to provide enterprise resilience and cloud-native scalability. Brandon Whitelaw demonstrated how Qumulo’s Cloud Data Fabric enables disaster avoidance through live application suspension and resumption, data portal redirection, cloud workload scaling, and high-performance edge caching with Qumulo EdgeConnect.  This allows the safe migration of data and applications to the cloud, ensuring continued access and continuity in the event of a disaster.

The demo’s primary focus was on illustrating the ease of transitioning data and operations to the cloud during a simulated disaster scenario. The process involved disconnecting the on-prem cluster and, using a small device like an ASUS NUC, accessing data seamlessly from the cloud. This seamless switch allowed government employees to continue their work at an off-site location. This was achieved through data portals, which enable efficient data transfer at 90% bandwidth utilization. It demonstrated the ability to maintain the user experience without requiring users to change behaviors or adopt new protocols.

Finally, Qumulo’s approach offers high bandwidth utilization and integration into a multitude of customer use cases, all while ensuring minimal downtime and data integrity during the process. They showed how edits made in the cloud could be instantly consistent with the on-prem solution, and they were able to quickly and effectively restore data access to users after the storm. Qumulo emphasized that the architecture allows businesses to be proactive, moving data to the cloud days before a disaster and reducing reliance on last-minute backups, promoting a more flexible, scalable approach to business continuity. With upcoming support for ARM and a focus on multi-cloud, Qumulo allows a great deal of flexibility in how a business manages its data.


Seamless Business Continuity and Disaster Avoidance with Qumulo

Event: Cloud Field Day 23

Appearance: Qumulo Presents at Cloud Field Day 23

Company: Qumulo

Video Links:

Personnel: Brandon Whitelaw

This Qumulo presentation at Cloud Field Day 23 focuses on delivering business continuity and disaster avoidance through its platform. Qumulo leverages hybrid-cloud architectures to ensure uninterrupted data access and operational resilience by seamlessly synchronizing and migrating unstructured enterprise data between on-premises and cloud environments. This empowers organizations to remain agile in the face of disruptions.

The presentation dives into two main approaches. The first leverages cloud elasticity for a cost-effective disaster recovery solution. By backing up on-premises data to cloud-native, cold storage tiers, Qumulo allows for near-instantaneous failover to an active system. This approach utilizes the same underlying hardware performance for both active and cold storage tiers, enabling a rapid transition and incurring higher costs only when necessary. This is a more cost-effective alternative to building a complete, on-premises continuity and hot standby data center.

The second approach emphasizes building continuity and availability from the ground up. By deploying a cloud-native Qumulo system, the presentation highlights the benefits of multi-zone availability within a region, offering greater durability and resilience compared to traditional on-premises setups. Qumulo’s data fabric ensures real-time data synchronization between on-prem and cloud environments, with data creation cached locally and then instantly available across all connected locations. This offers significant cost savings and operational efficiency by eliminating the need for traditional replication and failover procedures.


Reimagining Data Management in a Hybrid-Cloud World with Qumulo

Event: Cloud Field Day 23

Appearance: Qumulo Presents at Cloud Field Day 23

Company: Qumulo

Video Links:

Personnel: Douglas Gourlay

The presentation by Qumulo at Cloud Field Day 23, led by Douglas Gourlay, focuses on the challenges and opportunities of modern data management in hybrid cloud environments. The presentation emphasizes the need for a unified, scalable, and intelligent approach across both on-premises and cloud infrastructures. The speakers prioritize customer stories and use cases to illustrate how Qumulo’s unique architecture provides enhanced performance, visibility, and simplicity for organizations.

A key theme of the presentation is the importance of innovation, specifically in addressing the evolving needs of customers. Qumulo focuses on unstructured data, highlighting its work with diverse clients, including those in the movie production, scientific research, and government sectors. The presentation highlights how Qumulo’s approach enables both data durability and high performance, particularly in scenarios involving edge-to-cloud data synchronization, disaster recovery, and AI-driven data processing.

The presentation showcases how Qumulo enables freedom of choice by supporting any hardware and any cloud environment. Their solutions are designed to manage large-scale data, extending file systems across various locations with strict consistency and high performance. By leveraging cloud elasticity for backup and tiering, Qumulo offers cost-effective options for disaster recovery and provides the agility to adapt to changing business needs.


Learn About Scality RING’s Exabyte Scale, Multidimensional Architecture with Scality

Event: Cloud Field Day 23

Appearance: Scality Presents at Cloud Field Day 23

Company: Scality

Video Links:

Personnel: Giorgio Regni

Scality’s Giorgio Regni presented at Cloud Field Day 23, focusing on the Scality RING’s exabyte-scale, multidimensional architecture. Scality’s origin story stems from addressing storage challenges for early cloud providers, such as Comcast. They found that existing solutions weren’t meeting the demands of petabyte-scale data and the need to compete with large providers. The company’s core concept is “scale,” and their system is designed to expand seamlessly across all crucial dimensions. This includes capacity, metadata, and throughput, allowing them to scale each of these components independently.

Regni emphasized the RING’s disaggregated design, highlighting its ability to overcome common storage bottlenecks. The architecture separates storage nodes, I/O daemons, and a connector layer, enabling independent scaling of each component. He shared impressive numbers, including 12 exabytes of data currently in production and 6 trillion objects stored, with individual customers holding billions of objects and running many applications against the system. The presentation also contrasted Scality’s approach with that of competitors like Ceph and MinIO, highlighting differences in metadata handling, bucket limits, and the flexibility of the architecture’s scaling capabilities.

Finally, the presentation covered the multi-layered architecture that supports various protocols, including S3, a custom REST protocol, and file system connectors. The architecture is based on a peer-to-peer distributed system with no single point of failure, supporting high availability and replication across multiple sites and tiers. It can manage different tiers, such as Ring XP, the all-flash configuration, and long-term storage. Scality RING also offers multi-tenancy and supports usage tracking, allowing customers to build their own billing systems; the overall goal is an infinitely scalable storage system.


Major European Bank’s Enterprise Cloud Built on RING & S3 APIs with Scality

Event: Cloud Field Day 23

Appearance: Scality Presents at Cloud Field Day 23

Company: Scality

Video Links:

Personnel: Aurelien Gelbart

Aurelien Gelbart’s presentation at Cloud Field Day 23 highlighted a major European bank’s successful deployment of a private cloud built on Scality RING and S3 APIs. The bank sought to consolidate disparate storage solutions, aiming for a lower cost per terabyte and easier adoption for its users. Leveraging Scality’s offerings, the bank achieved these goals, fostering widespread adoption of the new platform and significantly reducing costs.

The bank’s private cloud architecture comprises six independent RING clusters across three geographical regions, each with a production and disaster recovery platform. Utilizing Scality’s native S3 API support, the bank implemented multi-site replication and object lifecycle policies to meet stringent financial compliance requirements. The implementation allows the bank to run hundreds of production-grade applications, including database backups, financial market data storage, and document management.
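
The bank’s actual policies were not shown; because RING exposes a native S3 API, a lifecycle rule would look like standard boto3 usage pointed at the RING endpoint. A sketch with hypothetical endpoint, bucket, and retention window:

```python
# Sketch: apply an S3 lifecycle expiration rule against an S3-compatible
# RING endpoint. Endpoint, bucket name, and 7-year window are assumptions.
import boto3

s3 = boto3.client("s3", endpoint_url="https://ring.bank.internal")  # hypothetical

s3.put_bucket_lifecycle_configuration(
    Bucket="db-backups",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-old-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},      # apply to the whole bucket
            "Expiration": {"Days": 2555},  # roughly seven years, illustrative
        }]
    },
)
```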

The success of this deployment is evident in substantial growth, with the platform scaling from one petabyte to 50 petabytes of usable storage within seven years. This growth brought new challenges related to performance and resource management across the many different applications. Scality addressed these challenges by improving software performance through reconfiguration and architectural improvements. The results are impressive: the bank now operates 100 petabytes of usable storage, manages 200,000 S3 buckets, and processes 300 billion client objects, achieving a 75% reduction in the cost per terabyte per year compared to its previous solutions.


Leading Space Agency’s Long-Term Scientific Storage at Scale with Scality

Event: Cloud Field Day 23

Appearance: Scality Presents at Cloud Field Day 23

Company: Scality

Video Links:

Personnel: Nicolas Sayer

Scality’s presentation at Cloud Field Day 23 focused on their collaboration with a major European space agency, which utilizes Scality RING to manage massive scientific datasets in a hybrid storage model. The agency, dealing with data from approximately 200 satellites, faced challenges with legacy storage solutions and the need for a cost-effective, scalable, and easily accessible system for both live and archival data. Scality’s solution utilizes S3 as a unified namespace, providing a single access point for data regardless of its location, whether on hot or cold storage.

The solution employs a multi-tiered approach, where live data is stored on the RING for active analysis and subsequently moved to tape for long-term archiving after six months. This vast amount of cold data, representing hundreds of petabytes, is managed using a custom-built API called TLP (Tape Library Protocol) to integrate with hierarchical storage management (HSM) systems from partners such as HP, Atempo, IBM, and Spectra. TLP handles the retrieval and storage of data to tape, providing transparent access for users through the S3 Glacier API. Moving infrequently accessed data to tape yields cost savings and energy efficiency.
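
Since recall goes through the S3 Glacier API, client access presumably follows the standard restore pattern; a minimal sketch with hypothetical endpoint, bucket, and key names:

```python
# Sketch: request recall of a tape-archived object via the standard S3
# Glacier restore call, then check its status. All names are hypothetical.
import boto3

s3 = boto3.client("s3", endpoint_url="https://ring.agency.internal")  # assumed

s3.restore_object(
    Bucket="satellite-archive",      # hypothetical bucket
    Key="mission-42/scene-0001.h5",  # hypothetical object
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}},
)

head = s3.head_object(Bucket="satellite-archive", Key="mission-42/scene-0001.h5")
print(head.get("Restore"))  # e.g. 'ongoing-request="true"' while the recall runs
```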

This architecture offers several advantages, including data durability through a three-site stretch ring, with data replicated across two tape libraries for enhanced resilience. The agency’s users and applications interact with the data via a single namespace using S3, unaware of the underlying complexity of the hybrid storage system. This transparency, combined with the cost-effectiveness of the solution and security features like object lock, has made Scality’s solution a key enabler for long-term data access and efficient workflows for the space agency.


S3 Object Storage for Real-Time Compliance Analytics at a Large US Global Bank with Scality

Event: Cloud Field Day 23

Appearance: Scality Presents at Cloud Field Day 23

Company: Scality

Video Links:

Personnel: Ben Morge

Ben Morge, VP of Customer Success at Scality, presented a deployment of Scality RING for a large US bank that needed to store 40 petabytes of Splunk SmartStore data across two sites. The bank required active-active replication and a one-year data retention period. RING’s S3 compatibility enabled seamless integration with Splunk, allowing the indexers to tier data from hot, fast flash storage to the warm Scality RING, which consisted of 80 servers. The data is immutable on the RING for one year, with the Splunk application handling data deletion.

The Scality deployment leverages S3 for data storage. The solution employs a two-site architecture, where each site features Splunk indexers and a hot storage cluster. The indexers are responsible for replicating indexes and references to objects stored on the ring. Scality manages object replication, utilizing cross-region replication, which is the S3 standard. The system addresses network issues by employing infinite replay to ensure reliable replication and decoupling storage from compute.
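
Because the replication follows the S3 cross-region replication standard, the configuration should resemble ordinary CRR; a sketch with hypothetical bucket names, endpoint, and a placeholder role ARN:

```python
# Sketch: standard S3 cross-region replication configuration of the kind
# an S3-compatible system can honor. Buckets, role, and endpoint are examples.
import boto3

s3 = boto3.client("s3", endpoint_url="https://ring-site-a.bank.internal")  # assumed

s3.put_bucket_replication(
    Bucket="splunk-warm",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication",  # placeholder
        "Rules": [{
            "ID": "site-a-to-site-b",
            "Status": "Enabled",
            "Prefix": "",  # replicate every object
            "Destination": {"Bucket": "arn:aws:s3:::splunk-warm-dr"},
        }],
    },
)
```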

The presentation highlighted a couple of challenges and their solutions. The initial ingest exceeding firewall throughput was resolved with infinite replay. Early difficulties with read traffic exceeding CPU capacity and overwhelming the flash caches were addressed by decoupling the architecture and adding compute resources for the metadata layer and S3 stateless services, resulting in simultaneous throughput of 75 gigabytes per second on both sites and a fully replicated cluster, with all objects replicated in under two minutes. The customer’s deployment has been running successfully in active production for over five years.


How we Help Our Customers to Build Exabyte-Scale Clouds with Scality

Event: Cloud Field Day 23

Appearance: Scality Presents at Cloud Field Day 23

Company: Scality

Video Links:

Personnel: Paul Speciale

Paul Speciale’s presentation at Cloud Field Day 23 highlighted Scality’s approach to helping customers build exabyte-scale clouds. The presentation opened by addressing the shift towards private cloud computing driven by AI workloads and data sovereignty concerns. Scality RING, already managing a significant amount of data, is chosen by market leaders, including major telcos and banks, to maintain control and achieve cloud-scale performance. The presentation’s core message establishes the business imperative that is driving enterprises away from public cloud dependencies and towards hybrid architectures for both compliance and competitive advantages.

Scality’s presentation focuses on its RING product as the primary solution for cloud infrastructure, data lakes, and data protection. It emphasized the RING’s S3-compatibility, deep support for advanced APIs, and a cloud-like identity and access management system. Furthermore, the presentation highlighted the RING’s distributed data protection, geo-stretched capabilities for high availability, utilization tracking, and the Core 5 initiative that focuses on cyber resiliency. The presentation emphasized the importance of multiscale architecture in a cloud environment due to the varying workload patterns and I/O needs.

The presentation traced Scality’s market entry in 2010, coinciding with the rise of cloud services, when the company set out to provide scale-out storage with its RING product, since adopted by major telcos and financial institutions. Scality’s presentation also included customer stories involving U.S. and European banks, the space industry, and Iron Mountain, highlighting the versatility of RING across applications and deployment sizes. Scality’s responses to questions highlighted backup and automated tiering capabilities within the RING system, underscoring its design for high-capacity use cases.


Cloud Rewind for Cloud Native Applications an Overview with Commvault

Event: Cloud Field Day 23

Appearance: Commvault presents at Cloud Field Day 23

Company: Commvault

Video Links:

Personnel: Govind Rangasamy

In this session, Govind Rangasamy presents Commvault Cloud Rewind, a solution designed to protect cloud-native workloads and enable recovery to a pre-disaster or pre-cyberattack state across AWS, Azure, and GCP. Cloud Rewind facilitates in-place recoveries or recoveries into different tenants and regions, offering businesses peace of mind and the ability to make an attack seem like it never happened. The presentation highlights the challenges of protecting cloud applications, emphasizing their dynamic, distributed nature, rapid change frequency, and significant scale compared to traditional applications. These factors lead to increased complexity in managing and protecting these environments.

Cloud Rewind tackles these challenges by offering a “cloud time machine” and a “Recovery Escort” feature. The tool addresses the limitations of traditional disaster recovery by capturing not only data but also configurations and dependencies, which it tracks through continuous discovery. Recovery Escort automates the rebuilding of the entire application environment using infrastructure-as-code, simplifying recovery by combining multiple runbooks into a single, automated process. Cloud Rewind leverages native cloud services, such as AWS and Azure backup services, for data management flexibility, enabling backup, replication, and recovery options within the customer’s cloud environment.

The core benefit of Cloud Rewind, as showcased, is its ability to dramatically reduce recovery time, enabling rapid recovery and testing. Customers can perform comprehensive recovery tests with a few clicks, achieving recoveries in minutes instead of the days required by traditional methods. The tool offers extreme automation by allowing for rebuilding or recovering in a different availability zone, region, or account, which further enhances the service’s ability to deliver application resiliency. It also integrates with other Commvault solutions, promising a unified platform for managing multi-cloud and hybrid cloud environments.


Introducing Clumio by Commvault Cloud

Event: Cloud Field Day 23

Appearance: Commvault presents at Cloud Field Day 23

Company: Commvault

Video Links:

Personnel: Akshay Joshi

Clumio by Commvault Cloud offers scalable and efficient data protection for AWS S3 and DynamoDB, addressing the limitations of native AWS capabilities. The presentation highlighted Clumio’s features, including a new recovery modality called S3 Backtrack, and emphasized the importance of air-gapped backups for data resilience. Clumio provides fully managed backup-as-a-service, eliminating the need for managing infrastructure, agents, and servers. The solution offers logical air-gapped backups stored within AWS, but outside a customer’s enterprise security sphere, offering enhanced security and immutability.

The presentation emphasized Clumio’s focus on simplicity, performance, and cost-effectiveness. Clumio claims a 10x faster restore performance compared to competitors and a 30% lower cost. Key features include protection groups for granular backup and restore of S3 buckets, based on various vectors such as tags, prefixes, and regions. For DynamoDB, Clumio offers incremental backups using DynamoDB streams, providing cost savings and the ability to retain numerous backup copies.
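
Clumio’s pipeline is proprietary, but the underlying AWS primitive is the DynamoDB stream of item-level changes; a hedged sketch of reading such a stream (the table name is hypothetical, and the real service is far more elaborate):

```python
# Sketch: read item-level change records from a DynamoDB stream, the
# primitive that enables incremental backups. Table name is hypothetical.
import boto3

ddb = boto3.client("dynamodb")
streams = boto3.client("dynamodbstreams")

arn = ddb.describe_table(TableName="users")["Table"]["LatestStreamArn"]
shards = streams.describe_stream(StreamArn=arn)["StreamDescription"]["Shards"]

for shard in shards:
    it = streams.get_shard_iterator(
        StreamArn=arn,
        ShardId=shard["ShardId"],
        ShardIteratorType="TRIM_HORIZON",  # start from the oldest retained record
    )["ShardIterator"]
    for rec in streams.get_records(ShardIterator=it)["Records"]:
        print(rec["eventName"], rec["dynamodb"].get("Keys"))  # INSERT/MODIFY/REMOVE
```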

The presentation concluded with case studies demonstrating the effectiveness of Clumio’s solutions. Atlassian saw a 70% reduction in costs and improved RPO and RTO. Duolingo achieved over 70% savings on DynamoDB backups, with the added benefits of immutability and air-gapping. Clumio’s architecture, utilizing serverless technologies and AWS EventBridge, enables automation and scalability. The solution offers options for encryption key management and supports data residency requirements with regional control planes.


A New Era of Cyber Resilience with Commvault Cloud

Event: Cloud Field Day 23

Appearance: Commvault presents at Cloud Field Day 23

Company: Commvault

Video Links:

Personnel: Michael Fasulo

Organizations need to operate continuously, especially with the shift to cloud-first strategies. Commvault Cloud aims to solve the challenges of this shift, particularly in security. Michael Fasulo introduced Commvault Cloud, a cyber resilience platform designed for cloud-first enterprises. This platform addresses challenges like ransomware, hybrid/multi-cloud complexity, and regulatory compliance. Commvault offers a unified platform that incorporates a zero-trust foundation, AI-driven data protection features, and various cloud-first capabilities, including threat detection, anomaly detection, and cyber resilience testing. The platform offers flexibility in deployment, supporting on-premises, SaaS, and appliance models to meet the diverse needs of customers.

Commvault’s platform emphasizes a proactive approach to cyber resilience. It utilizes a zero-trust architecture, featuring CIS-level hardened images, multi-factor authentication (MFA), and least privilege access. A key aspect is the Commvault Risk Analysis product, which provides in-depth data discovery, classification, and remediation capabilities, including integration with Microsoft Purview. The platform also focuses on operational recovery through end-to-end portability, enabling workload transformation. To further enhance security, Commvault offers ThreatWise, which deploys deception technology to lure and analyze threats. This is complemented by integrations with various SIEM and SOAR platforms for centralized threat response.

To educate customers, Commvault has launched “Minutes of the Meltdown” for executives and “Recovery Range” for hands-on keyboard experience during simulated cyber attacks. Recovery Range allows teams to test their response to various threats and validate the effectiveness of the Commvault Cloud platform. This includes features like anomaly detection and automated exclusion of threats during recovery. The platform also offers the option for custom dashboards and extensive reporting capabilities, allowing customers to tailor the view of their security posture to their specific needs.