Observe – Analyze – Act. An introduction to HPE OpsRamp

Event: Cloud Field Day 23

Appearance: HPE OpsRamp Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Cato Grace

HPE’s presentation at Cloud Field Day 23 introduced OpsRamp, a SaaS platform designed to address the challenges of modern IT environments. OpsRamp provides a unified approach to managing diverse infrastructure by focusing on observing, analyzing, and acting on collected data. This involves ingesting data from various sources, such as applications, servers, and cloud environments, into a central tool, enabling users to access all their data in one place.

The platform’s key features include robust analytics and automation capabilities. OpsRamp uses machine learning to help users analyze data, identify issues, and automate corrective actions. Combined with integrations to more than 3,000 systems, this automation streamlines issue resolution and can significantly reduce resolution times. OpsRamp also offers both agent-based and agentless monitoring options, providing flexibility depending on the type of resource being monitored.

OpsRamp differentiates itself by offering full-stack monitoring alongside an AI-powered analytics engine that can integrate with existing monitoring tools and correlate alerts across them. The platform’s licensing model is subscription-based, determined by the number of monitored resources and the volume of metrics collected, with data retention policies tailored to different data types.


Customer Discussion – SAP Cloud ERP Customers Prefer CDC with HPE GreenLake SAP

Event: Cloud Field Day 23

Appearance: HPE SAP GreenLake Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Jim Loiacono, Randall Grogan

HPE presented a customer discussion at Cloud Field Day focused on the adoption of SAP Cloud ERP (formerly RISE) with the CDC option, specifically highlighting the preferences of Energy Transfer, a major midstream pipeline company. The company chose the CDC option over the hyperscale cloud for several reasons, including the existing data center infrastructure, which offered lower latency and better control over cybersecurity. Energy Transfer’s decision was also influenced by the need to integrate with dependent applications, such as those used for hydrocarbon management.

The presentation emphasized that the chosen solution gives Energy Transfer a cost-neutral transition, and it included a discussion of AI capabilities and the benefits of accessing innovation through the SAP Cloud ERP roadmap. The customer shared that they were early adopters of SAP S/4HANA and are strongly committed to the AI capabilities coming with cloud ERP; choosing the CDC option makes it easier to adopt those capabilities in the future.

Finally, the presentation emphasized the importance of a robust governance model and the availability of flexible options for customers considering SAP Cloud ERP, including sandbox environments and customized implementation approaches. The discussion also addressed the shared responsibility model employed by the CDC and its approach to managing risks, including cybersecurity threats such as ransomware. Ultimately, HPE’s presentation highlighted the value of a hybrid approach to SAP solutions, enabling customers like Energy Transfer to tailor their deployments according to their specific needs and priorities.


Modern Cloud – SAP Cloud ERP, Customer Data Center CDC with HPE GreenLake SAP

Event: Cloud Field Day 23

Appearance: HPE SAP GreenLake Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Jim Loiacono, Kelly Smith

HPE’s presentation at Cloud Field Day focused on the shift from declining on-premises and public cloud SAP deployments to increasing hybrid and private cloud solutions. Customers are seeking to mitigate risk, maintain control, and address the complex application dependencies inherent in SAP environments. HPE highlights its SAP Cloud ERP, Customer Data Center (CDC) offering as a true “Modern Cloud” solution, recognizing that customers require choices. The presentation highlights that predictable latency is crucial for integrations, making dedicated CDC solutions, with dedicated network paths and firewalls, a compelling option over hyperscale cloud offerings, where resources are shared.

The presentation emphasizes the importance of choice and flexibility when transitioning to cloud ERP.  HPE’s approach caters to various customer needs, offering options such as “lift and shift” and “tailored” methods, which enable customers to transition to cloud ERP without requiring an immediate data center migration. HPE’s strategy is designed to make the transition to cloud ERP as seamless as possible by acknowledging that moving from an existing ERP system to a new one, even with the same vendor, presents a significant project.

A key takeaway from the presentation revolves around the shift to subscription-based models. While it’s the direction most software companies are moving, HPE acknowledges the resistance many customers, like Energy Transfer, have to moving away from perpetual licensing. Jim Loiacono and Kelly Smith highlighted the ability to support those who want to “stay the course” or begin their SAP HANA journey with a lift-and-shift approach, understanding the need to address customers’ concerns about risk and the desire for maximum flexibility.


The evolution of SAP Cloud ERP – HPE GreenLake SAP

Event: Cloud Field Day 23

Appearance: HPE SAP GreenLake Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Kelly Smith

The presentation by HPE at Cloud Field Day 23, led by Kelly Smith from SAP, centered around the evolution of SAP Cloud ERP. The session primarily focused on SAP Cloud ERP, the RISE with SAP methodology, and the SAP Flywheel Effect Strategy. It also covered private cloud ERP transition options.

Smith began by discussing the history of SAP, highlighting its transformation from mainframe-based systems to the current cloud ERP offerings. The core of the presentation revolved around SAP Cloud ERP, which offers a private cloud deployment model, often referred to as RISE. This model bundles the software, the infrastructure (hyperscaler resources or HPE GreenLake), and technical managed support under a single commercial agreement. This setup shifts responsibility for day-to-day operations, including upgrades and security, to SAP, enabling customers to focus on their core business strategies. The presentation emphasized SAP’s commitment to security, highlighting dedicated security teams and workshops, as well as how responsibilities are divided between SAP and the customer.

The discussion also addressed integrations, the flywheel effect, and future AI integration, particularly the Business Technology Platform (BTP) for integrations. The presentation touched on the infrastructure-as-a-service layer, highlighting the advantages of dedicated hardware and the importance of managing data, particularly for acquisitions and changes in business scale. It also addressed the need for additional capacity and SAP’s ability to accommodate it through change requests. Finally, the presentation highlighted the importance of cybersecurity and the role SAP’s security teams play. SAP manages software upgrades, with customer input, but support is discontinued if upgrades fall too far behind.


SAP Legacy SAP ERP Customers Are Nearly Out of Time – HPE GreenLake SAP

Event: Cloud Field Day 23

Appearance: HPE SAP GreenLake Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Jim Loiacono

It’s been a decade since S/4HANA was released, but less than half of existing SAP ERP customers have upgraded, and support deadlines are looming. This presentation from HPE at Cloud Field Day 23 highlights the challenges and the novel solutions for those hesitant to make the move. HPE, along with SAP, is offering a more flexible approach, recognizing that one size does not fit all businesses. The presentation highlights that business disruption is a significant factor in customers delaying upgrades, underscoring the need to consider what is important to all stakeholders involved.

The session presents SAP’s cloud ERP offerings, including a “Customer Data Center” option built on HPE GreenLake. This provides a hybrid cloud environment, allowing customers to choose between public and private cloud deployments, and importantly, the flexibility to retain their existing data centers. HPE’s focus is to make the transition to SAP’s cloud offerings smoother, making it easier for customers to move forward. This approach addresses concerns about data sovereignty and control over upgrades. The conversation also highlights the need for integration and the various challenges that it encompasses when migrating to the new systems.

Ultimately, the presentation highlights the importance of the private cloud option, particularly for large enterprises with complex legacy systems, and the need for more flexibility for all stakeholders. The session concludes that the “Customer Data Center” option, with HPE hardware and services, can provide the security, control, and flexibility that many customers require, while ensuring they continue to receive the necessary support. The presentation emphasized that a range of options is available to meet each customer’s needs.


Seamless Business Continuity and Disaster Avoidance: Multi-Cloud Demonstration Workflow with Qumulo

Event: Cloud Field Day 23

Appearance: Qumulo Presents at Cloud Field Day 23

Company: Qumulo

Video Links:

Personnel: Brandon Whitelaw, Mike Chmiel

Qumulo presented a demonstration at Cloud Field Day 23 that showcased seamless business continuity and disaster avoidance in a multi-cloud environment.  The core of the presentation centered on simulating a hurricane threat to an on-premises environment, highlighting Qumulo’s ability to provide enterprise resilience and cloud-native scalability. Brandon Whitelaw demonstrated how Qumulo’s Cloud Data Fabric enables disaster avoidance through live application suspension and resumption, data portal redirection, cloud workload scaling, and high-performance edge caching with Qumulo EdgeConnect.  This allows the safe migration of data and applications to the cloud, ensuring continued access and continuity in the event of a disaster.

The demo’s primary focus was on illustrating the ease of transitioning data and operations to the cloud during a simulated disaster scenario. The process involved disconnecting the on-prem cluster and, using a small device such as an ASUS NUC, accessing data seamlessly from the cloud. This seamless switch allowed government employees to continue their work at an off-site location. This was achieved through data portals, which enable efficient data transfer with 90% bandwidth utilization. It demonstrated the ability to maintain the user experience without requiring users to change behaviors or adopt new protocols.

Finally, Qumulo’s approach offers high bandwidth utilization and integration into a multitude of customer use cases, all while ensuring minimal downtime and data integrity during the process. They showed how edits made in the cloud could be instantly consistent with the on-prem solution, and they were able to quickly and effectively restore data access to users after the storm. Qumulo emphasized that the architecture allows businesses to be proactive, moving data to the cloud days before a disaster and reducing the reliance on last-minute backups. With the upcoming support for ARM and the focus on multi-cloud, Qumulo allows a great deal of flexibility in how a business manages its data.


Seamless Business Continuity and Disaster Avoidance with Qumulo

Event: Cloud Field Day 23

Appearance: Qumulo Presents at Cloud Field Day 23

Company: Qumulo

Video Links:

Personnel: Brandon Whitelaw

This Qumulo presentation at Cloud Field Day 23 focuses on delivering business continuity and disaster avoidance through its platform. Qumulo leverages hybrid-cloud architectures to ensure uninterrupted data access and operational resilience by seamlessly synchronizing and migrating unstructured enterprise data between on-premises and cloud environments. This empowers organizations to remain agile in the face of disruptions.

The presentation dives into two main approaches. The first leverages cloud elasticity for a cost-effective disaster recovery solution. By backing up on-premises data to cloud-native, cold storage tiers, Qumulo allows for near-instantaneous failover to an active system. This approach utilizes the same underlying hardware performance for both active and cold storage tiers, enabling a rapid transition and incurring higher costs only when necessary. This is a more cost-effective alternative to building a complete, on-premises continuity and hot standby data center.

The second approach emphasizes building continuity and availability from the ground up. By deploying a cloud-native Qumulo system, the presentation highlights the benefits of multi-zone availability within a region, offering greater durability and resilience compared to traditional on-premises setups. Qumulo’s data fabric ensures real-time data synchronization between on-prem and cloud environments, with data creation cached locally and then instantly available across all connected locations. This offers significant cost savings and operational efficiency by eliminating the need for traditional replication and failover procedures.


Reimagining Data Management in a Hybrid-Cloud World with Qumulo

Event: Cloud Field Day 23

Appearance: Qumulo Presents at Cloud Field Day 23

Company: Qumulo

Video Links:

Personnel: Douglas Gourlay

The presentation by Qumulo at Cloud Field Day 23, led by Douglas Gourlay, focuses on the challenges and opportunities of modern data management in hybrid cloud environments. The presentation emphasizes the need for a unified, scalable, and intelligent approach across both on-premises and cloud infrastructures. The speakers prioritize customer stories and use cases to illustrate how Qumulo’s unique architecture provides enhanced performance, visibility, and simplicity for organizations.

A key theme of the presentation is the importance of innovation, specifically in addressing the evolving needs of customers. Qumulo focuses on unstructured data, highlighting its work with diverse clients, including those in the movie production, scientific research, and government sectors. The presentation highlights how Qumulo’s approach enables both data durability and high performance, particularly in scenarios involving edge-to-cloud data synchronization, disaster recovery, and AI-driven data processing.

The presentation showcases how Qumulo enables freedom of choice by supporting any hardware and any cloud environment. Their solutions are designed to manage large-scale data, extending file systems across various locations with strict consistency and high performance. By leveraging cloud elasticity for backup and tiering, Qumulo offers cost-effective options for disaster recovery and provides the agility to adapt to changing business needs.


Learn About Scality RING’s Exabyte Scale, Multidimensional Architecture with Scality

Event: Cloud Field Day 23

Appearance: Scality Presents at Cloud Field Day 23

Company: Scality

Video Links:

Personnel: Giorgio Regni

Scality’s Giorgio Regni presented at Cloud Field Day 23, focusing on the Scality RING’s exabyte-scale, multidimensional architecture. Scality’s origin story stems from addressing storage challenges for early cloud providers, such as Comcast. They found that existing solutions weren’t meeting the demands of petabyte-scale data and the need to compete with large providers. The company’s core concept is “scale,” and their system is designed to expand seamlessly across all crucial dimensions. This includes capacity, metadata, and throughput, allowing them to scale each of these components independently.

Regni emphasized the RING’s disaggregated design, highlighting its ability to overcome common storage bottlenecks. The architecture separates storage nodes, I/O daemons, and a connector layer, enabling independent scaling of each component. He shared impressive numbers, including 12 exabytes of data currently in production and 6 trillion objects stored, with individual customers holding billions of objects and many applications using the system. The presentation also contrasted Scality’s approach with that of competitors like Ceph and MinIO, highlighting differences in metadata handling, bucket limits, and the flexibility of the architecture’s scaling capabilities.

Finally, the presentation covered the multi-layered architecture that supports various protocols, including S3, a custom REST protocol, and file system connectors. The architecture is based on a peer-to-peer distributed system with no single point of failure, supporting high availability and replication across multiple sites and tiers. It can manage different tiers, such as RING XP, the all-flash configuration, and long-term storage. Scality RING also offers multi-tenancy and supports usage tracking, allowing customers to build their own billing systems, with the overall goal of delivering an infinitely scalable storage solution.
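Because the RING exposes the standard S3 API, ordinary S3 tooling can be pointed at it. The following minimal sketch uses Python and boto3 against a hypothetical RING endpoint; the URL, credentials, and bucket name are placeholders rather than Scality specifics.

```python
# Minimal sketch, assuming a hypothetical S3-compatible RING endpoint and placeholder credentials.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.ring.example.internal",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Standard S3 calls work unchanged against an S3-compatible store.
s3.create_bucket(Bucket="analytics-data")
s3.put_object(Bucket="analytics-data", Key="reports/2025/q1.parquet", Body=b"example bytes")
for obj in s3.list_objects_v2(Bucket="analytics-data").get("Contents", []):
    print(obj["Key"], obj["Size"])
```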


Major European Bank’s Enterprise Cloud Built on RING & S3 APIs with Scality

Event: Cloud Field Day 23

Appearance: Scality Presents at Cloud Field Day 23

Company: Scality

Video Links:

Personnel: Aurelien Gelbart

Aurelien Gelbart’s presentation at Cloud Field Day 23 highlighted a major European bank’s successful deployment of a private cloud built on Scality RING and S3 APIs. The bank sought to consolidate disparate storage solutions, aiming for a lower cost per terabyte and easier adoption for its users. Leveraging Scality’s offerings, the bank achieved these goals, fostering widespread adoption of the new platform and significantly reducing costs.

The bank’s private cloud architecture comprises six independent RING clusters across three geographical regions, each with a production and disaster recovery platform. Utilizing Scality’s native S3 API support, the bank implemented multi-site replication and object lifecycle policies to meet stringent financial compliance requirements. The implementation allows the bank to run hundreds of production-grade applications, including database backups, financial market data storage, and document management.
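As one illustration of the kind of object lifecycle policy mentioned above, the sketch below uses boto3 to apply an expiration rule to an S3 bucket; the endpoint, bucket name, prefix, and retention period are hypothetical and not the bank’s actual settings.

```python
# Illustrative lifecycle rule; endpoint, bucket, prefix, and retention are hypothetical.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.ring.example.internal")

s3.put_bucket_lifecycle_configuration(
    Bucket="db-backups",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-backups",
                "Filter": {"Prefix": "daily/"},
                "Status": "Enabled",
                "Expiration": {"Days": 90},  # assumed retention window
            }
        ]
    },
)
```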

The success of this deployment is evident in substantial growth, with the platform scaling from one petabyte to 50 petabytes of usable storage within seven years. This growth brought new challenges related to performance and resource management across the many different applications. Scality addressed these challenges by improving software performance through reconfiguration and architectural improvements. The results are impressive: the bank now operates 100 petabytes of usable storage, manages 200,000 S3 buckets, and processes 300 billion client objects, achieving a 75% reduction in the cost per terabyte per year compared to its previous solutions.


Leading Space Agency’s Long-Term Scientific Storage at Scale with Scality

Event: Cloud Field Day 23

Appearance: Scality Presents at Cloud Field Day 23

Company: Scality

Video Links:

Personnel: Nicolas Sayer

Scality’s presentation at Cloud Field Day 23 focused on their collaboration with a major European space agency, which utilizes Scality RING to manage massive scientific datasets in a hybrid storage model. The agency, dealing with data from approximately 200 satellites, faced challenges with legacy storage solutions and the need for a cost-effective, scalable, and easily accessible system for both live and archival data. Scality’s solution utilizes S3 as a unified namespace, providing a single access point for data regardless of its location, whether on hot or cold storage.

The solution employs a multi-tiered approach, where live data is stored on the RING for active analysis and then moved to tape for long-term archiving after six months. This vast amount of cold data, representing hundreds of petabytes, is managed using a custom-built API called TLP (Tape Library Protocol) to integrate with hierarchical storage management (HSM) systems from partners such as HP, Atempo, IBM, and Spectra. TLP handles the retrieval and storage of data to tape, providing transparent access for users through the S3 Glacier API. Moving data to tape when it is not frequently accessed delivers cost savings and energy efficiency.
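To illustrate the access pattern described, the hedged sketch below shows how an application could recall a tape-archived object through the standard S3 Glacier restore API using boto3; the endpoint, bucket, key, and restore tier are assumptions for illustration, not the agency’s configuration.

```python
# Illustrative recall of an archived object via the S3 Glacier restore API.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.ring.example.internal")  # hypothetical endpoint

# Ask for the object to be staged back from the archive (tape, in the architecture described).
s3.restore_object(
    Bucket="satellite-archive",
    Key="mission-42/raw/scene-000123.h5",
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}},
)

# Callers can poll the object's Restore status and read it with a normal GET once staged.
head = s3.head_object(Bucket="satellite-archive", Key="mission-42/raw/scene-000123.h5")
print(head.get("Restore"))  # e.g. 'ongoing-request="true"' while the recall is in flight
```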

This architecture offers several advantages, including data durability through a three-site stretch ring, with data replicated across two tape libraries for enhanced resilience. The agency’s users and applications interact with the data via a single namespace using S3, unaware of the underlying complexity of the hybrid storage system. This transparency, combined with the cost-effectiveness of the solution and security features like object lock, has made Scality’s solution a key enabler for long-term data access and efficient workflows for the space agency.


S3 Object Storage for Real-Time Compliance Analytics at a Large US Global Bank with Scality

Event: Cloud Field Day 23

Appearance: Scality Presents at Cloud Field Day 23

Company: Scality

Video Links:

Personnel: Ben Morge

Ben Morge, VP of Customer Success at Scality, presented a deployment of Scality RING for a large US bank that needed to store 40 petabytes of Splunk SmartStore data across two sites. The bank required active-active replication and a one-year data retention period. RING’s S3 compatibility enabled seamless integration with Splunk, allowing the indexers to tier data from hot, fast flash storage to the warm Scality RING, which consisted of 80 servers. The data is immutable on the RING for one year, with the Splunk application handling data deletion.

The Scality deployment leverages S3 for data storage. The solution employs a two-site architecture, where each site features Splunk indexers and a hot storage cluster. The indexers are responsible for replicating indexes and references to objects stored on the RING. Scality manages object replication, utilizing cross-region replication, which is the S3 standard. The system addresses network issues by employing infinite replay to ensure reliable replication and by decoupling storage from compute.
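For illustration, the sketch below shows what an S3-standard cross-region replication rule looks like when configured with boto3; the endpoint, role ARN, and bucket names are placeholders rather than the bank’s actual configuration.

```python
# Illustrative S3 cross-region replication rule; role ARN and bucket names are placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.site-a.example.internal")

s3.put_bucket_replication(
    Bucket="splunk-smartstore",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111111111111:role/replication-role",  # placeholder
        "Rules": [
            {
                "ID": "replicate-to-site-b",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::splunk-smartstore-site-b"},
            }
        ],
    },
)
```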

The presentation highlighted a couple of challenges and their solutions. The initial ingest exceeding firewall throughput was resolved with infinite replay. Initial difficulties with read traffic exceeding CPU capabilities, which overwhelmed the flash caches, were addressed by decoupling the architecture and adding compute resources to handle the metadata layer and the stateless S3 services. The result was simultaneous throughput of 75 gigabytes on both sites and a fully replicated cluster, with all objects replicated in under two minutes. The customer’s active production has been running successfully for over five years.


How we Help Our Customers to Build Exabyte-Scale Clouds with Scality

Event: Cloud Field Day 23

Appearance: Scality Presents at Cloud Field Day 23

Company: Scality

Video Links:

Personnel: Paul Speciale

Paul Speciale’s presentation at Cloud Field Day 23 highlighted Scality’s approach to helping customers build exabyte-scale clouds. The presentation opened by addressing the shift towards private cloud computing driven by AI workloads and data sovereignty concerns. Scality RING, already managing a significant amount of data, is chosen by market leaders, including major telcos and banks, to maintain control and achieve cloud-scale performance. The presentation’s core message establishes the business imperative that is driving enterprises away from public cloud dependencies and towards hybrid architectures for both compliance and competitive advantages.

Scality’s presentation focuses on its RING product as the primary solution for cloud infrastructure, data lakes, and data protection. It emphasized the RING’s S3-compatibility, deep support for advanced APIs, and a cloud-like identity and access management system. Furthermore, the presentation highlighted the RING’s distributed data protection, geo-stretched capabilities for high availability, utilization tracking, and the Core 5 initiative that focuses on cyber resiliency. The presentation emphasized the importance of multiscale architecture in a cloud environment due to the varying workload patterns and I/O needs.

The presentation showcased Scality’s market entry in 2010, coinciding with the rise of cloud services, when the company set out to provide scale-out storage with its RING product, since adopted by major telcos and financial institutions. Scality’s presentation also includes customer stories involving U.S. and European banks, the space industry, and Iron Mountain, highlighting the versatility of RING for various applications and deployment sizes. Scality’s response to questions highlighted backup and automated tiering capabilities within the RING system, underscoring its design for high-capacity use cases.


Cloud Rewind for Cloud Native Applications an Overview with Commvault

Event: Cloud Field Day 23

Appearance: Commvault presents at Cloud Field Day 23

Company: Commvault

Video Links:

Personnel: Govind Rangasamy

In this session, Govind Rangasamy presents Commvault Cloud Rewind, a solution designed to protect cloud-native workloads and enable recovery to a pre-disaster or pre-cyber-attack state across AWS, Azure, and GCP. Cloud Rewind facilitates in-place recoveries or recoveries into different tenants and regions, offering business peace of mind and the ability to make an attack seem like it never happened. The presentation highlights the challenges of protecting cloud applications, emphasizing their dynamic, distributed nature, rapid change frequency, and significant scale compared to traditional applications. These factors lead to increased complexity in managing and protecting these environments.

Cloud Rewind tackles these challenges by offering a “cloud time machine” and a “Recovery Escort” feature. The tool addresses the limitations of traditional disaster recovery by capturing not only data but also configurations and dependencies. It uses continuous discovery to track configurations and dependencies. Recovery Escort automates the rebuilding of the entire application environment using infrastructure-as-code, simplifying the recovery process by combining multiple runbooks into a single, automated process. Cloud Rewind leverages native cloud services, such as AWS and Azure backup services, to ensure data management flexibility. This enables options for backups, replication, and recovery within the customer’s cloud environment.
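As a generic illustration of capturing configuration alongside data (the general idea behind rebuilding an environment as code, not Commvault’s actual implementation or API), the sketch below snapshots basic EC2 instance metadata with boto3; the resource types and fields are chosen purely for illustration.

```python
# Generic configuration-capture sketch (not Commvault's API): record instance metadata so an
# environment could later be rebuilt as code. Resource types and fields are illustrative only.
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

inventory = []
for reservation in ec2.describe_instances()["Reservations"]:
    for inst in reservation["Instances"]:
        inventory.append({
            "instance_id": inst["InstanceId"],
            "instance_type": inst["InstanceType"],
            "subnet_id": inst.get("SubnetId"),
            "security_groups": [sg["GroupId"] for sg in inst.get("SecurityGroups", [])],
            "tags": {t["Key"]: t["Value"] for t in inst.get("Tags", [])},
        })

# Persist the point-in-time configuration alongside data backups to drive a later rebuild.
with open("environment-snapshot.json", "w") as f:
    json.dump(inventory, f, indent=2)
```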

The core benefit of Cloud Rewind, as showcased, is its ability to dramatically reduce recovery time, enabling rapid recovery and testing. Customers can perform comprehensive recovery tests with a few clicks, achieving recoveries in minutes instead of the days required by traditional methods. The tool offers extreme automation by allowing for rebuilding or recovering in a different availability zone, region, or account, which further enhances the service’s ability to deliver application resiliency. It also integrates with other Commvault solutions, promising a unified platform for managing multi-cloud and hybrid cloud environments.


Introducing Clumio by Commvault Cloud

Event: Cloud Field Day 23

Appearance: Commvault presents at Cloud Field Day 23

Company: Commvault

Video Links:

Personnel: Akshay Joshi

Clumio by Commvault Cloud offers scalable and efficient data protection for AWS S3 and DynamoDB, addressing the limitations of native AWS capabilities. The presentation highlighted Clumio’s features, including a new recovery modality called S3 Backtrack, and emphasized the importance of air-gapped backups for data resilience. Clumio provides fully managed backup-as-a-service, eliminating the need for managing infrastructure, agents, and servers. The solution offers logical air-gapped backups stored within AWS, but outside a customer’s enterprise security sphere, offering enhanced security and immutability.

The presentation emphasized Clumio’s focus on simplicity, performance, and cost-effectiveness. Clumio claims a 10x faster restore performance compared to competitors and a 30% lower cost. Key features include protection groups for granular backup and restore of S3 buckets, based on various vectors such as tags, prefixes, and regions. For DynamoDB, Clumio offers incremental backups using DynamoDB streams, providing cost savings and the ability to retain numerous backup copies.
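To make the streams-based incremental approach concrete, the sketch below reads item-level changes from a DynamoDB stream with boto3. It assumes streams are enabled on a hypothetical "orders" table and only illustrates the underlying AWS mechanism, not Clumio’s implementation.

```python
# Illustration of item-level change capture from DynamoDB Streams (not Clumio's implementation).
# Assumes streams are enabled on a hypothetical "orders" table.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")
streams = boto3.client("dynamodbstreams", region_name="us-east-1")

stream_arn = dynamodb.describe_table(TableName="orders")["Table"]["LatestStreamArn"]

for shard in streams.describe_stream(StreamArn=stream_arn)["StreamDescription"]["Shards"]:
    iterator = streams.get_shard_iterator(
        StreamArn=stream_arn,
        ShardId=shard["ShardId"],
        ShardIteratorType="TRIM_HORIZON",
    )["ShardIterator"]
    for record in streams.get_records(ShardIterator=iterator)["Records"]:
        # Each record carries an INSERT/MODIFY/REMOVE event to fold into an incremental backup.
        print(record["eventName"], record["dynamodb"].get("Keys"))
```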

The presentation concluded with case studies demonstrating the effectiveness of Clumio’s solutions. Atlassian saw a 70% reduction in costs along with improved RPO and RTO. Duolingo achieved over 70% savings on DynamoDB backups, with the added benefits of immutability and air-gapping. Clumio’s architecture, utilizing serverless technologies and AWS EventBridge, enables automation and scalability. The solution offers options for encryption key management and supports data residency requirements with regional control planes.


A New Era of Cyber Resilience with Commvault Cloud

Event: Cloud Field Day 23

Appearance: Commvault presents at Cloud Field Day 23

Company: Commvault

Video Links:

Personnel: Michael Fasulo

Organizations need to operate continuously, especially with the shift to cloud-first strategies. Commvault Cloud aims to solve the challenges of this shift, particularly in security. Michael Fasulo introduced Commvault Cloud, a cyber resilience platform designed for cloud-first enterprises. This platform addresses challenges like ransomware, hybrid/multi-cloud complexity, and regulatory compliance. Commvault offers a unified platform that incorporates a zero-trust foundation, AI-driven data protection features, and various cloud-first capabilities, including threat detection, anomaly detection, and cyber resilience testing. The platform offers flexibility in deployment, supporting on-premises, SaaS, and appliance models to meet the diverse needs of customers.

Commvault’s platform emphasizes a proactive approach to cyber resilience. It utilizes a zero-trust architecture, featuring CIS-level hardened images, multi-factor authentication (MFA), and least privilege access. A key aspect is the Commvault Risk Analysis product, which provides in-depth data discovery, classification, and remediation capabilities, including integration with Microsoft Purview. The platform also focuses on operational recovery through end-to-end portability, enabling workload transformation. To further enhance security, Commvault offers ThreatWise, which deploys deception technology to lure and analyze threats. This is complemented by integrations with various SIEM and SOAR platforms for centralized threat response.

To educate customers, Commvault has launched “Minutes to Meltdown” for executives and “Recovery Range” for hands-on keyboard experience during simulated cyber attacks. Recovery Range allows teams to test their response to various threats and validate the effectiveness of the Commvault Cloud platform. This includes features like anomaly detection and automated exclusion of threats during recovery. The platform also offers the option for custom dashboards and extensive reporting capabilities, allowing customers to tailor the view of their security posture to their specific needs.


Delegate Roundtable: Point Solutions or Platforms?

Event: Security Field Day 13

Appearance: Security Field Day 13 Delegate Roundtable Discussion

Company: Tech Field Day

Video Links:

Personnel: Tom Hollingsworth

This Security Field Day delegate roundtable discussion, led by Tom Hollingsworth, dives into “security overload,” where professionals are burdened with an excessive number of disparate security tools. The core of the discussion revolved around the fundamental question of whether to prefer point solutions—specialized tools designed for a single purpose—or integrated platforms that consolidate multiple functionalities. This debate stems from the common experience of needing dozens of tools for a single task, leading to management complexity and inefficiency.

The participants presented compelling arguments for both sides. Proponents of point solutions emphasized their specialized nature, allowing for the “best tool for the job” approach and often offering superior capabilities for specific tasks. However, the downside recognized was the challenge of integrating these numerous tools, leading to potential data silos, increased management complexity, and vendors sometimes deflecting responsibility when issues arise. Conversely, platforms were lauded for their potential to offer a unified experience, streamline vendor management, and simplify hiring expertise, particularly appealing to senior decision-makers due to perceived cost efficiencies and reduced operational friction. Yet, concerns were raised about platforms often failing to achieve true integration, resulting in functional gaps or even hamstringing overall capabilities due to inflexible dependencies.

The conversation also encompassed the economics of security tools, the role of open source versus commercial solutions, and the critical aspects of identity, authentication, and authorization. The “build versus buy” question was a recurring theme, with the understanding that while open-source tools might appear “free,” they often come with significant hidden costs in terms of maintenance and support, or even security risks. The discussion ultimately underscored that the choice between point solutions and platforms is not a simple binary, but rather depends on organizational maturity, budget, desired level of integration, and an awareness of the inherent trade-offs between specialized capabilities and simplified management.


cPacket Network Observability for Incident Validation and Compliance

Event: Security Field Day 13

Appearance: cPacket Presents at Security Field Day 13

Company: cPacket

Video Links:

Personnel: Andy Barnes, Ron Nevo

cPacket enables continuous security validation and compliance auditing with deep packet inspection, TLS certificate verification, and external domain access analysis. Its AI-enhanced observability platform ensures regulatory readiness, detects misconfigurations, and identifies policy drift across hybrid cloud and enterprise networks—helping security teams maintain an up-to-date posture and pass audits with real-time, actionable insights. cPacket’s solution focuses on ensuring that security postures don’t deteriorate over time due to new threats, outdated rules, misconfigurations, or broken integrations, which can lead to compliance breakdowns, especially in regulated industries like financial services and healthcare. They achieve this through Deep Packet Inspection (DPI) in their cStor appliances, which break down protocols like HTTPS, DNS, and LDAP to extract relevant metadata and performance data. This DPI capability, distinct from simple string matching, allows cPacket to understand protocol details and extract information crucial for security.

One key application of this capability is ensuring server compliance. cPacket’s dashboard provides real-time visibility into factors like TLS certificate status, cipher suite usage (e.g., ensuring adherence to TLS 1.2/1.3 and detecting insecure cipher suites), and the presence of expired certificates. This detailed monitoring helps organizations proactively identify and address compliance issues before they lead to regulatory scrutiny. Another powerful feature is DNS monitoring, which uses AI-enhanced agents to identify “unknown domains” by comparing accessed domains against known CSPs, CDNs, and top legitimate sites. This helps detect potentially malicious domains generated by Domain Generation Algorithms (DGAs) that might indicate a compromise.
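As a simple stand-in for the kind of TLS compliance check described, the sketch below uses Python’s ssl module to inspect a live endpoint’s negotiated protocol, cipher suite, and certificate expiry. The hostname and thresholds are illustrative, and this is not how cPacket performs the check; cPacket derives this information from captured packets rather than active probes.

```python
# Stand-in TLS compliance probe using Python's ssl module (cPacket works from packets, not probes).
import socket
import ssl
import time

def check_tls(host: str, port: int = 443) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            expires = ssl.cert_time_to_seconds(cert["notAfter"])
            return {
                "protocol": tls.version(),     # e.g. 'TLSv1.2' or 'TLSv1.3'
                "cipher": tls.cipher()[0],     # negotiated cipher suite name
                "days_until_expiry": int((expires - time.time()) // 86400),
            }

result = check_tls("example.com")  # hypothetical target
if result["protocol"] not in ("TLSv1.2", "TLSv1.3") or result["days_until_expiry"] < 30:
    print("non-compliant:", result)
```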

cPacket is also developing AI-driven agents that can query their observability data using natural language, making it easier for security experts to analyze complex network activity without needing to master query languages. These agents are designed with controls to prevent improper operations, ensuring data integrity and security. While still in the lab and not yet in production, this capability holds significant promise for intuitive data exploration. Furthermore, cPacket’s platform allows for the analysis of external PCAP files, enabling security teams to leverage cPacket’s robust analytics tools on data captured by other systems, though a direct UI upload option is not yet readily available. Overall, cPacket aims to augment security postures by providing pervasive, real-time network observability that informs validation, ensures compliance, and aids in rapid incident response.


cPacket Network Observability for Incident Response

Event: Security Field Day 13

Appearance: cPacket Presents at Security Field Day 13

Company: cPacket

Video Links:

Personnel: Andy Barnes, Ron Nevo

cPacket powers real-time incident response with lossless packet capture, high-speed indexing, and seamless integration with SOC tools. Acting as the network’s digital black box, it enables rapid forensic analysis, root cause identification, and response automation across hybrid cloud, data center, and enterprise environments—ensuring cybersecurity teams can quickly investigate and neutralize advanced threats. cPacket emphasizes the critical role of packet capture in digital forensics, drawing a parallel to the black box in aviation to highlight its importance in understanding and preventing security incidents. Unlike other forensic methods, packet capture provides complete, tamper-proof context, showing the actual data exchanged during an attack. cPacket’s solution is designed to be pervasive, capturing packets from any point in a hybrid environment at high speeds (up to 200 gigabits per second), and scalable, capable of handling large data volumes while maintaining the ability to quickly index and retrieve relevant packets.

The architecture involves deploying monitoring points across the network, including cloud environments, where the same packet capture software is used as on-premises. This setup allows for centralized control and analysis, even in highly distributed networks. cPacket prioritizes ease of integration with existing security tools, featuring open APIs for seamless data exchange with solutions like Datadog and ServiceNow. Their focus is on providing the raw data and context that security teams need to conduct thorough investigations, rather than attempting to replace existing security systems.

A key capability is the ability to quickly retrieve and analyze captured packets, facilitating rapid root cause analysis and response automation. For example, when a third-party NDR solution detects an SQL injection, cPacket can provide access to the relevant PCAP data directly within the NDR’s interface, allowing security analysts to examine the attack payload and understand the full scope of the incident. This approach enables security teams to move beyond simply detecting threats to understanding their nature and impact, ultimately improving incident response effectiveness.
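As an illustration of the kind of follow-up analysis an analyst might run on packet data retrieved during such an investigation, the sketch below scans an exported PCAP for SQL-injection-like payloads using the third-party scapy library; the filename and signature list are assumptions, and this is not cPacket tooling.

```python
# Illustrative PCAP sweep for SQL-injection-like payloads using scapy (not cPacket tooling).
from scapy.all import rdpcap, TCP, Raw

SUSPICIOUS = (b"' or 1=1", b"union select", b"; drop table")

for pkt in rdpcap("incident-export.pcap"):  # hypothetical exported capture
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        if any(sig in payload.lower() for sig in SUSPICIOUS):
            # Print a one-line summary of the offending packet plus the start of its payload.
            print(pkt.summary(), payload[:80])
```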


cPacket Network Observability for AI-Enhanced Incident Detection

Event: Security Field Day 13

Appearance: cPacket Presents at Security Field Day 13

Company: cPacket

Video Links:

Personnel: Andy Barnes, Ron Nevo

cPacket uses AI-driven network observability to detect unknown and emerging threats across hybrid cloud and enterprise environments. By applying machine learning and unsupervised anomaly detection to trillions of packets and billions of sessions, it identifies behavioral deviations, flags exfiltration and lateral movement, and delivers deep, real-time insights for proactive, scalable cybersecurity and incident response. The challenge of identifying what constitutes “normal” versus “abnormal” behavior in complex networks is central to cPacket’s AI-driven approach. Instead of relying on static, unmanageable thresholds, their platform uses machine learning to establish a baseline of normal behavior by location, application, and time of day/week, considering all collected metrics (e.g., duration, data volume, latency, connection failures). This allows cPacket to identify subtle anomalies, such as unusually long session durations for specific services or traffic between groups that shouldn’t be communicating, which are indicative of unknown threats like slow-drift exfiltration or lateral movement.

cPacket’s AI capabilities are showcased through examples like detecting exfiltration and lateral movement. For exfiltration, the system can identify both burst and slow-drift data transfers by monitoring session lengths and data volumes, flagging attempts to steal sensitive information. For lateral movement, it detects traffic between unusual or unauthorized network segments. These advanced detections are typically performed on data collected by the packet capture devices (cStor), where billions of sessions are analyzed. The metrics from these sessions are fed into an S3 bucket, allowing cPacket’s AI model to continuously establish baselines and detect deviations, which are then aggregated into “insights.” These insights provide concise descriptions of anomalous behavior, including when, where, and potentially why they occurred, helping security teams quickly understand and triage potential threats.
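A toy sketch of the baselining idea, not cPacket’s model: learn per-group statistics for a session metric (bytes transferred per source group, destination group, and hour of day) and flag sessions that deviate far from that baseline. The field names, groups, and threshold below are invented for illustration.

```python
# Toy per-group baseline: flag sessions whose volume deviates strongly from history.
from collections import defaultdict
from statistics import mean, pstdev

# Each observation: (source group, destination group, hour of day, bytes transferred).
history = [
    ("finance", "dc-storage", 14, 1.2e6),
    ("finance", "dc-storage", 14, 1.4e6),
    ("finance", "dc-storage", 14, 1.1e6),
    # ...many more observations would be used in practice
]

baseline = defaultdict(list)
for src, dst, hour, volume in history:
    baseline[(src, dst, hour)].append(volume)

def is_anomalous(src, dst, hour, volume, threshold=4.0):
    samples = baseline.get((src, dst, hour), [])
    if len(samples) < 2:
        return True  # traffic between groups never seen before is itself notable
    mu, sigma = mean(samples), pstdev(samples) or 1.0
    return abs(volume - mu) / sigma > threshold

# A sudden, unusually large transfer would be flagged as possible exfiltration.
print(is_anomalous("finance", "dc-storage", 14, 9.5e7))  # -> True
```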

The cPacket platform provides a live, real-time view of network activity, with the AI engine continuously generating “insight cards” that group related incidents, such as scanning activity. These cards provide detailed information, including source IP addresses, countries of origin, and communication attempts, which can be further investigated by drilling down to the packet level. While cPacket does not decrypt encrypted traffic, it can still detect numerous indicators of compromise that occur in the clear. Their system is designed for network observability, and its security benefits, such as detecting unusual scanning patterns or unexpected external connections, emerged as a valuable, albeit initially unintended, outcome. This comprehensive approach, including the ability to pull full packet captures for deep forensic analysis, significantly enhances proactive cybersecurity and incident response capabilities.