Legacy SAP ERP Customers Are Nearly Out of Time – HPE GreenLake for SAP

Event: Cloud Field Day 23

Appearance: HPE Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Jim Loiacono

It’s been a decade since S/4HANA was released, yet fewer than half of existing SAP ERP customers have upgraded, even as support deadlines loom. This presentation from HPE at Cloud Field Day 23 examines those challenges and the novel solutions available to customers hesitant to make the move. HPE, along with SAP, is offering a more flexible approach, recognizing that one size does not fit all businesses. The presentation notes that business disruption is a significant factor in customers delaying upgrades, underscoring the need to consider what matters to every stakeholder involved.

The session presents SAP’s cloud ERP offerings, including a “Customer Data Center” option built on HPE GreenLake. This provides a hybrid cloud environment, allowing customers to choose between public and private cloud deployments and, importantly, the flexibility to retain their existing data centers. HPE’s focus is on smoothing the transition to SAP’s cloud offerings so customers can move forward more easily, an approach that also addresses concerns about data sovereignty and control over upgrades. The conversation further covers integration and the various challenges it brings when migrating to the new systems.

Ultimately, the presentation stresses the importance of the private cloud option, particularly for large enterprises with complex legacy systems, and the need for more flexibility for all stakeholders. The session concludes that the “Customer Data Center” option, with HPE hardware and services, can provide the security, control, and flexibility many customers require while ensuring they continue to receive the necessary support. With a range of options available to meet each customer’s needs, this amounts to a comprehensive plan.


Seamless Business Continuity and Disaster Avoidance: Multi-Cloud Demonstration Workflow with Qumulo

Event: Cloud Field Day 23

Appearance: Qumulo Presents at Cloud Field Day 23

Company: Qumulo

Video Links:

Personnel: Brandon Whitelaw, Mike Chmiel

Qumulo presented a demonstration at Cloud Field Day 23 that showcased seamless business continuity and disaster avoidance in a multi-cloud environment.  The core of the presentation centered on simulating a hurricane threat to an on-premises environment, highlighting Qumulo’s ability to provide enterprise resilience and cloud-native scalability. Brandon Whitelaw demonstrated how Qumulo’s Cloud Data Fabric enables disaster avoidance through live application suspension and resumption, data portal redirection, cloud workload scaling, and high-performance edge caching with Qumulo EdgeConnect.  This allows the safe migration of data and applications to the cloud, ensuring continued access and continuity in the event of a disaster.

The demo’s primary focus was on illustrating the ease of transitioning data and operations to the cloud during a simulated disaster scenario. The process involved disconnecting the on-prem cluster and, using a small device such as an Asus NUC, accessing data seamlessly from the cloud. This seamless switch allowed government employees to continue their work at an off-site location. It was achieved through data portals, which enable the efficient transfer of data at 90% bandwidth utilization, and it demonstrated the ability to maintain the user experience without requiring users to change behaviors or adopt new protocols.

Finally, Qumulo’s approach offers high bandwidth utilization and integration into a multitude of customer use cases, all while ensuring minimal downtime and data integrity during the process. They showed how edits made in the cloud could be instantly consistent with the on-prem solution, and they were able to quickly and effectively restore data access to users after the storm. Qumulo emphasized that the architecture allows businesses to be proactive, moving data to the cloud days before a disaster, reducing reliance on last-minute backups, and promoting a more flexible, scalable approach to business continuity. With upcoming support for ARM and a continued focus on multi-cloud, Qumulo allows a great deal of flexibility in how a business manages its data.


Seamless Business Continuity and Disaster Avoidance with Qumulo

Event: Cloud Field Day 23

Appearance: Qumulo Presents at Cloud Field Day 23

Company: Qumulo

Video Links:

Personnel: Brandon Whitelaw

This Qumulo presentation at Cloud Field Day 23 focuses on delivering business continuity and disaster avoidance through its platform. Qumulo leverages hybrid-cloud architectures to ensure uninterrupted data access and operational resilience by seamlessly synchronizing and migrating unstructured enterprise data between on-premises and cloud environments. This empowers organizations to remain agile in the face of disruptions.

The presentation dives into two main approaches. The first leverages cloud elasticity for a cost-effective disaster recovery solution. By backing up on-premises data to cloud-native, cold storage tiers, Qumulo allows for near-instantaneous failover to an active system. This approach utilizes the same underlying hardware performance for both active and cold storage tiers, enabling a rapid transition and incurring higher costs only when necessary. This is a more cost-effective alternative to building a complete, on-premises continuity and hot standby data center.

The second approach emphasizes building continuity and availability from the ground up. By deploying a cloud-native Qumulo system, the presentation highlights the benefits of multi-zone availability within a region, offering greater durability and resilience compared to traditional on-premises setups. Qumulo’s data fabric ensures real-time data synchronization between on-prem and cloud environments, with data creation cached locally and then instantly available across all connected locations. This offers significant cost savings and operational efficiency by eliminating the need for traditional replication and failover procedures.


Reimagining Data Management in a Hybrid-Cloud World with Qumulo

Event: Cloud Field Day 23

Appearance: Qumulo Presents at Cloud Field Day 23

Company: Qumulo

Video Links:

Personnel: Douglas Gourlay

The presentation by Qumulo at Cloud Field Day 23, led by Douglas Gourlay, focuses on the challenges and opportunities of modern data management in hybrid cloud environments. The presentation emphasizes the need for a unified, scalable, and intelligent approach across both on-premises and cloud infrastructures. The speakers prioritize customer stories and use cases to illustrate how Qumulo’s unique architecture provides enhanced performance, visibility, and simplicity for organizations.

A key theme of the presentation is the importance of innovation, specifically in addressing the evolving needs of customers. Qumulo focuses on unstructured data, highlighting its work with diverse clients, including those in the movie production, scientific research, and government sectors. The presentation highlights how Qumulo’s approach enables both data durability and high performance, particularly in scenarios involving edge-to-cloud data synchronization, disaster recovery, and AI-driven data processing.

The presentation showcases how Qumulo enables freedom of choice by supporting any hardware and any cloud environment. Their solutions are designed to manage large-scale data, extending file systems across various locations with strict consistency and high performance. By leveraging cloud elasticity for backup and tiering, Qumulo offers cost-effective options for disaster recovery and provides the agility to adapt to changing business needs.


Learn About Scality RING’s Exabyte Scale, Multidimensional Architecture with Scality

Event: Cloud Field Day 23

Appearance: Scality Presents at Cloud Field Day 23

Company: Scality

Video Links:

Personnel: Giorgio Regni

Scality’s Giorgio Regni presented at Cloud Field Day 23, focusing on the Scality RING’s exabyte-scale, multidimensional architecture. Scality’s origin story stems from addressing storage challenges for early cloud providers, such as Comcast. They found that existing solutions weren’t meeting the demands of petabyte-scale data and the need to compete with large providers. The company’s core concept is “scale,” and their system is designed to expand seamlessly across all crucial dimensions. This includes capacity, metadata, and throughput, allowing them to scale each of these components independently.

Regni emphasized the RING’s disaggregated design, highlighting its ability to overcome common storage bottlenecks. The architecture separates storage nodes, I/O daemons, and a connector layer, enabling independent scaling of each component. He shared impressive numbers, including 12 exabytes of data currently in production and 6 trillion objects stored, with individual customers holding billions of objects across the applications using the system. The presentation also contrasted Scality’s approach with that of competitors like Ceph and MinIO, highlighting differences in metadata handling, bucket limits, and the flexibility of each architecture’s scaling capabilities.

Finally, the presentation covered the multi-layered architecture that supports various protocols, including S3, a custom REST protocol, and file system connectors. The architecture is based on a peer-to-peer distributed system with no single point of failure, supporting high availability and replication across multiple sites and tiers. It can manage different tiers, such as RING XP, the all-flash configuration, and long-term storage. Scality RING also offers multi-tenancy and usage tracking, allowing customers to build their own billing systems, with the overall goal of an infinitely scalable storage system.
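Because the RING exposes the standard S3 API through its connector layer, ordinary S3 tooling works against it unchanged. Below is a minimal sketch with boto3; the endpoint and credentials are hypothetical placeholders for a RING deployment.

```python
import boto3

# Hypothetical endpoint and tenant credentials for a Scality RING S3 service;
# any S3-compatible client can talk to the RING's connector layer this way.
s3 = boto3.client(
    "s3",
    endpoint_url="https://ring-s3.example.internal",
    aws_access_key_id="TENANT_ACCESS_KEY",
    aws_secret_access_key="TENANT_SECRET_KEY",
)

s3.create_bucket(Bucket="research-data")
s3.put_object(Bucket="research-data", Key="runs/0001/results.json", Body=b"{}")

# Per-tenant listings like this are the raw material that usage tracking
# and customer-built billing systems would aggregate.
for obj in s3.list_objects_v2(Bucket="research-data").get("Contents", []):
    print(obj["Key"], obj["Size"])
```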


Major European Bank’s Enterprise Cloud Built on RING & S3 APIs with Scality

Event: Cloud Field Day 23

Appearance: Scality Presents at Cloud Field Day 23

Company: Scality

Video Links:

Personnel: Aurelien Gelbart

Aurelien Gelbart’s presentation at Cloud Field Day 23 highlighted a major European bank’s successful deployment of a private cloud built on Scality RING and S3 APIs. The bank sought to consolidate disparate storage solutions, aiming for a lower cost per terabyte and easier adoption for its users. Leveraging Scality’s offerings, the bank achieved these goals, fostering widespread adoption of the new platform and significantly reducing costs.

The bank’s private cloud architecture comprises six independent RING clusters across three geographical regions, each with a production and disaster recovery platform. Utilizing Scality’s native S3 API support, the bank implemented multi-site replication and object lifecycle policies to meet stringent financial compliance requirements. The implementation allows the bank to run hundreds of production-grade applications, including database backups, financial market data storage, and document management.
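Since the platform speaks native S3, retention rules like these can be expressed as standard lifecycle configurations. A minimal boto3 sketch, assuming a hypothetical RING endpoint and an invented seven-year retention rule:

```python
import boto3

# Assumed endpoint for one of the bank's RING clusters (placeholder).
s3 = boto3.client("s3", endpoint_url="https://ring-s3.example.internal")

# Illustrative lifecycle rule of the kind used for retention compliance:
# expire database-backup objects roughly seven years after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="db-backups",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-retention",
                "Status": "Enabled",
                "Filter": {"Prefix": "oracle/"},  # invented prefix
                "Expiration": {"Days": 2555},     # ~7 years
            }
        ]
    },
)
```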

The success of this deployment is evident in substantial growth, with the platform scaling from one petabyte to 50 petabytes of usable storage within seven years. This growth brought new challenges related to performance and resource management across the many different applications. Scality addressed these challenges by improving software performance through reconfiguration and architectural improvements. The results are impressive: the bank now operates 100 petabytes of usable storage, manages 200,000 S3 buckets, and processes 300 billion client objects, achieving a 75% reduction in the cost per terabyte per year compared to its previous solutions.


Leading Space Agency’s Long-Term Scientific Storage at Scale with Scality

Event: Cloud Field Day 23

Appearance: Scality Presents at Cloud Field Day 23

Company: Scality

Video Links:

Personnel: Nicolas Sayer

Scality’s presentation at Cloud Field Day 23 focused on their collaboration with a major European space agency, which utilizes Scality RING to manage massive scientific datasets in a hybrid storage model. The agency, dealing with data from approximately 200 satellites, faced challenges with legacy storage solutions and the need for a cost-effective, scalable, and easily accessible system for both live and archival data. Scality’s solution utilizes S3 as a unified namespace, providing a single access point for data regardless of its location, whether on hot or cold storage.

The solution employs a multi-tiered approach, where live data is stored on the RING for active analysis and subsequently moved to tape for long-term archiving after six months. This vast amount of cold data, representing hundreds of petabytes, is managed using a custom-built API called TLP (Tape Library Protocol) to integrate with HSMs from partners such as HP, Atempo, IBM, and Spectra. TLP handles the retrieval and storage of data to tape, providing transparent access for users through the S3 Glacier API. This provides cost savings and energy efficiency by moving data to tape when it is not frequently accessed.
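From a client’s perspective, recalling archived data looks like a standard S3 Glacier restore, with TLP and the tape libraries hidden behind the endpoint. A sketch with boto3; the endpoint, bucket, and key names are invented:

```python
import boto3

s3 = boto3.client("s3", endpoint_url="https://archive.example.internal")  # assumed endpoint

# Ask for an archived object to be staged back from tape; the client only
# sees the standard S3 Glacier restore call, not the tape machinery behind it.
s3.restore_object(
    Bucket="mission-data",
    Key="sat-042/2019/scene-123.h5",
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}},
)

# Poll until staging completes: the Restore header flips to ongoing-request="false".
head = s3.head_object(Bucket="mission-data", Key="sat-042/2019/scene-123.h5")
print(head.get("Restore", "restore not yet requested"))
```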

This architecture offers several advantages, including data durability through a three-site stretch ring, with data replicated across two tape libraries for enhanced resilience. The agency’s users and applications interact with the data via a single namespace using S3, unaware of the underlying complexity of the hybrid storage system. This transparency, combined with the cost-effectiveness of the solution and security features like object lock, has made Scality’s solution a key enabler for long-term data access and efficient workflows for the space agency.


S3 Object Storage for Real-Time Compliance Analytics at a Large US Global Bank with Scality

Event: Cloud Field Day 23

Appearance: Scality Presents at Cloud Field Day 23

Company: Scality

Video Links:

Personnel: Ben Morge

Ben Morge, VP of Customer Success at Scality, presented a deployment of Scality RING for a large US bank that needed to store 40 petabytes of Splunk SmartStore data across two sites.  The bank required active-active replication and a one-year data retention period. RING’s S3 compatibility enabled seamless integration with Splunk, allowing the indexers to tier data from hot, fast flash storage to the warm Scality RING, which consisted of 80 servers.  The data is immutable on the ring for one year, with the Splunk application handling data deletion.

The Scality deployment leverages S3 for data storage. The solution employs a two-site architecture, where each site features Splunk indexers and a hot storage cluster. The indexers are responsible for replicating indexes and references to objects stored on the ring. Scality manages object replication, utilizing cross-region replication, which is the S3 standard. The system addresses network issues by employing infinite replay to ensure reliable replication and decoupling storage from compute.
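In S3 terms, that setup corresponds to a standard cross-region replication rule on a versioned bucket. A sketch of what the configuration looks like through boto3; the endpoint, bucket names, and role ARN are placeholders:

```python
import boto3

s3 = boto3.client("s3", endpoint_url="https://site-a.example.internal")  # assumed endpoint

# S3-standard cross-region replication requires versioning on the source
# bucket, then a replication rule pointing at the destination bucket.
s3.put_bucket_versioning(
    Bucket="splunk-smartstore",
    VersioningConfiguration={"Status": "Enabled"},
)
s3.put_bucket_replication(
    Bucket="splunk-smartstore",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication",  # placeholder ARN
        "Rules": [
            {
                "ID": "site-a-to-site-b",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::splunk-smartstore-site-b"},
            }
        ],
    },
)
```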

The presentation highlighted a couple of challenges and their solutions. The initial ingest exceeding firewall throughput was resolved with infinite replay. Early difficulties with read traffic exceeding CPU capabilities, which overwhelmed the flash caches, were addressed by decoupling the architecture and adding compute resources for the metadata layer and stateless S3 services. The result was simultaneous throughput of 75 gigabytes per second at both sites and a fully replicated cluster, with all objects replicated in under two minutes. The customer’s active production has been running successfully for over five years.


How we Help Our Customers to Build Exabyte-Scale Clouds with Scality

Event: Cloud Field Day 23

Appearance: Scality Presents at Cloud Field Day 23

Company: Scality

Video Links:

Personnel: Paul Speciale

Paul Speciale’s presentation at Cloud Field Day 23 highlighted Scality’s approach to helping customers build exabyte-scale clouds. The presentation opened by addressing the shift towards private cloud computing driven by AI workloads and data sovereignty concerns. Scality RING, already managing a significant amount of data, is chosen by market leaders, including major telcos and banks, to maintain control and achieve cloud-scale performance. The presentation’s core message establishes the business imperative that is driving enterprises away from public cloud dependencies and towards hybrid architectures for both compliance and competitive advantages.

Scality’s presentation focuses on its RING product as the primary solution for cloud infrastructure, data lakes, and data protection. It emphasized the RING’s S3-compatibility, deep support for advanced APIs, and a cloud-like identity and access management system. Furthermore, the presentation highlighted the RING’s distributed data protection, geo-stretched capabilities for high availability, utilization tracking, and the Core 5 initiative that focuses on cyber resiliency. The presentation emphasized the importance of multiscale architecture in a cloud environment due to the varying workload patterns and I/O needs.

The presentation traced Scality’s market entry in 2010, coinciding with the rise of cloud services, when the company set out to provide scale-out storage with its RING product; it has since been adopted by major telcos and financial institutions. Scality’s presentation also included customer stories involving U.S. and European banks, the space industry, and Iron Mountain, highlighting the versatility of RING across applications and deployment sizes. In response to questions, Scality highlighted backup and automated tiering capabilities within the RING system, underscoring its design for high-capacity use cases.


Cloud Rewind for Cloud Native Applications an Overview with Commvault

Event: Cloud Field Day 23

Appearance: Commvault presents at Cloud Field Day 23

Company: Commvault

Video Links:

Personnel: Govind Rangasamy

In this session, Govind Rangasamy presents Commvault Cloud Rewind, a solution designed to protect cloud-native workloads and enable recovery to a pre-disaster or pre-cyber-attack state across AWS, Azure, and GCP. Cloud Rewind facilitates in-place recoveries or recoveries into different tenants and regions, offering businesses peace of mind and the ability to make an attack seem like it never happened. The presentation highlights the challenges of protecting cloud applications, emphasizing their dynamic, distributed nature, rapid change frequency, and significant scale compared to traditional applications. These factors lead to increased complexity in managing and protecting these environments.

Cloud Rewind tackles these challenges by offering a “cloud time machine” and a “Recovery Escort” feature. The tool addresses the limitations of traditional disaster recovery by capturing not only data but also configurations and dependencies, using continuous discovery to keep track of both as they change. Recovery Escort automates the rebuilding of the entire application environment using infrastructure-as-code, simplifying recovery by combining multiple runbooks into a single, automated process. Cloud Rewind leverages native cloud services, such as AWS and Azure backup services, to ensure data management flexibility, enabling options for backups, replication, and recovery within the customer’s cloud environment.
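To make the continuous-discovery idea concrete, here is an illustrative sketch (not Commvault’s code) of snapshotting resource configurations and their dependency edges with boto3, which is the raw material a rebuild engine could replay as infrastructure-as-code:

```python
import json
import boto3

# Illustrative only -- not Commvault's implementation. The idea behind
# continuous discovery: periodically snapshot resource configurations and
# the dependencies between them, so an environment can be rebuilt later.
ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = {"instances": [], "security_groups": []}
for reservation in ec2.describe_instances()["Reservations"]:
    for inst in reservation["Instances"]:
        snapshot["instances"].append(
            {
                "id": inst["InstanceId"],
                "type": inst["InstanceType"],
                "subnet": inst.get("SubnetId"),
                # Dependency edges: which security groups this instance uses.
                "security_groups": [g["GroupId"] for g in inst["SecurityGroups"]],
            }
        )
snapshot["security_groups"] = ec2.describe_security_groups()["SecurityGroups"]

# Persist the point-in-time snapshot; a recovery engine could replay it
# in another account, region, or availability zone.
with open("config-snapshot.json", "w") as f:
    json.dump(snapshot, f, default=str)
```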

The core benefit of Cloud Rewind, as showcased, is its ability to dramatically reduce recovery time, enabling rapid recovery and testing. Customers can perform comprehensive recovery tests with a few clicks, achieving recoveries in minutes instead of the days required by traditional methods. The tool offers extreme automation by allowing for rebuilding or recovering in a different availability zone, region, or account, which further enhances the service’s ability to deliver application resiliency. It also integrates with other Commvault solutions, promising a unified platform for managing multi-cloud and hybrid cloud environments.


Introducing Clumio by Commvault Cloud

Event: Cloud Field Day 23

Appearance: Commvault presents at Cloud Field Day 23

Company: Commvault

Video Links:

Personnel: Akshay Joshi

Clumio by Commvault Cloud offers scalable and efficient data protection for AWS S3 and DynamoDB, addressing the limitations of native AWS capabilities. The presentation highlighted Clumio’s features, including a new recovery modality called S3 Backtrack, and emphasized the importance of air-gapped backups for data resilience. Clumio provides fully managed backup-as-a-service, eliminating the need for managing infrastructure, agents, and servers. The solution offers logically air-gapped backups stored within AWS but outside a customer’s enterprise security sphere, providing enhanced security and immutability.

The presentation emphasized Clumio’s focus on simplicity, performance, and cost-effectiveness. Clumio claims a 10x faster restore performance compared to competitors and a 30% lower cost. Key features include protection groups for granular backup and restore of S3 buckets, based on various vectors such as tags, prefixes, and regions. For DynamoDB, Clumio offers incremental backups using DynamoDB streams, providing cost savings and the ability to retain numerous backup copies.
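The incremental mechanism rests on DynamoDB Streams, which exposes an ordered change log per table. A minimal boto3 sketch of reading that log; the table name is invented, and Clumio’s actual pipeline is not public:

```python
import boto3

# Illustrative only -- not Clumio's code. DynamoDB Streams provides the
# item-level change log that makes incremental backup possible.
streams = boto3.client("dynamodbstreams", region_name="us-east-1")

stream_arn = streams.list_streams(TableName="users")["Streams"][0]["StreamArn"]
shards = streams.describe_stream(StreamArn=stream_arn)["StreamDescription"]["Shards"]

for shard in shards:
    it = streams.get_shard_iterator(
        StreamArn=stream_arn,
        ShardId=shard["ShardId"],
        ShardIteratorType="TRIM_HORIZON",  # read from the oldest retained change
    )["ShardIterator"]
    for rec in streams.get_records(ShardIterator=it)["Records"]:
        # Each record carries one item-level change; appending these to the
        # previous backup yields an incremental copy.
        print(rec["eventName"], rec["dynamodb"].get("Keys"))
```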

The presentation concluded with case studies demonstrating the effectiveness of Clumio’s solutions. Atlassian saw a 70% reduction in costs along with faster RPO and RTO. Duolingo achieved over 70% savings on DynamoDB backups, with the added benefits of immutability and air-gapping. Clumio’s architecture, utilizing serverless technologies and AWS EventBridge, enables automation and scalability. The solution offers options for encryption key management and supports data residency requirements with regional control planes.


A New Era of Cyber Resilience with Commvault Cloud

Event: Cloud Field Day 23

Appearance: Commvault presents at Cloud Field Day 23

Company: Commvault

Video Links:

Personnel: Michael Fasulo

Organizations need to operate continuously, especially with the shift to cloud-first strategies. Commvault Cloud aims to solve the challenges of this shift, particularly in security. Michael Fasulo introduced Commvault Cloud, a cyber resilience platform designed for cloud-first enterprises. This platform addresses challenges like ransomware, hybrid/multi-cloud complexity, and regulatory compliance. Commvault offers a unified platform that incorporates a zero-trust foundation, AI-driven data protection features, and various cloud-first capabilities, including threat detection, anomaly detection, and cyber resilience testing. The platform offers flexibility in deployment, supporting on-premises, SaaS, and appliance models to meet the diverse needs of customers.

Commvault’s platform emphasizes a proactive approach to cyber resilience. It utilizes a zero-trust architecture, featuring CIS-level hardened images, multi-factor authentication (MFA), and least privilege access. A key aspect is the Commvault Risk Analysis product, which provides in-depth data discovery, classification, and remediation capabilities, including integration with Microsoft Purview. The platform also focuses on operational recovery through end-to-end portability, enabling workload transformation. To further enhance security, Commvault offers ThreatWise, which deploys deception technology to lure and analyze threats. This is complemented by integrations with various SIEM and SOAR platforms for centralized threat response.

To educate customers, Commvault has launched “Minutes of the Meltdown” for executives and “Recovery Range” for hands-on keyboard experience during simulated cyber attacks. Recovery Range allows teams to test their response to various threats and validate the effectiveness of the Commvault Cloud platform. This includes features like anomaly detection and automated exclusion of threats during recovery. The platform also offers the option for custom dashboards and extensive reporting capabilities, allowing customers to tailor the view of their security posture to their specific needs.


Delegate Roundtable: Point Solutions or Platforms?

Event: Security Field Day 13

Appearance: Security Field Day 13 Delegate Roundtable Discussion

Company: Tech Field Day

Video Links:

Personnel: Tom Hollingsworth

This Security Field Day delegate roundtable discussion, led by Tom Hollingsworth, dives into “security overload,” where professionals are burdened with an excessive number of disparate security tools. The core of the discussion revolved around the fundamental question of whether to prefer point solutions—specialized tools designed for a single purpose—or integrated platforms that consolidate multiple functionalities. This debate stems from the common experience of needing dozens of tools for a single task, leading to management complexity and inefficiency.

The participants presented compelling arguments for both sides. Proponents of point solutions emphasized their specialized nature, allowing for the “best tool for the job” approach and often offering superior capabilities for specific tasks. However, the downside recognized was the challenge of integrating these numerous tools, leading to potential data silos, increased management complexity, and vendors sometimes deflecting responsibility when issues arise. Conversely, platforms were lauded for their potential to offer a unified experience, streamline vendor management, and simplify hiring expertise, particularly appealing to senior decision-makers due to perceived cost efficiencies and reduced operational friction. Yet, concerns were raised about platforms often failing to achieve true integration, resulting in functional gaps or even hamstringing overall capabilities due to inflexible dependencies.

The conversation also encompassed the economics of security tools, the role of open source versus commercial solutions, and the critical aspects of identity, authentication, and authorization. The “build versus buy” question was a recurring theme, with the understanding that while open-source tools might appear “free,” they often come with significant hidden costs in terms of maintenance and support, or even security risks. The discussion ultimately underscored that the choice between point solutions and platforms is not a simple binary, but rather depends on organizational maturity, budget, desired level of integration, and an awareness of the inherent trade-offs between specialized capabilities and simplified management.


cPacket Network Observability for Incident Validation and Compliance

Event: Security Field Day 13

Appearance: cPacket Presents at Security Field Day 13

Company: cPacket

Video Links:

Personnel: Andy Barnes, Ron Nevo

cPacket enables continuous security validation and compliance auditing with deep packet inspection, TLS certificate verification, and external domain access analysis. Its AI-enhanced observability platform ensures regulatory readiness, detects misconfigurations, and identifies policy drift across hybrid cloud and enterprise networks—helping security teams maintain an up-to-date posture and pass audits with real-time, actionable insights. cPacket’s solution focuses on ensuring that security postures don’t deteriorate over time due to new threats, outdated rules, misconfigurations, or broken integrations, which can lead to compliance breakdowns, especially in regulated industries like financial services and healthcare. They achieve this through Deep Packet Inspection (DPI) in their C-Store, which breaks down protocols like HTTPS, DNS, and LDAP to extract relevant metadata and performance data. This DPI capability, distinct from simple string matching, allows cPacket to understand protocol details and extract information crucial for security.

One key application of this capability is ensuring server compliance. cPacket’s dashboard provides real-time visibility into factors like TLS certificate status, cipher suite usage (e.g., ensuring adherence to TLS 1.2/1.3 and detecting insecure cipher suites), and the presence of expired certificates. This detailed monitoring helps organizations proactively identify and address compliance issues before they lead to regulatory scrutiny. Another powerful feature is DNS monitoring, which uses AI-enhanced agents to identify “unknown domains” by comparing accessed domains against known CSPs, CDNs, and top legitimate sites. This helps detect potentially malicious domains generated by Domain Generation Algorithms (DGAs) that might indicate a compromise.
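As a software analogy to those TLS posture checks, the sketch below actively probes a server with Python’s standard ssl module; cPacket itself extracts the same facts passively from captured handshakes rather than via probes:

```python
import socket
import ssl

def tls_posture(host: str, port: int = 443) -> dict:
    """Report the negotiated TLS version, cipher suite, and certificate
    expiry for a server -- the same facts a compliance dashboard flags."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            cipher_name, _proto, _bits = tls.cipher()
            return {
                "version": tls.version(),   # flag anything below TLSv1.2
                "cipher": cipher_name,      # compare against an allow-list
                "expires": cert["notAfter"],  # flag expired or near-expiry certs
            }

print(tls_posture("example.com"))
```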

cPacket is also developing AI-driven agents that can query their observability data using natural language, making it easier for security experts to analyze complex network activity without needing to master query languages. These agents are designed with controls to prevent improper operations, ensuring data integrity and security. While still in the lab and not yet in production, this capability holds significant promise for intuitive data exploration. Furthermore, cPacket’s platform allows for the analysis of external PCAP files, enabling security teams to leverage cPacket’s robust analytics tools on data captured by other systems, though a direct UI upload option is not yet readily available. Overall, cPacket aims to augment security postures by providing pervasive, real-time network observability that informs validation, ensures compliance, and aids in rapid incident response.


cPacket Network Observability for Incident Response

Event: Security Field Day 13

Appearance: cPacket Presents at Security Field Day 13

Company: cPacket

Video Links:

Personnel: Andy Barnes, Ron Nevo

cPacket powers real-time incident response with lossless packet capture, high-speed indexing, and seamless integration with SOC tools. Acting as the network’s digital black box, it enables rapid forensic analysis, root cause identification, and response automation across hybrid cloud, data center, and enterprise environments—ensuring cybersecurity teams can quickly investigate and neutralize advanced threats. cPacket emphasizes the critical role of packet capture in digital forensics, drawing a parallel to the black box in aviation to highlight its importance in understanding and preventing security incidents. Unlike other forensic methods, packet capture provides complete, tamper-proof context, showing the actual data exchanged during an attack. cPacket’s solution is designed to be pervasive, capturing packets from any point in a hybrid environment at high speeds (up to 200 gigabits per second), and scalable, capable of handling large data volumes while maintaining the ability to quickly index and retrieve relevant packets.

The architecture involves deploying monitoring points across the network, including cloud environments, where the same packet capture software is used as on-premise. This setup allows for centralized control and analysis, even in highly distributed networks. cPacket prioritizes ease of integration with existing security tools, featuring open APIs for seamless data exchange with solutions like DataDog and ServiceNow. Their focus is on providing the raw data and context that security teams need to conduct thorough investigations, rather than attempting to replace existing security systems.

A key capability is the ability to quickly retrieve and analyze captured packets, facilitating rapid root cause analysis and response automation. For example, when a third-party NDR solution detects an SQL injection, cPacket can provide access to the relevant PCAP data directly within the NDR’s interface, allowing security analysts to examine the attack payload and understand the full scope of the incident. This approach enables security teams to move beyond simply detecting threats to understanding their nature and impact, ultimately improving incident response effectiveness.
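A hypothetical integration sketch follows: cPacket documents open APIs, but the route, parameters, and authentication shown here are invented purely to illustrate how an NDR or SOAR workflow might pull the relevant PCAP:

```python
import requests

# Hypothetical capture-store endpoint and API route -- invented for
# illustration; consult cPacket's actual API documentation.
resp = requests.get(
    "https://cclear.example.internal/api/pcap",
    params={
        "start": "2025-06-01T12:00:00Z",
        "end": "2025-06-01T12:05:00Z",
        "filter": "host 10.0.0.15 and port 1433",  # the suspected SQL injection flow
    },
    headers={"Authorization": "Bearer <token>"},
    stream=True,
    timeout=30,
)
resp.raise_for_status()

# Stream the packet capture to disk for forensic analysis in Wireshark etc.
with open("incident-1234.pcap", "wb") as f:
    for chunk in resp.iter_content(chunk_size=65536):
        f.write(chunk)
```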


cPacket Network Observability for AI-Enhanced Incident Detection

Event: Security Field Day 13

Appearance: cPacket Presents at Security Field Day 13

Company: cPacket

Video Links:

Personnel: Andy Barnes, Ron Nevo

cPacket uses AI-driven network observability to detect unknown and emerging threats across hybrid cloud and enterprise environments. By applying machine learning and unsupervised anomaly detection to trillions of packets and billions of sessions, it identifies behavioral deviations, flags exfiltration and lateral movement, and delivers deep, real-time insights for proactive, scalable cybersecurity and incident response. The challenge of identifying what constitutes “normal” versus “abnormal” behavior in complex networks is central to cPacket’s AI-driven approach. Instead of relying on static, unmanageable thresholds, their platform uses machine learning to establish a baseline of normal behavior by location, application, and time of day/week, considering all collected metrics (e.g., duration, data volume, latency, connection failures). This allows cPacket to identify subtle anomalies, such as unusually long session durations for specific services or traffic between groups that shouldn’t be communicating, which are indicative of unknown threats like slow-drift exfiltration or lateral movement.

cPacket’s AI capabilities are showcased through examples like detecting exfiltration and lateral movement. For exfiltration, the system can identify both burst and slow-drift data transfers by monitoring session lengths and data volumes, flagging attempts to steal sensitive information. For lateral movement, it detects traffic between unusual or unauthorized network segments. These advanced detections are typically performed on data collected by the packet capture devices (C-Store), where billions of sessions are analyzed. The metrics from these sessions are fed into an S3 bucket, allowing cPacket’s AI model to continuously establish baselines and detect deviations, which are then aggregated into “insights.” These insights provide concise descriptions of anomalous behavior, including when, where, and potentially why they occurred, helping security teams quickly understand and triage potential threats.
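A toy version of the baselining idea, with invented metrics and thresholds: learn per-(location, application, hour-of-week) statistics for a metric such as session duration, then flag strong deviations. cPacket’s production models are of course richer than this z-score sketch:

```python
import statistics
from collections import defaultdict

# (location, app, hour_of_week) -> observed metric values (e.g. session seconds)
history = defaultdict(list)

def observe(location, app, hour_of_week, value):
    history[(location, app, hour_of_week)].append(value)

def is_anomalous(location, app, hour_of_week, value, z_threshold=4.0):
    samples = history[(location, app, hour_of_week)]
    if len(samples) < 30:  # not enough data to call anything abnormal yet
        return False
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples) or 1e-9
    return abs(value - mu) / sigma > z_threshold

# Invented data: LDAP sessions at this site normally last ~2 seconds...
for _ in range(50):
    observe("nyc-dc1", "ldap", 2 * 24 + 9, 2.0)
# ...so an hour-long LDAP session stands out as a behavioral deviation.
print(is_anomalous("nyc-dc1", "ldap", 2 * 24 + 9, 3600.0))  # True
```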

The cPacket platform provides a live, real-time view of network activity, with the AI engine continuously generating “insight cards” that group related incidents, such as scanning activity. These cards provide detailed information, including source IP addresses, countries of origin, and communication attempts, which can be further investigated by drilling down to the packet level. While cPacket does not decrypt encrypted traffic, it can still detect numerous indicators of compromise that occur in the clear. Their system is designed for network observability, and its security benefits, such as detecting unusual scanning patterns or unexpected external connections, emerged as a valuable, albeit initially unintended, outcome. This comprehensive approach, including the ability to pull full packet captures for deep forensic analysis, significantly enhances proactive cybersecurity and incident response capabilities.


cPacket Network Observability for Deterministic Incident Detection

Event: Security Field Day 13

Appearance: cPacket Presents at Security Field Day 13

Company: cPacket

Video Links:

Personnel: Andy Barnes, Ron Nevo

cPacket enables deterministic incident detection by inspecting every byte in every packet at line rate, delivering real-time visibility into threats like DNS beaconing, volumetric DDoS, and C2 channels. With high-speed, packet-level analytics across hybrid cloud and enterprise networks, security teams gain definitive, actionable insights to accelerate threat detection, incident response, and breach prevention. cPacket’s approach to incident detection is “deterministic,” meaning it relies on clear, definable thresholds. For threats like DNS beaconing, cPacket’s smart port technology, leveraging FPGAs and ASICs, can inspect every byte in every packet at line rate to perform string matching. This allows for immediate detection of specific domain requests, such as those associated with supply chain attacks, providing a definitive “yes or no” answer regarding infection status.
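As a software analogy to that smart-port string matching (cPacket performs this in FPGA/ASIC hardware at line rate; the indicator domains below are invented):

```python
import re

# Invented indicator domains, e.g. from a supply-chain attack advisory.
INDICATORS = [b"evil-update.example", b"c2-beacon.example"]
PATTERN = re.compile(b"|".join(re.escape(d) for d in INDICATORS))

def packet_matches(payload: bytes) -> bool:
    """Deterministic check: does this packet contain a known-bad domain?"""
    return PATTERN.search(payload) is not None

# A DNS query payload carrying one of the indicator domains matches.
print(packet_matches(b"\x12\x34...evil-update.example..."))  # True
print(packet_matches(b"ordinary traffic"))                   # False
```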

For volumetric DDoS attacks, cPacket’s ability to count every packet in real-time allows for rapid detection of anomalies, such as an unusually high ratio of SYN packets to SYN/ACK packets (SYN flood) or excessive DNS responses without corresponding requests (DNS amplification). These detections are measured in seconds, providing much faster and more accurate alerts than traditional methods like NetFlow. While cPacket focuses on detection rather than mitigation, these real-time alerts can be used to initiate on-demand mitigation strategies with ISPs or scrubbing centers, particularly crucial for financial services firms that prioritize low latency.
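The SYN-flood check reduces to a deterministic ratio test per time window. A toy sketch with invented thresholds:

```python
def syn_flood_alert(syn_count: int, synack_count: int,
                    min_syns: int = 10_000, max_ratio: float = 3.0) -> bool:
    """Alert when SYNs vastly outnumber SYN/ACKs within a short window.
    Thresholds here are illustrative, not cPacket's tuned values."""
    if syn_count < min_syns:  # ignore quiet windows
        return False
    return syn_count / max(synack_count, 1) > max_ratio

print(syn_flood_alert(syn_count=250_000, synack_count=8_000))   # True: flood
print(syn_flood_alert(syn_count=12_000, synack_count=11_500))   # False: normal
```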

Furthermore, cPacket’s packet capture solutions can identify long-duration, low-traffic sessions, which are characteristic of command and control (C2) channels. By tracking millions of open TCP sessions, even those with minimal data transfer, cPacket can alert security teams to sessions that persist for days or weeks, indicating potential compromise. While this specific capability primarily applies to TCP sessions, the overall approach of leveraging high-speed, pervasive network observability to detect clear deviations from normal behavior offers invaluable, actionable insights for security teams, complementing existing security tools by providing definitive, packet-level evidence of threats.
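The C2 heuristic is equally simple to state: sessions that stay open for days while moving almost no data. An illustrative scan over a session table, with invented field names and thresholds:

```python
import time

# Invented session records of the kind a packet-capture index tracks.
sessions = [
    {"src": "10.0.0.15", "dst": "203.0.113.9",
     "opened": time.time() - 9 * 86400, "bytes": 42_000},          # 9 days, ~42 KB
    {"src": "10.0.0.20", "dst": "10.0.0.30",
     "opened": time.time() - 300, "bytes": 5_000_000_000},          # normal bulk transfer
]

def c2_suspects(table, min_days=7, max_bytes=1_000_000):
    """Flag long-lived, low-volume TCP sessions -- the C2 channel signature."""
    now = time.time()
    return [
        s for s in table
        if (now - s["opened"]) > min_days * 86400 and s["bytes"] < max_bytes
    ]

for s in c2_suspects(sessions):
    print(f"long-lived, low-volume session: {s['src']} -> {s['dst']}")
```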


cPacket Security Field Day Introduction

Event: Security Field Day 13

Appearance: cPacket Presents at Security Field Day 13

Company: cPacket

Video Links:

Personnel: Mark Grodzinsky, Ron Nevo

cPacket delivers zero-downtime observability for mission-critical networks across finance, healthcare, and government. Trusted with over 50% of global market data, our ASIC+FPGA-powered platform aligns with NIST CSF 2.0 to provide pervasive, scalable visibility across hybrid and cloud environments—enabling real-time packet analytics, rapid threat detection, and enhanced protection for SOC/NOC operations. Founded in 2007 as a semiconductor company specializing in hardware-offloaded string search, cPacket evolved to build a full platform for network observability, initially gaining traction with British Telecom for the London 2012 Olympics. Their core strengths lie in providing nanosecond timestamping, pervasive packet capture, and real-time network analytics across hybrid environments, including private and public clouds, and data centers. Their ideal customers are “zero downtime enterprises” in finance, healthcare, and government that demand packet precision, performance, and the newly added context provided by AI.

cPacket believes that robust network observability solutions can significantly augment and strengthen security postures without replacing existing security tools. Their approach is built on a pervasive, independent, and scalable architecture, allowing them to capture packets anywhere in a hybrid network, from 100 to 400 gigabits per second, and process trillions of packets daily. Crucially, their solutions operate independently of application logs, ensuring visibility even if applications are compromised. The cPacket architecture involves monitoring points (taps, spans, virtual taps) that feed into packet brokers equipped with FPGAs and ASICs on every port. These hardware components enable high-speed packet inspection and counting at the port level, allowing for capabilities like string matching on every packet at speeds up to 1.6 terabits per second.

The solution further includes sophisticated packet capture analytics, capable of writing 200 gigabits per second directly to disk while simultaneously indexing and analyzing packets for session length, duration, and latency. While cPacket does not decrypt data, they extract and analyze a vast amount of metadata from handshakes, DNS calls, ICMP, and other network traffic to gain visibility into network health and potential threats. This collected data and metrics are centralized in C-Clear, where they are enriched, analyzed with AI/machine learning algorithms, and presented through dashboards and workflows, including Grafana and custom APIs. cPacket also offers the ability to push metrics and packets to external object storage for long-term retention or more extensive AI analysis, and is investing in LLM-based interactions for agentic AI, demonstrating their commitment to an open API ecosystem that integrates with security companies, SIEMs, and IT service management platforms.


What’s Next from Veeam?

Event: Security Field Day 13

Appearance: Veeam Presents at Security Field Day 13

Company: Veeam Software

Video Links:

Personnel: Emilee Tellez, Rick Vanover

This segment looks at the Veeam roadmap from a security perspective, highlighting a fan favorite from VeeamON 2025: the new Veeam Software Appliance. The appliance runs the core Veeam platform on Rocky Linux, hardened to DISA STIG security standards, and is designed to be a purpose-built, highly secure backup infrastructure. It aims to significantly enhance the protection of the backup environment itself, moving toward a “secure by default” delivery model. Veeam will manage all security patching for these appliances, offering forced updates on scheduled timelines and thereby reducing the burden on customers of maintaining server security.

Another key future innovation is the introduction of universal continuous data protection (CDP), extending beyond current VMware capabilities to support physical systems and various hypervisors, with future targets including hyperscalers. This aims to provide near-instant recovery point objectives (RPOs) down to two seconds across diverse environments. While Veeam already supports CDP via VMware’s VAIO Filter Driver, this new universal CDP will broaden its applicability across the entire ecosystem.

Finally, Veeam is exploring the integration of AI into its data fabric to unlock deeper insights from customer data, particularly for eDiscovery scenarios. This involves leveraging Veeam’s extensive backup data to enable rapid querying and analysis that would otherwise take significantly longer. While still in early stages and requiring a public statement on responsible AI, this initiative promises attractive future capabilities in data intelligence. Veeam offers flexible licensing through its universal license (VUL) model, which simplifies pricing across various workloads, and their top-tier Veeam CyberSecure offering includes comprehensive capabilities and a ransomware recovery warranty.


The Veeam Difference: Coveware by Veeam

Event: Security Field Day 13

Appearance: Veeam Presents at Security Field Day 13

Company: Veeam Software

Video Links:

Personnel: Emilee Tellez, Rick Vanover

Veeam’s product development and collaboration pace with security vendors is not just a differentiator; it’s a trust signal. Veeam has proven it can innovate fast and integrate widely, and this session highlights those integrations, the iteration velocity, and the breadth of the ecosystem. Coveware by Veeam, acquired in March 2024, significantly enhances Veeam’s in-house capabilities in ransomware incident response. Since 2018, Coveware has amassed a large database from supporting 50-100 ransomware cases monthly, allowing them to publish quarterly reports detailing threat actor tactics, techniques, and procedures (TTPs). This proactive intelligence helps organizations understand prevalent threats and implement preventative measures like patching, whitelisting, and enhanced due diligence.

Coveware provides a comprehensive incident response retainer service, including cyber extortion negotiation, cryptocurrency settlements, and decryption support, leveraging their extensive database of decryption tools and keys. They offer 24/7/365 response, typically engaging with organizations within 15 minutes, and partner with other incident response firms like CrowdStrike and Mandiant for specialized containment and eradication efforts. A key differentiator is Coveware’s patent-pending Recon Scanner, a forensic investigation tool deployed on impacted systems to collect logs and build attack timelines. This scanner highlights critical warnings and identifies malicious activity, brute-force attempts, data exfiltration, privilege escalation, and other behaviors indicative of threat actor movement within an environment.

The Recon Scanner’s output, including detailed attack timelines, helps organizations understand the progression of an incident. While its primary use is during an active incident, its ability to uncover historical malicious activity that may have bypassed other security tools makes it a powerful forensic asset. Veeam emphasizes that while they do not advocate paying ransoms, Coveware’s negotiation expertise often focuses on buying time for recovery efforts rather than facilitating payments. This allows organizations to activate their incident response plans, communicate with stakeholders, and restore operations from clean backups. The continuous focus on education and best practices, like immutable backups and encryption passwords, is crucial for organizations to build resilience and improve their posture against evolving cyber threats.