cPacket Network Observability for Deterministic Incident Detection

Event: Security Field Day 13

Appearance: cPacket Presents at Security Field Day 13

Company: cPacket

Video Links:

Personnel: Andy Barnes, Ron Nevo

cPacket enables deterministic incident detection by inspecting every byte in every packet at line rate, delivering real-time visibility into threats like DNS beaconing, volumetric DDoS, and C2 channels. With high-speed, packet-level analytics across hybrid cloud and enterprise networks, security teams gain definitive, actionable insights to accelerate threat detection, incident response, and breach prevention. cPacket’s approach to incident detection is “deterministic,” meaning it relies on clear, definable thresholds. For threats like DNS beaconing, cPacket’s smart port technology, leveraging FPGAs and ASICs, can inspect every byte in every packet at line rate to perform string matching. This allows for immediate detection of specific domain requests, such as those associated with supply chain attacks, providing a definitive “yes or no” answer regarding infection status.
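The deterministic check described here reduces, in software terms, to exact or suffix matching of observed DNS query names against a list of known-bad domains (cPacket performs this in FPGA/ASIC hardware at line rate; the sketch below is plain Python for illustration only, and the blocklist contents and function names are assumptions, not cPacket's implementation):

```python
# Illustrative sketch of a deterministic DNS-beaconing check.
# The blocklist entry is the well-known SolarWinds supply-chain C2 domain,
# used here purely as an example.
BLOCKLIST = {"avsvmcloud.com"}

def is_infected(dns_query_names):
    """Definitive yes/no: did any observed DNS query match a known-bad domain?"""
    return any(
        name == bad or name.endswith("." + bad)
        for name in dns_query_names
        for bad in BLOCKLIST
    )

queries = ["example.com", "telemetry.avsvmcloud.com"]
print(is_infected(queries))  # True
```

The "yes or no" quality comes from the match being exact: either a host queried the domain or it did not, with no statistical scoring involved.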

For volumetric DDoS attacks, cPacket’s ability to count every packet in real-time allows for rapid detection of anomalies, such as an unusually high ratio of SYN packets to SYN/ACK packets (SYN flood) or excessive DNS responses without corresponding requests (DNS amplification). These detections are measured in seconds, providing much faster and more accurate alerts than traditional methods like NetFlow. While cPacket focuses on detection rather than mitigation, these real-time alerts can be used to initiate on-demand mitigation strategies with ISPs or scrubbing centers, particularly crucial for financial services firms that prioritize low latency.
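The two ratio checks described above can be expressed as simple threshold rules over per-interval packet counters. The sketch below is illustrative only; the counter names and the 3:1 thresholds are assumptions, not cPacket's actual values:

```python
def detect_volumetric(counts, syn_ratio=3.0, dns_ratio=3.0):
    """Flag SYN-flood and DNS-amplification patterns from one interval's
    packet counters. Threshold values are illustrative assumptions."""
    alerts = []
    # SYN flood: far more SYNs than SYN/ACKs means handshakes are not completing.
    if counts["syn"] > syn_ratio * max(counts["syn_ack"], 1):
        alerts.append("possible SYN flood")
    # DNS amplification: responses arriving without corresponding requests.
    if counts["dns_resp"] > dns_ratio * max(counts["dns_req"], 1):
        alerts.append("possible DNS amplification")
    return alerts

sample = {"syn": 50_000, "syn_ack": 900, "dns_resp": 40, "dns_req": 38}
print(detect_volumetric(sample))  # ['possible SYN flood']
```

Because the checks only compare counters, they can run every second, which is what makes second-scale detection feasible compared with sampled approaches like NetFlow.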

Furthermore, cPacket’s packet capture solutions can identify long-duration, low-traffic sessions, which are characteristic of command and control (C2) channels. By tracking millions of open TCP sessions, even those with minimal data transfer, cPacket can alert security teams to sessions that persist for days or weeks, indicating potential compromise. While this specific capability primarily applies to TCP sessions, the overall approach of leveraging high-speed, pervasive network observability to detect clear deviations from normal behavior offers invaluable, actionable insights for security teams, complementing existing security tools by providing definitive, packet-level evidence of threats.
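Conceptually, the C2 heuristic is a sweep over the open-session table for flows that are both old and quiet. A toy version of that sweep, with the age and byte thresholds as illustrative assumptions:

```python
DAY = 86_400  # seconds

def long_quiet_sessions(sessions, now, min_age_days=7, max_bytes=10_000):
    """sessions maps a flow ID to (opened_at_epoch_seconds, bytes_transferred).
    Return flows that have stayed open past the age threshold while moving
    almost no data -- the pattern described for C2 channels."""
    cutoff = now - min_age_days * DAY
    return [
        flow_id
        for flow_id, (opened_at, nbytes) in sessions.items()
        if opened_at < cutoff and nbytes < max_bytes
    ]

now = 1_700_000_000
sessions = {
    "10.0.0.5:49152->203.0.113.9:443": (now - 12 * DAY, 2_048),        # old and quiet
    "10.0.0.7:50211->198.51.100.2:443": (now - 1 * DAY, 4_096),        # too young
    "10.0.0.9:50400->192.0.2.10:443": (now - 12 * DAY, 80_000_000),    # busy transfer
}
print(long_quiet_sessions(sessions, now))  # ['10.0.0.5:49152->203.0.113.9:443']
```

The hard part in practice is not the filter but maintaining state for millions of concurrent TCP sessions, which is why this is done in dedicated capture hardware rather than a script.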


cPacket Security Field Day Introduction

Event: Security Field Day 13

Appearance: cPacket Presents at Security Field Day 13

Company: cPacket

Video Links:

Personnel: Mark Grodzinsky, Ron Nevo

cPacket delivers zero-downtime observability for mission-critical networks across finance, healthcare, and government. Trusted with over 50% of global market data, our ASIC+FPGA-powered platform aligns with NIST CSF 2.0 to provide pervasive, scalable visibility across hybrid and cloud environments—enabling real-time packet analytics, rapid threat detection, and enhanced protection for SOC/NOC operations. Founded in 2007 as a semiconductor company specializing in hardware-offloaded string search, cPacket evolved to build a full platform for network observability, initially gaining traction with British Telecom for the London 2012 Olympics. Their core strengths lie in providing nanosecond timestamping, pervasive packet capture, and real-time network analytics across hybrid environments, including private and public clouds, and data centers. Their ideal customers are “zero downtime enterprises” in finance, healthcare, and government that demand packet precision, performance, and the newly added context provided by AI.

cPacket believes that robust network observability solutions can significantly augment and strengthen security postures without replacing existing security tools. Their approach is built on a pervasive, independent, and scalable architecture, allowing them to capture packets anywhere in a hybrid network, from 100 to 400 gigabits per second, and process trillions of packets daily. Crucially, their solutions operate independently of application logs, ensuring visibility even if applications are compromised. The cPacket architecture involves monitoring points (taps, spans, virtual taps) that feed into packet brokers equipped with FPGAs and ASICs on every port. These hardware components enable high-speed packet inspection and counting at the port level, allowing for capabilities like string matching on every packet at speeds up to 1.6 terabits per second.

The solution further includes sophisticated packet capture analytics, capable of writing 200 gigabits per second directly to disk while simultaneously indexing and analyzing packets for session length, duration, and latency. While cPacket does not decrypt data, they extract and analyze a vast amount of metadata from handshakes, DNS calls, ICMP, and other network traffic to gain visibility into network health and potential threats. The collected data and metrics are centralized in C-Clear, where they are enriched, analyzed with AI/machine learning algorithms, and presented through dashboards and workflows, including Grafana and custom APIs. cPacket also offers the ability to push metrics and packets to external object storage for long-term retention or more extensive AI analysis, and is investing in LLM-based interactions for agentic AI, demonstrating their commitment to an open API ecosystem that integrates with security companies, SIEMs, and IT service management platforms.


What’s Next from Veeam?

Event: Security Field Day 13

Appearance: Veeam Presents at Security Field Day 13

Company: Veeam Software

Video Links:

Personnel: Emilee Tellez, Rick Vanover

This segment looks at the Veeam roadmap from a security perspective, highlighting a fan favorite from VeeamON 2025: the Veeam Software Appliance. This appliance runs the core Veeam platform on Rocky Linux, hardened to DISA STIG security standards, and is designed as purpose-built, highly secure backup infrastructure. It aims to significantly enhance the protection of the backup environment itself, moving toward a “secure by default” delivery model. Veeam will manage all security patching for these appliances, offering forced updates on scheduled timelines, thereby reducing the burden on customers for maintaining server security.

Another key future innovation is the introduction of universal continuous data protection (CDP), extending beyond current VMware capabilities to support physical systems and various hypervisors, with future targets including hyperscalers. This aims to provide near-instant recovery point objectives (RPOs) down to two seconds across diverse environments. While Veeam already supports CDP via VMware’s VAIO Filter Driver, this new universal CDP will broaden its applicability across the entire ecosystem.

Finally, Veeam is exploring the integration of AI into its data fabric to unlock deeper insights from customer data, particularly for eDiscovery scenarios. This involves leveraging Veeam’s extensive backup data to enable rapid querying and analysis that would otherwise take significantly longer. While still in its early stages and pending a public statement on responsible AI, this initiative promises attractive future capabilities in data intelligence. Veeam offers flexible licensing through its universal license (VUL) model, which simplifies pricing across various workloads, and their top-tier Veeam CyberSecure offering includes comprehensive capabilities and a ransomware recovery warranty.


The Veeam Difference: Coveware by Veeam

Event: Security Field Day 13

Appearance: Veeam Presents at Security Field Day 13

Company: Veeam Software

Video Links:

Personnel: Emilee Tellez, Rick Vanover

Veeam’s product development and collaboration pace with security vendors is not just a differentiator; it’s a trust signal. Veeam has proven to innovate fast and integrate wide. This session highlights these integrations, the pace of iteration, and the breadth of the ecosystem. Coveware by Veeam, acquired in March 2024, significantly enhances Veeam’s in-house capabilities in ransomware incident response. Since 2018, Coveware has amassed a large database from supporting 50 to 100 ransomware cases monthly, allowing them to publish quarterly reports detailing threat actor tactics, techniques, and procedures (TTPs). This proactive intelligence helps organizations understand prevalent threats and implement preventative measures like patching, whitelisting, and enhanced due diligence.

Coveware provides a comprehensive incident response retainer service, including cyber extortion negotiation, cryptocurrency settlements, and decryption support, leveraging their extensive database of decryption tools and keys. They offer 24/7/365 response, typically engaging with organizations within 15 minutes, and partner with other incident response firms like CrowdStrike and Mandiant for specialized containment and eradication efforts. A key differentiator is Coveware’s patent-pending Recon Scanner, a forensic investigation tool deployed on impacted systems to collect logs and build attack timelines. This scanner highlights critical warnings and identifies malicious activity, brute-force attempts, data exfiltration, privilege escalation, and other behaviors indicative of threat actor movement within an environment.

The Recon Scanner’s output, including detailed attack timelines, helps organizations understand the progression of an incident. While its primary use is during an active incident, its ability to uncover historical malicious activity that may have bypassed other security tools makes it a powerful forensic asset. Veeam emphasizes that while they do not advocate paying ransoms, Coveware’s negotiation expertise often focuses on buying time for recovery efforts rather than facilitating payments. This allows organizations to activate their incident response plans, communicate with stakeholders, and restore operations from clean backups. The continuous focus on education and best practices, like immutable backups and encryption passwords, is crucial for organizations to build resilience and improve their posture against evolving cyber threats.


Security Ecosystem at Veeam

Event: Security Field Day 13

Appearance: Veeam Presents at Security Field Day 13

Company: Veeam Software

Video Links:

Personnel: Emilee Tellez, Rick Vanover

Veeam’s product development and collaboration pace with security vendors is not just a differentiator; it’s a trust signal. Veeam has proven to innovate fast and integrate wide. This session highlights these integrations, the pace of iteration, and the breadth of the ecosystem. Veeam emphasizes its “power of three” strategy, extending beyond internal innovation to encompass robust partnerships with over 65 security vendors, including major players like Palo Alto, CrowdStrike, Splunk, and Sophos. This extensive ecosystem allows organizations to leverage their existing security investments by feeding information directly from Veeam’s data protection platform into their chosen security tools. The Veeam CyberSecure program, which includes advanced capabilities, incident response retainers, and a ransomware recovery warranty with zero claims to date, further underscores their commitment to data safety.

Veeam provides comprehensive monitoring and reporting through Veeam ONE, which monitors hypervisors, cloud workloads, and Microsoft 365 backup products. This critical data is fed into security partners’ platforms, offering insights into anomalies such as unusual data read-write rates or suspicious login attempts, enabling quicker threat notification. Veeam supports various event types, from malware detection to overall system overviews, making this information available via Syslog and JSON formats. This allows customers to filter events based on their needs and avoid alert fatigue, integrating seamlessly with any Security Information and Event Management (SIEM) tool, including free options. Notably, Veeam makes its documentation publicly accessible, reflecting its commitment to transparency and empowering users.
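Because the events arrive as structured JSON, the per-team filtering described here is straightforward to script. The sketch below uses hypothetical field names (`eventType`, `severity`); the actual schema is defined in Veeam's public documentation:

```python
import json

def filter_events(raw_lines, wanted_types):
    """Keep only the event types a given SOC team has subscribed to,
    discarding the rest to avoid alert fatigue. Field names here are
    illustrative assumptions, not Veeam's documented schema."""
    kept = []
    for line in raw_lines:
        event = json.loads(line)
        if event.get("eventType") in wanted_types:
            kept.append(event)
    return kept

feed = [
    '{"eventType": "MalwareDetection", "object": "sql01", "severity": "high"}',
    '{"eventType": "JobCompleted", "object": "sql01", "severity": "info"}',
]
print(filter_events(feed, {"MalwareDetection"}))
```

The same filter could be expressed inside most SIEMs' ingest pipelines instead; the point is that filtering happens on the consumer's side, so each team sees only what it asked for.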

A key aspect of Veeam’s integration strategy is its recent collaboration with CrowdStrike, offering dashboards for data protection monitoring and security events within the CrowdStrike platform. These pre-built dashboards provide a high-level overview of security events within the Veeam environment, allowing users to drill down for detailed information. Furthermore, Veeam’s integration with Palo Alto XSOAR enables automated playbooks, such as initiating instant VM recovery or deploying security agents on compromised machines. This bidirectional communication helps orchestrate responses across data protection and security operations, enabling security analysts to build customized workflows, even without direct experience with Veeam’s application, as demonstrated by a customer who leveraged Veeam events in Splunk to drive Palo Alto XSOAR automations.


Security Innovations at Veeam

Event: Security Field Day 13

Appearance: Veeam Presents at Security Field Day 13

Company: Veeam Software

Video Links:

Personnel: Emilee Tellez, Rick Vanover

Veeam has delivered true security capabilities in the platform, both to protect the Veeam installation itself and to identify threats in the data they are safeguarding. Veeam has been developing security features and enhancements for its platform, starting with instant virtual machine recovery and extending into proactive threat hunting. Key innovations include the Veeam Data Platform 12.1, which introduced a threat center, AI-based inline malware detection, and proactive threat hunting capabilities. The acquisition of Coveware further strengthened Veeam’s incident response capabilities, providing expertise in ransomware negotiation and proactive incident planning.

Veeam’s security innovations focus on both protecting the Veeam environment and identifying threats within the protected data. Threat Hunter provides signature-based scans of backups, while AI-based inline detection scans data streams for anomalies. Indicators of Compromise (IOC) analysis identifies known attacker toolkits, and suspicious file activity analysis examines unusual file behavior. Veeam also offers security and compliance analyzers to ensure best practices in data protection and infrastructure security, including MFA and four-eyes authorization. These features aim to provide a multi-layered approach to security, addressing threats both during and after the backup process.

To facilitate incident response, Veeam offers an Incident API, enabling bi-directional communication between security tools and the Veeam platform. This allows for automated actions, such as creating out-of-band backups when a security tool detects an active attack. Veeam’s Threat Center provides a high-level overview of the security status of the data protection environment, while the Data Platform Scorecard assesses overall resilience and adherence to best practices. Veeam also integrates with security ecosystems, allowing customers to leverage their existing security investments. This comprehensive approach aims to minimize data loss and accelerate recovery in the event of a security incident.
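As a concrete illustration of this bi-directional flow, a SIEM playbook might report a detection to the backup server and request an out-of-band backup in the same call. Everything below (field names, the `createBackup` flag, the tool name) is a hypothetical sketch, not Veeam's documented Incident API contract:

```python
import json

def build_incident_event(machine_fqdn, details):
    """Assemble the JSON body a security tool could POST to the backup
    server when it detects an active attack. All field names here are
    hypothetical; consult the vendor's API reference for the real schema."""
    return {
        "machine": {"fqdn": machine_fqdn},
        "details": details,            # free-text description of the detection
        "engine": "ExampleSIEM",       # hypothetical name of the reporting tool
        "createBackup": True,          # request an immediate out-of-band backup
    }

body = build_incident_event("web01.corp.local", "ransomware-like encryption burst")
print(json.dumps(body, indent=2))
```

The design point is that the security tool never touches backup infrastructure directly; it only reports what it saw, and the backup platform decides how to act on it.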


Have You Seen Veeam Lately?

Event: Security Field Day 13

Appearance: Veeam Presents at Security Field Day 13

Company: Veeam Software

Video Links:

Personnel: Emilee Tellez, Rick Vanover

Veeam is the #1 global market leader in data resilience. Veeam solutions are purpose-built for powering data resilience by providing data backup, data recovery, data portability, data security, and data intelligence. Veeam, a company with over $1.7 billion in revenue and 5,500 employees globally, has significantly expanded its portfolio beyond its origins as a VMware backup tool. They now offer a comprehensive suite of solutions across on-premises, as-a-service, and hybrid models, protecting over 150 different data types. A key recent development is the Veeam CyberSecure offering, built on the acquisition of Coveware, which is central to their enhanced data security capabilities. Veeam emphasizes that their current offerings represent a significant evolution from the Veeam many people once knew, extending far beyond virtual machine backup to encompass a vast array of data protection needs, including being the most deployed Microsoft 365 backup solution worldwide.

Veeam’s approach to data security is structured around three core pillars: innovation, ecosystem partnerships, and incident response expertise through Coveware. In terms of innovation, Veeam is integrating new security technologies into its products to accelerate mean time to detection and response, ensuring that critical information is readily available to those responding to security events. Their commitment to a strong security ecosystem is demonstrated by alliances with over 65 different security vendors, including major players like CrowdStrike, Palo Alto, and Splunk, acknowledging that organizations have already invested significantly in diverse cybersecurity solutions. This collaborative approach allows Veeam to complement existing security infrastructures rather than attempting to replace them.

The acquisition of Coveware is a cornerstone of Veeam’s data security strategy, particularly for incident response. Coveware is recognized for its extensive data aggregation related to threat actor decryption keys, which is crucial for recovering from ransomware incidents. Beyond their technological prowess, Coveware brings a team of experienced negotiators and cool-headed professionals who assist organizations in navigating the complexities of ransomware incidents and payment negotiations. This unique blend of technology innovation, strategic partnerships, and specialized human expertise positions Veeam as a comprehensive data resilience provider, focused on keeping “good data safe from bad things” and supporting organizations throughout the entire incident response lifecycle.


Microsoft Security Copilot Conditional Access Optimization Agent

Event: Security Field Day 13

Appearance: Microsoft Security Presents at Security Field Day 13

Company: Microsoft Security

Video Links:

Personnel: Nick Goodman

This session explores the evolution and capabilities of Microsoft Security Copilot, focusing on how it’s transforming security operations. Microsoft Security Copilot operates as a unified platform, providing a consistent user experience across its various agents and underlying products. Key features like transparent decision trees, identity and RBAC management, and human-in-the-loop design principles are common across all agents, ensuring that users retain control and can audit AI-driven actions. The Conditional Access Agent, for instance, autonomously scans policies and recommends changes to ensure they align with the current state of the business, enabling rapid updates to security posture and reducing the risk window from months to minutes or hours.

The system incorporates robust guardrails, allowing organizations to control agent operations, particularly concerning new users and applications, and to apply custom natural language instructions to tailor agent behavior. This ensures that AI-generated policy recommendations are balanced with human oversight and business context. Users can also provide feedback to the agents, which directly influences their future reasoning and decision-making, akin to training a new human employee. This continuous learning mechanism is crucial for the AI to adapt to an organization’s specific nuances and improve its effectiveness over time.

While agents are designed to handle resource-intensive tasks like triaging user-submitted phishing emails, the generative AI component is not intended for real-time, high-volume inline processing due to its computational demands. Instead, Microsoft focuses on applying AI where it can most significantly augment human efforts, such as automating time-consuming and low-value tasks. The platform aims to provide clear metrics like resolution rates and time to triage, allowing organizations to assess the economic value of deploying these agents. Furthermore, Microsoft is committed to expanding integrations with third-party data sources and partners, empowering agents to leverage a broader ecosystem of security tools and data, and ultimately enabling customers to build more comprehensive and adaptive security workflows.


Microsoft Security Introducing Security Copilot Agents

Event: Security Field Day 13

Appearance: Microsoft Security Presents at Security Field Day 13

Company: Microsoft Security

Video Links:

Personnel: Nick Goodman

This session explores the evolution and capabilities of Microsoft Security Copilot, focusing on how it’s transforming security operations. Microsoft Security Copilot has evolved to incorporate AI agents, offering a fundamentally different approach to security tasks compared to traditional automation. These agents dynamically plan, reason, and execute tasks, adapting their approach as new information emerges, much like human analysts. This capability has already shown significant benefits, with security teams using Security Copilot reporting incident response times that are approximately 30% faster. The platform is designed to be an ecosystem, with 13 active agents, six developed by Microsoft and seven by partners, demonstrating a commitment to partner integration and extending AI capabilities across the Microsoft Security Suite.

One notable Microsoft-developed agent is the phishing triage agent, designed to address the overwhelming volume of user-reported phishing incidents. This agent autonomously triages these submissions, analyzing email content, threat intelligence data, and links to determine if an email is genuinely malicious or benign. This frees up human analysts from mundane tasks, allowing them to focus on true threats. The agent learns from human feedback, enabling it to adapt to specific business contexts and improve its accuracy over time. This active learning mechanism, where administrators can provide feedback to the agent, ensures that the AI’s reasoning process is continuously refined, addressing scenarios where the AI might initially misclassify an email due to a lack of organizational-specific knowledge.

Beyond phishing triage, Microsoft Security Copilot includes agents for data loss prevention and insider risk management, which leverage generative AI to classify documents and assist privacy analysts in reviewing alerts. The Conditional Access Agent helps organizations maintain up-to-date security policies by constantly reviewing and suggesting adjustments to conditional access policies, significantly reducing the risk window caused by policy drift. The vulnerability intelligence agent automates the process of reading vulnerability reports, assessing device estates (specifically Windows endpoints), and recommending patching groups in Intune. Lastly, the threat intelligence briefing agent provides organizations with customized reports on cyber threats and vulnerabilities relevant to their specific profile, empowering analysts and organizations that may lack dedicated threat intelligence teams. These agents are designed to integrate seamlessly into existing workflows, enhancing efficiency and enabling security teams to focus on higher-value activities.


Futurum Research Presents Cybersecurity Trends with Fernando Montenegro

Event: Security Field Day 13

Appearance: Futurum Research Presents Cybersecurity Trends at Security Field Day 13

Company: The Futurum Group

Video Links:

Personnel: Fernando Montenegro

Futurum Research acts as an information broker, connecting technology buyers, sellers, investors, and other stakeholders to provide decision support and insights into the cybersecurity landscape. Their research, led by Fernando Montenegro and with contributions from analysts like Krista Case, encompasses both qualitative and quantitative methods, including a recent survey of over 800 decision-makers across various global markets. This survey, conducted between February and April 2025, focused on understanding organizational changes and perspectives on different cybersecurity fields, with a significant emphasis on senior leadership.

The research identified four major trends shaping cybersecurity in 2025: the pervasive influence of AI, the expanding and increasingly complex attack surface, a significant move towards security platforms, and the evolution of data protection into broader resilience strategies. Organizational trends indicate that cybersecurity is gaining executive visibility, with frequent reporting to senior leadership and a notable increase in security budgets driven by modernization efforts, risk management strategies, and regulatory compliance. When evaluating vendors, product effectiveness and capabilities remain paramount, but total cost of ownership and integration with existing tools are increasingly critical factors. The survey also highlighted challenges in vendor evaluation due to the crowded and noisy cybersecurity marketplace.

Key findings across specific cybersecurity domains reveal several insights. Cloud security incidents and data breaches were the most reported incidents, leading to data loss and operational downtime. In application security, talent shortages and legacy application debt are major challenges, with application development teams often leading security efforts. Cloud security sees stronger ownership by security teams in multi-cloud environments, with a preference for cloud provider-native security solutions. Data security initiatives are increasingly leveraging AI/ML for threat detection and focusing on data security posture management. Endpoint security remains stable, primarily providing telemetry for security operations, while identity management is a “new hotness,” especially concerning non-human identities and rising costs. Risk management and security operations are becoming the central nervous system for modern security, with a focus on improving context derivation and incorporating cloud security into SecOps. Network security emphasizes automation, NDR, and micro-segmentation for zero-trust implementation. Lastly, while optimism about AI’s role in security is high, Futurum stresses the need for education regarding AI’s actual capabilities and limitations in replacing human analysts.


Dell Technologies Infrastructure Security with Steve Kenniston

Event: Security Field Day 13

Appearance: Dell Technologies Presents at Security Field Day 13

Company: Dell Technologies

Video Links:

Personnel: Adam Miller, Steve Kenniston

Having a secure and resilient infrastructure gives organizations the confidence they need to innovate. Dell helps organizations stay safe and secure, today and into the future, through a comprehensive security strategy across three core pillars: modern workspace (PCs), modern data center (storage, servers, data protection, networking, HCI), and AI. This holistic approach, known as the Dell Technology Advantage (DTA), integrates security and sustainability across all three components. A dedicated development organization within Dell focuses on creating consistent security capabilities across their entire portfolio, aiming to reduce tool sprawl and provide a unified management experience for customers, including consistent operating systems across appliance solutions for predictable security implementation.

Dell’s infrastructure security strategy aligns with a “reduce attack surface, detect and respond, and recover” framework. To reduce the attack surface, Dell’s servers incorporate features like system lockdown, signed firmware updates, and dynamic USB control, while networking solutions leverage cryptography and secure authentication. For detection and response, features like iDRAC on servers and BIOS live scanning are used to continuously monitor for changes and send notifications upon physical chassis intrusion. In terms of recovery, Dell ensures valid recovery points, scans data before recovery, and offers capabilities like scanning snapshots on primary storage for early threat detection and quicker recovery of business-critical data, complementing their data protection solutions with immutable vaults and isolation.

Dell also emphasizes a zero-trust approach, building capabilities into each solution set to support customers in creating zero-trust environments. While they clarified that “certification” by the Department of Defense is better termed “validation,” Dell’s Project Zero architecture adheres to the DOD’s zero-trust guidelines, having undergone testing and validation against their COA3 for on-prem infrastructure. This validation process involved implementing hardware that the DOD could pen-test and validate against various security controls. Additionally, Dell has partnered with CrowdStrike to enhance threat detection within backup environments, identifying over 70 types of attacks and sending actionable intelligence to SIEMs, thus shifting from reactive incident response to proactive detection and providing comprehensive recovery services through their integrated support and engineering teams.


Dell Technologies AI Security with Arun Krishnamoorthy

Event: Security Field Day 13

Appearance: Dell Technologies Presents at Security Field Day 13

Company: Dell Technologies

Video Links:

Personnel: Adam Miller, Arun Krishnamoorthy

AI has become one of the hottest topics in IT. Learn how Dell can help you make sure that as you deploy AI solutions, you do it in a secure manner.

Dell Technologies emphasizes that security is a critical aspect of their journey in accelerating customer outcomes, whether through private and hybrid cloud solutions or advancements in AI. They have established the Dell AI Factory to mass-produce AI solutions at scale with high quality and efficiency, bringing together Dell’s infrastructure, including AI PCs and data center components (compute, storage, GPU-enabled with partners like Nvidia, Intel, and AMD), along with an ecosystem of AI-enabled partners. This comprehensive approach aims to help customers accelerate their AI innovation and achieve faster time to market, recognizing security as a day-zero conversation for successful AI deployment.

Dell highlights the evolving landscape of AI, from traditional AI to generative AI and the emerging agentic AI. With agentic AI, applications will increasingly think for themselves and exercise judgment with minimal human intervention, posing significant security challenges. To address these evolving risks, Dell advocates for a cross-functional architectural approach involving IT, business, data, and security teams from the outset. They stress the importance of organizing and securing data, which fuels AI models, and implementing robust governance. The company is developing an architecture to secure AI deployments, from model training and data organization to runtime environments on-premises, in the cloud, or on AI PCs, acknowledging the shift of AI use cases to the edge.

Dell’s security strategy for AI focuses on making security and resilience an architectural design choice, providing services like strategic advisory, implementation, and continuous threat management. They offer a virtual CISO for AI, data security posture assessment to identify and reduce AI-related risks like data poisoning and prompt injection, and managed security services, including managed detection and response (MDR). Their MDR service provides full-stack visibility, proactively monitoring infrastructure, data protection environments, and cloud/container levels for threats. Dell is also partnering to develop an “AI proxy” or “AI firewall” for deep prompt-level inspection, compliance violation assessment, and malicious code detection, and offers penetration testing against OWASP Top 10 AI vulnerabilities, emphasizing a proactive and collaborative approach to securing AI implementations.


Dell Technologies Endpoint Security

Event: Security Field Day 13

Appearance: Dell Technologies Presents at Security Field Day 13

Company: Dell Technologies

Video Links:

Personnel: Adam Miller, Justin Vogt

When it comes to security at the endpoint, we will discuss how Dell helps to keep your organization safe.

In this session, Dell Technologies presented a deep dive into their commercial PC endpoint security capabilities. Justin Vogt and Adam Miller outlined how Dell’s approach spans secure manufacturing, below-the-OS threat detection, hardware-based credential protection, and direct integrations with industry-leading security tools. Their security strategy is rooted in building a trusted platform from the supply chain up, with controls like Dell’s SafeBIOS for firmware verification, SafeID for dedicated credential storage, and tamper-evident delivery methods. These security layers are automatically embedded in Dell commercial devices like Dell Pro and Dell Pro Max, which are built with telemetry and validation mechanisms that detect firmware tampering and alert security teams proactively.
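To illustrate the general idea behind firmware verification against known-good measurements, the sketch below compares a SHA-256 digest of a captured firmware image to a published baseline. This is a minimal, hypothetical illustration of the technique, not Dell's actual SafeBIOS implementation; the version labels, sample blobs, and function names are invented for the example.

```python
import hashlib

def sha256_hex(blob: bytes) -> str:
    """Return the SHA-256 digest of a firmware blob as hex."""
    return hashlib.sha256(blob).hexdigest()

# Hypothetical known-good measurement table, as a vendor might publish
# per BIOS release. The digest here comes from the sample blob below,
# not from any real firmware image.
GOLDEN_BIOS = b"BIOS v1.14.0 reference image"
BASELINE = {"1.14.0": sha256_hex(GOLDEN_BIOS)}

def verify_firmware(version: str, captured_image: bytes) -> bool:
    """Compare a captured firmware image against the known-good baseline.

    True means the measurement matches; False would be surfaced to the
    security team as a potential tampering alert.
    """
    expected = BASELINE.get(version)
    if expected is None:
        return False  # unknown version: treat as untrusted
    return sha256_hex(captured_image) == expected
```

In practice the measurement would be taken out-of-band (off-host), so a compromised operating system cannot falsify the result, which is the property the presentation attributes to Dell's below-the-OS approach.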

Dell also emphasized integration with silicon partners, especially Intel, to gain visibility into memory at the hardware level, bypassing potentially compromised operating systems. Unique capabilities such as off-host firmware verification and BIOS indicators of attack give organizations the ability to detect abnormal behavior even before standard security tools are active. Beyond detection, Dell helps customers with incident response by enabling forensic data capture and supporting secure recovery. These capabilities are made actionable through direct integrations with platforms like Microsoft Intune, CrowdStrike, and Absolute, so customers can manage and respond to threats using tools they already know. All this is offered without additional cost, and Dell provides both a hardware and software bill of materials to ensure transparency and trust in every device.

Complementing their built-in security, Dell offers optional managed services to help customers who may not have mature security operations. Their customer security operations center can monitor hardware-level data and support incident response and recovery, even in non-Dell environments. With a focus on reducing mean time to detect and respond, Dell positions itself not only as a hardware vendor but as a security partner. The presentation highlighted Dell's ethos of proactive defense, transparency, and support, driven by both product capabilities and service expertise. Customers can rely on Dell for end-to-end security, from foundational hardware assurance to advanced telemetry and threat remediation.


Dell Technologies Security Strategy Overview

Event: Security Field Day 13

Appearance: Dell Technologies Presents at Security Field Day 13

Company: Dell Technologies

Video Links:

Personnel: Adam Miller, Sameer Shah, Steve Kenniston

This session will cover Dell’s overall strategy for cybersecurity, including reducing the attack surface, detecting and responding to cyber threats, and recovering from a cyberattack.

In this presentation at Security Field Day 13, Dell Technologies provided a comprehensive overview of their cybersecurity strategy, emphasizing the concept of “advanced cybersecurity maturity.” The speakers explained that Dell aims to help customers progress in their cybersecurity posture by embedding security from the ground up—starting with the supply chain and extending through endpoints, infrastructure, and services. With the rise of AI, the attack surface has expanded, and bad actors have gained access to more sophisticated tools. In response, Dell focuses on three strategic pillars: reducing the attack surface, detecting and responding to threats, and ensuring rapid recovery. Their approach includes building secure hardware (claiming the most secure commercial and AI PCs), utilizing industry-vetted security partnerships, and providing services that align with the Zero Trust model.

Dell’s reduction of the attack surface involves enforcing strong cyber hygiene, emphasizing practices such as regular patching, encryption, multi-factor authentication, and network segmentation. The company views cybersecurity as not only technological but procedural, with detection strategies that incorporate AI/ML-enhanced XDR tools and, where possible, managed detection and response (MDR) services. Their hardware is designed with security baked in, including features like firmware verification and behavioral detection at the BIOS level. They also stress the importance of monitoring, both with built-in capabilities and via partner integrations. Dell aims to give customers not only preventative tools but also detection systems that assume eventual breach, shifting focus toward resilience.

Recovery, as Dell sees it, is the ultimate goal and involves both technology and process readiness. Their solutions include air-gapped cyber vaults and services that support incident response planning, communications, and business continuity. For Zero Trust, Dell supports two paths: an incremental uplift approach for customers with existing investments, and Fort Zero, a pre-certified private cloud solution aligned with the U.S. Department of Defense’s Zero Trust architecture. Fort Zero integrates hardware and software from vetted partners and is delivered as a turnkey system. Dell also supports brownfield environments through consulting services and maturity-based architecture planning. Overall, the presentation framed cybersecurity not as a static goal but as a continual process of improvement, which is rooted in practical frameworks, backed by technology, and supported by Dell’s ecosystem of services and partners.


Driving AI-Powered Analytics with Mike Potter of Qlik

Event: Tech Field Day Experience at Qlik Connect 2025

Appearance: Tech Talks at Qlik Connect 2025

Company: Qlik

Video Links:

Personnel: Mike Potter, Stephen Foskett

In this interview at Tech Field Day during Qlik Connect 2025, Stephen Foskett interviews Mike Potter, Chief Technology Officer at Qlik, discussing the company’s latest advancements in integrating AI into its analytics platform. Potter emphasizes that Qlik’s vision is to manage the entire data lifecycle—from ingestion and transformation to analytics and actionable insights. Qlik’s introduction of the new agentic framework and enhancements such as intelligent data cataloging and business glossary automation are designed to help users turn complex, unstructured data into structured insights. The goal is to shift the focus from technical hurdles to business value by automating routine tasks and creating a more governed and scalable data environment.

A major challenge Potter highlights is that while organizations often have the intelligence they need, they struggle with executing on that information in real-time and at scale. He explains how Qlik’s AI-driven tools—such as Qlik Answers, which allows users to query data in natural language—are democratizing access to analytics. By equipping non-technical users with capabilities traditionally limited to data specialists, Qlik transforms decision-making across entire enterprises. These tools not only facilitate quicker insights but also align with enterprise needs for reliable, referential data by blending deterministic analytics with generative AI to strengthen context and relevance.

Potter also reframes the ongoing debate about cloud adoption, pointing out that most organizations are no longer deciding if they will move to the cloud—but rather how fast they can get there without sacrificing their existing investments. Qlik’s partnership-driven ecosystem, support for open standards like Apache Iceberg, and recent acquisitions further enable seamless cloud migration and integration with both cloud-native and legacy systems. With flexible architecture and strategic alliances with providers like AWS, Qlik ensures customers can innovate on their terms while maintaining agility and enterprise-grade governance.


Olawale Oladehin on Advancing Enterprise AI Adoption with AWS and Qlik

Event: Tech Field Day Experience at Qlik Connect 2025

Appearance: Tech Talks at Qlik Connect 2025

Company: AWS, Qlik

Video Links:

Personnel: Stephen Foskett, Wale Oladehin

At Qlik Connect 2025, Olawale “Wale” Oladehin of AWS reflected on the progress made since the AWS-Qlik collaboration was initiated, emphasizing shared goals in advancing enterprise AI adoption and aligning around open standards, scalability, and governance. Wale highlighted how joint customers benefit from Qlik’s strength in data integration, movement, and quality, paired with AWS’s robust infrastructure and AI capabilities. The partnership ensures enterprises can easily scale AI workloads with reliability while leveraging both platforms for better data-driven insights. One of the key developments they discussed was the shared commitment to open frameworks like Apache Iceberg, which boost interoperability and reduce vendor lock-in—a vital factor for modern analytics and AI workloads.

Wale also explored how AWS and Qlik are delivering customer confidence in generative AI use cases through tools like Amazon Bedrock. These technologies foster responsible AI usage by incorporating features like agent orchestration, LLM guardrails, and governance layers to help prevent misinformation such as hallucinations. The interview underscored how customers, particularly in regulated industries like finance and pharmaceuticals, are rapidly adopting gen AI due to already having solid foundations in data security and compliance. Wale emphasized that AWS builds backward from customers’ needs, ensuring they apply AI solutions appropriate to their business goals, whether building custom models, implementing agents, or utilizing fully managed services for immediate productivity boosts.

The discussion also touched on broader industry shifts, notably the normalization of cloud infrastructure. Wale commented that today’s enterprises not only trust the cloud but view it as a default platform, expecting seamless SaaS and infrastructure-level services. This shift is reflected in AWS and Qlik’s integration strategies, which deliver flexibility through cloud-native but hybrid-compatible solutions. At his Tech Field Day presentation, Wale elaborated on these themes, showing demos and discussing the practical application of AI on AWS for Qlik users.


Agentic AI and the Future of Science — A Conversation with Michael Bronstein and Nick Magnusson at Qlik Connect 2025

Event: Tech Field Day Experience at Qlik Connect 2025

Appearance: Tech Talks at Qlik Connect 2025

Company: Qlik

Video Links:

Personnel: Michael Bronstein, Nick Magnuson, Stephen Foskett

At Qlik Connect 2025, Stephen Foskett interviewed Michael Bronstein, DeepMind Professor of Artificial Intelligence at the University of Oxford, and Nick Magnuson, Head of AI at Qlik, about the transformative potential of agentic AI in science and enterprise. Bronstein highlighted how agentic AI may represent a seismic shift in the scientific method, moving beyond traditional roles in simulation and prediction to now participating in creative hypothesis generation—a realm historically reserved for human ingenuity. This evolution positions AI as not just a tool but a fellow innovator, potentially capable of reaching milestones like Nobel Prize-worthy contributions. While the impact in experimental sciences faces challenges due to the messiness of real-world labs, fields like mathematics and software development—where elements are inherently digital—might more quickly benefit from AI’s capabilities.

Magnuson elaborated on Qlik’s role in enabling agentic AI across industries through scalable analytics and data infrastructure. Qlik is actively working to address the complexity of querying massive data lakes, especially as AI systems demand broader, multimodal, and high-velocity datasets. This adaptation aligns with the shift towards machine-centric data generation and processing, emphasizing how data and AI models must evolve in tandem. Magnuson also noted that synthetic data is increasingly prevalent and necessary for training AI agents capable of exploring previously unapproachable scenarios at scale. Nonetheless, challenges connected with the trustworthiness and verification of such data remain critical.

The interview concluded with a discussion about the deeper implications of agentic AI creating new paradigms in both science and enterprise. Bronstein suggested that scientific data collection itself may need to evolve, moving towards formats interpretable by AI but potentially opaque to humans. Meanwhile, Qlik’s innovations aim to support this transition by developing infrastructure capable of handling such complex, varied, and massive-scale data input. As the use of agents grows, particularly in autonomous exploration and decision-making, enterprises must not only consider technical capabilities and applications but also ethical and regulatory implications. These developments advance the broader conversation about co-evolution between AI and scientific inquiry and reaffirm the necessity of continued, rigorous interdisciplinary collaboration.


Advancing Data Integration and Analytics with Sam Pierson and Ori Rafael of Qlik

Event: Tech Field Day Experience at Qlik Connect 2025

Appearance: Tech Talks at Qlik Connect 2025

Company: Qlik

Video Links:

Personnel: Ori Rafael, Sam Pierson, Stephen Foskett

At Tech Field Day during Qlik Connect 2025, Stephen Foskett interviewed Sam Pierson, SVP of R&D for the Data Business Unit at Qlik, and Ori Rafael, former CEO and co-founder of Upsolver and now Senior Director of Engineering at Qlik, about the major announcement of Qlik’s Open Lakehouse. This new offering aims to bridge the gap between unstructured data lakes and the highly governed, structured world of data warehouses. Built on open standards like Apache Iceberg, the Open Lakehouse allows Qlik to deliver scalable, performant, and cost-efficient data management that makes data easier to access, transform, and analyze. As the industry sees a shift from mere data storage to true data usability across diverse environments, Qlik sets itself apart by embedding the openness and flexibility that enterprises now demand.

Ori provided valuable insight into how Upsolver’s technology is enhancing Qlik’s data ecosystem. Originally created to simplify Big Data workloads, Upsolver built a declarative, low-engineering approach to ingesting and managing massive datasets. With the integration into Qlik, the capabilities of Upsolver now power the Open Lakehouse, turning what was typically a data engineering bottleneck into a user-friendly and performant experience. Ori emphasized how Upsolver solves the “last mile” challenge in data lakes — turning raw, complex data into consumable assets without the overhead typically associated with Hadoop or similar systems. This evolution allows smaller datasets to be managed with the same agility, leading to a universal platform for beginners and advanced users alike.

Sam highlighted how Upsolver’s ingestion performance and native integration with technologies like Apache Iceberg align strongly with Qlik’s goals and existing offerings, such as the Qlik Talend Cloud. The acquisition has been particularly beneficial in scaling their data integration efforts, improving connectors—especially for more complex systems like SAP and mainframes—and supporting seamless interoperability with key cloud partners like AWS. This alignment of vision and strategy between the two companies has rapidly accelerated product development, with early access programs already in motion. Combining Qlik’s extensive analytics and AI tools with Upsolver’s robust ingestion engine offers a compelling package for customers seeking flexible, open, and high-performance data solutions.


Responsible and Inclusive AI Innovation in Analytics with Mary Kern and Rumman Chowdhury

Event: Tech Field Day Experience at Qlik Connect 2025

Appearance: Tech Talks at Qlik Connect 2025

Company: Qlik

Video Links:

Personnel: Mary Kern, Rumman Chowdhury, Stephen Foskett

In this interview from Tech Field Day Experience at Qlik Connect 2025, Stephen Foskett interviews Mary Kern, VP of Analytics Go-To-Market at Qlik, and Rumman Chowdhury, CEO of Humane Intelligence and a Qlik AI Council member, about the state of AI in today’s analytics landscape. They discuss the paradoxes of 2025, where the rise of powerful centralized AI platforms coincides with a growing open-source movement, enabling wider global access. Rumman highlights the democratizing force of open-source AI, underscoring how inclusive engagement and grassroots participation are essential for future innovation. Mary adds that while AI promises efficiency and transformation, enterprises continue to grapple with responsible implementation and trust in these evolving technologies.

As both leaders emphasize, AI’s impact is most meaningful when it enhances accessibility, enabling users across varying skill levels and geographies to engage with analytics tools more intuitively. Language models acting as user interfaces make complex tools more approachable, especially for users without technical backgrounds or those with disabilities. By leveraging natural language processing and multi-language support, AI can elevate users’ performance and confidence in decision-making, making business intelligence more powerful and human-centric across cultures. Mary notes that the future of analytics isn’t about AI replacing humans but about enabling broader, better performance powered by data-driven insights.

The conversation also delves into the cultural nuances of AI deployment globally. Rumman raises concerns about bias in AI models when applied across different societies and languages. She explains the importance of culturally aware AI, citing her organization’s joint work with ASEAN to rigorously test models for multicultural bias. Mary reflects on how Qlik builds diverse perspectives into their development process, ensuring AI models are not only useful but trustworthy and aligned with enterprise needs. This intentional approach—with baked-in trust, bias monitoring, and global sensitivity—demonstrates Qlik’s commitment to responsible AI integration in analytics at scale.


Qlik Connect 2025 Delegate Roundtable on Agentic AI

Event: Tech Field Day Experience at Qlik Connect 2025

Appearance: Tech Field Day Delegate Roundtable at Qlik Connect 2025

Company: Qlik

Video Links:

Personnel: Stephen Foskett

The Tech Field Day Delegate Roundtable at Qlik Connect 2025 brought together key thought leaders to discuss the rapid evolution and practical implications of agentic AI in the enterprise data ecosystem. With agentic AI emerging as a prominent theme for this year, delegates evaluated its current state, its impact on organizations, and Qlik’s approach to integrating AI-centric workflows into their platform.

Throughout the roundtable, panelists acknowledged that while agentic AI—AI-driven workflows composed of autonomous or semi-autonomous agents—has gained significant marketing traction, its implementation in production environments remains aspirational for most enterprises. They praised Qlik for its data-first approach, recognizing the company’s efforts to merge structured and unstructured data as a foundational step toward enabling more intelligent AI-based decisions. However, skepticism remained about the readiness of many organizations to deploy such systems, given the ongoing challenges with data quality, observability, and trustworthiness in AI outputs. The importance of foundational data governance and automation was emphasized, with clarity needed between traditional scripting, automation, and true AI-enabled agentic systems.

Delegates discussed the philosophical and technical nuances of what qualifies as agentic, highlighting that many current use cases are partially agentic at best, often relying on conventional automation or rules-based logic augmented by AI components. They advocated a pragmatic view that, while agentic AI is real and being pursued, industrial-scale implementation still hinges on getting core data practices right: cleaning and organizing legacy systems, ensuring output trust, and creating frameworks that allow controlled evolution of AI agents. The group agreed that Qlik is moving the ecosystem in the right direction but stressed that success will depend on keeping expectations grounded and distinguishing aspirational demos from deployable solutions.