Introducing the New SonicWall Cybersecurity Platform

Event: Security Field Day 12

Appearance: SonicWall Presents at Security Field Day 12

Company: SonicWall

Personnel: Chirag Saxena

SonicWall has transformed from a traditional network security vendor into a comprehensive, modern cybersecurity platform provider. Chirag will introduce the new SonicWall platform and detail its adoption by our partners and customers.


Dell’s Cybersecurity Strategy for the Future

Event: Security Field Day 12

Appearance: Dell Technologies Presents at Security Field Day 12

Company: Dell Technologies

Personnel: Steve Kenniston

In this wrap-up, Steve Kenniston discusses how Dell is building a holistic cybersecurity solution for the future that embraces key technology advantages. Included is a discussion of post-quantum cryptography and how Dell will implement these new standards.


How Dell Technologies Enables Recovery from a Cyberattack

Event: Security Field Day 12

Appearance: Dell Technologies Presents at Security Field Day 12

Company: Dell Technologies

Personnel: Brian White

In this session, Sr. Consultant Brian White will discuss how Dell Technologies helps elevate customers’ cyber resilience strategies to preserve the confidentiality, integrity, and availability of data and ensure recoverability in the event of a cyberattack.


Detect and Respond to Threats with Dell Technologies

Event: Security Field Day 12

Appearance: Dell Technologies Presents at Security Field Day 12

Company: Dell Technologies

Personnel: Adam Miller

In this session, Adam Miller will discuss Dell Technologies’ strategy to help customers quickly detect and respond to threats across their organization, exploring what we’re hearing from our customers and how solutions ranging from endpoints to managed services can help.


Reduce the Attack Surface with Dell Technologies

Event: Security Field Day 12

Appearance: Dell Technologies Presents at Security Field Day 12

Company: Dell Technologies

Personnel: Sameer Shah

During this section we will examine ways to minimize cybersecurity vulnerabilities and entry points. We will discuss strategies, best practices, and how Dell products and services help our customers advance these objectives.


Dell Technologies Cybersecurity Overview

Event: Security Field Day 12

Appearance: Dell Technologies Presents at Security Field Day 12

Company: Dell Technologies

Personnel: Steve Kenniston

In this session we will review Dell’s overall cybersecurity strategy from the ground up, starting with our Secure Supply Chain, continuing through our key practice areas (Reduce the Attack Surface, Detect and Respond to Cyber Threats, and Recover from a Cyberattack), and leading with zero trust principles throughout. We will outline how Dell approaches customers to help solve their most critical cybersecurity challenges.


DigiCert ONE Demo with Frank Agurto-Machado

Event: Security Field Day 12

Appearance: DigiCert Presents at Security Field Day 12

Company: DigiCert

Personnel: Frank Agurto-Machado

In the final portion of the presentation, we provide a demonstration of DigiCert’s Trust Lifecycle Manager product. This demo showcases how the platform streamlines the lifecycle of digital certificates, from issuance to expiration, across a variety of use cases. By illustrating key features like automated workflows, certificate tracking, and centralized management, attendees can see firsthand how the solution enhances security, reduces risks, and simplifies digital trust management in complex environments.


Establishing & Managing Trust with DigiCert

Event: Security Field Day 12

Appearance: DigiCert Presents at Security Field Day 12

Company: DigiCert

Personnel: Mike Nelson

In this portion of the event, we go over how DigiCert plays a crucial role in establishing and managing digital trust by issuing and overseeing the certificates and infrastructure that verify identities, encrypt communications, and secure online interactions. We delve deeper into how DigiCert helps mitigate security risks and fosters a trusted, transparent digital environment.


Introduction to DigiCert and Digital Trust

Event: Security Field Day 12

Appearance: DigiCert Presents at Security Field Day 12

Company: DigiCert

Personnel: Mike Nelson

This portion of the presentation provides an overview of who DigiCert is today, highlighting its current role as a global leader in digital trust. Here, we explore DigiCert’s evolution, its contributions to digital trust, and its commitment to future-proofing security in an increasingly digital world.


Simplify and Accelerate AI Adoption with Pure Storage Platform – Real-World Insights & Use Cases

Event: AI Data Infrastructure Field Day 1

Appearance: Pure Storage Presents at AI Data Infrastructure Field Day 1

Company: Pure Storage

Personnel: Robert Alvarez

In this presentation, Pure Storage outlines its approach to helping organizations meet their AI storage needs while accelerating and simplifying adoption of their AI initiatives. With the Pure Storage platform, organizations can maximize the performance and efficiency of AI workflows, unify data, simplify data storage management, and take advantage of a scalable AI data infrastructure.

Scaling AI workloads—including large language models (LLMs), retrieval augmented generation (RAG) pipelines, and computer vision applications—introduces practical challenges that extend far beyond theoretical storage capabilities. AI environments require vast amounts of data, and as models grow in size and complexity, traditional storage systems can struggle to keep up with the demands of AI training and inference. Based on real-world customer experiences, this presentation shares valuable insights into building efficient, scalable, and reliable storage infrastructures to support these complex AI pipelines. We’ll explore how organizations can address storage bottlenecks and optimize their systems to ensure seamless AI operations at an enterprise level.

Presented by Robert Alvarez – Consulting Solutions Architect, Pure Storage


Simplify and Accelerate AI Adoption with Pure Storage Platform – FlashBlade Internals

Event: AI Data Infrastructure Field Day 1

Appearance: Pure Storage Presents at AI Data Infrastructure Field Day 1

Company: Pure Storage

Personnel: Boris Feigin

In this presentation, Pure Storage outlines its approach to helping organizations meet their AI storage needs while accelerating and simplifying adoption of their AI initiatives. With the Pure Storage platform, organizations can maximize the performance and efficiency of AI workflows, unify data, simplify data storage management, and take advantage of a scalable AI data infrastructure. An essential cornerstone of the platform is Pure FlashBlade, a powerful scale-out storage solution specifically designed to meet the unique demands of AI workloads. It simplifies the integration and deployment of training and inference routines at all scales, and helps democratize AI for enterprises looking to accelerate their AI initiatives.

This section looks at the internal details of FlashBlade and Purity//FB, the software stack for FlashBlade. This part of the session focuses on the building blocks as well as the core architectural decisions that enable FlashBlade to shine in modern AI environments across many use cases. Central to Purity//FB’s success is its ability to handle massively parallel data processing, which is a crucial AI data infrastructure requirement for large-scale datasets. This part of the session will cover the modular design of Purity//FB, illustrating how its distributed architecture efficiently manages data flow.

Presented by Boris Feigin – Technical Director, FlashBlade Engineering, Pure Storage


Simplify and Accelerate AI Adoption with Pure Storage Platform – FlashBlade Overview

Event: AI Data Infrastructure Field Day 1

Appearance: Pure Storage Presents at AI Data Infrastructure Field Day 1

Company: Pure Storage

Personnel: Hari Kannan

In this presentation, Pure Storage outlines its approach to helping organizations meet their AI storage needs while accelerating and simplifying adoption of their AI initiatives. With the Pure Storage platform, organizations can maximize the performance and efficiency of AI workflows, unify data, simplify data storage management, and take advantage of a scalable AI data infrastructure.

An essential cornerstone of the platform is Pure FlashBlade, a powerful scale-out storage solution specifically designed to meet the unique demands of AI workloads. With a parallel data architecture and multi-dimensional performance, FlashBlade ensures minimal latency and high bandwidth to accelerate model training and reduce time to AI results.

This section introduces FlashBlade’s design tenets that make it so well suited for AI projects, and deep dives into the technology that enables Pure’s industry-leading energy efficiency and helps customers overcome data center power constraints in their AI build-outs. The discussion in this section also includes Pure’s DirectFlash module which communicates directly with raw flash to enable greater control, optimized performance, and reduced latency.

Presented by Hari Kannan – Lead Principal Technologist, Pure Storage


Agentic AI – A Look At the Future of Automation with Stephen Foskett

Event: AI Data Infrastructure Field Day 1

Appearance: Ignite Talks at AI Data Infrastructure Field Day 1

Company: The Futurum Group

Personnel: Stephen Foskett

Stephen Foskett considers agentic AI, a transformative approach to automation quite unlike popular generative AI models. Unlike traditional AI applications that rely on user inputs, agentic AI involves autonomous agents that act on behalf of users, automating various processes. These agents can either run continuously, collecting information and performing actions, or serve as the glue in business process automation tasks. Foskett emphasizes that agentic AI can handle unstructured inputs and generate well-formed outputs, making it a powerful tool for business process automation. He draws parallels to existing automation tools like Zapier and IFTTT but highlights that agentic AI goes a step further by incorporating a level of intelligence that can adapt to changing inputs and unexpected scenarios.

Foskett provides practical examples to illustrate the potential of agentic AI. He describes scenarios where these AI agents can manage complex tasks such as processing photos, sensors, or handling enterprise data like insurance audits. These agents can adapt to various data formats and incomplete information, making decisions and taking actions autonomously. For instance, an AI agent could convert a PNG file to a JPEG if needed or wait for an upload to complete before proceeding with the next steps. This adaptability makes agentic AI particularly valuable in fields like sales automation and cybersecurity, where the ability to respond to real-time data and adjust actions accordingly can significantly enhance efficiency and effectiveness.
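
A minimal sketch of the agent loop Foskett describes appears below; the tools, watched path, and goal logic are hypothetical illustrations of the pattern, not any specific product’s API:

```python
# Sketch of an agentic loop: the agent observes state, adapts to unexpected
# input, and acts autonomously. Tool names and logic are hypothetical
# illustrations of the pattern, not any specific product's API.
import time
from pathlib import Path
from PIL import Image  # pip install pillow

def upload_complete(path: Path, settle_seconds: float = 2.0) -> bool:
    """Heuristic: treat the upload as done once the file size stops changing."""
    if not path.exists():
        return False
    size = path.stat().st_size
    time.sleep(settle_seconds)
    return path.stat().st_size == size

def convert_png_to_jpeg(path: Path) -> Path:
    """Tool: normalize an image into the format the next step expects."""
    out = path.with_suffix(".jpg")
    Image.open(path).convert("RGB").save(out, "JPEG")
    return out

def process_image(path: Path) -> None:
    """Tool: the real work the agent is trying to get done."""
    print(f"processing {path.name}")

def run_agent(incoming: Path) -> None:
    # Loop until the goal is reached, choosing actions from observations
    # rather than following a fixed script.
    while not upload_complete(incoming):
        time.sleep(1)                             # wait out an in-progress upload
    if incoming.suffix.lower() == ".png":
        incoming = convert_png_to_jpeg(incoming)  # adapt to unexpected format
    process_image(incoming)

if __name__ == "__main__":
    run_agent(Path("drop/photo.png"))             # hypothetical watched location
```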

However, Foskett also addresses the challenges and ethical considerations associated with agentic AI. While these agents can act autonomously, they are not infallible and can still make errors or “hallucinate” incorrect data. This necessitates the implementation of guardrails to prevent costly mistakes. Additionally, there are ethical concerns about the promises these agents might make on behalf of businesses, which could lead to unintended commitments. Despite these challenges, Foskett is optimistic about the future of agentic AI, seeing it as a paradigm shift that will revolutionize customer service, operations, and business process automation. He believes that the rapid adoption of AI agent-based platforms is inevitable and will be the next significant wave in AI applications.


Optimizing Storage for AI Workloads with Solidigm

Event: AI Data Infrastructure Field Day 1

Appearance: Solidigm Presents at AI Data Infrastructure Field Day 1

Company: Solidigm

Personnel: Ace Stryker

In this presentation, Ace Stryker from Solidigm discusses the company’s unique value proposition in the AI data infrastructure market, focusing on their high-density QLC SSDs and the recently announced Gen 5 TLC SSDs. He emphasizes the importance of selecting the right storage architecture for different phases of the AI pipeline, from data ingestion to archiving. Solidigm’s QLC SSDs, with their high density and power efficiency, are recommended for the beginning and end of the pipeline, where large volumes of unstructured data are handled. For the middle stages, where performance is critical, Solidigm offers the D7-PS1010 Gen 5 TLC SSD, which boasts impressive sequential and random read performance, making it ideal for keeping GPUs maximally utilized.

The presentation highlights the flexibility of Solidigm’s product portfolio, which allows customers to optimize for various goals, whether it’s power efficiency, GPU utilization, or overall performance. The Gen 5 TLC SSD, the D7-PS1010, is positioned as the performance leader, capable of delivering 14.5 gigabytes per second sequential read speeds. Additionally, Solidigm offers other options like the 5520 and 5430 drives, catering to different performance and endurance needs. The discussion also touches on the efficiency of these drives, with Solidigm’s products outperforming competitors in various AI workloads, as demonstrated by MLCommons MLPerf Storage benchmark results.
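
To put the quoted bandwidth in perspective, a quick back-of-the-envelope calculation (the dataset size below is an arbitrary illustration, not a figure from the presentation):

```python
# Rough feel for what 14.5 GB/s of sequential read means for an AI pipeline.
# The 10 TB dataset size is a hypothetical example, not a quoted figure.
seq_read_gb_per_s = 14.5   # D7-PS1010 sequential read, as quoted
dataset_tb = 10            # hypothetical training dataset

seconds = dataset_tb * 1000 / seq_read_gb_per_s
print(f"one full pass over {dataset_tb} TB: {seconds / 60:.1f} minutes")  # ~11.5
```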

A notable case study presented is the collaboration with the Zoological Society of London to conserve urban hedgehogs. Solidigm’s high-density QLC SSDs are used in an edge data center at the zoo, enabling efficient processing and analysis of millions of images captured by motion-activated cameras. This setup allows the organization to assess hedgehog populations and make informed conservation decisions. The presentation concludes by emphasizing the importance of efficient data infrastructure in AI applications and Solidigm’s commitment to delivering high-density, power-efficient storage solutions that meet the evolving needs of AI workloads.


The Energy Crunch Is Not a Future Problem with Solidigm

Event: AI Data Infrastructure Field Day 1

Appearance: Solidigm Presents at AI Data Infrastructure Field Day 1

Company: Solidigm

Personnel: Manzur Rahman

In the presentation by Solidigm at AI Data Infrastructure Field Day 1, Manzur Rahman emphasized the critical issue of energy consumption in AI and data infrastructure. He referenced quotes from industry leaders like Sam Altman and Mark Zuckerberg, highlighting the significant challenge energy poses in scaling AI operations. Rahman discussed findings from white papers by Meta and Microsoft Azure, which revealed that a substantial portion of energy consumption in data centers is attributed to hard disk drives (HDDs). Specifically, the studies found that HDDs consumed 35% of total operational energy in Meta’s AI recommendation engine and 33% in Microsoft’s cloud services. This underscores the need for more energy-efficient storage solutions to manage the growing data demands.

Rahman then explored various use cases and the increasing need for network-attached storage (NAS) in AI applications. He noted that data is growing exponentially, with different modalities like text, audio, and video contributing to the data deluge. For instance, hyper-scale large language models (LLMs) and large video models (LVMs) require massive amounts of storage, ranging from 1.3 petabytes to 32 petabytes per GPU rack. The trend towards synthetic data and data repatriation is further driving the demand for NAS. Solidigm’s model for a 50-megawatt data center demonstrated that using QLC (Quad-Level Cell) storage instead of traditional HDDs and TLC (Triple-Level Cell) storage could significantly reduce energy consumption and increase the number of GPU racks that can be supported.

The presentation concluded with a comparison of different storage configurations, showing that QLC storage offers substantial energy savings and space efficiency. For example, a DGX H100 rack with QLC storage consumed only 6.9 kilowatts compared to 32 kilowatts for a setup with TLC and HDDs. This translates to 4x fewer storage racks, 80% less storage power, and 50% more DGX plus NAS racks in a 50-megawatt data center. Rahman also addressed concerns about heat generation and longevity, noting that while QLC may generate more heat and have fewer P/E cycles compared to TLC, the overall energy efficiency and performance benefits make it a viable solution for modern data centers. Solidigm’s high-density drives, such as the P5520 and the QLC-based P5430, were highlighted as effective in reducing rack space and power consumption, further supporting the case for transitioning to more energy-efficient storage technologies.
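
The headline percentage can be sanity-checked with simple arithmetic; this sketch only replays the per-rack figures quoted in the session and is not Solidigm’s full model:

```python
# Back-of-the-envelope check of the per-rack storage power figures above.
# Inputs are the session's quoted numbers, not Solidigm's full model.
qlc_rack_kw = 6.9        # DGX H100 rack paired with QLC storage
tlc_hdd_rack_kw = 32.0   # same rack paired with TLC + HDD storage

reduction = 1 - qlc_rack_kw / tlc_hdd_rack_kw
print(f"storage power reduction: {reduction:.0%}")  # ~78%, i.e. the quoted ~80%

# The whole-facility gain is smaller than the storage-only gain because GPU
# power dominates a 50 MW budget; it is the freed storage power that allows
# the quoted 50% increase in DGX-plus-NAS racks.
```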


Optimizing Data Center TCO: An In-Depth Analysis and Sensitivity Study with Solidigm

Event: AI Data Infrastructure Field Day 1

Appearance: Solidigm Presents at AI Data Infrastructure Field Day 1

Company: Solidigm

Personnel: Manzur Rahman

Manzur Rahman from Solidigm presented an in-depth analysis of Total Cost of Ownership (TCO) for data centers, emphasizing its growing importance in the AI era. TCO encompasses acquisition, operation, and maintenance costs, and is crucial for evaluating cost-effective, high-performance hardware like GPUs, storage, and AI chips. Rahman highlighted the need for energy-efficient solutions and the importance of right-sizing storage to avoid over- or under-provisioning. He explained that TCO includes both direct costs (materials, labor, energy) and indirect costs (overheads, cooling, carbon tax), and uses a normalization method to provide a comprehensive cost per terabyte effective per month per rack.

Rahman detailed Solidigm’s TCO model, which incorporates dynamic variables such as hardware configuration, drive replacement cycles, and workload mixes. The model also factors in the time value of money, maintenance, disposal costs, and greenhouse gas taxes. By comparing HDD and SSD racks under various scenarios, Solidigm found that SSDs can offer significant TCO benefits, especially when variables like replacement cycles, capacity utilization, and data compression are optimized. For instance, extending the SSD replacement cycle from five to seven years can improve TCO by 22%, and increasing capacity utilization can lead to a 67% improvement.
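
As a rough illustration of how such a normalized metric behaves, here is a deliberately simplified sketch in the spirit of the model described above; every coefficient is a made-up placeholder, not a number from Solidigm’s analysis:

```python
# Deliberately simplified TCO sketch: cost per effective terabyte per month.
# All coefficients are made-up placeholders, not Solidigm's numbers.
def tco_per_effective_tb_month(
    capex: float,               # acquisition cost of the storage rack ($)
    power_kw: float,            # average power draw (kW)
    replacement_years: float,   # drive replacement cycle
    raw_tb: float,              # raw capacity in the rack (TB)
    utilization: float,         # fraction of capacity actually used
    compression: float = 1.0,   # effective-capacity multiplier
    usd_per_kwh: float = 0.10,  # placeholder energy price
    overhead: float = 1.4,      # placeholder cooling/maintenance/carbon multiplier
) -> float:
    months = replacement_years * 12
    energy_usd = power_kw * 24 * 365 * replacement_years * usd_per_kwh
    total_usd = capex + energy_usd * overhead
    effective_tb = raw_tb * utilization * compression
    return total_usd / (effective_tb * months)

# Sensitivity: a longer replacement cycle amortizes CAPEX over more months,
# which is the mechanism behind the 5 -> 7 year improvement cited above.
five = tco_per_effective_tb_month(1_000_000, 10, 5, 2_000, 0.7)
seven = tco_per_effective_tb_month(1_000_000, 10, 7, 2_000, 0.7)
print(f"TCO improvement from 5 to 7 years: {1 - seven / five:.0%}")
```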

The presentation concluded with a sensitivity analysis showing that high-density QLC SSDs can significantly reduce TCO compared to HDDs. Even with higher upfront costs, the overall TCO is lower due to better performance, longer replacement cycles, and higher capacity utilization. Rahman projected that high-density QLC SSDs will continue to offer TCO improvements in the coming years, making them a promising solution for data centers, particularly in AI environments. The analysis demonstrated that while CAPEX for SSDs is higher, the overall cost per terabyte effective is lower, making SSDs a cost-effective choice for future data center deployments.


How Data Infrastructure Improves or Impedes AI Value Creation with Solidigm

Event: AI Data Infrastructure Field Day 1

Appearance: Solidigm Presents at AI Data Infrastructure Field Day 1

Company: Solidigm

Personnel: Ace Stryker

Ace Stryker from Solidigm presented on the critical role of data infrastructure in AI value creation, emphasizing the importance of quality and quantity in training data. He illustrated this with an AI-generated image of a hand with an incorrect number of fingers, highlighting the limitations of AI models that lack intrinsic understanding of the objects they depict. This example underscored the necessity for high-quality training data to improve AI model outputs. Stryker explained that AI models predict desired outputs based on training data, which often lacks comprehensive information about the objects, leading to errors. He stressed that these challenges are not unique to image generation but are prevalent across various AI applications, where data variety, low error margins, and limited training data pose significant hurdles.

Stryker outlined the AI data pipeline, breaking it down into five stages: data ingestion, data preparation, model development, inference, and archiving. He detailed the specific data and performance requirements at each stage, noting that data magnitude decreases as it moves through the pipeline, while the type of I/O operations varies. For instance, data ingestion involves large sequential writes to object storage, while model training requires random reads from high-performance storage. He also discussed the importance of checkpointing during model training to prevent data loss and ensure efficient recovery. Stryker highlighted the growing trend of distributing AI workloads across core data centers, regional data centers, and edge servers, driven by the need for faster processing, data security, and reduced data transfer costs.
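
To make the checkpointing point concrete, here is a minimal sketch of the pattern in PyTorch; the model, interval, and paths are illustrative, not anything Stryker presented:

```python
# Minimal training-checkpoint sketch: periodically persist model and
# optimizer state so a failed job can resume rather than restart.
# Checkpoint writes are large sequential bursts, which is why they show up
# as a storage-performance concern in the pipeline described above.
import torch
import torch.nn as nn

model = nn.Linear(1024, 1024)                             # illustrative model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

CHECKPOINT_EVERY = 1000                                   # illustrative interval

def maybe_checkpoint(step: int) -> None:
    if step % CHECKPOINT_EVERY == 0:
        torch.save(
            {"step": step,
             "model": model.state_dict(),
             "optimizer": optimizer.state_dict()},
            f"checkpoints/step_{step:08d}.pt",            # illustrative path
        )

def resume(path: str) -> int:
    state = torch.load(path)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    return state["step"]                                  # continue from here
```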

The presentation also addressed the challenges and opportunities of deploying AI at the edge. Stryker noted that edge environments often have lower power budgets, space constraints, and higher serviceability requirements compared to core data centers. He provided examples of edge deployments, such as medical imaging in hospitals and autonomous driving solutions, where high-density storage solutions like QLC SSDs are used to enhance data collection and processing. Stryker emphasized the need for storage vendors to adapt to these evolving requirements, ensuring that their products can meet the demands of both core and edge AI applications. The session concluded with a discussion on Solidigm’s product portfolio and how their SSDs are designed to optimize performance, energy efficiency, and cost in AI deployments.


Next Gen Data Protection and Recovery with Infinidat

Event: AI Data Infrastructure Field Day 1

Appearance: Infinidat Presents at AI Data Infrastructure Field Day 1

Company: INFINIDAT

Personnel: Bill Basinas

Infinidat’s presentation on next-generation data protection and recovery emphasizes the critical need for robust cyber-focused strategies to safeguard corporate infrastructure and critical data assets. Bill Basinas, the Senior Director of Product Marketing, highlights the importance of moving beyond traditional backup and recovery methods to a more proactive approach that prioritizes business recovery. Infinidat’s solutions are designed to protect data efficiently and ensure its availability, leveraging advanced technologies like immutable snapshots and automated cyber protection to provide a resilient and secure storage environment.

The core of Infinidat’s approach lies in its InfiniSafe technology, which offers immutable snapshots, logical air gapping, and instant recovery capabilities. These features ensure that data remains protected and can be quickly restored in the event of a cyber attack. The immutable snapshots are particularly crucial as they cannot be altered or deleted until their expiration, providing a reliable safeguard against data tampering. Additionally, InfiniSafe’s automated cyber protection (ACP) integrates seamlessly with existing security infrastructures, enabling real-time responses to potential threats and ensuring that data is continuously validated and verified.
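
As a conceptual illustration of the immutability guarantee described above (modeling the behavior only; this is not InfiniSafe’s actual interface), consider:

```python
# Conceptual model of an immutable snapshot: deletion is refused until the
# retention period expires, no matter who asks. Behavior only; this is not
# InfiniSafe's actual API.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)  # frozen: snapshot metadata itself cannot be altered
class ImmutableSnapshot:
    snapshot_id: str
    created: datetime
    retention: timedelta

    @property
    def expires(self) -> datetime:
        return self.created + self.retention

class SnapshotStore:
    def __init__(self) -> None:
        self._snaps: dict[str, ImmutableSnapshot] = {}

    def take(self, snap: ImmutableSnapshot) -> None:
        self._snaps[snap.snapshot_id] = snap

    def delete(self, snapshot_id: str) -> None:
        snap = self._snaps[snapshot_id]
        # The core guarantee: no code path deletes a snapshot early, even
        # for an administrator or an attacker holding valid credentials.
        if datetime.now(timezone.utc) < snap.expires:
            raise PermissionError(
                f"{snapshot_id} is immutable until {snap.expires.isoformat()}")
        del self._snaps[snapshot_id]
```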

Infinidat also collaborates with Index Engines to enhance its cyber detection capabilities. This partnership allows Infinidat to offer advanced content-level scanning and pattern matching to detect anomalies and potential threats with high accuracy. The integration of these technologies ensures that any compromised data is quickly identified and isolated, minimizing the impact of cyber attacks. By focusing on recovery first and ensuring that validated data is readily available, Infinidat provides a comprehensive solution that addresses the evolving challenges of data protection in today’s cyber-centric landscape.


AI Workloads and Infinidat

Event: AI Data Infrastructure Field Day 1

Appearance: Infinidat Presents at AI Data Infrastructure Field Day 1

Company: INFINIDAT

Personnel: Bill Basinas

Infinidat’s presentation at AI Data Infrastructure Field Day 1, led by Bill Basinas, focused on the company’s strategic positioning within the AI infrastructure market. Basinas emphasized that Infinidat has been closely monitoring the AI landscape for over a year to identify where their enterprise storage solutions can best serve AI workloads. He acknowledged the rapid growth and evolving nature of the AI market, particularly in generative AI (Gen AI) and its associated learning models. Infinidat aims to provide robust storage solutions that enhance the accuracy of AI-generated results, especially through their focus on Retrieval Augmented Generation (RAG). This approach is designed to mitigate issues like data inaccuracies and hallucinations in AI outputs by leveraging Infinidat’s existing data center capabilities.

Basinas highlighted that Infinidat’s strength lies in its ability to support mission-critical applications and workloads, including databases, ERP systems, and virtual infrastructures. The company is now extending this expertise to AI workloads by ensuring high performance and reliability. Infinidat’s InfiniSafe technology offers industry-leading cyber resilience, and their “white glove” customer service ensures a seamless integration of their storage solutions into existing infrastructures. The company is not currently involved in data classification or governance but focuses on providing the underlying storage infrastructure that supports AI applications. This strategic choice allows Infinidat to concentrate on their core competencies while potentially partnering with other vendors for data management and security.

Infinidat’s approach to AI infrastructure is pragmatic and customer-centric. They are working closely with clients to understand their needs and are developing workflows and reference architectures to facilitate the deployment of RAG-based infrastructures. The company is also exploring the integration of vector databases and other advanced data management technologies to further enhance their AI capabilities. While Infinidat is not yet offering object storage natively, they are actively working on it and have partnerships with companies like MinIO and Hammerspace to provide interim solutions. Overall, Infinidat aims to leverage its existing infrastructure to support AI workloads effectively, offering a cost-effective and scalable solution for enterprises venturing into AI.
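
For readers unfamiliar with the RAG pattern Infinidat is targeting, a minimal sketch follows; `embed`, `vector_db`, and `llm_complete` are hypothetical stand-ins for whichever embedding model, vector database, and LLM a deployment actually uses:

```python
# Minimal RAG sketch: ground a model's answer in retrieved enterprise data
# rather than relying on the model's parameters alone, which is how RAG
# reduces the hallucinations mentioned above. `embed`, `vector_db`, and
# `llm_complete` are hypothetical stand-ins, not any specific product's API.
def answer(question: str, vector_db, embed, llm_complete, k: int = 4) -> str:
    query_vec = embed(question)                  # embed the user question
    hits = vector_db.search(query_vec, top_k=k)  # nearest-neighbor retrieval
    context = "\n\n".join(doc.text for doc in hits)
    prompt = (
        "Answer using only the context below. If the context does not "
        "contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)                  # grounded generation
```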


Infinidat InfuzeOS for Hybrid Multi Cloud

Event: AI Data Infrastructure Field Day 1

Appearance: Infinidat Presents at AI Data Infrastructure Field Day 1

Company: INFINIDAT

Personnel: Bill Basinas

Infinidat’s InfuzeOS is a versatile operating system designed to support both on-premises and hybrid multi-cloud environments. Initially developed for on-premises solutions, InfuzeOS has been extended to work seamlessly with major cloud providers like AWS and Microsoft Azure. This extension allows customers to experience the same ease of use and robust functionality in the cloud as they do with their on-premises systems. InfuzeOS retains all its core features, including neural cache, InfiniRAID, InfiniSafe, and InfiniOps, ensuring consistent performance and management across different environments. The system supports both block and file storage, making it adaptable to various workload needs.

InfiniRAID, a key component of Infinidat’s technology, is a software-based RAID architecture that provides exceptional resilience and performance. Unlike traditional RAID systems that can be vulnerable to multiple drive failures, InfiniRAID can handle dozens of device failures without compromising system operations. This high level of resilience is achieved through a patented approach that manages RAID at the software layer, allowing for efficient use of all available devices. This capability is particularly beneficial for enterprise environments where data integrity and uptime are critical. InfiniRAID’s design also simplifies maintenance, as customers can replace failed drives without immediate technical intervention, thanks to the system’s hot-pluggable drives and proactive monitoring.
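
To illustrate the general principle behind software-layer RAID, here is a generic single-parity XOR example; InfiniRAID’s actual scheme is proprietary and, as noted above, tolerates far more simultaneous failures than this sketch does:

```python
# Generic illustration of software RAID's core idea: parity lets a missing
# device's data be reconstructed from the survivors. Single-parity XOR is
# shown for clarity; it is not InfiniRAID's actual algorithm.
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """XOR corresponding bytes across all blocks to produce a parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x01\x02", b"\x10\x20", b"\xaa\xbb"]  # blocks on three devices
p = parity(data)                                # parity block on a fourth

# Device 1 fails: XOR of the surviving blocks and parity recovers its data.
recovered = parity([data[0], data[2], p])
assert recovered == data[1]
```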

InfiniSafe, another integral feature, focuses on cyber resilience, providing robust protection against data breaches and cyber threats. While the cloud implementations of InfuzeOS do not offer the same level of hardware control and guarantees as on-premises solutions, they still deliver significant benefits. The cloud version currently operates on a single compute instance, with plans to evolve towards multi-node configurations to better support complex workloads. Despite these differences, the cloud implementation maintains the same software functionality, including compression and support for Ethernet-based protocols. This makes InfuzeOS a flexible and powerful solution for various use cases, from functional test development to backup and replication targets, and increasingly, AI workloads.