Violin Memory System and Fabric Architecture

Event: Storage Field Day 8

Appearance: Violin Memory Presents at Storage Field Day 8

Company: Violin Memory

Video Links:

Personnel: James Bowen


Violin Memory Flash Array Technical Details

Event: Storage Field Day 8

Appearance: Violin Memory Presents at Storage Field Day 8

Company: Violin Memory

Video Links:

Personnel: Vikas Ratna


Violin Memory Product and Positioning Overview

Event: Storage Field Day 8

Appearance: Violin Memory Presents at Storage Field Day 8

Company: Violin Memory

Video Links:

Personnel: Steve Dalton


NexGen Storage: Flash Technology Landscape

Event: Storage Field Day 8

Appearance: NexGen Storage Presents at Storage Field Day 8

Company: NexGen

Video Links:

Personnel: Kelly Long


NexGen Storage: Flash Market Landscape

Event: Storage Field Day 8

Appearance: NexGen Storage Presents at Storage Field Day 8

Company: NexGen

Video Links:

Personnel: Mike Koponen


Infinidat Solution Deep Dive

Event: Storage Field Day 8

Appearance: INFINIDAT Presents at Storage Field Day 8

Company: INFINIDAT

Video Links:

Personnel: Brian Carmody


Infinidat Introduction and Background

Event: Storage Field Day 8

Appearance: INFINIDAT Presents at Storage Field Day 8

Company: INFINIDAT

Video Links:

Personnel: Randy Arseneau


Primary Data DataSphere Demo

Event: Storage Field Day 8

Appearance: Primary Data Presents at Storage Field Day 8

Company: Primary Data

Video Links:

Personnel: Kaycee Lai


Managing Data by Objective with Primary Data

Event: Storage Field Day 8

Appearance: Primary Data Presents at Storage Field Day 8

Company: Primary Data

Video Links:

Personnel: David Flynn


Primary Data Introduction at Storage Field Day 8

Event: Storage Field Day 8

Appearance: Primary Data Presents at Storage Field Day 8

Company: Primary Data

Video Links:

Personnel: Lance Smith


Qumulo Summary and Demo

Event: Storage Field Day 8

Appearance: Qumulo Presents at Storage Field Day 8

Company: Qumulo

Video Links:

Personnel: Peter Godman

Qumulo’s presentation at Storage Field Day 8, led by Peter Godman, highlighted the company’s scalable storage solutions and analytics capabilities. Godman began by demonstrating the ease of building a cluster, showcasing how users can rapidly establish a scalable storage system using Qumulo’s interface. He illustrated this setup process along with the deployment of sample data, emphasizing the user-friendly nature of Qumulo’s configuration and management features, all conducted through a straightforward HTML5-based web interface.

A significant portion of the presentation focused on Qumulo’s advanced analytics features, which are designed to tackle common storage administration challenges. Godman addressed the problems that arise under heavy I/O load, particularly the difficulty of identifying the root causes of spikes in IOPS (Input/Output Operations Per Second) without adequate visibility. He explained how Qumulo’s analytics provide users with real-time insights into IOPS hotspots and capacity usage across large-scale file systems, using visually intuitive tools to track where data is being accessed and what drives operational load. This level of detailed visibility is vital for administrators managing vast amounts of data, as it simplifies diagnosing performance issues and optimizing system resources.

In closing, Godman shared examples of Qumulo’s scalability, including a deployed system holding over 9.2 billion files, and discussed how the architecture suits both large and small file environments. He acknowledged the challenges of scaling storage systems effectively, especially the handling of numerous small files, with which traditional systems often struggle. By illustrating the architectural strengths and the ongoing innovation at Qumulo, the presentation positioned the company’s storage solutions as robust options for enterprises with demanding data requirements.


Qumulo Core Metadata Analysis

Event: Storage Field Day 8

Appearance: Qumulo Presents at Storage Field Day 8

Company: Qumulo

Video Links:

Personnel: Aaron Passey

In his presentation on Qumulo Core Metadata Analysis at Storage Field Day 8, Aaron Passey focused on the importance of analytics, particularly metadata analytics, in managing large datasets. He emphasized the need for rapid answers to data-related queries in environments with billions of files, highlighting the limitations of traditional methods such as running disk usage commands or using find commands. As data scales up, these conventional approaches become less effective, leading to significant latencies and outdated information when conducting routine operations like data scans for backups.

To tackle the challenges presented by large-scale data, Passey described Qumulo’s innovative approach, which involves integrating metadata analytics directly into the file system itself. By utilizing aggregates—functions that summarize key attributes of files and directories—Qumulo enhances the efficiency of metadata queries. This method allows for instantaneous access to aggregated data such as total blocks used or the last changed times, drastically reducing the need for lengthy tree scans that plague traditional systems. The ability to query the metadata without exhaustive searches not only saves time but also minimizes I/O overhead on storage systems.
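
The aggregate idea Passey describes can be illustrated with a minimal sketch (this is not Qumulo's actual implementation; the class and field names here are hypothetical): each directory caches summary attributes for its entire subtree, and a change to a file bubbles a small delta up to the root, so a query like "total blocks used" reads one node instead of walking billions of files.

```python
# Hypothetical sketch of directory-level aggregates (not Qumulo's code):
# each directory caches totals for its subtree, so metadata queries read
# one node instead of scanning the whole tree.

class Dir:
    def __init__(self, parent=None):
        self.parent = parent
        self.total_blocks = 0   # aggregate: blocks used in this subtree
        self.max_mtime = 0      # aggregate: newest change time in subtree

    def apply_delta(self, block_delta, mtime):
        # Bubble the change up to the root so every ancestor's
        # aggregates stay current without rescanning children.
        node = self
        while node is not None:
            node.total_blocks += block_delta
            node.max_mtime = max(node.max_mtime, mtime)
            node = node.parent

root = Dir()
home = Dir(parent=root)
home.apply_delta(block_delta=100, mtime=1700000000)  # 100-block file write
home.apply_delta(block_delta=50, mtime=1700000100)

# A "du" over the whole tree is now a single read at the root:
print(root.total_blocks)  # 150
print(root.max_mtime)     # 1700000100
```

The same cached `max_mtime` aggregate is what makes the incremental-backup case efficient: a backup job can skip any subtree whose newest change time predates the last run.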

Additionally, Passey addressed how aggregates can enhance the search for files modified within specific timeframes, thereby enabling much more efficient incremental backups. He pointed out that unlike conventional methods that can leave data outdated and irrelevant, Qumulo’s system ensures that metadata remains relatively fresh, generally updated within a minute or less. By avoiding the bottlenecks associated with extensive scans and establishing a system that allows for quick access to detailed file metrics, Qumulo positions itself as a leader in storage solutions, particularly suitable for enterprises managing vast amounts of unstructured data.


Qumulo Core Block System

Event: Storage Field Day 8

Appearance: Qumulo Presents at Storage Field Day 8

Company: Qumulo

Video Links:

Personnel: Aaron Passey

The presentation by Qumulo at Storage Field Day 8, led by CTO Aaron Passey, focused on the Qumulo Core Block System, emphasizing its capacity for data protection and enhanced reliability. Passey detailed how the probability of a storage system losing data correlates directly with factors such as restripe time, component reliability, and cluster size. By optimizing these elements, Qumulo aims to achieve high reliability while lowering protection levels, enabling organizations to minimize storage costs while still safeguarding against data loss. This is achieved through rapid restriping and effective use of storage components.

At the heart of Qumulo’s design is a shared nothing clustered architecture that operates efficiently even during drive failures. Passey illustrated how data is organized into 5 GB chunks, which can be mirrored or erasure-coded for protection. This approach allows for a widespread rebuild process across the cluster, utilizing the performance of multiple drives to expedite data recovery effectively. As a result, Qumulo claims rebuild times significantly lower than those of traditional systems—often completing in minutes rather than days, which is a critical factor in maintaining operational efficiency and data availability.
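
A back-of-envelope calculation shows why spreading the rebuild across the cluster shortens restripe time so dramatically (the drive sizes and throughput figures below are illustrative assumptions, not Qumulo's published specs):

```python
# Illustrative arithmetic (numbers are assumptions, not Qumulo's specs):
# rebuilding a failed drive onto one hot spare is limited by a single
# drive's throughput, while a distributed rebuild spreads the work
# across every surviving drive in the cluster.

DRIVE_TB = 8      # capacity of the failed drive
MB_PER_S = 150    # sustained sequential throughput of one HDD
SURVIVORS = 39    # drives sharing the distributed rebuild

drive_mb = DRIVE_TB * 1_000_000

single_spare_hours = drive_mb / MB_PER_S / 3600
distributed_hours = drive_mb / (MB_PER_S * SURVIVORS) / 3600

print(f"rebuild to one spare: {single_spare_hours:.1f} h")
print(f"distributed rebuild:  {distributed_hours:.2f} h")
```

Under these assumptions the single-spare rebuild takes roughly 15 hours while the distributed rebuild finishes in well under an hour, which matches the minutes-versus-days contrast Passey draws.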

In addition to the technical aspects, Passey addressed the architecture’s advantages in maintaining optimal I/O performance. By leveraging a hybrid model that combines SSDs with traditional spinning disks, Qumulo optimizes read and write operations for both speed and resource efficiency. The system’s ability to handle large block sizes ensures greater utilization of storage resources while minimizing fragmentation. Moreover, Qumulo is actively working towards features such as data-at-rest encryption and enhanced management capabilities, affirming its commitment to evolving its offerings in response to customer needs. Overall, the presentation highlighted Qumulo’s innovative approach to storage solutions that prioritize performance and reliability.


Qumulo Storage Physics

Event: Storage Field Day 8

Appearance: Qumulo Presents at Storage Field Day 8

Company: Qumulo

Video Links:

Personnel: Peter Godman

The presentation by Qumulo, titled “Qumulo Storage Physics” and led by Peter Godman, delves into the evolving landscape of storage technologies, particularly focusing on the issues associated with managing unstructured data. Godman emphasizes the significant changes that different storage media have undergone over the years, with attention given to the limitations of hard disk drives (HDDs) versus solid-state drives (SSDs). By tracing the historical development of these technologies, he illustrates how throughput, IOPS (Input/Output Operations Per Second), and storage capacities have evolved, highlighting the challenges faced by businesses as data volumes continue to multiply.

Throughout his talk, Godman addresses the fundamental physics of how storage devices operate, illustrating that while throughput may improve, the IOPS performance of HDDs does not scale proportionately with increased storage capacity. He shares specific statistics over time, showing how the performance of HDDs has stagnated compared to the rapid advancement of SSD technology. While SSDs provide faster performance, they also present cost challenges, as their cost per unit of capacity remains significantly higher than that of HDDs. This disparity creates a pressing need for innovation in storage solutions to bridge the gap between speed and affordability as organizations increasingly demand faster data access amidst growing datasets.
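
Godman's point about HDD performance failing to track capacity can be made concrete with a quick calculation (the per-drive IOPS figure is an illustrative assumption for a 7,200 RPM disk, not a number from the talk): random IOPS are seek-time bound and roughly constant per spindle, so IOPS per terabyte collapses as drives grow.

```python
# Illustrative calculation: a 7,200 RPM HDD delivers on the order of
# ~150 random IOPS regardless of capacity (seek-time bound), so the
# IOPS available per terabyte falls as drive capacities grow.

HDD_IOPS = 150  # roughly constant per spindle, capacity-independent

for capacity_tb in (1, 4, 8, 16):
    print(f"{capacity_tb:2d} TB drive: {HDD_IOPS / capacity_tb:6.1f} IOPS/TB")
```

A 16 TB drive offers less than a tenth of the IOPS-per-TB of a 1 TB drive, which is the scaling gap Godman argues SSDs and smarter file-system design must close.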

To combat these challenges, Godman outlines the factors that contribute to reduced performance in storage systems, such as fragmentation, the prevalence of small files, random I/O patterns, and inefficient data protection mechanisms. He emphasizes the importance of understanding and addressing these factors to enhance the performance of storage systems. The presentation concludes with a preview of Qumulo’s upcoming innovations aimed at addressing these inefficiencies in network-attached storage and optimizing system performance. Overall, it advocates for a balanced approach to managing storage that both maximizes speed and affordability, critical for the evolving needs of data-intensive environments.


Qumulo Overview with Peter Godman

Event: Storage Field Day 8

Appearance: Qumulo Presents at Storage Field Day 8

Company: Qumulo

Video Links:

Personnel: Peter Godman

In this presentation by Qumulo at Storage Field Day 8, CEO Peter Godman discussed their innovative approach to data storage, emphasizing their company’s evolution and vision. Founded in 2012, Qumulo aims to address the growing complexities associated with managing large-scale data environments. Godman highlighted that the traditional means of handling data have become increasingly cumbersome, with businesses losing sight of what they are storing and struggling to manage the sheer volume of information. By engaging with approximately 600 end users, Qumulo learned that while storage administration is becoming more manageable, understanding and maintaining large data sets is becoming more difficult, presenting an opportunity for growth and innovation in the storage sector.

Godman shared insights into Qumulo’s business model and the technology that lies at the heart of its data solutions. Central to their approach is a data-aware primary storage system that integrates real-time analytics, allowing organizations to grasp the intricacies of their data in a transparent manner. This has been designed so that users can acquire scalable and reliable storage without being locked into proprietary hardware, making their solutions adaptable across various environments. Moreover, Qumulo operates on a SaaS model, pushing for continuous development by releasing updates every two weeks, which not only enhances functionality but also aligns with the modern expectations of enterprise software.

The session concluded with a clear message on Qumulo’s commitment to providing user-friendly, high-performance storage solutions tailored for sectors such as media and entertainment and oil and gas, where large quantities of unstructured data are prevalent. Godman reasserted the importance of making data manageability less of a chore for users by focusing on invisible storage that simultaneously elevates data visibility. This innovative philosophy seeks to address the prevalent challenge of understanding data growth and utilization, thus enabling users to derive greater value from their information assets while simplifying the operational aspects of storage management.


Cohesity SFD8 Closing Remarks with Mohit Aron

Event: Storage Field Day 8

Appearance: Cohesity Presents at Storage Field Day 8

Company: Cohesity

Video Links:

Personnel: Mohit Aron

In the closing remarks of Cohesity’s presentation at Storage Field Day 8, Mohit Aron discussed the benefits of Cohesity’s platform, emphasizing its capability to simplify complex legacy storage architectures. By consolidating multiple secondary storage workflows into a unified web-scale storage platform, Cohesity aims to integrate key functions like data protection, DevOps, and analytics. Aron justified the company’s focus on secondary storage over primary storage, explaining that the more significant challenges and opportunities for innovation lie in that space.


A More Efficient Approach to Analytics with Cohesity

Event: Storage Field Day 8

Appearance: Cohesity Presents at Storage Field Day 8

Company: Cohesity

Video Links:

Personnel: Abhijit Chakankar

Abhijit Chakankar from Cohesity discusses how their platform addresses the challenges enterprises face with analytics, particularly concerning secondary data. He outlines the inefficiencies associated with copying data to separate analytics stacks and how Cohesity’s architecture offers a streamlined, integrated solution for in-place analytics.

Chakankar delves into the architecture of Cohesity’s solution, highlighting how production systems can write directly onto Cohesity, where snapshots are taken and potentially cloned for test and development environments. This setup consolidates data into a single, uniform view that simplifies and enhances analytic operations. With built-in analytics features, quality of service (QoS) capabilities, and ample compute resources, Cohesity supports powerful in-place analytics without the need for data redundancies or additional infrastructure.

He further explains key components of Cohesity’s analytics capabilities, including built-in analytics for utilization and capacity, a real-time indexing engine, and the Analytics Workbench (AWB), which uses a MapReduce-based framework. AWB allows for deep, customizable analysis by accepting user-defined code, enabling extensive use cases such as e-discovery, threat analysis, and data anonymization. Chakankar provides a detailed demonstration of creating an AWB app, showcasing its ability to scan for specific patterns, such as social security numbers, and underscores the platform’s flexibility and efficiency in managing and analyzing data directly within the Cohesity ecosystem.
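
The shape of the AWB demo Chakankar gives can be sketched with a toy map/reduce pass (the real Analytics Workbench API and its user-code interface are not shown here; the function names, file paths, and data below are hypothetical): a map step scans each file's content for SSN-like patterns and a reduce step aggregates hits per file.

```python
import re

# Hypothetical sketch of a MapReduce-style scan in the spirit of the
# AWB demo (not the real Analytics Workbench API): a map function runs
# over each file's content and emits matches, which a reduce step then
# aggregates per file.

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def map_fn(path, content):
    # Emit a (path, match) pair for every SSN-like string found.
    return [(path, m) for m in SSN_PATTERN.findall(content)]

def reduce_fn(pairs):
    # Count hits per file.
    counts = {}
    for path, _ in pairs:
        counts[path] = counts.get(path, 0) + 1
    return counts

files = {
    "/data/hr/payroll.txt": "employee 123-45-6789 on file",
    "/data/logs/app.log": "no sensitive data here",
}

pairs = [p for path, text in files.items() for p in map_fn(path, text)]
print(reduce_fn(pairs))  # {'/data/hr/payroll.txt': 1}
```

Running the scan in place, against data Cohesity already holds, is what removes the separate analytics stack and copy step the talk criticizes.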


Cohesity Converged Data Protection

Event: Storage Field Day 8

Appearance: Cohesity Presents at Storage Field Day 8

Company: Cohesity

Video Links:

Personnel: Mark Thomas

The Cohesity Converged Data Protection presentation at Storage Field Day 8, delivered by Mark Thomas, focuses on the company’s advanced data protection architecture geared towards simplifying enterprise IT infrastructures. Thomas aims to explain the underlying technology of Cohesity’s solutions, particularly highlighting SnapTree—a data structure that supports efficient data management and protection workflows. The primary purpose of Cohesity’s architecture is to consolidate various operational data workloads onto a unified platform, which significantly streamlines data backup and recovery processes.

The conventional enterprise backup infrastructure, according to Thomas, is often burdened by a complex architecture that includes master servers, media servers, and multiple backup targets, which grow increasingly complicated as the enterprise scales its IT landscape. This complexity results in various inefficiencies such as multiple data silos, ineffective deduplication, and increased operational overhead. Cohesity’s solution replaces these intricate setups by offering a scalable platform that directly integrates with existing virtualized environments and applications, eliminating the need for separate media and master servers. Cohesity also features a distributed file system with SnapTree technology that allows for rapid and frequent data cloning without performance degradation.
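
The instant-cloning behavior attributed to SnapTree can be illustrated with a copy-on-write sketch (this is not Cohesity's actual data structure, whose internals the talk does not detail; the class and block layout here are assumptions): a clone starts as an empty overlay that shares every block with its parent, and only the blocks it writes diverge.

```python
# Hedged sketch of copy-on-write cloning in the spirit of SnapTree
# (not Cohesity's actual structure): a clone is a new view that shares
# the parent's blocks until it writes, so creating it copies no data.

class Clone:
    def __init__(self, parent=None):
        self.parent = parent
        self.writes = {}        # only locally modified blocks live here

    def read(self, block):
        # Walk up the clone chain until some ancestor has the block.
        node = self
        while node is not None:
            if block in node.writes:
                return node.writes[block]
            node = node.parent
        return b"\0"            # block never written anywhere

    def write(self, block, data):
        self.writes[block] = data   # copy-on-write: parent untouched

base = Clone()
base.write(0, b"backup-data")

dev = Clone(parent=base)        # "instant" writable clone
dev.write(0, b"dev-edit")

print(base.read(0))  # b'backup-data'  (parent unaffected by the clone)
print(dev.read(0))   # b'dev-edit'
```

Because the clone is just a new root over shared blocks, backups can be mounted as writable dev/test copies or recovered VMs without a restore pass.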

The discussion also touched on how Cohesity supports additional workflows such as DevOps and instant VM recovery. Through its ability to create writable and modifiable clones instantly, use of integrated backup software, and a user-friendly interface, Cohesity provides robust solutions for both data protection and development environments. By indexing all backed-up data and enabling API integrations, Cohesity ensures ease of data management and swift recovery operations, making it a versatile tool for contemporary enterprise IT landscapes. The simplified architecture not only reduces the number of moving parts but also consolidates data management tasks onto a single platform, thus fostering a more efficient and less error-prone system.


Cohesity User Interface Demonstration

Event: Storage Field Day 8

Appearance: Cohesity Presents at Storage Field Day 8

Company: Cohesity

Video Links:

Personnel: Nick Howell

At Storage Field Day 8, Nick Howell from Cohesity provides a comprehensive demonstration of the Cohesity User Interface designed for modern data management platforms. The UI features a fully responsive design, displaying well on both mobile and desktop devices, and employs a tiled interface for usability. As users access the interface, they are greeted with essential metrics and alerts related to system health, job statuses, and storage utilization, aiming to deliver key insights immediately. The demo showcases not only the UI’s capability to provide a holistic overview of system performance but also its ability to handle infrastructure concerns effortlessly.

Nick further elaborates on the infrastructure management through the cohesive design that facilitates the addition and monitoring of cluster nodes. By double-clicking on specific nodes, users can view detailed information, including software versions and node activities. The process of adding a new node to an existing cluster is demonstrated, emphasizing the simplicity and automation inherent in the UI. This includes automatic discovery and software version matching, significantly reducing the time and effort required to expand or update the cluster. The ease of managing up to 32-node clusters is highlighted, underscoring the design’s scalability.

The session also delves into the handling of storage partitions and view boxes, which introduce a hierarchical structure aiding in the physical and logical segregation of data. Partitions represent a physical separation, suitable for departmental needs, while view boxes facilitate logical data separation and target deduplication policies. The demo continues with a focus on performance monitoring tools within the UI, offering real-time graphs that allow users to zoom in for detailed analysis. This robust integration proves beneficial for various administrative tasks, such as defining storage tiers and policies that accommodate diverse drive types and performance needs.


Cohesity Data Platform Deep Dive

Event: Storage Field Day 8

Appearance: Cohesity Presents at Storage Field Day 8

Company: Cohesity

Video Links:

Personnel: Johnny Chen

In this presentation at Storage Field Day 8, Johnny Chen provides a detailed overview of the Cohesity Data Platform, focusing on its architecture and capabilities aimed at addressing issues related to secondary storage, which include fragmentation, silos, and challenges in copy data management. The talk is segmented into several parts, beginning with discussions on the scale-out distributed architecture of the Cohesity file system, which is designed to handle mixed workloads effectively and includes an adaptive self-healer for system maintenance and operations. The system’s hardware comprises a 2U chassis with four nodes, each equipped with dual CPUs, memory, SSDs, and hard drives, allowing substantial flexibility and scalability for various enterprise needs, including data protection, DevOps workflows, and analytics.

Chen delves into the specifics of the Cohesity OASIS architecture, highlighting elements such as the distributed lock manager, a strongly consistent NoSQL store, and the intelligent coordination required to ensure seamless integration and operation of multiple nodes within the cluster. Particularly noteworthy is their method of metadata management, including creating, managing, and ensuring the transactional integrity of file operations through a distributed NoSQL store and a two-phase commit process. The platform also employs innovative approaches to manage and optimize data storage through methods like global deduplication and adaptive tiering, which dynamically moves data between SSDs and HDDs based on access patterns, ensuring efficient utilization of storage resources and maintaining performance.
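
The two-phase commit Chen mentions for transactional metadata updates follows a standard pattern, sketched minimally below (illustrative only; Cohesity's actual protocol over its distributed NoSQL store is more involved, and the class names here are hypothetical): a coordinator asks every participating node to prepare, and commits only if all vote yes.

```python
# Minimal two-phase commit sketch (illustrative; not Cohesity's actual
# protocol): phase 1 collects prepare votes from all participants,
# phase 2 commits only if every vote was yes, otherwise aborts.

class Participant:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.state = "idle"

    def prepare(self, op):
        # Phase 1: durably stage the operation and vote yes/no.
        self.state = "prepared" if self.healthy else "aborted"
        return self.healthy

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"

def two_phase_commit(participants, op):
    if all(p.prepare(op) for p in participants):   # phase 1: voting
        for p in participants:                     # phase 2: commit
            p.commit()
        return True
    for p in participants:                         # phase 2: abort
        p.abort()
    return False

nodes = [Participant("n1"), Participant("n2"), Participant("n3")]
print(two_phase_commit(nodes, "create /file"))  # True
```

The all-or-nothing outcome is what keeps a file operation's metadata consistent even though it touches several nodes in the cluster.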

Additionally, Cohesity’s approach to mixed workloads and performance isolation is geared toward maintaining high efficiency and preventing heavy operations, such as large backup jobs, from affecting the performance of other tasks within the system. This is achieved through a user-defined quality of service (QoS) management system, which allocates resources proportionally based on predefined priorities. The self-healing capabilities of the system, running continuously at a low-priority backdrop, ensure that the system remains optimized and fault-tolerant, capable of handling tasks like garbage collection, disk rebalancing, and data replication seamlessly without disrupting primary operations. This continuous background process underscores the platform’s resilience and ability to operate efficiently even under diverse and demanding workload conditions.