Interlock Market Opportunity and Use Cases

Event:

Appearance: Interlock Technology Tech Field Day Showcase

Company: Interlock Technology

Video Links:

Personnel: Noemi Greyzdorf

In this presentation, Noemi Greyzdorf, VP of Operations at Interlock Technology, illustrates how the company’s data migration solutions provide a unique value proposition by simplifying the process, accelerating time to completion, and ensuring compliance and data integrity. Greyzdorf highlights two key offerings: DF Classic, a fully managed data migration service where experts handle the entire process, and DATAFORGE, self-service software designed to automate and expedite data migrations for professionals. The presentation also features a case study of a successful customer cloud migration, demonstrating Interlock’s capability to efficiently migrate large volumes of data.


Interlock Architecture and DATAFORGE Demo

Event:

Appearance: Interlock Technology Tech Field Day Showcase

Company: Interlock Technology

Video Links:

Personnel: Massimo Yezzi

In this presentation, Massimo Yezzi, CTO at Interlock Technology, demonstrates the deployment of Interlock technology for optimal operational efficiency and effectiveness. He showcases the DATAFORGE platform, a comprehensive data migration tool designed to facilitate the seamless transfer of large volumes of data across various platforms and storage systems. DATAFORGE offers features such as real-time monitoring of resource usage, a performance scheduler to minimize impact during data transfers, and application-aware transformations that ensure metadata integrity.


Introduction to Interlock

Event:

Appearance: Interlock Technology Tech Field Day Showcase

Company: Interlock Technology

Video Links:

Personnel: Noemi Greyzdorf

In this presentation, Noemi Greyzdorf, VP of Operations at Interlock Technology, introduces Interlock’s data migration solutions. Designed for large-scale data movement, Interlock enables seamless migrations across various storage protocols while maintaining application compatibility. With over 1,000 complex migrations completed, Interlock ensures smooth transitions between protocols such as NFS, SMB, S3, and REST. Its DATAFORGE software facilitates flexible, high-performance migrations across on-premises, cloud, and hybrid environments, bypassing application data paths for minimal disruption.
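To make the cross-protocol idea concrete, the sketch below shows the naive baseline that purpose-built migration tools industrialize: walking a locally mounted file share in Python and copying each file into an S3 bucket with boto3. The mount point, bucket name, and metadata handling are illustrative assumptions, not DATAFORGE’s implementation, which adds parallelism, integrity verification, and far richer metadata preservation.

```python
# Minimal illustrative sketch of a share-to-S3 copy (hypothetical, not DATAFORGE).
# Assumes the source share is mounted at SRC_ROOT and AWS credentials are configured.
import os
import boto3

SRC_ROOT = "/mnt/nas_export"   # hypothetical mount point
BUCKET = "migration-target"    # hypothetical bucket name

s3 = boto3.client("s3")

for dirpath, _dirnames, filenames in os.walk(SRC_ROOT):
    for name in filenames:
        local_path = os.path.join(dirpath, name)
        # Use the path relative to the share root as the object key.
        key = os.path.relpath(local_path, SRC_ROOT).replace(os.sep, "/")
        stat = os.stat(local_path)
        # Carry a little source metadata along; real migration tools preserve
        # much more (ACLs, timestamps, extended attributes) and verify
        # integrity end to end.
        s3.upload_file(
            local_path,
            BUCKET,
            key,
            ExtraArgs={"Metadata": {"src-mtime": str(int(stat.st_mtime))}},
        )
```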


Ignite Security Field Day – Rethinking Biometrics with Mitch Ashley

Event: Security Field Day 12

Appearance: Ignite Talks at Security Field Day 12

Company: Ignite

Video Links:

Personnel: Mitch Ashley

In this Ignite talk, Mitch Ashley examines how our public information is creating a biometric digital twin of our experiences. The things we buy and the places we go are tracked and integrated with our digital identity, creating security implications that must be understood.


Ignite Security Field Day – How I Learned to Stop Worrying and Love Automation

Event: Security Field Day 12

Appearance: Ignite Talks at Security Field Day 12

Company: Ignite

Video Links:

Personnel: Alastair Cooke

In this Ignite talk, Alastair Cooke discusses the rise of automation and the role it plays in both design and security for DevOps. He also shares tips and solutions for streamlining your development environment and keeping it current as tooling and practices advance.


Ignite Security Field Day – Oh No, IO! The Death of a TLD

Event: Security Field Day 12

Appearance: Ignite Talks at Security Field Day 12

Company: Ignite

Video Links:

Personnel: Tom Hollingsworth

What happens when a ccTLD disappears from the Internet? What if it’s one of the most popular TLDs for new startups? In this Ignite talk, Tom Hollingsworth looks at the pending removal of .io and the impact it could have on the Internet. He also discusses what has happened in the past when the process has failed and how modern innovation could prevent this from happening again.


Own Your Career – Career Management for the Modern Technologist with Jack Poller

Event: Cloud Field Day 21

Appearance: Ignite Talks at Cloud Field Day 21

Company: Ignite

Video Links:

Personnel: Jack Poller

Jack Poller’s talk at Cloud Field Day 21 focuses on the importance of actively managing one’s career, particularly in the technology field. He shares his own career journey, which spans from engineering to marketing, consulting, and eventually becoming an industry analyst. Poller emphasizes that career success is not just about technical skills but also about understanding how to make a company more successful, either by increasing revenue or reducing costs. He highlights the importance of being adaptable and willing to take on new roles, as he did when he transitioned from engineering to marketing. Poller also stresses that in any role, the ultimate goal should be to contribute to the company’s success, and this requires understanding the business’s needs and how your work impacts the bottom line.

Poller also discusses the importance of influence and politics in the workplace, especially as one moves up the career ladder. He acknowledges that many technologists view office politics negatively, but he argues that it is a necessary part of getting things done, particularly in leadership roles. He uses examples from his own career, such as managing a crisis after a theft at a startup, to illustrate how sometimes difficult decisions must be made quickly, even if they are not popular. Poller also references the TV show “The Wire” as a great example of how politics, both formal and informal, play out in different organizations, from drug cartels to police departments. He encourages technologists to embrace the reality of workplace politics and learn how to build coalitions and influence others to achieve their goals.

Finally, Poller emphasizes the importance of networking and personal relationships in career advancement. He points out that while technology has made it easier to apply for jobs, it has also created barriers, such as AI-driven applicant tracking systems that may filter out qualified candidates. Therefore, building a strong professional network is crucial, as most job opportunities come through personal connections rather than resumes. Poller advises technologists to be proactive in seeking out mentors and building trust with colleagues, as trust is a key factor in career success. He concludes by encouraging the audience to take ownership of their careers, set clear goals, and continuously work toward them, rather than passively waiting for promotions or opportunities to come their way.


2010 A Service Odyssey with Jay Cuthrell

Event: Cloud Field Day 21

Appearance: Ignite Talks at Cloud Field Day 21

Company: Ignite

Video Links:

Personnel: Jay Cuthrell

In his presentation, Jay Cuthrell reflects on predictions he made in 2008 and 2009 about the future of technology, particularly in the telecommunications and service provider sectors. He humorously critiques his own foresight, acknowledging both the hits and misses in his predictions. Cuthrell draws parallels between his predictions and the famous film “2001: A Space Odyssey,” suggesting that, like the film’s futuristic vision, his own ideas were speculative at the time. He recalls attending various tech conferences and events, such as Google I/O and TechCrunch, where he gathered insights and trends that informed his predictions. These included the rise of IPTV, mobile TV, fiber-to-the-premises, and voice over IP, many of which have since become mainstream, while others, like WiMAX and certain peer-to-peer technologies, have faded into obscurity.

Cuthrell also discusses the evolution of cloud computing, content delivery networks (CDNs), and the increasing importance of multi-RF devices, which allow for multiple radio frequencies in a single device. He notes that while some of his predictions, such as the widespread adoption of fiber optics in homes, were overly optimistic, others, like the growth of cloud-based services and the dominance of content delivery networks, have largely come to fruition. He highlights the shift from physical media to streaming services, with companies like Netflix and YouTube leading the charge, and the eventual dominance of cloud storage and computing. He also touches on the development of mobile infrastructure, such as femtocells and portable Wi-Fi solutions, which have become essential in rural areas and during large events.

In the latter part of the talk, Cuthrell reflects on the broader implications of his predictions, particularly in areas like session control, virtual routers, and impulse enablement, which aimed to simplify network access and transactions. He acknowledges that while some of these ideas have materialized, others were either ahead of their time or missed the mark. He also discusses the role of companies like Oracle in acquiring legacy telecom systems and the ongoing importance of DNS traffic in understanding user behavior. Ultimately, Cuthrell’s presentation serves as a retrospective on the rapid evolution of technology over the past decade, offering a mix of nostalgia, humor, and insight into the unpredictable nature of technological progress.


Nine More Business Lessons I Learned From Baseball with Stephen Foskett

Event: Cloud Field Day 21

Appearance: Ignite Talks at Cloud Field Day 21

Company: Ignite

Video Links:

Personnel: Stephen Foskett

Stephen Foskett revisits his original Ignite talk from nine years ago, where he shared nine business lessons he learned from baseball, and now presents nine more lessons with a more seasoned perspective. Reflecting on his earlier optimism, Foskett acknowledges that his new insights are perhaps more cynical, shaped by years of experience.

He begins by emphasizing the importance of “working the refs,” a metaphor for standing up for oneself in business, even when things don’t go your way. He also touches on the reality that money can indeed buy success, but warns that it’s not a foolproof strategy, as many teams and businesses have learned the hard way. Foskett highlights how management often chases trends, trying to replicate the success of others, but this approach rarely works because the landscape is constantly changing.

Foskett also delves into the significance of attitude, noting that even the most talented teams or companies can fail if their people lack motivation or belief in what they are doing. He draws parallels between sports teams with high payrolls that underperform and businesses that think they can buy success without fostering a positive culture. He critiques the idea of the “wisdom of the crowds,” pointing out that popular opinion is often misguided, whether in sports, business, or politics. This leads to another lesson: past success does not guarantee future success. Foskett warns against relying too heavily on historical performance, as circumstances change, and what worked before may not work again.

In his final lesson, Foskett underscores the importance of resilience, stating that it takes many losses before achieving success. He encourages people not to be discouraged by failure, as it is a natural part of the journey toward winning. He concludes on a lighthearted note, reminding the audience not to take themselves too seriously. Drawing from the fun elements of baseball, like mascots and the “Take Me Out to the Ballgame” tradition, Foskett emphasizes that work should be enjoyable. He reflects on his own career, expressing gratitude for the challenges and joys of running Tech Field Day and Gestalt IT, and encourages others to find the same sense of play and fulfillment in their own professional lives.

Watch the original talk, 9 Business Lessons I Learned From Baseball!


The VMware Cloud Foundation Approach to Platform Security

Event: Cloud Field Day 21

Appearance: VMware Presents at Cloud Field Day 21

Company: VMware

Video Links:

Personnel: Bob Plankers

VMware Cloud Foundation offers a wide array of features and capabilities to help organizations be and stay secure. In the short time we have, we’ll talk about recent improvements aimed at making hard security tasks easy or non-existent (ESXi Live Patch, Image-Based Lifecycle Management, audit & remediation tools, Identity Federation and its relationship to attacker trends, etc.).

In this presentation, Bob Plankers from VMware by Broadcom discusses the VMware Cloud Foundation’s approach to platform security, emphasizing the importance of making security features easy to use and adopt. He highlights that VMware’s goal is to ensure that security is intrinsic to the system, with minimal effort required from users to enable it. The focus is on reducing friction in security processes, making it easier for organizations to comply with regulatory requirements and adopt security best practices. Plankers explains that VMware has been working on several improvements, such as ESXi Live Patch, Image-Based Lifecycle Management, and audit and remediation tools, all aimed at simplifying traditionally complex security tasks. He also touches on the importance of defense in depth, where multiple layers of security are implemented, ranging from hardware-level protections like secure boot and Trusted Platform Modules (TPMs) to software-level features like code signing and encryption.

Plankers also delves into the broader security landscape, discussing how VMware Cloud Foundation integrates security across the entire stack, from infrastructure to workloads. He emphasizes the importance of availability and resilience, noting that features like vMotion, DRS, and high availability are critical security features that ensure systems remain operational even during attacks or failures. Additionally, he discusses VMware’s efforts to support post-quantum encryption, identity federation, and continuous monitoring for security controls. The presentation concludes with a focus on reducing the friction associated with patching and updates, including the introduction of live patching for ESXi, which allows for faster and less disruptive updates. Overall, VMware’s approach is to make security a seamless and integral part of the infrastructure, allowing organizations to focus on their workloads while maintaining a strong security posture.


Run Enterprise Workloads with Kubernetes on VMware Cloud Foundation

Event: Cloud Field Day 21

Appearance: VMware Presents at Cloud Field Day 21

Company: VMware

Video Links:

Personnel: Katarina Brookfield, Vincent Riccio

VMware Cloud Foundation allows customers to run any modern workload alongside any traditional workload, all on the same platform, using a unified set of management tools. In short demos, we’ll walk through the capabilities of the main services, such as VM Service and vSphere Kubernetes Service (VKS), and demonstrate their seamless integration with the underlying network and storage infrastructure to provide load balancing and persistent volumes for our workloads. Later, we’ll discuss how VCF Automation takes the consumption experience to the next level with the introduction of Blueprints and a Self-Service Catalog. In addition, we will discuss governance and policies, lifecycle management, and ongoing cost visibility for your workloads and applications.

VMware Cloud Foundation (VCF) offers a unified platform for running both traditional and modern workloads, such as virtual machines (VMs) and Kubernetes clusters, using a consistent set of management tools. The platform integrates compute, storage, and networking resources, allowing users to deploy workloads in a seamless manner. VCF’s declarative API, called the VCF Supervisor, enables the deployment of Kubernetes clusters and VMs, providing resource isolation through vSphere namespaces. This allows administrators to set governance policies, such as access control and resource allocation, while also offering additional services like private container image registries and ingress controllers. The platform supports hybrid applications, where both containers and VMs can coexist, and provides a seamless experience for managing these workloads using the same tools. The demo showcased how easy it is to deploy Kubernetes clusters and VMs using VCF’s interface, with options for customizing configurations, such as networking overlays, storage policies, and VM classes.
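As an illustration of the declarative pattern described above, this hedged sketch uses the Kubernetes Python client to submit a Cluster API-style custom resource to a Supervisor namespace. The group/version, namespace, and spec fields are assumptions for illustration, not the documented VKS schema.

```python
# Hypothetical sketch: declaratively requesting a workload cluster in a
# vSphere namespace via a Cluster API-style custom resource. Field names
# and group/version are illustrative assumptions, not the VKS schema.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig pointed at the Supervisor

cluster = {
    "apiVersion": "cluster.x-k8s.io/v1beta1",  # assumed CRD group/version
    "kind": "Cluster",
    "metadata": {"name": "team-a-cluster", "namespace": "team-a"},
    "spec": {
        # Illustrative knobs standing in for the VM class, storage policy,
        # and node counts configured through the UI in the demo.
        "topology": {"class": "builtin-generic", "version": "v1.29.1"},
    },
}

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="cluster.x-k8s.io",
    version="v1beta1",
    namespace="team-a",
    plural="clusters",
    body=cluster,
)
```

The point of the sketch is the shape of the workflow: the desired state is described as data and submitted to the Supervisor, which reconciles it against the namespace’s governance policies.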

In addition to workload deployment, VCF also offers automation capabilities through its VCF Automation tool, which allows users to consume and deploy services across private cloud environments. The tool supports templates and self-service catalogs, enabling users to deploy hybrid applications that combine VMs and containers. The automation tool integrates with various services, such as load balancers and persistent volumes, and provides governance features like lease policies to manage resource usage. The demo highlighted how users can create YAML-based templates to automate the deployment of Kubernetes clusters, VMs, and other services, while also offering flexibility for DevOps teams to manage infrastructure as code. Overall, VCF provides a comprehensive solution for managing both traditional and modern workloads, with a focus on automation, governance, and seamless integration across the infrastructure stack.


Monitoring Cost, Capacity, and Health of VMware Cloud Foundation

Event: Cloud Field Day 21

Appearance: VMware Presents at Cloud Field Day 21

Company: VMware

Video Links:

Personnel: Kelcey Lemon

VCF Operations offers full-stack visibility into the VMware Cloud Foundation-based private clouds you manage. This includes the cloud components, infrastructure, virtual machines (VMs), containers, and applications. VCF Operations provides continuous health updates and performance optimization, as well as efficient cost and capacity management.

The demonstration will cover the following:
● Full-stack visibility and health across your VCF clouds
● Application and container monitoring
● Cost analytics, including TCO and potential savings
● Predictive capacity management with actionable rightsizing and reclamation tools

In this presentation, Kelcey Lemon from Broadcom demonstrates how VMware Cloud Foundation (VCF) Operations provides comprehensive monitoring and management tools for private clouds. The platform offers full-stack visibility, allowing users to monitor the health of their cloud infrastructure, including virtual machines (VMs), containers, and applications. VCF Operations consolidates various diagnostic tools into a single interface, enabling users to track performance, identify underutilized resources, and address capacity shortfalls through predictive analytics. The platform also helps with cost management by offering insights into total cost of ownership (TCO), potential savings, and resource reclamation, such as deleting idle VMs or old snapshots. Additionally, it provides compliance monitoring, ensuring that the infrastructure adheres to industry standards like HIPAA and CIS, while also tracking sustainability metrics like power consumption and carbon footprint.

The presentation also highlights the platform’s capacity management features, which use historical data and predictive analytics to forecast future resource demands. VCF Operations offers tools for right-sizing VMs, helping users optimize performance and reduce costs by adjusting CPU and memory allocations. The platform also includes reclamation tools that allow users to reclaim unused resources, further optimizing capacity and reducing operational expenses. The unified dashboard provides a centralized view of the entire cloud environment, enabling users to quickly identify and address issues, such as capacity shortages or performance bottlenecks. Overall, VCF Operations aims to streamline cloud management by offering a comprehensive, user-friendly interface that integrates health monitoring, cost management, and capacity planning.
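The rightsizing idea lends itself to a small worked example. The sketch below is a generic percentile-plus-headroom heuristic, offered purely as an illustration of the concept; it is not the algorithm VCF Operations actually implements.

```python
# Generic percentile-based rightsizing sketch; illustrative only, not the
# algorithm VCF Operations ships.
import math

def recommend_vcpus(samples_pct: list[float], allocated_vcpus: int,
                    percentile: float = 0.95, headroom: float = 0.2) -> int:
    """Recommend a vCPU count from historical CPU utilization samples.

    samples_pct: utilization samples as a fraction of allocated vCPUs (0..1).
    """
    ordered = sorted(samples_pct)
    idx = min(len(ordered) - 1, math.ceil(percentile * len(ordered)) - 1)
    high_water = ordered[idx]
    needed = allocated_vcpus * high_water * (1 + headroom)
    return max(1, math.ceil(needed))

# A VM with 8 vCPUs that rarely exceeds ~30% utilization is a candidate
# to shrink; one pinned near 100% would be flagged to grow.
print(recommend_vcpus([0.25, 0.30, 0.28, 0.22, 0.31], allocated_vcpus=8))  # -> 3
```

Run against a VM sized at 8 vCPUs whose utilization rarely exceeds about 30%, the heuristic recommends shrinking to 3 vCPUs, mirroring the kind of actionable recommendation the dashboard surfaces.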


Import, Migrate, and Extend VMware Cloud Foundation

Event: Cloud Field Day 21

Appearance: VMware Presents at Cloud Field Day 21

Company: VMware

Video Links:

Personnel: Eric Gray

VMware Cloud Foundation simplifies infrastructure administration by automating lifecycle management workflows and password/certificate rotation at scale. You don’t have to start from scratch to benefit from these capabilities. Discover how the latest release enhancements enable you to manage existing VMware vSphere deployments with VCF, as well as seamlessly migrate to and from VCF environments, whether on-premises or in the public cloud. In this presentation we will review:
● An introduction to VCF Import
● An overview of HCX, with a mention of other tools for migrating to VCF
● Extending to the cloud, with a focus on Google Cloud VMware Engine

In this presentation, Eric Gray from VMware by Broadcom discusses the technical aspects of importing, migrating, and extending VMware Cloud Foundation (VCF) environments. He explains how VCF simplifies infrastructure lifecycle management by automating tasks like updates, password management, and certificate rotation. One of the key points is that organizations don’t need to start from scratch to benefit from VCF. Existing VMware vSphere environments can be integrated into VCF through a process called “conversion” for management domains and “import” for workload domains. This allows administrators to bring their current infrastructure under VCF management, enabling them to take advantage of VCF’s lifecycle management features without having to rebuild their environments. Gray also highlights the flexibility of storage options, noting that while VCF typically uses vSAN, existing environments with NFS or Fibre Channel storage can still be imported and managed.

The second part of the presentation focuses on migrating workloads to VCF environments, particularly using VMware HCX, a tool that facilitates seamless migration of virtual machines (VMs) from older vSphere environments or even non-vSphere infrastructures like Hyper-V and KVM. HCX allows for live migrations with zero downtime by stretching Layer 2 networks, ensuring that workloads can retain their IP addresses during the move. Gray demonstrates how HCX can be used to migrate workloads between on-premises environments and public cloud platforms like Google Cloud VMware Engine (GCVE). He also touches on other migration options, such as NSX Layer 2 extensions and cross-vCenter migrations, but emphasizes that HCX is included with VCF and is a robust, mature solution for large-scale migrations. The session concludes with a live demo showing the migration of a web application from an on-premises environment to GCVE, illustrating the ease and efficiency of the process.


VMware Cloud Foundation Platform Overview

Event: Cloud Field Day 21

Appearance: VMware Presents at Cloud Field Day 21

Company: VMware

Video Links:

Personnel: Rick Walsworth

This session provides a summary of the VMware Cloud Foundation platform architecture, components and outcomes to build, deploy, operate and consume Private Cloud infrastructure for traditional and modern app workloads. This session will close with a brief overview of the latest VCF 5.2 release.

In this presentation, Rick Walsworth from VMware, now part of Broadcom, provides an overview of VMware Cloud Foundation (VCF) and its evolution since the Broadcom acquisition. He explains how the acquisition has streamlined operations, allowing for faster innovation and a more focused go-to-market strategy. VCF is designed to help organizations build, deploy, and manage private cloud infrastructure using cloud methodologies, while still maintaining the privacy, security, and performance of on-premises systems. Walsworth highlights the challenges customers face when modernizing their infrastructure, particularly when trying to integrate cloud methodologies with traditional three-tier architectures. He notes that many organizations initially view public cloud as a quick solution but often face cost overruns, leading to a trend of repatriating workloads back on-premises. VCF aims to provide a hybrid solution by combining the best of both worlds—on-premises control with cloud-like automation and scalability.

Walsworth also delves into the architecture of VCF, which caters to two main personas: cloud administrators and platform teams. Cloud administrators are provided with tools for capacity management, tenancy management, and fleet management, enabling them to operate infrastructure at scale. Platform teams, on the other hand, focus on delivering infrastructure as a service to developers, often using a combination of traditional VMs and containers orchestrated by Kubernetes. VCF integrates with various advanced services, such as AI workloads, disaster recovery, and security features, which can be added on top of the core platform. The platform also supports automation through infrastructure-as-code methodologies, allowing for seamless integration with DevOps pipelines. Walsworth emphasizes the importance of ongoing education and professional services to help customers fully utilize the platform’s capabilities, especially as they scale and customize their environments.


Private Cloud, Simplified – Introducing VMware Cloud Foundation

Event: Cloud Field Day 21

Appearance: VMware Presents at Cloud Field Day 21

Company: VMware

Video Links:

Personnel: Prashanth Shenoy

Since the completion of Broadcom’s acquisition of VMware, we have been all about change. For the VMware Cloud Foundation division, all of this change was necessary to transform our business to deliver faster innovation with more value to customers, and even better profitability and market opportunity for our partners. So, what’s changed now, post-acquisition, and why will this benefit your organization? In this session we will lay out VMware’s business model transition to subscription licensing which is the standard in the industry, the radical simplification across our portfolio, go-to-market and organizational structure to make it easier to do business with us, and the standardization across our ecosystem.

In this presentation, Prashanth Shenoy from VMware, now part of Broadcom, discusses the significant changes and strategic shifts that have occurred since the acquisition. The primary focus has been on simplifying VMware’s offerings, particularly through the VMware Cloud Foundation (VCF), which integrates various components like vSphere, vSAN, NSX, and Aria Automation into a unified private cloud platform. Shenoy emphasizes that the goal is to provide a consistent and seamless experience for customers, whether they deploy on-premises, at the edge, or through hyperscalers like AWS or Google Cloud. This shift is aimed at addressing customer feedback about the complexity of VMware’s previous offerings, which included thousands of SKUs and multiple independent business entities. By consolidating these into a single business division and product line, VMware aims to accelerate innovation and provide a more streamlined, integrated solution.

Additionally, Shenoy highlights the transition to a subscription-based licensing model, which aligns with industry standards and offers customers greater flexibility in deploying their cloud infrastructure. VMware has also introduced license portability, allowing customers to move their licenses between on-premises and cloud environments without additional costs. The company is also focusing on customer success by offering value-added services, professional support, and training programs to help customers adopt and optimize their private cloud deployments.


Automation Is Overblown – Cloud Field Day 21 Delegate Roundtable Discussion

Event: Cloud Field Day 21

Appearance: Delegate Roundtable at Cloud Field Day 21

Company: Tech Field Day

Video Links:

Personnel: Stephen Foskett

The roundtable discussion at Cloud Field Day 21 delved into the complexities and implications of automation, particularly in the context of cloud computing. The delegates acknowledged that while automation can streamline tedious or dangerous tasks, it also has significant social and economic impacts. Automation has historically displaced workers, and the conversation touched on the ethical considerations of who controls the automation process. The example of CNC milling machines was used to illustrate how automation can shift control from skilled workers to management, often resulting in job losses or reduced wages. The delegates emphasized the need to be mindful of the human element in automation, ensuring that workers are not left behind in the transition.

In the context of cloud computing, automation is seen as essential due to the rapid pace of change and the complexity of managing cloud environments. Automating routine tasks allows cloud professionals to focus on higher-value work, but it also introduces risks. The discussion highlighted the fragility of automated systems, which can fail catastrophically if not properly managed. The delegates shared personal experiences of how automation, when done thoughtfully, can drastically improve efficiency, such as reducing ticket queues for IT teams. However, they also noted that automation should not be implemented blindly; it requires careful consideration of what tasks should be automated and how to maintain quality control.

The conversation also touched on the broader societal implications of automation, including the need for upskilling workers to adapt to new technologies. The delegates agreed that automation is inevitable, but it should be approached with caution and a focus on human oversight. They discussed the importance of maintaining a balance between technological progress and the well-being of workers, suggesting that automation should empower people rather than replace them. Ultimately, the roundtable concluded that while automation offers significant benefits, it also comes with risks that must be carefully managed to avoid unintended consequences.


Cloud Repatriation and Rationalization – Cloud Field Day 21 Delegate Roundtable Discussion

Event: Cloud Field Day 21

Appearance: Delegate Roundtable at Cloud Field Day 21

Company: Tech Field Day

Video Links:

Personnel: Stephen Foskett

The roundtable discussion at Cloud Field Day 21 centered around the growing trend of cloud repatriation, where companies are considering moving workloads from the cloud back to on-premises environments. The delegates discussed the reasons behind this shift, with cost being a primary driver. Many organizations initially moved to the cloud during the COVID-19 pandemic due to the need for remote access and scalability. However, as businesses return to more traditional operations, they are reassessing the financial implications of cloud services, especially when unexpected operational expenses arise. The unpredictability of cloud costs, particularly in OPEX models, has led some companies to reconsider whether certain workloads are better suited for on-premises infrastructure, where they feel they have more control over expenses.

Another key point raised was the evolution of technology and operational models. While many applications were initially lifted and shifted to the cloud, not all of them are optimized for cloud environments, leading to inefficiencies and higher costs. The delegates noted that newer applications, especially those designed to be cloud-native, are likely to remain in the cloud, but legacy applications may be better suited for on-premises environments. Additionally, the conversation highlighted how cloud operational models, such as automation and elasticity, have been adopted in on-premises environments, blurring the lines between cloud and traditional data centers. This shift in operational models has made it easier for companies to repatriate workloads without losing the benefits of cloud-like flexibility.

The discussion also touched on the broader implications of repatriation, including the role of financial models and the importance of aligning technology decisions with business needs. The delegates emphasized that repatriation is not just a technical decision but also a financial one, driven by the need to balance CAPEX and OPEX. They also pointed out that the decision to repatriate should be based on the specific workload and its requirements, rather than a blanket move away from the cloud. Ultimately, the conversation suggested that cloud repatriation is a nuanced topic, with companies needing to carefully evaluate their workloads, costs, and operational models to determine the best approach for their infrastructure.
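A minimal break-even calculation, using purely hypothetical figures, illustrates the CAPEX-versus-OPEX arithmetic the delegates describe:

```python
# Hypothetical break-even arithmetic for a repatriation decision.
# All figures are made-up illustrations, not benchmark data.
cloud_monthly_opex = 18_000          # steady-state cloud bill for the workload
onprem_capex = 300_000               # hardware + install, amortized over 5 years
onprem_monthly_opex = 6_000          # power, space, support, staff share

months = 60
cloud_total = cloud_monthly_opex * months
onprem_total = onprem_capex + onprem_monthly_opex * months

breakeven_months = onprem_capex / (cloud_monthly_opex - onprem_monthly_opex)
print(f"5-year cloud: ${cloud_total:,}  on-prem: ${onprem_total:,}")
print(f"Break-even after ~{breakeven_months:.0f} months")  # ~25 months here
```

Real evaluations also weigh elasticity, data egress, hardware refresh cycles, and staffing, which is why the delegates stress analyzing each workload on its own merits rather than making a blanket move.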


Cloud Native Qumulo Architecture and Demo

Event: Cloud Field Day 21

Appearance: Qumulo Presents at Cloud Field Day 21

Company: Qumulo

Video Links:

Personnel: Dack Busch

Qumulo’s cloud-native architecture, as presented by Dack Busch, emphasizes elasticity, performance, and cost-efficiency, particularly in AWS environments. The system is designed to scale dynamically, allowing users to adjust the number of nodes in a cluster based on workload demands. This flexibility is crucial for industries like media and entertainment, where workloads can spike unpredictably. Qumulo’s architecture allows users to scale up or down without service disruption, and even change EC2 instance types in real time to optimize performance and cost. The system’s read cache is stored locally on NVMe drives, while the write cache is stored on EBS, which is more cost-effective and ensures data durability. The architecture also supports multi-AZ deployments, ensuring high availability and durability by spreading clusters across multiple availability zones.

One of the key features of Qumulo’s cloud-native solution is its integration with S3 for persistent storage. The system defaults to S3 Intelligent Tiering, but users can choose other S3 classes based on their needs. The architecture is designed to be highly efficient, with a focus on data consistency and cache coherency. Unlike some cloud systems that are eventually consistent, Qumulo ensures that data is always consistent, which is critical for customers who prioritize data integrity. The system also supports global namespace metadata, allowing users to access their data from anywhere as if it were local. This is particularly useful for scenarios where data needs to be accessed across different regions or environments, such as in disaster recovery or cloud bursting scenarios.
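For readers less familiar with S3, the storage class is chosen per object at write time, which is what makes a tiering default easy to override. A minimal boto3 sketch follows; the bucket and key are hypothetical, and this is not Qumulo’s internal write path.

```python
# Choosing an S3 storage class at write time; illustrative only.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="qumulo-demo-persistence",        # hypothetical bucket
    Key="blocks/000042",                     # hypothetical object key
    Body=b"...",                             # placeholder payload
    StorageClass="INTELLIGENT_TIERING",      # the default class described above
)
```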

Qumulo’s architecture also offers significant economic advantages. Customers only pay for the capacity they use, and there is no need to provision storage in advance. This pay-as-you-go model aligns with the principles of cloud-native design, where resources are only consumed when needed. The system also supports automated scaling through CloudWatch and Lambda functions, allowing users to add or remove nodes based on real-time performance metrics. Additionally, Qumulo’s integration with third-party tools and its ability to ingest data from existing S3 buckets make it a versatile solution for organizations looking to migrate or manage large datasets in the cloud. The demo showcased the system’s ability to scale from 3 to 20 nodes in just a few minutes, demonstrating its real-time elasticity and high-performance capabilities.
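The automated-scaling pattern mentioned above can be sketched as a Lambda handler that reads a CloudWatch metric and asks the cluster to resize. The custom metric namespace and the resize_cluster() helper are assumptions for illustration, not Qumulo’s published API.

```python
# Hypothetical Lambda autoscaler sketch. The custom metric namespace/name and
# the resize_cluster() call are illustrative assumptions, not Qumulo's API.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

def resize_cluster(node_count: int) -> None:
    """Placeholder for a call to the cluster's management API."""
    print(f"requesting {node_count} nodes")

def handler(event, context):
    now = datetime.datetime.now(datetime.timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="Custom/Qumulo",                  # assumed custom namespace
        MetricName="ClusterThroughputUtilization",  # assumed metric name
        StartTime=now - datetime.timedelta(minutes=15),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    points = sorted(stats["Datapoints"], key=lambda p: p["Timestamp"])
    if not points:
        return
    utilization = points[-1]["Average"]
    # Simple hysteresis: grow when hot, shrink when cold.
    if utilization > 0.8:
        resize_cluster(node_count=20)
    elif utilization < 0.3:
        resize_cluster(node_count=3)
```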


What Qumulo is Hearing from Customers

Event: Cloud Field Day 21

Appearance: Qumulo Presents at Cloud Field Day 21

Company: Qumulo

Video Links:

Personnel: Brandon Whitelaw

In this presentation, Brandon Whitelaw, VP of Cloud at Qumulo, discusses the evolving landscape of data management and the challenges customers face in adopting hybrid and multi-cloud strategies. He highlights that the traditional approach of consolidating disparate file systems into a single scale-out system is no longer sufficient, as most companies now operate across multiple clouds and geographic locations. Whitelaw points out that 94% of companies have adopted multi-cloud strategies, often using different clouds for different workloads, which adds complexity. He emphasizes that file data, once considered secondary, has now become critical, especially with the rise of AI and other next-gen applications. However, many file systems struggle to operate efficiently in the cloud, often offering only a fraction of their on-prem performance at a much higher cost.

Whitelaw explains that one of the key challenges is the inefficiency of moving data between on-prem and cloud environments, particularly when using traditional file systems that are not optimized for cloud performance. He notes that many companies end up creating multiple copies of their data across different systems, which increases costs and complexity. Qumulo aims to address this by providing a unified data fabric that allows seamless access to data across on-prem, cloud, and edge environments. This approach reduces the need for data replication and ensures that data is accessible and performant, regardless of where it resides. Qumulo’s solution also includes real-time file system analytics, which helps optimize data access and performance by preemptively caching frequently accessed data.

The presentation also delves into the technical aspects of Qumulo’s cloud-native file system, which is designed to leverage the scalability and cost-effectiveness of object storage like AWS S3 or Azure Blob, while overcoming the performance limitations typically associated with these storage types. By using advanced data layout techniques and caching mechanisms, Qumulo ensures that data stored in object storage can be accessed with the performance of a traditional file system. This approach allows customers to benefit from the elasticity and cost savings of cloud storage without having to rewrite their applications. Whitelaw concludes by emphasizing the importance of providing a consistent, high-performance data experience across all environments, enabling customers to focus on their workloads rather than managing complex data pipelines.
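The cache-over-object-storage idea can be pictured as a read-through cache: hot data is served locally while cold reads fall through to the object store. This generic sketch illustrates the concept only; Qumulo’s actual data layout and caching are far more sophisticated.

```python
# Generic read-through cache in front of an object store; illustrative of the
# concept only, not Qumulo's cache or data layout.
from collections import OrderedDict
import boto3

class ReadThroughCache:
    def __init__(self, bucket: str, capacity: int = 1024):
        self.bucket = bucket
        self.capacity = capacity
        self._cache: OrderedDict[str, bytes] = OrderedDict()
        self._s3 = boto3.client("s3")

    def get(self, key: str) -> bytes:
        if key in self._cache:
            self._cache.move_to_end(key)       # mark as recently used
            return self._cache[key]            # fast path: local hit
        # Slow path: fetch from object storage, then cache for next time.
        data = self._s3.get_object(Bucket=self.bucket, Key=key)["Body"].read()
        self._cache[key] = data
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)    # evict least recently used
        return data
```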


Douglas Gourlay Introduces the Qumulo Cloud Data Platform

Event: Cloud Field Day 21

Appearance: Qumulo Presents at Cloud Field Day 21

Company: Qumulo

Video Links:

Personnel: Douglas Gourlay

Douglas Gourlay, CEO of Qumulo, introduced the Qumulo Cloud Data Platform by discussing the unprecedented growth in data and the challenges it presents for storage and processing. He highlighted how data growth is outpacing the ability of traditional storage solutions, such as SSDs and HDDs, to keep up, especially in environments where power and space are limited. This has led organizations to explore alternatives like cloud storage or building new data centers. Gourlay emphasized that the data being generated today is not just for storage but is increasingly valuable, feeding into critical applications like medical imaging, AI processing, and research. He shared examples of customers dealing with massive amounts of data, such as research institutions generating hundreds of terabytes weekly, and the need to move this data efficiently to processing centers.

Gourlay also addressed the ongoing debate between cloud and on-premises storage, noting that the industry is moving towards a hybrid model where both options are viable depending on the specific needs of the business. He criticized the myopic views of some industry players who advocate for cloud-only or on-prem-only solutions, arguing that businesses need the freedom to choose the best option for their workloads. Qumulo’s strategy is to eliminate technological barriers, allowing customers to make decisions based on business needs rather than being constrained by the limitations of the technology. By normalizing the cost of cloud storage and making it comparable to on-prem solutions, Qumulo aims to provide flexibility and enable businesses to store and process data wherever it makes the most sense.

The Qumulo Cloud Data Platform is designed to run anywhere, whether on x86, AMD, or ARM architectures, and across multiple cloud providers like Amazon and Azure. The platform’s global namespace feature ensures that data is available everywhere it is needed, with strict consistency to prevent data loss. Gourlay explained how Qumulo’s system optimizes data transfer across wide-area networks, significantly reducing the time it takes to move large datasets between locations. The platform also integrates with AI systems, enabling customers to leverage their data in advanced AI models while protecting their data from being absorbed into the AI’s training process. Looking ahead, Qumulo aims to build a global data fabric that supports both unstructured and structured data, with features like global deduplication and automated data management to ensure data is stored in the most efficient and cost-effective way possible.