Cyber Resiliency is More Than Just Data Protection

Cyber resiliency is a term that encompasses much more than simply protecting data. This episode features Tom Hollingsworth, joined by Krista Macomber and Max Mortillaro, discussing the additional capabilities of a cyber resiliency solution and the need to understand how data must be safeguarded from destruction or exploitation.


At AI Field Day, Qlik Shows AI-Based Analysis Added to Its Platform

At AI Field Day, Qlik unveiled a wizard-based AI feature that simplifies the process of leveraging on-premises data for insightful analytics, integrating smoothly with Qlik’s cloud services. This enhancement to their analytics platform aims to democratize AI’s benefits, making advanced data analysis accessible to a broader range of users with varying expertise. Qlik’s initiative reflects a commitment to user-friendly, AI-powered analytics, facilitating deeper insights while streamlining the experience for its customers. Read more in this analyst note for The Futurum Group by Alastair Cooke.


Deciding When to Use Intel Xeon CPUs for AI Inference

At AI Field Day, Intel offered insights into strategic decision-making for AI inference, highlighting scenarios where Intel Xeon CPUs outshine traditional GPU solutions on both on-premises and cloud servers. By evaluating the specific requirements of AI inference workloads, Intel guides users to make informed choices that enhance value while optimizing their existing server infrastructure. This approach emphasizes efficiency and practicality in deploying AI capabilities, ensuring that organizations can navigate the complex landscape of hardware selection for their AI initiatives. Read more in this Futurum Research Analyst Note by Alastair Cooke.


Hammerspace Shows Storage Acceleration for AI Training

At AI Field Day, Hammerspace showcased its innovative storage acceleration solution, demonstrating how Hyperscale NAS can be leveraged to enhance the performance of current scale-out NAS systems, particularly in training large language models (LLMs) efficiently. This storage boost not only improves speed but also optimizes resource allocation during the intensive LLM training process. Hammerspace’s advancement offers organizations the opportunity to amplify their AI training capabilities without the need to overhaul their existing storage infrastructure. Read more in this Futurum Research Analyst Note by Alastair Cooke.


VMware Private AI at AI Field Day

VMware’s presentation with Intel at AI Field Day centered on optimizing on-premises AI workloads, highlighting the capability of Intel Sapphire Rapids CPUs with Advanced Matrix Extensions (AMX) to efficiently perform large language model (LLM) AI inference, traditionally a task for GPUs. Demonstrating that AI can be resource-effective on CPUs, the discussion covered the technical prerequisites for harnessing AMX in vSphere environments and the ongoing integration of these accelerators into popular AI frameworks. With CPUs increasingly capable of handling AI tasks through built-in matrix math acceleration, VMware showcased a sustainable, cost-effective approach that could reshape hardware strategies for mixed-workload servers. Read more in this analyst note for The Futurum Group by Alastair Cooke.
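For readers who want to try this on their own hardware, the sketch below shows one common way to run bf16 LLM inference on an AMX-capable Xeon using PyTorch with Intel Extension for PyTorch. The model name, the use of ipex, and the /proc/cpuinfo check are illustrative assumptions, not details taken from the VMware or Intel presentation.

```python
# Minimal sketch, assuming a Linux guest on a Sapphire Rapids host with AMX
# exposed to the VM. Model choice and use of Intel Extension for PyTorch (ipex)
# are assumptions for illustration.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

# AMX appears as the amx_tile / amx_bf16 / amx_int8 CPU flags on Sapphire Rapids.
with open("/proc/cpuinfo") as f:
    print("AMX bf16 available:", "amx_bf16" in f.read())

model_id = "facebook/opt-1.3b"  # placeholder model small enough for CPU inference
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model = ipex.optimize(model, dtype=torch.bfloat16)  # routes bf16 matmuls to AMX via oneDNN

inputs = tok("What does AMX accelerate?", return_tensors="pt")
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```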


Gemma and Building Your Own LLM AI

At AI Field Day 4, Intel invited the Google Cloud AI team to showcase their Gemma large language model (LLM), revealing insights into the advanced infrastructure used for building such models on Google Cloud. The presentation underlined Gemma’s efficiency with fewer parameters for inference, highlighting Google Cloud’s strength in analytics and AI, particularly in managing the differing resource needs of the model training and application inference phases. Google Cloud’s integration of AI into its products was illustrated with Duet AI, an AI-based assistant that aids in software development, exemplifying a potential future in which AI handles more coding tasks, freeing up developers for high-level problem-solving and design. Read more in this analyst note for The Futurum Group by Alastair Cooke.
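As a rough illustration of how accessible Gemma is outside of Google Cloud, the sketch below loads a small Gemma variant through Hugging Face Transformers for local inference. The model ID, prompt, and generation settings are assumptions for illustration, and the gated weights require accepting Google’s license terms on Hugging Face first.

```python
# Minimal sketch: local inference with a small Gemma variant via Hugging Face
# Transformers. Model ID and prompt are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "In one sentence, how does inference differ from training?"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=48)
print(tok.decode(out[0], skip_special_tokens=True))
```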


Defeating Data Gravity? – Hammerspace

According to Keith Townsend, Hammerspace presented a compelling argument at AI Field Day for a shift in how data gravity is overcome: moving data closer to accelerated computing resources. Their solution, a parallel file system, acts as a bridge between dispersed data sources, offering a unified metadata view that streamlines data preparation for AI tasks. While Hammerspace’s technology appears to enhance the user experience, it also requires strategic GPU placement and careful consideration of data governance and of data movement across geopolitical boundaries.


Insights From the AI Field Day: A Futurum Group Overview

In this LinkedIn Pulse article, Paul Nashawaty of The Futurum Group summarizes all of the AI Field Day presentations, highlighting VMware’s deep dive into Private AI in collaboration with industry giants like NVIDIA and IBM, and Intel’s focus on deploying AI inference models with Xeon CPUs across diverse environments. Next-generation AI-infused storage solutions from Solidigm and Supermicro underscored the critical role of optimized storage in AI, while VAST Data focused on addressing the growing data demands of AI and HPC workloads. Google Cloud’s session on AI platforms and infrastructure showcased innovative approaches with Kubernetes at the core, paving the way for accessible and powerful AI development and deployment.


Cloud Field Day – Infrastructure Matters on the Road

During Cloud Field Day 19, Infrastructure Matters hosts Steven Dickens and Camberley Bates, alongside industry experts Stephen Foskett and Keith Townsend, explored the use cases presented by NeuroBlade, SoftIron, and Platform9. Each company contributes unique on-premises and cloud solutions aimed at optimizing infrastructure performance and management. The discussions, rooted in technical expertise, provide a comprehensive look at the role these companies’ technologies play in shaping the future of IT infrastructure. Listen to the entire webcast from The Futurum Group for more!


Dell – Streamlining Cloud – On and Off Premises

Camberley Bates’ LinkedIn Pulse article examines Dell’s latest solutions, APEX Storage for Public Cloud and the APEX Cloud Platform, introduced at DTW 2023 and further elaborated on during Tech Field Day. These offerings aim to enhance cloud management both on and off premises, with the SDS-based APEX Storage designed for scalability in AWS, Azure, and eventually GCP, complemented by APEX Navigator for comprehensive management. The article also highlights the APEX Cloud Platform’s on-premises integrations for OpenShift, Azure Stack HCI, and VMware Tanzu, which simplify deployment and lifecycle management.


SoftIron – A HyperCloud That Is Not Your Typical HCI

Camberley Bates describes SoftIron’s distinctive approach within the hyperconverged infrastructure (HCI) market, highlighting its appeal to sectors with zero-trust and high-security requirements. SoftIron stands out by manufacturing its own compute, network, and storage hardware in the US and Australia, with a software foundation geared toward high-security standards. Its systems can scale significantly beyond the usual limitations of HCI, supporting large clusters with high performance, even in HPC or image-processing contexts. The article notes SoftIron’s unique architecture, including a stateless device setup and control-plane integration in top-of-rack networking nodes, and mentions the intention to further investigate its data management and system interface.


Platform Engineering & FinOps Converge

Platform9 stands at the forefront of innovative Kubernetes management, blending platform engineering expertise with FinOps principles to tackle over-provisioning and optimize resource allocation. Their Managed Kubernetes service streamlines deployment and operational management, enhancing both operational efficiency and financial optimization for enterprises. With the introduction of Elastic Machine Pool, targeting AWS EKS clusters, Platform9 commits to substantial cost savings and improved compute utilization, pointing to a future where technical prowess meets financial prudence in cloud infrastructure management. Read more in this analyst note from The Futurum Group by Steven Dickens and Camberley Bates.


Introduction to NeuroBlade

At Cloud Field Day 19, NeuroBlade introduced its SPU (SQL Processing Unit), which joins the ranks of xPUs designed to push data processing beyond traditional CPU or GPU capabilities. The SPU, a PCIe-attached hardware accelerator, is integrated through the DAXL API and SDK; it currently targets Presto, with plans to expand to Spark and ClickHouse, and NeuroBlade claims performance improvements of up to 30x. The pitch is particularly compelling in the big data era, where the technology could drastically cut the number of servers needed for large workloads, signaling a potentially transformative impact on the cost, space, and power efficiency of data analytics. Read more in this LinkedIn Pulse article by Camberley Bates of The Futurum Group.
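The sketch below is a hedged illustration of the integration point NeuroBlade described: because the SPU sits underneath the query engine via DAXL, an analyst’s Presto/Trino SQL does not change. The coordinator host, catalog, and table names are hypothetical, and the example uses the standard Trino Python client rather than the DAXL SDK itself.

```python
# Minimal sketch, assuming a Trino/Presto cluster whose workers offload scan,
# filter, and aggregation work to NeuroBlade SPUs through the DAXL integration.
# Host, catalog, and table names are hypothetical; the point is that the SQL
# submitted by the analyst stays the same.
import trino

conn = trino.dbapi.connect(
    host="presto-coordinator.example.com",  # hypothetical endpoint
    port=8080,
    user="analyst",
    catalog="hive",
    schema="sales",
)
cur = conn.cursor()
cur.execute("""
    SELECT region, SUM(revenue) AS total_revenue
    FROM transactions
    WHERE order_date >= DATE '2024-01-01'
    GROUP BY region
""")
for row in cur.fetchall():
    print(row)
```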