Mark Seither presented for HPE at AI Infrastructure Field Day 3
Your turnkey AI Factory for Rapid Development with Hewlett Packard Enterprise
Watch on YouTube
Watch on Vimeo
The vast majority of enterprise AI initiatives fail to deliver ROI, not because of a lack of innovation, but due to a significant gap between development and production. This session will explore the “token economics” behind these failures and introduce HPE Private Cloud AI, a turnkey AI factory designed to bridge this gap. We’ll show how this solution simplifies the journey from concept to full-scale deployment and demonstrate its power with a real-world use case: a powerful LLM built for the NBA, empowering you to drive measurable business value from your AI investments.
Mark Seither, Solutions Architect at HPE, introduced Private Cloud AI (PCAI), a turnkey AI factory designed to bridge the gap between AI development and production. PCAI is a fully integrated appliance composed of HPE hardware, NVIDIA GPUs and switches, and HPE’s AI Essentials software, along with NVIDIA AI Enterprise (NVAIE). Seither emphasized that this is not a hastily assembled product but the result of long-term development, internal innovation, and strategic acquisitions, positioning PCAI as a unique and compelling solution in the AI market. He highlighted the evolution of AI, noting that current outcomes are so advanced as to be practically indistinguishable from what was once considered far-off science fiction, which makes it crucial for businesses to embrace and understand its potential.
The speaker also touched on the practical applications of AI, ranging from personalized product recommendations in retail to computer vision for threat detection and anomaly identification. He underscored a key trend he’s observing with his customers: the primary focus is not on replacing employees with AI but on enhancing their capabilities and improving customer experiences. Seither highlighted the challenges companies face in implementing AI, including a lack of enterprise AI strategies and difficulties in scaling AI projects from pilot to production. Data privacy, control, accessibility, and cost-effective deployment methodologies are also significant hurdles.
HPE’s PCAI aims to address these challenges by providing a ready-to-use solution that eliminates the need for companies to grapple with hardware selection, software integration, and driver compatibility. Offered in different “t-shirt” sizes, including a developer system, PCAI is designed to cater to various needs, from inferencing to fine-tuning. The goal is to empower data scientists to start working on AI projects from day one, focusing on differentiated work that directly impacts the business rather than on the complexities of setting up the AI infrastructure.
Personnel: Mark Seither
The AI Chasm: Bridging the gap from pilot to production with Hewlett Packard Enterprise
Watch on YouTube
Watch on Vimeo
The AI market is booming with innovation, yet a significant and costly gap exists between the proof-of-concept phase and successful production deployment. A staggering number of AI projects fail to deliver on their promise, often stalling in “pilot purgatory” due to fragmented tools, unpredictable costs, and a lack of scalable infrastructure. In this session, we’ll examine why so many promising AI initiatives fall short and detail the key friction points—from data pipeline complexity and integration issues to governance and security concerns—that prevent organizations from translating AI ambition into measurable business value.
Mark Seither from HPE discusses the challenges organizations face in moving AI projects from pilot to production. He highlights the rapid pace of innovation in foundation models and AI services, making it difficult for companies to keep up and choose the right tools. A major concern is data security, with companies fearing data exposure when using AI models. The time and effort required to coordinate different teams and make decisions on building AI solutions also contributes to the delays.
Seither emphasizes that hardware alone is insufficient for successful AI implementation and that the conversation must center on business objectives. HPE offers a composable and extensible platform with a pre-validated stack of tools for data connectivity, analytics, workflow automation, and data science. Customers can also integrate their own preferred tools via Helm charts, though they remain responsible for the lifecycle of those tools. The HPE platform is co-engineered with NVIDIA, meaning hardware choices are optimized for cost and performance and the platform is delivered as an engineered system rather than a reference architecture.
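To make the bring-your-own-tool path concrete, the sketch below shows what installing a customer-supplied tool with a standard Helm chart might look like. The repository URL, chart name, release name, namespace, and values file are hypothetical placeholders, and PCAI’s actual import workflow may differ; the point is simply that the tool lands in its own namespace and its lifecycle stays with the customer.

```python
"""Hypothetical sketch: installing a customer-supplied tool alongside the
pre-validated PCAI stack using a standard Helm chart. All names below
(repository, chart, release, namespace, values file) are illustrative."""
import subprocess


def helm(*args: str) -> None:
    # Run a helm CLI command and raise if it fails.
    subprocess.run(["helm", *args], check=True)


# Register the (hypothetical) chart repository and refresh the local index.
helm("repo", "add", "acme-tools", "https://charts.example.com/acme")
helm("repo", "update")

# Install or upgrade the tool in its own namespace so it stays isolated
# from the validated stack; the values file would carry any GPU requests
# or storage settings the tool needs.
helm(
    "upgrade", "--install", "acme-annotator", "acme-tools/annotator",
    "--namespace", "customer-tools", "--create-namespace",
    "-f", "annotator-values.yaml",  # hypothetical values file
)
```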
The HPE Data Lakehouse Gateway provides a single namespace for accessing and managing data assets, regardless of their location. HPE also has an Unleash AI program with validated ISV partners and supports NVIDIA Blueprints for end-to-end customizable reference architectures. Furthermore, HPE offers a private cloud solution with cost savings compared to public cloud alternatives, emphasizing faster time to value, complete control over security and data sovereignty, and predictable costs through both CapEx and OpEx models, including flexible capacity with GreenLake.
Personnel: Mark Seither
The AI Factory: A strategic overview with Hewlett Packard Enterprise
Watch on YouTube
Watch on Vimeo
Many organizations find that their AI initiatives, despite early promise, fail to deliver a positive ROI. This can be traced to “token economics”—the complex and often unpredictable costs associated with consuming AI models, particularly in the public cloud. This session will dissect these hidden costs and the architectural bottlenecks that lead to runaway spending and stalled projects. We’ll then present a comprehensive overview of HPE Private Cloud AI, a full-stack, turnkey solution designed to provide predictable costs, superior performance, and total control. We will explore how its integrated hardware and software—from NVIDIA GPUs and HPE servers to a unified management console—enable a powerful and predictable path to production, turning AI from a financial gamble into a strategic business asset.
The presentation highlights the often-overlooked costs associated with AI initiatives in the public cloud, citing examples like over-provisioning, lack of checkpointing, and inefficient data usage. The speaker emphasizes that many companies experience significantly higher operational costs than initially anticipated, with one example of an oil and gas company spending ten times more than projected. While some companies may not be overly concerned with these cost overruns if the AI models deliver results, HPE contends that this isn’t sustainable for most organizations and that there are cost savings to be found.
HPE’s solution, Private Cloud AI, offers a predictable cost model and significant savings compared to cloud-based alternatives. These cost savings, averaging around 45%, are most pronounced with larger systems managed within the customer’s own data center, though co-location options are also available with slightly higher overhead. Furthermore, HPE’s solution addresses the hidden costs associated with building and managing an AI infrastructure from scratch, including the need for specialized teams and resources for each layer of the technology stack.
Beyond cost considerations, HPE’s Private Cloud AI provides greater control over data, mitigating concerns about data privacy and usage in downstream training cycles, which is important considering inquiries into the training data used for some AI models. The solution offers flexible purchasing options, including both CapEx and OpEx models, with HPE GreenLake enabling reserved capacity and on-demand access to additional resources without upfront costs. This combination of cost-effectiveness, control, and flexibility positions HPE Private Cloud AI as a compelling alternative to the public cloud for AI deployments.
Personnel: Mark Seither
The AI Factory in Action: Basketball play classification with Hewlett Packard Enterprise
Watch on YouTube
Watch on Vimeo
This session provides a live demonstration of a practical AI application built on top of HPE Private Cloud AI (PCAI). The speaker, Mark Seither, showcases a basketball play classification application that leverages a machine learning model trained on PCAI. This model accurately recognizes and categorizes various basketball plays, such as pick and roll, isolation, and fast break. The demo highlights how the powerful and predictable infrastructure of PCAI enables the development and deployment of complex, real-world AI solutions. This example illustrates the full lifecycle of an AI project—from training to deployment—on a private cloud platform.
The presentation details the development of an AI application for an NBA team focused on video analysis, starting with the specific use case of identifying player fatigue. The initial approach used SlowFast, an open-source video classification model, trained to recognize basketball plays such as pick-and-rolls and isolations. To create a labeled dataset for training, the presenter manually extracted and labeled video clips from YouTube using tools like QuickTime and Label Studio. Trained on this small dataset of labeled plays, the model demonstrated promising accuracy in identifying them; although it had limitations, it served as a basic but functional proof of concept.
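The presenter did not show his training code, but the workflow he described maps onto a fairly standard fine-tuning loop: load a pretrained SlowFast backbone, swap its classification head for the handful of play classes, and train on the hand-labeled clips. The sketch below assumes PyTorchVideo’s slowfast_r50 weights and substitutes random tensors for a real clip DataLoader; the class names and hyperparameters are illustrative, not the presenter’s.

```python
"""Illustrative sketch (not the presenter's code): fine-tune a pretrained
SlowFast video classifier on a small set of hand-labeled basketball clips."""
import torch
import torch.nn as nn

PLAY_CLASSES = ["pick_and_roll", "isolation", "fast_break"]  # example labels from the talk

# Pretrained SlowFast R50 from PyTorchVideo's model zoo (Kinetics-400 weights).
model = torch.hub.load("facebookresearch/pytorchvideo", "slowfast_r50", pretrained=True)

# Replace the 400-way Kinetics head with a small head for the play classes.
in_features = model.blocks[-1].proj.in_features
model.blocks[-1].proj = nn.Linear(in_features, len(PLAY_CLASSES))

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()


def fake_batch(batch_size: int = 2):
    """Stand-in for a real DataLoader over Label Studio annotations.
    SlowFast expects two pathways -- slow (8 frames) and fast (32 frames) --
    each shaped (batch, channels, frames, height, width)."""
    slow = torch.randn(batch_size, 3, 8, 224, 224)
    fast = torch.randn(batch_size, 3, 32, 224, 224)
    labels = torch.randint(0, len(PLAY_CLASSES), (batch_size,))
    return [slow, fast], labels


model.train()
for step in range(10):  # a real run would iterate over the labeled clip dataset
    inputs, labels = fake_batch()
    optimizer.zero_grad()
    logits = model(inputs)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss={loss.item():.3f}")
```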
The speaker then discusses the next steps, which involve using HPE’s Machine Learning Inferencing Service (MLIS) to deploy the model as an endpoint, allowing the team to upload and classify video clips more easily. Furthermore, he plans to integrate the play classification with a video language model (VLM), enabling the team to query their video assets using natural language, such as “Show me every instance of Steph Curry running a pick and roll in the fourth quarter of a game in 2017.” He also showcased the platform’s RAG capabilities using the NBA collective bargaining agreement to answer specific questions, highlighting its potential to provide quick, valuable insights to customers.
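MLIS’s request format was not shown in the session, so the following is only a hypothetical illustration of what “upload a clip and get a play label back” could look like once the model sits behind an inference endpoint; the URL, auth token, multipart field name, and response schema are all placeholders.

```python
"""Hypothetical sketch of querying a deployed play-classification endpoint.
The URL, auth header, and response schema are placeholders; the actual
MLIS request format is not shown in the session."""
import requests

ENDPOINT = "https://mlis.example.internal/v1/models/play-classifier:predict"  # placeholder URL
TOKEN = "replace-with-service-token"  # placeholder credential


def classify_clip(path: str) -> dict:
    # Upload a short video clip and return the predicted play label(s).
    with open(path, "rb") as clip:
        response = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Bearer {TOKEN}"},
            files={"clip": clip},
            timeout=120,
        )
    response.raise_for_status()
    return response.json()  # e.g. {"label": "pick_and_roll", "confidence": 0.91}


if __name__ == "__main__":
    print(classify_clip("fourth_quarter_clip.mp4"))  # illustrative filename
```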
Personnel: Mark Seither