Vaibhav Rastogi and Scott Shaffer presented for HPE at Next Gen HPE ProLiant Compute Deep Dive
Follow on Twitter using the following hashtags or usernames: #HPEProLiant
HPE ProLiant Compute AI Portfolio and Solutions
Watch on YouTube
Watch on Vimeo
The HPE ProLiant team presents their AI-ready server portfolio: PCAI, DL145, DL380a, and DL384. The team also discusses customer computer-vision use cases and positions compute as the foundation for AI. Presented by Scott Shaffer, CTO, HPE Compute, and Vaibhav Rastogi, Compute Solutions Manager. The presentation begins by emphasizing real-world enterprise applications of AI where HPE is helping customers unlock business value. Common use cases include predictive maintenance, self-checkout in retail, and fraud detection in finance. HPE distinguishes itself with expertise in both HPC and AI, leveraging know-how from Cray and SGI to meet infrastructure demands. The presenters note that while many AI proofs of concept fail to move into production, HPE’s customers see better success rates due to the company’s experience, comprehensive offerings, and ability to create full-stack, optimized solutions spanning training to inference.
HPE’s AI portfolio spans edge-to-core deployments, with the ProLiant Gen12 server line tailored to enterprise inferencing needs and private AI deployments. Servers such as the DL380a and DL384 are equipped to handle modern GPUs and versatile AI workloads. The new Grace Hopper platform is highlighted for its extreme performance through tight CPU-GPU integration over high-speed NVLink. HPE Private Cloud AI is introduced as an all-in-one, as-a-service solution that integrates hardware, software, and networking to simplify AI adoption. Designed for flexibility, it scales from small workloads to larger inference clusters and is model-agnostic, catering to diverse customer needs ranging from retail experiences to healthcare diagnostics. The presenters also share MLPerf benchmark results showcasing optimization and performance across popular AI models like LLaMA and Mistral.
Customer stories reinforce the practicality and adaptability of HPE’s AI solutions. Examples include a Portuguese firm enabling cashier-less shopping on HPE ProLiant servers, Bosch using digital twins and real-time sensor analytics for turbine maintenance, and a school district applying computer vision for security across 3,000 cameras. These scenarios demonstrate the edge-to-cloud continuum and the shift from generic AI approaches to complete, business-driven workflows. HPE positions itself not just as a provider of compute but as a partner offering validated solutions with ISVs, robust security measures, edge capabilities, and AI-specific services. Ultimately, HPE aims to make AI a manageable, performant workload by applying its legacy of enterprise computing and building optimized, reliable infrastructure platforms amid the evolving AI landscape.
Personnel: Scott Shaffer, Vaibhav Rastogi