Tech Field Day

The Independent IT Influencer Event


HPE Presents at AI Data Infrastructure Field Day 1



AI Data Infrastructure Field Day 1

Alexander Ollman presented for HPE at AI Data Infrastructure Field Day 1.

This presentation took place on October 2, 2024, from 13:30 to 15:00.

Presenters: Alexander Ollman


Follow on Twitter using the following hashtags or usernames: @HPE_AI, @HPE_GreenLake

A Step-by-Step Guide to Build Robust AI with Hewlett Packard Enterprise


Watch on YouTube
Watch on Vimeo

Generative AI holds the promise of transformative advancements, but its development requires careful planning and execution. Hewlett Packard Enterprise (HPE) draws on its extensive experience to navigate the intricacies of building enterprise-grade generative AI, covering everything from infrastructure and data management to model deployment. Alexander Ollman, a product manager at HPE, emphasizes the importance of bringing the people who will actually use the AI infrastructure into the decision-making process, and highlights the urgent, fast-growing demand for robust AI solutions in the enterprise.

Ollman provides a detailed explanation of the evolution and significance of generative AI, focusing in particular on the transformer models introduced by Google in 2017, which revolutionized the field by enabling real-time generation of responses. He distinguishes between traditional AI models, which are typically smaller and task-specific, and generative models, which are large, computationally intensive, and designed for general-purpose applications. This distinction is crucial for understanding the different infrastructure requirements of each type of AI, as generative models demand far more substantial computational resources and more sophisticated data management strategies.
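To make that contrast concrete, here is a rough back-of-the-envelope calculation (not from the presentation; the model sizes are illustrative) showing why generative models call for a different class of hardware than smaller task-specific models:

    def weight_memory_gib(parameters: float, bytes_per_param: int = 2) -> float:
        """Approximate memory needed just to hold model weights, assuming FP16 (2 bytes per parameter)."""
        return parameters * bytes_per_param / 2**30

    # Illustrative sizes: a ~110M-parameter BERT-style classifier vs. a 70B-parameter generative LLM.
    print(f"110M-parameter task-specific model: ~{weight_memory_gib(110e6):.1f} GiB of weights")
    print(f"70B-parameter generative model:     ~{weight_memory_gib(70e9):.1f} GiB of weights")

The smaller model fits easily on a single commodity GPU, while the larger one exceeds the memory of most single accelerators before activations and serving overhead are even counted, which is why GPU-accelerated clusters and fast interconnects recur throughout the presentation.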

The presentation underscores the complexity of deploying generative AI applications, outlining a multi-step process that includes data gathering, preparation, selection, model training, and validation. Ollman stresses the importance of automating and abstracting these steps to streamline the process and make it accessible to various personas involved in AI development, from data engineers to application developers. He also highlights the necessity of high-performance infrastructure, such as GPU-accelerated compute and fast networking, to support the large-scale models used in generative AI. By abstracting technical complexities, HPE aims to empower organizations to harness the full potential of generative AI while ensuring reliability and efficiency in their AI deployments.
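As a rough illustration of those stages, the sketch below walks through data gathering, preparation, selection, training, and validation on a toy text-classification task. It is a generic scikit-learn example, not HPE’s tooling; each step stands in for what an enterprise platform would automate and abstract.

    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # 1. Data gathering: pull raw text from a source system.
    raw = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])

    # 2. Data preparation: turn raw text into model-ready features.
    vectorizer = TfidfVectorizer(max_features=5000)
    X = vectorizer.fit_transform(raw.data)
    y = raw.target

    # 3. Data selection: hold out a validation split.
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

    # 4. Model training.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # 5. Validation: gate any promotion to deployment on a quality check.
    print(f"validation accuracy: {accuracy_score(y_val, model.predict(X_val)):.3f}")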

Personnel: Alexander Ollman

Building a Generative AI Foundation with HPE


Watch on YouTube
Watch on Vimeo

Join Hewlett Packard Enterprise’s product team for a deep dive into the AI architecture and infrastructure needed to deploy generative AI at enterprise scale. We’ll explore the essential components—from high-performance compute and storage to orchestration—that power these models. Using real-world case studies, we’ll uncover the intricacies of balancing computational resources, networking, and optimization. Discover how Hewlett Packard Enterprise simplifies this process with integrated solutions.

In the presentation, Alex Ollman and Edward Holden from HPE discuss the comprehensive infrastructure required to support generative AI at an enterprise level, focusing on both hardware and software components. They emphasize the importance of a holistic approach that integrates high-performance computing, storage, and orchestration to manage the complex workflows involved in machine learning operations. The HPE Ezmeral platform is highlighted as a key solution that abstracts the underlying infrastructure, making it easier for data scientists, engineers, and developers to focus on their specific tasks without worrying about the technical complexities of setting up and managing the infrastructure.
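The presentation does not expose Ezmeral’s own APIs, but the kind of abstraction described here typically sits on top of a container orchestrator. As a generic illustration (plain Kubernetes via its Python client, with a hypothetical image and namespace), this is the sort of GPU resource request such a platform can generate on a user’s behalf so data scientists never have to write it themselves:

    from kubernetes import client, config

    config.load_kube_config()

    # Hypothetical fine-tuning job; a platform would fill in image, GPU count, and placement.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="finetune-job"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="trainer",
                    image="registry.example.com/llm-trainer:latest",  # placeholder image
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "2", "memory": "64Gi"},
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="ai-workloads", body=pod)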

The presentation also delves into the roles of different personas within an organization, such as cloud administrators, AI administrators, and AI developers. Each role has specific needs and responsibilities, and HPE’s Private Cloud AI offering is designed to cater to these needs by providing a unified platform that simplifies user management, data access, and resource allocation. The platform allows for seamless integration of various tools and frameworks, such as Apache Airflow for data engineering and Jupyter Notebooks for development, all pre-configured and ready to use. This approach not only accelerates the deployment of AI models but also ensures that the infrastructure can scale efficiently to meet the demands of enterprise applications.
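Since Apache Airflow is one of the tools called out as pre-configured, a short hypothetical DAG gives a feel for what a data engineer would author on such a platform (task names and schedule are illustrative, and this assumes stock Airflow 2.x rather than anything specific to Private Cloud AI):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_documents(**_):
        print("pull raw documents from the source system")        # placeholder step

    def build_embeddings(**_):
        print("chunk, embed, and index documents for retrieval")  # placeholder step

    with DAG(
        dag_id="prepare_corpus",          # hypothetical pipeline name
        start_date=datetime(2024, 10, 1),
        schedule="@daily",
        catchup=False,
    ):
        extract = PythonOperator(task_id="extract_documents", python_callable=extract_documents)
        embed = PythonOperator(task_id="build_embeddings", python_callable=build_embeddings)
        extract >> embed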

Furthermore, the presentation touches on the collaboration between HPE and NVIDIA to enhance the capabilities of the Private Cloud AI platform. This partnership aims to deliver scalable, enterprise-grade AI solutions that can handle large language models and other complex AI workloads. The integration of NVIDIA’s AI Enterprise stack with HPE’s infrastructure ensures that users can deploy and manage AI models at scale, leveraging the best of both companies’ technologies. The session concludes with a discussion on the support and diagnostic capabilities of the platform, ensuring that organizations can maintain and troubleshoot their AI infrastructure effectively.

Personnel: Alexander Ollman, Edward Holden

Streamline AI Projects with Infrastructure Abstraction from HPE


Watch on YouTube
Watch on Vimeo

In this presentation, Alex Ollman from Hewlett Packard Enterprise (HPE) discusses the transformative potential of infrastructure abstraction in accelerating AI projects. The focus is on HPE’s Private Cloud AI, a solution designed to simplify the management of complex systems, thereby allowing data engineers, scientists, and machine learning engineers to concentrate on developing and refining AI applications. By leveraging HPE Ezmeral Software, the Private Cloud AI aims to provide a unified experience that maintains control over both the infrastructure and the associated data, ultimately fostering innovation and productivity in AI-driven projects.

Ollman emphasizes the importance of abstracting the underlying infrastructure, including GPU-accelerated compute, storage for models, and high-speed networking, into a virtualized software layer. This abstraction reduces the time and effort required to manage these components directly, enabling users to focus on higher-level tasks. HPE’s GreenLake Cloud Platform plays a crucial role in this process by automating the configuration of entire racks, which can be set up with just three clicks. This ease of use is further enhanced by HPE AI Essentials, which allows for the creation and deployment of automations tailored to the unique data structures of different organizations.

The presentation also highlights HPE’s collaboration with NVIDIA to scale the development and deployment of large language models and other generative models. This partnership aims to make these advanced AI components more accessible and scalable for enterprises. HPE’s solution accelerators, part of the Private Cloud AI offering, promise to streamline the deployment of data, models, and applications with a single click. This capability is expected to be formally released by the end of the year, providing a powerful tool for enterprises to manage and scale their AI projects efficiently.

Personnel: Alexander Ollman

