Cisco Silicon One ASICs: A100 and E100/K100

Event: Tech Field Day Extra at Cisco Live US 2025

Appearance: Cisco Presents at Tech Field Day Extra at Cisco Live US 2025

Company: Cisco

Video Links:

Personnel: Shawn Wargo

Tune in for this overview of the newest additions to the Cisco Silicon One lineup, the A100 and the E100/K100. These new Network Processing Units (NPUs) represent a second generation of Silicon One ASICs specifically designed for campus environments, prioritizing features and high scale over raw speed alone. Unlike previous generations, these ASICs are built for a feature-rich environment, supporting large tables for MAC addresses, Access Control Lists (ACLs), and NetFlow. This new design is crucial for enabling advanced capabilities like application hosting for containerized environments, AI/ML models, and Hypershield, a containerized distributed firewall, directly on the switch hardware. The intelligence behind this is rooted in cloud-native IOS-XE, which seamlessly integrates with both Meraki Dashboard and Catalyst Center, offering a unified and automated management experience without the need for special commands or reboots.

The A100 and K100 ASICs boast significant advancements in memory and table management, critical for modern network demands. They feature enhanced Longest Prefix Match (LPM) for highly efficient routing table entries, achieving over 90% utilization for millions of routes. A key innovation is HCAM (Hash-based Algorithmic TCAM), which combines a reasonably sized TCAM with fast, cost-effective SRAM to deliver massive scale for ACLs and NetFlow, a crucial requirement for campus networks. This hybrid approach allows for flexible allocation of memory based on specific needs through customizable SDM templates. Furthermore, these ASICs include hardware-based MACsec and IPsec for line-rate data encryption, and support for Precision Time Protocol (PTP) and Audio Video Bridging (AVB) to address latency-sensitive traffic. The A100 and K100 can scale from 400 Gigabit Ethernet all the way down to 10 Megabit half-duplex, accommodating a wide range of devices, from high-performance uplinks to legacy printers.
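
As a rough illustration of how that SDM-template flexibility is exercised, the sketch below uses the long-standing Catalyst IOS-XE commands for viewing and selecting a template. Treat it as a minimal sketch: the template names and per-table resource options available on the A100/K100 platforms were not detailed in the session, so the custom step here is hypothetical.

    Switch# show sdm prefer              ! display the active template and its table allocations
    Switch# configure terminal
    Switch(config)# sdm prefer custom    ! hypothetical: select a customizable template; the
                                         ! template names and options vary by platform/release
    Switch(config)# end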

The presentation also highlighted the architectural benefits of the new switches, particularly the next-generation StackWise. This redesigned stacking capability utilizes a Linux kernel with separate processes for bootstrapping and cluster management, enabling in-service software upgrades (ISSU) and minimizing disruption during updates. The cluster remains operational even if an individual switch process is interrupted, preventing catastrophic link downtime. This standardized, VXLAN-based stacking architecture provides dynamic link additions and ensures consistent management across both the C9350 and C9610. The underlying hardware improvements, including the latest Intel x86 CPUs with larger and faster DRAM, are fundamental to supporting these advanced software capabilities and the demanding requirements of AI, security, and high-density network environments.


Introducing Cisco C9000 Series Smart Switches

Event: Tech Field Day Extra at Cisco Live US 2025

Appearance: Cisco Presents at Tech Field Day Extra at Cisco Live US 2025

Company: Cisco

Video Links:

Personnel: Kenny Lei, Muhammad Imam

Join Cisco as we introduce the Catalyst C9000 family of switches featuring Cisco Silicon One. This new series represents a significant evolution from the highly successful Catalyst 9000, introduced in 2017, which has been widely adopted by hundreds of thousands of customers. 2025 presents new challenges and opportunities, including the proliferation of AI-powered devices, the widespread adoption of multi-gigabit Wi-Fi 7, and the increasing relevance of quantum computing, especially concerning network security. These changes necessitate a new generation of networking hardware capable of handling evolving traffic patterns, such as the reverse flow of data driven by AI applications, and addressing the threat of “harvest now, decrypt later” from future quantum adversaries.

To address these challenges, Cisco is introducing the C9000 series smart switches, starting with two primary devices: the Cisco C9350 stackable switch and the Cisco C9610. The C9350 is a next-generation stackable switch, while the C9610 is a 10-slot chassis designed as a successor to platforms like the Catalyst 6500 and Nexus 7700. Both leverage Cisco Silicon One ASICs (K100 and A100) for high performance and come with an enhanced IOS-XE operating system, which includes microservices architecture, improved application hosting, and a roadmap towards quantum-safe compliance. A key innovation is the unified management approach, allowing customers to manage these switches via Meraki Dashboard, Catalyst Center, or traditional CLI/API, offering unprecedented operational flexibility.

The C9350 features a Silicon One A6 capable of 1.3 terabits of bandwidth and 1.6 terabits of stacking bandwidth, with significantly higher ACL and routing table scales compared to its predecessors. Its new stacking architecture utilizes standard Ethernet-based VXLAN for greater flexibility and improved resiliency against cable failures. The C9610 chassis is designed for 51.2 terabits per second, with supervisors supporting 25.6 terabits, and incorporates a centralized cable backplane for efficient front-to-back airflow. Both C9000 series switches are built with post-quantum cryptography compliance in mind, featuring secure unique identifiers (SUID) at the hardware level, and are ready for future security enhancements like Hypershield and Cisco Live Protect for rapid vulnerability remediation. The enhanced application hosting capabilities with faster CPUs, increased memory, and internal data connections further solidify their readiness for the AI era.


Assessing the Current State of AI-driven Packet Analysis with VIAVI

Event: Tech Field Day Extra at Cisco Live US 2025

Appearance: VIAVI Presents at Tech Field Day Extra at Cisco Live US 2025

Company: VIAVI Solutions

Video Links:

Personnel: Chris Greer, Ward Cobleigh

As network environments grow in complexity, speeds, and feeds, packet analysis gets increasingly difficult. In this session, we shared the results of research into how Artificial Intelligence has the potential to change the game, including automating anomaly detection, accelerating root cause analysis, and revealing patterns in network traffic that might otherwise go unnoticed. But how can AI fit into your current troubleshooting workflow, where is it reliable, and where do we need to validate its findings? Can AI really spot the issues that matter? Whether you’re a network engineer, a security analyst, or anyone responsible for performance and uptime, you’ll walk away from this session with practical guidance on how to use AI effectively, and a better understanding of its limitations.

Ward Cobleigh and Chris Greer continued their discussion on the practical challenges of using AI in packet analysis, particularly focusing on managing large PCAP files. They emphasized that as network speeds increase, PCAP files can grow rapidly, making analysis difficult. Greer’s best practices included capturing only necessary data and using Wireshark’s rolling capture to limit file sizes. For complex, multi-tier applications, it’s crucial to identify the right capture points to find the root cause, not just symptoms. VIAVI Solutions helps customers by providing tools to efficiently capture and analyze relevant packets, avoiding the overwhelming task of sifting through massive data sets. Their approach involves using machine learning to score network performance and identify problem domains, then narrowing down to specific socket connections for detailed analysis.

The VIAVI Solutions Observer platform uses an end-user experience (EUE) scoring method to pinpoint inefficiencies, categorizing them as network, client, app, or server-related issues. They demonstrated how their on-demand application dependency map visualizes the service architecture, helping to identify problematic servers. By focusing on specific socket connections and filtering out irrelevant data, they enable users to export small, manageable PCAP files for further analysis in tools like Wireshark. This approach streamlines the troubleshooting process, allowing analysts to concentrate on relevant data and resolve network issues more effectively. They also addressed challenges in capturing data in cloud environments, noting the varying capabilities of AWS, Azure, and Google Cloud, and the importance of reliable data capture methods.
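
For anyone wanting to apply the rolling-capture practice Greer described, Wireshark’s bundled dumpcap utility implements it directly with its ring-buffer options. A minimal sketch, with a placeholder interface name and file sizes:

    # Keep at most 10 files of ~100 MB each, overwriting the oldest (rolling capture)
    # Optionally add a capture filter (-f "host 10.0.0.5") to capture only necessary data
    dumpcap -i eth0 -b filesize:102400 -b files:10 -w /captures/rolling.pcapng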


Enhancing Packet Analysis with AI – Smarter, Faster, and More Effective with VIAVI

Event: Tech Field Day Extra at Cisco Live US 2025

Appearance: VIAVI Presents at Tech Field Day Extra at Cisco Live US 2025

Company: VIAVI Solutions

Video Links:

Personnel: Chris Greer, Ward Cobleigh

As network environments grow in complexity, speeds, and feeds, packet analysis gets increasingly difficult. In this session, we shared the results of research into how Artificial Intelligence has the potential to change the game, including automating anomaly detection, accelerating root cause analysis, and revealing patterns in network traffic that might otherwise go unnoticed. But how can AI fit into your current troubleshooting workflow, where is it reliable, and where do we need to validate its findings? Can AI really spot the issues that matter? Whether you’re a network engineer, a security analyst, or anyone responsible for performance and uptime, you’ll walk away from this session with practical guidance on how to use AI effectively, and a better understanding of its limitations.

Ward Cobleigh and Chris Greer discussed the current state of AI-driven packet analysis, particularly focusing on how popular Large Language Models (LLMs) handle PCAP data. They presented a small, deliberately crafted PCAP file with one significant anomaly (a 132-second server response time) to various LLMs, including Claude Sonnet 4, GPT, Copilot, and Gemini (both the original release and the 2.5 Pro preview).
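
For context, an anomaly like that 132-second response gap can be surfaced deterministically, before any LLM is involved, with a display filter on inter-packet delta time. A minimal sketch with tshark (file name and threshold are placeholders):

    # List packets that arrived more than 100 seconds after the previous captured packet
    tshark -r test.pcap -Y "frame.time_delta > 100" \
        -T fields -e frame.number -e ip.src -e ip.dst -e frame.time_delta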


AI-Driven Network Automation and Insights with Opengear’s Production Platform

Event: Tech Field Day Extra at Cisco Live US 2025

Appearance: Opengear Presents at Tech Field Day Extra at Cisco Live US 2025

Company: Opengear

Video Links:

Personnel: Andrew Pearce, Matt Witmer

This session will feature live demos showing how Opengear’s Unified Digital Operations Platform (UDOP) uses an Independent Data Management Path (IDMP) to overcome data silos, improve security, and reduce congestion. Attendees will see how UDOP enables resilient operations, enforces governance end to end, and powers proactive, agentic AI to drive automation from day one.

The presentation focuses on the practical application of Opengear’s Unified Digital Operations Platform (UDOP) in standard network engineering scenarios, encompassing design, deployment, operations, and troubleshooting. The demo highlights how UDOP, enhanced with agentic AI, addresses key challenges in network management. The presentation details various Opengear solutions, including Lighthouse Service Portal (LSP), Smart Management Fabric (SMF), and Connected Resource Catalog (CRC), emphasizing the extensibility and flexibility of the Opengear platform beyond traditional console servers. The demo showcases a high-level network architecture and a detailed network diagram to illustrate the use of these tools in real-world scenarios.

The presentation outlines a scenario where Acme Corporation expands its Utah location, requiring the integration of a new rack into the existing network. The process begins with a network architect creating a diagram, which is then converted into structured JSON data using a custom web application. This data is pushed to the Connected Resource Catalog (CRC), enabling automated inventory management. The demo further illustrates the staging process at an integration lab, where the Opengear Operations Manager (OM) utilizes LSP for zero-touch provisioning. This process automates the configuration of connected devices, including Cisco routers, and reconciles the deployed devices with the reference design. The presentation also covers the operational phase, demonstrating how the system handles vulnerability audits by checking CVEs and generating reports, and how log analytics are used to identify and address network issues like port flapping.
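
The diagram-to-JSON handoff is easier to picture with a small example. The schema below is hypothetical (the actual CRC payload format was not shown in the session); it simply illustrates the kind of structured rack and device data the custom web application could emit for the Connected Resource Catalog:

    {
      "site": "acme-utah",
      "rack": "rack-07",
      "devices": [
        { "name": "om2200-01", "role": "console-server", "model": "Opengear OM2200" },
        { "name": "rtr-edge-01", "role": "router", "model": "Cisco router",
          "console": { "attached_to": "om2200-01", "port": 1 } }
      ]
    }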

The presentation concludes with a discussion of log analytics and the use of AI to analyze log data for network troubleshooting. Matt Witmer demonstrated how the system can identify potential network issues, such as spanning tree loops, by analyzing log data streamed from connected resources. The AI, while still in early development, shows promise in providing actionable insights, though it requires further training to differentiate between standard functional behavior and actual network anomalies. The presentation also addressed the flexibility of the system, including the ability to swap out AI models and integrate with customers’ private AI systems. The demo underscores the early stage of development but highlights the potential of Opengear’s UDOP in streamlining network operations and enhancing security through intelligent automation.


Enabling Agentic AI and Securing the Data Perimeter with Opengear SMF and UDOP

Event: Tech Field Day Extra at Cisco Live US 2025

Appearance: Opengear Presents at Tech Field Day Extra at Cisco Live US 2025

Company: Opengear

Video Links:

Personnel: Doug Wadkins

At Tech Field Day during Cisco Live San Diego, Opengear CTO Douglas Wadkins introduced the Unified Digital Operations Platform (UDOP), a new category of intelligent infrastructure built for AI-driven network operations. UDOP builds on Opengear’s Smart Out-of-Band technologies to deliver a centralized, secure, and intelligent foundation for enterprise-grade AIOps.

Opengear has more than 20 years of experience in IT and network management solutions. The company is leveraging its established platform to develop a Unified Digital Operations Platform (UDOP) for AI-driven network operations. This initiative stems from a recent push into the rapidly evolving AI landscape, which revealed both immense potential and significant societal implications, particularly concerning the future of junior roles in the workforce due to increased AI-driven efficiency. The core challenge in developing effective AI agents is providing them with rich, contextual data, which is often fragmented across various siloed business systems.

Opengear’s UDOP aims to break down these data silos by building upon their existing platform’s ability to connect to virtually any sensor or management port, pulling in diverse contextual data and enabling control. This is critical because, as the presentation highlights, siloed data leads to siloed, less intelligent AI agents. The discussion also touched upon the industry shift away from traditional Software as a Service (SaaS) applications towards agent-centric models, as evidenced by statements from CEOs of major tech companies like Microsoft and Salesforce. This transition emphasizes the need for new platforms that can secure proprietary domain knowledge when exposed to AI agents.

The presentation then discusses the evolution of AI agents, from simple first-generation query-response systems to more sophisticated second-generation agents that incorporate external data and tools, and finally to the anticipated third-generation agents that will operate with a higher degree of autonomy. Opengear’s UDOP is designed to support these advanced agents by providing a secure and governed framework for data ingestion, access control, and a feedback loop, potentially incorporating simulation and digital twins for training. The platform addresses emerging industry protocols like Anthropic’s Model Context Protocol (MCP) for normalizing disparate data and Google’s agent-to-agent protocol for inter-agent communication, while also emphasizing the critical need for robust security and identity management within these new AI-driven ecosystems.


Intel Gaudi 3 AI Performance Testing with Signal65

Event: Cloud Field Day 23

Appearance: Signal65 Presents at Cloud Field Day 23

Company: Signal65

Video Links:

Personnel: Mitch Lewis

Over the last few years, generative AI has demonstrated its immense potential as a revolutionary technology. AI-powered applications have demonstrated the ability to enhance automation, streamline workflows, and accelerate innovation. Furthermore, the technology has proven to be broadly applicable, with opportunities for creating new, intelligent applications across virtually every industry. While the value of generative AI is apparent, the powerful hardware required to run such applications often serves as a barrier. As AI is increasingly moving from an experimental trend to the backbone of real-world applications, IT organizations are challenged with balancing the necessary performance with economic considerations of AI hardware, and doing so at scale.

Signal65, a performance testing and benchmarking team within the Futurum Group, presented their findings on Intel Gaudi 3 AI accelerators at Cloud Field Day 23. The presentation focused on AI inference performance, detailing two main projects: on-premises testing and cloud-based testing on IBM Cloud. The on-premises testing compared Gaudi 3 with NVIDIA H100, using the Kamawaza AI testing suite on Meta’s Llama models (8B and 70B parameters) with varying input/output token shapes. The results showcased Gaudi 3’s competitive performance, especially when factoring in the lower cost, resulting in up to 2.5 times better price-performance than the H100.

The presentation then shifted to Gaudi 3’s performance on IBM Cloud, testing against both H100 and H200. The testing included Granite, Mixtral, and Llama models. Gaudi 3 consistently showed better performance compared to H100 and was very competitive against H200, also showing significant cost advantages, with a 30% lower hourly rate than the NVIDIA options. In both on-premises and cloud scenarios, the speaker highlighted the importance of considering both performance and price when evaluating AI hardware options, particularly for enterprises deploying AI applications at scale. The presentation concluded with a call to recognize the growing competitiveness of the AI hardware market, moving away from singular NVIDIA dominance.
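
Price-performance in comparisons like these is simple arithmetic: throughput divided by cost per hour. The figures below are placeholders rather than Signal65’s measurements; the sketch only shows how a roughly 30% lower hourly rate compounds with competitive throughput:

    # Hypothetical figures for illustration only (not Signal65's data)
    h100   = {"tokens_per_sec": 1000.0, "usd_per_hour": 10.0}
    gaudi3 = {"tokens_per_sec":  900.0, "usd_per_hour":  7.0}  # ~30% lower hourly rate

    def price_perf(accel):
        """Tokens processed per dollar spent."""
        return accel["tokens_per_sec"] * 3600 / accel["usd_per_hour"]

    print(f"H100:    {price_perf(h100):,.0f} tokens per dollar")
    print(f"Gaudi 3: {price_perf(gaudi3):,.0f} tokens per dollar")
    print(f"Ratio:   {price_perf(gaudi3) / price_perf(h100):.2f}x")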


MinIO AIStor, S3 Express API, NVIDIA GDS, and BF3 Overview with MinIO

Event: Cloud Field Day 23

Appearance: MinIO Presents at Cloud Field Day 23

Company: MinIO

Video Links:

Personnel: AB Periasamy

AB Periasamy, Co-CEO of MinIO, presented at Cloud Field Day 23, focusing on the AI-centric capabilities of MinIO AIStor. The presentation highlighted three key areas: S3 Express API compatibility, integration with NVIDIA GPUDirect Storage (GDS), and the forthcoming integration with NVIDIA BlueField 3 DPUs. These technologies aim to enhance performance and efficiency for AI and data-intensive workloads.

The discussion began with S3 Express, a refined subset of the Amazon S3 API designed for high-performance applications, particularly those involving AI workloads. MinIO has implemented the S3 Express API, offering users the choice between the regular S3 API and the S3 Express API, without requiring changes to data formats. The presentation emphasized that S3 Express eliminates performance bottlenecks, such as directory sorting and unnecessary checksum computations, that limit modern AI applications. It provides faster time-to-first-byte metrics compared to traditional S3.
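
One way to check the time-to-first-byte claim against your own deployment is simply to time a GET. Below is a minimal sketch using boto3; the endpoint, bucket, and credentials are placeholders, and how AIStor routes a request down the Express path (bucket type, session authentication) is not covered in the session:

    import time
    import boto3

    # Placeholder endpoint and credentials; point these at an AIStor deployment
    s3 = boto3.client(
        "s3",
        endpoint_url="https://aistor.example.com",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    start = time.perf_counter()
    obj = s3.get_object(Bucket="test-bucket", Key="sample.bin")
    obj["Body"].read(1)  # block until the first byte arrives
    print(f"time to first byte: {(time.perf_counter() - start) * 1000:.1f} ms")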

Next, Periasamy introduced NVIDIA GPUDirect Storage, an interface that allows direct data transfer between storage and GPU memory, bypassing the CPU. The upcoming MinIO integration will expose this functionality through the S3 API, using HTTP as the control plane and RDMA as the data channel. The presentation concluded with a discussion of how integrating with NVIDIA BlueField 3 DPUs would enable an ultra-efficient JBOF (JBOD of flash) storage design, yielding a low-power, high-performance system that runs entirely on the SmartNIC.


AIStor – PromptObject, AIHub, and MCP Demos with MinIO

Event: Cloud Field Day 23

Appearance: MinIO Presents at Cloud Field Day 23

Company: MinIO

Video Links:

Personnel: Dil Radhakrishnan

Dil Radhakrishnan presented MinIO’s AIStor capabilities at Cloud Field Day 23, focusing on how MinIO is adapting to AI workloads. The presentation demonstrated three key features: AI Hub, PromptObject, and Model Context Protocol (MCP) server. AI Hub provides a Hugging Face-compatible repository for securely storing private AI models and datasets within the AIStor environment. This enables developers to manage and deploy fine-tuned models without exposing them to the public, leveraging the familiar Hugging Face ecosystem.

The presentation then introduced PromptObject, which enables interaction with objects in AIStor using large language models (LLMs). By integrating GenAI capabilities directly into the S3 API, developers can use the “prompt” function to have the LLM extract specific data from unstructured objects, transforming it into structured JSON for easier application integration. This approach eliminates the need for separate RAG pipelines in many scenarios, since PromptObject simplifies interaction with single objects, though it can still be combined with a RAG implementation where needed.
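
As a rough sketch of what calling a prompt-style extension could look like from Python: the endpoint shape, parameters, and authentication below are hypothetical, standing in for the documented AIStor PromptObject API rather than reproducing it:

    import requests

    # Hypothetical request shape; consult the AIStor documentation for the real API
    url = "https://aistor.example.com/invoices/march.pdf?prompt"
    body = {"prompt": "Extract vendor, total, and due date as structured JSON"}

    resp = requests.post(url, json=body, auth=("ACCESS_KEY", "SECRET_KEY"))
    print(resp.json())  # structured JSON extracted from the unstructured object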

Finally, the presentation showcased the AIStor MCP server, which enables agentic workflows. The MCP server allows AI agents to interact with the data stored in MinIO. This was demonstrated using Claude Desktop, showing how an agent can list buckets, extract information from images, automatically tag data, and create visualizations of the AIStor cluster. This approach enhances data accessibility and facilitates automation in managing and analyzing data.
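
For context on how an MCP server like this gets wired into a client such as Claude Desktop, the snippet below follows the standard claude_desktop_config.json layout; the command and arguments for AIStor’s MCP server are placeholders:

    {
      "mcpServers": {
        "aistor": {
          "command": "aistor-mcp-server",
          "args": ["--endpoint", "https://aistor.example.com"]
        }
      }
    }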


Introducing MinIO AIStor – Object Storage for AI and Analytics with MinIO

Event: Cloud Field Day 23

Appearance: MinIO Presents at Cloud Field Day 23

Company: MinIO

Video Links:

Personnel: Jason Nadeau

MinIO’s VP of Product Marketing, Jason Nadeau, introduces the AIStor object storage solution, designed for AI and lakehouse analytics environments, at Cloud Field Day 23. AIStor distinguishes itself from object gateway approaches by being object-native. Nadeau highlights the importance of object storage for AI, as evidenced by its use in building large language models and various data lakehouse tools. In contrast to the complex, multi-layered architecture of retrofit object gateway solutions, AIStor presents a simpler, direct-attached architecture, leading to superior performance, data consistency, and scalability.

Nadeau emphasizes MinIO’s object-native architecture, which provides strict consistency and SIMD acceleration, resulting in significant performance advantages. These architectural benefits translate into tangible storage outcomes, allowing customers to scale from petabytes to exabytes.  AIStor’s architecture facilitates real-time data services even during hardware failures. This object-native approach enables optimal hardware utilization and cost-effectiveness. MinIO offers direct engineer support, bypassing traditional support queues and providing customers with direct access to experts. The company is seeing strong enterprise adoption and growth in headcount.

The presentation features examples of AIStor deployments in various use cases, including generative AI, high-performance computing, data lakehouses, and object-native applications, as well as an autonomous vehicle manufacturer, a cybersecurity company, and a fintech payments provider. These deployments are achieving desired performance while helping to control costs. MinIO also plans to offer a channel bundle SKU, simplifying the acquisition of AIStor by packaging hardware and software into a single SKU.


Optimizing Networking Performance with HPE OpsRamp

Event: Cloud Field Day 23

Appearance: HPE OpsRamp Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Juden Supapo

HPE’s presentation at Cloud Field Day 23, led by Juden Supapo, focused on optimizing networking performance using the HPE OpsRamp software. The presentation centered around demonstrating the platform’s ability to identify and automatically resolve network issues, specifically excessive traffic flooding.

The demonstration showed how OpsRamp’s dashboard provides network observability, allowing users to monitor the health of critical applications and network devices. By simulating a network issue with a script that generated excessive traffic, the presenter demonstrated how OpsRamp identified the problem through its monitoring of switch interfaces and virtual machine (VM) utilization. The system then generated alerts, which, in this case, escalated to show a task that, upon approval, triggered an automation script to block the offending IP address and back up the network configuration.
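
A hedged sketch of what such a remediation script might do, written here with Netmiko for illustration (this is not OpsRamp’s actual script; the host, credentials, and ACL details are placeholders):

    from netmiko import ConnectHandler

    # Placeholder device details; an automation platform would supply these from inventory
    switch = ConnectHandler(
        device_type="cisco_ios",
        host="10.0.0.1",
        username="admin",
        password="secret",
    )

    offender = "192.0.2.50"  # address flagged by the traffic-flood alert
    # Deny the offending host, then back up the running configuration
    # (applying the ACL to an interface is omitted for brevity)
    switch.send_config_set([
        "ip access-list extended BLOCK-FLOOD",
        f"deny ip host {offender} any",
    ])
    with open("backup.cfg", "w") as f:
        f.write(switch.send_command("show running-config"))
    switch.disconnect()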

Beyond the demonstration, the presentation also touched on the future roadmap for OpsRamp. The key areas of focus are new device integrations (weekly updates), more sophisticated alert correlation, and AI-driven dashboard creation. The platform utilizes AI to analyze metrics and detect anomalies. HPE is also exploring the addition of features such as the ability to recommend dashboard thresholds based on historical data analysis.


Minimizing Application Downtime with HPE OpsRamp

Event: Cloud Field Day 23

Appearance: HPE OpsRamp Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Juden Supapo

HPE’s presentation at Cloud Field Day 23, delivered by Juden Supapo, focused on minimizing application downtime using HPE OpsRamp. The demonstration began by displaying a dashboard that monitored a mission-critical ERP application. The speaker highlighted key performance indicators and the overall health of the application. He then presented a service map, a visual representation of the infrastructure supporting the application, including database nodes, servers, and network devices. The service map enables administrators to quickly identify issues by visualizing the relationships between various infrastructure components.

The presentation then illustrated how OpsRamp handles infrastructure issues. By simulating a database outage, the speaker demonstrated how the dashboard and service map responded in real-time. Alerts were triggered, and the service map indicated the location of the problem. He emphasized the alert correlation feature, which uses machine learning to group related alerts, identify probable root causes, and streamline troubleshooting. This grouping allows administrators to address the primary issue instead of dealing with numerous cascading alerts, thus saving time and improving operational efficiency.

Finally, the presentation concluded by showcasing automation and governance within OpsRamp. When the simulated outage occurred, the system automatically generated a task, including an email notification. Through a low-code, no-code process automation workflow, the speaker demonstrated how the system could trigger a script to attempt to restart the affected service. This showcased the platform’s capability to combine automated remediation with governance through approval processes. This combination of features minimizes downtime.
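
A minimal sketch of the kind of restart action such a workflow could trigger once approved (the service name is a placeholder; the actual OpsRamp script was not shown):

    #!/bin/sh
    # Attempt to restart the affected service and report the result
    systemctl restart postgresql \
      && systemctl is-active postgresql \
      || echo "restart failed; escalate to on-call"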


Observe – Analyze – Act. An introduction to HPE OpsRamp

Event: Cloud Field Day 23

Appearance: HPE OpsRamp Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Cato Grace

HPE’s presentation at Cloud Field Day 23 introduced OpsRamp, a SaaS platform designed to address the challenges of modern IT environments. OpsRamp provides a unified approach to managing diverse infrastructure by focusing on observing, analyzing, and acting on collected data. This involves ingesting data from various sources, such as applications, servers, and cloud environments, into a central tool, enabling users to access all their data in one place.

The platform’s key features include robust analytics and automation capabilities. OpsRamp utilizes machine learning to assist users in analyzing data, identifying issues, and automating corrective actions. This automation streamlines issue resolution, potentially reducing resolution times significantly through integrations with over 3,000 systems. Furthermore, OpsRamp offers both agent-based and agentless monitoring options, providing flexibility depending on the type of resources being monitored.

OpsRamp differentiates itself by offering full-stack monitoring and an AI-powered analytics engine that can integrate with existing monitoring tools to correlate alerts across disparate tools. It provides both broad monitoring capabilities and integration of existing tools. The platform’s licensing model is subscription-based, determined by the number of monitored resources and the volume of metrics collected, with data retention policies tailored to different data types.


Customer Discussion – SAP Cloud ERP Customers Prefer CDC with HPE GreenLake SAP

Event: Cloud Field Day 23

Appearance: HPE SAP GreenLake Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Jim Loiacono, Randall Grogan

HPE presented a customer discussion at Cloud Field Day focused on the adoption of SAP Cloud ERP (formerly RISE) with the CDC option, specifically highlighting the preferences of Energy Transfer, a major midstream pipeline company. The company chose the CDC option over the hyperscale cloud for several reasons, including the existing data center infrastructure, which offered lower latency and better control over cybersecurity. Energy Transfer’s decision was also influenced by the need to integrate with dependent applications, such as those used for hydrocarbon management.

The presentation emphasized that the chosen solution provides Energy Transfer with a cost-neutral transition, and included a discussion of AI capabilities and the benefits of accessing innovation through the SAP Cloud ERP roadmap. The customer shared that they were early adopters of SAP S/4HANA and have a strong interest in the AI capabilities that will come with Cloud ERP; the CDC decision makes those capabilities easier to adopt in the future.

Finally, the presentation emphasized the importance of a robust governance model and the availability of flexible options for customers considering SAP Cloud ERP, including sandbox environments and customized implementation approaches. The discussion also addressed the shared responsibility model employed by the CDC and its approach to managing risks, including cybersecurity threats such as ransomware. Ultimately, HPE’s presentation highlighted the value of a hybrid approach to SAP solutions, enabling customers like Energy Transfer to tailor their deployments according to their specific needs and priorities.


Modern Cloud – SAP Cloud ERP, Customer Data Center (CDC) with HPE GreenLake SAP

Event: Cloud Field Day 23

Appearance: HPE SAP GreenLake Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Jim Loiacono, Kelly Smith

HPE’s presentation at Cloud Field Day focused on the shift from declining on-premises and public cloud SAP deployments to increasing hybrid and private cloud solutions. Customers are seeking to mitigate risk, maintain control, and address the complex application dependencies inherent in SAP environments. HPE positions its SAP Cloud ERP, Customer Data Center (CDC) offering as a true “Modern Cloud” solution, recognizing that customers require choices. The presentation argues that predictable latency is crucial for integrations, making dedicated CDC solutions, with dedicated network paths and firewalls, a compelling option over hyperscale cloud offerings, where resources are shared.

The presentation emphasizes the importance of choice and flexibility when transitioning to cloud ERP.  HPE’s approach caters to various customer needs, offering options such as “lift and shift” and “tailored” methods, which enable customers to transition to cloud ERP without requiring an immediate data center migration. HPE’s strategy is designed to make the transition to cloud ERP as seamless as possible by acknowledging that moving from an existing ERP system to a new one, even with the same vendor, presents a significant project.

A key takeaway from the presentation revolves around the shift to subscription-based models. While it’s the direction most software companies are moving, HPE acknowledges the resistance many customers, like Energy Transfer, have to moving away from perpetual licensing. Jim Loiacono and Kelly Smith highlighted the ability to support those who want to “stay the course” or begin their SAP HANA journey with a lift-and-shift approach, understanding the need to address customers’ concerns about risk and the desire for maximum flexibility.


The evolution of SAP Cloud ERP – HPE GreenLake SAP

Event: Cloud Field Day 23

Appearance: HPE SAP GreenLake Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Kelly Smith

The presentation by HPE at Cloud Field Day 23, led by Kelly Smith from SAP, centered around the evolution of SAP Cloud ERP. The session primarily focused on SAP Cloud ERP, the RISE with SAP methodology, and the SAP Flywheel Effect Strategy. It also covered private cloud ERP transition options.

Smith began by discussing the history of SAP, highlighting its transformation from mainframe-based systems to the current cloud ERP offerings. The core of the presentation revolved around SAP’s Cloud ERP, which offers a private cloud deployment model, often referred to as RISE.  This model encompasses the software, hyperscaler resources, or HPE’s GreenLake, and technical managed support under a single commercial agreement.  This setup shifts the responsibility for day-to-day operations, including upgrades and security, to SAP, enabling customers to focus on their core business strategies.  The presentation emphasized SAP’s commitment to security, highlighting dedicated security teams and workshops, as well as SAP’s handling of responsibilities.

The discussion also addressed the flywheel effect and future AI integration, with the Business Technology Platform (BTP) serving as the integration layer. The presentation touched upon the infrastructure-as-a-service layer, highlighting the advantages of dedicated hardware and the importance of managing data, particularly for acquisitions and changes in business scale, and noted that SAP can accommodate additional capacity through change requests. Finally, the presentation highlighted the importance of cybersecurity and the role that SAP’s security teams play. SAP will manage software upgrades, with customers having input, but support is discontinued if upgrades fall too far behind.


Legacy SAP ERP Customers Are Nearly Out of Time – HPE GreenLake SAP

Event: Cloud Field Day 23

Appearance: HPE SAP GreenLake Presents at Cloud Field Day 23

Company: HPE

Video Links:

Personnel: Jim Loiacono

It’s been a decade since S/4HANA was released, but less than half of the existing SAP ERP customers have upgraded, facing looming support deadlines. This presentation from HPE at Cloud Field Day 23 highlights the challenges and the novel solutions for those hesitant to make the move. HPE, along with SAP, is offering a more flexible approach, recognizing that a “one size fits all” approach isn’t suitable for all businesses. The presentation highlights that business disruption is a significant factor in customers delaying upgrades, underscoring the need to consider what is important to all stakeholders involved.

The session presents SAP’s cloud ERP offerings, including a “Customer Data Center” option built on HPE GreenLake. This provides a hybrid cloud environment, allowing customers to choose between public and private cloud deployments, and importantly, the flexibility to retain their existing data centers. HPE’s focus is to make the transition to SAP’s cloud offerings smoother, making it easier for customers to move forward. This approach addresses concerns about data sovereignty and control over upgrades. The conversation also highlights the need for integration and the various challenges that it encompasses when migrating to the new systems.

Ultimately, the presentation highlights the importance of the private cloud option, particularly for large enterprises with complex legacy systems, and the need for flexibility for all stakeholders. The session concludes that the “Customer Data Center” option, with HPE hardware and services, can provide the security, control, and flexibility that many customers require, while ensuring they can continue to receive the necessary support. The presentation emphasized that a range of options is available to meet each customer’s needs.


Seamless Business Continuity and Disaster Avoidance: Multi-Cloud Demonstration Workflow with Qumulo

Event: Cloud Field Day 23

Appearance: Qumulo Presents at Cloud Field Day 23

Company: Qumulo

Video Links:

Personnel: Brandon Whitelaw, Mike Chmiel

Qumulo presented a demonstration at Cloud Field Day 23 that showcased seamless business continuity and disaster avoidance in a multi-cloud environment.  The core of the presentation centered on simulating a hurricane threat to an on-premises environment, highlighting Qumulo’s ability to provide enterprise resilience and cloud-native scalability. Brandon Whitelaw demonstrated how Qumulo’s Cloud Data Fabric enables disaster avoidance through live application suspension and resumption, data portal redirection, cloud workload scaling, and high-performance edge caching with Qumulo EdgeConnect.  This allows the safe migration of data and applications to the cloud, ensuring continued access and continuity in the event of a disaster.

The demo’s primary focus was on illustrating the ease of transitioning data and operations to the cloud during a simulated disaster scenario. The process involved disconnecting the on-prem cluster and, using a small device like an ASUS NUC, accessing data seamlessly from the cloud. This seamless switch allowed government employees to continue their work at an off-site location. This was achieved through data portals, which enable the efficient transfer of data with 90% bandwidth utilization. It demonstrated the ability to maintain the user experience by removing the need to change user behaviors or adopt new protocols.

Finally, Qumulo’s approach offers high bandwidth utilization and integration into a multitude of customer use cases, all while ensuring minimal downtime and data integrity during the process. They showed how edits made in the cloud could be instantly consistent with the on-prem solution, and how data access was quickly and effectively restored to users after the storm. Qumulo emphasized that the architecture allows businesses to be proactive, moving data to the cloud days before a disaster, reducing the reliance on last-minute backups and promoting a more flexible, scalable approach to business continuity. With upcoming support for ARM and a focus on multi-cloud, Qumulo allows a great deal of flexibility in how a business manages its data.


Seamless Business Continuity and Disaster Avoidance with Qumulo

Event: Cloud Field Day 23

Appearance: Qumulo Presents at Cloud Field Day 23

Company: Qumulo

Video Links:

Personnel: Brandon Whitelaw

This Qumulo presentation at Cloud Field Day 23 focuses on delivering business continuity and disaster avoidance through its platform. Qumulo leverages hybrid-cloud architectures to ensure uninterrupted data access and operational resilience by seamlessly synchronizing and migrating unstructured enterprise data between on-premises and cloud environments. This empowers organizations to remain agile in the face of disruptions.

The presentation dives into two main approaches. The first leverages cloud elasticity for a cost-effective disaster recovery solution. By backing up on-premises data to cloud-native, cold storage tiers, Qumulo allows for near-instantaneous failover to an active system. This approach utilizes the same underlying hardware performance for both active and cold storage tiers, enabling a rapid transition and incurring higher costs only when necessary. This is a more cost-effective alternative to building a complete, on-premises continuity and hot standby data center.

The second approach emphasizes building continuity and availability from the ground up. By deploying a cloud-native Qumulo system, the presentation highlights the benefits of multi-zone availability within a region, offering greater durability and resilience compared to traditional on-premises setups. Qumulo’s data fabric ensures real-time data synchronization between on-prem and cloud environments, with data creation cached locally and then instantly available across all connected locations. This offers significant cost savings and operational efficiency by eliminating the need for traditional replication and failover procedures.


Reimagining Data Management in a Hybrid-Cloud World with Qumulo

Event: Cloud Field Day 23

Appearance: Qumulo Presents at Cloud Field Day 23

Company: Qumulo

Video Links:

Personnel: Douglas Gourlay

The presentation by Qumulo at Cloud Field Day 23, led by Douglas Gourlay, focuses on the challenges and opportunities of modern data management in hybrid cloud environments. The presentation emphasizes the need for a unified, scalable, and intelligent approach across both on-premises and cloud infrastructures. The speakers prioritize customer stories and use cases to illustrate how Qumulo’s unique architecture provides enhanced performance, visibility, and simplicity for organizations.

A key theme of the presentation is the importance of innovation, specifically in addressing the evolving needs of customers. Qumulo focuses on unstructured data, highlighting its work with diverse clients, including those in the movie production, scientific research, and government sectors. The presentation highlights how Qumulo’s approach enables both data durability and high performance, particularly in scenarios involving edge-to-cloud data synchronization, disaster recovery, and AI-driven data processing.

The presentation showcases how Qumulo enables freedom of choice by supporting any hardware and any cloud environment. Their solutions are designed to manage large-scale data, extending file systems across various locations with strict consistency and high performance. By leveraging cloud elasticity for backup and tiering, Qumulo offers cost-effective options for disaster recovery and provides the agility to adapt to changing business needs.