MLCommons MLPerf Client Overview

Event: AI Field Day 6

Appearance: ML Commons Presents at AI Field Day 6

Company: ML Commons

Video Links:

Personnel: David Kanter

MLCommons presented MLPerf Client, a new benchmark designed to measure the performance of PC-class systems, including laptops and desktops, on large language model (LLM) tasks. Released in December 2024, it is an installable, open-source application (available on GitHub) that lets users easily test their systems; this early release is intended to gather feedback for improvement. The initial release focuses on a single large language model, Llama 2 7B, using the OpenOrca dataset, and includes four tests simulating different LLM usage scenarios such as content generation and summarization. The benchmark prioritizes response latency as its primary metric, mirroring real-world user experience.

A key aspect of MLPerf Client is its emphasis on accuracy. While prioritizing performance, it incorporates the MMLU (Massive Multitask Language Understanding) benchmark to ensure the measured performance is achieved with acceptable accuracy. This prevents optimizations that might drastically improve speed but severely compromise the quality of the LLM’s output. The presenters emphasized that this is not intended to evaluate production-ready LLMs, but rather to provide a standardized and impartial way to compare the performance of different hardware and software configurations on common LLM tasks.

The benchmark utilizes a single-stream approach, feeding queries one at a time, and supports multiple GPU acceleration paths via ONNX Runtime and Intel OpenVINO. The presenters highlighted the flexibility of allowing hardware vendors to optimize the model (Llama 2 7B) for their specific devices, even down to 4-bit integer quantization, while maintaining sufficient accuracy as judged by the MMLU threshold. Future plans include expanding hardware support, adding more tests and models, and implementing a graphical user interface (GUI) to improve usability.
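To make the measurement approach concrete, below is a minimal, hypothetical Python sketch of a single-stream benchmark loop: prompts are issued one at a time, time to first token and tokens per second are recorded, and the result is only considered valid if a separately measured accuracy score clears a threshold. The generate_stream callable, the prompt list, and the accuracy floor are illustrative assumptions, not MLPerf Client internals or its actual MMLU cutoff.

```python
import time
from typing import Callable, Iterable, Iterator

def run_single_stream(prompts: Iterable[str],
                      generate_stream: Callable[[str], Iterator[str]],
                      mmlu_score: float,
                      accuracy_floor: float = 0.6) -> dict:
    """Illustrative single-stream loop: exactly one prompt in flight at a time."""
    ttft_samples, tps_samples = [], []
    for prompt in prompts:
        start = time.perf_counter()
        first_token_at = None
        n_tokens = 0
        for _ in generate_stream(prompt):      # tokens arrive one by one
            n_tokens += 1
            if first_token_at is None:
                first_token_at = time.perf_counter()
        end = time.perf_counter()
        if n_tokens == 0:                      # skip degenerate responses
            continue
        ttft_samples.append(first_token_at - start)
        tps_samples.append(n_tokens / (end - start))

    return {
        "mean_time_to_first_token_s": sum(ttft_samples) / len(ttft_samples),
        "mean_tokens_per_second": sum(tps_samples) / len(tps_samples),
        # Accuracy gate: a fast but badly degraded model is not a valid result.
        "valid": mmlu_score >= accuracy_floor,
    }
```

Real results would, of course, come from the published MLPerf Client workloads rather than a hand-rolled loop like this; the sketch only shows how latency metrics and an accuracy gate fit together.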


MLCommons and MLPerf – An Introduction

Event: AI Field Day 6

Appearance: ML Commons Presents at AI Field Day 6

Company: ML Commons

Video Links:

Personnel: David Kanter

MLCommons is a non-profit industry consortium dedicated to improving AI for everyone by focusing on accuracy, safety, speed, and power efficiency. The organization boasts over 125 members across six continents and leverages community participation to achieve its goals. A key project is MLPerf, an open industry standard benchmark suite for measuring the performance and efficiency of AI systems, providing a common framework for comparison and progress tracking. This transparency fosters collaboration among researchers, vendors, and customers, driving innovation and preventing inflated claims.

The presentation highlights the crucial relationship between big data, big models, and big compute in achieving AI breakthroughs. A key chart illustrates how AI model performance significantly improves with increased data, but eventually plateaus. This necessitates larger models and more powerful computing resources, leading to an insatiable demand for compute power. MLPerf benchmarks help navigate this landscape by providing a standardized method of measuring performance across various factors including hardware, algorithms, software optimization, and scale, ensuring that improvements are verifiable and reproducible.

MLPerf offers a range of benchmarks covering diverse AI applications, including training, inference (data center, edge, mobile, tiny, and automotive), storage, and client systems. The benchmarks are designed to be representative of real-world use cases and are regularly updated to reflect technological advancements and evolving industry practices. While acknowledging the limitations of any benchmark, the presenter emphasizes MLPerf’s commitment to transparency and accountability through open-source results, peer review, and audits, ensuring that reported results are not merely flukes but can be validated and replicated. This approach promotes a collaborative, data-driven approach to developing more efficient and impactful AI solutions.


Enabling AI Ready Data Products with Qlik Talend Cloud

Event:

Appearance: Qlik Tech Field Day Showcase

Company: Qlik

Video Links:

Personnel: Sharad Kumar

In this video, Sharad Kumar, Field CTO of Data at Qlik, discusses how Qlik is revolutionizing how organizations create, manage, and consume data products, bridging the gap between data producers and business users. Qlik’s platform enables teams to deliver modular, trusted, and easily consumable data that’s packed with business semantics, quality rules, and access policies. With Qlik, data ownership, transparency, and collaboration are simplified, empowering organizations to leverage data for advanced analytics, machine learning, and AI at scale. Unlock faster decision-making, reduced costs, and impactful insights with Qlik’s data product marketplace and powerful federated architecture.


Transforming Data Architecture – Qlik’s Approach to Open Table Lakehouses

Event:

Appearance: Qlik Tech Field Day Showcase

Company: Qlik

Video Links:

Personnel: Sharad Kumar

In this video, Sharad Kumar, Field CTO of Data at Qlik, discusses the future of data architecture with Open Table-based Lakehouses. Learn how formats like Apache Iceberg are transforming the way businesses store and manage data, offering unparalleled flexibility by decoupling compute from storage. Sharad highlights how Qlik’s integration with Iceberg enables seamless data transformations, empowering customers to optimize performance and costs using engines like Spark, Trino, and Snowflake. Discover how Qlik simplifies building modern data lakes with Iceberg, providing the scalability, control, and efficiency needed to drive business success.
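As a rough illustration of the decoupling Sharad describes, the PySpark sketch below creates and queries an Apache Iceberg table whose data and metadata live in a shared warehouse location; any Iceberg-aware engine (Spark, Trino, Snowflake) could then operate on the same table. The catalog name, warehouse path, package version, and schema are placeholder assumptions rather than anything Qlik-specific.

```python
from pyspark.sql import SparkSession

# Minimal sketch: an Iceberg catalog backed by a warehouse path.
# In practice the warehouse would sit in object storage (S3, ADLS, GCS);
# a local path is used here only so the example runs anywhere.
spark = (
    SparkSession.builder
    .appName("iceberg-sketch")
    # Adjust the runtime package to match your Spark/Scala versions.
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.local.type", "hadoop")
    .config("spark.sql.catalog.local.warehouse", "file:///tmp/iceberg-warehouse")
    .getOrCreate()
)

# Table metadata and data files land in the warehouse, not inside any one engine.
spark.sql("""
    CREATE TABLE IF NOT EXISTS local.db.orders (
        order_id BIGINT,
        amount   DOUBLE,
        ts       TIMESTAMP
    ) USING iceberg
    PARTITIONED BY (days(ts))
""")

spark.sql("SELECT count(*) AS order_count FROM local.db.orders").show()
```

Because the table format, rather than any one engine, owns the data layout, swapping Spark for Trino or Snowflake amounts to pointing a different engine at the same catalog.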


Driving AI Adoption with Qlik – Key Market Trends in Data and AI

Event:

Appearance: Qlik Tech Field Day Showcase

Company: Qlik

Video Links:

Personnel: Sharad Kumar

In this video, Sharad Kumar, Field CTO of Data at Qlik, dives into the latest market trends, including the rise of generative AI and its impact on data integration and analytics. Explore how organizations can unlock the full potential of AI by preparing data for real-time, AI-ready consumption. Qlik Talend Cloud empowers businesses to ensure data quality, enhance security, and make data more accessible and actionable. See how Qlik is building a trusted data foundation that drives smarter decision-making and sustainable AI success.


The Future of Wireless-as-a-Service

Event: Mobility Field Day 12

Appearance: Mobility Field Day 12 Delegate Roundtable

Company: Tech Field Day

Video Links:

Personnel: Tom Hollingsworth

Network-as-a-Service appears to be the future of operations. Executives love the idea of consistent costs and predictable upgrades, while operations teams have questions about how the solutions will be deployed and maintained; it’s almost as if the two conversations aren’t addressing the same audience. In this Mobility Field Day delegate roundtable, Tom Hollingsworth moderates a discussion between traditional network engineering and operations practitioners and the people who see these changes on the horizon. Hear the challenges Network-as-a-Service might face in the wireless realm, as well as the unease IT teams feel when confronted with this new operational model.


Nile Access Service AI Network Optimization and Automated Day N Operations

Event: Mobility Field Day 12

Appearance: Nile Presents at Mobility Field Day 12

Company: Nile

Video Links:

Personnel: Ebrahim Safavi

Nile’s purpose-built AI network foundation eliminates common challenges like siloed data, fragmented feedback, and scalability limits seen in traditional AI solutions. Ebrahim Safavi, Head of AI Engineering, discusses how Nile’s advanced AI framework produces high-quality data and client-level insights, empowering faster deployment of generative AI enhancements specifically designed for streamlined network management.


Wi-Fi Optimization with the Nile AI Automation Center

Event: Mobility Field Day 12

Appearance: Nile Presents at Mobility Field Day 12

Company: Nile

Video Links:

Personnel: Dipen Vardhe

Optimizing wireless network access begins with a standardized architecture and an efficient data store. Dipen Vardhe, Head of Wireless Service & AI Automation Center, shares how the Nile AI Automation Center uses real-time network telemetry from the Nile Access Service to deliver AI-driven insights and automated optimizations. This approach enhances wired and wireless experiences while enabling zero-touch network administration.


Nile Access Service Network Planning, Design and Deployment

Event: Mobility Field Day 12

Appearance: Nile Presents at Mobility Field Day 12

Company: Nile

Video Links:

Personnel: Shiv Mehra

Shiv Mehra, VP of Service and Solutions, highlights how outdated approaches to campus networks often result in subpar designs and deployments, causing poor wired and wireless experiences. Explore how Nile leverages AI-driven automation to transform campus network design and deployment, delivering deterministic performance and ensuring exceptional end-user experiences.


Introduction to the Nile Access Service

Event: Mobility Field Day 12

Appearance: Nile Presents at Mobility Field Day 12

Company: Nile

Video Links:

Personnel: Suresh Katukam

Discover Nile’s revolutionary Campus Network-as-a-Service (NaaS), the industry’s first solution offering guaranteed wired and wireless performance, integrated Zero Trust security, and AI-driven self-healing technology—all available through a subscription model. Co-founder and CPO Suresh Katukam discusses how wired and wireless networks can now be delivered as simply, securely, and reliably as electricity.


Cisco Amplified NetOps with AI

Event: Mobility Field Day 12

Appearance: Cisco Presents at Mobility Field Day 12

Company: Cisco

Video Links:

Personnel: Karthik Iyer, Minse Kim

Learn how Cisco AI-Enhanced Radio Resource Management (AI-RRM) improves wireless network performance and user experience. Never miss a wireless connection issue using Intelligent Capture with Proactive PCAP.


Cisco Ultra-Reliable Wireless Backhaul (URWB) Update

Event: Mobility Field Day 12

Appearance: Cisco Presents at Mobility Field Day 12

Company: Cisco

Video Links:

Personnel: Dave Benham

Dive into one of the latest advancements in the Cisco wireless portfolio, where Cisco combines the Ultra-Reliable Wireless Backhaul solution and Wi-Fi simultaneously on the same AP.


Cisco Wi-Fi 7 – What You Need to Know

Event: Mobility Field Day 12

Appearance: Cisco Presents at Mobility Field Day 12

Company: Cisco

Video Links:

Personnel: Ameya Ahir, Nicholas Swiatecki

Wi-Fi 7 from Cisco is here! Get an overview of the now fully unified APs, along with insights into client ecosystem readiness and pitfalls. Cisco also explores the real-world implications of deploying a Wi-Fi 7 network and how you can prepare for it.


Demonstrating the Codiac Value-Driven Engineering Platform

Event: AppDev Field Day 2

Appearance: Introducing Codiac at AppDev Field Day 2

Company: Codiac

Video Links:

Personnel: Michael Levan

Michael Levan demonstrated the Codiac Value-Driven Engineering Platform, focusing on simplifying the deployment and management of applications across Kubernetes clusters. Levan emphasized the challenges developers face when dealing with infrastructure and configuration management, particularly the repetitive nature of writing YAML or Infrastructure as Code (IAC) configurations. He highlighted how Codiac addresses these pain points by allowing users to deploy applications without worrying about the underlying infrastructure. Through a drag-and-drop interface, users can deploy containerized applications, such as a Go web app, without needing to manually manage Kubernetes manifests or other complex configurations. The platform abstracts much of the infrastructure management, making it easier for developers to focus on their applications rather than the environment they are running in.

Levan also demonstrated how Codiac allows for dynamic configuration management across different environments, such as development, staging, and production. Users can easily adjust parameters like replica counts for different environments without needing to maintain multiple Kubernetes manifests or use tools like Helm or Kustomize. The platform provides a central configuration system that can be modified per environment or per “cabinet,” which Levan likened to a Linux namespace or a mini-environment. This flexibility allows for more efficient application management, as users can make changes to configurations and redeploy applications either through the graphical interface or via the command line interface.
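Codiac’s own interface handles this through its GUI and CLI, but the underlying idea, one central configuration resolved per environment instead of a pile of duplicated manifests, can be sketched generically. The environment names, replica counts, and manifest fields below are purely illustrative and are not Codiac’s API.

```python
# Generic illustration of "one config, many environments"; not Codiac's actual API.
BASE = {"image": "registry.example.com/go-web-app:1.4.2", "port": 8080, "replicas": 1}

OVERRIDES = {
    "dev":     {"replicas": 1},
    "staging": {"replicas": 2},
    "prod":    {"replicas": 6},
}

def render_deployment(env: str) -> dict:
    """Merge base settings with one environment's overrides into a single manifest."""
    settings = {**BASE, **OVERRIDES[env]}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": f"go-web-app-{env}"},
        "spec": {  # selector and pod labels trimmed for brevity
            "replicas": settings["replicas"],
            "template": {"spec": {"containers": [{
                "name": "go-web-app",
                "image": settings["image"],
                "ports": [{"containerPort": settings["port"]}],
            }]}},
        },
    }

print(render_deployment("prod")["spec"]["replicas"])  # -> 6
```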

Additionally, Levan introduced the concept of “snapshots” within Codiac, which allows users to capture the state of their application stacks and easily redeploy them across different environments or clusters. This feature is particularly useful for scenarios like blue-green or canary deployments, where different versions of an application need to be tested or rolled out incrementally. The platform also supports cluster migrations, enabling users to move applications between clusters with minimal effort. Codiac abstracts much of the complexity of managing Kubernetes clusters, allowing developers to treat clusters as ephemeral resources that can be easily replaced or upgraded without manual intervention. Overall, the platform aims to streamline the deployment process, reduce the need for manual configuration, and provide a more efficient way to manage applications across multiple environments.
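The snapshot idea can likewise be sketched in the abstract: a snapshot is effectively a pinned set of service versions that can be re-applied to another environment or cluster as a single unit, which is what makes blue-green and canary rollouts straightforward. The service names, image tags, and deploy callback below are hypothetical stand-ins, not Codiac internals.

```python
from typing import Callable, Dict

# A "snapshot" pins every service in a stack to an exact image version.
Snapshot = Dict[str, str]

def capture_snapshot(running_services: Dict[str, str]) -> Snapshot:
    """Record the image each service is currently running (hypothetical input)."""
    return dict(running_services)

def redeploy(snapshot: Snapshot, target_env: str,
             deploy: Callable[[str, str, str], None]) -> None:
    """Re-apply the pinned versions to another environment, e.g. the 'green' side."""
    for service, image in snapshot.items():
        deploy(target_env, service, image)

# Usage sketch: capture what staging runs today, then roll it out to prod-green.
staging = {"api": "api:2.3.1", "ui": "ui:5.0.0", "worker": "worker:1.9.4"}
snap = capture_snapshot(staging)
redeploy(snap, "prod-green",
         lambda env, svc, img: print(f"deploy {img} -> {env}/{svc}"))
```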


Value-Driven Engineering for Everyone with Codiac

Event: AppDev Field Day 2

Appearance: Introducing Codiac at AppDev Field Day 2

Company: Codiac

Video Links:

Personnel: Mark Freydl

Mark Freydl, CEO and Co-Founder, introduces Codiac, a platform designed to streamline the build and release process for SREs and development teams by addressing the friction and complexity that often arise in modern DevOps workflows. The platform focuses on simplifying the communication and coordination between different team members involved in the software development lifecycle (SDLC). By providing a common language and intuitive interface, Codiac aims to reduce the manual overhead and miscommunication that can occur when managing infrastructure and deployments. The platform offers both a CLI and GUI, allowing users to interact with it in various ways, whether through a browser, console, or pipeline, ensuring that all team members, from developers to project managers, can understand and contribute to the process.

One of the key features of Codiac is its “build once, configure on deploy” approach, which allows teams to build a container once and then configure it dynamically as it moves through different environments, such as development, QA, and production. This eliminates the need for manual configuration changes and reduces the risk of errors during deployment. The platform also supports snapshot deployments, where multiple services can be deployed together as a collective version, ensuring consistency across environments. Additionally, Codiac automates tasks like ingress management and environment scaling, further reducing the burden on SREs and allowing them to focus on higher-level discussions around performance and utilization rather than getting bogged down in the minutiae of YAML files and configuration management.

The motivation behind Codiac stems from the founders’ frustration with the growing complexity of modern infrastructure and the inefficiencies it creates for development teams. They recognized that while tools like Kubernetes offer powerful capabilities, they also introduce significant overhead, making it difficult for teams to move quickly and efficiently. By abstracting away much of the complexity and providing a more user-friendly interface, Codiac enables teams to focus on delivering value to the business rather than getting stuck in the technical weeds. The platform is designed to be extensible and adaptable to different workflows, making it a valuable tool for organizations looking to improve their DevOps processes and reduce the friction that often accompanies large-scale software development.


What’s Next for Heroku from Salesforce

Event: AppDev Field Day 2

Appearance: Heroku Presents at AppDev Field Day 2

Company: Heroku

Video Links:

Personnel: Chris Peterson

In this presentation, Chris Peterson, Senior Director of Product Management at Heroku, discusses the future of Heroku and its foundational principles, particularly the 12-Factor App Manifesto. This manifesto, created by Heroku co-founder Adam Wiggins, outlines best practices for building scalable and maintainable applications. These principles have guided Heroku’s development, ensuring that apps built on the platform can scale horizontally and integrate seamlessly with services like databases through environment variables. Heroku is now revisiting and modernizing the 12-Factor App Manifesto to address contemporary needs such as app identity, logging, and telemetry, and is actively seeking feedback from the developer community through open-source discussions.
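One of the factors Peterson refers to, storing configuration in the environment, is easy to illustrate: when a Heroku Postgres add-on is attached, the connection string is exposed through the DATABASE_URL environment variable, so the same build runs unchanged across staging and production. This is a minimal sketch; the psycopg2 driver is just one common choice and is not something Heroku mandates.

```python
import os
import psycopg2  # any Postgres driver works; psycopg2 is simply a common choice

def get_connection():
    """Read the connection string from the environment (12-factor style config).

    Heroku sets DATABASE_URL automatically when a Heroku Postgres add-on is attached.
    """
    dsn = os.environ["DATABASE_URL"]
    return psycopg2.connect(dsn)

if __name__ == "__main__":
    with get_connection() as conn, conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone()[0])
```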

Peterson also highlights recent advancements in Heroku’s scalability and language support. In 2024, Heroku introduced new features to ensure that customers can scale both horizontally and vertically, offering larger dynos with up to 128GB of memory and smaller options for enterprise customers. The platform has also modernized its language support, adding faster package managers like PNPM and new tools like Poetry for Python. Additionally, Heroku has expanded its Postgres offerings, providing larger database plans and new versions to accommodate growing customer needs. The platform has also integrated with other Salesforce services, such as Mulesoft Flex Gateway, to enhance API management and security within private spaces.

Looking ahead, Heroku is focusing on enhanced networking, including HTTP/2 and HTTP/3 support, and expanding its language ecosystem with the addition of .NET support. The platform is also working on deeper integrations with Salesforce through event-driven and API-driven solutions, allowing developers to easily connect Heroku apps with Salesforce events and APIs. Heroku is likewise embracing open standards, particularly in the Kubernetes ecosystem, and is collaborating with AWS to leverage new services and technologies. These efforts are part of a broader strategy to re-platform Heroku and refresh its core values, ensuring it remains a leading platform for developers in the cloud-native era.


Building, Deploying, and Scaling Applications with Heroku from Salesforce

Event: AppDev Field Day 2

Appearance: Heroku Presents at AppDev Field Day 2

Company: Heroku

Video Links:

Personnel: Julián Duque

In this presentation, Julián Duque, Principal Developer Advocate at Heroku, demonstrates how developers can build, deploy, and scale applications using the Heroku platform. He introduces a fictitious company, Lumina Solar, which offers solar energy solutions and uses Heroku to manage its web applications. The architecture of Lumina Solar’s system is divided into two environments: staging for testing and production for live services. The application is split into two parts: an API connected to backend services and a UI built with modern web technologies like React and Node.js. Julián walks through the process of deploying the UI to Heroku’s Common Runtime using the Heroku dashboard, showing how developers can connect their GitHub repositories, set up automatic deployments, and manage scaling options for their applications.

Julián also explains the concept of “dynos,” which are the computational units where Heroku applications run. He demonstrates how to scale applications both vertically and horizontally, depending on the needs of the app, and introduces auto-scaling features that allow applications to automatically adjust based on traffic. This is particularly useful for handling high-traffic events like Black Friday sales. He also highlights Heroku’s built-in metrics dashboard, which provides insights into memory usage, response times, and other performance indicators. Developers can set up alerts for issues like failed requests or high response times, and these notifications can be sent to team members to ensure quick responses to potential problems.

The presentation also covers Heroku’s managed data services, such as Heroku Postgres, and how developers can easily provision databases for their applications. Julián demonstrates how to integrate third-party services like Papertrail for logging and Heroku Connect for syncing data between Heroku and Salesforce. He also touches on enterprise features like Private Spaces and Shield, which offer enhanced security and compliance for applications that need to meet standards like HIPAA or PCI. Finally, Julián shows how developers can use the Heroku CLI for managing applications, scaling, and deploying code, providing a flexible and powerful tool for those who prefer working in a terminal environment.


Build, Deploy, and Scale Your App Your Way with Heroku from Salesforce

Event: AppDev Field Day 2

Appearance: Heroku Presents at AppDev Field Day 2

Company: Heroku

Video Links:

Personnel: Adam Zimman, Betty Junod

Heroku, a platform as a service (PaaS) provider, has been a pioneer in simplifying the process of building, deploying, and scaling applications. The platform allows developers to focus on their core tasks—coming up with ideas and writing code—while Heroku handles the complexities of infrastructure management. By abstracting away the need to manage servers, databases, and other backend components, Heroku enables developers to bring their applications to market faster and with less effort. This is particularly beneficial for enterprises that need to manage updates, security patches, and scaling without getting bogged down by the intricacies of cloud infrastructure. Heroku’s opinionated system provides a streamlined developer experience, allowing teams to focus on innovation rather than the “plumbing” of their applications.

As part of Salesforce, Heroku offers deep integrations with the Salesforce ecosystem, making it easier for businesses to extend their Salesforce applications and deliver seamless customer experiences. The platform is also a valuable tool for enterprises looking to migrate workloads to the cloud without having to navigate the complexities of services like AWS. Heroku simplifies this process by offering a set of primitives that handle the heavy lifting, allowing companies to focus on their core business functions. The platform has been widely adopted across various industries, including finance, retail, healthcare, and automotive, with major companies like T-Mobile and Live Nation relying on Heroku to scale their operations.

Heroku’s value proposition is evident in its ability to increase developer productivity, reduce DevOps costs, and provide a strong return on investment. The platform has supported over 13 million apps and handles more than 60 billion requests per day, demonstrating its reliability and scalability. Customer success stories, such as Health Sherpa and Leather Spa, highlight how Heroku can cater to both large enterprises and small businesses. Health Sherpa, for instance, was able to handle massive traffic spikes during open enrollment for health insurance, while Leather Spa, a single-developer operation, saw significant improvements in sales and operational efficiency. Heroku’s ability to “just work,” much like electricity from a socket, allows businesses to focus on what truly differentiates them in the market, leaving the infrastructure management to Heroku.

Introduced by Betty Junod, CMO and SVP, and presented by Adam Zimman, Sr. Director, Product Marketing, Heroku from Salesforce.


How SOUTHWORKS Leverages Multi-Cloud Technologies

Event: AppDev Field Day 2

Appearance: SOUTHWORKS Presents at AppDev Field Day 2

Company: SOUTHWORKS

Video Links:

Personnel: Johnny Halife

In this presentation, Johnny Halife, CTO of SOUTHWORKS, discusses the company’s approach to leveraging multi-cloud technologies and the importance of being cloud-agnostic. He emphasizes that the cloud landscape is no longer about choosing between AWS, Azure, or Google Cloud, but rather about integrating services across multiple providers to offer interoperability. SOUTHWORKS aims to build bridges between different cloud platforms, allowing clients to take advantage of the best offerings from each provider. This approach is driven by both technical and non-technical factors, such as market trends and business needs, which may require companies to switch cloud providers for reasons unrelated to the quality of the service itself. SOUTHWORKS positions itself as a partner that can work with all major cloud platforms, offering flexibility and resilience to its clients.

Halife also highlights the importance of understanding the strengths and weaknesses of each cloud provider. SOUTHWORKS maintains strong partnerships with AWS, Azure, and Google Cloud, and its team is well-versed in the specific technologies and terminologies of each platform. This knowledge allows the company to advise clients on the best migration strategies and to help them navigate the complexities of moving between clouds. SOUTHWORKS prides itself on being transparent with its clients, offering them a range of options and ensuring that they are not locked into a single cloud provider. The company’s cloud-agnostic approach is a key part of its value proposition, enabling it to provide agile and flexible solutions that meet the evolving needs of its clients.

Finally, Halife touches on the role of Kubernetes in achieving cloud standardization, noting that while Kubernetes is a good starting point, it is not a complete solution. He explains that although containerization allows for some level of portability across clouds, there is still a need for more advanced tools and abstractions to fully take advantage of cloud-native services. SOUTHWORKS is actively involved in contributing to open-source projects that aim to fill these gaps, such as Terraform and Karpenter. The company’s commitment to multi-cloud solutions is reflected in its day-to-day operations and its development philosophy, which focuses on providing short-term, hands-on support to help clients achieve their goals quickly and efficiently.


SOUTHWORKS Migrations and Multi-Cloud Scenarios

Event: AppDev Field Day 2

Appearance: SOUTHWORKS Presents at AppDev Field Day 2

Company: SOUTHWORKS

Video Links:

Personnel: Johnny Halife

In this presentation, Johnny Halife, CTO of SOUTHWORKS, discusses the company’s approach to cloud migrations and multi-cloud scenarios, focusing on how they help clients transition from traditional infrastructure to SaaS models. One of the key examples he provides is a customer who initially deployed VMs on customer subscriptions but wanted to shift to a SaaS model to reduce infrastructure costs and management overhead. SOUTHWORKS helped by containerizing workloads, leveraging Kubernetes, and using tools like Crossplane and Argo to manage multi-tenancy and compliance. The project was completed in nine months with three squads, allowing the customer to accelerate their time to market and reskill their workforce in cloud-agnostic technologies like Kubernetes, rather than being tied to specific cloud providers.

Another case involved a large-scale migration and modernization project for a company with 250 accounts and 100 teams across five regions. The company had acquired multiple businesses without integrating their technologies, leading to a complex infrastructure spread across various cloud providers. SOUTHWORKS created a blueprint using Terraform to streamline operations, prioritize managed resources, and ensure zero downtime during the migration. They also implemented a virtual network that connected data centers globally, allowing for seamless workload movement and high availability. The project took 14 months and involved 15 SOUTHWORKS engineers working alongside 30 client engineers, with a focus on creating a unified security and deployment strategy across the organization.

Throughout the presentation, Halife emphasizes the importance of flexibility, cloud-agnostic design, and the ability to scale rapidly. He discusses how SOUTHWORKS helps clients balance the use of advanced cloud services while maintaining the flexibility to move between providers. By creating abstraction layers and leveraging open-source tools, SOUTHWORKS enables clients to tap into the best features of each cloud provider without being locked in. Halife also highlights the importance of teaching clients to “fish” by integrating their teams into the process, ensuring they can maintain and evolve their infrastructure independently after the engagement.