Why AI is All About Object Storage with MinIO

Event: AI Data Infrastructure Field Day 1

Appearance: MinIO Presents at AI Data Infrastructure Field Day 1

Company: MinIO

Personnel: Jonathan Symonds

Almost every major LLM is trained on an object store. Why is that? The answer lies in the unique properties of a modern object store – performance (throughput and IOPS), scale and simplicity. In this segment, MinIO details how AI scale is stressing traditional technologies and why object storage is the de facto storage standard for modern AI architectures.

Jonathan Symonds kicks off the presentation by MinIO at AI Data Infrastructure Field Day 1, describing the critical role of object storage in the realm of artificial intelligence (AI). Symonds begins by highlighting the unprecedented scale of data involved in AI, where petabytes have become the new terabytes, and the industry is rapidly approaching exabyte-scale challenges. Traditional storage technologies like NFS are struggling to keep up with this scale, leading to a shift towards object storage, which offers the necessary performance, scalability, and simplicity. Symonds emphasizes that the distributed nature of data creation, encompassing various formats such as video, audio, and log files, further necessitates the adoption of object storage to handle the massive and diverse data volumes efficiently.

Symonds also addresses the economic and operational considerations driving the adoption of object storage in AI. Enterprises are increasingly repatriating data from public clouds to private clouds to achieve better cost control and economic viability. This shift is facilitated by the cloud operating model, which includes containerization, orchestration, and APIs, making it easier to manage large-scale data infrastructures. The presentation underscores the importance of control over data, with Symonds citing industry leaders who advocate for keeping data within the organization’s control to maintain competitive advantage. This control is crucial for enterprises to maximize the value of their data and protect it from external threats.

The presentation concludes by discussing the unique features of object storage that make it ideal for AI workloads. These include the simplicity of the S3 API, fine-grained security controls, immutability, continuous data protection, and active-active replication for high availability. Symonds highlights that these features are essential for managing the performance and scale required by modern AI applications. He also notes that the simplicity of object storage scales operationally, technically, and economically, making it a robust solution for the growing demands of AI. The presentation reinforces the idea that object storage is not just a viable option but a necessary one for enterprises looking to harness the full potential of AI at scale.
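
The simplicity claim about the S3 API is easy to make concrete. Below is a minimal sketch using boto3 against a self-hosted MinIO server; the endpoint URL, credentials, and bucket name are illustrative assumptions, not details from the presentation.

```python
# Illustrative only: endpoint, credentials, and bucket name are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",  # assumed local MinIO server
    aws_access_key_id="minioadmin",        # assumed dev credentials
    aws_secret_access_key="minioadmin",
)

s3.create_bucket(Bucket="training-data")

# Write and read an object -- the whole storage interface is PUT/GET/LIST.
s3.put_object(Bucket="training-data", Key="shard-0001.tar", Body=b"...")
obj = s3.get_object(Bucket="training-data", Key="shard-0001.tar")
print(obj["Body"].read())
```

The same few calls work unchanged against any S3-compatible store, which is part of why the API has become the de facto interface for AI data pipelines.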


Rethinking Data Center Infrastructure Automation with Nokia – Operations Made Easy

Event: Networking Field Day Exclusive with Nokia

Appearance: Rethinking Data Center Infrastructure Automation

Company: Nokia

Personnel: Bruce Wallis

Bruce Wallis, Senior PLM on the Event Driven Automation (EDA) solution, shifts gears to explain the operational capabilities of the data center fabric, with a focus on the state of the network and the way that EDA abstractions reduce the complexity of day-to-day tasks for data center operations teams.


Rethinking Data Center Infrastructure Automation with Nokia – Fabric Creation

Event: Networking Field Day Exclusive with Nokia

Appearance: Rethinking Data Center Infrastructure Automation

Company: Nokia

Personnel: Bruce Wallis

Bruce Wallis, Senior PLM on the Event Driven Automation (EDA) solution, delves deeper into how EDA automates the key deployment steps of setting up a new data center fabric, including the creation of an underlay network (the fabric) and overlay services (EVPN) via the power of the EDA app store.


Rethinking Data Center Infrastructure Automation with Nokia – EDA Overview

Event: Networking Field Day Exclusive with Nokia

Appearance: Rethinking Data Center Infrastructure Automation

Company: Nokia

Personnel: Bruce Wallis

Bruce Wallis, Senior PLM on the Event Driven Automation (EDA) solution, explains the thinking behind the EDA infrastructure automation engine and the core foundation of the solution for data center networks.


Networking Field Day Exclusive with Nokia Delegate Roundtable

Event: Networking Field Day Exclusive with Nokia

Appearance: Networking Field Day Exclusive with Nokia Delegate Roundtable

Company: Nokia

Personnel: Tom Hollingsworth

The state of data center automation is fragmented and too bespoke. The next generation of IT specialists aren’t going to be building their strategy from scratch. They’ll be using familiar tools and working with comfortable platforms. In this roundtable discussion, Tom Hollingsworth is joined by the Networking Field Day Exclusive delegates as well as employees from Nokia to discuss Nokia EDA and how it is transforming the landscape of data center networking automation. They also discuss the role that management has to play in the process and how we can better prepare our replacements for the future.


The Path Towards Consumable Network Automation with Nokia

Event: Networking Field Day Exclusive with Nokia

Appearance: Nokia for Data Center Networking

Company: Nokia

Personnel: Wim Henderickx

Network automation has been talked about for over 10 years, yet the adoption of automation tools within the enterprise network remains sporadic, with many organizations deploying only islands of automation. Nokia IP Head of Technology and Architecture and Bell Labs Fellow Wim Henderickx will discuss what needs to change to make the network more consumable. Through open and extensible frameworks, the aspirations of network automation are within reach; Wim will tell you how.


Nokia for IP Networking with Vach Kompella

Event: Networking Field Day Exclusive with Nokia

Appearance: Nokia for Data Center Networking

Company: Nokia

Personnel: Vach Kompella

In a rapidly evolving tech landscape, Nokia has emerged as a formidable player in global IP networking, captivating the attention of technical engineers worldwide. Nokia is not just keeping pace with the demands of modern networking but is also setting new standards. Hear from Vach Kompella, the leader of the Nokia IP team, on how they achieve best-in-industry software quality, hardware capacity, and operational toolsets that allow IP networkers to build at speed and with confidence.


Nokia for the Data Center and Beyond

Event: Networking Field Day Exclusive with Nokia

Appearance: Nokia for Data Center Networking

Company: Nokia

Personnel: Michael Bushong

Industry veteran Michael Bushong has recently joined the Nokia team to lead business development for IP solutions in the data center. What piqued Mike’s interest in what Nokia was building and ultimately drove him to join? Mike will cover his view from the outside and provide insights into Nokia and the key differentiators that set the solution set apart from the competition.


The AI Development Lifecycles with Guy Currier of The Futurum Group

Event: Edge Field Day 3

Appearance: Ignite Talks at Edge Field Day 3

Company: Ignite, The Futurum Group

Personnel: Guy Currier

Guy Currier from The Futurum Group discusses the AI development lifecycle, emphasizing that AI is not an application in itself but rather a service integrated into applications. He begins by clarifying a common misconception: AI is often thought of as a standalone application, but in reality, it is a service that enhances applications by providing intelligent responses or actions. For example, a chat application may use AI to generate responses, but the AI is just one component of the broader application. This distinction is crucial for organizations looking to adopt AI, as they need to understand that AI development involves creating services that can be integrated into various applications, rather than building a single AI “application.”

Currier outlines three distinct life cycles in AI development. The first involves foundational models, such as GPT or DALL-E, which are typically developed by large organizations or cloud providers. While companies can create their own foundational models, it is often more practical to adopt existing ones, which leads to the second life cycle: tuning a foundational model with proprietary data. This tuning process is where organizations begin to develop their own intellectual property, as they adapt the foundational models to their specific needs. For instance, companies like Thomson Reuters have incorporated AI into their legal services, using proprietary legal data to enhance the AI’s capabilities. Tuning follows a development cycle similar to traditional software development, involving infrastructure, tooling, and iterative testing.

The third life cycle Currier discusses is Retrieval Augmented Generation (RAG), which adds contextual data to user prompts. RAG involves searching for relevant information, either from internal or external sources, to enhance the AI’s responses. This process requires its own development and monitoring, as the quality and relevance of the contextual data are critical to the AI’s performance. Currier emphasizes that these three life cycles—foundational model development, model tuning, and RAG—each have distinct infrastructure and development strategies. Organizations must address all three to create AI services that are responsive, contextual, and tailored to specific business scenarios.
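
To make the RAG life cycle concrete, here is a deliberately minimal sketch: retrieval is reduced to keyword overlap over an in-memory document list, standing in for the embedding search and vector store a production pipeline would use. All names and data are hypothetical.

```python
# Toy RAG sketch: naive keyword retrieval + prompt assembly.
# Everything here is illustrative; real systems use embeddings and a vector store.

DOCUMENTS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am-5pm Eastern, Monday through Friday.",
    "Enterprise plans include a dedicated account manager.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by shared-word count with the query (stand-in for vector search)."""
    q_words = set(query.lower().split())
    scored = sorted(
        DOCUMENTS,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context to the user's question before calling the model."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the refund policy?"))
# The assembled prompt would then be sent to whichever model the service uses.
```

The point of the sketch is the shape of the pipeline: retrieval quality and prompt assembly are developed and monitored separately from the model itself, which is why Currier treats RAG as its own life cycle.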


Private Cellular and Salt with Mark Houtz

Event: Edge Field Day 3

Appearance: Ignite Talks at Edge Field Day 3

Company: Ignite

Personnel: Mark Houtz

Mark Houtz, a network engineer working with school districts in Utah, shared his recent experiments with private cellular networks, particularly focusing on CBRS (Citizens Broadband Radio Service) and Wi-Fi technologies. He explained that CBRS operates in the 3.55 to 3.7 GHz band in the U.S. and is gaining traction globally. Mark and his team conducted tests at the Bonneville Salt Flats, a vast, flat area known for land speed records, making it an ideal location for testing wireless technologies over long distances. In their initial tests two years ago, they managed to achieve a two-mile range using a 15-foot antenna for CBRS, but they wanted to push the limits further with more advanced equipment.

In their recent tests, Mark and his team used a Cell on Wheels (COW) with a 60-foot antenna to improve the range and performance of their wireless technologies. They tested both LTE and 5G radios, along with Wi-Fi HaLow, which operates in the 900 MHz spectrum. While Wi-Fi HaLow didn’t perform as well as expected, reaching only about a mile instead of the hoped-for three kilometers, the CBRS tests were more successful. They achieved a four-mile range with usable signal strength, allowing them to perform speed tests and browse the internet. Mark emphasized the importance of antenna height and line of sight in achieving better performance, noting that in pristine conditions they had previously reached up to 12 miles with private cellular.
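
The emphasis on antenna height can be quantified with the textbook radio-horizon approximation (an illustration, not a calculation from the talk): under standard atmospheric refraction, the horizon distance is roughly 4.12 × √(height in meters) kilometers.

```python
# Radio-horizon estimate under standard refraction: d_km ~= 4.12 * sqrt(h_m).
# The antenna heights are from the talk; the formula is a textbook approximation
# and gives an upper bound, not a guaranteed usable range.
from math import sqrt

def radio_horizon_km(height_m: float) -> float:
    return 4.12 * sqrt(height_m)

for feet in (15, 60):
    meters = feet * 0.3048
    d = radio_horizon_km(meters)
    print(f"{feet}-ft antenna: ~{d:.1f} km (~{d * 0.621:.1f} miles) to the horizon")
# A 60-ft mast puts the horizon around 11 miles out, which is consistent with
# the 4-mile usable range and the 12-mile best case observed in the field.
```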

Mark also highlighted the potential for edge computing in these setups, particularly in remote or mobile environments like the Bonneville Salt Flats. By integrating edge computing into the Cell on Wheels, or even on the client side, they could handle data processing closer to the source, improving efficiency and reducing latency. The tests demonstrated the viability of private cellular networks for high-speed, long-distance connectivity, especially in challenging environments, and underscored the importance of proper equipment setup, including antenna height and spectrum analysis, for optimal performance.


Zero Trust is a Strategy Not a Product with Jack Poller

Event: Edge Field Day 3

Appearance: Ignite Talks at Edge Field Day 3

Company: Ignite

Personnel: Jack Poller

In this talk, Jack Poller emphasizes that Zero Trust is a cybersecurity strategy, not a product. He begins by reflecting on the pre-pandemic era when VPNs were the primary method for remote workers to access internal networks. However, the sudden shift to remote work during the COVID-19 pandemic exposed the limitations of VPNs, particularly their scalability and security vulnerabilities. This led to the rise of Zero Trust Network Access (ZTNA), which improved security by eliminating direct inbound connections to servers. Instead, both clients and servers connect outbound to a cloud solution, reducing the attack surface. However, Poller clarifies that ZTNA is just a product and not the full embodiment of Zero Trust.

Poller traces the origins of Zero Trust back to 2010 when John Kindervag, an analyst at Forrester, introduced the concept to address the flaws in the traditional “castle and moat” security model. In this older model, once a user passed through the firewall, they had broad access to the internal network, which attackers could exploit through lateral movement. Zero Trust, on the other hand, operates on the principle of “never trust, always verify,” requiring strict authentication and authorization for every interaction, whether it’s between users, devices, or APIs. Google’s implementation of Zero Trust through its BeyondCorp initiative in 2014 further popularized the concept, demonstrating how it could be applied to large-scale environments.

Poller outlines the core principles of Zero Trust, including explicit verification, least privilege access, and the assumption that breaches will occur. He stresses the importance of strong identity controls, device security, network security, and data protection, all underpinned by visibility, analytics, and automation. Zero Trust requires a comprehensive, integrated approach to security, tailored to the specific needs of each organization. Poller concludes by reminding the audience that Zero Trust is not a one-size-fits-all solution but a strategic framework that must be customized based on the unique requirements and risks of each business.
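
To make “never trust, always verify” concrete, the sketch below models a policy decision point that evaluates every request on its own merits: identity, device posture, then least-privilege authorization. It illustrates the principle only; the fields and rules are hypothetical, not any vendor’s implementation.

```python
# Toy Zero Trust policy decision point. Every request is evaluated individually;
# nothing is trusted because of network location. All fields are illustrative.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_verified: bool
    device_compliant: bool   # e.g., disk encrypted, patched, EDR running
    resource: str
    action: str

# Least-privilege grants: user -> {resource: allowed actions}
GRANTS = {"alice": {"payroll-db": {"read"}}}

def authorize(req: Request) -> bool:
    if not req.mfa_verified:          # verify explicitly
        return False
    if not req.device_compliant:      # assume breach: untrusted device gets nothing
        return False
    allowed = GRANTS.get(req.user, {}).get(req.resource, set())
    return req.action in allowed      # least privilege

print(authorize(Request("alice", True, True, "payroll-db", "read")))   # True
print(authorize(Request("alice", True, True, "payroll-db", "write")))  # False
```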


Object Storage For Disaster Recovery with Jim Jones

Event: Edge Field Day 3

Appearance: Ignite Talks at Edge Field Day 3

Company: Ignite

Personnel: Jim Jones

Jim Jones from 11:11 Systems discusses the evolving landscape of disaster recovery (DR) and how object storage plays a crucial role in modern strategies, particularly in the face of increasing ransomware attacks. He emphasizes that traditional DR concerns like fire and flood have been overshadowed by the growing threat of ransomware, which has become a global issue. Attackers now target backups, attempting to exfiltrate and delete them, making it essential to have encrypted, immutable backups. Jones stresses the importance of a layered approach to resilience, combining active defense, well-architected recovery strategies, and regular testing of backups to ensure they are functional when needed.

Object storage, according to Jones, has become the preferred solution for modern backup and disaster recovery due to its inherent immutability, scalability, and cost-effectiveness. Unlike traditional storage methods like tape or disk, object storage abstracts the underlying hardware and distributes data across multiple locations, ensuring durability. He highlights that hyperscalers like Amazon S3 offer exceptional durability by replicating data across multiple zones, making it a reliable option for large-scale disaster recovery. Additionally, object storage is cost-efficient, especially for backup workloads that involve many writes but few reads, making it suitable for cold storage tiers like Amazon S3’s infrequent access or Azure’s cold tier.
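
Immutability in object storage is commonly exposed through S3 Object Lock. This hedged sketch shows one way a backup tool might write a WORM (write-once, read-many) copy with boto3; the bucket name, key, and 30-day retention period are assumptions for illustration.

```python
# Illustrative S3 Object Lock usage: the object cannot be deleted or overwritten
# until the retention date passes. Bucket and key names are hypothetical.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

# Object Lock must be enabled at bucket-creation time.
s3.create_bucket(Bucket="backup-vault", ObjectLockEnabledForBucket=True)

s3.put_object(
    Bucket="backup-vault",
    Key="vm-backups/web01-2024-10-01.img",
    Body=b"...backup bytes...",
    ObjectLockMode="COMPLIANCE",  # not removable even by privileged accounts
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```

Because the lock is enforced by the storage layer itself, a ransomware actor who steals backup credentials still cannot delete or encrypt the retained copies, which is precisely the property Jones is arguing for.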

Jones also explains how object storage can be integrated into both on-premises and cloud-based DR strategies. He advises using on-premises object storage for initial backups of critical workloads like VMs and databases, while leveraging cloud storage for off-site copies. This hybrid approach ensures that data is protected both locally and remotely. He also touches on the flexibility of object storage in edge environments, where backups can be regionalized or store-specific, and how hyperscaler clouds can serve as a secondary data center for recovery in case of a major disaster. Ultimately, Jones underscores the importance of having a comprehensive, scalable, and tested disaster recovery plan that leverages the strengths of object storage.


CXL Test Drives featuring Andy Banta

Event: AI Field Day 5

Appearance: Ignite Talks at AI Field Day 5

Company: Ignite

Personnel: Andy Banta

Andy Banta’s talk at AI Field Day 5 delves into the concept of CXL (Compute Express Link) and its potential to revolutionize memory access in computing architectures. He begins by explaining the traditional concept of Non-Uniform Memory Access (NUMA), where memory access times vary depending on the proximity of the memory to the processor. CXL extends this idea by allowing memory to be connected via a CXL channel, which operates over a PCIe bus, rather than the traditional DDR channel. This innovation enables memory to be located both inside and outside the physical box, and even connected through future CXL switches, which will allow shared memory access among multiple hosts. The potential for CXL to incorporate SSDs means that memory access times could range from nanoseconds to milliseconds, offering a wide array of possibilities for memory management.

Banta highlights the current limitations in testing CXL devices, noting that many are still in the conceptual phase and not readily available for purchase or testing. He draws an analogy to test-driving a car, where certain limitations prevent a thorough evaluation of the vehicle’s performance. Similarly, with CXL, the lack of commercially available switches and the ongoing development of standards for shared switches make it challenging to conduct meaningful tests. To address this, Banta proposes a simulation-based approach, akin to practices in other engineering disciplines like electrical or mechanical engineering. He suggests that software engineering and system design should also adopt simulation to evaluate different configurations and workloads, thereby optimizing performance and resource allocation.
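
The simulation approach can be illustrated with a toy model; this is a sketch of the idea, not Magnition’s framework. It assigns each memory tier an assumed latency, replays a skewed access pattern against different DRAM/CXL capacity splits, and compares average access times.

```python
# Toy memory-tier simulation: estimate average access latency for different
# splits of a working set between local DRAM and CXL-attached memory.
# Latencies are rough order-of-magnitude assumptions, not measured values.
import random

LATENCY_NS = {"dram": 100, "cxl": 300}  # assumed per-access latencies

def avg_latency_ns(dram_fraction: float, n_items: int = 1000,
                   accesses: int = 100_000) -> float:
    """Hottest items live in DRAM; 80% of accesses hit the hottest 20% of items."""
    random.seed(0)  # fixed seed: repeatable runs, as consistent perf work requires
    dram_items = int(n_items * dram_fraction)
    total = 0
    for _ in range(accesses):
        hot = random.random() < 0.8
        item = random.randrange(0, n_items // 5) if hot \
            else random.randrange(n_items // 5, n_items)
        total += LATENCY_NS["dram"] if item < dram_items else LATENCY_NS["cxl"]
    return total / accesses

for f in (0.1, 0.2, 0.5):
    print(f"{int(f * 100)}% of data in DRAM -> avg {avg_latency_ns(f):.0f} ns")
# Output shows steep gains until the hot set fits in DRAM, then diminishing
# returns -- the kind of "sweet spot" a real CXL simulation would locate.
```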

Banta introduces Magnition, a company he is consulting with, which has developed a large-scale simulation framework for CXL environments. This framework allows for distributed, multi-processor simulations of various components, enabling users to run genuine applications and workloads to capture memory access patterns. By simulating different configurations and workloads, Magnition’s framework can help identify optimal memory allocation strategies and performance sweet spots. Banta emphasizes the importance of consistent results in performance engineering and demonstrates how their simulation framework can achieve this by running controlled experiments. The ultimate goal is to provide a reliable and efficient way to “test drive” CXL systems, ensuring that users can make informed decisions about their memory management strategies.


Thoughts on Enterprise Ready Solutions featuring Karen Lopez

Event: AI Field Day 5

Appearance: Ignite Talks at AI Field Day 5

Company: Ignite

Personnel: Karen Lopez

Karen Lopez’s talk at AI Field Day 5 delves into the evolution of enterprise software acquisition and the critical considerations that have emerged over her extensive career. Reflecting on her 38 years in the field, Lopez contrasts the early days of software procurement, where software was a tangible product with limited integration capabilities, to the current landscape where integration, security, and compatibility with existing enterprise systems are paramount. She recalls a time when software came in physical boxes, required manual data integration, and had limited scalability and backup options. The roles of system integrators and specialized experts were crucial due to the complexity and cost of integrating disparate systems.

Lopez emphasizes that modern enterprise software acquisition now demands a holistic view that goes beyond the software’s inherent features. She highlights the importance of understanding how new solutions will fit within an organization’s existing infrastructure, including integration with current administrative, security, and privacy tools. Lopez points out that many vendors often gloss over these integration details during their pitches, which can lead to significant hidden costs and implementation challenges. She stresses the need for thorough questioning about a solution’s compatibility with continuous deployment environments, identity systems, governance frameworks, and monitoring tools to ensure that the software can be seamlessly integrated and managed within the enterprise.

In her current approach, Lopez places greater weight on external features such as security practices, data classification capabilities, and the ability to use existing analytical tools. She shares her experience with a recent acquisition project where the lack of granular security controls in a hastily purchased solution posed significant risks. Lopez advocates for a comprehensive evaluation of a solution’s enterprise readiness, including its support for modern security measures like multi-factor authentication and its ability to integrate with existing data management and monitoring systems. By focusing on these broader considerations, Lopez aims to reduce the cost and risk associated with new software implementations, ensuring that they deliver long-term value to the organization.


FIDO Phishing-Resistant Authentication featuring Jack Poller

Event: AI Field Day 5

Appearance: Ignite Talks at AI Field Day 5

Company: Ignite

Personnel: Jack Poller

Jack Poller, founder and principal analyst of Paradigm Technica, discusses the evolution and challenges of authentication methods, particularly focusing on the limitations of traditional passwords. He explains that passwords, which have been used since ancient times, are fundamentally flawed because they are shared secrets that can be easily stolen or phished. Despite the implementation of multi-factor authentication (MFA) to enhance security by combining something you know (password) with something you have (a device) or something you are (biometrics), these methods still rely on shared secrets that can be compromised through social engineering tactics.

Poller introduces public key cryptography as a more secure alternative for authentication, which has been around since the 1970s but is relatively new in the context of identity and access management. Public key cryptography involves a mathematically linked pair of keys: a private key that signs data and a public key that verifies those signatures. This method ensures that the private key, stored in a secure vault within a Trusted Platform Module (TPM), cannot be extracted or misused, even under duress. The TPM not only stores the keys securely but also performs the cryptographic operations itself, ensuring that the keys are never exposed.

He further elaborates on how the FIDO (Fast Identity Online) protocol leverages this technology to provide phishing-resistant authentication. When a user attempts to log in to a website, the site sends a challenge to the user’s device, which signs it using the private key stored in the TPM. The signed response is sent back to the website, which verifies it using the corresponding public key to confirm the user’s identity. This method eliminates the risks associated with password reuse and phishing, making it a more secure and user-friendly solution. Poller emphasizes the importance of adopting passkeys offered by websites to enhance overall internet security.
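
The challenge-response flow can be modeled in a few lines with an asymmetric signature. This is a simplification of FIDO2/WebAuthn, which also binds the origin and uses authenticator metadata; the sketch below uses the cryptography package and invented data.

```python
# Toy model of FIDO-style challenge-response with an Ed25519 keypair.
# Real WebAuthn additionally binds the website origin and signs counters.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration: the authenticator creates a keypair; only the public key
# ever leaves the device.
private_key = Ed25519PrivateKey.generate()   # held in secure hardware in practice
public_key = private_key.public_key()        # stored by the website

# Login: the site sends a fresh random challenge; the device signs it.
challenge = os.urandom(32)
signature = private_key.sign(challenge)

# The site verifies with the stored public key -- no shared secret to phish.
try:
    public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```

Because the challenge is random and the signature proves possession of a key that never leaves the device, a phishing site that captures the response gains nothing reusable.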


Accelerate Your SDLC with BEST featuring Calvin Hendryx-Parker of Six Feet Up

Event: AI Field Day 5

Appearance: Ignite Talks at AI Field Day 5

Company: Ignite

Personnel: Calvin Hendryx-Parker

Calvin Hendryx-Parker from Six Feet Up discusses the importance of optimizing the software development lifecycle (SDLC) in his talk at AI Field Day 5. He begins by acknowledging the widespread integration of software in various aspects of life and the common challenges faced by software teams. Calvin introduces Six Feet Up, a Python and AI agency known for tackling complex problems with a highly experienced team. He shares a case study of a client with over 30 sub-organizations, each with its own software development team, struggling to operate efficiently due to siloed operations and lack of collaboration.

To address these inefficiencies, Calvin’s team conducted a thorough two-month evaluation of the client’s software teams, identifying key issues such as the absence of continuous integration/continuous deployment (CI/CD) practices, manual intervention steps, and technical debt. They also assessed the onboarding process for new developers and the overall skill gaps within the teams. The goal was to transform the existing developers into more effective contributors without the need for drastic measures like hiring or firing. This comprehensive analysis led to the development of a scoring system to compare and evaluate the performance of different teams, ultimately providing tailored recommendations for improvement.

Calvin introduces BEST (Best Enterprise Software Techniques), a product designed to streamline the evaluation process. BEST consists of online surveys that assess various aspects of the SDLC across four stages and 19 units, enabling teams to identify areas for improvement quickly. The tool generates heat maps and radar charts to visualize performance and facilitate cross-team learning and collaboration. Calvin emphasizes that while BEST can significantly enhance the SDLC, the underlying principles and evaluation framework can be adopted by any organization to improve their software development processes. He concludes by encouraging teams to focus on continuous improvement and collaboration to achieve a more efficient and effective SDLC.
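
As a generic illustration of this kind of scoring system (not the actual BEST survey or its scoring model), the sketch below aggregates per-unit survey scores into per-stage averages, the raw material for a heat map. All stages, units, and numbers are invented.

```python
# Generic sketch of aggregating SDLC survey scores (1-5) into a per-stage view.
# The stages, units, and values are illustrative, not BEST's real taxonomy.
SCORES = {  # team -> {(stage, unit): score}
    "team-a": {("plan", "backlog"): 4, ("build", "ci"): 2,
               ("build", "tests"): 3, ("deploy", "cd"): 1},
    "team-b": {("plan", "backlog"): 3, ("build", "ci"): 5,
               ("build", "tests"): 4, ("deploy", "cd"): 4},
}

def stage_averages(team: str) -> dict[str, float]:
    totals: dict[str, list[int]] = {}
    for (stage, _unit), score in SCORES[team].items():
        totals.setdefault(stage, []).append(score)
    return {stage: sum(v) / len(v) for stage, v in totals.items()}

for team in SCORES:
    print(team, {s: round(a, 1) for s, a in stage_averages(team).items()})
# Low-scoring stages light up on a heat map and point to cross-team learning:
# here, team-a could borrow CI/CD practices from team-b.
```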


Exploring the Network of Experts Required to Design and Implement Effective Edge Computing Solutions with OnLogic

Event: Edge Field Day 3

Appearance: OnLogic Presents at Edge Field Day 3

Company: OnLogic

Personnel: Amber Cobb

The presentation by Amber Cobb from OnLogic at Edge Field Day 3 focused on the importance of a collaborative ecosystem in designing and implementing effective edge computing solutions. OnLogic specializes in creating rugged, industrial edge devices that are highly configurable to meet the demands of challenging environments. Amber emphasized that the edge is not just a network boundary but a dynamic and complex place where real-time data and insights are crucial for driving efficiencies. To address these challenges, OnLogic relies on a network of partners, including Intel, AWS, Red Hat, Avassa, and Guise AI, to provide comprehensive solutions that go beyond just hardware. These partnerships allow OnLogic to offer scalable frameworks and innovative products that help customers navigate the complexities of edge computing.

Intel plays a foundational role in OnLogic’s ecosystem, particularly with its embedded lifecycle components designed for long-term edge deployments. Intel’s OpenVINO toolkit is also highlighted for its ability to optimize AI workloads at the edge, enabling enterprises to deploy and scale AI applications effectively. AWS is another key partner, providing cloud services like Greengrass and Kinesis Video Streams, which are essential for managing and deploying edge applications. Red Hat Enterprise Linux (RHEL) is described as a versatile and secure operating system that is ideal for critical workloads at the edge, offering a solid foundation for businesses to build upon. Avassa simplifies the orchestration of containerized applications at the edge, ensuring that even in environments with intermittent connectivity, edge applications remain functional and reliable.

Guise AI addresses the challenges of deploying and managing AI models at the edge, offering a no-code platform that streamlines AI deployments and optimizes performance. This platform is particularly useful for industries like oil and gas, where real-time monitoring and predictive maintenance are critical. Amber concluded by reiterating that the edge is a complex environment that requires more than just hardware; it requires a robust ecosystem of partners. By working with industry leaders and innovative companies, OnLogic is able to provide its customers with the tools and support they need to succeed in their edge computing endeavors.


A Look at Edge Computing in Building Automation and Steel Production with OnLogic

Event: Edge Field Day 3

Appearance: OnLogic Presents at Edge Field Day 3

Company: OnLogic

Personnel: Ross Hamilton

In the second session of his presentation at Edge Field Day 3, Ross Hamilton, Systems Architect at OnLogic, delves into two specific edge computing use cases: steel production and building automation. He begins by discussing the harsh environment of a steel mill, where extreme temperatures, dust, and debris pose significant challenges for technology. OnLogic’s ruggedized hardware, such as the Tacton TC 401 Panel PC, is designed to withstand these conditions. The panel PC offers a user-friendly touch interface that allows workers to manage complex machinery efficiently, reducing downtime and improving reliability. Its rugged construction, fanless design, and wide operating temperature range make it ideal for the punishing environment of a steel mill, where it can be mounted in industrial control panels or used in outdoor settings with UV-resistant displays.

Hamilton then shifts focus to the second use case: building automation in smart buildings, using a real-world example from New York City. He recalls the 2003 blackout and how edge computing could have mitigated some of the issues, such as people being trapped in elevators. OnLogic’s Helix platform is highlighted as a solution for gathering and analyzing data in smart buildings. The Helix platform is designed to operate in rugged environments like mechanical spaces in buildings, offering features such as solid-state construction, extended temperature ranges, and resistance to dust and particulates. It can gather data from various building systems and relay it to a central dashboard, enabling proactive decision-making to prevent issues like power failures.

Throughout the presentation, Hamilton emphasizes the importance of flexibility and resilience in edge computing hardware. OnLogic’s products are designed to meet the unique challenges of different environments, from the extreme conditions of a steel mill to the more controlled but still demanding settings of smart buildings. The company offers modular solutions that can be customized to fit specific needs, whether it’s legacy protocol support in industrial settings or ruggedized power protection for environments with unstable power sources. By providing robust, adaptable hardware, OnLogic aims to help businesses optimize efficiency, reduce downtime, and improve safety across a variety of edge computing applications.


A Look at Edge Computing in Smart Agriculture and Mining Automation with OnLogic

Event: Edge Field Day 3

Appearance: OnLogic Presents at Edge Field Day 3

Company: OnLogic

Personnel: Ross Hamilton

The Edge conjures images of network architecture diagrams, but for users of edge computers the edge is a physical location. Today’s modern edge systems are deployed well away from carpeted spaces, and users of edge computing have very specific requirements. In this presentation, OnLogic Systems Architect, Ross Hamilton explores specific edge use cases (Smart Agriculture & Mining Automation), what they represent about the changing needs of tech innovators, and what goes into designing solutions that can survive wherever they’re needed.

In this presentation, Ross Hamilton from OnLogic discusses the evolving landscape of edge computing, emphasizing that the edge is not just a network concept but a physical location where computing systems are deployed in challenging environments. OnLogic specializes in creating industrial-grade edge computers designed to withstand harsh conditions, such as extreme temperatures, vibrations, and dust. Their product lines, including the Karbon series, are built to meet the specific needs of industries like agriculture and mining, where reliable, rugged computing is essential for real-time data processing and decision-making.

Hamilton highlights two specific use cases to illustrate the challenges and solutions in edge computing. In smart agriculture, OnLogic worked with a company developing robotic food harvesters that needed to operate in environments with fluctuating temperatures, vibrations, and dust. The Karbon 800 series was ideal for this application, offering fanless design, wide temperature range support, and the ability to process sensor data and communicate with motor controllers. The system also supports cellular connectivity, enabling real-time data transmission to the cloud, making it a robust solution for agricultural automation.

In the mining industry, OnLogic’s Karbon 400 series was deployed in a facility in northern Sweden, where temperatures can drop to -40°C. The system needed to operate reliably in these extreme conditions while supporting LiDAR sensors to detect spillage and ensure worker safety in dark, hazardous environments. The Karbon 400 series, with its Intel Atom processor and long lifecycle, provided the necessary compute power and connectivity, including dual LAN with Power over Ethernet (PoE) for cameras and sensors. These examples demonstrate how OnLogic’s rugged edge computing solutions are tailored to meet the specific demands of various industries, ensuring reliability and performance in the most challenging environments.


The Edge is Just the Beginning – Advancing Edge Computing Hardware with OnLogic

Event: Edge Field Day 3

Appearance: OnLogic Presents at Edge Field Day 3

Company: OnLogic

Personnel: Lisa Groeneveld

OnLogic, co-founded by Lisa Groeneveld and her husband Roland, designs and manufactures industrial and rugged computers for edge computing. The company, headquartered in Vermont, has grown from a small operation in a Boston apartment to a global business with locations in the U.S., Europe, and Asia. Lisa shared the story of how the company started by importing Mini-ITX motherboards and selling them online, leveraging the early days of e-commerce and Google AdWords to reach customers. Over time, OnLogic expanded its product offerings and built strong relationships with clients by understanding their needs and providing tailored solutions. Today, the company has shipped over 800,000 systems and components, serving industries such as manufacturing, transportation, and logistics.

OnLogic’s computers are designed to operate in harsh environments, making them ideal for industries that require high reliability and durability. Lisa highlighted examples of how OnLogic’s rugged computers are used in various applications, such as in the manufacturing processes of companies like Bridgestone and Michelin, and in Amazon’s fulfillment centers. She also shared a case study of Steel Dynamics, which needed computers that could withstand high vibration and extreme temperatures in their industrial setting. Another notable project involved Universal Studios, where OnLogic provided computers for kiosks in their water park, ensuring they could handle wide temperature ranges and outdoor conditions. These examples illustrate the versatility and robustness of OnLogic’s products, which are used in a wide range of industries and environments.

OnLogic’s approach to business is rooted in its core values of being open, fair, independent, and innovative. The company prides itself on transparency, offering open pricing and sharing financial information with employees. This culture of openness extends to their customer relationships, where they provide customizable solutions through a configurator on their website, allowing clients to tailor products to their specific needs. Lisa emphasized that OnLogic’s edge computing solutions are not just endpoints but the beginning of meaningful processes for their customers. One example she shared was a project involving AI systems at the base of wind turbines to prevent harm to migratory birds, showcasing how OnLogic’s technology can contribute to solving global challenges.