Private Cellular and Salt with Mark Houtz

Event: Edge Field Day 3

Appearance: Ignite Talks at Edge Field Day 3

Company: Ignite

Video Links:

Personnel: Mark Houtz

Mark Houtz, a network engineer working with school districts in Utah, shared his recent experiments with private cellular networks, particularly focusing on CBRS (Citizens Broadband Radio Service) and Wi-Fi technologies. He explained that CBRS operates in the 3.55 to 3.7 GHz band in the U.S. and is gaining traction globally. Mark and his team conducted tests at the Bonneville Salt Flats, a vast, flat area known for land speed records, making it an ideal location for testing wireless technologies over long distances. In their initial tests two years ago, they managed to achieve a two-mile range using a 15-foot antenna for CBRS, but they wanted to push the limits further with more advanced equipment.

In their recent tests, Mark and his team used a COW (cell on wheels) with a 60-foot antenna to improve the range and performance of their wireless technologies. They tested both LTE and 5G radios, along with Wi-Fi HaLow, which operates in the 900 MHz spectrum. While Wi-Fi HaLow didn’t perform as well as expected, reaching only about a mile instead of the hoped-for three kilometers, the CBRS tests were more successful. They achieved a four-mile range with usable signal strength, allowing them to perform speed tests and browse the internet. Mark emphasized the importance of antenna height and line of sight in achieving better performance, noting that in pristine conditions they had previously reached up to 12 miles with private cellular.
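
Antenna height matters largely because it extends the radio line-of-sight horizon. As a rough illustration (my own arithmetic using the standard 4/3-earth approximation, not the team’s actual link-budget math), raising the mast from 15 to 60 feet roughly doubles the distance to the radio horizon:

```python
import math

def radio_horizon_km(antenna_height_m: float) -> float:
    """Approximate radio horizon using the common 4/3-earth model.

    d (km) ~= 4.12 * sqrt(h in meters). This ignores terrain, clutter,
    transmit power, and receiver sensitivity, so it is only a rough
    upper bound on line-of-sight range.
    """
    return 4.12 * math.sqrt(antenna_height_m)

FEET_TO_M = 0.3048
for feet in (15, 60):  # the two mast heights mentioned in the talk
    h_m = feet * FEET_TO_M
    d_km = radio_horizon_km(h_m)
    print(f"{feet:>2} ft mast -> horizon ~{d_km:.1f} km (~{d_km * 0.621:.1f} miles)")
```

Actual usable range also depends on transmit power, client antenna height, and receiver sensitivity, which is why the measured four miles falls well short of the horizon estimate.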

Mark also highlighted the potential for edge computing in these setups, particularly in remote or mobile environments like the Bonneville Salt Flats. By integrating edge computing into the COW or even on the client side, they could handle data processing closer to the source, improving efficiency and reducing latency. The tests demonstrated the viability of private cellular networks for high-speed, long-distance connectivity, especially in challenging environments, and underscored the importance of proper equipment setup, including antenna height and spectrum analysis, for optimal performance.


Zero Trust is a Strategy Not a Product with Jack Poller

Event: Edge Field Day 3

Appearance: Ignite Talks at Edge Field Day 3

Company: Ignite

Video Links:

Personnel: Jack Poller

In this talk, Jack Poller emphasizes that Zero Trust is a cybersecurity strategy, not a product. He begins by reflecting on the pre-pandemic era when VPNs were the primary method for remote workers to access internal networks. However, the sudden shift to remote work during the COVID-19 pandemic exposed the limitations of VPNs, particularly their scalability and security vulnerabilities. This led to the rise of Zero Trust Network Access (ZTNA), which improved security by eliminating direct inbound connections to servers. Instead, both clients and servers connect outbound to a cloud solution, reducing the attack surface. However, Poller clarifies that ZTNA is just a product and not the full embodiment of Zero Trust.

Poller traces the origins of Zero Trust back to 2010 when John Kindervag, an analyst at Forrester, introduced the concept to address the flaws in the traditional “castle and moat” security model. In this older model, once a user passed through the firewall, they had broad access to the internal network, which attackers could exploit through lateral movement. Zero Trust, on the other hand, operates on the principle of “never trust, always verify,” requiring strict authentication and authorization for every interaction, whether it’s between users, devices, or APIs. Google’s implementation of Zero Trust through its BeyondCorp initiative in 2014 further popularized the concept, demonstrating how it could be applied to large-scale environments.

Poller outlines the core principles of Zero Trust, including explicit verification, least privilege access, and the assumption that breaches will occur. He stresses the importance of strong identity controls, device security, network security, and data protection, all underpinned by visibility, analytics, and automation. Zero Trust requires a comprehensive, integrated approach to security, tailored to the specific needs of each organization. Poller concludes by reminding the audience that Zero Trust is not a one-size-fits-all solution but a strategic framework that must be customized based on the unique requirements and risks of each business.
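
As a minimal sketch of the “never trust, always verify” principle (illustrative only; Poller’s point is that Zero Trust is a strategy, and real deployments rely on identity providers and policy engines rather than ad hoc code), every request is evaluated against identity, device posture, and a least-privilege policy before access is granted:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_verified: bool
    device_compliant: bool   # e.g., patched, encrypted, EDR running
    resource: str
    action: str

# Hypothetical least-privilege policy: who may do what on which resource.
POLICY = {
    ("alice", "payroll-db", "read"),
    ("bob", "build-server", "deploy"),
}

def authorize(req: Request) -> bool:
    """Evaluate every request explicitly; assume breach, grant least privilege."""
    if not req.mfa_verified:        # explicit verification of identity
        return False
    if not req.device_compliant:    # device posture is part of every decision
        return False
    return (req.user, req.resource, req.action) in POLICY  # least privilege

print(authorize(Request("alice", True, True, "payroll-db", "read")))       # True
print(authorize(Request("alice", True, True, "build-server", "deploy")))   # False
```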


Object Storage For Disaster Recovery with Jim Jones

Event: Edge Field Day 3

Appearance: Ignite Talks at Edge Field Day 3

Company: Ignite

Video Links:

Personnel: Jim Jones

Jim Jones from 11:11 Systems discusses the evolving landscape of disaster recovery (DR) and how object storage plays a crucial role in modern strategies, particularly in the face of increasing ransomware attacks. He emphasizes that traditional DR concerns like fire and flood have been overshadowed by the growing threat of ransomware, which has become a global issue. Attackers now target backups, attempting to exfiltrate and delete them, making it essential to have encrypted, immutable backups. Jones stresses the importance of a layered approach to resilience, combining active defense, well-architected recovery strategies, and regular testing of backups to ensure they are functional when needed.

Object storage, according to Jones, has become the preferred solution for modern backup and disaster recovery due to its inherent immutability, scalability, and cost-effectiveness. Unlike traditional storage methods like tape or disk, object storage abstracts the underlying hardware and distributes data across multiple locations, ensuring durability. He highlights that hyperscalers like Amazon S3 offer exceptional durability by replicating data across multiple zones, making it a reliable option for large-scale disaster recovery. Additionally, object storage is cost-efficient, especially for backup workloads that involve many writes but few reads, making it suitable for cold storage tiers like Amazon S3’s infrequent access or Azure’s cold tier.
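
As a hedged illustration of the immutability Jones describes, here is a minimal boto3 sketch (bucket and key names are hypothetical) that writes a backup object to an Object Lock-enabled S3 bucket with a compliance-mode retention date, so neither an attacker nor an administrator can delete it before that date:

```python
import datetime
import boto3

s3 = boto3.client("s3")

# The bucket must have been created with Object Lock enabled, e.g.:
# s3.create_bucket(Bucket="example-backup-bucket",
#                  ObjectLockEnabledForBucket=True)

retain_until = (datetime.datetime.now(datetime.timezone.utc)
                + datetime.timedelta(days=30))

with open("vm-image.bak", "rb") as backup:        # hypothetical local backup artifact
    s3.put_object(
        Bucket="example-backup-bucket",           # hypothetical bucket name
        Key="backups/2024-09-15/vm-image.bak",    # hypothetical object key
        Body=backup,
        StorageClass="STANDARD_IA",               # cheaper tier for write-many, read-rarely data
        ObjectLockMode="COMPLIANCE",              # retention cannot be shortened, even by root
        ObjectLockRetainUntilDate=retain_until,
    )
```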

Jones also explains how object storage can be integrated into both on-premises and cloud-based DR strategies. He advises using on-premises object storage for initial backups of critical workloads like VMs and databases, while leveraging cloud storage for off-site copies. This hybrid approach ensures that data is protected both locally and remotely. He also touches on the flexibility of object storage in edge environments, where backups can be regionalized or store-specific, and how hyperscaler clouds can serve as a secondary data center for recovery in case of a major disaster. Ultimately, Jones underscores the importance of having a comprehensive, scalable, and tested disaster recovery plan that leverages the strengths of object storage.


CXL Test Drives featuring Andy Banta

Event: AI Field Day 5

Appearance: Ignite Talks at AI Field Day 5

Company: Ignite

Video Links:

Personnel: Andy Banta

Andy Banta’s talk at AI Field Day 5 delves into the concept of CXL (Compute Express Link) and its potential to revolutionize memory access in computing architectures. He begins by explaining the traditional concept of Non-Uniform Memory Access (NUMA), where memory access times vary depending on the proximity of the memory to the processor. CXL extends this idea by allowing memory to be connected via a CXL channel, which operates over a PCIe bus, rather than the traditional DDR channel. This innovation enables memory to be located both inside and outside the physical box, and even connected through future CXL switches, which will allow shared memory access among multiple hosts. The potential for CXL to incorporate SSDs means that memory access times could range from nanoseconds to milliseconds, offering a wide array of possibilities for memory management.
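
To make the latency spread concrete, the rough orders of magnitude below (my own approximations, not figures from the talk; real numbers vary widely by platform) show why CXL turns memory placement into a tiering problem similar to NUMA, only much wider:

```python
# Approximate access-latency tiers, orders of magnitude only.
MEMORY_TIERS_NS = {
    "local DDR (same socket)":          100,       # ~100 ns
    "remote DDR (other NUMA node)":     150,       # typical NUMA penalty
    "CXL-attached DRAM (same box)":     300,       # extra hops over the PCIe/CXL link
    "CXL switch, shared memory pool":   600,       # future switched fabrics
    "CXL-attached SSD / NAND media":    100_000,   # tens to hundreds of microseconds
}

for tier, ns in MEMORY_TIERS_NS.items():
    print(f"{tier:<34} ~{ns:>9,} ns")
```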

Banta highlights the current limitations in testing CXL devices, noting that many are still in the conceptual phase and not readily available for purchase or testing. He draws an analogy to test-driving a car, where certain limitations prevent a thorough evaluation of the vehicle’s performance. Similarly, with CXL, the lack of commercially available switches and the ongoing development of standards for shared switches make it challenging to conduct meaningful tests. To address this, Banta proposes a simulation-based approach, akin to practices in other engineering disciplines like electrical or mechanical engineering. He suggests that software engineering and system design should also adopt simulation to evaluate different configurations and workloads, thereby optimizing performance and resource allocation.

Banta introduces Magnition, a company he is consulting with, which has developed a large-scale simulation framework for CXL environments. This framework allows for distributed, multi-processor simulations of various components, enabling users to run genuine applications and workloads to capture memory access patterns. By simulating different configurations and workloads, Magnition’s framework can help identify optimal memory allocation strategies and performance sweet spots. Banta emphasizes the importance of consistent results in performance engineering and demonstrates how their simulation framework can achieve this by running controlled experiments. The ultimate goal is to provide a reliable and efficient way to “test drive” CXL systems, ensuring that users can make informed decisions about their memory management strategies.


Thoughts on Enterprise Ready Solutions featuring Karen Lopez

Event: AI Field Day 5

Appearance: Ignite Talks at AI Field Day 5

Company: Ignite

Video Links:

Personnel: Karen Lopez

Karen Lopez’s talk at AI Field Day 5 delves into the evolution of enterprise software acquisition and the critical considerations that have emerged over her extensive career. Reflecting on her 38 years in the field, Lopez contrasts the early days of software procurement, where software was a tangible product with limited integration capabilities, to the current landscape where integration, security, and compatibility with existing enterprise systems are paramount. She recalls a time when software came in physical boxes, required manual data integration, and had limited scalability and backup options. The roles of system integrators and specialized experts were crucial due to the complexity and cost of integrating disparate systems.

Lopez emphasizes that modern enterprise software acquisition now demands a holistic view that goes beyond the software’s inherent features. She highlights the importance of understanding how new solutions will fit within an organization’s existing infrastructure, including integration with current administrative, security, and privacy tools. Lopez points out that many vendors often gloss over these integration details during their pitches, which can lead to significant hidden costs and implementation challenges. She stresses the need for thorough questioning about a solution’s compatibility with continuous deployment environments, identity systems, governance frameworks, and monitoring tools to ensure that the software can be seamlessly integrated and managed within the enterprise.

In her current approach, Lopez places greater weight on external features such as security practices, data classification capabilities, and the ability to use existing analytical tools. She shares her experience with a recent acquisition project where the lack of granular security controls in a hastily purchased solution posed significant risks. Lopez advocates for a comprehensive evaluation of a solution’s enterprise readiness, including its support for modern security measures like multi-factor authentication and its ability to integrate with existing data management and monitoring systems. By focusing on these broader considerations, Lopez aims to reduce the cost and risk associated with new software implementations, ensuring that they deliver long-term value to the organization.


FIDO Phishing-Resistant Authentication featuring Jack Poller

Event: AI Field Day 5

Appearance: Ignite Talks at AI Field Day 5

Company: Ignite

Video Links:

Personnel: Jack Poller

Jack Poller, founder and principal analyst of Paradigm Technica, discusses the evolution and challenges of authentication methods, particularly focusing on the limitations of traditional passwords. He explains that passwords, which have been used since ancient times, are fundamentally flawed because they are shared secrets that can be easily stolen or phished. Despite the implementation of multi-factor authentication (MFA) to enhance security by combining something you know (password) with something you have (a device) or something you are (biometrics), these methods still rely on shared secrets that can be compromised through social engineering tactics.

Poller introduces public key cryptography as a more secure alternative for authentication, which has been around since the 1970s but is relatively new in the context of identity and access management. Public key cryptography involves a pair of keys: a private key that encrypts data and a public key that decrypts it. This method ensures that the private key, stored in a secure vault within a Trusted Platform Module (TPM), cannot be extracted or misused, even under duress. The TPM not only stores the keys securely but also performs the encryption and decryption processes, ensuring that the keys are never exposed.

He further elaborates on how the FIDO (Fast Identity Online) protocol leverages this technology to provide phishing-resistant authentication. When a user attempts to log in to a website, the site sends a challenge to the user’s device, which is then encrypted using the private key stored in the TPM. The encrypted response is sent back to the website, which decrypts it using the corresponding public key to verify the user’s identity. This method eliminates the risks associated with password reuse and phishing, making it a more secure and user-friendly solution. Poller emphasizes the importance of adopting passkeys offered by websites to enhance overall internet security.
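
The core challenge-response idea can be sketched as follows. Poller describes it as encrypting the challenge with the private key; in practice it is implemented as a digital signature, shown here as a simplified illustration using the `cryptography` package. Real FIDO2/WebAuthn adds origin binding, attestation, and counters, and the private key never leaves the TPM or secure element:

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration: the authenticator creates a key pair; only the public key
# is sent to the website. In real hardware the private key stays in the TPM.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()      # stored by the website

# Login: the website sends a fresh random challenge...
challenge = os.urandom(32)

# ...the authenticator signs it with the private key...
signature = private_key.sign(challenge)

# ...and the website verifies the signature with the stored public key.
try:
    public_key.verify(signature, challenge)
    print("Authenticated: signature matches the registered public key")
except InvalidSignature:
    print("Rejected")
```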


Accelerate Your SDLC with BEST featuring Calvin Hendryx-Parker of Six Feet Up

Event: AI Field Day 5

Appearance: Ignite Talks at AI Field Day 5

Company: Ignite

Video Links:

Personnel: Calvin Hendryx-Parker

Calvin Hendryx-Parker from Six Feet Up discusses the importance of optimizing the software development lifecycle (SDLC) in his talk at AI Field Day 5. He begins by acknowledging the widespread integration of software in various aspects of life and the common challenges faced by software teams. Calvin introduces Six Feet Up, a Python and AI agency known for tackling complex problems with a highly experienced team. He shares a case study of a client with over 30 sub-organizations, each with its own software development team, struggling to operate efficiently due to siloed operations and lack of collaboration.

To address these inefficiencies, Calvin’s team conducted a thorough two-month evaluation of the client’s software teams, identifying key issues such as the absence of continuous integration/continuous deployment (CI/CD) practices, manual intervention steps, and technical debt. They also assessed the onboarding process for new developers and the overall skill gaps within the teams. The goal was to transform the existing developers into more effective contributors without the need for drastic measures like hiring or firing. This comprehensive analysis led to the development of a scoring system to compare and evaluate the performance of different teams, ultimately providing tailored recommendations for improvement.

Calvin introduces BEST (Best Enterprise Software Techniques), a product designed to streamline the evaluation process. BEST consists of online surveys that assess various aspects of the SDLC across four stages and 19 units, enabling teams to identify areas for improvement quickly. The tool generates heat maps and radar charts to visualize performance and facilitate cross-team learning and collaboration. Calvin emphasizes that while BEST can significantly enhance the SDLC, the underlying principles and evaluation framework can be adopted by any organization to improve their software development processes. He concludes by encouraging teams to focus on continuous improvement and collaboration to achieve a more efficient and effective SDLC.
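
As a hedged sketch of the kind of scoring such an evaluation produces (team names, stage names, and scores below are hypothetical, not Six Feet Up’s actual survey content or BEST’s real four stages and 19 units), survey answers can be aggregated per team and per stage to drive a heat map and highlight where cross-team learning should focus:

```python
# Hypothetical survey results: team -> stage -> average score (0-5).
scores = {
    "Team A": {"Plan": 4.2, "Build": 3.1, "Release": 2.0, "Operate": 3.5},
    "Team B": {"Plan": 2.8, "Build": 4.0, "Release": 3.7, "Operate": 2.2},
}

stages = ["Plan", "Build", "Release", "Operate"]
print(f"{'':<8}" + "".join(f"{s:>9}" for s in stages))
for team, by_stage in scores.items():
    print(f"{team:<8}" + "".join(f"{by_stage[s]:>9.1f}" for s in stages))

# The lowest-scoring cell per team is the first candidate for improvement.
for team, by_stage in scores.items():
    weakest = min(by_stage, key=by_stage.get)
    print(f"{team}: focus on {weakest} ({by_stage[weakest]:.1f})")
```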


Exploring the Network of Experts Required to Design and Implement Effective Edge Computing Solutions with OnLogic

Event: Edge Field Day 3

Appearance: OnLogic Presents at Edge Field Day 3

Company: OnLogic

Video Links:

Personnel: Amber Cobb

The presentation by Amber Cobb from OnLogic at Edge Field Day 3 focused on the importance of a collaborative ecosystem in designing and implementing effective edge computing solutions. OnLogic specializes in creating rugged, industrial edge devices that are highly configurable to meet the demands of challenging environments. Amber emphasized that the edge is not just a network boundary but a dynamic and complex place where real-time data and insights are crucial for driving efficiencies. To address these challenges, OnLogic relies on a network of partners, including Intel, AWS, Red Hat, Avassa, and Guise AI, to provide comprehensive solutions that go beyond just hardware. These partnerships allow OnLogic to offer scalable frameworks and innovative products that help customers navigate the complexities of edge computing.

Intel plays a foundational role in OnLogic’s ecosystem, particularly with its embedded lifecycle components designed for long-term edge deployments. Intel’s OpenVINO toolkit is also highlighted for its ability to optimize AI workloads at the edge, enabling enterprises to deploy and scale AI applications effectively. AWS is another key partner, providing cloud services like Greengrass and Kinesis Video Streams, which are essential for managing and deploying edge applications. Red Hat Enterprise Linux (RHEL) is described as a versatile and secure operating system that is ideal for critical workloads at the edge, offering a solid foundation for businesses to build upon. Avassa simplifies the orchestration of containerized applications at the edge, ensuring that even in environments with intermittent connectivity, edge applications remain functional and reliable.

Guise AI addresses the challenges of deploying and managing AI models at the edge, offering a no-code platform that streamlines AI deployments and optimizes performance. This platform is particularly useful for industries like oil and gas, where real-time monitoring and predictive maintenance are critical. Amber concluded by reiterating that the edge is a complex environment that requires more than just hardware; it requires a robust ecosystem of partners. By working with industry leaders and innovative companies, OnLogic is able to provide its customers with the tools and support they need to succeed in their edge computing endeavors.


A Look at Edge Computing in Building Automation and Steel Production with OnLogic

Event: Edge Field Day 3

Appearance: OnLogic Presents at Edge Field Day 3

Company: OnLogic

Video Links:

Personnel: Ross Hamilton

In the second session of his presentation at Edge Field Day 3, Ross Hamilton, Systems Architect at OnLogic, delves into two specific edge computing use cases: steel production and building automation. He begins by discussing the harsh environment of a steel mill, where extreme temperatures, dust, and debris pose significant challenges for technology. OnLogic’s ruggedized hardware, such as the Tacton TC 401 Panel PC, is designed to withstand these conditions. The panel PC offers a user-friendly touch interface that allows workers to manage complex machinery efficiently, reducing downtime and improving reliability. Its rugged construction, fanless design, and wide operating temperature range make it ideal for the punishing environment of a steel mill, where it can be mounted in industrial control panels or used in outdoor settings with UV-resistant displays.

Hamilton then shifts focus to the second use case: building automation in smart buildings, using a real-world example from New York City. He recalls the 2003 blackout and how edge computing could have mitigated some of the issues, such as people being trapped in elevators. OnLogic’s Helix platform is highlighted as a solution for gathering and analyzing data in smart buildings. The Helix platform is designed to operate in rugged environments like mechanical spaces in buildings, offering features such as solid-state construction, extended temperature ranges, and resistance to dust and particulates. It can gather data from various building systems and relay it to a central dashboard, enabling proactive decision-making to prevent issues like power failures.

Throughout the presentation, Hamilton emphasizes the importance of flexibility and resilience in edge computing hardware. OnLogic’s products are designed to meet the unique challenges of different environments, from the extreme conditions of a steel mill to the more controlled but still demanding settings of smart buildings. The company offers modular solutions that can be customized to fit specific needs, whether it’s legacy protocol support in industrial settings or ruggedized power protection for environments with unstable power sources. By providing robust, adaptable hardware, OnLogic aims to help businesses optimize efficiency, reduce downtime, and improve safety across a variety of edge computing applications.


A Look at Edge Computing in Smart Agriculture and Mining Automation with OnLogic

Event: Edge Field Day 3

Appearance: OnLogic Presents at Edge Field Day 3

Company: OnLogic

Video Links:

Personnel: Ross Hamilton

The Edge conjures images of network architecture diagrams, but for users of edge computers the edge is a physical location. Today’s modern edge systems are deployed well away from carpeted spaces, and users of edge computing have very specific requirements. In this presentation, OnLogic Systems Architect Ross Hamilton explores specific edge use cases (Smart Agriculture & Mining Automation), what they represent about the changing needs of tech innovators, and what goes into designing solutions that can survive wherever they’re needed.

In this presentation, Ross Hamilton from OnLogic discusses the evolving landscape of edge computing, emphasizing that the edge is not just a network concept but a physical location where computing systems are deployed in challenging environments. OnLogic specializes in creating industrial-grade edge computers designed to withstand harsh conditions, such as extreme temperatures, vibrations, and dust. Their product lines, including the Karbon series, are built to meet the specific needs of industries like agriculture and mining, where reliable, rugged computing is essential for real-time data processing and decision-making.

Hamilton highlights two specific use cases to illustrate the challenges and solutions in edge computing. In smart agriculture, OnLogic worked with a company developing robotic food harvesters that needed to operate in environments with fluctuating temperatures, vibrations, and dust. The Karbon 800 series was ideal for this application, offering fanless design, wide temperature range support, and the ability to process sensor data and communicate with motor controllers. The system also supports cellular connectivity, enabling real-time data transmission to the cloud, making it a robust solution for agricultural automation.

In the mining industry, OnLogic’s Karbon 400 series was deployed in a facility in northern Sweden, where temperatures can drop to -40°C. The system needed to operate reliably in these extreme conditions while supporting LiDAR sensors to detect spillage and ensure worker safety in dark, hazardous environments. The Karbon 400 series, with its Intel Atom processor and long lifecycle, provided the necessary compute power and connectivity, including dual LAN with Power over Ethernet (PoE) for cameras and sensors. These examples demonstrate how OnLogic’s rugged edge computing solutions are tailored to meet the specific demands of various industries, ensuring reliability and performance in the most challenging environments.


The Edge is Just the Beginning – Advancing Edge Computing Hardware with OnLogic

Event: Edge Field Day 3

Appearance: OnLogic Presents at Edge Field Day 3

Company: OnLogic

Video Links:

Personnel: Lisa Groeneveld

OnLogic, co-founded by Lisa Groeneveld and her husband Roland, designs and manufactures industrial and rugged computers for edge computing. The company, headquartered in Vermont, has grown from a small operation in a Boston apartment to a global business with locations in the U.S., Europe, and Asia. Lisa shared the story of how the company started by importing Mini-ITX motherboards and selling them online, leveraging the early days of e-commerce and Google AdWords to reach customers. Over time, OnLogic expanded its product offerings and built strong relationships with clients by understanding their needs and providing tailored solutions. Today, the company has shipped over 800,000 systems and components, serving industries such as manufacturing, transportation, and logistics.

OnLogic’s computers are designed to operate in harsh environments, making them ideal for industries that require high reliability and durability. Lisa highlighted examples of how OnLogic’s rugged computers are used in various applications, such as in the manufacturing processes of companies like Bridgestone and Michelin, and in Amazon’s fulfillment centers. She also shared a case study of Steel Dynamics, which needed computers that could withstand high vibration and extreme temperatures in their industrial setting. Another notable project involved Universal Studios, where OnLogic provided computers for kiosks in their water park, ensuring they could handle wide temperature ranges and outdoor conditions. These examples illustrate the versatility and robustness of OnLogic’s products, which are used in a wide range of industries and environments.

OnLogic’s approach to business is rooted in its core values of being open, fair, independent, and innovative. The company prides itself on transparency, offering open pricing and sharing financial information with employees. This culture of openness extends to their customer relationships, where they provide customizable solutions through a configurator on their website, allowing clients to tailor products to their specific needs. Lisa emphasized that OnLogic’s edge computing solutions are not just endpoints but the beginning of meaningful processes for their customers. One example she shared was a project involving AI systems at the base of wind turbines to prevent harm to migratory birds, showcasing how OnLogic’s technology can contribute to solving global challenges.


Navigating the Tides – Overcoming Global Challenges in International Maritime Shipping with ZEDEDA

Event: Edge Field Day 3

Appearance: ZEDEDA Presents at Edge Field Day 3

Company: ZEDEDA

Video Links:

Personnel: Jason Grimm

In this presentation, ZEDEDA’s Consulting Solutions Architect, Jason Grimm, discusses how ZEDEDA’s edge platform is helping a global shipping company overcome significant challenges in monitoring and maintaining the integrity of refrigerated containers during long sea voyages. The shipping company operates over 600 vessels, transporting millions of containers, including up to a million refrigerated units carrying fresh and frozen goods. These containers must be kept at precise temperatures to avoid spoilage, which poses a $21 billion risk to the company. However, the lack of reliable connectivity on ships, which often rely on outdated radio networks, makes it difficult to monitor the containers in real-time, leading to potential spoilage and revenue loss.

ZEDEDA’s solution involves modernizing the shipping company’s communication infrastructure by upgrading from 2G to 4G and integrating Starlink satellite connectivity. This allows for real-time monitoring of IoT sensor data from the refrigerated containers, enabling the company to detect temperature fluctuations and take immediate action to prevent spoilage. ZEDEDA’s platform also ensures the security of the edge devices on the ships, preventing unauthorized access and tampering. By implementing a cloud-like operating model at the edge, ZEDEDA enables the shipping company to manage its fleet more efficiently, even without local IT staff on board, and to push updates and fixes remotely.
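
A minimal, hypothetical sketch of the kind of edge-side check this enables (container IDs, the sensor schema, and the tolerance value are illustrative, not the customer’s actual telemetry): readings from each refrigerated container are evaluated locally on the vessel, and only alerts need to cross the satellite or 4G link:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReeferReading:
    container_id: str
    setpoint_c: float      # temperature the cargo must be held at
    measured_c: float

TOLERANCE_C = 2.0          # hypothetical allowable drift before intervention

def check(reading: ReeferReading) -> Optional[str]:
    """Return an alert string if the container has drifted out of range."""
    drift = abs(reading.measured_c - reading.setpoint_c)
    if drift > TOLERANCE_C:
        return (f"ALERT {reading.container_id}: {reading.measured_c:.1f} C "
                f"vs setpoint {reading.setpoint_c:.1f} C")
    return None

readings = [
    ReeferReading("MSKU1234567", setpoint_c=-18.0, measured_c=-17.2),
    ReeferReading("MSKU7654321", setpoint_c=4.0, measured_c=9.5),
]
for reading in readings:
    alert = check(reading)
    if alert:
        print(alert)       # in production, forwarded to shore over Starlink/4G
```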

The presentation also highlights ZEDEDA’s broader capabilities, including the use of SD-WAN to manage communications between ships and shore, as well as the ability to orchestrate and automate various edge devices and applications. ZEDEDA’s platform not only reduces the risk of spoilage but also opens up opportunities for further improvements in operational safety and efficiency. The company plans to expand its solution to chartered vessels, offering a portable kit that can be deployed on leased ships. ZEDEDA’s edge platform is designed to be flexible and scalable, allowing customers to experiment with it using virtualized environments or inexpensive hardware like Raspberry Pi.


AI on Track – Revolutionizing Rail Freight and Solving the Industry’s Toughest Challenges with ZEDEDA

Event: Edge Field Day 3

Appearance: ZEDEDA Presents at Edge Field Day 3

Company: ZEDEDA

Video Links:

Personnel: Jason Grimm

New federal railway regulations have introduced the need for enhanced detection capabilities and more frequent inspections along railroad tracks. For large railway operators, this presents significant challenges, especially when managing thousands of miles of tracks in remote and harsh environments. ZEDEDA, through its unified edge platform, is helping one of the largest American railroad operators address these challenges. The operator, which manages 8,000 locomotives and transports over 500 million tons of freight annually, is required to increase the number of hotbox detectors from 4,000 to 20,000. These detectors monitor the temperature of train wheels to prevent derailments, a leading cause of accidents. The new regulations also mandate the use of advanced technologies like computer vision and AI inference at the edge to detect additional issues beyond just heat.

The scale of this deployment is a major technical challenge. Managing 20,000 edge devices in remote, often unsecured locations requires robust solutions for connectivity, security, and operational efficiency. ZEDEDA’s platform is designed to handle such scale, offering zero-touch deployment and management of edge devices. The platform ensures that devices are secure, even in physically vulnerable locations, by implementing measures like encryption, read-only file systems, and intrusion detection. Connectivity is another hurdle, as these devices must operate with various types of network connections, including 2G, 4G, satellite, and microwave. ZEDEDA’s platform simplifies this by automatically configuring devices based on available connectivity, ensuring seamless operation across diverse environments.

In addition to addressing current regulatory requirements, ZEDEDA’s platform provides flexibility for future advancements. The platform allows for the deployment of new applications and updates over time, enabling the railroad operator to adapt to evolving technologies and regulations. For example, future use cases include computer vision for wheel defect monitoring, railroad crossing safety, and advanced traffic control. By containerizing legacy systems and enabling real-time AI inference at the edge, ZEDEDA is helping the railroad industry modernize its operations, improve safety, and meet regulatory demands efficiently.


Driving Innovation – Empowering Service Centers for Software Dependent Vehicles with ZEDEDA

Event: Edge Field Day 3

Appearance: ZEDEDA Presents at Edge Field Day 3

Company: ZEDEDA

Video Links:

Personnel: Manny Calero

The presentation by Manny Calero from ZEDEDA at Edge Field Day 3 focused on how ZEDEDA is helping one of the world’s largest auto manufacturers modernize its dealership infrastructure to address the growing complexity of software-dependent vehicles. With the rise of electric vehicles (EVs) and increasingly stringent security regulations, such as UNECE R155, the need for secure and efficient software delivery has become paramount. The auto manufacturer, which produces around 8 million vehicles annually and services tens of millions more, faced challenges in delivering large, secure software updates to its 70,000 dealerships and service centers. ZEDEDA’s edge platform enables the manufacturer to securely manage and deliver these updates at scale, ensuring compliance with regulations and addressing bandwidth limitations.

ZEDEDA’s solution revolves around managing workloads at the edge, allowing dealerships to process software updates locally rather than transferring massive amounts of data from a central location. This approach not only reduces bandwidth usage but also ensures that each vehicle receives its unique software image securely. The platform is designed to handle large-scale deployments, managing tens of thousands of endpoints as a fleet, with a focus on security and flexibility. ZEDEDA’s platform is hardware-agnostic, supporting both x86 and ARM-based processors, which allows the manufacturer to avoid vendor lock-in and diversify its hardware supply chain, a lesson learned from the disruptions caused by the COVID-19 pandemic.

In addition to software updates, ZEDEDA’s platform is being used to consolidate other dealership applications, such as inventory management, onto a single edge platform. The platform’s flexibility allows it to support both legacy applications and modern cloud-native designs, such as Kubernetes. While the current focus is on dealership infrastructure, ZEDEDA is also working on other projects with the manufacturer, including in-car solutions and manufacturing use cases. The platform’s ability to manage edge devices at scale, with centralized cloud-based management, makes it a powerful tool for modernizing the infrastructure of industries like automotive, where secure and efficient software delivery is critical.


Edge Management and Orchestration with ZEDEDA

Event: Edge Field Day 3

Appearance: ZEDEDA Presents at Edge Field Day 3

Company: ZEDEDA

Video Links:

Personnel: Michael Maxey

ZEDEDA provides an edge orchestration solution that allows customers to manage their edge computing infrastructure with ease. The company was founded in 2016 with the vision of addressing the growing need for edge computing and the challenges of managing data at the edge. ZEDEDA’s platform enables customers to deploy and manage applications on any hardware at scale, while connecting to any cloud or on-premises systems. The company’s solution is particularly useful in industries such as oil and gas, agriculture, and manufacturing, where edge computing is critical for real-time data analysis, AI workloads, and secure software delivery. ZEDEDA’s open-source operating system, EVE (Edge Virtualization Engine), plays a central role in this, providing a lightweight, secure, and flexible platform for running various workloads, including virtual machines and containerized applications.

ZEDEDA’s platform is designed to address the unique challenges of edge environments, such as limited network connectivity, security risks, and the need for remote management. For example, in the oil and gas industry, ZEDEDA’s solution is used to deploy AI-powered analytics at well sites to optimize oil extraction and monitor methane burnoff using computer vision. The platform also supports a wide range of hardware, from small devices like Raspberry Pi to large Intel servers with GPUs, making it adaptable to different use cases. ZEDEDA’s customers include large enterprises like Chevron and Emerson, who not only use the platform but have also invested in the company, demonstrating the strategic importance of ZEDEDA’s technology in their operations.

The platform’s architecture is built around a cloud-based controller that manages edge devices through a secure API, ensuring that all configurations and updates are centrally managed. This eliminates the need for local access to edge devices, enhancing security and reducing the risk of tampering. ZEDEDA also emphasizes the importance of zero-touch updates and measured boot processes to ensure that devices remain secure and operational without requiring physical intervention. The platform supports a wide range of applications, from legacy systems to modern Kubernetes clusters, making it a versatile solution for edge computing across various industries.


Enabling AI at the Edge with Tsecond’s BRYCK AI Platform

Event: Edge Field Day 3

Appearance: Tsecond Introduces Edge AI at Edge Field Day 3

Company: Tsecond

Video Links:

Personnel: Manavalan Krishnan

Enterprises are increasingly facing challenges in processing large volumes of data at the edge, especially with the growing demand for AI-driven decision-making. Traditionally, edge data is sent to the cloud for processing, but this approach is becoming impractical due to network limitations and the sheer volume of data being generated. Tsecond’s BRYCK AI platform addresses these challenges by enabling AI inferencing directly at the edge, eliminating the need to transfer data to the cloud. The BRYCK AI platform integrates both storage and AI processing capabilities within a single unit, allowing for real-time decision-making without the need for external GPUs or cloud connectivity. This is particularly beneficial for edge environments where traditional AI infrastructure, such as GPUs, cannot be easily deployed due to space and power constraints.

The BRYCK AI platform is highly configurable, allowing customers to tailor the amount of storage and AI processing power to their specific needs. It supports a wide range of AI workloads, from small-scale applications like drones to large-scale operations such as ships or satellites. The platform’s architecture is designed for high-speed data processing, with both storage and AI chips connected via PCIe Gen 4, ensuring minimal latency and eliminating bottlenecks typically seen in traditional GPU-based systems. This allows for faster inferencing, with benchmarks showing the BRYCK AI platform to be 10 to 20 times faster than comparable Nvidia solutions. Additionally, the platform is power-efficient, making it suitable for deployment in various edge environments where power consumption is a critical factor.

Tsecond offers different configurations of the BRYCK AI platform, including rugged, portable versions for harsh environments and more flexible, open versions for data center use. The platform also supports a leasing model, allowing customers to upgrade their systems as their needs evolve. The BRYCK AI platform is designed to handle large-scale data processing, such as in manufacturing plants generating petabytes of data daily, where traditional AI systems would struggle to keep up. By integrating AI processing directly with storage, the BRYCK AI platform provides a scalable, efficient solution for edge AI inferencing, enabling enterprises to make real-time decisions without the delays and limitations of cloud-based processing.


Lights, Camera, Action – How Tsecond’s BRYCK Platform is Advancing Creative Workflows​

Event: Edge Field Day 3

Appearance: Tsecond Introduces Edge AI at Edge Field Day 3

Company: Tsecond

Video Links:

Personnel: Jimmy Fusil

In the presentation, Jimmy Fusil, a Media and Entertainment (M&E) technologist at Tsecond, discusses the challenges of managing vast amounts of data in the media and entertainment industry, particularly in film and TV production. He highlights how the transition from analog to digital workflows over the past 25 years has led to an explosion in data generation, with increasingly larger file sizes due to advancements in resolution from SD to 8K and beyond. This shift has made data the most valuable asset in production, requiring secure, efficient, and high-speed storage solutions. Tsecond’s BRYCK platform, a portable NVMe storage device, addresses these challenges by enabling on-device data processing, secure transport, and quick backups, making it an essential tool for modern creative workflows.

Fusil emphasizes that film production is inherently an edge-centric activity, where data is generated in real-time on set and needs to be processed and accessed immediately. Directors and cinematographers require low-latency, high-throughput solutions to review footage and make creative decisions on the spot. The BRYCK platform, with its ability to handle up to a petabyte of data and deliver high throughput, ensures that large volumes of data can be accessed and processed quickly, even in remote or challenging environments. Fusil also discusses the importance of portability, as BRYCK allows data to be easily transported from one location to another, such as from a film set to a post-production facility, without the need for lengthy data transfers.

Several use cases are presented to illustrate the BRYCK’s capabilities, including a miniseries production where 130 terabytes of data were transported from New York to Los Angeles for color grading, and a high-profile project for Sphere, which required handling 16K footage at 60 frames per second. Additionally, Fusil describes an extreme use case in the Atacama Desert, where BRYCK was used to record 150 terabytes of footage from multiple cameras in harsh conditions. These examples demonstrate how BRYCK not only meets the high-performance demands of modern media production but also provides a reliable and flexible solution for managing data in edge environments.


Deep Dive Into Tsecond’s BRYCK Platform – Low SWaP Dense Petabyte Scale and High Performance Storage

Event: Edge Field Day 3

Appearance: Tsecond Introduces Edge AI at Edge Field Day 3

Company: Tsecond

Video Links:

Personnel: Manavalan Krishnan

Tsecond’s BRYCK platform is designed to address the growing challenges of data management at the edge, particularly in environments that are not conducive to traditional data center infrastructure. These edge environments, such as autonomous vehicles, aerospace, and military operations, generate massive amounts of data, often in the range of hundreds of terabytes to petabytes, and require solutions that can handle harsh conditions like extreme temperatures, vibrations, and limited power availability. The BRYCK platform is a rugged, portable NVMe storage device that can store up to a petabyte of data and transfer it at speeds of up to 40 GB/s. This makes it ideal for capturing, processing, and moving large datasets from remote or mobile edges to data centers or the cloud, overcoming the limitations of network throughput and the time it takes to transfer data over traditional networks.
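
A quick back-of-the-envelope comparison (my own arithmetic, not figures from the talk) shows why physically moving a loaded device can beat the network for petabyte-scale datasets:

```python
PETABYTE_BYTES = 10**15

def transfer_hours(num_bytes: int, rate_bytes_per_s: float) -> float:
    """Idealized transfer time, ignoring protocol overhead and retries."""
    return num_bytes / rate_bytes_per_s / 3600

wan_rate = 10e9 / 8     # 10 Gbps WAN link at optimistic 100% utilization, in bytes/s
bryck_rate = 40e9       # local offload at the quoted 40 GB/s

print(f"1 PB over a 10 Gbps WAN : ~{transfer_hours(PETABYTE_BYTES, wan_rate) / 24:.1f} days")
print(f"1 PB at 40 GB/s locally : ~{transfer_hours(PETABYTE_BYTES, bryck_rate):.1f} hours")
```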

The BRYCK platform is built with a focus on security and durability, making it suitable for sensitive applications in sectors like government, defense, and aerospace. The device is designed to be tamper-proof, using a process called potting, which makes it nearly impossible to access the internal components without damaging the device. Additionally, the BRYCK is equipped with fault-tolerant mechanisms, such as erasure coding and self-healing capabilities, to protect data from various types of failures, including electrical, mechanical, and environmental damage. The platform also supports multi-level encryption and key management, ensuring that data remains secure even if the device is physically transported across borders or between different locations. This level of security and resilience makes the BRYCK a reliable solution for transporting sensitive data in military and aerospace applications.
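
As a hedged illustration of the erasure-coding idea (the simplest single-parity case; Tsecond’s actual coding scheme and parameters are not disclosed in the talk), a lost data shard can be rebuilt from the surviving shards plus a parity shard:

```python
# Simplest erasure code: one XOR parity shard over N equal-length data shards.
# Losing any single shard (data or parity) is recoverable; production systems
# use Reed-Solomon-style codes that tolerate multiple simultaneous failures.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

data_shards = [b"edge", b"data", b"2024"]          # equal-length shards
parity = b"\x00" * len(data_shards[0])
for shard in data_shards:
    parity = xor_bytes(parity, shard)

# Simulate losing shard 1, then rebuild it from the rest plus parity.
lost_index = 1
rebuilt = parity
for i, shard in enumerate(data_shards):
    if i != lost_index:
        rebuilt = xor_bytes(rebuilt, shard)

assert rebuilt == data_shards[lost_index]
print("recovered shard:", rebuilt)                 # b'data'
```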

In addition to its hardware capabilities, the BRYCK platform integrates with Tsecond’s software stack, which provides enterprise-grade storage features like high-speed data transfer, self-healing, and snapshot capabilities. The platform is also designed to work seamlessly with major cloud providers like AWS, Azure, and Google Cloud, enabling fast data uploads and downloads. Tsecond offers a service called DataDot, which facilitates high-speed data transfers to the cloud by physically moving BRYCK devices to data centers with high-speed cloud connections. This service is particularly useful for industries like media and entertainment, where large datasets need to be processed quickly, as well as for autonomous vehicle testing, where vast amounts of sensor data are generated daily. Overall, the BRYCK platform provides a comprehensive solution for managing, securing, and transporting large datasets in challenging edge environments.


Enabling Large Data Capture, Data Transport and AI Inferencing with Tsecond’s Edge Infrastructure​​

Event: Edge Field Day 3

Appearance: Tsecond Introduces Edge AI at Edge Field Day 3

Company: Tsecond

Video Links:

Personnel: Sahil Chawla

Edge environments often face challenges related to power, space, and connectivity, which can hinder the ability to collect, transport, and analyze large amounts of data. In this presentation, Sahil Chawla, co-founder and CEO of Tsecond, introduces the company’s innovations in edge infrastructure, focusing on secure data storage, seamless data movement, and rapid AI inferencing. Tsecond was founded in 2020 after identifying the need for modern solutions to handle the increasing data generated at the edge, particularly in industries like manufacturing, oil and gas, and autonomous vehicles. The company’s flagship product, the BRYCK platform, is designed to address these challenges by offering a compact, high-capacity storage solution that can capture and store up to one petabyte of data in a small form factor, with deduplication capabilities that can increase storage efficiency by up to 8x.

Tsecond’s BRYCK platform is built to be lightweight, power-efficient, and scalable, making it ideal for edge environments where space and power are limited. The system is capable of high throughput, with theoretical speeds of up to 256 GB per second, and is designed to be sustainable, consuming less power than a typical hair dryer. The BRYCK platform can be customized with AI accelerator chips, allowing for on-site AI inferencing alongside data storage. This flexibility makes it suitable for a wide range of industries, from aerospace and aviation to media and entertainment, where large amounts of data need to be processed and analyzed quickly. Tsecond also offers a service called Data Dart, which facilitates the secure movement of data between edge locations and centralized data centers or the cloud.

The company’s innovations are inspired by existing solutions like Amazon Snowball and Seagate Lyve, but Tsecond aims to provide a more compact, efficient, and versatile alternative. Their products are designed to meet the specific needs of industries that require rugged, high-performance systems capable of handling large data volumes in remote or challenging environments. Tsecond’s focus on sustainability and security also positions them to address future demands for energy-efficient data centers and secure data transfer solutions. With a growing portfolio of products and a strong focus on edge AI and storage infrastructure, Tsecond is poised to play a significant role in the evolving landscape of edge computing.


Safeguarding On-Site Edge Applications and Data with Avassa

Event: Edge Field Day 3

Appearance: Avassa Presents at Edge Field Day 3

Company: Avassa

Video Links:

Personnel: Carl Moberg, Fredrik Jansson

In this demo, we explore best practices for handling sensitive data at the distributed edge and how to safeguard it against potential breaches with Avassa’s edge-native security features. We illustrate what happens when a host running business-critical applications is unexpectedly powered down or even stolen.

Discover how cryptographic materials stored in memory are immediately erased, locking down the secrets management vault and certain event streaming topics. By the end of this video, you’ll understand how Avassa protects sensitive data against security breaches across edge sites, ensuring resilient and secure deployments.

Learn more about security at the edge in this whitepaper: https://info.avassa.io/securing-the-edge