Guy Currier, Jim Jones, Jack Poller, and Mark Houtz presented Ignite Talks at Edge Field Day 3
This presentation took place on September 19, 2024, from 9:00 to 10:00.
Presenters: Guy Currier, Jack Poller, Jim Jones, Mark Houtz
Object Storage For Disaster Recovery with Jim Jones
Watch on YouTube
Watch on Vimeo
Jim Jones from 11:11 Systems discusses the evolving landscape of disaster recovery (DR) and how object storage plays a crucial role in modern strategies, particularly in the face of increasing ransomware attacks. He emphasizes that traditional DR concerns like fire and flood have been overshadowed by the growing threat of ransomware, which has become a global issue. Attackers now target backups, attempting to exfiltrate and delete them, making it essential to have encrypted, immutable backups. Jones stresses the importance of a layered approach to resilience, combining active defense, well-architected recovery strategies, and regular testing of backups to ensure they are functional when needed.
Object storage, according to Jones, has become the preferred solution for modern backup and disaster recovery due to its inherent immutability, scalability, and cost-effectiveness. Unlike traditional storage methods like tape or disk, object storage abstracts the underlying hardware and distributes data across multiple locations, ensuring durability. He highlights that hyperscaler services like Amazon S3 offer exceptional durability by replicating data across multiple availability zones, making them a reliable option for large-scale disaster recovery. Additionally, object storage is cost-efficient, especially for backup workloads that involve many writes but few reads, making it well suited to cold storage tiers like Amazon S3’s Infrequent Access or Azure’s cold tier.
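To make the immutability and tiering concrete, here is a minimal sketch of what such a backup write could look like against Amazon S3, assuming boto3 and configured credentials; the bucket and object names are hypothetical. Object Lock in compliance mode is what stops an attacker with stolen credentials from deleting the backup, and the lifecycle rule ages rarely-read backups into the cheaper Infrequent Access tier.

```python
# Minimal sketch: an encrypted, immutable backup object in Amazon S3.
# Assumes boto3 is installed and credentials are configured; the bucket
# name and key are hypothetical.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-bucket"  # hypothetical name

# Object Lock can only be enabled when the bucket is created
# (this also turns on versioning, which Object Lock requires).
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Write the backup encrypted and locked: in COMPLIANCE mode, no one,
# including the account root, can delete or overwrite this version
# until the retention date passes.
retain_until = datetime.now(timezone.utc) + timedelta(days=30)
with open("db-2024-09-19.bak", "rb") as backup:
    s3.put_object(
        Bucket=BUCKET,
        Key="backups/db-2024-09-19.bak",
        Body=backup,
        ServerSideEncryption="AES256",
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
    )

# Backups see many writes but few reads, so transition them to the
# Infrequent Access tier after 30 days to cut storage cost.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        }]
    },
)
```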
Jones also explains how object storage can be integrated into both on-premises and cloud-based DR strategies. He advises using on-premises object storage for initial backups of critical workloads like VMs and databases, while leveraging cloud storage for off-site copies. This hybrid approach ensures that data is protected both locally and remotely. He also touches on the flexibility of object storage in edge environments, where backups can be regionalized or store-specific, and how hyperscaler clouds can serve as a secondary data center for recovery in case of a major disaster. Ultimately, Jones underscores the importance of having a comprehensive, scalable, and tested disaster recovery plan that leverages the strengths of object storage.
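As a rough illustration of the hybrid pattern, the sketch below copies backups from an on-premises S3-compatible object store to a cloud bucket as the off-site copy. The endpoint, credentials, and bucket names are all hypothetical; in practice a backup product would drive this job, but the data flow has the same shape: the primary backup lands locally, and a second copy moves to the hyperscaler.

```python
# Minimal sketch of the hybrid DR pattern: back up locally to an
# on-premises S3-compatible object store, then push an off-site copy
# to a hyperscaler bucket. All names and endpoints are hypothetical.
import boto3

# On-premises object store exposing an S3-compatible API.
local = boto3.client(
    "s3",
    endpoint_url="https://objects.dc1.example.internal",  # hypothetical
    aws_access_key_id="LOCAL_KEY",
    aws_secret_access_key="LOCAL_SECRET",
)

# Hyperscaler cloud acting as the off-site copy.
cloud = boto3.client("s3")

def replicate_offsite(bucket: str, prefix: str = "backups/") -> None:
    """Copy every backup object under `prefix` to the cloud bucket."""
    paginator = local.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            body = local.get_object(Bucket=bucket, Key=obj["Key"])["Body"]
            cloud.upload_fileobj(body, "example-offsite-bucket", obj["Key"])

replicate_offsite("example-local-backups")
```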
Personnel: Jim Jones
Zero Trust is a Strategy Not a Product with Jack Poller
Watch on YouTube
Watch on Vimeo
In this talk, Jack Poller emphasizes that Zero Trust is a cybersecurity strategy, not a product. He begins by reflecting on the pre-pandemic era, when VPNs were the primary method for remote workers to access internal networks. However, the sudden shift to remote work during the COVID-19 pandemic exposed the limitations of VPNs, particularly their limited scalability and their security vulnerabilities. This led to the rise of Zero Trust Network Access (ZTNA), which improved security by eliminating direct inbound connections to servers. Instead, both clients and servers connect outbound to a cloud solution, reducing the attack surface. However, Poller clarifies that ZTNA is just a product, not the full embodiment of Zero Trust.
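That connection pattern is worth sketching, since it is what shrinks the attack surface: the protected server exposes no inbound listener at all. Below is a toy illustration with hypothetical ports, omitting the mutual TLS and per-request authorization a real ZTNA product layers on top.

```python
# Toy sketch of the ZTNA connection shape: the protected server never
# listens for inbound traffic. A connector beside it dials OUT to a
# cloud broker, the client also dials OUT, and the broker splices the
# two streams together. Ports and addresses are hypothetical.
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Forward bytes one direction until the source closes."""
    while data := src.recv(4096):
        dst.sendall(data)
    dst.close()

def broker(client_port: int = 8443, connector_port: int = 9443) -> None:
    """Cloud broker: both sides initiate outbound connections to it."""
    def accept_one(port: int) -> socket.socket:
        srv = socket.create_server(("0.0.0.0", port))
        conn, _ = srv.accept()
        return conn

    connector = accept_one(connector_port)  # server-side connector dials out
    client = accept_one(client_port)        # user's client dials out
    threading.Thread(target=pipe, args=(client, connector), daemon=True).start()
    pipe(connector, client)

broker()
```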
Poller traces the origins of Zero Trust back to 2010 when John Kindervag, an analyst at Forrester, introduced the concept to address the flaws in the traditional “castle and moat” security model. In this older model, once a user passed through the firewall, they had broad access to the internal network, which attackers could exploit through lateral movement. Zero Trust, on the other hand, operates on the principle of “never trust, always verify,” requiring strict authentication and authorization for every interaction, whether it’s between users, devices, or APIs. Google’s implementation of Zero Trust through its BeyondCorp initiative in 2014 further popularized the concept, demonstrating how it could be applied to large-scale environments.
Poller outlines the core principles of Zero Trust, including explicit verification, least privilege access, and the assumption that breaches will occur. He stresses the importance of strong identity controls, device security, network security, and data protection, all underpinned by visibility, analytics, and automation. Zero Trust requires a comprehensive, integrated approach to security, tailored to the specific needs of each organization. Poller concludes by reminding the audience that Zero Trust is not a one-size-fits-all solution but a strategic framework that must be customized based on the unique requirements and risks of each business.
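A conceptual sketch of those principles as a policy decision, not drawn from any particular product: every request is explicitly verified for identity and device posture, and authorization covers only the minimal grants a user needs, denied by default.

```python
# Conceptual sketch of "never trust, always verify": every request is
# checked against identity, device posture, and least-privilege policy
# before it is allowed, regardless of network location. The policy data
# and field names are illustrative, not from any specific product.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_compliant: bool   # e.g. managed, patched, disk encrypted
    mfa_verified: bool
    resource: str
    action: str

# Least privilege: each user gets only the exact grants they need.
POLICY = {
    "alice": {("payroll-db", "read")},
    "bob": {("build-server", "read"), ("build-server", "write")},
}

def authorize(req: Request) -> bool:
    """Explicitly verify every interaction; deny by default."""
    if not req.mfa_verified:          # verify identity explicitly
        return False
    if not req.device_compliant:      # verify device posture
        return False
    return (req.resource, req.action) in POLICY.get(req.user, set())

# Assume breach: a request from "inside" the network gets the same check.
print(authorize(Request("alice", True, True, "payroll-db", "read")))   # True
print(authorize(Request("alice", True, True, "payroll-db", "write")))  # False
```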
Personnel: Jack Poller
Private Cellular and Salt with Mark Houtz
Watch on YouTube
Watch on Vimeo
Mark Houtz, a network engineer working with school districts in Utah, shared his recent experiments with private cellular networks, particularly focusing on CBRS (Citizens Broadband Radio Service) and Wi-Fi technologies. He explained that CBRS operates in the 3.55 to 3.7 GHz band in the U.S. and is gaining traction globally. Mark and his team conducted tests at the Bonneville Salt Flats, a vast, flat area known for land speed records, making it an ideal location for testing wireless technologies over long distances. In their initial tests two years ago, they managed to achieve a two-mile range using a 15-foot antenna for CBRS, but they wanted to push the limits further with more advanced equipment.
In their recent tests, Mark and his team used a Cell on Wheels (COW) with a 60-foot antenna to improve the range and performance of their wireless technologies. They tested both LTE and 5G radios, along with Wi-Fi HaLow, which operates in the 900 MHz spectrum. While Wi-Fi HaLow didn’t perform as well as expected, reaching only about a mile instead of the hoped-for three kilometers, the CBRS tests were more successful. They achieved a four-mile range with usable signal strength, allowing them to perform speed tests and browse the internet. Mark emphasized the importance of antenna height and line of sight in achieving better performance, noting that in pristine conditions they had previously reached up to 12 miles with private cellular.
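A quick back-of-the-envelope calculation shows why antenna height dominated the results. On terrain as flat as the salt flats, range is ultimately bounded by the radio horizon, which grows with the square root of antenna height; the sketch below uses the common 4/3-earth-radius approximation and assumes a 2-meter client antenna (the client height is not from the talk). The observed 2- and 4-mile ranges were limited by signal strength rather than the horizon, while the 12-mile best case still fits under the 60-foot mast's roughly 14-mile horizon.

```python
# Back-of-the-envelope sketch of why antenna height matters so much on
# flat terrain: the radio horizon grows with the square root of height.
# Uses the standard 4/3-earth-radius approximation; mast heights are
# from the talk, the client antenna height is an assumption.
from math import sqrt

def radio_horizon_km(h1_m: float, h2_m: float) -> float:
    """Approximate max line-of-sight distance between two antennas."""
    return 4.12 * (sqrt(h1_m) + sqrt(h2_m))

FT_TO_M = 0.3048
client_h = 2.0  # assumed handheld/vehicle-mounted client, meters

for label, mast_ft in [("15-foot mast", 15), ("60-foot COW mast", 60)]:
    d_km = radio_horizon_km(mast_ft * FT_TO_M, client_h)
    print(f"{label}: radio horizon ~ {d_km:.1f} km ({d_km / 1.609:.1f} mi)")
# 15-foot mast: radio horizon ~ 14.6 km (9.1 mi)
# 60-foot COW mast: radio horizon ~ 23.4 km (14.6 mi)
```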
Mark also highlighted the potential for edge computing in these setups, particularly in remote or mobile environments like the Bonneville Salt Flats. By integrating edge computing into the COW or even on the client side, they could handle data processing closer to the source, improving efficiency and reducing latency. The tests demonstrated the viability of private cellular networks for high-speed, long-distance connectivity, especially in challenging environments, and underscored the importance of proper equipment setup, including antenna height and spectrum analysis, for optimal performance.
Personnel: Mark Houtz
The AI Development Lifecycles with Guy Currier of The Futurum Group
Watch on YouTube
Watch on Vimeo
Guy Currier from The Futurum Group discusses the AI development lifecycle, emphasizing that AI is not an application in itself but rather a service integrated into applications. He begins by clarifying a common misconception: AI is often thought of as a standalone application, but in reality, it is a service that enhances applications by providing intelligent responses or actions. For example, a chat application may use AI to generate responses, but the AI is just one component of the broader application. This distinction is crucial for organizations looking to adopt AI, as they need to understand that AI development involves creating services that can be integrated into various applications, rather than building a single AI “application.”
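A toy sketch of that distinction, with `generate` standing in as a hypothetical placeholder for any hosted model API: everything around the call, including validation, state, and formatting, is the application; the AI is one service inside it.

```python
# Sketch of the distinction Currier draws: the application is the chat
# product; the AI model is just one service it calls. `generate` is a
# hypothetical stand-in for a real model API call.
from datetime import datetime, timezone

def generate(prompt: str) -> str:
    """Hypothetical AI service; a real app would call a model API here."""
    return f"(model response to: {prompt!r})"

def handle_chat_message(history: list[str], user_message: str) -> str:
    """The *application*: validation, state, and formatting around the AI."""
    if not user_message.strip():
        return "Please enter a message."           # app logic, not AI
    history.append(user_message)                    # app state, not AI
    reply = generate("\n".join(history))            # the AI service call
    history.append(reply)
    return f"[{datetime.now(timezone.utc):%H:%M}] {reply}"  # app formatting

history: list[str] = []
print(handle_chat_message(history, "What's our refund policy?"))
```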
Currier outlines three distinct life cycles in AI development. The first involves foundational models, such as GPT or DALL-E, which are typically developed by large organizations or cloud providers. While companies can create their own foundational models, it is often more practical to adopt existing ones. The second life cycle is tuning: adapting a foundational model with proprietary data. This is where organizations begin to develop their own intellectual property, as they shape the model to their specific needs. For instance, companies like Thomson Reuters have incorporated AI into their legal services, using proprietary legal data to enhance the AI’s capabilities. This tuning process follows a development cycle similar to traditional software development, involving infrastructure, tooling, and iterative testing.
The third life cycle Currier discusses is Retrieval Augmented Generation (RAG), which adds contextual data to user prompts. RAG involves searching for relevant information, either from internal or external sources, to enhance the AI’s responses. This process requires its own development and monitoring, as the quality and relevance of the contextual data are critical to the AI’s performance. Currier emphasizes that these three life cycles—foundational model development, model tuning, and RAG—each have distinct infrastructure and development strategies. Organizations must address all three to create AI services that are responsive, contextual, and tailored to specific business scenarios.
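A minimal sketch of the RAG flow described here, with a toy document set and word-overlap scoring standing in for the embedding search and vector index a production system would use:

```python
# Minimal sketch of Retrieval Augmented Generation: retrieve the most
# relevant documents for a question, then prepend them as context before
# the prompt goes to the model. Documents and scoring are toy stand-ins.

DOCUMENTS = [
    "Refunds are processed within 5 business days of approval.",
    "The warehouse in Reno ships orders Monday through Friday.",
    "Support tickets are triaged by severity, then by age.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (toy scoring)."""
    q = set(query.lower().split())
    scored = sorted(
        DOCUMENTS,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Augment the user's question with retrieved context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

# The augmented prompt, not the bare question, is what the model sees.
print(build_prompt("How long do refunds take to process?"))
```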
Personnel: Guy Currier