
Xsight Labs Presents at AI Infrastructure Field Day



AI Infrastructure Field Day 4

John C Carney and Ted Weatherford presented for Xsight Labs at AI Infrastructure Field Day 4.

Presentation date: January 29, 2026, 10:00 AM - 11:30 AM PT.

Presenters: John C Carney, Ted Weatherford

Xsight Labs Presents at AI Infrastructure Field Day
This presentation provides a comprehensive deep dive into a next-generation semiconductor ecosystem designed for AI Factories and Cloud infrastructure. It highlights the technical architecture and real-world applications of the X-Series and E-Series chips, focusing on how “software-defined” hardware can optimize scale, power efficiency, and time-to-market.


Redefining Infrastructure Philosophy – the Xsight Labs Vision


Watch on YouTube
Watch on Vimeo

This introductory session establishes the company's core identity and its unique approach to the semiconductor market. It explores a product philosophy built on the pillars of extreme scalability, open architecture, and vertical integration to reduce Total Cost of Ownership (TCO). By the end of this section, the audience will understand how the company's commitment to agility and simplicity drives its engineering decisions.

Xsight Labs, a fabless semiconductor company founded in 2017, designs and sells chips manufactured by TSMC. Led by serial entrepreneurs and backed by $440 million in top-tier VC funding, the company employs over 200 engineers globally. Its approach aims to democratize the semiconductor space by providing open, programmable, and vertically integrated solutions for the rapidly evolving AI and data center infrastructure markets.

Xsight Labs focuses on two critical components of the “AI factory” or “token machine”: an Ethernet switch chip (X-series) and a Data Processing Unit (DPU) or infrastructure processor (E-series). These products are developed on 5nm technology, are generally available, and the X-series is already in mass production. The company emphasizes a “software-defined infrastructure” philosophy, claiming to be the first chip company to offer wire-speed, energy-efficient, and truly programmable products without compromising performance, price, or power. This agility is crucial given the unpredictable nature of future AI applications, and their open instruction sets and collateral allow for community contributions and custom compilers, accelerating innovation.

The E-series DPU, specifically the E1 800Gb product, is designed from an ARM server perspective rather than as a traditional network interface card, offering 64-core ARM chips with derivatives that optimize for various power and performance needs. The upcoming E1L will be a low-power version targeting control plane markets and programmable SmartNICs. The X-series Ethernet switch, with its X2 12.8 terabit monolithic die, stands out for its exceptionally low power consumption (180W compared to competitors' 300-600W) while maintaining high performance, low latency, and full programmability from Layer 1 to Layer 4 with embedded memory switches. The future X3 will further expand bandwidth and radix through clever die combining, reinforcing Xsight Labs' commitment to innovative, power-efficient, and highly flexible infrastructure solutions.
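As a rough sanity check on the power claim, the short sketch below normalizes the quoted figures to watts per terabit of switching capacity. Only the 12.8 Tb/s capacity and the 180 W versus 300-600 W power numbers come from the presentation; the rest is plain arithmetic.

```python
# Rough watts-per-terabit comparison using the figures quoted above.
CAPACITY_TBPS = 12.8

def watts_per_tbps(power_watts: float, capacity_tbps: float = CAPACITY_TBPS) -> float:
    """Normalize switch power draw to watts per terabit of capacity."""
    return power_watts / capacity_tbps

if __name__ == "__main__":
    print(f"X2 @ 180 W       : {watts_per_tbps(180):.1f} W/Tbps")
    print(f"Competitor 300 W : {watts_per_tbps(300):.1f} W/Tbps")
    print(f"Competitor 600 W : {watts_per_tbps(600):.1f} W/Tbps")
```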

Personnel: Ted Weatherford

The X-Series: Architecting for High Performance Scale with Xsight Labs


Watch on YouTube
Watch on Vimeo

This section provides a high-level overview of the product roadmap, specifically introducing the X-Series and E-Series lineups. It identifies the six critical chips required to build modern AI Factories and explains the concept of a “Truly Software Defined” stack that operates at full line-rate across layers L1-7. This serves as the technical foundation for the subsequent specialized deep dives.

The X-Series, an Ethernet switch, distinguishes itself through a "truly software-defined" programmable architecture, utilizing 3072 Harvard-architecture cores operating on a "run-to-completion" model, unlike competitors' fixed pipelines. This provides unparalleled flexibility, enabling parallel packet operations, recursion, and extensive header processing, including 11 layers of MPLS and various encapsulations. This design is particularly well-suited for emerging AI-centric protocols such as Ultra Ethernet (UEC) and ESON, enabling customizable congestion management and efficient in-flight packet handling. The X-Series achieves significantly lower latency, 450 nanoseconds compared to the typical 800 nanoseconds, and demonstrates exceptional buffer utilization, consistently above 86% even under heavy load.
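To make the architectural contrast concrete, here is a minimal, purely illustrative Python sketch (not Xsight Labs code) of why a run-to-completion model copes with deep, variable header stacks, such as an 11-label MPLS stack, more naturally than a fixed-stage pipeline.

```python
# Illustrative contrast: run-to-completion vs. a fixed pipeline.
# In run-to-completion, a core owns the packet until processing finishes,
# so the amount of work can vary per packet; a fixed pipeline is capped
# by its number of stages.
from dataclasses import dataclass, field

@dataclass
class Packet:
    mpls_labels: list[int] = field(default_factory=list)
    headers_seen: list[str] = field(default_factory=list)

def run_to_completion(pkt: Packet) -> Packet:
    """One core loops over the packet until it is fully processed."""
    while pkt.mpls_labels:                      # data-dependent iteration
        pkt.headers_seen.append(f"mpls:{pkt.mpls_labels.pop(0)}")
    pkt.headers_seen.append("ipv4")             # continue with inner headers
    return pkt

def fixed_pipeline(pkt: Packet, stages: int = 4) -> Packet:
    """A fixed pipeline can only pop as many labels as it has stages."""
    for _ in range(stages):
        if pkt.mpls_labels:
            pkt.headers_seen.append(f"mpls:{pkt.mpls_labels.pop(0)}")
    return pkt

if __name__ == "__main__":
    deep = [100 + i for i in range(11)]         # an 11-label MPLS stack
    print(run_to_completion(Packet(list(deep))).headers_seen)
    print(fixed_pipeline(Packet(list(deep))).headers_seen)  # truncated after 4 stages
```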

The X-Series also stands out for its low power consumption, operating at under 200 watts for a 12.8T switch, which is described as disruptive. Its software-defined physical layer supports diverse SerDes speeds (10G to 200G) and modulation schemes, enabling mix-and-match configurations that connect new and legacy interfaces. The programming model is initially assembler-based with Python wrappers and libraries; customers such as Oxide have developed P4 compilers, and Xsight Labs plans to build its own. This powerful, flexible, and low-power solution is specifically designed for edge deployments, including half-rack to two-rack configurations, satellites, and base stations, delivering significant reductions in power, rack space, and cost. The X-Series product became generally available in November 2022 and has been in mass production since the summer of 2023.
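The mix-and-match idea is easiest to see as simple bandwidth arithmetic. The port configurations below are illustrative assumptions against a 12.8 Tb/s budget, not published Xsight Labs port maps.

```python
# Illustrative port-count arithmetic for a 12.8 Tb/s switch whose lanes
# can be grouped into ports of different speeds.
CAPACITY_GBPS = 12_800
PORT_SPEEDS_GBPS = [100, 200, 400, 800]

for speed in PORT_SPEEDS_GBPS:
    print(f"{CAPACITY_GBPS // speed:4d} x {speed}G ports fills 12.8 Tb/s")

# A mixed configuration, e.g. legacy 100G uplinks alongside 400G AI ports,
# just has to stay within the same aggregate budget (hypothetical split):
mixed = {100: 32, 400: 24}
used = sum(speed * count for speed, count in mixed.items())
assert used <= CAPACITY_GBPS
print(f"Mixed config uses {used} of {CAPACITY_GBPS} Gb/s")
```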

Personnel: John C Carney, Ted Weatherford

The E-Series: Delivering Cloud-on-a-Chip from Xsight Labs


Watch on YouTube
Watch on Vimeo

The E-Series session explores the convergence of storage, networking, compute, and security into a single, cohesive silicon platform. This "Cloud-on-a-chip" approach is dissected through its architecture and programming model to show how it simplifies complex data center environments. The session also highlights the partnership with Hammerspace, demonstrating how E-Series silicon powers a global data environment.

Xsight Labs presents its E-Series, a System-on-a-Chip (SoC) designed to deliver "Cloud-on-a-chip" capabilities by integrating essential cloud elements. This includes Ethernet connectivity, robust security features, virtualized storage, and powerful processing via 64 ARM Neoverse N2 cores. The E-Series chip, which has been generally available for about four months, is offered in various form factors, including a server, an add-in card, and a COMEX module, targeting applications ranging from embedded systems to full servers.

Xsight Labs differentiates its E-Series architecture from traditional DPUs, which typically evolve from a NIC with a constrained CPU cluster. The E-Series began with a server-class compute system featuring 64 ARM Neoverse N2 cores, specifically optimized and sized for data-plane applications. This allows all packets and PCIe transactions to be terminated and processed in software using standard programming models like Linux, DPDK, or SPDK, eliminating proprietary code. The chip integrates an E-unit for Ethernet connectivity, offering inline encryption and stateless offloads, and a P-unit for PCIe Gen 5, providing up to 40 lanes and 800Gb bandwidth. This PCIe unit can software-emulate various devices (storage, networking, RDMA), offering immense flexibility. With a typical power consumption of 50-75W (up to 120W TDP) and a SPECint rating of 170, the E-Series offers significant compute power efficiently. Beyond network and memory encryption, the roadmap for the follow-on E2 product includes CXL support, targeting 1.6T bandwidth.
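The device-emulation idea can be sketched in a few lines. The class and handler names below are hypothetical and illustrate only the concept of presenting different host-facing device personalities from software; they are not the Xsight Labs SDK.

```python
# Toy model of software-emulated PCIe devices: the host-facing personality
# (NVMe storage, virtio-style NIC, etc.) is just a handler registered
# against a PCIe function, so the same hardware can present different
# device types. Purely conceptual; names are invented for this sketch.
from typing import Callable, Dict

class EmulatedEndpoint:
    def __init__(self) -> None:
        self._functions: Dict[int, Callable[[bytes], bytes]] = {}

    def expose(self, function_id: int, handler: Callable[[bytes], bytes]) -> None:
        """Register a software handler that backs one host-visible function."""
        self._functions[function_id] = handler

    def host_request(self, function_id: int, payload: bytes) -> bytes:
        """Dispatch a host transaction to the software handler for that function."""
        return self._functions[function_id](payload)

def nvme_handler(cmd: bytes) -> bytes:
    return b"nvme-completion:" + cmd

def virtio_net_handler(frame: bytes) -> bytes:
    return b"tx-ack:" + frame

endpoint = EmulatedEndpoint()
endpoint.expose(0, nvme_handler)        # function 0 looks like NVMe storage
endpoint.expose(1, virtio_net_handler)  # function 1 looks like a NIC

print(endpoint.host_request(0, b"read-block-42"))
print(endpoint.host_request(1, b"frame"))
```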

The E-Series supports a broad range of use cases, from front-end DPUs in public cloud and AI clusters (offloading the host, providing virtualization and isolation) to back-end DPUs in AI inference clusters for KV cache offload. It also extends to local storage, bump-in-the-wire network appliances for security and load balancing, smart switches for stateful processing, edge servers, and storage target appliances. Xsight Labs provides a comprehensive software development kit, ensuring compatibility with standard ARM server operating systems such as Ubuntu, as well as Linux and DPDK drivers. A key demonstration of the E-Series' capability is its performance on the SONiC DASH "Hero Benchmark," a highly intensive SDN workload. This test requires processing millions of routes, prefixes, and mappings, which depends largely on off-chip DRAM due to poor cache locality. The E1 beat the benchmark requirement of 12 million new connections per second, with 120 million background connections and no packet drops, by almost 20%, while still retaining CPU capacity for control-plane operations, making it the only DPU to pass this test at 800Gb with a single device.
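A back-of-the-envelope calculation shows why this workload spills into off-chip DRAM. Only the connection counts and the roughly 20% margin come from the presentation; the bytes-per-entry and cache-size figures are assumptions chosen for illustration.

```python
# Rough sizing for the DASH "Hero Benchmark" result described above.
REQUIRED_CPS = 12_000_000
MARGIN = 0.20                        # "almost 20%" above the requirement
BACKGROUND_CONNECTIONS = 120_000_000

BYTES_PER_FLOW_ENTRY = 64            # assumption: compact per-flow state record
ONCHIP_CACHE_BYTES = 64 * 2**20      # assumption: tens of MB of on-chip cache

achieved_cps = REQUIRED_CPS * (1 + MARGIN)
flow_table_bytes = BACKGROUND_CONNECTIONS * BYTES_PER_FLOW_ENTRY

print(f"Achieved rate        : ~{achieved_cps/1e6:.1f}M new connections/sec")
print(f"Flow table footprint : ~{flow_table_bytes/2**30:.1f} GiB")
print(f"Cache coverage       : {ONCHIP_CACHE_BYTES/flow_table_bytes:.1%} of the table")
```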

Personnel: John C Carney, Ted Weatherford

Real World Deployments for AI at the Edge with Xsight Labs


Watch on YouTube
Watch on Vimeo

This final technical section transitions from theoretical architecture to practical use cases, spanning Warm Flash Storage to "Extreme Edge" networking satellites. It showcases industry-first milestones, such as the 800G DPU for virtualized hosting and SmartSwitch technology for NIC pooling. Each example demonstrates how the X and E series products solve specific bottlenecks in modern cloud compute and AI storage networks.

Xsight Labs, a nine-year-old fabless semiconductor company, focuses on real-world deployments for AI at the edge using its X-series Ethernet switch and E-series DPU. Its core philosophy centers on being software-defined, appealing to software engineers by offering performance comparable to fixed-function products while providing greater flexibility through an open instruction set architecture and Linux-based programming with tools such as DPDK or Open vSwitch. The company targets the edge market, believing it holds the highest volume, and has designed its single-die products for extreme power efficiency and high performance.

The company’s chips are deployed in diverse settings, from the “extreme edge” to terrestrial wireless infrastructure. A significant win is their integration into Starlink Gen 3 satellites, where multiple Ethernet switches per satellite are being launched at scale. This required Xsight Labs to deliver unparalleled programmability, power efficiency, and resilience against vibration, radiation, and extreme temperatures, crucial for a system that cannot be physically serviced. Similarly, their programmable Ethernet switches and DPUs are ideal for 5.5G or 6G terrestrial wireless infrastructure, addressing the complex, stateful packet-processing needs of antennas and associated processing units. These low-power, single-die solutions offer advantages in temperature range, cost, and operating expenses, including reduced carbon footprint.

Xsight Labs is also targeting the expanding AI market, particularly inference, which is pushing computing out into half-rack, full-rack, and multi-row deployments. Their DPUs serve as front-end and scale-out back-end solutions for these systems, enabling very high-density general compute. Additionally, their Ethernet switches are used to cluster these AI systems, marking a departure from traditional Clos architectures by supporting local clustering topologies such as Dragonfly. For example, in AI training systems similar to Amazon's ultra-servers, Xsight Labs' products with 100G SerDes and 6.4T/12.8T switches can replicate or enhance existing topologies. The Starlink win underscores their capability to provide future-proof, high-performance, and power-efficient solutions essential for the most demanding and inaccessible environments.
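For readers unfamiliar with the topology, the sketch below builds a minimal Dragonfly-style connectivity map, an illustration of the general topology rather than an Xsight Labs reference design: routers within a group are fully meshed and each pair of groups shares a direct global link, so any two endpoints are at most three switch hops apart.

```python
# Minimal Dragonfly-style topology: local full mesh inside each group,
# plus one global link between every pair of groups.
from itertools import combinations

def dragonfly_links(groups: int, routers_per_group: int) -> set:
    links = set()
    # Local links: every router pair within a group is directly connected.
    for g in range(groups):
        members = [(g, r) for r in range(routers_per_group)]
        links.update(combinations(members, 2))
    # Global links: each group pair gets one direct link, spread across routers.
    for ga, gb in combinations(range(groups), 2):
        links.add(((ga, gb % routers_per_group), (gb, ga % routers_per_group)))
    return links

if __name__ == "__main__":
    links = dragonfly_links(groups=4, routers_per_group=4)
    local = sum(1 for (a, b) in links if a[0] == b[0])
    print(f"{local} local links, {len(links) - local} global links")
    # Worst-case path: local hop -> global hop -> local hop (3 hops).
```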

Personnel: John C Carney, Ted Weatherford

Warm Flash for AI Context Storage using the Open Flash Platform with Xsight Labs and Hammerspace


Watch on YouTube
Watch on Vimeo

The Open Flash Platform, co-founded by Xsight Labs and Hammerspace, introduces a new approach to “warm flash storage” for AI context, promising enhanced efficiency and performance. This collaborative effort leverages Hammerspace’s software, built on a Linux-based NFS file system that supports distributed management. By separating metadata from the data path and decentralizing storage, the platform eliminates traditional x86 servers, drastically reducing total cost of ownership, power consumption, and system complexity. This streamlined architecture also minimizes data hops, improving performance for large AI clusters by enabling direct data access to storage targets while managing metadata out of band.

Xsight Labs contributes its E1 DPU, forming the core of a unique cartridge design developed with Lumineer. Each cartridge is a self-contained "server on a cartridge," featuring two 400 GbE ports for NVMe-style fabric and eight flash drives. These cartridges offer exceptional density, allowing for exabyte-scale capacity within a standard seven-foot rack. The E1 chip's ability to run a full Linux operating system makes it more powerful than a simple network interface card, supporting additional use cases beyond mere data transport. Data protection is managed through erasure coding across multiple blades rather than within individual SSDs, embracing a model where the blade is the consumable unit, further simplifying infrastructure and reducing failure points.
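The presentation does not specify the exact erasure-coding scheme, so the sketch below uses the simplest possible code, a single XOR parity stripe across blades, purely to illustrate the idea that the whole blade, rather than an individual SSD, is the unit that fails and gets rebuilt.

```python
# Blade-level protection illustrated with single-parity XOR erasure coding.
def xor_bytes(blocks: list[bytes]) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def protect(data_blades: list[bytes]) -> bytes:
    """Compute one parity blade over equal-sized data blades."""
    return xor_bytes(data_blades)

def rebuild(surviving: list[bytes]) -> bytes:
    """Recover the single missing blade from all the others (data + parity)."""
    return xor_bytes(surviving)

if __name__ == "__main__":
    blades = [b"blade-0.", b"blade-1.", b"blade-2."]   # equal-sized stripes
    parity = protect(blades)
    lost = blades.pop(1)                               # a whole blade fails
    recovered = rebuild(blades + [parity])
    assert recovered == lost
    print("rebuilt:", recovered)
```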

The concept of “warm flash” is vital for AI, as these applications require data that is always ready and accessible, rather than relying on traditional cold storage. This aligns with a growing customer demand to move away from disk drives towards all-flash environments, even for archival purposes, as flash longevity has significantly improved. The Xsight Labs E1 chip is precisely balanced for this, delivering optimal throughput without overprovisioning and ensuring quick data extraction even from very large flash capacities per blade. With significant market interest, the product is moving towards production, with initial releases expected soon and full production anticipated by year-end, underscoring a successful partnership focused on software-defined efficiency.

Personnel: Kurt Kuckein, Ted Weatherford

Interface Masters Smart Switch Delivers 11.2 Tbps Switching and Routing With Xsight Labs DPUs


Watch on YouTube
Watch on Vimeo

Xsight Labs, represented by VP of BizDev Ted Weatherford and distinguished engineer John Carney, introduced a smart switch developed with Interface Masters, designed for government security applications. This 1RU device offers 28 x 400G connectivity using modern QSFP112 100G SerDes, powered by Xsight Labs' X2 12.8 terabit Ethernet switch chip. A key feature is 1.6 terabits of line-rate stateful processing (Layer 4-7) provided by two E1 DPUs, which together offer 128 ARM cores and up to 1 terabyte of DRAM (0.5 TB per DPU). This solution boasts significantly higher compute and networking density per rack unit than incumbent offerings, delivering a "line-rate smart switch" that is available as bare metal.
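As a consistency check, the headline figures in this paragraph follow directly from the per-component numbers quoted above; a quick sketch:

```python
# Aggregate figures for the Interface Masters smart switch, derived from
# the per-component numbers in the presentation summary.
PORTS_400G = 28
DPUS = 2
DPU_BANDWIDTH_GBPS = 800
DPU_ARM_CORES = 64
DPU_DRAM_TB = 0.5

print(f"Switching capacity : {PORTS_400G * 400 / 1000:.1f} Tbps")           # 11.2 Tbps
print(f"Stateful L4-7 rate : {DPUS * DPU_BANDWIDTH_GBPS / 1000:.1f} Tbps")  # 1.6 Tbps
print(f"ARM cores          : {DPUS * DPU_ARM_CORES}")                        # 128
print(f"DPU DRAM           : {DPUS * DPU_DRAM_TB:.1f} TB")                   # 1.0 TB
```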

The presentation also showcased a larger 6.4 terabit top-of-rack (ToR) solution, integrating eight E1 DPU add-in cards into a PCIe motherboard, providing 500 Neoverse N2 ARM cores and up to 2 terabytes of memory. This proof of concept, developed for a major US Cloud Service Provider, demonstrated substantial power and cost savings, up to 25% less power and less than half the cost, compared to traditional deployments. Crucially, both the E1 and X2 chips incorporate comprehensive packet timing features, including a Stratum 3 clock, real-time clock logic for timestamping interfaces, PTP synchronization, and physical PPS in/out connectors, making them suitable for timing-sensitive applications.

Xsight Labs positions its X-series Ethernet switch as a superior ToR upgrade strategy for AI and general compute fabrics. Instead of relying on expensive, high-power, and often oversized 51.2 terabit switches, their solution offers a programmable, high-performance alternative at a fraction of the cost ($1,500 vs. $14,000), consuming significantly less power (300-400 watts) and occupying just one RU. This not only frees up rack space for additional GPUs but also provides lower latency and flexible congestion management. By optimizing for typical ToR needs, where downlink speeds rarely exceed 400 gig per NIC, Xsight Labs aims to reduce overall infrastructure costs and power consumption in data centers and infrastructure-as-a-service environments.
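A rough TCO sketch shows how those per-switch numbers compound at fleet scale. The $1,500 versus $14,000 prices and the 300-400 W figure come from the presentation, while the incumbent's power draw, the ToR count, and the electricity price are assumptions for illustration.

```python
# Illustrative fleet-level savings from the ToR swap described above.
XSIGHT_TOR_PRICE = 1_500          # quoted above
INCUMBENT_TOR_PRICE = 14_000      # quoted above
XSIGHT_TOR_WATTS = 400            # upper end of the quoted 300-400 W range
INCUMBENT_TOR_WATTS = 1_200       # assumption for a 51.2T-class switch
TORS = 2 * 100                    # assumption: 2 ToRs per rack across 100 racks
PRICE_PER_KWH = 0.10              # assumption
HOURS_PER_YEAR = 24 * 365

capex_savings = TORS * (INCUMBENT_TOR_PRICE - XSIGHT_TOR_PRICE)
energy_savings_kwh = TORS * (INCUMBENT_TOR_WATTS - XSIGHT_TOR_WATTS) / 1000 * HOURS_PER_YEAR

print(f"Capex savings         : ${capex_savings:,}")
print(f"Energy saved per year : {energy_savings_kwh:,.0f} kWh "
      f"(~${energy_savings_kwh * PRICE_PER_KWH:,.0f})")
```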

Personnel: John C Carney, Ted Weatherford
