A Different Type of Datacenter is Needed for AI



AI demands specialized data center designs because of its unique hardware utilization and networking needs. This Tech Field Day Podcast episode features Denise Donohue, Karen Lopez, Lino Telera, and Alastair Cooke.

Network design has been a consistent part of the AI infrastructure discussions at Tech Field Day events. The need for a dedicated network to interconnect GPUs differentiates AI training and fine-tuning networks from general-purpose computing. The vast power demand of high-density GPU servers further calls for a different kind of data center, with liquid cooling and massive power distribution.

Model training is only one part of the AI pipeline; business value is delivered by AI inference, which has a different set of needs and a closer eye on financial management. Inference will likely require servers with GPUs and high-speed local storage, but not the same networking density as training and fine-tuning. Inference servers will also need to sit adjacent to the existing general-purpose infrastructure running current business applications. Some businesses may be able to fit their AI applications into their existing data centers, but many will need to build or rent new infrastructure.

Panelists

Alastair Cooke

@DemitasseNZ

Alastair is a Tech Field Day event lead at The Futurum Group, specializing in Cloud, DevOps, and Edge.

Denise Donohue

Business architect & technical author, still fascinated by new shiny things.

Karen Lopez

@DataChick

Data Evangelist and Architect

Lino Telera

@LinoTelera

Platform Engineer focused on IaC automation @ InfoCert S.p.A., blogger, podcaster