Unified Flash Memory and Reduced HBM are Reshaping AI Training and Inference with Phison

AI will need less HBM (high bandwidth memory) because flash memory unification is changing training and inference. This episode of the Tech Field Day podcast features Sebastien Jean from Phison, Max Mortillaro, Brian Martin, and Alastair Cooke. Training, fine-tuning, and inference with Large Language Models traditionally use GPUs with high bandwidth memory to hold entire models and data sets. Phison’s aiDaptiv+ framework lets users trade training speed for lower infrastructure cost, or support larger data sets (context) for inference. This approach enables users to balance cost, compute, and memory needs, making larger models accessible without requiring top-of-the-line GPUs and giving smaller companies more access to generative AI.
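The trade-off the episode describes comes down to simple arithmetic: model weights often exceed the HBM on a single GPU, so the overflow must live in a slower, cheaper tier such as system RAM or flash. A minimal sketch of that calculation, with illustrative (not Phison-published) sizes:

```python
# Illustrative only: rough GPU memory footprint for an LLM, showing why
# offloading overflow to flash/SSD can let a smaller GPU handle a larger
# model. The model size and GPU capacity below are assumptions.

def model_memory_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Memory to hold model weights (fp16/bf16 = 2 bytes per parameter)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# A 70B-parameter model in fp16 needs ~140 GB just for weights --
# more than a single 80 GB GPU can hold in HBM.
weights = model_memory_gb(70)           # 140.0 GB
gpu_hbm = 80                            # e.g. one high-end GPU
overflow = max(0.0, weights - gpu_hbm)  # must live in slower, cheaper tiers
print(f"weights: {weights:.0f} GB, HBM: {gpu_hbm} GB, "
      f"offloaded to system RAM/flash: {overflow:.0f} GB")
```

The slower tier costs training speed or inference latency, which is exactly the cost-versus-performance dial the framework exposes.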


Software is Automating Your AI Data Centre Infrastructure

Hardware always matters, especially in AI, and now software is automating your AI data centre infrastructure. This episode of the Tech Field Day podcast features Gina Rosenthal, Barton George, Andy Banta, and Alastair Cooke. Generative AI brought new hardware into enterprise data centres; GPUs, TPUs, NPUs, and XPUs all offload AI processing from CPUs for more performance and efficiency. Feeding these accelerators requires fast networks and fast storage, common topics at AI Infrastructure Field Day events. In parallel, sophisticated software to automate the deployment and operation of this new hardware is vital to deliver value quickly and maximize the return on the hardware investment. Automation platforms are moving up the stack towards delivering multiple AI applications on shared XPU infrastructure, where AI inference delivers the business value.


Pushing the Boundaries of AI Performance, Scale, and Innovation at AI Infrastructure Field Day 3

Tech Field Day is heading back to Santa Clara, California on September 10th and 11th for AI Infrastructure Field Day 3. You can watch live on the Tech Field Day website, LinkedIn page, or Techstrong TV to see how the boundaries of performance, scalability, and innovation are being pushed by our presenting companies. The event […]



Cloud Consumption in Your Data Center With VCF 9.0

Alastair Cooke reviews Cloud Consumption in your Data Center with VCF 9.0, detailing how VMware Cloud Foundation (VCF) enhances data center operations by simplifying cloud consumption across hybrid environments. He highlights new features and updates in version 9.0, emphasizing improvements in scalability, management, and overall user experience. For comprehensive coverage of this topic, explore additional insights following our special event with Broadcom focused on VMware Cloud Foundation 9 on Techstrong IT.


Qlik Answers from the New Zealand Government 2025 Budget

Alastair Cooke takes a detailed look at using Qlik Answers, Qlik’s generative AI question-answering tool, to query the New Zealand Government’s 2025 budget. The article explores what the tool can surface from the budget documents about strategic investments and policy changes relevant to the IT and data analytics sectors. You can find additional insights on this topic by Alastair Cooke on LinkedIn Pulse.


Early Adoption of Generative AI Helps Control Costs with Signal65

If you haven’t already, start working with Generative AI now and make sure to control your ongoing costs. This episode of the Tech Field Day podcast features Russ Fellows, Mitch Lewis, and Brian Martin, all from Signal65, and is hosted by Alastair Cooke. Generative AI is delivering value to businesses of all sizes, but significant evolution in models and technologies remains before maturity is achieved. Experimentation is essential to understand the value of new technologies, starting with cloud resources or small-scale on-premises servers. Business value is derived from the inference stage, where AI tools generate actionable information for users. Generative AI is like a knowledgeable and well-intentioned intern; someone more senior must ensure AI is given good instructions and check their work. In production, grounding and guard rails are vital to keep your AI an asset, not a liability.


Datacenter Networking Needs AIOps with HPE Juniper Networks

Enterprise networking is too large and complex; we need AI Operations. This spotlight episode of the Tech Field Day podcast features Bob Friday and Ben Baker, both from Juniper Networks, with Jack Poller and Alastair Cooke. Modern enterprise networks reach far beyond the well-controlled walls of data centres and corporate buildings. The rate of change enabled by public cloud platforms makes an enterprise network highly dynamic. Access to cloud and on-premises applications over the Internet means your users depend on many network elements outside of your control. Bob founded Mist Systems to help businesses manage the complexity of user-to-cloud networking. Juniper Networks acquired Mist, and now HPE has acquired Juniper. I don’t think he is alone in seeing the necessity of using AI to manage complex and critical networks. Yet new tools always bring new challenges: the cost of AI infrastructure may be a concern, and Generative AI has challenges with hallucinations. The security and governance practices around AI tools are still developing, and the non-deterministic nature of AI needs careful consideration.


Simplifying Cloud Application Resilience in a Dynamic World

Alastair Cooke discusses the crucial need for resilient cloud applications in today’s dynamic digital environment, emphasizing the challenges and solutions for enterprises aiming to maintain robust operations amid frequent changes. He explores how simplifying the resilience of cloud-based applications can significantly benefit businesses by enhancing their ability to adapt and respond to new demands and potential disruptions. You can explore more coverage of Cloud Field Day 23 by Alastair Cooke on DevOps.com.


Satellite Data’s Journey: How Ring is Helping ESA Manage Petabytes of Information

Alastair Cooke explores how RING, Scality’s software-defined storage solution, is transforming the way the European Space Agency (ESA) manages vast amounts of satellite data. He details the methods and technologies ESA employs to handle and utilize petabytes of information more efficiently. For additional insights into Cloud Field Day 23, check out Alastair Cooke’s coverage on Techstrong IT.


Securing the Future: Juniper’s Approach to AI Data Center Security

Alastair Cooke recently discussed Juniper Networks’ strategy in enhancing AI-driven data center security, highlighting their comprehensive approach to protect complex network environments. He emphasized how Juniper leverages AI to automate threat detection and response, ensuring a more resilient infrastructure. For additional insights on Cloud Field Day 23, you can read more from Alastair Cooke on Techstrong AI.


The Easy On-Ramp to Private AI with Nutanix Enterprise AI

Alastair Cooke recently highlighted how Nutanix is simplifying the integration of AI technologies within private infrastructures, positioning their Enterprise AI as an easy on-ramp for companies keen on adopting private AI solutions. He emphasized the platform’s ability to support various AI and machine learning workloads, enhancing agility and operational efficiency without compromising security. For more comprehensive insights, check out Alastair Cooke’s coverage of Cloud Field Day 23.


SSD Innovation for AI from Solidigm

Alastair Cooke recently explored the advancements in SSD technology tailored for AI applications, specifically from Solidigm. Highlighting the impact of next-generation SSDs, Cooke discusses how these innovations enhance data processing speeds crucial for AI workloads. For more on AI Infrastructure Field Day 2, you can find comprehensive coverage by Alastair Cooke.


A Different Type of Datacenter is Needed for AI

AI demands specialized data center designs due to its unique hardware utilization and networking needs, which require a new type of infrastructure. This Tech Field Day Podcast episode features Denise Donohue, Karen Lopez, Lino Telera, and Alastair Cooke. Network design has been a consistent part of the AI infrastructure discussions at Tech Field Day events. The need for a dedicated network to interconnect GPUs differentiates AI training and fine-tuning networks from general-purpose computing. The vast power demand for high-density GPU servers highlights a further need for different data centers with liquid cooling and massive power distribution. Model training is only one part of the AI pipeline; business value is delivered by AI inference with a different set of needs and a closer eye on financial management. Inference will likely require servers with GPUs and high-speed local storage, but not the same networking density as training and fine-tuning. Inference will also need servers adjacent to existing general-purpose infrastructure running existing business applications. Some businesses may be able to fit their AI applications into their existing data centers, but many will need to build or rent new infrastructure.


Make an AI-Ready Data Center With Help From Juniper

Alastair Cooke explores the crucial role of Juniper Networks in preparing data centers for AI workloads, emphasizing optimized network architecture that supports the demanding requirements of AI technologies. He discusses Juniper’s solutions that streamline operations and enhance the efficiency necessary for handling intensive AI-driven processes. For additional insights on AI Infrastructure Field Day 2, see Alastair Cooke’s coverage on The Futurum Group.


Scaling Smarter Optimizes Cloud Costs in the Age of Data Abundance

Keeping every application and every scrap of data on the public cloud becomes very expensive; we need to improve our cloud economics. This episode of the Tech Field Day podcast features Vriti Magee, Mitch Lewis, and Alastair Cooke. The belief that data is the new oil has led many companies to retain every piece of data they generate, often in object storage on public cloud platforms. The continuous growth of this data leads to a growing bill from the cloud provider, often with no clear plan in place for recouping the value of the money spent. Generative AI requires training data, which is another reason to retain everything; again, there needs to be value returned to the business. New designs for cloud applications must include data management and managed retention as key criteria. Sustainable, honest designs that enable business change are vital for delivering value back to the business.
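The "growing bill" the panel describes compounds: storage charges accrue every month on an ever-larger data set. A small sketch of that dynamic, using an assumed price and growth rate rather than any provider's published figures:

```python
# Illustrative sketch of how "keep everything" object storage bills compound.
# The $0.023/GB-month price and 5%/month data growth are assumptions, not
# any specific cloud provider's rates.

def cumulative_storage_cost(start_tb: float, monthly_growth: float,
                            price_per_gb_month: float, months: int) -> float:
    """Total spend over `months`, with the stored data compounding monthly."""
    total, tb = 0.0, start_tb
    for _ in range(months):
        total += tb * 1000 * price_per_gb_month  # TB -> GB for billing
        tb *= 1 + monthly_growth
    return total

# 100 TB growing 5% a month, priced at $0.023/GB-month, over three years
print(f"${cumulative_storage_cost(100, 0.05, 0.023, 36):,.0f}")
```

Running the three-year scenario makes the point: the bill is dominated by data retained "just in case", which is why managed retention belongs in the design criteria.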


Exploring Cloud Resilience, AI, and Data at Cloud Field Day 23

Cloud Field Day is making its highly anticipated return to San Francisco on June 4th and 5th, bringing together some of the biggest names in cloud technology for two days of in-depth insights and live demos. You can catch every moment of the action live on the Tech Field Day LinkedIn page and Techstrong TV. […]


Build Your Own AI Infrastructure Using Google Cloud

Alastair Cooke explores the practicalities and advantages of constructing your own AI infrastructure using Google Cloud, highlighting the accessibility and customization benefits that come with building a bespoke environment. He provides insights into how organizations can leverage Google Cloud’s robust tools and services to tailor AI solutions to their specific needs, enhancing both efficiency and scalability. For additional insights and extensive coverage of AI Infrastructure Field Day 2, see Alastair Cooke’s coverage on The Futurum Group.


The Unknown Unknowns of Cloud Providers with Catchpoint

Your Internet application is full of unknowns that will affect its performance and availability for your customers. This episode of the Tech Field Day Podcast features Catchpoint CEO and co-founder Mehdi Daoudi, Eric Wright, Jon Myer, and Alastair Cooke. Internet applications are seldom self-contained, relying on other web services for specialized functions and needing responses from those services before a final response to a user. Functions such as DDoS protection, tracking, embedded advertising, and other valuable services enable faster application feature development, but at what cost? Any delayed response from these services can slow down your application for your users, leading to dissatisfaction, even when your servers perform beautifully. Remember that the services you choose to use may, in turn, use other external services. Catchpoint champions user-centric monitoring and Internet Performance Monitoring (IPM) to complement existing APM tools. Visibility of issues outside your data center is vital to identifying issues before they become helpdesk tickets or application outages. If this Tech Field Day Podcast episode piques your interest, watch the Catchpoint appearance at Cloud Field Day on YouTube.
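The core idea of watching dependencies outside your own data center can be sketched in a few lines: time every external service your application calls, not just your own servers. This is a minimal illustration of the concept, not Catchpoint's product; the dependency URLs are hypothetical placeholders:

```python
# Minimal sketch of user-centric dependency monitoring: time each external
# service an application relies on, so a slow third party is visible before
# it becomes a helpdesk ticket. The URLs below are hypothetical examples.
import time
import urllib.request

def time_dependency(url: str, timeout: float = 5.0):
    """Return (url, seconds to first response), or (url, None) on failure."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return url, time.perf_counter() - start
    except OSError:
        return url, None

# Check every external service the app depends on, not just your own servers
for dep in ("https://ads.example.com/v1/ping",       # hypothetical dependency
            "https://tracker.example.com/health"):   # hypothetical dependency
    url, latency = time_dependency(dep)
    status = f"{latency * 1000:.0f} ms" if latency is not None else "FAILED"
    print(f"{url}: {status}")
```

A real IPM platform adds what a script like this cannot: measurement from many vantage points near your users, so you see the Internet paths between them and you.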


Managing Hybrid Cloud Networks Complexity with Infoblox

Managing hybrid-cloud networks is complex due to differing architectures and naming between on-premises networks and the multiple public cloud platforms. This Tech Field Day Podcast episode features Glenn Sullivan, Senior Director of Product Management at Infoblox, Eric Wright, and Alastair Cooke. Each public cloud has a unique management console and network management paradigm; none provides deep integration with each other or with on-premises networking. It is left to individual customers to assemble a jigsaw of pieces into a coherent whole. Customers may not plan to use multiple public clouds, but through different project requirements or mergers and acquisitions, most large organizations find themselves in a hybrid multi-cloud environment. Combining fast-changing public cloud applications with on-premises applications further complicates network management, requiring an automation-based approach. Infoblox Universal DDI (DNS, DHCP, and IPAM) provides a consistent, automatable interface to manage and operate basic network infrastructure across all enterprise locations. Universal DDI includes bi-directional operation where changes made in cloud-native consoles are visible in Universal DDI and vice versa.