Alastair Cooke explores how ZEDEDA simplifies the orchestration of edge computing applications, with solutions that help IT professionals manage remote computing resources effortlessly. He highlights the operational benefits and scalability ZEDEDA offers, positioning it as a practical choice for businesses looking to optimize their edge computing strategies. For more insights on this topic, follow Alastair Cooke’s coverage on LinkedIn Pulse.
Private Cloud is Not just Self-Service Virtualization
Private cloud is not just Virtualization 4.0; self-service VM deployment doesn’t fulfil the same need as the public cloud. This episode of the Tech Field Day podcast features Mike Graff, Jon Hildebrand, and Alastair Cooke. Private cloud has evolved from simple virtualization to a more comprehensive, cloud-like experience, emphasizing the need for on-premises infrastructure to offer the same developer-friendly tools and APIs as public clouds. Some application repatriation is driven by cost concerns and enabled by the rise of technologies like Kubernetes and OpenShift for managing containerized workloads. A unified control plane for hybrid cloud environments is vital, as is accurate cost accounting for on-premises resources. Enterprises will search for a hybrid approach where developers can deploy applications without needing to worry about the underlying infrastructure.
The Evolution of Cloud at Cloud Field Day 24
Cloud Field Day 24 is back in San Francisco on October 22nd and 23rd, bringing the brightest minds in enterprise cloud together for two days of innovation, insight, and live demos.
Your Edge Projects will Fail Without Fleet Lifecycle Management with ZEDEDA
Projects to deliver applications to edge locations will fail without comprehensive fleet lifecycle management. This episode of the Tech Field Day podcast features Sachin Vasudeva from ZEDEDA discussing the importance of long-term edge management with Guy Currier and Alastair Cooke. Managing edge deployments poses unique challenges compared to cloud or on-premises environments. Focusing on business logic and application outputs, while leveraging infrastructure providers to handle the complexities of packaging, deploying, and monitoring AI models, enables support for diverse edge environments. Edge locations might have different hardware deployed and intermittent connectivity, requiring a balance between standardization and flexibility in managing edge devices and applications. Teams that respond and adapt rapidly will better enable their business to react to changing conditions, especially given the rapid pace of AI innovation.
Unified Flash Memory and Reduced HBM are Reshaping AI Training and Inference with Phison
AI will need less HBM (high bandwidth memory) because flash memory unification is changing training and inference. This episode of the Tech Field Day podcast features Sebastien Jean from Phison, Max Mortillaro, Brian Martin, and Alastair Cooke. Training, fine-tuning, and inference with Large Language Models traditionally use GPUs with high bandwidth memory to hold entire models and data sets. Phison’s aiDaptiv+ framework offers the ability to trade training speed for a lower infrastructure cost, or to allow larger data sets (context) for inference. This approach enables users to balance cost, compute, and memory needs, making larger models accessible without requiring top-of-the-line GPUs and giving smaller companies more access to generative AI.
Agentic AI Spells the End of Dial Twiddlers
Cloud Consumption in Your Data Center With VCF 9.0
Alastair Cooke reviews Cloud Consumption in Your Data Center with VCF 9.0, detailing how VMware Cloud Foundation (VCF) enhances data center operations by simplifying cloud consumption across hybrid environments. He highlights new features and updates in version 9.0, emphasizing improvements in scalability, management, and overall user experience. For comprehensive coverage of this topic, explore additional insights from our special event with Broadcom focused on VMware Cloud Foundation 9 on Techstrong IT.
Qlik Answers from the New Zealand Government 2025 Budget
Alastair Cooke provides a detailed analysis of the New Zealand Government’s 2025 budget using Qlik Answers. Focusing on the implications for the IT and data analytics sectors, the article explores strategic investments and policy changes. You can find additional insights on this topic by Alastair Cooke on LinkedIn Pulse.
Early Adoption of Generative AI Helps Control Costs with Signal65
If you haven’t already, start working with Generative AI now and make sure to control your ongoing costs. This episode of the Tech Field Day podcast features Russ Fellows, Mitch Lewis, and Brian Martin, all from Signal65, and is hosted by Alastair Cooke. Generative AI is delivering value to businesses of all sizes, but significant evolution in models and technologies remains before maturity is achieved. Experimentation is essential to understand the value of new technologies, starting with cloud resources or small-scale on-premises servers. Business value is derived from the inference stage, where AI tools generate actionable information for users. Generative AI is like a knowledgeable and well-intentioned intern; someone more senior must ensure AI is given good instructions and check their work. In production, grounding and guard rails are vital to keep your AI an asset, not a liability.
Datacenter Networking Needs AIOps with HPE Juniper Networks
Enterprise networking is too large and complex; we need AI Operations. This spotlight episode of the Tech Field Day podcast features Bob Friday and Ben Baker, both from Juniper Networks, with Jack Poller and Alastair Cooke. Modern enterprise networks reach far beyond the well-controlled walls of data centres and corporate buildings. The rate of change enabled by public cloud platforms makes an enterprise network highly dynamic. Access to cloud and on-premises applications over the Internet means your users are dependent on many network elements outside of your control. Bob founded Mist Systems to help businesses manage the complexity of user-to-cloud networking. Juniper Networks acquired Mist, and now HPE has acquired Juniper. I don’t think he is alone in seeing the necessity of using AI to manage complex and critical networks. Yet new tools always bring new challenges; the cost of AI infrastructure may be a concern, and Generative AI has challenges with hallucinations. The security and governance practices around AI tools are still developing, and the non-deterministic nature of AI needs careful consideration.
Simplifying Cloud Application Resilience in a Dynamic World
Alastair Cooke discusses the crucial need for resilient cloud applications in today’s dynamic digital environment, emphasizing the challenges and solutions for enterprises aiming to maintain robust operations amid frequent changes. He explores how simplifying the resilience of cloud-based applications can significantly benefit businesses by enhancing their ability to adapt and respond to new demands and potential disruptions. You can explore more coverage of Cloud Field Day 23 by Alastair Cooke on DevOps.com.
Satellite Data’s Journey: How Ring is Helping ESA Manage Petabytes of Information
Alastair Cooke explores how RING, Scality’s data storage and processing solution, is revolutionizing the way the European Space Agency (ESA) manages vast amounts of satellite data. He details the innovative methods and technologies ESA employs to handle and utilize petabytes of information more efficiently. For additional insights into Cloud Field Day 23, check out Alastair Cooke’s coverage on Techstrong IT.
Securing the Future: Juniper’s Approach to AI Data Center Security
Alastair Cooke recently discussed Juniper Networks’ strategy in enhancing AI-driven data center security, highlighting their comprehensive approach to protect complex network environments. He emphasized how Juniper leverages AI to automate threat detection and response, ensuring a more resilient infrastructure. For additional insights on Cloud Field Day 23, you can read more from Alastair Cooke on Techstrong AI.
The Easy On-Ramp to Private AI with Nutanix Enterprise AI
Alastair Cooke recently highlighted how Nutanix is simplifying the integration of AI technologies within private infrastructures, positioning their Enterprise AI as an easy on-ramp for companies keen on adopting private AI solutions. He emphasized the platform’s ability to support various AI and machine learning workloads, enhancing agility and operational efficiency without compromising security. For more comprehensive insights, check out Alastair Cooke’s coverage of Cloud Field Day 23.
SSD Innovation for AI from Solidigm
Alastair Cooke recently explored Solidigm’s advancements in SSD technology tailored for AI applications. Highlighting the impact of next-generation SSDs, Cooke discusses how these innovations enhance the data processing speeds crucial for AI workloads. For more on AI Infrastructure Field Day 2, you can find comprehensive coverage by Alastair Cooke.
A Different Type of Datacenter is Needed for AI
AI demands specialized data center designs due to its unique hardware utilization and networking needs, which require a new type of infrastructure. This episode of the Tech Field Day podcast features Denise Donohue, Karen Lopez, Lino Telera, and Alastair Cooke. Network design has been a consistent part of the AI infrastructure discussions at Tech Field Day events. The need for a dedicated network to interconnect GPUs differentiates AI training and fine-tuning networks from general-purpose computing. The vast power demand of high-density GPU servers highlights a further need for different data centers with liquid cooling and massive power distribution. Model training is only one part of the AI pipeline; business value is delivered by AI inference, which has a different set of needs and a closer eye on financial management. Inference will likely require servers with GPUs and high-speed local storage, but not the same networking density as training and fine-tuning. Inference will also need servers adjacent to the existing general-purpose infrastructure running existing business applications. Some businesses may be able to fit their AI applications into their existing data centers, but many will need to build or rent new infrastructure.
Make an AI-Ready Data Center With Help From Juniper
Alastair Cooke explores the crucial role of Juniper Networks in preparing data centers for AI workloads, emphasizing optimized network architecture that supports the demanding requirements of AI technologies. He discusses Juniper’s solutions that streamline operations and enhance the efficiency necessary for handling intensive AI-driven processes. For additional insights on AI Infrastructure Field Day 2, see Alastair Cooke’s coverage on The Futurum Group.
Scaling Smarter Optimizes Cloud Costs in the Age of Data Abundance
Keeping every application and every scrap of data on the public cloud becomes very expensive; we need to improve our cloud economics. This episode of the Tech Field Day podcast features Vriti Magee, Mitch Lewis, and Alastair Cooke. The belief that data is the new oil has led many companies to retain every piece of data they generate, often in object storage on public cloud platforms. The continuous growth of this data leads to a growing bill from the cloud provider, often with no clear plan in place for recouping the value of the money spent. Generative AI requires training data, which is another reason to retain everything; again, there needs to be value returned to the business. New designs for cloud applications must include data management and managed retention as key criteria. Sustainable, honest designs that enable business change are vital for delivering value back to the business.