The big headlines around the massive funding of large AI companies are a distraction from the reality that AI is being built and used in business applications. This episode of the Tech Field Day podcast features Frederic Van Haren, Chris Grundemann, Brian Martin, and Alastair Cooke reflecting after AI Infrastructure Field Day in Santa Clara. Popular news often covers the creation of large, general-purpose AI models, yet the real-world application of AI through inference is where most companies see a return on their investment. Similarly, “AI” is commonly understood as a single topic, without a more granular view that differentiates between rules-based systems, traditional machine learning, and emergent generative models like Large Language Models (LLMs). Specialized AI models will be vital for cost-effective applications, with enhanced efficiency and the integration of diverse AI capabilities into agentic architectures. Advanced security protocols and regulatory frameworks are vital to mitigate novel vulnerabilities, and organizations must adapt to an extraordinarily rapid pace of technological evolution. AI has already had a profound impact on software development, potentially enabling widespread custom application creation.
AI Needs Resource Efficiency
As we build out AI infrastructure and applications we need resource efficiency; continuously buying more horsepower cannot go on forever. This episode of the Tech Field Day podcast features Pete Welcher, Gina Rosenthal, Andy Banta, and Alastair Cooke hoping for a more efficient AI future. Large language models are trained using massive farms of GPUs and massive amounts of Internet data, so we expect to use large farms of GPUs and unstructured data to run those LLMs. Those large farms have led to a scarcity of GPUs, and now to RAM price increases, which are impeding businesses building their own large AI infrastructure. Task-specific AIs that use smaller, more efficient models should be the future of agentic AI and of AI embedded in applications. More efficient and targeted AI may be the only way to get business value from the investment, especially in resource-constrained edge environments. Does every AI problem need a twenty-billion-parameter model? More mature use of LLMs and AI will focus on reducing the cost of delivering inference to applications, your staff, and your customers.
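The question of whether every problem needs a twenty-billion-parameter model comes down to simple arithmetic. A rough, illustrative sketch (the function and figures below are our own back-of-the-envelope estimate, not from the episode) of the memory footprint of model weights at different sizes and precisions:

```python
# Illustrative sketch: approximate memory needed just to hold model weights,
# ignoring KV cache, activations, and framework overhead.
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Memory in GB for a model's weights at a given precision."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# A 20B-parameter model in fp16 (2 bytes/param) needs ~40 GB for weights
# alone, beyond most single GPUs.
print(weight_memory_gb(20, 2))    # → 40.0

# A 3B-parameter model quantized to 4-bit (0.5 bytes/param) fits in ~1.5 GB,
# small enough for resource-constrained edge hardware.
print(weight_memory_gb(3, 0.5))   # → 1.5
```

The gap between those two numbers is the economic argument for task-specific models: the smaller model runs on commodity or edge hardware while the larger one demands scarce data-center GPUs.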
Your Security Strategy Needs Graphs
Modern security needs more than checklists. Instead of working down a process, you need to start thinking like the attackers trying to get into your systems. In this episode of the Tech Field Day Podcast, Jay Cuthrell and Girard Kavelines join Tom Hollingsworth to discuss how Microsoft Sentinel helps bring this new security strategy to your environment.
They discuss the advances that have been made in data lake technology, including the increased retention time offered by Microsoft. They also talk about the way that graphs help train new security analysts to understand the way that attackers think. They discuss how you can adapt your plans in the new year to take advantage of new offerings and the questions you should be asking to make sure you’re not missing out.
2026 Is A Bright Year for HPE and Juniper
2025 was a big year for both HPE and Juniper, but 2026 is the year when the real integration starts. In this episode of the Tech Field Day podcast, Tom Hollingsworth is joined by wireless experts Jonathan Davis and Keith Parsons as they look at the acquisition of Juniper Networks by HPE and what the future holds for the new combined company.
They discuss how the culture of the two companies is coming together as well as the unification of their management platforms. They look at the differences between Juniper Mist and Aruba Central and which platform will be the one users log into going forward. The guests also look at the competition between Cisco and HPE and how the race will develop as customers look to refresh their installations and what they are likely to choose.
Tech Field Day Takeaways for 2025: Making AI Less Manic
2025 was the year of AI Mania. Everyone wants you to know where they stand with AI in their product. Tech Field Day has a different approach. In this special year-end episode, Tom Hollingsworth, Stephen Foskett, and Alastair Cooke look back at the discussions and deep dives into AI and how Tech Field Day grounded them all in practical, real-world terms. Our event leads discuss the boring reality of AI tools and how AI has a dual nature that must be understood to get real value. They also debate the changing landscape of security where AI is concerned, including the importance of data sovereignty. They wrap up with a focus on the fundamentals and how Tech Field Day continues to make those important for the community at large.
Generative AI Coding Tools Make Enterprise Applications Worse
AI is writing a large proportion of modern software, and generative AI coding tools make enterprise applications worse. This episode of the Tech Field Day podcast looks at AI-generated applications with Calvin Hendryx-Parker, Jim Czuprynski, Jay Cuthrell, and Alastair Cooke. Satya Nadella says that up to 30% of the code Microsoft writes is AI generated, and AWS is at about 25%. We ponder whether there is a link between this AI-generated code and the quality of the Windows 11 codebase, and possibly even the recent AWS outage. Calvin has hands-on experience with a range of AI coding tools, finding he uses different AI tools for specialist tasks in his development projects. The easy task for AI coding is translating existing applications from one platform version to another, or rewriting existing application code in new languages. Both these tasks are onerous for human developers and ideal for an AI assistant. The unanswered question is whether generative AI tools can handle creating new functionality in enterprise applications: can AI fulfill the role of the senior developer or software architect?
Simplification of IT is Really an Illusion
Simplification in IT is an illusion; increasing complexity outpaces every effort to simplify. This episode of the Tech Field Day podcast, recorded on-site at Cloud Field Day 24, features Camberley Bates, Nathan Nielsen, Guy Currier, and Alastair Cooke. Cloud services and centralized management platforms offer simplified interfaces but also introduce a multitude of choices and underlying complexities. History matters; advancements from mainframes to PCs demonstrate continuously shifting goalposts, while the more recent integration of cloud and AI contributes to increased complexity. AI may bring genuinely advanced simplicity, yet it may also have the unintended consequence of people becoming “ignorant” of how IT works. CIOs and CTOs need to think strategically to manage increasingly complex environments, striking a balance between patchwork fixes and long-term strategic approaches.
DNS Must Be Secured Presented by Infoblox
DNS security is no longer optional. This service is not only being attacked by nefarious actors but it is also being leveraged in ways to compromise users and exfiltrate data. In this episode of the Tech Field Day podcast, brought to you by Infoblox, Tom Hollingsworth is joined by Jack Poller and Cricket Liu. They talk about the historical openness of DNS and how that has led to it becoming easy to see what users are doing and create ways to manipulate them. They discuss ways to secure the protocol and how companies like Infoblox are extending the capabilities for future security.
Well-Managed Kubernetes Means Infrastructure Finally Doesn’t Matter
In a world of well-managed Kubernetes, we hoped that infrastructure finally wouldn’t matter. This episode of the Tech Field Day podcast features John Willis and Guy Currier wishing that infrastructure didn’t matter, with Alastair Cooke. Every new infrastructure revolution claims to make infrastructure invisible, from virtualization through HCI and cloud to containers and Kubernetes. The reality has always been that these revolutions shift the definition of infrastructure and bring some new aspect to be managed. Developers building features and applications want to focus on satisfying some business need, not considering storage devices and network configurations. Virtualization and Kubernetes both made delivering infrastructure easier, but neither eliminated infrastructure architecture and management. The dream of self-deploying and self-organizing infrastructure is as distant as it ever was. Agentic AI is the latest new hope to eliminate infrastructure challenges, yet it brings its own complex infrastructure requirements. Will we ever stop caring about IT infrastructure?
NetAIOps Has Its Challenges
The industry has embraced AI for every possible problem. Operations will eventually embrace it as well but questions remain about how it will be implemented. In this episode, Tom Hollingsworth sits down with Pete Welcher, Rita Younger, and Jonathan Davis to discuss the issues that remain with implementing AI into an operations workflow. They discuss licensing and procurement, the need for institutional knowledge, and how this will all work in a multivendor world. They wrap up with some guidance about how to approach your next big AIOps project.
Ready or Not, AI is Coming to the Enterprise
Despite widespread skepticism, AI is already widely used in the enterprise, often in the form of so-called shadow applications outside traditional IT. This episode of the Tech Field Day Podcast, recorded on the eve of AI Field Day, features delegates Ryan Booth and Dave Graham discussing the real state of AI adoption in the enterprise with host Stephen Foskett. Just like the advent of the PC, generative AI is widely used across businesses, typically on a bring-your-own basis rather than as a coordinated effort by the IT department. The same process happened in the Software-as-a-Service world, where each department and even individual adopted multiple tools that met their needs. There will soon be a reckoning, where businesses try to get their hands around all of the AI applications being used across the enterprise. The next step is to develop a plan to control sprawl of tools, models, data, and subscriptions to ensure that this shadow AI doesn’t become a risk to the company. Then companies need to be prepared as AI agents become critical to their operations, likely also deployed by individuals without corporate control.
Private Cloud is Not just Self-Service Virtualization
Private cloud is not just virtualization 4.0; self-service VM deployment doesn’t fulfill the same need as the public cloud. This episode of the Tech Field Day podcast features Mike Graff, Jon Hildebrand, and Alastair Cooke. Private cloud has evolved from simple virtualization to a more comprehensive, cloud-like experience, emphasizing the need for on-premises infrastructure to offer the same developer-friendly tools and APIs as public clouds. Some application repatriation is driven by cost concerns and enabled by the rise of technologies like Kubernetes and OpenShift for managing containerized workloads. A unified control plane for hybrid cloud environments is vital, as is accurate cost accounting for on-premises resources. Enterprises will search for a hybrid approach where developers can deploy applications without needing to worry about the underlying infrastructure.
Your Edge Projects will Fail Without Fleet Lifecycle Management with ZEDEDA
Projects to deliver applications to edge locations will fail without comprehensive fleet lifecycle management. This episode of the Tech Field Day podcast features Sachin Vasudeva from ZEDEDA discussing the importance of long-term edge management with Guy Currier and Alastair Cooke. There are unique challenges in managing edge deployments compared to cloud or on-premises environments. Focusing on business logic and application outputs, while leveraging infrastructure providers to handle the complexities of packaging, deploying, and monitoring AI models, enables diverse edge environments. Edge locations might have different hardware deployed and intermittent connectivity, requiring a balance between standardization and flexibility in managing edge devices and applications. Teams with rapid responsiveness and adaptation will better enable their business to respond to changing conditions, especially with the rapid pace of AI innovation.
Every AI Strategy Needs a Data Protection Strategy with Commvault
Every company lives in fear of a ransomware attack, whether they have suffered one or not, and this is even more critical in the era of AI. This episode of the Tech Field Day Podcast looks forward to Commvault SHIFT in November with a discussion of the importance of data protection to AI applications with Tim Zonca from Commvault, frequent delegate Gina Rosenthal, and host Stephen Foskett. AI applications are reliant on good data, and yet this same technology makes it easier for attackers to breach corporate controls. Today’s social engineering and phishing are more convincing than ever thanks to generative AI, and this has helped ransomware crews to mount larger and more powerful attacks. Ransomware is a massive business, and it isn’t going away any time soon. At the same time, GenAI applications offer a new attack surface, as agentic AI is empowered to take action based on untrusted inputs. Not only can we not stop ransomware, but the pace and technical capabilities of these attacks keep accelerating. There is reason for optimism, however, as data protection tools keep getting better. Today’s AI-optimized tools can effectively categorize data, burst or migrate to different locations, and roll back or recover from corruption or compromise. In the future, we will see increasing use of AI to monitor systems and data, detecting patterns and hardening the attack surface.
Moving Enterprise AI Applications From Experiments to Production with NetApp
Running enterprise applications in production is a lot different from the AI experiments many of us have been involved with so far. This episode of the Tech Field Day podcast, recorded prior to NetApp Insight 2025, features Ingo Fuchs from NetApp along with Gina Rosenthal, Glenn Dekhayser, and Stephen Foskett. AI applications often start as experiments with a limited data set, but once these are moved to production there are many critical decisions to be made. Data must be classified and cleaned, removing personal and financial data and proprietary information before it even reaches an LLM. Data also must be structured for embedding and vectorization prior to use by an LLM. And we have to ensure that data is up to date, or the application will not serve the customer properly. Finally, we have to consider whether it is proper and ethical to share and act on this data. Many of the challenges facing modern AI applications are similar to the historic issues faced by enterprise storage, and this is an area in which NetApp and their customers have decades of experience.
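The classify-and-clean step described above can be pictured as a small pipeline: mask personal identifiers, then split the cleaned text into chunks sized for embedding. This is a minimal illustrative sketch of that idea (the patterns, function names, and chunk size are our own assumptions, not NetApp's implementation):

```python
import re

# Hypothetical patterns for two kinds of personal data; a production
# classifier would cover far more (names, account numbers, addresses...).
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Mask personal identifiers before the text ever reaches an LLM."""
    text = EMAIL.sub("[EMAIL]", text)
    return SSN.sub("[SSN]", text)

def chunk(text: str, size: int = 200) -> list[str]:
    """Split cleaned text into fixed-size chunks ready for vectorization."""
    return [text[i:i + size] for i in range(0, len(text), size)]

doc = "Contact jane@example.com, SSN 123-45-6789, about the Q3 report."
print(redact(doc))  # → Contact [EMAIL], SSN [SSN], about the Q3 report.
```

The point of the sketch is ordering: redaction happens before chunking and embedding, so no vector store or model prompt ever contains the raw identifiers.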
Passkeys are the Future
Passwords create friction, and therefore users find ways around them. New technology such as secure enclaves and PKI allows us to create better solutions like passkeys. In this episode of the Tech Field Day Podcast, Alan Shimel and Kate Scarcella join Tom Hollingsworth to discuss the problems with traditional passwords and how passkeys overcome them. They also talk about why it has taken so long to adopt passkeys and what barriers remain to full implementation. They wrap up with a look at what might lie ahead for the future of user security.
Unified Flash Memory and Reduced HBM are Reshaping AI Training and Inference with Phison
AI will need less HBM (high bandwidth memory) because flash memory unification is changing training and inference. This episode of the Tech Field Day podcast features Sebastien Jean from Phison, Max Mortillaro, Brian Martin, and Alastair Cooke. Training, fine-tuning, and inference with Large Language Models traditionally use GPUs with high bandwidth memory to hold entire data models and data sets. Phison’s aiDaptiv+ framework offers the ability to trade lower cost of infrastructure against training speed or allow larger data sets (context) for inference. This approach enables users to balance cost, compute, and memory needs, making larger models accessible without requiring top-of-the-line GPUs, and giving smaller companies more access to generative AI.
Agentic AI Spells the End of Dial Twiddlers
If you haven’t already, start working with Generative AI now and make sure to control your ongoing costs. This episode of the Tech Field Day podcast features Russ Fellows, Mitch Lewis, and Brian Martin, all from Signal65, and is hosted by Alastair Cooke. Generative AI is delivering value to businesses of all sizes, but significant evolution in models and technologies remains before maturity is achieved. Experimentation is essential to understand the value of new technologies, starting with cloud resources or small-scale on-premises servers. Business value is derived from the inference stage, where AI tools generate actionable information for users. Generative AI is like a knowledgeable and well-intentioned intern; someone more senior must ensure AI is given good instructions and check their work. In production, grounding and guard rails are vital to keep your AI an asset, not a liability.


























