Hardware always matters, especially in AI, and now software is automating your AI data centre infrastructure. This episode of the Tech Field Day podcast features Gina Rosenthal, Barton George, Andy Banta, and Alastair Cooke. Generative AI brought new hardware into enterprise data centres; GPUs, TPUs, NPUs, and XPUs all offload AI processing from CPUs for greater performance and efficiency. Feeding these accelerators requires fast networks and fast storage, common topics for AI Infrastructure Field Day events. In parallel, sophisticated software to automate the deployment and operation of this new hardware is vital to returning value quickly and maximizing the return on the hardware investment. Automation platforms are moving up the stack towards delivering multiple AI applications on shared XPU infrastructure, where AI inference delivers the business value.
Agentic AI Spells the End of Dial Twiddlers
Networks Need Agentic AI with HPE Juniper Networking
Agentic AI is reshaping the IT landscape, and networking is no exception. Building on previous research into machine learning gives us a head start on harnessing that power. In this episode of the Tech Field Day podcast, brought to you by HPE Juniper Networking, Tom Hollingsworth is joined by Keith Parsons and Sunalini Sankhavaram. They talk about how agentic AI is driving new methods for operating networks and helping humans concentrate on real problems instead of menial tasks. They also discuss how agentic AI can power self-driving networks, where configuration and provisioning are done automatically or with minimal effort to ensure resiliency and meet user expectations.
SASE Makes Convergence Simple with HPE Aruba Networking
Converged networking is creating complexity for network operations and engineering teams. Ensuring that users can access resources and complete workflows means staying up to date with the latest technology behind the scenes. You need a trusted partner to make it all happen with minimal issues.
In this episode of the Tech Field Day Podcast, brought to you by HPE Aruba Networking, Tom Hollingsworth is joined by Chris Grundemann, Jeff Olson, and Adam Fuoss as they discuss the growing challenges with networking and how HPE Aruba Networking is addressing them. They talk about SASE and SSE and how tools such as Cloud-Native NAC and SASE AI Copilot work together to address the needs of the modern network. These features, debuted at Black Hat 2025, help teams by making sense of the network and keeping it secure from intruders.
They discuss how AI is going to accelerate operations teams while providing context and understanding for challenges. They also talk about how cloud-native principles can apply to both cloud and on-prem configurations. The panel wraps up with a discussion of the importance of a sole-source provider for these solutions and how HPE Aruba Networking is addressing the integration of recent acquisitions.
Early Adoption of Generative AI Helps Control Costs with Signal65
If you haven’t already, start working with Generative AI now and make sure to control your ongoing costs. This episode of the Tech Field Day podcast features Russ Fellows, Mitch Lewis, and Brian Martin, all from Signal65, and is hosted by Alastair Cooke. Generative AI is delivering value to businesses of all sizes, but significant evolution in models and technologies remains before maturity is achieved. Experimentation is essential to understand the value of new technologies, starting with cloud resources or small-scale on-premises servers. Business value is derived from the inference stage, where AI tools generate actionable information for users. Generative AI is like a knowledgeable and well-intentioned intern; someone more senior must ensure AI is given good instructions and check their work. In production, grounding and guard rails are vital to keep your AI an asset, not a liability.
Datacenter Networking Needs AIOps with HPE Juniper Networks
Enterprise networking is too large and complex; we need AI operations. This spotlight episode of the Tech Field Day podcast features Bob Friday and Ben Baker, both from Juniper Networks, with Jack Poller and Alastair Cooke. Modern enterprise networks reach far beyond the well-controlled walls of data centres and corporate buildings. The rate of change enabled by public cloud platforms makes an enterprise network highly dynamic. Access to cloud and on-premises applications over the Internet means your users are dependent on many network elements outside of your control. Bob founded Mist Systems to help businesses manage the complexity of user-to-cloud networking. Juniper Networks acquired Mist, and now HPE has acquired Juniper. I don’t think he is alone in seeing the necessity of using AI to manage complex and critical networks. Yet new tools always bring new challenges: the cost of AI infrastructure may be a concern, and Generative AI has challenges with hallucinations. The security and governance practices around AI tools are still developing, and the non-deterministic nature of AI needs careful consideration.
Network Engineers are Facing an Identity Crisis
Network engineers are the firefighters and knowledge bases of enterprise IT; however, the role of the network engineer is rapidly evolving. With the rise of automation, orchestration, and AI, the familiar image of an engineer hunched over a command-line interface (CLI) is giving way, leading many to question the future of their profession. In this episode, Tom Hollingsworth is joined by Ryan Harris, Chris Grundemann, and Nathan Nielsen as they discuss how the perception of their role has shifted, the continuous need for learning and adaptation, and whether the CLI is truly dead.
The conversation explores the challenges and opportunities presented by these technological advancements, highlighting how network engineers are embracing new tools like chatbots and GUIs for enhanced visualization. While some aspects of the job, like manual CLI work, may be diminishing, the core principles of understanding network functionality remain central to the role of the network engineer. The panel talks about the identity crisis in a field where continuous learning is essential, contrasting it with professions like doctors and lawyers who deal with slower-changing fundamentals. They discuss the value of specialization versus being a generalist, the concept of the “pitchfork engineer,” and ultimately, how redefining their identity as lifelong learners can help network engineers thrive in this ever-changing landscape.
Re-Imagining the Mainframe for the AI Era at SHARE
As Tech Field Day heads to SHARE in Cleveland, we are considering the many ways the mainframe has been re-imagined and re-built for the AI era. This episode of the Tech Field Day podcast features Cynthia Overby of Rocket Software and SHARE, Derek Britton, and Jeffrey Powers discussing the modern mainframe with Stephen Foskett. Walk around the SHARE conference and you’ll see the same concepts and technologies that you would find at any other conference: AI, security, data management, software development, and connectivity. Although the mainframe platform is radically different from the so-called open systems used in the cloud and elsewhere, IBM has re-engineered the new z17 mainframe for the AI age. The mainframe hosts the most valuable data in the world, and if AI is going to be used there, it has to run locally. That’s why IBM built a second-generation on-chip AI accelerator into the latest Telum II processor, added the Spyre Accelerator for scale-out AI workloads, and brought AI capabilities to z/OS itself. We’re excited to be bringing Tech Field Day to SHARE to learn more about the modern mainframe and share the state of the art with our audience.
The DoJ Just Devalued Juniper Mist
The proposed remedies for the HPE acquisition of Juniper Networks did a real disservice to Juniper Mist. The confusion around what’s going on with the proposed Juniper AIOps for Mist auction has professionals asking a lot of questions. In this episode, recorded on the eve of the close of the acquisition, Tom Hollingsworth sits down with Sam Clements, Jake Snyder, and Ed Weadon to make sense of it all. They discuss what exactly is included in the auction and what benefit will come from the license to use Juniper AIOps for Mist. Also discussed is who might be a good bidder for the solution and how long it will take for them to get any real value from it.
Enterprises Shouldn’t Be Outsourcing Their IT Anymore
Enterprise networks are complicated, but outsourcing the entire operations team doesn’t lead to better outcomes. It’s important to remember that “enterprise” covers a wide range of network sizes and designs. In this episode, Ed Weadon, Chris Grundemann, and Jody Lemoine join Tom Hollingsworth to discuss how businesses see the network, and IT in general, as a cost center instead of a value generator. They also talk about the various sizes of networks and why each has issues with the most popular outsourcing methods, as well as the human factor and why not all managed providers can deliver the same level of service.
We Are Long Past Passwords
Passwords have served their purpose in the enterprise. We need to start moving away from simple passwords as an authentication mechanism. In this episode of the Tech Field Day podcast, Tom Hollingsworth is joined by Tony Efantis, Karen Lopez, and Fernando Montenegro as they discuss the premise that we are long past passwords, exploring the complexities and frustrations of relying on them for myriad online accounts. The conversation highlights the concept of economic externalities: developers easily implement simple passwords, but the burden of managing hundreds of unique credentials falls on the individual user. While passwords were initially designed for basic authentication, there has been a shift towards alternative mechanisms like one-time codes sent to email or passkeys, driven by user fatigue and the security risks associated with password reuse and compromised credentials. Ultimately, a balanced, risk-based authentication approach is necessary, tailoring security levels to the sensitivity of the data being protected and leveraging technologies like biometrics and background risk assessments to create a more convenient and secure user experience, even as attackers continue to evolve their methods.
MLO is a Lie
One of the most anticipated features of Wi-Fi 7 isn’t ready for the public. Worse yet, it may never deliver on the promise of fast, reliable wireless connectivity. In this episode, Tom Hollingsworth is joined by Allyn Crowe, Peter Mackenzie, and Chris Reed as they discuss the way that multi-link operation (MLO) has been included in the specification for Wi-Fi 7 yet not quite implemented. They highlight the technical difficulties of deploying such a complicated protocol and how vendors are trying to squeeze every drop of performance out of their hardware. They wrap up with advice on whether or not to plan your next deployment around a technology that isn’t quite ready yet.
A Different Type of Datacenter is Needed for AI
AI demands specialized data center designs due to its unique hardware utilization and networking needs. This Tech Field Day Podcast episode features Denise Donohue, Karen Lopez, Lino Telera, and Alastair Cooke. Network design has been a consistent part of the AI infrastructure discussions at Tech Field Day events. The need for a dedicated network to interconnect GPUs differentiates AI training and fine-tuning networks from general-purpose computing. The vast power demand for high-density GPU servers highlights a further need for different data centers with liquid cooling and massive power distribution. Model training is only one part of the AI pipeline; business value is delivered by AI inference, which has a different set of needs and a closer eye on financial management. Inference will likely require servers with GPUs and high-speed local storage, but not the same networking density as training and fine-tuning. Inference will also need servers adjacent to the general-purpose infrastructure running existing business applications. Some businesses may be able to fit their AI applications into their existing data centers, but many will need to build or rent new infrastructure.
User-Centric Connectivity Has to Innovate
Modern networking is being disrupted in the data center, but user-facing networking has largely stagnated. Users are getting slightly faster connections, but everything feels mostly the same. In this episode, Tom Hollingsworth is joined by Sam Clements and Ed Weadon as they discuss innovation at the edge of the network. They talk about how companies like Cisco have been trying to bring users into the modern era, the centralization of management in the cloud, and how competition has driven those moves. They also look ahead to Cisco Live and discuss the releases they would most like to see at the event.
Scaling Smarter Optimizes Cloud Costs in the Age of Data Abundance
Keeping every application and every scrap of data on the public cloud becomes very expensive; we need to improve our cloud economics. This episode of the Tech Field Day podcast features Vriti Magee, Mitch Lewis, and Alastair Cooke. The belief that data is the new oil has led many companies to retain every piece of data they generate, often in object storage on public cloud platforms. The continuous growth of this data leads to a growing bill from the cloud provider, often with no clear plan in place for recouping the value of the money spent. Generative AI requires training data, which is another reason to retain everything; again, there needs to be value returned to the business. New designs for cloud applications must include data management and managed retention as key criteria. Sustainable, honest designs that enable business change are vital for delivering value back to the business.
Compliance Does Not Equal Security
Compliance reinforces the state of security in your organization. However, compliance in and of itself is not security. In this episode of the Tech Field Day Podcast, Tom Hollingsworth is joined by Jack Poller and Milou Meier as they discuss the nuance between securing your organization and ensuring compliance. They discuss the challenges with audits and the “checkbox” mentality that has become common. They also discuss how organizations face different challenges globally and how you can help ensure that you aren’t being exposed to problems in the future.
The Unknown Unknowns of Cloud Providers with Catchpoint
Your Internet application is full of unknowns, which will affect its performance and availability for your customers. This episode of the Tech Field Day Podcast features Catchpoint CEO and co-founder Mehdi Daoudi, Eric Wright, Jon Myer, and Alastair Cooke. Internet applications are seldom self-contained, relying on other web services for specialized functions and needing responses from those services before returning a final response to a user. Functions such as DDoS protection, tracking, embedded advertising, and other valuable services enable faster application feature development, but at what cost? Any delayed response from these services can slow down your application for your users, leading to dissatisfaction, even when your servers perform beautifully. Remember that the services you choose to use may, in turn, use other external services. Catchpoint champions user-centric monitoring and Internet Performance Monitoring (IPM) to complement existing APM tools. Visibility of issues outside your data center is vital to identifying issues before they become helpdesk tickets or application outages. If this Tech Field Day Podcast episode piques your interest, watch the Catchpoint appearance at Cloud Field Day on YouTube.
Managing Hybrid Cloud Networks Complexity with Infoblox
Managing hybrid-cloud networks is complex due to differing architectures and naming between on-premises and the multiple public cloud platforms. This Tech Field Day Podcast episode features Glenn Sullivan, Senior Director of Product Management at Infoblox, Eric Wright, and Alastair Cooke. Each public cloud has a unique management console and network management paradigm; none provides deep integration with each other or with on-premises networking. It is left to individual customers to assemble a jigsaw of pieces into a coherent whole. Customers may not plan to use multiple public clouds, but through different project requirements or mergers and acquisitions, most large organizations find themselves in a hybrid multi-cloud environment. Combining fast-changing public cloud applications with on-premises applications further complicates network management, requiring an automation-based approach. Infoblox UDDI (Universal DNS, DHCP, and IPAM) provides a consistent, automatable interface to manage and operate basic network infrastructure across all enterprise locations. UDDI includes bi-directional operation where changes using cloud-native consoles are visible in UDDI and vice versa.
Wi-Fi is Fast Enough
Modern Wi-Fi connections rely on more than just raw throughput to measure performance. The complexity of wireless as a medium makes the user experience more varied and creates difficulties in troubleshooting. In this episode, Tom Hollingsworth is joined by Keith Parsons, Rocky Gregory, and Ron Westfall as they discuss the state of Wi-Fi and how performance works. They talk about the challenges of properly designing wireless networks and how data sheets make assumptions about the environment. They also discuss user expectations for performance and how workflows involve many moving parts that can impact overall user experiences.
Quality Data is the Foundation for AI with Qlik
AI ought to be able to help businesses derive value from their data, but not all AI applications have a solid foundation. This episode of the Tech Field Day podcast looks forward to Qlik Connect 2025, featuring delegates Gina Rosenthal and Jim Czuprynski discussing the importance of data with Nick Magnuson of Qlik and host Stephen Foskett. Last year Qlik introduced Answers, a RAG AI product that delivers intelligence from unstructured data. This year we expect to see much more integration with structured data, analytics, business intelligence, and agentic AI, as Qlik’s customers seek to deliver innovative solutions. Mature organizations are focused on building a solid governance foundation for their data, ensuring responsible and ethical use in AI applications. The advent of agentic AI raises more concerns, as autonomous agents are empowered to take action without human involvement. Responsible use must include strict limits and human supervision to make sure AI agents remain controlled. We’re looking forward to customer stories, technical takeaways, and maybe some new product introductions at Qlik Connect this year!