Billion-Dollar AI Headlines Obscure Real Business Value

The big headlines about massive funding rounds for large AI companies are a distraction from the reality that AI is already being built into and used by business applications. This episode of the Tech Field Day podcast features Frederic Van Haren, Chris Grundemann, Brian Martin, and Alastair Cooke reflecting after AI Infrastructure Field Day in Santa Clara. Popular news often covers the creation of large, general-purpose AI models, yet the real-world application of AI through inference is where most companies see a return on their investment. Similarly, “AI” is commonly discussed as a single topic, without a more granular view that differentiates between rules-based systems, traditional machine learning, and emergent generative models such as Large Language Models (LLMs). Specialized AI models will be vital for cost-effective, efficient applications, as will the integration of diverse AI capabilities into agentic architectures. Advanced security protocols and regulatory frameworks are needed to mitigate novel vulnerabilities, and organizations must adapt to an extraordinarily rapid pace of technological evolution. AI has already had a profound impact on software development, potentially enabling widespread custom application creation.

Cutting-Edge AI Networking and Storage Kick Off 2026 at AI Infrastructure Field Day 4

We’re kicking off 2026 with one of our most popular events, AI Infrastructure Field Day 4, running from January 28th through January 30th. The event will stream live on LinkedIn, Techstrong TV, the Tech Field Day website, and for the first time ever, on our YouTube channel, offering a front-row view of the latest in […]

Unified Flash Memory and Reduced HBM are Reshaping AI Training and Inference with Phison

AI will need less HBM (high-bandwidth memory) because flash memory unification is changing training and inference. This episode of the Tech Field Day podcast features Sebastien Jean from Phison, Max Mortillaro, Brian Martin, and Alastair Cooke. Training, fine-tuning, and inference with Large Language Models traditionally use GPUs with high-bandwidth memory to hold entire models and data sets. Phison’s aiDaptiv+ framework offers the ability to trade lower infrastructure cost against training speed, or to allow larger data sets (context) for inference. This approach enables users to balance cost, compute, and memory needs, making larger models accessible without requiring top-of-the-line GPUs and giving smaller companies more access to generative AI.
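To see why spilling model data from HBM to flash matters, some back-of-the-envelope arithmetic helps. The sketch below is illustrative only, assuming a simple two-tier split between GPU HBM and a flash tier; it is not Phison's aiDaptiv+ API, and the tier sizes are hypothetical examples.

```python
# Illustrative arithmetic for the HBM-vs-flash tradeoff discussed above.
# Assumption: weight footprint ~ parameter count x bytes per parameter.

def model_memory_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Approximate weight footprint in GB (FP16 = 2 bytes per parameter)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

def split_across_tiers(total_gb: float, hbm_gb: float) -> tuple[float, float]:
    """Place as much as fits in GPU HBM; the remainder spills to a flash tier."""
    in_hbm = min(total_gb, hbm_gb)
    in_flash = max(0.0, total_gb - hbm_gb)
    return in_hbm, in_flash

# A 70B-parameter model at FP16 needs roughly 140 GB for weights alone,
# far more than a single GPU with 80 GB of HBM can hold.
total = model_memory_gb(70)
hbm, flash = split_across_tiers(total, hbm_gb=80.0)
print(f"total={total} GB, in HBM={hbm} GB, spilled to flash={flash} GB")
```

The point of the tradeoff is visible in the numbers: the 60 GB that does not fit in HBM would otherwise force a larger (and far more expensive) GPU configuration, at the cost of the slower access speed of flash.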

Pushing the Boundaries of AI Performance, Scale, and Innovation at AI Infrastructure Field Day 3

Tech Field Day is heading back to Santa Clara, California on September 10th and 11th for AI Infrastructure Field Day 3. You can watch live on the Tech Field Day website, LinkedIn page, or Techstrong TV to see how our presenting companies are pushing the boundaries of performance, scalability, and innovation. The event […]

Early Adoption of Generative AI Helps Control Costs with Signal65

If you haven’t already, start working with Generative AI now, and make sure to control your ongoing costs. This episode of the Tech Field Day podcast features Russ Fellows, Mitch Lewis, and Brian Martin, all from Signal65, and is hosted by Alastair Cooke. Generative AI is delivering value to businesses of all sizes, but significant evolution in models and technologies remains before maturity is achieved. Experimentation is essential to understand the value of new technologies, starting with cloud resources or small-scale on-premises servers. Business value is derived from the inference stage, where AI tools generate actionable information for users. Generative AI is like a knowledgeable and well-intentioned intern: someone more senior must give it good instructions and check its work. In production, grounding and guard rails are vital to keep your AI an asset, not a liability.
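The "senior reviewer" role for grounding and guard rails can be sketched in a few lines of code. This is a minimal, illustrative example under stated assumptions, not any vendor's guard-rail product: the blocked-term list and the `review_answer` function are hypothetical names, and real systems use far more sophisticated checks.

```python
# Minimal guard-rail sketch: review a model's answer before it reaches the
# user, the way a senior colleague checks an intern's work.
# BLOCKED_TERMS and review_answer are hypothetical, for illustration only.

BLOCKED_TERMS = {"password", "api key"}  # assumption: terms this app must never emit

def review_answer(answer: str, sources: list[str]) -> str:
    """Return the answer only if it passes both guard rails, else a safe fallback."""
    lowered = answer.lower()
    # Guard rail 1: block sensitive content outright.
    if any(term in lowered for term in BLOCKED_TERMS):
        return "Sorry, I can't share that."
    # Guard rail 2 (grounding): require at least one retrieved source
    # backing the answer, rather than trusting the model's memory alone.
    if not sources:
        return "I don't have a grounded answer for that."
    return answer

print(review_answer("The admin password is hunter2", ["wiki.txt"]))
print(review_answer("Q3 revenue grew 4%", []))
print(review_answer("Q3 revenue grew 4%", ["q3-report.pdf"]))
```

The design choice mirrors the intern analogy from the episode: the model proposes, but a deterministic layer outside the model decides what actually ships to the user.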

Brian Martin

Vice President of AI and Datacenter Performance at Signal65