Eye On A.I.

Craig S. Smith

Latest episode

324 episodes

  • #324 Sharon Zhou: Inside AMD's Plan to Build Self-Improving AI

    27.02.2026 | 49 min.
    AI is not just getting smarter. It is getting faster by learning how to optimize the hardware it runs on.

    In this episode, Sharon Zhou, VP of AI at AMD and former Stanford AI researcher, explains how language models are beginning to write and optimize their own GPU kernel code. We explore what self-improving AI actually means, how reinforcement learning is used in post-training, and why kernel optimization could be one of the most overlooked scaling levers in modern AI.

    Sharon breaks down how GPU efficiency impacts the cost of training and inference, why catastrophic forgetting remains a challenge in continual learning, and how verifiable rewards from hardware profiling can help models improve themselves. The conversation also dives into compute economics, synthetic data, RLHF, and why infrastructure may define the next phase of AI progress.

    If you want to understand where AI scaling is really happening beyond bigger models and more data, this episode goes under the hood.


    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI


    (00:00) Preview and Intro
    (00:25) Sharon Zhou's Background and Transition to AMD
    (02:00) What Is Self-Improving AI?
    (04:16) What Is a GPU Kernel and Why It Matters
    (07:01) Using AI Agents and Evolutionary Strategies to Write Kernels
    (11:31) Just-In-Time Optimization and Continual Learning
    (13:59) Self-Improving AI at the Infrastructure Layer
    (16:15) Synthetic Data and Models Generating Their Own Training Data
    (20:48) AMD's AI Strategy: Research Meets Product
    (23:22) Inside the NeurIPS Tutorial on AI-Generated Kernels
    (30:59) Reinforcement Learning Beyond RLHF
    (39:09) 10x Faster Kernels vs 10x More Compute
    (41:50) Will Efficiency Reduce Chip Demand?
    (42:18) Beyond Language Models: Diffusion, JEPA, and Robotics
    (45:34) Educating the Next Generation of AI Builders
  • #323 David Ha: Why Model Merging Could Be the Next AI Breakthrough

    24.02.2026 | 57 min.
    This episode is sponsored by tastytrade.
    Trade stocks, options, futures, and crypto in one platform with low commissions and zero commission on stocks and crypto. Built for traders who think in probabilities, tastytrade offers advanced analytics, risk tools, and an AI-powered Search feature.

    Learn more at https://tastytrade.com/



    Artificial intelligence is reaching a turning point. Instead of building bigger and bigger models, what if the real breakthrough comes from letting AI evolve?

    In this episode of Eye on AI, David Ha, Co-Founder and CEO of Sakana AI, explains why evolutionary strategies and collective intelligence could reshape the future of machine learning. We explore model merging, multi-agent systems, Monte Carlo tree search, and the AI Scientist framework designed to generate and evaluate new research ideas. The conversation dives into open-ended discovery, quality and diversity in AI systems, world models, and whether artificial intelligence can push beyond the boundaries of human knowledge.

    If you're interested in AGI, evolutionary AI, frontier models, AI research automation, or how AI could start discovering science on its own, this episode offers a clear look at where the field may be heading next.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI


    (00:00) AI Should Evolve, Not Just Scale
    (03:54) David's Journey From Finance to Evolutionary AI
    (10:18) Why Gradient Descent Gets Stuck
    (18:12) Model Merging and Collective Intelligence
    (28:18) Combining Closed Frontier Models
    (32:56) Inside the AI Scientist Experiment
    (38:11) Parent Selection, Diversity and Innovation
    (49:25) Can AI Discover Truly New Knowledge?
    (53:05) Why Continual Learning Matters
  • #322 Amanda Luther: The Widening AI Value Gap (Inside BCG's AI Research)

    19.02.2026 | 54 min.
    In this episode of Eye on AI, Craig Smith speaks with Amanda Luther, Senior Partner at Boston Consulting Group and global lead of BCG's AI Transformation practice, about what their latest 1,500-company AI study reveals about the widening gap between AI leaders and laggards.
    Only 5% of companies are truly "future-built" with AI embedded across their core business functions. These firms are seeing measurable gains in revenue growth, EBIT margins, and shareholder returns. Meanwhile, 60% of organizations are either experimenting or struggling to extract real value.
    Amanda breaks down how BCG measures AI maturity across 41 capabilities, how AI impact flows through the P&L, and why leading companies invest twice as much in AI as their competitors. She explains where AI is actually creating value today, from sales and marketing to procurement and retail operations, and why most of that value comes from core business functions, not back-office automation.
    The conversation also explores the rise of agentic systems, why many early agent deployments fail, and what it really takes to redesign workflows around AI. Amanda shares practical advice for companies stuck in experimentation mode, how to prioritize the right use cases, and why training and change management matter more than chasing the perfect vendor.
    If you want to understand how AI is reshaping competitive advantage in enterprise organizations, this episode provides a data-backed look at what separates the leaders from everyone else.
     
    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI
     

    (00:00) The AI Value Gap
    (01:17) Inside BCG's 1,500-Company AI Study
    (04:14) What "Future-Built" Companies Do Differently
    (09:30) How AI Impact Is Measured on the P&L
    (12:57) Why AI Leaders Invest 2X More
    (14:16) Where AI Is Driving Real Cost Reduction
    (16:20) Agentic AI: Hype vs Reality
    (20:13) Where Agents Actually Create Value
    (24:22) Tech vs Talent: Where the Money Goes
    (26:58) Will AI Laggards Slowly Disappear?
    (31:58) Why Adoption Is Accelerating Now
    (40:07) How to Start: Amanda's Advice to AI Laggards
  • #321 Nick Frosst: Why Cohere Is Betting on Enterprise AI, Not AGI

    17.02.2026 | 1 hr 1 min.
    This episode is sponsored by tastytrade. 
    Trade stocks, options, futures, and crypto in one platform with low commissions and zero commission on stocks and crypto. Built for traders who think in probabilities, tastytrade offers advanced analytics, risk tools, and an AI-powered Search feature.
     
    Learn more at https://tastytrade.com/



    In this episode of Eye on AI, Nick Frosst, Co-Founder of Cohere and former Google Brain researcher, explains why Cohere is betting on enterprise AI instead of chasing AGI.
     
    While much of the AI industry is focused on artificial general intelligence, Cohere is building practical, capital-efficient large language models designed for real-world enterprise deployment. Nick breaks down why scaling transformers does not equal AGI, why inference cost and ROI matter, and how enterprise AI differs from consumer AI hype.
     
    We discuss enterprise LLM deployment, private data, regulated industries like banking and healthcare, agentic systems, evaluation benchmarks, and why AI will likely become embedded infrastructure rather than a headline breakthrough.
     
    If you care about enterprise AI, AGI debates, large language models, and the future of AI in business, this conversation delivers a grounded perspective from inside one of the leading AI companies.
     
    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI
     

    (00:00) From Google Brain to Cohere
    (03:54) Discovering Transformers
    (06:39) The Transformer Dominance
    (09:44) What AGI Actually Means
    (12:26) Planes vs Birds: The AI Analogy
    (14:08) Why Cohere Isn't Chasing AGI
    (18:38) Distillation & Model Efficiency
    (21:42) What Enterprise AI Really Does
    (25:20) Private Data & Secure Deployment
    (26:59) Enterprise Use Cases (RBC Example)
    (32:22) Why AI Benchmarks Mislead
    (34:55) Why Most AI Stays in Demo
    (38:23) What "Agents" Actually Are
    (43:32) The Problem With AGI Fear
    (49:15) Scaling Enterprise AI
    (53:24) Why AI Will Get "Boring"
  • #320 Carter Huffman: Exploring The Architecture Behind Modulate's Next-Gen Voice AI

    11.02.2026 | 1 hr 8 min.
    This episode is sponsored by tastytrade. 
    Trade stocks, options, futures, and crypto in one platform with low commissions and zero commission on stocks and crypto. Built for traders who think in probabilities, tastytrade offers advanced analytics, risk tools, and an AI-powered Search feature.
     
    Learn more at https://tastytrade.com/



    Voice AI is moving far beyond transcription.
     
    In this episode, Carter Huffman, CTO and co-founder of Modulate, explains how real-time voice intelligence is unlocking something much bigger than speech-to-text. His team built AI that understands emotion, intent, deception, harassment, and fraud directly from live conversations. Not after the fact. Instantly.
     
    Carter shares how their technology powers ToxMod to moderate toxic behavior in online games at massive scale, analyzes millions of audio streams with ultra-low latency, and beats foundation models using an ensemble architecture that is faster, cheaper, and more accurate. We also explore voice deepfake detection, scam prevention, sentiment analysis for finance, and why voice might become the most important signal layer in AI.
     
    If you're building voice agents, working on AI safety, or curious where conversational AI is heading next, this conversation breaks down the technical and practical future of voice understanding.



    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI


    (00:00) Real-Time Voice AI: Detecting Emotion, Intent & Lies
    (03:07) From MIT & NASA to Building Modulate
    (04:45) Why Voice AI Is More Than Just Transcription
    (06:14) The Toxic Gaming Problem That Sparked ToxMod
    (12:37) Inside the Tech: How "Ensemble Models" Beat Foundation Models
    (21:09) Achieving Ultra-Low Latency & Real-Time Performance
    (26:16) From Voice Skins to Fighting Harassment at Scale
    (37:31) Beyond Gaming: Fraud, Deepfakes & Voice Security
    (46:14) Privacy, Ethics & Voice Fingerprinting Risks
    (52:10) Lie Detection, Sentiment & Finance Use Cases
    (54:57) Opening the API: The Future of Voice Intelligence


About Eye On A.I.

Eye on A.I. is a biweekly podcast, hosted by longtime New York Times correspondent Craig S. Smith. In each episode, Craig will talk to people making a difference in artificial intelligence. The podcast aims to put incremental advances into a broader context and consider the global implications of the developing technology. AI is about to change your world, so pay attention.