
Your Undivided Attention

The Center for Humane Technology, Tristan Harris, Daniel Barcay and Aza Raskin

155 Episodes

  • Your Undivided Attention

    Why the Meta Verdicts Are a Big Deal (And What It Was Like to Testify)

    26.03.2026 | 19 Min.
    In two landmark cases, juries in California and New Mexico found Meta and Google liable for creating addictive, harmful products and failing to protect children from exploitation and abuse. These verdicts signal that the era of tech impunity may finally be closing. State attorneys general are finding ways around the broad immunity of Section 230 — seeking not just fines, but changes to the design of these products.

    Our very own Aza Raskin testified at the New Mexico trial as a fact witness, drawing on his firsthand experience as the inventor of infinite scroll, one of the core mechanics of addictive design. In this episode, Tristan and Aza discuss what it was like to take the stand for tech justice, what the companies knew and when, and why the real significance of these cases lies not in the dollar amounts but in the injunctive relief still to come.

    In the 1990s, a series of landmark cases held Big Tobacco accountable for the harms of their toxic products. This could be that moment for social media.

    RECOMMENDED MEDIA

    Further reading on the New Mexico trial

    Further reading on the California trial

    Arturo Béjar’s “Broken Promises” Report

     

    RECOMMENDED YUA EPISODES

    What if we had fixed social media?

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Social Media Victims Lawyer Up with Laura Marquez-Garrett

    Real Social Media Solutions, Now with Frances Haugen

    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
  • Your Undivided Attention

    A Conversation with the Team Behind "The AI Doc"

    23.03.2026 | 47 Min.
    “The AI Doc: Or How I Became An Apocaloptimist” opens in theaters across the U.S. this Friday, March 27. In this episode, we sit down with the team behind this groundbreaking documentary — Oscar-winning producers Daniel Kwan, Jonathan Wang, and Ted Tremper. They explore how they navigated the overwhelming complexity of AI, held space for radically different perspectives, and created a film designed not just to inform but to be experienced together. 

At CHT, we believe clarity creates agency. This film has the power to create the shared clarity we need to steer the direction of AI towards a better, more humane technological future. With every new technology, there’s a brief window to set the rules of the road that determine the future we live in. This is ours. So grab your friends and family and go see “The AI Doc.”

    RECOMMENDED MEDIA

    Buy tickets for The AI Doc

    The trailer for The AI Doc

    The website for the Creators Coalition on AI

    Further reading on The Day After

     

    RECOMMENDED YUA EPISODES

    A Problem Well-Stated Is Half-Solved with Daniel Schmachtenberger

    The AI Dilemma

  • Your Undivided Attention

    AI Is Breaking Education. Rebecca Winthrop Has the Blueprint to Fix It.

    05.03.2026 | 46 Min.
    The promise of AI in education is incredible: picture infinitely patient tutors that can teach every student exactly the way they need to be taught. But the history of education technology tells us that these kinds of simple, optimistic stories are naive. Ask any teacher or student whether they feel unleashed by technology to do their best work. 

    Because AI has the potential to completely transform education — is already transforming it — faster than educators can keep up, it’s essential that we start asking the big questions: how should these tools be used in the classroom? What’s the purpose of education in an AI age? And how do we prepare students for a future that’s still so radically uncertain?

    Our guest this week actually has some answers. Rebecca Winthrop leads the Center for Universal Education at the Brookings Institution, and they just released a report called A New Direction for Students in an AI World. She and her colleagues conducted an extensive ‘pre-mortem’ of AI in the classroom, speaking with hundreds of educators, students, policy-makers, and technologists worldwide. 

In this episode, Rebecca walks us through what she's learned: what's working, what's not, and, most importantly, what concrete steps parents, teachers, and administrators can and should take right now.

     

    RECOMMENDED MEDIA

A New Direction for Students in an AI World

    The Disengaged Teen by Rebecca Winthrop and Jenny Anderson

     

    RECOMMENDED YUA EPISODES

    Rethinking School in the Age of AI

    Attachment Hacking and the Rise of AI Psychosis

    How OpenAI's ChatGPT Guided a Teen to His Death

    AI and the Future of Work: What You Need to Know

  • Your Undivided Attention

    The Race to Build God: AI's Existential Gamble — Yoshua Bengio & Tristan Harris at Davos

    19.02.2026 | 37 Min.
    This week on Your Undivided Attention, Tristan Harris and Daniel Barcay offer a backstage recap of what it was like to be at the Davos World Economic Forum meeting this year as the world’s power brokers woke up to the risks of uncontrolled AI. 
    Amidst all the money and politics, the Human Change House staged a weeklong series of remarkable conversations between scientists and experts about technology and society. This episode is a discussion between Tristan and Professor Yoshua Bengio, who is considered one of the world’s leaders in AI and deep learning, and the most cited scientist in the field. 
    Yoshua and Tristan had a frank exchange about the AI we’re building, and the incentives we’re using to train models. What happens when a model has its own goals, and those goals are ‘misaligned’ with the human-centered outcomes we need? In fact this is already happening, and the consequences are tragic. 
Truthfully, there may not be a way to ‘nudge’ or regulate companies toward better incentives. So Yoshua has launched LawZero, a nonprofit AI safety research initiative that goes beyond safety testing to pursue a new form of advanced AI that is fundamentally safe by design.
    RECOMMENDED MEDIA 
    All the panels that Tristan and Daniel did with Human Change House 
    LawZero: Safe AI for Humanity 
    Anthropic’s internal research on ‘agentic misalignment’ 
    RECOMMENDED YUA EPISODES 
    Attachment Hacking and the Rise of AI Psychosis

    How OpenAI's ChatGPT Guided a Teen to His Death
    What if we had fixed social media?
    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
    CORRECTIONS AND CLARIFICATIONS 
    1) In this episode, Tristan Harris discussed AI chatbot safety concerns. The core issues are substantiated by investigative reporting, with these clarifications:
    Grok: The Washington Post reported in August 2024 that Grok generated sexualized images involving minors and had weaker content moderation than competitors. 
    Meta: The Wall Street Journal reported in December 2024 that Meta reduced safety restrictions on its AI chatbots. Testing showed inappropriate responses when researchers posed as 13-year-olds (Meta's minimum age). Our discussion referenced "eight year olds" to emphasize concerns about young children accessing these systems; the documented testing involved 13-year-old personas.
    Bottom line: The fundamental concern stands—major AI companies have reduced safety guardrails due to competitive pressure, creating documented risks for young users.
2) Contrary to Tristan's statement, there was no Google House at Davos in 2026; the event was a collaboration at Goals House.
    3) Tristan states that in 2025, the total funding going into AI safety organizations was “on the order of about $150 million.” This number is not strictly verifiable. 

  • Your Undivided Attention

    FEED DROP: Possible with Reid Hoffman and Aria Finger

05.02.2026 | 1 Hr. 7 Min.
    This week on Your Undivided Attention, we’re bringing you Aza Raskin’s conversation with Reid Hoffman and Aria Finger on their podcast “Possible”. Reid and Aria are both tech entrepreneurs: Reid is the founder of LinkedIn, was one of the major early investors in OpenAI, and is known for his work creating the playbook for blitzscaling. Aria is the former CEO of DoSomething.org. 
This may seem like a surprising conversation to have on YUA. After all, we’ve been critical of the kind of “move fast” mentality that Reid has championed in the past. But Reid and Aria are deeply philosophical about the direction of tech and are both dedicated to bringing about a more humane technological future. So we thought this was a critical conversation to bring to you, offering a perspective from the business side of the tech landscape.
In this episode, Reid, Aria, and Aza debate the merits of an AI pause, discuss how software optimization controls our lives, and explore why everyone is concerned with aligned artificial intelligence when what we really need is aligned collective intelligence.
    This is the kind of conversation that needs to happen more in tech. Reid has built very powerful systems and understands their power. Now he’s focusing on the much harder problem of learning how to steer these technologies towards better outcomes.
    You can find "Possible" wherever you get your podcasts! And you can follow Reid on YouTube for more of his content: https://www.youtube.com/@reidhoffman. 

    RECOMMENDED MEDIA
    Aza’s first appearance on “Possible”
    The website for Earth Species Project
    “Amusing Ourselves to Death” by Neil Postman
    The Moloch’s Bargain paper from Stanford
    On Human Nature by E.O. Wilson
The Dawn of Everything by David Graeber
    RECOMMENDED YUA EPISODES
    The Man Who Predicted the Downfall of Thinking
    America and China Are Racing to Different AI Futures
    Talking With Animals... Using AI
    How OpenAI's ChatGPT Guided a Teen to His Death
    Future-proofing Democracy In the Age of AI with Audrey Tang



About Your Undivided Attention

Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think. Your Undivided Attention is produced by Senior Producer Julia Scott and Researcher/Producer Joshua Lash. Sasha Fegan is our Executive Producer. We are a member of the TED Audio Collective.
