
Future of Life Institute Podcast

By Future of Life Institute

Available Episodes

Showing 5 of 237
  • Will AI Companies Respect Creators' Rights? (with Ed Newton-Rex)
    Ed Newton-Rex joins me to discuss the issue of AI models trained on copyrighted data, and how we might develop fairer approaches that respect human creators. We talk about AI-generated music, Ed’s decision to resign from Stability AI, the industry’s attitude towards rights, authenticity in AI-generated art, and what the future holds for creators, society, and living standards in an increasingly AI-driven world. Learn more about Ed's work here: https://ed.newtonrex.com
    Timestamps:
    00:00:00 Preview and intro
    00:04:18 AI-generated music
    00:12:15 Resigning from Stability AI
    00:16:20 AI industry attitudes towards rights
    00:26:22 Fairly Trained
    00:37:16 Special kinds of training data
    00:50:42 The longer-term future of AI
    00:56:09 Will AI improve living standards?
    01:03:10 AI versions of artists
    01:13:28 Authenticity and art
    01:18:45 Competitive pressures in AI
    01:24:06 Priorities going forward
    Duration: 1:27:14
  • AI Timelines and Human Psychology (with Sarah Hastings-Woodhouse)
    On this episode, Sarah Hastings-Woodhouse joins me to discuss what benchmarks actually measure, AI’s development trajectory in comparison to other technologies, tasks that AI systems can and cannot handle, capability profiles of present and future AIs, the notion of alignment by default, and the leading AI companies’ vague AGI plans. We also discuss the human psychology of AI, including the feeling of living in the "fast world" versus the "slow world", and navigating long-term projects given short timelines.
    Timestamps:
    00:00:00 Preview and intro
    00:00:46 What do benchmarks measure?
    00:08:08 Will AI develop like other tech?
    00:14:13 Which tasks can AIs do?
    00:23:00 Capability profiles of AIs
    00:34:04 Timelines and social effects
    00:42:01 Alignment by default?
    00:50:36 Can vague AGI plans be useful?
    00:54:36 The fast world and the slow world
    01:08:02 Long-term projects and short timelines
    Duration: 1:15:49
  • Could Powerful AI Break Our Fragile World? (with Michael Nielsen)
    On this episode, Michael Nielsen joins me to discuss how humanity's growing understanding of nature poses dual-use challenges, whether existing institutions and governance frameworks can adapt to handle advanced AI safely, and how we might recognize signs of dangerous AI. We explore the distinction between AIs as agents and as tools, how power is latent in the world, and the implications of widespread powerful hardware, and finally touch upon the philosophical perspectives of deep atheism and optimistic cosmism.
    Timestamps:
    00:00:00 Preview and intro
    00:01:05 Understanding is dual-use
    00:05:17 Can we handle AI like other tech?
    00:12:08 Can institutions adapt to AI?
    00:16:50 Recognizing signs of dangerous AI
    00:22:45 Agents versus tools
    00:25:43 Power is latent in the world
    00:35:45 Widespread powerful hardware
    00:42:09 Governance mechanisms for AI
    00:53:55 Deep atheism and optimistic cosmism
    Duration: 1:01:28
  • Facing Superintelligence (with Ben Goertzel)
    On this episode, Ben Goertzel joins me to discuss what distinguishes the current AI boom from previous ones, important but overlooked AI research, simplicity versus complexity in the first AGI, the feasibility of alignment, benchmarks and economic impact, potential bottlenecks to superintelligence, and what humanity should do moving forward.
    Timestamps:
    00:00:00 Preview and intro
    00:01:59 Thinking about AGI in the 1970s
    00:07:28 What's different about this AI boom?
    00:16:10 Former taboos about AGI
    00:19:53 AI research worth revisiting
    00:35:53 Will the first AGI be simple?
    00:48:49 Is alignment achievable?
    01:02:40 Benchmarks and economic impact
    01:15:23 Bottlenecks to superintelligence
    01:23:09 What should we do?
    Duration: 1:32:33
  • Will Future AIs Be Conscious? (with Jeff Sebo)
    On this episode, Jeff Sebo joins me to discuss artificial consciousness, substrate-independence, possible tensions between AI risk and AI consciousness, the relationship between consciousness and cognitive complexity, and how intuitive versus intellectual approaches guide our understanding of these topics. We also discuss AI companions, AI rights, and how we might measure consciousness effectively. You can follow Jeff’s work here: https://jeffsebo.net/
    Timestamps:
    00:00:00 Preview and intro
    00:02:56 Imagining artificial consciousness
    00:07:51 Substrate-independence?
    00:11:26 Are we making progress?
    00:18:03 Intuitions about explanations
    00:24:43 AI risk and AI consciousness
    00:40:01 Consciousness and cognitive complexity
    00:51:20 Intuition versus intellect
    00:58:48 AIs as companions
    01:05:24 AI rights
    01:13:00 Acting under time pressure
    01:20:16 Measuring consciousness
    01:32:11 How can you help?
    Duration: 1:34:27


About the Future of Life Institute Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
