
Radio Bostrom

Team Radio Bostrom
Latest Episode

Available Episodes

5 of 29
  • AI Creation and the Cosmic Host (2024)
    By Nick Bostrom.
    Abstract: There may well exist a normative structure, based on the preferences or concordats of a cosmic host, and which has high relevance to the development of AI. In particular, we may have both moral and prudential reason to create superintelligence that becomes a good cosmic citizen—that is, conforms to cosmic norms and contributes positively to the cosmopolis. An exclusive focus on promoting the welfare of the human species and other terrestrial beings, or an insistence that our own norms must at all cost prevail, may be objectionable and unwise. Such attitudes might be analogized to the selfishness of one who exclusively pursues their own personal interest, or the arrogance of one who acts as if their own convictions entitle them to run roughshod over social norms—though arguably they would be worse, given our present inferior status relative to the membership of the cosmic host. An attitude of humility may be more appropriate.
    Read the full paper: https://nickbostrom.com/papers/ai-creation-and-the-cosmic-host.pdf
    More episodes at: https://radiobostrom.com
    --------  
    1:21
  • Deep Utopia: Life and Meaning in a Solved World (2024)
    Nick Bostrom’s latest book, Deep Utopia: Life and Meaning in a Solved World, will be published on 27th March, 2024. It’s available to pre-order now: https://nickbostrom.com/deep-utopia/
    The publisher describes the book as follows:
    A greyhound catching the mechanical lure—what would he actually do with it? Has he given this any thought?
    Bostrom’s previous book, Superintelligence: Paths, Dangers, Strategies, changed the global conversation on AI and became a New York Times bestseller. It focused on what might happen if AI development goes wrong. But what if things go right? Suppose that we develop superintelligence safely, govern it well, and make good use of the cornucopian wealth and near-magical technological powers that this technology can unlock. If this transition to the machine intelligence era goes well, human labor becomes obsolete. We would thus enter a condition of “post-instrumentality”, in which our efforts are not needed for any practical purpose. Furthermore, at technological maturity, human nature becomes entirely malleable. Here we confront a challenge that is not technological but philosophical and spiritual. In such a solved world, what is the point of human existence? What gives meaning to life? What do we do all day? Deep Utopia shines new light on these old questions, and gives us glimpses of a different kind of existence, which might be ours in the future.
    --------  
    1:37
  • The Unilateralist’s Curse and the Case for a Principle of Conformity (2016)
    By Nick Bostrom, Thomas Douglas & Anders Sandberg.
    Abstract: In some situations a number of agents each have the ability to undertake an initiative that would have significant effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will be undertaken more often than is optimal. We suggest that this phenomenon, which we call the unilateralist’s curse, arises in many contexts, including some that are important for public policy. To lift the curse, we propose a principle of conformity, which would discourage unilateralist action. We consider three different models for how this principle could be implemented, and respond to an objection that could be raised against it.
    Read the full paper: https://nickbostrom.com/papers/unilateralist.pdf
    More episodes at: https://radiobostrom.com/
    Outline:
    (00:00) Intro
    (01:20) 1. Introduction
    (10:02) 2. The Unilateralist's Curse: A Model
    (11:31) 3. Lifting the Curse
    (13:54) 3.1. The Collective Deliberation Model
    (15:21) 3.2. The Meta-rationality Model
    (18:15) 3.3. The Moral Deference Model
    (33:24) 4. Discussion
    (37:53) 5. Concluding Thoughts
    (41:04) Outro & credits
    --------  
    41:35
  • In Defense of Posthuman Dignity (2005)
    By Nick Bostrom.
    Abstract: Positions on the ethics of human enhancement technologies can be (crudely) characterized as ranging from transhumanism to bioconservatism. Transhumanists believe that human enhancement technologies should be made widely available, that individuals should have broad discretion over which of these technologies to apply to themselves, and that parents should normally have the right to choose enhancements for their children-to-be. Bioconservatives (whose ranks include such diverse writers as Leon Kass, Francis Fukuyama, George Annas, Wesley Smith, Jeremy Rifkin, and Bill McKibben) are generally opposed to the use of technology to modify human nature. A central idea in bioconservatism is that human enhancement technologies will undermine our human dignity. To forestall a slide down the slippery slope towards an ultimately debased ‘posthuman’ state, bioconservatives often argue for broad bans on otherwise promising human enhancements. This paper distinguishes two common fears about the posthuman and argues for the importance of a concept of dignity that is inclusive enough to also apply to many possible posthuman beings. Recognizing the possibility of posthuman dignity undercuts an important objection against human enhancement and removes a distortive double standard from our field of moral vision.
    Read the full paper: https://nickbostrom.com/ethics/dignity
    More episodes at: https://radiobostrom.com/
    Outline:
    (00:02) Introduction
    (00:21) Abstract
    (01:57) Transhumanists vs. bioconservatives
    (06:42) Two fears about the posthuman
    (19:44) Is human dignity incompatible with posthuman dignity?
    (29:03) Why we need posthuman dignity
    (34:38) Outro & credits
    --------  
    35:13
  • A Primer on the Doomsday Argument (1999)
    By Nick Bostrom.
    Abstract: Rarely does philosophy produce empirical predictions. The Doomsday argument is an important exception. From seemingly trivial premises it seeks to show that the risk that humankind will go extinct soon has been systematically underestimated. Nearly everybody's first reaction is that there must be something wrong with such an argument. Yet despite being subjected to intense scrutiny by a growing number of philosophers, no simple flaw in the argument has been identified.
    Read the full paper: https://anthropic-principle.com/q=anthropic_principle/doomsday_argument/
    More episodes at: https://radiobostrom.com/
    --------  
    12:30

More Technology Podcasts

About Radio Bostrom

Audio narrations of academic papers by Nick Bostrom.
Podcast website
