
LessWrong (30+ Karma)

Latest episodes

1861 episodes

  • LessWrong (30+ Karma)

    “Claude Code, Codex and Agentic Coding #7: Auto Mode” by Zvi

    16.04.2026 | 26 Min.
    As we all try to figure out what Mythos means for us down the line, the world of practical agentic coding continues, with the latest array of upgrades.

    The biggest change, which I’m finally covering, is Auto Mode. Auto Mode is the famously requested kinda-dangerously-skip-some-permissions mode, where the system keeps an eye on all the commands to ensure human approval for anything too dangerous. It is not entirely safe, but it is a lot safer than --dangerously-skip-permissions, and previously a lot of people were just clicking yes to requests mostly without thinking, which isn’t safe either.
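    The shape of such a permission gate can be sketched as a three-way classifier over shell commands. This is a hypothetical illustration of the general idea, not Claude Code's actual implementation; the rule lists and function names are invented:

    ```python
    import re

    # Hypothetical "auto mode" command gate: auto-approve benign commands,
    # pause for human approval on risky ones, and hard-block the clearly
    # destructive. The patterns below are illustrative, not exhaustive.
    ALWAYS_DENY = [
        r"\brm\s+-rf\s+/\s*$",   # wipe the filesystem root
        r"\bmkfs\b",             # reformat a disk
    ]
    ALWAYS_ASK = [
        r"\brm\b",               # any deletion
        r"\bgit\s+push\s+--force\b",
        r"\bcurl\b.*\|\s*sh",    # pipe-to-shell installs
    ]

    def gate(command: str) -> str:
        """Return 'deny', 'ask', or 'allow' for a shell command."""
        for pat in ALWAYS_DENY:
            if re.search(pat, command):
                return "deny"    # never run, even with approval
        for pat in ALWAYS_ASK:
            if re.search(pat, command):
                return "ask"     # interrupt and request human approval
        return "allow"           # run without bothering the user
    ```

    The hard part, as the post's outline notes, is the classifier in the middle: a list of regexes is easy to evade or to over-trigger, which is why "not entirely safe" is the honest framing.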

    Table of Contents


    Huh, Upgrades.

    On Your Marks.

    Lazy Cheaters.

    It's All Routine.

    Declawing.

    Free Claw.

    Take It To The Limit.

    Turn On Auto The Pilot.

    I’ll Allow It.

    Threat Model.

    The Classifier Is The Hard Part.

    Acceptable Risks.

    Manage The Agents.

    Introducing.

    Skilling Up.

    What Happened To My Tokens?

    Coding Agents Offer Mundane Utility.

    Huh, Upgrades

    Claude Code Desktop gets a redesign for parallel agents, with a new sidebar for managing multiple sessions, a drag-and-drop layout for arranging your [...]
    ---
    Outline:
    (00:48) Huh, Upgrades
    (02:46) On Your Marks
    (04:21) Lazy Cheaters
    (06:11) It's All Routine
    (06:52) Declawing
    (09:03) Free Claw
    (09:31) Take It To The Limit
    (13:54) Turn On Auto The Pilot
    (15:55) I'll Allow It
    (16:26) Threat Model
    (17:10) The Classifier Is The Hard Part
    (18:34) Acceptable Risks
    (19:54) Manage The Agents
    (22:34) Introducing
    (22:44) Skilling Up
    (25:27) What Happened To My Tokens?
    (25:43) Coding Agents Offer Mundane Utility
    ---

    First published:

    April 15th, 2026


    Source:

    https://www.lesswrong.com/posts/w8misLX7KCmLxJM2K/claude-code-codex-and-agentic-coding-7-auto-mode

    ---

    Narrated by TYPE III AUDIO.

  • LessWrong (30+ Karma)

    “Do not conquer what you cannot defend” by habryka

    16.04.2026 | 10 Min.
    Epistemic status: All of the western canon must eventually be re-invented in a LessWrong post. So today we are re-inventing federalism.
    Once upon a time there was a great king. He ruled his kingdom with wisdom and economically literate policies, and prosperity followed. Seeing this, the citizens of nearby kingdoms revolted against their leaders, and organized to join the kingdom of this great king.
    While the kingdom's ability to defend itself against external threats grew with each person who joined the land, the kingdom's ability to defend itself against internal threats did not. One fateful evening, the king bit into a bologna sandwich poisoned by a rival noble. That noble quickly proceeded to behead his political enemies in the name of the dead king. The flag bearing the wise king's portrait known as "the great unifier" still flies in the fortified cities where his successor rules with an iron fist.
    Once upon a time there was a great scientific mind. She developed a new theoretical framework that made large advances on the hardest scientific questions of the day. Seeing the promise of her work, new graduate students, professors, and corporate R&D teams flocked into the field, hungry to [...]
    ---

    First published:

    April 15th, 2026


    Source:

    https://www.lesswrong.com/posts/jinzzbPHshif8nmnw/do-not-conquer-what-you-cannot-defend

    ---

    Narrated by TYPE III AUDIO.
  • LessWrong (30+ Karma)

    “What is the Iliad Intensive?” by Leon Lang, Alexander Gietelink Oldenziel, David Udell

    16.04.2026 | 4 Min.
    Almost two months ago, Iliad announced the Iliad Intensive and Iliad Fellowship. Fellowships are a well-understood unit, but what is an intensive? This post explains this in more detail!
    Comparison. The Iliad Intensive has similarities to ARENA, but focuses more on foundational AI alignment research instead of alignment research engineering. Expect more math and less coding.
    Rhythm. It's currently four weeks long. Five days a week. 10am till 6pm every day, with lunch and an afternoon break. This makes for around 6.5 hours of learning a day, which is at the upper end of how long most people can concentrate deeply within a day. This is why we call it “Intensive”.
    Content. The Iliad Intensive is broken into five clusters, with 20 total modules, one for each day. The clusters and modules in the April iteration are below. We expect to add substantially more topics and material over the coming months. There is much more material than can be covered in a single month, so different Intensives will vary in content.
    Alignment Cluster
    AI Alignment: an Introduction
    Alignment in Practice
    AI Alignment: The Field

    Learning Cluster
    Deep Learning 1
    Deep Learning 2
    Singular Learning [...]

    ---

    First published:

    April 15th, 2026


    Source:

    https://www.lesswrong.com/posts/moG6k8mJiGvH4zc8j/what-is-the-iliad-intensive

    ---

    Narrated by TYPE III AUDIO.
  • LessWrong (30+ Karma)

    “The Mirror Test Is Complicated” by J Bostock

    15.04.2026 | 8 Min.
    The Mirror Test is kind of like Hitler. In any discussion of animal cognition, somebody is going to bring it up. The conversation usually goes like this:
    A: So, most animals can’t recognize themselves in the mirror
    B: Which animals specifically?
    A: Oh, dogs, cats, betta fish, monkeys, that sort of thing. Anyway as I was saying, those animals can’t. But some smart animals can recognize themselves in the mirror.
    B: Such as?
    A: Well, chimpanzees and orangutans for a start.
    B: Makes sense
    A: Not gorillas though, at least not always. But dolphins and elephants can!
    B: Yeah, those animals are smart as well
    A: Magpies can, though crows can't.
    B: Sure, ok
    A: And cleaner wrasse can as well.
    B: The uhh, finger-sized fish? You sure?
    A: Yeah. And also ants.
    B: What.
    What?
    Frans de Waal drew this picture of an orangutan putting lettuce on her head and then actually got it published in a real journal. Based.
    What do we actually mean by the “Mirror Test”
    “The mirror test” elides a bit of a distinction between different kinds of test. There's lots of things you can do which look like “put an animal in front [...]
    ---
    Outline:
    (01:48) What do we actually mean by the Mirror Test
    (03:07) The Complicated Ones
    (03:54) The Unbelievable Ones
    (05:17) Making Sense Of It All
    ---

    First published:

    April 15th, 2026


    Source:

    https://www.lesswrong.com/posts/5eLoZQshfre8DGaxd/the-mirror-test-is-complicated

    ---

    Narrated by TYPE III AUDIO.

  • LessWrong (30+ Karma)

    “Contra Leicht on AI Pauses” by David Scott Krueger (formerly: capybaralet)

    15.04.2026 | 10 Min.
    This is going to be a nerdier article than usual. It's a response to Anton Leicht's blog post “Press Play To Continue”. I disagree with much of it and think it's not very well argued.
    Going section-by-section, Leicht's claims are:
    AI is going pretty damn well all things considered. The argument has two parts:
    It implicitly models pausing AI as resampling from a fixed distribution of possible AI development timelines.

    It argues the current situation is better than average, so resampling is a bad idea.

    Specifically the arguments that things are going well are:
    Minimal compute overhang

    Multipolarity

    Liberal democracies control the supply chain

    Pausing AI isn’t and won’t be popular among centrist politicians, only among the more radical wings of both parties, and this makes it unlikely to get passed.

    A “second best” pause is more likely, and would be worse than nothing.
    It would be unilateral.

    It would not cater to x-risk concerns, and thus will lack critical pieces like e.g. export controls or limits on internal deployment.

    Proposals to pause AI don’t expand the Overton window in a helpful way [...]

    ---
    Outline:
    (03:29) Response to the arguments
    (08:31) General reflections
    ---

    First published:

    April 14th, 2026


    Source:

    https://www.lesswrong.com/posts/HofdmpSHbthzZDFZH/contra-leicht-on-ai-pauses

    ---

    Narrated by TYPE III AUDIO.


About LessWrong (30+ Karma)

Audio narrations of LessWrong posts.