
Engineering Enablement by Abi Noda

Available episodes

5 of 86
  • Lessons from Twilio’s multi-year platform consolidation
    In this episode, host Laura Tacho speaks with Jesse Adametz, Senior Engineering Leader on the Developer Platform at Twilio. Jesse is leading Twilio’s multi-year platform consolidation, unifying tech stacks across large acquisitions and driving migrations at enterprise scale. He discusses platform adoption, the limits of Kubernetes, and how Twilio balances modernization with pragmatism. The conversation also explores treating developer experience as a product, offering “change as a service,” and Twilio’s evolving approach to AI adoption and platform support.
    Where to find Jesse Adametz:
    • LinkedIn: https://www.linkedin.com/in/jesseadametz/
    • X: https://x.com/jesseadametz
    • Website: https://www.jesseadametz.com/
    Where to find Laura Tacho:
    • LinkedIn: https://www.linkedin.com/in/lauratacho/
    • X: https://x.com/rhein_wein
    • Website: https://lauratacho.com/
    • Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course
    In this episode, we cover:
    (00:00) Intro
    (01:30) Jesse’s background and how he ended up at Twilio
    (04:00) What SRE teaches leaders and ICs
    (06:06) Where Twilio started the post-acquisition integration
    (08:22) Why platform migrations can’t follow a straight-line plan
    (10:05) How Twilio balances multiple strategies for migrations
    (12:30) The human side of change: advocacy, training, and alignment
    (17:46) Treating developer experience as a first-class product
    (21:40) What “change as a service” looks like in practice
    (24:57) A mandateless approach: creating voluntary adoption through value
    (28:50) How Twilio demonstrates value with metrics and reviews
    (30:41) Why Kubernetes wasn’t the right fit for all Twilio workloads
    (36:12) How Twilio decides when to expose complexity
    (38:23) Lessons from Kubernetes hype and how AI demands more experimentation
    (44:48) Where AI fits into Twilio’s platform strategy
    (49:45) How guilds fill needs the platform team hasn’t yet met
    (51:17) The future of platform in centralizing knowledge and standards
    (54:32) How Twilio evaluates tools for fit, pricing, and reliability
    (57:53) Where Twilio applies AI in reliability, and where Jesse is skeptical
    (59:26) Laura’s vibe-coded side project built on Twilio
    (1:01:11) How external lessons shape Twilio’s approach to platform support and docs
    Referenced:
    • The AI Measurement Framework
    • Experian
    • Transact-SQL - Wikipedia
    • Twilio
    • Kubernetes
    • Copilot
    • Claude Code
    • Windsurf
    • Cursor
    • Bedrock
    --------  
    1:06:15
  • Driving enterprise-wide AI tool adoption
    In this episode of Engineering Enablement, host Laura Tacho talks with Bruno Passos, Product Lead for Developer Experience at Booking.com, about how the company is rolling out AI tools across a 3,000-person engineering team.
    Bruno shares how Booking.com set ambitious innovation goals, why cultural change mattered as much as technology, and the education practices that turned hesitant developers into daily users. He also reflects on the early barriers, from low adoption and knowledge gaps to procurement hurdles, and explains the interventions that worked, including learning paths, hackathon-style workshops, Slack communities, and centralized procurement. The result is that Booking.com now sits in the top 25 percent of companies for AI adoption.
    Where to find Bruno Passos:
    • LinkedIn: https://www.linkedin.com/in/brpassos/
    • X: https://x.com/brunopassos
    Where to find Laura Tacho:
    • LinkedIn: https://www.linkedin.com/in/lauratacho/
    • X: https://x.com/rhein_wein
    • Website: https://lauratacho.com/
    • Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course
    In this episode, we cover:
    (00:00) Intro
    (01:09) Bruno’s role at Booking.com and an overview of the business
    (02:19) Booking.com’s goals when introducing AI tooling
    (03:26) Why Booking.com made such an ambitious innovation ratio goal
    (06:46) The beginning of Booking.com’s journey with AI
    (08:54) Why the initial adoption of Cody was low
    (13:17) How education and enablement fueled adoption
    (15:48) The importance of a top-down cultural change for AI adoption
    (17:38) The ongoing journey of determining the right metrics
    (21:44) Measuring the longer-term impact of AI
    (27:04) How Booking.com solved internal bottlenecks to testing new tools
    (32:10) Booking.com’s framework for evaluating new tools
    (35:50) The state of adoption at Booking.com and efforts to expand AI use
    (37:07) What’s still undetermined about AI’s impact on PR/MR quality
    (39:48) How Booking.com is addressing lagging adoption and monitoring churn
    (43:24) How Booking.com’s Slack community lowers friction for questions and support
    (44:35) Closing thoughts on what’s next for Booking.com’s AI plan
    Referenced:
    • Measuring AI code assistants and agents
    • DX Core 4 Framework
    • Booking.com
    • Sourcegraph Search
    • Cody | AI coding assistant from Sourcegraph
    • Greyson Junggren - DX | LinkedIn
    --------  
    46:50
  • Measuring AI code assistants and agents with the AI Measurement Framework
    In this episode of Engineering Enablement, DX CTO Laura Tacho and CEO Abi Noda break down how to measure developer productivity in the age of AI using DX’s AI Measurement Framework. Drawing on research with industry leaders, vendors, and hundreds of organizations, they explain how to move beyond vendor hype and headlines to make data-driven decisions about AI adoption.
    They cover why some fundamentals of productivity measurement remain constant, the pitfalls of over-relying on flawed metrics like acceptance rate, and how to track AI’s real impact across utilization, quality, and cost. The conversation also explores measuring agentic workflows, expanding the definition of “developer” to include new AI-enabled contributors, and avoiding second-order effects like technical debt and slowed PR throughput.
    Whether you’re rolling out AI coding tools, experimenting with autonomous agents, or just trying to separate signal from noise, this episode offers a practical roadmap for understanding AI’s role in your organization—and ensuring it delivers sustainable, long-term gains.
    Where to find Laura Tacho:
    • X: https://x.com/rhein_wein
    • LinkedIn: https://www.linkedin.com/in/lauratacho/
    • Website: https://lauratacho.com/
    Where to find Abi Noda:
    • LinkedIn: https://www.linkedin.com/in/abinoda
    • Substack: https://substack.com/@abinoda
    In this episode, we cover:
    (00:00) Intro
    (01:26) The challenge of measuring developer productivity in the AI age
    (04:17) Measuring productivity in the AI era — what stays the same and what changes
    (07:25) How to use DX’s AI Measurement Framework
    (13:10) Measuring AI’s true impact from adoption rates to long-term quality and maintainability
    (16:31) Why acceptance rate is flawed — and DX’s approach to tracking AI-authored code
    (18:25) Three ways to gather measurement data
    (21:55) How Google measures time savings and why self-reported data is misleading
    (24:25) How to measure agentic workflows and a case for expanding the definition of developer
    (28:50) A case for not overemphasizing AI’s role
    (30:31) Measuring second-order effects
    (32:26) Audience Q&A: applying metrics in practice
    (36:45) Wrap up: best practices for rollout and communication
    Referenced:
    • DX Core 4 Productivity Framework
    • Measuring AI code assistants and agents
    • AI is making Google engineers 10% more productive, says Sundar Pichai - Business Insider
    --------  
    41:14
  • How to cut through the hype and measure AI’s real impact (Live from LeadDev London)
    In this special episode of the Engineering Enablement podcast, recorded live at LeadDev London, DX CTO Laura Tacho explores the growing gap between AI headlines and the reality inside engineering teams—and what leaders can do to close it.
    Laura shares data from nearly 39,000 developers across 184 companies, highlights the Core 4 and introduces the AI Measurement Framework, and offers a practical playbook for using data to improve developer experience, measure AI’s true impact, and build better software without compromising long-term performance.
    Where to find Laura Tacho:
    • X: https://x.com/rhein_wein
    • LinkedIn: https://www.linkedin.com/in/lauratacho/
    • Website: https://lauratacho.com/
    In this episode, we cover:
    (00:00) Intro: Laura’s keynote from LDX3
    (01:44) The problem with asking “how much faster can we go with AI?”
    (03:02) How the disappointment gap creates barriers to AI adoption
    (06:20) What AI adoption looks like at top-performing organizations
    (07:53) What leaders must do to turn AI into meaningful impact
    (10:50) Why building better software with AI still depends on fundamentals
    (12:03) An overview of the DX Core 4 Framework
    (13:22) Why developer experience is the biggest performance lever
    (15:12) How Block used Core 4 and DXI to identify 500,000 hours in time savings
    (16:08) How to get started with Core 4
    (17:32) Measuring AI with the AI Measurement Framework
    (21:45) Final takeaways and how to get started with confidence
    Referenced:
    • LDX3 by LeadDev | The Festival of Software Engineering Leadership | London
    • Software engineering with LLMs in 2025: reality check
    • SPACE framework, PRs per engineer, AI research
    • The AI adoption playbook: Lessons from Microsoft's internal strategy
    • DX Core 4 Productivity Framework
    • Nicole Forsgren
    • Margaret-Anne Storey
    • Dropbox.com
    • Etsy
    • Pfizer
    • Drew Houston - Dropbox | LinkedIn
    • Block
    • Cursor
    • Dora.dev
    • Sourcegraph
    • Booking.com
    --------  
    23:26
  • Unpacking METR’s findings: Does AI slow developers down?
    In this episode of the Engineering Enablement podcast, host Abi Noda is joined by Quentin Anthony, Head of Model Training at Zyphra and a contributor at EleutherAI. Quentin participated in METR’s recent study on AI coding tools, which revealed that developers often slowed down when using AI—despite feeling more productive. He and Abi unpack the unexpected results of the study, which tasks AI tools actually help with, and how engineering teams can adopt them more effectively by focusing on task-level fit and developing better digital hygiene.
    Where to find Quentin Anthony:
    • LinkedIn: https://www.linkedin.com/in/quentin-anthony/
    • X: https://x.com/QuentinAnthon15
    Where to find Abi Noda:
    • LinkedIn: https://www.linkedin.com/in/abinoda
    In this episode, we cover:
    (00:00) Intro
    (01:32) A brief overview of Quentin’s background and current work
    (02:05) An explanation of METR and the study Quentin participated in
    (11:02) Surprising results of the METR study
    (12:47) Quentin’s takeaways from the study’s results
    (16:30) How developers can avoid bloated code bases through self-reflection
    (19:31) Signs that you’re not making progress with a model
    (21:25) What is “context rot”?
    (23:04) Advice for combating context rot
    (25:34) How to make the most of your idle time as a developer
    (28:13) Developer hygiene: the case for selectively using AI tools
    (33:28) How to interact effectively with new models
    (35:28) Why organizations should focus on tasks that AI handles well
    (38:01) Where AI fits in the software development lifecycle
    (39:40) How to approach testing with models
    (40:31) What makes models different
    (42:05) Quentin’s thoughts on agents
    Referenced:
    • DX Core 4 Productivity Framework
    • Zyphra
    • EleutherAI
    • METR
    • Cursor
    • Claude
    • LibreChat
    • Google Gemini
    • Introducing OpenAI o3 and o4-mini
    • METR’s study on how AI affects developer productivity
    • Quentin Anthony on X: “I was one of the 16 devs in this study.”
    • Context rot from Hacker News
    • Tracing the thoughts of a large language model
    • Kimi
    • Grok 4 | xAI
    --------  
    43:45

About Engineering Enablement by Abi Noda

This is a weekly podcast focused on developer productivity and the teams and leaders dedicated to improving it. Topics include in-depth interviews with Platform and DevEx teams, as well as the latest research and approaches to measuring developer productivity. The EE podcast is hosted by Abi Noda, the founder and CEO of DX (getdx.com) and a published researcher focused on developing measurement methods to help organizations improve developer experience and productivity.
