EP232 The Human Element of Privacy: Protecting High-Risk Targets and Designing Systems
Guest: Sarah Aoun, Privacy Engineer, Google Topic: You have had a fascinating career since we [Tim] graduated from college together – you mentioned before we met that you’ve consulted with a literal world leader on his personal digital security footprint. Maybe tell us how you got into this field of helping organizations treat sensitive information securely and how that led to helping keep targeted individuals secure? You also work as a privacy engineer on Fuschia, Google’s new operating system kernel. How did you go from human rights and privacy to that? What are the key privacy considerations when designing an operating system for “ambient computing”? How do you design privacy into something like that? More importantly, not only “how do you do it”, but how do you convince people that you did do it? When we talk about "higher risk" individuals, the definition can be broad. How can an average person or someone working in a seemingly less sensitive role better assess if they might be a higher-risk target? What are the subtle indicators? Thinking about the advice you give for personal security beyond passwords and multi-factor auth, how much of effective personal digital hygiene comes down to behavioral changes versus purely technical solutions? Given your deep understanding of both individual security needs and large-scale OS design, what's one thing you wish developers building cloud services or applications would fundamentally prioritize about user privacy? Resources: Google privacy controls Advanced protection program
--------
31:37
EP231 Beyond the Buzzword: Practical Detection as Code in the Enterprise
Guest: David French, Staff Adoption Engineer, Google Cloud Topic: Detection as code is one of those meme phrases I hear a lot, but I’m not sure everyone means the same thing when they say it. Could you tell us what you mean by it, and what upside it has for organizations in your model of it? What gets better for security teams and security outcomes when you start managing in a DAC world? What is primary, actual code or using SWE-style process for detection work? Not every SIEM has a good set of APIs for this, right? What’s a team to do in a world of no or low API support for this model? If we’re talking about as-code models, one of the important parts of regular software development is testing. How should teams think about testing their detection corpus? Where do we even start? Smoke tests? Unit tests? You talk about a rule schema–you might also think of it in code terms as a standard interface on the detection objects–how should organizations think about standardizing this, and why should they? If we’re into a world of detection rules as code and detections as code, can we also think about alert handling via code? This is like SOAR but with more of a software engineering approach, right? One more thing that stood out to me in your presentation was the call for sharing detection content. Is this between vendors, vendors and end users? Resources: Can We Have “Detection as Code”? Testing in Detection Engineering (Part 8) “So Good They Can't Ignore You: Why Skills Trump Passion in the Quest for Work You Love” book EP202 Beyond Tiered SOCs: Detection as Code and the Rise of Response Engineering EP181 Detection Engineering Deep Dive: From Career Paths to Scaling SOC Teams EP123 The Good, the Bad, and the Epic of Threat Detection at Scale with Panther Getting Started with Detection-as-Code and Google SecOps Detection Engineering Demystified: Building Custom Detections for GitHub Enterprise From soup to nuts: Building a Detection-as-Code pipeline David French - Medium Blog Detection Engineering Maturity Matrix
--------
30:40
EP230 AI Red Teaming: Surprises, Strategies, and Lessons from Google
Guest: Daniel Fabian, Principal Digital Arsonist, Google Topic: Your RSA talk highlights lessons learned from two years of AI red teaming at Google. Could you share one or two of the most surprising or counterintuitive findings you encountered during this process? What are some of the key differences or unique challenges you've observed when testing AI-powered applications compared to traditional software systems? Can you provide an example of a specific TTP that has proven effective against AI systems and discuss the implications for security teams looking to detect it? What practical advice would you give to organizations that are starting to incorporate AI red teaming into their security development lifecycle? What are some initial steps or resources you would recommend they explore to deepen their understanding of this evolving field? Resources: Video (LinkedIn, YouTube) Google's AI Red Team: the ethical hackers making AI safer EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes? EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw EP198 GenAI Security: Unseen Attack Surfaces & AI Pentesting Lessons Lessons from AI Red Teaming – And How to Apply Them Proactively [RSA 2025]
--------
26:11
EP229 Beyond the Hype: Debunking Cloud Breach Myths (and What DBIR Says Now)
Guest: Alex Pinto, Associate Director of Threat Intelligence, Verizon Business, Lead the Verizon Data Breach Report Topics: How would you define “a cloud breach”? Is that a real (and different) thing? Are cloud breaches just a result of leaked keys and creds? If customers are responsible for 99% of cloud security problems, is cloud breach really about a customer being breached? Are misconfigurations really responsible for so many cloud security breaches? How are we still failing at configuration? What parts of DBIR are not total “groundhog day”? Something about vuln exploitation vs credential abuse in today’s breaches–what’s driving the shifts we’re seeing? DBIR Are we at peak ransomware? Will ransomware be here in 20 years? Will we be here in 20 years talking about it? How is AI changing the breach report, other than putting in hilarious footnotes about how the report is for humans to read and and is written by actual humans? Resources: Video (LinkedIn, YouTube) Verizon DBIR 2025 EP222 From Post-IR Lessons to Proactive Security: Deconstructing Mandiant M-Trends EP205 Cybersecurity Forecast 2025: Beyond the Hype and into the Reality EP112 Threat Horizons - How Google Does Threat Intelligence EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025
--------
35:05
EP228 SIEM in 2025: Still Hard? Reimagining Detection at Cloud Scale and with More Pipelines
Guest Alan Braithwaite, Co-founder and CTO @ RunReveal Topics: SIEM is hard, and many vendors have discovered this over the years. You need to get storage, security and integration complexity just right. You also need to be better than incumbents. How would you approach this now? Decoupled SIEM vs SIEM/EDR/XDR combo. These point in the opposite directions, which side do you think will win? In a world where data volumes are exploding, especially in cloud environments, you're building a SIEM with ClickHouse as its backend, focusing on both parsed and raw logs. What's the core advantage of this approach, and how does it address the limitations of traditional SIEMs in handling scale? Cribl, Bindplane and “security pipeline vendors” are all the rage. Won’t it be logical to just include this into a modern SIEM? You're envisioning a 'Pipeline QL' that compiles to SQL, enabling 'detection in SQL.' This sounds like a significant shift, and perhaps not to the better? (Anton is horrified, for once) How does this approach affect detection engineering? With Sigma HQ support out-of-the-box, and the ability to convert SPL to Sigma, you're clearly aiming for interoperability. How crucial is this approach in your vision, and how do you see it benefiting the security community? What is SIEM in 2025 and beyond? What’s the endgame for security telemetry data? Is this truly SIEM 3.0, 4.0 or whatever-oh? Resources: EP197 SIEM (Decoupled or Not), and Security Data Lakes: A Google SecOps Perspective EP123 The Good, the Bad, and the Epic of Threat Detection at Scale with Panther EP190 Unraveling the Security Data Fabric: Need, Benefits, and Futures “20 Years of SIEM: Celebrating My Dubious Anniversary” blog “RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check” blog tl;dr security newsletter Introducing a RunReveal Model Context Protocol Server! MCP: Building Your SecOps AI Ecosystem AI Runbooks for Google SecOps: Security Operations with Model Context Protocol
Cloud Security Podcast by Google focuses on security in the cloud, delivering security from the cloud, and all things at the intersection of security and cloud. Of course, we will also cover what we are doing in Google Cloud to help keep our users' data safe and workloads secure.
We’re going to do our best to avoid security theater, and cut to the heart of real security questions and issues. Expect us to question threat models and ask if something is done for the data subject’s benefit or just for organizational benefit.
We hope you’ll join us if you’re interested in where technology overlaps with process and bumps up against organizational design. We’re hoping to attract listeners who are happy to hear conventional wisdom questioned, and who are curious about what lessons we can and can’t keep as the world moves from on-premises computing to cloud computing.