
Cloud Security Podcast by Google

Anton Chuvakin

Available episodes

5 of 254 results
  • EP252 The Agentic SOC Reality: Governing AI Agents, Data Fidelity, and Measuring Success
    Guests: Alexander Pabst, Deputy Group CISO, Allianz; Lars Koenig, Global Head of D&R, Allianz
    Topics:
    - Moving from a traditional SIEM to an agentic SOC model, especially at a heavily regulated insurer, is a massive undertaking. What did the collaboration model with your vendor look like?
    - Agentic AI introduces a new layer of risk: unconstrained or unintended autonomous action. In the context of Allianz, how did you establish the governance framework for the SOC alert triage agents? Where did you draw the line between fully automated action and a mandatory "human-in-the-loop" for investigation or response?
    - Agentic triage is only as good as the data it analyzes. From your perspective, what were the biggest challenges - and wins - in ensuring data fidelity, freshness, and completeness in your SIEM to fuel reliable agent decisions?
    - We've been talking about SOC automation for years, but this agentic wave feels different. As a deputy CISO, what was your primary, non-negotiable goal for the agent? Was it purely Mean Time to Respond (MTTR) reduction, or was the bigger strategic prize to fundamentally re-skill and uplevel your Tier 2/3 analysts by removing the low-value alert noise?
    - As you built this out, were there any surprises along the way that left you shaking your head or laughing at unexpected AI behaviors?
    - We felt a major lack of proof - Anton kept asking for pudding - that any of the agentic SOC vendors we saw at RSA had actually achieved anything beyond hype! When it comes to your org, how are you measuring agent success? What are the key metrics you are using right now?
    (A minimal human-in-the-loop triage gate is sketched after the episode list.)
    Resources:
    - EP238 Google Lessons for Using AI Agents for Securing Our Enterprise
    - EP242 The AI SOC: Is This The Automation We've Been Waiting For?
    - EP249 Data First: What Really Makes Your SOC 'AI Ready'?
    - EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI
    - "Simple to Ask: Is Your SOC AI Ready? Not Simple to Answer!" blog
    - "How Google Does It: Building AI agents for cybersecurity and defense" blog
    - Company annual report to look for risk
    - "How to Win Friends and Influence People" by Dale Carnegie
    - "Will It Make the Boat Go Faster?" book
    --------  
    35:53
  • EP251 Beyond Fancy Scripts: Can AI Red Teaming Find Truly Novel Attacks?
    Guest: Ari Herbert-Voss, CEO at RunSybil
    Topics:
    - The market already has Breach and Attack Simulation (BAS) for testing known TTPs. You're calling this 'AI-powered' red teaming. Is this just a fancy LLM stringing together known attacks, or is there a genuine agent here that can discover a truly novel attack path a human hasn't scripted for it?
    - Let's talk about the 'so what?' problem. Pentest reports are famous for becoming shelf-ware. How do you turn a complex AI finding into an actionable ticket for a developer, and, more importantly, how do you help a CISO decide which of the thousand 'criticals' to actually fix first?
    - You're asking customers to unleash a 'hacker AI' in their production environment. That's terrifying. What are the 'do no harm' guardrails? How do you guarantee your AI won't accidentally rm -rf a critical server or cause a denial of service while it's 'exploring'?
    - You mentioned the AI is particularly good at finding authentication bugs. Why that specific category? What's the secret sauce there, and what's the reaction from customers when you show them those types of flaws?
    - Is this AI meant to replace a human red teamer, or make them better? Does it automate the boring stuff so experts can focus on creative business-logic attacks, or is the ultimate goal to automate the entire red team function away?
    - Is this just about finding holes, or are you closing the loop for the blue team? Can the attack paths your AI finds be automatically translated into high-fidelity detection rules? Is the end goal a continuous purple team engine that's constantly training our defenses?
    - Also, what about fixing? What makes your findings more fixable? What will happen to red team testing in 2-3 years if this technology gets better?
    (A minimal 'do no harm' command guardrail is sketched after the episode list.)
    Resources:
    - Kim Zetter's "Zero Day" blog
    - EP230 AI Red Teaming: Surprises, Strategies, and Lessons from Google
    - EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes?
    - EP68 How We Attack AI? Learn More at Our RSA Panel!
    - EP71 Attacking Google to Defend Google: How Google Does Red Team
    --------  
    25:15
  • EP250 The End of "Collect Everything"? Moving from Centralization to Data Access?
    Guest: Balazs Scheidler, CEO at Axoflow, original founder of syslog-ng
    Topics:
    - Are we really moving toward "access to security data" and away from "centralizing the data"? How do we detect without the same storage for all logs?
    - Is the data pipeline part of the SIEM, or is it standalone? Will this just collapse into SIEM soon?
    - What were the issues with log pipelines in the past?
    - What about enrichment? Why do it in a pipeline, and not in a SIEM?
    - We are unable to share enough practices between security teams. How are we fixing that? Are pipelines part of the answer?
    - Do you have a piece of advice for people who want to do more than save on their SIEM costs?
    (A minimal pipeline enrichment stage is sketched after the episode list.)
    Resources:
    - EP197 SIEM (Decoupled or Not), and Security Data Lakes: A Google SecOps Perspective
    - EP190 Unraveling the Security Data Fabric: Need, Benefits, and Futures
    - EP228 SIEM in 2025: Still Hard? Reimagining Detection at Cloud Scale and with More Pipelines
    - Axoflow podcast and Anton on it
    - "Decoupled SIEM: Where I Think We Are Now?" blog
    - "Decoupled SIEM: Brilliant or Stupid?" blog
    - "Output-driven SIEM — 13 years later" blog
    --------  
    29:21
  • EP249 Data First: What Really Makes Your SOC 'AI Ready'?
    Guest: Monzy Merza, co-founder and CEO at Crogl
    Topics:
    - We often hear about the aspirational idea of an "IronMan suit" for the SOC: a system that empowers analysts to be faster and more effective. What does this ideal future of security operations look like from your perspective, and what are the primary obstacles preventing SOCs from achieving it today?
    - You've also raised a metaphor of AI in the SOC as a "Dr. Jekyll and Mr. Hyde" situation. Could you walk us through what you see as the "Jekyll" - the noble, beneficial promise of AI - and what factors can turn it into the dangerous "Mr. Hyde"?
    - Let's drill down into the heart of the "Mr. Hyde" problem: the data. Many believe that AI can fix a team's messy data, but you've noted that "it's all about the data, duh." What's the story?
    - "AI-ready SOC": what is the foundational work a SOC needs to do to ensure its data is AI-ready, and what happens when it skips this step? And is there anything we can do to use AI to help with this foundational problem?
    - How do we measure progress toward an AI SOC? What gets better, and when? How would we know? What SOC metrics will show improvement? Will anything get worse?
    (A minimal data-readiness check is sketched after the episode list.)
    Resources:
    - EP242 The AI SOC: Is This The Automation We've Been Waiting For?
    - EP170 Redefining Security Operations: Practical Applications of GenAI in the SOC
    - EP227 AI-Native MDR: Betting on the Future of Security Operations?
    - EP236 Accelerated SIEM Journey: A SOC Leader's Playbook for Modernization and AI
    - EP238 Google Lessons for Using AI Agents for Securing Our Enterprise
    - "Simple to Ask: Is Your SOC AI Ready? Not Simple to Answer!" blog
    - Nassim Taleb "Antifragile" book
    - "AI Superpowers" book
    - "Attention Is All You Need" paper
    --------  
    30:37
  • EP248 Cloud IR Tabletop Wins: How to Stop Playing Security Theater and Start Practicing
    Guest: Jibran Ilyas, Director for Incident Response at Google Cloud
    Topics:
    - What is this tabletop thing? Please tell us about running a good security incident tabletop.
    - Why are tabletops for incident response preparedness so amazingly effective, yet rarely done well? This is cheap, easy, and useful, so why do so many fail to do it? Why are tabletops seen as something of an elite pursuit?
    - What's your favorite Cloud-centric scenario for tabletop exercises? Ransomware? But there is little ransomware in the cloud, no?
    - What are other good cloud tabletop scenarios?
    (A minimal tabletop inject timeline is sketched after the episode list.)
    Resources:
    - EP60 Impersonating Service Accounts in GCP and Beyond: Cloud Security Is About IAM?
    - EP179 Teamwork Under Stress: Expedition Behavior in Cybersecurity Incident Response
    - EP222 From Post-IR Lessons to Proactive Security: Deconstructing Mandiant M-Trends
    - EP177 Cloud Incident Confessions: Top 5 Mistakes Leading to Breaches from Mandiant
    - EP158 Ghostbusters for the Cloud: Who You Gonna Call for Cloud Forensics
    - EP98 How to Cloud IR or Why Attackers Become Cloud Native Faster?
    --------  
    32:42
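
The sketches below are purely illustrative additions, not material from the episodes. First, for EP252's question about where to draw the line between fully automated action and a mandatory human-in-the-loop: a minimal policy gate, assuming hypothetical alert fields (severity, agent confidence, proposed action) and thresholds rather than any vendor's or Allianz's actual implementation.

```python
# Hypothetical governance gate for an agentic SOC triage step.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    AUTO_CLOSE = "auto_close"        # agent may act alone
    AUTO_ENRICH = "auto_enrich"      # agent may keep gathering context alone
    HUMAN_REVIEW = "human_review"    # mandatory human-in-the-loop


@dataclass
class TriageVerdict:
    alert_id: str
    severity: str          # "low" | "medium" | "high" | "critical"
    confidence: float      # agent's self-reported confidence, 0..1
    proposed_action: str   # e.g. "close", "isolate_host", "disable_account"


# Example policy: anything destructive, high-severity, or low-confidence
# never runs autonomously.
DESTRUCTIVE_ACTIONS = {"isolate_host", "disable_account", "block_ip"}


def route(verdict: TriageVerdict, min_confidence: float = 0.9) -> Action:
    """Decide whether the agent may act autonomously on this verdict."""
    if verdict.proposed_action in DESTRUCTIVE_ACTIONS:
        return Action.HUMAN_REVIEW
    if verdict.severity in ("high", "critical"):
        return Action.HUMAN_REVIEW
    if verdict.confidence < min_confidence:
        return Action.AUTO_ENRICH    # gather more context, don't decide yet
    return Action.AUTO_CLOSE


print(route(TriageVerdict("alrt-001", "low", 0.97, "close")))  # Action.AUTO_CLOSE
```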
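
For EP251's "do no harm" question: a minimal sketch of a guardrail that checks every command an autonomous red-team agent proposes against the engagement scope and a denylist of destructive patterns. Host names and patterns are hypothetical, not RunSybil's implementation.

```python
# Hypothetical pre-execution guardrail for an autonomous red-team agent.
import re

IN_SCOPE_HOSTS = {"web-01.test.internal", "web-02.test.internal"}  # engagement scope
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-rf\b",            # mass deletion
    r"\bdd\s+if=",              # disk overwrite
    r"\b(shutdown|reboot)\b",   # availability impact
    r"drop\s+table",            # destructive SQL
]


def is_safe(target: str, command: str) -> bool:
    """Reject anything out of scope or matching a destructive pattern."""
    if target not in IN_SCOPE_HOSTS:
        return False
    return not any(re.search(p, command, re.IGNORECASE) for p in DESTRUCTIVE_PATTERNS)


assert is_safe("web-01.test.internal", "curl -s http://169.254.169.254/latest/meta-data/")
assert not is_safe("web-01.test.internal", "rm -rf /var/www")
assert not is_safe("db-09.prod.internal", "id")   # out of engagement scope
```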
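
For EP250's enrichment-in-the-pipeline question: a minimal sketch of an enrichment stage that attaches asset context before events reach the SIEM, so routing and every downstream consumer see the same enriched record. The field names and asset inventory are assumptions, not Axoflow's or any SIEM's schema.

```python
# Hypothetical log-pipeline stage: enrich, then route before SIEM ingestion.
ASSET_INVENTORY = {                      # normally an external CMDB/lookup
    "10.0.5.17": {"owner": "payments", "criticality": "high"},
}


def enrich(event: dict) -> dict:
    """Attach asset context in the pipeline, once, for all consumers."""
    asset = ASSET_INVENTORY.get(event.get("src_ip"), {})
    event["asset_owner"] = asset.get("owner", "unknown")
    event["asset_criticality"] = asset.get("criticality", "unknown")
    return event


def route(event: dict) -> str:
    """Send high-criticality events to the SIEM, the rest to cheaper storage."""
    return "siem" if event["asset_criticality"] == "high" else "data_lake"


evt = enrich({"src_ip": "10.0.5.17", "msg": "failed ssh login"})
print(route(evt))   # "siem"
```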
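
For EP249's "AI-ready data" question: a minimal sketch of a readiness check that measures field completeness and freshness of a log source before an agent is pointed at it. The required fields and lag threshold are hypothetical.

```python
# Hypothetical "is this log source AI-ready?" check.
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = ("timestamp", "src_ip", "user", "action")
MAX_LAG = timedelta(minutes=15)


def readiness_report(events):
    """Per-field completeness plus how stale the newest event is."""
    total = len(events)
    completeness = {
        f: sum(1 for e in events if e.get(f)) / total for f in REQUIRED_FIELDS
    }
    newest = max(e["timestamp"] for e in events if e.get("timestamp"))
    lag = datetime.now(timezone.utc) - newest
    return {"completeness": completeness, "lag": lag, "fresh": lag <= MAX_LAG}


now = datetime.now(timezone.utc)
events = [
    {"timestamp": now - timedelta(minutes=3), "src_ip": "10.0.0.8",
     "user": "alice", "action": "login"},
    {"timestamp": now - timedelta(minutes=2), "src_ip": "10.0.0.9",
     "user": None, "action": "login"},       # missing user field
]
print(readiness_report(events))
```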
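
For EP248: a minimal sketch of a cloud IR tabletop expressed as a timed list of injects plus the question the facilitator asks at each step. The scenario content is hypothetical, not a Google Cloud or Mandiant playbook.

```python
# Hypothetical tabletop skeleton: timed injects with discussion prompts.
INJECTS = [
    ("T+0:00", "Alert: service account key used from an unfamiliar ASN",
     "Who triages this, and where do they look first?"),
    ("T+0:30", "New compute instances appear in a region you never use",
     "How do we confirm scope, and who can freeze the project?"),
    ("T+1:00", "Storage buckets copied to an external account; exfiltration suspected",
     "When do we notify legal, leadership, and the cloud provider?"),
]


def run_tabletop(injects):
    for t, inject, question in injects:
        print(f"[{t}] INJECT:  {inject}")
        print(f"        DISCUSS: {question}\n")


run_tabletop(INJECTS)
```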


About Cloud Security Podcast by Google

Cloud Security Podcast by Google focuses on security in the cloud, delivering security from the cloud, and all things at the intersection of security and cloud. Of course, we will also cover what we are doing in Google Cloud to help keep our users' data safe and workloads secure. We're going to do our best to avoid security theater, and cut to the heart of real security questions and issues. Expect us to question threat models and ask if something is done for the data subject's benefit or just for organizational benefit. We hope you'll join us if you're interested in where technology overlaps with process and bumps up against organizational design. We're hoping to attract listeners who are happy to hear conventional wisdom questioned, and who are curious about what lessons we can and can't keep as the world moves from on-premises computing to cloud computing.
Podcast website
