
EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side
05/06/24 • 27 min
Guest:
- Elie Bursztein, Google DeepMind Cybersecurity Research Lead, Google
Topics:
- Given your experience, how afraid or nervous are you about the use of GenAI by criminals (PoisonGPT, WormGPT and such)?
- What can a top-tier state-sponsored threat actor do better with LLMs? Are there “extra scary” examples, real or hypothetical?
- Do we really have to care about this “dangerous capabilities” stuff (CBRN)? Really really?
- Why do you think that AI favors the defenders? Is this a long-term or a short-term view?
- What about vulnerability discovery? Some people are freaking out that LLMs will discover new zero days. Is this a real risk?
Resources:
- “How Large Language Models Are Reshaping the Cybersecurity Landscape” RSA 2024 presentation by Elie (May 6 at 9:40AM)
- “Lessons Learned from Developing Secure AI Workflows” RSA 2024 presentation by Elie (May 8, 2:25PM)
- EP50 The Epic Battle: Machine Learning vs Millions of Malicious Documents
- EP40 2021: Phishing is Solved?
- EP135 AI and Security: The Good, the Bad, and the Magical
- EP170 Redefining Security Operations: Practical Applications of GenAI in the SOC
- EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It
- PyRIT LLM red-teaming tool
- Accelerating incident response using generative AI
- Threat Actors are Interested in Generative AI, but Use Remains Limited
- OpenAI’s Approach to Frontier Risk
Previous Episode

EP170 Redefining Security Operations: Practical Applications of GenAI in the SOC
Guest:
- Payal Chakravarty, Director of Product Management, Google SecOps, Google Cloud
Topics:
- What are the different use cases for GenAI in security operations, and how can organizations prioritize them for maximum impact?
- We’ve heard a lot of worries from people that GenAI will replace junior team members. How do you see GenAI enabling more people to be part of the security mission?
- What are the challenges and risks associated with using GenAI in security operations?
- We’ve been down the road of automation for SOCs before: UEBA and SOAR both claimed it, and AI looks a lot like those but with way more matrix math. What are we going to get right this time that we didn’t quite live up to last time(s) around?
- Imagine a SOC or a D&R team of 2029. What AI-based magic is routine at this time? What new things are done by AI? What do humans do?
Resources:
- Live video (LinkedIn, YouTube) [live audio is not great in these]
- Practical use cases for AI in security operations, Cloud Next 2024 session by Payal
- EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It
- EP169 Google Cloud Next 2024 Recap: Is Cloud an Island, So Much AI, Bots in SecOps
- 15 must-attend security sessions at Next '24
Next Episode

EP172 RSA 2024: Separating AI Signal from Noise, SecOps Evolves, XDR Declines?
Guests:
- None
Topics:
- What have we seen at RSA 2024?
- Which buzzwords are rising (AI! AI! AI!) and which ones are falling (hi XDR)?
- Is this really all about AI? Is this all marketing?
- Security platforms or focused tools, who is winning at RSA?
- Anything fun going on with SecOps?
- Is cloud security still largely about CSPM?
- Any interesting presentations spotted?
Resources:
- EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side (RSA 2024 episode 1 of 2)
- “From Assistant to Analyst: The Power of Gemini 1.5 Pro for Malware Analysis” blog
- “Decoupled SIEM: Brilliant or Stupid?” blog
- “Introducing Google Security Operations: Intel-driven, AI-powered SecOps” blog
- “Advancing the art of AI-driven security with Google Cloud” blog