
EP224 Protecting the Learning Machines: From AI Agents to Provenance in MLSecOps
05/12/25 • 30 min
Guest:
- Diana Kelley, CSO at Protect AI
Topics:
- Can you explain the concept of "MLSecOps" as an analogy with DevSecOps, with 'Dev' replaced by 'ML'? This has nothing to do with SecOps, right?
- What are the most critical steps a CISO should prioritize when implementing MLSecOps within their organization? What gets better when you do it?
- How do we adapt traditional security testing, like vulnerability scanning, SAST, and DAST, to effectively assess the security of machine learning models? Can we?
- In the context of AI supply chain security, what is the essential role of third-party assessments, particularly regarding data provenance?
- How can organizations balance the need for security logging in AI systems with the imperative to protect privacy and sensitive data? Do we need to decouple security from safety or privacy?
- What are the primary security risks associated with overprivileged AI agents, and how can organizations mitigate these risks?
- What are the top differences between LLM/chatbot AI security and AI agent security?
Resources:
- “Airline held liable for its chatbot giving passenger bad advice - what this means for travellers”
- “ChatGPT Spit Out Sensitive Data When Told to Repeat ‘Poem’ Forever”
- Secure by Design for AI by Protect AI
- “Securing AI Supply Chain: Like Software, Only Not”
- OWASP Top 10 for Large Language Model Applications
- OWASP Top 10 for AI Agents (draft)
- MITRE ATLAS
- “Demystifying AI Security: New Paper on Real-World SAIF Applications” (and paper)
- LinkedIn Course: Security Risks in AI and ML: Categorizing Attacks and Failure Modes
Previous Episode

EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025
Guests:
- no guests, just us in the studio
Topics:
- At RSA 2025, did we see solid, measurably better outcomes from AI use in security, or mostly just "sizzle" and good ideas with potential?
- Are the promises of an "AI SOC" repeating the mistakes seen with SOAR in previous years regarding fully automated security operations? Does the "AI SOC" work, judging by the RSA show floor?
- How realistic is the vision expressed by some [yes, really!] that AI progress could lead to technical teams, including IT and security, shrinking dramatically or even to zero in a few years?
- Why do companies continue to rely on decades-old or “non-leading” security technologies, and what role does the concept of an "organizational change budget" play in this inertia?
- Is being "AI Native" fundamentally better for security technologies compared to adding AI capabilities to existing platforms, or is the jury still out? Got "an AI-native SIEM"? Be ready to explain how yours is better!
Resources:
- EP172 RSA 2024: Separating AI Signal from Noise, SecOps Evolves, XDR Declines?
- EP119 RSA 2023 - What We Saw, What We Learned, and What We're Excited About
- EP70 Special - RSA 2022 Reflections - Securing the Past vs Securing the Future
- RSA (“RSAI”) Conference 2024 Powered by AI with AI on Top — AI Edition (Hey AI, Is This Enough AI?) [Anton’s RSA 2024 recap blog]
- New Paper: “Future of the SOC: Evolution or Optimization — Choose Your Path” (Paper 4 of 4.5) [talks about the change budget discussed]
Next Episode

EP225 Cross-promotion: The Cyber-Savvy Boardroom Podcast: EP2 Christian Karam on the Use of AI
Hosts:
- David Homovich, Customer Advocacy Lead, Office of the CISO, Google Cloud
- Alicja Cade, Director, Office of the CISO, Google Cloud
Guest:
- Christian Karam, Strategic Advisor and Investor
Resources:
- EP2 Christian Karam on the Use of AI (as aired originally)
- The Cyber-Savvy Boardroom podcast site
- The Cyber-Savvy Boardroom podcast on Spotify
- The Cyber-Savvy Boardroom podcast on Apple Podcasts
- The Cyber-Savvy Boardroom podcast on YouTube
- Now hear this: A new podcast to help boards get cyber savvy (without the jargon)
- Board of Directors Insights Hub
- Guidance for Boards of Directors on How to Address AI Risk