
Unpacking the Cloud Security Alliance AI Controls Matrix

04/16/25 • 35 min

The MLSecOps Podcast

In this episode of the MLSecOps Podcast, we sit down with three expert contributors from the Cloud Security Alliance’s AI Controls Matrix working group. They reveal how this newly released framework addresses emerging AI threats—like model poisoning and adversarial manipulation—through robust technical controls, detailed implementation guidelines, and clear auditing strategies.

Full transcript with links to resources available at https://mlsecops.com/podcast/unpacking-the-cloud-security-alliance-ai-controls-matrix

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard: Open Source Security Toolkit for LLM Interactions
Huntr: The World’s First AI/Machine Learning Bug Bounty Platform


Previous Episode

From Pickle Files to Polyglots: Hidden Risks in AI Supply Chains

Join Keith Hoodlet from Trail of Bits as he dives into AI/ML security, discussing everything from prompt injection and fuzzing techniques to bias testing and compliance challenges.

Full transcript with links to resources available at https://mlsecops.com/podcast/from-pickle-files-to-polyglots-hidden-risks-in-ai-supply-chains

Next Episode

What’s Hot in AI Security at RSA Conference 2025?

What’s really hot at RSA Conference 2025? MLSecOps Community Manager Madi Vorbrich sits down with Protect AI Co-Founder Daryan “D” Dehghanpisheh for a rapid rundown of must-see sessions, booth events, and emerging AI security trends, from GenAI agents to zero-trust AI and million-model scans. Use this episode to build a bulletproof RSA agenda before you land in San Francisco.

Full transcript with links to resources available at https://mlsecops.com/podcast/whats-hot-in-ai-security-at-rsa-conference-2025
