AI Vulnerabilities: ML Supply Chains to LLM and Agent Exploits

02/24/25 • 24 min

The MLSecOps Podcast

Full transcript with links to resources available at https://mlsecops.com/podcast/ai-vulnerabilities-ml-supply-chains-to-llm-and-agent-exploits

Join host Dan McInerney and AI security expert Sierra Haex as they explore the evolving challenges of AI security. They discuss vulnerabilities in ML supply chains, the risks in tools like Ray and untested AI model files, and how traditional security measures intersect with emerging AI threats. The conversation also covers the rise of open-source models like DeepSeek and the security implications of deploying autonomous AI agents, offering critical insights for anyone looking to secure distributed AI systems.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard: Open Source Security Toolkit for LLM Interactions
Huntr: The World's First AI/Machine Learning Bug Bounty Platform


Previous Episode

Implementing Enterprise AI Governance: Balancing Ethics, Innovation & Risk for Business Success

Full transcript with links to resources available at https://mlsecops.com/podcast/implementing-a-robust-ai-governance-framework-for-business-success

In this episode of the MLSecOps podcast, host Charlie McCarthy sits down with Chris McClean, Global Lead for Digital Ethics at Avanade, to explore the world of responsible AI governance. They discuss how ethical principles, risk management, and robust security practices can be integrated throughout the AI lifecycle—from design and development to deployment and oversight. Learn practical strategies for building resilient AI frameworks, understanding regulatory impacts, and driving innovation safely.


Next Episode

Agentic AI: Tackling Data, Security, and Compliance Risks

Full transcript with links to resources available at https://mlsecops.com/podcast/agentic-ai-tackling-data-security-and-compliance-risks

Join host Diana Kelley and CTO Dr. Gina Guillaume-Joseph as they explore how agentic AI, robust data practices, and zero trust principles drive secure, real-time video analytics at Camio. They discuss why clean data is essential, how continuous model validation can thwart adversarial threats, and the critical balance between autonomous AI and human oversight. Dive into the world of multimodal modeling, ethical safeguards, and what it takes to ensure AI remains both innovative and risk-aware.
