
Implementing Enterprise AI Governance: Balancing Ethics, Innovation & Risk for Business Success
02/14/25 • 38 min
Full transcript with links to resources available at https://mlsecops.com/podcast/implementing-a-robust-ai-governance-framework-for-business-success
In this episode of the MLSecOps podcast, host Charlie McCarthy sits down with Chris McClean, Global Lead for Digital Ethics at Avanade, to explore the world of responsible AI governance. They discuss how ethical principles, risk management, and robust security practices can be integrated throughout the AI lifecycle—from design and development to deployment and oversight. Learn practical strategies for building resilient AI frameworks, understanding regulatory impacts, and driving innovation safely.
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
Previous Episode

Unpacking Generative AI Red Teaming and Practical Security Solutions
Full transcript with links to resources available at https://mlsecops.com/podcast/unpacking-generative-ai-red-teaming-and-practical-security-solutions
In this episode, we explore LLM red teaming beyond simple “jailbreak” prompts with special guest Donato Capitella of WithSecure Consulting. You’ll learn why vulnerabilities live in context—how LLMs interact with users, tools, and documents—and discover best practices for mitigating attacks like prompt injection. Our guest also previews an open-source tool for automating security tests on LLM applications.
Next Episode

AI Vulnerabilities: ML Supply Chains to LLM and Agent Exploits
Full transcript with links to resources available at https://mlsecops.com/podcast/ai-vulnerabilities-ml-supply-chains-to-llm-and-agent-exploits
Join host Dan McInerney and AI security expert Sierra Haex as they explore the evolving challenges of AI security. They discuss vulnerabilities in ML supply chains, the risks in tools like Ray and untested AI model files, and how traditional security measures intersect with emerging AI threats. The conversation also covers the rise of open-source models like DeepSeek and the security implications of deploying autonomous AI agents, offering critical insights for anyone looking to secure distributed AI systems.