
The MLSecOps Podcast Season 2 Finale
09/07/24 • 40 min
This compilation contains highlights from episodes throughout Season 2 of the MLSecOps Podcast, and it's a great one for community members who are new to the show. If a clip from this highlights reel is especially interesting to you, note the name of the original episode it came from and check out that full-length episode for a deeper dive.
Extending enormous thanks to everyone who has supported this show, including our audience, Protect AI hosts, and stellar expert guests. Stay tuned for Season 3!
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
Previous Episode

Exploring Generative AI Risk Assessment and Regulatory Compliance
In this episode of the MLSecOps Podcast, we have the honor of talking with David Rosenthal, Partner at VISCHER (Swiss Law, Tax & Compliance). David is also an author and former software developer, and he lectures at ETH Zürich and the University of Basel.
He has more than 25 years of experience in data and technology law and kindly joined the show to discuss a variety of AI regulation topics, including the EU Artificial Intelligence Act, generative AI risk assessment, and the challenges organizations face in complying with upcoming AI regulations.
Next Episode

Generative AI Prompt Hacking and Its Impact on AI Security & Safety
Welcome to Season 3 of the MLSecOps Podcast, brought to you by Protect AI!
In this episode, MLSecOps Community Manager Charlie McCarthy speaks with Sander Schulhoff, co-founder and CEO of Learn Prompting. Sander discusses his background in AI research, focusing on the rise of prompt engineering and its critical role in generative AI. He also shares insights into prompt security, the creation of LearnPrompting.org, and its mission to democratize prompt engineering knowledge. The conversation further explores the intricacies of prompting techniques, "prompt hacking," and the impact of competitions like HackAPrompt on improving AI safety and security.
Transcript
Highlights:
Cybersecurity of Tomorrow: Exploring the Future of Security and Governance for AI Systems - Martin Stanley, CISSP
D Dehghanpisheh
So, you're currently assigned at NIST to work on the Trustworthy AI Project and a part of that is the AI Risk Management Framework or AI RMF, right? Can you talk about