
Lunchtime BABLing with Dr. Shea Brown

Babl AI, Jeffery Recker, Shea Brown

Presented by Babl AI, this podcast discusses all issues related to algorithmic bias, algorithmic auditing, algorithmic governance, and the ethics of artificial intelligence and autonomous systems.


Top 10 Lunchtime BABLing with Dr. Shea Brown Episodes

Goodpods has curated a list of the 10 best Lunchtime BABLing with Dr. Shea Brown episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to Lunchtime BABLing with Dr. Shea Brown for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite Lunchtime BABLing with Dr. Shea Brown episode by adding your comments to the episode page.

Interview with Abhi Sanka

01/27/25 • 33 min

πŸŽ™οΈ Lunchtime BABLing: Interview with Abhi Sanka πŸŽ™οΈ Join BABL AI CEO Dr. Shea Brown as he chats with Abhi Sanka, a dynamic leader in responsible AI and a graduate of BABL AI's inaugural Algorithm Auditor Certificate Program. In this episode, Abhi reflects on his unique journeyβ€”from studying the ethics of the Human Genome Project at Duke University to shaping science and technology policy for the U.S. government, to now helping drive innovation at Microsoft. Explore Abhi's insights on the parallels between the Human Genome Project and the current AI revolution, the challenges of governing agentic AI systems, and the importance of building trust through responsible design. They also discuss the evolving landscape of AI assurance and the critical need for collaboration between industry, policymakers, and civil society. πŸ“Œ Highlights: Abhi’s academic and professional path to responsible AI. The challenges of auditing agentic AI and aligning governance frameworks. The importance of community and collaboration in advancing responsible AI. Abhi’s goals for 2025 and his passion for staying connected to the wider AI ethics community. Don’t miss this thought-provoking conversation packed with wisdom for anyone passionate about AI governance, policy, and innovation! πŸ”— Abhi's Linkedin: https://www.linkedin.com/in/abhisanka/Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

AI Literacy Requirements of the EU AI Act

10/21/24 • 20 min

👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".

📚 Courses Mentioned:
1️⃣ AI Literacy Requirements Course: https://courses.babl.ai/p/ai-literacy-for-eu-ai-act-general-workforce
2️⃣ EU AI Act - Conformity Requirements for High-Risk AI Systems Course: https://courses.babl.ai/p/eu-ai-act-conformity-requirements-for-high-risk-ai-systems
3️⃣ EU AI Act - Quality Management System Certification: https://courses.babl.ai/p/eu-ai-act-quality-management-system-oversight-certification
4️⃣ BABL AI Course Catalog: https://babl.ai/courses/

🔗 Follow us for more: https://linktr.ee/babl.ai

In this episode of Lunchtime BABLing, CEO Dr. Shea Brown dives into the "AI Literacy Requirements of the EU AI Act," focusing on the upcoming compliance obligations set to take effect on February 2, 2025. Dr. Brown explains the significance of Article 4 and discusses what "AI literacy" means for companies that provide or deploy AI systems, offering practical insights into how organizations can meet these new regulatory requirements.

Throughout the episode, Dr. Brown covers:
- AI literacy obligations for providers and deployers under the EU AI Act.
- The importance of AI literacy in ensuring compliance.
- An overview of BABL AI's upcoming courses, including the AI Literacy Training for the general workforce, launching November 4.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

AI Literacy

03/17/25 • 20 min

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by Jeffery Recker and Bryan Ilg to discuss the growing importance of AI literacy: what it means, why it matters, and how individuals and businesses can stay ahead in an AI-driven world.

Topics covered:
- The evolution of AI education and BABL AI's new subscription model for training & certifications.
- Why AI auditing skills are becoming essential for professionals across industries.
- How AI governance roles will shape the future of business leadership.
- The impact of AI on workforce transition and how individuals can future-proof their careers.
- The EU AI Act's new AI literacy requirements and what they mean for organizations.

Want to level up your AI knowledge? Check out BABL AI's courses & certifications!

🚀 Subscribe to our courses: https://courses.babl.ai/p/the-algorithmic-bias-lab-membership

👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Explainability of AI

03/31/25 • 34 min

What does it really mean for AI to be explainable? Can we trust AI systems to tell us why they do what they do, and should the average person even care? In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by regular guests Jeffery Recker and Bryan Ilg to unpack the messy world of AI explainability, and why it matters more than you might think.

From recommender systems to large language models, we explore:
- The difference between explainability and interpretability
- Why even humans struggle to explain their decisions
- What should be considered a "good enough" explanation
- The importance of stakeholder context in defining "useful" explanations
- Why AI literacy and trust go hand-in-hand
- How concepts from cybersecurity, like zero trust, could inform responsible AI oversight

Plus, hear about the latest report from the Center for Security and Emerging Technology calling for stronger explainability standards, and what it means for AI developers, regulators, and everyday users.

Mentioned in this episode:
🔗 Link to BABL AI's article: https://babl.ai/report-finds-gaps-in-ai-explainability-testing-calls-for-stronger-evaluation-standards/
🔗 Link to the "Putting Explainable AI to the Test" paper: https://cset.georgetown.edu/publication/putting-explainable-ai-to-the-test-a-critical-look-at-ai-evaluation-approaches/?utm_source=ai-week-in-review.beehiiv.com&utm_medium=referral&utm_campaign=ai-week-in-review-3-8-25
🔗 Link to BABL AI's "The Algorithm Audit" paper: https://babl.ai/algorithm-auditing-framework/

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

A Conversation with Ezra Schwartz on UX Design

02/24/25 • 33 min

Join BABL AI CEO Dr. Shea Brown on Lunchtime BABLing as he sits down with UX Consultant Ezra Schwartz for an in-depth conversation about the evolving world of user experience, and how it intersects with responsible AI.

In this episode, you'll discover:
• Ezra's Journey: From being a student in our AI & Algorithm Auditor Certification Program to becoming a seasoned UX consultant specializing in age tech.
• Beyond UI Design: Ezra breaks down the true essence of UX, explaining how it's not just about pretty interfaces, but about creating intuitive, accessible, and human-centered experiences that build trust and drive user satisfaction.
• The Role of UX in AI: Learn how thoughtful UX design is essential in managing AI risks, facilitating cross-department collaboration, and ensuring that digital products truly serve their users.
• Age Tech Insights: Explore how innovative solutions, from fall detection systems to digital caregiving tools, are reshaping life for our aging population, and the importance of balancing technology with privacy and ethical considerations.

If you're passionate about design, responsible AI, or just curious about the human side of technology, this episode is a must-listen.

👉 Connect with Ezra Schwartz:
Website: https://www.artandtech.com
LinkedIn: https://www.linkedin.com/in/ezraschwartz
Responsible AgeTech Conference he is organizing: https://responsible-agetech.org

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

How will a Trump Presidency Impact AI Regulation

11/18/24 • 36 min

πŸŽ™οΈ Lunchtime BABLing Podcast: What Will a Trump Presidency Mean for AI Regulations? In this thought-provoking episode, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg to explore the potential impact of a Trump presidency on the landscape of AI regulation. πŸš¨πŸ€– Key topics include: Federal deregulation and the push for state-level AI governance. The potential repeal of Biden's executive order on AI. Implications for organizations navigating a fragmented compliance framework. The role of global AI policies, such as the EU AI Act, in shaping U.S. corporate strategies. How deregulation might affect innovation, litigation, and risk management in AI development. This is NOT a political podcastβ€”we focus solely on the implications for AI governance and the tech landscape in the U.S. and beyond. Whether you're an industry professional, policymaker, or tech enthusiast, this episode offers essential insights into the evolving world of AI regulation.Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Ensuring LLM Safety

04/07/25 • 27 min

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown dives deep into one of the most pressing questions in AI governance today: how do we ensure the safety of Large Language Models (LLMs)? With new regulations like the EU AI Act, Colorado's AI law, and emerging state-level requirements in places like California and New York, organizations developing or deploying LLM-powered systems face increasing pressure to evaluate risk, ensure compliance, and document everything.

🎯 What you'll learn:
- Why evaluations are essential for mitigating risk and supporting compliance
- How to adopt a socio-technical mindset and think in terms of parameter spaces
- What auditors (like BABL AI) look for when assessing LLM-powered systems
- A practical, first-principles approach to building and documenting LLM test suites
- How to connect risk assessments to specific LLM behaviors and evaluations
- The importance of contextualizing evaluations to your use case, not just relying on generic benchmarks

Shea also introduces BABL AI's CIDA framework (Context, Input, Decision, Action) and shows how it forms the foundation for meaningful risk analysis and test coverage. Whether you're an AI developer, auditor, policymaker, or just trying to keep up with fast-moving AI regulations, this episode is packed with insights you can use right now.

📌 Don't wait for a perfect standard to tell you what to do; learn how to build a solid, use-case-driven evaluation strategy today.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

Interview with Mahesh Chandra Mukkamala from Quantpi

02/17/25 • 27 min

🇩🇪 People can join Quantpi's "RAI in Action" event series kicking off in Germany in March: 👉 https://www.quantpi.com/resources/events
🇺🇸 U.S.-based folks can join Quantpi's GTC session on March 20th called "A scalable approach toward trustworthy AI": 👉 https://www.nvidia.com/gtc/session-catalog/?ncid=so-link-241456&linkId=100000328230011&tab.catalogallsessionstab=16566177511100015Kus&search=antoine#/session/1726160038299001jn0f

👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".
📚 Sign up for our courses today: https://babl.ai/courses/
🔗 Follow us for more: https://linktr.ee/babl.ai

🎙️ Lunchtime BABLing: An Interview with Mahesh Chandra Mukkamala from Quantpi 🎙️

In this episode of Lunchtime BABLing, host Dr. Shea Brown, CEO of BABL AI, sits down with Mahesh Chandra Mukkamala, a data scientist from Quantpi, to discuss the complexities of black box AI testing, AI risk assessment, and compliance in the age of evolving AI regulations.

💡 Topics Covered:
✔️ What is black box AI testing, and why is it crucial?
✔️ How Quantpi ensures model robustness and fairness across different AI systems
✔️ The role of AI risk assessment in EU AI Act compliance and enterprise AI governance
✔️ Challenges businesses face in AI model evaluation and best practices for testing
✔️ Career insights for aspiring AI governance professionals

With increasing regulatory pressure from laws like the EU AI Act, companies need to test their AI models rigorously. Whether you're an AI professional, compliance officer, or just curious about AI governance, this conversation is packed with valuable insights on ensuring AI systems are trustworthy, fair, and reliable.

🔔 Don't forget to like, subscribe, and hit the notification bell to stay updated on the latest AI governance insights from BABL AI!
📢 Listen to the podcast on all major podcast streaming platforms
📩 Connect with Mahesh on LinkedIn: https://www.linkedin.com/in/maheshchandra/
📌 Follow Quantpi for more AI insights: https://www.quantpi.com

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

How NIST Might Help Deloitte With the FTC

09/23/24 • 32 min

Welcome back to another insightful episode of Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker dive into a fascinating discussion on how the NIST AI Risk Management Framework could play a crucial role in guiding companies like Deloitte through Federal Trade Commission (FTC) investigations. Shea and Jeffery focus on a recent complaint filed against Deloitte regarding its automated decision system for Medicaid eligibility in Texas, and how adherence to established frameworks could have mitigated the issues at hand.

📝 Topics discussed:
- Deloitte's Medicaid eligibility system in Texas
- The role of the FTC and the NIST AI Risk Management Framework
- How AI governance can safeguard against unintentional harm
- Why proactive risk management is key, even for non-AI systems
- What companies can learn from this case to improve compliance and oversight

Tune in now and stay ahead of the curve! 🔊✨

👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

2024 an AI Year in Review

12/30/24 • 40 min

πŸŽ™οΈ Lunchtime BABLing: 2024 - An AI Year in Review πŸŽ™οΈ Join Shea Brown (CEO, BABL AI), Jeffery Recker (COO, BABL AI), and Bryan Ilg (CSO, BABL AI) as they reflect on an extraordinary year in AI! In this final episode of the year, the trio dives into: 🌟 The rapid growth of Responsible AI and algorithmic auditing in 2024. πŸ“ˆ How large language models are redefining audits and operational workflows. 🌍 The global wave of AI regulations, including the EU AI Act, Colorado AI Act, and emerging laws worldwide. πŸ“š The rise of AI literacy and the "race for competency" in businesses and society. πŸ€– Exciting (and risky!) trends like AI agents and their potential for transformation in 2025. Jeffery also shares an exciting update about his free online course, Introduction to Responsible AI, available until January 13th, 2025. Don’t miss this opportunity to earn a certification badge and join a live Q&A session! πŸŽ‰ Looking Ahead to 2025 What’s next for AI governance, standards like ISO 42001, and the evolving role of education in shaping the future of AI? The team shares predictions, insights, and hopes for the year ahead. πŸ“Œ Key Takeaways: AI is maturing rapidly, with businesses adopting governance frameworks and grappling with new regulations. Education and competency-building are essential to navigating the changing AI landscape. The global regulatory response is reshaping how AI is developed, deployed, and audited. Link to Raymon Sun's Techie Ray Global AI Regulation Tracker: https://www.techieray.com/GlobalAIRegulationTracker πŸ’‘ Don’t miss this thought-provoking recap of 2024 and the exciting roadmap for 2025!Check out the babl.ai website for more stuff on AI Governance and Responsible AI!


FAQ

How many episodes does Lunchtime BABLing with Dr. Shea Brown have?

Lunchtime BABLing with Dr. Shea Brown currently has 60 episodes available.

What topics does Lunchtime BABLing with Dr. Shea Brown cover?

The podcast is about Consulting, Management, Research, Data, NLP, Podcasts, Big Data, Technology, Education, Data Analytics, Philosophy, Cyber Security, Business, Artificial Intelligence, Privacy, Data Science, Data Privacy, Machine Learning, Python and Ethics.

What is the most popular episode on Lunchtime BABLing with Dr. Shea Brown?

The episode title 'NIST AI Risk Management Framework & Generative AI Profile' is the most popular.

What is the average episode length on Lunchtime BABLing with Dr. Shea Brown?

The average episode length on Lunchtime BABLing with Dr. Shea Brown is 37 minutes.

How often are episodes of Lunchtime BABLing with Dr. Shea Brown released?

Episodes of Lunchtime BABLing with Dr. Shea Brown are typically released every 10 days.

When was the first episode of Lunchtime BABLing with Dr. Shea Brown?

The first episode of Lunchtime BABLing with Dr. Shea Brown was released on Sep 10, 2022.

