S2E26: "Building Ethical Machines" with Reid Blackman, PhD (Virtue Consultants)
The Shifting Privacy Left Podcast • 09/05/23 • 51 min
This week, I welcome philosopher, author, & AI ethics expert, Reid Blackman, Ph.D., to discuss Ethical AI. Reid authored the book, "Ethical Machines," and is the CEO & Founder of Virtue Consultants, a digital ethical risk consultancy. His extensive background in philosophy & ethics, coupled with his engagement with orgs like AWS, U.S. Bank, the FBI, & NASA, offers a unique perspective on the challenges & misconceptions surrounding AI ethics.
In our conversation, we discuss 'passive privacy' & 'active privacy' and the need for individuals to exercise control over their data. Reid explains how the quest for data to train ML/AI models can lead to privacy violations, particularly at Big Tech companies. We touch on many concepts in the AI space, including: automated decision making vs. keeping "humans in the loop"; combating AI ethics fatigue; and advice for technical staff involved in AI product development. Reid stresses the importance of protecting privacy, educating users, & deciding whether to utilize external APIs or on-prem servers.
We end by highlighting his HBR article, "Generative AI-xiety," and discussing the 4 primary areas of ethical concern for LLMs:
- the hallucination problem;
- the deliberation problem;
- the sleazy salesperson problem; &
- the problem of shared responsibility
Topics Covered:
- What motivated Reid to write his book, "Ethical Machines"
- The key differences between 'active privacy' & 'passive privacy'
- Why engineering incentives to collect more data to train AI models, especially in Big Tech, pose challenges to data minimization
- The importance of aligning privacy agendas with business priorities
- Why what companies infer about people can itself be a privacy violation; what engineers should know about 'input privacy' when training AI models; and how that affects the output of inferred data
- Automated decision making: when it's necessary to have a 'human in the loop'
- Approaches for mitigating 'AI ethics fatigue'
- The need to back up a company's stated 'values' with actions; and why there should always be 3-7 guardrails put in place for each stated value
- The differences between 'Responsible AI' & 'Ethical AI,' and why companies seem reluctant to talk about ethics
- Reid's article, "Generative AI-xiety," & the 4 main risks related to generative AI
- Reid's advice for technical staff building products & services that leverage LLMs
Resources Mentioned:
- Read the book, "Ethical Machines"
- Reid's podcast, Ethical Machines
Guest Info:
- Follow Reid on LinkedIn
Shifting Privacy Left Media
Where privacy engineers gather, share, & learn
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Copyright © 2022 - 2024 Principled LLC. All rights reserved.
Transcript
LLMs don't deliberate. They don't weigh pros and cons. They don't give you advice based on reasons. What they're doing, in all cases, is predicting the next set of words that is maximally coherent with the words that came before it. It's a mathematical thing, right? So, when it gives you that explanation, it's not actually telling you the reason that it came up with the output that it gave you previously. It's more like a post facto explanation.
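To make Reid's point concrete, here is a minimal toy sketch in Python (not any real LLM; the vocabulary and probabilities are invented for illustration) showing that generation is just repeated next-word prediction from a probability table, with no deliberation or reasons involved:

# A toy "model": for each previous word, a probability distribution over
# possible next words. A real LLM learns such distributions over subword
# tokens using billions of parameters; the numbers here are made up.
NEXT_WORD_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"model": 0.7, "output": 0.3},
    "a": {"model": 1.0},
    "model": {"predicts": 1.0},
    "output": {"<end>": 1.0},
    "predicts": {"words": 1.0},
    "words": {"<end>": 1.0},
}

def generate(max_words=10):
    """Repeatedly pick the most probable continuation: prediction, not reasoning."""
    word, text = "<start>", []
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS[word]
        word = max(dist, key=dist.get)  # choose purely by probability mass
        if word == "<end>":
            break
        text.append(word)
    return " ".join(text)

print(generate())  # -> "the model predicts words"

The generated sentence is coherent, but at no point did the program weigh reasons; asking it "why?" could only ever yield more predicted words, which is the post facto explanation Reid describes.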