
Pondering AI
Kimberly Nevala, Strategic Advisor - SAS
Top 10 Pondering AI Episodes
Goodpods has curated a list of the 10 best Pondering AI episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to Pondering AI for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite Pondering AI episode by adding your comments to the episode page.

Tech, Prosperity and Power with Simon Johnson
Pondering AI
03/20/24 • 38 min
Simon Johnson takes on techno-optimism, the link between technology and human well-being, the law of intended consequences, the modern union remit and political will.
In this sobering tour through time, Simon proves that widespread human flourishing is not intrinsic to tech innovation. He challenges the ‘productivity bandwagon’ (an economic maxim so pervasive it did not have a name) and shows that productivity and market polarization often go hand-in-hand. Simon also views big tech’s persuasive powers through the lens of OpenAI’s board debacle.
Kimberly and Simon discuss the heyday of shared worker value, the commercial logic of automation and augmenting human work with technology. Simon highlights shareholder capitalism’s current view of labor as a cost rather than people as a resource. He underscores the need for active attention to task creation, strong labor movements and participatory political action (shouting and all). Simon believes that shared prosperity is possible. Make no mistake, however, achieving it requires wisdom and hard work.
Simon Johnson is the Head of the Economics and Management group at MIT’s Sloan School of Management. Simon co-authored the stellar book “Power and Progress: Our 1,000 Year Struggle Over Technology and Prosperity” with Daron Acemoglu.
A transcript of this episode is here.

Practical Ethics with Reid Blackman
Pondering AI
04/05/23 • 46 min
Reid Blackman confronts whack-a-mole approaches to AI ethics, ethical ‘do goodery,’ squishy values, moral nuance, advocacy vs. activism and overfitting for AI.
Reid distinguishes AI for ‘not bad’ from AI ‘for good’ and corporate social responsibility. He describes how the language of risk creates a bridge between ethics and business. Debunking the notion of ethicists as moral priests, Reid provides practical steps for making ethics palatable and effective.
Reid and Kimberly discuss developing organizational muscle to reckon with moral nuance. Reid emphasizes that disagreement and uncertainty aren’t unique to ethics. Nor do squishy value statements make ethics squishy. Reid identifies a cocktail of motivations driving organizations to engage, or not, in AI ethics. We also discuss the tendency for self-regulation to give way to market forces and the government’s role in ensuring access to basic human goods. Cautioning against overfitting an ethics program to AI alone, Reid illustrates the benefits of distinguishing digital ethics from ethics writ large. Last but not least, Reid considers how organizations may stitch together responses to the evolving regulatory patchwork.
Reid Blackman is the author of “Ethical Machines” and the CEO of Virtue Consultants.
A transcript of this episode is here.

11/20/24 • 48 min
Kathleen Walch and Ron Schmelzer analyze AI patterns and factors hindering adoption, why AI is never ‘set it and forget it’, and the criticality of critical thinking.
The dynamic duo behind Cognilytica (now PMI) join Kimberly to discuss: the seven (7) patterns of AI; fears and concerns stymying AI adoption; the tension between top-down and bottom-up AI adoption; the AI value proposition; what differentiates CPMAI from good old-fashioned project management; AI’s Red Queen moment; critical thinking as a uniquely human skill; the DIKUW pyramid and limits of machine understanding; why you can’t sit AI out.
Kathleen Walch and Ron Schmelzer are the co-founders of Cognilytica, an AI research and analyst firm which was acquired by PMI (Project Management Institute) in September 2024. Their work, which includes the CPMAI project management methodology and the top-rated AI Today podcast, focuses on enabling AI adoption and skill development.
Additional Resources:
CPMAI certification: https://courses.cognilytica.com/
AI Today podcast: https://www.cognilytica.com/aitoday/
A transcript of this episode is here.

RAGging on Graphs with Philip Rathle
Pondering AI
08/28/24 • 49 min
Philip Rathle traverses from knowledge graphs to LLMs and illustrates how loading the dice with GraphRAG enhances deterministic reasoning, explainability and agency.
Philip explains why knowledge graphs are a natural fit for capturing data about real-world systems. Starting with Kevin Bacon, he identifies many ‘graphy’ problems confronting us today. Philip then describes how interconnected systems benefit from the dynamism and data network effects afforded by knowledge graphs.
Next, Philip provides a primer on how Retrieval Augmented Generation (RAG) loads the dice for large language models (LLMs). He also differentiates between vector- and graph-based RAG. Along the way, we discuss the nature and locus of reasoning (or lack thereof) in LLM systems. Philip articulates the benefits of GraphRAG including deterministic reasoning, fine-grained access control and explainability. He also ruminates on graphs as a bridge to human agency as graphs can be reasoned on by both humans and machines. Lastly, Philip shares what is happening now and next in GraphRAG applications and beyond.
Philip Rathle is the Chief Technology Officer (CTO) at Neo4j. Philip was a key contributor to the development of the GQL standard and recently authored “The GraphRAG Manifesto: Adding Knowledge to GenAI” (neo4j.com), a go-to resource for all things GraphRAG.
A transcript of this episode is here.

The Nature of Learning with Helen Beetham
Pondering AI
02/19/25 • 45 min
Helen Beetham isn’t waiting for an AI upgrade as she considers what higher education is for, why learning is ostensibly ripe for AI, and how to diversify our course.
Helen and Kimberly discuss the purpose of higher education; the current two-tribe moment; systemic effects of AI; rethinking learning; GenAI affordances; the expertise paradox; productive developmental challenges; converging on an educational norm; teachers as data laborers; the data-driven personalization myth; US edtech and instrumental pedagogy; the fantasy of AI’s teacherly behavior; students as actors in their learning; critical digital literacy; a story of future education; AI ready graduates; pre-automation and AI adoption; diversity of expression and knowledge; two-tiered educational systems; and the rich heritage of universities.
Helen Beetham is an educator, researcher and consultant who advises universities and international bodies worldwide on their digital education strategies. Helen is also a prolific author whose publications include “Rethinking Pedagogy for a Digital Age”. Her Substack, Imperfect Offerings, is recommended by the Guardian/Observer for its wise and thoughtful critique of generative AI.
Additional Resources:
Imperfect Offerings - https://helenbeetham.substack.com/
Audrey Watters - https://audreywatters.com/
Kathryn (Katie) Conrad - https://www.linkedin.com/in/kathryn-katie-conrad-1b0749b/
Anna Mills - https://www.linkedin.com/in/anna-mills-oer/
Dr. Maya Indira Ganesh - https://www.linkedin.com/in/dr-des-maya-indira-ganesh/
Tech(nically) Politics - https://www.technicallypolitics.org/
LOG OFF - https://logoffmovement.org/
Rest of World - https://www.restofworld.org/
Derechos Digitales - https://www.derechosdigitales.org/
A transcript of this episode is here.

AI Myths and Mythos with Eryk Salvaggio
Pondering AI
01/08/25 • 58 min
Eryk Salvaggio articulates myths animating AI design, illustrates the nature of creativity and generated media, and artfully reframes the discourse on GenAI and art.
Eryk joined Kimberly to discuss myths and metaphors in GenAI design; the illusion of control; if AI saves time and what for; not relying on futuristic AI to solve problems; the fallacy of scale; the dehumanizing narrative of human equivalence; positive biases toward AI; why asking ‘is the machine creative’ misses the mark; creative expression and meaning making; what AI generated art represents; distinguishing archives from datasets; curation as an act of care; representation and context in generated media; the Orwellian view of mass surveillance as anonymity; complicity and critique of GenAI tools; abstraction and noise; and what we aren’t doing when we use GenAI.
Eryk Salvaggio is a new media artist, Visiting Professor in Humanities, Computing and Design at the Rochester Institute of Technology, and an Emerging Technology Research Advisor at the Siegel Family Endowment. Eryk is also a researcher on the AI Pedagogies Project at Harvard University’s metaLab and lecturer on Responsible AI at Elisava Barcelona School of Design and Engineering.
Additional Resources:
Cybernetic Forests: https://mail.cyberneticforests.com/
The Age of Noise: https://mail.cyberneticforests.com/the-age-of-noise/
Challenging the Myths of Generative AI: https://www.techpolicy.press/challenging-the-myths-of-generative-ai/
A transcript of this episode is here.

Technical Morality with John Danaher
Pondering AI
09/25/24 • 46 min
John Danaher assesses how AI may reshape ethical and social norms, minds the anticipatory gap in regulation, and applies the MVPP to decide against digitizing himself.
John parlayed an interest in science fiction into researching legal philosophy, emerging technology, and society. Flipping the script on ethical assessment, John identifies six (6) mechanisms by which technology may reshape ethical principles and social norms. John further illustrates the impact AI can have on decision sets and relationships. We then discuss the dilemma articulated by the aptly named anticipatory gap, in which the effort required to regulate nascent tech is proportional to our understanding of its ultimate effects.
Finally, we turn our attention to the rapid rise of digital duplicates. John provides examples and proposes a Minimally Viable Permissibility Principle (MVPP) for evaluating the use of digital duplicates. Emphasizing the difficulty of mitigating the risks posed after a digital duplicate is let loose in the wild, John declines the opportunity to digitally duplicate himself.
John Danaher is a Senior Lecturer in Ethics at the NUI Galway School of Law. A prolific scholar, he is the author of “Automation and Utopia: Human Flourishing in a World Without Work” (Harvard University Press, 2019). Papers referenced in this episode include The Ethics of Personalized Digital Duplicates: A Minimally Viable Permissibility Principle and How Technology Alters Morality and Why It Matters.
A transcript of this episode is here.

Ethics for Engineers with Steven Kelts
Pondering AI
02/05/25 • 46 min
Steven Kelts engages engineers in ethical choice, enlivens training with role-playing, exposes organizational hazards and separates moral qualms from a duty to care.
Steven and Kimberly discuss Ashley Casovan’s inspiring query; the affirmation allusion; students as stochastic parrots; when ethical sophistication backfires; limits of ethics review boards; engineers and developers as core to ethical design; assuming people are good; 4 steps of ethical decision making; inadvertent hotdog theft; organizational disincentives; simulation and role-playing in ethical training; avoiding cognitive overload; reorienting ethical responsibility; guns, ethical qualms and care; and empowering engineers to make ethical choices.
Steven Kelts is a lecturer in Princeton’s University Center for Human Values (UCHV) and affiliated faculty in the Center for Information Technology Policy (CITP). Steve is also an ethics advisor to the Responsible AI Institute and Director of All Tech is Human’s Responsible University Network.
Additional Resources:
- Princeton Agile Ethics Program: https://agile-ethics.princeton.edu
- CITP Talk 11/19/24: Agile Ethics Theory and Evidence
- Oktar, Lombrozo et al.: Changing Moral Judgements
- 4-Stage Theory of Ethical Decision Making: An Introduction
- Enabling Engineers through “Moral Imagination” (Google)
A transcript of this episode is here.

Artificial Empathy with Ben Bland
Pondering AI
09/11/24 • 46 min
Ben Bland expressively explores emotive AI’s shaky scientific underpinnings, the gap between reality and perception, popular applications, and critical apprehensions.
Ben exposes the scientific contention surrounding human emotion. He talks terms (emotive? empathic? not telepathic!) and outlines a spectrum of emotive applications. We discuss the powerful, often subtle, and sometimes insidious ways emotion can be leveraged. Ben explains the negative effects of perpetual positivity and why drawing clear red lines around the tech is difficult.
He also addresses the qualitative sea change brought about by large language models (LLMs), implicit vs explicit design and commercial objectives. Noting that the social and psychological impacts of emotive AI systems have been poorly explored, he muses about the potential to actively evolve your machine’s emotional capability.
Ben confronts the challenges of defining standards when the language is tricky, the science is shaky, and applications are proliferating. Lastly, Ben jazzes up empathy as a human superpower. While optimistic about empathic AI’s potential, he counsels proceeding with caution.
Ben Bland is an independent consultant in ethical innovation. An active community contributor, Ben is the Chair of the IEEE P7014 Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems and Vice-Chair of IEEE P7014.1 Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems.
A transcript of this episode is here.

Humanity at Scale with Kate O’Neill
Pondering AI
12/15/21 • 44 min
Kate O’Neill is an executive strategist, the Founder and CEO of KO Insights, and an author dedicated to improving the human experience at scale.
In this paradigm-shifting discussion, Kate traces her roots from a childhood thinking heady thoughts about language and meaning to her current mission as ‘The Tech Humanist’. Following this thread, Kate illustrates why meaning is the core of what makes us human. She urges us to champion meaningful innovation and reject the notion that we are victims of a predetermined future.
Challenging simplistic analysis, Kate advocates for applying multiple lenses to every situation: the individual and the collective, uses and abuses, insight and foresight, wild success and abject failure. Kimberly and Kate acknowledge but emphatically disavow current norms that reject nuanced discourse or conflate it with ‘both-side-ism’. Emphasizing that everything is connected, Kate shows how to close the gap between human-centricity and business goals. She provides a concrete example of how innovation and impact depend on identifying what is going to matter, not just what matters now. Ending on a strategically optimistic note, Kate urges us to anchor on human values and relationships, habituate to change and actively architect our best human experience – now and in the future.
A transcript of this episode is here.
Thank you for joining us for Season 2 of Pondering AI. Join us next season as we ponder the ways in which AI continues to elevate and challenge our humanity. Subscribe to Pondering AI now so you don’t miss it.
FAQ
How many episodes does Pondering AI have?
Pondering AI currently has 69 episodes available.
What topics does Pondering AI cover?
The podcast is about Diversity and Inclusion, DEI, Podcasts, Technology, Business, Artificial Intelligence and Ethics.
What is the most popular episode on Pondering AI?
The most popular episode is 'Chief Data Concerns with Heidi Lanford'.
What is the average episode length on Pondering AI?
The average episode length on Pondering AI is 37 minutes.
How often are episodes of Pondering AI released?
Episodes of Pondering AI are typically released every 14 days.
When was the first episode of Pondering AI?
The first episode of Pondering AI was released on Apr 14, 2021.