Pondering AI

Kimberly Nevala, Strategic Advisor - SAS

How is the use of artificial intelligence (AI) shaping our human experience? Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse. All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.

Top 10 Pondering AI Episodes

Goodpods has curated a list of the 10 best Pondering AI episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to Pondering AI for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite Pondering AI episode by adding your comments to the episode page.

Pondering AI - Tech, Prosperity and Power with Simon Johnson

03/20/24 • 38 min

Simon Johnson takes on techno-optimism, the link between technology and human well-being, the law of intended consequences, the modern union remit and political will.

In this sobering tour through time, Simon proves that widespread human flourishing is not intrinsic to tech innovation. He challenges the ‘productivity bandwagon’ (an economic maxim so pervasive it did not have a name) and shows that productivity and market polarization often go hand-in-hand. Simon also views big tech’s persuasive powers through the lens of OpenAI’s board debacle.

Kimberly and Simon discuss the heyday of shared worker value, the commercial logic of automation and augmenting human work with technology. Simon highlights stakeholder capitalism’s current view of labor as a cost rather than people as a resource. He underscores the need for active attention to task creation, strong labor movements and participatory political action (shouting and all). Simon believes that shared prosperity is possible. Make no mistake, however, achieving it requires wisdom and hard work.

Simon Johnson is the Head of the Economics and Management group at MIT’s Sloan School of Management. Simon co-authored the stellar book “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity” with Daron Acemoglu.

A transcript of this episode is here.

Pondering AI - Practical Ethics with Reid Blackman

04/05/23 • 46 min

Reid Blackman confronts whack-a-mole approaches to AI ethics, ethical ‘do goodery,’ squishy values, moral nuance, advocacy vs. activism and overfitting for AI.

Reid distinguishes AI for ‘not bad’ from AI ‘for good’ and corporate social responsibility. He describes how the language of risk creates a bridge between ethics and business. Debunking the notion of ethicists as moral priests, Reid provides practical steps for making ethics palatable and effective.

Reid and Kimberly discuss developing organizational muscle to reckon with moral nuance. Reid emphasizes that disagreement and uncertainty aren’t unique to ethics. Nor do squishy value statements make ethics squishy. Reid identifies a cocktail of motivations driving organizations to engage, or not, in AI ethics. We also discuss the tendency for self-regulation to cede to market forces and the government’s role in ensuring access to basic human goods. Cautioning against overfitting an ethics program to AI alone, Reid illustrates the benefits of distinguishing digital ethics from ethics writ large. Last but not least, Reid considers how organizations may stitch together responses to the evolving regulatory patchwork.

Reid Blackman is the author of “Ethical Machines” and the CEO of Virtue Consultants.

A transcript of this episode is here.


Kathleen Walch and Ron Schmelzer analyze AI patterns and factors hindering adoption, why AI is never ‘set it and forget it’, and the criticality of critical thinking.

The dynamic duo behind Cognilytica (now PMI) join Kimberly to discuss: the seven (7) patterns of AI; fears and concerns stymying AI adoption; the tension between top-down and bottom-up AI adoption; the AI value proposition; what differentiates CPMAI from good old-fashioned project management; AI’s Red Queen moment; critical thinking as a uniquely human skill; the DKIUW pyramid and limits of machine understanding; why you can’t sit AI out.

A transcript of this episode is here.

Kathleen Walch and Ron Schmelzer are the co-founders of Cognilytica, an AI research and analyst firm which was acquired by PMI (Project Management Institute) in September 2024. Their work, which includes the CPMAI project management methodology and the top-rated AI Today podcast, focuses on enabling AI adoption and skill development.

Additional Resources:

CPMAI certification: https://courses.cognilytica.com/

AI Today podcast: https://www.cognilytica.com/aitoday/

Pondering AI - RAGging on Graphs with Philip Rathle

08/28/24 • 49 min

Philip Rathle traverses from knowledge graphs to LLMs and illustrates how loading the dice with GraphRAG enhances deterministic reasoning, explainability and agency.

Philip explains why knowledge graphs are a natural fit for capturing data about real-world systems. Starting with Kevin Bacon, he identifies many ‘graphy’ problems confronting us today. Philip then describes how interconnected systems benefit from the dynamism and data network effects afforded by knowledge graphs.

Next, Philip provides a primer on how Retrieval Augmented Generation (RAG) loads the dice for large language models (LLMs). He also differentiates between vector- and graph-based RAG. Along the way, we discuss the nature and locus of reasoning (or lack thereof) in LLM systems. Philip articulates the benefits of GraphRAG including deterministic reasoning, fine-grained access control and explainability. He also ruminates on graphs as a bridge to human agency as graphs can be reasoned on by both humans and machines. Lastly, Philip shares what is happening now and next in GraphRAG applications and beyond.

Philip Rathle is the Chief Technology Officer (CTO) at Neo4j. Philip was a key contributor to the development of the GQL standard and recently authored The GraphRAG Manifesto: Adding Knowledge to GenAI (neo4j.com), a go-to resource for all things GraphRAG.

A transcript of this episode is here.

Pondering AI - Technical Morality with John Danaher

09/25/24 • 46 min

John Danaher assesses how AI may reshape ethical and social norms, minds the anticipatory gap in regulation, and applies the MVPP to decide against digitizing himself.

John parlayed an interest in science fiction into researching legal philosophy, emerging technology, and society. Flipping the script on ethical assessment, John identifies six (6) mechanisms by which technology may reshape ethical principles and social norms. John further illustrates the impact AI can have on decision sets and relationships. We then discuss the dilemma articulated by the aptly named anticipatory gap, in which the effort required to regulate nascent tech is proportional to our understanding of its ultimate effects.

Finally, we turn our attention to the rapid rise of digital duplicates. John provides examples and proposes a Minimally Viable Permissibility Principle (MVPP) for evaluating the use of digital duplicates. Emphasizing the difficulty of mitigating the risks posed after a digital duplicate is let loose in the wild, John declines the opportunity to digitally duplicate himself.

John Danaher is a Sr. Lecturer in Ethics at the NUI Galway School of Law. A prolific scholar, he is the author of Automation and Utopia: Human Flourishing in a World Without Work (Harvard University Press, 2019). Papers referenced in this episode include The Ethics of Personalized Digital Duplicates: A Minimal Viability Principle and How Technology Alters Morality and Why It Matters.

A transcript of this episode is here.

Pondering AI - Artificial Empathy with Ben Bland

09/11/24 • 46 min

Ben Bland expressively explores emotive AI’s shaky scientific underpinnings, the gap between reality and perception, popular applications, and critical apprehensions.

Ben exposes the scientific contention surrounding human emotion. He talks terms (emotive? empathic? not telepathic!) and outlines a spectrum of emotive applications. We discuss the powerful, often subtle, and sometimes insidious ways emotion can be leveraged. Ben explains the negative effects of perpetual positivity and why drawing clear red lines around the tech is difficult.

He also addresses the qualitative sea change brought about by large language models (LLMs), implicit vs explicit design and commercial objectives. Noting that the social and psychological impacts of emotive AI systems have been poorly explored, he muses about the potential to actively evolve your machine’s emotional capability.

Ben confronts the challenges of defining standards when the language is tricky, the science is shaky, and applications are proliferating. Lastly, Ben jazzes up empathy as a human superpower. While optimistic about empathic AI’s potential, he counsels proceeding with caution.

Ben Bland is an independent consultant in ethical innovation. An active community contributor, Ben is the Chair of the IEEE P7014 Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems and Vice-Chair of IEEE P7014.1 Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems.

A transcript of this episode is here.

Pondering AI - Power and Peril of AI with Michael Kanaan

04/14/21 • 45 min

Michael Kanaan is the author of the best-selling book “T-Minus AI” and the former chairperson of AI for the U.S. Air Force, Headquarters Pentagon.

In this far-reaching discussion, Michael provides perspectives on the peril of anthropomorphizing AI and how differentiating between intelligence and consciousness creates clarity. He shares his own reckoning with humility while writing “T-Minus AI”, popular misconceptions about AI, where we can go awry in addressing – or not addressing – AI’s inherent dualities, pros and cons of the technology’s ready availability, and why unflinching due diligence is critical to deploying AI safely, ethically, and responsibly.

After a brief diversion into the perils of technology that is too responsive to our whims (ahem, social media), Kimberly and Michael discuss the importance of bridging the digital divide so everyone can contribute to and benefit from AI. Michael also makes the case for how AI may have the greatest impact on subject matter experts and decision makers and why explainability is overrated. And, finally, why AI’s future will be determined not by data scientists but by artists, sociologists, teachers and more.

A transcript of this episode can be found here.

Our next episode will feature Tess Posner: an educator, social entrepreneur, and CEO of AI-4-All. Subscribe to Pondering AI now so you don’t miss it.

Pondering AI - AI Principles in Practice with Ansgar Koene

07/07/21 • 39 min

Dr. Ansgar Koene is the Global AI Ethics and Regulatory Leader at Ernst & Young (EY), a Sr. Research Fellow at the University of Nottingham and chair of the IEEE P7003 Standard for Algorithmic Bias Considerations working group.

In this visionary discussion, Ansgar traces his path from robotics and computational social science to the ethics of data sharing and AI. Drawing from his wide-ranging research, Ansgar illustrates the need for true stakeholder representation; what diversity looks like in practice; and why context, critical thinking and common sense are required in AI.

Describing some of the more subtle yet most impactful dilemmas in AI, Ansgar highlights the natural tension between developing foresight to avoid harms whilst reacting to harms that have already occurred. Ansgar and Kimberly discuss emerging regulations and the link between power and accountability in AI. Ansgar advocates for broad AI literacy but cautions against setting citizens and users up with unrealistic expectations. Lastly, Ansgar muses about the future and why the biggest challenges created by AI might not be obvious today.

A full transcript of this episode can be found here.

Thank you for joining us for Season 1 of Pondering AI. Join us next season as we ponder the ways in which AI continues to elevate and challenge our humanity. Subscribe to Pondering AI now so you don’t miss it.


Tess Posner is an educator, social entrepreneur, CEO of AI-4-All and an avid advocate for diversity, inclusion and equity in the tech economy.

In this inspiring and insightful discussion, Tess shares her mission to make technology and education accessible to all, inspiring work being done by rising student leaders in the AI-4-All Changemaker community, some eye-opening statistics on the state of diversity in AI, research on bias in today’s AI systems, and the importance of not letting cynicism rule the day.

Tess’s passion is infectious as she explains why AI literacy and education cultivates future leaders, not just future data scientists. Kimberly and Tess talk about the hard but necessary work of creating diverse, inclusive cultures and why the benefits go far beyond positive optics. They also discuss why viewing technology as a silver bullet is fraught and the importance of unlocking human potential. Finally, Tess identifies tangible actions individuals, organizations, and communities can take today to ensure everyone benefits from AI tomorrow.

A full transcript of this episode can be found here.

Our next episode features Renée Cummings: a criminologist, criminal psychologist and AI ethics evangelist who is passionate about keeping the human experience at the center of AI. Subscribe to Pondering AI now so you don’t miss it.

Pondering AI - Humanity at Scale with Kate O’Neill

12/15/21 • 44 min

Kate O’Neill is an executive strategist, the Founder and CEO of KO Insights, and an author dedicated to improving the human experience at scale.

In this paradigm-shifting discussion, Kate traces her roots from a childhood thinking heady thoughts about language and meaning to her current mission as ‘The Tech Humanist’. Following this thread, Kate illustrates why meaning is the core of what makes us human. She urges us to champion meaningful innovation and reject the notion that we are victims of a predetermined future.

Challenging simplistic analysis, Kate advocates for applying multiple lenses to every situation: the individual and the collective, uses and abuses, insight and foresight, wild success and abject failure. Kimberly and Kate acknowledge but emphatically disavow current norms that reject nuanced discourse or conflate it with ‘both-side-ism’. Emphasizing that everything is connected, Kate shows how to close the gap between human-centricity and business goals. She provides a concrete example of how innovation and impact depend on identifying what is going to matter, not just what matters now. Ending on a strategically optimistic note, Kate urges us to anchor on human values and relationships, habituate to change and actively architect our best human experience – now and in the future.

A transcript of this episode can be found here.

Thank you for joining us for Season 2 of Pondering AI. Join us next season as we ponder the ways in which AI continues to elevate and challenge our humanity. Subscribe to Pondering AI now so you don’t miss it.

FAQ

How many episodes does Pondering AI have?

Pondering AI currently has 62 episodes available.

What topics does Pondering AI cover?

The podcast is about Diversity and Inclusion, DEI, Podcasts, Technology, Business, Artificial Intelligence and Ethics.

What is the most popular episode on Pondering AI?

The episode title 'Chief Data Concerns with Heidi Lanford' is the most popular.

What is the average episode length on Pondering AI?

The average episode length on Pondering AI is 36 minutes.

How often are episodes of Pondering AI released?

Episodes of Pondering AI are typically released every 14 days.

When was the first episode of Pondering AI?

The first episode of Pondering AI was released on Apr 14, 2021.
