
#13 - Ilana Golbin: Responsible AI and its application in healthcare

08/31/23 • 34 min

Notable Perspectives

In this episode, Ilana sits down for an in-depth conversation with Dr. Aaron Neinstein, chief medical officer at Notable. Among other things, the two discuss:

What organizations look and function like when they are taking the right approach to responsible AI

How responsible AI is similar to the ‘rules of the road’ that keep us organized, safe, and able to get to where we want to go quickly when driving

Where healthcare organizations typically start with responsible AI

And much more.

---

Ilana is Director and Responsible AI Lead at PwC US, where she serves as one of the firm's leads for artificial intelligence. She specializes in applying machine learning and simulation modeling to client needs across sectors, including the strategic deployment of new services, operational efficiency, geospatial analytics, explainability, and bias.

Ilana is a Certified Ethical Emerging Technologist, was named one of the 100 "Brilliant Women in AI Ethics" in 2020, and was recently recognized by Forbes as one of 15 leaders advancing ethical AI. Since 2018, she has led PwC's global efforts to develop cutting-edge approaches to building and deploying responsible AI.

---

Outline

Here are the timestamps for this episode.

(00:00) - Intro

(02:00) - Defining Responsible AI

(05:25) - Who typically ‘owns’ responsible AI within an organization?

(08:10) - Why responsible AI should fit within existing governance capabilities

(10:42) - The differences in responsible AI for builders vs. implementers

(13:33) - Who is doing responsible AI the right way? What are examples?

(16:30) - How a good governance program is like the rules of the road for driving

(19:10) - Where organizations have ‘gone wrong’ with responsible AI - common themes

(24:13) - Where healthcare executives should start with responsible AI

(29:04) - Exploring the common objections to advanced AI technologies

(30:26) - Recommended resources for learning more about responsible AI

(34:47) - End

Relevant links

Ilana Golbin on LinkedIn

Dr. Neinstein on LinkedIn and Twitter

NIST AI Risk Management Framework

Notable on LinkedIn

Notable Perspectives


Previous Episode


#12 - Mona Baset: Generative AI, LLMs and the future of healthcare

In this episode, Mona sits down for an in-depth conversation with Dr. Muthu Alagappan, chief medical officer at Notable. Among other things, the two discuss:

How Intermountain Health thinks about and sets out to build consumer-grade experiences for its patients

How advanced technology is augmenting human healthcare workers

Why it is important to incorporate empathy into any ROI calculation

And much more.

---

As Vice President of Digital Services at Intermountain Health, Mona Baset leads digital strategy and transformation, including the development and implementation of the digital technology roadmap. She was also appointed by the Governor of Colorado to serve on the state’s eHealth Commission.

Prior to that, Mona was a leader in the technology organization at Atrium Health, leading consumer engagement strategies. Previously, Mona spent almost 10 years at Bank of America, where she led various marketing and communications teams.

---

Outline

Here are the timestamps for this episode.

(00:00) - Intro

(00:52) - The motivation to work in healthcare

(01:48) - Does healthcare lag in consumer technology adoption?

(03:24) - Best-in-class consumer technology from a health system POV

(04:47) - Amazon’s consumer experience vs. the healthcare experience

(07:34) - Building the consumer experience at Intermountain Health

(10:04) - Prioritizing the work

(11:09) - Factors that influence the build vs. partner decision

(13:07) - How Design Thinking applies in healthcare

(20:32) - Quantifying the ROI of empathy

(25:01) - How Intermountain Health thinks about time horizons for digital projects

(26:59) - Intermountain’s best partners have these common characteristics

(28:34) - The impact of ChatGPT and large language models in healthcare

(33:31) - Does technology augment human workers or eliminate the need?

(40:54) - End

Relevant links

Mona Baset on LinkedIn

Dr. Alagappan on LinkedIn and Twitter

Notable on LinkedIn

Notable Perspectives

Next Episode


#14 - Michael Hasselberg: Generative AI is the future

In this episode, Dr. Hasselberg sits down for an in-depth conversation with Dr. Aaron Neinstein, chief medical officer at Notable. Among other things, the two discuss:

The power of advanced AI and LLMs to dramatically reduce development time

How pre-trained models are being used to power automated form fillers

The drivers and motivations of being an early adopter

And much more.

---

Dr. Michael Hasselberg is the first Chief Digital Health Officer at University of Rochester (UR) Medicine and is the co-Director of the UR Health Lab, the health system’s digital health incubator. He is also an Associate Professor of Psychiatry, Clinical Nursing, and Data Science at the University of Rochester.

Board certified as a Psychiatric Mental Health Nurse Practitioner, Dr. Hasselberg completed his Ph.D. degree in Health Practice Research at the UR and a postdoctoral certificate in Healthcare Leadership at the Johnson School of Management at Cornell University.

His expertise spans health and technology: he is a Robert Wood Johnson Foundation Clinical Scholar Fellow and an advisor on digital health modalities to the New York State Department of Health, the Department of Health & Human Services, and the National Quality Forum.

---

Outline

Here are the timestamps for this episode.

(00:00) - Intro

(02:02) - Trying to solve complex healthcare problems before GPT-4

(03:41) - Solving the patient messaging problem with GPT-4 in just two days

(08:03) - Non-patient facing use cases for LLMs and generative AI

(09:22) - Building automated form fillers (workers comp)

(10:54) - Using LLMs to build tools for the IT Help Desk at a health system

(12:45) - Generative AI for ambient documentation

(14:45) - What’s the motivation to be an early adopter of technology?

(18:40) - Why banning the use of generative AI is not a winning strategy

(19:28) - Exploring the incentives for continued innovation

(23:22) - What guardrails does an innovation incubator operate within?

(27:45) - End

Relevant links

Dr. Michael Hasselberg on LinkedIn

Dr. Neinstein on LinkedIn and Twitter

Notable on LinkedIn

Notable Perspectives
