Responsible AI Economics with Katya Klinova & The Partnership on AI

10/28/21 • 29 min

How AI Happens

In recent years, the focus of AI developers has been on technologies that replace basic human labor. Talking to us today about why this is the wrong application for AI (right now) is Katya Klinova, Head of AI, Labor, and the Economy at The Partnership on AI. Tune in to find out why replacing human labor doesn't benefit the whole of humanity, and what our focus should be instead. We delve into the threat of "so-so technologies", what the developer's role should be in vetting vendors, and how to look after the workers supplying them with data. Join us to find out more about how AI can be used to better the whole of society if there’s a shift in the field’s current aims.

Key Points From This Episode:

  • An introduction to Katya Klinova, Head of AI, Labor, and the Economy at The Partnership on AI.
  • How her expectations of the world after her undergraduate degree shaped her.
  • Pursuing a degree in economics to understand how AI impacts labor and the economy.
  • The role of The Partnership on AI in distributing technological gains.
  • Who is impacted when AI is introduced to a market: the consumers and the workers.
  • How different companies fall short of their commitments to benefit everyone.
  • Find out what the “threat of so-so technology” is.
  • Should people become shareholders in AI technology that they helped to train?
  • How capitalism incentivizes “so-so technologies”.
  • The role of developers in selecting vendors and responsible sourcing.
  • Why it's important to realize that data labelers are employees and not just numbers.
  • Shifting the focus of AI from automation to complementarity.
  • Why now is not the time to be replacing human labor.

Tweetables:

“Creating AI that benefits all is actually a very large commitment and a statement, and I don't think many companies have really realized or thought through what they're actually saying in the economic terms when they're subscribing to something like that.” — @klinovakatya [0:09:45]

"It’s not that you want to avoid all kinds of automation, no matter what. Automation, at the end of the day, has been the force that lifted living conditions and incomes around the world, and has been around for much longer than AI." — @klinovakatya [0:11:28]

“We compensate people for the task or for their time, but we are not necessarily compensating them for the data that they generate that we use to train models that can displace their jobs in the future.” — @klinovakatya [0:14:49]

"Might we be automating too much for the kind of labor market needs that we have right now?" — @klinovakatya [0:23:14]

“It’s not the time to eliminate all of the jobs that we possibly can. It’s not the time to create machines that can match humans in everything that they do, but that’s what we are doing.” — @klinovakatya [0:24:50]

Links Mentioned in Today’s Episode:

Katya Klinova on LinkedIn

"Automation and New Tasks: How Technology Displaces and Reinstates Labor"

The Partnership on AI: Responsible Sourcing

Previous Episode

Moxie the Robot & Embodied CTO Stefan Scherer

In this episode, we talk to Stefan Scherer (CTO of Embodied) about why he decided to focus on the more nuanced challenge of developing children’s social-emotional skills. Stefan takes us through how encouraging children to mentor Moxie (a friendly robot) through social interaction helps them develop their interpersonal relationships. We dive into the relevance of scripted versus unscripted conversation in different AI technologies, and how Embodied taught Moxie to define abstract concepts such as "kindness".

Key Points From This Episode:

  • Welcome to Stefan Scherer, CTO of Embodied and lead researcher and developer of Embodied's SocialX™ technology, Moxie.
  • The goal of Embodied: using a natural mode of communication to support children’s social development.
  • Mentoring Moxie: how Moxie teaches children social-emotional learning without being a teacher.
  • Why Stefan and Embodied focused on the challenge of social-emotional skills, not STEM.
  • Developing a technology that captures the infinite answers to social-emotional questions: using neural networks and sentiment analysis.
  • How few-shot learning reduced the amount of data needed to train Moxie (a rough sketch of the idea follows this list).
  • Why it's important to make the transition between free-form and scripted conversations seamless.
  • How the percentage of scripted versus non-scripted conversation differs based on the context of the technology.
  • Discover how Moxie adapts to children’s changing needs and desires.
  • How Moxie acts as a springboard for teaching children to form long-term relationships.
  • The hardware behind Moxie: the ethical considerations around home devices, and data protection.
  • Why Moxie looks the way it does: making it affordable.
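
The bullets above mention neural networks, sentiment analysis, and few-shot learning only in passing; the episode doesn't describe Embodied's actual implementation. As a loose illustration of the few-shot idea, here is a minimal, hypothetical sketch in Python: average the embeddings of a few labeled "support" utterances per class into prototypes, then label a new utterance by its nearest prototype. The `embed` function is a random stand-in for a real sentence encoder, so the output is illustrative only.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in sentence embedding: a deterministic pseudo-random vector.
    # A real system would use a trained sentence encoder here.
    seed = int.from_bytes(text.encode()[:8].ljust(8, b"\0"), "big")
    return np.random.default_rng(seed).normal(size=16)

# A handful of labeled "support" examples per class -- the "few" in few-shot.
support = {
    "positive": ["I love playing with you", "That made me so happy"],
    "negative": ["Go away", "This is the worst"],
}

# One prototype per class: the mean embedding of its support examples.
prototypes = {
    label: np.mean([embed(t) for t in texts], axis=0)
    for label, texts in support.items()
}

def classify(text: str) -> str:
    # Assign the label of the nearest prototype (Euclidean distance).
    v = embed(text)
    return min(prototypes, key=lambda lbl: np.linalg.norm(v - prototypes[lbl]))

print(classify("You make me so happy"))  # output depends on the stub embeddings
```

With a real encoder, supporting a new kind of answer is as cheap as averaging a few more labeled examples, which is why a few-shot setup needs far less data than training a classifier from scratch.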

Tweetables:

“Human behavior is very complex, and it gives us a window into our soul. We can understand so much more than just language from human behavior, we can understand an individual's wellbeing and their abilities to communicate with others.” — Stefan Scherer [0:01:04]

"It is not sufficient to work on the easy challenges at first and then expand from there. No, as a startup you have to tackle the hard ones first because that's where you set yourself apart from the rest." — Stefan Scherer [0:04:53]

“Moxie comes into the world of the child with the mission to basically learn how to be a good friend to humans. And Moxie puts the child into this position of teaching Moxie about how to do that.” — Stefan Scherer [0:17:40]

"One of the most important aspects of Moxie is that Moxie doesn't serve as the destination, Moxie is really a springboard into life." — Stefan Scherer [0:18:29]

“We did not want to overengineer Moxie, we really wanted to basically afford the ability to have a conversation, to be able to multimodally interact, and yet be as frugal with the amount of concepts that we added or the amount of capabilities that we added.” — Stefan Scherer [0:27:17]

Links Mentioned in Today’s Episode:

See Moxie in Action

Stefan Scherer on LinkedIn

Embodied Website

Next Episode

Egocentric Perception with Facebook's Manohar Paluri

Joining us today is Manohar Paluri, Senior Director at Facebook AI. Mano discusses the biggest challenges facing the field of computer vision, and the commonalities and differences between first- and third-person perception. He dives into the complexity of detecting first-person perception and how to overcome the privacy and ethical issues of egocentric technology, then breaks down how AI based on decision trees differs from AI trained on real-world data, and how the two approaches trade off transparency against accuracy.

Key Points From This Episode:

  • Talking to Manohar Paluri, his background in IT, and how he wound up at Facebook AI.
  • Manohar's advice on the pros and cons of doing a Ph.D.
  • Why computer vision is so complex for machines but so simple for humans.
  • Why the term “computer vision” is not a limiting definition in terms of the sensors used.
  • How computer vision and perception differ.
  • The two problems facing computer vision: recognizing entities and augmenting perception.
  • Personalized data, generalized learning ability, and adaptability: the three problems responsible for the low number of entities that computer vision recognizes.
  • The directions Manohar's organization is pursuing: egocentric vision, predicting the impact of modeling, and finding the balance between transparency and accuracy.
  • Find out what the differences are between first- and third-person perception: intention, positioning, and long-form reasoning.
  • The similarity between first- and third-person perception: both are trying to understand the world.
  • Which sensors are required to predict intention: gaze and hand-object interaction.
  • What the privacy and ethical issues are with regard to egocentric technologies.
  • Why Manohar believes striking a balance between accuracy and transparency will set the standard.
  • The three prospects in AI that excite Manohar the most: the next computing platform, bringing different modalities together, and improved access to technology.

Tweetables:

“What I tell many of the new graduates when they come and ask me about ‘Should I do my Ph.D. or not?’ I tell them that ‘You’re asking the wrong question’. Because it doesn’t matter whether you do a Ph.D. or you don’t do a Ph.D., the path and the journey is going to be as long for anybody to take you seriously on the research side.” — Manohar Paluri [0:02:40]

“Just to give you a sense, there are billions of entities in the world. The best of the computer vision systems today can recognize in the order of tens of thousands or hundreds of thousands, not even a million. So abandoning the problem of core computer vision and jumping into perception would be a mistake in my opinion. There is a lot of work we still need to do in making machines understand this billion entity taxonomy.” — Manohar Paluri [0:11:33]

“We are in the research part of the organization, so whatever we are doing, it’s not like we are building something to launch over the next few months or a year, we are trying to ask ourselves how does the world look like three, five, ten years from now and what are the technological problems?” — Manohar Paluri [0:20:00]

“So my hope is, once you set a standard on transparency while maintaining the accuracy, it will be very hard for anybody to justify why they would not use such a model compared to a more black-box model for a little bit more gain in accuracy.” — Manohar Paluri [0:32:55]

Links Mentioned in Today’s Episode:

Manohar Paluri on LinkedIn

Facebook AI Research Website

Facebook AI Website: Ego4D
