
I Think Therefore AI Part 1

Astonishing Legends

07/10/22 • 125 min

On June 11, 2022, The Washington Post published an article by their San Francisco-based tech culture reporter Nitasha Tiku titled "The Google engineer who thinks the company's AI has come to life." The piece focused on the claims of Google software engineer Blake Lemoine, who said he believed the company's artificially intelligent chatbot generator LaMDA had shown him signs that it had become sentient. In addition to identifying itself as an AI-powered dialogue agent, the program also said it felt like a person.

The previous fall, Lemoine had been working in Google's Responsible AI division, tasked with talking to LaMDA to determine whether the program exhibited bias or used discriminatory or hate speech. LaMDA stands for "Language Model for Dialogue Applications"; it is designed to mimic human conversation by processing trillions of words sourced from the internet, an approach known as a "large language model." Over the course of a week, Lemoine had five conversations with LaMDA via a text interface, while a collaborating co-worker conducted four more interviews with the chatbot. They then combined the transcripts and edited them for length, shaping them into a readable narrative while preserving the original intent of the statements. Lemoine presented the transcript and their conclusions in a paper to Google executives as evidence of the program's sentience. After they dismissed his claims, he went public with the internal memo, which had been classified "Privileged & Confidential, Need to Know," and Google placed him on paid administrative leave.

Lemoine contends that artificial intelligence technology will be amazing, but that others may disagree, and that he and Google shouldn't be the ones making all the choices. Whether you believe that LaMDA has become aware and deserves the rights, fair treatment, and even legal representation of personhood, or that such a reality belongs to the distant future or to science fiction, the debate is relevant and will need addressing one day.
If machine sentience is impossible, we only have to worry about human failings. But if robots do become conscious, should we hope they don't grow to resent us?
Visit our webpage for this episode for much more information.



