
Did a Chatbot Cause Her Son’s Death? Megan Garcia v. Character.AI & Google
12/05/24 • 73 min
What if the worst fears around AI come true? For Megan Garcia, that’s already happened. In February, after spending months interacting with chatbots created by Character.AI, her 14-year-old son Sewell took his own life. Garcia blames Character.AI, and she is suing the company and Google, which she believes significantly contributed to Character.AI’s alleged wrongdoing.
Kara interviews Garcia and Meetali Jain, one of her lawyers and the founder of the Tech Justice Law Project, about the allegations Garcia has made against Character.AI and Google.
When reached for comment, a spokesperson at Character.AI responded with the following statement:
We do not comment on pending litigation.
We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. We take the safety of our users very seriously, and our dedicated Trust and Safety team has worked to implement new safety features over the past seven months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.
Our goal is to provide a creative space that is engaging, immersive, and safe. To achieve this, we are creating a fundamentally different experience for users under 18 that prioritizes safety, including reducing the likelihood of encountering sensitive or suggestive content, while preserving their ability to use the platform.
As we continue to invest in the platform and the user experience, we are introducing new safety features in addition to the tools already in place that restrict the model and filter the content provided to the user. These include improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines, as well as a time-spent notification. For more information on these new features as well as other safety and IP moderation updates to the platform, please refer to the Character.AI blog.
When reached for comment, Google spokesperson Jose Castaneda responded with the following statement:
Our hearts go out to the family during this unimaginably difficult time. Just to clarify, Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products. User safety is a top concern of ours, and that’s why – as has been widely reported – we’ve taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes.
Questions? Comments? Email us at [email protected] or find us on Instagram and TikTok @onwithkaraswisher
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Previous Episode

Nathan Myhrvold: Tech’s Renaissance Man
Nathan Myhrvold likes to challenge conventional wisdom. When the founder and CEO of Intellectual Ventures (and former Chief Technology Officer at Microsoft) isn’t running one of the world’s leading invention businesses, he’s busy doing norm-defying research on topics like dinosaur bone density, asteroid sizing, and the proper way to knead dough. Kara and Nathan talk about everything from AI, politics, nuclear power, and global warming to “splash shots” — photographs of colliding wine glasses.
Questions? Comments? Email us at [email protected] or find us on Instagram/TikTok as @onwithkaraswisher
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Next Episode

Why Tubi CEO Anjali Sud Says Free Is the Future of Streaming
How does an ad-based streamer compete with subscription-based models like Netflix, Hulu, Max, and all the rest? By charging nothing. At least that’s what Tubi is doing. And despite being seemingly less prestigious than premium streamers, Tubi is used by millions of Americans and outranks Peacock, Max, Paramount Plus, and Apple TV+ in total viewing time. For those who are fatigued by subscription fees and monoculture viewing, Tubi offers an enormous catalog of nostalgia and “newstalgia” movies, hours of bingeable classics, over 250 live channels, plus Tubi originals – all at no cost to viewers. So why aren’t more people talking about it? Kara sits down with Tubi CEO Anjali Sud in this special episode of On presented by e.l.f. Cosmetics to talk about Tubi’s appeal to cord-cutters and cord-nevers; how niche-specific fans help inform Tubi content; why Sud thinks Tubi can democratize storytelling and create space for emerging filmmakers; and how she came to be one of the few female CEOs in tech.
This interview was taped live at the Whitney Museum in partnership with e.l.f. Cosmetics as part of their campaign to increase representation and diversity in boardrooms. Find out more here: https://www.elfbeauty.com/changing-the-board-game/so-many-dicks
Questions? Comments? Email us at [email protected] or find us on Instagram and TikTok @onwithkaraswisher
Learn more about your ad choices. Visit podcastchoices.com/adchoices