
Watermarking Large Language Models to Fight Plagiarism with Tom Goldstein - #621

03/20/23 • 51 min


The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Today we’re joined by Tom Goldstein, an associate professor at the University of Maryland. Tom’s research sits at the intersection of ML and optimization and has previously been featured in the New Yorker for his work on invisibility cloaks, clothing that can evade object detection. In our conversation, we focus on his more recent research on watermarking LLM output. We explore the motivations behind adding these watermarks, how they work, and different ways a watermark could be deployed, as well as political and economic incentive structures around the adoption of watermarking and future directions for that line of work. We also discuss Tom’s research into data leakage, particularly in stable diffusion models, work that is analogous to recent guest Nicholas Carlini’s research into LLM data extraction.
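For listeners curious about the mechanics, below is a minimal, illustrative sketch of the kind of "green list" token-level watermark discussed in the episode (in the spirit of Goldstein and collaborators' 2023 watermarking paper). The vocabulary size, bias value, and function names are assumptions for illustration, not the authors' implementation.

# Illustrative sketch of a green-list LLM watermark; parameters are hypothetical.
import hashlib
import math
import random

VOCAB_SIZE = 50_000   # assumed vocabulary size
GREEN_FRACTION = 0.5  # fraction of tokens placed on the green list
BIAS = 2.0            # logit boost applied to green-list tokens

def green_list(prev_token: int) -> set[int]:
    # Pseudorandomly choose a green list, seeded by the previous token.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(VOCAB_SIZE * GREEN_FRACTION)
    return set(rng.sample(range(VOCAB_SIZE), k))

def watermark_logits(logits: list[float], prev_token: int) -> list[float]:
    # Boost green-list logits so generation favors watermarked tokens.
    greens = green_list(prev_token)
    return [x + BIAS if i in greens else x for i, x in enumerate(logits)]

def detect(tokens: list[int]) -> float:
    # z-score: how often tokens land on their green list versus chance.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / math.sqrt(var)

A large positive z-score from detect() indicates text that was very likely generated with the watermark, while human-written text should hover near zero.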


Previous Episode

Does ChatGPT “Think”? A Cognitive Neuroscience Perspective with Anna Ivanova - #620


Today we’re joined by Anna Ivanova, a postdoctoral researcher at MIT Quest for Intelligence. In our conversation with Anna, we discuss her recent paper Dissociating language and thought in large language models: a cognitive perspective. In the paper, Anna reviews the capabilities of LLMs by considering their performance on two different aspects of language use: 'formal linguistic competence', which includes knowledge of rules and patterns of a given language, and 'functional linguistic competence', a host of cognitive abilities required for language understanding and use in the real world. We explore parallels between linguistic competence and AGI, the need to identify new benchmarks for these models, whether an end-to-end trained LLM can address various aspects of functional competence, and much more!

The complete show notes for this episode can be found at twimlai.com/go/620.

Next Episode

Runway Gen-2: Generative AI for Video Creation with Anastasis Germanidis - #622

Today we’re joined by Anastasis Germanidis, Co-Founder and CTO of RunwayML. Amongst all the product and model releases over the past few months, Runway threw its hat into the ring with Gen-1, a model that can take still images or video and transform them into completely stylized videos. They followed that up just a few weeks later with the release of Gen-2, a multimodal model that can produce a video from text prompts. We had the pleasure of chatting with Anastasis about both models, exploring the challenges of generating video, the importance of alignment in model deployment, the potential use of RLHF, the deployment of models as APIs, and much more!

The complete show notes for this episode can be found at twimlai.com/go/622.
