Gradient Dissent: Conversations on AI

Lukas Biewald

Join Lukas Biewald on Gradient Dissent, an AI-focused podcast brought to you by Weights & Biases. Dive into fascinating conversations with industry giants from NVIDIA, Meta, Google, Lyft, OpenAI, and more. Explore the cutting-edge of AI and learn the intricacies of bringing models into production.

Top 10 Gradient Dissent: Conversations on AI Episodes

Goodpods has curated a list of the 10 best Gradient Dissent: Conversations on AI episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to Gradient Dissent: Conversations on AI for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite Gradient Dissent: Conversations on AI episode by adding your comments to the episode page.

Gradient Dissent: Conversations on AI - Emad Mostaque — Stable Diffusion, Stability AI, and What’s Next

11/15/22 • 70 min

Emad Mostaque is CEO and co-founder of Stability AI, a startup and network of decentralized developer communities building open AI tools. Stability AI is the company behind Stable Diffusion, the well-known open-source text-to-image generation model.

Emad shares the story and mission behind Stability AI (unlocking humanity's potential with open AI technology), and explains how Stability's role as a community catalyst and compute provider might evolve as the company grows. Then, Emad and Lukas discuss what the future might hold: big models vs. "optimal" models, better datasets, and more decentralization.

-

🎶 Special note: This week’s theme music was composed by Weights & Biases’ own Justin Tenuto with help from Harmonai’s Dance Diffusion.

-

Show notes (transcript and links): http://wandb.me/gd-emad-mostaque

-

⏳ Timestamps:

00:00 Intro

00:42 How AI fits into the safety/security industry

09:33 Event matching and object detection

14:47 Running models on the right hardware

17:46 Scaling model evaluation

23:58 Monitoring and evaluation challenges

26:30 Identifying and sorting issues

30:27 Bridging vision and language domains

39:25 Challenges and promises of natural language technology

41:35 Production environment

43:15 Using synthetic data

49:59 Working with startups

53:55 Multi-task learning, meta-learning, and user experience

56:44 Optimization and testing across multiple platforms

59:36 Outro

-

Connect with Jehan and Motorola Solutions:

📍 Jehan on LinkedIn: https://www.linkedin.com/in/jehanw/

📍 Jehan on Twitter: https://twitter.com/jehan/

📍 Motorola Solutions on Twitter: https://twitter.com/MotoSolutions/

📍 Careers at Motorola Solutions: https://www.motorolasolutions.com/en_us/about/careers.html

-

💬 Host: Lukas Biewald

📹 Producers: Riley Fields, Angelica Pan, Lavanya Shukla, Anish Shah

-

Subscribe and listen to our podcast today!

👉 Apple Podcasts: http://wandb.me/apple-podcasts

👉 Google Podcasts: http://wandb.me/google-podcasts

👉 Spotify: http://wandb.me/spotify

Gradient Dissent: Conversations on AI - Cristóbal Valenzuela — The Next Generation of Content Creation and AI

01/19/23 • 40 min

Cristóbal Valenzuela is co-founder and CEO of Runway ML, a startup that's building the future of AI-powered content creation tools. Runway's research areas include diffusion systems for image generation.

Cris gives a demo of Runway's video editing platform. Then, he shares how his interest in combining technology with creativity led to Runway, and where he thinks the world of computation and content might be headed next. Cris and Lukas also discuss Runway's tech stack and research.

Show notes (transcript and links): http://wandb.me/gd-cristobal-valenzuela

---

⏳ Timestamps:

0:00 Intro

1:06 How Runway uses ML to improve video editing

6:04 A demo of Runway’s video editing capabilities

13:36 How Cris entered the machine learning space

18:55 Cris’ thoughts on the future of ML for creative use cases

28:46 Runway’s tech stack

32:38 Creativity, and keeping humans in the loop

36:15 The potential of audio generation and new mental models

40:01 Outro

---

🎥 Runway's AI Film Festival is accepting submissions through January 23! 🎥

They are looking for art and artists that are at the forefront of AI filmmaking. Submissions should be between 1 and 10 minutes long, and a core component of the film should include generative content.

📍 https://aiff.runwayml.com/

---

📝 Links

📍 "High-Resolution Image Synthesis with Latent Diffusion Models" (Rombach et al., 2022)", the research paper behind Stable Diffusion: https://research.runwayml.com/publications/high-resolution-image-synthesis-with-latent-diffusion-models

📍 Lexman Artificial, a 100% AI-generated podcast: https://twitter.com/lexman_ai

---

Connect with Cris and Runway:

📍 Cris on Twitter: https://twitter.com/c_valenzuelab

📍 Runway on Twitter: https://twitter.com/runwayml

📍 Careers at Runway: https://runwayml.com/careers/

---

💬 Host: Lukas Biewald

📹 Producers: Riley Fields, Angelica Pan

---

Subscribe and listen to Gradient Dissent today!

👉 Apple Podcasts: http://wandb.me/apple-podcasts

👉 Google Podcasts: http://wandb.me/google-podcasts

👉 Spotify: http://wandb.me/spotify

Gradient Dissent: Conversations on AI - Amelia & Filip — How Pandora Deploys ML Models into Production

07/01/21 • 40 min

Amelia and Filip give insights into the recommender systems powering Pandora, from developing models to balancing effectiveness and efficiency in production.

---

Amelia Nybakke is a Software Engineer at Pandora. Her team is responsible for the production system that serves models to listeners.

Filip Korzeniowski is a Senior Scientist at Pandora working on recommender systems. Before that, he was a PhD student working on deep neural networks for acoustic and language modeling applied to musical audio recordings.

Connect with Amelia and Filip:

📍 Amelia's LinkedIn: https://www.linkedin.com/in/amelia-nybakke-60bba5107/

📍 Filip's LinkedIn: https://www.linkedin.com/in/filip-korzeniowski-28b33815a/

---

⏳ Timestamps:

0:00 Sneak peek, intro

0:42 What type of ML models are at Pandora?

3:39 What makes two songs similar or not similar?

7:33 Improving models and A/B testing

8:52 Chaining, retraining, versioning, and tracking models

13:29 Useful development tools

15:10 Debugging models

18:28 Communicating progress

20:33 Tuning and improving models

23:08 How Pandora puts models into production

29:45 Bias in ML models

36:01 Repetition vs novelty in recommended songs

38:01 The bottlenecks of deployment

🌟 Transcript: http://wandb.me/gd-amelia-and-filip

🌟 Links:

📍 Amelia's "Women's History Month" playlist: https://www.pandora.com/playlist/PL:1407374934299927:100514833

---

Get our podcast on these platforms:

👉 Apple Podcasts: http://wandb.me/apple-podcasts

👉 Spotify: http://wandb.me/spotify

👉 Google Podcasts: http://wandb.me/google-podcasts

👉 YouTube: http://wandb.me/youtube

👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected
Gradient Dissent: Conversations on AI - Chris Anderson — Robocars, Drones, and WIRED Magazine

01/14/21 • 63 min

Chris shares his journey, from playing in R.E.M. and becoming interested in physics to leading WIRED Magazine for 11 years. His fascination with robots led him to start a company that manufactures drones and to create a community democratizing self-driving cars.

Chris Anderson is the CEO of 3D Robotics, founder of the Linux Foundation Dronecode Project, and founder of the DIY Drones and DIY Robocars communities. From 2001 through 2012 he was the Editor in Chief of WIRED Magazine. He's also the author of the New York Times bestsellers "The Long Tail," "Free," and "Makers: The New Industrial Revolution." In 2007 he was named to the "Time 100" list of the most influential men and women in the world.

Links discussed in this episode:

DIY Robocars: diyrobocars.com

Getting Started with Robocars: https://diyrobocars.com/2020/10/31/getting-started-with-robocars/

DIY Robotics Meet Up: https://www.meetup.com/DIYRobocars

Other works:

3D Robotics: https://www.3dr.com/

The Long Tail by Chris Anderson: https://www.amazon.com/Long-Tail-Future-Business-Selling/dp/1401309666/ref=sr_1_1?dchild=1&keywords=The+Long+Tail&qid=1610580178&s=books&sr=1-1

Interesting links Chris shared:

OpenMV: https://openmv.io/

Intel Tracking Camera: https://www.intelrealsense.com/tracking-camera-t265/

Zumi Self-Driving Car Kit: https://www.robolink.com/zumi/

Possible Minds: Twenty-Five Ways of Looking at AI: https://www.amazon.com/Possible-Minds-Twenty-Five-Ways-Looking/dp/0525557997

Topics discussed:

0:00 Sneak peek and intro

1:03 Battle of the R.E.M.s

3:35 A brief stint with physics

5:09 Becoming a journalist and the woes of being a modern physicist

9:25 WIRED in the aughts

12:13 Perspectives on "The Long Tail"

20:47 Getting into drones

25:08 "Take a smartphone, add wings"

28:07 How did you get to autonomous racing cars?

33:30 COVID and virtual environments

38:40 Chris's hope for Robocars

40:54 Robocar hardware, software, sensors

53:49 Path to Singularity / regulations on drones

58:50 "The golden age of simulation"

1:00:22 Biggest challenge in deploying ML models

Visit our podcasts homepage for transcripts and more episodes! www.wandb.com/podcast

Get our podcast on these other platforms:

YouTube: http://wandb.me/youtube

Apple Podcasts: http://wandb.me/apple-podcasts

Spotify: http://wandb.me/spotify

Google: http://wandb.me/google-podcasts

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work: http://wandb.me/salon

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
Gradient Dissent: Conversations on AI - Josh Tobin — Productionizing ML Models

07/08/20 • 48 min

Josh Tobin is a researcher working at the intersection of machine learning and robotics. His research focuses on applying deep reinforcement learning, generative models, and synthetic data to problems in robotic perception and control. Additionally, he co-organizes Full Stack Deep Learning, a machine learning training program for engineers to learn about production-ready deep learning: https://fullstackdeeplearning.com/

Josh did his PhD in Computer Science at UC Berkeley, advised by Pieter Abbeel, and was a research scientist at OpenAI for 3 years during his PhD.

Finally, Josh created this amazing field guide on troubleshooting deep neural networks: http://josh-tobin.com/assets/pdf/troubleshooting-deep-neural-networks-01-19.pdf

Follow Josh on Twitter: https://twitter.com/josh_tobin

And on his website: http://josh-tobin.com/

Visit our podcasts homepage for transcripts and more episodes! www.wandb.com/podcast

🔊 Get our podcast on YouTube, Apple, and Spotify!

YouTube: https://www.youtube.com/playlist?list=PLD80i8An1OEEb1jP0sjEyiLG8ULRXFob_

Apple Podcasts: https://bit.ly/2WdrUvI

Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they're working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

👩🏼‍🚀 Weights and Biases: We're always free for academics and open source projects. Email [email protected] with any questions or feature suggestions.

- Blog: https://www.wandb.com/articles

- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery

- Continue the conversation on our Slack community - http://bit.ly/wandb-forum

🎙 Host: Lukas Biewald - https://twitter.com/l2k

👩🏼‍💻 Producer: Lavanya Shukla - https://twitter.com/lavanyaai

📹 Editor: Cayla Sharp - http://caylasharp.com/
Gradient Dissent: Conversations on AI - Dominik Moritz — Building Intuitive Data Visualization Tools

03/25/21 • 39 min

Dominik shares the story and principles behind Vega and Vega-Lite, and explains how visualization and machine learning help each other.

---

Dominik is a co-author of Vega-Lite, a high-level visualization grammar for building interactive plots. He's also a professor at the Human-Computer Interaction Institute at Carnegie Mellon University and an ML researcher at Apple.

Connect with Dominik:

Twitter: https://twitter.com/domoritz

GitHub: https://github.com/domoritz

Personal website: https://www.domoritz.de/

---

⏳ Timestamps:

0:00 Sneak peek, intro

1:15 What is Vega-Lite?

5:39 The grammar of graphics

9:00 Using visualizations creatively

11:36 Vega vs Vega-Lite

16:03 ggplot2 and machine learning

18:39 Voyager and the challenges of scale

24:54 Model explainability and visualizations

31:24 Underrated topics: constraints and visualization theory

34:38 The challenge of metrics in deployment

36:54 In between aggregate statistics and individual examples

Links discussed:

Vega-Lite: https://vega.github.io/vega-lite/

"Data analysis and statistics: an expository overview" (Tukey and Wilk, 1966): https://dl.acm.org/doi/10.1145/1464291.1464366

Slope chart / slope graph: https://vega.github.io/vega-lite/examples/line_slope.html

Voyager: https://github.com/vega/voyager

Draco: https://github.com/uwdata/draco

Check out the transcript and discover more awesome ML projects: http://wandb.me/gd-domink-moritz

---

Get our podcast on these platforms:

Apple Podcasts: http://wandb.me/apple-podcasts

Spotify: http://wandb.me/spotify

Google: http://wandb.me/google-podcasts

YouTube: http://wandb.me/youtube

Soundcloud: http://wandb.me/soundcloud

---

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
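To make "visualization grammar" concrete, here is a minimal sketch written with Altair, the Python API that compiles to Vega-Lite specifications. The dataset and field names are invented for illustration and are not from the episode; treat this as a rough example of the idea rather than anything Dominik and Lukas built or discussed.

```python
# Minimal sketch of Vega-Lite's grammar via Altair (Python).
# The data and column names below are hypothetical, purely for illustration.
import altair as alt
import pandas as pd

# Toy dataset: play counts for a few songs.
df = pd.DataFrame({
    "song": ["A", "B", "C"],
    "plays": [120, 85, 43],
})

# In a visualization grammar, a chart is data + a mark + encodings
# that map data fields to visual channels (x, y, color, ...).
chart = (
    alt.Chart(df)
    .mark_bar()
    .encode(
        x=alt.X("song:N", title="Song"),         # nominal field on the x-axis
        y=alt.Y("plays:Q", title="Play count"),  # quantitative field on the y-axis
    )
)

# Saving to HTML embeds the Vega-Lite JSON spec that Altair generates.
chart.save("plays.html")
```

The declarative style is the point of the grammar: you describe what to show (data, mark, encodings) and the library decides how to draw it.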

About this episode

In this episode of Gradient Dissent, Lukas interviews Dave Rogenmoser (CEO & Co-Founder) and Saad Ansari (Director of AI) of Jasper AI, a generative AI company focused on text generation for content like blog posts, articles, and more. The company has seen impressive growth since its launch at the start of 2021.

Lukas talks with Dave and Saad about how Jasper AI was able to sell the capabilities of large language models as a product so successfully, and how they are able to continually improve their product and take advantage of steps forward in the AI industry at large.

They also discuss how they keep their business ahead of the competition, where they focus their R&D efforts, and how they keep the insights they've learned over the years relevant as the company grows in headcount and value.

Other topics include the potential use of generative AI in domains it hasn't yet reached, as well as the role that community and user feedback play in the continual tweaking and tuning that machine learning models go through.

Connect with Dave & Saad:

Find Dave on Twitter and LinkedIn.

Find Saad on LinkedIn.

---

💬 Host: Lukas Biewald

---

Subscribe and listen to Gradient Dissent today!

👉 Apple Podcasts: http://wandb.me/apple-podcasts

👉 Google Podcasts: http://wandb.me/google-podcasts

👉 Spotify: http://wandb.me/spotify

Gradient Dissent: Conversations on AI - Jordan Fisher — Skipping the Line with Autonomous Checkout

08/04/22 • 57 min

Jordan Fisher is the CEO and co-founder of Standard AI, an autonomous checkout company that’s pushing the boundaries of computer vision.

In this episode, Jordan discusses “the Wild West” of the MLOps stack and tells Lukas why Rust beats Python. He also explains why AutoML shouldn't be overlooked and uses a bag of chips to help explain the Manifold Hypothesis.

Show notes (transcript and links): http://wandb.me/gd-jordan-fisher

---

⏳ Timestamps:

00:00 Intro

00:40 The origins of Standard AI

08:30 Getting Standard into stores

18:00 Supervised learning, the advent of synthetic data, and the manifold hypothesis

24:23 What's important in an MLOps stack

27:32 The merits of AutoML

30:00 Deep learning frameworks

33:02 Python versus Rust

39:32 Raw camera data versus video

42:47 The future of autonomous checkout

48:02 Sharing the StandardSim data set

52:30 Picking the right tools

54:30 Overcoming dynamic data set challenges

57:35 Outro

---

Connect with Jordan and Standard AI

📍 Jordan on LinkedIn: https://www.linkedin.com/in/jordan-fisher-81145025/

📍 Standard AI on Twitter: https://twitter.com/StandardAi

📍 Careers at Standard AI: https://careers.standard.ai/

---

💬 Host: Lukas Biewald

📹 Producers: Riley Fields, Cayla Sharp, Angelica Pan, Lavanya Shukla

---

Subscribe and listen to our podcast today!

👉 Apple Podcasts: http://wandb.me/apple-podcasts

👉 Google Podcasts: http://wandb.me/google-podcasts

👉 Spotify: http://wandb.me/spotify

Gradient Dissent: Conversations on AI - Advanced AI Accelerators and Processors with Andrew Feldman of Cerebras Systems

06/22/23 • 60 min

On this episode, we’re joined by Andrew Feldman, Founder and CEO of Cerebras Systems. Andrew and the Cerebras team are responsible for building the largest-ever computer chip and the fastest AI-specific processor in the industry.

We discuss:

The advantages of using large chips for AI work.

Cerebras Systems’ process for building chips optimized for AI.

Why traditional GPUs aren’t the optimal machines for AI work.

Why efficiently distributing computing resources is a significant challenge for AI work.

How much faster Cerebras Systems’ machines are than other processors on the market.

Reasons why some ML-specific chip companies fail and what Cerebras does differently.

Unique challenges for chip makers and hardware companies.

Cooling and heat-transfer techniques for Cerebras machines.

How Cerebras approaches building chips that will fit the needs of customers for years to come.

Why the strategic vision for what data to collect for ML needs more discussion.

Resources:

Andrew Feldman - https://www.linkedin.com/in/andrewdfeldman/

Cerebras Systems - https://www.linkedin.com/company/cerebras-systems/

Cerebras Systems | Website - https://www.cerebras.net/

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#OCR #DeepLearning #AI #Modeling #ML

Gradient Dissent: Conversations on AI - Zack Chase Lipton — The Medical Machine Learning Landscape

09/17/20 • 59 min

How Zack went from being a musician to a professor, how medical applications of machine learning are developing, and the challenges of counteracting bias in real-world applications.

Zachary Chase Lipton is an assistant professor of Operations Research and Machine Learning at Carnegie Mellon University. His research spans core machine learning methods and their social impact, and addresses diverse application areas, including clinical medicine and natural language processing. Current research focuses include robustness under distribution shift, breast cancer screening, the effective and equitable allocation of organs, and the intersection of causal thinking with messy data. He is the founder of the Approximately Correct blog (approximatelycorrect.com) and the creator of Dive into Deep Learning, an interactive open-source book drafted entirely through Jupyter notebooks.

Zack's blog: http://approximatelycorrect.com/

Detecting and Correcting for Label Shift with Black Box Predictors: https://arxiv.org/pdf/1802.03916.pdf

Algorithmic Fairness from a Non-Ideal Perspective: https://www.datascience.columbia.edu/data-good-zachary-lipton-lecture

Jonas Peters' lectures on causality: https://youtu.be/zvrcyqcN9Wo

⏳ Timestamps:

0:00 Sneak peek: Is this a problem worth solving?

0:38 Intro

1:23 Zack's journey from being a musician to a professor at CMU

4:45 Applying machine learning to medical imaging

10:14 Exploring new frontiers: the most impressive deep learning applications for healthcare

12:45 Evaluating the models – are they ready to be deployed in hospitals for use by doctors?

19:16 Capturing the signals in evolving representations of healthcare data

27:00 How does the data we capture affect the predictions we make?

30:40 Distinguishing between associations and correlations in data – horror vs romance movies

34:20 The positive effects of augmenting datasets with counterfactually flipped data

39:25 Algorithmic fairness in the real world

41:03 What does it mean to say your model isn't biased?

43:40 Real-world implications of decisions to counteract model bias

49:10 The pragmatic approach to counteracting bias in a non-ideal world

51:24 An underrated aspect of machine learning

55:11 Why defining the problem is the biggest challenge for machine learning in the real world

Visit our podcasts homepage for transcripts and more episodes! www.wandb.com/podcast

Get our podcast on YouTube, Apple, and Spotify!

YouTube: https://www.youtube.com/c/WeightsBiases

Soundcloud: https://bit.ly/2YnGjIq

Apple Podcasts: https://bit.ly/2WdrUvI

Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they're working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://tiny.cc/wb-salon

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://bit.ly/wandb-forum

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://app.wandb.ai/gallery

FAQ

How many episodes does Gradient Dissent: Conversations on AI have?

Gradient Dissent: Conversations on AI currently has 126 episodes available.

What topics does Gradient Dissent: Conversations on AI cover?

The podcast is about Podcasts, Technology and Business.

What is the most popular episode on Gradient Dissent: Conversations on AI?

The episode title 'Emad Mostaque — Stable Diffusion, Stability AI, and What’s Next' is the most popular.

What is the average episode length on Gradient Dissent: Conversations on AI?

The average episode length on Gradient Dissent: Conversations on AI is 54 minutes.

How often are episodes of Gradient Dissent: Conversations on AI released?

Episodes of Gradient Dissent: Conversations on AI are typically released every 14 days.

When was the first episode of Gradient Dissent: Conversations on AI?

The first episode of Gradient Dissent: Conversations on AI was released on Mar 11, 2020.
