Gradient Dissent: Conversations on AI

Lukas Biewald

Join Lukas Biewald on Gradient Dissent, an AI-focused podcast brought to you by Weights & Biases. Dive into fascinating conversations with industry giants from NVIDIA, Meta, Google, Lyft, OpenAI, and more. Explore the cutting-edge of AI and learn the intricacies of bringing models into production.


Top 10 Gradient Dissent: Conversations on AI Episodes

Goodpods has curated a list of the 10 best Gradient Dissent: Conversations on AI episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to Gradient Dissent: Conversations on AI for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite Gradient Dissent: Conversations on AI episode by adding your comments to the episode page.

Gradient Dissent: Conversations on AI - Emad Mostaque — Stable Diffusion, Stability AI, and What’s Next
11/15/22 • 70 min

Emad Mostaque is CEO and co-founder of Stability AI, a startup and network of decentralized developer communities building open AI tools. Stability AI is the company behind Stable Diffusion, the well-known, open source, text-to-image generation model.

Emad shares the story and mission behind Stability AI (unlocking humanity's potential with open AI technology), and explains how Stability's role as a community catalyst and compute provider might evolve as the company grows. Then, Emad and Lukas discuss what the future might hold in store: big models vs "optimal" models, better datasets, and more decentralization.

-

🎶 Special note: This week’s theme music was composed by Weights & Biases’ own Justin Tenuto with help from Harmonai’s Dance Diffusion.

-

Show notes (transcript and links): http://wandb.me/gd-emad-mostaque

-

⏳ Timestamps:

00:00 Intro

00:42 How AI fits into the safety/security industry

09:33 Event matching and object detection

14:47 Running models on the right hardware

17:46 Scaling model evaluation

23:58 Monitoring and evaluation challenges

26:30 Identifying and sorting issues

30:27 Bridging vision and language domains

39:25 Challenges and promises of natural language technology

41:35 Production environment

43:15 Using synthetic data

49:59 Working with startups

53:55 Multi-task learning, meta-learning, and user experience

56:44 Optimization and testing across multiple platforms

59:36 Outro

-

Connect with Jehan and Motorola Solutions:

📍 Jehan on LinkedIn: https://www.linkedin.com/in/jehanw/

📍 Jehan on Twitter: https://twitter.com/jehan/

📍 Motorola Solutions on Twitter: https://twitter.com/MotoSolutions/

📍 Careers at Motorola Solutions: https://www.motorolasolutions.com/en_us/about/careers.html

-

💬 Host: Lukas Biewald

📹 Producers: Riley Fields, Angelica Pan, Lavanya Shukla, Anish Shah

-

Subscribe and listen to our podcast today!

👉 Apple Podcasts: http://wandb.me/apple-podcasts​​

👉 Google Podcasts: http://wandb.me/google-podcasts​

👉 Spotify: http://wandb.me/spotify​

Gradient Dissent: Conversations on AI - AI’s Future: Investment & Impact with Sarah Guo and Elad Gil
01/18/24 • 64 min

Explore the future of investment and impact in AI with host Lukas Biewald and guests Sarah Guo and Elad Gil of the No Priors podcast.

Sarah is the founder of Conviction VC, an AI-centric $100 million venture fund. Elad, a seasoned entrepreneur and startup investor, has invested in over 40 companies valued at $1 billion or more, and wrote the influential "High Growth Handbook."

Join us for a deep dive into the nuanced world of AI, where we'll explore its broader industry impact, focusing on how startups can seamlessly blend product-centric approaches with a balance of innovation and practical development.

*Subscribe to Weights & Biases* → https://bit.ly/45BCkYz

Timestamps:

0:00 - Introduction

5:15 - Exploring Fine-Tuning vs RAG in AI

10:30 - Evaluating AI Research for Investment

15:45 - Impact of AI Models on Product Development

20:00 - AI's Role in Evolving Job Markets

25:15 - The Balance Between AI Research and Product Development

30:00 - Code Generation Technologies in Software Engineering

35:00 - AI's Broader Industry Implications

40:00 - Importance of Product-Driven Approaches in AI Startups

45:00 - AI in Various Sectors: Beyond Software Engineering

50:00 - Open Source vs Proprietary AI Models

55:00 - AI's Impact on Traditional Roles and Industries

1:00:00 - Closing Thoughts

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

Follow Weights & Biases:

YouTube: http://wandb.me/youtube

Twitter: https://twitter.com/weights_biases

LinkedIn: https://www.linkedin.com/company/wandb

Join the Weights & Biases Discord Server:

https://discord.gg/CkZKRNnaf3

#OCR #DeepLearning #AI #Modeling #ML

Gradient Dissent: Conversations on AI - Piero Molino — The Secret Behind Building Successful Open Source Projects
02/11/21 • 36 min

Piero shares the story of how Ludwig was created, as well as the ins and outs of how Ludwig works and the future of machine learning with no code.

Piero is a Staff Research Scientist in the Hazy Research group at Stanford University. He is a former founding member of Uber AI, where he created Ludwig, worked on applied projects (COTA, Graph Learning for Uber Eats, Uber's Dialogue System), and published research on NLP, dialogue, visualization, graph learning, reinforcement learning, and computer vision.

Topics covered:

0:00 Sneak peek and intro

1:24 What is Ludwig, at a high level?

4:42 What is Ludwig doing under the hood?

7:11 No-code machine learning and data types

14:15 How Ludwig started

17:33 Model performance and underlying architecture

21:52 On Python in ML

24:44 Defaults and W&B integration

28:26 Perspective on NLP after 10 years in the field

31:49 Most underrated aspect of ML

33:30 Hardest part of deploying ML models in the real world

Learn more about Ludwig: https://ludwig-ai.github.io/ludwig-docs/

Piero on Twitter: https://twitter.com/w4nderlus7

Piero on LinkedIn: https://www.linkedin.com/in/pieromolino/?locale=en_US

Get our podcast on these other platforms:

Apple Podcasts: http://wandb.me/apple-podcasts

Spotify: http://wandb.me/spotify

Google: http://wandb.me/google-podcasts

YouTube: http://wandb.me/youtube

Soundcloud: http://wandb.me/soundcloud

Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://wandb.me/salon

Join our community of ML practitioners where we host AMAs, share interesting projects, and meet other people working in deep learning: http://wandb.me/slack

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
Gradient Dissent: Conversations on AI - Richard Socher — The Challenges of Making ML Work in the Real World
09/29/20 • 50 min

Richard Socher, former Chief Scientist at Salesforce, joins us to talk about The AI Economist, NLP, protein generation, and the biggest challenges in making ML work in the real world.

Richard Socher was the Chief Scientist (EVP) at Salesforce, where he led teams working on fundamental research (einstein.ai), applied research, product incubation, CRM search, customer service automation, and a cross-product AI platform for unstructured and structured data. Previously, he was an adjunct professor at Stanford's computer science department and the founder and CEO/CTO of MetaMind (www.metamind.io), which was acquired by Salesforce in 2016. In 2014, he received his PhD from the CS Department at Stanford. He likes paramotoring and water adventures, traveling, and photography.

More info:

Forbes article with more on Richard's bio: https://www.forbes.com/sites/gilpress/2017/05/01/emerging-artificial-intelligence-ai-leaders-richard-socher-salesforce/

CS224n, NLP with Deep Learning, the class Richard used to teach: http://cs224n.stanford.edu/

TEDx talk about where AI is today and where it's going: https://www.youtube.com/watch?v=8cmx7V4oIR8

Research:

Google Scholar: https://scholar.google.com/citations?user=FaOcyfMAAAAJ&hl=en

The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies
Arxiv: https://arxiv.org/abs/2004.13332
Blog: https://blog.einstein.ai/the-ai-economist/
Short video: https://www.youtube.com/watch?v=4iQUcGyQhdA
Q&A: https://salesforce.com/company/news-press/stories/2020/4/salesforce-ai-economist/
Press: VentureBeat (https://venturebeat.com/2020/04/29/salesforces-ai-economist-taps-reinforcement-learning-to-generate-optimal-tax-policies/), TechCrunch (https://techcrunch.com/2020/04/29/salesforce-researchers-are-working-on-an-ai-economist-for-more-equitable-tax-policy/)

ProGen: Language Modeling for Protein Generation
bioRxiv: https://www.biorxiv.org/content/10.1101/2020.03.07.982272v2
Blog: https://blog.einstein.ai/progen/

Dye-sensitized solar cells under ambient light powering machine learning: towards autonomous smart sensors for the internet of things (Chemical Science 2020, Issue 11)
Paper: https://pubs.rsc.org/en/content/articlelanding/2020/sc/c9sc06145b#!divAbstract

CTRL: A Conditional Transformer Language Model for Controllable Generation
Arxiv: https://arxiv.org/abs/1909.05858
Code (pre-trained and fine-tuning): https://github.com/salesforce/ctrl
Blog: https://blog.einstein.ai/introducing-a-conditional-transformer-language-model-for-controllable-generation/

Genie: a generator of natural language semantic parsers for virtual assistant commands (PLDI 2019)
Paper: https://almond-static.stanford.edu/papers/genie-pldi19.pdf
https://almond.stanford.edu

Topics covered:

0:00 Intro

0:42 The AI Economist

7:08 The objective function and Gini coefficient

12:13 On growing up in Eastern Germany and cultural differences

15:02 Language models for protein generation (ProGen)

27:53 CTRL: conditional transformer language model for controllable generation

37:52 Businesses vs. academia

40:00 What ML applications are important to Salesforce

44:57 An underrated aspect of machine learning

48:13 Biggest challenge in making ML work in the real world

Visit our podcasts homepage for transcripts and more episodes: www.wandb.com/podcast

Get our podcast on Soundcloud, Apple, Spotify, and Google:

Soundcloud: https://bit.ly/2YnGjIq

Apple Podcasts: https://bit.ly/2WdrUvI

Spotify: https://bit.ly/2SqtadF

Google: http://tiny.cc/GD_Google

Weights & Biases makes developer tools for deep learning. Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://tiny.cc/wb-salon

Join our community of ML practitioners: http://bit.ly/wb-slack

Our gallery features curated machine learning reports by ML researchers: https://app.wandb.ai/gallery
Gradient Dissent: Conversations on AI - Zack Chase Lipton — The Medical Machine Learning Landscape
09/17/20 • 59 min

How Zack went from being a musician to a professor, how medical applications of machine learning are developing, and the challenges of counteracting bias in real-world applications.

Zachary Chase Lipton is an assistant professor of Operations Research and Machine Learning at Carnegie Mellon University. His research spans core machine learning methods and their social impact and addresses diverse application areas, including clinical medicine and natural language processing. Current research focuses include robustness under distribution shift, breast cancer screening, the effective and equitable allocation of organs, and the intersection of causal thinking with messy data. He is the founder of the Approximately Correct blog (approximatelycorrect.com) and the creator of Dive Into Deep Learning, an interactive open-source book drafted entirely through Jupyter notebooks.

Links:

Zack's blog: http://approximatelycorrect.com/

"Detecting and Correcting for Label Shift with Black Box Predictors": https://arxiv.org/pdf/1802.03916.pdf

"Algorithmic Fairness from a Non-Ideal Perspective": https://www.datascience.columbia.edu/data-good-zachary-lipton-lecture

Jonas Peters' lectures on causality: https://youtu.be/zvrcyqcN9Wo

Topics covered:

0:00 Sneak peek: Is this a problem worth solving?

0:38 Intro

1:23 Zack's journey from being a musician to a professor at CMU

4:45 Applying machine learning to medical imaging

10:14 Exploring new frontiers: the most impressive deep learning applications for healthcare

12:45 Evaluating the models: are they ready to be deployed in hospitals for use by doctors?

19:16 Capturing the signals in evolving representations of healthcare data

27:00 How the data we capture affects the predictions we make

30:40 Distinguishing between associations and correlations in data: horror vs. romance movies

34:20 The positive effects of augmenting datasets with counterfactually flipped data

39:25 Algorithmic fairness in the real world

41:03 What does it mean to say your model isn't biased?

43:40 Real-world implications of decisions to counteract model bias

49:10 The pragmatic approach to counteracting bias in a non-ideal world

51:24 An underrated aspect of machine learning

55:11 Why defining the problem is the biggest challenge for machine learning in the real world

Visit our podcasts homepage for transcripts and more episodes: www.wandb.com/podcast

Get our podcast on YouTube, Apple, and Spotify:

YouTube: https://www.youtube.com/c/WeightsBiases

Soundcloud: https://bit.ly/2YnGjIq

Apple Podcasts: https://bit.ly/2WdrUvI

Spotify: https://bit.ly/2SqtadF

We started Weights & Biases to build tools for machine learning practitioners because we care a lot about the impact that machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they're working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://tiny.cc/wb-salon

Join our community of ML practitioners where we host AMAs, share interesting projects, and meet other people working in deep learning: http://bit.ly/wandb-forum

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://app.wandb.ai/gallery
Gradient Dissent: Conversations on AI - Jeremy Howard — The Simple but Profound Insight Behind Diffusion
01/05/23 • 72 min

Jeremy Howard is a co-founder of fast.ai, the non-profit research group behind the popular massive open online course "Practical Deep Learning for Coders", and the open source deep learning library "fastai".

Jeremy is also a co-founder of #Masks4All, a global volunteer organization founded in March 2020 that advocated for the public adoption of homemade face masks in order to help slow the spread of COVID-19. His Washington Post article "Simple DIY masks could help flatten the curve" went viral in late March and early April 2020, and is associated with the U.S. CDC's change in guidance a few days later to recommend wearing masks in public.

In this episode, Jeremy explains how diffusion works and how individuals with limited compute budgets can engage meaningfully with large, state-of-the-art models. Then, as our first-ever repeat guest on Gradient Dissent, Jeremy revisits a previous conversation with Lukas on Python vs. Julia for machine learning.

Finally, Jeremy shares his perspective on the early days of COVID-19, and what his experience as one of the earliest and most high-profile advocates for widespread mask-wearing was like.

Show notes (transcript and links): http://wandb.me/gd-jeremy-howard-2

---

⏳ Timestamps:

0:00 Intro

1:06 Diffusion and generative models

14:40 Engaging with large models meaningfully

20:30 Jeremy's thoughts on Stable Diffusion and OpenAI

26:38 Prompt engineering and large language models

32:00 Revisiting Julia vs. Python

40:22 Jeremy's science advocacy during early COVID days

1:01:03 Researching how to improve children's education

1:07:43 The importance of executive buy-in

1:11:34 Outro

1:12:02 Bonus: Weights & Biases

---

📝 Links

📍 Jeremy's previous Gradient Dissent episode (8/25/2022): http://wandb.me/gd-jeremy-howard

📍 "Simple DIY masks could help flatten the curve. We should all wear them in public.", Jeremy's viral Washington Post article: https://www.washingtonpost.com/outlook/2020/03/28/masks-all-coronavirus/

📍 "An evidence review of face masks against COVID-19" (Howard et al., 2021), one of the first peer-reviewed papers on the effectiveness of wearing masks: https://www.pnas.org/doi/10.1073/pnas.2014564118

📍 Jeremy's Twitter thread summary of "An evidence review of face masks against COVID-19": https://twitter.com/jeremyphoward/status/1348771993949151232

📍 Read more about Jeremy's mask-wearing advocacy: https://www.smh.com.au/world/north-america/australian-expat-s-push-for-universal-mask-wearing-catches-fire-in-the-us-20200401-p54fu2.html

---

Connect with Jeremy and fast.ai:

📍 Jeremy on Twitter: https://twitter.com/jeremyphoward

📍 fast.ai on Twitter: https://twitter.com/FastDotAI

📍 Jeremy on LinkedIn: https://www.linkedin.com/in/howardjeremy/

---

💬 Host: Lukas Biewald

📹 Producers: Riley Fields, Angelica Pan

Gradient Dissent: Conversations on AI - Cristóbal Valenzuela — The Next Generation of Content Creation and AI
01/19/23 • 40 min

Cristóbal Valenzuela is co-founder and CEO of Runway ML, a startup that's building the future of AI-powered content creation tools. Runway's research areas include diffusion systems for image generation.

Cris gives a demo of Runway's video editing platform. Then, he shares how his interest in combining technology with creativity led to Runway, and where he thinks the world of computation and content might be headed to next. Cris and Lukas also discuss Runway's tech stack and research.

Show notes (transcript and links): http://wandb.me/gd-cristobal-valenzuela

---

⏳ Timestamps:

0:00 Intro

1:06 How Runway uses ML to improve video editing

6:04 A demo of Runway’s video editing capabilities

13:36 How Cris entered the machine learning space

18:55 Cris’ thoughts on the future of ML for creative use cases

28:46 Runway’s tech stack

32:38 Creativity, and keeping humans in the loop

36:15 The potential of audio generation and new mental models

40:01 Outro

---

🎥 Runway's AI Film Festival is accepting submissions through January 23! 🎥

They are looking for art and artists at the forefront of AI filmmaking. Submissions should be between 1 and 10 minutes long, and a core component of the film should include generative content.

📍 https://aiff.runwayml.com/

--

📝 Links

📍 "High-Resolution Image Synthesis with Latent Diffusion Models" (Rombach et al., 2022)", the research paper behind Stable Diffusion: https://research.runwayml.com/publications/high-resolution-image-synthesis-with-latent-diffusion-models

📍 Lexman Artificial, a 100% AI-generated podcast: https://twitter.com/lexman_ai

---

Connect with Cris and Runway:

📍 Cris on Twitter: https://twitter.com/c_valenzuelab

📍 Runway on Twitter: https://twitter.com/runwayml

📍 Careers at Runway: https://runwayml.com/careers/

---

💬 Host: Lukas Biewald

📹 Producers: Riley Fields, Angelica Pan

---

Subscribe and listen to Gradient Dissent today!

👉 Apple Podcasts: http://wandb.me/apple-podcasts​​

👉 Google Podcasts: http://wandb.me/google-podcasts​

👉 Spotify: http://wandb.me/spotify​

Gradient Dissent: Conversations on AI - Joaquin Candela — Definitions of Fairness

10/01/20 • 79 min

Joaquin chats about scaling and democratizing AI at Facebook, while understanding fairness and algorithmic bias.

Joaquin Quiñonero Candela is Distinguished Tech Lead for Responsible AI at Facebook, where he aims to understand and mitigate the risks and unintended consequences of the widespread use of AI across Facebook. He was previously Director of Society and AI Lab and Director of Engineering for Applied ML. Before joining Facebook, Joaquin taught at the University of Cambridge and worked at Microsoft Research.

Connect with Joaquin:

Personal website: https://quinonero.net/

Twitter: https://twitter.com/jquinonero

LinkedIn: https://www.linkedin.com/in/joaquin-qui%C3%B1onero-candela-440844/

Topics discussed:

0:00 Intro, sneak peek

0:53 Looking back at building and scaling AI at Facebook

10:31 How do you ship a model every week?

15:36 Getting buy-in to use a system

19:36 More on ML tools

24:01 Responsible AI at Facebook

38:33 How to engage with those affected by ML decisions

41:54 Approaches to fairness

53:10 How to know things are built right

59:34 Diversity, inclusion, and AI

1:14:21 Underrated aspect of AI

1:16:43 Hardest thing when putting models into production

Transcript: http://wandb.me/gd-joaquin-candela

Links discussed:

"Race and Gender" (2019): https://arxiv.org/pdf/1908.06165.pdf

"Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning" (2019): https://arxiv.org/abs/1912.10389

"Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification" (2018): http://proceedings.mlr.press/v81/buolamwini18a.html

Get our podcast on these platforms:

Apple Podcasts: http://wandb.me/apple-podcasts

Spotify: http://wandb.me/spotify

Google Podcasts: http://wandb.me/google-podcasts

YouTube: http://wandb.me/youtube

Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects, and meet other people working in deep learning: http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected

About this episode

In this episode of Gradient Dissent, Lukas interviews Dave Rogenmoser (CEO & Co-Founder) and Saad Ansari (Director of AI) of Jasper AI, a generative AI company with a focus on text generation for content like blog posts, articles, and more. The company has seen impressive growth since its launch at the start of 2021.

Lukas talks with Dave and Saad about how Jasper AI was able to sell the capabilities of large language models as a product so successfully, and how they are able to continually improve their product and take advantage of steps forward in the AI industry at large.

They also discuss how they keep their business ahead of the competition, where they focus their R&D efforts, and how they keep the insights they've learned over the years relevant as the company grows in headcount and value.

Other topics include the potential use of generative AI in domains it hasn't yet reached, as well as the impact that community and user feedback have on the constant tweaking and tuning that machine learning models go through.

Connect with Dave & Saad:

Find Dave on Twitter and LinkedIn.

Find Saad on LinkedIn.

---

💬 Host: Lukas Biewald

---

Subscribe and listen to Gradient Dissent today!

👉 Apple Podcasts: http://wandb.me/apple-podcasts​​

👉 Google Podcasts: http://wandb.me/google-podcasts​

👉 Spotify: http://wandb.me/spotify​

Gradient Dissent: Conversations on AI - AI's Future: Spatial Data with Paul Copplestone

12/21/23 • 58 min

Dive into the remarkable journey of Paul Copplestone, CEO of Supabase, in this episode of Gradient Dissent Business. Paul recounts his unique experiences, having started with a foundation in web development before venturing into innovative projects in agriculture, blending his rural roots with technological advancements. Throughout our conversation with Paul, we uncover his coding and database management skills and learn how his diverse background was instrumental in shaping his approach to building a thriving tech company.

This episode covers everything from challenges in the AI and database industries to the future of spatial data and embeddings. Join us as we explore the fascinating world of innovation, the importance of diverse perspectives, and the future of AI, data storage, and spatial data.

We discuss:

0:00 What is supabase.com?

03:07 Exploration of exciting use cases

06:15 Challenges in the AI and Database Industry

12:30 Role of Multimodal Models

16:45 Innovations in Data Storage

22:10 Diverse Perspectives on Technology

29:20 The Importance of Intellectual Honesty

36:00 Building a Company and Navigating Challenges

42:30 The Impact of Global Experience

49:20 The Future of AI and Spatial Data

54:00 Integrating Spatial Data into Digital Systems

Thanks for listening to the Gradient Dissent Business podcast, with hosts Lavanya Shukla and Caryn Marooney, brought to you by Weights & Biases. Be sure to click the subscribe button below to keep your finger on the pulse of this fast-moving space and hear from other amazing guests.


FAQ

How many episodes does Gradient Dissent: Conversations on AI have?

Gradient Dissent: Conversations on AI currently has 122 episodes available.

What topics does Gradient Dissent: Conversations on AI cover?

The podcast is about Podcasts, Technology and Business.

What is the most popular episode on Gradient Dissent: Conversations on AI?

The episode title 'Emad Mostaque — Stable Diffusion, Stability AI, and What’s Next' is the most popular.

What is the average episode length on Gradient Dissent: Conversations on AI?

The average episode length on Gradient Dissent: Conversations on AI is 54 minutes.

How often are episodes of Gradient Dissent: Conversations on AI released?

Episodes of Gradient Dissent: Conversations on AI are typically released every 14 days.

When was the first episode of Gradient Dissent: Conversations on AI?

The first episode of Gradient Dissent: Conversations on AI was released on Mar 11, 2020.
