
Zack Chase Lipton — The Medical Machine Learning Landscape
09/17/20 • 59 min
Previous Episode

Anthony Goldbloom — How to Win Kaggle Competitions
Anthony Goldbloom is the founder and CEO of Kaggle. In 2011 and 2012, Forbes Magazine named Anthony one of its 30 Under 30 in technology, and in 2011 Fast Company featured him as one of the innovative thinkers changing the future of business. He and Lukas discuss the strategies that do well in Kaggle competitions versus academia versus production, and revisit his 2016 TED talk through the lens of 2020, along with frameworks and languages.

Topics Discussed:
0:00 Sneak peek
0:20 Introduction
0:45 Methods used in Kaggle competitions vs. mainstream academia
2:30 Feature engineering
3:55 Kaggle competitions now vs. 10 years ago
8:35 Data augmentation strategies
10:06 Overfitting in Kaggle competitions
12:53 How not to overfit
14:11 Kaggle competitions vs. the real world
18:15 Getting into ML through Kaggle
22:03 Other Kaggle products
25:48 Favorite underappreciated kernel or dataset
28:27 Python and R
32:03 Frameworks
35:15 His 2016 TED talk through the lens of 2020
37:54 Reinforcement learning
38:43 What's the topic in ML that people don't talk about enough?
42:02 Where are the biggest bottlenecks in deploying ML software?

Check out Kaggle: https://www.kaggle.com/
Follow Anthony on Twitter: https://twitter.com/antgoldbloom
Watch his 2016 TED talk: https://www.ted.com/talks/anthony_goldbloom_the_jobs_we_ll_lose_to_machines_and_the_ones_we_won_t

Visit our podcast's homepage for transcripts and more episodes: www.wandb.com/podcast
Get our podcast on Soundcloud, Apple, and Spotify!
Soundcloud: https://bit.ly/2YnGjIq
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

We started Weights and Biases to build tools for machine learning practitioners because we care a lot about the impact that machine learning can have in the world, and we love working in the trenches with the people building these models.
One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they're working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Weights and Biases: We're always free for academics and open source projects. Email [email protected] with any questions or feature suggestions.
* Blog: https://www.wandb.com/articles
* Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
* Join our community of ML practitioners working on interesting problems - https://www.wandb.com/ml-community

Host: Lukas Biewald - https://twitter.com/l2k
Producer: Lavanya Shukla - https://twitter.com/lavanyaai
Editor: Cayla Sharp - http://caylasharp.com/
Next Episode

Richard Socher — The Challenges of Making ML Work in the Real World
Richard Socher, ex-Chief Scientist at Salesforce, joins us to talk about The AI Economist, NLP, protein generation, and the biggest challenge in making ML work in the real world. Richard Socher was the Chief Scientist (EVP) at Salesforce, where he led teams working on fundamental research (einstein.ai), applied research, product incubation, CRM search, customer service automation, and a cross-product AI platform for unstructured and structured data. Previously, he was an adjunct professor in Stanford's computer science department and the founder and CEO/CTO of MetaMind (www.metamind.io), which was acquired by Salesforce in 2016. In 2014, he got his PhD in the CS Department at Stanford (www.cs.stanford.edu). He likes paramotoring and water adventures, traveling, and photography.

More info:
- Forbes article with more on Richard's bio: https://www.forbes.com/sites/gilpress/2017/05/01/emerging-artificial-intelligence-ai-leaders-richard-socher-salesforce/
- CS224n - NLP with Deep Learning, the class Richard used to teach: http://cs224n.stanford.edu/
- TEDx talk about where AI is today and where it's going: https://www.youtube.com/watch?v=8cmx7V4oIR8
Research:
Google Scholar: https://scholar.google.com/citations?user=FaOcyfMAAAAJ&hl=en

The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies
- Arxiv: https://arxiv.org/abs/2004.13332
- Blog: https://blog.einstein.ai/the-ai-economist/
- Short video: https://www.youtube.com/watch?v=4iQUcGyQhdA
- Q&A: https://salesforce.com/company/news-press/stories/2020/4/salesforce-ai-economist/
- Press: VentureBeat: https://venturebeat.com/2020/04/29/salesforces-ai-economist-taps-reinforcement-learning-to-generate-optimal-tax-policies/ and TechCrunch: https://techcrunch.com/2020/04/29/salesforce-researchers-are-working-on-an-ai-economist-for-more-equitable-tax-policy/

ProGen: Language Modeling for Protein Generation
- bioRxiv: https://www.biorxiv.org/content/10.1101/2020.03.07.982272v2
- Blog: https://blog.einstein.ai/progen/

Dye-sensitized solar cells under ambient light powering machine learning: towards autonomous smart sensors for the internet of things (Chemical Science 2020, Issue 11)
- Paper: https://pubs.rsc.org/en/content/articlelanding/2020/sc/c9sc06145b#!divAbstract

CTRL: A Conditional Transformer Language Model for Controllable Generation
- Arxiv: https://arxiv.org/abs/1909.05858
- Code (pre-trained and fine-tuning): https://github.com/salesforce/ctrl
- Blog: https://blog.einstein.ai/introducing-a-conditional-transformer-language-model-for-controllable-generation/

Genie: a generator of natural language semantic parsers for virtual assistant commands (PLDI 2019)
- PDF: https://almond-static.stanford.edu/papers/genie-pldi19.pdf
- https://almond.stanford.edu

Topics Covered:
0:00 Intro
0:42 The AI Economist
7:08 The objective function and Gini coefficient
12:13 On growing up in Eastern Germany and cultural differences
15:02 Language models for protein generation (ProGen)
27:53 CTRL: conditional transformer language model for controllable generation
37:52 Businesses vs. academia
40:00 What ML applications are important to Salesforce
44:57 An underrated aspect of machine learning
48:13 Biggest challenge in making ML work in the real world

Visit our podcast's homepage for transcripts and more episodes: www.wandb.com/podcast
Get our podcast on Soundcloud, Apple, Spotify, and Google!
Soundcloud: https://bit.ly/2YnGjIq
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Google: http://tiny.cc/GD_Google

Weights and Biases makes developer tools for deep learning. Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://tiny.cc/wb-salon
Join our community of ML practitioners: http://bit.ly/wb-slack
Our gallery features curated machine learning reports by ML researchers: https://app.wandb.ai/gallery