Running large deep learning models on limited hardware or edge devices can be prohibitive. Fortunately, there are methods that compress such models by orders of magnitude while maintaining similar accuracy at inference time.
In this episode I explain one of the earliest such methods: knowledge distillation.
Come join us on Slack
References
- Distilling the Knowledge in a Neural Network: https://arxiv.org/abs/1503.02531
- Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks: https://arxiv.org/abs/2004.05937
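For listeners who want to see the idea in code: below is a minimal sketch of the distillation loss described in Hinton et al. (the first reference above), written in PyTorch. The function name, the temperature T, and the mixing weight alpha are illustrative choices, not values taken from the episode.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Mix the soft-target loss from the teacher with hard-label cross-entropy.

    T (temperature) and alpha (mixing weight) are example values only.
    """
    # Soften both output distributions with temperature T
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)

    # KL divergence between softened distributions, scaled by T^2 so that
    # gradient magnitudes stay comparable to the hard-label term (Hinton et al., 2015)
    soft_loss = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)

    # Standard cross-entropy on the true labels
    hard_loss = F.cross_entropy(student_logits, labels)

    return alpha * soft_loss + (1.0 - alpha) * hard_loss

The smaller student network is then trained with this combined loss, using the frozen teacher's logits as soft targets on each batch.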
05/20/20 • 22 min