Continuing from the previous episode, in this one I cover the topic of compressing deep learning models and explain another simple yet effective approach, rewinding, which can lead to much smaller models that still perform as well as the original.
Don't forget to join our Slack channel and discuss previous episodes or propose new ones.
This episode is supported by Pryml.io
Pryml is an enterprise-scale platform to synthesise data and deploy applications built on that data back to a production environment.
Comparing Rewinding and Fine-tuning in Neural Network Pruning
https://arxiv.org/abs/2003.02389
06/01/20 • 15 min
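The paper above compares two ways of retraining a network after pruning: fine-tuning (continue training the surviving weights from their final trained values) and rewinding (reset the surviving weights to a snapshot saved early in training, then retrain). A minimal sketch of that distinction, using NumPy with illustrative function names and toy "trained" weights (all assumptions, not the paper's code):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Return a binary mask that keeps the largest-magnitude weights."""
    k = int(weights.size * sparsity)                 # number of weights to remove
    threshold = np.sort(np.abs(weights), axis=None)[k]
    return (np.abs(weights) >= threshold).astype(weights.dtype)

def rewind_init(early_weights, mask):
    """Rewinding: surviving weights restart from an early-training
    snapshot; pruned weights stay at zero."""
    return early_weights * mask

def fine_tune_init(final_weights, mask):
    """Fine-tuning: surviving weights keep their final trained
    values; pruned weights stay at zero."""
    return final_weights * mask

# Toy example: pretend w_final is the result of training w_early further.
rng = np.random.default_rng(0)
w_early = rng.normal(size=(4, 4))                        # snapshot from early in training
w_final = w_early + rng.normal(scale=0.5, size=(4, 4))   # weights after full training

mask = magnitude_prune(w_final, sparsity=0.5)
w_rewound = rewind_init(w_early, mask)
w_finetune = fine_tune_init(w_final, mask)
```

After either initialization, the network would be retrained with the mask held fixed; the paper's finding is that the rewinding start point can match or beat fine-tuning at high sparsities.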