Compressing deep learning models: distillation (Ep.104)

Data Science at Home

05/20/20 • 22 min

Running large deep learning models on limited hardware or edge devices is often prohibitive. There are, however, methods that compress large models by orders of magnitude while maintaining similar accuracy at inference time.

In this episode I explain one of the first such methods: knowledge distillation.
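
To make the idea concrete, here is a minimal sketch of the standard distillation loss from Hinton et al.'s "Distilling the Knowledge in a Neural Network", written in PyTorch. The temperature T and mixing weight alpha below are illustrative assumptions, not values from the episode: the student is trained to match the teacher's temperature-softened output distribution while still fitting the hard labels.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soften the teacher's distribution with temperature T so that
    # small probabilities ("dark knowledge") carry more signal.
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    # Scale the KL term by T^2 so its gradient magnitude stays
    # comparable to the hard-label term as T changes.
    kd = F.kl_div(log_soft_student, soft_targets, reduction="batchmean") * (T * T)
    # Ordinary cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

At training time this loss is applied per batch with the teacher frozen (its logits computed under torch.no_grad()), and only the student's parameters are updated.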

Come join us on Slack
