Why is autograd so complicated? What are the constraints and features that go into making it complicated? What's up with it being written in C++? What's with derivatives.yaml and code generation? What's going on with views and mutation? What's up with hooks and anomaly mode? What's reentrant execution? Why is it relevant to checkpointing? What's the distributed autograd engine?
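As a rough orientation to the topics the episode touches, here is a minimal PyTorch sketch exercising views with in-place mutation, a gradient hook, anomaly mode, and reentrant checkpointing. The shapes and the `block` function are toy stand-ins, not anything from the episode, and the `use_reentrant` flag only exists in newer PyTorch releases.

```python
import torch
import torch.nn.functional as F
import torch.utils.checkpoint

# Views and mutation: autograd tracks in-place ops with version counters and
# rewrites the graph so the gradient through the mutated view stays correct.
base = torch.randn(4, requires_grad=True)
clone = base.clone()
view = clone[:2]          # a view into a non-leaf tensor
view.mul_(2)              # in-place mutation through the view
view.sum().backward()
print(base.grad)          # tensor([2., 2., 0., 0.])

# Hooks: inspect (or replace) a gradient as it flows backward past `y`.
x = torch.randn(3, requires_grad=True)
y = x * 2
y.register_hook(lambda grad: print("grad wrt y:", grad))
y.sum().backward()

# Anomaly mode: records forward-pass stack traces so a NaN produced during
# backward is reported at the op that created it (at a runtime cost).
with torch.autograd.detect_anomaly():
    (x * 2).sum().backward()

# Checkpointing: the block's activations are freed after forward and the
# block is re-run inside backward; with use_reentrant=True this re-enters
# the autograd engine from within a backward call.
def block(inp):           # toy stand-in for an expensive subgraph
    return F.relu(inp @ inp.t())

a = torch.randn(8, 8, requires_grad=True)
out = torch.utils.checkpoint.checkpoint(block, a, use_reentrant=True)
out.sum().backward()
```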
Further reading.
- Autograd notes in the docs https://pytorch.org/docs/stable/notes/autograd.html
- derivatives.yaml https://github.com/pytorch/pytorch/blob/master/tools/autograd/derivatives.yaml
- Paper on the autograd engine in PyTorch https://openreview.net/pdf/25b8eee6c373d48b84e5e9c6e10e7cbbbce4ac73.pdf
06/03/21 • 15 min