An introduction to the mechanics of AutoDiff, exploring its mathematical foundations, implementation approaches, and applications in today's most widely used frameworks
At the heart of machine learning lies the optimization of loss/objective functions. This optimization process relies heavily on computing gradients of these functions with respect to model parameters. As Baydin et al. (2018) explain in their comprehensive survey [1], these gradients guide the iterative updates in optimization algorithms such as stochastic gradient descent (SGD):
θₜ₊₁ = θₜ − α ∇_θ L(θₜ)
Where:
- θₜ represents the model parameters at step t
- α is the learning rate
- ∇_θ L(θₜ) denotes the gradient of the loss function L with respect to the parameters θ
This simple update rule belies the complexity of computing gradients in deep neural networks with millions or even billions of parameters.
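To make the update rule concrete, here is a minimal sketch in NumPy that applies it to a toy quadratic loss whose gradient is known in closed form. The names `theta_star`, `lr`, and `num_steps` are illustrative choices, not part of any framework; in practice the gradient would be supplied by automatic differentiation rather than a hand-written formula.

```python
import numpy as np

# Toy loss L(θ) = ||θ - θ*||², whose gradient ∇_θ L(θ) = 2(θ - θ*) is known analytically.
theta_star = np.array([3.0, -2.0])   # hypothetical optimum θ*
theta = np.zeros(2)                  # initial parameters θ₀
lr = 0.1                             # learning rate α
num_steps = 100

def grad_loss(theta):
    """Closed-form gradient of the toy quadratic loss with respect to θ."""
    return 2.0 * (theta - theta_star)

for t in range(num_steps):
    # θₜ₊₁ = θₜ − α ∇_θ L(θₜ)
    theta = theta - lr * grad_loss(theta)

print(theta)  # approaches theta_star as the iterations proceed
```

The loop is the entire optimizer; everything hard about training deep networks hides inside `grad_loss`, which is exactly the piece automatic differentiation computes for us.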