Tags: #machine-learning
Softmax to MNIST: Building a Tiny Autograd and Classifier
Deriving softmax, log-softmax, and NLL loss, then training an MLP on MNIST with a home-grown autograd.