
nanograd

My own version of Andrej Karpathy's micrograd.

See benchmark.py for a comparison with a PyTorch model of the same size (layer sizes 1, 10, 10, 1), using the SGD optimizer and MSE as the loss function. In the example I try to approximate $y=x^2$.

[Figures: training loss and predictions]
Nanograd training: 50.84s, Torch training: 1.85s
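For reference, a minimal sketch of what the PyTorch side of such a benchmark could look like: a 1-10-10-1 MLP trained with SGD and MSE loss to fit $y=x^2$. The activation, learning rate, epoch count, and data range here are illustrative assumptions, not necessarily what benchmark.py uses.

```python
# Sketch of a PyTorch baseline (hypothetical setup; see benchmark.py for the actual one).
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(-2, 2, 100).unsqueeze(1)  # inputs
y = x ** 2                                   # target: y = x^2

model = nn.Sequential(
    nn.Linear(1, 10), nn.Tanh(),
    nn.Linear(10, 10), nn.Tanh(),
    nn.Linear(10, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(1000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```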

It is considerably slower than PyTorch because autograd is implemented at the scalar level rather than at the tensor level.
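To illustrate what "scalar level" means: in a micrograd-style engine every single number is wrapped in an object that records its own backward function, so a matrix multiply becomes many Python-level scalar operations instead of one vectorized tensor op. A minimal illustration in that style (not nanograd's actual API) is shown below.

```python
# Minimal micrograd-style scalar autograd illustration (not nanograd's actual API).
class Value:
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule node by node.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# Every arithmetic op creates a separate Python object and closure; this per-scalar
# bookkeeping is why training is orders of magnitude slower than tensor autograd.
x = Value(3.0)
y = x * x + x      # dy/dx = 2x + 1 = 7
y.backward()
print(y.data, x.grad)  # 12.0 7.0
```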

Testing

pytest nanograd
