Implementation of convolution layer in different flavors
A Flexible and Energy-Efficient Accelerator for Sparse Convolutional Neural Networks
Convolutions and more as einsum for PyTorch
Accelerating convolution using Numba, CuPy, and XNOR in Python
🚀An implicit im2col supporting backpropagation on CUDA, and a CNN backpropagation framework.
Fast bare-bones implementation of convolutional layers, residual blocks, the Adam optimizer, backpropagation, and custom accuracy and loss functions (per-pixel F1 score and binary cross-entropy)
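The repositories above all revolve around the im2col trick: unfolding input patches into columns so that convolution becomes a single matrix multiply (and, as one repo notes, expressible as an einsum). A minimal NumPy sketch of that idea, assuming stride 1 and no padding (function and variable names here are illustrative, not taken from any repo above):

```python
import numpy as np

def im2col(x, kh, kw):
    """Unfold a (C, H, W) input into a (C*kh*kw, out_h*out_w) matrix
    of flattened patches, stride 1, no padding."""
    C, H, W = x.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    cols = np.empty((C * kh * kw, out_h * out_w), dtype=x.dtype)
    row = 0
    for c in range(C):
        for i in range(kh):
            for j in range(kw):
                # All output positions see x[c, oh+i, ow+j]; gather them in one slice.
                cols[row] = x[c, i:i + out_h, j:j + out_w].ravel()
                row += 1
    return cols

def conv2d_im2col(x, w):
    """Cross-correlation of x (C, H, W) with filters w (F, C, kh, kw)
    as a single matrix multiply over the im2col matrix."""
    F, C, kh, kw = w.shape
    out_h, out_w = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    cols = im2col(x, kh, kw)                    # (C*kh*kw, out_h*out_w)
    out = w.reshape(F, -1) @ cols               # (F, out_h*out_w)
    return out.reshape(F, out_h, out_w)

# Example: check against a direct einsum over sliding windows.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))
w = rng.standard_normal((4, 3, 3, 3))
out = conv2d_im2col(x, w)                       # shape (4, 6, 6)

# Reference result: sliding windows + einsum (the "convolution as einsum" view).
patches = np.lib.stride_tricks.sliding_window_view(x, (3, 3), axis=(1, 2))
ref = np.einsum('fcij,chwij->fhw', w, patches)
```

Note that `w.reshape(F, -1)` flattens filters in (channel, row, col) order, which must match the row order produced by `im2col` for the matrix product to be a valid convolution.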