A mini deep learning library, accelerated on GPUs with PyCUDA.
No no no.. I haven't written this library on top of TensorFlow or PyTorch, it is completely standalone. :P
It implements backpropagation via reverse traversal of the computation graph (see the sketch below), supports gradients and GPU operations, and lets you build a mini neural network (FNN) and train it on a dataset (e.g. MNIST) with the in-house optimizer.
Tip: always give your tensor a funny name! :)
Note: for educational use only.
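How does the reverse traversal work? Roughly: build the expression graph as you compute, topologically sort it from the output, then walk it backwards and let every node push its gradient to its parents. The snippet below is a tiny generic sketch of that idea in plain Python; the Value class is made up for illustration and is not deepops' actual code.

# Generic reverse-traversal autodiff sketch (illustration only, not deepops internals).
class Value:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents          # nodes this value was computed from
        self._backward = lambda: None    # pushes self.grad to the parents

    def __mul__(self, other):
        out = Value(self.data * other.data, parents=(self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # topologically order the graph, then traverse it in reverse
        order, seen = [], set()
        def visit(node):
            if node not in seen:
                seen.add(node)
                for p in node._parents:
                    visit(p)
                order.append(node)
        visit(self)
        self.grad = 1.0                  # seed the output gradient
        for node in reversed(order):
            node._backward()

For example, x = Value(2.0); y = Value(3.0); z = x * y; z.backward() leaves x.grad == 3.0 and y.grad == 2.0.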
pip install deepop
import deepops as dp
from deepops.model import Model

class SeqFC(Model):
    def __init__(self):
        super().__init__()
        self.dense1 = dp.layers.Dense(2, 2, activation="relu", name="dense1")
        self.dense2 = dp.layers.Dense(2, 1, name="dense2")

    def forward(self, x):
        x = self.dense1(x)
        x = self.dense2(x)
        return x

x = dp.Tensor([1.0, 2.0])               # sample 2-feature input (illustrative)
sequential = SeqFC()
sequential.forward(x)                   # forward pass
sequential.init_backward()              # initialise the backward pass
sequential.backward()                   # reverse traversal fills parameter gradients
print([p.grad for p in sequential.parameters()])
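Once the gradients are there, training is just a loop of forward / backward / parameter update. The loop below is only a sketch: the stand-in data, the loss handling, and the p.data attribute used for the SGD-style update are assumptions about deepops, not documented API.

# Hypothetical training loop (sketch only; the loss handling and the p.data
# attribute used for the update are assumptions, not documented deepops API).
mnist_batches = [(dp.Tensor([1.0, 2.0]), dp.Tensor([1.0]))]   # stand-in for a real MNIST loader
model = SeqFC()
lr = 0.01
for x_batch, y_batch in mnist_batches:
    pred = model.forward(x_batch)            # forward pass
    # ... compute a loss from pred and y_batch here ...
    model.init_backward()
    model.backward()                         # fills p.grad for every parameter
    for p in model.parameters():
        p.data = p.data - lr * p.grad        # plain SGD step; assumes parameters expose .data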
# deepops tensor
Tensor = dp.Tensor
a = Tensor([1, 2, 3, 4, 5])
a.where                 # device the tensor currently lives on
# 'cpu'
a.device("gpu:0")       # attach to gpu device
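Under the hood, putting data on the GPU with PyCUDA comes down to copying the buffer into a gpuarray. The snippet below shows that mechanism with PyCUDA directly; it only illustrates what a device transfer involves, not what a.device("gpu:0") literally does inside deepops.

import numpy as np
import pycuda.autoinit                    # set up a CUDA context on the default device
from pycuda import gpuarray

host = np.array([1, 2, 3, 4, 5], dtype=np.float32)
dev = gpuarray.to_gpu(host)               # host -> device copy
dev = dev * 2                             # elementwise op runs on the GPU
print(dev.get())                          # device -> host copy: [ 2.  4.  6.  8. 10.]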
a = Tensor([1.0, 2.0])
print(a + a)            # elementwise addition
# GPU Operation
a = Tensor([1.0, 2.0])
print(a.mul(a))         # elementwise multiply
print(a * a)            # same thing via the * operator
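If you are curious how an elementwise op like a.mul(a) can be run on the GPU, PyCUDA's ElementwiseKernel is the usual tool. The kernel below is a sketch of that general technique; it is not the kernel deepops actually ships.

import numpy as np
import pycuda.autoinit
from pycuda import gpuarray
from pycuda.elementwise import ElementwiseKernel

mul_kernel = ElementwiseKernel(
    "float *out, float *a, float *b",     # kernel arguments
    "out[i] = a[i] * b[i]",               # body executed for every element index i
    "elementwise_mul",
)

a = gpuarray.to_gpu(np.array([1.0, 2.0], dtype=np.float32))
out = gpuarray.empty_like(a)
mul_kernel(out, a, a)                     # launch over all elements
print(out.get())                          # [1. 4.]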
Tensor = dp.Tensor
a1 = Tensor([1.0, 3.0, 1.0])
b1 = Tensor([7.0, 3.0, 5.0])
a2 = Tensor([4.0, 3.0, 1.0])
a3 = Tensor([3.0, 3.0, 1.0])
a4 = Tensor([7.0, 1.0, 6.0])
b2 = Tensor([1.0, 21.0, 12.0])
c = a1 * b1 + a3
d = a2 * b2 + a4
out = c * d
# backward pass: reverse-traverse the graph from out to the leaves
out.backward()
print(out.grad)
print(a1.grad)
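For this graph, out = (a1 * b1 + a3) * (a2 * b2 + a4) elementwise, so the chain rule gives d(out)/d(a1) = b1 * (a2 * b2 + a4), assuming the output gradient is seeded with ones (the usual convention). You can check a1.grad against a plain NumPy recomputation:

import numpy as np

a1 = np.array([1.0, 3.0, 1.0]); b1 = np.array([7.0, 3.0, 5.0])
a2 = np.array([4.0, 3.0, 1.0]); b2 = np.array([1.0, 21.0, 12.0])
a3 = np.array([3.0, 3.0, 1.0]); a4 = np.array([7.0, 1.0, 6.0])

d = a2 * b2 + a4                          # [11., 64., 18.]
expected_a1_grad = b1 * d                 # chain rule: d(out)/d(a1) = b1 * d
print(expected_a1_grad)                   # [ 77. 192.  90.]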
Run the tests:
python -m pytest -s
Please contribute to my work:
- write more tests...
- need an optimizer (see the SGD sketch below for a possible starting point).
- support more operations.
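If you want to pick up the optimizer item, a minimal SGD class along these lines could be a starting point. It assumes parameters expose .data and .grad attributes; only .grad appears in the examples above, so treat the rest as an assumption about deepops internals.

class SGD:
    """Minimal stochastic gradient descent sketch (not part of deepops yet)."""
    def __init__(self, params, lr=0.01):
        self.params = list(params)
        self.lr = lr

    def step(self):
        for p in self.params:
            p.data = p.data - self.lr * p.grad   # gradient-descent update

    def zero_grad(self):
        for p in self.params:
            p.grad = 0.0                         # reset accumulated gradients

Usage would look like: opt = SGD(model.parameters(), lr=0.01), then opt.step() after each backward pass and opt.zero_grad() before the next one.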
MIT