- use `cd src && make clean && make main && ./main` to compile and run the program normally
- use `make clean && make release` to generate the AppImage binary (you have to install `linuxdeploy` and the other dependencies using `make install_tools` first)
# Libraries
The libraries depend on each other in this order: `GUI -> ceural -> lag`
- `ceural` - [C neural network library](#Ceural)
## Lag
The library supports many operations, but more development is needed because it currently uses [OpenBLAS](https://github.com/xianyi/OpenBLAS) only for matrix multiplication and matrix transposition.
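As a rough illustration of delegating the heavy lifting to OpenBLAS (a sketch of the generic `cblas_dgemm` call, not necessarily how `lag` wraps it), a row-major matrix product can be computed like this:

```c
#include <cblas.h>

/* Sketch only: multiply A (m x k) by B (k x n) into C (m x n), all stored
 * row-major as doubles. The real lag wrapper may look different. */
static void matmul_openblas(const double *A, const double *B, double *C,
                            int m, int n, int k)
{
    /* C = 1.0 * A * B + 0.0 * C */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k,
                1.0, A, k,
                B, n,
                0.0, C, n);
}
```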
### Naming
- `mat` - stands for matrix
- `ew` - stands for element-wise
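For illustration only, a hypothetical pair of functions following this naming scheme (the actual `lag` identifiers may differ) could look like:

```c
/* Hypothetical examples of the naming convention, not the real lag API. */

/* "mat": the function operates on a whole matrix. */
void mat_fill(double *m, int rows, int cols, double value)
{
    for (int i = 0; i < rows * cols; i++)
        m[i] = value;
}

/* "ew": the operation is applied element-wise. */
void mat_ew_add(double *out, const double *a, const double *b,
                int rows, int cols)
{
    for (int i = 0; i < rows * cols; i++)
        out[i] = a[i] + b[i];
}
```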
### Notes
- The matrix part of the library automatically checks that the destination and the source are not the same matrix in places where they must differ, and reports violations with `assert()`.
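A minimal sketch of that guard (hypothetical function, not the actual `lag` code): an out-of-place transpose must not write into its own input, so the destination pointer is asserted to differ from the source.

```c
#include <assert.h>

/* Hypothetical illustration of the aliasing check described above. */
void mat_transpose_into(double *dst, const double *src, int rows, int cols)
{
    assert(dst != src && "destination must not be the source matrix");

    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            dst[c * rows + r] = src[r * cols + c];
}
```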
## Ceural
The ceural library is built for multi-layer networks trained on the MNIST dataset, but with small modifications it can be used for other datasets too. See [Accuracy](#Accuracy) for more info.
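As a rough sketch of what one fully connected layer of such a network computes (illustrative only; the actual ceural data structures and activation functions may differ), assuming a sigmoid activation:

```c
#include <math.h>

/* Illustrative forward pass of one dense layer: out = sigmoid(W * in + b).
 * For MNIST the first layer's input is the 784 flattened pixel values.
 * This is not the ceural API, just a sketch of the computation. */
static void dense_forward(const double *W, const double *b,
                          const double *in, double *out,
                          int n_in, int n_out)
{
    for (int o = 0; o < n_out; o++) {
        double z = b[o];
        for (int i = 0; i < n_in; i++)
            z += W[o * n_in + i] * in[i];
        out[o] = 1.0 / (1.0 + exp(-z));   /* assumed sigmoid activation */
    }
}
```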
# GUI
Preprocessing used in the [MNIST](http://yann.lecun.com/exdb/mnist/) database:
8. **neural network forward propagation** - this preprocessed image is fed to the neural network
# Accuracy
After `10` epochs of training with batch size `32`, the test set accuracy is `97.47 %`, which is not bad considering the test error rates of the 2-layer NNs listed on the [MNIST database website](http://yann.lecun.com/exdb/mnist/). Sadly, the accuracy in practice is not as good as on the test data set 🥺.
Accuracy is calculated using the formula [`accuracy = (TP+TN)/(TP+TN+FP+FN)`](https://en.wikipedia.org/wiki/Accuracy_and_precision), which simplifies to `accuracy = correct/total`.
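A minimal sketch of that calculation over the test set (hypothetical helper, not the actual ceural code):

```c
/* accuracy = correct / total: counts test samples whose predicted digit
 * matches the label. Hypothetical helper for illustration. */
double test_set_accuracy(const int *predicted, const int *labels, int total)
{
    int correct = 0;
    for (int i = 0; i < total; i++)
        if (predicted[i] == labels[i])
            correct++;
    return (double)correct / (double)total;
}
```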
# Performance
Even though Python is much slower than C, Python-digit-recognition is faster. The reason is that the Python version uses the NumPy library, which is written in C and relies on a BLAS implementation for its matrix operations.
# ToDo
- [ ] Add `lag` tests
- [ ] Add `ceural` tests
- [x] Cleanup & document `lag` library
- [x] Cleanup & document `ceural` library
- [ ] Cleanup & document `gui`
- [ ] Add icons into `gui`
- [ ] Add command line options to train, test, save & load the NN
- [x] Check NN definition during the NN weights & biases load process
- [ ] Create documentation
- [x] Write everything into README
- [ ] Choose license
- [x] Finish top-level `Makefile` to create the final binary for Linux
- [ ] Create Windows compilation script & test it on Windows
- [x] Center the digit by its pixels' center of mass before feeding it to the neural network from the GUI input
- [ ] Use a BLAS library (for example [OpenBLAS](https://github.com/xianyi/OpenBLAS)) for linear algebra to improve speed