Do NOT copy any code from this repository. Copying it may expose you to accusations of academic plagiarism.
Contains:
- Hyperparameter path
- Layer Configuration path
- Network Input/Output path
- Initial Weights path
- Final Weights path
- Path of a folder to which the outputs are written
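A minimal sketch of what such a configuration file might look like, assuming one path per line in the order listed above (all file names other than `weights0.txt` and `weights1.txt` are hypothetical):

```
hyperparams.txt
layers.txt
inputsoutputs.txt
weights0.txt
weights1.txt
outputs/
```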
Contains the number of inputs on the first line. Each subsequent pair of lines gives the path of an input file on the first line and the path of the corresponding output file on the second.
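For example, with two training cases and hypothetical file names, this file might look like:

```
2
inputs/case0.txt
outputs/case0.txt
inputs/case1.txt
outputs/case1.txt
```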
Contains the number of layers on the first line. Then, every line after that contains the size of one layer.
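A hypothetical layer configuration for a three-layer 2-5-3 network would then be:

```
3
2
5
3
```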
This is the original construction of the single-output trainable neural network with one hidden layer.
--config_path - the file name of the configuration file
This allows the project to be configured and run from within VS Code.
This is the new construction of the multilayer perceptron with one hidden layer and multiple outputs.
Contains, line by line:
- Max #iterations
- Initial Learning Factor
- Total error to terminate at
- Learning Factor Adjustment Rate (what to multiply and divide it by)
- Minimum total error decrease over a certain number of iterations
- Maximum learning factor
- Minimum lambda (essentially epsilon)
- Interval (in iterations) at which the learning factor and total error are printed and the total error decrease is checked
- A path to random range files
- The range of the inputs so that they can be scaled to between 0 and 1
- The range of the outputs so that the network can scale outputs between 0 and 1 to the expected output range
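Putting the list above together, a hyperparameter file might look as follows, one value per line in the order listed. All values, the range format, and the file name `ranges.txt` are illustrative assumptions, not defaults from the project:

```
100000
0.3
0.01
1.1
0.0001
10.0
1e-9
1000
ranges.txt
0.0,10.0
0.0,5.0
```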
Each line contains a comma-separated range from which the random initial weight values for the nth layer are drawn.
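Assuming one comma-separated range per weight layer, a hypothetical random range file for a network with a single hidden layer (two weight layers) might be:

```
-1.5,1.5
-0.5,0.5
```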
With the weights defined as w[n][k][j], this file lists the initial weights line by line, sorted first by n, then by k, then by j.
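The ordering above can be sketched as a small reader. This is a hypothetical helper, not part of the project, and the exact roles of the k and j indices (unit in layer n vs. unit in layer n + 1) are assumptions based on the description:

```python
import os
import tempfile

def read_weights(path, layer_sizes):
    """Read one weight per line, sorted first by n (the weight layer),
    then by k (assumed: unit in layer n), then by j (assumed: unit in
    layer n + 1). Returns a nested list indexed as w[n][k][j]."""
    weights = []
    with open(path) as f:
        for n in range(len(layer_sizes) - 1):
            # For weight layer n: layer_sizes[n] rows of layer_sizes[n+1] values.
            layer = [[float(f.readline())
                      for _ in range(layer_sizes[n + 1])]
                     for _ in range(layer_sizes[n])]
            weights.append(layer)
    return weights

# Hypothetical demo: a 2-3-1 network has 2*3 + 3*1 = 9 weights.
sizes = [2, 3, 1]
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("\n".join(str(float(i)) for i in range(9)))
    demo_path = f.name
w = read_weights(demo_path, sizes)
os.unlink(demo_path)
```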
Using the same format as weights0.txt, weights1.txt holds the weights after training.