Work on 2017 W43

US

We're continuing primarily with the multi-spiking network and its training. We're moving dramatically faster than I expected, thanks to Jeremy's work on a highly literal translation of the research rather than the re-architecting Matthew had anticipated. As a result, this week we have been working with a largely complete neural network, training included, although the network is still heavily tailored to the XOR problem, which we are using as our test case to check the correctness of our implementation, following Ghosh-Dastidar and Adeli (2009). Unfortunately, the network is failing those correctness tests: the error function does not converge to zero, or indeed converge at all; instead, it oscillates among three or four values. Debugging this will surely be a priority for next week.
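For context, the check we are running amounts to watching the network error over the XOR cases across training epochs. The sketch below shows the general shape of that check; `forward` and `backprop_step` are hypothetical placeholders for our actual routines, and the spike-time encoding values are illustrative rather than the exact ones in our script.

```python
import numpy as np

# XOR cases encoded as spike times (ms); values here are illustrative
# of the usual SpikeProp-style encoding, not necessarily our script's.
XOR_CASES = [
    # (input spike times, desired output spike time)
    ((0.0, 0.0), 16.0),
    ((0.0, 6.0), 10.0),
    ((6.0, 0.0), 10.0),
    ((6.0, 6.0), 16.0),
]

def network_error(forward, weights):
    """Sum-of-squares error over the four XOR cases.

    `forward(inputs, weights)` is assumed (hypothetically) to return the
    output neuron's first spike time for the given input spike times.
    """
    return 0.5 * sum(
        (forward(inputs, weights) - target) ** 2
        for inputs, target in XOR_CASES
    )

def train_and_log(forward, backprop_step, weights, epochs=500):
    """Record the error after every epoch so we can see whether it
    converges toward zero or oscillates among a few values."""
    history = []
    for _ in range(epochs):
        weights = backprop_step(weights)  # one backprop pass (placeholder)
        history.append(network_error(forward, weights))
    return np.asarray(history), weights
```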

We have also begun splitting the current monolithic script into more manageable functions. This should let us move to a Jupyter notebook where we can annotate functional pieces of the program, rather than an enormous loop whose only unifying theme is "training." So far, the refactored version only initializes weights; much work remains before it is complete and fully documented at both a practical level (how the code works) and a theoretical level (what purpose the code serves).
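As a rough picture of that first refactored piece, here is a minimal weight-initialization sketch for a fully connected multi-spiking network with several delayed sub-connections per neuron pair. The function name, layer sizes, and weight range are assumptions for illustration, not the actual code.

```python
import numpy as np

def init_weights(n_inputs, n_hidden, n_outputs, n_synapses,
                 low=0.0, high=1.0, seed=None):
    """Randomly initialize weights for a two-layer multi-spiking network
    where each neuron pair is linked by `n_synapses` delayed
    sub-connections (one weight per sub-connection).

    Returns a dict of 3-D arrays shaped (post-synaptic, pre-synaptic, synapse).
    """
    rng = np.random.default_rng(seed)
    return {
        "hidden": rng.uniform(low, high, size=(n_hidden, n_inputs, n_synapses)),
        "output": rng.uniform(low, high, size=(n_outputs, n_hidden, n_synapses)),
    }

# Example: a 2-input, 1-output XOR topology with 3 hidden neurons and
# 4 synapses per connection (numbers are illustrative).
weights = init_weights(n_inputs=2, n_hidden=3, n_outputs=1, n_synapses=4)
```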