In Our Last Episode...

The neural network framework we start with in this course is the same one we built in Course 312. If you haven't taken that course yet, I recommend it. If it's been a while, here's a quick review of the cast of characters (files).

- This is the top-level script that constructs and runs an autoencoder.

The inner workings of the autoencoder are in the nn_framework directory:

- This contains the class ANN, which represents the whole network as an object. It takes care of input normalization and reporting, and coordinates activity between the layers.
- This contains the class Dense, the only layer type currently implemented in this little framework. It handles forward and backward propagation of signals through the layer, as well as training of the weights.
- This contains a small collection of activation functions: the hyperbolic tangent, the logistic function, and rectified linear units. Each is its own class.
- This contains a small collection of error functions: absolute error and squared error. They calculate the error between two arrays of equal size. Note that this falls short of being a proper loss function, which requires the output to be a single value rather than an array.
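To make the activation and error pieces concrete, here is a minimal sketch of how they might be organized as small classes, each with a forward calculation and its derivative. The class and method names used here (Tanh, Logistic, Relu, Sqr, Abs, calc, calc_d) are illustrative assumptions, not necessarily the framework's actual API.

```python
import numpy as np

# Sketch only: names and signatures are assumptions, not the
# framework's actual interface.

class Tanh:
    """Hyperbolic tangent activation."""
    def calc(self, v):
        return np.tanh(v)

    def calc_d(self, v):
        # d/dv tanh(v) = 1 - tanh(v)^2
        return 1 - np.tanh(v) ** 2


class Logistic:
    """Logistic (sigmoid) activation."""
    def calc(self, v):
        return 1 / (1 + np.exp(-v))

    def calc_d(self, v):
        # d/dv sigma(v) = sigma(v) * (1 - sigma(v))
        sig = self.calc(v)
        return sig * (1 - sig)


class Relu:
    """Rectified linear unit activation."""
    def calc(self, v):
        return np.maximum(0, v)

    def calc_d(self, v):
        return (v > 0).astype(float)


class Sqr:
    """Squared error between two equal-sized arrays.

    Returns an array with one value per element, rather than
    the single scalar a proper loss function would produce.
    """
    def calc(self, x, y):
        return (y - x) ** 2

    def calc_d(self, x, y):
        # Derivative with respect to the prediction x.
        return 2 * (x - y)


class Abs:
    """Absolute error between two equal-sized arrays."""
    def calc(self, x, y):
        return np.abs(y - x)

    def calc_d(self, x, y):
        # Derivative with respect to the prediction x.
        return np.sign(x - y)
```

Keeping each function as its own class with a matched derivative method is what lets the layers call forward and backward passes without caring which activation or error measure they were given.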

There are some other files that we'll bring along for testing and visualization.

- This is an object-oriented version of the visualization code we wrote in Course 311, Neural Network Visualization. If you want to understand how the visualization works and how to create custom visualizations of your own in Matplotlib, I recommend taking a detour and walking through it. But if you would like to treat it as a black box, you are welcome to do that too.
- This is a toy data set with 4-pixel images for testing, to make sure the rest of the autoencoder machinery is working OK.
- A 9-pixel toy data set.
- A data set of 24 Nordic runes, represented as 7x7 pixel images.
- The dictionary containing the raw rune data.
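To give a feel for what a 4-pixel toy data set looks like, here is a hypothetical sketch: a function returning generators that endlessly yield 2x2 test images. The function name, the generator structure, and the specific pixel patterns are all illustrative assumptions rather than the actual data set's contents.

```python
import numpy as np

# Hypothetical sketch of a 4-pixel (2x2) toy data set.
# Names and patterns are assumptions, not the real file.

def get_data_sets(seed=None):
    """Return training and evaluation generators of 2x2 images."""
    rng = np.random.default_rng(seed)
    # A handful of fixed 2x2 patterns: solid blocks and bars.
    examples = [
        np.array([[0.0, 0.0], [0.0, 0.0]]),
        np.array([[1.0, 1.0], [1.0, 1.0]]),
        np.array([[1.0, 0.0], [1.0, 0.0]]),
        np.array([[0.0, 1.0], [0.0, 1.0]]),
        np.array([[1.0, 1.0], [0.0, 0.0]]),
        np.array([[0.0, 0.0], [1.0, 1.0]]),
    ]

    def training_set():
        while True:
            yield examples[rng.integers(len(examples))]

    def evaluation_set():
        while True:
            yield examples[rng.integers(len(examples))]

    return training_set(), evaluation_set()
```

A data set this small makes debugging easy: every weight, activation, and reconstructed pixel can be inspected by eye before moving up to the 9-pixel and 7x7 rune images.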