How the course is laid out
A neural network framework is somewhat complex, but its individual pieces are much less overwhelming when tackled one at a time.
- We'll start by defining the problem we’re trying to solve and creating a simple-as-possible data set to test our neural network with. This won’t be anything interesting to look at and probably won’t give us a lot of insight, but it will provide a good way to tell whether we’ve made any terrible mistakes as we go along.
- Next we will build just a hollow outline of a framework. It will take in an input, turn right back around, and spit it out as an output. This will set us up to start filling in the details.
- Our next step will be to build a fully connected layer. This will take place in two stages. In the first stage, we'll create a fully connected linear layer.
- In the second stage, we will add our nonlinearity, our activation function. In fact, we will create a few of them, so that we can choose among them.
- Next, we will allow for multiple layers, in fact, as many as we want.
- The next step is integration with the visualization.
- In this step, we will implement the crux of what makes neural networks tick: backpropagation. This will be the most intellectually demanding theory and code that we get to deal with.
- Now our hard work starts paying off. We will create a slightly more sophisticated data set to pass through our autoencoder.
- This is the culmination of our work. We'll use a data set of Nordic runes to put our autoencoder through its paces. This will let us begin to play and start building an intuition for how multilayer neural networks work.
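To give a feel for where the early steps are headed, here is a minimal sketch of the skeleton we'll grow: a network that echoes its input when empty (the hollow outline), plus a fully connected linear layer and an activation that can be chained. All class and method names here are illustrative placeholders, not the framework's final API, and this shows only the forward pass; backpropagation comes later.

```python
import math
import random

class Linear:
    """A fully connected linear layer: each output is a weighted sum of inputs plus a bias."""
    def __init__(self, n_in, n_out):
        # Small random weights; a real framework would offer smarter initialization.
        self.weights = [[random.uniform(-1, 1) for _ in range(n_in)]
                        for _ in range(n_out)]
        self.bias = [0.0] * n_out

    def forward(self, x):
        return [sum(w * v for w, v in zip(row, x)) + b
                for row, b in zip(self.weights, self.bias)]

class Tanh:
    """One possible activation function (nonlinearity), applied elementwise."""
    def forward(self, x):
        return [math.tanh(v) for v in x]

class Network:
    """Chains layers in sequence; with no layers it just spits its input back out."""
    def __init__(self, layers=None):
        self.layers = layers or []

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

# The hollow outline: input goes in, the same values come back out.
hollow = Network()
print(hollow.forward([0.5, -0.2]))  # [0.5, -0.2]

# Stacking as many layers as we want: linear layers with activations between them.
net = Network([Linear(2, 3), Tanh(), Linear(3, 2), Tanh()])
print(net.forward([0.5, -0.2]))
```

The design choice worth noticing is that every piece exposes the same `forward` interface, which is what lets us stack arbitrarily many layers in a plain list.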
There's a lot to do, but it's broken down into bite-sized pieces. Let's get started.