Wrap Up
Congratulations!
You made it all the way through. Feel proud of yourself. This course takes a lot of coding and a lot of tricky mental machinations. You can't do all that without some pain, some confusion, and a lot of dedicated time. Don't forget to claim your certificate when you're done here and show it to your mom and any potential employers. I expect they'll be very interested to hear that you wrote a simple machine learning framework from scratch.
Extensions
Now that you have the basics under your belt, you are free to do some experimenting of your own. What are the effects of different numbers of layers and nodes? How do different data sets behave in the autoencoder? You can even adapt it for image classification, its most common use case today. Have fun with it.
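One way to run the layers-and-nodes experiment is to sweep the hidden layer size and compare reconstruction error. The sketch below doesn't use the framework from the course; it is a minimal, self-contained NumPy version of a one-hidden-layer autoencoder (tanh encoder, linear decoder, plain gradient descent, biases omitted for brevity), with toy data and hyperparameters chosen just for illustration.

```python
import numpy as np

def train_autoencoder(data, n_hidden, n_epochs=300, lr=0.05, seed=0):
    """Train a one-hidden-layer autoencoder with a tanh encoder and
    linear decoder; return the final mean squared reconstruction error."""
    rng = np.random.default_rng(seed)
    n_inputs = data.shape[1]
    # Small random starting weights (bias terms omitted for brevity).
    w_in = rng.normal(scale=0.1, size=(n_inputs, n_hidden))
    w_out = rng.normal(scale=0.1, size=(n_hidden, n_inputs))
    for _ in range(n_epochs):
        hidden = np.tanh(data @ w_in)     # forward pass: encode
        recon = hidden @ w_out            # forward pass: decode
        err = recon - data                # reconstruction error
        # Backpropagate and take one gradient descent step.
        grad_out = hidden.T @ err / len(data)
        grad_hidden = (err @ w_out.T) * (1 - hidden ** 2)  # tanh derivative
        grad_in = data.T @ grad_hidden / len(data)
        w_out -= lr * grad_out
        w_in -= lr * grad_in
    return float(np.mean(err ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Toy data: noisy points that mostly live in a 2-D subspace of 8-D space.
    latent = rng.normal(size=(200, 2))
    mixing = rng.normal(size=(2, 8))
    data = latent @ mixing + 0.05 * rng.normal(size=(200, 8))
    for n_hidden in (1, 2, 4, 8):
        mse = train_autoencoder(data, n_hidden)
        print(f"hidden nodes: {n_hidden:2d}  reconstruction MSE: {mse:.4f}")
```

Because the toy data is mostly two-dimensional, you'd expect the error to drop sharply once the hidden layer has at least two nodes and to improve only marginally after that. Swapping in your own data set makes the same sweep a quick probe of its intrinsic structure.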
Alternatively, now that you know what's going on inside, you can use a full-scale framework, like PyTorch or TensorFlow, more effectively. Your choices of activation functions and layer counts will mean more.
What's next?
The follow-up course to this one, 313, is in the works if it isn't finished yet. I expect to release it in the winter of 2019. In it, we'll extend this framework, adding advanced features like regularization and dropout, and building a rich set of initialization methods and alternatives to gradient descent. We'll also engage in hyperparameter optimization, showing how to tune the neural network to best meet your needs.
I hope to see you there.