Get Started

Welcome!

In this course we'll take a simple neural network framework and add in the tricks and methods that help it achieve the best possible results. When you're done, you'll have both a first-hand understanding of how neural network enhancements work and a platform for experimenting with high-performing networks of your own design.

What you will have when you’re done

By the time you finish this course, you'll have a tool for creating fully connected neural networks of any depth and size, complete with regularization, dropout, and a toolbox of optimizers. This framework can be extended to use any activation function, any type of layer, and any architecture that you like. It could be extended, for example, to convolutional neural networks and recurrent neural networks, such as long short-term memory networks.
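As a rough preview, here is a minimal sketch of the kind of structure we're working toward: a model built from a stack of layer objects, each with its own forward pass. The names used here (Model, Dense, forward) are placeholders for illustration, not the exact API this course builds.

```python
import numpy as np

# Minimal sketch of a layered framework. Class and method names are
# placeholders for illustration, not the course's actual API.

class Dense:
    """A fully connected layer with a fixed tanh activation."""
    def __init__(self, n_in, n_out):
        # One weight per connection between the two layers.
        self.weights = np.random.normal(size=(n_in, n_out)) / np.sqrt(n_in)

    def forward(self, x):
        return np.tanh(x @ self.weights)

class Model:
    """A stack of layers applied one after another."""
    def __init__(self, layers):
        self.layers = layers

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

model = Model([Dense(4, 7), Dense(7, 2)])
print(model.forward(np.ones(4)))  # two output values
```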

More importantly, you'll have an understanding of how the many options behind neural network frameworks, such as TensorFlow and PyTorch, operate and how to use them to your best advantage. This will allow you to make better use of existing tools, get your models working more quickly, and get more accurate results.

What you need to get started

Firm prerequisites

You'll need a working knowledge of Python to get the most out of this course, but you don't need to be an expert. Comfort with classes and NumPy will serve you well. Here is a wonderful visual introduction to NumPy that I recommend checking out, even if you've used it before.

Soft prerequisites

These are topics that aren't absolutely necessary, but they will enrich your understanding of what's going on and your ability to extend the framework. Familiarity with the basics of calculus (slopes and derivatives) will be helpful. Derivatives are the basis of backpropagation, the foundation of learning in neural networks.

The code is stored in GitHub and organized into a collection of Git branches. Understanding how Git and GitHub work will make it a lot easier to work with the code and to organize any code that you write.

In this course we'll be building on the code we wrote in Course 312, Build a Neural Network Framework. You'll be able to understand what's going on much better if you've walked through that course first.

For maximum benefit

This course makes use of the code created in Course 311, Neural Network Visualization. If you'd like to understand the visualization down to its roots, you can take that course, but it's not necessary for understanding the rest of the material.

All the connections between two groups of things, like the nodes of two consecutive layers in a neural network, are easy to represent with a two-dimensional array, or matrix. The mechanics of working with matrices are covered in linear algebra. If you'd like to fully understand the ins and outs of this, I can strongly recommend these linear algebra resources. But that's optional too.
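To make that concrete, here's a tiny sketch of the idea in NumPy. The layer sizes and activation values are made up for illustration; the point is just that every connection between two layers lives in one two-dimensional array, and passing signals through those connections is a matrix product.

```python
import numpy as np

# Two consecutive layers: n_in nodes feeding into n_out nodes.
n_in, n_out = 4, 3
weights = np.random.normal(size=(n_in, n_out))  # one entry per connection

# Activations of the earlier layer (made-up values).
inputs = np.array([0.5, -1.0, 0.25, 2.0])

# Sending those activations across all the connections is a matrix product.
outputs = inputs @ weights
print(outputs.shape)  # (3,) -- one value arriving at each node of the later layer
```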
