Get started

Welcome!

In this course we'll step through the process of writing a simple neural network framework, start to finish. When you're done, you'll have both a first-hand understanding of the principles underlying multi-layer neural networks and a platform for experimenting with networks of your own design.

What you will have when you’re done

By the time you finish this course, you'll have a tool for creating fully connected neural networks of any depth and size. The framework can be extended to use any activation function, any type of layer, and any architecture that you like. It could be extended, for example, to convolutional neural networks or to recurrent neural networks, such as long short-term memory.

More importantly, you'll have an understanding of how neural network frameworks, such as TensorFlow and PyTorch, are constructed and operate. This will allow you to make better use of existing tools, get your models working more quickly, and get more accurate results.

What you need to get started

Firm prerequisites

You'll need a working knowledge of Python to get the most out of this course, but you don't need to be an expert. Comfort with classes and NumPy will serve you well. Here is a wonderful visual introduction to NumPy that I recommend checking out, even if you've used NumPy before.

Soft prerequisites

These are topics that aren't absolutely necessary, but they will enrich your understanding of what's going on and your ability to extend the framework. Familiarity with the basics of calculus (slopes and derivatives) will be helpful. Derivatives are the basis of backpropagation, the foundation of learning in neural networks.
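To give a flavor of the calculus involved: a derivative is just a slope, and a slope can be estimated numerically as rise over run. This toy snippet (not part of the course code) estimates the slope of a simple function, which is the same idea backpropagation uses to decide how to nudge each weight.

```python
def f(x):
    # A simple function whose exact derivative we know: f'(x) = 2x.
    return x ** 2

def slope(func, x, h=1e-6):
    # Symmetric difference quotient: rise over run in a tiny
    # neighborhood around x.
    return (func(x + h) - func(x - h)) / (2 * h)

# At x = 3, the exact derivative of x**2 is 6.
print(slope(f, 3.0))  # very close to 6.0
```

If you can follow what this snippet is doing, you have all the calculus intuition you'll need to get started.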

The code is stored in GitHub and organized into a collection of Git branches. Understanding how Git and GitHub work will make it a lot easier to work with the code and to organize any code that you write.

During the course, you'll be referring to lectures in the How Deep Neural Networks Work course, especially the lectures on "How Neural Networks Work" and "What Neural Networks can Learn". The course on How Optimization Works will be a valuable reference too. They're both free.

For maximum benefit

This course makes use of the code created in the Neural Network Visualization course. If you'd like to understand the visualization down to its roots you can take the course, but it's not necessary for understanding the rest of the neural network building process.

All the connections between two groups of things, like the nodes of two consecutive layers in a neural network, are easy to represent with a two-dimensional array, or a matrix. The mechanics of working with matrices are covered as part of linear algebra. If you'd like to totally understand the ins and outs of this, I can strongly recommend these linear algebra resources. But they're optional too.
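As a small illustration of that idea (a sketch with made-up sizes, not code from the course): if one layer has 3 nodes and the next has 2, every one of the 3 × 2 connections gets a weight, and all of them fit in a single 2 × 3 matrix. A matrix-vector product then applies every connection at once.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Rows correspond to output nodes, columns to input nodes,
# so weights[i, j] is the connection from input j to output i.
weights = rng.normal(size=(2, 3))

# One activation value per input node.
inputs = np.array([0.5, -1.0, 2.0])

# Each output is a weighted sum over all inputs, computed
# for both output nodes in one matrix-vector product.
outputs = weights @ inputs

print(weights.shape)  # (2, 3)
print(outputs.shape)  # (2,)
```

That one `@` operation is the heart of a fully connected layer, which is why matrices show up everywhere in the framework.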
