Exercise 19. Add backpropagation scaffolding.

Now we get to the grittiest part of this operation. If you haven't already, I strongly recommend you watch the video below on how backpropagation works, or read the blog post. Getting a feel for what it's trying to do will be necessary before we can reduce it to code in our next few lessons.


If you're inclined to dig deeper into the math behind backpropagation, please check out this playlist from 3Blue1Brown (aka Grant Sanderson), a mathematician and YouTuber of phenomenal teaching ability. He can make the most abstract concepts feel comfortable.


Coding challenge

  • In ANN.train(), add a call to the back_prop() method. (We'll write this next.) Pass it de_dy, the sensitivity of the error to the final layer's output.
  • Add a back_prop() method to ANN that iterates through all the model layers backward, from end to start, calling each layer's back_prop() method. (We'll write this next.) Each layer's back_prop() should accept an output sensitivity, de_dy, and return an input sensitivity, de_dx, which in turn becomes the output sensitivity for the next layer back.
  • Create a back_prop() method in the Layer class. We won't complete it for now; we're just putting a shell in place that we can fill out in the next few lessons. Calculate the output de_dx from the input de_dy as if there were no activation function, that is, as if y = weights @ x. Make sure not to return the de_dx value for the bias neuron. (A sketch of the whole scaffold follows this list.)

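Here is a minimal sketch of what that scaffolding might look like. It is not the course's exact code: the constructors, forward_prop(), the bias-appended-as-last-element convention, and the squared-error gradient used in train() are all assumptions made just so the example runs end to end.

```python
import numpy as np


class Layer:
    """Bare-bones fully connected layer. Assumes the input x is a column
    vector with a constant bias element appended at the end, so weights
    has shape (n_outputs, n_inputs + 1)."""

    def __init__(self, n_inputs, n_outputs):
        self.weights = np.random.normal(size=(n_outputs, n_inputs + 1))
        self.x = None

    def forward_prop(self, x):
        # Append the bias neuron (a constant 1) and remember the input.
        self.x = np.concatenate([x, [1.0]])
        return self.weights @ self.x

    def back_prop(self, de_dy):
        # Scaffolding only: pretend there is no activation function,
        # so y = weights @ x and de_dx is just weights.T @ de_dy.
        de_dx = self.weights.transpose() @ de_dy
        # Don't pass back the sensitivity for the bias neuron (last element).
        return de_dx[:-1]


class ANN:
    def __init__(self, layers):
        self.layers = layers

    def forward_prop(self, x):
        for layer in self.layers:
            x = layer.forward_prop(x)
        return x

    def back_prop(self, de_dy):
        # Walk the layers from last to first. Each layer's input
        # sensitivity becomes the output sensitivity of the layer before it.
        for layer in reversed(self.layers):
            de_dy = layer.back_prop(de_dy)
        return de_dy

    def train(self, x, y_target):
        y = self.forward_prop(x)
        # Squared-error gradient, assumed here purely for illustration.
        de_dy = 2.0 * (y - y_target)
        self.back_prop(de_dy)


# Quick smoke test of the scaffolding.
if __name__ == "__main__":
    model = ANN([Layer(4, 3), Layer(3, 2)])
    model.train(np.zeros(4), np.ones(2))
```

Nothing is learned yet; the point is only that sensitivities flow backward through the layers with the right shapes, ready for the activation function and weight updates we add in the next lessons.
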
My solution

Here is all the code we've written up to this point.
