
Artificial Neural Networks

This class gives a biological and historical overview of artificial neural networks and walks through the theoretical foundations of backpropagation and gradient descent, using a worked example in NumPy.
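For reference, the two ideas the class develops can be written compactly: gradient descent repeatedly steps the parameters θ against the gradient of the loss L, and backpropagation computes that gradient layer by layer via the chain rule. In symbols (with η denoting the learning rate, a notation choice for this note):

```latex
\theta \leftarrow \theta - \eta\,\nabla_{\theta} L(\theta),
\qquad
\frac{\partial L}{\partial W^{(l)}}
  = \frac{\partial L}{\partial a^{(l)}}\,
    \frac{\partial a^{(l)}}{\partial z^{(l)}}\,
    \frac{\partial z^{(l)}}{\partial W^{(l)}}
```

where z^(l) and a^(l) are the pre- and post-activation values at layer l.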

Notebook [source] [Colab]

As an additional exercise, this notebook runs the provided backpropagation code, visualizing neural activations and error at each step.

Visualizing backprop
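To make the mechanics concrete, here is a minimal sketch of backpropagation with gradient descent in NumPy. It assumes a one-hidden-layer sigmoid network trained on XOR with mean-squared-error loss and full-batch updates; the architecture, data, and hyperparameters are illustrative choices, not necessarily those in the class notebook.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic problem a linear model cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters for a 2-8-1 network (sizes chosen for this sketch).
W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

lr = 1.0  # learning rate (an assumed value)
for step in range(5000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)          # hidden activations
    y_hat = sigmoid(h @ W2 + b2)      # network output
    loss = np.mean((y_hat - y) ** 2)  # mean squared error

    # Backward pass: chain rule from the output back to the inputs.
    d_z2 = 2 * (y_hat - y) / len(X) * y_hat * (1 - y_hat)
    dW2 = h.T @ d_z2
    db2 = d_z2.sum(axis=0, keepdims=True)
    d_z1 = (d_z2 @ W2.T) * h * (1 - h)
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0, keepdims=True)

    # Gradient descent: step every parameter against its gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", loss)
print("predictions:", y_hat.round(2).ravel())
```

Each iteration performs exactly the two steps sketched above: a forward pass to evaluate the loss, then a backward pass that reuses the forward activations to accumulate gradients.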

Additional Resources

In class we present a Universal Approximation Theorem for networks with a single hidden layer and sigmoid activation functions. The slides for that proof are here: Universal Approximation Theorem
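For reference, a common form of the statement (following Cybenko's 1989 formulation; the version in the slides may differ in its details) is: let σ be any continuous sigmoidal function; then finite sums of the form

```latex
G(x) = \sum_{j=1}^{N} \alpha_j \, \sigma\!\left(w_j^{\top} x + \theta_j\right)
```

are dense in C(I_n), the continuous functions on the unit cube I_n = [0,1]^n. That is, for every f in C(I_n) and ε > 0 there is such a G with |G(x) − f(x)| < ε for all x in I_n.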

This online book has a nice interactive visualization of this property of neural networks.

The deep learning book is fully available online and contains many great examples. Notebook versions of those examples are available here.

Stanford CS229 Lectures 11 and 12 introduce ANNs, backpropagation, and gradient descent, with Lecture 12 going into more depth on how to resolve common training problems in ANNs.