Ok, so last time we introduced the feedforward neural network. We discussed how the input is fed forward through the network to produce the output, and the backpropagation algorithm for learning the weights on the edges.

Today we will begin by showing how the model can be expressed using matrix notation, under the assumption that the neural network is fully connected, that is, each neuron is connected to every neuron in the next layer.
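Before we get to the notation, here is a minimal sketch of why full connectivity matters: it lets a whole layer's worth of weighted sums collapse into a single matrix-vector product. This is just an illustration in NumPy with made-up layer sizes and a sigmoid activation; it is not the implementation we build later.

```python
import numpy as np

def sigmoid(z):
    # Elementwise logistic activation.
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes: 3 inputs feeding a fully connected layer of 4 neurons.
k, m = 3, 4
rng = np.random.default_rng(0)

W = rng.standard_normal((m, k))   # one row of weights per neuron in the next layer
b = rng.standard_normal((m, 1))   # one bias per neuron
x = rng.standard_normal((k, 1))   # input as a column vector

# Because the layer is fully connected, feeding x forward is just W @ x + b,
# followed by the activation applied elementwise.
a = sigmoid(W @ x + b)
print(a.shape)  # (4, 1)
```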

Once this is done, we will give a Python implementation and test it out.

**Matrix Notation For Neural Networks**

Most of this I learned from here.

In what follows, vectors are always thought of as *columns*, and so their transposes are rows.

So first off we have $latex x \in \mathbb{R}^k$, our *input vector*, and $latex y \in \mathbb{R}^m$, our *output vector*.
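As a quick illustration of the column convention (a minimal sketch with made-up dimensions $latex k = 3$ and $latex m = 2$, chosen only for the example):

```python
import numpy as np

k, m = 3, 2  # hypothetical input and output dimensions

x = np.array([[0.5], [-1.2], [3.0]])  # input vector x in R^k, stored as a k x 1 column
y = np.zeros((m, 1))                  # output vector y in R^m, also a column

print(x.shape, x.T.shape)  # (3, 1) (1, 3) -- the transpose of a column is a row
```

Keeping every vector as an explicit column makes the matrix formulas later line up with the code shapes directly.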

Our neural network $latex N$ will have $latex N$ layers $latex L_1, \dots$ …
