A single neuron takes in a bunch of floats from other neurons or directly from the input data, multiplies each by a weight, and sums them. It adds this total to a bias, passes it through the activation function, and outputs the result.

This is easier to state with code. Let ws be the weights, b the bias, as the inputs, and f the activation function. Then the output is:

f (sum (zipWith (*) ws as) + b)
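As a quick sanity check with made-up numbers (the function name and values here are just for illustration): with weights [0.5, -1], inputs [2, 3], bias 1, and the identity as activation, the neuron outputs 0.5·2 + (-1)·3 + 1 = -1.

```haskell
-- One neuron's output: weighted sum of inputs, plus bias,
-- passed through the activation function f.
output :: (Float -> Float) -> [Float] -> Float -> [Float] -> Float
output f ws b as = f (sum (zipWith (*) ws as) + b)

-- output id [0.5, -1] 1 [2, 3]  ==>  -1.0
```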

We choose the rectifier (ReLU) to be our activation function. This differs from Nielsen, who uses the sigmoid function, and it means we should initialize our biases differently.

relu = max 0

Like Nielsen, our network has an input layer of 28² = 784 raw numbers which feed into a hidden layer of 30 neurons, which in turn feeds into an output layer of 10 neurons. Each pair of adjacent layers has full mesh topology, namely, every node of one layer is connected to every node of the next.

The input values lie in [0,1] and indicate the darkness of a pixel.

The output neurons give nonnegative values: we’ll train the nth neuron to output at least 1 if the digit is n, and 0 otherwise.
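Under this scheme, one natural way to read off the network's guess is to take the index of the largest output. This helper is my own sketch, not part of the original code:

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Index of the largest output value: the network's best guess
-- at the digit. Pairs each output with its index, then keeps
-- the index of the maximum.
bestDigit :: [Float] -> Int
bestDigit outs = fst $ maximumBy (comparing snd) (zip [0..] outs)
```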

This scheme results in a peculiar cost function. If we had also chosen the sigmoid function, then our outputs would be bounded above by 1 and we could train our network to aim for exactly 1. But we’ve chosen relu, whose outputs can be arbitrarily large. Intuitively, I feel it makes sense to interpret anything that is at least 1 as a strong positive indication, and we should train for 0 otherwise. I don’t know what the literature recommends.
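One way to realize this interpretation in code (my own sketch, under the assumption that we train on cost derivatives rather than the cost itself) is a derivative that is zero whenever the target is 1 and the output already reaches it, and the plain difference otherwise:

```haskell
-- Derivative of the cost with respect to an output activation a,
-- given the desired value y (0 or 1). When y is 1 and a already
-- meets or exceeds it, there is nothing to correct, so we return 0.
dCost :: Float -> Float -> Float
dCost a y
  | y == 1 && a >= y = 0
  | otherwise        = a - y
```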

For each layer of neurons, we store the biases in a list of floats [Float]: the ith float is the bias for the ith neuron. Similarly, we store the weights in a two-dimensional matrix of floats [[Float]]: the ith row holds the weights of the inputs to the ith node. Since our neural network is a bunch of layers, we store it as a list of biases and weights for each layer: [([Float], [[Float]])].
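For example, a made-up single-layer network with 3 inputs and 2 neurons would look like this (the numbers are arbitrary):

```haskell
-- One layer: biases for 2 neurons, and a 2×3 weight matrix
-- (one row of 3 input weights per neuron).
tiny :: [([Float], [[Float]])]
tiny = [([1, 1], [[0.5, -0.5, 0.25], [0.1, 0.2, 0.3]])]
```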

We initialize all the biases to 1, and the weights to come from a normal distribution with mean 0 and standard deviation 0.01.
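The gauss helper used below isn’t shown here; one minimal way to write it (this particular definition is my assumption) is via the Box–Muller transform:

```haskell
import System.Random (randomIO)

-- Sample from a normal distribution with mean 0 and the given
-- standard deviation, using the Box-Muller transform on two
-- uniform random floats.
gauss :: Float -> IO Float
gauss stdev = do
  x1 <- randomIO
  x2 <- randomIO
  return $ stdev * sqrt (-2 * log x1) * cos (2 * pi * x2)
```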

import Control.Monad (replicateM, zipWithM)

newBrain :: [Int] -> IO [([Float], [[Float]])]
newBrain szs@(_:ts) = zip (flip replicate 1 <$> ts) <$>
  zipWithM (\m n -> replicateM n $ replicateM m $ gauss 0.01) szs ts

main = do
  putStrLn "3 inputs, a hidden layer of 4 neurons, and 2 output neurons:"
  newBrain [3, 4, 2] >>= print

We’ll want to hang on to the values before they are fed to the activation functions; these are called the weighted inputs. We compute an entire layer of weighted inputs from the previous layer as follows:

zLayer :: [Float] -> ([Float], [[Float]]) -> [Float]
zLayer as (bs, wvs) = zipWith (+) bs $ sum . zipWith (*) as <$> wvs

I’ve used wvs to suggest “weight vectors”, to remind us this is a list of rows of a matrix.
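To check zLayer on tiny made-up numbers (zLayer is repeated here so the snippet stands alone; the inputs are arbitrary):

```haskell
zLayer :: [Float] -> ([Float], [[Float]]) -> [Float]
zLayer as (bs, wvs) = zipWith (+) bs $ sum . zipWith (*) as <$> wvs

-- With inputs [1, 2], biases [0, 1], and weight rows [[3, 4], [5, 6]]:
--   neuron 0: 3*1 + 4*2 + 0 = 11
--   neuron 1: 5*1 + 6*2 + 1 = 18
example :: [Float]
example = zLayer [1, 2] ([0, 1], [[3, 4], [5, 6]])  -- [11.0, 18.0]
```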

We can run the activation function on the weighted inputs to compute the actual outputs with (relu <$>). A suitable fold takes us from the start of the neural network to the end: