In an artificial neural network, each weight of a neuron is a scalar $w_i$ multiplied with its corresponding input $x_i$. We can describe the weights of a neuron with an $n$-dimensional vector $\mathbf{w} = (w_1, \dots, w_n)$.
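The weighted sum of a neuron can be sketched as a dot product between the weight vector and the input vector (a minimal NumPy sketch; the values are made up for illustration):

```python
import numpy as np

# Illustrative weights and inputs for a single neuron with n = 3 inputs.
w = np.array([0.2, -0.5, 0.1])  # weight vector
x = np.array([1.0, 2.0, 3.0])   # input vector

# Each input is scaled by its weight; the neuron sums the results,
# which is exactly the dot product w . x.
z = np.dot(w, x)
print(z)  # 0.2*1.0 + (-0.5)*2.0 + 0.1*3.0 = -0.5
```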
How do we learn the weights and biases of a neural network?
- We make a prediction $\hat{y}$ for some input data $x$, with a known correct output $y$.
- We then compare the correct output $y$ with our predicted output $\hat{y}$ to compute a loss.
- We adjust the weights/biases to make the prediction closer to the ground truth, i.e., we optimise to minimise error.
- Then we repeat until we have an acceptable level of error.
The first two steps are called the forward pass, which is used for both training and inference. The last two are called the backward pass, which is used only when training our model.
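The four steps above can be sketched as a gradient-descent loop for a single linear neuron. This is a toy example with made-up data and a mean-squared-error loss; real networks compute the backward pass automatically with libraries such as PyTorch:

```python
import numpy as np

# Toy data: the target function is y = 2x + 1 (made up for illustration).
X = np.array([0.0, 1.0, 2.0, 3.0])
Y = 2.0 * X + 1.0

w, b = 0.0, 0.0  # weight and bias, initialised to zero
lr = 0.05        # learning rate

for _ in range(2000):
    # Forward pass: make a prediction and measure the error.
    pred = w * X + b
    err = pred - Y
    # Backward pass: gradients of the mean squared error w.r.t. w and b.
    grad_w = 2.0 * np.mean(err * X)
    grad_b = 2.0 * np.mean(err)
    # Update step: adjust the parameters against the gradient,
    # then repeat until the error is acceptable.
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approaches w = 2, b = 1
```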
We also have a term $b$, the bias. This is a weight that is learned with no input, i.e., a weight whose input is fixed at $1$. This is helpful for problems where the solution doesn’t pass through the origin.
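One way to see the bias as "a weight with a fixed input of 1" is to append a constant 1 to the input vector and fold the bias into the weight vector (a sketch with illustrative values):

```python
import numpy as np

w = np.array([0.2, -0.5])  # weights
b = 0.3                    # bias
x = np.array([1.0, 2.0])   # input

# Standard form: weighted sum plus bias.
z1 = np.dot(w, x) + b

# Equivalent form: treat b as one more weight whose input is always 1.
w_aug = np.append(w, b)    # [0.2, -0.5, 0.3]
x_aug = np.append(x, 1.0)  # [1.0,  2.0, 1.0]
z2 = np.dot(w_aug, x_aug)

print(np.isclose(z1, z2))  # True: both give the same weighted sum
```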