In machine learning, the loss (or error) function measures how far a model’s predictions deviate from the data. It gives us a way to quantify a model’s performance and provides the signal we optimise when adjusting the network’s weights.

A large loss means the network’s prediction differs significantly from the ground truth; a small loss means it closely matches it. During training we compute the loss over the training samples, and we periodically evaluate the same loss on a held-out validation set. If the validation loss begins to increase while the training loss keeps falling, we’ve overtrained: the network is starting to memorise the training data rather than generalise.
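As a minimal sketch of this idea (using PyTorch’s nn.MSELoss and made-up tensors purely for illustration), a prediction far from the ground truth produces a much larger loss value than one close to it:

import torch
import torch.nn as nn

mse = nn.MSELoss()                          # mean squared error loss
target = torch.tensor([1.0, 2.0, 3.0])      # ground truth

close_pred = torch.tensor([1.1, 2.0, 2.9])  # near the ground truth
far_pred = torch.tensor([4.0, -1.0, 0.0])   # far from the ground truth

print(mse(close_pred, target))              # small loss, roughly 0.0067
print(mse(far_pred, target))                # large loss, 9.0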

Types
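The loss function is chosen to match the task: mean squared error or L1 loss are standard for regression, and cross-entropy (or binary cross-entropy) for classification. As a short, hedged sketch, these are all available as classes in torch.nn (the example tensors below are made up for illustration):

import torch
import torch.nn as nn

# Regression losses
mse = nn.MSELoss()            # mean squared error
l1 = nn.L1Loss()              # mean absolute error

# Classification losses
ce = nn.CrossEntropyLoss()    # multi-class, expects raw logits and class indices
bce = nn.BCEWithLogitsLoss()  # binary, expects raw logits and 0/1 targets

logits = torch.randn(4, 3)              # 4 samples, 3 classes (raw logits)
labels = torch.tensor([0, 2, 1, 2])     # ground-truth class indices
print(ce(logits, labels))               # scalar loss tensor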

In code

In our neural network training code, we define a loss function as follows:

import torch.nn as nn

criterion = nn.CrossEntropyLoss()  # or your choice of loss function

PyTorch provides several loss functions in the torch.nn module. While training our neural network, we need to perform two important steps: first compute the loss, then obtain the gradients via backpropagation:

loss = criterion(out, actual)  # compare the model's output with the ground-truth labels
loss.backward()                # backpropagate to compute gradients for the weights
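For context, here is a minimal sketch of how these two steps typically sit inside a full training step; the model, optimizer, and data below are placeholders invented for illustration, not part of the original text:

import torch
import torch.nn as nn

model = nn.Linear(10, 3)  # placeholder model
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def train_step(inputs, actual):
    optimizer.zero_grad()          # clear gradients from the previous step
    out = model(inputs)            # forward pass
    loss = criterion(out, actual)  # compute the loss
    loss.backward()                # backprop: obtain gradients
    optimizer.step()               # update the weights using those gradients
    return loss.item()

inputs = torch.randn(8, 10)          # made-up batch of 8 samples
labels = torch.randint(0, 3, (8,))   # made-up class labels
print(train_step(inputs, labels))

Computing the same loss on the held-out validation set each epoch (inside torch.no_grad(), without calling backward()) is what lets us spot the rising validation loss described above.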

See also