The opposite of the convolution is the transposed convolution (different from an inverse convolution). They work with similar parameters, but instead map from 1 pixel to $k \times k$ pixels, and kernels are learned just like regular convolutional kernels. The output size is

$$o = (i - 1)s - 2p + k + p_{\text{out}}$$

where $p_{\text{out}}$ is the output padding. The computation is:
- For each pixel of the input image:
  - Multiply each value of the kernel (i.e., a $k \times k$ kernel) with the input pixel to get a weighted $k \times k$ kernel.
  - Insert it into the output image at the location corresponding to the input pixel, offset by the stride.
- If the outputs overlap, sum them.
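To make this loop concrete, here is a minimal NumPy sketch (single channel, no padding; the function name and values are my own):

```python
import numpy as np

def conv_transpose2d(x, kernel, stride=1):
    """Naive single-channel transposed convolution (no padding)."""
    h_in, w_in = x.shape
    k = kernel.shape[0]
    # Output shape for p = 0: o = (i - 1) * s + k
    h_out = (h_in - 1) * stride + k
    w_out = (w_in - 1) * stride + k
    out = np.zeros((h_out, w_out))
    for i in range(h_in):
        for j in range(w_in):
            # Weight the whole kernel by the input pixel, then add it
            # into the output; overlapping regions are summed.
            out[i * stride:i * stride + k,
                j * stride:j * stride + k] += x[i, j] * kernel
    return out

x = np.array([[1., 2.],
              [3., 4.]])
print(conv_transpose2d(x, np.ones((3, 3)), stride=2).shape)  # (5, 5)
```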
For padding, the effect is the opposite of the regular convolution: after the output is computed, `padding` rows and columns around the perimeter are removed. Output padding is used because, depending on the geometric parameters, it is otherwise ambiguous what the output shape should be: with a stride greater than 1, several input sizes of the corresponding forward convolution map to the same output size, and output padding selects among them.
Increasing the stride increases the upsampling effect of the transposed convolution: the weighted kernels are placed `stride` pixels apart in the output, so a larger stride produces a higher output resolution.
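A quick way to verify these shape rules is to print the output sizes for a few settings (a sketch; the parameter values are arbitrary):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 8, 8)  # (batch, channels, height, width)
for s, p, op in [(1, 0, 0), (2, 0, 0), (2, 1, 0), (2, 1, 1)]:
    layer = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=s,
                               padding=p, output_padding=op)
    # Output size follows o = (i - 1)s - 2p + k + p_out
    print(s, p, op, tuple(layer(x).shape[2:]))
# (1, 0, 0) -> (10, 10); (2, 0, 0) -> (17, 17)
# (2, 1, 0) -> (15, 15); (2, 1, 1) -> (16, 16)
```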
A transposed convolution layer with the same specifications as a convolution layer (input and output channels, kernel size, stride, etc.) will have the reverse effect on the shape.
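For example (a sketch with arbitrary sizes), a `Conv2d` that halves a 32x32 input and a `ConvTranspose2d` with matching hyperparameters that restores it:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)
deconv = nn.ConvTranspose2d(16, 3, kernel_size=3, stride=2,
                            padding=1, output_padding=1)

x = torch.randn(1, 3, 32, 32)
down = conv(x)     # torch.Size([1, 16, 16, 16])
up = deconv(down)  # torch.Size([1, 3, 32, 32]): spatial shape restored
```

The `output_padding=1` here resolves the ambiguity mentioned above: with stride 2, both 31x31 and 32x32 inputs produce a 16x16 output in the forward direction, so the transposed layer needs to be told which size to reproduce.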
In code
In PyTorch:
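A minimal single-layer example uses `nn.ConvTranspose2d` (a sketch; the channel counts and sizes are arbitrary):

```python
import torch
import torch.nn as nn

# 16 input channels -> 8 output channels, 3x3 kernel;
# stride 2 doubles the spatial resolution.
deconv = nn.ConvTranspose2d(in_channels=16, out_channels=8,
                            kernel_size=3, stride=2,
                            padding=1, output_padding=1)

x = torch.randn(1, 16, 14, 14)
print(deconv(x).shape)  # torch.Size([1, 8, 28, 28])
```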
For a more robust example, here’s a sample encoder/decoder network in a convolutional autoencoder:
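A sketch of such a network, assuming 28x28 single-channel inputs (the architecture details here are illustrative, not a canonical design):

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: each stride-2 convolution halves the spatial size.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
        )
        # Decoder: transposed convolutions mirror the encoder,
        # doubling the spatial size at each step.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2,
                               padding=1, output_padding=1),         # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2,
                               padding=1, output_padding=1),         # 14x14 -> 28x28
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
x = torch.randn(8, 1, 28, 28)  # e.g. a batch of MNIST-sized images
print(model(x).shape)          # torch.Size([8, 1, 28, 28])
```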