In deep learning, a feed-forward network (or multilayer perceptron, MLP) is a type of neural network that aims to approximate some function f*. For example, for a classifier y = f*(x) that maps an input x to a category y, an MLP defines a mapping y = f(x; θ), where θ are the parameters that the model learns to best approximate f*.
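The mapping y = f(x; θ) can be sketched as a small numpy forward pass. This is a minimal illustration, assuming one hidden layer with a ReLU activation; the layer sizes, initialization scale, and function names are illustrative, not a prescribed architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(n_in, n_hidden, n_out):
    # theta: the parameters the model would learn during training
    return {
        "W1": rng.normal(scale=0.1, size=(n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(scale=0.1, size=(n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def mlp_forward(x, theta):
    # Information flows strictly from input to output: no feedback loops.
    h = np.maximum(0.0, x @ theta["W1"] + theta["b1"])  # hidden layer, ReLU
    return h @ theta["W2"] + theta["b2"]                # output layer

theta = init_params(n_in=4, n_hidden=8, n_out=2)
x = rng.normal(size=(3, 4))     # a batch of 3 example inputs
y = mlp_forward(x, theta)
print(y.shape)                  # (3, 2): one 2-dimensional output per input
```

In practice θ would be adjusted by gradient descent so that f(x; θ) matches f* on training data; the forward pass above is the part that stays fixed.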
Feed-forward models get their name because information flows from the input to the output without any feedback connections. When feed-forward networks are extended to include feedback connections, they are called recurrent neural networks.
Note that MLPs are often limited because they require the input to be pre-processed into a flat, fixed-size vector. This makes them poorly suited to vision tasks, where convolutional neural networks operate on the image grid directly and don't require this step.
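The pre-processing step above can be made concrete: an MLP expects a flat vector, so a 2-D image must be flattened first, discarding its spatial layout. A minimal sketch, assuming a made-up 28x28 grayscale image:

```python
import numpy as np

# Hypothetical 28x28 "image" used only to show the shapes involved.
image = np.arange(28 * 28, dtype=np.float64).reshape(28, 28)

# An MLP sees only this flat vector; neighboring pixels in the image
# are no longer adjacent in any 2-D sense, so spatial structure is lost.
flat = image.reshape(-1)
print(image.shape, flat.shape)  # (28, 28) (784,)
```

A convolutional layer, by contrast, would consume the (28, 28) array directly and exploit the neighborhood structure that flattening throws away.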