GRUs (gated recurrent units) are an architectural variation of recurrent neural networks. They offer performance similar to LSTMs while being computationally cheaper, since they use fewer gates and therefore fewer parameters:
- They combine the forget and input gates into a single update gate.
- They merge the cell state and hidden state into one hidden state (see the update equations after this list).
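For reference, the standard GRU update (Cho et al., 2014) can be written as follows, where $x_t$ is the input at step $t$, $h_t$ the hidden state, and $z_t$, $r_t$ the update and reset gates:

$$
\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) \\
r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) \\
\tilde{h}_t &= \tanh\!\big(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\big) \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t
\end{aligned}
$$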
In code
A GRU layer can be specified with:
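A minimal sketch, assuming the Keras API (the framework is not named in the text, and the layer sizes are illustrative):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Sketch of a model with a single GRU layer; hyperparameters are illustrative.
model = keras.Sequential([
    layers.GRU(32, input_shape=(None, 16)),  # 32 units; variable-length sequences of 16-dim vectors
    layers.Dense(1),                          # e.g. a scalar output head
])
model.summary()
```

To stack another recurrent layer on top of the GRU, you would pass `return_sequences=True` so the layer emits its full output sequence rather than only the last hidden state.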