In the LRU replacement algorithm, we remove the least recently used (LRU) block. LRU performs well for many access patterns, but it can perform poorly on others: for example, repeatedly sweeping through an array that is slightly too large to fit in the cache misses on every access.
The LRU algorithm tracks references to blocks with a counter per block. On a hit, the referenced block's counter is reset to 0 and the counters of the more recently used blocks are incremented. On a miss, all counters are incremented. When a line needs to be replaced, the line whose counter has the maximum value (the least recently used) is removed and the new block takes its place.
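The counter scheme above can be sketched in software for a single cache set. This is an illustrative model, not a hardware description; the class name and method names are my own. On a hit, the hit block resets to 0 and more recently used blocks age; on a miss, everything ages and the block with the largest counter is evicted.

```python
class CounterLRUSet:
    """Model of counter-based LRU for one cache set (illustrative sketch)."""

    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = {}  # tag -> age counter (0 = most recently used)

    def access(self, tag):
        if tag in self.lines:  # hit
            age = self.lines[tag]
            # Blocks that were more recent than the hit block get older.
            for t in self.lines:
                if self.lines[t] < age:
                    self.lines[t] += 1
            self.lines[tag] = 0
            return "hit"
        # Miss: age every resident block.
        for t in self.lines:
            self.lines[t] += 1
        # If the set is full, evict the block with the largest counter.
        if len(self.lines) >= self.num_lines:
            oldest = max(self.lines, key=self.lines.get)
            del self.lines[oldest]
        self.lines[tag] = 0
        return "miss"
```

Note the linear scan over all lines on every access and the `max` search on eviction; this is exactly the bookkeeping cost discussed next.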
On a replacement, it takes time to search through the blocks to find the one with the largest counter value (the oldest).
Performance problems with the LRU algorithm can be mitigated by adding a small amount of randomness to the choice of which block to remove. Taking that idea to its extreme, a simpler algorithm is to remove a block chosen uniformly at random, which turns out to be surprisingly effective.
In software:
- create a doubly linked list of pages
- on each page reference, move that page to the front of the list
- for replacement, remove the page at the back of the list
- this requires 6 pointer updates for each page reference (so we need to read from memory)
- which creates a high-contention bottleneck on multiprocessors
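The list scheme above can be sketched with Python's `OrderedDict`, which maintains a doubly linked list of entries internally, so `move_to_end` is the "move to front" step and `popitem(last=False)` is the "remove from back" step (class and method names here are my own):

```python
from collections import OrderedDict

class LRUPageTable:
    """Sketch of software LRU via a doubly linked list (OrderedDict)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # least recently used first, newest last

    def reference(self, page):
        """Record a reference; return the evicted page, or None."""
        if page in self.pages:
            self.pages.move_to_end(page)  # page is now most recently used
            return None
        evicted = None
        if len(self.pages) >= self.capacity:
            # Remove from the LRU end of the list.
            evicted, _ = self.pages.popitem(last=False)
        self.pages[page] = True
        return evicted
```

The pointer updates are hidden inside `OrderedDict`, but they still happen on every reference, which is why this is too expensive to run on each memory access in a real system.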
So exact LRU is not really ideal in practice; instead, we can approximate LRU with:
- the clock algorithm, and
- least frequently used (LFU), 2Q, and adaptive replacement cache (ARC)
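As one example of an approximation, here is a minimal sketch of the clock algorithm (names are my own): each frame has a reference bit, a hit just sets the bit, and on a miss a "hand" sweeps circularly, clearing set bits, until it finds a frame with a clear bit to replace.

```python
class Clock:
    """Minimal sketch of the clock (second-chance) algorithm."""

    def __init__(self, num_frames):
        self.frames = [None] * num_frames  # page held by each frame
        self.ref_bit = [0] * num_frames    # one reference bit per frame
        self.hand = 0                      # clock hand position

    def access(self, page):
        if page in self.frames:
            # Hit: just set the reference bit (a "second chance").
            self.ref_bit[self.frames.index(page)] = 1
            return "hit"
        # Miss: sweep the hand, clearing set bits, until a clear bit is found.
        while self.ref_bit[self.hand] == 1:
            self.ref_bit[self.hand] = 0
            self.hand = (self.hand + 1) % len(self.frames)
        self.frames[self.hand] = page      # replace the victim frame
        self.ref_bit[self.hand] = 1
        self.hand = (self.hand + 1) % len(self.frames)
        return "miss"
```

Unlike exact LRU, a hit costs only a single bit write with no list manipulation, so the expensive work happens only on misses.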