Cache memory needs a way to determine where blocks from main memory are placed in the cache. This is done via a mapping function (much like a hash function with a smaller codomain).

A replacement algorithm is used to determine which cache line to evict when a new block must be brought in and no suitable line is free.

Direct mapping

The most rudimentary example of this is direct mapping, where each main memory block maps onto one line of an m-line cache by taking its block number modulo m (i.e., there's just one specific cache line for each block). This isn't flexible at all, but its replacement algorithm is trivial: the old contents of that line are simply overwritten.

Because of the nature of the modulo, only the least significant bits of the block number are needed to select which cache line to write into. The more significant bits are all that's needed to uniquely identify which memory block occupies a line, so they are stored as the tag bits.
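As a quick sketch of this address split, assuming an illustrative geometry (word-addressed memory, 4-word lines, a 256-line cache; none of these sizes come from above):

```python
# Splitting a word address for a direct-mapped cache.
# Assumed (illustrative) geometry: 4-word lines, 256-line cache.
WORDS_PER_LINE = 4      # 2 offset bits
NUM_LINES = 256         # 8 line-index bits

def split_address(addr):
    """Return the (tag, line, offset) fields of a word address."""
    offset = addr % WORDS_PER_LINE
    block = addr // WORDS_PER_LINE    # main-memory block number
    line = block % NUM_LINES          # the modulo picks the cache line
    tag = block // NUM_LINES          # remaining high bits identify the block
    return tag, line, offset

print(split_address(0b1101_00000011_10))   # -> (13, 3, 2)
```

Because each field is a power-of-2 size, the divisions and modulos are just bit slices of the address in hardware.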

One key benefit is that we know exactly where to look to see if the access is a hit or miss. The cache looks at the tag bits stored in the line selected by the address. If they equal the tag bits of the requested memory address (and the line is valid), it's a hit and the cache sends the data back to the processor. Otherwise, it's a miss and the block must be fetched from main memory, possibly evicting whatever was there already. The new block is brought into the cache, and D = 0, V = 1, and the Tag is set from the memory address.
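The hit/miss check above can be sketched in software (hardware does the tag compare with a single comparator). The line structure with V, D, and tag fields follows the description above; the sizes are illustrative assumptions:

```python
# Sketch of a direct-mapped read, per the hit/miss check described above.
# Assumed geometry: word-addressed memory, 4-word lines, 256 lines.
WORDS_PER_LINE = 4
NUM_LINES = 256
cache = [{"V": 0, "D": 0, "tag": None, "data": None} for _ in range(NUM_LINES)]

def read(addr, main_memory):
    block = addr // WORDS_PER_LINE
    line = cache[block % NUM_LINES]       # the one line this block can occupy
    tag = block // NUM_LINES
    if line["V"] == 1 and line["tag"] == tag:
        return line["data"][addr % WORDS_PER_LINE]    # hit
    # Miss: fetch the whole block and reset the line's state.
    start = block * WORDS_PER_LINE
    line.update(V=1, D=0, tag=tag,
                data=main_memory[start:start + WORDS_PER_LINE])
    return line["data"][addr % WORDS_PER_LINE]
```

The first `read` of an address misses and fills the line; a second read of the same block hits without touching main memory.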

If we read in a memory word with an offset of 1, the cache will also grab the rest of the block (including the word at offset 0), because transfers between main memory and the cache always happen at block granularity.

Associative mapping

The most flexible method is associative mapping, where a main memory block can be placed into any cache line. Fully associative mapping reduces cache collisions, but requires comparing the tag against every line to find the right one, meaning it needs considerably more complex search hardware and can be potentially slower.
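A software sketch of that search makes the cost obvious: with no index bits to narrow things down, every line is a candidate. (Hardware does this with one comparator per line, in parallel; the linear scan here is illustrative only.)

```python
# Fully associative lookup: the tag must be checked against every line.
def find_line(cache_lines, tag):
    for line in cache_lines:
        if line["V"] == 1 and line["tag"] == tag:
            return line      # hit
    return None              # miss: any line may be chosen for the new block
```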

A k-way set associative cache is a sweet spot between a fully associative and a direct-mapped cache. The cache is divided into fixed-size sets, where each set can hold k cache lines (typically around 16 or fewer). Each memory address is mapped to a specific set depending on its address bits.

For example, a 2-way set associative cache allows a main memory block to go into one of two places, i.e., sets of size 2.

Okay, some computations. To compute the number of lines: we do cache size / line size. To compute the number of sets: number of lines / lines per set. Keep these as powers of 2, because the exponents indicate how many address bits are reserved for each purpose. The least significant bits select the memory word within the line; then, in increasing significance, the next bits select the set (or the line, for a direct-mapped cache); and the remaining most significant bits are the tag.
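A worked instance of these computations, for an assumed geometry (64 KiB byte-addressed cache, 16-byte lines, 4-way set associative, 32-bit addresses; the numbers are an example, not given above):

```python
import math

# Assumed example geometry.
CACHE_SIZE = 64 * 1024   # 64 KiB
LINE_SIZE = 16           # bytes per line
WAYS = 4                 # lines per set
ADDR_BITS = 32

num_lines = CACHE_SIZE // LINE_SIZE            # cache size / line size
num_sets = num_lines // WAYS                   # lines / lines per set

offset_bits = int(math.log2(LINE_SIZE))        # byte within the line
set_bits = int(math.log2(num_sets))            # which set
tag_bits = ADDR_BITS - set_bits - offset_bits  # everything left over

print(num_lines, num_sets)                 # -> 4096 1024
print(offset_bits, set_bits, tag_bits)     # -> 4 10 18
```

Note how doubling the associativity (WAYS) halves the number of sets, trading one set bit for one tag bit while the cache size stays fixed.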

We want a number of sets that is a power of 2. A memory block can then go into any line in its set. Whenever we search for a tag value, we look at all the lines in the given set. If one matches the tag and has V = 1, we get a hit. Otherwise we get a miss and we bring the block into any empty line in the set (or evict one, per the replacement algorithm, if none is empty).
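Putting the pieces together, a set-associative lookup indexes into one set and then searches only its k lines, a middle ground between the two extremes above. A sketch, assuming `cache` is a list of sets, each a list of line dicts (illustrative structure):

```python
# Set-associative lookup: index into one set, then search only its lines.
def lookup(cache, addr, words_per_line, num_sets):
    block = addr // words_per_line
    set_index = block % num_sets      # the set bits pick the set
    tag = block // num_sets           # remaining high bits are the tag
    for line in cache[set_index]:     # search just this set's k lines
        if line["V"] == 1 and line["tag"] == tag:
            return line               # hit
    return None                       # miss: fill an empty line in this set

# Tiny example: 2 sets, 2-way associative, 4-word lines.
tiny = [[{"V": 1, "tag": 3}, {"V": 0, "tag": None}],
        [{"V": 0, "tag": None}, {"V": 0, "tag": None}]]
```

With `words_per_line=4` and `num_sets=2`, address 24 is block 6, which lands in set 0 with tag 3, a hit in the example; address 28 is block 7, set 1, a miss.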