Modern computer memory hierarchy consists of several levels:

register -> cache -> main memory / DRAM -> SSD / HDD

Usually, registers are the fastest, the smallest, and the most expensive per byte; SSD / HDD is the slowest, the largest, and the cheapest per byte.

The CPU operates on registers directly, and it exchanges data with main memory and SSD / HDD.

Why Do We Need Cache?

The simple answer is: without a cache, the performance of the computer system would degrade drastically. The reasons are threefold:

  1. Main memory has limited bandwidth. For example, LPDDR5 supports a data rate of up to 6400 Mb/s per pin, and LPDDR4 up to 4266 Mb/s. Assuming a CPU runs at 3 GHz, its registers are 32 bits wide, and it performs a memory access every single cycle, the required data rate is 3 GHz × 32 b = 96000 Mb/s, exceeding what LPDDR5 can offer.
  2. Main memory access latency is long. The time elapsed between the initiation and completion of a single access can take hundreds of CPU cycles.
  3. Main memory is periodically unavailable. To retain data, DRAM requires regular refresh, and during a refresh the memory array is inaccessible.
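The bandwidth gap in point 1 can be checked with back-of-envelope arithmetic. A minimal sketch, using the figures assumed above (3 GHz CPU, one 32-bit access per cycle, LPDDR5 at 6400 Mb/s):

```python
# Back-of-envelope check of the bandwidth gap described above.
# Assumed figures from the text: 3 GHz CPU, one 32-bit register-width
# access per cycle, LPDDR5 peak data rate of 6400 Mb/s per pin.
cpu_freq_hz = 3_000_000_000
access_width_bits = 32

# Required data rate in megabits per second.
required_mbps = cpu_freq_hz * access_width_bits / 1_000_000
lpddr5_mbps = 6400

print(f"required: {required_mbps:.0f} Mb/s")   # 96000 Mb/s
print(f"LPDDR5:   {lpddr5_mbps} Mb/s")
print(f"CPU demand exceeds LPDDR5: {required_mbps > lpddr5_mbps}")
```

In practice a DRAM interface is many pins wide, but the comparison still illustrates why a single-channel memory cannot feed a core that touches memory every cycle.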

The cache is usually built from SRAM and acts as a data buffer for main memory. It is faster than DRAM, so it can keep up with the CPU frequency, and it is larger than the register file, so it provides more storage.
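To make the "data buffer" idea concrete, here is a minimal sketch of a direct-mapped cache lookup. The sizes (16 sets, 64-byte lines) are illustrative assumptions, not from the article:

```python
# Toy direct-mapped cache: each memory line maps to exactly one set.
# Sizes are illustrative assumptions (16 sets, 64-byte lines).
NUM_SETS = 16
LINE_SIZE = 64

class DirectMappedCache:
    def __init__(self):
        # One tag slot per set; None means the set holds no valid line.
        self.tags = [None] * NUM_SETS

    def access(self, addr):
        """Return True on a hit; on a miss, fill the line and return False."""
        line_addr = addr // LINE_SIZE      # which memory line
        index = line_addr % NUM_SETS       # which set it maps to
        tag = line_addr // NUM_SETS        # disambiguates lines in a set
        if self.tags[index] == tag:
            return True
        self.tags[index] = tag             # fill on miss (evicts old line)
        return False

cache = DirectMappedCache()
print(cache.access(0x1000))  # miss: first touch of this line
print(cache.access(0x1004))  # hit: same 64-byte line
```

Real caches add valid/dirty bits, associativity, and replacement policies, but the tag/index/offset split shown here is the core of every hardware cache lookup.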

Temporal Locality and Spatial Locality

Why can the cache satisfy the CPU's needs with such limited storage? Because the cache takes advantage of temporal locality and spatial locality.

Temporal locality means that an item accessed once is likely to be accessed again soon. Consider a CPU executing a for loop: the instructions inside the loop are fetched many times. If the whole loop can be buffered in the cache, the CPU only needs to fetch the instructions from main memory the first time.
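The for-loop case above can be simulated with a toy cache that simply remembers which addresses it has seen. The four instruction addresses are hypothetical:

```python
# Toy simulation of temporal locality: a 4-instruction loop body
# executed 100 times, with a cache large enough to hold the whole loop.
cache = set()
hits = misses = 0

loop_body = [0x100, 0x104, 0x108, 0x10C]  # hypothetical instruction addresses

for _ in range(100):          # 100 loop iterations
    for addr in loop_body:    # fetch each instruction in the body
        if addr in cache:
            hits += 1
        else:
            misses += 1       # only the very first iteration misses
            cache.add(addr)

print(f"misses: {misses}, hits: {hits}")  # 4 misses, 396 hits
```

Only 4 of the 400 fetches reach main memory; the other 99 iterations are served entirely from the cache.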

Spatial locality means that items near an accessed item are likely to be accessed in the near future. Consider a CPU executing straight-line code with no branches or jumps: instructions are fetched sequentially. With a cache, one memory access brings in a whole line of neighboring instructions, so the CPU does not need a separate main memory access for each one.
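The effect can be counted with another small simulation. Assuming (illustratively) 64-byte cache lines and 4-byte instructions, each line fill from main memory covers 16 consecutive instruction fetches:

```python
# Toy simulation of spatial locality: sequential 4-byte instruction
# fetches through a cache with 64-byte lines (illustrative sizes).
LINE_SIZE = 64
fetched_lines = set()
memory_accesses = 0

for pc in range(0, 4096, 4):       # straight-line code, no branches
    line = pc // LINE_SIZE         # which cache line this fetch falls in
    if line not in fetched_lines:
        fetched_lines.add(line)
        memory_accesses += 1       # one main memory access fills the line

print(f"{memory_accesses} line fills for {4096 // 4} instruction fetches")
```

1024 instruction fetches trigger only 64 main memory accesses; the other 960 fetches hit lines already brought in by a neighbor.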

Conclusion

To some extent, registers are a “cache” for the cache, the cache is a “cache” for main memory, and main memory is a “cache” for SSD / HDD. The concept of caching is used throughout computer systems, but the management between different levels differs:

  1. Between registers and main memory, data movement is managed by the compiler or the assembly language programmer
  2. Between cache and main memory, data movement is managed by hardware
  3. Between main memory and SSD / HDD, file management is done by programmers, while virtual memory is managed by both hardware and the operating system
