Based on whether the index and tag bits use physical address or virtual address, cache can be categorized as:

Physically Indexed, Physically Tagged Cache / PIPT Cache
Physically Indexed, Virtually Tagged Cache / PIVT Cache
Virtually Indexed, Physically Tagged Cache / VIPT Cache
Virtually Indexed, Virtually Tagged Cache / VIVT Cache

Problems Using Virtual Address in Cache

The first problem is homonyms: different physical blocks can share the same virtual address. This is the result of either context switches or explicit remapping of virtual pages. Possible solutions include:

  1. Flush / purge cache on every context switch
  2. Include a process identifier (PID / ASID) as part of the virtual address tag
  3. Flush / purge cache or relevant blocks during remapping
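Solution 2 can be sketched in software: the idea is to make the process / address-space ID part of the cache tag, so that identical virtual addresses from different processes never match each other's lines. A minimal Python simulation (the class, names, and 64-byte block size are all illustrative, not a real hardware interface):

```python
# Sketch: avoiding homonyms by making an address-space ID (ASID) part of
# the cache tag. Purely illustrative -- not a real hardware interface.

BLOCK_BITS = 6  # 64-byte blocks (assumed geometry)

class TinyVIVTCache:
    def __init__(self):
        self.lines = {}  # (asid, virtual block number) -> data

    def fill(self, asid, vaddr, data):
        self.lines[(asid, vaddr >> BLOCK_BITS)] = data

    def lookup(self, asid, vaddr):
        # Without the ASID in the key, processes A and B below would
        # collide on the same virtual address (a homonym).
        return self.lines.get((asid, vaddr >> BLOCK_BITS))

cache = TinyVIVTCache()
cache.fill(asid=1, vaddr=0x1000, data="block of process A")
cache.fill(asid=2, vaddr=0x1000, data="block of process B")
assert cache.lookup(1, 0x1000) == "block of process A"
assert cache.lookup(2, 0x1000) == "block of process B"
```

Each process sees its own copy even though both use virtual address 0x1000, which is exactly why tagging with an ASID removes the need to flush on every context switch.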

The second problem is aliasing (also called synonyms): a single physical block has multiple virtual addresses. Such a physical block may be cached in several places at once despite referring to the same memory, causing coherence issues when one copy is written. This is typically the result of memory shared among several processes. Possible solutions include:

  1. Flush / purge cache on every context switch
  2. Hardware ensures that only one copy of a block can be present in the cache. For example:
    • Search all possible locations and flush / purge copies
    • Use virtual L1 cache and inclusive physical L2 cache; “back-pointer” in L2 can be used to invalidate L1
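To make the aliasing problem concrete, here is a small Python sketch (the page size, mappings, and addresses are made up) in which two virtual addresses map to the same physical block yet occupy separate entries in a virtually indexed cache:

```python
# Sketch: aliasing in a virtually indexed cache. Two virtual pages map
# to one physical page, so a write through one alias is invisible to a
# lookup through the other. All numbers are illustrative.

PAGE = 4096
cache = {}  # virtual address -> cached value (a virtually indexed cache)

def vaddr_to_paddr(vaddr, mapping):
    page, offset = vaddr // PAGE, vaddr % PAGE
    return mapping[page] * PAGE + offset

# Virtual pages 0 and 5 both map to physical page 3 (shared memory).
mapping = {0: 3, 5: 3}

# A write through one alias lands in the cache...
cache[0x0000] = "new value"
# ...but a lookup through the other alias misses that entry entirely,
assert 0x5000 not in cache
# even though both virtual addresses name the same physical byte:
assert vaddr_to_paddr(0x0000, mapping) == vaddr_to_paddr(0x5000, mapping)
```

This is the incoherence that the hardware schemes above (searching all candidate locations, or using an inclusive L2 with back-pointers) are designed to prevent.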

The third problem is I/O: DMA uses physical addresses and may read or write blocks that are currently cached. Possible solutions include:

  1. OS ensures that blocks are not cached
  2. Use physical address directory for snooping
  3. Use inclusive physical L2 cache

PIPT Cache

In PIPT cache, both the index and tag bits come from the physical address. PIPT cache is simple to construct, and it avoids both the homonym and aliasing problems.

However, PIPT cache has a performance cost: address translation must complete before the cache lookup can begin, adding TLB latency to every access. As a result, PIPT is rarely practical where lookup latency is critical, such as in first-level caches.
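For reference, the tag / index / offset decomposition a PIPT cache performs on a physical address can be sketched as follows (the 32 KiB direct-mapped geometry with 64-byte blocks is an assumed example, not a specific machine):

```python
# Sketch: splitting a physical address into tag / index / offset for a
# direct-mapped PIPT cache. Geometry is illustrative: 32 KiB, 64 B blocks.

BLOCK_SIZE = 64
NUM_SETS = 32 * 1024 // BLOCK_SIZE           # 512 sets (direct-mapped)
OFFSET_BITS = BLOCK_SIZE.bit_length() - 1    # 6 bits of block offset
INDEX_BITS = NUM_SETS.bit_length() - 1       # 9 bits of set index

def split(paddr):
    offset = paddr & (BLOCK_SIZE - 1)
    index = (paddr >> OFFSET_BITS) & (NUM_SETS - 1)
    tag = paddr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

tag, index, offset = split(0x12345678)
# -> tag 0x2468, set index 0x159, byte offset 0x38
```

The key point for latency is that every field here comes from the *physical* address, so none of this can start until the TLB has produced it.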

PIVT Cache

In PIVT cache, the index bits come from the physical address while the tag bits come from the virtual address. As with PIPT cache, address translation must finish before the cache lookup can begin, so PIVT gains nothing in latency, while its virtual tags reintroduce the homonym problem. This combination makes it impractical, and it is essentially unused.

VIPT Cache

In VIPT cache, the index bits come from the virtual address and the tag bits come from the physical address. Because the cache set can be selected in parallel with the TLB translation, it has lower latency than PIPT cache. In addition, VIPT cache avoids the homonym problem, since the physical tag comparison distinguishes different physical blocks that share a virtual address. If all index bits fall within the page offset, VIPT cache avoids the aliasing problem as well, because those bits are identical in the virtual and physical addresses.

When the index does not fit within the page offset, one way to overcome aliasing is "page coloring": the OS restricts page mappings while preserving the parallel TLB lookup and cache access. The simplest scheme requires the overlapping index bits (the "color" bits) of the virtual address to equal the corresponding bits of the physical address.
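The "index fits within the page offset" condition puts a hard ceiling on alias-free VIPT cache size: each way can cover at most one page. A quick sketch of that arithmetic (the 4 KiB page and 8-way figures are illustrative, though they match many real L1 designs):

```python
# Sketch: the aliasing-free size limit for a VIPT cache.
# If index_bits + block_offset_bits <= page_offset_bits, then
# sets * block_size <= page_size, i.e. each way spans at most one page,
# so total capacity is bounded by page_size * associativity.

def max_alias_free_vipt_size(page_size, associativity):
    return page_size * associativity

# With 4 KiB pages and an 8-way cache, the limit is 32 KiB.
assert max_alias_free_vipt_size(4096, 8) == 32 * 1024
```

This is one reason L1 caches grow in associativity rather than in sets: adding ways raises capacity without pushing index bits above the page offset.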

VIVT Cache

In VIVT cache, both the index and tag bits come from the virtual address. This enables the fastest lookups, since the MMU does not need to be consulted for address translation before the cache access.

However, VIVT cache suffers from both the homonym and aliasing problems described above.
