

Summary: The Cache Read/Write Process

Having looked at all the parts and design factors that make up a cache, this section describes the actual process followed when the processor reads from or writes to system memory. This example uses the same parameters as the other sections on this page: 64 MB of system memory, a 512 KB cache, and 32-byte cache lines. I will assume a direct mapped cache, since that is the simplest to explain (and is in fact the most common for level 2 caches):

  1. The processor begins a read/write from/to the system memory.
  2. Simultaneously, the cache controller begins to check if the information requested is in the cache, and the memory controller begins the process of either reading or writing from the system RAM. This is done so that we don't lose any time at all in the event of a cache miss; if we have a cache hit, the system will cancel the partially-completed request from RAM, if appropriate. If we are doing a write on a write-through cache, the write to memory always proceeds.
  3. The cache controller checks for a hit by looking at the address sent by the processor. The lowest five bits (A0 to A4) are ignored, because these differentiate between the 32 different bytes in the cache line. We aren't concerned with that because the cache will always return the whole 32 bytes and let the processor decide which one it wants. The next 14 address lines (A5 to A18) select the line in the cache that we need to check (notice that 2^14 is 16,384).
  4. The cache controller reads the tag RAM at the address indicated by the 14 address lines A5 to A18. So if those 14 bits say address 13,714, the controller will examine the contents of tag RAM entry #13,714. It compares the 7 bits that it reads from the tag RAM at this location to the 7 address bits A19 to A25 that it gets from the processor. If they are identical, then the controller knows that the entry in the cache at that line address is the one the processor wanted; we have a hit. If the tag RAM doesn't match, then we have a miss.
  5. If we do have a hit, then for a read, the cache controller reads the 32-byte contents of the cache data store at the same line address indicated by bits A5 to A18 (13,714), and sends them to the processor. The read that was started to the system RAM is canceled. The process is complete. For a write, the cache controller writes 32 bytes to the data store at that same cache line location referenced by bits A5 to A18. Then, if we are using a write-through cache the write to memory proceeds; if we are using a write-back cache, the write to memory is canceled, and the dirty bit for this cache line is set to 1 to indicate that the cache was updated but the memory was not.
  6. If we have a miss and we were doing a read, the read of system RAM that we started earlier carries on, with 32 bytes being read from memory at the location specified by bits A5 to A25. These bytes are fed to the processor, which uses the lowest five bits (A0 to A4) to decide which of the 32 bytes it wanted. While this is happening the cache also must perform the work of storing these bytes that were just read from memory into the cache so they will be there for the next time this location is wanted. If we are using a write-through cache, the 32 bytes are just placed into the data store at the address indicated by bits A5 to A18. The contents of bits A19 to A25 are saved in the tag RAM at the same 14-bit address, A5 to A18. The entry is now ready for any future request by the processor. If we are using a write-back cache, then before overwriting the old contents of the cache line, we must check the line's dirty bit. If it is set (1) then we must first write back the contents of the cache line to memory, and then clear the dirty bit. If it is clear (0) then the memory isn't stale and we continue without the write cycle.
  7. If we have a cache miss and we were doing a write, interestingly, the cache doesn't do much at all, because most caches don't update the cache line on a write miss. They just leave the entry that was there alone, and write to memory, bypassing the cache entirely. There are some caches that put all writes into the appropriate cache line whenever a write is done. They make the general assumption that anything the processor has just written, it is likely to read back again at some point in the near future. Therefore, they treat every write as a hit, by definition. This means there is no check for a hit on a write; in essence, the cache line that is used by the address just written is always replaced by the data that was just put out by the processor. It also means that on a write miss the cache controller must update the cache, including checking the dirty bit on the entry that was there before the write, exactly the same as what happens for a read miss.
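The seven steps above can be sketched in code. The following is a minimal Python model of my own, not anything from the original article: it assumes the write-back, no-write-allocate variant described in steps 6 and 7, models system RAM and the cache data store as simple byte arrays, and ignores timing (the simultaneous RAM access in step 2). All names are illustrative.

```python
# Direct mapped cache sketch, using the article's example geometry:
# 64 MB memory (26 address bits), 512 KB cache, 32-byte lines.

LINE = 32                      # bytes per line -> 5 offset bits (A0-A4)
LINES = 512 * 1024 // LINE     # 16,384 lines -> 14 line bits (A5-A18)

memory = bytearray(64 * 1024 * 1024)   # 64 MB system RAM
data_store = [bytes(LINE)] * LINES     # cache data store, one line per entry
tag_ram = [None] * LINES               # 7-bit tags (A19-A25)
dirty = [False] * LINES                # dirty bit per line (write-back)

def split(addr):
    """Break a 26-bit address into the fields from steps 3 and 4."""
    return addr & 0x1F, (addr >> 5) & 0x3FFF, addr >> 19  # offset, line, tag

def read(addr):
    offset, line, tag = split(addr)
    if tag_ram[line] == tag:           # step 4: tag compare -> hit
        return data_store[line][offset]   # step 5: serve from cache
    if dirty[line]:                    # step 6: miss, write back stale line
        old_base = (tag_ram[line] << 19) | (line << 5)
        memory[old_base:old_base + LINE] = data_store[line]
        dirty[line] = False
    base = addr & ~0x1F                # fill the line from system RAM
    data_store[line] = bytes(memory[base:base + LINE])
    tag_ram[line] = tag                # tag saved for future requests
    return data_store[line][offset]

def write(addr, value):
    offset, line, tag = split(addr)
    if tag_ram[line] == tag:           # step 5: write hit -> update cache
        buf = bytearray(data_store[line])
        buf[offset] = value
        data_store[line] = bytes(buf)
        dirty[line] = True             # memory is now stale
    else:                              # step 7: write miss -> bypass cache,
        memory[addr] = value           # write straight to memory
```

Note how the dirty bit only forces a write cycle when a stale line is about to be evicted; until then the cache and memory are allowed to disagree, exactly as step 6 describes.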

As complex as it already is :^) this example would of course be even more complex if we used a set associative or fully associative cache. Then we would have a search to do when checking for a hit, and we would also have to decide which cache line to replace on a cache miss.
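To make that extra search and replacement decision concrete, here is a hedged sketch of a two-way set associative lookup over the same 512 KB cache. This organization is not part of the article's example, and the structure and names are my own: each set holds two tags plus a least-recently-used marker, so a hit check must compare both ways, and a miss must pick a victim.

```python
# Two-way set associative lookup sketch (illustrative, not from the article).
# Same 512 KB cache and 32-byte lines, now grouped into sets of 2 ways,
# so there are 8,192 sets (13 index bits A5-A17) and the tag grows to
# 8 bits (A18-A25).

WAYS = 2
SETS = 512 * 1024 // 32 // WAYS        # 8,192 sets

tags = [[None] * WAYS for _ in range(SETS)]
lru = [0] * SETS                       # index of the least-recently-used way

def lookup(addr):
    s = (addr >> 5) & (SETS - 1)       # set index replaces the line index
    tag = addr >> 18                   # 8-bit tag
    for way in range(WAYS):            # the "search": compare every way
        if tags[s][way] == tag:
            lru[s] = 1 - way           # the other way becomes LRU
            return ("hit", way)
    victim = lru[s]                    # miss: the replacement decision
    tags[s][victim] = tag
    lru[s] = 1 - victim
    return ("miss", victim)
```

The payoff is that two lines with the same index bits can now coexist, at the cost of comparing two tags per lookup and tracking which way to evict.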

Next: Cache Characteristics



The PC Guide (http://www.PCGuide.com)
Site Version: 2.2.0 - Version Date: April 17, 2001
Copyright 1997-2004 Charles M. Kozierok. All Rights Reserved.
