
Memory System Timing

Since memory is in most cases much slower than the processor it serves, the processor often must wait for the memory to provide it with the information it needs. A "wait state" is a clock cycle (or "tick") during which the processor sits idle, waiting for the system memory. The chipset's goal is to reduce this waiting as much as possible; it inserts these wait cycles where necessary to make sure the processor doesn't get ahead of the system cache or memory. The faster your system memory and cache, the fewer of these wait states need to be inserted, which increases performance (very few wait states are needed for the cache, compared to the system memory, which is kind of the point of cache. :^) ). All of this is also a function of the chipset memory access circuitry, and is discussed in detail in the section on memory timing.
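To see what wait states cost in practice, here is a minimal sketch in Python. The wait-state counts (0 for cache, 3 for main memory) are made-up illustrative numbers, not figures from any particular chipset:

```python
def cycles_per_access(wait_states):
    """One access costs the base clock tick plus any inserted wait states."""
    return 1 + wait_states

# Hypothetical numbers: a cache hit needs 0 wait states, main memory needs 3.
cache_cost = cycles_per_access(0)    # 1 cycle
memory_cost = cycles_per_access(3)   # 4 cycles

# Over 1000 accesses, the inserted wait states alone account for
# 1000 * 3 = 3000 idle processor cycles.
idle_cycles = 1000 * (memory_cost - cache_cost)
print(idle_cycles)  # 3000
```

Even a single wait state per access adds up quickly, which is why satisfying most accesses from the cache matters so much.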

Cache on a modern system is stored in 32-byte "lines", meaning information is read from and written to the cache 32 bytes at a time. Since the memory is normally read 8 bytes (64 bits) at a time, this means it takes 4 memory reads (or writes) to fill an "entry" in the cache. On the first of these reads or writes, the address information must be provided to the memory, to tell it which location must be used. After this, the next three reads or writes are from consecutive locations, so the speed is much higher, because there is no need to send the address for the last three accesses (since they are consecutive with the first one). This, of course, greatly improves performance. The delay in accessing the first memory location is referred to as "latency".
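The arithmetic behind filling one cache line can be sketched as follows; the 32-byte line size and 64-bit bus width are the values quoted above:

```python
LINE_SIZE = 32   # bytes per cache line
BUS_WIDTH = 8    # bytes transferred per memory read (64-bit data bus)

def burst_reads(line_size=LINE_SIZE, bus_width=BUS_WIDTH):
    """Number of consecutive memory reads needed to fill one cache line."""
    return line_size // bus_width

# One read carries the address setup cost ("latency"); the remaining
# three come from consecutive locations and need no new address.
print(burst_reads())  # 4
```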

Reflecting this technology, cache and memory access timing is often specified using terminology like this: F-S-S-S, where "F" is the number of cycles for the first access, and "S" is the number for each subsequent consecutive access. An example of this speed specification would be "5-2-2-2", which means the first access takes 5 clock cycles, and the three following it take 2 cycles each. You may also see a speed parameter in your BIOS settings like "x-2-2-2" or "5-x-x-x", because the timing for the first access and the timing for the subsequent accesses can be set and controlled independently on many systems. Remember that the first number doesn't represent just "wait states"; part of the reason the first access takes longer is the need to specify the address to read from, as stated above.
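Totaling up one of these burst timing strings is simple arithmetic; as a sketch, the "5-2-2-2" example above works out like this (the "3-1-1-1" figure is just a hypothetical faster setting for comparison):

```python
def total_cycles(timing):
    """Sum a BIOS-style burst timing string such as '5-2-2-2'."""
    return sum(int(part) for part in timing.split("-"))

# At 5-2-2-2, filling one 32-byte cache line takes 5 + 2 + 2 + 2 cycles.
print(total_cycles("5-2-2-2"))  # 11
print(total_cycles("3-1-1-1"))  # 6
```

Note that almost half the cost of the slower setting is the first access, which is why reducing that initial latency pays off more than shaving the subsequent accesses.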

All of this is covered in more detail in the discussion of memory timing in the chapter on memory.



The PC Guide (http://www.PCGuide.com)
Site Version: 2.2.0 - Version Date: April 17, 2001
Copyright 1997-2004 Charles M. Kozierok. All Rights Reserved.
