Over the last three years or so, memory manufacturers have been releasing new and faster memory modules at an ever quickening pace. Some are updates of existing types, others are redesigns, and still others are entirely new innovations. We've moved quickly from FPM to EDO memory, and then on to SDRAM. Then SDRAM jumped from its initial release at 66MHz to 100MHz and then to 133MHz. In the last year we've seen Rambus DRDRAM, and most recently DDR SDRAM. Now, after we have digested all of these changes and innovations, and all the hype about "substantial speed increases", the memory manufacturers, distributors and retailers have decided to bombard us with technical terms explaining why their product is better than that of another company. One such example has been the issue of CAS Latency. Recently, a whole raft of resellers have been publishing claims that the CAS Latency of their Brand X modules is a huge improvement over that of the Brand Y modules sold elsewhere. Unfortunately, none of them goes quite far enough to explain just what this means. Is CAS Latency an issue? Is it hype, or is it something I should be careful about? What does it mean?
Maybe yes, and maybe no. We know you don't like that answer, but the truth is that how much CAS Latency affects your computer depends upon the component make-up of your computer, how you use it and the programs you run.
Although our review of Memory Speeds digs into these issues, let's take a quick look at the issue of memory timing by itself. CAS Latency (CL) is the ratio between column access time (tCAC) and the clock cycle time (tCLK), rounded up to the next whole number. The formula is rather simple: divide the column access time by the clock cycle time, then round the result up to the next whole number.
As an example, if the tCAC is 20 nanoseconds and the tCLK is 10 nanoseconds (a 100 MHz system bus), then the CL would be 2. If tCAC is 25 nanoseconds, then CL would be 3, since 25/10 = 2.5 (rounded up to 3).
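The calculation above can be sketched in a few lines of Python. The function name is our own; the formula is exactly the one described, with the round-up handled by `math.ceil`:

```python
import math

def cas_latency(tcac_ns: float, tclk_ns: float) -> int:
    """CAS Latency: column access time (tCAC) divided by clock
    cycle time (tCLK), rounded up to the next whole number."""
    return math.ceil(tcac_ns / tclk_ns)

# 100 MHz system bus -> 10 ns clock cycle
print(cas_latency(20, 10))  # -> 2
print(cas_latency(25, 10))  # -> 3  (25/10 = 2.5, rounded up)
```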
Okay, so what does all of this mean? To understand that, we need to understand a few other memory timing terms. For clarity, the examples that follow assume a 100MHz system bus.
RAS and CAS normally appear in technical manuals with an over-line, as in R̅A̅S̅ or C̅A̅S̅, indicating that these are active-low signals.
In lay terms, data is transferred from memory to the CPU as follows:
Since the clock cycle time is the inverse of the bus speed, for our purposes here it is 10 nanoseconds. On a 100 MHz bus, the data transfer itself takes about 2 nanoseconds. According to the PC100 specification, tAC is 6 nanoseconds, and it takes about 2 nanoseconds for the signal to stabilize.
6 nanoseconds (tAC) + 2 nanoseconds to stabilize = 8 nanoseconds
8 nanoseconds + 2 nanoseconds for transfer time = 10 nanoseconds = 1 clock tick
Therefore, in burst mode, after the first access (which requires about 50 nanoseconds), each of the three subsequent transfers can complete in one clock cycle. SDRAM is a multi-bank architecture, and while data is being processed the chipset can leave a given row of a given bank (one that has been accessed before) "open". If the very next request accesses that same row, the chipset does not have to wait for the sense amps to be charged. This is referred to as a "page hit": the RAS to CAS latency is 0 (zero) cycles, and the output buffers contain the right data after just the CAS latency. In simpler terms, on a page hit we only have to wait until the right columns are found on the sense amps, which already hold the requested row.
There is a downside, though: the row the chipset requests might not be the one that is open, which is referred to as a page miss. In that case the RAS to CAS latency can be 2 or 3 clock cycles, depending on the quality of the SDRAM. If the chipset has left a certain row open on a certain bank, and the requested data is in a different row of the same bank, latency gets worse still. When this occurs, the sense amps have to write back the old row before they can charge the new one. Writing back the old row takes a predetermined amount of time referred to as tRP (Precharge Time).
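The three cases above (page hit, miss on an idle bank, miss with the wrong row open) can be summarized in a simplified model. This is a sketch only; the function name and default cycle counts are our own illustrative assumptions, not part of any specification:

```python
def access_latency_cycles(open_row, requested_row, cas=2, trcd=2, trp=2):
    """Simplified model of SDRAM access latency (in clock cycles)
    for a single bank.

    - Page hit: the requested row is already open -> only the CAS latency.
    - Bank idle (no row open): the row must be opened first -> tRCD + CAS.
    - Wrong row open: the old row must be written back (precharge, tRP)
      before the new row can be charged -> tRP + tRCD + CAS.
    """
    if open_row == requested_row:
        return cas                 # page hit
    if open_row is None:
        return trcd + cas          # bank idle, open the row first
    return trp + trcd + cas        # row conflict: precharge, then open

print(access_latency_cycles(open_row=7, requested_row=7))     # -> 2 (hit)
print(access_latency_cycles(open_row=None, requested_row=7))  # -> 4 (idle bank)
print(access_latency_cycles(open_row=3, requested_row=7))     # -> 6 (conflict)
```

The point of the model is simply that a page hit skips both the precharge and the RAS-to-CAS delay, which is why keeping rows open pays off.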
SDRAM modules are usually defined by three numbers, such as 2-2-2 or 3-2-2. The first number refers to CAS Latency, the second to tRP (Precharge Time) and the third to tRCD (RAS-to-CAS Delay). Bear in mind that these numbers mean different things at different bus speeds. As an example, these are the calculations for 100 MHz (1 clock cycle = 10 nanoseconds):
At 100 MHz:
|tCAC = 25 nanoseconds|25 / 10 = 2.5, rounded up to 3|
|tRP = 20 nanoseconds|20 / 10 = 2|
|tRCD = 20 nanoseconds|20 / 10 = 2|
These figures yield a "3-2-2" module.
If we were to calculate these figures at 133 MHz, with 1 clock cycle equal to 7.5 nanoseconds, the results would be:
At 133 MHz:
|tCAC = 25 nanoseconds|25 / 7.5 = 3.33, rounded up to 4|
|tRP = 20 nanoseconds|20 / 7.5 = 2.67, rounded up to 3|
|tRCD = 20 nanoseconds|20 / 7.5 = 2.67, rounded up to 3|
These figures yield a "4-3-3" module.
This second example, with a CAS Latency of 4, would be invalid in a 133 MHz system, as the PC133 SDRAM specification does not permit it.
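Both tables can be reproduced with one small helper that converts nanosecond timings into clock-cycle counts at a given bus speed. The function name and parameter defaults are ours, taken from the figures above:

```python
import math

def timing_numbers(bus_mhz: float, tcac_ns=25, trp_ns=20, trcd_ns=20):
    """Convert nanosecond timings (tCAC, tRP, tRCD) into clock-cycle
    counts at the given bus speed, rounding each up to a whole cycle."""
    tclk_ns = 1000.0 / bus_mhz  # clock cycle time in nanoseconds
    return tuple(math.ceil(t / tclk_ns) for t in (tcac_ns, trp_ns, trcd_ns))

print(timing_numbers(100))  # -> (3, 2, 2), a "3-2-2" module
print(timing_numbers(133))  # -> (4, 3, 3), more than PC133 permits
```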
The Bottom Line
CAS Latencies are usually written as CAS2 or CAS3, so just how important is this?
In the real world, unless your system is on the cutting edge of technology and you are pushing performance to the limit, as some overclockers and gamers do, the relevance is nominal at best. CAS3 means that at 100 MHz, the time required for the first memory access in a burst increases by 10 nanoseconds or less. Divide this figure by 4 to average the increase across a four-transfer burst, and the penalty works out to less than 2.5 nanoseconds per transfer compared with CAS2. We need to underscore the term relevance as it pertains to CAS Latency and changing memory modules on an average system. If, for example, you had a Pentium III computer running at 600 to 866 MHz and used it for surfing the Internet, running Microsoft Office or Corel Office, Adobe products and so on, and you changed your memory modules from CAS3 to CAS2, you wouldn't notice any difference. But if you are pushing your system to its limits, CAS Latency could become critical.
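The averaging argument above is simple enough to check directly. The function name and the four-transfer burst default are our own; the arithmetic is the one described in the text:

```python
def avg_penalty_ns(bus_mhz=100.0, burst_length=4):
    """Average extra time per transfer that CAS3 costs over CAS2:
    one extra clock cycle on the first access of a burst,
    spread across all transfers in that burst."""
    tclk_ns = 1000.0 / bus_mhz  # one clock cycle in nanoseconds
    return tclk_ns / burst_length

print(avg_penalty_ns())  # -> 2.5 ns per transfer at 100 MHz
```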
System performance is not measured by the performance of one part, but rather by the sum of all parts. If you would like to review more about memory related issues, you may want to follow these links:
How Memory Speeds Are Determined
How to Identify PC-133 Memory Modules
Frequently Asked Questions About Memory
Troubleshooting Memory Problems
Megabyte (MB) vs. Megabit (Mb)
Memory Trends in 2001
This page updated: 11/01/2000
Copyright ©1995-2000 DEW Associates Corporation. All rights reserved.