Memory Speeds

Have you ever wondered how they were determined?

It doesn't matter whether you are purchasing memory for a new system or trying to match the memory in a system you already own: if you have ever asked yourself how fast it is, you have stumbled onto one of the most daunting questions in the industry. New customers purchasing memory upgrades often ask us how they can identify the memory already in their computers. If you purchase memory from a reputable supplier and it happens to be branded by a major manufacturer such as Micron/Crucial, Kingston, Samsung, Melco or NEC, most of those companies go to great lengths to help you accurately identify their memory modules, along with all of the pertinent data relating to them. Unfortunately, if you bought your $300 worth of memory for $75 at a computer show, you may be out of luck unless you knew exactly what you were buying.

As you read through our articles relating to memory, you will see that we too have made an effort to give you the information you need to be an informed consumer and user. If you work in the computer field, you may already know what the terms EDO, SDRAM, PC100, PC133, PC800 RDRAM and DDR DRAM mean. If you don't, hang on, we're about to help you too!

If you have picked up an SDRAM module and looked at it, you may have seen markings on the individual chips, such as -10, -8 or -6, and wondered what they meant. If you have been surfing the Internet or reading computer-related publications lately, you may also be wondering whether PC800 DRDRAM is really 8 times faster than PC-100, or how much faster PC-100 is than EDO memory. In the following we will try to clarify how memory speed is defined for the different memory technologies and eliminate some of the confusion surrounding this subject. Since DRAM is the predominant memory type at the moment, the discussion below applies to DRAM only.

Before we discuss the various DRAM types and begin to compare the differing technologies and their benefits, let’s take a moment and review how DRAM works.

DRAM Basics

Internally, DRAM contains a huge array of cells that hold data. (If you have ever used Microsoft Excel, try to picture it that way.) A pair of row and column addresses uniquely identifies each cell in the DRAM. DRAM communicates with a memory controller through two main groups of signals: control/address signals and data signals. The control signals tell the DRAM which operation to execute, the address signals tell the DRAM which cell to perform the operation on, and the data signals transfer data to and from the DRAM.

These are the most important timing terms:

  1. tRP – The time required to switch internal memory banks. (RAS Precharge)
  2. tRCD – The time required between /RAS (Row Address Select) and /CAS (Column Address Select) access.
  3. tAC – The amount of time necessary to “prepare” for the next output in burst mode.
  4. tCAC – The Column Access Time.
  5. tCL – (or CL) CAS Latency.
  6. tCLK – The Length of a Clock Cycle.
  7. RAS – Row Address Strobe or Row Address Select.
  8. CAS – Column Address Strobe or Column Address Select.
  9. Read Cycle Time – The time required to make data ready by the next clock cycle in burst mode.

Note #1:  tRAC (Random Access Time) is calculated as tRAC = tRCD + tCAC.
Note #2:  RAS and CAS normally appear in technical manuals with an overline (or a leading slash, as in /RAS and /CAS), indicating that the signals are active low.

These are the steps required to access a cell in DRAM:

  • A row command to latch in Row address
  • A column command to latch in Column address

There is a necessary delay between the two commands as well as a delay after the column command for the I/O circuit to drive valid data. When we add these two delays together we have the random access time of the DRAM. It is the minimum time it takes the memory controller to read data randomly from DRAM. The next figure below illustrates this sequence in the first DRAM read cycle.

EDO/FPM Memory

FPM stands for Fast-Page-Mode, and EDO stands for Extended-Data-Out. EDO is enhanced FPM. EDO/FPM memory takes advantage of the fact that when cells within the same row are accessed, the row command doesn’t need to be repeated. The operation mode where multiple column commands follow a single row command is called Page Mode. Below you will find a graphic illustration of page mode access.

tRCD is the delay between a row command and a column command (referred to as the RAS-to-CAS delay). tCAC is the delay from the column command to valid data (the column access time). The random access time is calculated as tRAC = tRCD + tCAC. (See the read timing diagram below.) When inspecting EDO/FPM DRAM modules, you will typically see -70, -60 (or -6) and -50 (or -5) markings on the DRAM chips. These markings refer to a tRAC of 70ns, 60ns and 50ns respectively. Obviously, the smaller the number, the faster the DRAM.

In personal computers, memory access often occurs in page mode. When an L2 Cache is present, the page mode access rate is the true indicator of the memory speed that determines how a system performs. Random access time is not necessarily the best indicator of memory speed as it pertains to system performance. The page mode cycle time, tPC, determines the peak data rate in FPM/EDO DRAM. EDO DRAM is almost identical to FPM except that the data is turned off later in the read mode. This difference allows EDO to have a shorter tPC, thus increasing the performance of the computer. For a -6 or -60 FPM DRAM, the typical tPC is 35ns. For a -60 EDO DRAM, the typical tPC is 25ns. As you can see, the -60 EDO DRAM's page cycle is about 29% shorter, which works out to a peak data rate roughly 40% higher.
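The peak-rate comparison above can be sketched in a few lines of Python. This is only a back-of-the-envelope illustration; the 8-byte bus width is an assumption (it matches a 64-bit module, but the tPC comparison holds for any width).

```python
# Sketch: compare FPM and EDO peak page-mode data rates from their
# typical page cycle times (tPC), as discussed above.
FPM_TPC_NS = 35  # typical tPC for a -60 FPM DRAM
EDO_TPC_NS = 25  # typical tPC for a -60 EDO DRAM

def peak_rate_mb_s(tpc_ns, bus_bytes=8):
    """Peak data rate in MB/s: one bus-wide transfer per page cycle."""
    return bus_bytes / (tpc_ns * 1e-9) / 1e6

fpm = peak_rate_mb_s(FPM_TPC_NS)  # ~229 MB/s
edo = peak_rate_mb_s(EDO_TPC_NS)  # 320 MB/s
print(f"EDO page cycle is {(1 - EDO_TPC_NS / FPM_TPC_NS) * 100:.0f}% shorter; "
      f"peak data rate is {(edo / fpm - 1) * 100:.0f}% higher")
```

Note that a 29% shorter cycle time yields a 40% higher rate, because rate is the inverse of cycle time.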

Let’s try a couple of illustrations so that you can see what impact page mode has on system performance.

Illustration #1: Four random memory accesses without page mode.
Illustration #2: Four memory accesses in the same row, but with page mode.

For these illustrations we presume that all modules are 70ns parts, and we use a typical page mode cycle time of 40ns. In Illustration #1, the minimum total time needed to read 4 cells is 4 x tRAC (4 x 70), which equals 280ns. In Illustration #2, only the first access takes the full 70ns, with successive reads taking only 40ns each. Therefore, the total time needed to read 4 cells in the same row is tRAC + 3 x tPC (70 + 3 x 40), which equals 190ns. As you can see, page mode access cuts the total access time by roughly 32%. See the illustrations below.
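The two illustrations reduce to a small calculation, sketched here with the same 70ns/40ns figures used above:

```python
# Sketch of the two illustrations: reading four cells from a 70ns DRAM,
# first without page mode, then with page mode (40ns page cycle).
T_RAC = 70  # random access time, ns
T_PC = 40   # page mode cycle time, ns

def total_ns(accesses, page_mode):
    if page_mode:
        # Only the first access pays the full tRAC; the rest stay in-row.
        return T_RAC + (accesses - 1) * T_PC
    return accesses * T_RAC

without_pm = total_ns(4, page_mode=False)  # 4 x 70 = 280 ns
with_pm = total_ns(4, page_mode=True)      # 70 + 3 x 40 = 190 ns
print(without_pm, with_pm)  # 280 190
```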


SDRAM, PC100 and PC133

Two points from the EDO/FPM discussion carry over to SDRAM. The first is that in page mode, the column command can select a random address within the row. The second is that, as we noted, the page mode cycle time tPC is a better indicator of memory performance than tRAC, the random access time. SDRAM improves on both counts.

Instructions and data in personal computers tend to be read in sequential order most of the time. With an L2 Cache present, memory transactions happen as bursts of fixed-size memory blocks with consecutive addresses. As an example, in early Pentium-class processors, 32 consecutive bytes in memory are transferred between the L2 Cache and main system memory. Given this, DRAM developers released a better design called SDRAM, which stands for Synchronous DRAM. SDRAM fixes the page mode address pattern to be sequential and introduces a free-running clock as the timing reference for all operations, including the page mode cycle. The page mode access in SDRAM is called burst mode. The following illustration shows SDRAM burst read timing.

Clock Cycle and Clock Frequency

Given that burst mode has fixed address patterns, the burst cycle time can be much shorter than a page mode cycle. The clock cycle time of SDRAM is set to be the same as the burst cycle time. Instead of random access time, the clock cycle time tCLK is used as the indicator of SDRAM speed. The -12, -10 and -8 markings on an SDRAM chip indicate the minimum clock cycle time for that component. (See Putting It All Together below.) A -12 marking means the clock cycle time for the SDRAM is 12ns, which in turn means the maximum clock frequency for the part is about 83 MHz.
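Converting a "-xx" marking to a maximum clock frequency is just an inversion, sketched here:

```python
# Sketch: converting an SDRAM chip's "-xx" marking (minimum clock cycle
# time in ns) to its maximum clock frequency in MHz.
def max_freq_mhz(cycle_ns):
    """Maximum clock frequency for a given minimum cycle time."""
    return 1000.0 / cycle_ns

for marking in (12, 10, 8):
    print(f"-{marking}: {max_freq_mhz(marking):.1f} MHz")
# -12: 83.3 MHz
# -10: 100.0 MHz
# -8: 125.0 MHz
```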

Some margin must be allowed when the integrated circuits are mounted on a printed circuit board (module), which is why modules built with -12 chips usually run in 66 MHz systems. For the same reason, -8 chips (maximum frequency 125 MHz) are used in 100 MHz systems.

PC-100 and PC-133 refer to SDRAM modules that meet the specifications for 100 MHz and 133 MHz personal computer systems. As a general reference only, -8 or faster chips can be used for PC-100 qualified modules, while -6 or faster chips are used for PC-133 qualified modules. Some -75 (7.5ns clock cycle time) chips have been used on modules alleged to be PC-133 qualified; in truth they are not. Unfortunately, no marking standard has been established for SDRAM chips, all the more reason to be careful. As an example, some Samsung chips have no "-xx" marking at all, so the user has to decode the speed with an electronics data book or a visit to the vendor's Web site.

What does 2-2-2 SDRAM mean?

Let's try to put these numbers (2-2-2) into terms that are easier to understand and perhaps more meaningful. Aside from clock frequency, the most commonly used timing terms for memory, SDRAM included, are tRP, tRCD and tCL. (See SDRAM Burst Read Timing above.) Now let's restate these timing terms and then discuss how they relate to 2-2-2.

  • tRP – The speed or length of time that it takes DRAM to terminate one Row Access and start another. Switching between memory banks.
  • tRCD – The time required between /RAS (Row Address Select) and /CAS (Column Address Select) access. Although this is an extreme oversimplification, think of tRCD this way: picture an Excel sheet with numbers across the top and down the left side. The numbers down the left side represent the rows and the numbers across the top represent the columns. The time it would take you to move down to Row 20 and across to Column 20 is RAS to CAS.
  • tAC – The amount of time necessary to “prepare” for the next output in burst mode.
  • tCAC – The Column Access Time.
  • tCL – (or CL) CAS Latency. CAS Latency is the ratio between the column access time and the clock cycle time, rounded up to the next whole number: tCL = tCAC / tCLK, rounded up. If you look at the illustration for SDRAM burst read timing above, a CAS Latency of 2 means the access time tAC is measured from the 2nd clock edge after the "Read" command.
  • tCLK – The Length of a Clock Cycle.
  • Read Cycle Time – The time required to make data ready by the next clock cycle in burst mode.

Okay, we've spent enough time acquainting you with the terms; now let's discuss 2-2-2 and try to make some sense of it. To keep our example as simple as possible, we will presume that the clock cycle referred to here (unless otherwise specified) is based on a 100 MHz bus. Since the clock cycle is the inverse of the bus speed, it is 10 nanoseconds. On a 100 MHz bus, the data transfer itself takes about 2 ns. According to the SDRAM specification, tAC is 6 ns, and it takes approximately 2 ns for the signal to stabilize. The specification permits a CAS Latency as low as 1 clock, but no greater than 3 clocks.

6 ns. (tAC) + 2 ns. (stabilization time) = 8 ns.

8 ns. + 2 ns. (transfer time) = 10 ns. = 1 clock tick

Therefore, in burst mode, after the first access (which takes about 50 ns), each of the following data transfers completes in a single clock cycle.

SDRAM modules are usually defined by three numbers, such as 2-2-2 or 3-2-2. The first number refers to CAS Latency, the second to tRP, and the third to tRCD.

Note: These numbers (2-2-2 and 3-2-2) mean different things for different bus speeds, therefore these calculations are for a 100 MHz bus as noted above, with 1 clock cycle = 10 ns.

2-2-2:
  tCAC = 20 ns; 20 / 10 = 2
  tRP = 20 ns; 20 / 10 = 2
  tRCD = 20 ns; 20 / 10 = 2

3-2-2:
  tCAC = 25 ns; 25 / 10 = 2.5, rounded up to 3
  tRP = 20 ns; 20 / 10 = 2
  tRCD = 20 ns; 20 / 10 = 2

Now let’s calculate these figures for 133 MHz, with 1 clock cycle = 7.5 ns.

4-3-3:
  tCAC = 25 ns; 25 / 7.5 = 3.33, rounded up to 4
  tRP = 20 ns; 20 / 7.5 = 2.67, rounded up to 3
  tRCD = 20 ns; 20 / 7.5 = 2.67, rounded up to 3

As you can see, the last example would not be valid in a 133 MHz system, as CAS Latencies greater than 3 are not permitted under the SDRAM specification.
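The round-up rule used in these calculations can be sketched in Python. The 20 ns and 25 ns values are the typical figures used in the examples above, not properties of every module:

```python
# Sketch of the CL/tRP/tRCD calculations above: each timing value in ns
# is divided by the clock cycle time and rounded up to whole clocks.
import math

def clocks(ns, tclk_ns):
    """Round a nanosecond delay up to a whole number of clock cycles."""
    return math.ceil(ns / tclk_ns)

def timing_triple(tcac_ns, trp_ns, trcd_ns, tclk_ns):
    """Return the (CL, tRP, tRCD) numbers for a given clock cycle time."""
    return (clocks(tcac_ns, tclk_ns),
            clocks(trp_ns, tclk_ns),
            clocks(trcd_ns, tclk_ns))

print(timing_triple(20, 20, 20, 10.0))  # (2, 2, 2) at 100 MHz
print(timing_triple(25, 20, 20, 10.0))  # (3, 2, 2) at 100 MHz
print(timing_triple(25, 20, 20, 7.5))   # (4, 3, 3) at 133 MHz
```

The last result illustrates why the 25 ns part fails at 133 MHz: its computed CAS Latency of 4 exceeds the maximum of 3 the specification allows.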

Let’s dig a little deeper into these timing issues and spend some time with SDRAM access times.

Access Time in SDRAM

We've covered a lot of ground, beginning with EDO/FPM DRAM and moving on to SDRAM, so it is imperative that you understand that the access time tAC for SDRAM is defined differently from that of EDO/FPM DRAM.

To clarify, SDRAM access time is defined as the access time in burst mode, measured from a certain clock edge (clock latency). The typical access time for PC-100 SDRAM is about 7 ns, but this does not mean that PC-100 SDRAM is 7 times faster than 50 ns EDO DRAM. Comparing 7 ns to 50 ns is comparing apples to oranges. For a fair comparison, we need to convert the SDRAM access time to a random access time.

Let's see how much faster PC-100 SDRAM is compared to 50 ns EDO DRAM. The tRCD for PC-100 SDRAM is typically 2 clock cycles, which is 20 ns. Presuming a CAS Latency of 2 and an access time of 7 ns, the column portion takes one full clock (10 ns) plus the 7 ns access time, or 17 ns, so the random access time is 20 + 17 = 37 ns. A PC-100 module is therefore about as fast as a 40 ns EDO module for random access. In burst mode, however, PC-100 SDRAM is far faster than any EDO DRAM.
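The conversion just described can be sketched as follows. The CL2 assumption here means the data is valid tAC after the second clock edge following the read command, i.e. one full clock plus tAC after the column command:

```python
# Sketch: converting PC-100 SDRAM burst-mode numbers into an
# EDO-comparable random access time, using the figures above.
T_CLK = 10      # PC-100 clock cycle, ns
T_RCD_CLKS = 2  # typical RAS-to-CAS delay, in clocks
CL = 2          # CAS Latency, in clocks
T_AC = 7        # burst-mode access time, ns

trcd_ns = T_RCD_CLKS * T_CLK         # 20 ns
column_ns = (CL - 1) * T_CLK + T_AC  # 10 + 7 = 17 ns
trac_ns = trcd_ns + column_ns        # 37 ns
print(trac_ns)  # 37, comparable to a 40 ns EDO part
```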


A common marketing term attached to SDRAM modules is either “CAS2” or “CAS3”. Unfortunately, this is a bit misleading, as they should be referred to as CL2 or CL3, since they refer to CAS Latency timings (2 clocks vs. 3 clocks). As you have seen from the discussion above, the CAS Latency of a chip is determined by the column access time (tCAC). This is the time it takes to transfer the data to the output buffers from the time the /CAS line is activated.

The “rule” for determining CAS Latency timing is based on this equation:  tCL = tCAC / tCLK

In lay terms, CAS Latency times the system clock cycle length must be greater than or equal to the column access time (tCAC). In other words, if tCLK is 10ns (100 MHz system clock) and tCAC is 20ns, the CAS Latency (CL) can be 2. But if tCAC is 25ns, then CAS Latency (CL) must be 3. The SDRAM specification permits CAS Latency values of 1, 2 or 3.

Lost yet?

If you purchase an SDRAM memory module and you are advised that its values are 2-2-2, the first number is the CAS Latency, the second number is tRP (typically 20ns), and the third number is tRCD (also typically 20ns). In this case, 2-2-2 is better than 3-2-2!

Here’s how the CPU and SDRAM work together

First the CPU activates the row and bank via the /RAS line. After a period of time (tRCD), the /CAS line is activated. Once the column access time (tCAC) has passed, the data appears on the output line and can be transferred on the next clock cycle. Approximately 50 ns elapse before the first piece of data becomes available. Subsequent transfers are then performed in burst mode (every clock cycle), or by cycling /CAS if necessary, which requires an amount of time dictated by tCAC, also called the CAS Latency period. For burst mode operation, the access time (tAC) must be no more than 6 ns, so that the signal can stabilize and the output is ready 8 ns after the clock edge. The transfer of the data takes 2 ns or less, which means that data is available every 10 ns on a burst transfer, just in time for the next 100 MHz clock signal.
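The whole sequence can be summarized in a sketch: roughly 50 ns for the first word, then one 10 ns clock per word for the rest of the burst. These figures are the approximations from the discussion above, not exact datasheet values:

```python
# Sketch: total time for an SDRAM burst read on a 100 MHz bus,
# using the approximate figures discussed above.
FIRST_WORD_NS = 50  # tRCD + CAS Latency period before data appears
BEAT_NS = 10        # one clock cycle per word in burst mode

def burst_total_ns(words):
    """Total time to read `words` consecutive words in one burst."""
    return FIRST_WORD_NS + (words - 1) * BEAT_NS

print(burst_total_ns(4))  # 80 ns for a four-word burst
```

Compare that 80 ns to the 190 ns the same four reads took in page mode on EDO, and the benefit of burst mode is clear.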

There's a lot of uproar lately over which is faster, PC800 RDRAM or PC266 DDR, so let's take a brief look at a topic we cover in depth in our Rambus and DDR articles.

The terms PC800 and PC266 are misleading as they only tell part of the story. PC800 is used by Intel and Rambus to indicate the 800 MHz peak data transfer rate for Direct Rambus memory technology. But a Rambus module is only 2 bytes wide while an SDRAM module is 8 bytes wide.

  • The formula for peak bandwidth is PBW = Peak_Data_Rate x Data_Bus_Width.
  • The peak bandwidth of PC-800 Rambus module = 800 MHz x 2 Bytes = 1.6GB/s.
  • The peak bandwidth for PC-100 SDRAM = 100 MHz x 8 Bytes = 800MB/s, exactly half that of the PC-800 Rambus module.

PC-266 DDR has a peak data transfer rate of 266 MHz and, like an SDRAM module, a DDR module has an 8 byte wide bus. Therefore, the peak bandwidth for PC-266 DDR is 266 MHz x 8 Bytes = 2.1GB/s, about 30% higher than PC-800 RDRAM. The actual system performance of a given memory technology, however, is much more complicated than a simple comparison of peak bandwidth.
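The peak bandwidth formula above is simple enough to sketch directly:

```python
# Sketch of the formula above: PBW = Peak_Data_Rate x Data_Bus_Width.
def peak_bw_gb_s(rate_mhz, bus_bytes):
    """Peak bandwidth in GB/s for a given transfer rate and bus width."""
    return rate_mhz * bus_bytes / 1000.0

pc800_rdram = peak_bw_gb_s(800, 2)  # 1.6 GB/s
pc100_sdram = peak_bw_gb_s(100, 8)  # 0.8 GB/s
pc266_ddr = peak_bw_gb_s(266, 8)    # ~2.1 GB/s
print(pc800_rdram, pc100_sdram, round(pc266_ddr, 1))
```

As the article cautions, these are peak figures only; real-world throughput depends heavily on access patterns and latency.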

If you have found this information useful to you, won’t you please let us know? 

Also, if you feel there is something that should be added or corrected, we would like to hear about that too!

This page updated: 11/12/2000
