Memory, Evolution or Revolution?

Introduction

Everywhere you look on the Internet lately you’ll find discussions about computer memory. These discussions range from which memory type is the fastest or slowest to which is the least or most expensive. If you search long enough you will even find a few genuinely authoritative articles on memory architecture and performance. While these better resources provide clearer explanations than all of the other hype combined, they share one inherent problem: most of them are geared toward people with engineering backgrounds. Recently, the memory industry has added several new technologies to the mix. One of the latest is Rambus memory, which arrives trailing legal entanglements over its licensing, and another is the development and release of DDR SDRAM.

Rather than subject you to engineering discussions, we have decided to provide some background on where all of this started, what the different memory types are and how to identify them. More importantly, we will offer our opinion as to where we believe the memory industry is going with all of this. We will put these memory issues in “lay terms” in an effort to make them a little more understandable, and then sprinkle the whole with our honest opinions. Before you read any of the following, we would like to leave you with this: “there is no specific memory type that fits all situations or configurations.”

Five or six years ago, there wasn’t much to say about system memory. Most personal computers came with fast page mode (FPM) DRAM, which ran at speeds between a slow 100ns (nanoseconds) and a slightly faster 80ns. However, as processor (CPU) development evolved and motherboard chipset bus speeds improved, the combination outstripped the ability of FPM DRAM to move data quickly enough to meet the demands of processors and motherboards.

Although there were memory types available that were faster than FPM DRAM, these were relegated to high-end workstations and servers for no reason other than cost. Early DRAM modules were asynchronous, single-bank designs that met the needs of the relatively slow processors in use at the time. Over the last several years, synchronous memory (SDRAM) has been produced with a multitude of advanced features. And even though these high-performance modules have only been available for a few years and are still evolving, it is apparent that they will soon be replaced by one or more of the current protocol-based designs, such as DRDRAM (developed by Rambus, Inc.) and DDR SDRAM.

Let’s briefly review the basics of memory evolution.

There’s really no need to discuss the earliest memory types, as doing so goes beyond our intent to provide the basics. Let’s start with the last major improvement to asynchronous DRAM.

HyperPage Mode (EDO)

The last major improvement to asynchronous DRAM arrived with HyperPage mode, better known as Extended Data Out, or EDO. The innovation was simply to stop turning off the output buffers upon the rising edge of /CAS (for help with the definition of CAS, see the glossary entries below). In essence, this eliminates the column pre-charge time while the data is latched out. It allows the minimum time for /CAS to stay low to be reduced, so the rising edge can come earlier.

CAS (Column Address Strobe) – memory controller signal that tells the memory that it can read the column address.

RAS (Row Address Strobe) – memory controller signal that tells the memory that it can read the row address.

More things to try and understand, right? As a huge oversimplification, data is written into (sent to) memory and later read out of (retrieved from) memory, and every piece of that data is located by a row address and a column address. By reducing the time spent on signaling and data handling (writing to and reading from memory), memory performance is increased.
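
To make the row-and-column idea a little more concrete, here is a tiny sketch in Python. The array size and the two-step /RAS-then-/CAS addressing shown here are illustrative assumptions, not the behavior of any particular chip:

```python
# A tiny model of a DRAM array addressed by row and column.
# The array size is purely illustrative; real chips have
# thousands of rows and columns.

ROWS, COLS = 16, 16

# The cell array, keyed by (row, column).
cells = {(r, c): 0 for r in range(ROWS) for c in range(COLS)}

def read(row, col):
    """/RAS latches the row address, then /CAS latches the column."""
    # Step 1: the controller asserts /RAS; the whole row is sensed.
    active_row = row
    # Step 2: the controller asserts /CAS, picking one cell in that row.
    return cells[(active_row, col)]

def write(row, col, value):
    """Writes use the same row-then-column addressing."""
    cells[(row, col)] = value

write(3, 7, 42)
print(read(3, 7))  # -> 42
```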

Although EDO memory uses approximately the same amount of silicon and comes in essentially the same package, it offers an average improvement in access times of 40% or better. Depending upon the type, EDO has been shown to work well at memory bus speeds up to 83 MHz with little or no performance penalty, and if the chips are sufficiently fast (55ns or faster), EDO can be used even with a 100 MHz memory bus. At one time, one of the best reasons to use EDO was that most motherboard chipsets supported it (such as Intel’s HX, TX, GX, SX and LX) with no unusual compatibility problems. Even with all of its advantages, EDO is no longer considered mainstream. Most major manufacturers still produce it, but in limited quantities, given that SDRAM modules are more prevalent and equally or less expensive.

If you already own EDO memory and your present motherboard chipset only supports 66 MHz or 83 MHz, there is no real reason to jump to SDRAM unless you plan on changing to a motherboard that supports bus speeds above 83 MHz. With a typical EDO timing of 5-2-2-2 at 66 MHz, there is almost no noticeable improvement with SDRAM over EDO, and at 83 MHz the difference is still negligible. If you require 100 MHz or 133 MHz bus operation, however, EDO will lag far behind current SDRAM even if it does operate at that speed, because it needs 6-3-3-3 timings. On the other hand, with the probability that EDO will be phased out in the near future, you may find SDRAM to be more to your liking.
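
For those who like numbers, here is a rough back-of-the-envelope comparison of those burst timings in Python. The 64-bit bus width and the 5-1-1-1 SDRAM burst used for contrast are typical assumptions for the period, and the calculation ignores refresh, page misses and other real-world overhead:

```python
# Back-of-the-envelope burst bandwidth for the timings discussed above.
# Assumes a 64-bit bus; the 5-1-1-1 SDRAM burst is a typical figure
# of the period, not a measured one.

BUS_BYTES = 8  # 64-bit memory bus

def burst_bandwidth(timing, mhz):
    """MB/s for one 4-transfer burst with x-y-y-y timing at a given clock."""
    cycles = sum(timing)              # e.g. 5-2-2-2 -> 11 clocks total
    seconds = cycles / (mhz * 1e6)    # wall time for the burst
    return 4 * BUS_BYTES / seconds / 1e6

print(burst_bandwidth((5, 2, 2, 2), 66))   # EDO at 66 MHz    -> ~192 MB/s
print(burst_bandwidth((6, 3, 3, 3), 100))  # EDO at 100 MHz   -> ~213 MB/s
print(burst_bandwidth((5, 1, 1, 1), 100))  # SDRAM at 100 MHz -> 400 MB/s
```

Run it and the gap tells the story: at 66 MHz the two designs are close, but at 100 MHz the SDRAM burst moves nearly twice the data of EDO’s 6-3-3-3.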

Synchronous Operation

When it became apparent that processors and motherboard bus speeds would be running faster than 66 MHz, memory engineers needed to find ways to overcome the significant latency issues that existed within DRAM designs of the day. By implementing a synchronous interface, they were able to reduce this latency and gain some other advantages besides.

In systems with early Pentium processors, those in the 120 MHz to 200 MHz range, and memory using an asynchronous interface, the processor must wait idly for the DRAM to complete its internal operations, which typically takes about 60 to 70 nanoseconds (ns). With synchronous control, the DRAM latches information from the processor under control of the system clock. These latches store the addresses, data and control signals, which allows the processor to handle other tasks. After a specific number of clock cycles the data becomes available and the processor can read it from the output lines.

Okay, let’s see if we can explain this in lay terms.

Suppose you leave work to make a copy of an important document at the local copy center. The average employee at the copy center normally takes about 70 nanoseconds to process a copy job (if only they were that fast). However, you don’t know whether there will be someone waiting to take your order once you arrive; you only know how long it takes to make the copy once your order is actually taken. Having done this many times before, you are familiar with the way the copy center runs its business, so you allow some wait time. You arrive at the copy center and, sure enough, you have to wait for an employee to take your request. After all is said and done, the entire process has taken far longer than the 70 nanoseconds it takes to actually make the copy.

Now suppose that an employee at the copy center has been assigned to go to the front counter every 10 nanoseconds to take new orders or deliver previously placed orders. Since this eliminates your initial wait time, the process is already more efficient. You now know that when you arrive at the copy center, there will be someone there to begin processing your order immediately.

This illustrates the advantage of Synchronous DRAM over Fast Page Mode (FPM) and Extended Data Out (EDO) memory. SDRAM is synchronized to interact with the processor at specific intervals. With this faster, more efficient transfer of data, the CPU can process requests more quickly, thus reducing wait time for the end user.
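
If you prefer code to copy centers, here is a toy clock-driven model of the same idea in Python. The 3-cycle latency is an assumption for illustration; real CAS latencies vary with the part and the clock speed:

```python
# A toy clock-driven model of the synchronous access described above.
# The 3-cycle latency is an illustrative assumption.

CAS_LATENCY = 3
pending = []  # (ready_cycle, address) pairs latched by the DRAM

def issue_read(address, current_cycle):
    """Latch a read request on a clock edge; the data arrives later."""
    pending.append((current_cycle + CAS_LATENCY, address))

def clock_tick(current_cycle):
    """On each rising edge, deliver any reads whose latency has elapsed."""
    ready = [addr for cycle, addr in pending if cycle == current_cycle]
    pending[:] = [(c, a) for c, a in pending if c != current_cycle]
    return ready

issue_read(0x1000, current_cycle=0)
for cycle in range(1, 5):
    # The processor is free on cycles 1 and 2; the data for 0x1000
    # shows up on cycle 3, exactly when the schedule says it will.
    print(f"cycle {cycle}: ready = {[hex(a) for a in clock_tick(cycle)]}")
```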

Another advantage of a synchronous interface is that the system clock is the only timing edge that needs to be provided to the DRAM, which eliminates the need to create multiple timing strobes. The inputs are simplified as well, since the control signals, addresses and data can all be latched in without the processor having to monitor setup and hold timings. The same benefits are realized for output operations.
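
As a final sketch, here is that single-timing-edge point in miniature: everything is captured together on one rising clock edge, so the processor never has to sequence separate strobes or watch setup and hold windows itself. The class and field names are generic placeholders, not a real controller interface:

```python
# A minimal sketch of the single-timing-edge idea above. The names
# here are generic placeholders, not a real controller API.

class ClockedInterface:
    """Latch address, data and control signals on each rising edge."""

    def __init__(self):
        self.latched = None

    def rising_edge(self, address, data, control):
        # One clock edge latches all inputs at once; no separate /RAS
        # and /CAS strobe sequencing is needed from the processor.
        self.latched = {"address": address, "data": data, "control": control}
        return self.latched

bus = ClockedInterface()
print(bus.rising_edge(address=0x2000, data=0xFF, control="WRITE"))
```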
