Memory Trends In The Year 2001

If you have arrived here in quest of what new memory types will be available six months or a year from now, you’ve probably come to the wrong place. Memory development has never been more volatile and fast moving, and this is in spite of all of the inter-relationships and reliance on “old” technology. Before we can discuss where memory development is going, we really should take a closer look at where it has been, the different developments that are occurring, and how these will affect the future. To pique your interest, SDRAM, Rambus and DDR SDRAM are not the only developments affecting the marketplace. You should also bear in mind that this writing is quickly becoming outdated even as it is being written.

During the late 1980s and early 1990s, memory compatibility issues were critical. No one even considered backward compatibility; in fact, we looked to purchase modules all made from the same chip lots just to reduce problems. Today, quality assurance aside, we’re able to mix different lots and even put PC 100 and PC 133 in the same machine and have it work. Not that we would, but it’s a sign of the advancements made thus far.

DRAM has been an important memory component in computer architecture due to its high density and low cost. In 1997, DRAM made up 67% of the total memory market. Currently, 85% of DRAMs are used for main memory. The other 15% are normally used in graphics applications, and this percentage was expected to grow to 19% in 2000, although those figures have not yet been released.

During the development of DRAM, one of the primary objectives was to offer the largest possible memory capacity at the lowest possible cost. This was achieved in two ways: first, by optimizing the design and manufacturing process to minimize manufacturing cost; second, by ensuring that the device could be manufactured in high enough volume to serve a potentially high volume market and achieve the greatest economies of scale.

Over the last three to five years, one of the largest problems facing system builders, as well as manufacturers of motherboards, video cards and other similar devices, has been the fact that microprocessor performance has been improving, or scaling, at a rate of 60% per year, while DRAM access time has been improving at less than 10% per year. This gap is the primary obstacle to improved computer and component performance. As faster and faster processors are developed, the pressure on DRAM manufacturers to produce a high performance memory system that keeps pace with these new processors increases dramatically.
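
To make the scale of that divergence concrete, here is a minimal sketch showing how the two improvement rates cited above compound against each other (the ten-year horizon is ours, purely for illustration):

```python
# Sketch: how the processor/DRAM performance gap compounds over time.
# The 60% and 10% annual improvement rates are the figures cited above;
# the 10-year horizon is arbitrary and purely illustrative.

cpu_rate, dram_rate = 0.60, 0.10

cpu, dram = 1.0, 1.0
for year in range(1, 11):
    cpu *= 1 + cpu_rate      # processor performance: ~60% per year
    dram *= 1 + dram_rate    # DRAM access performance: <10% per year
    print(f"year {year:2d}: relative gap = {cpu / dram:5.1f}x")

# After ten years the gap is (1.60 / 1.10) ** 10, roughly 42x.
```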

The DRAM development is breaking new ground:

  • High speed DRAM, designed for faster data rates, faster random access and revolutionary memory interface designs.
  • High functionality DRAM, which evolves with video applications, windowed graphics and the like.
  • Merged DRAM/logic technology, which integrates DRAM with control circuits or processors on the same chip to increase speed.

DRAM Architecture and Operation

Array Architecture
One of the advantages of DRAM is its simple structure. The single cell structure allows the adoption of a cross-point array architecture for the memory array, realizing the high-density implementation of a solid-state RAM. The memory cell is defined by the intersection of two lines, one for the data selection (the word line) and one for the data transfer (the bit line). Since the memory cells and bit line pairs are massively parallel, a huge memory bandwidth (possibly more than 100 gigabits per second) is available in the array.

Operation Scheme
The memory cell’s read operation consists of row access, column access, write-back, and pre-charge. Row access begins with row address latching and decoding when the RAS (row address strobe) signal goes low. This path is considered the multiplexing path of the RAS, which eventually activates the appropriate word line according to the row address bits. The selected word line connects the selected row of memory cells to the bit line pairs.

A fixed number of bits in the sense amplifiers are selected and transferred to the external bus according to the column address bits. The row and column address inputs are usually multiplexed to minimize the number of pins on the DRAM package. Since the read operation is destructive, in that the cell itself cannot restore the original signal, the data in the selected row must be written back to the memory cells in parallel with the column access path. Even though the write-back doesn’t affect the access time, it nevertheless imposes a serious limitation on the RAS cycle time.
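
The whole sequence can be summarized in a toy model. The sketch below walks through the four phases described above; the array size and contents are invented purely for illustration:

```python
# Toy model of a destructive DRAM read: activate a row into the sense
# amplifiers, select one column, write the row back, then precharge.
# Array dimensions and contents are invented for illustration.

ROWS, COLS = 4, 8
array = [[(r * COLS + c) % 2 for c in range(COLS)] for r in range(ROWS)]

def read(row, col):
    # Row access: the selected word line dumps the entire row onto the
    # bit lines and into the sense amplifiers; the cells lose their
    # charge in the process (the destructive read).
    sense_amps = array[row][:]
    array[row] = [None] * COLS        # cells are now discharged

    # Column access: the column decoder picks bits out of the sense amps.
    data = sense_amps[col]

    # Write-back: restore the full row from the sense amplifiers. This
    # does not affect access time, but it lengthens the RAS cycle.
    array[row] = sense_amps

    # Precharge: bit lines return to their reference level (not modeled).
    return data

print(read(2, 5))   # one bit read, at the cost of cycling a whole row
```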

DRAM Limitations
Currently, the single greatest limitation of DRAM is its performance. This involves two main aspects: the latency and cycle time of the row access, and the data rate of the column access. The first issue is unique to DRAM, while the second is a memory interface issue common to most other types of semiconductor memory.

Performance in Row Access
In the random access mode, where memory access starts with row access, performance is seriously constrained by the slow RC time constant involved in both charging and discharging the dynamic memory cell and array. Since a DRAM read is destructive, in that the cell cannot recover its data once it is discharged, the full signal must be written back to the memory cell at that same slow RC time constant. This time constant results primarily from the cell’s capacitance and the selection transistor’s resistance. Capacitance is often kept higher than 10 femtofarads because of the soft-error problem caused by bombardment of the cell by alpha particles from cosmic rays and radioactive impurities. The resistance problem becomes worse during write-back because of the asymmetry in the selection transistor’s operation. In addition to the RC constant of the memory cell itself, the RC time constant of the array, resulting from the driver hierarchy’s large fan-out and branching ratios, further degrades performance.
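
A back-of-the-envelope calculation shows the scale involved. The 10 femtofarad figure comes from the text above; the transistor resistance and the 99% charge target are our assumptions, chosen only to illustrate the shape of the problem:

```python
import math

# Back-of-the-envelope: time to (re)charge a DRAM cell through its
# selection transistor. C is the ~10 fF floor mentioned above; R is
# an assumed on-resistance, picked only to illustrate the scale.

C = 10e-15    # cell capacitance: 10 femtofarads (from the text)
R = 10e3      # assumed selection-transistor on-resistance: 10 kilohms

tau = R * C   # RC time constant, in seconds

# Charging to 99% of full signal takes about 4.6 time constants:
# V(t)/Vdd = 1 - exp(-t/tau)  =>  t = -tau * ln(1 - 0.99)
t_99 = -tau * math.log(1 - 0.99)

print(f"tau  = {tau * 1e9:.2f} ns")    # 0.10 ns per time constant
print(f"t_99 = {t_99 * 1e9:.2f} ns")   # ~0.46 ns for the cell alone
# The array's driver fan-out multiplies this many times over in practice.
```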

Performance in Column Access
The cycle time in column access determines the data rate once the data has been latched in the sense amplifiers. To improve this cycle time, we need data multiplexing (for writes) or de-multiplexing (for reads) according to the decoded column address bits, along with appropriate width and frequency conversion between the sense amplifier latches and the external I/Os. Widening the chip-to-chip connections increases cost, because chip area and package size grow in proportion to the number of I/O drivers and package pins.

Memory Refresh
Another limitation to be conquered is the memory refresh problem, which is unique to DRAM. Since the memory cell is dynamic, junction leakage and sub-threshold leakage in the cell capacitor and selection transistor require that the cell’s contents be refreshed at certain intervals. The cell’s dynamic nature is not a serious problem in present computer systems; however, in some applications, building a non-volatile or even a battery-backed memory system is difficult unless an SRAM or flash memory chip/module is used.
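
The cost of refresh is easy to estimate. In the sketch below, the row count, refresh interval and per-row refresh time are all assumed values, plausible for the era but not taken from the text:

```python
# Rough estimate of the bank time lost to refresh. All three numbers
# below are assumptions, not figures from the text.

rows            = 4096     # rows that must each be refreshed
interval_s      = 64e-3    # every row refreshed once per 64 ms
t_refresh_row_s = 60e-9    # one row refresh occupies the bank ~60 ns

busy = rows * t_refresh_row_s       # refresh time per interval
overhead = busy / interval_s

print(f"refresh overhead: {overhead:.2%}")   # ~0.38% of all bank time
```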

DRAM Development

High speed DRAM development
Research on high-speed DRAM started in the late 1980s at IBM and Hitachi. IBM’s DRAM achieved high speed memory access through an improved address latch and decoding circuit, a refined memory array architecture, a new sensing scheme, and data path circuits relying on CMOS technology. Hitachi’s DRAM achieved its high speed memory access by combining large-gain bipolar transistors with CMOS circuits using the Bi-CMOS process.

Faster Data Rates
In the 4-Mbit generation, EDO (extended data out) DRAM improved the data rate for column access by adding an extra pipeline stage in the output buffer while keeping the conventional asynchronous interface. The difference is that the data remains valid even at the end of the present CAS (column address strobe) cycle. The peak data rate is 266 Mbytes per second.

In the 16-Mbit generation, SDRAM (synchronous DRAM) employed a high-speed synchronous interface. By either pipelining the data path or by pre-fetching several bits at a time over wider internal data lines, designers improved the data rates for column accesses. They also improved random access performance by interleaving multiple banks of memory arrays on the same chip. The peak data rate at 66 MHz is 1.1 Gbps.
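
The quoted SDRAM figure is a simple product of transfer rate and interface width. In the sketch below, the ×16 width is an assumption; it happens to reproduce the 1.1 Gbps number:

```python
# Peak data rate = transfers per second x bus width. The x16 interface
# width here is an assumption; it reproduces the ~1.1 Gbps figure
# quoted above for 66-MHz SDRAM.

def peak_gbps(clock_mhz, width_bits, edges_per_clock=1):
    return clock_mhz * 1e6 * width_bits * edges_per_clock / 1e9

print(f"SDRAM @ 66 MHz, x16: {peak_gbps(66.6, 16):.2f} Gbps")  # ~1.07

# With edges_per_clock=2 the same formula covers the DDR-style
# transfers discussed later in this article.
```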

Faster Random Access
Over a very short period of time, several separate development efforts focused on improving DRAM row access performance to near SRAM levels, either by integrating a small amount of SRAM or by dividing the DRAM into multiple independent banks and overlapping, or hiding, the row access and pre-charge. These approaches are found in EDRAM (enhanced DRAM), which has a distributed cache, CDRAM (cached DRAM) and MDRAM (multi-bank DRAM).

Revolutionary Memory Interface

Rambus DRAM uses a packet-type memory interface, and with it realizes a peak data transfer rate of 500 Mbps per data I/O pin. When a more efficient protocol was employed, 600 Mbps per data I/O pin was realized.

High Functionality DRAM development
Until the last few years, rather than focusing on making DRAM faster, developers responded to the needs of special applications and very specific graphics handling, thereby setting trends in other forms of DRAM development.

As an example, video DRAM (VRAM) gained concurrent dual-port access by multiplexing the massively parallel data in the DRAM array before the data was de-multiplexed in the column data path. Operating at 10 MHz, the internal data rate is 41 Gbps. The downside, however, is that VRAM is expensive, since placing serial registers near the sense amplifiers complicates the design and consumes die area. VRAM is a JEDEC standard.

WDRAM (window DRAM) achieves improved graphics operation in a GUI environment by placing localized registers for serial access outside the DRAM array, with data for screen refresh traveling over a 2.1 Gbps internal bus. As a result, the design is smaller and less expensive than VRAM.

Merged DRAM Logic technology
It is extremely important in the development of DRAM to develop a DRAM macro that is competitive in terms of flexibility and density, and also to make the macro usable in a standard ASIC logic design environment. A DRAM macro is a combination of a DRAM array and the necessary support circuits, forming a functional memory unit with a smaller memory granularity for integration. The key is to create a chip architecture that utilizes the bandwidth available from the massively parallel data paths in the array. The on-chip architecture must be efficiently arranged for improved flexibility in connecting DRAM arrays and logic circuits. One method is to use a cross-bar architecture.

Manufacturer, laboratory and university research on IRAM (intelligent RAM) and PPRAM (parallel processing RAM) is exploring such architectures. IRAM combines processors and DRAM to provide high bandwidth for predictable accesses and low latency for unpredictable accesses. PPRAM integrates four RISC processors, DRAM, and cache SRAM.

Major DRAM Interfaces

SDRAM (synchronous DRAM)
Synchronous DRAM (SDRAM) features serial read/write operations synchronized with an external clock. Either pipelined operation or an on-chip parallel-serial conversion with multiple registers realizes high speed burst data transfer. The system can access up to 8 bits of serial data in one column operation. During a serial read, loading the next column address in page mode enables the next 8 bits to be read without a gap. Two-bank interleaving enables gapless access to a different row in another bank.

SDRAM employs a mode register that programs burst length, burst type, CAS latency, and so on, in order to accommodate different system requirements. A combination of registers and multiplexers uses parallel-serial conversion to output serial data at high rates. A 128-register, 16-Mbit SDRAM fabricated with a 0.55-um CMOS process easily achieved 180-MHz operation, and the architecture increased the memory array area only about 1.5 percent over a conventional array. SDRAM will provide main memory in engineering workstations and high end personal computers for quite some time before we see a mass change in memory types. SDRAM will also sustain the middle-range graphics memory area, thanks to the ability to add block write and write-per-bit (WPB) capabilities to the SDRAM chip.
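
As a concrete illustration of the mode register, the sketch below packs burst length, burst type and CAS latency into a single register value. The bit positions follow the commonly documented JEDEC SDRAM layout, but treat them as an assumption and consult the datasheet for any specific part:

```python
# Sketch: packing an SDRAM mode register word. The field layout
# (A0-A2 burst length, A3 burst type, A4-A6 CAS latency) follows the
# commonly documented JEDEC convention; verify against a real datasheet.

BURST_LENGTH = {1: 0b000, 2: 0b001, 4: 0b010, 8: 0b011}

def mode_register(burst_len, interleaved, cas_latency):
    value = BURST_LENGTH[burst_len]           # A0-A2: burst length
    value |= (1 if interleaved else 0) << 3   # A3: burst type
    value |= cas_latency << 4                 # A4-A6: CAS latency
    return value

# Burst of 8, sequential ordering, CAS latency 3:
print(f"{mode_register(8, False, 3):#012b}")  # -> 0b0000110011
```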

RDRAM (Rambus DRAM)
Rambus DRAM (RDRAM), a proprietary technology of Rambus, Inc., uses an enhanced, on-chip, parallel-serial conversion with two-bank interleaved operation similar to that of SDRAM. It also features an on-chip phase-locked loop and a Rambus I/O interface that has a 600-mV swing with bus termination and a packet protocol. At the onset, the Rambus interface achieved a 500-Mbyte/second data transfer rate.

RDRAM technology uses a narrow bus topology operating at a high clock rate to solve the memory bandwidth problem. A Direct Rambus channel consists of a controller and one or more Direct RDRAMs connected together via a common bus. The controller is located at one end, and the RDRAMs are distributed along the bus, which is parallel-terminated at the far end. The signaling technology used for the high-speed channel signals is referred to as Rambus signaling logic (RSL). Each RSL signal wire has equal loading and fan-out, and the wires are routed parallel to each other on the top trace layer of a printed circuit board with a ground plane on the layer underneath.

The channel uses 18 data pins cycling at 800 Mbps per pin to provide a low-cost, high-bandwidth bus. The bus requires a total of 76 controller pins, including all signal and power supply pins. A key property of the Rambus channel is that it permits a much higher frequency of operation than the matrix topology used by SDRAMs. Because a full-bandwidth channel can be constructed using a single Direct RDRAM, the minimum memory granularity is that of a single chip. This characteristic is quickly becoming important as DRAM develops and progresses into the 256-Mbit generation and beyond.
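
The channel’s headline bandwidth follows directly from the pin count and the per-pin rate. In the sketch below, the split of the 18 pins into 16 data bits plus 2 ECC bits is our assumption; the text gives only the total:

```python
# Direct Rambus channel arithmetic from the figures above: 18 data
# pins, each cycling at 800 Mbps. The 16 + 2 (data + ECC) split is
# an assumption; the text only states the 18-pin total.

pins, per_pin_mbps = 18, 800

raw_gbps = pins * per_pin_mbps / 1000
print(f"raw channel rate: {raw_gbps:.1f} Gbps ({raw_gbps / 8:.2f} GB/s)")

data_gbps = 16 * per_pin_mbps / 1000   # assuming 2 of 18 pins carry ECC
print(f"data-only rate  : {data_gbps:.1f} Gbps ({data_gbps / 8:.2f} GB/s)")
```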

CDRAM (Cached DRAM)
Cached DRAM (CDRAM) is a development that places a localized, on-chip cache, with a wide internal bus composed of two sets of static data transfer buffers, between the cache and the DRAM. This architecture achieves concurrent operation of DRAM and SRAM synchronized to an external clock. Separate control and address input terminals for the two portions enable independent control of the DRAM and SRAM, so the system achieves continuous and concurrent operation of both. CDRAM can handle CPU, direct memory access (DMA) and video refresh traffic at the same time by utilizing half-time multiplexed interleaving through a high-speed video interface; the system transfers data from DRAM to SRAM during the CRT blanking period. Graphics memory, as well as main memory and cache memory, are unified in the CDRAM. As you can see, CDRAM can replace both cache and main memory, and it has already been shown that a CDRAM-based system has a 10 to 50 percent performance advantage over a system based on a 256-Kbyte cache.

IRAM (Intelligent RAM)
Intelligent RAM, or IRAM, merges processor and memory into a single chip in order to lower memory latency and increase bandwidth. It is a research model for the next generation of DRAM and has been evaluated using Alpha 21164 processors. The reasoning behind placing a processor in DRAM, rather than increasing the on-processor SRAM, is that DRAM is approximately 25 to 50 times denser than the cache memory in a microprocessor. Merging a microprocessor and DRAM on the same chip provides some rather obvious opportunities in performance, energy efficiency, and cost. It affords a reduction in latency by a factor of 5 to 10, an increase in bandwidth by a factor of 50 to 100, and an advantage in energy efficiency by a factor of 2 to 4. Add to this a qualified cost saving as the result of removing superfluous memory and reducing board area. Although the above figures are estimates based on early testing and present technology, it would appear that IRAM holds a lot of promise.

SLDRAM (Synchronous-Link DRAM)
SLDRAM offers high sustainable bandwidth, low latency and low power consumption; it is easily upgraded and supports large hierarchical memory configurations. For video, graphics, and telecommunications applications, SLDRAM provides multiple independent banks, fast read/write bus turnaround, and the capability for small, fully pipelined bursts. SLDRAM addresses the requirements of all major high volume DRAM applications. SLDRAM is an open standard to be formalized in IEEE and JEDEC specifications. Open standards permit manufacturers to develop varying products that address emerging applications and niche opportunities, while inspiring the competition that will ensure the continued rapid pace of DRAM development at the lowest possible cost.

A typical SLDRAM architecture uses a multi-drop bus with one memory controller and up to eight loads. A load can be either a single SLDRAM device or a buffered module with many SLDRAM devices. Command, address and control information travel on the unidirectional command link. The data link is a bi-directional bus for the transmission of write data from the controller to the SLDRAM, and of read data from the SLDRAM back to the controller. Two sets of clocks allow control of the data link to pass from one device to the next with a minimum gap. Later versions of SLDRAM add a buffer on the command link and data link to provide higher memory bandwidth and larger memory depth.
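
As a structural illustration only, here is a toy model of such a channel. All names and the packet format are invented; only the topology (one controller, up to eight loads, a unidirectional command link, a bidirectional data link) comes from the description above:

```python
# Toy model of an SLDRAM channel: one controller and up to eight loads
# sharing a unidirectional command link and a bidirectional data link.
# Class and method names are illustrative, not from any specification.

MAX_LOADS = 8

class Device:
    def handle(self, packet):
        # Data link: bidirectional, so read data flows back this way.
        return f"ack: {packet}"

class SLDRAMChannel:
    def __init__(self):
        self.loads = []

    def attach(self, load):
        # A load is one SLDRAM device or one buffered multi-device module.
        if len(self.loads) >= MAX_LOADS:
            raise ValueError("an SLDRAM channel supports at most 8 loads")
        self.loads.append(load)

    def command(self, load_id, packet):
        # Command link: unidirectional, controller to devices only.
        return self.loads[load_id].handle(packet)

channel = SLDRAMChannel()
channel.attach(Device())
print(channel.command(0, "ACTIVATE row 12"))
```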

Future DRAM Trends

Today more than ever, changes in DRAM technology are driven by the need to close the ever widening performance gap relative to the microprocessor. DRAM data rates should be directly proportional to density increases. As an example, if designers were to replace four 4-Mbit DRAMs with one 16-Mbit DRAM, the 16-Mbit DRAM should have a data rate four times higher in order to maintain the same memory system performance. High speed DRAM should also provide smaller memory granularity for a given bandwidth requirement.
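
That four-to-one example can be checked, and generalized, with one line of arithmetic. In this sketch the absolute data rates are placeholders:

```python
import math

# Sketch of the granularity argument above: a chip that replaces four
# must supply four times the data rate to keep system bandwidth
# constant. The absolute rates below are placeholders, not real parts.

def chips_needed(system_gbps, per_chip_gbps):
    # Minimum memory granularity for a given bandwidth requirement.
    return math.ceil(system_gbps / per_chip_gbps)

print(chips_needed(1.0, 0.25))  # four of the smaller-generation chips
print(chips_needed(1.0, 1.00))  # one chip, if its data rate scaled 4x
```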

At the moment, no one contender has captured the crown in the race for the highest speed DRAM, although the top two contenders, DDR-SDRAM and Rambus, show a lot of promise. Both have their niche in the market, depending upon motherboard chipset development.

While there are, in fact, four major contenders for the next generation of high-speed DRAM, the first two are in a heated head-to-head race. While we believe that, in the end, first place is likely to be taken by DDR-SDRAM (double data rate SDRAM), which uses a synchronous RAS and CAS interface comparable to that of the original SDRAM, Rambus is not out of the race just yet. With DDR-SDRAM, the data rate has been improved by transferring data on both edges of the clock. A DDR SDRAM with a ×16 I/O interface operating at a 100-MHz clock (200-MHz data rate) can provide 3.2 Gbps.

The original Rambus design has seen its own enhancements with the release of Direct Rambus DRAM, an improved version that provides a peak data rate of 13 Gbps per chip. The data rate comes from the 400-MHz clock (800-MHz data rate) and the 16-bit bus width.
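
Both headline figures reduce to the same clock × edges × width product, reusing the formula from the SDRAM sketch earlier, now with data moving on both clock edges:

```python
# Peak-rate check for the two front runners, using the figures quoted
# above (clock rate, both clock edges, 16-bit interface in each case).

def peak_gbps(clock_mhz, width_bits, edges_per_clock):
    return clock_mhz * 1e6 * width_bits * edges_per_clock / 1e9

ddr    = peak_gbps(100, 16, 2)   # 100-MHz clock, 200-MHz data rate
rambus = peak_gbps(400, 16, 2)   # 400-MHz clock, 800-MHz data rate

print(f"DDR-SDRAM    : {ddr:.1f} Gbps")    # 3.2 Gbps
print(f"Direct RDRAM : {rambus:.1f} Gbps") # 12.8 Gbps, quoted as ~13
```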

There are two other contenders.

SLDRAM originated from RamLink (IEEE Standard 1596.4), which was developed by applying SCI (the scalable coherent interface) to the memory bottleneck problem. This system adopts one bi-directional bus for the data, as there are usually long bursts of write or read data that would otherwise disturb the balance between incoming and outgoing links.

The fourth contender is merged DRAM/logic technology. This development is driven by the need to control power consumption and the need for a small footprint, particularly for mobile applications. Power consumption in the memory system is attributable to both intra-chip and inter-chip data transfer. It is extremely important to reduce power consumption without sacrificing memory bandwidth between the microprocessor and DRAM.

The transition to merged DRAM/logic technology is accelerated by the smaller memory system capacities of low-end applications, but it may be slowed for both technical and commercial reasons. From a technical standpoint, DRAM and logic semiconductor processes are different, and merging the two at a reasonable cost still presents many challenges. From a commercial standpoint, system manufacturers rely on known technical standards as well as reliable sources of secure, stable, high volume supply. A transition will not occur until the merged technology can provide a major improvement over the current alternative of incremental solutions.

Conclusion(s)

Anyone remotely interested in memory technology and its advances can draw their own conclusions as to where all of these developments will lead. Regardless of the conclusion you may reach, you might not want to bet your lunch on it; you just may wind up going hungry. There may be just a few investors who bet on Rambus and sold their Beemers to cover their margin accounts, only to have the stock recover.

Over the last year or so, we saw a partnership between Intel Corporation and Rambus that started in a fervor of high expectations and almost ended with a loud thump when it appeared that Rambus might not be all that it was cracked up to be. Rambus was on a high, and Intel was producing high-end motherboards to support the new technology.

Shortly thereafter came the clamor over DDR-SDRAM and how it was going to blow the entire industry wide open. Unfortunately, there was one problem: no one bothered to partner with the major motherboard manufacturers to ensure that there were motherboards to support the new technology.

Making Matters Worse!

Unfortunately, during the course of the last twelve to fourteen months, there’s been a little more going on than new memory developments. Two of the more ominous looking problems involve patent claims. The first is Micron Electronics’ (no, not Micron Technology) alleged patent on the utilization of the “WP” pin of the SPD chip on pin 81 of the DIMM module. This pin connection is specified in the Intel PC 100 standard for the SPD “write protect” control function. Unfortunately, module manufacturers are caught between a rock and a hard place: they must conform to the PC 100 standard, but they don’t want to pay Micron a royalty. There may be some light at the end of the tunnel, though. While Micron did file its patent in 1993, JEDEC has located documents revealing that other discussions involving module “write protect” took place in early 1993. The question now becomes one of prior art as opposed to first filing of a patent. Thus far, Micron Electronics has not sued anyone for patent infringement, but it is pursuing license agreements.

The second legal entanglement involves Rambus, Micron Technology, Hyundai and a few others. Mike Farmwald and Mark Horowitz, the Rambus founders, filed comprehensive patent claims in the early 1990s relating to memory bus architectures and synchronous DRAM technology. These patents cover all synchronous memories and their controllers, which means “all” synchronous DRAM as well as DDR devices, modules and system memory controllers. In 1999, the Patent Office granted a laundry list of patents to Farmwald and Horowitz, assigned to Rambus. Based upon opinions filed with the courts, it’s not likely that the DRAM industry will be able to escape their reach. Micron Technology commenced a legal action against Rambus alleging an anti-trust violation, arguing that Rambus was using its patents to force the industry into choosing the more expensive Rambus memory. Shortly thereafter, Hyundai Semiconductor commenced an action against Rambus based upon the theory of “prior art”: Hyundai claims that Rambus learned the details of synchronous memory at a 1995 JEDEC committee meeting, long before it filed the amendment to its patent. All of this, naturally, resulted in counter-claims and cross-claims between the parties. As this writing is being finalized, it appears that the parties involved may be settling their differences.

Just six months ago, we heard rumors from a source who believed he had his own crystal ball and was able to foresee the future. With that, he came out with the very bold statement that, as DDR-SDRAM technology matures, VIA, ALi and ServerWorks would capture the high speed market, leaving Intel as a minor market player clinging to Rambus and the PC 133 technology of the not too distant past. The obvious implication was that Intel would not embrace, let alone support, DDR-SDRAM technology. Apparently Intel wasn’t looking into the same crystal ball. Egg on the face, anyone?

Mainstreaming DDR-SDRAM will depend on the willingness of chipset manufacturers and system OEMs to get behind the technology. We had hoped to see a large showing at the fall Comdex in November 2000, but unfortunately that wasn’t to be.

Looking for a prediction? As far as our crystal ball is concerned, PC 133 memory will continue to dominate most of the low end, mid-range and lower high end markets for the better part of 2001, because computer system sales at the moment are price driven rather than quality driven. We feel Rambus will still shine brightly in many of the true high end workhorse units, and we don’t envision much by way of DDR-SDRAM support until the third and fourth quarters of 2001. This isn’t to say that there won’t be a sprinkling of motherboards available that support DDR memory, but rather that there won’t be a massive influx of them.

As for pricing and stability, we believe it may be a while before you see pricing any lower than it is right now. You may see some increases for the better modules, but in general, pricing has stabilized. As far as availability is concerned, this may be a rocky road. In 2000, we saw the memory manufacturers gear up after all of the storms in Asia, only to have many of the system manufacturers back off due to a lack of sales. This flooded the market with memory modules, which in turn drove pricing down. In 2001, we feel this will taper off and pricing will stabilize, as will availability. We do see changes in demand, though. In 1999, the major movers were 32MB and 64MB modules; in 2000, 64MB and 128MB modules. It appears that this will now shift again, to 128MB and 256MB, as more end users move to operating systems such as Windows 2000 or delve into the world of graphics and sound.



