The Technical Aspects of DDR and RDRAM
“RDRAM/DDR: Part I, Technical Aspects”
Ed Stroligo – 5/21/00
There are three major areas of contention between these two memory systems:
- The technical aspects
- The production/cost aspects
- The political aspects
With the exception of very low-end PCs and servers, technical factors are the least important in determining which memory system will win out.
For the average desktop, there is no killer technical benefit or flaw to favor one over the other; there are only advantages and disadvantages.
The production/cost aspects are more important, but those are and will be heavily influenced by the political aspects.
The political aspects are by far the most powerful factors in determining which will be the next memory standard for PCs, or if there will be one.
Intel/Rambus wants RDRAM; the memory establishment on the whole doesn’t. Each has its priorities and agendas; both have legitimate and less legitimate reasons for doing what they are doing.
Both want to get their way; whichever caves in to the other will get hurt. Neither can prosper without the other, though. So they are playing a game of chicken, waiting to see who flinches first.
Who will win? It’s pretty clear what the fight will look like, but it’s too early to predict a winner.
How The Two Systems Differ and General Summary
- RDRAM is a narrow, high speed serial connection.
- SDRAM/DDR is a wider, lower speed parallel connection. (DDR is simply SDRAM that squeezes out two actions per clock cycle; something RDRAM already does. I’ll use the term “SDRAM” when what I’m saying applies to both current SDRAM and DDR, and “DDR” when it applies only to that).
Just about all the major advantages and disadvantages between RDRAM and SDRAM stem from this difference.
These are two very different ways of handling data flow. RDRAM does a little, very quickly. SDRAM does much more at a time, but much more slowly.
The wiring on RDRAM modules is technically more challenging, but there is much less of it. It operates best with simple configurations; performance degrades as you add more devices. This is typical of serial devices.
SDRAM’s wiring is inherently more complicated than RDRAM’s on simple configurations. However, that initial complication allows it to handle multiple devices better than RDRAM. This is typical of parallel devices.
The next major upgrade step would be simpler for RDRAM than SDRAM. After that, though, both would face substantial technical challenges of equivalent complexity. RDRAM would run into the difficulties SDRAM already handles (complicated tracing). SDRAM would run into the difficulties RDRAM already handles (high speed).
Both face the underlying problem that both use what is now an antiquated memory system which inherently is nowhere near fast enough to handle the demands of CPUs and needs a major, extremely expensive overhaul.
Taken all together, the two systems have been roughly equivalent up to now. Increases in bandwidth/FSB alone yield little proportionate improvement in overall computer performance compared to items such as processor MHz or hard drive rotational speed.
DDR will take at least a temporary small lead compared to current RDRAM systems, but will likely relinquish at least the theoretical lead to near-future motherboards with bus speeds high enough to take full advantage of dual-channel RDRAM. Whether that will have a significant practical effect remains to be seen, and will probably depend greatly on individual computer use.
However, technical advantages and disadvantages are highly prone to be overridden by cost and/or political factors.
On the whole, with rare exception, for every advantage one system has over another, there is a corresponding disadvantage. Broad generalizations about the superiority of either based on one or two factors cannot be accurately made.
Whether one would be better than the other for you from a purely technical aspect depends on the type of machine you have and what you do with it. Non-technical aspects are likely to carry much greater weight in individual decisions.
Bandwidth

Bandwidth is how much data can be transmitted in a given period of time. Whether that is good or not depends on how much needs to be transmitted.
If you have 1,000 cars going along a single country lane in an hour, traffic would probably go much faster if you widen the lane. However, turning it into an eight-lane highway will not make matters much better than a two-lane road, unless you expect 10,000 cars an hour along that route shortly.
More bandwidth all by itself is not necessarily a benefit. It depends on how much traffic you have now, and how much you expect during the lifetime of the road.
These are the designed maximum bandwidths for the memory systems currently available or expected to be available within the next six months.
|Memory Type|Bytes transferred per clock cycle X operations per clock cycle X clock cycles/second|Maximum Bandwidth|
|---|---|---|
|PC100|8 bytes/cycle X 1 X 100MHz|800 million bytes/sec|
|PC133|8 bytes/cycle X 1 X 133MHz|1,064 million bytes/sec|
|PC1600|8 bytes/cycle X 2 X 100MHz|1,600 million bytes/sec|
|PC2100|8 bytes/cycle X 2 X 133MHz|2,128 million bytes/sec|
|RDRAM (Single Channel)|||
|PC600|2 bytes/cycle X 2 X 256MHz|1,024 million bytes/sec|
|PC700|2 bytes/cycle X 2 X 356MHz|1,424 million bytes/sec|
|PC800|2 bytes/cycle X 2 X 400MHz|1,600 million bytes/sec|
|RDRAM (Dual Channel)|||
|PC600|4 bytes/cycle X 2 X 256MHz|2,048 million bytes/sec|
|PC700|4 bytes/cycle X 2 X 356MHz|2,848 million bytes/sec|
|PC800|4 bytes/cycle X 2 X 400MHz|3,200 million bytes/sec|
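The table's formula is simple enough to check in a few lines of Python; this is just the arithmetic above, nothing RDRAM- or SDRAM-specific:

```python
def max_bandwidth(bytes_per_transfer, transfers_per_clock, clock_mhz):
    """Designed maximum bandwidth, in millions of bytes per second."""
    return bytes_per_transfer * transfers_per_clock * clock_mhz

# PC133 SDRAM: 8-byte bus, one transfer per clock, 133MHz
pc133 = max_bandwidth(8, 1, 133)       # 1,064 million bytes/sec
# PC2100 DDR: same bus, two transfers per clock
pc2100 = max_bandwidth(8, 2, 133)      # 2,128 million bytes/sec
# PC800 RDRAM, dual channel: 2 bytes/channel x 2 channels = 4 bytes
pc800_dual = max_bandwidth(4, 2, 400)  # 3,200 million bytes/sec
```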
Dual-channel DDR motherboards, though they have been built, are unlikely to become a mainstream desktop option, mostly due to the difficulty of routing the number of traces needed for two separate parallel channels, and to current motherboard-manufacturer indifference.
PC2600 is still in the initial-investigation stage and would not arrive until sometime in 2001; nor would the anticipated next RDRAM generation. DDR-2 and QDR are still in the relatively early planning stages, and their eventual emergence is far from certain.
The designed maximum memory bandwidths of motherboards currently available or likely to be out within the next six months are:
|Motherboard Type|Maximum Designed Memory Bandwidth|
|---|---|
|Intel 440BX|800 million bytes/sec|
|AMD 750**|800 million bytes/sec|
|Intel 81x/820/840***|1,064 million bytes/sec|
|Via Apollo Pro/Pro+/KX-KZ133**|1,064 million bytes/sec|
|Micron Samurai/Via Apollo 2000/AMD 760|2,128 million bytes/sec|
|Intel Tehama (dual-channel Willamette motherboard)|3,200 million bytes/sec|
**The AMD boards based on the EV6 bus do have a theoretical maximum bus bandwidth of 1,600 million bytes/second, but no memory standard for those boards reaches that speed. Ironically, the EV6 is a better theoretical platform for RDRAM than the current Intel motherboards.
***The 840 chipset is a bit confusing. It has dual channels of RDRAM, but that is justifiable solely for latency purposes (see below). The dual channels can provide up to 3.2GB/sec of bandwidth, but then you’d need a system bus of 3.2GB/sec to use it. Intel is very coy in its documentation about this; it doesn’t use the term “FSB” in the 840 literature, but why would the 400MHz bus of the Tehama be a big deal promising more than current equipment if it had already been implemented in the 840? Excess bandwidth can be put to some other purposes, though (see latency below).
Note that the FSB of the current Intel motherboards is a good deal lower than the maximum possible bandwidth of a single-channel, let alone a dual-channel, RDRAM system.
While one should not read too much into that (maximum bandwidth is only a theoretical maximum rarely if ever encountered in real life, and having a bit extra bandwidth does help with some other CPU activities like prefetching), we really don’t know for certain how well a dual-channel RAMBUS system can operate until we have a Tehama in front of us for testing. An 840 is not an accurate approximation of it.
The recent benchmarking done by Bert McComas between a DDR system and the 840 is fine and valid as a comparison of DDR and the current 840. It does show DDR is on the whole a bit better than the 840, sometimes a good deal better. However, that benchmarking is not valid for, or even terribly indicative of, any comparison between DDR and future dual-channel RDRAM systems with much higher FSBs.
What might we expect from a 400Mhz bus system? For now, the only source we have is Pat Gelsinger, who is the Intel Vice President and General Manager, Desktop Products Group. Now you may think, “this man is biased.” Actually, he’s probably the most pro-RAMBUS executive Intel has; after all, he signed the contracts with them.
However, whether you think Intel is heaven-sent or spawn of Satan, do you think Pat Gelsinger would grossly underestimate the improvement in performance? No.
So no matter what you think of Intel, it’s probably pretty safe to assume that whatever he said is about the maximum you could expect (and I realize the benchmarks mentioned include some of Intel’s own homebrewed ones).
This is only meant to give us a very rough idea as to whether Willamette will blow the doors off everything just because of its 400MHz bus. It’s not meant to be authoritative, and is no substitute for a real test. We’re only doing this because we don’t have anything even close to test yet.
What did he say? (From http://www.intel.com/pressroom/archive/speeches/pg021500.htm:)
“And thus, on the initial platforms that came out, we just showed two to five percent performance. And that’s what’s shown by the first bar here.”
“The data we’re showing here is the average of ten industry standard benchmarks. So this would be things like SYSmark 2000, Business Winstone, Multimedia Mark, etc. . . . we’re seeing very large performance gain in real-world benchmarks as we go to higher and higher frequencies of platforms. So we see numbers in the ten to 20-percent range as we go up to one gigahertz Pentium III processors and beyond, and we see numbers in the 15- to 30-percent range on those benchmarks as we go up to Willamette platforms and beyond.”
15-30% max. That may sound really good until you realize he’s comparing the performance to PC100, not DDR.
This shouldn’t be surprising. Improvements in memory translate into only small gains on standard application-based benchmarks. Gelsinger said, “today’s applications were designed for today’s platforms, not for tomorrow’s memory.” (Actually, as we’ll see below, today’s applications are mostly designed for yesterday’s and the day before’s platforms).
If you run a system at 100MHz FSB, then run it at the same CPU speed at 133MHz FSB, you only get about a 3% improvement. The reason is that caches and CPUs are designed to anticipate calls for information, to minimize what is a very slow process of fetching data from main memory. As an extremely rough guesstimate based on this crude measurement, let’s assume we get about a 12% improvement from DDR over PC133, and 15% over PC100.
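As a cross-check on that guesstimate, here is a crude Amdahl's-law sketch; treat the memory-bound fraction it infers as illustrative only, since the model ignores latency and prefetching entirely:

```python
def mem_bound_fraction(bw_ratio, observed_speedup):
    """Amdahl's law, inverted: infer what fraction of runtime was
    limited by memory bandwidth, given an observed speedup."""
    return (1 - 1 / observed_speedup) / (1 - 1 / bw_ratio)

def projected_speedup(fraction, bw_ratio):
    """Speedup if only the memory-bound fraction scales with bandwidth."""
    return 1 / ((1 - fraction) + fraction / bw_ratio)

# A ~3% gain from a 33% FSB bandwidth increase implies roughly 12%
# of runtime is actually bandwidth-limited.
f = mem_bound_fraction(133 / 100, 1.03)
# Doubling bandwidth (PC133 -> PC2100 DDR) then projects to ~6%,
# on the low side of the guesstimate above.
ddr_gain = projected_speedup(f, 2.0)
```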
Again, very roughly, using what we know is a partially loaded dice benchmark conglomeration, the improvement of a 400Mhz bus with RDRAM over 266Mhz DDR should be in the ballpark range of 10 to 15% faster than its real competition. It probably won’t be more, it could be less. And that will require two channels of RAMBUS with two (smaller-sized) RAMBUS RIMMs for every DDR SDRAM module. Not exactly Armageddon Now for DDR, especially if cost factors remain important.
Again, we’ll have to wait until we get a real Tehama board to come up with a real, fair comparison, but we know that even a dual channel RDRAM system won’t be mind-boggling better than a DDR system, though probably a good deal better than you might surmise from current DDR/840 systems comparisons.
No matter what the numbers finally end up being, it’s almost certain the results you’d get would depend pretty heavily on what applications or games you use.
The reality is software manufacturers like to sell to as many people as possible while taking at least some advantage of hardware developments. For many business applications, that means usually writing code that will perform at least not too badly with what was a good machine three years ago.
Games and professional graphics/media will tend to take advantage of hardware improvements more aggressively, but they usually aren’t exactly on the bleeding edge either. The biggest bandwidth user I’ve been able to identify so far is Quake, and it only chews up 350-400MB a second.
Other applications do get really stymied by current hardware limitations, so if a hardware advance helps them out a lot, they will be even more likely to code for it.
The sad reality, though, is that programmers aren’t exactly champing at the bit to slave away hand-coding for some new innovation. If you can just click a button on your compiler to include SSE or SSE2 or more use of bandwidth, great; but then you need a compiler that can do that, and those aren’t usually available well before new CPUs or motherboards are out.
So don’t expect to see much that really requires either DDR or Willamette bandwidth anytime soon.
However, the existence of more bandwidth will lead, over time, to more people taking advantage of it, so the general adoption of RDRAM over DDR could lead to better performance in certain cutting-edge applications that can and do take advantage of greater bandwidth.
Of course, to take full advantage of whatever advantage there is in increased bandwidth, you do need PC800 RDRAM, which is easier said than done. The yields of PC800 RDRAM remain very low; as of just a couple weeks ago, only 20% of the RDRAMs shipped by Samsung are binned as PC800. See http://www.ebnews.com/story/OEG20000505S0054.
I’ll discuss some possible reasons why in the next section, but for now, many of the RIMMs going into OEM machines are PC600 and 700 RIMMs, and if that keeps up, there goes some to all of the bandwidth advantage over DDR.
But bandwidth is not the be-all and end-all of memory. You also have . . .
Latency

Which is better? Actually, they’re both lousy.
Current memory, no matter what it is, is a good deal slower than the processor. That’s why you have things like L2 caches and prefetching, to try to make up for chips that take forever (in the nanosecond world) to come up with data on demand. You can’t do much of anything in less than 40ns in main memory, which isn’t too good when your processor has clock cycles of about 1ns.
That is why CPU manufacturers have been forced to do things like have on-die cache and speculative prefetching. Otherwise, the processor would be sitting around most of the time waiting for memory to get its act together.
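To put the mismatch in numbers: with ~40ns main memory and a ~1ns processor clock, every uncached access costs about 40 CPU cycles. The standard average-access-time formula shows why caches help so much (the 95% hit rate here is an assumed, illustrative figure):

```python
def avg_access_ns(hit_rate, cache_ns, memory_ns):
    """Average memory access time: hits are cheap, misses pay the
    full main-memory penalty."""
    return hit_rate * cache_ns + (1 - hit_rate) * memory_ns

stall_cycles = 40 / 1              # ~40 CPU cycles lost per uncached access
amat = avg_access_ns(0.95, 1, 40)  # 2.95ns average with a 95% hit rate
```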
Which is better for latency?
RDRAM has a few latency advantages over SDRAM. It has more banks of memory than SDRAM, so the odds of a page miss are lower. (On the other hand, more banks mean a bigger die and a costlier DRAM.) Better location addressing helps to prevent some misses. Under ideal circumstances, RDRAM may be able to react a little quicker in certain operations than SDRAM, though not by much.
With further optimization (which might be part of Willamette/Tehama), some of RDRAM’s extra bandwidth could be used for better speculative prefetching. This is already done to some degree with the 840 chipset; expect to see more of it with Tehama.
However (there’s always a but), RDRAM also has a few significant latency disadvantages due to its serial nature. Since it only transmits 16 bits (2 bytes) per transfer, it takes two transfers for each 32-bit word, as opposed to SDRAM’s single 64-bit transfer. This can be remedied by using two channels, but of course, then you need two RDRAM RIMMs.
More importantly, since it is a serial device rather than a parallel one like SDRAM, when multiple modules are in use, all requests and responses must wend their way through the modules between the CPU and the target before reaching their destination. This slows things down compared to the direct, parallel connection SDRAM has.
To keep things from getting chaotic, RDRAM deliberately slows down the responses of all the modules to the length of time it takes to communicate with the chip furthest away.
Practically speaking, the more modules you use, the more latency you have. If you have something like a server, this is the last thing you want. This is a big reason why even Intel is using DDR for its Willamette-class server processor Foster.
On the other hand, though, if you use two channels with just one module each, these latency problems are kept to a tolerable minimum. As things stand now, multiple modules are probably not a very good idea. Unfortunately, it is those who use applications that might take better advantage of bandwidth who are the most likely to stuff their machines with a lot of memory. Again, by additional speculative prefetching made possible through increased bandwidth, Willamette/Tehama may reduce other forms of latency sufficiently to make up or more than make up for this.
However, so long as you’re not talking about server-type amounts of memory, the additional latency is a negative, but not a crippling factor, even with current systems.
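The "everyone waits for the farthest chip" behavior can be sketched as a toy model; the delay figures are made-up illustrative units, not real RDRAM timings:

```python
def channel_latency(modules, hop_delay, base_delay):
    """All modules on the channel answer at the worst-case round-trip
    time, i.e. the time to reach the farthest module."""
    return base_delay + modules * hop_delay

one_rimm = channel_latency(1, 2, 40)    # 42 units
four_rimms = channel_latency(4, 2, 40)  # 48 units: every access pays
```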
You should not use “latency” as a mantra when it comes to desktops. It’s even sillier to think there is only one simple explanation. You are dealing with millions of different transactions a second, going to all sorts of different places on the memory map and requiring all different kinds of operations. It would be silly to think that a simple rule would apply to everything.
Besides, it’s not like a transaction waves its arms and tells you, “I’m being terribly latent today.” Measuring latency in a real-life situation is rough, and there are many factors that go into computer performance. The best you can do is to test the equipment you’ll actually be using under the conditions you’ll normally be using it, or look for benchmarking that approximates the same.
Latency is a factor, not THE factor, in most cases. Depending on how much RAM you plan on having and what you are going to do with it, it could be a big, or a minor factor. Even if it is fairly big, depending on what you are doing, there might be countervailing factors in RDRAM’s favor. There might not.
Heat

Yes, RDRAM modules do get hot, and the supporting chips can get even hotter. Here’s what Intel has to say about it (from ftp://download.intel.com/design/chipsets/applnots/29802701.pdf):
“The Intel 840 chipset is the newest addition to Intel’s line of Pentium III processor chipsets. Some previous generations of Pentium III processor chipsets generated insufficient heat to require an enhanced cooling solution***. . . . . As the market transitions to higher speeds and higher bandwidths with enhanced features, the heat generated by these devices will introduce new thermal challenges for system designers.” (page 7)
***”We’d like to apologize to all those people who complained to us about their computers not running hot enough. With the help of our friends at RAMBUS, we’ve fixed that.”
If you look at pages 11 and 12, you find that the maximum temperatures tolerable to the RIMMs and their related circuitry range from a little below the boiling point of water to a little above. In the case of the RIMM itself, the maximum temperature of the thermal plate is 93C, or about 200F.
The whole piece is written as a guide to increased cooling of computers using the 840 chipset.
So all that stuff about RDRAMs “heat spreader” is bunkum? Actually not, but the reason why will not comfort you. From the same Intel document, page 13:
“In previous generations of chipsets in which quad flat pack (QFP) packages may have been the primary package type, most power dissipation was through the plastic case of the package and into the surrounding air. With the advent of ball grid array (BGA) packaging for chipsets, most thermal power dissipated by the chipset typically flows into the motherboard on which it is mounted (when thermal or center balls are present). The remaining thermal power is dissipated into the ambient environment by the package itself (with or without thermal enhancement). The MBGA packages used in the Intel 840 chipset continue this trend.
“The amount of thermal power dissipated either into the board or by the package varies depending on how well the motherboard conducts heat away from the package and whether the package uses thermal enhancements. While package thermal enhancements typically serve to improve heat flow through the case via a heat sink, how well the motherboard conducts heat away from the package is strictly a function of the motherboard design. . . .
“Good ground paths to areas of the board away from the BGA will distribute heat more efficiently.
“The size of the motherboard, the number of copper layers, and the thickness of those layers. In some cases, the use of “2-ounce copper” on the ground plane has been successful in improving the thermal conduction by reducing the case temperature.”
So most of the heat from RIMMs actually doesn’t go into the air, as it would with SDRAM chips, but straight into the motherboard. While they don’t quite say that a thicker motherboard is required, they do point out that more layers help, and their reference motherboard is a six-layer board.
Yes, they need additional cooling; Intel says so. However, it’s also true that the power used by RDRAM chips is less than that of PC133 chips and roughly the same as DDR.
So why do RDRAM chips get so hot? The reason is that SDRAM’s design spreads the workload out among the chips fairly evenly, and RDRAM’s does not, so you can often have one part of an RDRAM module doing all the work, drawing most of the power, and thus generating a lot of heat in one spot. The heat “spreader” (which acts as a very low-end heatsink) does spread the relatively minor proportion of heat that radiates outwards over a broader surface area.
Remember, though, most of the heat is not going into that heat sink/spreader; the BGA-2 design transmits most of the heat into the motherboard, so it would be prudent to give that area some cooling attention should you ever own one of these. (BTW, there’s a good chance future DDR generations will go the same BGA route.) It’s certainly an annoyance, but not a killer problem; just something you have to keep in mind and account for.
Granularity

Granularity is a fancy term for “how many chips do you need to make a module that works.” SDRAM usually requires 8 chips to transmit 64 bits at a time, though some models need only four. Since RDRAM transmits only 16 bits at a time, a single chip can supply the entire channel, so you need only one chip to make a working module.
This is great if you’re building a Sony Playstation 2. You can put one chip each into two channels, and that’s all you need. Very simple, takes up a lot less room than SDRAM chips, and you get the best latency you’re going to get from RDRAM as a bonus. Add to that the fewer pins and traces inherent to RDRAM, and you have a much simpler board than an SDRAM setup. That’s why Sony picked RDRAM for the Playstation 2.
However, that advantage means nothing if you want PC amounts of memory. Yes, you can use just one 128 megabit chip, and it will work in one channel, but that only gives you 16 megabytes of RAM for that channel. Use a 256 megabit chip, and you get 32MB of RAM. So the most RAM you can hope for currently in this ideal setup is 64MB of RAM: 2 chips, 32MB each, one in each channel. Now that may be good enough for a really cheap Timna box, but once you want more than that, you have to add more chips, and there goes some of your saved space and here comes latency.
So if you only want a 32MB or at most 64MB system, this is a pretty big advantage RDRAM has over SDRAM. But if you want 128-256MB of RAM, a lot of the space advantage goes away.
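The capacity arithmetic works out like this (megabits per chip divided by 8 gives megabytes):

```python
def chip_capacity_mb(megabits):
    """A chip's capacity in megabytes (8 bits per byte)."""
    return megabits // 8

def min_rdram_system_mb(channels, chip_megabits):
    """Smallest working RDRAM configuration: one chip per channel."""
    return channels * chip_capacity_mb(chip_megabits)

# One 256Mbit chip per channel, two channels: a 64MB ceiling before
# you have to start adding chips (and latency).
smallest = min_rdram_system_mb(2, 256)
```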
Traces

RDRAM runs one high-speed 16-bit serial connection per channel. SDRAM usually runs eight 8-bit parallel connections to form the 64-bit channel for each DIMM slot. That’s obviously handleable, but if you went with dual-channel DDR, you would need sixteen such connections, and that’s quite a lot to fit on a motherboard, which is a big reason why motherboard manufacturers aren’t too big on the idea of dual-channel DDR at the moment.
Due to their high speed, RDRAM connections are harder to place and make than SDRAM channels, but there are a lot fewer of them, so on the whole, motherboard design is simpler, even if you have two RDRAM channels. Down the road, it’s probably simpler to put four channels in than to go dual-channel DDR or double the speed of SDRAM. Of course, that advantage just disappears when you go to eight; it’s no better than SDRAM.
Again, if you are out to design a very cheap, simple system, this can be an important factor in motherboard design and in reducing motherboard cost. However, the SDRAM layout of eight chips per module has pretty much been around since the dawn of the PC, so motherboard designers are well used to it. In a more typical PC setup, then, it’s not a huge factor, though again it weighs heavily against a dual-channel DDR system.
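Counting just the data lines makes the trace trade-off concrete (real boards also route address, control, power, and clock lines, which this sketch ignores):

```python
def sdram_data_lines(channels):
    """Each SDRAM/DDR channel is a 64-bit-wide parallel data bus."""
    return channels * 64

def rdram_data_lines(channels):
    """Each RDRAM channel is a 16-bit-wide high-speed bus."""
    return channels * 16

single_ddr = sdram_data_lines(1)   # 64 lines
dual_ddr = sdram_data_lines(2)     # 128 lines: the routing headache
quad_rdram = rdram_data_lines(4)   # 64 lines: four channels fit in one
                                   # SDRAM channel's worth of data traces
```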
We’ve looked at the major technical factors. For cheap systems, RDRAM does have some decided advantages. For servers, RDRAM has some decided disadvantages. For everything else, it’s about even now, with DDR jumping ahead of current RDRAM systems, and Willamette-class systems likely to leapfrog over DDR. From a purely technical standpoint, for the average desktop, the data leans slightly towards the RDRAM side.
But man does not live by technical factors alone. Far more people would prefer a Mercedes over a Taurus; most buy Tauruses because a Mercedes costs a lot more.
In the next segment, we’ll look at why RDRAM costs so much, how much of that is due to it being inherently more expensive to make than SDRAM, and how much of it is due to the politics of the situation.
“RDRAM/DDR: Part II, Cost and Production Aspects”
Ed Stroligo – 5/23/00
Why Do They Cost So Much?
One can argue about technical merits all day long. But for most people, the cost of the items being compared comes into play.
RDRAM costs a lot more than SDRAM. Why? How much of this is due to differences that just go with the product, and how much is due to factors that will eventually go away, or can be made to go away with enough will and effort? A big problem in looking at this subject is that RDRAM has production problems that I strongly suspect its makers don’t necessarily want to go away, at least for a while, for political purposes.
In this section, I’ll focus more on the problems themselves. The last section will talk about the likelihood of these problems being forcefully addressed, given the political climate.
Some of you may have noticed that I spoke mostly about RDRAM in the first article and very little about DDR. There were a number of reasons for that:
- DDR is really just a speeded-up version of SDRAM; PC100/PC133 on steroids. After you say that, there’s really nothing to say about it that can’t also be said for regular SDRAM.
- SDRAM is the status quo, RDRAM is the new kid on the block, and it’s more useful to see what the challenger brings to the table, good and bad.
- DDR’s advantages over RDRAM are not so much technical as production-, cost-, and politics-related.
Primer On How Chips Are Made
If you’d like to see a detailed look on how chips are made, go HERE for a very good overview on the subject.
For the purposes of our discussion, what you need to know is:
- Making a memory chip takes a long time, about three months from start to finish. Ideally, memory manufacturers would like to know that far in advance just what, and how much of it, they are supposed to be making.
- RDRAM and SDRAM start the memory-making process the same way, but a future RDRAM chip must diverge from the plain-SDRAM process earlier in manufacturing than a future DDR chip does. The steps that differentiate a DDR chip from an SDRAM chip occur towards the end of the manufacturing cycle. This makes memory manufacturers very happy, because they can be far more flexible and responsive to last-minute shifts in demand between SDRAM and DDR than they can be with RDRAM.
So in what major areas does RDRAM cost more than SDRAM, and will continue to do so in the future?
- Die size
- Opportunity costs and uncertainties
- Lack of mass production
- Obligations to pay royalties
- Testing and packaging
Die Size

RDRAM needs a memory controller built into every chip. This takes up a lot of space: about 30% of a 128Mbit chip and about 20-25% of a 256Mbit chip. That’s a lot of silicon real estate. Bigger chips mean fewer chips per wafer. Bigger chips also usually mean more flaws, and thus more wasted starts as a proportion of the wafer.
RDRAM is also bigger due to its increased number of banks. While more banks are an improvement that leads to fewer bank misses, and thus less latency from them, they do make the chip bigger and add to cost. While I haven’t yet been able to quantify it, the effect appears significant. Intel last fall tried to start a discussion over the possibility of reducing the number of banks from 32 to 16 or 8 (SDRAM has 4) to reduce costs, an effort rebuffed by Rambus (story HERE).
Micron’s spokesperson recently stated that for every Direct RDRAM chip the company makes, it loses the ability to make two SDRAM chips because of the die area Direct RDRAM requires (story HERE). While that doesn’t mean RDRAM takes twice as much space as SDRAM (lower yield from larger circuits and wafer size considerations also play a role), it does indicate that RDRAM fabrication is rather expensive compared to SDRAM.
Some may say Micron isn’t exactly the biggest Rambus booster in the world, and that’s true. But Rambus itself estimates that die size currently causes a 25% increase in cost over SDRAM (see “RDRAM Cost Differential”, a little more than halfway through the document), so it’s probably safe to say the truth is somewhere in between those two figures.
Can efforts be made to reduce die size? Toshiba several months ago showed a design that essentially reduces size by allowing different banks to share logic circuitry. This results in a significant (about 8%) but not huge reduction in die size.
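The cost effect of a bigger die can be roughed out with chips-per-wafer arithmetic; the base die area and the no-defect yield model here are illustrative assumptions, with only the ~30% size overhead taken from the figures above:

```python
import math

def chips_per_wafer(wafer_area_mm2, die_area_mm2):
    """Gross dice per wafer, ignoring edge loss and defects."""
    return int(wafer_area_mm2 // die_area_mm2)

wafer = math.pi * (200 / 2) ** 2   # 200mm wafer, about 31,416 mm^2
sdram_die = 100                    # illustrative base die size, mm^2
rdram_die = sdram_die * 1.30       # ~30% bigger (interface logic, banks)

sdram_chips = chips_per_wafer(wafer, sdram_die)  # 314 per wafer
rdram_chips = chips_per_wafer(wafer, rdram_die)  # 241 per wafer
```

Defects make the real gap worse, since the chance of a flaw landing on any given die grows with its area.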
The easiest way to reduce die size, whether it be a CPU or a memory chip, is to shrink the design to a smaller micron size. However, this is not so easy a solution for RDRAM.
- Do the same thing with SDRAM, and it gets smaller, too. To get good yields from RDRAM (DDR, too), you should be using .18 micron anyway.
- Die shrinks for premium CPUs tend to be done to increase performance, not just to cut costs. While it never hurts the bottom line to cut costs, the profit margins on CPUs are quite high. That is not normally the case with memory; no one makes a lot of money for any prolonged length of time in the memory industry the way they do in the CPU industry, and memory makers often lose money.
Memory manufacturers incur the same kinds of costs in refabbing to smaller and smaller die sizes, but without the profit margins. So when they invest billions in new fab capacity, they want it to last a while, since they aren’t normally making the margins of an Intel and need to sell far more memory modules to recover their costs than Intel needs to sell CPUs.
- There is a shakeout occurring in the memory industry. The top five memory manufacturers are grabbing more and more market share, and the others are just fading away. The costs of new fabs and processes have become so high that size matters more than ever before. To build any of these chips well, you really need to be using 0.18 micron technology or better, and that’s costly. Try to do it with older equipment, and your yields go down (which could be very handy under the right circumstances, as we shall see).
It’s probably safe to say that RDRAM will chew up more silicon in the foreseeable future than SDRAM, and thus will always cost somewhat more. However, with some more work on processing technologies like Toshiba’s, if there is the will to do so, the differences can be reduced to tolerable levels.
There are really two yield issues:
- What percentage of product works at all.
- What percentage of product works at given speeds.
There were stories and rumors about extremely low yields on Rambus products when they first came out, but you can see there are really two issues here, which can easily get confused.
The RDRAM process is considerably more complicated than the SDRAM process, so it’s not surprising that overall yields are lower this early in the product’s life. While I haven’t seen exact numbers, overall yield is probably not too far behind SDRAM yields. The Rambus people are saying that, and since none of the memory manufacturers are screaming bloody murder about overall yields, I’m inclined to believe them.
However, that doesn’t answer the second issue. The Rambus people are very careful not to talk about yields AT VARIOUS SPEEDS.
Fortunately for us, the leading maker of RDRAM did. In this article, we find out that Samsung as of a few weeks ago was only getting 20% of their RDRAM production at PC800.
Should this continue, this would be very bad news for RDRAM. A dual-channel Willamette system with PC600 memory would have no more bandwidth than a DDR system, which defeats the whole purpose of the bandwidth exercise.
Why might that be?
1. People are still learning how to handle a more complicated manufacturing process.
2. Intel may have been overly strict in laying out the initial technical requirements for making RDRAM, since they recently relaxed those.
3. Making RDRAM is just inherently a pain-in-the-butt, and particularly unwelcome if it means pulling your employees from tasks to which you give higher priority.
For example, NEC and Toshiba announced a while ago that they were going to outsource RDRAM packaging. (See HERE for the story). Since this is the first time memory manufacturers have handed this kind of task to somebody else rather than do it themselves, it is not exactly a vote of confidence in the product.
You normally outsource those tasks that are not critical to your operation and which others can do better than you. If NEC and Toshiba don’t consider RDRAM production critical and don’t want to develop in-house expertise in packaging it, just how important do you think RDRAM is in their future plans? If you were planning on making a ton of RDRAM a little down the road, don’t you think you’d want your employees to get used to doing it? Odd.
4. If RDRAM production isn’t your top priority (or you even wish it would just go away), you might not be using your best equipment to make it. If you have to use your older equipment for something anyway, and you don’t especially want to make RDRAM in the first place, it makes sense to make RDRAM on the old equipment and live with the lousy yields. You can charge an arm and a leg for it and make up the difference. If you used the same old equipment for SDRAM, you’d lose money, since you couldn’t charge $500 for it.
You’re probably saying, “Wouldn’t it be more profitable to use your best equipment to make the item that goes for $500 and make a lot more of it?” First, if you did that (and that may be what Samsung plans to do shortly), you won’t be making $500 for very long and you’ll still be losing money with the old equipment and SDRAM.
Much more importantly, if you don’t want to do it in the first place (for reasons I’ll discuss in the next segment), this is a very good way to make some money in the short term, cut your investments to a minimum and at the same time discourage those who want to make you do what you don’t want to do in the long-term.
I am not saying for certain that some memory manufacturers are doing this, just suggesting it might be a possibility. The lack of enthusiasm for this product compared to, say, DDR is striking; outside of Samsung, it is close to nil. So much so that you begin to wonder what might be happening in those factories.
It’s safe to say, though, that if skullduggery isn’t going on, yield is the biggest manufacturing problem RDRAM faces.
Opportunity Costs And Uncertainties
Intel told all the memory manufacturers to make RDRAM last year. Then they couldn’t get a motherboard out for it. Then they announced you could use SDRAM with the now-infamous MTH for it. This killed most of the immediate sales potential. How far out on a limb would you go for Intel when they tell you to do it again?
Just a few days ago, Intel corralled the major manufacturers together. They wanted them to increase RDRAM production and cut prices to about 30% more than SDRAM prices. You can read for yourselves what happened HERE.
Essentially, the memory establishment told Intel, “You want us to take all the risks while you get all the benefits. Uh-uh. Pony up and guarantee that you’ll pay for whatever we make (and at whatever price we want to charge).”
Intel’s actions last year give the memory manufacturers the excuse they need not to commit heavily to RDRAM. Not that they really committed big-time to RDRAM last year, or that they weren’t secretly delighted Intel stumbled, but now they have a great excuse.
The memory industry is a boom-and-bust industry. New product comes out, the first producers get initial high prices, everyone else piles in and builds too much capacity and the market eventually slumps. Prices plummet due to overcapacity and companies don’t invest heavily in new plant until there are shortages again and a new cycle starts.
Right now, memory manufacturers are moving into a capacity shortage (along with higher prices and profits) for the products they are already making. DDR will offer the prospect of making a bit more money (probably about 10-15% to start with) for a little more work without interrupting the flow of steady, reliable SDRAM production. (It’s pretty much the same for mobo manufacturers; the estimates are DDR motherboards will cost about $4 more to make which will probably translate to $10-15 more at retail.)
RDRAM production, on the other hand, does not integrate well with regular SDRAM production, and that’s a big negative when economies of scale are as important as they are in the memory industry. Not only does it cost more, it makes the other products you make cost more too, since you aren’t making quite as many of them.
This is a terrible thing to say to memory manufacturers. To say that they are obsessed with cost would be an understatement, and RDRAM is like the unwelcome Joker’s Wild.
Manufacturers have to commit to making RDRAM fairly early on in the production process, while making DDR is more like an afterthought. Give them big, guaranteed orders, and this is much less of a problem; but that’s exactly what hasn’t been happening so far. We’ll discuss the impact of Willamette on this in the next segment.
Lack Of Mass Production
No one is making a considerable amount of RDRAM yet.
A quick lesson on how to judge production figures
Over the next year, somewhere between 10 and 14 million computers will be built in any single month.
If you took all the memory modules put into desktop (not server) computers and divided them by the number of desktop computers, a minimal estimate of the average would be 1.25-1.5 modules per desktop computer (most would have one module, some would have two or more). It has to be more than 1.0; if you think the average is more than 1.5, knock yourself out and raise it.
It would be very safe to say that the average number of chips going into each memory module is very close to the typical SDRAM requirement of 8 chips to a module.
Multiply all those numbers together, and you get a minimum of 100,000,000 and a maximum of 168,000,000 memory chips needed a month to supply all the desktop computers being made.
So if you hear claims that Samsung made 2,000,000 RDRAM chips last month, or that total RDRAM production is jumping from 2,000,000 to 9,000,000 chips in a quarter (three months), you can see that this is still a tiny proportion of the total chips required to run new desktop computers.
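The back-of-the-envelope arithmetic above can be checked in a few lines. This is just a sketch: the 10-14 million machines per month, 1.25-1.5 modules per machine, and 8 chips per module figures all come from the text, as does the 2,000,000-chip production claim.

```python
# Rough monthly desktop memory-chip demand, using the article's figures.
computers_per_month = (10_000_000, 14_000_000)  # low and high estimates
modules_per_computer = (1.25, 1.5)              # average modules per desktop
chips_per_module = 8                            # typical SDRAM module

low = computers_per_month[0] * modules_per_computer[0] * chips_per_module
high = computers_per_month[1] * modules_per_computer[1] * chips_per_module
print(f"{low:,.0f} to {high:,.0f} chips per month")  # 100,000,000 to 168,000,000

# Against that, 2,000,000 RDRAM chips a month is a drop in the bucket:
print(f"{2_000_000 / low:.1%} of even the low demand estimate")  # 2.0%
```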
Slowly but surely, the memory manufacturers are announcing that they will have more RDRAM production capacity by the end of the year. Samsung’s been the most aggressive in announcing that they will jump from a capacity of 2,000,000 chips a month to 10,000,000 a month in the second half of the year. Others have announced smaller, but still significant, increases in capacity by the end of the year.
It’s probably safe to say that production capacity for RDRAMs will be around 30,000,000 chips a month by the end of the year, maybe more – not all the major players have announced numbers.
Please note that I’m saying “production capacity,” not production. Nobody said they were going to make that much, just that they would be prepared to.
Even assuming 30,000,000 chips a month, though, that’s a decent but not huge chunk of the memory market, and it creates essentially two production lines: SDRAM and RDRAM. The companies expect to do fine with SDRAM products. DDR is just a minor variation on the song they’ve learned to play so well.
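How big a chunk is that? Comparing the 30,000,000-chip capacity figure against the 100-168 million chips per month of desktop demand estimated earlier gives a quick sense of scale (a sketch using the article's numbers):

```python
# Projected RDRAM capacity vs. estimated desktop memory-chip demand.
capacity = 30_000_000                      # chips/month, end of year
demand_low, demand_high = 100_000_000, 168_000_000  # from the earlier estimate

share_high = capacity / demand_low   # best case for RDRAM's share
share_low = capacity / demand_high   # worst case

print(f"{share_low:.0%} to {share_high:.0%} of desktop chip demand")  # 18% to 30%
```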
Nor do memory manufacturers live on Intel desktop sales alone. I’m not talking about AMD (though any help is welcome). I’m talking about servers.
Servers Have More To Remember
Intel doesn’t worry about servers quite as much as the memory manufacturers do. The reason is simple: the vast, vast majority of servers use only one, two, four, or eight Intel processors (and most are probably just one or two). Servers represent only about 5% of the machines being sold. However, between them and the communications industry, they use almost as much memory as desktop machines today and are expected to use more by 2002. This is to be expected; after all, what fuels the Internet?
Servers use a lot of memory nowadays (the average 2-way machine uses 1.5GB, and the average 4-way uses 3GB), and those totals are expected to double in the next two years. (Statistics from slides at the DDR conference available HERE.) The memory in those machines will be DDR (and SDRAM), not RDRAM. Not even Intel is using RDRAM for any of its new server chips. So even if Rambus ends up ruling the desktop, there will still be a very sizable market for DDR/SDRAM.
The issue for the memory manufacturers is not “DDR or RDRAM.” It is “DDR and . . . uhh . . . RDRAM??” They would much rather say just DDR. It would make their lives much easier, and manufacturing cheaper.
RDRAM does the opposite. If memory manufacturers are plugging along very happily with the status quo, and you want them to disrupt things, spend a good deal of money on implementation, mess up their economies of scale a bit, and give up units of products they could be selling during a shortage, you’re going to have to make it worth their while. In short, big contracts at big prices. If you don’t do that, maybe they don’t produce what you want when you want it. Very inconvenient for Willamette product launches, isn’t it? Hey, we didn’t tell you to go with RDRAM.
We’ll talk a lot more about this in the next segment.
Royalties
The current royalty structure for Rambus memory and controllers is about 2% for the memory and 4-5% for the memory controllers. At current prices, that’s maybe $10-12 per $500 RIMM; if RDRAM were selling for $200, we’re talking about $5 a RIMM.
How much of a financial burden is that? That depends.
Let’s pretend DDR costs $80 per 128MB module and the memory manufacturers sell it for $100. Let’s pretend RDRAM costs $100 per 128MB RIMM, plus a $3 royalty to Rambus.
- If you can sell the RIMM for more than $130, the additional royalty is no problem at all, since the increased sales price (including the royalty) produces more of a profit percentage (and dollars) than the SDRAM sale.
- If you sell it for between $124 and $130, you make more dollar profit, but less than SDRAM’s profit percentage.
- If you sell it for between $103 and $124, you are making less dollar profit and margin than with SDRAM.
- Sell it for the same price as SDRAM, and you are losing money due to the royalties rather than making it.
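The bands above fall directly out of the hypothetical numbers. Here is a quick sketch; all prices and costs are the article's pretend figures, and the band labels are mine:

```python
# The article's hypothetical: DDR costs $80 and sells for $100;
# RDRAM costs $100 plus a $3 Rambus royalty.
ddr_cost, ddr_price = 80, 100
rdram_cost = 100 + 3  # manufacturing cost plus royalty

ddr_profit = ddr_price - ddr_cost    # $20 per module
ddr_margin = ddr_profit / ddr_price  # 20% of selling price

def rdram_outcome(price):
    """Classify an RDRAM selling price against the SDRAM baseline."""
    profit = price - rdram_cost
    margin = profit / price
    if profit > ddr_profit and margin > ddr_margin:
        return "better dollars and margin than SDRAM"
    if profit > ddr_profit:
        return "more dollars, worse margin"
    if profit > 0:
        return "less dollars and margin"
    return "losing money"

for price in (131, 127, 110, 100):
    print(f"${price}: {rdram_outcome(price)}")
```

One sample price per band reproduces the article's four cases, with the royalty turning the break-even point into $103 rather than $100.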
So there may or may not be a financial burden, depending on costs and selling prices. I think the memory manufacturers object to this on principle.
From their perspective, the memory manufacturers are like a group of kids who’ve been playing together for years, and Rambus is like this obnoxious new kid on the block who shows up and incessantly says, “You’re all stupid, I’m the only smart one here.” So the other kids whack him a few, toss him in the gutter, and go about their business.
The next time the kid shows up, he brings the teacher along, and the teacher loves this brat. She tells you to do everything he says or else. The kid not only throws the way your little rascals do things into an uproar, but demands that you pay him for it.
Behind all the language and the rhetoric and the cost analysis, I think that’s the real reason why memory manufacturers dislike Rambus so much. 🙂
Testing And Packaging
Again, DDR is no big disruption to current operations. You make it pretty much the same way (add a few extra masks); you test it pretty much the same way.
RDRAM is a whole different ball game. You need much different packaging; you need new testing equipment.
Some have claimed that you can’t test a complete module without cementing that “heat spreader” into place. This doesn’t make a lot of sense to me as a big source of wastage, though. Either the modules usually work, or, if wastage at this stage were that big a deal, I would think manufacturers would just test them in a colder, highly ventilated environment.
All this costs a lot of money. An estimate of this cost (though a pretty old one) can be found HERE. To quote:
“One reason for the cost difference is that the Rambus DRAMs require a new infrastructure at the back end,” Tabrizi said. “To produce one-million RDRAMs a month requires an investment of $10-to-$20 million. Hyundai currently makes about 40 million 64-Mbit DRAMs per month, and to convert our capacity to the Rambus memories would cost a minimum of $400 million and perhaps as much as $800 million.”
Again, this is an old estimate from very early on in the process, and equipment has recently become available which helps automate RDRAM testing. I’m sure the estimate would be too high today, but even if it has dropped a lot, it’s still not chicken feed. Especially if you feel you have to lay out the money just for the privilege of making that obnoxious new kid rich.
Even the Rambus folks acknowledge that testing and packaging currently cost a whole lot more than they do for SDRAM; they said so themselves at their annual meeting (though they did say the additional costs would become quite tolerable in 2001). See “RDRAM Cost Differential” (a little more than halfway through the document).
When you buy capital equipment, it’s normally meant to last a while, usually years, and you normally expect to recoup the cost of buying it over the expected lifetime of the equipment. Well, if you’re “uncertain” about how long RDRAM technology is going to be around, you’d want to get payback on that capital equipment as quickly as possible.
If you spend a million dollars on equipment, and expect to get your money back from the equipment over three years, the expense per month of that new equipment isn’t so great. If you expect to get your money back in three months, then the expense suddenly looks huge in comparison to what you’re doing with it, and you can come up with some pretty dreadful looking accounting numbers.
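The accounting trick above is easy to see with the example's own numbers (the $1 million figure and the 3-year vs. 3-month horizons come from the text):

```python
# The same $1,000,000 equipment purchase, two different payback horizons.
equipment_cost = 1_000_000

over_three_years = equipment_cost / 36   # depreciate over 36 months
over_three_months = equipment_cost / 3   # demand payback in 3 months

print(f"3-year payback:  ${over_three_years:,.0f}/month")   # $27,778/month
print(f"3-month payback: ${over_three_months:,.0f}/month")  # $333,333/month
print(f"{over_three_months / over_three_years:.0f}x the monthly expense")  # 12x
```

Nothing about the equipment changed; only the assumption about how long RDRAM will be around did, and the monthly cost figure swelled twelvefold.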
If that vaguely sounds like “cooking the books,” that’s exactly what it is. When memory manufacturers moan and groan about how they can’t possibly cut prices, I’d bet that’s a trick some of them use, and that’s just one of them. You would be amazed how expensive or cheap you can make something look given the motivation and some creativity. 🙂
DDR is the new champion of the status quo; RDRAM is the challenger. RDRAM makes manufacturers change their production habits, which they don’t want to do for a variety of reasons.
Manufacturers bear the sole financial burden of implementing RDRAM; not Rambus, not Intel. There have been considerable difficulties with the initial RDRAM ramp-up, but there is no apparent production problem that would keep costs high enough to justify current pricing. A premium, yes; a significant premium, possibly; but nothing like what we have today.
All new products have ramp up growing pains. What seems unusual about RDRAM is the lack of reported activity to fix those problems. What is unusual is that prices have remained as high as they are this late in the development process. What is unusual is that Intel invests a lot of money into a company like Micron to encourage them to get on the RDRAM bandwagon, and they do everything except sign in blood, “DDR or Die.”
What is really unusual is to have the major memory manufacturers refuse to meet Intel separately, then tell Intel in a group to go to hell without even making an effort to begin negotiating, just five months before Willamette comes out.
Seems to me this is a lot more than “BGA-2 balls are hard to install,” and I’ll talk and speculate about what we may see in the future next time.
Why Politics Matter
I can want something that is good. You can want something that is good. An impartial observer could agree that both are, indeed, good.
But often, getting what I consider to be good prevents you from getting what you consider to be good, and vice versa. We can’t both always get what we want.
Intel and Rambus believe they can advance the state of computing faster over the next few years with RDRAM than with anything the memory establishment can come up with. That’s a reasonably good argument down the road. This is not evil.
However, Intel is essentially passing the bill for this good thing to somebody else: the memory establishment. The memory establishment feels that lower memory prices advance the state of computing, too, and that what they want to do does that better than RDRAM. This is not evil, either.
The two aims contradict each other. What happens when you can’t do both? You get situations like this.
I’m not saying everyone is completely pure of heart, they obviously aren’t. But even if they were computer saints, without the slightest mercenary thought, you would still have this impasse.
That’s where politics come into play. You either negotiate a compromise, or you maneuver to get more power on your side than the other folks to get your way.
Let’s look at the major combatants, and the people in the middle who may well end up deciding this go-round. Then I’ll give you a fight preview, and tell you what to look for.
Intel and Rambus
Rambus And The Advocacy Wars
Let’s first look at Rambus:
Intel must find Rambus more of a hindrance than a help.
It’s hard to see how Rambus could meet its objectives and keep the memory establishment happy. However, they have been a particularly noisy and boisterous underdog challenger.
Rambus’ advocacy reminds me much of Apple’s: they bring up a great number of items that, while technically correct, obfuscate the truth of many issues, even minor ones.
Let’s take this statement from the Firing Squad interview:
“FiringSquad: Does Intel own any portion of Rambus?
Rambus VP: No. It’s a common misconception. Intel does not own any part of Rambus Inc. Rambus is closely associated to Intel because Intel has selected Rambus as its memory of choice. Rambus does have in place a warrants program for partners who meet certain milestones, such as engineering validation, production volume and market share.
FiringSquad: What kind of milestones does Intel have to reach in order to gain the rights to those Rambus warrants?
Rambus VP: Intel’s warrants are tied to achieving motherboard and chipset volumes.”
It’s a technically correct answer. An honest answer would have been:
“Not yet. Intel has a million warrants which it can turn into shares of stock at $10 a share once 20% of its chipsets for a couple quarters are RDRAM-based. This would be about 4% of the company’s stock. We’ve given other companies lesser number of warrants, too, for meeting certain production targets.”
If Intel does manage to make RDRAM the desktop memory standard, they deserve every single one of those warrants, and probably a lot more. If I were a Rambus shareholder, this wouldn’t bother me at all; Intel’s actions will have made me a lot of money.
But though it is public knowledge, and hardly unethical or even unreasonable, Rambus prefers to give the evasive answer. They do this a lot. Look at those “heat spreaders,” for instance. Yes, the explanation they give is technically accurate. What the explanation obfuscates, though, is the truth: these chips get very hot, and direct their heat into the motherboard.
Yes, plenty of companies do the same thing and worse. But if a company is deceptive on the small stuff, how trustworthy are they on the big ones?
The DDR folks are much better at this. They realize the value of things not said, and issues not raised. They stick to the themes that are easily understood and appreciated by all (like cost and lack of hassle). They emphasize their strong points and don’t bring up their weak ones. When they do come up, they don’t deny their existence, they just point out that their strong points are more important than their weak ones.
They realize that all they have to do to win the PR war against a blustering, blundering opponent like Rambus is not to be completely honest and fair, just be more honest and fair than Rambus/Intel. Meanwhile, they’ll do everything possible to strangle that obnoxious child behind the scenes.
Let’s contrast the two styles.
The recent articles found on the Web which purport to give Rambus’ side of the story, whether intentionally or not, present the Rambus world view. They were not effective efforts, since many of the points were dubious at best and easily refuted. They caused a lot of negative publicity among the very audience they were supposed to persuade. That is bad advocacy.
The efforts were clumsy and unfocused, with no clear themes on issues that resonate with the audience.
In comparison, the Bert McComas pieces (latest example) provide superb advocacy for DDR. They provide a lot of information. The language and methodologies come across as far more objective than Rambus’ (or Intel’s) rambles; indeed, they are relatively more objective and honest.
When you hear McComas interviewed, he’ll say RDRAM has some real strengths. His benchmarks show DDR losing a few against RDRAM, but winning most of them. That’s far more believable, and in the long run more convincing, than Rambus saying it is better at everything.
When he raises objections, they are reasonable objections, not dubious or spurious. For instance, he’ll bring up cost; who doesn’t? The Rambus folks seem to think sometimes that if they don’t talk about cost, it magically goes away.
However, it’s the unspoken impressions a piece often leaves that are not necessarily so truthful. If you read the piece and leave with the impression that DDR is generally better than the 840 platform, there’s nothing untruthful about that. However, if you leave with the impression that it has been settled that DDR will be better than ANY dual-channel RDRAM platform, now or ever, that’s not so truthful. We just don’t know that yet, and won’t until we see Tehama with a full 400MHz FSB in operation.
I know that’s the impression people are getting from the piece; I see the forum messages and get the emails. But Mr. McComas never says it or even really implies it. The subject of Tehama and the possible effects of a much higher FSB on performance are never mentioned, but that wasn’t the subject of the paper.
Now I’m not saying Mr. McComas did this deliberately; in interviews, he does mention the 400MHz Willamette bus as a factor. But do you see how much harder it is to catch that subtle manipulation of the truth (it’s even arguable whether there was a deliberate one) than “Intel doesn’t own us” or “our chips don’t get hot”? See how this approach is much more effective?
It’s not the advocate who isn’t very honest you have to worry about; he’s easily exposed. It’s the really honest one you have to watch closely.
Suing The Hand That Feeds You
Besides PR, the biggest irritant Rambus has launched is its lawsuit against Hitachi. Basically, Rambus is saying in the lawsuit that making SDRAM violates Rambus patents. I’m in no position to intelligently weigh the merits of the case, but neither in all likelihood is the trial judge, so this promises a lot of litigation and appeals.
This lawsuit doesn’t threaten just Hitachi; it’s just as applicable to every memory manufacturer. If Rambus wins, it is in a position to demand royalties for every SDRAM chip ever made. That could wreck some companies and seriously hurt even the strongest.
Rambus is essentially saying, “One way or the other, we’re going to get money out of you, or at least make your lives legally miserable until you do what we want.”
This is not the way to make friends and influence people.
Shortly thereafter, Intel convened the major memory manufacturers together to discuss future generations of memory. Rambus was not initially invited.
Do you think Intel suddenly developed a hatred for Rambus? No; what’s far more likely is that at least some of the memory manufacturers would have refused to show up if Rambus had been on the initial invitee list.
What is interesting is one aspect of the response; it essentially said that Rambus attended the meeting of the “memory club” (JEDEC), then took some ideas and patented them against the rules of JEDEC.
The point here is not whether Rambus actually did this or not, but rather the attitude, “since you refuse to play by our rules, you’re not welcome in our club.” That attitude probably weighs more in the minds of the memory establishment than a million technical advantages on RDRAM’s side. It’s not cricket.
In all fairness, Rambus may well have come to the conclusion that the memory establishment would have opposed them even if they had acted like Jesus, so playing hardball in a kill-or-be-killed game does them no real harm so long as Big Brother Intel is on their side.
No one likes Goliath. Intel has given people plenty of legitimate reasons, along with some less legitimate ones, for disliking it.
However, this is not Star Wars. For one thing, the memory Empire is controlled by the other guys, and it is an empire with its own faults. You can reasonably dislike Intel and Rambus more than the memory establishment, but they both have faults. This is not Good vs. Evil.
Intel’s Connections And Obligations
Intel initially signed a contract back in 1996 with Rambus that basically said Intel might be interested in using its product, and Rambus essentially said, “OK, if you meet certain conditions, we’ll let you convert a million warrants into stock.”
You can find that initial agreement in Rambus’s S-1 statement filed with the SEC 3/6/97, Exhibit 4.4.
You can find these SEC documents using Free Edgar. The way I do it is:
- Go to Silicon Investor
- Type in RMBS
- A stock quote and some other information will pop up. You’ll see a number of links in blue print. Click on SEC.
- This will bring you to Free Edgar, and you’ll see the documents Rambus has filed with it. Go to the ones I’ve mentioned.
- Sorry, there isn’t a direct URL.
In July 1998, Intel and Rambus signed an addendum to that agreement. This can be found in Rambus’ Form 10-K405 dated 12/9/98, EX 10.4(1).
This made a few changes, but the most important was Intel pledging to: “use its continuing best efforts in marketing, public relations, and engineering to make the Rambus-D DRAM the primary DRAM for PC main memory applications through December 31, 2002.”
What Does “Best Efforts” Mean?
United Telecomm. v. American Tel. & Comm. Corp., 536 F.2d 1310 (10th Cir. 1976), gives a pretty good general definition of the term:
“A “best efforts” obligation does not require [the promisor] to accomplish a given objective . . . it requires [the promisor] to make a diligent, reasonable, and good faith effort to accomplish that objective. The obligation takes into account unanticipated events and the exigencies of continuing business and does not require such events or exigencies be overcome at all costs. It requires only that . . . all reasonable efforts within a reasonable time to overcome any hurdles and accomplish the objective [be made].”
In short, Intel does not have to commit suicide for Rambus.
Additionally, what happens if Rambus decides Intel isn’t putting out its best efforts? Not much. Essentially, Intel would lose the warrants and some patent rights. Rambus can’t sue Intel over it; the contract specifically prohibits that. There is nothing in that contract that would stop Intel from breaking it if it got into deep trouble.
Let us let Rambus describe how it sees the contractual relationship (from its Form 10-K405 dated 12/23/99):
“Under the contract, Intel can terminate its relationship with Rambus at any time. The Company established an earlier relationship with Intel several years ago, but Intel did not at that time pursue development relating to Rambus technology. There can be no assurance that Intel’s current emphasis or priorities will not change in the future, resulting in less attention and fewer resources being devoted to the current Rambus relationship. Although certain aspects of the current relationship between the two companies are contractual in nature, many important aspects depend on the continued cooperation of the two companies. There can be no assurance that Rambus and Intel will be able to work together successfully over an extended period of time. In addition, there can be no assurance that Intel will not develop or adopt competing technologies in the future.”
Hardly “’til death do us part.”
It’s Not Just Rambus
People forget that Intel doesn’t only have its thumb in the Rambus pie. They’ve made some big investments in other memory companies, too: hundreds of millions of dollars so far in Micron, Samsung, and Infineon (a spinoff of Siemens). Samsung is the only major memory maker who has shown any real enthusiasm over the product. Micron has promoted DDR so much that it’s probably a lock for the Intel Corporate Ingratitude Award (Intel recently dumped a good chunk of its Micron investment). The Infineon investment is pretty recent.
Why Is Intel So Hellbent on RDRAM?
There are essentially two reasons:
First, it already has a million warrants on Rambus stock. A warrant is a right to buy stock. In Intel’s case, the warrants don’t become effective until certain conditions are met (which they’d certainly meet with Willy and RDRAM). Then Intel can buy Rambus shares at $10 each. That works out to a little over 4% of the current shares outstanding. Its value today would be about $170 million.
However, if RDRAM becomes the PC memory standard, those shares will certainly be worth more than they are now. Those warrants would probably be worth about $500 million to $1 billion. It’s also pretty likely that if Rambus was willing to give Intel a million warrants for a measly 20% chipset market share for a couple of quarters, they’d be delighted to hand over millions more if Intel sold hundreds of millions of them.
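The arithmetic behind those figures can be sketched as follows. The million warrants and $10 strike price come from the filings cited above; the $180 share price is my own inference from the roughly $170 million valuation, not a figure from the article:

```python
# Intrinsic value of Intel's warrants: the right to buy 1,000,000
# Rambus shares at a $10 strike price.
warrants = 1_000_000
strike = 10

def warrant_value(share_price):
    # What exercising the warrants and selling immediately would net.
    return max(share_price - strike, 0) * warrants

# An assumed share price around $180 reproduces the ~$170 million figure:
print(f"${warrant_value(180):,}")   # $170,000,000

# If RDRAM wins the desktop and the stock runs, the stakes scale with it:
print(f"${warrant_value(1010):,}")  # $1,000,000,000
```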
So using its muscle to make RDRAM the memory standard, at least for desktops, is potentially worth billions.
As potentially lucrative as that sounds, though, that’s probably not the main reason.
Intel cannot control the memory establishment, which is composed of a number of big to very big companies that for the most part act as a club determining where the memory industry is going to go. What’s best for them isn’t necessarily best for Intel.
Intel wants more bandwidth very badly. The faster it can push bandwidth (along with FSB speed) up, the more computers can do, and the more they can do, the more chips Intel sells. The memory people, who move like molasses by Intel’s standards, force Intel to make its processors do all sorts of strange, inefficient things, like adding L2 caches and speculative prefetching, just to keep them from idling most of the time.
The problem RDRAM has had in this area is that Intel hasn’t given it a platform yet where it can really shine. The Tehama platform is that chance. If that is no better than a DDR platform, then RDRAM has just about run out of excuses. But if it is, then maybe Intel isn’t so crazy after all.
If their CPUs get an edge over AMD’s because of bandwidth (which they fervently seem to believe is the case), so much the better.
Intel probably doesn’t like having people who do not share its interests controlling a vital area like memory. It’s more a matter of controlling its own destiny than megalomania (though there’s probably a little of that, too). Not even Intel can buy up memory companies the size of Samsung or NEC.
What it can do, though, is control a relatively small company like Rambus, and if it can control the memory makers because it holds the patents for the PC memory standard, Intel has the control it wants at a bargain price.
Memory manufacturers tend to be conservative and are usually averse to radical innovation. There’s a lot of inertia in the business, and they don’t usually move rapidly. If nothing else, Rambus probably deserves credit for making the memory makers move a lot more quickly than they ever have before in getting DDR and its successors out.
With RDRAM, Intel has a reasonably quick growth path to 6.4GB/sec using current technology. It can’t expect to get that as easily from the memory establishment. So why not go with the standard that benefits it the most?
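For context, the arithmetic behind a figure like 6.4GB/sec can be sketched as follows. This is a hedged illustration: the widths and clocks are the published PC800 RDRAM and PC1600 DDR figures, and “bandwidth” here means peak theoretical throughput, not sustained performance.

```python
# Peak memory bandwidth = bus width x clock x transfers per clock.
# A PC800 RDRAM channel is 16 bits (2 bytes) wide and transfers on both
# edges of a 400 MHz clock: 800 million transfers/sec x 2 bytes
# = 1.6 GB/s per channel. Channels stack, so four reach 6.4 GB/s.

def channel_bandwidth_gbps(width_bytes, clock_mhz, transfers_per_clock):
    """Peak bandwidth of one memory channel, in GB/s (decimal GB)."""
    return width_bytes * clock_mhz * 1e6 * transfers_per_clock / 1e9

pc800 = channel_bandwidth_gbps(2, 400, 2)   # one RDRAM channel
print(pc800)        # 1.6
print(4 * pc800)    # 6.4 (quad-channel growth path)

# For comparison, PC1600 DDR: a 64-bit (8-byte) bus, double-pumped
# at 100 MHz, lands on the same 1.6 GB/s per channel.
ddr200 = channel_bandwidth_gbps(8, 100, 2)
print(ddr200)       # 1.6
```

The design tradeoff the article describes falls out of the formula: RDRAM gets its numbers from a narrow bus at a high clock, DDR from a wide bus at a lower clock.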
One huge reason is, of course, cost. Intel doesn’t want you paying $500 a RIMM. It knows most of you won’t, and it knows full well that if that’s the situation, not too many people are going to be buying Willamettes. It wants prices down.
When Intel met with the memory manufacturers about a week ago, it wanted them to bring RDRAM prices down to no more than 30% over SDRAM, which is probably in the ballpark of what ramped-up, mass-produced RDRAM might cost. Is that the sign of a price gouger?
The memory companies refused, and the news reports didn’t exactly report a lot of economic reasons for doing so, which you’d think the companies would be putting out if Intel’s demand were that unreasonable. Instead, they focused on something else.
All of this may be posturing before crunch time comes, and there may well be legitimate economic concerns here, but who comes across more as the price gougers?
All I’m trying to do is give you a different perspective. I’m not saying it’s correct, or that Intel is right, just that there is more to this than a fight between Good and Evil.
Is This An Issue Of Freedom?
Many are opposed to RDRAM on the grounds of freedom. True, the adoption of RDRAM would give Intel more power over the personal computer industry. Intel may be motivated to make itself more secure, but that can result in everyone else feeling very insecure. Yes, as of the moment, Intel is trying to restrict your choice of Willamette memory to RDRAM.
But what are you really objecting to? If RDRAM cost the same or a little more than DDR, would you still be objecting? If not, then you aren’t a freedom fighter. If RDRAM proves to be significantly better than DDR on the Tehama platform and the cost is reasonable, would you give DDR a second thought? If not, your F.F. credentials look a little bogus.
You might say Intel is taking away your right to choose. Is it? If Intel had left Rambus alone with the Nintendos and PlayStations of the world, what choice would you have had then? Probably PC133, until a memory establishment facing no competition got around to eventually implementing DDR.
Intel hasn’t taken away your choice, because you had none before. You just took what the memory establishment gave you. Intel is just trying to RAM their choice down your throat rather than having somebody else RAM their choice down your throat. In short, “stop being the memory manufacturer’s slaves; be our slaves instead.” If you think Intel/Rambus would be a worse slavemaster, fine, you may have a good point. But this is not freedom vs. slavery; it’s which master you’ll have.
The memory establishment doesn’t want to give you choice, they want to give you DDR. If you would rather have the choice between DDR and RDRAM, great, but neither side wants to give you “choice.”
What Do The Memory Manufacturers Think?
Most of these folks have been making memory for a long time; most were making it at the dawn of the PC era and some well before that. These are the folks who bear the responsibility, the risks and the costs of memory making, and they are not too thrilled with outsiders telling them what to do.
The memory establishment has to spend Intel-style fabrication money without Intel-style profits, so they have to be much more careful about their investments.
There is quite a bit of competition in the memory field; there has not been, for a long, long time, anything like the quasi-monopoly Intel has enjoyed. Imagine six CPU makers with market shares between 10% and 30%, along with some smaller fry, and you get an idea of what the memory establishment is like.
They can’t afford to be as adventurous as Intel in changing everything around just to suit some new technical achievement. Although they are fiercely competitive against each other, it’s not usually in the area of standards (and that would probably continue to be the case even if RDRAM wins). Too much choice equals chaos and messes up the economies of scale they need to make money.
So evolution rather than revolution is the rule here. There is a standards board called JEDEC, but over the years it has often become a debating society, and consensus for a change usually only comes when the situation becomes desperate.
So these guys aren’t perfect either, but it’s important to remember that they can’t just go trashing and obsoleting billions of dollars of plant on a whim. They don’t have the money, and the only way they’d have enough in their pockets to do it would be to take a lot more of it out of yours when you buy memory.
So here comes Rambus. All it has are some patented ideas. It doesn’t have to pay for anything; there’s no responsibility or accountability for actually making the things. It collects no matter what, whether you make money or not.
Right behind Rambus is Big Brother Intel saying, “Better listen to Rammy.” Intel doesn’t have any responsibility, either. You can spend all kinds of money building plant, and all Intel has to do is change its mind and a lot of it becomes worthless. If you end up losing money on RDRAM because it ends up competing with your own products, and people won’t pay any more than they pay for SDRAM or DDR, Intel doesn’t care.
Given all this, it serves the interests of the memory establishment to make improvements building on the base it already has, and that is DDR. DDR may not have as much bandwidth as RDRAM, and it may lack certain other advantages, but it’s easy to make, it’s cheap, and you can continue to use most of what you already have to make it.
AMD plays at best a contributory role. They are siding with the DDR people while not burning their bridges with RDRAM: they have an RDRAM license and have hired RDRAM engineers, but RDRAM is a contingency plan they hope they never have to use.
Around the time this battle heats up again, Dresden will start producing big time; AMD estimates it will produce 7.2 million Athlons in 4Q ’00. If the memory folks don’t provide the RDRAM Willamette needs, people who need computers during what will be a shortage can just turn around and buy a nice Thunderbird/DDR computer.
Even if Intel caves in, it may go with a Willamette using PC133. That certainly will make a Thunderbird/DDR look better than it otherwise would; now AMD has the bandwidth advantage. If Intel goes with DDR instead, it may take a few months to get a motherboard out; again, that helps AMD.
Even after Intel gets a DDR motherboard out, any bandwidth advantage it would have had with dual-channel RDRAM goes away; even if Willamette is better than the best Athlon available, that will reduce the difference. If memory makers keep the price of RDRAM low enough not to kill it outright, but high enough to discourage the more budget-minded, again, advantage AMD.
The only way AMD loses out is if the memory manufacturers cave, and start producing tons of RDRAM at a somewhat reasonable price. It’s a risk, but a reasonable one to take.
Of course Via loves DDR; it has to make something in the future. Intel is suing Via because it doesn’t think Via has the right to make P6-compatible motherboards. Intel is probably wrong here.
However, with one likely exception (ServerWorks), Intel has not been granting rights to build Willamette-compatible motherboards, and if Via decides to do that anyway, my impression is that Via will be legally wrong. So this is not “who cares what Intel does, we’ll wait for Via to make a Willamette DDR board.”
The OEM Refs
None of the major combatants actually make computers (at least not officially). Who makes them and has to sell them? The OEMs.
The OEMs really don’t care about either Intel’s or the memory establishment’s agendas. So long as the machines are reasonably reliable and can be marketed as something wonderful, the OEMs couldn’t care less what memory goes in.
What they are concerned about is price. They can tolerate RDRAM costing somewhat more than other solutions so long as they’re packing it into Willys, but not a whole lot more. More than a couple hundred dollars’ difference, and AMD machines start looking really good. Since AMD can’t supply enough processors to replace Intel, the OEMs will be howling for cheaper alternatives; whether that means cheaper RDRAM or some other solution does not matter. Remember, this time there is no 440BX board to fall back on. Intel may well have non-RDRAM contingencies in the works just to be on the safe side, but it knows that if it rolls over fast, it can kiss RDRAM goodbye as a general desktop standard.
A Silicon Stand-Off
Intel makes most of the processors; the memory manufacturers make the RAM. If Intel makes Willamettes and only RDRAM motherboards for them, and the memory folks don’t make RDRAM, Intel can’t sell processors.
Intel plans to ramp up Willamette production a lot faster than it has led people to believe. Apparently, .18 micron Willamettes do very well indeed from a manufacturing standpoint, better than expected. So expect a big ramp-up in the first quarter of next year, if not a bit sooner.
This leaves Willamette very vulnerable to memory shortages. If it’s supposed to be the standard Intel chip by the end of the first quarter, something or someone has to give.
However, if Intel sticks to RDRAM long enough, it might not be able to sell processors, but the memory makers can’t sell memory, either. And Intel has deeper pockets than they do (outside of maybe Samsung).
I really doubt it would come to that, but if both sides are stubborn enough, that’s where it ends up: a contest of deep pockets and nerves.
What This Fight Will Look Like
The OEMs will be the ones who officially decide this fight. The memory establishment has made, “we’ll do whatever our customers want” into a mantra. Of course, if they keep telling their customers, “$500 for a stick of RDRAM, please,” this is hardly giving the OEM customer much of a choice. So Intel’s job is to convince the customers that Willamette with RDRAM is what they want, and place very, very big orders for it. This is something the OEMs will have to be persuaded to do during the summer and fall of this year, and it won’t happen until they can get reasonable RDRAM prices.
Intel must break this informal anti-RDRAM coalition, and the obvious target is Samsung. It’s the biggest memory manufacturer in the world, led by H.W. Lee, a man with a track record of being bolder than most in this industry. He’s already moved Samsung far more into the RDRAM corner than any of the other memory manufacturers. If he can get a sweetheart RDRAM deal from Intel and Rambus, he’s the most inclined to take it, but it will have to be really good for Samsung, and Intel and Rambus must realize they don’t have much leverage. It could be a big stock investment; it could be a next-to-nothing royalty deal, maybe it will be indirect payments and guarantees for additional costs. Whatever it is, it’s going to cost them, and Intel/Rambus might not want to pay his price.
However, if they do, and can get Samsung to convert a very big chunk of its production over to RDRAM, and OEMs feel safe buying RDRAM systems because of that (no doubt very public) announcement, it is likely to have a snowball effect, and it will be tough for the remaining manufacturers to hold their ground.
If Intel and Rambus won’t pay the price, and if the memory manufacturers do not break ranks and won’t increase production and reduce prices to make Willamette/RDRAM systems competitive, then we’ll see how much nerve Intel has. If they don’t flinch at the prospect of millions of unsold Willamettes early next year, and a war of attrition in which their profits and stock price will get ravaged, they still could win. Do they have that much nerve? I doubt it.
So it will be the Dells and Compaqs and Hewlett-Packards and Gateways and the like who, in what will no doubt be tortuous negotiations, will officially announce the winner. This will be won or lost behind the scenes based on the resources and nerves of the competitors.
- Technical issues are a minor factor in this equation for the average desktop.
- RDRAM will probably always cost more than DDR, but (provided that yields of PC800 can be brought to reasonable levels) it should not cost inherently a great deal more than DDR once mass-produced. However, there is considerable doubt these circumstances will come to pass.
- Intel and the memory manufacturers have incompatible agendas. On top of that, the memory manufacturers really don’t like Rambus.
- We do not know, and will not know, whether RDRAM actually has significant performance advantages until we can test a Tehama motherboard. Current cost/benefit analysis precludes the use of RDRAM in Coppermine-based systems.
However, very preliminary signs indicate that the current situation of little benefit at substantial additional cost will shift to some benefit at somewhat less additional cost. It is too soon to determine whether the additional benefit will be worth the additional cost for most, since we don’t know the value of either, nor will we before the fall at the earliest.
- Even if the Willamette/Tehama combination proves better than a DDR platform, cost/benefit will remain critical. However, it would be foolish to assume that RDRAM cannot become reasonably price-competitive. It would be just as foolish to assume that price-competitiveness is inevitable.
- If RDRAM production ramps up to at least the degree needed to initially supply Willamette, and prices are at least somewhat competitive with DDR, RDRAM stands a good chance of becoming the general desktop standard. If these conditions are not met, this is very unlikely to happen.