Why is it so difficult to interface with DRAMs?

24 May 2010 • 3 minute read
One of the maxims in the world of system design is that it has always been relatively hard to interface with DRAMs and make them work properly in all possible operational situations. This isn’t a new situation. It has been hard to interface with DRAMs since the day they were first introduced back in October 1970, when Intel rolled out the first commercial DRAM, the 1-kbit 1103. Intel’s 1103 was a PMOS chip that introduced the concept of refresh to system designers and used 16V logic levels that required level shifters. Despite the interfacing difficulties--which really weren’t so great compared to the interface requirements of the magnetic-core planes that the DRAM replaced--the 1103 needed only two years or so to single-handedly end the two-decade dominance of magnetic-core memory. Mostek introduced the first DRAM with multiplexed addressing--the 4-kbit MK4096--in 1973, and suddenly system designers needed to understand the intricacies of DRAM row- and column-access timing. The consequence of incorrectly interfacing to a DRAM has always been erroneous DRAM operation. Worse, DRAMs never tell the system when the control signals they’re receiving are out of spec. They simply fail to return good data. It has always been up to the system design team to suss out the problems.

Why is this so? Why are DRAMs so darn dumb? Like most things in engineering, the answer lies in economies of scale. Today, PCs and servers drive more than 80% of DRAM chip sales volume, so whatever PC and server designers need in a DRAM pretty much determines what most manufactured DRAM chips will look like. And what do PC and server designers need in a DRAM? Cheap storage; the cheaper, the better, because PCs and servers use a lot of DRAM, and DRAM cost therefore has a big influence on system cost. Today’s PCs, for example, team 16 or 32 DRAM chips with one processor and one memory controller. So it makes far more sense, economically, to concentrate as much of the memory-subsystem interface complexity as possible in the one DRAM controller rather than in the 16 or 32 memory chips connected to it. That’s why DRAMs started out dumb and have stayed that way.

For a long time after the microprocessor’s introduction in 1971, DRAM control was implemented outside of the processor chip, with a few memorable exceptions. One of the most memorable was the DRAM refresh logic built into the original Z80 microprocessor, introduced in 1976. The Z80’s refresh logic removed the need to perform DRAM refresh as a software routine or to implement it in logic external to the processor. Unfortunately, the refresh scheme designed into the Z80 was tailored specifically to the DRAMs of the day: the processor generated a 7-bit refresh address that looks laughably small today. Baking a non-programmable refresh scheme into any memory controller, whether or not the controller is part of the processor, no longer makes much sense. DRAM technology evolves quickly, so DRAM controllers must be nimble. For example, today’s SDRAMs have an optional auto-refresh mode, which may or may not be an attractive feature depending on the bandwidth and power goals for a specific memory subsystem. A good SDRAM controller will be able both to optimally insert refresh cycles and to use auto-refresh, depending on the specific situation.
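
As a rough illustration of the scheduling arithmetic involved, here is a minimal sketch in C that counts controller clocks and flags when a refresh command falls due. The numbers are assumptions for illustration only: a DDR2/DDR3-class average refresh interval (tREFI) of roughly 7.8 microseconds and a 400 MHz controller clock, neither taken from any particular data sheet.

    /* Minimal sketch: deciding when a refresh command is due.
     * Assumed values for illustration: tREFI ~7.8 us (typical of
     * DDR2/DDR3-class parts at normal temperature) and a 400 MHz
     * controller clock. Not taken from any specific data sheet. */
    #include <stdio.h>

    #define TREFI_PS       7800000u   /* assumed average refresh interval, ps */
    #define CLK_PERIOD_PS     2500u   /* 400 MHz controller clock = 2.5 ns    */

    int main(void)
    {
        /* Controller clocks between refresh commands. */
        unsigned clocks_per_refresh = TREFI_PS / CLK_PERIOD_PS;
        unsigned counter = 0;
        unsigned long cycle;

        printf("issue a REFRESH roughly every %u clocks\n", clocks_per_refresh);

        /* Toy main loop: count clocks and flag when a refresh falls due.
         * A real controller would typically defer a few refreshes around
         * in-flight traffic rather than issuing them on a rigid schedule. */
        for (cycle = 0; cycle < 10000; cycle++) {
            if (++counter >= clocks_per_refresh) {
                printf("cycle %lu: refresh due\n", cycle);
                counter = 0;
            }
        }
        return 0;
    }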

As DRAMs have grown larger and migrated from the older asynchronous RAS/CAS interfaces to today’s synchronous interfaces, whole new sets of challenges and problems have emerged. Most of these are associated with the synchronous interface, which evolved in support of the growing L1 and L2 SRAM caches found in modern PC and server processors. The inclusion of these caches shifts the DRAM access-speed emphasis from individual access-cycle latency to burst bandwidth. DDR3 memory accentuates this trend with its exclusive use of 8-transfer bursts. In addition, there are restrictions on command timing that the memory controller must enforce on its own, because the DRAM gives no indication of a command-timing violation. These timings are simply data-sheet specs that must be designed into the memory controller.
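
To make that bookkeeping concrete, the sketch below (again in C) models a controller-side legality check for a few of those data-sheet timings on a single bank. The tRCD, tRP, and tRAS values are assumptions, roughly representative of DDR3-class parts expressed in controller clocks; a real controller takes these, plus many more (tRRD, tFAW, tWR, and so on), from the specific device’s data sheet and enforces them for every bank.

    /* Minimal sketch of the bookkeeping a controller must do to respect
     * data-sheet command timings. The DRAM itself never reports a
     * violation -- the controller has to track the history of each bank.
     * Assumed, roughly DDR3-class values in controller clock cycles. */
    #include <stdio.h>

    #define T_RCD  5   /* ACTIVATE to READ/WRITE delay (assumed)      */
    #define T_RP   5   /* PRECHARGE to ACTIVATE delay (assumed)       */
    #define T_RAS 15   /* minimum ACTIVATE to PRECHARGE time (assumed) */

    struct bank_state {
        long last_activate;   /* cycle of most recent ACTIVATE, -1 if none  */
        long last_precharge;  /* cycle of most recent PRECHARGE, -1 if none */
    };

    /* An ACTIVATE is legal once tRP has elapsed since the last PRECHARGE. */
    static int activate_legal(const struct bank_state *b, long now)
    {
        return b->last_precharge < 0 || now - b->last_precharge >= T_RP;
    }

    /* A READ (or WRITE) is legal once tRCD has elapsed since the ACTIVATE. */
    static int read_legal(const struct bank_state *b, long now)
    {
        return b->last_activate >= 0 && now - b->last_activate >= T_RCD;
    }

    /* A PRECHARGE is legal once tRAS has elapsed since the ACTIVATE. */
    static int precharge_legal(const struct bank_state *b, long now)
    {
        return b->last_activate >= 0 && now - b->last_activate >= T_RAS;
    }

    int main(void)
    {
        struct bank_state bank = { .last_activate = -1, .last_precharge = -1 };

        bank.last_precharge = 95;
        printf("ACTIVATE at 100 legal?  %d\n", activate_legal(&bank, 100));   /* 1 */
        bank.last_activate = 100;
        printf("READ at 103 legal?      %d\n", read_legal(&bank, 103));       /* 0: tRCD not met */
        printf("READ at 106 legal?      %d\n", read_legal(&bank, 106));       /* 1 */
        printf("PRECHARGE at 110 legal? %d\n", precharge_legal(&bank, 110));  /* 0: tRAS not met */
        printf("PRECHARGE at 115 legal? %d\n", precharge_legal(&bank, 115));  /* 1 */
        return 0;
    }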

All of this history boils down to the need for fairly sophisticated SDRAM memory control. Improper command sequences or command-sequence timing results in erroneous data and data loss. The only way to ensure rock-solid SDRAM operation is to manage every aspect of the SDRAM with a sufficiently capable memory controller that issues properly ordered commands with the right timing, paired with a proven SDRAM PHY that delivers exactly the signal timing the SDRAM requires at the multi-hundred-MHz clock rates used by today’s DDR2 and DDR3 SDRAMs.
