Denali Blog

Low-Power Memory Subsystems Imperative

10 Jul 2009 • 6 minute read
The figure below was put forth at the recent Denali MemCon, in a speech by Samsung's Dr. Sylvie Kadivar.

[Figure: from Dr. Sylvie Kadivar's Samsung presentation at Denali MemCon]
Memory and memory subsystems (MSS), long accused of bottlenecking system performance and 'throttling' the MPU with their high latencies and addressing limitations, now find themselves cast as the "bad boy of power consumption" as well. Other server system elements have made great strides in power reduction, but at the end of the day, memory 'owns' more of the system power budget than any other element. Server farms, until recently a badge of 'advanced internet evolution', are now in the stink pen insofar as energy consumption is concerned.

And though steps are underway to reduce MSS power, after all is said and done, 'It's the Memory Subsystem, Stupid'.

For most memory designs, too, cost reduction (i.e., die size reduction) remains the foremost objective, though, to be sure, power has moved closer to the top of the design objectives list over the past few years. For 50nm-node products, the main benefit appears to be productivity: moving from 65nm to 50nm, plus the evolving benefits of superior designs, gives 2x the die per wafer and more performance. Ceteris paribus, however, power comes down only fractionally, unless the 65nm design ran at 1.5V and the 50nm design at 1.35V.

The tools for reducing memory power consumption, with some liabilities for performance and build cost, are well understood, though many possibilities also lie in wait. Reducing operating voltage, as has been the fashion recently, is pure goodness: the high-performance speed bins still seem within reach, and the user gets a 20% power saving almost for free...double that once reduced air conditioning and assorted system build costs are counted.

But, one might ask, if going from 1.5V to 1.35V is such low-hanging fruit, why not take one more step without pause, to 1.2V, and get double the benefit? Even 1.2V seems to be nothing more than an industry convention: 1.0V and even 0.9V operation has been demonstrated in test vehicles and seems fully achievable...though maybe not production-worthy for all the end markets that DRAMs serve.
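
To put rough numbers on those voltage steps: CMOS dynamic power scales roughly with the square of the supply voltage, so the saving at each proposed rail can be sketched in a few lines (a back-of-the-envelope model that ignores static and frequency-dependent terms):

```python
# Back-of-the-envelope: CMOS dynamic power scales roughly with V^2.
# Relative DRAM power at each proposed rail, normalized to 1.5V.
BASELINE_V = 1.5

for v in (1.5, 1.35, 1.2, 1.0, 0.9):
    relative = (v / BASELINE_V) ** 2
    print(f"{v:4}V: {relative:4.0%} of 1.5V power ({1 - relative:4.0%} saving)")

# 1.35V -> ~19% saving (the 'almost free' 20% above);
# 1.2V  -> ~36% saving, roughly double the benefit.
```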

Should more finely segmented arrays go back on the table as a power-reduction option? For complex server DIMMs, with 36 to 72 chips on each, is there a place for 'intelligent design', with more informed and advanced control of the elements that make up DRAM power consumption...refresh, chip and array select, power-down modes?
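
As a hedged sketch of how those elements might stack up on a heavily populated DIMM: the per-chip figures below are illustrative assumptions, not datasheet values, but they show why the idle chips' standby and refresh draw adds up, and what finer-grained select and power-down control could claw back:

```python
# Illustrative power budget for a 36-chip server DIMM. Every per-chip
# number below is an assumption for illustration, not a datasheet value.
CHIPS_PER_DIMM = 36
RANK_WIDTH = 9            # assumed x8 chips + ECC per rank

# Assumed per-chip power components, in watts:
standby_w = 0.05          # background/standby, paid by every chip
refresh_w = 0.03          # refresh, paid by every chip
act_pre_w = 0.08          # activate/precharge, paid by selected chips
read_io_w = 0.12          # read/write + I/O, paid by selected chips

active_chips = RANK_WIDTH
idle_chips = CHIPS_PER_DIMM - active_chips

total_w = (active_chips * (standby_w + refresh_w + act_pre_w + read_io_w)
           + idle_chips * (standby_w + refresh_w))
print(f"One rank active: {total_w:.1f} W per DIMM")
# Finer-grained select / aggressive power-down would shrink the idle term.
```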

Mobile MSS power issues: In phones, and in the larger mobile segment, one of the great forces trying to squeeze DRAMs out comes from the power side: swapping NVM/PCM in for DRAM offers nonvolatility AND fast read-write. The MSS (meaning, in this case, LP DRAMs + NVM) constitutes up to 30% of standby power consumption in mobile phones, so it is a big hit on talk time, and makers are 'motivated' to get the power down or out.
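
A little arithmetic shows the leverage: taking the 30% standby share above at face value, and assuming all other loads stay constant, halving MSS power stretches standby time by roughly 18%:

```python
# Standby-time leverage of MSS power: battery life scales inversely
# with average draw. Simplified model; all other loads held constant.
MSS_SHARE = 0.30                  # MSS fraction of standby power (from the text)

for cut in (0.25, 0.50, 1.00):    # cut MSS power by 25%, 50%, 100%
    remaining = 1.0 - MSS_SHARE * cut
    print(f"MSS power cut {cut:.0%}: standby time x{1 / remaining:.2f}")

# Halving MSS power -> ~1.18x standby time; eliminating it -> ~1.43x.
```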

Laptops and Desktops: Though LP DRAMs of sorts have been around for many years, we know of no laptop computer that uses them. The memory subsystem is not the power hog there that it is in phones, and the savings from using 'LP' DRAMs over PC DRAMs are not enough to justify the added MSS cost of higher-priced DRAMs and 'LP' DIMMs. However, in just the past six months we have seen an early and quick take-up of DDR3 DRAMs in laptops, highlighting their lower-power capability. Some of this is 'green marketing' to concerned buyers, but the power savings are real, though perhaps of little consequence in terms of battery life. BUT laptops, which now make up more than half of PC unit shipments, and their low-cost cousins, netbooks, may be the force that tips the DRAM voltage discussion in favor of a quick move to 1.35V and then to 1.2V DRAMs. What's not to like?
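
A rough cut shows why the savings are real but of little consequence for battery life; the system and memory power figures here are assumptions for illustration, not measurements:

```python
# Why lower-power DRAM moves laptop battery life only slightly.
SYSTEM_W = 15.0      # assumed average laptop draw
MEMORY_W = 2.0       # assumed memory subsystem share of that draw
SAVING = 0.20        # assumed DDR3 power saving vs. its predecessor

new_system_w = SYSTEM_W - MEMORY_W * SAVING
gain = SYSTEM_W / new_system_w - 1.0
print(f"Battery-life gain: {gain:.1%}")   # ~2.7%: real, but small
```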

For desktops, cost is paramount, and they are almost all plugged into the wall. But having wagged the DRAM tail for so long, maybe they are about to lose control of the DRAM voltage and LP feature roadmap, be hoist by their own petard, and eventually have to make do with 'what the other guys define, and make'...just as others have so long had to dance to their tune.

With netbooks the fastest-growing computer segment by a wide margin, and something of a tabula rasa as far as legacy system design constraints go, maybe the netbook marketplace will entice new lower-power DRAM developments that then migrate UP into laptops and PCs. With 20M+ units a year, at 2GB+ per system, maybe DRAM vendors will try for power differentiation with new and innovative DRAM and MSS designs.

Servers: Server DIMMs, historically pushing the chip-density and DIMM-density envelopes, are the microelectronic version of what the server owner reads on the utility bill. Around the server industry today, 8GB and 16GB DIMMs are the new bleeding edge, and even when they use the newer DDR3 DRAMs, or better still DDR3L, servers remain huge power consumers, constraining system design and mandating larger power supplies, more heating and cooling capacity, more fan airflow, attention to 'hot spots', etc.
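
What the owner reads on that utility bill is easy to sketch; the DIMM count, per-DIMM wattage, cooling multiplier, and electricity rate below are all assumed figures for illustration:

```python
# Sketch of the utility-bill cost of server memory, per server per year.
# Every input is an illustrative assumption, not a measured value.
DIMMS_PER_SERVER = 16
WATTS_PER_DIMM = 5.0       # assumed average for a loaded R-DIMM
COOLING_FACTOR = 1.8       # PUE-style multiplier for power delivery + cooling
USD_PER_KWH = 0.10         # assumed commercial electricity rate

hours = 24 * 365
kwh_per_year = DIMMS_PER_SERVER * WATTS_PER_DIMM * COOLING_FACTOR * hours / 1000
print(f"{kwh_per_year:,.0f} kWh/year -> ${kwh_per_year * USD_PER_KWH:,.0f} per server")
```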

IBM goes from bipolar to CMOS in the early 1990s, dislocating people, fabs, and technology roadmaps: I am reminded of an anecdote I first heard when I went to IBM in 1995. Until IBM decided to reform its chip business in the early 1990s and let go of its 'backward' ways, all the close-in, high-speed system caches were bipolar...lagging the merchant industry's embrace of CMOS over high-speed bipolar by as much as a decade. But beyond the fab process problem of letting go of all its bipolar capacity in Fishkill, NY (idling its Dutchess County, NY, capacity as it closed down and eventually refit the fab to run CMOS), the major complaint came from the board designers, who had gone to great lengths to 'handle the bipolar power problem': heat sinks, fans, intricate thermal analyses, limits on chip layout density on the boards...system cost and performance constraints.

The 'memory industry' faces this kind of problem today. Whether a simple "bipolar-to-CMOS" solution lies as close at hand as it did for IBM in the early 1990s remains to be seen, but it seems improbable. More likely, the solution will be piecemeal, made up of many evolutionary changes over a long period of time. In servers, FB-DIMMs were a start three or four years ago, attacking the limited addressing capability of Registered DIMMs, and, for sure, some things were learned along the way. But FB-DIMMs gained no traction and a limited following; they were subsequently displaced by better (and denser) R-DIMMs, and the conversation has since moved on to the follow-on generation of server DIMMs, the so-called Load Reduced (LR) DIMMs. FB-DIMMs were maybe the last 'technical challenge' that was purely performance-driven, with little regard for power, which eventually came back to bite them in the AMB.

But the power problem, so long pushed aside in favor of 'more performance', is real, it is here, and it will get worse before it gets resolved.
