Density, power, bandwidth, latency - all of these memory attributes will improve during the next few years, according to panelists at the MemCon 2012 conference Sept. 18. But don't underestimate the challenges, don't expect to replace NAND and DRAM, and forget about the dream of a "universal" memory that solves every problem, panelists said.
MemCon, organized by Cadence this year, also included three keynotes, two tracks of breakout sessions, and exhibits. A first look into the future of memory came in a keynote speech by Scott Graham, general manager of Hybrid Memory Cube (HMC) technology at Micron, who spoke about 3D HMC technology.
The "Future Memories" panel was moderated by memory expert Jim Handy, analyst at Objective Analysis, and it included Chevallier of Rambus, Lipman of Sidense, Michael Miller of MoSys, and Bill Gervasi of Discobolous.

Michael Miller of MoSys speaks at the MemCon Future Memories panel
Introducing the panel, Handy noted that "basically memories are hitting a brick wall. We're running into scaling problems, bandwidth problems, and power problems. People are working to get past these problems, but it looks like they will probably cause some sort of a right turn in the industry."
Chevallier (Rambus): What Future Memories Have in Common
Future memories that are often discussed include Resistive RAM (RRAM), Magneto-resistive RAM (MRAM), and Phase-Change Memory (PCM). "The jury is still out" on which is best, Chevallier said. He noted that RRAM has not yet resulted in high-density memories, MRAM is hard to integrate, and PCM has high current density, limiting the possible applications. These new memories will probably not replace DRAM, he said. They will replace NAND eventually, although they still have a ways to go to catch up to NAND in terms of density and integration.
What all future memories have in common, Chevallier said, is the use of 2-terminal bit cells. They will also have smaller arrays, which is not a bad thing because designers can leverage them to get more parallelism. The end result will be higher density, lower power, and higher bandwidth memories.
Lipman (Sidense): Universal Memory as Likely as Extraterrestrials
Designers would love to have the speed of SRAM, the density of DRAM, and the non-volatility of flash and ROM in one standard CMOS process, Lipman said - but it's not going to happen. Chips will continue to use multiple memory technologies for the foreseeable future. "There is no magic bullet. I don't believe universal memory is on the horizon. You might consider the search for universal memory as sort of akin to the search for extraterrestrial intelligence - the odds of succeeding at either one are probably about the same."
Miller (MoSys): Challenges, Requirements, and Packaging Choices
Semiconductor scaling allows designers to put more inside a package every two years, but it poses some challenges, Miller observed. As devices shrink, the distance between them grows, producing latency challenges. We're pushing the limits of how far we can scale silicon, and will begin to experience quantum effects, which means that "what used to be smooth analog type things will now start to get choppy."
There are at least three different communications strategies as a result of packaging, Miller said. One is to place memory outside the system-on-chip (SoC) package, but this could require up to 1,000 pins in the near future. Another is to move memory inside the SoC package, or next to a memory stack on a silicon interposer, but "cost will be interesting and there are a lot of manufacturing aspects."
MoSys is working on a third approach - leaving memory off chip and using a high-speed serial interface to move data back and forth. "In 2013 you'll see something with 15 Gbit/second serial links," he said.
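To put the 15 Gbit/second figure in rough perspective, a back-of-the-envelope comparison against a wide parallel bus can be sketched as below. The bus width, per-pin data rate, and encoding overhead are illustrative assumptions, not figures from the talk:

```python
import math

# Rough comparison: wide parallel DRAM bus vs. high-speed serial lanes.
# All parallel-side numbers are assumed for illustration only.
parallel_bus_bits = 64             # a conventional DDR-style data bus
parallel_rate_gtps = 1.6           # 1.6 GT/s per pin (DDR3-1600 class)
parallel_gbps = parallel_bus_bits * parallel_rate_gtps   # 102.4 Gbit/s

serial_lane_gbps = 15              # per-lane rate cited by Miller
encoding_efficiency = 0.8          # assumed 8b/10b-style line-coding overhead
usable_per_lane = serial_lane_gbps * encoding_efficiency # 12 Gbit/s usable

lanes_needed = math.ceil(parallel_gbps / usable_per_lane)
print(lanes_needed)  # a handful of serial lanes can match a 64-bit bus
```

Under these assumptions, roughly nine serial lanes carry the same payload as the 64-bit parallel bus, which is why serial interfaces are attractive against pin counts that could otherwise approach 1,000.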
Fault tolerance will be one requirement for future memories. "You not only have to have BIST [built in self test], but you also have to have self-repair and self-healing. We have to start taking lessons from nature, which is that everything is going to degrade over time."
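The BIST-plus-self-repair idea can be shown in miniature: test every row, and remap any failing row to a spare. This is a toy software model under assumed names, not MoSys's implementation; real designs do this in on-die hardware:

```python
# Toy model of built-in self test (BIST) plus self-repair via spare rows.
# Class and method names are illustrative, not from any real product.
class RepairableMemory:
    def __init__(self, rows: int, spares: int, bad_rows=()):
        self.rows = rows
        self.bad = set(bad_rows)              # physically defective rows
        self.spares = list(range(rows, rows + spares))  # spare row addresses
        self.remap = {}                       # logical row -> spare row

    def bist_and_repair(self) -> bool:
        """March over every row; remap detected failures to spare rows."""
        for row in range(self.rows):
            if row in self.bad:               # write/read-back mismatch found
                if not self.spares:
                    return False              # more failures than spares
                self.remap[row] = self.spares.pop(0)
        return True

mem = RepairableMemory(rows=1024, spares=4, bad_rows=[17, 512])
assert mem.bist_and_repair()                  # both bad rows remapped
print(mem.remap)                              # {17: 1024, 512: 1025}
```

Self-healing over a device's lifetime extends the same idea: rerun the test periodically and consume spares as cells degrade.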
Gervasi (Discobolous): It's Better Than You Think
Introducing himself as the "optimist" on the panel, Gervasi said "the status quo is not as bad as you think it is." If you look at mainstream memory, he said, there are unbuffered memory models, registered memory models, load reduction models, and soldered-down options, with each solution chosen for a particular tradeoff of frequency, latency and capacity. Many system requirements can be met just by changing configurations, without any need to re-architect memory.
Gervasi expressed some skepticism about the Hybrid Memory Cube. First, it uses a SerDes interface, and SerDes introduce their own thermal and latency problems. Second, the overall circuit is going to be large, and it must contain the entire controller. "It is going to be an entire CPU on its own," he said. "This expects the system guy to hand a whole lot of control over to their memory supplier. I anticipate some interesting political battles in this area over the next couple of years." He predicted it will take 10 years to make 3D stacking cost competitive.
A Few Questions and Answers
Q: When do we have to make a pretty big change in the memory technology we use?
Lipman: I think we need to do it right now. Floating gate technologies in general are reaching limitations. I think a lot of floating gate technologies won't survive beyond around 45nm.
Q: What will be the limiting factor of success for RRAM over the next several years?
Chevallier: Integration and materials. The weirder the material, the harder the integration.
Gervasi: Manufacturability. Anybody can produce 100K chips, but it's a whole different beast to produce 80 billion chips. Resistive, phase change, any technology - it's how to make the transition from 100K to 80 billion.
Q: Bill [Gervasi] said it will take 10 years to get to 3D. Do other panelists agree?
Miller: Physically we can assemble these things now. The big challenge is the ecosystem and how you put these together. I tend to think it will take at least 5 years, or 10 for getting to price parity and the point where people have all the tools.
Gervasi: The fundamental problem is compound yield issues. If you have one die with 90% yield you can make a market out of it, but put two together and it's 81%, and put four together and you keep coming down the curve. I still think we're ten years away.
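Gervasi's compound-yield curve follows directly from the fact that every die in a stack must be good, so stack yield is the per-die yield raised to the die count. His 90% per-die figure, worked out for a few stack heights:

```python
# Compound yield of a die stack: all dice must be good, so the
# stack yield is the per-die yield raised to the number of dice.
def stack_yield(per_die_yield: float, num_dice: int) -> float:
    return per_die_yield ** num_dice

for n in (1, 2, 4, 8):
    print(f"{n} dice at 90% per-die yield -> {stack_yield(0.90, n):.1%}")
# 2 dice -> 81.0% and 4 dice -> 65.6%, the "down the curve" effect
```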
Q: Several speakers this morning said the biggest increase in data will be video. Do we need different memory architectures for handling video?
Gervasi: Video differs in a way that's pretty far skewed from other applications. It has to do with a read/write percentage profile. In networking, RAM has a purely 50/50 read/write access, where you're streaming a packet in and out. But for video, you're going to download one video one time and stream it to many customers, so you may have one write per 100,000 or million reads. These different profiles lead to different features we would need in a mainstream architecture.
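The gulf between those two access profiles is easy to quantify. A quick sketch of the write fraction for each workload (the 50/50 and 1-per-100,000 figures are from Gervasi's examples):

```python
# Write fraction of a memory access stream: writes / total accesses.
def write_fraction(writes: int, reads: int) -> float:
    return writes / (writes + reads)

# Networking: streaming a packet in and out gives a 50/50 profile.
print(f"networking: {write_fraction(1, 1):.2%} writes")
# Video delivery: one write, then ~100K streamed reads of the same data.
print(f"video:      {write_fraction(1, 100_000):.4%} writes")
```

A part optimized for a 50% write mix and one optimized for a roughly 0.001% write mix want very different bank, refresh, and port arrangements, which is Gervasi's point about divergent features.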
Q: Is adoption primarily a matter of technology-driven challenges or business model challenges?
Lipman: Business model challenges are more difficult to solve than the technology. As new technologies develop, how to promote and sell them becomes more difficult.
Chevallier: I agree the business challenge is very high. RRAM is trying to compete with NAND, and NAND has had 20 years of cost reduction. If you compete on cost alone, there's almost no way a new technology can match it. We've forecast the death of NAND for years and I must say, it's a very slow death.