You might have seen the graph below about the increase in monthly internet traffic around the world. Ever wondered what was causing it?
If you think all that traffic is simply due to people binge-watching House of Cards on Netflix, or uploading photos of their dinners to Instagram, think again. Most of this increase is driven by cloud computing. There was a time, in the not-so-distant past, when every company ran its own IT shop, whether for economic or security reasons. That is changing: even large companies like Coca-Cola are outsourcing their entire IT infrastructure to providers like Amazon Web Services. Rather than keeping all data communications in one shop, in one building, on servers with WAN connections, companies now push that traffic out to the cloud. Today most company locations must communicate with a large server farm at a remote location, and that traffic is what is driving the massive need for Internet bandwidth.
Needless to say, the memory in these servers and datacenters also needs to keep up with all the traffic going in and out of the datacenter. The primary requirements for these server memories are higher bandwidth and higher capacity. In an enterprise application, bandwidth is always a requirement, since most datacenter applications must process huge amounts of data in real time, or with minimal latency. Capacity is a big concern because the more DRAM you can put into a server, the more performance you get and the more application workload you can take on. So how is the memory industry addressing these needs? The primary answer has been the evolution of the DDR protocol, with DDR4 pushing data rates up to 3200 MT/s. DDR5 is already being developed, and those memories should start deploying around 2019. But in addition, specialized memory technologies have emerged to serve the enterprise market.
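The bandwidth side of that DDR evolution is simple arithmetic: transfers per second times bytes per transfer. Here is a minimal sketch; the helper name and the 64-bit channel width are illustrative assumptions, though 3200 MT/s is the nominal DDR4-3200 transfer rate:

```python
# Back-of-the-envelope peak bandwidth for a DDR channel.
# Figures are nominal JEDEC transfer rates, not measurements
# from any specific system.

def ddr_peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits=64):
    """Peak bandwidth in GB/s: transfers/s times bytes per transfer."""
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

# DDR4-3200: 3200 MT/s on a 64-bit channel
print(ddr_peak_bandwidth_gbs(3200))  # 25.6 GB/s per channel
```

Real sustained bandwidth is lower (refresh, bank conflicts, command overhead), but the peak figure is what the protocol generations are compared on.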
High Bandwidth Memory (HBM)
One of the new memory trends in data centers is High Bandwidth Memory (HBM). HBM is a high-performance RAM interface for 3D-stacked DRAM: up to eight DRAM dies are stacked on a base die that connects to the memory stack, and the dies are interconnected by through-silicon vias (TSVs) and microbumps. HBM offers high performance and low latency because of its very wide parallel connection, which avoids the overhead of serialization and deserialization. This density of signaling between the DRAM and the SoC requires a very fine-grained interconnect, which is hard to achieve with a typical package substrate, so a silicon interposer is typically used. This memory offers a tremendous amount of bandwidth: several HBM devices together can deliver terabits per second. But because it all sits in the same package as the SoC, capacity is fairly limited. Systems can use one or more HBM stacks, but there will typically be no more than two on one side. A great application is therefore a packet buffer in an enterprise networking system, where you can take advantage of the high bandwidth and low latency and don't need the memory density that a high-performance server requires.
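The wide-parallel-interface argument can be made concrete with the same kind of arithmetic. In the sketch below, the 1024-bit stack interface is the nominal HBM figure, while the per-pin rates and stack counts are example assumptions chosen for illustration:

```python
# Why a wide parallel interface pays off: peak bandwidth scales with
# pin count times per-pin rate. The 1024-bit interface is the nominal
# HBM stack width; per-pin rates and stack counts are illustrative.

def peak_bandwidth_gbs(pins, gbps_per_pin):
    """Peak bandwidth in GB/s for a parallel interface."""
    return pins * gbps_per_pin / 8  # bits/s -> bytes/s

# One HBM stack: 1024 data pins at 1 Gb/s per pin
print(peak_bandwidth_gbs(1024, 1.0))      # 128.0 GB/s per stack
# Four stacks at 2 Gb/s per pin (HBM2-class rates)
print(4 * peak_bandwidth_gbs(1024, 2.0))  # 1024.0 GB/s aggregate
```

Compare that with the ~25 GB/s of a single DDR4 channel: the win comes from running thousands of slow pins in parallel rather than tens of fast ones, which is exactly why the interposer's fine-grained interconnect is needed.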
Hybrid Memory Cube (HMC)
Another memory trend that tries to solve the same bandwidth problem is the Hybrid Memory Cube (HMC). It is a very high-bandwidth interconnect with moderate latency, because it is a serial interface that runs from an SoC to an external device. It is mainly used in compute applications such as High Performance Computing (HPC), where sensitivity to latency is low and bandwidth improvements can really be useful. Both HBM and HMC are kept out of broader server use by their cost: the cost per bit is higher than for standard DRAM because volumes are lower and the parts are specialized, and implementing them on the SoC is also more expensive because of the packaging required.
Another new memory trend in servers is the increased use of Flash memory. A few years ago, the memory hierarchy was fairly straightforward. The processor had caches close to it, typically implemented in SRAM and often on the same die or in the same package. Further away from the processor was the main memory in DRAM. Finally, the system had a large-volume data store on a magnetic disk. The closer the memory is to the processor, the smaller its capacity and the faster its access time; as memory moves away from the processor, capacity increases but access times get longer. The challenge for system designers is to manage this hierarchy so that optimum performance is maintained, keeping data that the application needs often as close to the processor as possible and data needed only occasionally far away on disk.
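That trade-off between hit rate and access time is often summarized as an average memory access time. A minimal model of the hierarchy described above, with purely illustrative latencies and hit rates (not measurements from any real system):

```python
# Average memory access time across a cache/DRAM/disk hierarchy.
# Latencies and hit rates are illustrative assumptions; the point is
# that even a tiny fraction of accesses reaching the slow level
# dominates the average.

def amat_ns(levels):
    """levels: list of (hit_rate, latency_ns), ordered fast -> slow.
    The last level should have hit_rate 1.0 (it always hits)."""
    total, reach = 0.0, 1.0
    for hit_rate, latency in levels:
        total += reach * hit_rate * latency  # accesses served here
        reach *= (1.0 - hit_rate)            # accesses that fall through
    return total

hierarchy = [
    (0.95, 1),           # on-die SRAM cache, ~1 ns
    (0.99, 100),         # DRAM main memory, ~100 ns
    (1.00, 10_000_000),  # magnetic disk, ~10 ms
]
print(amat_ns(hierarchy))  # ~5006 ns: the 0.05% of accesses
                           # that reach disk dominate everything
```

This is why inserting a faster tier in front of the disk, or growing the amount of memory near the processor, pays off so disproportionately.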
Flash memory initially made its entrance in the storage hierarchy. Adding a Solid State Drive (SSD) built from Flash in front of the magnetic disk resulted in a substantial increase in system performance: access times for Flash are orders of magnitude faster than for a spinning disk.
The other place where Flash can be used is as a potential replacement for DRAM. One emerging non-volatile memory technology being considered for main memory is 3D XPoint, where cost per bit is projected to be advantageous, capacity is greater than DRAM's, and performance is theoretically close enough to it. Some computing workloads can benefit from a much larger memory even if it is not as fast. This has given rise to new solutions like NVDIMM, where Flash is placed in the same DIMM socket as DRAM. A more practical solution is a hybrid, with DRAM in the main memory subsystem and Flash used to offload memory contents in the event of a power failure. The challenge is to manage what goes where in order to maximize the performance of the system.
Memory solutions for enterprise applications in data centers are evolving along with the complex demands of current applications like cloud computing. Cadence has the memory interface solutions you need to design the datacenters of the future. Visit us to find out more details.