Priyab

Community Member

Tags: Design IP, Memory, DDR4, flash, memory IP, DDR, memories

Three New Memory Trends in Enterprise Data Centers

22 Feb 2017 • 5 minute read

You might have seen the graph below showing the increase in monthly internet traffic around the world. Ever wondered what is causing it?

[Figure: Growth in global monthly internet traffic]

If you think all that traffic is simply due to people binge-watching House of Cards on Netflix, or uploading photos of their dinners to Instagram, think again. Most of this increase is driven by cloud computing. There was a time, in the not-so-distant past, when every company ran its own IT shop in-house, whether for economic or security reasons. That trend is slowly going away, even at big companies like Coca-Cola, which are outsourcing their entire IT infrastructure to providers like Amazon Web Services. Rather than keeping all data and applications in one shop, in one building, on servers with WAN connections, the traffic now lives outside the company in the cloud. Today most company locations must communicate with a large server farm at a remote location, and that traffic is what is driving the massive need for bandwidth on the Internet.

Needless to say, the memory in these servers and data centers also needs to respond to the requirements of all this traffic going in and out of the data center. The primary requirements for these server memories are higher bandwidth and higher capacity. In an enterprise application, bandwidth is always a requirement, since most data center applications need to process huge amounts of data in real time, or with minimal latency. Capacity is a big concern because the more DRAM you can put into a server, the more performance you get and the more application workload you can take on. So how is the memory industry addressing these needs? The primary solution has been the evolution of the DDR protocol, with DDR4 pushing data rates up to 3200 MT/s. DDR5 is already being developed, and those memories will start deploying around 2019. But in addition, there has been a need for specialized memory technologies to serve the enterprise market.
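
As a quick sanity check on what those data rates mean, here is a back-of-the-envelope calculation in Python. It is a minimal sketch that assumes a standard 64-bit DDR data bus and ignores ECC bits, refresh, and protocol overhead, so real sustained bandwidth will be lower.

def ddr_peak_bandwidth_gbs(transfer_rate_mtps, bus_width_bits=64):
    """Peak theoretical bandwidth in GB/s for one DDR channel."""
    bytes_per_transfer = bus_width_bits / 8
    return transfer_rate_mtps * 1e6 * bytes_per_transfer / 1e9

# DDR4-3200: 3200 MT/s on a 64-bit channel -> 25.6 GB/s peak per channel.
print(ddr_peak_bandwidth_gbs(3200))        # 25.6
# A four-channel server memory controller tops out around 102.4 GB/s peak.
print(4 * ddr_peak_bandwidth_gbs(3200))    # 102.4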

High Bandwidth Memory (HBM)

One of the new memory trends in data centers is High Bandwidth Memory (HBM). HBM is a high-performance RAM interface for 3D-stacked DRAM, in which up to eight DRAM dies are stacked on a base logic die and interconnected by through-silicon vias (TSVs) and microbumps. HBM offers high bandwidth and low latency because it uses a very wide parallel connection and avoids the overhead of serialization and deserialization. That density of signaling between the DRAM and the SoC requires a very fine-grained interconnect, which is hard to achieve with a typical package substrate, so a silicon interposer is typically used. HBM offers a tremendous amount of bandwidth: a few HBM stacks together can deliver terabits per second. But because it all sits in the same package as the SoC, capacity is fairly limited; systems can use one or more HBM stacks, but typically no more than two per side. A great application is a packet buffer in an enterprise networking chip, where you can take advantage of the high bandwidth and low latency and don't need as much memory density as a high-performance server does.
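
To see where HBM's "wide but relatively slow" advantage comes from, here is an illustrative Python comparison against a single DDR4 channel. The pin counts and per-pin rates are representative figures rather than numbers from a specific datasheet.

def peak_gbs(pins, gbps_per_pin):
    """Peak bandwidth in GB/s for a parallel memory interface."""
    return pins * gbps_per_pin / 8

# One HBM2-class stack: 1024 data pins at ~2 Gb/s each -> ~256 GB/s.
print(peak_gbs(1024, 2.0))      # 256.0
# One DDR4-3200 channel: 64 data pins at 3.2 Gb/s each -> 25.6 GB/s.
print(peak_gbs(64, 3.2))        # 25.6
# Four stacks on an interposer approach 1 TB/s of aggregate bandwidth.
print(4 * peak_gbs(1024, 2.0))  # 1024.0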


Hybrid Memory Cube (HMC)

Another memory trend that tries to solve the same bandwidth problem is the Hybrid Memory Cube (HMC). It is a very high-bandwidth interconnect with moderate latency, because it is a serial interface that goes from an SoC to an external device. It is mainly used in compute applications like high-performance computing (HPC), where sensitivity to latency is low and bandwidth improvements are really useful. Both HBM and HMC are kept from wider use across server applications by their cost. The cost per bit for these memories is higher than for standard DRAM because the volumes are lower and the parts are specialized. In addition, the cost of implementing them alongside the SoC is higher because of the more expensive packaging required.
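
For contrast with HBM's wide parallel bus, the sketch below shows how a SerDes-based link such as HMC gets its bandwidth from a small number of very fast lanes. The lane count and lane rate here are illustrative assumptions, not a specific HMC configuration, and the serialization/deserialization step on every access is where the extra latency comes from.

def serial_link_gbs(lanes, gbps_per_lane):
    """Peak one-direction bandwidth in GB/s for a serial link."""
    return lanes * gbps_per_lane / 8

# A 16-lane link at 15 Gb/s per lane -> 30 GB/s in each direction.
per_link = serial_link_gbs(16, 15.0)
print(per_link)       # 30.0
# Four such links give 120 GB/s each way from a narrow pin budget,
# at the cost of SerDes latency that a parallel bus like HBM avoids.
print(4 * per_link)   # 120.0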


Flash Memory

Another new memory trend in servers is the increased use of Flash memory. A few years ago, the memory hierarchy was fairly straightforward. The processor had caches close to it, typically implemented in SRAM and often on the same die or in the same package. Further from the processor was the main memory, built from DRAM. Finally, the system had a large-volume data store on a magnetic disk. The closer the memory is to the processor, the smaller its capacity and the faster its access time. As memory moves away from the processor, its capacity increases but its access times get longer. The challenge for system designers is to manage this hierarchy for optimum performance: keep the data an application needs often as close to the processor as possible, and keep things that are needed only occasionally far away on disk.

Flash memory first made its entrance in the storage hierarchy. Adding a solid-state drive (SSD) built from Flash in front of the magnetic disk resulted in a substantial increase in system performance, because access times for Flash are orders of magnitude faster than for a spinning disk.
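
The snippet below prints the resulting hierarchy with rough, order-of-magnitude capacities and access latencies. The figures are illustrative and vary widely between systems; they are only meant to show the size of each step.

# Illustrative memory/storage hierarchy; numbers are rough orders of magnitude.
hierarchy = [
    # (level,             typical capacity,            rough access latency)
    ("SRAM cache",        "tens of MB",                "~1-40 ns"),
    ("DRAM main memory",  "tens to hundreds of GB",    "~60-100 ns"),
    ("Flash SSD",         "hundreds of GB to TB",      "~100 us"),
    ("Magnetic disk",     "multiple TB",               "~5-10 ms"),
]

for level, capacity, latency in hierarchy:
    print(f"{level:<18} capacity: {capacity:<26} latency: {latency}")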

The other place Flash can be used is as a potential replacement for DRAM. One technology being considered for main-memory replacement is 3D XPoint, where the cost per bit is projected to be lower than DRAM's, capacity is greater, and performance is theoretically close enough. Some computing workloads benefit from a much larger memory even if it is not as fast. This has given rise to new solutions like NVDIMM, where Flash is placed in the same DIMM socket as DRAM. A more practical solution is a hybrid, with DRAM in the main memory subsystem and Flash used to offload memory contents in the event of a power failure. The challenge is to manage what goes where in order to maximize the performance of the system.
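
As a toy illustration of that "what goes where" problem, here is a minimal sketch of a two-tier placement policy: frequently touched pages stay in a small DRAM tier, and cold pages are demoted to a larger Flash tier. The capacities, page names, and eviction rule are entirely hypothetical and far simpler than what a real memory controller or operating system would do.

from collections import Counter

DRAM_CAPACITY_PAGES = 4          # tiny capacity, purely for illustration
access_counts = Counter()
dram, flash = set(), set()

def touch(page):
    """Record an access and keep the hottest pages resident in DRAM."""
    access_counts[page] += 1
    if page in dram:
        return
    flash.discard(page)
    dram.add(page)
    # If DRAM is over capacity, demote the least-accessed resident page.
    while len(dram) > DRAM_CAPACITY_PAGES:
        coldest = min(dram, key=lambda p: access_counts[p])
        dram.discard(coldest)
        flash.add(coldest)

for p in ["a", "b", "a", "c", "a", "d", "e", "f", "a"]:
    touch(p)

print("DRAM :", sorted(dram))    # hot working set stays close to the CPU
print("Flash:", sorted(flash))   # cold pages demoted to the larger tier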

Memory solutions for enterprise applications in data centers are evolving along with the complex demands of current applications like cloud computing. Cadence has the memory interface solutions you need to design the datacenters of the future. Visit us to find out more details.

