
Author: Sanjive Agarwala, Community Member
Tags: featured, lpddr5x, Denali

LPDDR5X: Why Mobile Memory Matters More than Ever

22 Dec 2022 • 5 minute read

[Image: smartphone teardown view]

Since its ratification in the mid-2000s, low-power DDR (LPDDR) has been fundamental in driving more complex use cases in mobile devices sensitive to area and power. Consider that in 2007, a smartphone camera was nothing more than a sensor tacked on the back for grainy, low-resolution selfies.

Now, smartphone cameras are the epitome of edge AI capability, secure enough for biometric authentication and powerful enough for computational photography: adding complex 3D effects, background blur, or pairs of bunny ears to photos and even live video in real time.

Each new generation of LPDDR has continued to target the must-have applications of its era, as the way we use mobile devices has evolved. LPDDR5 in 2020 delivered the memory bandwidth needed for 5G, low-level edge artificial intelligence (AI), advanced mobile gaming, and seamless 4K video streaming to smartphones, tablets, and laptops designed for all-day working.

Complex edge AI use cases

In the two years since, it’s felt like new, more complex edge AI use cases have been proposed almost daily. These models require ever greater compute capacity from power-constrained edge devices, and they demand that decisions and outcomes be delivered more quickly, too.

Processors have continued to scale with smaller process nodes, more cores, and new packaging technologies such as 3D-IC. Networking is faster than ever thanks to Wi-Fi 6 and 5G. Memory, however, has often ended up being the bottleneck due to physical constraints.

Yet memory has been the lifeblood of all the new features and applications we’ve experienced in our mobile devices since the smartphone revolution began. Even the speed of basic tasks such as web browsing comes down to how fast a device’s CPU is able to store and retrieve information from memory.

All of which is why the availability of Cadence LPDDR5X memory interface IP will be welcome news to our partners. Designed for data-hungry 5G and AI applications, it delivers speeds up to 33% faster than our highly successful LPDDR5 IP. We know our partners are under pressure to keep up with the incredible range of new edge AI use cases, not only in mobile devices but also in probably the largest mobile device any of us are ever likely to own: the car.

Automotive applications

As the automotive industry pivots towards software-defined vehicles (SDVs), the need for low-latency, high-bandwidth communication between the various subsystems within a car is growing rapidly. Compute is therefore vying with the powertrain for energy, and its efficiency is increasingly measured in performance per mile.

Cadence is already speaking to several tier 1 automotive manufacturers about the benefits of building LPDDR5X into safety-critical advanced driver assistance systems (ADAS) and autonomous driving hardware.

These systems must be capable of making critical real-time safety decisions based on massive amounts of sensor data, all within as small an energy footprint as possible.

Embracing edge AI

The need for data to be processed locally in automotive applications needs no explanation: an autonomous vehicle must be capable of driving itself without relying on a remote connection. But why is it so important that consumer mobile devices can perform AI compute at the edge rather than relying on cloud smarts and low-latency 5G?

Consider that every day, trillions of megabytes of AI-actionable data are generated by the world’s mobile devices. A relatively limited amount is processed on the device using edge AI, while a significant portion is sent to the cloud.

But as the complexity of these AI applications grows, so does the amount of data that needs uploading. Today, 77% of the world’s downstream bandwidth is taken up by video data, delivered over a network infrastructure optimized to move data from the center outwards. The influx of data required to process complex AI use cases in the cloud stands to reverse that flow and severely strain global networks in the process.

And when we expand the picture from mobile devices employing edge AI to other battery-powered IoT devices such as smart cameras or mobile healthcare tools, there’s also the issue of data value. When the useful life of an insight is measured in milliseconds, any delay in processing reduces its value, potentially to zero.

Even accounting for 5G’s near-zero latency, how quickly data can be delivered is ultimately governed by propagation delay: a signal can travel no faster than the speed of light, and over real network links it typically travels slower. To put a number on it, every 100 miles the data travels introduces roughly 0.82ms of latency.

That may not sound disastrous. But data center locations vary hugely, and networks are often spread over great geographic distances; the cloud can be many thousands of miles from the edge. Latency is further compounded by the volume of data and the number of routers and other pieces of network hardware it has to pass through. It all adds up, quickly, and that’s before the data can be analyzed, inference performed, and the result sent back to the edge device.
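
To make that concrete, here is a minimal back-of-the-envelope sketch in Python. The hop count, per-hop delay, and cloud processing time below are illustrative assumptions rather than measured figures; only the 0.82ms-per-100-miles propagation figure comes from the text above.

```python
# Back-of-the-envelope estimate of an edge-to-cloud-and-back latency budget.
# All inputs except the propagation figure are illustrative assumptions.

PROPAGATION_MS_PER_100_MILES = 0.82  # figure quoted above

def cloud_round_trip_ms(distance_miles: float,
                        router_hops: int = 12,       # assumed hop count
                        per_hop_ms: float = 0.5,     # assumed per-router delay
                        inference_ms: float = 5.0) -> float:  # assumed cloud compute time
    """Estimate edge -> cloud -> edge latency in milliseconds."""
    one_way = distance_miles / 100 * PROPAGATION_MS_PER_100_MILES
    return 2 * (one_way + router_hops * per_hop_ms) + inference_ms

for miles in (100, 1000, 3000):
    print(f"{miles:>5} miles: ~{cloud_round_trip_ms(miles):.1f} ms round trip")
```

Even with these generous assumptions, a cross-country round trip consumes tens of milliseconds before the result ever reaches the device, a budget an on-device model never has to spend.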

That’s why it’s so important that we move AI compute closer to the edge device generating the data. In mobile, and especially in automotive, that invariably means performing complex AI compute on the device itself.

Designing for the future

Figure 1: Cadence LPDDR5X test chip on a PCB daughtercard

LPDDR5X is the latest in a long line of Cadence memory IP, dating back to our acquisition of Denali in 2010. It follows the announcement of our GDDR6 silicon in November 2022, designed for very high-bandwidth memory applications such as hyperscale computing, data centers, and 5G backbone.

Just like GDDR6, LPDDR5X is very much a future-proof technology, providing the performance for whatever comes next. The LPDDR5X standard increases peak data speed from LPDDR5’s 6400Mbps (6.4Gbps) to 8533Mbps (8.5Gbps), and there aren’t many applications today that will make full use of such incredible performance in edge devices.
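
As a rough illustration of what that per-pin increase means at the interface level, the sketch below works through the arithmetic. The 64-bit total bus width is an assumed typical multi-channel configuration, not a statement about any particular product.

```python
# Rough peak-bandwidth arithmetic for LPDDR5 vs. LPDDR5X.
# The 64-bit total interface width is an assumed typical configuration.

LPDDR5_MBPS_PER_PIN = 6400
LPDDR5X_MBPS_PER_PIN = 8533
BUS_WIDTH_BITS = 64  # e.g., four 16-bit channels (assumption)

def peak_gb_per_s(mbps_per_pin: int, width_bits: int = BUS_WIDTH_BITS) -> float:
    """Peak theoretical bandwidth in gigabytes per second."""
    return mbps_per_pin * width_bits / 8 / 1000

speedup = LPDDR5X_MBPS_PER_PIN / LPDDR5_MBPS_PER_PIN - 1
print(f"Per-pin speedup: {speedup:.0%}")                                 # ~33%
print(f"LPDDR5  peak: {peak_gb_per_s(LPDDR5_MBPS_PER_PIN):.1f} GB/s")    # ~51.2 GB/s
print(f"LPDDR5X peak: {peak_gb_per_s(LPDDR5X_MBPS_PER_PIN):.1f} GB/s")   # ~68.3 GB/s
```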


Figure 2: Eye diagram of Cadence LPDDR5X IP

But as I said earlier, almost every day I hear of incredible new and performance-hungry edge use cases, from using our handsets or standalone VR/AR headsets to explore the metaverse to taking ‘level 5’ (full) vehicle autonomy a step closer to reality. With technology like LPDDR5X soon to be in the palm of our hands, those use cases are no longer out of reach. We’re about to see what the edge is truly capable of.

Cadence Denali Memory and Storage IP solutions support the widest range of industry standards with controller and PHY implementations for both high-performance and low-power applications. Design for the future now.

