Lana Chan

Interconnect Beyond PCIe: CXL and Cache Coherent Interconnect

18 May 2020 • 2 minute read

As the de facto I/O interconnect technology, PCIe has commendably addressed the performance bottleneck at the I/O interface, doubling bandwidth every 3-4 years over the course of five generations of the specification, with Gen 6 at 64 GT/s in the works. However, the datafication of everything requires that raw data be processed and/or harvested for meaningful information. The high computational workloads of applications in Artificial Intelligence (AI), Machine Learning (ML), communication systems, and High-Performance Computing (HPC) require tighter communication between processors, accelerators, advanced memory, and storage. Simply put, they demand latency orders of magnitude lower than PCIe can provide, along with the extension of cache coherency beyond the processor.
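To make that doubling cadence concrete, here is a small back-of-the-envelope sketch in Python using the published per-lane signaling rates; the bandwidth figures are raw estimates that ignore encoding and protocol overhead.

```python
# Back-of-the-envelope sketch: published per-lane PCIe signaling rates,
# illustrating the roughly 2x step per generation mentioned above.
PCIE_RATES_GT_S = {
    "Gen1": 2.5,
    "Gen2": 5.0,
    "Gen3": 8.0,
    "Gen4": 16.0,
    "Gen5": 32.0,
    "Gen6": 64.0,  # in development at the time of writing
}

def raw_bandwidth_gb_s(rate_gt_s: float, lanes: int = 16) -> float:
    """Raw per-direction bandwidth, ignoring encoding and protocol overhead."""
    return rate_gt_s * lanes / 8  # 8 bits per byte

for gen, rate in PCIE_RATES_GT_S.items():
    print(f"{gen}: {rate:5.1f} GT/s per lane, ~{raw_bandwidth_gb_s(rate):.0f} GB/s per x16 direction")
```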

Until March 2019, while Gen-Z, OpenCAPI, and CCIX had thrown their caps in the ring, Intel was visibly absent. That changed with the unveiling of Compute Express Link (CXL) as an open industry-standard interconnect between host processors and devices such as accelerators, memory devices, and smart I/O devices. Now, in 2020, v1.1 of the specification has been released and v2.0 is in the works. CXL is fully backed by most of the industry, with board members AMD, Alibaba, Arm, Cisco, Dell, Facebook, Google, HP Enterprise, Huawei, Intel, IBM, Microchip, Microsoft, and Xilinx, and nearly 100 members in total (including Cadence). CXL seems likely to become the industry-standard cache-coherent interconnect.

CXL is built on top of the PCIe 5.0 infrastructure with native support at 32 GT/s. Support for dynamic multiplexing of three sub-protocols (CXL.io, CXL.mem, CXL.cache) on a single link enables low-latency, high-bandwidth performance in heterogeneous systems.
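A minimal sketch of how the three sub-protocols share one link, assuming illustrative Python names (this is not the Cadence VIP API or any specification-defined interface):

```python
# Toy model of a single CXL link multiplexing up to three sub-protocols.
# All names are illustrative; real flit formats and arbitration are richer.
from enum import Enum

class CxlProtocol(Enum):
    CXL_IO = "CXL.io"        # PCIe-based: discovery, config, interrupts, DMA
    CXL_MEM = "CXL.mem"      # host access to device-attached memory
    CXL_CACHE = "CXL.cache"  # device coherent access to host memory

class CxlLink:
    def __init__(self, enabled):
        # CXL.io is always present; .mem/.cache depend on the device type.
        self.enabled = {CxlProtocol.CXL_IO} | set(enabled)

    def send(self, protocol: CxlProtocol, payload: bytes) -> None:
        if protocol not in self.enabled:
            raise ValueError(f"{protocol.value} was not negotiated on this link")
        print(f"[{protocol.value}] flit carrying {len(payload)} bytes")

# Example: a Type 2 accelerator typically runs all three sub-protocols.
link = CxlLink(enabled={CxlProtocol.CXL_MEM, CxlProtocol.CXL_CACHE})
link.send(CxlProtocol.CXL_CACHE, b"\x00" * 64)
```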

So how does one bring up a CXL system? The short answer is to use Cadence CXL VIP to jump-start your design.

A flexible port is defined that can auto-negotiate either to standard PCIe or to CXL as an alternate protocol. The CXL.io protocol stack takes care of discovery, configuration, register accesses, interrupts, and so on to bring up the system in CXL mode. CXL.io is an enhanced standard PCIe stack with low-latency and dynamic framing considerations handled in the transaction and link layers. This differs from CCIX, which is also built on top of PCIe but has a separate transaction layer. After successfully ‘negotiating’ CXL support, and depending on the device's capabilities, CXL.mem and/or CXL.cache protocol traffic can be brought into the mix. CXL.mem provides memory semantics that allow the CPU to access non-local (device-attached) memory. Meanwhile, CXL.cache allows a device coherent access to the host processor's memory.
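The bring-up flow just described can be summarized in a short sketch; every name below is an illustrative stand-in, not the Cadence VIP API or a specification-defined interface.

```python
# Illustrative bring-up flow for the flexible port described above.
class Device:
    def __init__(self, supports_cxl: bool, supports_mem: bool, supports_cache: bool):
        self.supports_cxl = supports_cxl
        self.supports_mem = supports_mem
        self.supports_cache = supports_cache

def bring_up_link(device: Device) -> list[str]:
    """Return the protocols active on the link after training and negotiation."""
    # 1. The flexible port trains as a standard PCIe Gen5 (32 GT/s) link.
    # 2. CXL is offered as an alternate protocol during training; if the
    #    partner declines, the link simply stays in plain PCIe mode.
    if not device.supports_cxl:
        return ["PCIe"]

    # 3. CXL.io (the enhanced PCIe stack) handles discovery, configuration,
    #    register accesses, and interrupts to bring the system up in CXL mode.
    active = ["CXL.io"]

    # 4. CXL.mem and/or CXL.cache traffic is added per the device's capabilities.
    if device.supports_mem:
        active.append("CXL.mem")     # host accesses device-attached memory
    if device.supports_cache:
        active.append("CXL.cache")   # device coherently caches host memory
    return active

# Example: a memory-expander (Type 3) device runs CXL.io + CXL.mem.
print(bring_up_link(Device(supports_cxl=True, supports_mem=True, supports_cache=False)))
```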

These are exciting and challenging times for IP and chip design and verification engineers who are aiming to support a new and quickly evolving specification.

This is where Cadence’s expertise with its PCIe VIP, cache coherency protocols, and system performance tools comes into play. More information is available on the Cadence CXL VIP page, and be sure to contact your Cadence representatives for the latest developments on this quickly evolving protocol.
