New 3D Analysis Engine Offers Faster, More Accurate Simulations at Lower Cost

17 Mar 2020 • 6 minute read

A new era is taking shape in system analysis for highly complex IC, package, and PCB designs, driven by simulation engines specifically architected to solve problems in complex 3D structures. These new 3D solvers deliver faster simulation and extraction for large, complex designs while running on hundreds of CPUs, and they are optimized for both cloud and on-premises computing.

Modern system analysis tools must evolve to take advantage of distributed computing architectures, carrying out complex mathematical simulations across many cores. By doing so, design teams will be better equipped to address the challenges of large interconnect models and high-speed signaling.

Figure 1: New tools are required for detailed 3D analysis of ICs, packages, boards, and complete systems. (Image: Cadence Design Systems)

Design teams require all interconnect models to be highly accurate, whether it’s detailed routing on IC redistribution layers, dealing with bumps, balls, and vias on IC packages, or having to handle breakout routing, vias, and pad stacks on PCBs.

Here, the issue is that interconnect must transition from layer to layer, fabric to fabric, and board to connector, and all of these transitions involve 3D structures. Engineers therefore routinely encounter electromagnetic (EM) challenges when designing complex 3D structures on chips, packages, and PCBs. Even a slight impedance discontinuity can degrade signals at high data rates, which in turn forces multiple design iterations.
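To make the impedance point concrete, the reflection at a discontinuity follows the classic mismatch formula Γ = (Z_load − Z0)/(Z_load + Z0). A minimal Python sketch, with illustrative impedance values that are assumptions, not figures from this post:

```python
import math

def reflection_coefficient(z_load: float, z0: float = 50.0) -> float:
    """Voltage reflection coefficient (Gamma) at an impedance step."""
    return (z_load - z0) / (z_load + z0)

# Illustrative values (not from the post): a via transition that dips
# from the 50-ohm system impedance down to 45 ohms.
gamma = reflection_coefficient(45.0, 50.0)
return_loss_db = -20.0 * math.log10(abs(gamma))
```

Even this small 10% impedance dip produces a measurable reflection; at multi-gigabit data rates, several such transitions in series quickly eat into the link budget.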

Experienced designers rely on 3D analysis engines to model complex 3D structures on individual designs or structures that encompass multiple design territories spanning from chip to package to board. But before we delve into a new generation of tools offering a more detailed 3D analysis for optimized design configurations, it’s worth knowing what’s been available until now. 

Legacy field solvers

Large, complex designs are increasingly demanding more accurate interconnect models, and that calls for faster simulation and greater capacity compared to legacy field solver technology.

To accommodate such workloads, legacy field solvers rely on massive servers, yet they still cannot simulate the entire system. These system-level simulators often partition a design into pieces or subsystems to make effective use of the available compute resources. However, when engineers cut designs into smaller structures manually, the partitioning introduces inefficiencies into system analysis.

Traditionally, two approaches have applied distributed computing to the finite element method (FEM). The first submits a job from a host machine for each analysis frequency, and each remote machine uses as much memory as a standalone run would. The total memory in simultaneous use therefore grows with the number of concurrent frequency points, which in turn increases the load on the machines.
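The memory cost of this per-frequency dispatch is easy to see: every concurrent frequency-point job carries the full standalone footprint. A tiny sketch with illustrative numbers (64 GB and 16 points are assumptions for the example, not values from the post):

```python
def peak_memory_gb(standalone_gb: float, concurrent_jobs: int) -> float:
    """With one full-size FEM job per frequency point, peak cluster
    memory is the standalone footprint times the concurrency."""
    return standalone_gb * concurrent_jobs

# Illustrative: a model needing 64 GB standalone, swept at 16 frequency
# points in parallel, ties up about 1 TB of memory at once.
peak = peak_memory_gb(64.0, 16)
```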

The second approach divides one layout into several partial elements. Although this reduces the required resources, the interactions between the divided elements are not calculated; splitting the ground path in this way can cause a critical accuracy problem.


Figure 2: Conventional field solvers have limitations in analysis speed and capacity because engineers have to simplify the structure or divide the layout into several segments. (Image: Cadence Design Systems)


The second approach, based on a pseudo-3D method, also risks inaccuracies from artificial effects at the imposed model boundaries. Moreover, legacy field solvers suffer performance bottlenecks because a single machine handles the initial and adaptive meshing, and the multi-frequency-point distribution creates capacity issues. Simulating even moderately sized structures can require machines with multiple terabytes of memory.

A parallel processing architecture

The risk factors and accuracy constraints outlined above underscore the need for new tools that can carry out detailed 3D analysis of off-chip as well as package and PCB designs. That also calls for a new methodology to explore, simulate, and analyze multiple design possibilities.

The solution is a parallelization technology that can run dozens of simulations at higher throughput and lower cost. Massively parallel computing also delivers near-linear scalability across many CPUs, so the 3D solver doesn't have to divide the layout into partial elements.
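"Near-linear" scalability can be framed with Amdahl's law: if a fraction p of the solve parallelizes, the speedup on N cores is 1/((1 − p) + p/N). A short sketch; the 99% parallel fraction is an illustrative assumption, not a measured figure for any particular solver:

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Amdahl's law: overall speedup when a fraction of the work
    parallelizes perfectly across n_cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Illustrative 99% parallel fraction: scaling stays near-linear at
# modest core counts and tapers as the serial remainder dominates.
speedups = {n: amdahl_speedup(0.99, n) for n in (8, 64, 256)}
```

The practical implication is that a solver architecture must drive the serial fraction (meshing, partitioning, result assembly) toward zero before hundreds of CPUs pay off.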


Figure 3: This is how multi-core compute parallelization applies to a 3D matrix solver. (Image: Cadence Design Systems)

In traditional distributed computing, the pre-processing, mesh generation, and design partitioning are all performed on the host machine. The host then passes the mesh data to the remote machines, and the remote machines return their results to the host.

The new parallel computing architecture, however, distributes the meshing process itself to the remote machines, calculating the optimum set of parallel jobs from the required number of cores. From the host machine's perspective, the remote machines used by distributed computing look like the cores of a single multi-core machine.

The system places no limit on the number of parallel tasks or the amount of memory, allowing designers to handle any number of extractions within this virtual memory environment. In other words, designers don't have to budget memory up front, because the distributed processes behave as if they were running on a single machine with the combined memory of many.

A new 3D simulation tool

Cadence Design Systems has developed a new 3D analysis engine based on the massively parallel computing architecture described above. The Clarity 3D Solver performs large-scale analyses using either cloud or on-premises distributed computing, so instead of buying large, costly servers, designers can rent cloud servers. The Clarity engine lets users employ cloud instances as remote machines through services such as Microsoft Azure and Amazon Web Services (AWS).

The technology has been designed from the ground up for high precision using wideband modeling. It performs accurate analysis without simplifying models and delivers the required accuracy in a fraction of the time taken by legacy field solver solutions.

The Clarity 3D Solver creates highly accurate S-parameter models for use in signal integrity (SI), power integrity (PI), and electromagnetic compatibility (EMC) analysis. In system-level designs, it enables engineers to extract a combined structure of connectors and PCBs.
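Once an S-parameter model exists, downstream SI checks use it directly; for example, the S11 entry maps back to an input impedance via Z_in = Z0(1 + S11)/(1 − S11). A minimal sketch, with an illustrative S11 value that is an assumption for the example:

```python
def s11_to_zin(s11: complex, z0: float = 50.0) -> complex:
    """Input impedance at a port, recovered from the S11 entry of an
    S-parameter model: Zin = Z0 * (1 + S11) / (1 - S11)."""
    return z0 * (1 + s11) / (1 - s11)

# Illustrative value: a purely real reflection of 0.1 corresponds to an
# input impedance of roughly 61 ohms in a 50-ohm system.
zin = s11_to_zin(0.1 + 0j)
```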


Figure 4: A design flow example of extraction of an IC package for data center applications that requires 3D modeling of a 5G interface. (Image: Cadence Design Systems)

The 3D simulation tool has also been integrated into Cadence’s Allegro and Virtuoso platforms, so design structures can be optimized in the analysis tool and automatically implemented in the design tool.

Conclusion

High-speed signal design is getting more difficult by the day, making verification of ultra-high-speed interfaces a critical challenge. Solid interconnect design is also a must in today's high-speed circuitry to shorten turnaround time and lower cost.

It's time for chip and system designers to embrace multi-CPU parallelism across multiple machines, cutting cost while boosting simulation capacity and speed. Distributing the simulation across multiple low-cost computers lets design engineers pinpoint trouble spots precisely and correlate simulation results with measurements efficiently.
