# A Peek into the Future of Signal Integrity with Artificial Neural Networks

Imagine how great life could be if computers or robots could do all our tedious work and we got to enjoy life and work on the things that are meaningful to us, i.e. the first figure on our left. These aspirations are definitely the goals of many researchers in both academia and industry. The ultimate dream of an engineer pressing a “magic button” that automatically designs, lays out, and optimizes their product to meet performance specifications and manufacturability is still science fiction, but great progress is being made now with the use of various Design of Experiments (DOE) techniques and, in particular, Artificial Neural Networks (ANNs).

As we know, the concept of artificial intelligence and neural networks has been around for decades. It wasn’t until recently, around 2015, that an abundant supply of relatively “cheap” processing power (i.e. low-cost multi-core processors and cloud computing), along with large amounts of data (i.e. big data), really enabled this boom in technology.

So, you may ask, what are Artificial Neural Networks? More importantly, how are they going to help me as a signal integrity (SI) engineer?

To answer the first question, there are many tutorials available on the web, so we’ll go through the basic concepts of Artificial Neural Networks and try to relate them to concepts electrical engineers are familiar with. To answer the second question, we’ll explore an example of how deep learning was used with the Cadence® Sigrity™ SystemSI™ tool to predict and optimize an eye diagram of a DDR4 multi-drop topology.

An ANN is a network of nodes that contain a basic building block known as a neuron (also called a perceptron), as shown in the second figure. This basic building block takes a series of inputs *X_i*, where *X* represents an input signal (which can be a constant if a bias or intercept point is needed) and *i* is an index over the inputs, ranging from 1 to *N*. Each input is scaled, or weighted, by *W_i* to form an output *Y* with the following relation:

*Y* = *f* ( Σ_{i=1}^{N} *W_i* · *X_i* )

From an electrical engineer’s standpoint, the output *Y* is similar to a weighted sum of input values, like in a multi-input summing OpAmp circuit, except that it is transformed by a function *f(x)*. This function *f(x)* has a special name, the activation function, and is used to introduce non-linearity as well as bound the output, which is important to prevent the output from saturating when there are many layers in the network. Common activation functions include the hyperbolic tangent and sigmoid functions.
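As a minimal sketch, the neuron relation above can be written in a few lines of Python; the inputs and weights here are arbitrary illustrative values, not from any real design:

```python
import math

def neuron(inputs, weights, activation=math.tanh):
    """Weighted sum of inputs X_i * W_i passed through an activation function f."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return activation(weighted_sum)

# Three inputs; the last input is held at 1.0 to act as a bias term
y = neuron([0.5, -1.0, 1.0], [0.8, 0.3, 0.1])
# tanh bounds the output to (-1, 1) regardless of how large the weighted sum gets
```

Swapping `math.tanh` for a sigmoid (or any other activation) only changes the `activation` argument; the weighted-sum structure stays the same.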

As shown in the second figure, an ANN consists of an Input Layer, *L* Hidden Layers, and an Output Layer that are connected together using the basic neuron building block. The number of inputs and outputs of the system determines the number of neurons in the Input and Output layers, respectively. The number of Hidden Layers and the number of neurons used in each Hidden Layer are design parameters determined by system requirements such as accuracy, speed, and complexity. The term Deep Learning refers to a large number of Hidden Layers in the artificial neural network; however, it’s not exactly clear how many Hidden Layers constitute Deep Learning, other than that it is some number greater than one.
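To make the layered structure concrete, here is a sketch of a forward pass through a small fully connected network built from the neuron above; the layer sizes and weight values are made up purely for illustration:

```python
import math

def layer(inputs, weight_matrix, activation=math.tanh):
    # Each row of the weight matrix holds the input weights of one neuron
    return [activation(sum(w * x for w, x in zip(row, inputs)))
            for row in weight_matrix]

def forward(inputs, weight_matrices):
    # Feed the outputs of each layer into the next: Input -> Hidden -> Output
    for weight_matrix in weight_matrices:
        inputs = layer(inputs, weight_matrix)
    return inputs

# 2 inputs -> one hidden layer of 3 neurons -> 1 output neuron
hidden = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
output = [[0.6, -0.1, 0.2]]
y = forward([1.0, 0.5], [hidden, output])
```

Adding more entries to the list of weight matrices is all it takes to deepen the network, which is why “how many layers counts as deep” is a matter of convention rather than structure.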

Looking at these ANNs, there are a lot of close ties to adaptive equalizers. In fact, supervised training of an ANN is very similar to the training of tap coefficients in an adaptive equalizer, where weights *W_i* are trained with a known data sequence and adapted by minimizing an error or cost function. The backpropagation algorithm in ANNs is a gradient descent method used to calculate weights that minimize the cost function, much like stochastic gradient descent algorithms are used to optimize tap coefficients in adaptive equalizers. Overall, engineers familiar with adaptive equalizers will be able to draw a lot of similarities between ANNs and adaptive equalizers.
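The parallel can be sketched with an LMS-style update, the same gradient-descent form used to adapt equalizer taps; the “true” weights, training sequence, and step size below are illustrative, and a single linear neuron stands in for the full network:

```python
def lms_step(weights, inputs, target, mu=0.1):
    # w_i <- w_i + mu * error * x_i : the gradient-descent step for a linear neuron
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output
    return [w + mu * error * x for w, x in zip(weights, inputs)]

# Recover known weights [0.5, -0.2] from a noise-free training sequence,
# the way equalizer taps adapt to a known preamble
true_weights = [0.5, -0.2]
weights = [0.0, 0.0]
training_inputs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
for _ in range(500):
    for x in training_inputs:
        target = sum(w * xi for w, xi in zip(true_weights, x))
        weights = lms_step(weights, x, target)
```

Backpropagation generalizes this same error-driven update through multiple layers by applying the chain rule to the activation functions.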

On to the most important question, how is this going to help me as a SI engineer?

As SI engineers, we are responsible for the signal and power integrity of our high-speed system. Typically, these systems involve multiple high-speed integrated circuits, some with complex multi-pin packages, on multi-layer PCB boards with DIMM connectors and backplanes that need signal integrity simulation tools to extract and verify the system is meeting the performance and reliability requirements.

Often, we find ourselves modifying multiple parameters of a complex PCB layout (trace length, width, impedance, component placements, etc.), simulating, checking the results, and redoing the process repeatedly until we meet our required signal quality or eye diagram requirements. This process is inefficient and certainly not optimal, especially if the number of parameters we are changing is large and the time to run each simulation is not negligible. For example, if we were to change only 4 parameters of a PCB layout with 35 possible values for each parameter, this would require over 1.5 million simulations to cover the entire design space which is not realistic.
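The brute-force count above is just a combinatorial product, which a one-liner makes explicit:

```python
# 4 layout parameters (trace length, width, impedance, placement, ...),
# each with 35 possible discrete values
parameters = 4
values_per_parameter = 35
design_space = values_per_parameter ** parameters  # 1,500,625 full simulations
```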

Instead, what if we could apply an ANN program with our SI simulation tool to predict the output and optimize the eye diagram with far fewer simulations? Essentially, use ANNs to help us improve our efficiency.

The team consisting of M. Kashyap, K. Keshavan, and A. Varma developed a Deep Learning algorithm to do this and published their results at the Electrical Performance of Electronic Packaging and Systems (EPEPS) 2017 conference. Their example utilized the Sigrity SystemSI tool to generate the dataset by randomly sampling 6 PCB input variables for a DDR4 multi-drop topology, as shown in the third figure. The input variables can take 20, 20, 20, 6, 6, and 50 distinct discrete values, respectively, giving a total design space of 14.4 million different combinations.
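Multiplying the cardinalities of the six input variables reproduces that design-space size:

```python
import math

# Distinct discrete values for each of the 6 PCB input variables
cardinalities = [20, 20, 20, 6, 6, 50]
design_space = math.prod(cardinalities)  # 14,400,000 combinations
```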

By applying a hybrid cross-correlation and Deep Learning algorithm with the Sigrity SystemSI tool, only 1000 preliminary + 595 secondary signal integrity simulations were needed. The first 1000 simulations reduced the design space from 6 inputs to 4 inputs using cross-correlation, and the other 595 simulation data sets were used to train, validate, and test the ANN. 395 points were used to train the ANN to obtain a model that defines the relationship between the remaining 4 PCB input parameters and the output (eye diagram performance). 50 validation points were used to check the effectiveness of the model after training was completed, by comparing the predicted output from the ANN model against the actual simulated output. The validation accuracy reached an average of 97.4% with a 1σ of 1.6%. The remaining 150 data points were used for testing and achieved an average accuracy of 97.3% with a 1σ of 1.8%. The results were also verified by matching the predicted output of a randomly selected set of inputs with the actual simulation result using the same inputs.
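A hedged sketch of that data split: the records below are stand-ins for the real (inputs → eye metric) simulation results, and the accuracy metric (1 minus relative error, averaged) is an assumption about how such percentages are typically computed, not the paper's stated formula:

```python
import random

random.seed(0)
# Stand-ins for the 595 secondary simulation records
records = list(range(595))
random.shuffle(records)
# 395 training / 50 validation / 150 test, as in the paper
train, validation, test = records[:395], records[395:445], records[445:]

def mean_accuracy(predicted, simulated):
    # Accuracy of each prediction taken as 1 - |relative error|, then averaged
    return sum(1 - abs(p - s) / abs(s)
               for p, s in zip(predicted, simulated)) / len(predicted)
```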

The total training time by conservative estimates is 100 seconds, and the testing time for 150 data points is 20 seconds, for a total of 120 seconds, or 2 minutes. Most of the time was consumed generating the 1000 preliminary and 595 secondary data sets, but this is still significantly less than the time needed to run 14.4 million simulations. This preliminary study represents a tremendous improvement in time and efficiency while sacrificing less than 3% in accuracy. More importantly, the ANN gives us a chance to explore and optimize large solution spaces, which is practically impossible using brute-force methods.

Overall, applications utilizing Artificial Neural Networks are still in their infancy, but they are starting to become a reality. From an SI standpoint, we still have a long way to go before having the “magic button” capability, but we can develop ANN programs that utilize information from Sigrity tools to help us explore large design spaces, as shown by the paper from M. Kashyap, et al. I don’t know about you, but I think this is worth celebrating, i.e. the first figure.