Every two days, we create as much information as we did from the beginning of time until 2003, according to Dr. Xin Li, an associate professor in the Department of Electrical and Computer Engineering at Carnegie Mellon University. Every minute, we send 204 million emails, register 1.8 million Facebook likes, and post 278,000 tweets, he noted. And, in a parallel to Moore’s Law, the total amount of data captured and stored by industry doubles every 1.2 years, he said.
“That’s big motivation for us to work on Big Data,” Li told an audience at Cadence San Jose headquarters on February 24. During his talk, “Machine Learning for Emerging Applications—Circuit, Brain, and Automobile,” Li discussed three emerging machine-learning applications: circuits, the brain, and the automobile.
At the heart of these applications are machine-learning algorithms and hardware accelerators to turn all of the associated data into actionable insights.
Dr. Xin Li of Carnegie Mellon University discusses machine learning with an audience at Cadence San Jose headquarters.
For circuits, machine learning starts with collecting a large volume of circuit data for test and diagnosis. Circuit testing produces data that is then run through analysis and mining, resulting in yield learning. Given how expensive analog/mixed-signal testing can be, Li posed the question of whether we can predict wafer-level spatial variation patterns in order to reduce test costs. Under this approach, we first test a subset of chips to estimate the spatial variation pattern, then use those results to predict the performance of the remaining chips.
In collaboration with several companies, Li has developed a technique called Virtual Probe, based on compressive sensing. He formulates variation modeling as a linear regression problem using the discrete cosine transform (DCT). These techniques yield an under-determined linear equation, so additional information is needed to solve it uniquely. He and his collaborators exploited sparsity to obtain a unique, deterministic solution from the under-determined equation.
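The details of Virtual Probe are in Li’s publications; the snippet below is only a simplified sketch of the general idea, with all names and numbers invented. Assuming a smooth wafer map that is sparse in the 2D DCT domain, a greedy sparse solver such as orthogonal matching pursuit (OMP) can recover the full map from a handful of probed sites:

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis matrix; columns are basis vectors."""
    k = np.arange(n)
    B = np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / n) * np.sqrt(2.0 / n)
    B[:, 0] /= np.sqrt(2.0)
    return B

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: greedy sparse solution of y ~= A @ x."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        # Pick the column most correlated with what is still unexplained
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Synthetic 16x16 wafer map that is 4-sparse in the 2D DCT domain
rng = np.random.default_rng(0)
n = 16
Psi = np.kron(dct_basis(n), dct_basis(n))    # 2D DCT basis (256 x 256)
true_coeffs = np.zeros(n * n)
true_coeffs[[0, 1, n, n + 1]] = [5.0, 1.0, 0.8, 0.3]
wafer = Psi @ true_coeffs                    # flattened spatial pattern

# Probe only 64 of the 256 die sites -> under-determined linear system
sites = rng.choice(n * n, size=64, replace=False)
x_hat = omp(Psi[sites, :], wafer[sites], n_nonzero=4)
predicted = Psi @ x_hat                      # predicted map for ALL sites
```

The system has 64 equations and 256 unknowns, so it has infinitely many solutions; the sparsity assumption is what singles out one of them, which is the role sparsity plays in Li’s formulation as well.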
Beyond reducing test costs, Li wanted to know whether we could do more. After all, if yield is low, costs are still high. What can be done to further reduce costs by salvaging defective chips? Self-healing, said Li, is an answer, following an approach to fix failed chips at the tail of the distribution. Self-healing can help improve circuit performance and reduce design overhead, he noted.
Machine learning, in turn, provides techniques to build models to predict chip performance. Li then discussed how a novel Bayesian model fusion framework can be used to substantially reduce model recalibration costs by quickly updating indirect sensor models to accommodate wafer-to-wafer variations.
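Li’s Bayesian model fusion framework is detailed in his publications; the sketch below is only a generic illustration of the underlying idea, with every name and number invented. A Gaussian prior centered on the previous wafer’s model coefficients lets a handful of fresh measurements recalibrate the model, where a stand-alone least-squares fit would be hopelessly under-determined:

```python
import numpy as np

def fuse(X, y, prior_mean, prior_var, noise_var):
    """MAP estimate of linear-model coefficients under a Gaussian prior.

    Equivalent to ridge regression shrunk toward prior_mean instead of zero.
    """
    A = X.T @ X / noise_var + np.diag(1.0 / prior_var)
    b = X.T @ y / noise_var + prior_mean / prior_var
    return np.linalg.solve(A, b)

rng = np.random.default_rng(1)
d = 10                                       # model has 10 coefficients
w_old = rng.normal(size=d)                   # model from the previous wafer
w_new = w_old + 0.05 * rng.normal(size=d)    # new wafer drifts only slightly

# Only 5 fresh measurements -- far too few for a stand-alone 10-parameter fit
X = rng.normal(size=(5, d))
y = X @ w_new + 0.01 * rng.normal(size=5)

w_map = fuse(X, y, prior_mean=w_old,
             prior_var=np.full(d, 0.05**2), noise_var=0.01**2)
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)  # min-norm least squares
```

The prior fills in the directions the five measurements cannot constrain, which is why the fused estimate tracks the new wafer far better than least squares alone.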
For the remainder of his hour-long talk, Li discussed the brain and the automobile. Brain-computer interfaces are creating a direct communication path between the brain and external devices in order to do things like control prosthetics. Rather than simply training one part of the body, such as a paralyzed limb, such an approach would provide a more well-rounded rehabilitative experience by also training the brain.
As for automobiles and machine learning, that’s where autonomous driving comes in. Li noted that while today’s CPU/GPU-based hardware implementations are technically functional, they are also financially expensive. An alternative computing platform consists of customized SoCs with several vertical layers (application-specific accelerator, vector processor, DSP, CPU). While an SoC-based platform covers a broad range of computing tasks, it comes with high manufacturing costs and difficult software verification. Said Li, “Our vision here is, you need a heterogeneous system, and FPGAs play a role here. Once you have that FPGA component, it will allow you to program your hardware to fit the different computational tasks.”
“We live in a very Big Data world, and that gives a lot of opportunity, not just for computer scientists but for hardware people,” concluded Li. “Predictability is important. Now that Moore’s Law is slowing down, we need other ways to predict what is going on in the future.”