Deep learning technologies are helping to make cars more reliable, buildings safer, social media channels more intuitive, and so on. “The capabilities that this technology has enabled are so powerful,” said Samer Hijazi, an engineering director in the IP Group at Cadence. “The challenges it brings are so unique and relevant to us.”
Hijazi addressed the topic, “Neural Network Technology for Embedded Systems: What Does Deep Learning Mean to Cadence,” on Wednesday, April 27, during a lunchtime talk at the company’s San Jose headquarters. The answer to that question, he noted, goes back to his take on Cadence’s mission—to enable the high-tech industry to develop better, faster, cooler silicon systems sooner.
Samer Hijazi speaks on deep-learning technologies and neural networks during a lunchtime talk at Cadence's San Jose headquarters.
A neural network involves a large amount of data plus massive compute capabilities. The network is made up of multiple layers of interconnected, feature-detecting artificial neurons, and each layer has many neurons that respond to different combinations of inputs from the previous layers. The connections in the system have numeric weights that are tuned during a training process. As a result, a properly trained network will respond correctly when presented with an image or pattern to recognize. A convolutional neural network (CNN) is a type of neural network consisting of one or more convolutional layers. In every human-versus-machine competition so far, the machine running CNN algorithms has outperformed the human, Hijazi said.
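To make the description above concrete, here is a minimal sketch of a single convolutional layer: a small kernel of numeric weights slides over an image, and each output neuron responds to one local patch of inputs. The kernel values below are hypothetical hand-picked edge-detector weights chosen for illustration; in a real CNN, as the article notes, the weights are tuned during training.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image and sum
    element-wise products -- the feature-detecting step of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Common nonlinearity applied after each layer."""
    return np.maximum(x, 0.0)

# Hypothetical vertical-edge kernel (in practice these weights are learned).
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])

# Toy 5x5 image: bright left half, dark right half.
image = np.array([[1, 1, 0, 0, 0]] * 5, dtype=float)

# The feature map responds strongly where the bright-to-dark edge sits.
feature_map = relu(conv2d(image, kernel))
print(feature_map)
```

A full CNN stacks many such layers, each with many kernels, which is why inference demands the massive compute capability the talk describes.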
Can neural networks map to embedded systems? Hijazi discussed several reasons why neural networks are advantageous for embedded applications, along with the drawbacks that come with them.
“The know-how of neural networks is concentrated in academic circles and it has not penetrated down to traditional classical architects,” Hijazi noted.
So what does the semiconductor industry need in order to take advantage of neural network and CNN algorithms? And how will deep learning change SoCs? While the deep learning industry considers bigger to be better, embedded devices’ power, price, and form-factor requirements can’t accommodate this trend, Hijazi noted.
Addressing the challenges gives rise to new opportunities. First off, there’s a need for deep-learning algorithms that are optimized for embedded-class problems. This need presents opportunities for software differentiation. Also needed are optimized SoCs for low-cost, mass-production embedded supercomputers. GPU and DPU vendors are geared up for this, which also creates a need for new hardware, IP, memory, and interconnect technology for embedded devices. Finally, Hijazi said, demands for high-performance and low-cost mass production ASICs are anticipated to grow.
Cadence has been making a push into the deep learning space with its Tensilica Vision DSPs. The newest member of the family, the Tensilica Vision P6 DSP, quadruples the performance for CNN algorithms, compared to its predecessor.
In today’s imaging systems, the application processor sits near the deep-learning accelerator. Looking ahead, Hijazi said, the pipeline will likely change such that the deep-learning accelerator moves closer to the image sensor, a change that will impose several new requirements.
Hijazi also said he anticipates increased expectations for EDA tools to facilitate the development of low-cost mega-SoCs. Continuing to look ahead, he said that neural networks will likely continue to proliferate in cloud-based applications and expand into real-time embedded functions. Power constraints and extreme throughput needs will likely drive CNN optimization in processor platforms. In addition, he said, real-time neural networks will probably evolve from object recognition to action recognition.
Cadence’s deep learning experts will provide a tutorial on power-efficient recognition systems for embedded applications at the Computer Vision and Pattern Recognition conference (CVPR 2016) on June 26 in Las Vegas.