Last week’s blog was about Venu Puvvada’s keynote at CDNLive India. Today’s blog is about the second keynote, on Day 2 of CDNLive India, by Dinakar Munagala, CEO of ThinCi Inc (pronounced Think-Eye). ThinCi is a semiconductor startup that came out of stealth mode in October last year. The company designs vision processors for a wide range of applications, from self-driving cars to deep-learning supercomputers, and is backed by well-known institutional and private investors.
Dinakar’s presentation was titled “Creating Real Business Out of AI, Machine Learning, and Deep Learning”. He believes we are in the early stages of AI and deep learning: their impact is already visible in autonomous driving, but they will proliferate to different industries and different walks of life. He cited several examples of applications that we are already seeing in everyday life.
How does deep learning work?
Dinakar said that the major focus now is on deep learning, an approach that automatically extracts features from raw data, makes sense of them, and in effect creates a program to recognize things. For example, starting from the pixels of a picture, deep learning first detects edges, then recognizes object parts, then full objects, and finally distinguishes whether the image is a face, a car, an elephant, a chair or a traffic sign. The key thing is that the bottom layers are the same across applications.
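The first step in that layered progression, detecting edges from raw pixels, can be sketched at toy scale. The tiny image and the filter below are hand-made for illustration; a real deep network learns such filters from data rather than having them hand-coded:

```python
def convolve2d(image, kernel):
    """Valid 2-D convolution of a grayscale image (list of lists) with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 4x4 "image": bright top half, dark bottom half, i.e. one horizontal edge.
image = [
    [1, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]

# Hand-coded horizontal-edge kernel (a trained network would discover this itself).
edge_kernel = [
    [1, 1],
    [-1, -1],
]

edges = convolve2d(image, edge_kernel)
# The response is strongest on the row straddling the bright/dark boundary.
```

Stacking many such filter layers, with the outputs of one layer feeding the next, is what lets a network go from edges to parts to whole objects.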
Why is deep learning taking off now?
While the science has been around for quite some time, deep learning is significant, even disruptive, because it can analyze data and solve problems that were previously challenging or even impossible.
Dinakar said that there are three factors contributing to why deep learning is taking off now:
First, deep learning is very data driven, and there is a lot more data available today. Data is needed to “train” neural networks to recognize objects. Today there are 200 million plus items of data – images, speech, optical character recognition (OCR, which covers handwriting and printed text) – which is rich material for training neural networks. The availability of data is growing exponentially every year, and as a result new neural networks are emerging year on year.
Second, more powerful compute is now available at affordable price points. Thanks to Moore’s Law and other advances, purpose-built deep-learning architectures that pack really powerful hardware into a small silicon footprint are now a reality.
Lastly, improvements in the software itself: better neural networks, with image-recognition error rates now lower than those of the human eye. Using powerful computers and rapidly progressing neural networks, new technologies can now safely be deployed in cars and other mission-critical areas.
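The “training” mentioned in the first point can be shown at toy scale. A minimal sketch, assuming nothing from the talk beyond the idea of learning from labeled data: a single perceptron (the simplest neural unit) adjusts its weights from labeled examples; real networks do the same with millions of images and many layers. The dataset, epochs and learning rate here are made up for illustration.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred
            # Nudge the weights toward the correct answer on each mistake.
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Toy "dataset": the label is 1 only when both features are high.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0
```

After training, `predict` reproduces the labels it was shown; the point is that the rule was learned from the data, not programmed by hand.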
Some AI applications of the future
The final part of Dinakar’s presentation was about some AI applications that are at the cutting edge today.
Dinakar closed his keynote by talking about the challenges in autonomous driving. The most challenging autonomous driving tasks are: (1) Sensing, where different sensors gather data; (2) Perception, where the car works out where it is from a Lidar point cloud or from multiple cameras, and recognizes and tracks objects; (3) Decision-making and path planning, such as how to overtake a car or how to avoid hitting a pedestrian; and (4) Manipulation, which controls propulsion, steering and braking. Of these, perception and decision-making are the most challenging and compute-intensive activities, and the ones where deep learning will have the most impact, he said.
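The four stages he lists can be sketched as a simple pipeline. Everything below is a hypothetical stub for illustration only; it is not ThinCi’s stack or any real driving software, and the stage names are taken from the talk:

```python
def sense():
    # Sensing: gather raw data from sensors (cameras, Lidar, ...) -- stubbed here.
    return {"camera": "frame", "lidar": "point_cloud"}

def perceive(sensor_data):
    # Perception: recognize and track objects from the sensor data (stubbed).
    return {"objects": ["pedestrian", "car"], "sources": list(sensor_data)}

def plan(world_model):
    # Decision-making / path planning: pick a maneuver from the world model.
    return "brake" if "pedestrian" in world_model["objects"] else "cruise"

def actuate(decision):
    # Manipulation: map the decision onto propulsion, steering and braking.
    return {"brake": decision == "brake", "throttle": decision == "cruise"}

controls = actuate(plan(perceive(sense())))
```

In a real system the perception and planning stages are where the heavy neural-network computation happens, which is exactly why Dinakar singles them out as the most compute-intensive.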
Here are three key takeaways from Dinakar's talk, in his own words. Click here to play this audio clip.