Paul McLellan

Tags: deep learning, enns, neural networks, autonomous vehicles, debugging

Neural Networks and the Future

17 Feb 2017 • 8 minute read

The Panel Session

The recent embedded neural network symposium held at Cadence wrapped up with a panel session. Chris Rowen was the moderator, and I think the panelists were Song Han, Ren Wu, Forrest Iandola, Kai Yu, and Jeff Bier (all of whom presented earlier). I didn't really note down who said what, so I'll just report on some of the points that were made. Anything in [square brackets] is my own added comment, not something any of the panelists said explicitly.

During the sessions, several speakers talked about how 8 bits (or even 4 bits or, in some cases, 2) are precise enough, and 32-bit floating point isn't really needed. But all of the real-world applications seem to be sticking with GPUs. The panelists put this down to lack of experience: reduced precision is only just showing up in the literature now. Everyone is excited by how fast the field is moving, but the approaches actually being deployed are changing much more slowly. It only takes one highly visible success to move people, but going from 0 to 1 is really hard.
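To make the reduced-precision idea concrete, here is a minimal sketch (mine, not anything a panelist showed) of symmetric 8-bit weight quantization in NumPy. The layer shape, weight distribution, and function names are all invented for illustration.

```
import numpy as np

def quantize_symmetric(w, bits=8):
    """Map float32 weights onto signed integers with `bits` of precision."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for 8 bits
    scale = np.abs(w).max() / qmax          # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(256, 256)).astype(np.float32)  # stand-in layer weights

q, scale = quantize_symmetric(w, bits=8)
max_err = np.abs(w - dequantize(q, scale)).max()
print(f"worst-case weight error at 8 bits: {max_err:.6f}")
```

The error is bounded by half a scale step, which for typical weight distributions is tiny compared to the weights themselves; that is the intuition behind the "8 bits is enough" claim.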

Dark silicon design issues

Dark silicon is an issue [this is the phenomenon whereby you can put a lot of functionality on a chip but not power it all up at once due to thermal constraints]. Selectively turning blocks on and off makes sense with heterogeneous blocks [it doesn't make sense with pure multicore: there is no point in putting, say, 16 identical cores on a chip if you can only power up 10 at a time, since you would never have a reason to power up core 11]. So one approach to dark silicon is to design special parts of the chip to handle specialized workloads. If you have a specialized voice-recognition processor, then it only needs to be powered up when you are doing voice recognition, and it can be much more power efficient for that task than a general-purpose processor.
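Here is a back-of-the-envelope illustration of that argument; every number (thermal budget, core power, block power) is invented purely for illustration.

```
# Back-of-the-envelope arithmetic for the dark-silicon argument.
TDP_W = 10.0        # thermal budget for the whole chip, in watts (assumed)
CORE_W = 1.0        # power draw of one general-purpose core (assumed)
CORES_ON_DIE = 16   # how many cores fit in the available area (assumed)

cores_powered = int(TDP_W // CORE_W)
print(f"{cores_powered} of {CORES_ON_DIE} identical cores can run at once")
# -> 10 of 16: the other 6 are dark, and since they are identical to the
#    first 10, there is never a reason to power them up instead.

# A specialized block changes the trade-off: it sits dark most of the time,
# but when its workload shows up it does the job far more efficiently.
VOICE_BLOCK_W = 0.1  # hypothetical dedicated voice-recognition block
print(f"voice block uses {CORE_W / VOICE_BLOCK_W:.0f}x less power than "
      "running the same task on a general-purpose core")
```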

One of the most difficult balancing acts is deciding how much specialization. Everyone is comfortable with a uniform array of the same thing, but that model dies with the need to grow efficiency, especially in the face of dark silicon. We need the 10X or 100X improvement that you can get from a specialized architecture. However, there are lots of uncertainties: you don't quite know what function to accelerate, what the mix of jobs will be, or how things will evolve. So that means we need highly specialized yet fully programmable solutions, as Kunle discussed in his keynote.

[Photo: a prototype autonomous car]

Applications taking advantage of neural networks

Cars are being re-architected for the first time in decades, with fewer processors each doing a lot more, versus the old way with a proliferation of processors (ECUs) each doing just one thing. It's more like cloud computing versus everything being statically allocated. In fact, a car is becoming a datacenter on wheels.

There are some interesting partitions of functionality. The hottest toy last Christmas was a little social robot. It uses your smartphone as its "cloud", so the toy itself needs very little intelligence; most of the intelligence is in your phone, which is actually a pretty powerful processor.

Video surveillance is an area where a lot is happening. As cameras proliferate, it becomes impossible to watch them all; they are just archive footage for investigating a crime later. But with deep learning, the cameras can watch themselves and only communicate back to base when something abnormal is detected. Multiple cameras can also be correlated to make detection even more accurate. The base doesn't even have to be a manned facility; at the level of a household, it can just be "manned" by your smartphone: "Let me know when something bad happens."
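As a sketch of that "cameras watch themselves" pattern: the loop below only sends data off the device when a score crosses a threshold. `read_frame`, `anomaly_score`, and `notify_phone` are hypothetical stand-ins for a camera driver, a small on-device network, and a push notification.

```
import random
import time

THRESHOLD = 0.9  # assumed alert threshold; tuned per deployment

def read_frame():
    """Stand-in for grabbing the next frame from the sensor."""
    return b"raw-frame-bytes"

def anomaly_score(frame):
    """Stand-in for a small on-device network: 0.0 = normal, 1.0 = abnormal."""
    return random.random()  # dummy score so the sketch runs

def notify_phone(frame, score):
    """Stand-in for sending just this event back to base (here, a print)."""
    print(f"alert: anomaly score {score:.2f}")

def watch_loop(frames=100):
    for _ in range(frames):
        frame = read_frame()
        score = anomaly_score(frame)
        if score > THRESHOLD:   # the vast majority of frames never leave the camera
            notify_phone(frame, score)
        time.sleep(1 / 15)      # roughly 15 frames per second

watch_loop()
```

The design point is bandwidth: almost all frames are processed and discarded locally, so the link back to base carries events, not video.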

There was a feeling from a couple of people in the audience that neural nets were subsuming all other types of machine learning such as random forests or Bayesian models. The panel seemed to think that neural nets had the best chance of becoming standard. [Of course, the panel consisted of neural net experts, so may not have been the most objective people.]

Debugging is an issue

How do you figure out where something goes wrong? If there is a really bad autonomous car accident, how do you get to the root cause? The panel figured that it is easier than debugging general-purpose processors, since there are simply not unlimited cases. People are also talking about statistical computing all the way down to the transistor level, but it is really hard to bound how much unreliability you can tolerate, so today everyone assumes chips are reliable (the faulty ones having been screened out during testing).

There was no consensus on how the neural net ecosystem will look. It is just starting. Will it be integrated (everything from one company) or modular (chip, operating system, and apps from different companies)? What will be open source? What will be paid for? One thing people agreed on was that anything safety-critical won't be kludged together, since it needs to pass functional safety standards. Open source can be a red herring: the question of "make" versus "buy" has been going on forever, and in many cases the cost of the software is a tiny part of the overall costs. It is tempting for companies to believe they can just hire a PhD and grab some open source stuff, rather than properly funding their AI programs.

Chris Rowen's Wrap-Up

[Photo: Chris Rowen speaking on the panel about embedded neural networks]

Chris had a quick wrap-up to the day. His summary was that there is significant progress on hardware and on new ways of approaching training and compressing data. One of my big takeaways from the day was just how aggressively precision can be reduced, to as low as 4 bits, with minimal loss of accuracy: the quantization error just shows up as noise, which the neural net has to deal with anyway.
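A quick numerical check of that takeaway (mine, not from the talks): quantize a made-up Gaussian "activation" tensor at several bit widths and measure how the quantization error compares to the signal.

```
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=100_000).astype(np.float32)  # stand-in activations

for bits in (8, 4, 2):
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    xq = np.round(x / scale) * scale     # quantize, then dequantize
    err = x - xq
    snr_db = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
    print(f"{bits} bits: signal-to-quantization-noise ratio ~ {snr_db:.1f} dB")
```

Each bit of precision buys roughly 6 dB of signal-to-noise ratio, so dropping from 8 to 4 bits adds noise in a predictable way, and a network that already tolerates noisy inputs tolerates this too.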

However, there remain essential challenges. Chris picked out two to highlight:

1. For deep learning to become mainstream, we need more accessible tools, datasets, and standards. A lot of education is needed, both in training students who will join the industry and in retraining people who are already in place. It reminds me of the introduction of Verilog and synthesis: we needed lots of people to learn Verilog in school, and we needed to train at least some of the schematic-based designers in Verilog. Otherwise, we would never have been able to get all the benefits from synthesis tools.

2. We need to find ways to apply these technologies to more problems; there are lots where creative work can produce valuable results. These technologies will be applied well beyond autonomous driving: medicine, translation, security, and more.

Cameras are Everywhere—Except in Cameras

I happened to come across this bar graph this weekend. It shows just how important computer vision is going to be. On the other hand, it shows the end of cameras as a market of their own (outside of specialized professional models). The standalone camera market is the vanishingly small orange segment.

[Bar chart: global image sensing market size, in billions of dollars]

Social Implications

One of the challenges of this type of technology is that it has big implications for society. An obvious point is that there are apparently about 2M people who make a living from driving vehicles: truckers, taxi drivers, Uber drivers, FedEx delivery drivers, and more. A lot of these jobs will either vanish or be needed in much smaller numbers. I can't wait for my autonomous car, but if I were a trucker I would have a different view. Even bigger changes may happen if and when few people feel the need to own a car (in the same way that very few of us feel it is cost-effective to own our own airplane). If that happens, the automobile industry may only need to build half or a quarter as many vehicles, with correspondingly fewer employees.

In his presentation, Jeff Bier showed that computer vision is now as good as dermatologists at classifying skin lesions. If the algorithms get much better, then it may become immoral for a dermatologist to do the classification unaided. And it will become pointless for dermatologists-in-training to learn how to do it. Specialists, even as specialized as dermatologists, might not be needed in the same numbers. I've seen similar predictions about how most paralegal work will be outsourced to this type of technology.

Up to now, most automation has shown that if manual labor is repetitive or predictable, it can be automated. Soon, if mental labor is repetitive or predictable, it will be automated, as well.

It's déjà vu all over again

Of course, we've been through these transitions in the past. Most people worked in agriculture until late in the 19th century; now it is about 2% or so. Those displaced agricultural workers moved to factories, which were better-paying jobs anyway (to first order, people are paid by how productive they are). The service sector has grown enormously even as manufacturing employment (but not manufacturing output) has shrunk. It's not that the displaced factory workers all became programmers, but over time the number of people going into some segments shrank and others grew. 10% of undergraduates at Berkeley are studying computer science, for example.

In the early 1900s, there were 120,000 horses in New York. Now there are just a handful. As this huge shift in machine learning automates many existing jobs, the big question to me is whether the people doing them are like the agricultural workers who moved to the factories, or like the horses who are no longer trotting the city streets. If the transition has been from agriculture to manufacturing to service, then what, if anything, comes next?

Related Resources:

  • Deep Learning, the New Moore's Law - (Embedded Neural Network Summit)
  • The Second Embedded Neural Network Symposium 
  • How to Build a Silicon Brain (Embedded Neural Network Summit)