Paul McLellan
Tags: deep learning, turing award, neural networks, AI

Geoff Hinton, Yann LeCun, and Yoshua Bengio Win 2019 Turing Award

3 Apr 2019 • 4 minute read

This year's Turing Award goes to Geoff Hinton, Yann LeCun, and Yoshua Bengio. According to the New York Times, they are the "Godfathers of Deep Learning", presumably in the non-mafia sense ("nice neural network you've got there, be a pity if the weights lost too much precision"). The Turing Award is bestowed by the ACM and is regarded as the Nobel Prize of computer science. Last year I wrote Hennessy and Patterson Receive the 2018 Turing Award.

Geoff is an emeritus professor at the University of Toronto but works at Google Brain. Yann is a professor at New York University and also the chief AI scientist at Facebook, and Yoshua is at the University of Montreal and a co-founder of Element AI. There seems to be something in the water in Canada (Yann did his postdoc under Geoff in Toronto).

The award was announced last Wednesday, which turned out to be bad timing for Breakfast Bytes. Thursday's post was a prelude to Monday's cloud announcement, Friday's was the 20th anniversary of The Matrix, and Monday and Tuesday were the announcements of CloudBurst and Clarity. So here we are in the first available slot, a week later.

Neural Networks

Neural networks were invented decades ago. As Krste Asanović said at some RISC-V event, "my PhD was on neural networks—I was 20 years too early." Well, these three were 20 years too early too, but they persevered. Until about 10 or 15 years ago, neural networks were considered a worked-out mine that would never produce any more gold. It turned out that everyone had underestimated how much training data and how much compute power would be needed. Cloud computing, GPUs, FPGAs, and specialized processors turned out to be the key. The basic idea was fine all along (not to underestimate how much development has taken place since the early days).

Yann LeCun, one of the honorees, was actually the person who showed me, five years ago, that deep learning was not a worked-out mine. At a keynote at the Embedded Vision Summit in 2014, he had a laptop with an attached camera, and he was pointing it at objects and the system was identifying them in real time: a phone, a mouse, a conference badge, a cup of coffee, a shoe. Such demos are commonplace these days, of course (usually based on the huge database of images called ImageNet that I covered in my post ImageNet: The Benchmark that Changed Everything). I like to tie the start of significant technology eras to a moment in time, so as far as I'm concerned, the current AI and deep learning era began at that keynote. You can still see a video of it.
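
To give a flavor of what such a demo involves today, here is a minimal sketch, my own illustration rather than the code behind Yann's keynote, of classifying an image with an ImageNet-trained network. It assumes PyTorch and torchvision are installed; the choice of ResNet-18 and the file name coffee_cup.jpg are placeholders.

import torch
from PIL import Image
from torchvision import models

# Load an ImageNet-1k pretrained model (ResNet-18 is just an illustrative choice).
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()  # the resize/crop/normalize pipeline these weights expect

def classify(path: str, top_k: int = 3):
    """Return the top-k ImageNet labels and probabilities for the image at `path`."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)  # shape: 1 x 3 x H x W
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    top = probs.topk(top_k)
    return [(weights.meta["categories"][int(i)], float(p))
            for p, i in zip(top.values, top.indices)]

print(classify("coffee_cup.jpg"))  # placeholder image; e.g. [('coffee mug', 0.71), ...]

Point the same function at successive webcam frames and you have roughly the live demo described above.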

Coincidentally, I am in the middle of reading Kai-Fu Lee's (excellent so far) book AI Superpowers: China, Silicon Valley, and the New World Order. Describing the end of the AI winter, he says:

Accurate results to complex problems required many layers of artificial neurons, but researchers hadn't found a way to efficiently train those layers as they were added. Deep learning's big technical breakthrough came in the mid-2000s when leading researcher Geoffrey Hinton discovered a way to efficiently train those new layers in neural networks. The result was like giving steroids to the old neural networks, multiplying their power to perform tasks such as speech and object recognition.

Yann was one of his postdocs, and then Yann and Yoshua worked together at Bell Labs. Yann is generally credited with inventing convolutional neural networks (CNNs), a key technology in computer vision and the one underlying the keynote I mentioned above.
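
For readers who have never looked inside one, here is a minimal LeNet-style sketch in PyTorch, my own illustration loosely following the classic LeNet shape rather than reproducing it, showing the convolution, pooling, and fully connected stages that make up a CNN. The layer sizes assume 28x28 grayscale input and are illustrative only.

import torch
from torch import nn

class TinyConvNet(nn.Module):
    """A small LeNet-style CNN for 28x28 grayscale images (sizes are illustrative)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 1x28x28 -> 6x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 16x10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> 16x5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

print(TinyConvNet()(torch.randn(1, 1, 28, 28)).shape)  # torch.Size([1, 10])

The convolutional layers share weights across the image, which is what makes this architecture so much more efficient for vision than the fully connected networks that preceded it.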

The Future

What about the future? Hinton believes the technology will eventually get to consciousness:

I think we will discover that conscious, rational reasoning is not separate from deep learning but a high-level description of what is happening inside very large neural networks.

On a more cautious note, when there was a lot of hype about AlphaGo beating the world Go champion, Yann said:

As I've said in previous statements: most of human and animal learning is unsupervised learning. We need to solve the unsupervised learning problem before we can even think of getting to true AI. And that's just an obstacle we know about. What about all the ones we don't know about?

The best example I know of the gap came from a different keynote at the Embedded Vision Summit, this one by Jitendra Malik in 2017. He pointed out just how far our vision systems have to go to reach even the level of a toddler:

You take them to the zoo and say, "that is a zebra." That's all it takes. We still need a few thousand pictures of zebras for training.

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.