We’ve heard the projection: by 2020, the world will have 50 billion connected things. But can our Internet backbone truly scale to support this level of connectivity?
“That highway only has so much capacity,” warns Dr. Ren Wu, founder and CEO of NovuMind, where he leads the effort to push the frontier of artificial intelligence (AI) via high-performance and heterogeneous computing. Earlier this month, Wu addressed a lunchtime audience at Cadence’s San Jose headquarters, speaking on “HPC and Deep Learning: New Developments in Neural Network Computing.”
In Wu’s view, “cloud computing was so yesterday.” He noted that much of the discussion around smart homes and smart cities has placed a priority on connectivity. But connectivity isn’t the right challenge to focus on when the Internet backbone likely won’t scale to support this exponential explosion in connected devices. What’s more, concerns about privacy will limit how many people feel comfortable connecting, say, an HD video stream from a home camera to the cloud. “That means we have to push compute and decentralize compute back to the devices. You are the ones to really make the devices capable of doing more intelligent things,” Wu told the Cadence audience.
Prior to starting NovuMind, Wu was a distinguished scientist at Baidu Research, chief software architect of Heterogeneous System Architecture at AMD, and principal investigator at the CUDA Research Center at HP Labs.
Computational Power is a Driving Force
Known for his early work in AI, Wu developed a Chinese chess (Xiangqi) program that twice won the world computer championship and has dominated the computer Xiangqi field for more than a decade. At NovuMind, he and his team are dedicated to improving lives by creating thinking things, bringing together AI, Big Data, compute, and heterogeneous computing.
“I think computation is the driving force,” Wu told the Cadence audience. “All other things will get developed better and faster if you have more powerful computation in your hands.”
What’s happening now in deep learning reminds Wu of what occurred decades ago in the computer chess community. Deep Blue was a classic example of application-specific system design: an IBM supercomputer with 480 custom-made VLSI chess chips running a massively parallel search algorithm. “That sheer computational power enabled IBM’s machine to beat the world chess champion,” said Wu.
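Deep Blue’s actual search ran in parallel across its custom chips, but the core technique behind game-playing programs of that era was minimax search with alpha-beta pruning. A minimal, illustrative Python sketch (a toy game tree, not Deep Blue’s implementation; all names here are my own):

```python
# Alpha-beta pruned minimax over a toy game tree.
# Internal nodes are lists of children; leaves are integer scores.

def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, pruning branches that
    cannot affect the final decision."""
    if depth == 0 or not isinstance(node, list):
        return node  # leaf: static evaluation is just its score
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # cutoff: opponent will never allow this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:  # cutoff
                break
        return value

# Root is maximizing; each subtree is the opponent's (minimizing) reply.
tree = [[3, 5], [6, 9], [1, 2]]
best = alphabeta(tree, 2, float("-inf"), float("inf"), True)
print(best)  # → 6
```

Pruning is what made deep searches feasible: the third subtree above is cut off after its first leaf, since its value can no longer beat the best line already found. Dedicated hardware then multiplied how many positions per second that search could examine.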
Dedicated Hardware and Heterogeneous Computing
What has changed in deep learning since the 1980s? Today, we’ve got Big Data and Big Compute. “The machine has become an important component in our daily lives,” said Wu. “The Internet, mobile Internet and, soon, the Internet of Things can only give us more data. The computer industry has developed so much faster in the last 30 years, we can compute so much bigger models.”
And there’s much more to be done. Said Wu, “We have to make hardware optimized for different tasks that work together…particular circuits designed for particular tasks.” Along with this heterogeneous computing approach, dedicated hardware is another trend that Wu foresees as a precursor to an Intelligent Internet of Things. Developments in the OpenCL ecosystem are a step in the right direction toward standardization for heterogeneous computing, he noted.
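The heterogeneous idea Wu describes, routing each kind of work to the hardware best suited for it, can be sketched in a few lines of Python. This is a hypothetical dispatcher of my own (the backend names and task labels are illustrative stand-ins; a real system would route to GPU, DSP, or NPU devices through something like OpenCL):

```python
# Toy sketch of heterogeneous task dispatch: map task kinds to the
# "device" best suited for them. Both backends here are plain Python
# stand-ins for real general-purpose and specialized hardware.

def cpu_backend(data):
    # General-purpose path: flexible, handles branchy, irregular work.
    return sum(x * x for x in data)

def accelerator_backend(data):
    # Stands in for a dedicated circuit optimized for one regular,
    # data-parallel task.
    return sum(x * x for x in data)

BACKENDS = {
    "control": cpu_backend,            # irregular logic stays on the CPU
    "dense_math": accelerator_backend, # regular math goes to the accelerator
}

def dispatch(task_kind, data):
    """Route a task to its assigned backend."""
    return BACKENDS[task_kind](data)

print(dispatch("dense_math", [1, 2, 3]))  # → 14
```

The design point is the routing table itself: in a heterogeneous system, the scheduler’s job is deciding where work runs, and standards such as OpenCL exist so that table can target many vendors’ devices through one API.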
According to Wu, the next waves on the horizon involve Big Data and AI, moving us beyond connectivity and into an era where our connected things have much more intelligence.