Once again, Cadence will be at CES in Las Vegas. It takes place January 7 to 10, 2020, the start of a new decade. I wonder what electronic marvels we will see before 2030 rolls around. We will be in the south hall of the Las Vegas Convention Center (LVCC) in booth MP25468.
As in the past, the focus is on the Tensilica product line. Of course, Cadence has all of our EDA tools, Verification IP, and Design IP, but nobody comes to CES to look at circuit simulation tools. A huge amount of consumer electronics, however, is loaded up with Tensilica cores, so that's what we will be showcasing.
TWS stands for True Wireless Stereo and is the generic term for Bluetooth-connected earbuds and headsets. So far this year, our semiconductor and OEM partners have shipped over 200M HiFi DSP-based SoCs into this market. Many of these earbuds also have noise cancellation, voice pickup (so they don't cancel someone talking to you), the ability to transmit your voice so you can make a phone call, and sometimes voice commands. Many TWS earbuds contain three HiFi cores: one for the voice neural network, one for the audio codec, and one for the Bluetooth codec. Two earbuds mean six DSPs per set. A huge number of Bluetooth headset products (buds and headphones) are HiFi DSP-powered. In fact, HiFi DSPs are in products from two of the biggest North American manufacturers of earbuds, one of the biggest in Japan, and one of the biggest in Korea.
Voice recognition needs some level of AI processing in the device, so HiFi DSPs are not just about sound quality. And it is not just HiFi DSPs: the Tensilica Vision DSP product line has also had AI capability since the Vision P6 DSP was announced in 2016. The Vision P6 DSP can have 256 MACs, giving half a teraop; the Vision Q7 DSP can have 512 MACs, giving a teraop. As Linley Gwennap said in his keynote at the recent processor conference, the MediaTek Helio P70 contains a Vision P6 DSP and a custom neural engine. That application processor (AP) is in lots of phones, especially in Asia.
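The half-teraop and one-teraop figures follow directly from the MAC counts. A minimal sketch of the arithmetic, assuming a nominal 1 GHz clock (the actual clock rate depends on the process node and implementation) and counting each MAC as two operations (multiply plus accumulate):

```python
def tops(macs_per_cycle, clock_ghz=1.0, ops_per_mac=2):
    """Peak tera-ops/second = MACs/cycle * ops/MAC * cycles/second.

    Assumption: 1 GHz clock is illustrative only; real silicon varies.
    """
    return macs_per_cycle * ops_per_mac * clock_ghz * 1e9 / 1e12

print(tops(256))  # Vision P6: 256 MACs -> ~0.5 TOPS at 1 GHz
print(tops(512))  # Vision Q7: 512 MACs -> ~1 TOPS at 1 GHz
```

A faster clock or a wider MAC array scales the peak number linearly, which is how a multi-processor DNA 100 configuration reaches into the hundreds of TOPS.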
So yes, we've had neural network processing as an option for years in both the HiFi and Vision product lines.
And, of course, the top-of-the-line AI processor is the Tensilica DNA 100 processor, which scales to hundreds of TOPS and more in a multi-processor system. This processor, announced last year, already has design wins in multiple markets.
So Tensilica is under the hood in top-tier mobile phones, leading automotive systems, the top virtual reality (VR) platforms, augmented reality (AR) platforms (for example, see my post HOT CHIPS: Microsoft Hololens 2), many of the wireless earbuds, surveillance, smart home, and more.
We obviously can't show all of this at CES since we don't have one of those booths that has its own zip code. But you should come by and see a sampling of products, and talk to our experts about your requirements.
We will have a Protium X1 FPGA prototyping system on the booth running a model of the DNA 100 processor. In turn, the DNA 100 processor will be showcasing two neural networks, one for segmentation and one for object detection. This type of application shows up in automotive and augmented reality.
The second AI demo runs on a Vision P6 DSP in the Dream Chip SoC. It shows attention-based image tagging using a time-based recurrent neural network. This type of application is used in smartphones to automatically tag photographs in real time as you take them.
We will be showing lots of phones that use the Vision DSP family to do both vision-type things and AI-type things.
When you think of vision, radar is not the first thing you think of, but it is just another part of the electromagnetic spectrum. We will have Vayyar's 3D radar demo. Here's a sample board from Uhnder, another radar product. The SoC is the little silver square in the middle.
Black Sesame will be showing its chip based on an automotive co-processor. This chip goes into aftermarket automotive cameras for driver monitoring systems (DMS).
We will also be showing a Vision DSP-based system for stereo depth. This is targeted at industrial recognition (think inspecting apples or bolts on a conveyor belt).
We will have a demo from Qualcomm-backed Chinese company Elevoc (the "Ele" stands for elephant—look at their logo). They have the first AI-based speech de-noising technology that precisely extracts speech from background noise based on computational auditory scene analysis. No, I don't know what that is either, but I'll go along and find out.
Sensory will be demoing their VoiceGenie technology on the NXP RT600 chip with a HiFi 4 DSP inside. This is a home automation demo, showing wakeword detection and voice commands such as "turn on the light", "make it colder", or "outside temperature".
Alango will also be showing their voice experience technology, also on an NXP RT600. They do microphone beamforming, noise reduction, stereo acoustic echo cancellation, direction-of-arrival detection, and more. They can use fewer microphones and work over a longer distance, and so get better results.
Sonarax uses ultrasound as a "network", using commodity speakers and microphones. Their technology works even in noisy and echoey environments. It is especially effective where standard networks are unavailable (air is everywhere!).
Our booth is appointment-only, so book a meeting and come and see Tensilica everywhere. To book a meeting click on:
Sign up for Sunday Brunch, the weekly Breakfast Bytes email.