Data processing presents difficult choices for designers of Internet of Things (IoT) and mobile devices. If you’re designing a device that must have low power, a small form factor, and a low price tag, you might want to move some of the processing off the device and into the cloud. Yet moving data to the cloud takes energy and imposes costs of its own.
This dilemma was explored in a session at the recent Linley Processor Conference titled “CPU and DSP Technology for IoT and Mobile Devices.” Speakers, represented left to right in the composite photo below, were as follows:
Martin came to Cadence in 2013 with the Tensilica acquisition, and he spoke about the “configurable and extensible” processor concept originally developed by Tensilica. His presentation was titled “Battling with Big Data: Efficient Protocol Processing for Storage Analytic Applications.” It focused on the urgent need for energy-efficient embedded processing of networking protocols and flash-storage optimization algorithms.
Martin noted that it may be appropriate to shunt data to the cloud for heavy-duty processing, but designers must remember that energy per operation will scale with distance. “You need to figure out what is appropriate to do locally, how you can facilitate cloud-based processing, and how to make data movement as efficient as possible in energy terms,” he said.
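To see why data movement matters so much, consider a back-of-the-envelope comparison. The energy figures below are purely illustrative assumptions, not numbers from Martin’s talk; they reflect only the general point that radioing a bit tends to cost orders of magnitude more energy than a local compute operation:

```python
# Back-of-the-envelope energy comparison: process a sensor sample
# locally vs. transmit it to the cloud. Both constants are assumed,
# illustrative values -- not measured data from the presentation.
E_OP_LOCAL_PJ = 10     # assumed energy per local compute op, picojoules
E_TX_PER_BIT_NJ = 100  # assumed radio energy per transmitted bit, nanojoules

def local_energy_nj(ops_per_sample):
    """Energy (nJ) to process one sample on-device."""
    return ops_per_sample * E_OP_LOCAL_PJ / 1000.0

def offload_energy_nj(bits_per_sample):
    """Energy (nJ) to radio one raw sample toward the cloud."""
    return bits_per_sample * E_TX_PER_BIT_NJ

# Filtering a 16-bit sample with ~100 local ops:
print(local_energy_nj(100))   # → 1.0 nJ
# Shipping the raw 16-bit sample over the radio:
print(offload_energy_nj(16))  # → 1600 nJ
```

Under these assumed numbers, local processing wins by three orders of magnitude per sample, which is why deciding “what is appropriate to do locally” is an energy question, not just an architectural one.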
Martin said that “there is a whole lot of network processing going on at various levels of the hierarchy as data moves back and forth.” He noted that configurable and extensible processors are helpful at many levels of this hierarchy. Compared to a general-purpose processor, a configurable processor offers higher performance and lower energy consumption. Compared to a design-from-scratch RTL approach, it offers flexibility, programmability, and “future proofing” (it won’t be obsolete if a new standard comes out).
The configurable and extensible approach allows users to select an algorithm, choose a processor configuration, and generate processors (see figure below). The “real magic,” Martin said, is the ability to add new processor instructions. Essentially the designer is getting a RISC processor that can be closely tuned for a given application, with a wide range of performance, power, and area tradeoffs. Developers can improve register availability, execute multiple instructions at once, increase local memory bandwidth, and use direct memory access (DMA) to connect to system memories and logic.
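To make the custom-instruction idea concrete, here is a toy model (written in Python, not Tensilica’s actual TIE extension language) that counts the instructions a dot-product loop would issue with and without a hypothetical fused multiply-accumulate (MAC) instruction:

```python
# Toy model of the custom-instruction benefit: count the "instructions"
# a dot-product inner loop issues. The fused MAC and the resulting 2x
# ratio are illustrative assumptions, not Xtensa specifics.

def dot(xs, ys, fused_mac=False):
    """Compute a dot product, returning (result, instructions_issued)."""
    acc, ops = 0, 0
    for x, y in zip(xs, ys):
        if fused_mac:
            acc += x * y         # one fused multiply-accumulate
            ops += 1
        else:
            product = x * y      # separate multiply...
            acc = acc + product  # ...then separate add
            ops += 2
    return acc, ops

print(dot([1, 2, 3], [4, 5, 6]))        # → (32, 6)
print(dot([1, 2, 3], [4, 5, 6], True))  # → (32, 3)
```

Halving the instruction count in an inner loop is exactly the kind of performance/power/area tradeoff the configuration flow exposes; real gains depend on the application and the instructions chosen.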
Martin then presented a number of networking and storage applications that can benefit from configurable and extensible processors, including flash/SSD controllers, flash translation layer acceleration, multi-protocol acceleration, large receive offload acceleration, table lookup/address classification, cryptography acceleration, and hash and linked list traversal. In these examples, the Xtensa technology developed by Tensilica showed impressive performance and efficiency gains.
“Whether you’re dealing with a hand-held, the cloud, or the network in between, there are lots of opportunities where this [configurable and extensible] technology can be applied,” Martin concluded. “It really gives you a tremendous amount of flexibility and a tremendous gain in energy and programming efficiency.” However, configurable and extensible processing will require that designers “break free of some conventional thinking,” he said.
Xtensa customizable processor IP is further described on the Cadence IP web site.
Getting Rid of the Pipeline
Semiconductor IP provider CAST is also challenging conventional thinking. In a talk titled “The BA20 Processor: Responding to IoT and Wearable Device Energy Challenges,” Bill Finch gave a detailed view of the BA20 MPU, an “energy-optimized 32-bit embedded processor” that has what the company calls a “PipelineZero” architecture. That means there’s no pipeline.
Processors for IoT, Finch said, need to consume as little energy as possible when idle, use as little memory as possible, and complete tasks with the lowest possible energy cost. What they need is high performance efficiency, but not necessarily high frequency. Today’s pipelined CPU architectures are, in many cases, “overkill.” The BA20 represents a new architecture where “we got rid of the pipeline, which lets us do everything in one cycle.” This avoids the data, structural, and branch hazards posed by pipelines.
The lack of hazards means high performance. And high performance, Finch said, leads to lower energy, because the processor can do more in less time and sleep for a longer time. The BA20 comes with a complete software tool chain.
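A toy cycle-count model (an illustration of the hazard argument, not the BA20’s actual microarchitecture) shows how a load-use hazard costs a pipelined core a bubble that a single-cycle core never pays:

```python
# Toy model: a 5-stage pipelined core that stalls one cycle on a
# load-use hazard, vs. an unpipelined core that finishes every
# instruction in one cycle. Illustrative only -- not the BA20's design.

def pipelined_cycles(program, depth=5):
    """program: list of (op, dest, srcs). One instruction issues per
    cycle after the pipeline fills, plus a one-cycle bubble whenever an
    instruction reads the result of the immediately preceding load."""
    cycles = depth - 1  # pipeline fill latency
    prev = None
    for op, dest, srcs in program:
        cycles += 1
        if prev and prev[0] == "load" and prev[1] in srcs:
            cycles += 1  # load-use hazard bubble
        prev = (op, dest)
    return cycles

def unpipelined_cycles(program):
    return len(program)  # one cycle per instruction, no hazards

prog = [("load", "r1", []), ("add", "r2", ["r1"]), ("store", None, ["r2"])]
print(pipelined_cycles(prog))    # → 8
print(unpipelined_cycles(prog))  # → 3
```

For long, hazard-free instruction streams a deep pipeline wins on clock frequency; Finch’s argument is that for short, bursty IoT workloads the fill overhead and hazard bubbles erode that advantage, and energy per completed task is what counts.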
Connectivity, Sensors, and Clouds
Ceva, a provider of DSP-based IP solutions, is also preparing for IoT. In a presentation titled “DSP-Based Platforms for the IoT Era,” Eran Briman discussed challenges and solutions for “IoT fundamentals” including connectivity, sensing, and edge processing (the latter includes things like audio and video analytics).
In the connectivity realm, IoT developers are facing constantly evolving communications standards, Briman said. What’s needed are scalable, modular, and multi-standard connectivity platforms. Briman described CEVA IP platforms that support Wi-Fi, Bluetooth, and the IEEE 802.15.4 standard, which specifies the physical (PHY) and media access control (MAC) layers for low-rate wireless personal area networks (WPANs).
Briman also discussed challenges in “always sensing” IoT. “The more sensors you add, the amount of data and processing you need is exponentially higher,” he said, noting that he’s seen single devices with more than 10 sensors. DSPs are an important part of a sensor network because “a lot of signal cleaning is needed” to extract meaningful data, he said.
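As a minimal sketch of what “signal cleaning” can mean, the moving-average low-pass filter below (a simple stand-in for the more sophisticated filters a real sensor-hub DSP would run) smooths sample-to-sample noise before higher-level processing:

```python
# Minimal example of sensor "signal cleaning": a moving-average
# low-pass filter. Real sensor fusion pipelines use more sophisticated
# filtering; this just shows the idea.
from collections import deque

def moving_average(samples, window=4):
    """Return the running mean over the last `window` samples."""
    out = []
    buf = deque(maxlen=window)
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

print(moving_average([0, 0, 4, 4], window=2))  # → [0.0, 0.0, 2.0, 4.0]
```

Even this trivial filter requires a multiply-accumulate per sample per tap when generalized to a weighted FIR filter, which is why this workload maps naturally onto a DSP rather than a general-purpose CPU.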
Finally, Briman acknowledged that there’s an ongoing debate over processing on a device versus processing in the cloud. “If you connect to the cloud, there’s a lot of power consumption, and it doesn’t always make sense,” he said. “Some things are best processed locally.”
My takeaway: What data is processed where is going to be a major challenge for IoT development – and a focal point for innovation.
The Linley Processor Conference
The Linley Processor Conference is an annual event focused on the latest processor chips, IP, and technologies required to efficiently process data in embedded, enterprise, and cloud applications. It is produced by the Linley Group, a research and analysis firm that tracks the semiconductor industry. The 2014 conference was held Oct. 22-23 in Santa Clara, California. It included a keynote speech and over 20 technical presentations that revealed some of the latest trends and products in networking and communications applications. Cadence was a conference sponsor.
Related Blog Posts
Archived Webinar: Cadence, ARM Forge Design Flow for Mixed-Signal Internet of Things (IoT) SoCs
IoT Focus: IoT Applications Require a New Architectural Vision
Why Cadence Agreed to Acquire Tensilica – And How It Can Change SoC Design