When the second-generation cell-phone technology GSM was developed, the biggest issue was keeping the computational load manageable for the semiconductor technology of the time. That was the most important resource to optimize. Now, in the LTE era, efficient use of the radio spectrum is what matters most, and doing a lot of computation is the way to achieve it. But there was another second-generation cell-phone technology alongside GSM: CDMA. There were also some variants in Japan and China that were deliberately incompatible with anything else.
It is very rare for a company to develop a new standard and establish it as part of creating differentiation. Usually companies piggy-back their wares on existing standards and attempt to implement them better than the competition in some way, as did all the companies supplying chips for GSM phones, such as TI, ST, Infineon, Freescale, VLSI Technology, and more. There were exceptions among big companies. When AT&T was a monopoly it could simply decide what the standard would be for, say, the modems of the day or the plug your phone would use. IBM could simply decide how magnetic tapes would be written.

Qualcomm, however, created the basic idea of CDMA, made it workable, and owned all the patents. They developed the standard, made the technology work, designed the SoCs, wrote the entire software stack and, in the early days, even had a joint venture with Sony making actual handsets. Qualcomm went from being a company nobody had heard of to being the largest fabless semiconductor company and one of the top-10 semiconductor companies overall. They may struggle a little to keep that success going, since the market leaders, such as Apple, Samsung, and Huawei/HiSilicon, design their own chips, and the smartphone market is past its period of fastest growth.
The first time I ran across CDMA it seemed unworkable. CDMA stands for code-division multiple access, and the basic technique relies on mathematical oddities called Walsh functions. These are functions that everywhere take either the value +1 or −1 and look like pseudo-random codes. But they are very carefully constructed pseudo-random codes: any two distinct Walsh functions are orthogonal. If you encode a data stream (voice) with one Walsh function and correlate it with a different one at the receiver, you get essentially zero. If you correlate it with the same Walsh function, you recover the original data. This allows everyone to transmit at once on the same frequencies, and only the data stream you are trying to listen to gets through. It is sometimes explained as being like a noisy party, where you can pick out a particular voice by tuning your ear into it.
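The orthogonality trick is easy to see in a few lines of code. This is only an illustrative sketch, not how a real CDMA modem works (it ignores power control, multipath, chip timing, and noise): it builds Walsh codes via the standard Sylvester/Hadamard recursion, spreads one data bit per user, adds the users' signals on a shared channel, and despreads by correlation.

```python
import numpy as np

def walsh_matrix(n: int) -> np.ndarray:
    """Build a 2^n x 2^n matrix of +1/-1 Walsh codes (one per row)
    using the Sylvester recursion H_{2k} = [[H, H], [H, -H]]."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

# Eight orthogonal length-8 Walsh codes, one per "user"
codes = walsh_matrix(3)

# Two users each spread one data bit (+1 or -1) and transmit
# simultaneously on the same frequencies; the channel just adds them
bit_a, bit_b = +1, -1
channel = bit_a * codes[3] + bit_b * codes[5]

# Despreading = correlating the combined signal with a user's code.
# The matching code recovers that user's bit; any other code gives zero.
recovered_a = np.dot(channel, codes[3]) / len(codes[3])
recovered_b = np.dot(channel, codes[5]) / len(codes[5])
bystander   = np.dot(channel, codes[6]) / len(codes[6])

print(recovered_a, recovered_b, bystander)  # 1.0 -1.0 0.0
```

The row indices 3, 5, and 6 are arbitrary choices; any distinct rows behave the same way, which is exactly the "everyone talks at once" property described above.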
Walsh functions and the idea of CDMA were very elegant. However, my experience of very elegant ideas is that they get really messy when they meet real-world issues. Force-directed placement, for example, seems an elegant concept, but it gets messier once your library cells are not points and once you have to take into account other constraints that aren't easily represented as springs. So I felt CDMA would turn out to be unworkable in practice. CDMA does have its share of complications on top of the elegant underpinning: needing to adjust the transmit power every few milliseconds, needing to cope with multiple reflected (and therefore time-shifted) copies of the signal, and so on.
At the highest level, what is going on is that GSM (and other TDMA/FDMA standards) could get by with very simple signal processing, since they put a lot of complexity in the air (radio) interface and didn't make optimal use of bandwidth. CDMA has a very simple radio interface (everyone just broadcasts) but requires a lot of processing at the receiver to make it work. But Moore's Law meant that by the time CDMA was introduced, 100-MIPS digital signal processors were a reality, and so it was the way of the future.
Of course, my guess that CDMA was too elegant to be workable was completely wrong. Current and future standards for wireless are largely based on wide-band CDMA, using a lot of computation at the transmitter and, especially, the receiver to make sure that bandwidth is used as close to the theoretical maximum as possible. The limit in the handset is largely power: how to get the computation done without draining the battery fast or overheating.
But I have a Qualcomm story that shows how far Qualcomm and CDMA have come. Before CDMA turned out to be a big success, Qualcomm was struggling. In about 1995, VLSI tried to license CDMA so that we could build CDMA chips alongside the GSM chips we already built. Qualcomm had "unreasonable" terms and were hated in the industry, since they charged license fees to people who licensed their software, to people who built phones (even if all the CDMA was in chips purchased from Qualcomm themselves), and to people who built chips (even if they only sold them to people who already had a Qualcomm phone license). They were perceived as arrogant by everyone. Now that's differentiation! The royalty rates were too high for us and we ended up walking away from the deal.
I was in Israel, two days from the end of a quarter, when I got a call from Qualcomm. They wanted to do a deal after all, but only if all royalties were pre-paid up front and non-refundable, and the amount had to be $2M. That way they could recognize the revenue that quarter. We managed to do a deal on very favorable terms (I stayed up all night two nights in a row, after a full day's work, since I was 10 hours ahead of San Diego, finally falling asleep before we took off from Tel Aviv and having to be woken after we'd landed in Frankfurt). The money was paid and Qualcomm avoided a quarterly loss. Now, with revenue of $25B, they make that $2M roughly every hour of every day.