It is the start of the year, so it's time for my predictions for 2019. These are the topics that I expect to spend a lot of time writing about this year. Let's start with the big picture and work down to the small (do we call it a 30Å process yet?).
In 2018, the overall semiconductor market was very strong, but much of that strength came from demand outstripping capacity in the memory market, especially DRAM. With additional capacity coming online, the people who follow the memory market full-time all seem to predict softening prices. For more details, see my post Semiconductor 2018: Up and to the Right...But Memory Way Up. This could mean that the overall headline numbers for the semiconductor market might shrink.
From an EDA point of view, it won't matter if the memory market shrinks, since memory companies only do a few designs and then manufacture them in massive volume. I'm always reminded of VLSI's corporate counsel at one point, who had come from Micron, saying in one meeting "At Micron we used to put a couple of designs into production per year, and in the ASIC market we put a couple into production most days." EDA, IP, and generally anything associated with design are much more driven by the other end markets (see the rest of this post).
With a couple of dozen fabs under construction, this should be the year that some of that capacity starts to come online. But China is the big geopolitical story in many other ways, a big one being that they are committing $150B to semiconductors in a drive to be more self-sufficient. As Chinese fabs come online, even if they are not globally competitive (at least at first), there will be a lot of pressure for Chinese manufacturers to use those chips. Since about half the semiconductors in the world are imported into China, this could have a big effect. It will affect memory first, since China has no competitive leading-edge fabs, but it is worth remembering that a huge percentage of designs are done in non-leading-edge processes. In fact, the #2 foundry, GLOBALFOUNDRIES, decided last year not to pursue the leading edge anymore (see my post GTC: GlobalFoundries Pivots), and China is not pursuing the leading edge anyway (at least for the time being; that could change eventually).
I will go to SEMICON China again this year, along with about 90,000 other people based on last year's numbers. So expect more on the Chinese market in late March.
This is the year that 5G rollouts will begin. But 2019 will mostly be the year of marketing hype, with pilot projects and expensive handsets that you can't really use anywhere. I expect that this year's Mobile World Congress in Barcelona in March will be an all-out 5G show, with little else being talked about.
I won't go into all the details of 5G here; I'll save that for a dedicated post. But 5G is not like previous standards in that there are multiple frequency bands, and there is much more of a tradeoff between coverage and performance. I fully expect lots of deliberately planted confusion, where the performance of the so-called mmWave band, offering speeds of up to 10Gbps, gets blurred with the performance of the mid-band and low-band spectrum, which is more like 100+Mbps. The reality is that mmWave won't go through walls, so you won't get that performance from the big basestations, only from so-called small cells.
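To make that gap concrete, here is a back-of-the-envelope comparison in Python. The speeds are just the headline figures from the paragraph above (peak rates, ignoring protocol overhead, contention, and real-world radio conditions), so treat the numbers as illustrative:

```python
# Rough comparison of the 5G bands discussed above; speeds are the
# headline figures from the post, not measured throughput.
BANDS_GBPS = {
    "mmWave (peak)": 10.0,   # up to 10Gbps, small cells only
    "mid/low band": 0.1,     # more like 100+Mbps
}

def download_seconds(file_gigabytes: float, link_gbps: float) -> float:
    """Time to move a file at a given link rate: GB -> gigabits / Gbps."""
    return file_gigabytes * 8 / link_gbps

for band, gbps in BANDS_GBPS.items():
    print(f"{band}: {download_seconds(5.0, gbps):.0f} s for a 5GB file")
```

At those peak numbers, a 5GB download takes about 4 seconds on mmWave versus about 400 seconds on mid-band, which is exactly the kind of 100x difference that marketing will be tempted to blur.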
I will be in Barcelona in March, so look for posts on mobile in general and 5G in particular then.
Last year, the first announcements of EDA in the cloud were made (see, for example, my post Cadence Cloud or my post TSMC OIP Virtual Design Environment). At ESD Alliance meetings during the year, EDA in the cloud was the big thing that dominated the discussions. So far, it has mostly been about using the scalability of the cloud to bring more compute power to bear on the same problems. The business models haven't changed, but it wouldn't surprise me to see some tentative changes during the year.
AI, machine learning, deep learning: I use these terms interchangeably. The advances in AI have been truly astounding over the last five or so years. I think that there are two different ways in which deep learning will affect the semiconductor industry.
First, there are all sorts of end-user applications of the technology. I think that training will continue to be largely done in the cloud using GPUs, but that increasingly inference will be done on the edge devices, since that is where, in aggregate, most of the compute power is. For my most recent post on this topic, see Bagels and Brains: SEMI's Artificial Intelligence Breakfast.
Second, deep learning will be increasingly used in EDA tools, both under the hood, driving the algorithms inside the tools, and as part of the flow, automating a lot of the iteration that goes on, especially during phases like synthesis, physical design, and design closure. The buzzword here is "no human in the loop." That will remain a dream in 2019, but it is certainly the general direction. For more on this topic, see my post Cadence is MAGESTIC.
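As a toy illustration of that direction, here is a hypothetical Python sketch of an automated closure loop. Everything here is made up for illustration: `run_flow` is a stand-in model of a synthesis/place-and-route run (not a real tool API), and the knobs and slack numbers are invented. The point is only the shape of the loop, where the flow retries settings until timing is met, with no human in the loop:

```python
# Hypothetical "no human in the loop" closure sketch. run_flow() is a
# stand-in model of a synthesis/place-and-route run, not a real EDA API.
def run_flow(effort: str, max_fanout: int) -> float:
    """Pretend flow run: returns worst slack in ns (negative = timing fails)."""
    base = {"low": -0.30, "medium": -0.10, "high": 0.02}[effort]
    # In this toy model, tighter fanout limits nudge slack upward.
    return base + (32 - max_fanout) * 0.005

def close_timing() -> tuple[str, int, float]:
    """Sweep tool knobs automatically until worst slack is non-negative."""
    for effort in ("low", "medium", "high"):
        for max_fanout in (32, 24, 16):
            slack = run_flow(effort, max_fanout)
            if slack >= 0:
                return effort, max_fanout, slack
    raise RuntimeError("no knob setting closed timing; escalate to a human")

print(close_timing())
```

In this toy model, the low and medium effort runs all fail timing, and the loop first closes at high effort with the default fanout limit. The real version of this idea is exactly where deep learning comes in: learning which settings to try, rather than sweeping them blindly.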
Automotive will continue to be a fast-growing segment of the semiconductor industry, driven by the need for foundries to find markets that are growing as mobile slows or perhaps (in terms of dollars, not transistors) shrinks. It is also driven by the end markets switching away from human-driven, individually owned, internal combustion engine vehicles to...whatever vision of the world turns out to be true. This is another area where China is leading since, for pollution reasons, they are basically making it increasingly hard to purchase anything other than electric vehicles. For more on this, see all the links in the automotive section of yesterday's post.
In 2019, 7nm (and Intel's 10nm, which is similar) will be mainstream. EUV will be in true volume production. The bleeding edge of process development will move to 5nm. Just because EUV works at 7nm doesn't mean that there are not major issues for 5nm; as was said at SEMICON West last year, "we're gonna need more photons." Next-generation interconnect might well be based on ruthenium (Ru). At 3nm, it looks like transistors will be nanosheet gate-all-around (GAA). For more on this area, see my posts from the recent IEDM (some of which have appeared, and some of which will appear in the next couple of weeks).
Sign up for Sunday Brunch, the weekly Breakfast Bytes email.