The short answer is that hardware design is not going away, but the costs and risks associated with it need to be reduced. SoC verification is typically the largest contributor to hardware design cost, and that cost rises with the effort spent reducing risk. One recommended response has been the "multiple platform" approach, in which a chip is assembled from pre-verified IP subsystems. This reduces both risk and design cost - a win-win for any hardware design team. But it also reduces the ability to differentiate through hardware.
Since the electronics market is driven more and more by the consumer segment these days, let me illustrate this with some of my own shopping experiences.
Moving to high-definition TV made watching everything a much better experience, but sports is where it really shone. Still, I watch a lot of hockey, and the fast movement of the puck - and sometimes the players - can still be blurry. Shopping for a new TV is fascinating: the biggest push from TV makers seems to be 3D, though that has subsided a bit due to the "system problem" of requiring all the viewers to wear silly and expensive glasses.
Then there's "smart TV," or apps. Apps are great; I love being able to watch Netflix, Hulu, and even YouTube on the big screen. I own a $100 Roku box that does a great job of this, and a $100 Blu-ray player that can do the same. If Roku updates its hardware to support more software capabilities - say, general-purpose web browsing - I will pay to upgrade it (differentiation through hardware!). But I would not upgrade a $1000-or-more television for that. Besides, these apps are all available from other television makers, so where's the differentiation there? No, the main things I will look for in a new TV are a faster refresh rate and higher-performance video processing. And as a bonus, I want better audio quality so I don't have to run my home theater receiver to get reasonable sound. This is mostly hardware, in some cases hardware working in conjunction with software.
On the flip side, I just upgraded to a new phone. Its video is higher quality than I ever imagined having on a phone. And it's 4G, so downloads are incredibly fast. In general this thing just rocks - who would ever have thought a phone would have a dual-core 1.2 GHz processor? But the battery life is less than I would like in a phone. And that's with a clever rules-based software system that turns off communications under certain conditions - say, at night when the display is off and the phone is not in its charging cradle. So the software is cool, but it is really a software patch for hardware that could be more power-efficient. Software did help me narrow down my choices - I wanted an Android phone - but plenty of device makers build Android phones, so that part is more about the ecosystem. The main reasons I chose this one were 4G and a physical keyboard - both hardware.
Finally, if you look at the tablet market, there are also a lot of tablets to choose from, and not a lot of software differentiation between them. You still have the iOS vs. Android ecosystem choice, plus a couple of other ecosystems. Sometimes device makers layer their own interface on top, as they do with phones, but more often than not those are detractors rather than differentiators. We are starting to see some segmentation within this product category - low-cost, kid-friendly, ruggedized, application-specific devices like the Nook or Kindle - and this segmentation comes from a combination of hardware, software, and product design meeting specific needs. This is the future of electronic design.
Segmenting and targeting products like this necessarily requires more variety in SoCs. The video processor for a Quad HD TV is going to be too expensive for an educational tablet and too power-hungry for a smartphone - even though the underlying video decoding algorithm may be the same. This brings us back to the original problem: it's too expensive and risky to design that many distinct SoCs, which is what got us to where we are today. Buying or re-using standard subsystems is a conservative approach on the hardware side, but the more it is practiced, the less room there is to differentiate in hardware. That leaves the software team with the task of trying to differentiate (apps in my TV), or of adapting the hardware to a different need (power-saving rules in my phone). Or, even worse, it leaves the product design team with the unenviable task of strapping a larger battery onto what was previously known as the world's thinnest 4G phone.
Wouldn't we be better off designing the product from the outside in, to meet the needs of a specific target segment? But how do you justify the huge cost and risk of an SoC project without the subsystem-platform approach? This is where TLM design and verification comes in. With a high-abstraction TLM you can validate your video decoding algorithm by running the actual software against it. Hardware designers can then refine that same TLM into a hardware architecture and rapidly verify all of the functionality.
At this point, the vast majority of the functionality has been verified - more thoroughly than it could have been in RTL. And the best part is that this already-verified piece of IP can then be run through high-level synthesis (HLS) to generate an implementation that is specific to the end product. If the end product is a television, the HLS constraints would focus on throughput; if it is a phone, they would focus on area and power, and you can target a low-power library. The point is that by taking this higher-level approach, you get the benefits of pre-verified IP with the flexibility to target the micro-architecture to whatever segment you need.
So there are still plenty of opportunities to differentiate through chip design. But it has to be done in the whole-product context - in conjunction with the software, the product design, the system. And it needs to be able to follow market segmentation, so that the end product can satisfy a unique need rather than being a general-purpose chip or device that inevitably forces differentiation to come from the software.