Paul McLellan

It's the Second Mouse That Gets the Cheese

20 May 2020 • 7 minute read

I love short phrases that make you think, "Wait...what?" and then you think about it and gradually realize it is true. A good example is the title of today's blog post: "It's the second mouse that gets the cheese".

Product development, especially in startups, has a major focus on getting the product to market as fast as possible. Given that focus, you’d think that the primary mode of failure for a product would be being too late to market. But it’s actually hard to think of products that failed by being too late. Some products fail because they never manage to get shipped at all, which I suppose is a special case of being too late to market — you can’t be later than never. But try to think of a product that failed because, by the time it got to market, a competitor had already vacuumed up all the opportunities.

In EDA, the only example I can think of is Monterey in place and route: simply too far behind, and thus the #4 player in an EDA segment (Magma was still around), which is never a comfortable place to be. In my experience, most of the profit goes to the #1 player in each segment, the #2 company makes a small profit or breaks even, and everyone else loses money.

Too Early to Market

On the other hand, many projects fail because they are too early to market. In EDA, technologies tend to be targeted at certain process nodes, which we can see coming down the track. There’s little upside in developing technologies to retrofit old design methodologies that, by definition, already work. Instead, the EDA startup typically takes the Wayne Gretzky approach of going where the puck is going to be: develop a technology that is going to be needed and wait for Moore’s Law to progress so that the world does need it. If the timing is perfect, the product arrives on the market just as it is required for the new node. The trouble, though, is that often it arrives too early. Everyone underestimates the amount of mileage that can be got out of the old technologies. I remember telling customers that they wouldn't be able to do 1um (1000nm!) designs without a good floorplanning tool. Maybe that was true by about 130nm, or perhaps even later.

Since process nodes come along every couple of years, although that is predicted to slow, getting the node wrong can be fatal for a startup or a small company. If you develop a technology that you believe everyone needs at 5nm but it turns out not to be needed until 3nm then you are going to need an extra two or more years of money. And even then, it may turn out not to be really compelling until the node after that (1.5nm? 2nm?). That will be after you’ve gone out of business.

The windows paradigm, also known as WIMP (for windows, icons, menus, pointing device) was originally developed at Xerox Palo Alto Research Center, or PARC. For more about that, see my posts The Alto: The Machine That Changed the World and The Alto—Forty Years On. Even earlier work was done by Doug Englebart at SRI, which I wrote about in The Mother of All Demos.

Xerox is often criticized for not commercializing the Alto but, in fact, they did try. They had a computer, the Xerox Star, with all that good stuff in. But it was way too expensive and failed because it was too early. The next attempt was Apple's. Not the Macintosh, the Lisa. It failed: too early and so too expensive. Finally, the Macintosh was successful. One can argue to what extent the first Macs were too early, appealing only to hobbyists at first, until the laser printer (also invented at PARC) came along and suddenly "desktop publishing" was a thing.

There are other dynamics in play than just timing, but Microsoft clearly made the most money out of commercializing those Xerox ideas, coming along after everyone else. It was the second mouse, and it got most of the windows (small "w") revenue with Windows (large "W"). But it famously failed to turn that dominance on the PC into dominance in mobile, and I don't think you can put that down to anything to do with being early or late to market. My own view is that the phone handset manufacturers had seen what happened in the PC hardware market, where Intel and Microsoft took all the profit. They were not going to risk the same thing happening to handsets by using a Microsoft OS.

Another way of being too early is simply having an initial product that, it turns out, nobody needs because it’s not good enough yet. Semiconductor development methodologies are all about risk-aversion, and any change has to mean that the risk of changing is less than the risk of not changing. For a team with an early product in a process generation where the technology might be only nice-to-have, this is a high barrier to cross. Successful products are "antibiotics", not "vitamins". Furthermore, the startup might just serve as a wakeup call to everyone else that a product is required in the space, and eventually another company executes better (having seen the first company fail), or a big EDA company adds a similar capability to its own product line.

Boiling the Ocean

There is another problem with trying to get into an EDA market segment as a startup. There seem to be fewer and fewer segments where you don't already need a large part of the flow. For example, it was impossible to be the "double-patterning startup". You needed a whole product line to add double-patterning to: custom layout, place & route, DRC, extraction, and more. That's not to say there are no segments. You could see that by walking around the expo floor at DAC last year (this year, you won't be able to do that since DAC is going virtual).

I remember doing some consulting for a venture capital firm about 15 years ago. The company pitching the VCs had some interesting and innovative technology for doing symbolic Verilog simulation. I forget its name. There were a few startups doing something similar. The company said that the symbolic approach could be used for over 90% of a typical design. I asked them what they would do about the other 10%, and they said they planned to build a "normal" Verilog simulator to handle that. This was a red flag to me. They would run into a version of Amdahl's Law, which says that the overall speedup is limited by the part that cannot be parallelized. Or, in this case, the speedup of the simulation would be limited by the part that could not take advantage of the symbolic technology. Let's stick with their estimate of 10% and assume that their symbolic simulation technology is 100X faster than Verilog (I forget what their claims were). From a standing start, I think their non-symbolic Verilog simulator would be 4-10X slower than a state-of-the-art one. Let's be generous and say just 4X. So that 10% that is not amenable to the symbolic approach would take 40% of the runtime of "the competition", the pure Verilog simulators that symbolic simulation was 100X faster than. That leads to a maximum 2.4X speedup, even if the 100X number for the symbolic speedup was correct. Not nothing, but nowhere near 100X. The VC passed.
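The arithmetic above can be sketched as an Amdahl's-Law-style calculation. This is just a back-of-the-envelope script using the illustrative numbers from the story (90% symbolic coverage, 100X symbolic speedup, a fallback simulator 4X slower than state of the art), not anything from the actual pitch:

```python
def overall_speedup(symbolic_fraction, symbolic_speedup, fallback_slowdown):
    """Speedup versus a state-of-the-art pure Verilog simulator.

    The baseline simulator's runtime is normalized to 1.0. The symbolic
    part of the workload runs symbolic_speedup times faster; the rest
    runs on the startup's own simulator, fallback_slowdown times slower
    than the baseline.
    """
    symbolic_time = symbolic_fraction / symbolic_speedup
    fallback_time = (1.0 - symbolic_fraction) * fallback_slowdown
    return 1.0 / (symbolic_time + fallback_time)

# 90% symbolic at 100X, 10% on a simulator 4X slower than the competition:
print(round(overall_speedup(0.9, 100.0, 4.0), 2))  # ~2.44X
```

The 10% fallback portion dominates: it alone costs 0.4 units of baseline runtime, so even an infinitely fast symbolic engine could never push the overall speedup past 2.5X.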

A couple of years later, I had lunch with Venk Shukla, whom I knew since he had been the VP Marketing at Ambit when I was VP Engineering. He was CEO of a company called NuSym that was doing...symbolic simulation. I don't think it was the same company as I'd advised the VCs about earlier, or even quite the same idea. He was grousing about the amount of engineering that it took to support the various languages, the various incarnations of the Verilog API, and the various verification languages. That was all before his engineering team could go about delivering the symbolic cleverness the company was founded to provide. It wasn't exactly the problem that I'd identified for the VCs, but it was another flavor of the same problem: before you can deliver your secret sauce, you need a world-class simulator for basic Verilog, including everything that entails. Salespeople sometimes use the poker phrase "table stakes" to mean the basic capability that you must have to even get a meeting. Realistically, those table stakes are too high for a typical startup to afford before its cash runs out, never mind before it gets to developing its secret sauce.

I opened by talking about pithy phrases that make you think a bit before you realize that they are true. Here's another one: "Almost everyone has more than the average number of legs". For more discussion of that (and some other statistical anomalies), see my post Labor Day Off-Topic: Almost Everyone Has More Than the Average Number of Legs.

But in most markets, I think more people fail by being too early than by being too late. The early bird might get the worm, but it's the second mouse that gets the cheese.


Sign up for Sunday Brunch, the weekly Breakfast Bytes email.