On the second day of the Linley Processor Conference, the keynote was given by Bruce Davie. He did his PhD in computer science at the University of Edinburgh (yeah, so did I) and worked for many years at Cisco before joining a company called Nicira, which was acquired by VMware. He focuses on network virtualization (often called network function virtualization, or NFV). His goal, as he explained in his keynote, is to do for networking what VMware has already done for servers. Also, conforming to the Monty Python stereotype, he is an Australian called Bruce.
Bruce started by asking us all what percentage of servers are virtualized. Most people got it wrong, even though the multiple-choice answers he offered made it easy; the highest option was 60%. In fact, it was 81% in 2014, projected to reach 94% in 2019.
He admitted that network management has been a disaster. Security is pretty much a disaster. But he was here to tell us why it was getting better.
As an introduction to what network virtualization is, he had an analogy with server virtualization. Server virtualization runs each application on its own virtual machine. The application doesn't "know" it is not running on a real server. Underneath, there is a hypervisor that makes it all work. It manages all the physical devices and networks, and can even move a running application from one physical server to another. Underneath the hypervisor is the real server hardware. In an analogous way, each network runs on its own virtual network, and underneath is the network virtualization platform, the equivalent of the hypervisor. Underneath that is the physical network hardware. Configuring each network can be done independently since it doesn't "know" the other networks exist.
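The analogy above can be sketched in a few lines of code. This is a toy model, not any vendor's API, and every class and method name here is an invented assumption: a mapping layer (the "network hypervisor") resolves each tenant's virtual addresses onto shared physical hosts, so two tenants' networks can be configured independently without "knowing" about each other.

```python
class NetworkHypervisor:
    """Toy model: maps (tenant, virtual IP) pairs onto shared physical hosts."""

    def __init__(self):
        self._mapping = {}  # (tenant, virtual_ip) -> physical_host

    def attach(self, tenant, virtual_ip, physical_host):
        # Two tenants may reuse the same virtual IP: the tenant name keeps
        # their address spaces isolated, just as a server hypervisor
        # isolates the memory of two virtual machines.
        self._mapping[(tenant, virtual_ip)] = physical_host

    def resolve(self, tenant, virtual_ip):
        return self._mapping[(tenant, virtual_ip)]


hv = NetworkHypervisor()
# Both tenants use 10.0.0.1, but each lands on a different physical host.
hv.attach("tenant-a", "10.0.0.1", "rack1-server7")
hv.attach("tenant-b", "10.0.0.1", "rack2-server3")

print(hv.resolve("tenant-a", "10.0.0.1"))  # rack1-server7
print(hv.resolve("tenant-b", "10.0.0.1"))  # rack2-server3
```

The point of the sketch is the indirection: nothing a tenant configures in its own virtual network can collide with another tenant's, because the physical placement is resolved underneath.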
Deployment of a server has changed a lot in the last 20 years. It used to involve physical stuff like CDs and cables. Now it is entirely done in software. You turn the server on and it boots up, installs the hypervisor, and copies over everything it needs. For example, say you want 10 virtual machines on Amazon AWS, and you want them connected, and you want them connected to the public internet and firewalled. You get out your credit card and it just happens, driven by software. No Amazon employee has to be involved.
However, network deployment has hardly changed. The commands typed into a router are basically the same as 20 years ago; the only difference is that they now arrive over ssh instead of telnet. He used the image at the start of this post to drive the point home.
One promise of network virtualization is the same kind of non-disruptive deployment. Today, network changes are so disruptive that nobody makes them casually; they happen at 2am on a Sunday morning, when the system is as idle as it ever gets. What is required is to separate the network as seen by the software from the actual physical network, by adding a network hypervisor between them. Individual networks can then be configured independently, without risking disruption to other networks, even though everything runs on the same pool of hardware.
The next big area for improvement is security. Current security approaches are based almost entirely on perimeter defense, which has proven inadequate. It is a "crunchy shell with a soft chewy interior."
A modern attack relies on there being little security once inside. Somehow the attacker gets malware into the datacenter: perhaps through an exploit, perhaps via an internal employee. It doesn't matter how much you spend on perimeter security; the attacker only has to be good once. Once inside, the malware moves around freely, trying to find the place where it can do the most damage. You hear about attacks like this every day, most recently the Yahoo one. Perimeter defense is not enough, and today the very concept of a perimeter is unclear when companies make use of services like Box and Salesforce.
In fact, this is the wrong focus anyway: "You don't care if your datacenter is secure, you care if your data is secure." What is required instead is logical segmentation around application boundaries, with security enforced at those boundaries. Instead of a high-powered hardware firewall at the perimeter, firewalling is done in software on the server hardware; you can do 10Gb/s of firewalling on a server without deploying a single physical firewall box. The yellow dots in the diagram above are these little bits of firewall.
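A minimal sketch of this micro-segmentation idea, with segment names and rules invented purely for illustration: instead of one perimeter firewall, each application boundary carries its own allow-list, enforced in software, with everything else denied by default.

```python
# Allow-list of (source segment, destination segment, port) tuples.
# These segments and ports are hypothetical examples.
ALLOW = {
    ("web", "app", 8080),   # web tier may call the app tier
    ("app", "db", 5432),    # app tier may query the database
}

def permitted(src_segment, dst_segment, port):
    """Default-deny: traffic passes only if a rule explicitly allows it."""
    return (src_segment, dst_segment, port) in ALLOW

# The web tier can reach the app tier...
print(permitted("web", "app", 8080))  # True
# ...but malware on a web server cannot reach the database directly,
# even though it is already "inside" the perimeter.
print(permitted("web", "db", 5432))  # False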
Increasingly, the "routers" and not just the firewalls are running on the servers. As Bruce put it, "the megascale datacenters have spoken." But the rest of us can have a software-defined datacenter, too, running any application on the (orange) software-defined datacenter (SDDC) layer. There is still the issue that fixed-function network hardware can be 50 times faster than programmable solutions, so what will be required is programmable hardware. That will be a game-changer. In the limit, we can define our own network protocols and run them over "our" virtualized network, without affecting anyone else running different protocols over the same physical network. With the growth of open-source networking, this is less crazy than it sounds.
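A toy illustration of how a custom protocol can ride over a virtualized network: the overlay wraps each packet in an outer header carrying a virtual-network identifier (in the same spirit as VXLAN's VNI), so the physical network only forwards opaque payloads and never needs to understand the inner protocol. The field layout below is invented for illustration, not a real wire format.

```python
import struct

def encapsulate(vni, inner_packet):
    # Outer header: a 4-byte virtual network identifier, followed by the
    # untouched inner packet, whatever protocol it happens to speak.
    return struct.pack("!I", vni) + inner_packet

def decapsulate(frame):
    (vni,) = struct.unpack("!I", frame[:4])
    return vni, frame[4:]

frame = encapsulate(42, b"my-custom-protocol-header+payload")
vni, inner = decapsulate(frame)
print(vni)    # 42
print(inner)  # b'my-custom-protocol-header+payload'
```

Because the underlay sees only the outer header, two tenants can run entirely different inner protocols over the same physical network without interfering with each other.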
In the past, you had to be a hardware manufacturer to be considered to be in the networking business. But VMware makes no hardware, yet is now listed on analysts' charts in the same quadrant as Cisco, Juniper, and the rest.