Virtualization is about abstraction: first, separating the logical functions and organization of a system from the physical stuff the system runs on and, second, hiding the physical details as much as possible. Network functions virtualization (NFV) extends this approach to the network and its core functions.
To be sure, virtual appliances have been around for years, and they have slowly gained a foothold in both enterprise and cloud service provider data centers as a way to extend functions such as application delivery optimization into virtual server spaces.
NFV takes the idea of a virtual appliance beyond the notion of simply substituting it for a physical one. Instead, it specifies a complete ecosystem for managing dynamic network services -- those now performed by dedicated or proprietary routers, firewalls, load balancers and other components -- and places them in a virtualized infrastructure.
NFV as a concept emerged from the carrier space, with the European Telecommunications Standards Institute (ETSI) guiding the standards effort. For carriers, NFV makes network functions dynamic and scalable, allowing network services to be spun up automatically and quickly. What is more, because special-purpose network appliances are no longer needed, NFV promises to do all this while decreasing costs.
NFV is built around the idea of virtualized network functions (VNFs). A VNF can be provided by a dedicated virtual machine, multiple dedicated virtual machines or even by applications, possibly running inside containers (e.g., via Docker). VNFs run on standard commodity server infrastructure rather than specialized appliances. The key requirement is that everything be possible via APIs: instantiating VNFs, linking them together as needed to deliver a service (aka "chaining"), assigning resources to VNF components as needed, and monitoring their health and performance.
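ETSI does not prescribe a single concrete API for these operations, so purely as an illustration, here is a minimal Python sketch of the lifecycle the paragraph describes -- instantiating VNFs, chaining them in traffic order, tracking assigned resources and checking health. All class and attribute names here are hypothetical, not drawn from any real NFV product.

```python
from dataclasses import dataclass, field

@dataclass
class VNF:
    """One virtualized network function (e.g., a firewall or load balancer)."""
    name: str
    vcpus: int = 1       # resources assigned to this VNF component
    healthy: bool = True # stand-in for a real health/performance probe

@dataclass
class ServiceChain:
    """An ordered chain of VNFs that together deliver one network service."""
    vnfs: list = field(default_factory=list)

    def instantiate(self, name, vcpus=1):
        vnf = VNF(name, vcpus)
        self.vnfs.append(vnf)  # "chaining": insertion order is traffic order
        return vnf

    def total_vcpus(self):
        return sum(v.vcpus for v in self.vnfs)

    def is_healthy(self):
        return all(v.healthy for v in self.vnfs)

# Build a simple edge service: traffic flows firewall -> load balancer
chain = ServiceChain()
chain.instantiate("firewall", vcpus=2)
chain.instantiate("load-balancer", vcpus=1)
```

The point of the sketch is only the shape of the interface: create, chain, assign, monitor -- the same four verbs the paragraph attributes to NFV's API requirement.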
For these purposes, ETSI defines three key management layers: the NFV orchestrator (NFVO), virtual network function manager (VNFM) and virtual infrastructure manager (VIM).
Conceptually, the orchestrator defines the service in terms of the number of virtualized network functions needed. The orchestrator then works through one or more virtual function managers to create and manage those functions. The VNFM in turn relies on one or more virtual infrastructure managers (such as OpenStack, vCloud Suite or CloudStack) to assign resources to VNF components and place them as needed in the infrastructure. Conceptually, these are three different components; in practice, tools sometimes seek to supply two or even all three classes of function -- whether reaching down from the top (e.g., Alcatel-Lucent extending from NFVO into VNFM) or up from the bottom (e.g., Wind River extending from VIM to VNFM).
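The delegation chain just described -- orchestrator to VNF manager to infrastructure manager -- can be sketched in a few lines of Python. This is a conceptual illustration of the layering only, with hypothetical class and method names; real NFVO/VNFM/VIM products (and VIMs like OpenStack) expose far richer interfaces.

```python
class VIM:
    """Virtual infrastructure manager: assigns resources and places
    VNF components (stand-in for OpenStack, vCloud Suite, CloudStack)."""
    def __init__(self, capacity_vcpus):
        self.capacity_vcpus = capacity_vcpus
        self.allocated = {}  # vnf name -> vcpus assigned

    def allocate(self, vnf_name, vcpus):
        used = sum(self.allocated.values())
        if used + vcpus > self.capacity_vcpus:
            raise RuntimeError(f"no capacity left for {vnf_name}")
        self.allocated[vnf_name] = vcpus

class VNFM:
    """VNF manager: creates and manages functions, relying on a VIM
    for resource assignment and placement."""
    def __init__(self, vim):
        self.vim = vim
        self.vnfs = []

    def create_vnf(self, name, vcpus):
        self.vim.allocate(name, vcpus)
        self.vnfs.append(name)
        return name

class NFVO:
    """Orchestrator: defines the service as the set of VNFs needed,
    then works through the VNFM to create them."""
    def __init__(self, vnfm):
        self.vnfm = vnfm

    def deploy_service(self, functions):
        # functions: list of (name, vcpus) pairs describing the service
        return [self.vnfm.create_vnf(n, c) for n, c in functions]

vim = VIM(capacity_vcpus=8)
nfvo = NFVO(VNFM(vim))
deployed = nfvo.deploy_service([("firewall", 2), ("router", 2), ("lb", 1)])
```

Each layer only talks to the one below it, which is also why, as noted above, vendors can plausibly extend a product up or down one layer without replacing the whole stack.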
NFV is conceptually related to software-defined networking (SDN); the two share the goal of making the network software-controlled, automated and functionally independent of specialized hardware. NFV, however, does not require SDN's foundational separation of the control plane and data plane (nor does it forbid it). NFV and SDN can therefore be pursued separately or together.
For the enterprise data center, NFV may first arrive in the form of existing, pre-standard, vendor-centric offerings such as the one offered by Embrane. Or it may come through efforts to build VNFM and NFVO functions atop OpenStack. However it arrives, NFV -- because it supplies network services via familiar commodity x86 host hardware in chunks analogous to existing appliances -- may make it into data centers more quickly and easily than SDN.
John Burke is a principal research analyst with Nemertes Research where he advises key enterprise and vendor clients, conducts and analyzes primary research, and writes thought-leadership pieces across a wide variety of topics. John's expertise lies within the realm of virtual networks and software-defined networking (SDN) technologies, standards and implementations.