I/O virtualization (IOV), or input/output virtualization, is technology that uses software to abstract upper-layer protocols from physical connections or physical transports. The technique takes a single physical component and presents it to hosts as multiple logical components. Because it separates logical from physical resources, IOV is considered an enabling data center technology that aggregates computing, networking and storage infrastructure into a shared pool.
Recent Peripheral Component Interconnect Express (PCIe) virtualization standards include single-root I/O virtualization (SR-IOV) and multi-root I/O virtualization (MR-IOV). SR-IOV carves a single PCIe device into multiple logical partitions, called virtual functions, that guests can access simultaneously. MR-IOV devices reside externally from the host and are shared across multiple hardware domains.
How I/O virtualization works
In I/O virtualization, a virtual device is substituted for its physical equivalent, such as a network interface card (NIC) or host bus adapter (HBA). Aside from simplifying server configurations, this setup lowers costs by reducing the electric power those devices draw.
Virtualization and blade server technologies cram dense computing power into a small form factor. With the advent of virtualization, data centers started using commodity hardware to support functions such as burst computing, load balancing and multi-tenant networked storage.
I/O virtualization is based on a one-to-many approach. The path between a physical server and nearby peripherals is virtualized, allowing a single IT resource to be shared among virtual machines (VMs). The virtualized devices interoperate with commonly used applications, operating systems and hypervisors.
This technique can be applied to any server component, including disk-based RAID controllers, Ethernet NICs, Fibre Channel HBAs, graphics cards and internally mounted solid-state drives (SSDs). For example, a single physical NIC is presented as multiple virtual NICs.
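On a Linux host with an SR-IOV-capable NIC, this one-to-many split can be seen directly through the kernel's sysfs interface. The sketch below assumes a hypothetical physical NIC named enp3s0f0, a driver with SR-IOV support and root privileges; the sysfs attribute names are standard Linux kernel interfaces.

```shell
# Sketch: carving one physical NIC into multiple virtual NICs via SR-IOV.
# Assumes an SR-IOV-capable NIC named enp3s0f0 (hypothetical name) and
# a kernel/driver with SR-IOV support; run as root.

# How many virtual functions (VFs) does the device support?
cat /sys/class/net/enp3s0f0/device/sriov_totalvfs

# Present the single physical NIC as four virtual NICs
echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs

# Each VF now appears as its own PCIe network device, ready to be
# assigned to a virtual machine
lspci | grep -i "virtual function"
```

Each virtual function enumerates as an independent PCIe device, which is what lets a hypervisor hand one directly to a VM without mediating every packet.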
I/O virtualization pros and cons
IOV uses emulation and component consolidation to boost utilization of I/O adapters and related server-storage infrastructure. Architecturally, I/O virtualization removes network adapters from the host server and places them in a switching box. Abstracting resources in this manner provides more flexibility through faster provisioning and increased utilization of the underlying physical infrastructure.
Using I/O virtualization, an IT administrator is able to spin up a large number of VMs on an individual server, which reduces the need for new hardware. Thousands of VMs could be deployed in a larger server cluster.
Other benefits include independently adding or removing servers from the cluster and running multiple operating systems (OSes) on a host machine. IOV has implications for server consolidation, since network adapters no longer reside in the server, enabling customers to move from 2U to 1U commodity hardware.
IOV enables enterprises to:
- Use existing cabling and peripheral components
- Improve server performance by using idle mezzanine slots
- Attach a single cable interconnect to support networking and storage I/O
- Reduce the cost of data center cooling, heating and power
- Scale for rapid redeployment as I/O profiles change
Despite such benefits, the increased processing power per square inch has created a different input/output problem: The pipes don't have sufficient bandwidth to move data fast enough to keep pace with modern processors.
To address that problem, the industry consortium PCI Special Interest Group (PCI-SIG) is advancing standards to fully utilize the capacity of PCIe flash storage devices, based on higher-speed Fibre Channel and Fibre Channel over Ethernet (FCoE) networking fabrics.
I/O virtualization industry specifications evolve
PCI-SIG standards govern how PCI-based devices may be shared by multiple OSes running simultaneously on a given server. The group has published two I/O virtualization specifications -- SR-IOV and MR-IOV -- to drive standardization, although vendors also develop products independently of the standards.
The SR-IOV standard specifies how multiple guests on a single server with a single PCIe controller, or root complex, share I/O devices without requiring the hypervisor to sit in the main data path.
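Whether a given PCIe device implements the SR-IOV specification can be checked from its extended capability list. The sketch below uses the standard lspci tool; the PCIe address 03:00.0 is a hypothetical example and will differ per system.

```shell
# Sketch: checking a PCIe device for the SR-IOV extended capability.
# The address 03:00.0 is hypothetical; find real addresses with `lspci`.
# SR-IOV-capable devices advertise the capability in their config space.
sudo lspci -vvv -s 03:00.0 | grep -A 3 "Single Root I/O Virtualization"
```

If the device supports the standard, the output includes the SR-IOV capability block, listing details such as the total number of virtual functions the hardware can expose.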
MR-IOV extends the concept to allow multiple independent systems with separate PCIe roots to connect to I/O devices through a switch. A multi-root switching complex shares OS images across clustered servers, and blade servers are usually recommended when deploying MR-IOV. Both SR-IOV and MR-IOV require specific support from the I/O cards themselves.
The general I/O virtualization approach that most current products take is to connect the local host servers into a top-of-rack unit that holds a variety of network, storage and graphics adapters that can act as a dynamic pool of I/O connectivity resources. The top-of-rack device acts as an I/O fabric for the servers in the rack and can communicate with other servers in the rack or connect to end-of-row switches for more distant resources.
An I/O gateway is a hardware device that consolidates multiple I/O cards in a single unit. Hosts and servers are able to access the shared functionality contained in the I/O gateway.
See also: converged network adapter (CNA)