Dynamic infrastructure is a collection of data center resources, such as compute, networking and storage, that can automatically provision and adjust itself as workload demands change. IT administrators can also choose to manage these resources manually.
Dynamic infrastructure relies primarily on software to identify, virtualize, classify and track data center resources. These resources are grouped into pools, regardless of their physical location within one or multiple data centers. By classifying data center resources, IT teams can establish and monitor multiple service tiers to ensure more demanding workloads receive more compute and storage resources.
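The pooling and tiering described above can be sketched in a few lines of code. This is an illustrative model only: the resource kinds, tier names and priority thresholds are assumptions for the example, not any vendor's actual API.

```python
# Hypothetical sketch of pooling and classifying data center resources.
# Tier names ("gold"/"silver"/"bronze") and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    kind: str        # e.g. "compute" or "storage"
    capacity: int    # e.g. vCPUs or GB
    location: str    # physical data center; irrelevant to pool membership

def build_pools(resources):
    """Group resources into pools by kind, regardless of physical location."""
    pools = {}
    for r in resources:
        pools.setdefault(r.kind, []).append(r)
    return pools

def classify_tier(workload_priority):
    """Map a workload's priority to a service tier (illustrative thresholds)."""
    if workload_priority >= 8:
        return "gold"    # most demanding workloads get the most resources
    if workload_priority >= 4:
        return "silver"
    return "bronze"

pools = build_pools([
    Resource("vm-01", "compute", 16, "dc-east"),
    Resource("vm-02", "compute", 8, "dc-west"),
    Resource("san-01", "storage", 500, "dc-east"),
])
```

Note that `build_pools` groups the two compute resources together even though they sit in different data centers, which mirrors the location-independent pooling the article describes.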
In most cases, the software used in dynamic infrastructures can automatically allocate resources from the appropriate pools to meet workload demands. The software adds resources when workload demands increase, and then returns resources to the pool when demands decrease – a process known as workload balancing.
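The workload-balancing cycle can be modeled as a simple pool that hands out resources when demand rises and reclaims them when demand falls. The capacities and demand figures below are hypothetical; real dynamic-infrastructure software tracks far more state than this sketch.

```python
# Minimal sketch of workload balancing: resources leave the shared pool as
# demand increases and return to it when demand decreases. All numbers are
# illustrative assumptions.

class ResourcePool:
    def __init__(self, capacity):
        self.free = capacity  # unallocated units remaining in the pool

    def allocate(self, units):
        """Grant up to `units` from the pool as workload demand increases."""
        granted = min(units, self.free)
        self.free -= granted
        return granted

    def release(self, units):
        """Return units to the pool when workload demand decreases."""
        self.free += units

pool = ResourcePool(capacity=100)
workload = 0

# Demand increases: the software pulls resources from the pool.
workload += pool.allocate(30)

# Demand decreases: unneeded resources go back to the pool.
pool.release(10)
workload -= 10
```

Capping `allocate` at `self.free` reflects the practical limit of any pool: once it is exhausted, a workload cannot grow further until resources are released or new hardware is added.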
Dynamic infrastructure helps align IT use with business policies. For example, a critical workload can retain more resources longer to ensure top performance, while less-important business applications can use fewer resources or release unneeded resources sooner. Such behaviors help maximize resource use and reduce the need for new IT purchases.
Although dynamic infrastructure can work with any data center hardware, it is often deployed on highly integrated, expandable hardware systems known as hyper-converged infrastructure (HCI). These are typically appliances that combine compute, storage and network capabilities in a single unit. Examples of HCI systems include VMware EVO:RAIL, Nutanix Acropolis and SimpliVity OmniCube.
The evolving combination of software and hardware has given rise to the software-defined data center (SDDC). Similarly, the abilities to autonomously provision resources for new workloads, scale resources to meet changing demands and recover unused resources are important attributes of cloud computing.