Good IT professionals update, patch, configure, adjust and adapt workloads, striving to preserve application health against myriad issues with compatibility, scale and errors -- and every kind of unintended consequence. Immutable infrastructure flips this decades-old methodology upside down.
With cloud computing, users access a dynamic, highly scalable environment to provision infrastructure and services on demand with little more than an API call. It is easier and faster to spin up a new set of services and workload components than it is to patch and reconfigure what's there. The duty to craft and maintain servers over time simply goes away.
This is the immutable infrastructure reality that developers want -- whether or not IT professionals are ready for it. Immutable infrastructure usually starts with developers who create and maintain the code base used for a business workload.
Deploy immutable code
The immutable build process works best with newer development paradigms that create applications from maintainable, reusable and tightly version-controlled components and services. Organizations need a high level of private cloud capability to institute automation and orchestration locally; otherwise, they can apply this process to public clouds, such as Microsoft Azure, Amazon Web Services (AWS) and Google Cloud Platform.
Organizations make extensive use of automation and orchestration tools to define a new instance -- such as a container, VM or machine image -- and then provision and build it.
Staffers create the scripts to build the instance, as well as templates to define the build components. All of this code and the automation scripts required to implement it get stored in a version-controlled source repository. A repository can be local or a major cloud-based service, such as GitHub.
The repository provides the single source of content for a continuous integration engine that builds, tests and packages each instance. The completed instance awaits deployment in a registry, such as Docker Hub. Organizations also rely on update scripts to define how old instances or services are retired.
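One common convention behind this pipeline -- sketched here with hypothetical registry and repository names -- is to tag every packaged artifact with an immutable, unique identifier derived from the version-controlled source, so a given tag in the registry always refers to exactly one build:

```python
def image_tag(repo: str, version: str, commit: str) -> str:
    """Compose an immutable image reference: one tag per exact build.

    The short commit hash ties the artifact back to its revision in
    the source repository; the tag is never reused or moved.
    """
    return f"{repo}:{version}-{commit[:7]}"

# Hypothetical values -- not tied to any real registry or project.
tag = image_tag("registry.example.com/shop/web",
                "1.4.2",
                "9fceb02d0ae598e95dc970b74767f19372d61af8")
# -> "registry.example.com/shop/web:1.4.2-9fceb02"
```

Because the tag encodes the exact source revision, rolling back means redeploying an older tag rather than editing a running instance.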
A deployment engine uses the information contained in the templates and scripts to provision resources; install and configure the code components and dependencies; connect load balancers and monitoring or other associated services; and set up storage and databases. Users can detail manual processes -- for example, the steps to shut down and reclaim old components -- in an update script. Public cloud vendors offer tools for organizations to deploy new code, such as AWS CloudFormation.
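The ordering a deployment engine enforces can be sketched in a few lines of Python. This is a toy illustration -- the field and function names are invented, not any vendor's API -- but it shows the principle that every step comes from the version-controlled template, in a fixed sequence:

```python
def deploy(template: dict) -> list[str]:
    """Execute the deployment steps a template describes, in order.

    Each step is recorded so the rollout is auditable; a real engine
    would call cloud provider APIs instead of returning strings.
    """
    return [
        f"provision {template['instance_type']} x{template['count']}",
        f"install components: {', '.join(template['components'])}",
        f"attach load balancer {template['load_balancer']}",
        f"attach storage {template['storage']}",
    ]

# Illustrative template -- every value would live in version control.
plan = deploy({
    "instance_type": "t3.medium",
    "count": 2,
    "components": ["web", "api"],
    "load_balancer": "lb-frontend",
    "storage": "vol-data",
})
```

Because the engine only reads the template, two runs with the same template produce identical deployments -- the repeatability that immutable infrastructure depends on.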
How to patch an application on immutable infrastructure
Simply put: Don't. The principal concept of immutable infrastructure is to enforce replacement rather than change.
A traditional application has dependencies on other files, services and workloads, and vice versa. When that application updates, IT must patch the changed files, adjust the configuration of the application, monitor performance and then troubleshoot any uncooperative dependencies. It's a recipe for unexpected downtime, rollbacks, user dissatisfaction and revenue disruption. Ask any IT professional about the stress levels experienced during a major application upgrade.
Never modify currently deployed instances, services or infrastructure in an immutable setup. Even with established change management frameworks and tools, such as desired state configuration management, change can be exceedingly difficult. In practice, administrators implement changes manually, introduce errors, open unforeseen security vulnerabilities or struggle to document successful changes accurately.
With immutable infrastructure, each workload is built with all of its components and dependencies and deployed to a single, clearly defined and independent platform of resources -- there is no deviation or change to that deployment. It is still monitored for performance, user experience and other key performance indicators. When a change comes, the team builds an entirely new workload instance with all of the new and changed components and dependencies. The old instance runs until the team is sure they do not need to roll back to the previous version.
Every patch or update is a new instance or deployment, which resides on newly provisioned resources. On the surface, this seems like a lot of extra work. Applications are complex entities with substantial infrastructure needs. Repeating all of that work for every change can seem like overkill. But immutable infrastructure is not intended for manual processes. It relies on copious documentation to define the precise components and steps needed to create a workload instance -- translated into scripts so that the entire process is automated.
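The replace-not-patch cycle amounts to a simple cutover: build the new instance completely, switch traffic to it, and keep the old instance available until the rollback window closes. A minimal sketch, with illustrative names throughout:

```python
def release(live: dict, service: str, new_instance: str):
    """Cut a service over to a freshly built instance.

    Returns the previous instance, which stays running until the
    team is confident no rollback to it will be needed.
    """
    old = live.get(service)
    live[service] = new_instance   # traffic now hits the new build only
    return old

live = {"web": "web-1.4.1-9fceb02"}
previous = release(live, "web", "web-1.4.2-2da41c5")
# previous is "web-1.4.1-9fceb02"; retire it once rollback is ruled out
```

A real cutover would happen at the load balancer or service registry, but the shape is the same: the running instance is never edited, only swapped.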
Unchanged does not mean unsupervised
Immutable infrastructure does not reduce or eliminate the need to monitor and manage IT resources. Instead, it places a strong emphasis on automation, orchestration and monitoring to drive essential tasks, such as resource provisioning and workload deployment.
One major challenge of immutable infrastructure is how to automate resource startups, also known as bootstrapping. Service or environment discovery enables IT teams to assess and catalog available resources. Scripts and automation tools provision those resources as needed, load the requisite code components from repositories and then configure networking and other attributes to fit the anticipated needs of the overall deployment. Change management tools that enforce desired states should no longer be necessary, but they can still help to identify unexpected deviations in the configuration.
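The drift check described above can be reduced to a diff between the desired state recorded in version control and the configuration actually running. A toy comparison -- not any particular tool's API -- looks like this:

```python
def detect_drift(desired: dict, actual: dict) -> dict:
    """Report every setting whose running value differs from the
    desired state, as {key: (desired_value, actual_value)}."""
    return {
        key: (desired.get(key), actual.get(key))
        for key in desired.keys() | actual.keys()
        if desired.get(key) != actual.get(key)
    }

# Illustrative configs: the instance was hand-edited after deployment.
drift = detect_drift(
    {"port": 443, "tls": True, "workers": 4},
    {"port": 443, "tls": False, "workers": 4},
)
# drift == {"tls": (True, False)}
```

In an immutable setup, a nonempty result triggers replacement of the deviant instance rather than an in-place correction.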
Immutable infrastructures often are refreshed frequently or run for relatively short periods, unlike traditional servers that host workloads that are online for years. The longer an instance runs in the data center environment, the higher the chance of an unanticipated configuration change. Such drift rarely crashes an instance or renders a workload unavailable, but it is likely to impair performance or return errors.
An immutable infrastructure should support automatic application scaling as traffic demands change. To achieve resilience and allow for failures without application disruption, deploy instances in clusters behind load balancers. Automation processes can scale instances in response to traffic. An application performance management tool monitors objective measures of the workload's operation. For example, is the workload processing an acceptable number of database transactions per second? Monitoring results can then spawn new instances or spin down existing ones. Errors, such as unresponsive instances, can precipitate automated responses or signal for administrators to intervene.
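A minimal version of that scaling decision -- thresholds and names here are illustrative -- converts the measured transaction rate into a desired instance count, clamped between a resilience floor and a cost ceiling:

```python
import math

def desired_count(tps: float, tps_per_instance: float,
                  min_n: int = 2, max_n: int = 10) -> int:
    """How many instances the measured load calls for.

    min_n keeps a resilient floor behind the load balancer; max_n
    caps cost. Scaling up means provisioning fresh instances from
    the template, never modifying running ones.
    """
    needed = math.ceil(tps / tps_per_instance)
    return max(min_n, min(max_n, needed))

desired_count(950, 200)   # 5 instances for ~950 transactions/sec
desired_count(50, 200)    # 2 -- never below the resilience floor
```

An orchestrator would compare this target against the live count and provision or retire instances to close the gap.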
The same general approaches used for scaling can be brought to bear on resilience. An immutable infrastructure embraces the concept of failure as normal: rather than invest heavily in preventing every failure in apps or hardware, design deployments to tolerate failures. A workload deployment can include clusters and load balancing across multiple physical systems or locations. Public cloud features, such as AWS Availability Zones and Azure availability sets, implement this type of workload resilience capability.
It's important for potential adopters to understand that immutable infrastructure is not a reduction or abdication of control over IT. It's merely a new approach to deploy workloads and provision infrastructure. When this complex process is approached properly, an administrator simply opts to run an instance, and it's hands-off from there. When a change occurs, a new instance takes over.
While immutable infrastructure usually is a compute discussion, the concept applies to other routine management tasks, including those for network services, identity and access management, Active Directory management and encryption. The scope of tasks remains, but the goal is to automate and orchestrate as many tasks as possible to provide predictable, repeatable results with fewer errors and better security.