Terraform best practices that aid DevOps infrastructure builds
Create the right configuration for any DevOps setup, and then fluidly replicate and scale it as necessary with infrastructure-as-code tools and strong management policies.
Cloud and DevOps shops are experimenting with the infrastructure creation capabilities of Terraform. Best practices from the start ensure that the infrastructure-as-code tool is scalable and supported throughout its use.
HashiCorp Terraform builds infrastructure as code that is easy to deploy, repeatable and predictable. Terraform makes it possible to provision a copy of the network environment as it exists -- rather than an approximation or mock-up -- for a team member, such as a developer or tester.
Administrators who experiment with the IaC tool should learn Terraform features and capabilities on a small scale, then apply best practices to deploy it more widely in a streamlined and hassle-free manner. As you learn Terraform, apply these general and tool-specific practices to avoid failure and confusion at scale. They make the infrastructure provisioning process more efficient, effective and secure.
Organize Terraform variables today, not later
Administrators must plan how they will use Terraform code. Terraform, like most platforms, understands variables. Administrators can fall into the bad habit of throwing a variable into the current Terraform file, with the intention to clean it up and fix issues at a later date. Terraform best practices dictate that the user place variables in an external file. There's a range of variable specifics available in Terraform's documentation.
To properly implement variables, use TFVARS files, and specify them on the Terraform command line with the appropriate variable assignments. In particular, keep API keys and private keys separate from other code and secure.
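As a minimal sketch of this separation, variable declarations can live in their own file while the values -- especially secrets -- are assigned in a TFVARS file that stays out of source control. The file and variable names below are illustrative, and the `sensitive` argument assumes Terraform 0.14 or later:

```hcl
# variables.tf -- declarations live in their own file
variable "region" {
  type    = string
  default = "us-east-1"
}

variable "api_key" {
  type      = string
  sensitive = true # keeps the value out of plan and apply output
}

# secrets.tfvars -- a separate file, kept out of source control
# api_key = "value-supplied-at-deploy-time"
```

The values file is then passed explicitly at deploy time, for example `terraform apply -var-file="secrets.tfvars"`.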
The code is cleaner with variable management, and the administrator and developers have an organized repository of their Terraform variables.
Attention to this best practice enables simpler versioning -- a staple DevOps practice -- when using Terraform. Users can easily deploy different instance configurations, such as a set of files for quality assurance (QA), one for preproduction staging and one for production. These files are all versions of the same code with different variables.
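As an illustrative sketch of that pattern, each environment gets its own TFVARS file that assigns different values to the same variables; the file names and values here are hypothetical:

```hcl
# qa.tfvars -- small footprint for testing
instance_type  = "t3.micro"
instance_count = 1

# prod.tfvars -- production-sized resources, same variables
# instance_type  = "m5.large"
# instance_count = 3
```

Deploying the QA variant is then a matter of selecting the right file, such as `terraform apply -var-file="qa.tfvars"`, with the underlying configuration unchanged.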
Protect privacy and control access
Variable management prevents Terraform users from accidentally disclosing API credentials in the source code file. It's just one of the Terraform best practices that ensure security and stability.
Terraform is only as secure as the administrator running it and the security policies enforced in IT operations. Keep private data in files with the appropriate security policies applied. If key credentials are uploaded to public cloud, use identity and access management policies, and turn on logging.
A DevOps environment always carries the danger of a team member overwriting the environment that you are currently deploying. Overwrites have undesirable consequences, from the awkward to the cataclysmic. To prevent disaster, use an appropriate access control mechanism, such as Amazon DynamoDB, as a lock table. It prevents two administrators from deploying or modifying the same code at the same time. Update management is not only a Terraform best practice for application stability; more broadly, it is necessary when scaling DevOps tasks.
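One common way to get this lock is Terraform's S3 backend paired with a DynamoDB lock table. A minimal sketch follows, assuming the bucket and the table (with a `LockID` partition key) already exist; the bucket and table names are illustrative:

```hcl
# Remote state in S3 with state locking through DynamoDB.
terraform {
  backend "s3" {
    bucket         = "example-terraform-state" # illustrative bucket name
    key            = "network/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # table with a LockID partition key
    encrypt        = true
  }
}
```

With this in place, a second `terraform apply` against the same state blocks until the first releases the lock, so two administrators cannot modify the same environment simultaneously.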
Rely on Terraform modules judiciously
Terraform modules are self-contained configuration packages, prebuilt or created by the user. Administrators who learn Terraform might be tempted to rely on modules to excess. Modules are a good thing, but you can have too much of a good thing. The more modules you add to an infrastructure build, the slower and more complex the code, the more time it takes to execute and the more difficult it becomes to debug. Modules represent a balancing act for the administrator.
Modules must follow correct I/O methods, exposing the internal API rather than an alternative, less robust method of processing I/O. Deviating from Terraform best practices with modules can bite back when least expected. For example, a version upgrade of the tool could deprecate the I/O method in use.
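In practice, that means calling a module only through its declared input variables and consuming only its declared outputs, never reaching into its internal resources. A sketch, with hypothetical module and variable names:

```hcl
# Call the module through its declared inputs.
module "network" {
  source     = "./modules/network" # illustrative local module
  cidr_block = var.vpc_cidr
}

# Downstream code consumes the module's declared outputs,
# never its internal resource addresses directly.
resource "aws_subnet" "app" {
  vpc_id     = module.network.vpc_id
  cidr_block = "10.0.1.0/24"
}
```

Because only the input/output contract is used, the module's internals can change across versions without breaking callers.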
Terraform isn't limited to modules. Advanced users deploy a binary (executable) file to gather external facts. This capability opens up extensions. Learn to create extensions, and you can gather data for which there is no current Terraform module.
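Terraform's `external` data source is one way to wire in such an executable: the program must print a JSON object of string values on stdout, which Terraform then exposes as a map. A sketch, where the script path is hypothetical:

```hcl
# Run an external program to gather facts Terraform has no provider for.
data "external" "facts" {
  program = ["./scripts/gather_facts.sh"] # hypothetical script; must emit a JSON object of strings
}

# Values come back as a map of strings under .result.
output "datacenter" {
  value = data.external.facts.result["datacenter"]
}
```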
Back up the system state remotely
Backups are critical, yet many administrators fail to keep a copy of the system state stored remotely. It can reside in AWS Simple Storage Service (S3), Microsoft Azure Blob storage or elsewhere. Backups are not difficult, and there is no reason not to do them.
Version control every project
Terraform files should be version-controlled using a source code management platform, such as Git. Version control might seem like overkill for tiny projects, but it's better to get in the practice from the start. As the projects grow larger, the team can fork, roll back and merge them as needed. This kind of flexibility is essential in complex, multi-administrator environments. It is also possible to store modules in Git so that the correct build version is pulled into a project on demand.
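Terraform supports Git module sources directly, and a `ref` query parameter pins the module to a tag or branch so the correct build version is fetched on demand. A sketch, with an illustrative repository URL and tag:

```hcl
# Pin a module to a specific Git tag so every build pulls the same version.
module "network" {
  source = "git::https://example.com/org/terraform-network.git?ref=v1.2.0" # illustrative repo and tag
}
```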
Some administrators also recommend using one directory per project. All the configuration files, modules and other required files are stored within the working folder. There is no need to traverse across long file system paths to pull down files. Complement this best practice by creating different TFVARS files for dev, production and QA, as mentioned earlier.
There are many ways to optimize implementation. As you learn and use Terraform more, always envision how a current practice will affect the way builds get composed in the future.