The industry is still defining the basic principles of GitOps, but some enterprise DevOps platform teams have begun to establish operational best practices to support it in production.
GitOps, according to a document released this week by a newly formed Cloud Native Computing Foundation GitOps Working Group, isn't just the use of Git to store application and infrastructure data. Rather, it's a set of workflows that define the desired state of a system within a versioned code repository, and then continuously keep a running production system in conformance with that desired state.
This is also what's known as a reconciliation loop, and is the most crucial principle of GitOps, working group members said.
"The defining thing about GitOps versus doing something with webhooks or CI triggering something to happen is that reconciliation loop," said Christian Hernandez, principal senior technical marketing manager at Red Hat, in a presentation at KubeCon this week. "Something is always checking [for updates]."
While GitOps is still in its infancy within the industry, analysts said there are signs it's growing, buoyed by the rise of edge computing, the ubiquity of Kubernetes and widespread enterprise adoption of DevOps principles that are prerequisites for GitOps.
"GitOps goes hand in hand with DevOps and also embodies that cross-discipline collaboration between software developers and IT operators," said Jay Lyman, an analyst at 451 Research, a division of S&P Global. "I heard GitOps described as pull requests for operations, and I think it’s an accurate way to describe how a Git-like process ... can be useful to IT operations teams."
The rise of DevOps platforms, and GitOps as a means of separating developer and platform operator responsibilities within those platforms, motivated one Microsoft Azure rep to get involved with the GitOps Working Group.
"Your commit to your repo becomes, basically, your operation as an application developer ... and you don't worry about the actual CD into the cluster itself," said Chris Sanders, a program manager for Azure Automation at Microsoft. "You worry about your CI part, and I think that's huge, because it takes a big part of the [risk] out of places where [devs] can cause problems if we go in there and try to do the CD part ourselves."
IT pros share lessons learned with 'Day 2' GitOps
As GitOps popularity grows, enterprise platform teams must adjust observability and IT governance practices to accommodate it.
GitOps at insurance company State Farm, based in Bloomington, Ill., began with a three-person platform team formed in 2019. This team set up GitOps pipeline templates and a developer onboarding interface for the company's 7,000 developers across three separate IT environments -- Amazon EKS, on-premises Kubernetes and Cloud Foundry.
The team went through a process of trial and error establishing its workflow as the platform grew, according to presenters at this week's GitOpsCon.
"We were getting really swamped -- it was really hard for us to manage keeping up with new tasks and also trying to answer support questions all the time," said Priyanka Ravi, a software developer at State Farm and one of the original GitOps platform team members. "So we came up with a system where we have a weekly support rotation and one of us will be on call, monitoring a GitOps channel in Rocket.Chat."
The GitOps team members also learned, after a platform outage in which they weren't included in monitoring alerts, that they should work more closely with the Kubernetes management team, said Mae Large, architecture manager at State Farm.
"This was an aspect of maturity for us, to actually drain a lot of those logs and risk events that are emanating from the Flux system … and getting alerted when things aren't behaving the way we expect," Large said. "Over time, we've gotten better with metrics [through] Prometheus."
The observability dashboards the GitOps team created from that process have prompted more usage of the GitOps platform, Large said.
"Managers were really excited about the transparency -- 'I can actually see the files change, the actual lines of code change to realize this feature,'" she said. "That empowers them and ... gives them better confidence that this is good to go to production."
The State Farm GitOps team also designed mechanisms to give risk management and compliance teams visibility into the platform as it matured.
"We have a handful of [scripts] that run on a scheduled pipeline, [and] one of those ... we call affectionately the Enforcer," Ravi said. "That one utilizes Terraform Enterprise and runs nightly to make sure that [deployments are] all still meeting compliance standards that were set forth."
GitOps security guidelines emerge
A common problem with GitOps platform administration lies in IT security, specifically secrets management. Most GitOps users rely on a secrets management system such as HashiCorp's Vault, since general security best practices require that secrets -- data such as passwords and other system access credentials -- not be exposed in code repositories.
This runs counter to a purist definition of GitOps, in which the Git repository fully mirrors the production environment, said Microsoft's Sanders.
"[With] a management system like Vault ... [you're] not storing even the encrypted version [of secrets in Git], you're storing the reference," he said. "So then, what's the source of truth there? There's a lot of things that I think are still up for discussion."
Still, the most experienced GitOps users, such as financial services software maker Intuit, which created Argo CD, were able to share some IT security guidelines with GitOpsCon attendees this week.
As with non-GitOps environments, defense in depth is a GitOps security best practice, but must be tailored to the GitOps environment, said Todd Ekenstam, principal software engineer at Intuit, in a GitOpsCon presentation.
"The CI/CD pipeline is defining your policies and standards for deploying to production -- this is how you enforce your engineering process," he said. "As part of that, you really want to protect it. You don't want that pipeline to be bypassed or compromised."
This means maintaining strict access control for the pipeline using short-lived credentials, Ekenstam said. Similarly, GitOps controllers such as Argo CD or Flux must operate according to the principle of least privilege, with the same Kubernetes cluster permissions as developers, and be subject to their own audit log process.
At the infrastructure level, GitOps administrators must secure container registries with scanning tools such as Prisma Cloud or Aqua, and configure registries so that container image tags can't be modified by a malicious actor, Ekenstam said. The Git repository where code is stored should use branch protection rules that require code reviews before changes are merged into the master branch and automatically deployed to production. Finally, the production Kubernetes cluster must be hardened using policy-as-code tools such as OPA to block potentially insecure container images from being deployed.
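OPA policies are written in its Rego language, but the gatekeeping logic Ekenstam describes -- reject images from untrusted registries or with mutable tags -- reduces to a check like the following Python sketch. The registry allowlist and the admission rules are assumptions for illustration, not a real admission controller:

```python
ALLOWED_REGISTRIES = {"registry.internal", "gcr.io"}  # invented allowlist

def admit_image(image):
    """Mimic an admission policy: allow only pinned images from trusted registries."""
    registry, _, rest = image.partition("/")
    if registry not in ALLOWED_REGISTRIES:
        return False, "untrusted registry"
    # A digest pin (@sha256:...) is immutable; a bare name or :latest tag is not.
    if "@sha256:" not in image and (":" not in rest or rest.endswith(":latest")):
        return False, "image tag is mutable or missing"
    return True, "admitted"
```

In a real cluster this decision runs inside an admission webhook such as OPA Gatekeeper, so that even a change that slips past the pipeline is still blocked at deploy time -- the defense-in-depth point Ekenstam makes above.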
This amounts to a long list of considerations for GitOps security, but there can be inherent security benefits to the GitOps approach, too, Ekenstam said.
"You have the opportunity to have code reviews and approvals on changes ... a second pair of eyes and an audit trail as part of your normal [deployment process]," he said.
Beth Pariseau, senior news writer at TechTarget, is an award-winning veteran of IT journalism. She can be reached at [email protected] or on Twitter @PariseauTT.