In the first half of 2018, cybersecurity vendor Lacework Inc. disclosed that its researchers had found more than 21,000 publicly searchable container orchestration systems on the internet. Several hundred of these systems required no credentials at all and were completely open to unauthorized parties.
While the vast majority of these container orchestration platforms had some level of access control, Lacework warned that being directly exposed on the public internet still carried significant risk. Kubernetes, Docker Swarm, Mesosphere, OpenShift and many other container orchestration platforms were found, indicating that developers and operations teams using most of the major platforms were in violation of core best practices in information security architecture and access control.
The first question we need to answer is simple -- what could go wrong?
The risks of container orchestration platforms
Most container orchestration platforms have a number of exposed interfaces, some of which are designed for programmatic access via APIs, and others that are more traditional, web-based administrative or deployment consoles. There are significant risks to any of these falling into the wrong hands.
First, there's always the risk of data exposure. Container orchestration APIs may accept or transmit infrastructure details, source code repository information, container build configurations and much more. By exposing these APIs directly to the internet, organizations could be permitting unauthorized access.
Second, administrative and deployment consoles could enable an attacker to perform a wide range of malicious actions. These might include harvesting credentials and keys that are stored or are accessible through these systems, modifying deployment configurations or planting backdoor access into containers, revoking legitimate access from users and admins, or simply deploying containers for illicit gains, such as cryptocurrency mining, password cracking or spamming.
Why expose these systems to the internet?
At the heart of the issue is that there is no good reason for any of these systems to be directly exposed to the internet at all. This usually happens out of convenience: developers and admins skip the additional steps needed to securely deploy a production architecture in the cloud -- or on premises, for that matter.
Critical systems that provide privileged access and control over the deployment of computing assets should never be directly accessible on the internet without additional access controls. For most security professionals, a bare minimum level of access control would include a separate reverse proxy or jump host -- often called a bastion host -- that must be accessed first. This server would be heavily locked down, allowing only SSH access with key-based authentication or a similarly strong control. From this system, admins could then authenticate to their container orchestration platforms.
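As a rough sketch, a bastion host's SSH daemon configuration might enforce key-only access along these lines -- the account name is a placeholder, and the specific values should be tuned to each environment:

```
# /etc/ssh/sshd_config on the bastion host (illustrative settings)
PasswordAuthentication no    # key-based authentication only
PubkeyAuthentication yes
PermitRootLogin no           # no direct root logins
AllowUsers deploy-admin      # hypothetical admin account; restrict who may connect
MaxAuthTries 3               # limit brute-force attempts per connection
LoginGraceTime 30            # drop stalled authentication attempts quickly
```

From a host configured this way, admins would then hop onward to the orchestration platform's console or API.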
In addition to having a jump host, strong authentication controls should be implemented, including key-based remote access for SSH, if applicable, and multifactor authentication using certificates or changing token codes and complex passwords. All the major orchestration platforms also support some degree of role-based access, so applying a least privilege approach is also prudent.
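On Kubernetes, for example, a least privilege approach can be expressed with the platform's role-based access control objects. The sketch below grants one user read-only access to pods in a single namespace; the role, namespace and user names are purely illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production      # hypothetical namespace
  name: pod-reader           # hypothetical role name
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only; no create/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: production
subjects:
- kind: User
  name: jane                 # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The other major orchestration platforms offer comparable constructs for scoping what each user or service account can do.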
If there is some compelling business reason to place one of these systems on the internet without an intermediary jump host in front of it, then security teams need to implement the most secure access control model possible, which should include strong passwords and multifactor authentication at a minimum. Another strong access control is implementing source IP address restrictions -- with or without a bastion host -- that limit where any access can come from.
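One way to sketch such source IP restrictions is at a reverse proxy sitting in front of the console. An nginx location block, for instance, could reject everything outside an approved range -- the address range and upstream name here are placeholders:

```
location / {
    allow 203.0.113.0/24;   # e.g., corporate VPN egress range (placeholder)
    deny  all;              # reject all other source addresses
    proxy_pass https://orchestrator-console;   # hypothetical internal upstream
}
```

Equivalent restrictions can also be applied at a cloud security group or network firewall rather than at the proxy itself.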
As the use of container-based automation and orchestration systems becomes more common, it's critically important to lock them down and restrict access to them. In addition to a secure configuration and limited access model, all of these systems should have remote logging enabled, and security operations teams should be carefully monitoring activity on these systems.
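On Kubernetes, for instance, meaningful activity monitoring starts with an audit policy supplied to the API server; a minimal sketch might capture full request and response bodies for secrets access and metadata for everything else. Rules are evaluated in order, so the more specific rule comes first:

```yaml
# Minimal audit policy, passed to the API server via --audit-policy-file
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse     # full bodies for access to secrets
  resources:
  - group: ""
    resources: ["secrets"]
- level: Metadata            # request metadata only for everything else
```

The resulting audit log can then be shipped to a remote log aggregator for review by the security operations team.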