Security researchers discovered a new cryptojacking worm that spreads via Docker hosts exposed to the internet and can be hard to detect, though experts say there are a number of ways to mitigate the risk of infection before it starts.
Jay Chen, senior cloud vulnerability and exploit researcher at Palo Alto Networks, said this cryptojacking worm marked the first time Palo Alto's Unit 42 researchers had seen a threat "spread using containers in the Docker Engine."
"Because most traditional endpoint protection software does not inspect data and activities inside containers, this type of malicious activity can be difficult to detect," Chen wrote in a blog post. "The malicious actor gained an initial foothold through unsecured Docker daemons, where a Docker image was first installed to run on the compromised host. The malware, which was downloaded from command and control servers, is deployed to mine for Monero and periodically queries for new vulnerable hosts from the C2 and picks the next target at random to spread the worm to."
The researchers named the cryptojacker worm "Graboid" because it "behaves similarly to the sandworms in [the 1990 movie 'Tremors'], in that it moves in short bursts of speed, but overall is relatively inept." Despite this, Chen warned that the worm could evolve by pulling new scripts and "repurpose itself to ransomware or any malware to fully compromise the hosts down the line."
Chen wrote in the blog post that "more than 2,000" Docker hosts are insecurely exposed to the internet, and he told SearchSecurity via email this alone makes Graboid more dangerous.
"Although we do not observe the scanning capability in the current version of Graboid, it can be very spreadable as it has full control (root access) of the compromised hosts," Chen said. "Once Graboid compromises a host, it may continue to scan the internal network and infiltrate other unsecured Docker engines that are not exposed to the internet."
As to why so many Docker hosts are exposed to the internet, Chen said there are several possible explanations, because "by default the Docker Engine … is not exposed to the internet, so it could be a misconfiguration during the initial setup, or lack of knowledge on how exposed the container is or just a simple human error."
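For illustration, the kind of misconfiguration Chen describes often takes the form of the daemon's API being bound to an unauthenticated TCP socket on all network interfaces. A hypothetical /etc/docker/daemon.json showing that setting (this is a sketch of the general pattern, not a configuration taken from the Graboid report):

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
```

Anyone who can reach port 2375 on such a host gets root-equivalent control of the Docker Engine, since the remote API can start arbitrary containers. If the API must be reachable over the network at all, it should only be exposed behind TLS or an SSH tunnel.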
To secure Docker hosts, prevention is the best medicine
IT experts said vulnerability to such an attack is relatively easy to avoid, simply by not exposing container hosts to the open internet. The same advice applied to a vulnerability discovered earlier this year in runC, a core container software utility. A patch was later issued, but container experts first and foremost recommended basic security hygiene -- don't expose container hosts to the open internet, and don't download container images from unknown publishers on public registries.
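As a quick sanity check of that hygiene, an administrator can verify from the host itself whether anything is listening on the conventional unencrypted Docker API port. A minimal sketch, assuming a Linux host with the `ss` utility available; it only inspects the local machine, not the firewall or any non-standard ports:

```shell
# Check whether any process is listening on TCP port 2375, the conventional
# unencrypted Docker remote API port (2376 is the TLS-protected equivalent).
if command -v ss >/dev/null 2>&1 && ss -lnt 2>/dev/null | grep -q ':2375[[:space:]]'; then
  DOCKER_API_EXPOSED=yes
  echo "WARNING: a process is listening on port 2375 (unencrypted Docker API)"
else
  DOCKER_API_EXPOSED=no
  echo "no listener found on port 2375"
fi
```

A "yes" result on an internet-facing machine warrants an immediate look at the daemon's `-H` flags and firewall rules; a clean result does not rule out exposure on a non-standard port.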
Gary Chen, an analyst at IDC, said it is possible whoever configured the vulnerable hosts felt they didn't contain valuable assets, and thus posed minimal risk -- but the highly interconnected nature of container platforms means attacks can spread like wildfire from one compromised node.
"I don't know how you can really protect people against themselves," Gary Chen said, adding that "normal, good practices should have prevented this from ever happening."
Traditional IT security platforms are still catching up to containers. In the meantime, IT shops that have begun to adopt containers wait for familiar vendors to add container support, which can leave containers vulnerable. Specialized container security tools, which detect unusual and possibly malicious behavior within container networks and can whitelist a limited set of container network calls while blocking the rest, might have caught or stopped the cryptojacker worm had they been used. Several such tools protect container hosts as well.
There are additional layers of security that can limit an infected Docker host's access to the rest of the infrastructure. IT teams should run container images through security scanning tools such as Red Hat Quay, Docker Security Scanning or Docker Trusted Registry before deploying them; third-party vendors such as Twistlock, NeuVector and Aqua also offer image scanning before containers run. Enterprises should also enable strong SSH authentication for connections to the Docker daemon and eliminate weak access controls, even from within the trusted network. Such tools can also be used to check Docker deployments for unknown containers or images.
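To lock down the daemon's remote API itself, Docker's documented approach is mutual TLS: the daemon accepts connections only from clients presenting a certificate signed by a trusted CA. A sketch of the relevant /etc/docker/daemon.json settings (the certificate paths are placeholders, and the certificates themselves must be generated separately):

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/ca.pem",
  "tlscert": "/etc/docker/server-cert.pem",
  "tlskey": "/etc/docker/server-key.pem"
}
```

Clients then connect with `docker --tlsverify -H tcp://HOST:2376` and a matching client certificate. Alternatively, recent Docker versions can tunnel the API over SSH with `docker -H ssh://user@host`, which avoids opening a network listener entirely.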
These aren't new messages from security experts, but some organizations still aren't heeding them, experts say. In other cases, gaps in communication and collaboration between fast-moving DevOps teams and their security counterparts may keep container security best practices from being followed.
"My guess is that infrastructure teams have not plugged the security folks into their use of containers," said Chris Riley, cloud delivery director responsible for DevOps at Cprime Inc., an Agile software development consulting firm in San Mateo, Calif. "There could be containers in the wild with no oversight by the CISO… [but IT teams need] to build a pipeline where DevSecOps is included in order to scan and harden container definitions."