Hypervisor patching struggles exacerbate ESXiArgs attacks

Ransomware hit a high number of unpatched VMware ESXi servers by exploiting two- and three-year-old flaws, which has put hypervisor patching difficulties in the spotlight.

Prolonged downtimes and insufficient vulnerability management contributed to a high number of unpatched VMware ESXi servers that fell victim to a widespread, ongoing ransomware campaign.

Last week, threat actors targeted vulnerable VMware ESXi servers with a new ransomware variant now known as ESXiArgs. Reports indicate that the attacks exploit two remote code execution vulnerabilities tracked as CVE-2021-21974 and CVE-2020-3992 that still affect thousands of organizations worldwide.

Patches and mitigations have been available for two or three years -- how did the ESXiArgs campaign affect such a high number of organizations?

While a lack of timely patching is an ongoing problem for many organizations, particularly for cloud environments, addressing flaws in hypervisors presents its own complications. Oversubscription of resources, poor patching policies and a lack of architectural knowledge make it difficult for enterprises to keep pace with software updates, according to infosec experts, and older vulnerabilities like these can fall through the cracks.

Hypervisors are an important element of cloud computing because they allow organizations to host many VMs on a single machine, but that consolidation also introduces additional concerns. Patching a host can mean prolonged downtime for every VM running on it, which creates a disincentive for enterprises to patch, and internet-facing exposure makes those VMs a prime target.

A blog post by Intel 471 Wednesday referred to ESXi as a "fruitful target for attackers since it may be connected to several VMs and the storage for them."

Dan Mayer, a threat researcher for security vendor Stairwell, also described the bare-metal hypervisor as a high-impact target and listed several factors that make it difficult to defend.

"ESXi hosts run their own proprietary operating system, which, while similar to Linux, runs a different kernel developed and maintained by VMware," Mayer told TechTarget Editorial. "This means that there are less out-of-the-box security solutions for it and more knowledge gaps amongst IT professionals tasked with securing ESXi."

Patching woes

Additional hypervisor patching problems were highlighted in a Twitter post Monday by security researcher Jake Williams. The more difficult it is to patch hypervisors, he said, the less likely it is that enterprises will do so.

Williams told TechTarget Editorial that he attributed the brunt of the problem to hypervisors being oversubscribed in most organizations.

"Many [organizations] don't have enough spare capacity to migrate running virtual systems to other hypervisors to patch," Williams said in a Twitter direct message. "Everyone has a story about 'that one time' migrating a VM caused or was believed to cause some obscure issue."

As of Thursday, a Shadowserver Foundation scan showed more than 2,500 unpatched ESXi instances in the U.S. alone. Patching that many potentially affected servers would require far more cross-functional buy-in to agree on an outage window, Williams said.

Bernard Montel, EMEA technical director and security strategist at Tenable, agreed this was more of a people-and-process issue than a technology issue. He attributed the widespread campaign to poor patch management practices and fear that prolonged downtime would disrupt the business.

"In this specific case, patching a VMware ESXi server would imply stopping all the applications running in the ESXi instance. Organizations may have delayed patching until the entire IT organization was ready," Montel said.

In addition, he believes organizations underestimated the risk the hypervisor presented and instead focused remediation on individual machines, which was not enough.

Fixing the flaws

One of the vulnerabilities that appears to have been used against ESXi servers also contributed to the patching problems. Pedro Umbelino, principal security researcher at BitSight, highlighted some potentially misleading metrics for CVE-2021-21974 that could have downplayed the servers' internet exposure.

Firstly, he warned that the CVSS score of 8.8, which is considered high severity but not critical, might not signal that the flaw is exploitable over the internet when it clearly is.

"This is because this legacy protocol was designed to work on internal networks only, as per RFC, but some products still enabled it on all listening interfaces regardless," Umbelino said. "This can have serious consequences, and this exploitation is just one of potentially many."

Secondly, the National Vulnerability Database lists the attack vector component as Adjacent when Umbelino said it should be labeled as Network, which would increase the CVSS score to 9.8. Being marked as Adjacent might have misled enterprises into thinking the flaw was exploitable only on internal networks, which Umbelino said is incorrect.
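
Umbelino's arithmetic can be checked against the CVSS 3.1 base score formula. The short Python sketch below recomputes the score using CVE-2021-21974's other published metrics (AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H), swapping only the attack vector weight between Adjacent and Network; the result moves from 8.8 to 9.8, exactly the difference he describes. The roundup helper is a simplified version of the one defined in the CVSS specification.

    import math

    # CVSS 3.1 metric weights for the values relevant to CVE-2021-21974
    AV = {"N": 0.85, "A": 0.62}            # Network vs. Adjacent attack vector
    AC_LOW, PR_NONE, UI_NONE = 0.77, 0.85, 0.85
    C = I = A = 0.56                       # Confidentiality/Integrity/Availability: High

    def roundup(x):
        # Smallest value with one decimal place that is >= x
        return math.ceil(x * 10) / 10

    def base_score(av):
        iss = 1 - (1 - C) * (1 - I) * (1 - A)
        impact = 6.42 * iss                # scope unchanged
        exploitability = 8.22 * AV[av] * AC_LOW * PR_NONE * UI_NONE
        return roundup(min(impact + exploitability, 10))

    print(base_score("A"))                 # 8.8 -- NVD's Adjacent rating
    print(base_score("N"))                 # 9.8 -- the same flaw scored as network-exploitable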

However, Umbelino was most interested in the flaw's relationship to the Service Location Protocol (SLP), which listens on port 427. While port 427 was not showing up in popular vulnerability and port scanning search engines, he said, BitSight last week discovered more than 43,000 VMware-specific instances with port 427 open and SLP available.

He warned that it is unsafe for the port and protocol to be exposed to the open internet. Multiple threat intelligence vendors and government advisories have also urged enterprises to disable SLP.

"It was designed clearly with only internal networks in mind, and that is stated in the RFC specific documentation," Umbelino said.

Similarly, Jeremy Kennelly, senior manager of financial crime analysis at Mandiant, attributed the high number of affected hypervisors to enterprises' general lack of knowledge of which systems running ESXi services were exposed to the internet. Enterprises should be required to limit service access to the systems or networks that need it, rather than allowing connections from arbitrary locations on the internet, he said.
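
On the hypervisor itself, that kind of restriction can be expressed through the built-in ESXi firewall. The sketch below limits one ruleset to a management subnet instead of answering arbitrary sources; the sshServer ruleset and the 10.0.10.0/24 subnet are illustrative placeholders, and the same pattern applies to other exposed services.

    # Stop accepting traffic for the ruleset from all addresses
    esxcli network firewall ruleset set --ruleset-id sshServer --allowed-all false

    # Allow only the management subnet, then verify the allowed list
    esxcli network firewall ruleset allowedip add --ruleset-id sshServer --ip-address 10.0.10.0/24
    esxcli network firewall ruleset allowedip list --ruleset-id sshServer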

Another blind spot is the pre-built ESXi images deployed through hosting providers' click-and-play tools. Kennelly said that in some cases vulnerabilities can be introduced at any stage, including when the image was created. In addition, system owners might be unfamiliar with maintaining those images, which can lead to unpatched instances.

The ESXiArgs attacks that emerged last Friday were first reported by French cloud and hosting providers such as OVHcloud and Scaleway.

Attacks only partially successful

In a joint cybersecurity advisory Wednesday, CISA and the FBI confirmed that 3,800 servers had been compromised by ESXiArgs globally. One day prior, CISA published a recovery script on GitHub to assist enterprises with data recovery during the ongoing attacks. CISA's tool was based on the work of Enes Sonmez and Ahmet Aykac, security researchers with the YoreGroup Tech Team who discovered an error in ESXiArgs' encryption that allows victims to recover some data.
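
CISA's accompanying guidance has victims run the script once per affected VM from the ESXi shell. The lines below are a minimal sketch of that workflow, assuming the script is fetched directly from the cisagov/ESXiArgs-Recover repository; the path and exact invocation should be confirmed against CISA's current guidance, and [name] stands in for the affected virtual machine's name.

    # Download the recovery script (path assumes the cisagov/ESXiArgs-Recover repository)
    wget https://raw.githubusercontent.com/cisagov/ESXiArgs-Recover/main/recover.sh
    chmod +x recover.sh

    # Run once per affected VM, substituting the VM's name for [name]
    ./recover.sh [name]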

While the rapid spread and the high number of ESXi servers still running unpatched, outdated software were alarming, infosec experts agree that the fallout could have been far worse.

"In some ways, the number of unpatched systems is fairly low. Vulnerabilities in common internet-facing services can often lead to orders of magnitude more exposure than what we see here, even years later," Kennelly said.

Alon Schindel, director of data and threat research at Wiz, described this rate of unpatched servers as "not exceptional" for a two-year-old vulnerability. The cloud security vendor, which recently created a cloud-specific vulnerability database, has seen past critical flaws with higher rates of unpatched instances.

Because patching vulnerabilities can be a long process, Schindel urged enterprises to evaluate the implications of the new version, build a patching plan and prepare for downtime in some cases.
