Incident management (IM) is a necessary part of a security program. When effective, it mitigates business impact, identifies weaknesses in controls, and helps fine-tune response processes. Traditional IM approaches, however, are not always effective in a partially or completely virtualized data center. Consequently, some aspects of incident management and response processes require review and adjustment as an increasing number of critical systems move to virtual servers.
For our discussion of IM, virtualization is defined as the abstraction of logical servers from underlying hardware resources. Not every deployment fits this definition exactly, but it is a useful starting point.
Why an IM Review is Important
Some organizations are eager to implement virtualization to quickly gain associated cost and flexibility advantages. In my experience, this rush to a virtualized data center assumes that either existing controls are enough or that—for some unexplainable reason—virtualized servers are isolated from common attack vectors and therefore more secure. Neither assumption is true.
Inherent IM Challenges
Because of VM abstraction, servers, their configurations, and their data are subject to being moved from one hardware platform to another. Further, data can travel between virtual machines (VMs) on the same platform without passing through traditional network devices. Although these characteristics provide many of the benefits of virtualization, they also create challenges for security professionals, including packets that bypass IPS or log management solutions and the lack of consistent MAC address references.
In addition to monitoring issues, virtualized data centers provide a fertile environment for attack. For example, compromised servers in a traditional data center provide an attacker with a single corresponding production server for data extraction or for launching further attacks. However, compromise of a hypervisor hands an attacker access to the several servers it manages. Even with strong, traditional IM processes in place, this can result in multiple breaches before detection.
Probability of a VM Attack
Yes, virtualization is a relatively new technology. As such, it has not been a prime target for cybercriminals. However, that is changing. According to IBM X-Force (2010),
… 18.2 percent of all new servers shipped in the fourth quarter of 2009 were virtualized, representing a 20 percent increase over the 15.2 percent shipped in the fourth quarter of 2008 (p. 49).
Although this growth does not correlate directly to an increase in disclosed virtualization vulnerabilities, Figure 1 shows that the overall rise in vulnerabilities tracks the growth of virtualization as a strategic technology. It also indicates that the increasing number of virtualized servers expands the attack surface for attackers focusing on the hypervisor as a high-value breach target.
Figure 1 (IBM X-Force, 2010, p. 50)
The number of virtualization solution vulnerabilities is small compared to the number of vulnerabilities across all applications and operating systems—about one percent of the total. As Figure 2 shows, however, this is still reason for concern; a large majority of reported vulnerabilities allow an attacker to gain full control of a single hardware platform’s multi-server environment.
Figure 2 (IBM X-Force, 2010, p. 53)
It is important not to view the 2009 drop in reported vulnerabilities as a trend. One explanation for the drop is the richer target environment in traditional data centers and on desktops. This tends to focus security researchers’ attention in those environments. However, as virtualization growth continues, and traditional targets harden, virtualization products will garner additional focus from both security experts and criminals.
Finally, the distribution of reported vulnerabilities seems to track closely with market share, as shown in Figure 3. Using Microsoft’s share of reported vulnerabilities on the desktop as an example, vulnerability research tends to focus on market leaders. In the virtualization market, the leader is VMware. Extending this comparison, other vendors’ solutions possibly possess just as many, although undiscovered, vulnerabilities. Efforts by researchers and criminals show greater ROI when focused on the larger number of possible targets.
This brief look at the vulnerabilities inherent in virtualization solutions demonstrates the potential for a high-risk attack. High-risk because a single breach can result in access to multiple servers and the data they process or store. Consequently, a close look at detection, containment, and response capabilities for the unique needs of VMs is an important step in integrating virtualization into the organization’s security program.
Figure 3 (IBM X-Force, 2010, p. 56)
Incident Management Basics
IM in a virtualized data center consists of the same steps used in traditional environments, as shown in Figure 4. Note the cyclical nature of this process. After each attack/incident, or each training event, a root cause analysis and after action review helps identify weaknesses in the organization’s response. Remediation tasks placed in an action plan are executed to strengthen the organization’s ability to mitigate business impact. For more information on this process, see Incident Management: Managing the Inevitable.
Figure 4: Incident Management Cycle
Most of the steps in this process are planned, designed, implemented, and documented during the prepare step. It is in this phase of incident management that security and infrastructure design teams address the unique challenges associated with virtualization. These challenges go beyond simple documentation changes. In most cases, infrastructure design changes—changes intended to enable quick detection and response—are required.
In the following sections, we examine areas for review in the preparation process. Because recovery is directly affected by virtualization, we also look at additional steps necessary to enable quick and safe recovery.
Unique Response Challenges
The flexibility and productivity gains virtualization brings to an organization can also weaken its ability to respond to attacks or other unwanted device or user behavior. Challenges include:
- VMs managed by the same hypervisor instance might share information without having to send it out onto the physical network;
- strict separation between partitions is not implemented by default, requiring design and build documentation changes;
- VMs are either manually or automatically moved to react to changes in resources or workload;
- there is limited physical access to the intra-partition pathways from outside the host;
- direct host memory access capabilities can prevent quickly moving a complete partition from a compromised platform to a recovery host; and
- it is easy to mix servers with different trust levels on the same host.
Five Steps to Ensuring Effective Response
A few simple steps—not always so simple to implement—will ensure an organization’s ability to detect unwanted behavior and effectively respond as virtual servers spread across the enterprise, including:
Step 1: Group VMs according to data classification
Step 2: Ensure monitoring tools see packets internal to VMs managed by the same hypervisor
Step 3: Segment virtual networks
Step 4: Remedy forensics issues
Step 5: Mitigate business impact
Step 1: Group VMs according to data classification.
Virtualization allows IT staff to place servers on any available host. This helps maximize available resources, but it can unnecessarily increase security complexity and costs. For example, VMs processing sensitive data require all controls defined as reasonable and appropriate for confidential data. Less sensitive VMs do not, but unnecessarily spreading a small number of sensitive VMs across multiple hosts, instead of aggregating them on restricted hosts, requires applying all controls to all hosts.
Do not mix VMs processing sensitive data with those that do not. This allows maximum protection while minimizing costs and complexity. It also helps enable adjustments to incident management processes without asking for large budget increases.
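As a sketch of how this grouping rule might be audited, the snippet below flags hosts that mix classifications. The inventory format, host names, and classification labels are illustrative assumptions, not part of any vendor's API.

```python
# Flag hosts that mix sensitive and non-sensitive VMs.
# The inventory format and labels are illustrative assumptions.
from collections import defaultdict

def mixed_hosts(inventory):
    """inventory: list of (vm_name, host, classification) tuples.
    Returns the set of hosts carrying more than one classification."""
    by_host = defaultdict(set)
    for vm, host, classification in inventory:
        by_host[host].add(classification)
    return {host for host, classes in by_host.items() if len(classes) > 1}

inventory = [
    ("crm-db", "esx01", "confidential"),
    ("hr-app", "esx01", "confidential"),
    ("web-01", "esx02", "public"),
    ("pay-01", "esx02", "confidential"),  # mixed trust levels on esx02
]
print(mixed_hosts(inventory))  # {'esx02'}
```

A report like this, run against the VM inventory on a schedule, catches drift back toward mixed-trust hosts before it becomes a control gap.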
Step 2: Ensure monitoring tools see packets internal to VMs managed by the same hypervisor.
Monitoring for anomalous packets passing in and around VMs on the same host is just as important as seeing them pass between physical servers. The problem is the inability of traditional security appliances (e.g., IPS, IDS, and firewalls) to see inside the virtual space. This visibility into the pathways between VMs, between the VMs and the hypervisor, and to and from the host operating system is called introspection.
Introspection is possible using some of the tools that come with a hypervisor solution. However, they are not always full implementations. Further, if not integrated with current monitoring controls, the administrator now has multiple sets of rules to manage.
Introspection and Detection
First, ask the right questions. Does the solution you currently use, or are about to purchase, allow introspection of all activity within the virtual space? In addition to packets, does it monitor memory and processes to ensure VM and hypervisor integrity? In other words, the virtual space cannot be an introspection-resistant black box.
If complete introspection is not part of the product’s capabilities, there are workarounds. For example, critical or high-value breach targets might reside on a host with more than the recommended two NICs (one for partition management and one for VMs to access the physical network). The extra NICs can route data from one VM, out to a monitoring device, and back to the internal target VM, as depicted in Figure 5. (The management NIC is not shown.)
All intra-VM traffic in this Microsoft virtual server example is routed to VLAN 4. Attached to VLAN 4 is a physical IPS used to monitor physical network traffic. Once inspected and approved, packets are then returned through VLAN 4 to the virtual switch and routed to the target partition. Although a possible solution, it has a disadvantage.
Figure 5: External Packet Inspection
Packets routed to an external monitoring device add propagation delay to response time. Consequently, be sure this is absolutely necessary before adding it to your design toolkit. In the interest of balance between business productivity and security, you might consider this only for your most sensitive data exchanges. Another alternative is making the case for additional funding so you can purchase one of the growing number of third-party products that add introspection capabilities. Either way, do not ignore internal traffic.
In addition to monitoring, integrate virtual environments into your log management processes. Does your virtualization solution support integration with your security information and event management (SIEM) controls? If not, does it support syslog integration? Whatever it takes, ensure VM, hypervisor, third-party application, and host system logs make it to your aggregation point and your correlation engine.
For more information about log management, see Guide to Computer Security Log Management, NIST SP 800-92.
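As an illustration of syslog integration, Python's standard logging library can forward hypervisor and VM events to a central collector. The collector address and the event text below are assumptions; substitute your own aggregation point.

```python
# Minimal sketch: forward hypervisor/VM events to a central syslog
# collector so they reach the SIEM correlation engine. The collector
# address and event text are illustrative assumptions.
import logging
import logging.handlers

def make_forwarder(collector=("127.0.0.1", 514)):
    logger = logging.getLogger("vm-events")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(
        address=collector,  # UDP datagrams by default
        facility=logging.handlers.SysLogHandler.LOG_LOCAL0,
    )
    handler.setFormatter(logging.Formatter("hypervisor %(message)s"))
    logger.addHandler(handler)
    return logger

log = make_forwarder()
log.info("vm=crm-db event=vmotion src_host=esx01 dst_host=esx03")
```

The same pattern applies to third-party applications running in the virtual space: whatever emits the event, the record should land at the aggregation point with the rest of the enterprise's logs.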
Step 3: Segment virtual networks
Also shown in Figure 5 is a segmentation scheme to limit traffic between partitions. Implemented using VLANs configured in a virtual switch and VLAN access control lists (VACLs), this example is one way to help ensure unwanted traffic does not pass from a compromised VM to other VMs on the same host. Further, it allows response teams to quickly isolate one or more compromised systems, preventing enterprise-wide effects.
The reasons for segmentation are not different from those in the physical world. However, virtual server segmentation is often forgotten, even though the physical hosts might be placed in secure network segments. Remember that many controls you implement on the physical network must be configured in virtual environments, but VMs are by design isolated from controls on your physical network.
Step 4: Remedy forensics issues
Forensics is directly affected by virtualization in at least three areas: time synchronization, hardware addressing, and server seizure.
In addition to log management solutions, forensics solutions require time synchronization across the enterprise. Without it, correlation engines miss relationships between events and investigators struggle to reconstruct incidents. Most organizations use a time service—internal, external, or both—to ensure the same time is synchronized across all physical devices.
Virtual servers must synchronize with the same service. However, this is not always automatically configured. Ensure each VM directly synchronizes with the time service or with the physical host. Using the physical host assumes it synchronizes with the time service.
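A simple way to audit synchronization is to compare each device's reported offset from the reference time source against a tolerance. The sketch below assumes offsets have already been gathered (for example, from an NTP query); the device names and sample values are illustrative.

```python
# Sketch: flag hosts and VMs whose clocks drift from the reference
# time source by more than a tolerance. The offsets would come from
# an NTP query in practice; these sample values are assumptions.
def drifted(offsets_ms, tolerance_ms=500):
    """offsets_ms: {device: offset from reference in milliseconds}.
    Returns the devices outside tolerance, sorted by name."""
    return sorted(dev for dev, off in offsets_ms.items()
                  if abs(off) > tolerance_ms)

offsets = {"esx01": 12, "vm-crm-db": -40, "vm-hr-app": 2300, "esx02": -780}
print(drifted(offsets))  # ['esx02', 'vm-hr-app']
```

Devices flagged by a check like this are exactly the ones whose log entries a correlation engine will mis-order during an investigation.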
VMs use virtual hardware (MAC) addresses. When a VM moves, its MAC address changes. If these address changes are not tracked and logged, reconstructing a security incident within virtual environments is difficult. See Figure 6.
Figure 6: Logging Challenges (Brandon Gillespie, 2009, Slide 8)
Consequently, logging is not enough; knowing where a VM was located at any point in the past is also necessary. According to Brandon Gillespie (2009), security teams must ensure virtual MAC addresses are tracked, logged, and available for analysis. In addition, log management processes must consider the possibility that a moving VM has left behind logs on multiple hosts.
For an example of how scheduled tracking might be accomplished in an environment without an automated tracking solution, see Tracking a VM in a Nexus 1000v Virtual Network.
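In the absence of an automated tracking solution, the idea can be sketched as a lookup over logged move events: given a VM name and a time, find the MAC address and host in effect at that moment. The record format, MAC addresses, and host names below are hypothetical.

```python
# Sketch: reconstruct where a VM was running at a given time from
# move-event records. The record format and values are hypothetical;
# a real deployment would parse these from aggregated logs.
from datetime import datetime

# (timestamp, vm, mac, host) records, one per placement change
moves = [
    (datetime(2011, 3, 1, 8, 0),   "crm-db", "00:50:56:aa:00:01", "esx01"),
    (datetime(2011, 3, 1, 14, 30), "crm-db", "00:50:56:aa:00:07", "esx03"),
    (datetime(2011, 3, 2, 9, 15),  "crm-db", "00:50:56:aa:00:02", "esx02"),
]

def locate(vm, at, records):
    """Return the (mac, host) in effect for `vm` at time `at`."""
    current = None
    for ts, name, mac, host in sorted(records):
        if name == vm and ts <= at:
            current = (mac, host)
    return current

print(locate("crm-db", datetime(2011, 3, 1, 20, 0), moves))
# ('00:50:56:aa:00:07', 'esx03')
```

Note that the answer also tells the investigator which host's local logs to pull, since a moving VM may have left entries behind on each platform it visited.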
Seizing a Virtual Server
When a server is compromised or used to commit a crime, it is often necessary to seize it for forensics analysis. Security teams face two challenges when trying to remove a physical server from service: retaining potential evidence in volatile storage and removing a device from a critical business process. Proper planning mitigates the effects of both when seizing a VM.
Evidence retention is a problem when the investigator wants to retain RAM content. For example, removing power from a server starts the process of mitigating business impact, but it also denies forensic analysis of data, processes, keys, and possible footprints left by an attacker. This is one advantage VMs have over physical servers.
Most virtualization solutions, like VMware’s ESXi and Microsoft’s Hyper-V, provide snapshot capability. A VMware snapshot is a point-in-time image of a VM, including RAM and the virtual machine disk file (Siebert, 2011). The resulting file can provide an investigator with an encapsulated copy of the server at the time the breach or criminal activity occurs. When placed on a quarantined replica of the original hardware, the recovered VM presents a rich forensics environment.
Another method of both disabling a compromised server and retaining critical forensics data is VM suspension. In VMware, for example, suspending a VM creates a suspended state file (.vmss) representing “…the state of the machine at the time it was suspended, or paused…” (Durick, 2011, Suspend, para. 1). The .vmss file is similar to the hibernation file used on Windows systems. For more information on this topic, see the Durick reference above.
A snapshot or suspension file might not be enough, however. When planning snapshots or other processes for evidence preparation, be sure to collect all files your VM uses while running. In a VMware ESXi 4 update 1 environment, files to seize include those with the following extensions (Durick, 2011):
- .vmdk (virtual disk file)
When using Microsoft Server 2008 R2 virtualization, look for the following file extensions (Microsoft TechNet, 2011):
- .avhd (snapshot file)
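Whichever platform you run, the collection step can be scripted. The sketch below copies a VM's files into an evidence directory and records a SHA-256 hash of each for chain-of-custody notes; the paths and the exact extension list are assumptions, so use the set your vendor documents.

```python
# Sketch: gather every file belonging to a seized VM and record a
# SHA-256 hash of each for chain-of-custody notes. The extension
# list is illustrative; use the set your vendor documents.
import hashlib
import shutil
from pathlib import Path

EXTENSIONS = {".vmdk", ".vmsn", ".vmss", ".vmsd", ".avhd"}

def seize_vm_files(vm_dir, evidence_dir):
    """Copy matching files to evidence_dir; return {name: sha256}."""
    vm_dir, evidence_dir = Path(vm_dir), Path(evidence_dir)
    evidence_dir.mkdir(parents=True, exist_ok=True)
    manifest = {}
    for f in vm_dir.iterdir():
        if f.suffix.lower() in EXTENSIONS:
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            shutil.copy2(f, evidence_dir / f.name)  # preserves timestamps
            manifest[f.name] = digest
    return manifest
```

Hashing before the copy leaves the lab with a way to prove the seized files were not altered in transit.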
Administrators typically strive to meet four goals when a virtual server is removed from service: 1) contain a breach or malware infestation by removing the affected server from the network; 2) prevent any further damage to, or loss of, information residing on local storage; 3) remove the server to a secure location for forensics analysis; and 4) restore services provided by the VM. Meeting these goals requires planning, testing, and documenting processes.
Removal of a server usually starts with isolating it from the network. As I wrote in Step 3: Segment virtual networks, this is easily accomplished using documented steps to isolate one or more virtual network segments. In many cases, isolating the entire physical host is necessary, and proper physical network design enables this. Isolation protects the rest of the network and shuts down external attack sessions. Another option is to suspend the affected VM.
If quick isolation is not necessary or practical, suspend the VM. Shutting it down might or might not preserve volatile storage, such as RAM. Whether or not suspension affects forensics investigations depends on how your virtualization vendor implements this capability and how you configure it.
Both snapshots and suspensions allow preservation of evidence. Seizing the related files and taking them to a secure forensics lab is easily accomplished.
Restoring service, the last step in server seizure, is also an important business continuity planning step for any service-interruption event.
Step 5: Mitigate business impact
The bottom line is that IM is all about mitigating business impact. Modifying detection, containment, and evidence retention processes does not ensure continued operation of processes affected by a compromised server. Rather, this requires quick recovery of the server and its data to a point in time just before the compromise. In addition to traditional backups, a virtualized data center has unique tools to accomplish this.
Immediately after suspending a VM, virtualization technology allows another VM to take its place. Two tools make this possible: images and snapshots. VM images, created by the virtual server creation process, usually exist for every VM in the data center. If regularly patched, they allow quick recovery of a server when recovery of dynamic configurations or data is not necessary. (See Patch archived VMs…) Images, however, do not restore a VM's operational state at a specific point in time.
A better alternative to recovering a VM is using a snapshot. Snapshots save enough information to restore a server to a specific point in time. However, this requires regularly taking snapshots rather than waiting for an incident. It also means implementing snapshot management processes.
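Regular snapshots imply a retention policy. As a minimal sketch, the function below keeps the newest few snapshots per VM and returns the rest for deletion; the snapshot records are illustrative, and a real tool would query the hypervisor's management API instead.

```python
# Sketch of a simple retention policy: keep the newest `keep`
# snapshots per VM and return the rest for deletion. Records are
# illustrative; a real tool would query the hypervisor's API.
from collections import defaultdict

def to_prune(snapshots, keep=3):
    """snapshots: list of (vm, taken_at) pairs; returns pairs to delete."""
    by_vm = defaultdict(list)
    for vm, taken_at in snapshots:
        by_vm[vm].append(taken_at)
    doomed = []
    for vm, stamps in by_vm.items():
        for old in sorted(stamps, reverse=True)[keep:]:
            doomed.append((vm, old))
    return doomed

snaps = [("crm-db", t) for t in (1, 2, 3, 4, 5)]
print(to_prune(snaps))  # [('crm-db', 2), ('crm-db', 1)]
```

Keeping the chain short also limits the delta-file growth discussed below, which is where the performance cost of snapshots accumulates.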
Snapshots introduce a new set of management challenges; the biggest is the potential performance hit. For example, in VMware environments, post-snapshot reading and writing to disk happens as shown in Figure 7. The VM retrieves data created before the snapshot was taken from the pre-snapshot virtual disk file. Reads and writes for other data are sent to a delta file created at the time of the snapshot.
Figure 7: VM Snapshot Reads and Writes (Siebert, 2011a)
Microsoft snapshots work similarly. Figure 8 provides a step-by-step look at snapshot creation in a Microsoft Hyper-V environment.
Figure 8: Hyper-V Manager Snapshot Creation (Microsoft TechNet, 2011a)
Performance issues, and the need for additional storage, are necessary planning topics when considering snapshots. Also plan to seize all delta- and other snapshot-related files if snapshots are taken regularly, including:
- .vmsn (VMware snapshot state file)
- delta.vmdk (VMware differential disk file)
- .vmsd (VMware snapshot metadata file)
- .avhd (Microsoft snapshot file)
Finally, neither suspending a VM nor taking a snapshot is possible if you cannot access the VM's hypervisor. Be sure your management path to the parent partition, via the administration NIC, is truly isolated from the physical network. However, if you believe the hypervisor is compromised, simply isolate the physical platform and all servers it contains; assume all its VMs are compromised, too.
Introduction of virtualized servers requires rethinking incident management processes. Revisiting the prepare step associated with incident response—including asking your vendor the right questions—is the best place to start.
Five steps lead to tight integration of VM and existing incident response processes. They examine and help remedy system, network, and process design challenges associated with VM placement, incident detection and containment, and business process recovery unique to virtualization. Without them, existing response documentation is only effective for the physical environment.