Using bare-metal recovery to migrate physical machines to virtual machines with VMware

Learn how to use the bare-metal recovery procedure to migrate physical machines into virtual machines in VMware.

One of the really nice things about using VMware (or other virtual server solutions) is that you don't have to worry about bare-metal recovery or bare-metal restores of the virtual servers. As long as you can keep them from changing during a backup, all you have to do is back up their files. However, you can use the bare-metal recovery procedure to migrate physical machines into virtual machines. We just did this, turning 25 very old physical servers into one very nice VMware server. The following describes that migration.

I get asked all kinds of questions about backup products and how they behave on different operating systems and applications, and I use a lab to answer these questions. In addition to the usual backup hardware (SAN, tape libraries, virtual tape libraries), it consists of some Sun, IBM, and HP hardware running Solaris, AIX, and HP-UX. Until just recently, we also had about 25 Intel machines running various versions of Linux and Windows and their associated applications (Exchange, SQL Server, Oracle, etc.).

I never had enough machines, and I never had the right machines connected to the right hardware. We were constantly swapping SCSI and Fibre Channel cards, as well as installing and uninstalling applications. I could have used 100 machines, but that would obviously be prohibitive in many ways. (The cooling alone would be crazy.)

So we recently decided to see if we could get rid of all these servers with VMware. We bought a white box with a 3.5 GHz dual-core AMD processor, 4 GB of DDR2 RAM, and 1.75 TB of internal SATA disks. I installed two Fibre Channel cards and two SCSI cards into that server, then followed the alt-boot recovery method to move all of those physical servers into virtual servers, virtually upgrading each one's CPU, storage, and memory in the process. Here are the steps I followed for each server:

  • I used the alt-boot full image method to create an image of the entire /dev/hda hard drive to an NFS mount on the new VMware server. (These images were typically 4 GB to 10 GB. They were old servers!)
  • I used VMware to create a virtual machine specifying a virtual IDE hard drive that was much bigger than the original, usually about 20 GB or 40 GB.
  • I used VMware to create a virtual CD drive that pointed to an ISO file that was actually a symbolic link to an ISO image of a Knoppix CD on the hard drive.
  • I booted the virtual machine into Knoppix using the virtual Knoppix CD.
  • I used dd to copy the image of the real hard drive to the virtual hard drive in the virtual machine booted from the virtual CD. (We did this by mounting the NFS share where we stored the image.)
  • I "removed" the Knoppix CD by changing the symbolic link to point to an ISO image of a nonbootable CD and rebooted the virtual server.
  • In almost every case, the virtual server came up without incident, and voila! I had moved a physical server into a virtual server without a hitch. One Windows server blue-screened during boot, but I pressed F8, selected Last Known Good Configuration, and it booted just fine.
  • I installed VMware tools into each virtual machine, which made their video and other drivers much happier.
  • Once I verified the health of each machine, I changed the CD symbolic link to point to Knoppix again and booted into Knoppix. I then used either QTParted (for Linux systems) or fdisk and ntfsresize (for Windows systems) to grow the original partition and filesystem to the new disk size.
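At their core, the imaging and restore steps above are two dd copies: one from the physical disk to an image file on the NFS mount, and one from that image onto the virtual disk inside the Knoppix VM. Here is a minimal sketch of that flow. All paths are hypothetical, and small files stand in for the real disks so the script can run (and verify itself) anywhere; the comments show the real-world form of each command.

```shell
#!/bin/sh
# Sketch of the image-and-restore flow. In the actual migration, the
# source would be /dev/hda on the physical server, the image would
# live on an NFS mount, and the destination would be the virtual IDE
# disk seen by the Knoppix VM. File paths here are stand-ins.
set -e
WORK=$(mktemp -d)
SRC="$WORK/physical-disk"   # stand-in for /dev/hda on the old server
IMG="$WORK/server.img"      # stand-in for the image on the NFS mount
DST="$WORK/virtual-disk"    # stand-in for the larger virtual disk

# Fake a 4 MB "physical" disk and a larger 8 MB "virtual" disk
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null
dd if=/dev/zero    of="$DST" bs=1M count=8 2>/dev/null

# Step 1 (alt-boot full image): copy the whole disk to the NFS mount.
# Real-world form:  mount nfsserver:/images /mnt/nfs
#                   dd if=/dev/hda of=/mnt/nfs/server.img bs=1M
dd if="$SRC" of="$IMG" bs=1M 2>/dev/null

# Later, inside the Knoppix VM: restore the image onto the virtual
# disk. conv=notrunc leaves the larger destination at its full size.
# Real-world form:  dd if=/mnt/nfs/server.img of=/dev/hda bs=1M
dd if="$IMG" of="$DST" bs=1M conv=notrunc 2>/dev/null

# "Eject" the virtual CD by re-pointing the symlink, e.g.:
#   ln -sf nonbootable.iso /vmware/isos/boot-cd.iso

# Confirm the first 4 MB of the virtual disk match the original
cmp -n $((4 * 1024 * 1024)) "$SRC" "$DST" && echo "restore verified"
rm -rf "$WORK"
```

Note that the restored disk still carries the original, smaller partition table, which is exactly why the final step above grows the partition and filesystem with QTParted or fdisk/ntfsresize to use the larger virtual disk.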

With 4 GB of RAM and a 3.5 GHz dual-core processor, I can run about eight virtual servers at a time without swapping. I typically only need a few at a time, and what's important is that I have Exchange 2000, SQL Server X, or XYZ x.x running; they don't need to run that fast. (That's how I was able to get by with those old servers for so long.) Each virtual server can have access to either of the Fibre Channel or SCSI cards, which gives them access to every physical and virtual tape drive in the lab. They also have more CPU, disk, and RAM than they ever had in their old machines. (I can even temporarily give any of them the entire 3.5 GHz processor and almost all of the 4 GB of RAM if I need to, and I don't have to swap chips or deal with thermal compound to do it!)

I also get to have hundreds of virtual servers and not have any logistical or cooling issues, since each server only represents 20 GB to 50 GB of space on the hard drive. I can have a Windows 2000 server running no special apps, one running Exchange 5, one running SQL Server 7, a server running Windows 2003 with no special apps, one with Exchange 2000, and one running Windows Vista. I could have servers running every distribution of Linux, FreeBSD, and Solaris x86 -- and all of the applications those servers support. I think you get the point. I've got enough space for about 300 virtual server combinations like that. It boggles the mind.

This article originally appeared in Storage magazine.

About this author:
W. Curtis Preston (a.k.a. "Mr. Backup"), Executive Editor and Independent Backup Expert, has been singularly focused on data backup and recovery for more than 15 years. From starting as a backup admin at a $35 billion credit card company to being one of the most sought-after consultants, writers and speakers in this space, it's hard to find someone more focused on recovering lost data. He is the webmaster of, the author of hundreds of articles, and the books "Backup and Recovery" and "Using SANs and NAS."

