
How to build your own virtual test lab
A virtual test lab can give you a place to test patches and configuration changes -- and avoid the potential for downtime in the production environment.
For any administrator running a vSphere environment, it is almost essential to have a lab to test patches and configuration changes. As most of us have learned, making untested adjustments in the production environment can lead to mistakes, lost data and tears.
Using nested virtualization
Building your own virtual lab is easy enough if you understand the fundamentals and don't mind spending a reasonable amount on your hardware. If it helps, think of an investment in a lab as an investment in yourself.
A virtual test lab has an edge over a physical cluster because you can add virtual hosts, CPUs, networks, clusters and load balancing as you see fit; you are constrained only by the physical resources in your server. One important thing to note: when you add networks that need to reach beyond the virtualized networks, the virtual switch security settings must allow promiscuous mode, as shown below. Without it, traffic won't flow through the nested networks correctly.
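On a standard vSwitch, promiscuous mode can be enabled from the ESXi shell. The commands below are a minimal sketch; vSwitch0 is an example name, so substitute whichever switch your nested hosts actually use. Many nested-ESXi guides also suggest allowing forged transmits, which I've included here.

    # Allow promiscuous mode (and forged transmits) on the standard vSwitch
    esxcli network vswitch standard policy security set \
        --vswitch-name=vSwitch0 --allow-promiscuous=true --allow-forged-transmits=true

    # Verify the new security policy
    esxcli network vswitch standard policy security get --vswitch-name=vSwitch0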
When designing my lab, I wasn't too concerned with redundancy and reliability. I wanted something quiet that I could experiment in using nested virtualization. With that in mind, I picked up a relatively inexpensive server with decent specifications.
This is nowhere near a production setup; it is a test environment, designed for learning and trying out new products, although it does run a couple of non-test VMs. Keep in mind that nested virtualization is not a configuration VMware supports.
My current server is an HP ProLiant ML310e with 32 GB of RAM. I find that it gives enough room for a proper test lab for most VMware products. Everything is virtualized, including the storage.
I chose a standard RAID 5 card for the usage profile and a bit of redundancy, and added a single SSD for the virtual disks that need quick performance. Again, losing anything on this infrastructure isn't the end of the world. Do keep an eye on disk performance, though, as storage tends to be the choke point; the esxtop check below is a quick way to spot trouble.
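A quick way to confirm whether storage is holding you back is esxtop on the physical host. This is just the standard tool; the latency threshold mentioned is a common rule of thumb rather than a hard limit.

    # From the ESXi shell on the physical host
    esxtop        # press 'd' for the disk adapter view, 'u' for disk devices

    # Watch the DAVG/cmd (device latency) and KAVG/cmd (kernel latency)
    # columns; sustained values above roughly 20 ms suggest the disks
    # are the choke point.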
A key processor feature
Before rushing out to buy a new server, make sure it has the hardware page assist feature (Intel calls its implementation Extended Page Tables, or EPT; AMD's equivalent is Rapid Virtualization Indexing, or RVI). This feature is critical to nested virtualization; without it, performance will suffer when you use nested VMs. In essence, hardware page assist replaces the double lookup of memory pages that would otherwise occur in a virtualized environment with a second-level memory map of page tables maintained in hardware. Check that your proposed server has this feature on the Intel ARK website, or test the hardware directly as shown below.
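If you can boot the candidate hardware from a Linux live CD or USB stick, you can check for the feature directly. This sketch assumes a Linux environment; the ept CPU flag indicates Intel's implementation and npt indicates AMD's.

    # Look for second-level page table support in the CPU flags
    grep -qwE 'ept|npt' /proc/cpuinfo \
        && echo "Hardware page assist supported" \
        || echo "Not supported - nested VMs will crawl"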
CPU speed, on the other hand, isn't so important. As most VMware users know, RAM almost always runs out before CPU does.
Putting the pieces together
The physical server runs a free standard copy of ESXi, which works because my server has one socket and four cores. All the switches are standard vSwitches, so there is no need to burn a license or reinstall every 60 days when an evaluation expires. Within this physical machine, I created the key "infrastructure" VMs that power the cluster. The requirements were quite basic.
Next, I have a virtualized iSCSI server. Shared storage needs to come up before the rest of the virtualized infrastructure; otherwise, the VMs that live on it will show up as orphaned because the iSCSI storage server wasn't powered on. I chose Openfiler because it's free and easy to use. Again, a test environment is much different from production. The sketch after this paragraph shows how a nested host connects to the storage.
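Once Openfiler is serving a target, each nested host needs its software iSCSI initiator pointed at it. This is a minimal sketch from the ESXi shell; the adapter name and target address are examples, so check yours with esxcli iscsi adapter list.

    # Enable the software iSCSI initiator on the nested host
    esxcli iscsi software set --enabled=true

    # Add the Openfiler box as a send target (example address and adapter)
    esxcli iscsi adapter discovery sendtarget add \
        --adapter=vmhba33 --address=192.168.100.10:3260

    # Rescan so the new LUNs show up
    esxcli storage core adapter rescan --adapter=vmhba33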
I also created a pfSense router in a VM. This allowed me to create as many networks as I liked and present them as port groups to the VMware infrastructure (the sketch below shows the commands), which is ideal for experimentation. As part of the setup, I created a lab network in which the infrastructure sat on its own /24, which let me segregate the lab while still opening access to a laptop on the non-lab network. Even more important was the VPN access via the pfSense firewall, which is ideal if your main machine is a MacBook Air or another ultraportable with limited capacity and power.
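Creating the internal-only switches and port groups that pfSense routes between takes two commands on the physical host. The names here are examples from my own layout.

    # An internal vSwitch with no physical uplink keeps lab traffic isolated
    esxcli network vswitch standard add --vswitch-name=vSwitch-Lab

    # A port group for pfSense and the nested hosts to attach to
    esxcli network vswitch standard portgroup add \
        --portgroup-name=Lab-Net --vswitch-name=vSwitch-Lab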
I made sure to install VMware Tools on the pfSense and Openfiler VMs, which is critical for good performance.
Select the correct guest type
Installing the second-level VMware boxes was easy. The tip here is to select a guest type that can support a VM host; when creating a nested VMware box, "Other Linux 2.6.x 64-bit" works without an issue (see the sample settings below). Remember, these boxes need some serious power, as they are servers within servers. I chose to split the remaining 28 GB of RAM between two guest hosts. Another bonus is that you can add or remove hosts simply by changing the RAM and CPU allocations. For basic experimentation, two hosts work fine.
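For reference, these are the .vmx settings commonly circulated in community guides for a nested ESXi guest. Treat them as a sketch; the exact options you need vary with the ESXi version running on the physical host.

    guestOS = "other26xlinux-64"       # the "Other Linux 2.6.x 64-bit" guest type
    vhv.enable = "TRUE"                # expose hardware virtualization to the guest
    monitor.virtual_exec = "hardware"  # hardware-assisted CPU virtualization
    monitor.virtual_mmu = "hardware"   # hardware-assisted MMU (page assist)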
Once the first virtual ESXi host was installed, I gave it a new static IP (the commands below show one way to do this) and then logged in using the Windows client. I created my key virtualized infrastructure, including a PDC, a BDC, Active Directory and a DNS server, which were required before the cluster could be created. From this point on, I followed the normal installation process for a cluster.
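If you prefer the shell to the console menus, the management IP can be set like this. The addresses are examples from my /24 lab network, so adjust them to your own.

    # Assign a static management IP to the nested host
    esxcli network ip interface ipv4 set --interface-name=vmk0 \
        --ipv4=192.168.100.21 --netmask=255.255.255.0 --type=static

    # Point the host at the lab DNS server
    esxcli network ip dns server add --server=192.168.100.5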
VMware Tools for nested ESXi
One last tip: you may find you are unable to control your internal machines from the first-level hypervisor, because you can't install the standard VMware Tools on a virtualized host. Some clever chaps over at VMware have created a modified VMware Tools that lets you interact with the nested hosts properly, rather than resorting to a hard power off. It doesn't have the full tool set, but you can download it from VMware. It installs as a VIB, as shown below.
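Because the modified tools ship as a VIB, installation is a one-liner on each nested host. The file name below is a placeholder for whichever version you download, and you may need to lower the host's acceptance level first.

    # The VIB is community supported, so the acceptance level may need lowering
    esxcli software acceptance set --level=CommunitySupported

    # Install the downloaded VIB (use the real file name), then reboot
    esxcli software vib install -v /tmp/esx-tools-for-esxi.vib
    reboot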
In the next part of this series, I will cover how to set up and configure the shared storage and network infrastructure to make the above happen.