A comprehensive server deployment checklist involves a lot more than buying adequate computing resources at an attractive price.
It takes talented IT administrators and other personnel to source, acquire, prepare, install, configure, manage and support a fleet of servers -- whether in the tens, hundreds or thousands -- in a data center.
The emphasis on reducing data center hardware footprints and lights-out operations can sometimes cause IT staff to overlook important issues. These top 10 logistical considerations should factor into every rack-and-stack server deployment checklist.
Can the facility handle the added server load?
Every server installed in a data center will demand rack space, power, venting and cooling. Where will the new servers go?
If you're deploying another server or two and there is plenty of unused rack space available, check that the new server's placement will have minimal effect on airflow to the surrounding servers.
When you're adding 10, 20 or 50 new systems to the existing racks, check for physical space, adequate power and sufficient cooling. Modern servers tend to use less energy and run cooler than previous generations of hardware, but you still must run the BTU numbers to verify the top-of-rack, end-of-row or hot/cold aisle cooling systems can handle the added heat. Otherwise, systems overheat and may fail prematurely. Similarly, total up the wattage demands of all the servers and check that the circuit capacity and power distribution systems have spare capacity for the additional load.
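The heat and power arithmetic above can be sketched in a few lines. This is an illustrative back-of-the-envelope check, not an engineering calculation; the server wattage, circuit rating and voltage below are made-up figures. It uses two facts worth memorizing: 1 watt of IT load produces roughly 3.412 BTU/hr of heat, and circuits carrying continuous load are commonly derated to 80% of their rating.

```python
# Rough capacity check for added servers: heat load and circuit headroom.
# All server and circuit figures in the example are illustrative assumptions.

WATTS_TO_BTU_HR = 3.412  # 1 watt of IT load ~= 3.412 BTU/hr of heat


def added_heat_btu_hr(server_watts, count):
    """Heat the new servers add, in BTU/hr, for the cooling system to absorb."""
    return server_watts * count * WATTS_TO_BTU_HR


def circuit_has_headroom(circuit_amps, volts, existing_watts, added_watts,
                         derate=0.8):
    """Check total load against a circuit, derated to 80% for continuous load."""
    usable_watts = circuit_amps * volts * derate
    return existing_watts + added_watts <= usable_watts


# Example: twenty 500 W servers on a 30 A / 208 V circuit already carrying 2 kW.
print(added_heat_btu_hr(500, 20))                       # → 34120.0
print(circuit_has_headroom(30, 208, 2000, 500 * 20))    # → False: overloaded
```

Even this crude check shows how quickly a batch of modest servers exhausts a circuit: twenty 500 W machines is 10 kW of new load, far beyond the roughly 5 kW a derated 30 A / 208 V circuit can carry.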
Hundreds of new servers necessitate new racks in new rows, which will have a major effect on data center space and floor loading. Such expansion projects can also require reengineering of supplemental building systems -- such as fire suppression, physical security and water handling -- to keep racks and pipes far apart. Large expansion projects can also demand more building management system or data center infrastructure management sensors.
Are there enough server outlets and UPS capacity?
Determine where each new server will plug in. This devilish detail has been known to disrupt the simplest and most mundane server deployment.
The power distribution units (PDUs) deployed in many server racks offer a finite number of outlets, so a rack that's already heavily utilized might not have enough open PDU receptacles to accommodate additional servers, or a convenient arrangement of available receptacles for the servers' power cords. You might rearrange some cords, but only by unplugging servers and causing system downtime, which the ops team would need to schedule in advance.
Check your uninterruptible power supply (UPS) capacity. Even the best UPS has limited wattage capacity and battery support time. Overloading a UPS system can trip the internal circuit breaker. More load means less battery runtime, so determine how the additional servers will affect the available UPS backup time. In some cases, the added load may shorten the battery backup time too much, preventing an orderly system shutdown. Investigate a UPS upgrade or other changes to UPS power distribution within the racks before deploying more servers.
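The runtime question above can be estimated before buying anything. The sketch below assumes runtime scales inversely with load, which is a deliberate simplification -- real UPS discharge curves are nonlinear, so treat the vendor's runtime tables as authoritative. All wattage and timing figures are invented for illustration.

```python
# Crude UPS runtime estimate: will the battery still cover an orderly shutdown
# after new servers are added? Figures are illustrative assumptions.


def estimated_runtime_min(rated_runtime_min, rated_load_w, actual_load_w):
    """Rough estimate: runtime scales inversely with load. Real UPS
    discharge curves are nonlinear -- check the vendor's runtime tables."""
    return rated_runtime_min * rated_load_w / actual_load_w


def shutdown_window_ok(runtime_min, shutdown_min, margin_min=2):
    """Is there enough runtime for an orderly shutdown plus a safety margin?"""
    return runtime_min >= shutdown_min + margin_min


# A 5 kW UPS rated for 10 minutes at full load; OS shutdown takes 10 minutes.
before = estimated_runtime_min(10, 5000, 3000)   # ~16.7 min at today's load
after = estimated_runtime_min(10, 5000, 4500)    # ~11.1 min with new servers
print(shutdown_window_ok(before, 10), shutdown_window_ok(after, 10))
# → True False
```

In this example the added servers don't trip the UPS, but they quietly shrink the battery window below what the shutdown sequence needs -- exactly the failure mode that goes unnoticed until the next power event.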
Large-scale expansions rarely face PDU and UPS oversights because new racks typically furnish new power infrastructure for the servers.
Is there enough network connectivity for the servers?
To connect to the data center's network, each new system's network interface controller (NIC) ports need connections to a local patch panel, and then to the local switches that interconnect beyond the rack.
Verify that enough ports are available on the local patch panels and switches to accommodate the additional servers. Clusters and resilient computing, as well as the additional network traffic demands of virtualized servers, ratchet up the number of NIC ports on production servers. It's often cheaper -- and provides redundancy -- to add two 1 Gigabit Ethernet ports than to install a single 10 GbE or faster port. A server may require two, four or more network cables to the patch panel and switch ports. If you're planning to add as few as 10 new servers across several racks, the number of new ports required may take you by surprise.
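The port math is trivial but worth doing explicitly, because it is the step most often skipped. A minimal sketch, with the port counts invented for illustration:

```python
# Count the cables and ports a batch of servers will consume, and flag any
# shortfall at the patch panel or switch. All counts are example figures.


def ports_needed(servers, nics_per_server):
    """Total patch panel and switch ports the new servers will consume."""
    return servers * nics_per_server


def shortfall(needed, free_panel_ports, free_switch_ports):
    """Ports short at the panel and at the switch (0 means enough)."""
    return (max(0, needed - free_panel_ports),
            max(0, needed - free_switch_ports))


# Ten servers with four NIC ports each already need 40 cables and ports.
need = ports_needed(10, 4)          # → 40
print(shortfall(need, 24, 48))      # → (16, 0): 16 ports short at the panel
```

Here the switches have capacity to spare, but the rack's 24 free patch panel ports leave the project 16 ports short -- the kind of gap that forces a mid-deployment panel upgrade.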
If you get caught short at the patch panel or switch, add another panel and interconnect another switch if there's space in the rack. Alternatively, upgrade the existing patch panel and switch to high-density versions. Beware the effort and downtime involved to configure switches and shuffle cabling, and plan accordingly.
Large-scale server installations often don't face network capacity oversights because IT administrators will plan server installation and switch capacity as part of the project design.
Are there enough licenses for all of the servers' software?
Licensing software can be a costly endeavor, with many enterprise-class licenses costing thousands of dollars each year -- multiplied by the number of VMs running on each system. The licensing expense for a large server installation can easily dwarf the combined hardware costs.
New servers need an OS, a hypervisor and/or virtual container layer, applications, management tool agents and other components. Each piece of software is governed by licensing. IT administrators must plan the server's software needs and license requirements in advance as part of the server deployment checklist, to know whether to purchase a license or move an existing one to the new server. Mitigate expenses with volume license discounts and careful negotiation with vendors.
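A simple model makes the "licensing dwarfs hardware" point concrete. The fee structure and dollar figures below are entirely made up -- real enterprise agreements vary by vendor, edition and negotiation -- but the shape of the arithmetic is the same: a per-host fee plus a per-VM fee, multiplied across the fleet, every year.

```python
# Illustrative annual license estimate: per-host fee plus per-VM fee.
# All dollar figures are hypothetical, not any vendor's actual pricing.


def annual_license_cost(hosts, vms_per_host, per_host_fee, per_vm_fee):
    """Yearly license spend for a fleet under a simple two-tier fee model."""
    return hosts * (per_host_fee + vms_per_host * per_vm_fee)


# 50 hosts running 20 VMs each, at $3,000/host and $500/VM per year:
print(annual_license_cost(50, 20, 3000, 500))   # → 650000
```

At these assumed rates, 50 hosts carry $650,000 per year in licenses -- a figure that recurs annually, unlike the one-time hardware purchase, which is why it belongs on the checklist before the purchase order is cut.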
Replacing an existing server with a hardware upgrade is typically not as pricey as adding new servers, since most software and associated licenses can be transferred to the new hardware. However, additional VMs instanced to the new server could add license costs.
Is there a clear server configuration template?
Configuring new servers involves tasks such as installing software, setting up server roles, assigning IP addresses and working through domain name system and Active Directory details. This type of work was traditionally performed manually and may still be a manual endeavor when one or two new servers are involved.
But manual configuration is a time-consuming and error-prone process for even the most talented and experienced IT professionals. There are too many potential mistakes or oversights that might delay the deployment, trigger unnecessary troubleshooting, expose unexpected security vulnerabilities, or simply leave a working system that is configured differently (i.e., inconsistently) from others, which leads to confusion or errors in the future.
Organizations should prepare for server installations with a clearly defined configuration plan. This can certainly be a manual effort if the system's configuration is well-documented and follows a consistent checklist. But large deployments increasingly rely on established base image files that define the overall suite of software to install, along with scripting and automation tools that drive the setup and configuration procedure in a predictable and consistent manner. The net result is faster server deployment with fewer errors. This type of consistency contributes to corporate compliance at the data center level.
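The idea of a configuration template can be sketched in miniature. In practice this is the job of a configuration management tool such as Ansible or Puppet; the snippet below is only a toy illustration of the principle -- declare the desired settings once, then mechanically report any server that deviates. The setting names and values are invented.

```python
# Toy template-driven configuration check. A real deployment would use a
# configuration management tool; these setting names/values are invented.

TEMPLATE = {
    "ntp_server": "10.0.0.5",
    "dns_server": "10.0.0.10",
    "ssh_root_login": "no",
}


def config_drift(actual):
    """Return settings that differ from the template (missing counts as drift)."""
    return {k: actual.get(k) for k, v in TEMPLATE.items() if actual.get(k) != v}


# A server with the wrong DNS entry and a missing SSH setting:
print(config_drift({"ntp_server": "10.0.0.5", "dns_server": "10.0.0.99"}))
# → {'dns_server': '10.0.0.99', 'ssh_root_login': None}
```

The payoff is exactly the consistency argument made above: when every server is compared against one declared template, a misconfigured system announces itself instead of surfacing months later as a mystery outage or audit finding.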
Are the new servers properly patched and updated?
Once a server is configured and software is initially installed, you'll need to update and patch the software -- usually as directed by the established configuration checklist, template, script or server configuration management tool.
However, patching and updating after installing an established software image isn't always the right choice. The latest version of an OS or application is not necessarily the best version for your specific production environment. Many enterprise-class environments prohibit automatic updates -- such as Windows Update -- to prevent untested changes to the production servers.
Many organizations take the time to test and verify software patches and updates in a lab environment before authorizing updates to the production servers through configuration change management tools. Eventually, the base image files used to create new servers reflect the new software versions.
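The approve-then-deploy workflow described above amounts to comparing installed versions against a tested baseline rather than against "latest." A minimal sketch, with hypothetical package names and versions, and a deliberately naive string comparison (real tooling uses proper version parsing):

```python
# Sketch: gate updates on a tested-and-approved version list rather than on
# whatever is latest. Package names and version strings are hypothetical.

APPROVED = {"openssl": "3.0.13", "nginx": "1.24.0"}


def update_plan(installed):
    """Split packages into: behind the approved version (update them) and
    ahead of it (untested -- investigate before allowing)."""
    to_update, untested = [], []
    for pkg, approved in APPROVED.items():
        cur = installed.get(pkg)
        if cur is None or cur < approved:   # naive string compare for the sketch
            to_update.append(pkg)
        elif cur > approved:
            untested.append(pkg)
    return to_update, untested


print(update_plan({"openssl": "3.0.11", "nginx": "1.25.1"}))
# → (['openssl'], ['nginx'])
```

Note the second bucket: a package *newer* than the approved version is flagged rather than accepted, which is the whole point of prohibiting automatic updates in production.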
Are the new servers properly integrated into the greater data center?
Just installing, configuring and loading software onto a server isn't necessarily enough to make it production-ready. The new servers must also integrate into data center operations. New servers must join the backup or replication process. The new servers need management agents to interoperate with the organization's remote lights-out management platform and appear in management logs, reports and alerts. IT administrators want to pool the new servers' resources and make them available for provisioning from a virtualization management platform such as VMware vCenter.
The exact series of steps for a functional server deployment can vary dramatically depending on organization size and business needs, but the underlying consideration is vital. Checklists, scripts and automation tools speed the integration process while reducing errors and oversights -- especially for large deployments.
Remember the implications for data center and corporate compliance from processes such as data protection and backups.
Have you fully documented the servers?
One of the final steps in any server installation checklist is to generate comprehensive documentation that details the setup, configuration and software complement. Proper documentation helps with troubleshooting because any deviation between the systems' documented (initial) and detected (current) state will usually reveal the problem. It also helps with compliance auditing by ensuring that every server is configured according to established standards and every piece of software holds a proper, current license.
Documentation manually entered and updated on spreadsheets and charts rarely works well because changes and updates are frequently ignored. Modern enterprise data centers rely on configuration and infrastructure management tools that recognize and inventory new systems, determine the hardware and software configurations, track licenses and support contracts, and generate charts that highlight relationships and dependencies. IT staff should conclude the server installation by updating any documentation, or verifying that any automated tools have correctly identified and inventoried the new systems.
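The reconciliation step at the end -- verifying that automated tools have actually found the new systems -- is just a set comparison between the documented inventory and what discovery detects. A minimal sketch, with hostnames invented for illustration:

```python
# Sketch: reconcile the documented server inventory against what network
# discovery actually found. Hostnames are illustrative.


def reconcile(documented, discovered):
    """Servers present in only one inventory (both given as hostname lists)."""
    documented, discovered = set(documented), set(discovered)
    return {
        "undocumented": sorted(discovered - documented),  # found, not recorded
        "missing": sorted(documented - discovered),       # recorded, not found
    }


print(reconcile(["web01", "web02", "db01"], ["web01", "web02", "web03"]))
# → {'undocumented': ['web03'], 'missing': ['db01']}
```

Both buckets matter at audit time: an undocumented server is an unlicensed, unmanaged risk, while a documented-but-missing one suggests stale records or hardware that left the building unnoticed.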
What will you do with the server packaging?
Your server deployment checklist doesn't end when the server is operational. Servers generally ship with a significant amount of packaging material: foam, cardboard, papers, plastic and metals, or even wood from pallets and crates.
The waste from a small server project with up to 10 systems can often be disposed of with the normal business waste stream.
Large projects with hundreds of systems potentially generate enough waste material to fill a storeroom or clog a loading dock, with additional fire and personnel safety implications. Think ahead about how to handle the packaging material, which may be a mix of recyclables and waste. Special arrangements can be made with waste disposal contractors to remove it promptly. If a server vendor or value-added reseller is involved in server preparation and installation, see if that company is obliged to remove the packaging as part of the deal -- cleanup and removal should be spelled out clearly in the purchase and sale agreement.
What will you do with old servers?
A new server deployment checklist must also account for the old servers. When refreshing hardware, data centers that don't plan for the displaced equipment wind up with a cluttered storeroom.
Many organizations choose to repurpose old servers within the business. An aging system can handle secondary, low-value production workloads or test and development projects. Displaced systems can make a spare parts inventory for similar systems still in use, especially if service contracts have expired and no spare parts are readily available.
Rather than dispose of unusable servers, donate them to schools or charitable organizations and seek a tax deduction, or resell used servers on the secondary market.
Server disposal is a serious problem. Electronic components and assemblies typically contain toxic chemicals and cannot be discarded with the normal waste stream. Instead, send unusable systems to an electronics recycling company, which will strip them of any valuable metals and handle the remainder appropriately.
Your mileage may vary
This server deployment checklist is not exhaustive. Unique business and technical requirements may minimize some of these issues -- or impose new server deployment considerations that aren't listed here.
Think ahead and plan carefully regardless of the number of new servers. It's not the biggest projects that get you into trouble. Often it's the smaller deployment projects that wind up popping circuit breakers or racking up overlooked software license fees.