How often should a server be replaced? It’s a question that has perplexed data center administrators for decades.
The good news is that servers are lasting longer, taking advantage of new technologies that can double, even triple, their working life. They can also save an organization a lot of capital in the process.
What’s behind a server’s longevity?
Servers are lasting longer because of server virtualization, combined with significant hardware improvements that allow each system to take full advantage of consolidation. Virtualization extends the working life of application delivery servers, especially as servers evolve to include vastly more processor cores and memory. Only five years ago, the average service life of a typical server was pegged at about three years. At that point, the organization would usually opt to depreciate the system quickly and move up to a superior hardware platform.
Server hardware improvements, including the introduction of virtualization extensions, such as AMD-V and Intel VT, onto processors with six, eight, 10 and even 12 cores each, have eased the relentless demand for newer systems. The technological shift to 64-bit computing now allows servers to routinely support 128 GB, 192 GB, 512 GB and even 1 TB of RAM, further extending the system’s working life by several years. Servers also routinely incorporate resiliency features like RAID support for local disks, multiple network adapters and redundant power supplies.
Clustering technologies can protect aging servers from unanticipated downtime, removing much of the worry from overworked administrators. Clustering can keep aging servers in service that might otherwise face end-of-life replacement; who cares if the old server hardware fails? Another node in the cluster will simply take over. The addition of virtualization features, such as live migration, allows fast workload balancing between servers and can offload applications from troubled or overloaded servers onto fast, new systems.
Service contracts are often expensive and grow even more expensive as the server hardware ages. After a few years, many companies find it too expensive to justify service contracts, and this becomes a major argument in favor of a technology refresh. Virtualization and clustering can alleviate much of this pressure in a production environment, but aging servers can also be reassigned to other tasks outside of the production data center. For example, when servers age out of their service contracts, it’s a simple matter to relocate the system to test and development, where a crash won’t affect production. Today, a common system can serve an enterprise for six or eight years, or even until it fails.
When is a server ‘too old’?
But there are some caveats to long server hardware lifetimes. Perhaps the most important issue is parts availability. Server hardware faults do occur, and replacement parts can be prohibitively expensive, if they are available at all. A wise IT administrator will watch service costs and parts availability, perform a basic cost/benefit analysis and make an informed decision about when to reassign or decommission an aging server, before a fault occurs.
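That cost/benefit analysis can be as simple as comparing the annual cost of keeping the old box against the annualized cost of a replacement. The sketch below illustrates the idea; every figure and the cost categories themselves are hypothetical assumptions, not vendor pricing.

```python
# Hypothetical keep-vs-replace calculation. All dollar figures are
# illustrative assumptions for the sake of the example.

def annual_cost_to_keep(service_contract, expected_part_spend, downtime_risk_cost):
    """Yearly cost of keeping an aging server in production:
    support contract, likely replacement parts and an estimate
    of what unplanned downtime is worth."""
    return service_contract + expected_part_spend + downtime_risk_cost

def annualized_cost_to_replace(purchase_price, useful_life_years, new_contract):
    """Purchase price spread over the new server's useful life,
    plus its (cheaper, early-life) support contract."""
    return purchase_price / useful_life_years + new_contract

keep = annual_cost_to_keep(service_contract=4000,
                           expected_part_spend=1500,
                           downtime_risk_cost=2000)
replace = annualized_cost_to_replace(purchase_price=9000,
                                     useful_life_years=6,
                                     new_contract=1200)

print(f"Keep:    ${keep:,.0f}/year")      # → Keep:    $7,500/year
print(f"Replace: ${replace:,.0f}/year")   # → Replace: $2,700/year
if keep > replace:
    print("Refresh looks cheaper on an annualized basis.")
```

Even a back-of-the-envelope model like this makes the decision point visible: the moment rising contract and parts costs push the “keep” line above the annualized “replace” line, the refresh pays for itself.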
Financial aspects may also influence the server refresh cycle. For example, leased servers have to be decommissioned on a set timetable. Otherwise, the organization may risk expensive or unanticipated lease extensions, or even more expensive month-to-month rental fees for the equipment. If you don’t own the servers, it may make more sense to refresh the equipment within the terms of a prevailing lease.
And finally, the march of technology may necessitate a server refresh. For example, an aging server can’t host as many virtual machines as a newer system, so a server refresh may be necessary to facilitate a server consolidation program. In addition, powerful new processors, advanced processor extensions, especially related to virtualization, and power conservation features can converge to make a strong case for new server purchases.
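The consolidation argument can also be put in rough numbers: estimate how many aging hosts one new server can absorb based on whichever resource, cores or memory, runs out first. The specs below are hypothetical examples, not a sizing recommendation.

```python
# Hypothetical consolidation estimate: how many aging hosts a single
# new server could absorb. Specs are illustrative assumptions.

old = {"cores": 8, "ram_gb": 64, "vms": 10}    # typical aging host
new = {"cores": 48, "ram_gb": 512}             # candidate replacement

# VM density is limited by the tighter of the two resources.
by_cores = new["cores"] // old["cores"]        # 48 / 8  = 6 hosts' worth
by_ram = new["ram_gb"] // old["ram_gb"]        # 512 / 64 = 8 hosts' worth
hosts_absorbed = min(by_cores, by_ram)

print(f"One new server replaces roughly {hosts_absorbed} old hosts "
      f"({hosts_absorbed * old['vms']} VMs).")
# → One new server replaces roughly 6 old hosts (60 VMs).
```

A ratio like six-to-one is what turns a consolidation program from a convenience into a capital and power-savings argument for the refresh.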