What’s the price for server uptime? How much would you be willing to invest to maintain five 9s of availability on your mission-critical line of business applications? With limited budgets and resources, many midmarket companies are investing an average of $20,000 in fault-tolerant servers and high-availability clusters to maximize uptime, according to a recent survey by the Information Technology Intelligence Corp (ITIC).
The ITIC survey revealed that despite limited resources and budget cuts, companies see enough value and ROI potential to justify investing in fault-tolerant servers and high-availability clusters for performance and availability. According to the survey, 76% of respondents said the TCO and ROI value of fault-tolerant servers was excellent, or at least good enough to justify the investment in these tight times.
In addition to the ROI value, what might be another reason cash-strapped midmarket IT shops would need to invest in new hardware and/or technologies like these? Try the increased use of virtualization. Sixty percent of the respondents said that virtualization increases the need for fault tolerance. Virtualization helps IT consolidate physical space and reduce the number of servers needed. And when optimally configured, virtualization reduces the time it takes to manage and deploy applications. The drawback is that all the server-based applications now run on one physical server. So if that server isn't configured properly or robust enough, it can cause problems with performance and uptime.
So how much is availability worth to you? Think of it in these terms: 99% uptime equates to roughly 87.6 hours of downtime per server, per year. Move up to three 9s of availability and you’re still looking at nearly nine hours of downtime per server annually; only at five 9s does annual downtime shrink to about five minutes. It would be very difficult for any business to remain successful with 87 hours of downtime a year.
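The downtime figures above follow from simple arithmetic: take the fraction of the year a server is allowed to be unavailable and multiply by the hours in a year. A minimal sketch (the function name `annual_downtime_hours` is ours, not from any standard library):

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year


def annual_downtime_hours(availability_pct: float) -> float:
    """Expected hours of downtime per year for a given availability percentage."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR


# Two 9s, three 9s, and five 9s of availability
for pct in (99.0, 99.9, 99.999):
    hours = annual_downtime_hours(pct)
    print(f"{pct}% uptime -> {hours:.2f} hours of downtime per year")
```

Running this shows 99% uptime allows about 87.6 hours of downtime a year, 99.9% about 8.8 hours, and 99.999% only about 5 minutes, which is why each additional 9 commands a premium.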