Hyper-converged infrastructure architecture was once a niche technology that primarily appealed to organizations with specialized needs, such as virtual desktop infrastructure. Many enterprises are now transitioning their general-purpose data centers into HCI facilities, collapsing core storage and compute functionality into a single, highly virtualized system.
But before pulling the trigger on an HCI acquisition, it's important to steer clear of these 10 key mistakes that frequently trip up new adopters.
- Believing that hyper-converged infrastructure architecture is an infrastructure management panacea
"Although HCI can provide deployment and provisioning flexibility, it's still subject to a host of traditional management problems: capacity monitoring, network congestion and so on," said Jim Meehan, director of product marketing at network traffic intelligence company Kentik.
"Appropriate tooling is still necessary," he added.
- Placing storage on the HCI backburner
Many organizations place CPU and RAM needs ahead of storage when designing a hyper-converged infrastructure architecture. "That's a mistake," said Michael Colonno, solutions architect at data center infrastructure software provider Computer Design & Integration. "You need to consider all elements together for redundancy and make a solid plan for growth," he explained. It's important to ensure that, if a node fails, other nodes can effortlessly accommodate all CPU, RAM and storage needs. "If this is not done, there will be data loss," he added.
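The failover requirement Colonno describes can be expressed as a simple capacity check: after any single node fails, its CPU, RAM and storage load must fit in the free headroom of the surviving nodes. Below is a minimal sketch of that check; the node sizes and utilization figures are hypothetical examples, not from the article.

```python
# Hypothetical N+1 capacity check for an HCI cluster.
# Node specs below are illustrative only.

def survives_single_node_failure(nodes):
    """Return True if the cluster can absorb the loss of any one node.

    Each node is a dict with total and used CPU (cores), RAM (GB) and
    storage (TB). After a node fails, its used resources must fit in
    the free capacity of the remaining nodes -- for every resource.
    """
    for failed in nodes:
        remaining = [n for n in nodes if n is not failed]
        for resource in ("cpu", "ram", "storage"):
            free = sum(n[f"total_{resource}"] - n[f"used_{resource}"]
                       for n in remaining)
            if failed[f"used_{resource}"] > free:
                return False  # this resource cannot be absorbed
    return True

cluster = [
    {"total_cpu": 32, "used_cpu": 20, "total_ram": 256, "used_ram": 160,
     "total_storage": 20, "used_storage": 12},
    {"total_cpu": 32, "used_cpu": 18, "total_ram": 256, "used_ram": 150,
     "total_storage": 20, "used_storage": 11},
    {"total_cpu": 32, "used_cpu": 22, "total_ram": 256, "used_ram": 170,
     "total_storage": 20, "used_storage": 13},
]
print(survives_single_node_failure(cluster))  # True: any one node can fail
```

Note that all three resources must be checked together, which is the point of the quote: a cluster with ample spare CPU can still lose data if the surviving nodes lack the storage headroom to rebuild.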
- Underestimating network requirements
Modern clustered HCI systems depend on a consistent, predictable network and benefit from converged operations, said Lee Caswell, vice president of products for VMware's storage and availability business unit. "Start early by making sure your network team is aware that stateful storage has joined the network."
- Failing to fully understand application requirements
Some applications have a tough time coping with hyper-converged infrastructure architecture. Deduplication and compression, for example, can sometimes lead to problems if the application "locks" data. "The application will not have blocks [of data] to lock in a traditional fashion," Colonno explained.
Applications that have unique requirements, such as those with high local disk I/O or network throughput, may also not be a good fit. "HCI can make it more difficult to flexibly scale individual infrastructure components," Meehan said.
- Not anticipating the impact of future growth
Don't base a hyper-converged infrastructure architecture purchase on immediate needs. "Look two to three years down the line," said Mike Leone, senior analyst at Enterprise Strategy Group. "Know that you should be leveraging the technology to handle multiple applications, understand workload requirements and, more importantly, project the expected data growth rates of those applications." These attributes can also eliminate concerns associated with data locality within a cluster, he noted.
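Projecting data growth two to three years out, as Leone recommends, amounts to compounding each workload's footprint by its expected annual growth rate. A minimal sketch follows; the workload names, sizes and growth rates are hypothetical examples.

```python
# Illustrative two-to-three-year capacity projection for sizing an
# HCI purchase. Starting sizes and growth rates are hypothetical.

def projected_capacity_tb(current_tb, annual_growth_rate, years):
    """Compound the current footprint by the expected annual growth rate."""
    return current_tb * (1 + annual_growth_rate) ** years

# (current TB, expected annual growth) per workload on the cluster
workloads = {
    "vdi": (8.0, 0.15),         # ~15% annual growth
    "databases": (12.0, 0.30),  # ~30% annual growth
    "analytics": (5.0, 0.50),   # ~50% annual growth
}

for horizon in (2, 3):
    total = sum(projected_capacity_tb(size, rate, horizon)
                for size, rate in workloads.values())
    print(f"year {horizon}: ~{total:.1f} TB")
```

The per-workload breakdown matters as much as the total: the fastest-growing application, not the largest one today, tends to dictate when the cluster needs new nodes.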
- Committing to a hardware-based hyper-converged appliance
Why continue to purchase hardware and software bundled as an appliance and deal with the typically high-cost model that is associated with it? "Moving to a software-defined approach will allow you to leverage your existing compute infrastructure and run your storage tier on [it] using internal and direct-attach storage," said Steve Bohac, senior principal product marketing manager at Red Hat.
- Selecting a proprietary hyper-converged product
Early HCI systems required you to buy the vendor's hardware and software together -- an approach that can lead to vendor lock-in and stagnant innovation. "Many customers have begun looking to open source communities for their solutions," Bohac said. "Open source is more than just the Linux OS; it now encompasses ITaaS [IT as a service], PaaS [platform as a service], container application platforms, virtualization and storage, to name a few [options]."
- Introducing yet another vendor into your IT infrastructure
Consolidation is one of the primary benefits of moving to hyper-converged infrastructure architecture -- for example, removing a discrete storage tier from the infrastructure and combining it with the compute layer. "Certainly, this offers physical and cost benefits by reducing the physical footprint in your data center, resulting in lower power and cooling needs and costs," Bohac said.
Similar benefits can also apply to vendor consolidation, particularly if one chooses a hyper-converged offering from a current IT vendor. "Why consolidate your infrastructure and introduce a new vendor into your procurement process?" Bohac asked. Remember, too, that adding a new vendor also requires learning and adapting to yet another management system and UI structure.
Still, there are times when committing to a single-source product may not be the best idea. "For example, you may have a feature request that your current vendor cannot or will not provide," Meehan said.
- Underestimating the demands of analytics applications
"When considering such applications and analysis, it is not only about the CPU capacity, but also storage capacity," said Masood Ul Amin, vice president of technology innovation at engineering and design company Aricent.
- Failing to select a full software-defined data center stack
HCI experts start by selecting an entire SDDC software stack. "Most HCI [products] converge only storage and compute, but a full stack will also comprehend operations management, automation and software-defined networking for both virtual machines and containers across all traditional and cloud-native applications," Caswell explained.