Some IT administrators consider a reverse cloud migration simply because public cloud services are too expensive: the monthly charge never goes away and most likely never goes down. Admins also face limits on how far they can customize cloud-based applications. Whatever the reason, coming back on premises is a challenging task to undertake.
Though modern data center virtualization platforms have improved in both form and function, shoring up aging, preexisting infrastructure is a challenge, and it can get expensive once the typical three-year maintenance window expires. If it has been a while since admins hosted their applications on premises, the original hardware has likely been retired or repurposed, which means they have to start from scratch.
Selecting the right platforms after applications come back from the cloud is a difficult task that requires a lot more than simply picking out the newest equipment. It requires balancing what remains of the existing environment against building a new data center, and understanding how each choice affects cost, support and performance. Admins must take certain considerations into account before rebuilding their data center for a reverse cloud migration.
When admins look beyond the traditional rack or blade server with shared storage, hyper-converged infrastructure (HCI) jumps out. With storage embedded in the compute nodes, HCI appliances give the data center an all-in-one box. However, HCI comes with downsides, such as cost and power.
HCI stacks require a significant amount of power because compute and storage are condensed into a single chassis. This can present a problem for a data center with power feeds designed for traditional servers rather than the higher-density draw that HCI requires.
HCI power and storage requirements
This doesn't mean admins can't use HCI if the facility wasn't designed for higher power demands, but they must understand how those demands affect data center operations. For example, power limits might restrict admins to only one or two HCI nodes per rack.
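To make the rack-density trade-off concrete, the sketch below estimates how many nodes a single rack power feed can support. All wattage figures and the 80% derating headroom are illustrative assumptions, not vendor specifications:

```python
# Sketch: estimate how many nodes fit within a rack's power budget.
# The feed size, per-node draw and headroom are illustrative assumptions.

def nodes_per_rack(rack_feed_watts: float, node_draw_watts: float,
                   headroom: float = 0.8) -> int:
    """Return how many nodes fit, reserving headroom (here, 80% derating)."""
    usable = rack_feed_watts * headroom
    return int(usable // node_draw_watts)

# The same 5 kW feed sized for traditional 1U servers vs. dense HCI nodes:
print(nodes_per_rack(5000, 350))   # traditional server (~350 W): 11 nodes
print(nodes_per_rack(5000, 1200))  # HCI appliance (~1,200 W): 3 nodes
```

The same feed that comfortably powers a rack of traditional servers supports only a handful of HCI nodes, which is why deployments sometimes end up capped at a unit or two per rack.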
The other thing to remember about HCI's power draw is uninterruptible power supply (UPS) sizing. Larger shared storage platforms often have their own battery backup, supplemented by a generator, so the data center's UPSes only have to carry the compute platforms. HCI systems depend entirely on those UPSes for both compute and storage. If the UPSes aren't sized for that combined load, the result can be reduced runtimes, incomplete protection or both.
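A rough runtime calculation shows why this matters. The sketch below divides UPS battery capacity by load; the capacity and load figures are illustrative assumptions, and real batteries have nonlinear discharge curves that shorten runtimes further:

```python
# Sketch: UPS runtime shrinks when HCI puts storage on the same UPS as compute.
# Capacity and load numbers are illustrative assumptions; real battery
# discharge curves are nonlinear, so actual runtimes are typically shorter.

def runtime_minutes(ups_watt_hours: float, load_watts: float) -> float:
    """Idealized runtime estimate: capacity divided by load."""
    return ups_watt_hours / load_watts * 60

compute_only = runtime_minutes(5000, 3000)         # storage on its own UPS
compute_and_storage = runtime_minutes(5000, 4800)  # HCI: storage shares the UPS
print(round(compute_only, 1))         # 100.0 minutes
print(round(compute_and_storage, 1))  # 62.5 minutes
```

Moving storage onto the same UPS cuts the runtime by more than a third in this example, which is the kind of gap that turns a clean generator handoff into an outage.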
vSAN and NVMe drives
Even if admins don't go with HCI, they can still build shared storage in software, such as vSAN backed by local solid-state drives and/or non-volatile memory express (NVMe) drives. These can reduce the need for a dedicated shared storage array in favor of lower-cost alternatives, but there is a catch.
The catch is network bandwidth. Both vSAN and NVMe-backed storage traffic can technically run over 1 Gigabit Ethernet (GbE) links, but admins will see storage performance throttled by the network. To accommodate the increase in storage traffic, admins must plan on at least a 10 GbE network. If the existing rack switches can't handle the faster network, that's another upgrade admins must consider, and Fibre Channel over Ethernet cards need the appropriate number of capable switches.
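The bandwidth mismatch is easy to see with back-of-the-envelope numbers. The sketch below compares usable link throughput against typical flash device throughput; the 90% protocol-efficiency factor and the drive figures are rough assumptions for illustration:

```python
# Sketch: why 1 GbE bottlenecks vSAN/NVMe storage traffic.
# The efficiency factor and drive throughput figures are rough assumptions.

def link_mb_per_s(gigabits: float, efficiency: float = 0.9) -> float:
    """Usable MB/s on an Ethernet link after protocol overhead."""
    return gigabits * 1000 / 8 * efficiency

one_gbe = link_mb_per_s(1)    # ~112 MB/s usable
ten_gbe = link_mb_per_s(10)   # ~1,125 MB/s usable
sata_ssd = 550                # assumed SATA SSD sequential read, MB/s
nvme_ssd = 3000               # assumed NVMe SSD sequential read, MB/s

print(one_gbe < sata_ssd)  # True: a single SSD saturates a 1 GbE link
print(ten_gbe > sata_ssd)  # True: 10 GbE keeps pace with SATA flash
print(ten_gbe < nvme_ssd)  # True: one NVMe drive can exceed even 10 GbE
```

A single SATA SSD can outrun an entire 1 GbE link several times over, which is why 10 GbE is the practical floor for software-defined storage, and why NVMe can pressure even that.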
Compute power is another factor that admins must take into account with a reverse cloud migration. As CPUs have grown in core count and speed, wattage has risen as well, putting more load on power systems. Admins must also contend with the per-core licensing model introduced with Windows Server 2016.
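Per-core licensing means denser CPUs raise software costs, not just power draw. The sketch below follows the published Windows Server 2016 rules (a minimum of 8 licensed cores per processor, 16 per server, sold in 2-core packs); pack pricing is deliberately omitted, since it varies by edition and agreement:

```python
# Sketch: Windows Server 2016-style per-core licensing arithmetic.
# Reflects the published minimums (8 cores per processor, 16 per server,
# licenses sold in 2-core packs); check current terms before budgeting.

def cores_to_license(sockets: int, cores_per_socket: int) -> int:
    licensed = sockets * max(cores_per_socket, 8)  # 8-core minimum per processor
    return max(licensed, 16)                       # 16-core minimum per server

def license_packs(sockets: int, cores_per_socket: int) -> int:
    return -(-cores_to_license(sockets, cores_per_socket) // 2)  # ceil / 2-core packs

print(license_packs(2, 8))   # 8 packs: a 2-socket, 8-core server licenses 16 cores
print(license_packs(2, 24))  # 24 packs: dense CPUs triple the license count
```

A server at the 16-core minimum costs the same as under the old per-socket model, but a dense two-socket box with 24-core CPUs needs three times the licenses, so core counts belong in the hardware-selection math alongside wattage.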
Admins also have to question how much life remains in any hardware they retain once they begin the reverse cloud migration. Investing time and resources in an aging platform only to redo the work the following year is a waste.