Transitioning high-performance computing workloads to a colocation facility can reduce data center operation and maintenance costs. However, there are logistical considerations that admins must account for before migration, such as high-speed data connectivity, hardware transportation and application requirements.
High-performance computing (HPC) is often used for tasks such as scientific research involving large data sets, artificial intelligence and machine learning, and data-driven financial projections. HPC workloads initially ran almost exclusively in the data center.
Now, with more cloud and workload outsourcing options, corporate data centers are hosting fewer on-premises workloads. Because operating a sparsely populated data center may be cost-prohibitive, there is considerable interest in finding ways to migrate remaining workloads out of the data center and into the cloud or an HPC colocation facility.
Though public cloud providers such as Amazon and Microsoft now offer viable options to run HPC workloads in the cloud, these offerings aren't always the best option. Usage fees can make cloud-based HPC cost-prohibitive.
It might not make sense for an organization to completely abandon its existing HPC hardware just to adopt cloud technology. Moving to the cloud can require application rearchitecting, extra security measures and additional costs. A better option may be to migrate HPC workloads to a colocation facility, which allows the organization to keep its hardware but reduce the amount of required in-house upkeep.
Choosing HPC colocation services
Once your organization decides to move its HPC workload out of the data center and into an HPC colocation facility, you must make several decisions. The first is whether to reuse existing HPC hardware or to perform a hardware refresh.
This decision affects migration cost and workload availability. If, for instance, you decide to unplug all HPC hardware and physically move the components to a colocation facility, then any associated workloads will be offline for the duration of the move.
In some cases, it is possible to move a subset of HPC nodes to the colocation facility, synchronize those nodes with the on-premises nodes, redirect application traffic to the colocation facility and then move the remaining nodes.
However, taking a portion of the nodes offline for the move diminishes the application's performance until all nodes are back online. The throughput required between nodes, or between nodes and storage, can also rule out this transition option.
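Before committing to a phased move, it helps to quantify how much capacity the cluster loses while a batch of nodes is in transit. The sketch below is a minimal illustration; the node count, batch size and per-node throughput figures are assumptions, not measurements from any real cluster.

```python
# Hypothetical capacity estimate for a phased HPC node migration.
# All figures (node count, per-node GFLOPS, batch size) are
# illustrative assumptions.

def remaining_capacity(total_nodes: int, nodes_in_transit: int,
                       per_node_gflops: float) -> float:
    """Aggregate compute capacity while a batch of nodes is offline."""
    online = total_nodes - nodes_in_transit
    return online * per_node_gflops

def degradation_pct(total_nodes: int, nodes_in_transit: int) -> float:
    """Percentage of peak capacity lost during the move."""
    return 100.0 * nodes_in_transit / total_nodes

# Example: moving 4 of 16 nodes per batch
print(remaining_capacity(16, 4, 500.0))  # 6000.0 GFLOPS still online
print(degradation_pct(16, 4))            # 25.0 percent of peak lost
```

If the workload cannot tolerate that level of degradation, smaller batches (or a full hardware refresh at the destination) may be the better path.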
Another important consideration is the application data hosting location. The nature of HPC requires a high level of data throughput. This means that admins should locate application data as close as possible to the HPC nodes. This influences not only where organizations choose to move their hardware, but also what HPC colocation options are available.
If business requirements demand hosting the data in a separate data center, then admins must ensure that a high-speed direct connection exists between the colocation facility and the data storage location.
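When the data must live in a separate facility, the link between the two sites should be sized against the data volumes involved. The following back-of-the-envelope estimator is a sketch, not a capacity-planning tool; the 70% link-efficiency factor is an assumption standing in for protocol and contention overhead.

```python
def transfer_hours(dataset_tb: float, link_gbps: float,
                   efficiency: float = 0.7) -> float:
    """Hours to move a dataset over a link, allowing for overhead.
    Uses decimal terabytes (8e12 bits per TB); the default 0.7
    efficiency factor is an assumed rule of thumb."""
    bits = dataset_tb * 8e12
    effective_bps = link_gbps * 1e9 * efficiency
    return bits / effective_bps / 3600

# Example: 100 TB of application data over a dedicated 10 Gbps link
print(round(transfer_hours(100, 10), 1))  # about 31.7 hours
```

Numbers like these make it easier to judge whether a proposed direct connection between the colocation facility and the storage location is actually fast enough for the workload's synchronization windows.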
Any time an organization moves a workload to a colocation facility, admins must think about security. Colocation is comparatively inexpensive because it often splits costs among multiple tenants. This makes it important to choose a colocation facility that carefully controls (and logs) physical access to the data center.
Even if the colocation facility properly secures the data center, admins should still protect their HPC hardware from theft or tampering. Many colocation tenants opt to install fences around their assets to control physical access, and there are also options for server-level security.
Preparing for HPC colocation transfer
The most suitable approach to migrating an HPC workload depends on the nature of the move. An HPC migration to the cloud, for instance, should be handled differently than a migration to a colocation facility. Organizations often choose a colocation facility because they are running out of data center space, need to shrink their data center footprint or lack sufficient facility staff.
In most cases, the migration process mostly involves transporting the HPC hardware to the colocation facility and then hooking up the servers and cables. Admins can use backup and recovery software to transfer applications and data to the new hardware.
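After restoring applications and data onto hardware at the colocation facility, it is worth verifying that the transferred copies match the originals. The sketch below shows one generic way to do that with streamed SHA-256 checksums; the throwaway demo files stand in for real application data and are not part of any specific backup product's workflow.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 in 1 MB chunks so large
    HPC datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_transfer(source: Path, destination: Path) -> bool:
    """True only if the copied file matches the original byte for byte."""
    return sha256_of(source) == sha256_of(destination)

# Demo with throwaway files standing in for real application data
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "source.bin"
    dst = Path(tmp) / "copy.bin"
    src.write_bytes(b"hpc dataset" * 1024)
    dst.write_bytes(b"hpc dataset" * 1024)
    ok = verify_transfer(src, dst)

print(ok)  # True when source and copy are identical
```

Most backup and recovery suites perform equivalent integrity checks internally, but an independent spot check like this costs little and catches silent transfer corruption.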
For a successful migration, admins should work with the HPC colocation service provider to decide how to move applications, confirm that the applications can run properly in the new facility and determine whether any testing is required before the migration. If the move is not a simple lift and shift, any number of extract, transform and load (ETL) tools can help migrate and transform the data.
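The ETL pattern those tools implement can be sketched in a few lines. The record shape and field names below are illustrative assumptions rather than the interface of any particular ETL product.

```python
# Minimal extract-transform-load sketch. Field names and record
# shapes are hypothetical examples, not a specific tool's schema.

def extract(rows):
    """Extract: parse raw CSV-style lines into dicts."""
    return [dict(zip(("job_id", "runtime_s"), r.split(","))) for r in rows]

def transform(records):
    """Transform: cast types and derive runtime in hours."""
    return [{"job_id": r["job_id"],
             "runtime_h": round(int(r["runtime_s"]) / 3600, 2)}
            for r in records]

def load(records, store):
    """Load: append the cleaned records to the target store."""
    store.extend(records)
    return len(records)

raw = ["job-001,7200", "job-002,5400"]
warehouse = []
load(transform(extract(raw)), warehouse)
print(warehouse)  # cleaned records with runtimes in hours
```

Real migrations swap the in-memory list for a database or object store, but the extract, transform and load stages keep the same shape.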