
Edge virtualization manages the data deluge, but can be complex

Virtualizing at the edge introduces device management benefits to admins, but they must ensure edge computing is worth the time and effort.

Edge computing provides administrators with the ability to process data closer to its source, which can increase speed, decrease latency and reduce costly bandwidth consumption, but virtualizing at the edge requires time and effort.

Edge virtualization comes with several considerations. For example, admins must determine whether their data centers are ready for edge virtualization and whether they require complex instruction set computing (CISC) or reduced instruction set computing (RISC) processors. However, edge virtualization can ease device management, reduce costs and help manage vast amounts of data, all of which significantly benefit modern data centers.

A main benefit of edge virtualization is device management. By implementing virtualization at the edge, admins can track resources, monitor performance and check the health of their systems to better control their edge devices.
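On the device side, a short script can report the basic metrics an admin might track. The following Python sketch assumes the psutil package is installed on the edge device; the thresholds and function name are illustrative, not part of any vendor tool.

```python
# A minimal health-check sketch for a single edge node, assuming the
# psutil package is installed; the thresholds are illustrative only.
import psutil

def node_health(cpu_limit=85.0, mem_limit=90.0):
    """Return a simple health report for the local edge device."""
    cpu = psutil.cpu_percent(interval=1)   # average CPU use over one second
    mem = psutil.virtual_memory().percent  # RAM in use, as a percentage
    disk = psutil.disk_usage("/").percent  # root filesystem usage
    healthy = cpu < cpu_limit and mem < mem_limit
    return {"cpu": cpu, "memory": mem, "disk": disk, "healthy": healthy}

if __name__ == "__main__":
    print(node_health())
```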

Admins can use VMware ESXi to control their edge devices. This is beneficial because ESXi provides added isolation, which helps increase the security of edge devices. In addition, hypervisors such as ESXi help to ensure each VM within a network has the resources required to perform efficiently.
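As an illustration of that kind of oversight, the following Python sketch uses the pyVmomi SDK to connect to a single ESXi host and report each VM's power state. The host name and credentials are placeholders, and certificate checking is disabled only for a lab setup.

```python
# A minimal sketch that lists the VMs on one ESXi host via the vSphere API,
# assuming the pyVmomi package is installed; host and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def list_edge_vms(host, user, password):
    context = ssl._create_unverified_context()  # lab use only; validate certs in production
    si = SmartConnect(host=host, user=user, pwd=password, sslContext=context)
    try:
        content = si.RetrieveContent()
        # Walk the inventory and report each VM's name and power state.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            print(vm.name, vm.runtime.powerState)
        view.Destroy()
    finally:
        Disconnect(si)

if __name__ == "__main__":
    list_edge_vms("esxi-edge-01.example.com", "root", "password")
```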

Edge virtualization tools rise in popularity

The IT market has seen increased interest in edge computing, and virtualization vendors are building products to satisfy admins' growing edge virtualization needs.

Amazon launched AWS IoT Greengrass, which enables admins to run local compute, messaging, data caching, sync and machine learning capabilities on virtualized edge devices.
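For example, a Greengrass-deployed function can process sensor readings and publish results locally even when the device temporarily loses its cloud connection. The following Python sketch assumes the Greengrass Core SDK (greengrasssdk) is packaged with the function; the topic name and temperature threshold are illustrative.

```python
# A minimal sketch of a Lambda handler deployed to a Greengrass core device,
# assuming the greengrasssdk package is bundled with the function.
import json
import greengrasssdk

# The client routes messages through the local Greengrass core, so
# publishing still works while the device is offline from the cloud.
client = greengrasssdk.client("iot-data")

def handler(event, context):
    # Process a sensor reading at the edge and publish a local summary.
    reading = event.get("temperature", 0)
    status = "alert" if reading > 75 else "ok"
    client.publish(
        topic="edge/telemetry/summary",
        payload=json.dumps({"temperature": reading, "status": status}),
    )
```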

VMware has released Pulse IoT Center, an offering focused on edge device management that monitors and secures large-scale edge systems. VMware has also partnered with Amazon, Dell EMC and Lenovo to create hyper-converged hardware that admins can deploy at the edge.

The process of virtualizing edge computing servers is complex

With increased support for edge computing, admins are virtualizing edge computing servers. But the process and end result are not as straightforward as some might initially expect.


One issue with virtualizing an edge computing server is location. Admins must manage edge computing servers remotely, often over connections with only intermittent network access. These sites also have strict space and power constraints, which make it harder for admins to modify the server's architecture or add capacity. In addition, administration for such deployments is complex because of the lack of industry standards for edge computing and the increased number of attack vectors.

Admins have found ways to efficiently bring edge computing into their virtual infrastructure. Some admins implement containers and serverless architectures on bare metal to reduce hypervisor and VM overhead.
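As a rough sketch of that approach, the Docker SDK for Python can start a containerized service directly on a bare-metal edge node, with no hypervisor in the path. The image, port mapping and container name below are illustrative.

```python
# A minimal sketch of launching a containerized service on a bare-metal edge
# node, assuming Docker Engine and the docker Python package are installed.
import docker

def run_edge_service():
    client = docker.from_env()
    # Run a small web service container and restart it if the node reboots.
    container = client.containers.run(
        "nginx:alpine",
        name="edge-web",
        ports={"80/tcp": 8080},
        restart_policy={"Name": "always"},
        detach=True,
    )
    return container.id

if __name__ == "__main__":
    print(run_edge_service())
```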

Determine whether a data center is ready for edge virtualization

Many admins turn to edge technology because of growing network bandwidth limitations, latency and congestion, often caused by more applications, more users and vast amounts of data. Moving processing to the edge lets them build edge data centers that are comparatively small and require less power and cooling than traditional facilities, which makes them suitable candidates for virtualization.

Edge data centers have two main classifications: micro data centers and nano data centers. A micro data center is a scaled-down, on-premises data center built from rack-mounted servers capable of running large numbers of VMs or containers. Micro data centers generally offer the high levels of integration and availability that many admins require.

Compared to micro data centers, nano data centers are smaller and more limited in their capabilities. These data centers use ruggedized equipment designed for outdoor conditions and are capable of withstanding vibration, heat and cold. Nano data centers generally rely on fewer than 100 VMs or containers to support a small subset of services and applications.

CISC vs. RISC processors for edge virtualization

Edge virtualization still lacks the general-purpose, enterprise-class servers that admins use in their virtualized data centers. Edge deployments instead rely on purpose-built devices, such as IoT sensors, that feed busy network traffic to a variety of processors. To fully virtualize the edge, admins must have hypervisors that can run on a RISC architecture.

Edge computing relies on two processor types: CISC processors and RISC processors. CISC processors are best suited for servers and PC-class systems; they contain vast numbers of transistors, which drives up power consumption, and many hypervisors are designed to run on them.

Less complex workloads and applications might not require the extensive set of features and capabilities found in CISC processors. Instead, admins might opt for RISC processors, which use simplified instruction sets and don't rely on an excessive number of transistors. This helps boost performance while reducing the processor's power consumption.
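A quick way to see which family an edge host falls into is to check the machine architecture the operating system reports, using x86 as a rough proxy for CISC and Arm as a proxy for RISC. The following Python sketch uses only the standard library; the mapping is a simplification for illustration, not a formal classification.

```python
# A minimal sketch that reports whether the local edge host exposes a
# CISC-style (x86) or RISC-style (Arm, RISC-V) machine architecture,
# which can guide the choice of hypervisor or container image.
import platform

RISC_MACHINES = {"aarch64", "arm64", "armv7l", "riscv64"}
CISC_MACHINES = {"x86_64", "amd64", "i386", "i686"}

def processor_family():
    machine = platform.machine().lower()
    if machine in RISC_MACHINES:
        return "RISC"
    if machine in CISC_MACHINES:
        return "CISC"
    return f"unknown ({machine})"

if __name__ == "__main__":
    print(f"This edge host reports a {processor_family()} architecture.")
```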
