
The virtualization and container skills gap widens

Businesses face organizational and management challenges as their support staff transitions from virtualization to a new development and management environment.

Application design has shifted dramatically in the past few years.

Virtualization continues to be the primary foundation for many legacy systems, but organizations are increasingly building new applications on containers. This change requires support staff to acquire container skills, as they will spend less time configuring infrastructure and more time orchestrating system activities and double-checking artificial intelligence and machine learning actions.

An influx of new technologies -- such as cloud, containers and microservices -- is changing how businesses build, run and support applications. In the past, software was tightly coupled to a specific computing infrastructure, and IT personnel built deep technical expertise in select platforms, such as VMware's server virtualization software.

"Data center technicians' roles were defined by the certification badges -- Cisco, Microsoft, VMware -- that they had attained," said Torsten Volk, managing research director at Enterprise Management Associates.

These individuals had in-depth knowledge of certain platforms. They also understood their business' needs well enough to fine-tune configurations and maximize system performance for different applications. Nowadays, the need for such skills is disappearing.

"The challenge is that the new cloud-native workloads -- those already in the cloud -- do not look like or operate in the same way as VMs," said Roy Illsley, an analyst at Ovum.

Rather than creating application-specific configurations, enterprises rely on configuration management and orchestration tools, such as Chef, Puppet and Ansible, and cloud management platforms, such as VMware vRealize, to push out consistent configurations.

"Today, a data center tech builds a system image once and an orchestration tool uses it 100 more times," said Gary Chen, research manager of software-defined compute at IDC.

No fixed positions

Troubleshooting has also changed. With virtualization, system processing workflows follow set patterns. Monitoring tools are stationed at entry and exit points, watch data move from place to place, and note irregularities that might need remediation.

With containers, connection points have become more fluid. Information often follows multiple routes from source to destination. In addition, software components have become smaller and more ephemeral: a container might exist for only a few minutes or even a few seconds.
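One way to keep sight of such short-lived components is to subscribe to the container runtime's event stream. A sketch, again using the Python Docker SDK; the printed summary is illustrative:

import docker

client = docker.from_env()

# Stream lifecycle events for containers only. Even a container that
# lives for a few seconds still emits a start and a die event.
for event in client.events(decode=True, filters={"type": "container"}):
    action = event.get("Action")
    name = event.get("Actor", {}).get("Attributes", {}).get("name", "?")
    if action in ("start", "die"):
        print(f"container {name}: {action}")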

Another change is the number of elements IT must track. With virtualization, a system alert sends a technician scurrying to pinpoint and alleviate a potential problem.

With containers, the volume of alerts has become overwhelming as the number of system endpoints, applications and software components grows and the range of devices expands. Data center technicians simply have too many alerts to track one by one.
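A common first response is to deduplicate and group alerts before a human ever sees them. A minimal, dependency-free sketch of the idea; the alert fields and sample data are assumptions for illustration:

from collections import Counter

def summarize_alerts(alerts):
    """Collapse a flood of raw alerts into per-(source, message) counts.

    Each alert is assumed to be a dict with 'source' and 'message' keys.
    """
    counts = Counter((a["source"], a["message"]) for a in alerts)
    # Surface the noisiest groups first, so a technician reviews a
    # handful of summaries instead of thousands of raw events.
    return counts.most_common()

# Example: 3,000 raw alerts collapse to two summary lines.
raw = [{"source": "node-7", "message": "disk latency high"}] * 2900 \
    + [{"source": "node-9", "message": "pod restart loop"}] * 100
for (source, message), n in summarize_alerts(raw):
    print(f"{n:5d}x {source}: {message}")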

An evolving role

Technicians can no longer manage system configurations as they did in the past.

"Data center staff now need to function more like system architects," Volk said.

Operations teams must think at a higher system level, down to individual APIs, and understand how one API interaction might trigger other system workflows.
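To make that cascade concrete, here is a deliberately small, hypothetical sketch of one API event fanning out to downstream workflows; the event and workflow names are invented:

# Hypothetical mapping of API events to the workflows they set in motion.
WORKFLOWS = {
    "deployment.created": ["provision_storage", "register_monitoring"],
    "deployment.deleted": ["release_storage", "deregister_monitoring"],
}

def handle_api_event(event_name: str) -> None:
    """Dispatch one API interaction to every workflow it triggers."""
    for workflow in WORKFLOWS.get(event_name, []):
        print(f"{event_name} -> triggering workflow: {workflow}")

# A single API call can set several system workflows in motion.
handle_api_event("deployment.created")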

Teams also need new tools to take on this work. In response, IT has turned to big data and analytics to sift through the growing volume of system performance data. In fact, three out of four data centers are investing in predictive analytics, according to AFCOM, a data center professional organization. Rather than working as hands-on virtualization technicians, staff members are evolving to act more like data analysts; they sort through reports, evaluate key metrics and drill down for more context.
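As an illustration of that analyst-style workflow, the sketch below flags a metric that drifts from its historical baseline; the three-standard-deviation threshold and the sample readings are assumptions, not a production method:

import statistics

def flag_anomaly(history, latest, threshold=3.0):
    """Flag a metric whose latest reading sits far outside its baseline.

    history: past readings for one metric; latest: the newest reading.
    A z-score above `threshold` marks the metric for a human drill-down.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# Example: CPU readings hover near 40%, then spike.
cpu_history = [38, 41, 40, 39, 42, 40, 41, 39]
print(flag_anomaly(cpu_history, 88))  # True: worth drilling into
print(flag_anomaly(cpu_history, 43))  # False: normal variation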

Demand for more intelligence

As the job becomes more complex, companies are turning to AI and machine learning for help. Gartner estimates that only 5% of large IT departments today use AI and machine learning, but 40% will by 2022.

Such tools are helpful, but most companies don't completely trust AI or machine learning engines, and for good reason.

"There has been a lot of hype about artificial intelligence and machine learning, but there still are a lot of tasks, like systems management, that they are just not good at," Chen said.

These platforms fare well with narrow recognition tasks, such as determining what type of animal is in a picture, but they falter at more complex reasoning, according to Chen. If an AI system makes a mistake, a company's network might be knocked offline. Increasingly, companies pair these tools with human IT oversight to prevent such mishaps.
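That oversight often takes the shape of a human approval gate on machine-suggested actions. A minimal sketch of the pattern; the confidence cutoff and action names are hypothetical:

CONFIDENCE_CUTOFF = 0.95  # assumed threshold; tune to local risk tolerance

def apply_remediation(action: str, confidence: float) -> str:
    """Run high-confidence fixes; queue everything else for a human."""
    if confidence >= CONFIDENCE_CUTOFF:
        return f"auto-applied: {action}"
    return f"queued for technician review: {action} ({confidence:.2f})"

# A confident suggestion runs; an uncertain one waits for a person.
print(apply_remediation("restart stuck pod", 0.98))
print(apply_remediation("reroute core network traffic", 0.71))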

The move from a virtualized computing infrastructure to one built on containers can dramatically change data center technicians' jobs. They no longer dig deep into and fine-tune specific elements. Instead, they evaluate how all of the elements work together and rely on orchestration, AI and machine learning tools to mitigate potential problems.
