Nearly every enterprise uses some form of cloud service today. More than 97% use software as a service (SaaS); more than 42% use platform as a service (PaaS); and more than 53% use infrastructure as a service (IaaS).
Yet breadth of commitment is not the same as depth. Typically, less than a quarter of enterprise applications are delivered via SaaS. Of enterprises using IaaS and PaaS, half have fewer than 3% of their workloads running there.
But this deployment scenario is on the cusp of change. As comfort levels increase, and integration and security techniques mature, the percentages of work running in the cloud will ramp up sharply. By 2020 we could see as much as 50% of enterprise workloads running in external clouds.
Among the drivers fueling this shift are the spread of DevOps techniques and philosophies within more organizations and the arrival of true private clouds. Both depend heavily on automation and orchestration of resources: Workloads spin up, scale up, scale down and move from place to place. All of this is done automatically at the command of orchestration tools that can provision everything from an application container for a single microservice to a complex application architecture encompassing containers, virtual machines and even physical servers, plus storage, networking and security.
Along for the ride
In the midst of all this transition, IT has to make sure that application and network management tools come along for the ride.
That means IT must be able to monitor availability and performance beginning with the development phase for in-house applications and continuing throughout the evolution of a service offering, regardless of how it's sourced. Success means monitoring both service components and their underlying platforms element by element, as well as end to end, from the server to the user device.
Users don't care about all the bits in the middle, but IT needs to understand how these services appear to users as much as it needs to know about any problems in the constituent bits.
The must-monitor list
In essence, IT needs to monitor at the following levels:
- The resource level, which means:
- Compute hosts, whether they run virtual machines (VMs) or containers, or are dedicated directly to a workload;
- Storage, whether block, file or object; and
- Network, physical and virtual.
- The VM level
- The container level
- The application/microservice level
- The user-visible service level
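As a minimal sketch of what rolling these levels up might look like, the snippet below runs one health check per layer and reports which layers are failing. The layer names mirror the list above; the `service_status` function and the stand-in probe lambdas are illustrative assumptions, not any vendor's monitoring API.

```python
# Sketch: roll up per-layer health checks into one service status.
# The layer names follow the must-monitor list above; the check
# functions are stand-ins for real probes (SNMP polls, agent
# heartbeats, synthetic user transactions, etc.).
from typing import Callable, Dict


def service_status(checks: Dict[str, Callable[[], bool]]) -> dict:
    """Run every layer's check and report which layers are unhealthy."""
    results = {layer: check() for layer, check in checks.items()}
    failing = [layer for layer, ok in results.items() if not ok]
    return {"healthy": not failing, "failing_layers": failing, "layers": results}


# Stand-in probes -- real ones would query the tools at each level.
checks = {
    "resource/compute": lambda: True,
    "resource/storage": lambda: True,
    "resource/network": lambda: True,
    "vm": lambda: True,
    "container": lambda: False,  # e.g., a container failing its liveness probe
    "application": lambda: True,
    "user-visible service": lambda: True,
}

status = service_status(checks)
print(status["healthy"], status["failing_layers"])
```

The point of the roll-up is the service-centric view: a user-visible service is only as healthy as the worst layer beneath it, so the summary surfaces failing layers rather than a single averaged score.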
Moreover, IT must track both externally and internally hosted workloads.
To that end, if network management tools are currently based on a network appliance, IT needs to deploy virtual versions of those tools in external cloud environments just as it does in internal ones. Where management is based on agents running on a host, in an application server or in a container, cloud managers need to provision or configure an agent right along with the specific component it will monitor -- in the same automated workflow, declarative definition or golden image.
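One way to keep the agent in the same declarative definition as its workload is to inject it as a sidecar when the definition is generated. The sketch below does that for a container spec; the layout loosely resembles a Kubernetes pod spec, but the field names, image names and the `with_monitoring` helper are illustrative assumptions, not a specific platform's schema.

```python
# Sketch: inject a monitoring-agent sidecar into a workload's
# declarative definition, so the agent is provisioned in the same
# automated workflow as the component it will monitor.
import copy

# Hypothetical agent container definition.
AGENT_SIDECAR = {
    "name": "monitoring-agent",
    "image": "example.internal/monitoring-agent:1.0",
    "env": [{"name": "MONITOR_TARGET", "value": "localhost"}],
}


def with_monitoring(workload_spec: dict) -> dict:
    """Return a copy of the spec with the agent container added (idempotently)."""
    spec = copy.deepcopy(workload_spec)
    containers = spec.setdefault("containers", [])
    if not any(c.get("name") == "monitoring-agent" for c in containers):
        containers.append(copy.deepcopy(AGENT_SIDECAR))
    return spec


app = {"name": "billing-service",
       "containers": [{"name": "app", "image": "billing:2.3"}]}
deployable = with_monitoring(app)
print([c["name"] for c in deployable["containers"]])  # ['app', 'monitoring-agent']
```

Because the helper returns a copy and is idempotent, it can run inside any automated pipeline without mutating the source definition or stacking duplicate agents on redeploys, which is what keeps the agent and its component in lockstep across internal and external clouds.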
Likewise, IT needs a single pane of glass that provides a service-centric view of all the layers and resource pools that make up the hybridized infrastructure. Ideally, this dashboard will be folded into the cloud manager platform to permit IT to respond to events as quickly as possible. One tool and one technique will not serve. The cloud manager is the logical place to bring together all of the management and monitoring tools needed in today's distributed computing environment.
John Burke is CIO and principal research analyst with Nemertes Research. With nearly two decades of technology experience, he has worked at all levels of IT, including end-user support specialist, programmer, system administrator, database specialist, network administrator, network architect and systems architect. He has worked at The Johns Hopkins University, The College of St. Catherine, and the University of St. Thomas.