Organizations with on-premises data centers are sometimes reluctant to move their IT operations to the cloud. Also, some startups want to buy powerful, expensive servers of their own so they can be in full control of their IT infrastructure.
Despite these initial instincts, organizations that require significant compute capacity should know the benefits of cloud computing, such as high availability, cost savings and environmental sustainability.
A highly available system is one that experiences negligible downtime. Downtime is typically counted in seconds rather than minutes or hours, since cloud-based services rarely go down. Common causes of downtime in an on-premises data center include the following:
- power outages
- natural disasters
- hardware failures
- understaffed IT departments
"One of the primary benefits of moving an organization's services to the cloud is near real-time deployment capabilities in a highly available architecture," said John Breth, an architect and managing principal at consulting firm JBC.
AWS, Microsoft Azure, Google Cloud and other cloud computing platforms provide service-level agreements, or SLAs, that guarantee at least 99.95% uptime for the majority of their services. Through additional configurations, such as the use of multizone regions in IBM Cloud or multiple availability zones in Azure and AWS, the guarantee rises.
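Those percentages are easier to reason about when translated into hours and minutes. A quick back-of-the-envelope calculation shows how much yearly downtime a given SLA still permits:

```python
# How much downtime per year does an uptime SLA still allow?
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def max_downtime_minutes(sla_percent: float) -> float:
    """Return the maximum yearly downtime (in minutes) an uptime SLA permits."""
    return MINUTES_PER_YEAR * (1 - sla_percent / 100)

print(round(max_downtime_minutes(99.95), 1))  # 262.8 minutes -- about 4.4 hours/year
print(round(max_downtime_minutes(99.99), 1))  # 52.6 minutes/year
```

In other words, a 99.95% SLA still tolerates roughly four and a half hours of downtime per year; each additional nine cuts that budget by a factor of ten.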
Reliability describes how well a service performs the tasks it promises to do. It ensures highly available databases don't randomly corrupt records or delete messages. Cloud providers routinely upgrade, update, patch and test their systems to make sure their services perform as promised. They further guarantee the reliability of their services in SLAs.
For example, Azure locally redundant storage, Google Cloud Storage and Amazon S3 Glacier Deep Archive all promise eleven nines of durability for the data they maintain. That's a durability guarantee of 99.999999999%.
AWS chief evangelist Jeff Barr put eleven nines into perspective: "If you store 10,000 objects with us, on average we may lose one of them every 10 million years or so."
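Barr's figure follows directly from the durability number. A sketch of the arithmetic, treating eleven nines as an annual per-object survival probability:

```python
# Expected object loss at eleven nines of durability.
DURABILITY = 0.99999999999          # eleven nines
annual_loss_rate = 1 - DURABILITY   # chance a given object is lost in a year

objects_stored = 10_000
expected_losses_per_year = objects_stored * annual_loss_rate  # ~1e-7 objects/year

# Average years to lose a single object out of 10,000.
years_per_lost_object = 1 / expected_losses_per_year
print(f"{years_per_lost_object:,.0f}")  # roughly 10 million years
```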
What happens with an on-premises workload when demand outstrips capacity? To scale an on-premises data center, you would need to buy additional servers, install more CPUs, add memory to existing systems, expand the network and hope your upgraded infrastructure keeps pace with demand. Taking these steps is costly, time-consuming and error prone.
If you need more processing power, you can add virtual CPUs to your EC2 instances on AWS, or add virtual RAM to your ECS instances on Alibaba Cloud. And if your Kubernetes cluster needs more throughput, you can add replicas to a deployment with a few clicks.
In the cloud, you can scale your architecture in minutes and with the click of a button.
The extra capacity you provision can meet a temporary spike in demand, but what happens when demand trails off? In the cloud, you simply scale back down, paying only for what you still use.
"Having the ability to scale out or in depending upon the current need presents a lower operational expense contrasted with the capital expense required to purchase hardware that is scaled to support your maximum need," Breth said.
For example, the Oracle Cloud Infrastructure Container Engine for Kubernetes will scale cloud-native applications across VMs that it can stop and start as needed. AWS provides a specialized Auto Scaling service that helps companies dynamically rightsize EC2 instances, Aurora replicas and DynamoDB tables.
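The core idea behind this kind of dynamic rightsizing is target tracking: adjust capacity so a chosen metric, such as average CPU utilization, stays near a target value. A minimal sketch of that proportional-scaling logic, assuming average CPU is the tracked metric (the real cloud services layer cooldowns, warmup periods and min/max bounds on top of this):

```python
import math

def desired_capacity(current_capacity: int, current_metric: float,
                     target_metric: float) -> int:
    """Scale capacity proportionally so the per-instance metric approaches the target.

    Example: 4 instances averaging 90% CPU against a 50% target
    yields ceil(4 * 90 / 50) = 8 instances.
    """
    return math.ceil(current_capacity * current_metric / target_metric)

print(desired_capacity(4, 90.0, 50.0))  # 8 -- scale out under heavy load
print(desired_capacity(8, 20.0, 50.0))  # 4 -- scale back in as load drops
```

The same formula drives both scale-out and scale-in, which is why a target-tracking policy keeps the environment rightsized without manual capacity planning.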
It's almost impossible to rightsize on-premises infrastructure because you must build a system that can meet your peak annual demand. An organization with a highly seasonal business, for example, could have millions of dollars' worth of hardware and software sitting idle during slow months. That's not a good allocation of capital.
Productive developers need to experiment with new software and test their changes against various server configurations. This can be time-consuming, even for the most experienced developer. In the cloud, it takes only seconds for a developer to start an IBM Virtual Server or a DigitalOcean Droplet that runs a fully configured application stack.
One of the cloud computing benefits developers love is that it frees them from the time-consuming chore of managing infrastructure.
In the cloud, capacity planning is no longer guesswork. You simply scale up and down as needed. You don't have to spend millions of dollars up front for software licenses or mainframe servers. And you'll never run into the problem of having bought too much hardware. With autoscaling, you always have a rightsized environment.
Also, you only pay for what you use, as you use it. Since there are no big upfront expenditures, your costs become operational expenses. And because of the efficiencies that come with the cloud's economies of scale, costs are often lower than what you could achieve by running an on-premises data center of your own.
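The pay-as-you-go difference shows up quickly in even a simple comparison. A hypothetical example, assuming a $0.10-per-hour instance that a business only needs during working hours (both the rate and the schedule are illustrative, not any provider's actual pricing):

```python
HOURLY_RATE = 0.10           # hypothetical on-demand price per instance-hour
HOURS_PER_YEAR = 365 * 24    # 8,760 hours

# Always-on capacity (the on-premises model): pay for every hour, used or not.
always_on_cost = HOURLY_RATE * HOURS_PER_YEAR

# Pay-per-use: run 10 hours per weekday, roughly 260 weekdays per year.
pay_per_use_cost = HOURLY_RATE * 10 * 260

print(f"${always_on_cost:,.2f} vs ${pay_per_use_cost:,.2f}")     # $876.00 vs $260.00
print(f"savings: {1 - pay_per_use_cost / always_on_cost:.0%}")   # savings: 70%
```

Multiply that per-instance gap across a fleet and the capex-to-opex shift Breth describes becomes substantial.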
To reduce application latency, a data center should reside near its users.
AWS and Azure have data centers located on six of the seven continents; Google and IBM are on five. That immediate global reach is one of the most compelling benefits of cloud computing, especially for organizations that service customers around the globe.
With cloud-based services, you can deploy applications into any region on the globe. You can also use edge locations around the world that have the power to cache data and further reduce application latency.
Achieving this type of global reach on your own would be incredibly difficult and prohibitively expensive. In the cloud, worldwide deployment of your applications is instant and relatively inexpensive.
Achieving government and industry compliance certifications for privacy, security and other regulated standards is difficult. Thus, pre-certified compliance is one of the biggest benefits cloud computing can bring to highly regulated industries.
AWS, Azure, Google and IBM cloud-based infrastructure comes pre-certified in a multitude of fields, including the following:
- Healthcare. Health Insurance Portability and Accountability Act (HIPAA)
- Legal. Criminal Justice Information Services (CJIS)
- Privacy. Personal Information Protection and Electronic Documents Act (PIPEDA)
- Regulatory. International Organization for Standardization (ISO)
- Audit. System and Organization Controls (SOC)
Each cloud vendor maintains a public list of its compliance certifications. If the vendor cites your industry's standards as pre-certified, you can run your applications in its cloud.
Even so, security and compliance require the cloud customer to do its part. Cloud-based infrastructure can provide systems that meet strict requirements and standards, but your organization still has to know the local regulatory rules that apply to your customers, industry, government and legal system.
Some detractors suggest that moving data and applications to the cloud creates a security risk, but that is not the case.
Take AWS, for example: All data that flows across the AWS global network is automatically encrypted. Most AWS services, such as S3, provide the option to encrypt all data at rest, so that if a data storage device is compromised, the information on it is indecipherable.
Top cloud vendors provide many built-in tools to monitor for security noncompliance. For example, AWS Config, Google Cloud Asset Inventory and Azure Security Center monitor assets across projects and can run compliance checks.
Built-in encryption options, mandated encryption between data centers, and the various tools that help you track user changes and identify noncompliant configurations are not available out of the box in an on-premises data center.
Every major cloud service is also exposed through an API, which allows developers to do the following:
- provision infrastructure entirely in code;
- script away mundane, manual tasks; and
- automate complex, high-risk, error-prone tasks.
With the cloud, you can automate difficult tasks that could threaten the sanctity of your data center when performed improperly.
"Certificate rotation, applying a different encryption algorithm or even the configuration of Perfect Forward Secrecy is a matter of a few API calls," said Java champion Adam Bien. "Even disaster recovery is just a matter of configuration. It can be fully automated through infrastructure as code."
It takes resources to power a data center: land, water, energy and -- most importantly -- people.
When a cloud provider builds a massive data center, the economies of scale create efficiencies that an individual company would struggle to attain.
AWS claims customers generally use 77% fewer servers, 84% less power and a 28% cleaner mix of solar and wind power in the AWS cloud versus their own data centers.
You don't generally think of AWS, Azure or Google Cloud as leaders in the fight against climate change, but there would be a positive impact on the environment if smaller companies moved their infrastructure into the cloud rather than running their own less-efficient data centers.