Faced with a data center hardware refresh about five years after its inception, utility payment company PayGo's IT team realized it could save a lot of money by moving everything to the cloud.
PayGo migrated all the data from its Alpharetta, Georgia, data center into Amazon Web Services in 2014. "It was astonishing what we saved," said Chad Gates, PayGo's senior director of infrastructure and security.
Gates said he found the $250,000 PayGo had budgeted for its refresh could go a lot further in the cloud than in a private data center.
"When we looked at the financials, the savings were so high that we simply couldn't not do it," he said. "We had the same number of servers and same footprint in the cloud that we had in our data center for about a tenth of the cost."
The cloud migration to AWS required a lot of planning, though. PayGo stores transactional data, which must always be available for its business. That data includes Payment Card Industry Data Security Standard (PCI DSS) data and personally identifiable information (PII), so PayGo requires secure networks to meet compliance regulations.
No big red switch
"I hear lots of people talk about lift and shift, and I thought, if it was only as simple as that," Gates said. "It wasn't just throw the big red switch on the wall and pray."
For high availability, PayGo used the SIOS DataKeeper software it runs in the data center, running it instead on AWS Elastic Compute Cloud (EC2) virtual servers. SIOS DataKeeper SANless Clustering is deployed on solid-state drive storage in AWS for fast automatic failover of PayGo's Microsoft SQL Server applications. SIOS enables synchronous data replication between AWS Availability Zones (AZs).
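Product specifics aside, the core idea of synchronous replication is that a write isn't acknowledged until the copy in the other Availability Zone confirms it, so a failover loses no committed data. A minimal sketch of that idea in Python (the class and method names are illustrative, not SIOS's API):

```python
class ReplicatedDisk:
    """Toy model of synchronous block replication between two AZ nodes."""

    def __init__(self):
        self.primary = {}   # block_id -> data on the active node
        self.replica = {}   # block_id -> data on the node in the other AZ

    def write(self, block_id, data):
        # Synchronous replication: the write is acknowledged only after
        # the remote copy is confirmed, so both nodes stay identical.
        self.primary[block_id] = data
        self.replica[block_id] = data  # stands in for the network round trip
        return "ack"

    def failover(self):
        # Promote the replica; it already holds every acknowledged write.
        self.primary, self.replica = self.replica, self.primary


disk = ReplicatedDisk()
disk.write(0, b"transaction-log")
disk.failover()
assert disk.primary[0] == b"transaction-log"  # no acknowledged data lost
```

The trade-off is latency: every write waits on the inter-AZ round trip, which is why Gates notes below that the network speed between AZs makes the distance invisible to the servers.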
SQL Server is the back-end database for PayGo's applications, and setting it up played a big role in PayGo's cloud migration to AWS.
"To run SQL highly available, you need Windows Failover Cluster Manager," Gates said. "One requirement of Failover Cluster Manager is that every node in a cluster can access the same disk storage. You can manage that yourself when running it in a private data center by putting in a SAN and having a SAN administrator who can provision those disk volumes and make them available to the SQL nodes. But in AWS, all your disk are virtual. They don't have a native shared disk model."
SIOS enabled PayGo to share disk storage without an on-premises SAN.
"When we moved to the cloud, we had to figure out if it would work in the cloud, and how," Gates said. "We needed to figure out the differences. One of the tricky things is, when you go completely virtual, if there's a problem with the server, you don't get to go over and touch it. You can't go look at it. But once we figured that out, it was an easy transition."
PayGo made it work by licensing SIOS software on nodes in different AWS Availability Zones. Each AZ consists of multiple data centers.
"The network speed between them is so fast, those two servers don't know how far apart they are," Gates said. "They could be sitting in the same rack. But the advantage of splitting them up in the AZs is that if AWS has an outage that affects one AZ, the other AZ is far enough away that the outage won't affect it. So you have a high availability cluster that's geographically separated. You're not just protecting yourself against a server outage, you're protecting against a data center outage, or an availability zone outage."
That brought up another potential problem in the cloud migration to AWS, though. When on-premises, servers in a rack share the same network and subnet. But AWS requires servers in different AZs to be on different subnets.
"That introduces some complexity into how it is you're going to configure the cluster," Gates said. "And was different than what we had when we were in a private data center. Having two servers in a SQL cluster on the same subnet makes things really easy."
Gates said PayGo ironed out the problem through about three or four months of testing its cloud migration to AWS.
"We ran a proof of concept up in AWS first," he said. "SQL is really the back end; it supplies all the data for our applications. If we didn't solve this problem, we would need to look for a completely different kind of architecture.
He said PayGo started by testing small development environments in AWS for a few months.
"We tested to see, if we lose one node, how does it work? What if we failover and then failback again? Is that successful? After about three or four months of testing, we discovered this was just as robust as we have in our private data center. Then we moved forward with smaller production environments, and then finally moved everything" into AWS, he said.
Gates said PayGo first set up its full production environment in AWS with everything but data. "Then we did a hot cut," he said. "One weekend we copied all the data into the cloud, got it into the databases and made the front end DNS switch. It took a couple of days to replicate through the internet, but once that was complete, we were there."
PayGo had other considerations besides cost and security when picking a public cloud provider. Another key factor was how the major providers handled SQL Server.
"We're a Windows shop," Gates said. "When we moved in 2014, we looked at which provider was going to give us the database model we wanted. We looked at Google Cloud Platform and they said, 'Sorry, no Windows.' So they were out. We looked at Microsoft, and at that time, if you wanted to run the SQL database up in Azure, you had to run it as their SQL-as-a-service model. And that didn't have the feature set we required. AWS said, 'You've got the freedom here to run Linux, run Windows, if you want to run Windows on your virtual servers, that's fine.' So by default, AWS became the choice."