Weigh your AWS database options before a migration

A database migration can be quite messy. Determine your requirements and whether native Amazon services can meet them before you do a complete on-prem database teardown.

Enterprises are faced with a choice when they need to migrate a database to a platform such as AWS: consider the cloud-native options or embrace the challenge of architecting their own database for the cloud.

A business can use a cloud provider's database as a service or design a cloud environment that is suitable for redeploying an existing SQL, Oracle, SAP or other database. That choice depends on a number of factors. However, if a cloud provider's existing services meet your needs, it may be faster and easier to simply use one of those services and migrate your data. The cloud provider is also then responsible for maintaining the database and its related infrastructure.

AWS database options

Amazon Relational Database Service (RDS) is the quintessential example of a complete, hosted database service. It scales readily and offers varied instance types that let users emphasize memory, I/O or overall performance. The service also supports six major database engines: SQL Server, MySQL, Oracle Database, PostgreSQL, MariaDB and Amazon Aurora. Once a database is provisioned, users can migrate existing data to RDS via AWS Database Migration Service.
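To make the provisioning step concrete, here is a minimal sketch of the parameters an RDS instance request takes, in the shape of the RDS CreateDBInstance API used by boto3. All identifiers, sizes and the password are hypothetical placeholders, not recommendations.

```python
# Hypothetical parameters for an RDS instance, following the
# CreateDBInstance API shape. Values are illustrative only.
rds_params = {
    "DBInstanceIdentifier": "example-db",  # hypothetical instance name
    "Engine": "postgres",                  # one of the six supported engines
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,               # GiB
    "MasterUsername": "admin",
    "MasterUserPassword": "change-me",     # placeholder; use a secret store
    "MultiAZ": True,                       # standby replica in a second AZ
}

# With AWS credentials configured, the instance could be created with:
# import boto3
# boto3.client("rds").create_db_instance(**rds_params)
```

The Multi-AZ flag is worth noting: it asks RDS itself to maintain a standby in another availability zone, which is exactly the kind of resiliency work a self-managed deployment has to architect by hand.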

Amazon Aurora has garnered serious interest as a MySQL- and PostgreSQL-compatible relational database service. Aurora handles many of the granular, low-level tasks needed to deploy a database, such as provisioning, setup, patching and backup. Aurora offers high availability with distributed, resilient and fault-tolerant storage that can scale to 64 TB per instance. The service also emphasizes high performance with up to 15 low-latency read replicas, replication across multiple availability zones (AZs) and continuous backup to Amazon S3. A user can create and manage Aurora database instances through the Amazon RDS console.
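Because Aurora is managed through RDS, creating a cluster looks much like the RDS case, just at the cluster level. The sketch below uses the shape of the RDS CreateDBCluster API; the identifiers and values are hypothetical.

```python
# Hypothetical Aurora cluster parameters, following the CreateDBCluster
# API shape. Identifiers and values are illustrative only.
aurora_params = {
    "DBClusterIdentifier": "example-aurora",   # hypothetical cluster name
    "Engine": "aurora-postgresql",             # PostgreSQL-compatible edition
    "MasterUsername": "admin",
    "MasterUserPassword": "change-me",         # placeholder; use a secret store
    "AvailabilityZones": ["us-east-1a", "us-east-1b", "us-east-1c"],
    "BackupRetentionPeriod": 7,                # days of automated backups
}

# With credentials configured, the cluster could be created with:
# import boto3
# boto3.client("rds").create_db_cluster(**aurora_params)
```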

There are other AWS database options that support a range of enterprise needs. For example, there's Amazon DynamoDB, a managed NoSQL database; Amazon Neptune, a managed graph database service; Amazon Quantum Ledger Database, a managed ledger database that's similar in principle to blockchain; and Amazon Redshift, a data warehousing service.

Configure your own database deployment

The challenge with native AWS database options, however, is that your requirements must fit neatly within the cloud provider's offering. When your database demands more resources or capabilities than an AWS service can deliver, you may have no other choice but to assemble those resources on your own. For a large enterprise-class database deployment, this can be a complex endeavor, and cloud architects must consider a myriad of factors.

For example, a large database deployment will require compute and storage instances. This typically involves the selection of EC2 instances adequately sized for processor and memory resources. Popular databases, such as SQL Server, can use large EC2 instance types, including x1e.32xlarge with 128 virtual CPUs (vCPUs) and almost 4 TB of memory.
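The sizing decision boils down to matching the database's processor and memory requirements against published instance specs. The short function below sketches that selection over a small, hand-picked subset of instance types; the vCPU and memory figures match AWS's published specs, but the catalog itself is illustrative, not exhaustive.

```python
# A small, hand-picked catalog of EC2 instance types suitable for
# databases, ordered smallest to largest. Figures match AWS specs.
INSTANCE_TYPES = [
    # (name, vCPUs, memory in GiB)
    ("r5.4xlarge", 16, 128),
    ("r5.12xlarge", 48, 384),
    ("x1e.16xlarge", 64, 1952),
    ("x1e.32xlarge", 128, 3904),
]

def smallest_fit(vcpus_needed, memory_gib_needed):
    """Return the smallest listed type meeting both requirements."""
    for name, vcpus, mem in INSTANCE_TYPES:
        if vcpus >= vcpus_needed and mem >= memory_gib_needed:
            return name
    return None  # no listed type is large enough

print(smallest_fit(32, 512))  # → x1e.16xlarge
```

A database that needs 32 vCPUs and 512 GiB of memory lands on x1e.16xlarge here, because the memory-optimized r5 types in the list top out at 384 GiB.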

But these instances may not be sufficient for the largest database platforms, such as SAP HANA. These huge enterprise applications can take advantage of Amazon EC2 High Memory instances, such as u-6tb1.metal, u-9tb1.metal and u-12tb1.metal. Each instance offers 448 vCPUs and includes 6.1 TB, 9.2 TB and 12.2 TB of memory, respectively. AWS plans to provide even larger instances in 2019.

The database deployment will also require high-performance storage, and cloud architects could select Amazon Elastic Block Store (EBS) with solid-state drives. One set of EBS disks is typically provisioned for data and another for database recovery. The volumes can use provisioned IOPS to ensure adequate storage performance.
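Provisioned IOPS volumes come with a capacity constraint worth checking at design time: an io1 volume allows up to 50 provisioned IOPS per GiB of capacity. The sketch below encodes that check; the volume size and IOPS figures are hypothetical.

```python
# io1 volumes allow at most 50 provisioned IOPS per GiB of capacity.
def max_io1_iops(size_gib):
    return size_gib * 50

# Hypothetical spec for the data disks; size and IOPS are illustrative.
data_volume = {
    "VolumeType": "io1",
    "Size": 400,       # GiB
    "Iops": 10000,     # requested provisioned IOPS
}

# Validate the request before provisioning the volume.
assert data_volume["Iops"] <= max_io1_iops(data_volume["Size"])
print(max_io1_iops(data_volume["Size"]))  # → 20000
```

A 400 GiB io1 volume can therefore carry at most 20,000 provisioned IOPS; a workload that needs more must be spread over larger or additional volumes.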

Architects typically duplicate the deployment across one or more additional AZs to build resiliency. They must also implement networking support to handle load balancing and failover.

A database deployment within AWS will usually include a Virtual Private Cloud configured with public and private subnets to create a virtual network. A Domain Name System (DNS) service, such as Amazon Route 53, resolves database endpoint names to IP addresses, which enables easier switching or failover between AZs without the need to manually adjust IP addresses and server names on clients. An internet gateway -- and often a network address translation gateway -- supports internet access. This enables database instances to download database packages for installation and patching.
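The subnet layout behind that design can be planned with nothing more than the standard library. The sketch below carves a hypothetical VPC address space into one public and one private subnet per AZ; the CIDR block and AZ names are assumptions for illustration.

```python
import ipaddress

# Hypothetical VPC address space, carved into /24 subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))  # 256 available /24 blocks

azs = ["us-east-1a", "us-east-1b"]  # hypothetical AZ pair
layout = {}
for i, az in enumerate(azs):
    layout[az] = {
        "public": str(subnets[i]),              # internet-facing resources
        "private": str(subnets[len(azs) + i]),  # database instances
    }

print(layout["us-east-1a"])
# → {'public': '10.0.0.0/24', 'private': '10.0.2.0/24'}
```

Database instances live in the private subnets, while only load balancers, bastion hosts and NAT gateways sit in the public ones.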

An organization will often need additional tools to manage such an extensive deployment, including AWS Command Line Interface for installation, as well as multiple security groups to manage inbound access control, communication between database instances and application access to the database.
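A typical security group rule for this setup admits only the application tier's security group on the database port, rather than opening an IP range. The sketch below uses the IpPermissions shape from the EC2 AuthorizeSecurityGroupIngress API; the group ID and port are hypothetical.

```python
# Hypothetical ingress rule: allow only the application tier's security
# group to reach the database port (PostgreSQL's 5432 here).
db_ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 5432,
    "ToPort": 5432,
    "UserIdGroupPairs": [
        {"GroupId": "sg-0123exampleapp"}  # placeholder app-tier group ID
    ],
}

# With credentials configured, the rule could be applied with:
# import boto3
# boto3.client("ec2").authorize_security_group_ingress(
#     GroupId="sg-0456exampledb",  # placeholder database-tier group ID
#     IpPermissions=[db_ingress_rule],
# )
```

Referencing a security group instead of a CIDR block means application instances can be added or replaced without touching the database's firewall rules.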

These aren't your only AWS database options, but you can see many of the potential complexities involved.

Quick Start for large database deployments

Fortunately, most large database deployments on AWS don't require manual architecting, as architects can use automation tools to build a suitable cloud infrastructure. These tools include AWS Quick Start templates, used in conjunction with the CloudFormation console. AWS Quick Start eliminates much of the guesswork and common oversights that can occur when you architect complex applications for the cloud.

The AWS Quick Start template offers a comprehensive set of default parameters, such as network, EC2 instances, database settings, EBS choices, bastion host characteristics and backup preferences. But users can also input important parameters to the template, including passwords for the database, automatic storage management, the EC2 key pair name for security and desired AZs. In addition, users can tweak the default parameters to tailor the template to their unique needs.
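Those parameter overrides are supplied at stack-creation time in the list-of-key/value form that CloudFormation accepts, for example via `aws cloudformation create-stack --parameters`. The keys and values below are hypothetical placeholders; the actual parameter names depend on the specific Quick Start template.

```python
import json

# Hypothetical parameter overrides for a database Quick Start template,
# in the shape CloudFormation's create-stack call accepts. Keys and
# values are placeholders; real names come from the chosen template.
overrides = [
    {"ParameterKey": "KeyPairName", "ParameterValue": "my-keypair"},
    {"ParameterKey": "AvailabilityZones",
     "ParameterValue": "us-east-1a,us-east-1b"},
    {"ParameterKey": "DatabasePassword", "ParameterValue": "change-me"},
]

# Serialize for use with the AWS CLI's --parameters option.
serialized = json.dumps(overrides)
```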

Today, AWS Quick Start templates support cloud infrastructures capable of running some of the largest databases with Amazon EC2 High Memory instances.

Plan for testing

A database is often a core component of many other enterprise applications. This makes it critical to ensure that the deployment delivers adequate performance and availability before cutting over to the cloud-based version. Test the deployment or migration, and verify that the data is properly secured and backed up and that it meets the prevailing compliance demands for your business.
