Top Considerations When Moving from On-Premise to AWS (Part Two)

As discussed in part one, this series goes over some of the biggest differences you will likely encounter when moving from on-premise to the cloud.  In this portion, I’ll look at the architectural and operational differences that are commonly seen.

Operational and Architectural Differences

One of the most common issues I see revolves around clients attempting to achieve a 1:1 mapping of their current environment in AWS.  While a carbon copy would seem like the easiest approach, depending on the current on-prem solution, it can actually be much more challenging.  Rather than heavily modifying an existing infrastructure (think scripts, automation, etc.), it is often much easier (and cleaner) to leverage the tools already in AWS.

High Availability

As mentioned earlier, a common approach to achieving HA is to leverage floating (or virtual) IPs and/or some form of a load balancer.  Given the differences in AWS networking, an IP-based failover solution is generally very challenging to implement. This complicates tools such as Orchestrator or keepalived that are common in on-premise deployments.

A more “AWS friendly” approach for HA would be to leverage an internal Elastic Load Balancer (ELB) in conjunction with a layer 7 tool such as ProxySQL.  As an ELB is already a highly available component within AWS, you can create a target group of software load balancers that can intelligently route requests to the appropriate database server.  This gives your application a highly available endpoint with multiple layers of redundancy behind it and follows AWS best practices. This paradigm shift replaces the floating IP with a managed, load-balanced endpoint.
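As a rough illustration of the load-balancer side of this setup, the snippet below builds the request for a TCP target group of ProxySQL instances. The names, the VPC ID, and port 6033 (ProxySQL’s default MySQL-facing port) are assumptions for the sketch; in a real environment you would pass the dict to boto3’s `elbv2_client.create_target_group(**request)`.

```python
# Sketch: parameters for an internal, layer-4 target group fronting ProxySQL.
# The dict matches the shape of boto3's elbv2 create_target_group call; no
# AWS call is made here, so all identifiers are placeholders.

def proxysql_target_group(vpc_id: str, port: int = 6033) -> dict:
    return {
        "Name": "proxysql-tg",          # hypothetical name
        "Protocol": "TCP",              # layer 4: just forward MySQL traffic
        "Port": port,
        "VpcId": vpc_id,
        "TargetType": "instance",
        "HealthCheckProtocol": "TCP",   # basic TCP health check on each node
    }

request = proxysql_target_group("vpc-0123456789abcdef0")
```

Application traffic then hits the ELB’s single DNS endpoint, the ELB spreads it across the ProxySQL nodes, and ProxySQL routes each query to the appropriate database server.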

Backups/Disaster Recovery

Looking at an on-premise deployment, backups and DR generally follow this standard pattern:

  • Backups written either locally (and moved) or directly to some network-attached storage
  • DR achieved by replicating to another physical data center

In terms of backups, there are several different approaches that can be taken, both in the process itself and in the storage options available. Some options include:

  • Writing to an Elastic Block Store (EBS) volume and detaching it or taking a snapshot
  • Writing to an Elastic File System (EFS) mount point attached to the instance (NFS equivalent)
  • Writing to an EBS volume and moving the backup to S3 (flexibility in storage tier, retention, etc.)

How you manage backups is highly dependent on existing processes.  In the case where you are actively restoring backups for other environments, a portable EBS volume or EFS mount may be a good option.  If you hope to never use your backups, then S3 (with an automatic rotation to Glacier) may be a more cost-effective alternative.
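The S3-with-Glacier-rotation option mentioned above is usually configured as a bucket lifecycle rule. The sketch below builds one such rule; the prefix and the day counts are assumptions, and in practice you would apply it with boto3’s `s3_client.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=config)`.

```python
# Sketch: an S3 lifecycle configuration that transitions backups to Glacier
# after 30 days and deletes them after a year. The "backups/" prefix and the
# day counts are hypothetical; tune them to your retention policy.

def backup_lifecycle(glacier_after_days: int = 30,
                     expire_after_days: int = 365) -> dict:
    return {
        "Rules": [
            {
                "ID": "rotate-db-backups",
                "Filter": {"Prefix": "backups/"},   # only touch backup objects
                "Status": "Enabled",
                "Transitions": [
                    {"Days": glacier_after_days, "StorageClass": "GLACIER"}
                ],
                "Expiration": {"Days": expire_after_days},
            }
        ]
    }

config = backup_lifecycle()
```

Once the rule is in place, rotation is fully automatic: no cron jobs or cleanup scripts to maintain.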

In AWS, DR is generally similar in that there is replication to another location.  However, the big difference here is that you don’t need to provision another data center, as this is managed through regions. Simply spin up an instance in a different region, start replication, and you have a national or global DR strategy in place.
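A common first step in this cross-region pattern is copying the database server’s AMI into the DR region so a replica can be launched there. The sketch below builds that request; the AMI ID, name, and regions are placeholders, and the dict would be passed to `boto3.client("ec2", region_name=dr_region).copy_image(**request)`.

```python
# Sketch: the request to copy a database server's AMI into a DR region.
# Matches the shape of boto3's EC2 copy_image call; all IDs are placeholders.

def dr_ami_copy_request(source_ami_id: str,
                        source_region: str = "us-east-1") -> dict:
    return {
        "Name": "db-replica-dr",        # hypothetical name for the copied AMI
        "SourceImageId": source_ami_id,
        "SourceRegion": source_region,  # the copy runs IN the destination region
    }

request = dr_ami_copy_request("ami-0abcdef1234567890")
```

From there, launching an instance from the copied AMI in the second region and pointing replication at the primary gives you the cross-region DR described above.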

In what has been a common theme, the options for tailoring your processes to native AWS components are vast.  Choosing a native AWS component or best practice will generally be a much better (and often more cost-effective) option than trying to mirror an on-premise deployment.


Automation

This is a segment that is often fairly different and requires some significant changes.  While AWS offers tools like OpsWorks for Chef Automate, sometimes that may not be the best fit.  Much of the automation will require the use of the AWS CLI or API. Launching instances, registering them with an Elastic Load Balancer (ELB), creating and attaching EBS volumes: all of these actions can be done programmatically.  However, if you are already dealing with a large on-premise deployment, much of this automation will need to be rewritten for AWS.
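To give a feel for the per-action API plumbing involved, the helpers below build the parameters for three of the actions just mentioned. All IDs, ARNs, and device names are placeholders; in a real script each dict would be passed to the matching boto3 call (`ec2.run_instances`, `elbv2.register_targets`, `ec2.attach_volume`).

```python
# Sketch: request builders for common migration automation steps. The dicts
# match the shapes of the corresponding boto3 calls; nothing here contacts AWS.

def launch_request(ami_id: str, subnet_id: str,
                   instance_type: str = "m5.large") -> dict:
    # Launch exactly one instance from a prepared AMI.
    return {"ImageId": ami_id, "InstanceType": instance_type,
            "MinCount": 1, "MaxCount": 1, "SubnetId": subnet_id}

def register_request(target_group_arn: str, instance_id: str,
                     port: int = 6033) -> dict:
    # Register the new instance with a load balancer target group.
    return {"TargetGroupArn": target_group_arn,
            "Targets": [{"Id": instance_id, "Port": port}]}

def attach_request(volume_id: str, instance_id: str,
                   device: str = "/dev/sdf") -> dict:
    # Attach a data (or backup) EBS volume to the instance.
    return {"VolumeId": volume_id, "InstanceId": instance_id,
            "Device": device}
```

Each of these replaces something an on-premise shop might do with PXE boot, DNS edits, or SAN tooling, which is why existing scripts rarely port over unchanged.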

There are tools such as Auto Scaling groups that can help when combined with AMIs of your servers.  For intermediate layers (ProxySQL, for example), you can keep two or more instances alive (for HA) using a simple Auto Scaling group.  The actual database layer is generally more complicated, but certainly not impossible.
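A minimal sketch of that ProxySQL-layer idea: an Auto Scaling group pinned to exactly two instances, so AWS replaces any node that dies. The group name, launch template name, and subnets are assumptions; the dict would be applied with `boto3.client("autoscaling").create_auto_scaling_group(**request)`.

```python
# Sketch: an Auto Scaling group that keeps exactly two ProxySQL instances
# running for HA. Matches the shape of boto3's create_auto_scaling_group
# call; names and subnet IDs are placeholders.

def proxysql_asg_request(launch_template_name: str,
                         subnet_ids: list) -> dict:
    return {
        "AutoScalingGroupName": "proxysql-asg",
        "LaunchTemplate": {
            "LaunchTemplateName": launch_template_name,  # AMI + instance config
            "Version": "$Latest",
        },
        "MinSize": 2,       # never fewer than two nodes
        "MaxSize": 2,       # fixed fleet: this is for HA, not elasticity
        "DesiredCapacity": 2,
        # Comma-separated subnets in different AZs spread the pair out.
        "VPCZoneIdentifier": ",".join(subnet_ids),
    }

request = proxysql_asg_request("proxysql-lt", ["subnet-aaa", "subnet-bbb"])
```

Min and max are deliberately equal here: the group is being used as a self-healing fixed fleet rather than for elastic scaling.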


In conclusion, architecture in AWS will probably look somewhat different from an on-premise deployment.  While this could mean additional work up front, it is best to embrace the differences and design your system for the cloud. Even though it is sometimes possible to directly mimic an existing infrastructure, you will likely end up with a fragile deployment that causes more headaches in the long run.  Different doesn’t mean “worse”. Rather, it just means that you have a clean slate and the opportunity to leverage all the benefits of the cloud!

If you or your team is preparing for such a move, don’t hesitate to reach out to Percona and let us help guide you from start to finish!

For more information, download our solution brief below which outlines setting up MySQL Amazon RDS instances to meet your company’s growing needs. Amazon RDS is suitable for production workloads and also can accommodate rapid deployment and application development due to the ease of initial setup.

Grow Your Business with an AWS RDS MySQL Environment
