
How to Back Up MySQL Databases on Amazon RDS

Cloud-based services let you scale capacity as your workload grows and automate the operation and maintenance of your infrastructure at a lower cost.

There are several good approaches to backing up MySQL databases on Amazon RDS. Backing up data can be challenging, and the solutions on the market vary greatly, so it's important to find one that fits the specific needs of your business. To learn more about backing up MySQL databases on AWS, including the benefits, limitations, and common backup methods, see the Amazon RDS guide.

Best Approaches to Back Up MySQL Databases on Amazon RDS

Physical Backup 

A physical backup (database snapshot) is a copy of the entire contents of the MySQL data files. The purpose of this type of backup is usually to prevent data loss in the event of a catastrophic server failure. You can take physical snapshots of a self-managed MySQL database running on premises or on Amazon EC2 instances using Percona XtraBackup. You cannot use Percona XtraBackup to create database snapshots of RDS for MySQL instances.

How to perform a physical backup

Amazon S3 is an excellent place to store backup files, and Amazon RDS is a fantastic way to protect your on-premises data by backing it up to the cloud. If you're not familiar with these services, here is what you need to set up before you can back up your files:

  • An Amazon S3 bucket to store your backup files.
  • A backup of your on-premises database created with Percona XtraBackup.
  • An IAM role that grants Amazon RDS permission to access the bucket (see the sketch below).
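
If you work from the AWS CLI, you can also create this role there. The following is a minimal sketch; the role name rds-s3-restore-role is hypothetical, and the broad AmazonS3ReadOnlyAccess managed policy is used only for illustration (in practice, attach a policy scoped to your backup bucket):

# Create a role the RDS service is allowed to assume (role name is hypothetical)
aws iam create-role --role-name rds-s3-restore-role \
    --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"rds.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

# Grant S3 read access; in production, scope this down to your bucket
aws iam attach-role-policy --role-name rds-s3-restore-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess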

The command-line tool Percona XtraBackup creates a complete backup of your MySQL database files, which you can upload to Amazon S3 and later restore into a new RDS for MySQL instance.

This example creates a full backup of a MySQL database and stores it in the /on-premises/s3-restore/backup folder.

xtrabackup --backup --user=<myuser> --password=<password> --target-dir=</on-premises/s3-restore/backup>
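
Once the backup finishes, you can upload the target directory to your bucket with the AWS CLI; the bucket name below is a placeholder:

# Copy every backup file in the target directory to S3 (bucket name is hypothetical)
aws s3 cp /on-premises/s3-restore/backup s3://<your-backup-bucket>/backups/ --recursive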

You can compress and split your backup into any of these formats:

  • Gzip (.gz)

xtrabackup --backup --user=<myuser> --password=<password> --stream=tar \
   --target-dir=</on-premises/s3-restore/backup> | gzip - | split -d --bytes=500MB \
   - </on-premises/s3-restore/backup/backup>.tar.gz

  • Tar (.tar)

xtrabackup --backup --user=<myuser> --password=<password> --stream=tar \
   --target-dir=</on-premises/s3-restore/backup> | split -d --bytes=500MB \
   - </on-premises/s3-restore/backup/backup>.tar

  • Percona xbstream (.xbstream)

xtrabackup --backup --user=<myuser> --password=<password> --stream=xbstream \
   --target-dir=</on-premises/s3-restore/backup> | split -d --bytes=500MB \
   - </on-premises/s3-restore/backup/backup>.xbstream
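
With the files in S3, you can restore them into a brand-new RDS for MySQL instance. A rough sketch with the AWS CLI follows; the bucket, role, instance class, storage size, and engine version are all assumptions you would replace with your own values:

# Create a new RDS for MySQL instance from the Percona XtraBackup files in S3
aws rds restore-db-instance-from-s3 \
    --db-instance-identifier mydb-from-backup \
    --engine mysql \
    --source-engine mysql \
    --source-engine-version 8.0.32 \
    --s3-bucket-name <your-backup-bucket> \
    --s3-prefix backups \
    --s3-ingestion-role-arn arn:aws:iam::<account-id>:role/rds-s3-restore-role \
    --db-instance-class db.m5.large \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password <password>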

For further details, see the Amazon RDS User Guide.

Logical Backup

A logical backup saves the content of a database, the structure of the database, or both, in a logical format. This kind of backup is typically used when provisioning a new machine, installing a new software package, or restoring data onto a new platform. For example, if your site gets a spike in traffic, you can take a logical backup to cloud-based storage and restore it to a machine with more capacity. You can create a logical backup of your MySQL database using tools like mysqldump or MyDumper.

Method 1: Using flat files

When importing data into a database, it is much more efficient to load flat files than to replay SQL statements. There are multiple methods for importing data from flat files; one of the easiest is the LOAD DATA INFILE statement, which can be very quick. If you create your flat files at the database level, they are created on a per-table basis, so loading the data into the database in parallel is also possible. However, RDS for MySQL gives you no direct access to the server's file system, so you cannot create flat files on the instance itself. You can work around this with the following approaches:

1) Use AWS Database Migration Service

You can use AWS Database Migration Service (AWS DMS) to move data to Amazon S3 directly from RDS for MySQL databases. As the first step, connect to your RDS instance using the AWS console. Next, select the database and click the "RDS Instances" tab. From there, access the "Copy DB" link; you'll be directed to the "Choose a source database" screen. Choose the source instance, select "Copy DB to S3" in the drop-down menu, and click "Create a copy of myDB." This initiates the copy to Amazon S3. For more information on Amazon S3 and how to configure it, see the Amazon S3 documentation.

2) MySQL’s Command Line Interface (CLI) 

You can use it to work with databases from the command line. If the CLI is installed and available, you can create flat files from a command prompt window. Copy and paste this command:

mysql -h <endpoint> -u <username> -p --silent --batch --quick --raw -e "select * from <database_name>.<table_name> limit 10" > <filename>.csv
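
To load such a file back into a table, one option is LOAD DATA LOCAL INFILE, which streams the file from the client machine (useful with RDS, where you have no access to the server's file system). A minimal sketch, assuming the tab-separated output produced by the command above; note that, depending on your MySQL version, the local_infile server variable may need to be enabled in your parameter group:

# Load the flat file from the client into the target table (all names are placeholders)
mysql --local-infile=1 -h <endpoint> -u <username> -p -e \
    "LOAD DATA LOCAL INFILE '<filename>.csv' INTO TABLE <database_name>.<table_name>"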

Method 2: Create multiple files

Back up your data into separate files, one file per table. Splitting the backup into multiple files lets you import them in parallel, which reduces the time it takes to import data into your database and provides a faster rollback. When tables are large, you can further split the table data into multiple files of roughly 500 MB each.

Here’s one way to create multiple dump files for each table within a database.

mysqldump -h <endpoint> -u <username> -p <database_name> <tablename> > <dumpfilename.sql>

To import the files, copy and paste the following command:

mysql -h <endpoint> -u <username> -p <database_name> < <dumpfilename.sql>
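
To produce one dump file per table without running mysqldump by hand for each one, a simple shell loop works; this sketch lists the tables and runs the dumps in the background so they execute in parallel (connection details are placeholders):

# Dump each table into its own file, in parallel
for t in $(mysql -h <endpoint> -u <username> -p<password> -N -B -e "SHOW TABLES FROM <database_name>"); do
    mysqldump -h <endpoint> -u <username> -p<password> <database_name> "$t" > "$t.sql" &
done
wait   # block until all background dumps have finished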


Using primary key order

Loading data in primary key order is the fastest way to insert rows, and it's especially important for tables that don't fit entirely within the buffer pool. To take advantage of this, create the dump in primary key order using the following command:

mysqldump -h <endpoint> -u <username> -p --order-by-primary <database_name> <tablename> > <dumpfilename.sql>

Import data without secondary and full-text indexes

For enormous datasets, it's much faster to load your data into a table that doesn't have secondary or full-text indexes, and to create the indexes after the data is loaded; you can then drop any index that turns out to be unneeded. Full-text indexes are particularly expensive during loading: as soon as a document is inserted, it's tokenized, and the individual words and associated data are inserted into the full-text index.
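
In practice, that means creating the table with only its primary key, loading the data, and adding the remaining indexes afterwards. A sketch of the final step; the table and column names are invented for illustration:

# Add secondary and full-text indexes only after the bulk load completes
mysql -h <endpoint> -u <username> -p <database_name> -e \
    "ALTER TABLE articles ADD INDEX idx_author (author_id), ADD FULLTEXT INDEX ft_body (body);"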

How to enable automated backups

It is pretty simple to enable automated backups in Amazon RDS. You need to perform these steps:

  1. Open the Amazon RDS console.
  2. From the navigation pane, choose Databases.
  3. Choose the DB instance you want to modify.
  4. Choose Modify.

The Modify DB instance page appears. Here you can choose the backup retention period, which determines how many days of automated backups are kept; setting it to a nonzero value enables automated backups. After choosing the period, choose Continue. Finally, choose Modify DB instance to save your changes and enable automated backups.
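
The same change can be made from the AWS CLI with modify-db-instance; setting the retention period to a nonzero number of days turns automated backups on (the instance identifier is a placeholder):

# Keep 7 days of automated backups; --apply-immediately skips the maintenance window
aws rds modify-db-instance --db-instance-identifier <mydb-instance> \
    --backup-retention-period 7 --apply-immediately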

Why a logical backup is preferred over a physical one

With a physical backup, you can create a backup of your database using Percona XtraBackup and restore it to a new instance using the Restore from Amazon S3 option. However, with Percona XtraBackup, you won't be able to restore data to an existing RDS for MySQL instance, only to a new one.
