MySQL Solution for Hot backup for 60GB database

System Admin

First, I've read that mysqldump can be slower than enterprise backup tools, so that is where I'm exploring now.
I'm trying to find a solution for a hot backup of a 60GB database. It needs to be encrypted because of sensitive data, so I'm looking for the software to use, commands, and tips.

MySQL Enterprise Backup is what I'm looking at. There's a 30-day free trial, but I cannot find how much it costs.
If I wanted this recoverable up to the closest point in time, like 15 minutes or 1 hour, how can that be accomplished? Which command?

I've also seen Percona XtraBackup 2.4.8, but it seems to be Linux-based, and I would have to spin up and pay for another Linux server. I know the DB is small, but with the nightly backups we take, the worst case is recovering data from 23 hours ago. I need a smaller window.

Also, since the DB is so small, would it be faster to back up to the same server and then have an FTP script send or copy it somewhere else?
The fear is that with this application, the vendor says there are reasons a table can become corrupt. I also won't know the time it becomes corrupt, so if I'm backing up every hour or on every change, is the next backup just going to overwrite the good copy? How do I accomplish the simple goal of disaster recovery up to the minute of the incident, whether it's a corrupt table or an entire database that needs to be restored?
Mark Murphy, IT Consultant

Can you use the point-in-time recovery technique so you don't have to do so many full backups? See the MySQL manual for specifics. Basically, you would do a daily backup and turn on binary transaction logging. Then if you need to recover, you restore the last full backup and apply the transaction logs to get current, so you lose nothing.
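As a sketch, binary logging is turned on in the server configuration file. These are standard mysqld options, but the paths and values here are illustrative, not from this thread — check them against your version's docs:

```ini
# /etc/my.cnf -- illustrative settings; adjust paths and sizes for your server
[mysqld]
log-bin          = /var/lib/mysql/binlog/mysql-bin   # enable binary logging
server-id        = 1       # an id is required when binary logging is enabled
expire_logs_days = 7       # auto-purge binlogs older than a week
max_binlog_size  = 1G      # rotate to a new binlog file at roughly 1GB
sync_binlog      = 1       # sync each write to disk for crash safety
```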


OK, I see some of that and understand a little more. My bin logs folder seems to skip days; is that normal?

They are around 1GB per file but have dates like 10/1, 10/3, 10/5, 10/9, 10/11, 10/13. Does it save every two days? Or how does it decide when to start a new bin log? The vendor set this bin logging up. So if it takes a few days to complete a full bin log, can I just copy the files over while they are being written?
Mark Murphy, IT Consultant

Section 5.4 of the docs (see my previous link) gives information about bin logs. A new file is created when the current file becomes full, so it appears that in your environment it takes about two days to fill up and detach a bin log file. I would expect that the current bin log file is held open by the server; there is a line in the docs saying you can't edit the bin log file while the server daemon is running, as that would confuse the server, which tends to confirm this to me.

There are ways, again see the documentation, to start processing a bin log at a specific point in time, or at a specific event (which may be better if you can have multiple events occurring at the same timestamp), and also a way to determine what the events are around a specific time range.
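A hedged sketch of that time-range processing with mysqlbinlog — the options are standard, but the file names and timestamps below are made up for illustration:

```shell
# Inspect events in a window to find the exact point of the incident
mysqlbinlog --start-datetime="2017-10-13 09:00:00" \
            --stop-datetime="2017-10-13 09:15:00" \
            mysql-bin.000042 | less

# Replay everything up to just before the incident
mysqlbinlog --stop-datetime="2017-10-13 09:14:00" \
            mysql-bin.000042 | mysql -u root -p
```

Stopping at a specific event position (`--stop-position`) rather than a timestamp avoids ambiguity when multiple events share the same second.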

mysqldump also has an option to delete binary logs on completion. This ensures that everything in the remaining binary log files occurred after the backup.

You might want to contact your vendor concerning the way your server environment is set up.
Mark Murphy, IT Consultant

The documentation at section 7.3 shows a decent backup and recovery strategy using mysqldump and binary logs.
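As a hedged sketch of that strategy, combined with encryption for the sensitive data the question mentions: the mysqldump and openssl flags below are standard, but the paths, passphrase file, and binlog file names are illustrative assumptions (and `-pbkdf2` requires OpenSSL 1.1.1+).

```shell
# Nightly full dump: --single-transaction takes a consistent InnoDB snapshot
# without locking tables; --flush-logs starts a fresh binary log at the dump
# point; --master-data=2 records the binlog coordinates as a comment.
mysqldump --single-transaction --flush-logs --master-data=2 \
          --all-databases \
  | gzip \
  | openssl enc -aes-256-cbc -salt -pbkdf2 -pass file:/root/.backup_pass \
  > /backups/full-$(date +%F).sql.gz.enc

# Recovery: decrypt and restore the dump, then replay the binlogs
# written since that dump to roll forward to the minute of the incident.
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:/root/.backup_pass \
  -in /backups/full-2017-10-13.sql.gz.enc | gunzip | mysql -u root -p
mysqlbinlog mysql-bin.000043 mysql-bin.000044 | mysql -u root -p
```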
Fractional CTO, Distinguished Expert 2018
Here's how I accomplish this for my hosting clients with massive amounts of data.

1) Backup site - service mysql stop

2) Backup site - rsync production site /var/lib/mysql to backup site.

At this point the backup site data is inconsistent.

3) Production site - Place site(s) in maintenance mode.

4) Production site - service mysql stop

5) Backup site - 2nd rsync production site /var/lib/mysql to backup site.

At this point the backup site data is consistent.

6) Production site - take site(s) out of maintenance mode + service mysql restart

7) At this point your backup site /var/lib/mysql will contain a bit-wise copy of the production site, with data being consistent.

At this point you can restart the backup mysql service + use mysqldump.

Or you can just backup raw /var/lib/mysql files.

You can also, startup backup server mysql service + do complex backups, like dumping data using text dumps + diffing current data with previous data to create small incremental backups or use the incremental backup system built into mysqldump.
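The two-pass rsync sequence above can be sketched as a script run on the backup server. Host names, paths, and the maintenance-mode step are illustrative assumptions; adapt them to your environment:

```shell
#!/bin/sh
# Pass 1: bulk copy while production is still live.
# The result is inconsistent, but it moves the vast majority of the data.
service mysql stop
rsync -a --delete prod:/var/lib/mysql/ /var/lib/mysql/

# Pass 2: stop production briefly so only the small delta moves and the
# copy ends up consistent. (Maintenance-mode toggling is site-specific.)
ssh prod 'service mysql stop'
rsync -a --delete prod:/var/lib/mysql/ /var/lib/mysql/
ssh prod 'service mysql start'

# The backup copy is now a consistent bit-wise snapshot; restart the
# backup server's mysql and run dumps there, off the production box.
service mysql start
mysqldump --all-databases > /backups/full-$(date +%F).sql
```

Because the second rsync runs against an already-populated copy, the production downtime is only as long as the delta transfer, not the full 60GB.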

This may seem like overkill + here's what all this buys you.

The data moved from your production site to backup site is usually very small, as very little data has changed between backups.

Also, your entire backup sequence uses CPU + Disk I/O on the backup server, rather than production server.

Also, if you make backups on your production server + try copying these backups somewhere else, you'll have to copy 60G of data for each full backup, which will likely saturate your production site(s) IP Stack + connection, causing slow content serving to visitors.

If you move the entire backup processing to another server, then the only real resource in use is 60G of disk space on your backup server + a small amount of Network I/O between your production site + backup server.


This is a good answer because I found out the vendor doesn't even want hot backups from third-party software. So this might have to be the way to complete the task.


I meant to give points to Mark for the mysqldump solution...
