How to back up a MySQL database that is too big to mysqldump? Can't take the server offline.

Octalys
Hi,

How do I back up a MySQL database that is too big to mysqldump? I can't take the server offline. The database is live and contains 20 million rows of records.

I haven't been able to back it up for a few months. The plan is to use MySQL replication when we upgrade the code, but for now we simply want a mysqldump. When we try it, though, the whole server stops doing everything else and only works on dumping the database, which takes quite some time to finish. How do people dump such a big database on a live server?

Thank you.

Author

Commented:
Hi,

Thanks for the answer, but I don't think BigDump is what I'm looking for. I'm not transferring data to another host or importing it elsewhere; I just want a local backup. The problem is that the second I run any dump tool based on mysqldump, everything gets locked for the duration of the dump.

Author

Commented:
BigDump's own description:
"To restore the very large backup of your MySQL database (or a part of it) into a new or the same MySQL database, when you can't access the server shell and can't import the dump using phpMyAdmin or any other scripts due to the hard memory or runtime limits of the web server."

I don't want to restore, just back up, and I do have access to the shell.

Hi,

You can also use replication:

http://onlamp.com/pub/a/onlamp/2005/06/16/MySQLian.html?page=last&x-maxdepth=0

Take a look at this.

Regards.
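As a rough sketch of what a minimal replication setup looks like (the server IDs and log name below are hypothetical examples, and the details vary by MySQL version):

```
# my.cnf on the master (hypothetical values):
[mysqld]
server-id = 1
log-bin   = mysql-bin

# my.cnf on the slave:
[mysqld]
server-id = 2
```

After restarting both servers, you would run CHANGE MASTER TO ... and START SLAVE on the slave; the article above walks through the full procedure.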
Top Expert 2007

Commented:
What options are you currently using with mysqldump?

Author

Commented:
No special arguments, actually. I just run mysqldump on its own.
Top Expert 2007
Commented:
You could try the

--quick

option to see if that makes much difference.
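For example (a sketch; `mydb` is a placeholder database name):

```
# --quick makes mysqldump fetch rows from the server one at a time
# instead of buffering each whole table in memory before writing it
mysqldump --quick mydb > mydb.sql

# compressing on the fly also keeps the output file smaller
mysqldump --quick mydb | gzip > mydb.sql.gz
```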
Another option is to go straight to the MySQL installation and look for the directory named `data`. Each database has a subdirectory of the same name in there, so you can back up that directory with your regular file-backup software as well. (Note that copying the files of a running server can yield an inconsistent copy unless the tables are locked first.)
You could do this automatically by running a bash script with something similar to this:

#!/bin/bash

LOCAL_BACKUP_ROOT=/media/usb/backups
DATABASE_DUMP_DIR=$LOCAL_BACKUP_ROOT/database

DATABASE_SERVER=mysql
DATABASE=your_database_name
DATABASE_USERNAME=user
DATABASE_PASSWORD=pw
DB_DUMP_FILENAME=$DATABASE.$(date +%Y%m%d).sql

mysqldump -h $DATABASE_SERVER -u $DATABASE_USERNAME --password=$DATABASE_PASSWORD --opt -l $DATABASE > $DATABASE_DUMP_DIR/$DB_DUMP_FILENAME && gzip $DATABASE_DUMP_DIR/$DB_DUMP_FILENAME

I may be a bit off, but this way you can keep a compressed copy on your server somewhere. Once the script is written, just enter it as a cron job to run overnight so that it doesn't slow down your productive hours.
this should help you configure a cron job:
http://drupal.org/cron
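For example, a crontab entry like this (the script path is a hypothetical placeholder) runs the backup at 3 a.m. every night:

```
# minute hour day-of-month month day-of-week command
0 3 * * * /usr/local/bin/mysql_backup.sh
```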

Cheers
Commented:
mysqldump is probably locking the tables, as it should while taking a snapshot, since the default backend (MyISAM) doesn't support transactions. You may want to try:

mysqldump --skip-opt --skip-lock-tables MYDB > MYDB.sql

While the snapshot you get may contain inconsistent data, at least you can take it without causing the locks.
Commented:
For tables backed by the InnoDB engine, there's a commercial product
for performing a Hot Backup
http://dev.mysql.com/doc/refman/5.0/en/innodb-backup.html

For MyISAM tables, an alternative to mysqldump is mysqlhotcopy:
http://dev.mysql.com/doc/refman/5.0/en/mysqlhotcopy.html

It works by performing a LOCK TABLES. Once all the tables are locked (so all in-flight statements have finished, and new writes can't start until the copy is done), mysqlhotcopy uses the "cp" command to copy the raw database files from /var/lib/mysql.


To test that your backup has worked, you can then place the copied files in a second server's /var/lib/mysql directory (or the install-specific equivalent), start mysqld, and see the data as of when the backup was taken.
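A usage sketch (the database name, credentials, and target host below are placeholders):

```
# Lock, flush, and copy the MyISAM files of mydb to /var/backups/mysql/mydb
mysqlhotcopy -u root -p secret mydb /var/backups/mysql

# To verify, move the copied directory into a test server's data directory
scp -r /var/backups/mysql/mydb testhost:/var/lib/mysql/
```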


A similar method is to use a filesystem- or volume-level snapshot, when mysqld runs on a VxFS or ZFS volume (on an OS that supports such). Once the snapshot is made, you copy the files from the snapshot to the backup server to get a 'crash-consistent' backup. (This is suitable primarily for InnoDB tables; MyISAM tables are more likely to be damaged in a crash.)

Linux LVM can also perform snapshots, but I don't recommend it for live DB backups, due to the poor write performance while a snapshot exists.
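A rough sketch of the ZFS variant (the dataset and host names are hypothetical; note that FLUSH TABLES WITH READ LOCK only holds while the client session that issued it stays open, so a real script keeps one session open across the snapshot):

```
# In an open mysql session: make the on-disk files consistent
#   mysql> FLUSH TABLES WITH READ LOCK;

# From the shell, take the (near-instant) snapshot
zfs snapshot tank/mysql@nightly

# Back in the same mysql session: release the lock
#   mysql> UNLOCK TABLES;

# Later, ship the snapshot to the backup server at leisure
zfs send tank/mysql@nightly | ssh backuphost "zfs recv tank/mysql-backup"
```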

Author

Commented:
I don't think there's a good way to do this. That's why you should always set up a cluster or replication before rolling out a big live database.
