Carletto (Italy)

asked on

MySQL dump stalls my web server; it needs improvement

Hi to all of you,
I have the following configuration:
a Linux server running Apache and MySQL 5.5.
At night a cron job runs (see it below; it's simple, as I'm new to bash). We use it to dump the database and transfer it to a remote server in an Internet repository.

Problem:
during the dump/transfer of the database, the web site goes offline (I use a monitoring tool called Pingdom).

I feel the problem is the MySQL service taking all the resources for the dump. How can I improve the script?

Thank you
Carlo
 


------------------------------------------------------------------------------------------------------------------------


#!/bin/bash
# Path to backup directories
DIRS="/var/www/html/"

# Store backup path
BACKUP="/pippobackup/"

# Backup file name hostname.time.tar.gz
BFILE="files.tar.gz"
MFILE="database.mysql.sq.gz"


# Set MySQL username and password
MYSQLUSER="root"
MYSQLPASSWORD="aaabbbcccddd"

# Remote SSH server setup
SSHSERVER="192.937.965.934" # your remote ssh server
SSHUSER="root"                # username
SSHDUMPDIR="/backup/remote"    # remote ssh server directory to store dumps

# Paths for binary files
TAR="/bin/tar"
PGDUMP="/usr/bin/pg_dump"
MYSQLDUMP="/usr/bin/mysqldump"
GZIP="/bin/gzip"
SCP="/usr/bin/scp"
SSH="/usr/bin/ssh"
LOGGER="/usr/bin/logger"


# make sure backup directory exists
[ ! -d "${BACKUP}" ] && mkdir -p "${BACKUP}"

# Log backup start time in /var/log/messages
$LOGGER "$0: *** Backup started @ $(date) ***"

# Backup MySQL
$MYSQLDUMP  -u ${MYSQLUSER} -h localhost -p${MYSQLPASSWORD} --all-databases | $GZIP -9 > ${BACKUP}/${MFILE}

# Dump all local files to failsafe remote UNIX ssh server / home server
$SSH ${SSHUSER}@${SSHSERVER} mkdir -p ${SSHDUMPDIR}
$SCP -r ${BACKUP}/* ${SSHUSER}@${SSHSERVER}:${SSHDUMPDIR}

# Log backup end time in /var/log/messages
$LOGGER "$0: *** Backup Ended @ $(date) ***"
skullnobrains

If you are using InnoDB, you can use xtrabackup, which features a --throttle option.
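For instance (just a sketch, untested on your setup; the throttle value and target directory are placeholders):

innobackupex --user=root --password=aaabbbcccddd --throttle=50 /pippobackup/xtrabackup/
# --throttle limits the number of I/O operations per second used by the copy;
# the resulting directory still needs "innobackupex --apply-log" before it can be restored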

If you are using MyISAM, mysqlhotcopy is much faster and puts less strain on the server than mysqldump.
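For example (again only a sketch; DBNAME and the target directory are placeholders):

mkdir -p /pippobackup/hotcopy
mysqlhotcopy --user=root --password=aaabbbcccddd DBNAME /pippobackup/hotcopy/
# mysqlhotcopy briefly locks the MyISAM tables and copies the raw table files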

If neither applies, you can pipe the output of mysqldump through "cstream -t THROUGHPUT" (where THROUGHPUT is the number of bytes per second) or "pv --rate-limit THROUGHPUT" (bytes per second as well; size suffixes are allowed).
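Plugged into the pipeline from your script, that would look roughly like this (the 5m value, i.e. 5 MB/s, is only an example; tune it to your hardware):

mysqldump -u root -paaabbbcccddd --all-databases | pv --rate-limit 5m | gzip -9 > /pippobackup/database.mysql.sql.gz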

---

Most likely your problem is not linked to the resources consumed, but rather to the fact that mysqldump locks the whole database while it performs the copy.

Can't you set up replication?
Can you give us an idea of the volume you are dealing with and the constraints? Maybe we can figure out a better way to reach your goal. Also, if database consistency is not that important, you may be able to back up piece by piece rather than in one shot.
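With replication, the idea is to dump from a replica so the production server never gets locked. A rough sketch (server IDs, the replication user and the log coordinates are placeholders; the coordinates come from SHOW MASTER STATUS after loading an initial copy onto the replica):

# on the master, in my.cnf (then restart mysqld):
#   [mysqld]
#   server-id = 1
#   log-bin   = mysql-bin
# on the replica, in my.cnf:
#   [mysqld]
#   server-id = 2
# point the replica at the master and start replication:
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='master.example.com', MASTER_USER='repl', MASTER_PASSWORD='replpass', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4; START SLAVE;"
# the nightly mysqldump then runs against the replica instead of the production server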
Carletto (ASKER)

Great...
Give me a few hours; it's early in the morning here and I'll get back to you.
Bye
Carlo
"Most likely your problem is not linked to the resources consumed, but rather to the fact that mysqldump locks the whole database while it performs the copy."


You are right;
thank you for your advice.

Here is the situation.
We have a production web server running TYPO3 as CMS, with Apache and MySQL 5.5.
The server must be accessible at all times, with no downtime.
Tables are both MyISAM and InnoDB.

The goal is:
back up the database and transfer it to a remote location, as per our security policy.


The size is:
[root@msserver mysql]# ls -lh
total 2.1G
drwx------ 2 mysql mysql  12K Oct 10 02:02 typo3db
-rw-rw---- 1 mysql mysql 2.1G Oct 10 02:02 ibdata1
-rw-rw---- 1 mysql mysql 5.0M Oct 10 02:02 ib_logfile0
-rw-rw---- 1 mysql mysql 5.0M Oct 10 02:02 ib_logfile1
drwx------ 2 mysql mysql 4.0K Oct 10 02:02 mysql
srwxrwxrwx 1 mysql mysql    0 Oct  7 11:25 mysql.sock
drwx------ 2 mysql mysql 4.0K Aug 16 11:56 performance_schema

Thank you
Carlo
ASKER CERTIFIED SOLUTION
skullnobrains
(The accepted solution itself is only visible to Experts Exchange members.)

Carletto (ASKER)
Hi,
I've been discussing your message with my colleagues; they all thank you for the clear explanation of the problem. We decided to go ahead and convert the few remaining tables to InnoDB (a conversion sketch follows this message).
Then we will test xtrabackup to see how it works and whether it's as smooth as mysqldump.

If not, I will evaluate the dirty-reads dumping.
I'll close this for now and let you know how we manage this problem.
Thank you
Carlo
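(For reference, the conversion step mentioned above could be scripted roughly as follows. This is only a hedged sketch: the credentials are the ones from the script earlier in the thread, the schema name comes from the ls output, and each ALTER TABLE rebuilds its table, so it is best run during a quiet period.)

# generate and execute ALTER TABLE ... ENGINE=InnoDB for every remaining MyISAM table in typo3db
mysql -u root -paaabbbcccddd -N -B -e \
  "SELECT CONCAT('ALTER TABLE \`', table_schema, '\`.\`', table_name, '\` ENGINE=InnoDB;')
   FROM information_schema.tables
   WHERE engine = 'MyISAM' AND table_schema = 'typo3db';" \
  | mysql -u root -paaabbbcccddd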
OK, thanks for posting back. Feel free to ask any related questions in this thread if you think I can be helpful.

Like I said, I have little to no experience with xtrabackup, so I'd also be happy to get some feedback if you have the chance.

If you happen to go the dirty way, dumping one table at a time should minimise the locking time.
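Something along these lines should do it (untested; the credentials come from your script and typo3db from your ls output):

#!/bin/bash
# dump each table separately so locks are held per table instead of for the whole dump
DB="typo3db"
for T in $(mysql -u root -paaabbbcccddd -N -B -e "SHOW TABLES FROM ${DB}"); do
    mysqldump -u root -paaabbbcccddd "${DB}" "${T}" | gzip -9 > "/pippobackup/${DB}.${T}.sql.gz"
done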

To actually use dirty reads with no locking at all, you'll need to use HANDLER statements:
http://dev.mysql.com/doc/refman/5.0/fr/handler.html

HANDLER TABLENAME OPEN;
HANDLER TABLENAME READ NEXT LIMIT 1000;   -- repeat until the end of the table is reached
HANDLER TABLENAME CLOSE;