MySQL dump takes my web server offline; the backup script needs improvements

Hi to all of you,
I have the following configuration:
a Linux server running Apache and MySQL 5.5.
At night a cron job runs (see it below; it's simple, as I'm new to bash). We use it to dump the database and transfer it to a remote repository on the Internet.

Problem:
while the database is being dumped and transferred, the web site goes offline (I monitor it with a tool called Pingdom).

I suspect the MySQL service takes all the resources during the dump. How can I improve the script?

Thank you
Carlo
 


------------------------------------------------------------------------------------------------------------------------


#!/bin/bash
# Path to directories to back up (declared but not used anywhere below)
DIRS="/var/www/html/"

# Store backup path
BACKUP="/pippobackup/"

# Backup file names
BFILE="files.tar.gz"              # declared but not used anywhere below
MFILE="database.mysql.sql.gz"


# Set MySQL username and password
MYSQLUSER="root"
MYSQLPASSWORD="aaabbbcccddd"

# Remote SSH server setup
SSHSERVER="192.937.965.934" # your remote ssh server
SSHUSER="root"                # username
SSHDUMPDIR="/backup/remote"    # remote ssh server directory to store dumps

# Paths to binaries (TAR and PGDUMP are never used below;
# pg_dump is the PostgreSQL dump tool and is not needed for MySQL)
TAR="/bin/tar"
PGDUMP="/usr/bin/pg_dump"
MYSQLDUMP="/usr/bin/mysqldump"
GZIP="/bin/gzip"
SCP="/usr/bin/scp"
SSH="/usr/bin/ssh"
LOGGER="/usr/bin/logger"


# make sure backup directory exists
[ ! -d "${BACKUP}" ] && mkdir -p "${BACKUP}"

# Log backup start time in /var/log/messages
$LOGGER "$0: *** Backup started @ $(date) ***"

# Backup MySQL
$MYSQLDUMP -u "${MYSQLUSER}" -h localhost -p"${MYSQLPASSWORD}" --all-databases | $GZIP -9 > "${BACKUP}/${MFILE}"

# Copy the backup files to a failsafe remote UNIX SSH server / home server
$SSH "${SSHUSER}@${SSHSERVER}" mkdir -p "${SSHDUMPDIR}"
$SCP -r "${BACKUP}"/* "${SSHUSER}@${SSHSERVER}:${SSHDUMPDIR}"

# Log backup end time in /var/log/messages
$LOGGER "$0: *** Backup Ended @ $(date) ***"
skullnobrains commented:
If you are using InnoDB, you can use xtrabackup, which features a --throttle option.

If you are using MyISAM, mysqlhotcopy is much faster and puts less strain on the server than mysqldump.

If neither, you can pipe the output of mysqldump through "cstream -t THROUGHPUT" (where THROUGHPUT is the number of bytes per second) or "pv --rate-limit THROUGHPUT" (bytes/s as well; quantifiers allowed).
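
For example, a minimal sketch of the throttled pipe, assuming pv is installed and reusing the credentials and paths from the script above (the 5m rate cap is an arbitrary example value):

#!/bin/bash
# cap mysqldump's output at ~5 MB/s so the dump does not starve
# the web server of disk and CPU while the site is serving traffic
mysqldump -u root -p'aaabbbcccddd' --all-databases \
  | pv --rate-limit 5m \
  | gzip -9 > /pippobackup/database.mysql.sql.gz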

---

Most likely your problem is not linked to the resources consumed, but rather to the fact that mysqldump locks the whole database while it performs the copy.

Can't you set up replication?
Can you give us an idea of the volume you are dealing with and your constraints? Maybe we can figure out a better way to reach your goal. Also, if database consistency is not that important, you may be able to back up piece by piece rather than in one shot.

Carlo (Author) commented:
Great...
Give me a few hours; it's early in the morning here, and I'll get back to you.
Bye,
Carlo

Carlo (Author) commented:
"Most likely your problem is not linked to the resources consumed, but rather to the fact that mysqldump locks the whole database while it performs the copy."


You are right;
thank you for your advice.

Here is the situation.
We have a production web server running Typo3 as CMS, with Apache and MySQL 5.5.
The server must be accessible at all times, with no downtime.
The tables are a mix of MyISAM and InnoDB.

The goal is:
back up the database and transfer it to a remote location, as per our security policy.


The size is:
[root@msserver mysql]# ls -lh
total 2.1G
drwx------ 2 mysql mysql  12K Oct 10 02:02 typo3db
-rw-rw---- 1 mysql mysql 2.1G Oct 10 02:02 ibdata1
-rw-rw---- 1 mysql mysql 5.0M Oct 10 02:02 ib_logfile0
-rw-rw---- 1 mysql mysql 5.0M Oct 10 02:02 ib_logfile1
drwx------ 2 mysql mysql 4.0K Oct 10 02:02 mysql
srwxrwxrwx 1 mysql mysql    0 Oct  7 11:25 mysql.sock
drwx------ 2 mysql mysql 4.0K Aug 16 11:56 performance_schema

Thank you
Carlo

skullnobrains commented:
I'll assume that:
- database consistency is crucial
- your MyISAM tables are small
- you can possibly handle downtime if it's only a matter of seconds at night

A few ways of doing this come to mind.

-> Setting up replication to a remote server seems a good one. You can expect a slight performance hit on the server, but given your specs I don't think that will be much of an issue. If you need actual dumps, you can always take them on the replica. Additionally, you'll have a secondary working instance that can be used if the main one fails, or possibly for long-running jobs that tend to interfere with your production, and this is rather easy to set up and monitor. A rough sketch of the setup follows below.
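
A rough sketch of what the replication setup involves, assuming MySQL 5.5 defaults; the server-ids, host names, and the repl account are hypothetical placeholders:

# --- on the master: enable binary logging in my.cnf, then restart mysqld ---
#   [mysqld]
#   server-id = 1
#   log-bin   = mysql-bin
# create a replication account (run once on the master):
mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'REPLICA_HOST' IDENTIFIED BY 'REPL_PASS';"

# --- on the replica: set server-id = 2 in my.cnf, then point it at the master.
# The log file/position values come from SHOW MASTER STATUS on the master.
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='MASTER_HOST',
    MASTER_USER='repl', MASTER_PASSWORD='REPL_PASS',
    MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;
    START SLAVE;"

# the nightly dump then runs against the replica, so production is never locked:
mysqldump -u root -p --all-databases | gzip -9 > /pippobackup/database.mysql.sql.gz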

-> Convert all tables to InnoDB. xtrabackup has a feature that will not lock the database while it runs, or at worst lock it only for a few seconds. I have little to no experience with this, so this is mainly taken from the online docs. See the sketch below.
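
For instance, the conversion plus a hot backup might look like this; the table name is a placeholder, and the --throttle value is an arbitrary example (check the Percona docs for its exact units):

# convert a MyISAM table to InnoDB (repeat for each remaining MyISAM table)
mysql -u root -p typo3db -e "ALTER TABLE some_table ENGINE=InnoDB;"

# hot backup with Percona XtraBackup: InnoDB data is copied without
# locking the server; --throttle limits the copy rate
innobackupex --user=root --password='aaabbbcccddd' --throttle=10 /pippobackup/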

-> Use filesystem snapshots. You'll need to flush all buffers by either temporarily stopping the server or issuing a FLUSH TABLES WITH READ LOCK statement, take the snapshot, and unlock. This might be very easy to set up or just not possible, depending on the filesystem you use (XFS or ZFS are great in this respect). It is quite suitable if you want to transfer the data to a file store, because you won't need to dump anything to SQL. Note that you might need a couple of InnoDB tweaks if you need to start a server from the snapshot (but only the first time). You can destroy the snapshot once the copy is finished, without touching the database. A sketch follows below.
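
A minimal sketch of the snapshot approach, assuming the MySQL data directory lives on an LVM volume; all volume names, mount points, and the remote path are hypothetical:

#!/bin/bash
# hold the read lock only for the second or two lvcreate needs;
# \! runs a shell command from inside the mysql client session,
# so the lock is still held while the snapshot is taken
mysql -u root -p <<'EOF'
FLUSH TABLES WITH READ LOCK;
\! lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql
UNLOCK TABLES;
EOF

# copy the frozen files off-box at leisure, then drop the snapshot
mount /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a /mnt/mysql-snap/ root@192.937.965.934:/backup/remote/
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap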

If you don't really care about consistency, many other things are possible, such as dumping the whole db using dirty reads.

Feel free to discuss any of these or whatever else comes to mind. If none of this helps, I guess you'll need to dig a little in order to understand which table holds what data and what has to be kept consistent.

Carlo (Author) commented:
Hi,
I've been discussing your message with my colleagues; they all thank you for the clear explanation of the problem. We decided to convert the few remaining tables to InnoDB.
Then we will test xtrabackup to see how it works and whether it's as smooth as mysqldump.

If not, I will evaluate dumping with dirty reads.
I'll close this for now and let you know how we manage this problem.
Thank you
Carlo

skullnobrains commented:
OK, thanks for posting back. Feel free to ask any related question in this thread if you think I can be helpful.

Like I said, I have little to no experience with xtrabackup, so I'd also be happy to get some feedback if you have a chance.

If you happen to go the dirty route, dumping one table at a time should minimise the locking time; see the sketch below.
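
A rough sketch of a per-table dump loop, reusing the credentials and database name from earlier in the thread (the output path is just an example):

#!/bin/bash
# dump tables one at a time so each table lock is short-lived
for t in $(mysql -u root -p'aaabbbcccddd' -N -e 'SHOW TABLES' typo3db); do
    mysqldump -u root -p'aaabbbcccddd' typo3db "$t" | gzip -9 > "/pippobackup/${t}.sql.gz"
done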

Actually, for dirty reads with no lock at all, you'll need to use handlers:
http://dev.mysql.com/doc/refman/5.0/fr/handler.html

HANDLER tablename OPEN;
HANDLER tablename READ NEXT LIMIT 1000;  -- repeat until the end of the table
HANDLER tablename CLOSE;