Solved

Backing up current RHEL 5 Web Server Disk to USB Drive, Best Way to do this?

Posted on 2013-12-17
Medium Priority
62 Views
Last Modified: 2016-07-13
We are using a dedicated web server with Red Hat Enterprise Linux 5 installed, which hosts a couple of our websites.

We asked our host to attach an external USB hard drive to our server so we could make a backup copy of our current files and DB data.

My question is: what is the best way to do this? How can we copy everything we need from the server to the USB drive without downtime?

1) We're thinking of using rsync to copy the files. Is there a better way to do this? Something faster and more secure (no file corruption, and it should preserve file metadata)?

2) For the DB: since the websites are live and get a lot of visitors, the data is always changing, so rsync would have a problem with it. What is the best way to copy our DB data without downtime? The DB ranges from 20 GB to 50 GB, and the DB server is MySQL 5.0.95.

3) Is there anything we shouldn't copy? So far I know that copying:
/proc/
/tmp/
is just a waste of time and disk space, since we won't be able to use it. Is there anything else like this that I should avoid copying?
Question by:ibr33
8 Comments
 
LVL 27

Accepted Solution

by: skullnobrains (earned 1000 total points)
ID: 39726440
1) if you want to copy files, rsync is a reasonable choice with tons of useful features. in your case you can skip checksum comparison (don't pass -c), since comparing mtimes and sizes should be enough.

unison would be a reasonable alternative

you may also consider mirroring filesystems or even use an incremental block-level copy if you want to copy more stuff
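
a minimal sketch of such an rsync run, assuming the site files live under /var/www and the usb drive is mounted at /mnt/usb (both paths are assumptions, adjust to your layout):

# -a preserves permissions, ownership, timestamps and symlinks;
# -H/-A/-X additionally keep hard links, ACLs and extended attributes
rsync -aHAX --numeric-ids --delete /var/www/ /mnt/usb/backup/www/

running the same command again only transfers files whose size or mtime changed, so repeated passes are cheap and you can do a final quick pass during a low-traffic window.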

2) you'll find tools for each backend, for example:
- InnoDB Hot Backup (ibbackup, not free) for InnoDB
- Percona XtraBackup for XtraDB; it should work with InnoDB as well, but will still take locks at some point during the copy
- MyISAM files cannot be copied without at least locking the tables during the copy

there are quite a few alternate ways. i'd suggest this:
- set up regular replication to another mysql instance on the same or a remote host
- back up that instance by doing a clean backup while that instance is stopped
you may snapshot the filesystem while the replica server is stopped. note that using snapshots without stopping the server will most likely not work unless you lock the whole database and flush the innodb buffers. even then you'll have to restart it in recovery mode (i.e. ignore the innodb log files)
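
as a rough sketch of the "clean backup of the stopped replica" idea, assuming the replica runs as a second mysqld instance with its own socket and data directory (the paths, socket and credentials below are all assumptions):

# shut the replica instance down cleanly so its data files are consistent on disk
mysqladmin --socket=/var/lib/mysql-replica/mysql.sock -u root -p shutdown

# copy the quiescent data directory to the usb drive, preserving metadata
rsync -aH /var/lib/mysql-replica/ /mnt/usb/backup/mysql/

# start the replica again; it reconnects to the master and catches up
mysqld_safe --defaults-file=/etc/my-replica.cnf &

the production instance keeps serving the websites the whole time; only the replica is ever stopped.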

3)
many things such as /var/run and /var/tmp

you should probably either mirror the whole disk or copy only what you need: it is easier to reinstall from a default install, a list of packages, and possibly a few config files (network setup, hostname and the like), plus your database files, than to try to mirror the needed files and reapply them on a new system. the install should be fairly easy to script
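
for the "copy only what you need" route, a hedged sketch of recording the package list and copying the root tree while skipping the pseudo and scratch directories mentioned above (the destination path is an assumption):

# record the installed package set so the base system can be rebuilt later
rpm -qa | sort > /mnt/usb/backup/package-list.txt

# copy the root filesystem, keeping the mount points but skipping their contents
rsync -aHAX \
  --exclude='/proc/*' --exclude='/sys/*' --exclude='/dev/*' \
  --exclude='/tmp/*' --exclude='/var/tmp/*' --exclude='/var/run/*' \
  --exclude='/mnt/*' --exclude='/media/*' \
  / /mnt/usb/backup/rootfs/

excluding /mnt/* also keeps the copy from recursing into the usb drive itself.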

hope that helps
 
LVL 16

Assisted Solution

by: Joseph Gan (earned 1000 total points)
ID: 39728011
You could use the Linux dump/restore commands to back up filesystems to the USB drive, and restore them later, if the filesystem is ext2, ext3 or ext4. E.g.:

# /sbin/dump -0u -f /dev/usb_drive /filesystems

Always start with a level 0 backup; after that you can use incremental (differential) backups with levels 1, 2, 3, etc.

Please look at man dump for details.
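
As a slightly more concrete sketch, dumping to files on the mounted USB drive rather than to the raw device (the mount point and file names are assumptions):

# full (level 0) dump of the root filesystem; -u records it in /etc/dumpdates
/sbin/dump -0u -f /mnt/usb/root-level0.dump /

# a later level 1 dump only saves files changed since the last lower-level dump
/sbin/dump -1u -f /mnt/usb/root-level1.dump /

# interactive restore from a dump file, run from the directory to restore into
/sbin/restore -i -f /mnt/usb/root-level0.dump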
 
LVL 30

Expert Comment

by:serialband
ID: 39728337
You can use rsync as suggested above and use mysqldump to save the database.
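
A minimal mysqldump sketch along those lines (credentials and output path are assumptions). Note that --single-transaction only gives a lock-free consistent snapshot for InnoDB tables; MyISAM tables are still locked while they are dumped, which is the caveat raised in the next comment:

# dump all databases in one consistent snapshot and compress it on the fly
mysqldump -u root -p --all-databases --single-transaction \
  | gzip > /mnt/usb/backup/all-databases.sql.gz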

 
LVL 27

Expert Comment

by:skullnobrains
ID: 39728642
beware, mysqldump locks the tables while it runs. it is only a reasonable option on small databases
 
LVL 30

Expert Comment

by:serialband
ID: 39729326
20 GB to 50 GB isn't too big at all and shouldn't take too long. It might be prudent to have a second server with everything else duplicated to handle failover if your server is critical. You can also replicate the database to that second server.
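
A rough sketch of pointing a second server at the existing one, assuming binary logging is enabled on the master and using MySQL 5.0 replication syntax (host names, user, password, and the binlog coordinates are assumptions/placeholders):

# on the master: my.cnf needs server-id=1 and log-bin, then create a replication user
mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'backup-host' IDENTIFIED BY 'secret';"
mysql -u root -p -e "SHOW MASTER STATUS;"    # note the binlog file and position

# on the second server: my.cnf needs a distinct server-id (e.g. 2); load an initial
# copy of the data, then point it at the master using the values noted above
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='web-server', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=98765; START SLAVE;"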
 
LVL 16

Expert Comment

by:Joseph Gan
ID: 41704248
One of the solutions was provided above: ID: 39728011
