Solved

Backups are very slow

Posted on 2009-07-06
787 Views
Last Modified: 2013-12-01
Backing up a file server with Backup Exec, I get very slow throughput: 300-400 MB/min.
Initially I thought the network would be the bottleneck, but then I ran NTBackup locally on the server and saw the same speed.
I'm getting 1.5 to 2 GB/min on other servers.
I'm quite new in this job and can't yet tell where the storage resides, but this server hosts the company's network drives, around 200 GB of data in total.
How can I improve the speed of the backup?

Thanks for any help
Question by:txarli33
10 Comments
 

Author Comment

by:txarli33
ID: 24789954
I forgot to mention:
both servers (backup server and file server) are Windows Server 2003 Standard
Backup Exec is v11d
 
LVL 14

Expert Comment

by:amichaell
ID: 24790069
You will get slower backup speeds if you are backing up many small files, which is typical with a file server.  For example, a 70GB flat database dump will back up faster than 70GB worth of individual documents (Word, Excel, etc).  Are you doing full backups every night or incremental?  Incremental can help as you'll only back up whatever files have changed since the last backup job.
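
If it helps, here's a rough way to put numbers on the small-file problem: walk the share and report how many files there are and how small they average out. This is only an illustrative Python sketch; the D:\Shares path is a placeholder for wherever the file server data actually lives.

# Rough sketch: quantify the small-file problem on the share being backed up.
# The root path below is a placeholder -- point it at the real data.
import os

root = r"D:\Shares"   # hypothetical path to the file server data

total_bytes = 0
file_count = 0
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        try:
            total_bytes += os.path.getsize(os.path.join(dirpath, name))
            file_count += 1
        except OSError:
            pass  # skip files we can't stat (in use, no permission)

if file_count:
    avg_kb = total_bytes / file_count / 1024.0
    print("%d files, %.1f GB total, average file size %.0f KB"
          % (file_count, total_bytes / 2.0**30, avg_kb))

If the average file size comes out in the tens of kilobytes, that alone can explain throughput in the hundreds of MB/min rather than GB/min.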
 
LVL 6

Expert Comment

by:Francois_IT
ID: 24790072
A common cause of slow backups is a fragmented drive. It also depends on whether you are backing up millions of small files; the indexing takes a long time to generate.
 
LVL 20

Expert Comment

by:SelfGovern
ID: 24790893
Fragmentation you have a decent amount of control over (run a background defragmenter like Diskeeper).
Small files, deeply nested directories, and long file names you have little control over.
Hosting data on a slow NAS device or a poor (software?) RAID controller you may or may not have control over (have you got budget for a better storage system?).

Before you try to solve a problem you're guessing at, run a tool like HP's free Library and Tape Tools and test the speed of the disk you're trying to back up.  If it reports a speed of 5-6MB/second, then it probably is your disk.
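
In the same spirit, before buying anything you can get a rough read-speed number for the data itself: time a plain file-by-file read of a slice of the share and see what MB/s it sustains. A minimal Python sketch, with the D:\Shares path as a placeholder (and note the OS file cache can flatter the result on a second run):

# Rough read-speed probe: time a file-by-file read of part of the source data.
# Not a substitute for a proper tool like HP Library and Tape Tools.
import os
import time

root = r"D:\Shares"        # hypothetical path to the data being backed up
limit = 2 * 2**30          # stop after ~2 GB so the test stays short

read_bytes = 0
start = time.time()
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        try:
            f = open(os.path.join(dirpath, name), "rb")
        except OSError:
            continue
        try:
            chunk = f.read(1024 * 1024)
            while chunk:
                read_bytes += len(chunk)
                chunk = f.read(1024 * 1024)
        finally:
            f.close()
        if read_bytes >= limit:
            break
    if read_bytes >= limit:
        break

elapsed = time.time() - start
print("Read %.0f MB in %.0f s = %.1f MB/s"
      % (read_bytes / 2.0**20, elapsed, read_bytes / 2.0**20 / max(elapsed, 0.001)))

At 5-6 MB/s you're right in the 300-400 MB/min range described in the question, which would point at the source disk rather than the network or the tape drive.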

If you're stuck with the problems above, you have some options:
1) If you rarely perform single-file restores, try backing up an image of your drive instead of file-by-file.  An image backup reads the whole disk sequentially by sector, so it avoids the problems of small files and fragmentation.  But single-file restores will typically take a lot longer than they would from a filesystem backup.
2) Back up to disk first, then to tape.  This has some added cost, and possibly complexity, but the disk target won't care how slowly the data comes in, and it will organize the data into big blocks that can then be read fast and written to tape at a good clip.
2a) Use your backup software to create a backup-to-disk partition.  Cheapest option, but it makes the server do additional work, makes the admin do additional work to manage the partition, is a one-server-at-a-time solution, and any data on the filesystem is subject to corruption from viruses, accidental deletion, etc., so it has its challenges.  But it's low cost (usually just disk space, possibly a backup application license fee, and some admin time), and might be good enough.
2b) Or buy a dedicated D2D target device that acts like a tape library, allowing you to use it for several or many servers.  Disadvantages: costs more than server D2D.  Benefits: it looks just like a tape target, so no new processes; space is shared among several servers if you choose (so it scales); good D2D systems can do tape offload directly, so the data doesn't have to go back through the backup server or over the network; most D2D targets can do deduplication, so you can keep months of backups in the space that would otherwise only hold a couple of weeks -- which makes single-file restores really fast; and it's probably the option best optimized for getting data off your server quickly and onto tape quickly.

HP makes a D2D appliance called the D2D Backup System that would do all you need if you choose to go that route.  http://www.hp.com/go/d2d

Note: I do work for HP, but until that last line, the above is about as vendor-neutral as you'll find anywhere.  <smile>

 

Author Comment

by:txarli33
ID: 24795336
Guys, I fully appreciate your responses, as my question didn't give much information
(well, that's where I am right now regarding this server).

To answer some of your points:
- I am doing full backups every night, as the tapes are taken off site and incremental backups could make restores difficult, although I am considering differentials so I'd only need the full tape and the last differential for a restore.
- I have had bad experiences with Diskeeper where some hard drives ended up dying (due to too much work/indexing, maybe?), but it is an option to be looked into.
- Backup to disk: I've done tests backing up to the local disk of the same server I'm backing up and I get the same speed, so I don't think B2D will help much speed-wise, although I could fit more backups into the same time window since I could run backup-to-tape and backup-to-disk jobs simultaneously.
- I've also found there is no SCSI or FC cable coming out of the back of the server to a disk shelf, so I assume it connects to the SAN/NAS via iSCSI through the two NICs, which are teamed. How could I check this, and whether it has a software or hardware iSCSI initiator? Would that affect the speed of the backups as well? Also, could the slow speed be due to the wrong iSCSI drivers being installed?
 
LVL 20

Expert Comment

by:SelfGovern
ID: 24800074
The iSCSI initiator configuration program on your server will tell you the IP address of the iSCSI storage you're connecting to.   Many storage boxes will have browser interfaces you can open by pointing your web browser to their IP address... this will give you a clue about partitions (LUNs) and such.
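
If you want to check from the server itself whether software iSCSI is even in play, the Microsoft iSCSI Software Initiator (an optional add-on on Windows 2003) installs a command-line tool called iscsicli. A hedged Python sketch along these lines -- if the tool isn't there at all, the box almost certainly isn't using the Microsoft software initiator:

# Sketch: ask the Microsoft iSCSI initiator for its active sessions.
# If iscsicli isn't installed, the server likely isn't doing software iSCSI.
import subprocess

try:
    result = subprocess.run(["iscsicli", "SessionList"],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)
except FileNotFoundError:
    print("iscsicli not found -- no Microsoft software iSCSI initiator here")

A hardware iSCSI HBA wouldn't necessarily show up this way; for that you'd look in Device Manager under the storage controllers.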

You could help some of your problems by performing a weekly full and daily differentials.  A differential backup doesn't reset the archive bit, so it backs up all files changed since the last full... to restore, you need at most the full backup plus one differential tape.
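
Since the whole differential scheme hinges on the archive bit, here's a small sketch of the idea: count how many files currently have the archive attribute set, i.e. what the next differential would pick up. This is Windows-only Python (it relies on st_file_attributes) and the D:\Shares path is again a placeholder.

# Sketch: count files a differential backup would pick up -- files whose
# Windows "archive" attribute is still set since the last full backup.
import os
import stat

root = r"D:\Shares"   # hypothetical path to the data

flagged = 0
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        try:
            attrs = os.stat(os.path.join(dirpath, name)).st_file_attributes
        except OSError:
            continue
        if attrs & stat.FILE_ATTRIBUTE_ARCHIVE:
            flagged += 1   # a full or incremental backup would clear this bit

print("%d files currently flagged for the next differential" % flagged)

A full or incremental job clears the bit after copying each file; a differential copies the file but leaves the bit alone, which is why each differential grows until the next full.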

The problem with sending data to tape if you can only hit 6 MB/second is that you're tearing up your tapes and your tape drive motor.  Doing a backup to disk *should* let you then copy the backup-to-disk job to physical tape much faster than from the fragmented, small-file-filled filesystem.

Also, re: speed of D2D backups: yes, if the bottleneck is your server disk, then a faster target *won't matter*, even if the target were light-speed SSD... the bottleneck is the source.  But again, a VTL can help by giving you an intermediate target that can be copied to tape faster, and which can increase concurrency (back up many servers at once to the VTL).

Maybe Diskeeper isn't the best program of its type; I have used it and thought it did what it did well.  I know that Windows filesystems get fragmented over time, and that will not only make your disk work harder, but affect performance as well.  (Oh for the days of HPFS!)


 
LVL 4

Expert Comment

by:DarrenJL
ID: 24814333
You might also find that the antivirus program on the server holding the data is scanning every file as it is read.

I found this out after I implemented a backup of our e-mail archiver, which created hundreds of thousands of tiny ~30k files. Our antivirus stopped and scanned each file as it moved across the network, so our backup time increased by around 12 hours. After I excluded the folder from scanning, it performed fine and I saved 2 hours off our backup window.

Darren
 

Author Comment

by:txarli33
ID: 24819090
Thanks again for your replies, and apologies for the late reply (really hectic at work these days).

Well, I have some more information:
I had the chance to have a look at the server room today and found that the data is local; this server has four HDDs installed, where all the data resides.
With the data being local to the server, I don't understand the slowness.
The antivirus has the same settings on all servers; I'll investigate, but I don't see why it should drag the process down here and not on any other server.
Apart from the antivirus suggestion, what other factors may slow this process down?

I appreciate all the replies, thanks again
 
LVL 20

Accepted Solution

by:
SelfGovern earned 500 total points
ID: 24830288
The most common thing leading to slow backups is having lots of small files.  Also consider deeply-nested directory structure, fragmentation, long file names, other processes accessing the disk at the time of backup, and slow source disk.  Small files will always be a problem -- I have seen fast servers slow to under 10 MB/sec when sequentially reading a bunch of 10K files.

Things to do:
- Put the files as close to the root directory as possible, without being in root, to minimize directory tree walking.
- Use the fastest possible source disk -- RAID 1+0 on a *good* *hardware* RAID controller
- Keep the disk defragmented (hopefully by a process that is not running during the backup!)
- Keep other applications from accessing this disk while backup is running, including background processes like indexing and defrag.
- If it's not a huge amount of data (i.e., tens to 100GB, vs. 100s of GB to TB+), you could put this data on SSD (solid state disk), which will give you the fastest possible read times, since there's no physical movement involved in reads.  The challenge is that SSD has a high, but limited, number of write cycles... so if these files are very heavily written, you might wear out SSD disks faster than you'd like.  (And remember to TURN OFF defragmentation processes on SSD!)

If none of those help (and they may not help much; this small-file behavior is an NTFS/FAT characteristic), then the question is, "How often do you have to restore single files?"  If the answer is "Rarely" or "Never", then the best solution is to put the files on a disk (stripe set, probably) of their own, and perform an image backup.  An image backup reads the disk sectors sequentially, and can give much faster speeds than a file-by-file backup (which you're doing now) that has to read the directory, walk the tree, read one file, go back to root, read the directory, walk the tree, read one file....

The problem with image backup is that restores will take significantly longer... but if restores are rare and this is just for archive in case of disaster, then that is probably not a problem.  Note that, even if there is "some" other data on this disk that does need occasional single-file restores, you will still back that up as part of the disk image (which gets *everything*), but you can also back up those other files separately as part of a file-by-file backup (specify that particular directory in a standard backup).

If an image backup is not practical for some reason, you've got one other choice, which is to use a disk target, then move that to tape.  With D2D2T (Disk to Disk to Tape), you use disk as the first target of your backup job, which creates a huge single file that is mostly contiguous (and is your backup job in the same format as if it had been written to tape)... then step 2 is to use your backup application to copy that to physical tape.  Since it's coming from a huge file (hopefully close to root!), you can get good backup speeds to tape (but this will not improve the original backup speed, since source disk is the bottleneck).
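
To make the D2D2T shape concrete, here's a toy-scale sketch: stage a folder of small files into one big staging file on disk, then time how fast that single file reads back -- the read that would feed the tape copy. The paths are placeholders and the staging format is just a tar file, not Backup Exec's backup-to-disk format; on a real system the filesystem cache would flatter the re-read number, so treat it as an illustration of the shape of the job rather than a benchmark.

# Toy D2D2T sketch: many small files -> one big staging file -> fast sequential read.
import os
import tarfile
import time

source = r"D:\Shares\SomeFolder"   # hypothetical small-file source
staging = r"E:\B2D\staging.tar"    # hypothetical backup-to-disk target

# Step 1: disk to disk -- still limited by the slow small-file reads.
tar = tarfile.open(staging, "w")
tar.add(source, arcname="data")
tar.close()

# Step 2: disk to tape -- one big sequential file, so it reads back quickly.
size = os.path.getsize(staging)
start = time.time()
f = open(staging, "rb")
while f.read(8 * 1024 * 1024):
    pass
f.close()
elapsed = time.time() - start
print("Staged %.0f MB; sequential re-read at %.0f MB/s"
      % (size / 2.0**20, size / 2.0**20 / max(elapsed, 0.001)))

Step 1 is no faster than the original backup; the win is that step 2 reads one contiguous file, which is what keeps the tape drive streaming.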

If you're going to use D2D2T, the cheapest method is to use the backup application to create a D2D target on your server's hard disk.  Make sure it's big enough to hold the complete backup job.  The problems are that there is much more server overhead, you have to manage the space manually, and it's a server-by-server task, not something you can do for all servers easily.

Then the more expensive but much more scalable solution is to purchase a D2D backup system (a type of virtual tape library, or VTL), typically a Linux-based appliance that mimics multiple tape libraries and acts as a backup target for multiple servers at once.  The best VTLs allow you to perform some sort of automigration, where the D2D system itself can copy or move the data to physical tape, so you don't have to go back over the network.  Different VTLs are available; they can have either iSCSI (simple, free, decent performance) or Fibre Channel (more expensive, high performance) connectivity to your servers.

I'm pretty sure those are your options.   If you do look at VTLs or D2D backup systems, please consider the HP D2D2500 or D2D4000 series (see http://www.hp.com/go/d2d ).  Obligatory Disclaimer: Yes, I do work for HP -- but everything in this email up to this paragraph is as vendor-neutral as you can get.
 

Author Closing Comment

by:txarli33
ID: 31600396
Quite impressed by your responses; I now have lots to do here.
I thought I could just tweak some settings, but it seems I'll have to do a lot of data investigation.
I'll follow your steps and hopefully I'll get faster throughput.

Thanks again for the time you've all taken replying to this.
