

Backups are very slow

Posted on 2009-07-06
Medium Priority
Last Modified: 2013-12-01
Backing up a file server with Backup Exec, I get very slow throughput: 300-400 MB/min.
Initially I thought the network would be the bottleneck, but then I ran an NTBackup locally on the server and got the same speed.
I'm getting 1.5 to 2 GB/min on other servers.
I'm quite new in this job and can't tell where the storage resides,
but this server hosts the company's network drives, around 200 GB of storage in total.
How can I improve the speed of the backup?

thanks for any help
Question by:txarli33
Author Comment

ID: 24789954
I forgot to mention:
both servers (backup and file server) are Windows Server 2003 Standard,
running Backup Exec v11d.

Expert Comment

ID: 24790069
You will get slower backup speeds if you are backing up many small files, which is typical with a file server.  For example, a 70GB flat database dump will back up faster than 70GB worth of individual documents (Word, Excel, etc).  Are you doing full backups every night or incremental?  Incremental can help as you'll only back up whatever files have changed since the last backup job.
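To see why small files hurt so much, here's a rough, hypothetical Python sketch (my own illustration, not part of any backup product) that reads the same total number of bytes once as many small files and once as one big file. The per-file open/close and seek overhead is what eats the throughput:

```python
import os, shutil, tempfile, time

def read_all(paths, bufsize=1 << 20):
    """Read every file completely; return elapsed seconds."""
    start = time.perf_counter()
    for p in paths:
        with open(p, "rb") as f:
            while f.read(bufsize):
                pass
    return time.perf_counter() - start

root = tempfile.mkdtemp()
try:
    # 2,000 files of 10 KB each vs. one 20 MB file -- same total bytes.
    small_dir = os.path.join(root, "small")
    os.mkdir(small_dir)
    small = []
    for i in range(2000):
        p = os.path.join(small_dir, f"doc{i}.bin")
        with open(p, "wb") as f:
            f.write(os.urandom(10 * 1024))
        small.append(p)

    big = os.path.join(root, "big.bin")
    with open(big, "wb") as f:
        f.write(os.urandom(2000 * 10 * 1024))

    t_small = read_all(small)   # many opens, many seeks
    t_big = read_all([big])     # one open, one sequential pass
    print(f"2000 small files: {t_small:.3f}s   one big file: {t_big:.3f}s")
finally:
    shutil.rmtree(root)
```

On a real file server (cold cache, fragmented disk) the gap is far larger than this toy test shows, since the OS cache flatters the small-file pass here.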

Expert Comment

ID: 24790072
A common cause of slow backups is a fragmented drive. It also depends on whether you are backing up millions of small files; indexing them takes a long time.


Expert Comment

ID: 24790893
Fragmentation you have a decent amount of control over (run a background defragmenter like Diskeeper).
Small files, deeply nested directories, and long file names you have little control over.
Hosting data on a slow NAS device or a crappy (SW?) RAID controller you may or may not have control over (have you got budget for a better storage system?)

Before you try to solve a problem you're guessing at, run a tool like HP's free Library and Tape Tools and test the speed of the disk you're trying to back up.  If it reports a speed of 5-6MB/second, then it probably is your disk.
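If you can't get L&TT onto the box right away, a crude sequential-read check is easy to improvise. This hypothetical Python sketch (my own illustration, not an HP tool) times one full pass over a file and reports MB/s; note that 300-400 MB/min works out to only about 5-6.5 MB/s:

```python
import os, tempfile, time

def sequential_read_mb_s(path, bufsize=4 * 1024 * 1024):
    """Crude sequential-read benchmark: MB/s over one full pass of the file."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(bufsize):
            pass
    elapsed = time.perf_counter() - start
    return (size / (1024 * 1024)) / elapsed

# Demo against a scratch file; for a realistic number, point it at a large
# existing file on the suspect volume (OS caching will flatter small files).
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(64 * 1024 * 1024))  # 64 MB of test data
rate = sequential_read_mb_s(tmp.name)
print(f"{rate:.1f} MB/s")
os.unlink(tmp.name)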

If you're stuck with the problems above, you have some options:
1) If you rarely perform single-file restores, try backing up an image of your drive, instead of file-by-file.  An image backup reads the whole disk sequentially by sector, so it avoids the problems of small files or fragmentation.  But single-file restores will typically take a lot longer than they would from a filesystem backup.
2) Backup to disk first, then to tape.  This has some added cost, and possibly complexity, but the disk target won't care how slow the data comes in, and will organize it into big blocks that can then be read fast and written to tape at a good clip.
2a) Use your backup software to create a backup-to-disk partition.  Cheapest option, but makes the server do additional work, makes the admin do additional work to manage the partition, is a one-server-at-a-time solution, and any data on the filesystem is subject to corruption from virus, accidental deletes, etc., so it's got challenges.  But it's low cost (usually just disk space, possibly the backup application license fee, and some admin time), and might be good enough.
2b) Or, buy a dedicated D2D target device that acts like a tape library, allowing you to use it for several or many servers.  Disadvantages: costs more than server D2D.  Benefits: looks just like a tape target, so no new processes; space can be shared among several servers (so it scales); good D2D systems can do tape offload directly, so the data doesn't have to go back through the backup server or over the network; most D2D targets can do deduplication, so you can keep months of backups in the space that would otherwise hold only a couple of weeks, which makes single-file restores really fast; and it's probably the best-optimized way to get data off your server and onto tape quickly.

HP makes a D2D appliance called the D2D Backup System that would do all you need if you choose to go that route.

Note: I do work for HP, but until that last line, the above is about as vendor-neutral as you'll find anywhere.  <smile>


Author Comment

ID: 24795336
Guys, I really appreciate your responses, as my question didn't give out a lot of information
(well, that's the state I'm in right now regarding this server).

To answer some of your points:
- I am doing full backups every night because the tapes get taken off site, and incremental backups could make restore jobs difficult, although I am considering differentials so I'd only need the normal and the last tape for restoring.
- I've had bad experiences with Diskeeper, where some hard drives ended up dying (maybe due to too much work/indexing?), but it is an option to look into.
- Backup to disk: I've done tests backing up to a local disk on the same server I'm backing up, and I get the same speed, so I don't think B2D will help much speed-wise, although I could fit more backups into the same time window by running backup-to-tape and backup-to-disk simultaneously.
- I've also found no SCSI or FC cable coming out of the back of the server to a disk shelf, so I assume it connects to the SAN/NAS via iSCSI through the two teamed NICs. How could I check this, and whether it uses a software or hardware iSCSI initiator? Would that affect the speed of the backups as well? Also, could the slow speed be due to wrong iSCSI drivers being installed?

Expert Comment

ID: 24800074
The iSCSI initiator configuration program on your server will tell you the IP address of the iSCSI storage you're connecting to.   Many storage boxes will have browser interfaces you can open by pointing your web browser to their IP address... this will give you a clue about partitions (LUNs) and such.

You could ease some of your problems by performing a weekly full and daily differentials.   A differential backup doesn't reset the archive bit, so it backs up all files changed since the last full... to restore, you need at most the full backup plus one differential tape.
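As an illustration of that selection rule, here's a hypothetical Python sketch (my own illustration; real backup software reads the NTFS archive bit, which I approximate here with file modification times). A differential job picks up everything changed since the last full:

```python
import os, tempfile, time

def differential_candidates(root, last_full_time):
    """Files modified since the last full backup.  The mtime comparison
    stands in for the NTFS archive bit, which plain Python can't read."""
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_full_time:
                changed.append(path)
    return changed

# Tiny demo: one file predates the "full" backup, one was changed afterwards.
root = tempfile.mkdtemp()
old = os.path.join(root, "old.doc")
new = os.path.join(root, "new.doc")
for p in (old, new):
    with open(p, "w") as f:
        f.write("data")
last_full = time.time()
os.utime(old, (last_full - 3600, last_full - 3600))  # backdate the old file
os.utime(new, (last_full + 3600, last_full + 3600))  # "modified" after the full
print(differential_candidates(root, last_full))  # only new.doc is selected
```

Because the archive bit isn't reset, every nightly differential re-selects everything changed since the full, so the tapes grow through the week; that's the trade for the simple two-tape restore.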

The problem with sending data to tape when you can only hit 6 MB/second is that you're tearing up your tapes and your tape drive motor.  Doing a backup to disk *should* let you then copy the backup to physical tape much faster than from the fragmented, small-file-filled filesystem.

Also, re: the speed of D2D backups: yes, if the bottleneck is your server disk, then a faster target *won't matter*, even if the target were light-speed SSD... the bottleneck is the source.   But again, a VTL can help by giving you an intermediate target that can be copied to tape faster, and which can increase concurrency (back up many servers at once to the VTL).

Maybe Diskeeper isn't the best program of its type; I have used it and thought it did what it does well.... I know that Windows filesystems get fragmented over time, and that not only makes your disk work harder, but hurts performance as well.   (Oh for the days of HPFS!)


Expert Comment

ID: 24814333
You might also find that the antivirus program on the server holding the data is checking every file as it moves.

I found this out after I implemented a backup of our e-mail archiver, which created hundreds of thousands of tiny ~30 KB files. Our antivirus stopped each file and scanned it as it moved across the network, so our backup time increased by around 12 hours. After I excluded the folder from scanning, it performed fine and I got those hours back off our backup window.


Author Comment

ID: 24819090
Thanks again for your replies, and apologies for the late reply (really hectic at work these days).

Well, I have some more information.
I had the chance to have a look at the server room today and found out that the data is local: this server has 4 HDDs installed, where all the data resides.
With the data local to the server, I don't understand the slowness.
The antivirus has the same settings on all servers; I'll investigate, but I don't see why it should drag the process down here and not on any other server.
Apart from the antivirus suggestion, what other factors may slow this process?

Appreciate all the replies, thanks again.

Accepted Solution

SelfGovern earned 2000 total points
ID: 24830288
The most common cause of slow backups is having lots of small files.  Also consider a deeply-nested directory structure, fragmentation, long file names, other processes accessing the disk at the time of backup, and a slow source disk.  Small files will always be a problem -- I have seen fast servers slow to under 10 MB/sec when sequentially reading a bunch of 10 KB files.

Things to do:
- Put the files as close to the root directory as possible, without being in root, to minimize directory tree walking.
- Use the fastest possible source disk -- RAID 1+0 on a *good* *hardware* RAID controller
- Keep the disk defragmented (hopefully by a process that is not running during the backup!)
- Keep other applications from accessing this disk while backup is running, including background processes like indexing and defrag.
- If it's not a huge amount of data (i.e., tens to 100 GB, vs. 100s of GB to TB+), you could put this data on SSD (solid state disk), which will give you the fastest possible read times, since there's no physical movement involved in reads.  The challenge is that SSD has a high, but limited, number of write/erase cycles... so if these files change very heavily, you might wear out SSD disks faster than you'd like.  (And remember to TURN OFF defragmentation processes on SSD!)
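The directory-walking cost behind the first point is easy to demonstrate. This hypothetical Python sketch (my own illustration, nothing from Backup Exec) stats the same number of files laid out flat under one folder versus scattered down a deep directory chain, the way a file-by-file backup must traverse them:

```python
import os, shutil, tempfile, time

def walk_and_stat(root):
    """Walk the tree and stat every file, as a file-by-file backup must."""
    start = time.perf_counter()
    count = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            os.stat(os.path.join(dirpath, name))
            count += 1
    return count, time.perf_counter() - start

base = tempfile.mkdtemp()
flat = os.path.join(base, "flat")
deep = os.path.join(base, "deep")
os.mkdir(flat)

# 500 files directly under one folder...
for i in range(500):
    open(os.path.join(flat, f"f{i}"), "w").close()

# ...vs. the same 500 files spread 5-per-level down a 100-level chain.
path = deep
for i in range(100):
    os.makedirs(path, exist_ok=True)
    for j in range(5):
        open(os.path.join(path, f"f{i}_{j}"), "w").close()
    path = os.path.join(path, "sub")

n_flat, t_flat = walk_and_stat(flat)
n_deep, t_deep = walk_and_stat(deep)
print(n_flat, f"{t_flat:.4f}s", n_deep, f"{t_deep:.4f}s")
shutil.rmtree(base)
```

The deep layout typically takes noticeably longer even warm-cached, because every extra directory level is another open/readdir before a single data byte moves; on a cold, fragmented NTFS volume the penalty is much worse.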

If none of those help (and they may not help much; this small-file overhead is inherent to NTFS/FAT), then the question is, "How often do you have to restore single files?"  If the answer is "Rarely" or "Never", then the best solution is to put the files on a disk (stripe set, probably) of their own, and perform an image backup.  An image backup reads the disk sectors sequentially, and can give much faster speeds than a file-by-file backup (which you're doing now) that has to read the directory, walk the tree, read one file, go back to root, read the directory, walk the tree, read one file....

The problem with image backup is that restores will take significantly longer... but if restores are rare and this is just for archive in case of disaster, then that is probably not a problem.  Note that, even if there is "some" other data on this disk that does need occasional single-file restores, you will still back that up as part of the disk image (which gets *everything*), but you can also back up those other files separately as part of a file-by-file backup (specify that particular directory in a standard backup).

If an image backup is not practical for some reason, you've got one other choice, which is to use a disk target, then move that to tape.  With D2D2T (Disk to Disk to Tape), you use disk as the first target of your backup job, which creates a huge single file that is mostly contiguous (and is your backup job in the same format as if it had been written to tape)... then step 2 is to use your backup application to copy that to physical tape.  Since it's coming from a huge file (hopefully close to root!), you can get good backup speeds to tape (but this will not improve the original backup speed, since source disk is the bottleneck).
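The two-step D2D2T flow can be sketched like this (hypothetical Python; a tar file stands in for the backup-to-disk job and a plain file copy stands in for the "to tape" pass -- the paths are made up for the demo):

```python
import os, shutil, tarfile, tempfile

# Stand-in paths -- substitute your source share and your B2D folder.
source = tempfile.mkdtemp()   # the file server data
b2d_dir = tempfile.mkdtemp()  # the backup-to-disk folder
for i in range(50):
    with open(os.path.join(source, f"file{i}.txt"), "w") as f:
        f.write("x" * 1024)

# Step 1 (D2D): the slow, seek-heavy file-by-file pass happens once,
# writing one big, mostly contiguous backup file on disk.
staged = os.path.join(b2d_dir, "job1.tar")
with tarfile.open(staged, "w") as tar:
    tar.add(source, arcname="job1")

# Step 2 (D2T): copying that single file to tape is one fast sequential
# read -- no directory walking, no per-file opens.
tape_copy = staged + ".tape"
shutil.copyfile(staged, tape_copy)
print(os.path.getsize(staged), os.path.getsize(tape_copy))
```

The point of the staging step is exactly this shape change: thousands of scattered small reads become one contiguous stream, which is what a tape drive wants.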

If you're going to use D2D2T, the cheapest method is to use the backup application to create a D2D target on your server's hard disk.  Make sure it's big enough to hold the complete backup job.  The problems are that there is much more server overhead, you have to manage the space manually, and it's a server-by-server task, not something you can do for all servers easily.

The more expensive but much more scalable solution is to purchase a D2D backup system (a type of virtual tape library, or VTL), typically a Linux-based appliance that mimics multiple tape libraries and acts as a backup target for multiple servers at once.  The best VTLs allow some form of automigration, where the D2D system itself can copy or move the data to physical tape, so it doesn't have to go back over the network.  Different VTLs are available; they can have either iSCSI (simple, free, decent performance) or Fibre Channel (more expensive, high-performance) connectivity to your servers.

I'm pretty sure those are your options.   If you do look at VTLs or D2D backup systems, please consider the HP D2D2500 or D2D4000 series.  Obligatory disclaimer: yes, I do work for HP -- but everything here up to this paragraph is as vendor-neutral as you can get.

Author Closing Comment

ID: 31600396
Quite impressed by your responses; I now have lots to do here.
I thought I could just tweak some settings, but it seems I'll have to do a lot of data investigation.
I'll follow your steps and hopefully get faster throughput.

thanks again for the time you've all taken replying to this

