

BackupExec backup to disk speed

Zenith63 asked
Medium Priority
Last Modified: 2013-12-01
Hi guys,

I'm just looking for some feedback from others using something similar to this.

Basically I have a server with about 1.5TB of data (your average day-to-day data with lots of different-sized files).  I have a second server with BackupExec 12.5 installed, lots of SATA disk space, and an Ultrium LTO4 drive attached.  What I want to do is a disk-to-disk backup from the production server to my disk backup server, then from there onto tape.  I have a crossover cable between the servers to give 1Gb/s LAN connectivity.

I have my policy set up to do a full backup-to-disk job one day a week and differentials on the other days, then duplicate jobs to automatically copy the data from the backup-to-disk folder onto tape.  This all works perfectly in theory; the problem is the speed of the disk-to-disk part across the LAN.  The data shoots onto the tape at 2,000-3,000MB/min no problem, but coming across the LAN for the disk-to-disk part it runs at 250MB/min.  Doing the math on that, it will take 4-5 days to complete a full backup; I'll then get a couple of differentials in and be back to another 4-5 days.  Not really feasible.
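For reference, the 4-5 day figure follows directly from the numbers quoted; a quick sketch of the arithmetic (rates in MB/min, as used throughout this thread):

```python
# Back-of-the-envelope backup window from the figures in this post.
def backup_hours(total_mb, rate_mb_min):
    """Hours needed to move total_mb at rate_mb_min."""
    return total_mb / rate_mb_min / 60

data_mb = 1.5 * 1024 * 1024                # 1.5TB of file data, in MB
print(backup_hours(data_mb, 250) / 24)     # ~4.4 days at the observed LAN rate
print(backup_hours(data_mb, 2500))         # ~10.5 hours at the quoted tape rate
```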

So my questions are -
- is 250MB/min normal enough in this scenario, or am I flogging a dead horse with this particular method?
- what other ways are there without spending a lot of money?  The production server is a DL380 on a fibre-attached MSA1000 SAN, by the way.  The disk backup server is a DL320S.

All thoughts appreciated!

Is the server you're backing up severely fragmented, or under heavy load all the time? A gigabit network peaks around 7,500MB/min in a perfect world, but realistically 2,000-3,000MB/min would sound about right. If it isn't badly fragmented or stressed, try running the backups without the crossover, just over your normal LAN. A 100Mbit LAN should peak around 750MB/min, so still faster than what you're seeing now.
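Those ceilings are just line rate converted into this thread's units; a one-liner makes the conversion explicit:

```python
# Network line-rate ceilings converted to MB/min, the unit used in this thread.
def line_rate_mb_min(megabits_per_sec):
    """Theoretical ceiling: Mb/s -> MB/s (divide by 8) -> MB/min (times 60)."""
    return megabits_per_sec / 8 * 60

print(line_rate_mb_min(1000))  # gigabit: 7500.0 MB/min theoretical ceiling
print(line_rate_mb_min(100))   # 100Mbit:  750.0 MB/min theoretical ceiling
```

Real transfers never hit the ceiling; protocol overhead and disk speed at each end take their cut, hence the 2,000-3,000MB/min realistic estimate for gigabit.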


Thanks for the reply!

I didn't check the fragmentation but will tomorrow for sure.  Common sense should have made me check that one!
I only put the crossover in today to try and rule out network bandwidth issues.  Before the crossover I had both servers connected to the LAN with two teamed NICs each to gigabit switches, so 2Gb/s load-balanced connections.  The speed was about the same.
I spent most of the day copying files back and forward to try and detect a pattern.  If I copy a single 100MB file to the server, or any server on the LAN for that matter, it goes very fast, 2,000MB/min upwards.  I then started testing with a directory of about 3,000 files comprising 2GB of data, as a single file isn't a very realistic test.  I never got above 450-500MB/min with a straight Windows copy.  I also did an NTBackup of the same folder, which ran slightly faster than BackupExec, but not much.  So I've pretty much ruled out BackupExec itself as the problem, and from the speed of that 100MB file and the crossover test I don't think the network is the issue either.  I either can't read the data from the SAN fast enough or can't write to the BackupToDiskFolder fast enough.  By the way, the server is only busy 9-5; the backup speed doesn't improve much outside these hours, so I don't think load on the SAN is an issue.  Also, there are only a handful of servers on the SAN.
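For anyone wanting to repeat this kind of copy test, here is a minimal sketch that times a directory-tree copy and reports it in MB/min (the paths in the example are placeholders, not from the original setup):

```python
import pathlib
import shutil
import time

def copy_rate_mb_min(src, dst):
    """Copy the tree at src to dst (dst must not exist) and return MB/min."""
    total_bytes = sum(p.stat().st_size
                      for p in pathlib.Path(src).rglob("*") if p.is_file())
    start = time.perf_counter()
    shutil.copytree(src, dst)
    minutes = (time.perf_counter() - start) / 60
    return total_bytes / (1024 * 1024) / minutes

# e.g. copy_rate_mb_min(r"D:\testdata", r"\\backupserver\b2d\testdata")
```

Running it once with one big file and once with a few thousand small ones of the same total size shows the per-file overhead directly, which is what the 2,000 vs 450-500MB/min gap above suggests.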
The real question is what are others seeing backup-to-disk-wise, both to local BackupToDiskFolders and across their LANs?  Maybe 250MB/min is normal.  I find it hard to believe though, because 1.5TB doesn't seem like a lot of data to me, and lots of people use disk-disk-tape strategies and praise the improved backup speed.


My backup-to-disk jobs vary. My file server backup averages 700MB/min, I think due to the large number of small files. Exchange backs up at 2,000MB/min and Oracle at 3,000MB/min.



OK, so 700MB/min should be my target then.  Even at that, though, it will still take 1.5 days or so to complete a full backup.  How much data are you backing up?
Really, I'd say 700MB/min is a bit low, but it's only 670GB that it backs up, so a full backup starting Friday night is done by lunch on Saturday and I haven't worried about it much. With 1.5TB you'd be running around 36 hours, 1.5 days like you said.
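Both windows check out against the same arithmetic (sizes in GB, rates in MB/min):

```python
# Sanity-check of the backup windows quoted above.
def window_hours(size_gb, rate_mb_min):
    """Hours for a full backup of size_gb at rate_mb_min."""
    return size_gb * 1024 / rate_mb_min / 60

print(window_hours(670, 700))   # ~16.3h: Friday night to Saturday lunch
print(window_hours(1536, 700))  # ~37.4h: the "around 36 hours" for 1.5TB
```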


And that's assuming I can get to 700MB/min.  It was at 180 this morning; I'm up to 270 now, but there's a long way to go!  Any other thoughts besides the fragmentation?  What drives are on your destination, SCSI/SAS or SATA?
After this, the only option I can see is to stick the whole file server into a Hyper-V virtual machine and back the VHD up across the LAN.  As it would be one big file, it should go at the 2,000MB/min+ I was seeing with my 100MB file test, so I could just do full backups every night in 12 hours or so.
The backup server is eight 1.5TB 7,200RPM SATA drives in RAID 5. The file server is eight 300GB 15,000RPM SAS drives in RAID 5.

How many files are there, and what's the average size? Like mine: that 670GB is a little over a million files, averaging 600KB. Other than the possible fragmentation, a lot of the slowness just comes from the overhead of dealing with lots of small files. This thread is about different backup software, but I'd think a lot of it applies to Backup Exec as well.

If it is a lot of smaller files, you may see an improvement if you split it into two or more jobs. Say you have three main folders on the server: split them into three backup-to-disk jobs all running at once, then the backup-to-tape job can come through later and copy those B2D files to tape.
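Profiling the file mix before deciding how to split is cheap; a rough sketch (the root path in the example is a placeholder):

```python
import pathlib

def profile_tree(root):
    """Return (file_count, average_size_kb) for everything under root."""
    sizes = [p.stat().st_size
             for p in pathlib.Path(root).rglob("*") if p.is_file()]
    if not sizes:
        return 0, 0.0
    return len(sizes), sum(sizes) / len(sizes) / 1024

# e.g. profile_tree(r"D:\shares")
# For the 670GB set above, this would come back as roughly a million
# files averaging 600KB.
```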


A very similar hardware setup to here, actually.
There are just over 600,000 files with an average file size of 2.1MB.  In terms of fragmentation, Windows reports an average of 2.3 fragments per file, which seems quite small, though from the number of actually fragmented files it looks like most of this fragmentation is concentrated in about 10% of the files.  It also reports 800,000 excess fragments; not sure what that means though.
I'll have a look at that article now and get back to you.


I've checked through that article and everything looks OK.  I've started a defrag to run over the weekend, and I'm also rebuilding the disk backup server to move from RAID6 to RAID1+0; RAID6 on SATA was probably a bad choice.  My reason is that I notice when the backup is running, even if it's only going at 180MB/min, the server crawls along; just opening the Start menu takes time.  There are only so many things that can cause that, and disk I/O is definitely one of them!
Any more thoughts, keep 'em coming...


OK, I had a bit of a result on Friday.  I defragged the source server, but as you know the Windows defrag is pretty limited, and it didn't seem to make much difference.  I might look at getting Diskeeper or something for it.
The next possibility was my RAID6 array on the disk backup server.  The array was hanging off an HP P400 RAID controller with 256MB of cache, which is certainly one of their lower-spec models.  I rebuilt the server and used RAID10 this time instead (there are 12x 750GB SATAs in there, so space isn't an issue).  I reinstalled BackupExec 12.5 and kicked off the disk backup from the production server again.  It started off at 1,000MB/min, and when I was leaving half an hour later it was going at 1,800MB/min!  So it looks like this was the problem.  I'm surprised the RAID6 made that much difference, but nothing else has changed, and the server is now usable while a backup is running, not crawling.
I'll update this on Monday when I see how the backup ended up over the weekend, but it's looking very, very good.  Thanks for sticking with me, CrashDummy, and for the info on how your backups run!
Thanks to you too for the last update. I may have to look into switching mine from RAID 5 to 10 if my backups start to take too long in the future. I knew there was a lot of overhead with RAID 5 and 6, but didn't expect it to be that bad. I'm glad you got it running faster.


Just a final closing comment on this.  The backup finished over the weekend in 22 hours, with an average speed of 1,330MB/min, a massive improvement from 180MB/min.  So for this case the problem is sorted.
It's worth noting for future readers how important the destination server is in a disk-to-disk backup solution.  You hear day-in, day-out how disk-to-disk speeds up your backups and cuts down backup windows, and that, being secondary storage, it can use cheap disks.  In reality, the likes of LTO4 is going to beat disk-to-disk-over-LAN hands down every time, so backup speeds and time windows can become considerably worse, not better.  Using SATA disks is cheaper, but to get decent backup speeds you can't go with the space-efficient RAID5/6; you're looking at either striping (and who's going to do that on a server?) or mirroring.  Definitely something to keep in mind!  Things like in-file-delta backups and synthetic backups will improve this greatly, but I don't like overcomplicating something as crucial as backup that needs to be working in two years' time when you come back to it after not looking at it for months :).
Thanks CrashDummy!
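The size of the RAID6-to-RAID10 jump is in line with the textbook random-write penalties (two physical I/Os per logical write for RAID10, six for RAID6, four for RAID5). Backup streams are largely sequential, but the parity overhead on a modest controller shows up the same way. A rough sketch, where the 75 IOPS per 7,200RPM SATA disk is an assumed ballpark figure, not something measured on this hardware:

```python
# Textbook random-write IOPS for an array, given its write penalty.
def write_iops(disks, per_disk_iops, write_penalty):
    """Aggregate write IOPS = raw spindle IOPS divided by the write penalty."""
    return disks * per_disk_iops / write_penalty

PER_DISK = 75                        # assumed IOPS for a 7200RPM SATA drive
print(write_iops(12, PER_DISK, 6))   # RAID6:  150.0 write IOPS
print(write_iops(12, PER_DISK, 2))   # RAID10: 450.0 write IOPS
```

A threefold difference in write capacity on the same 12 spindles, which fits the experience above of the RAID6 server crawling at 180MB/min and the RAID10 rebuild sustaining 1,000MB/min+.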


Thanks for your help with this; without knowing the speeds you were getting, I wouldn't have gone any further with it!