Backup Exec 12.5 - need to improve backup performance

sunder_462
Dear Experts,

I am using Backup Exec 12.5 rev. 2213.
Daily: differential backups; weekly/monthly: full backups.
Backup device: an IBM 1/8 autoloader with a ULT3580-HH4 (LTO-4) drive.
Backup data is transferred to tape over an isolated network.
I am using "user-defined selections" for the backup. No remote agent is being used.
The full backup runs at approximately 1,100 MB/min, but the differential backup runs at only 700-800 MB/min.
The differential backup does not complete overnight and takes 17-18 hours or more. The differential backup is about 900 GB and the full backup is 1.8 TB.
I can't work out which changes would improve my backup environment.
We need to improve the throughput of the differential backup so that it can complete overnight (I do not want to use incremental backups, as I personally don't like them), or alternatively, any suggestion I can implement on my backup server to improve the backup environment, such as using B2D or adding one more tape drive, would be welcome.

Regards
Problem #1: you've got a tape drive with a native write speed of 80-120 MB/sec (4,800-7,200 MB/minute), and you are feeding it at about 20% of that speed or less. This causes severe stress on the physical tape and the tape drive mechanism, because the drive buffer is constantly emptying, the drive stopping, rewinding, starting again, repeat, repeat, repeat. Your tape media will fail sooner from the stretching and, in the process of failing, will shed more particles and make backups less reliable.
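To put numbers on that, here's a quick back-of-the-envelope check (a rough Python sketch, using the MB/min figures from the question above):

# Rough comparison of the reported backup speeds against LTO-4 native speed.
native_mb_min = 120 * 60          # LTO-4 native: ~120 MB/s = 7200 MB/min
full_mb_min   = 1100              # full backup speed reported above
diff_mb_min   = 750               # differential: midpoint of the reported 700-800 MB/min

print(f"Full backup: {full_mb_min / native_mb_min:.0%} of native drive speed")   # ~15%
print(f"Diff backup: {diff_mb_min / native_mb_min:.0%} of native drive speed")   # ~10%

# Expected duration of a 900 GB differential at the reported rate:
hours = 900 * 1024 / diff_mb_min / 60
print(f"900 GB at {diff_mb_min} MB/min takes about {hours:.0f} hours")           # ~20 hours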

What to do? If you are on a GbE network, the problem is almost certainly in the server and its disk subsystem, not in the tape drive. If you are on a 100 Mb/s network, your first step is to upgrade to GbE -- 100 Mb is useless with today's tape drives.

After eliminating the network as a bottleneck, see whether IBM has any diagnostic utilities equivalent to HP's free Library and Tape Tools; that's the first place to start. After that, it's time to optimize the server and network, optimize the backup, or go to a D2D2T backup strategy.

0) Optimize the server: Make sure the server disk is kept defragmented, as fragmentation kills backup performance. Check the server with PerfMon (or your OS's equivalent performance monitor) to see if you are memory- or processor-limited, and if so, upgrade. A faster server with the same disk will in general be able to feed a backup application faster. Analyze your disk subsystem and see whether faster hardware would allow data to be read from the disks faster. Use RAID 10 or RAID 5 instead of JBOD. Use a HW RAID card, like an HP Smart Array controller, instead of the OS-based SW RAID that most inexpensive RAID cards rely on. Turn off unnecessary processes, like Windows Indexing, which can seriously slow performance on older systems.
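If you'd rather script the check than watch PerfMon by hand, here is a minimal sketch that samples the same kinds of counters while a backup job runs. It assumes Python with the third-party psutil package installed, which is my assumption, not part of Backup Exec:

# Sample CPU, memory and disk read throughput once a second during a backup.
import time
import psutil   # assumption: pip install psutil

prev = psutil.disk_io_counters()
for _ in range(60):                       # watch for one minute
    time.sleep(1)
    cur = psutil.disk_io_counters()
    read_mb_s = (cur.read_bytes - prev.read_bytes) / (1024 * 1024)
    prev = cur
    print(f"CPU {psutil.cpu_percent():5.1f}%   "
          f"RAM {psutil.virtual_memory().percent:5.1f}%   "
          f"disk read {read_mb_s:7.1f} MB/s")

If CPU or memory sits near 100%, or the disk can't sustain reads well above your current backup rate, that's where the bottleneck is.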

Optimize the backup.  
1) If you're doing encryption or compression in software (in the backup application), stop -- either of those can slow a backup by as much as 50%. If encryption is a requirement, your LTO-4 drive can do that in hardware at no performance cost by selecting the hardware encryption option in your backup application and following the instructions. Likewise, all modern tape drives do compression in HW, so don't do it in software!
2) For your full backups, if restores are rare, consider doing an image (that is, sector by sector) backup, instead of a file (file by file) backup.  This will likely speed your backups up quite a bit, at the cost of slowing restores.  If it's fast enough, you could even consider switching to daily image full backups, instead of daily file differentials.
3) There may be entire directories or large temporary files that you can exclude from your backups. Excluding that data shrinks the backup and makes it finish faster.
4) If you can't fix this by any of the steps above, I suggest you turn OFF HW compression in your tape drive -- because to the extent that your data is compressible, that compression means that the tape write buffer empties faster, making your buffer under-run problems worse.  Without compression, I want my LTO-4 to be writing at at least 50MB/second.   If using compression with a 2:1 compression ratio, double that.   With a 1.3:1 compression ratio, make it 65MB/sec (note: these numbers are on an HP drive; IBM's adaptive write technology is not as effective, and you want faster data feed to avoid buffer under-run with IBM drives).
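To make the feed-rate targets in point 4 concrete, here is a quick calculation (a Python sketch; the 50 MB/s baseline and the compression ratios are the numbers given above):

# Minimum host feed rate needed to keep an LTO-4 streaming, per the guidance above.
base_mb_s = 50                      # uncompressed baseline
for ratio in (1.0, 1.3, 2.0):       # compression ratios discussed above
    mb_s = base_mb_s * ratio
    print(f"{ratio}:1 compression -> feed the drive at >= {mb_s:.0f} MB/s "
          f"(~{mb_s * 60:.0f} MB/min)")
# Compare those MB/min figures with the 700-800 MB/min the differential is getting now.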

5) With some backup applications, it's possible to have multiple readers active at once -- so if I have a physical C: drive and a physical D: drive, I can create a backup job that is one process backing up C: and one process backing up D: at the same time, both writing to the same physical tape.  Since they are two different physical disks, the two processes are not competing for the disk head, and you should be able to get significantly better performance.   This functionality is dependent on your backup application.
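For illustration only, here is the shape of that idea in Python: two reader threads, one per physical disk, feeding a single sequential writer through a bounded queue. The C:\data and D:\data paths are made up, and in practice you would let the backup application do this rather than roll your own:

# Conceptual sketch: two readers on separate physical disks, one sequential writer.
import io, os, queue, tarfile, threading

chunks = queue.Queue(maxsize=64)            # bounded buffer between readers and writer
SOURCES = [r"C:\data", r"D:\data"]          # hypothetical paths on two physical disks

def reader(root):
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                chunks.put((path, f.read()))    # whole-file reads; fine for small files
    chunks.put(None)                             # sentinel: this reader is finished

def writer(reader_count):
    finished = 0
    with tarfile.open("backup.tar", "w") as tar:
        while finished < reader_count:
            item = chunks.get()
            if item is None:
                finished += 1
                continue
            path, data = item
            info = tarfile.TarInfo(name=path.replace(":", ""))
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

readers = [threading.Thread(target=reader, args=(src,)) for src in SOURCES]
w = threading.Thread(target=writer, args=(len(SOURCES),))
for t in readers + [w]:
    t.start()
for t in readers + [w]:
    t.join()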

6) If you don't see any, or enough, improvement with the above, we have to move to a bit of a different architecture -- instead of backing up disk directly to tape (D2T), we're going to go to disk, and then to tape: D2D2T.    The simplest and cheapest way of doing this is to get some sort of fast HW RAID disk array and use it as a backup-to-disk target.   When the backup is complete, you use the backup application to copy that backup job to physical tape.  NOTE: To avoid a two-step restore process, it's important to copy the "backup job", not the *files* to physical tape.

This improves your performance in several ways when copying data to physical tape:  a) we've got an initial disk target, so we don't have to worry about buffer under-run; disk is inherently a random-access medium, not sequential as tape is.  b) the data written to the disk will be a huge file written in contiguous chunks of data that are near the root directory, instead of the tiny files in deeply nested directories that your backup application is currently seeing.  

7) If the earlier steps don't get you there, the alternative to backup-to-disk as your middle step in D2D2T is to back up to a virtual tape library (VTL). These devices are designed to be backup targets, are optimized to accept multiple simultaneous backups, and are often optimized to stream data to physical tape afterward. HP's D2D Backup System is one example of this technology ( http://www.hp.com/go/d2d ). A second benefit is that they perform deduplication -- storing unique blocks only once -- so you can retain data much longer than you could on a backup-to-disk partition, allowing fast restores. The HP D2D also lets you attach a tape drive directly to the appliance, so the copy to physical tape has no performance impact on your backup server or network.


Let me know when you've worked through these suggestions and how much difference they make.


Yes, one of the options you can use is B2D.
One other way to look at this -- until you know what the bottleneck is, you don't know what to fix.  If the bottleneck is the file system on your server, or in the network for network backups, adding a second tape drive or using D2D backup won't speed things up any.  That's why the first things I suggested were to check the network for any bottlenecks, followed by running tests to see if some of the performance problems could be server-side, or due to tape drive health.

Once you get through those things, you can -- in something close to the order I put above -- try steps to optimize the server, optimize the retrieval of files from the file system, optimize the streams, and finally, spend more money on re-architecting your backups into a D2D2T solution.

Author

Commented:
Thanks SelfGovern for your valuable input.

Diagnostic utility: IBM has the ITDT (TDX) utility, but it is not as good as HP L&TT; it is really only good for firmware upgrades.

Optimize the server: I will go with this option first and let you know.

Optimize the backup:
1. We are already using the hardware compression option.
2. I don't know where the image backup option is in Backup Exec; please tell me about this option in Backup Exec.
3. I will exclude the temp directories and let you know the status.
4. If no other option works, I will disable hardware compression on the library.
5. Backup Exec does not have any multistreaming option.
6. If the above steps don't work, I will go with this.
7. This is a good option for backup; I will also suggest it to my managers.

Regards

Author

Commented:
We have Gigabit Ethernet...
Let me suggest that you disable HW compression until you get the issues solved. Because there's a problem getting data to the tape drive fast enough, using HW compression only increases the amount of buffer under-run you'll experience. If we can get this backup to 80 or 100 MB/second, we can probably use HW compression with a song in our hearts and joy in our souls.

You can also use Library and Tape Tools to test the speed of reading data from your hard drive, simulating how a backup works.

I've got to bring up a BE system and see if it's got an image option; I know lots of backup applications do. I'll let you know. It looks like BE 2010 does... I'll check on 12.5.

Eliminating temp files and directories from your backups won't increase the throughput, but because you won't be backing up transient data (that you know will have no value tomorrow), the backups will finish sooner.

Oh, there may be some potential tunings you can do, also.  
- Put BE and its log files on the fastest disk you have, since each file backed up generates an entry in the BE log and database.
- make sure BE is configured to use the largest supported block size, and probably the largest buffer size as well.  NOTE: Make a note of your current block size.  You will most likely have to use this block size when restoring tapes.
- If you're running off of a SCSI or Fibre Channel host bus adapter, there is a parameter called "MaximumSGList" that may need to be changed or added (see the snippet below).
- I have also seen indications that people have significantly better performance using IBM tape drivers instead of Symantec tape drivers, so try switching if you're using Symantec drivers.   Likewise, there may be a better driver for your HBA.





[From Larry Fine]
A change to a block size greater than 64 KB on Windows can only take effect if there is also a registry change to a parameter called "MaximumSGList" (maximum scatter/gather list). This change must be made for the HBA to which the tape drive is connected. Use regedit from the Run command line to start the registry editor.
Navigate the registry tree to
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\adpu160m\Parameters\Device\MaximumSGList
"adpu160m" is just an example of an Adaptec 29160 HBA registry entry. You will need to determine the proper driver name for the SCSI card used in YOUR system for the tape drive and substitute it in that path.
The REG_DWORD value MaximumSGList may not exist for a given HBA because the default is being used, so it may be necessary to create it as a new value in regedit.
The formula for calculating the value of MaximumSGList is as follows:
MaximumSGList = (tape block size in bytes / 4096) + 1
Or, set MaximumSGList to a value of ff (hex) if in doubt.
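As a worked example of that formula, assuming a 256 KB tape block size (check which block size your Backup Exec jobs actually use before applying it), a small Python sketch:

# MaximumSGList for an assumed 256 KB block size, per the formula above.
blocksize = 256 * 1024                  # bytes
max_sg_list = blocksize // 4096 + 1     # (262144 / 4096) + 1 = 65
print(f"MaximumSGList = {max_sg_list} (hex {max_sg_list:#x})")   # 65 (hex 0x41)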


Author

Commented:
Dear Experts,
Sorry for the late reply...

Our process will take some more time to implement the changes. I am closing this question by accepting SelfGovern's first answer, as it covers all the possible solutions...

Thanks SelfGovern for your valuable input...

Regards
Happy to help!  
