NetBackup 6.5 block-level delta backups

I have recently started in an environment that is using NetBackup 6.5 for backups. Currently the backup windows are barely being met, and I was wondering whether some changes could be made, at least in some areas. I have never worked with NetBackup, and in fact have always worked with disk-to-disk backups, never with tape.

There are about 9 TB of data that need to be backed up. There are two backup servers, one with two LTO-3 tape drives and one with a single LTO-3 drive.

1. Is compression for the tape done on the drive itself? Is this automatic, or something that has to be enabled? Also, does this work alongside any compression done by NetBackup, or is it one or the other?

2. Can block-level incremental backups be done with NetBackup 6.5? I have looked and looked, but cannot find anything relating to 6.5; plenty about BLIB for VMs in 7.0. I also see an option in the software for block-level incrementals, but only under the snapshot feature.

Thomas Rush commented:
Yes, that's a more difficult problem.

For the few biggest servers, you could set aside some time to isolate performance by doing a single job at a time and seeing the numbers you get (say, run a full backup of server 2 out of rotation on a Tuesday night just so you get the performance numbers you need).

Or, get an approximation by starting with a blank tape: note when you start writing data to it and when you stop, then see how much data is on the tape after the backup.
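The arithmetic for that blank-tape estimate is just data written divided by elapsed time. A quick sketch (the timestamps and size here are made-up placeholders, not real job figures):

```python
from datetime import datetime

# Effective drive throughput from the blank-tape method: note when the
# job started and stopped, and how much data the tape reports afterward.
started = datetime(2011, 6, 7, 22, 0)     # placeholder start time
finished = datetime(2011, 6, 8, 3, 30)    # placeholder end time
gb_written = 1100                          # placeholder: GB on the tape

hours = (finished - started).total_seconds() / 3600
print(f"~{gb_written / hours:.0f} GB/hour effective throughput")
```

Compare that figure against the drive's native spec to see whether the drive or something upstream is the bottleneck.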

Or, if you have HP LTO-3, LTO-4, or LTO-5 tape drives, you can download the free TapeAssure tool and probably get a lot of what you want in real time, comparing the particular backup job being written to what you see on the tape drive itself.
Thomas Rush commented:
In answer to your questions, and a bit more for good measure --

1) Tape compression is done in the hardware of the tape drive itself. It is on by default if the backup application detects a compression-enabled tape drive... and that's been the case for long enough that I can't recall having seen it not enabled on a functioning drive.

1a) Compression works by finding patterns in the data and replacing them with tokens. After good compression, the data will be close to random, because the patterns have been removed. If you try to "stack" hardware compression on top of software compression, the only thing you'll be likely to notice is that your server CPU utilization goes up (because compression is a very processor-intensive task)... but you won't see additional compression benefit. You may see that backup performance goes down because of the additional CPU load. So -- use one or the other, almost certainly the tape drive hardware compression for best performance.
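You can see this "compressed data looks random" effect with any general-purpose compressor; a minimal illustration using Python's zlib (random bytes stand in for already-compressed data):

```python
import os
import zlib

# Patterned data compresses well; data that is already compressed looks
# essentially random, so a second compressor finds nothing to remove.
patterned = b"ABCD" * 250_000        # 1,000,000 bytes of repeating pattern
random_ish = os.urandom(1_000_000)   # stand-in for already-compressed data

print(f"patterned  -> {len(zlib.compress(patterned)):>9} bytes")   # tiny
print(f"random-ish -> {len(zlib.compress(random_ish)):>9} bytes")  # no gain
```

The random-ish input actually comes out slightly *larger* than 1,000,000 bytes, because the compressor still pays its framing overhead -- which is exactly why stacking software and hardware compression buys you nothing.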

2) I don't know whether NetBackup supports block-level incrementals, also referred to as software-enabled server-side deduplication... but some of the Symantec products do, probably under the heading of "PureDisk". This is a pretty cool technology, but the challenge is, again, much higher processor load on the backup server. VMware is starting to integrate better with hardware; for instance, the VAAI API lets disk arrays do a lot of work that used to have to be done by the hypervisor... but it's not as "there" yet for backup. Veeam does some nice things with VMware backups, as does HP's just-released Data Protector 6.1.2. Or, for more $$, NetBackup integrates with both HP D2D Backup Systems and Data Domain systems to share some of the backup work between the backup target appliance and the backup server.

3) What you don't tell us is what your actual backup throughput is. What do the NetBackup logs say was your average speed for each backup job? It's possible that faster tape drives will help -- but if your current performance is less than the 80 MB/sec native speed of LTO-3, then you'll be wasting money on new drives. An LTO-3 drive with uncompressible data can perform at up to 288 GB/hour; three drives is ~850 GB/hour, or 9 TB in about 11 hours. If your data does compress, then you can multiply that number by the compression ratio you see (1.5:1 means multiply by 1.5) to get your theoretical maximum speed. Again -- if you're not seeing that in actual use, then the bottleneck is not in the tape drives, and you need to find the actual bottleneck and eliminate it.
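The numbers above are easy to verify yourself; a back-of-the-envelope check using the nominal spec figures quoted (not measured values):

```python
# Sanity-check the LTO-3 throughput arithmetic, using decimal GB.
native_mb_per_sec = 80                                     # LTO-3 native speed
gb_per_hour_per_drive = native_mb_per_sec * 3600 / 1000    # 288 GB/hour
drives = 3
fleet_gb_per_hour = gb_per_hour_per_drive * drives         # 864, i.e. ~850 GB/hour

data_gb = 9 * 1000                                         # 9 TB to protect
hours_for_full = data_gb / fleet_gb_per_hour               # ~10.4, "about 11" hours
print(f"per drive: {gb_per_hour_per_drive:.0f} GB/hour")
print(f"{drives} drives:  {fleet_gb_per_hour:.0f} GB/hour")
print(f"full pass: {hours_for_full:.1f} hours")
print(f"at 1.5:1 compression: {data_gb / (fleet_gb_per_hour * 1.5):.1f} hours")
```

If your measured job speeds come out well below these theoretical figures, the drives aren't the constraint.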

4) Other things you can do to make your backups take less time include --
- Keep your filesystem defragmented!  File fragmentation wreaks havoc on your backup speed
- Make sure that processes like virus scans or indexing don't happen while backups are in process
- If you're not getting up to your theoretical possible speeds, perhaps optimizing your backup jobs to multiplex -- send data from two physically different spindles to the same tape drive simultaneously -- might get you faster backups, at the potential cost of slower restores.
- If the disks you're backing up aren't very fast, upgrade the storage on your servers so you can read the data faster.
- Get rid of duplicate files that can be hidden all over your disk and increase your backup times.  Archive files no longer used or useful, or just delete them.
- Perform an analysis of your data.   If there's a directory that doesn't change, then put that on a special archive tape, and don't continue to back that up as part of your full backups... or if there's a huge directory with mostly static files, back it up as a full once, and after that set it up for differential backups, only performing a full backup rarely.

5) One of the slickest technologies for situations like yours is called Incremental Forever with Synthetic Full backups. This is supported by HP Data Protector and IBM TSM (perhaps others). What's different is that you set aside some disk space somewhere and perform a full backup once. After that, every day you do only incremental backups. Periodically -- say once a week -- you tell the backup application to create a "synthetic full backup", in which it looks through its catalogs and creates a physical tape that has exactly the same files as if you'd performed a full backup to tape at that point in time. This works really well, since data may change 1%/day, or about 5%/week... so instead of having to back up 9 TB every week, you'd only have to back up about 500 GB weekly, or 100 GB daily.
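To put rough numbers on that (using the change rates quoted above; your real change rate will vary):

```python
# Rough sizing for incremental-forever against 9 TB, assuming the
# ~1%/day change rate mentioned above (an assumption, not a measurement).
total_gb = 9 * 1000
daily_change_rate = 0.01

daily_incremental = total_gb * daily_change_rate     # ~90 GB/day
weekly_incremental = daily_incremental * 5           # ~450 GB per work week
print(f"daily incremental:  ~{daily_incremental:.0f} GB")
print(f"weekly incremental: ~{weekly_incremental:.0f} GB")
```

That's the "about 100 GB daily / 500 GB weekly" figure: a tiny fraction of rewriting the full 9 TB each week.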
ryan80 (Author) commented:
Thanks for the response.

1) How do I see the speed that the tape drives were running at? I have 100 jobs that run through the backup window. How can I tell what the total speed of the tape drive was?

2) I will have to call up and find out more information. I am used to backup software that uses block-level deltas for backups, so even though I was backing up servers that were around 1 TB, there would only be a few GB of changed data.

I am not sure if NetBackup does this automatically, and after searching through the documentation I cannot find anything that supports this. I have never heard of it referred to as client-side deduplication, so maybe that is what I need to look for.

3) I will have to check on the speed. I know that a third drive was added only a few months ago to keep the backups within the backup window. There are 100+ servers, so the throughput can certainly be supplied to the tape.
Thomas Rush commented:
Check the logs for each backup job. Although I haven't run NetBackup in a while, most every backup application will give you a summary -- x GB backed up in y:zz -- and will either display the average speed, or you can figure it out from that information.

ryan80 (Author) commented:
The issue that I have is that there are multiple streams going to each tape drive. So while I can look at one of the backup jobs, there are multiple jobs running at the same time.

It would be possible to go through each one, look up which drive it is going to, record the speed, and then add it up with the rest of the servers, but doing that for all of the drives across several days would become very time-consuming. I am hoping there is a better way.
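The roll-up I'm after is essentially this: sum the concurrent streams per drive rather than looking at any single job's speed. A sketch with made-up job figures (not real NetBackup output), assuming the streams to a drive run over roughly the same window:

```python
from collections import defaultdict

# Each tuple: (drive name, GB written, elapsed minutes). These numbers
# are invented examples standing in for whatever the job summaries report.
jobs = [
    ("drive1", 120, 90),
    ("drive1", 80, 90),
    ("drive2", 200, 120),
]

# drive -> [total GB, longest elapsed hours]. Using the longest elapsed
# time is a simplification that assumes the streams overlap fully.
totals = defaultdict(lambda: [0.0, 0.0])
for drive, gb, minutes in jobs:
    totals[drive][0] += gb
    totals[drive][1] = max(totals[drive][1], minutes / 60)

for drive, (gb, hours) in sorted(totals.items()):
    print(f"{drive}: {gb:.0f} GB in {hours:.1f} h -> {gb / hours:.0f} GB/hour")
```

The point is that multiplexed streams to one drive must be summed, not averaged: two 90-minute jobs writing 120 GB and 80 GB to the same drive mean that drive moved 200 GB in 1.5 hours.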