iSCSI DAS/Backup Exec performance question

Received my new DroboPro yesterday and ran a test backup-to-disk job last night using a copy of the normal differential tape job. The normal job takes 6 hours on average; this test took 8.5 hours backing up a hair over 500 GB.
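Some quick back-of-envelope math on those numbers (my own calculation, treating "500 gigs" as 500 GB of 10^9 bytes each) shows the effective throughput of that test job:

```python
# Effective throughput of the 8.5-hour, ~500 GB test job described above.
# Assumes GB = 10^9 bytes; adjust if GiB was meant.
data_gb = 500
hours = 8.5
mb_per_s = data_gb * 1000 / (hours * 3600)
print(f"Effective throughput: {mb_per_s:.1f} MB/s")  # roughly 16 MB/s
```

That is well below what either gigabit Ethernet or a single SATA drive can sustain, which suggests the bottleneck is elsewhere (small files, random I/O, or protocol overhead) rather than raw link speed.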

Server is an HP ProLiant DL360 G5: 3.25 GHz Xeon, 3.5 GB RAM, dedicated PCIe 1 Gb NIC for iSCSI. Wired directly to the DroboPro, which also has a 1 Gb interface.

Having never set up or configured iSCSI before, I let the install disk do that for me.

Anyway, are there any "best practices" or performance-related things I can do, either in Backup Exec (ver 12) or within Server 2003 itself?

Since this test job and the previous night's job backed up 583 GB and 596 GB respectively, I would have assumed that SATA-300 disks would write faster than an LTO4 tape drive...

Any advice?
Ben Hart Asked:
I should have written this first in MS Word because the browser just ate my homework.

GbE is 95 MB/s at best, even with jumbo frames. SATA drives typically do 90 to 120 MB/s, but this can drop to a few MB/s when access is random and file system metadata is being updated. The option to optimise for reduced CPU load can slow network operations that need a fast reply before the next request is sent. An LTO4 drive can probably write 270 MB/s max if the data is 2.5:1 compressible, while the tape runs at an average of 108 MB/s raw instead of 120 MB/s.
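Putting the figures above side by side makes the weakest-link point concrete (these are the estimates quoted in this thread, not measurements):

```python
# Rough per-hop ceilings in MB/s from the figures above; the whole chain
# can only run as fast as the slowest hop.
links = {
    "GbE with jumbo frames": 95,
    "SATA sequential write": 90,    # low end of the 90-120 MB/s range
    "SATA random/small-file": 5,    # can collapse to a few MB/s
    "LTO4 raw tape speed": 108,
}
bottleneck = min(links, key=links.get)
print(f"Bottleneck: {bottleneck} at {links[bottleneck]} MB/s")
```

If the backup set is mostly small files, the random/small-file case dominates, which would explain disk-to-disk running slower than tape.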

Obviously it is only as fast as its weakest link.
I agree with Tygrus2. To add to what Tygrus2 posted, I would also check some of the options within Backup Exec itself. Do you have the "Verify" option checked? This can take up extra time. We also have an LTO4 library, and I have seen it back up to tape at an average of 1.5 GB/min with gusts up to 6 GB/min. When I went to DAS (Direct Attached Storage) it sometimes ran slower. I then changed it from RAID5 to RAID0. We did this for two reasons: 1) we needed more space, and RAID0 allowed this; 2) it has much faster read/write than RAID5 or RAID10. This helped our times. *We are aware that if we lose a disk, the entire RAID is gone.*
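For comparison with the per-second figures elsewhere in this thread, those Backup Exec rates convert like so (treating GB as GiB here, which is an assumption about how Backup Exec reports):

```python
def gb_per_min_to_mb_per_s(rate_gb_min):
    """Convert a GB/min backup rate to MB/s (GB taken as GiB)."""
    return rate_gb_min * 1024 / 60

print(f"1.5 GB/min = {gb_per_min_to_mb_per_s(1.5):.1f} MB/s")  # average
print(f"6.0 GB/min = {gb_per_min_to_mb_per_s(6.0):.1f} MB/s")  # bursts
```

Note the bursts land right around the practical gigabit Ethernet ceiling, so a tape library on a fast bus can genuinely outrun an iSCSI disk target.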

We had these same issues, and it was because we were trying to shove data the size of a lemon through a cable the size of a garden hose. That, again, is what Tygrus2 was saying.
Ben Hart (Author) Commented:
@tygrus: I just read that the limit for Gigabit Ethernet was 125 MB/s, but I'm unsure if that was theoretical or what. Your mentioning the weakest link got me thinking, though, and I chose poorly with regard to the hard drives themselves. The Western Digitals I chose have a max rate of 100 MB/s from buffer to disk. I should have put more time into that decision, because Hitachi Deskstars, for just a little more in price, have a max of 202 MB/s (buffer to disk).

@ggipson: No, I do not use the verify option, specifically because it doubles the amount of time the job runs. And I tossed around the idea of using the Drobo drives in a RAID0 format; however, since one of the goals was to stop daily jobs to tape altogether and only push to tape for the weekly jobs, I can't afford to lose the data protection RAID5 offers.
How different were your job times after switching from RAID5 to RAID0?
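For what it's worth, the classic textbook write-penalty arithmetic gives a feel for the RAID5-vs-RAID0 trade-off on random writes (this is generic RAID math, not DroboPro-specific; the 8 disks and 75 IOPS/disk below are illustrative assumptions for 7200 rpm SATA drives):

```python
# Back-end I/Os consumed per front-end random write (textbook values):
# RAID0 writes once; RAID10 mirrors; RAID5 does read-data, read-parity,
# write-data, write-parity.
WRITE_PENALTY = {"RAID0": 1, "RAID10": 2, "RAID5": 4}

def effective_write_iops(disks, iops_per_disk, level):
    """Rough random-write IOPS for an array (illustrative assumptions)."""
    return disks * iops_per_disk // WRITE_PENALTY[level]

for level in ("RAID0", "RAID10", "RAID5"):
    print(level, effective_write_iops(8, 75, level))
```

Large sequential writes can do full-stripe updates that mostly avoid the RAID5 penalty, so the gap matters most when the backup degenerates into lots of small random I/O.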

Also, I'm going through and updating the NIC and disk/array controller drivers and firmware, since I realized the backup server was using SCSI array controller drivers from 2006. Anyway, I found that Microsoft's iSCSI Initiator 2.08 is dated 2008. I've googled a bit, and it doesn't appear they've updated it at all. Should I continue to use it, or is there a better alternative?

I think this document may answer your queries.
Ben Hart (Author) Commented:
I am evaluating that article right now.
Ben Hart (Author) Commented:
OK, I can't do anything about the number or types of files, the block size, compression (since the disk destination doesn't support it), and SCSI is irrelevant here. But I am going to bump the amount of RAM on the backup server, try hard-setting the page file size, and I've already swapped the NIC out for a TOE-enabled one. Didn't see much difference there.

On your NIC with the TCP/IP offload engine: if you find settings that trade bus load or CPU usage against performance, high interrupts with high CPU usage is better than low CPU usage with slow network performance. They also now make iSCSI offload NICs to speed this up. Have you tested the network speed with the NetSpeed tool above? Or is that a problem because you can't access the remote box to install it, it runs a different OS, or you can't run normal network traffic over the iSCSI link? The DroboPro should not be used through a router.

Have you tested file creation/copy speed between systems ?
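As a crude stand-in for a dedicated network test tool, a small Python TCP probe like the sketch below (my own illustration, not a tool from this thread) gives a rough MB/s figure. Here both ends run over loopback; to test the real link, run the receiver on one box and point the sender at the iSCSI NIC's address.

```python
# Minimal TCP throughput probe: push a fixed payload through a socket
# and time it. Loopback only here, for illustration.
import socket
import threading
import time

CHUNK = 64 * 1024
TOTAL = 64 * 1024 * 1024  # 64 MiB test payload

def receiver(listener, results):
    conn, _ = listener.accept()
    received = 0
    while received < TOTAL:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    conn.close()
    results["received"] = received

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # ephemeral port
listener.listen(1)
port = listener.getsockname()[1]
results = {}
t = threading.Thread(target=receiver, args=(listener, results))
t.start()

sender = socket.socket()
sender.connect(("127.0.0.1", port))
payload = b"\x00" * CHUNK
start = time.time()
sent = 0
while sent < TOTAL:
    sender.sendall(payload)
    sent += len(payload)
sender.close()
t.join()
elapsed = time.time() - start
listener.close()
print(f"{results['received'] / elapsed / 1e6:.0f} MB/s")
```

A loopback number only proves the stack works; the interesting figure is the same test across the actual cable to the DroboPro segment.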

How many HDs are you using? You are only going to see 20 to 50 MB/s in typical use. The DroboPro doesn't like small blocks/files. Random read/write may improve with more drives, but the system is limited by the unit's CPU and OS. Standard HDs don't do well with small files and random reads.
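At that 20-50 MB/s range, the expected wall-clock time for the ~583 GB nightly job works out as follows (my arithmetic, GB taken as 10^9 bytes):

```python
# Estimated backup window for the ~583 GB job at various effective rates.
job_gb = 583
for mb_s in (20, 35, 50):
    hours = job_gb * 1000 / mb_s / 3600
    print(f"{mb_s} MB/s -> {hours:.1f} h")
```

The low end of that range lines up almost exactly with the observed 8.5-hour run, so the DroboPro may simply be performing as expected for this workload.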

You can try O&O Defrag or another defragmenter that can sort by directory and filename. Does that work through iSCSI, or is there another tool for the DroboPro?
Ben Hart (Author) Commented:
Nice comparisons in the two links you provided; however, I'm not concerned about the lack of read performance, as nothing reads from it except during a restore, and even then I'm not in that big of a hurry.

I will try the NetSpeed tool, though, and report back.
Question has a verified solution.