We have a backup solution which uses D2D2T (VMware VM backup using VCB to transfer the VMDK files to the VCB backup proxy server for backing up to tape).
I need to increase the hard disk throughput of the staging disks on the proxy/backup server to greater than 200 MB/sec sustained (a combination of sequential read and sequential write) to ensure our backups complete within the backup window.
The source data is on a SAN which is fibre-attached to the backup server. We have confirmed we can read data from the SAN to the staging disks (holding tank) at 110 MB/sec sustained throughput.
We have also confirmed we can write to the backup tape at a sustained 110 MB/sec from the staging disks (the tape drives are rated at 240 MB/sec native).
However, since the backup server can multiplex jobs and be reading and writing from the staging disks at the same time, the staging disks need to achieve 220 MB/sec when reading from the SAN and writing to tape simultaneously. At the moment, when reading and writing at the same time, read and write throughput are both halved or slightly worse (as expected).
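To put a number on that duplex behaviour, here is a minimal sketch of the kind of concurrent sequential read/write test that reproduces it (Python; the staging path, file size, and block size are assumptions, not our exact test, and a dedicated tool like Iometer would do the same job):

```python
# Minimal concurrent sequential read/write test against the staging volume.
# STAGING, FILE_SIZE and BLOCK are placeholders -- adjust for your setup.
# Note: the Windows file cache can inflate read figures unless the test
# file is much larger than RAM.
import os
import threading
import time

STAGING = r"E:\stage"        # assumed staging volume path
FILE_SIZE = 2 * 1024**3      # 2 GB per stream
BLOCK = 1024 * 1024          # 1 MB sequential I/O blocks

def writer(path, results):
    buf = b"\0" * BLOCK
    start = time.time()
    with open(path, "wb", buffering=0) as f:
        for _ in range(FILE_SIZE // BLOCK):
            f.write(buf)
    results["write"] = FILE_SIZE / (time.time() - start) / 1024**2

def reader(path, results):
    start = time.time()
    with open(path, "rb", buffering=0) as f:
        while f.read(BLOCK):
            pass
    results["read"] = FILE_SIZE / (time.time() - start) / 1024**2

results = {}
# existing.dat must be a pre-created file at least FILE_SIZE bytes long.
t1 = threading.Thread(target=writer, args=(os.path.join(STAGING, "out.dat"), results))
t2 = threading.Thread(target=reader, args=(os.path.join(STAGING, "existing.dat"), results))
t1.start(); t2.start()
t1.join(); t2.join()
print("write: %.0f MB/sec, read: %.0f MB/sec" % (results["write"], results["read"]))
```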
Now for the tech stuff....
OS = Windows Server 2003.
Dual 2000 MHz CPUs + 2 GB memory. Neither CPU nor memory is significantly loaded during file transfers.
Staging disks are 3 x Hitachi Ultrastar 300 GB Ultra320 SCSI 15K drives with 16 MB cache each, rated at 72–123 MB/sec sustained throughput, configured as software RAID 0 and attached to an LSI Logic Ultra320 PCI-X 133 MHz SCSI card.
The SCSI card for the staging disks is on a dedicated 133 MHz PCI-X bus (the fibre card to the SAN and the SCSI card for the tape device are also on separate dedicated 133 MHz PCI-X buses and are both rated at 133 MHz, so all buses are running at their best speed).
RAID 0 has been selected because redundancy of the staging area is not needed and, to my knowledge, striping gives the best I/O throughput. I have tried different stripe sizes from 64 KB up to 8 MB, but with only marginal improvements.
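For what it's worth, my own back-of-envelope estimate, assuming RAID 0 scales linearly (which is precisely the assumption question 1 below asks about) and taking the ~110 MB/sec aggregate we see from the current 3-disk stripe under mixed load:

```python
# Back-of-envelope RAID 0 scaling estimate. Assumes linear scaling with
# disk count, which is the very assumption being questioned here.
import math

observed_stripe = 110.0          # MB/sec aggregate, measured on the 3-disk RAID 0 under mixed load
per_disk = observed_stripe / 3   # effective per-disk contribution, ~37 MB/sec
target = 220.0                   # MB/sec needed for simultaneous read + write

disks_needed = math.ceil(target / per_disk)
print("effective per-disk rate: %.1f MB/sec" % per_disk)       # ~36.7
print("disks needed for %.0f MB/sec: %d" % (target, disks_needed))  # 6, i.e. 3 more
```

On that naive arithmetic, six disks (three more) would be needed, but whether the scaling really is linear is exactly what I'm asking below.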
Windows Performance Monitor shows that the disks are constantly being accessed, disk queue length on the staging disks is around 20, and disk access time is over 20 ms, so these are definitely reaching their limits.
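(Those figures came from Performance Monitor; the same counters can be logged from a script via typeperf, which ships with Windows Server 2003. The PhysicalDisk instance name "2 E:" below is an assumption — substitute your staging volume's instance.)

```python
# Log the staging-disk counters to CSV via typeperf for later review.
import subprocess

counters = [
    r"\PhysicalDisk(2 E:)\Avg. Disk Queue Length",
    r"\PhysicalDisk(2 E:)\Avg. Disk sec/Transfer",
    r"\PhysicalDisk(2 E:)\Disk Read Bytes/sec",
    r"\PhysicalDisk(2 E:)\Disk Write Bytes/sec",
]
# Sample once a second for 60 seconds, writing results to staging_disks.csv.
subprocess.call(["typeperf"] + counters + ["-si", "1", "-sc", "60", "-o", "staging_disks.csv"])
```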
Before investing in additional hardware, my questions are:
1) Will adding more disks (identical models) to the RAID 0 increase throughput considerably? I have seen articles both for and against this. Has anyone actually proven this? Practical experience rather than a purely theoretical answer, please.
2) If yes to 1) above, theoretically, how many more of the same-model disks would be needed to reach 200 MB/sec sustained?
3) Will changing the Ultra320 SCSI card to an Ultra320 SCSI hardware RAID controller (hardware RAID versus software RAID) increase throughput considerably? Remember, neither CPU nor memory is heavily utilised on the server during backups, and I believe that is the biggest difference between hardware and software RAID, other than protection against data loss on power failure, which is not a concern here.
We do not wish to change our backup topology or invest in 100K worth of large SSDs, so please keep answers within the scope of the questions asked.