PowerVault 650F slow performance

Here's my situation: I have a Dell PowerVault 650F Fibre Channel array attached to a PowerEdge 2400 via one QLogic QL2200/66 HBA using HSSDC-to-DB9 cabling. The PV is currently configured with a single 5-disk RAID 5 LUN, and I am basically in the testing phase.

The problem is slow read performance from the PV. Copying a 200 MB file to the RAID 5 array takes about 10 seconds, which I consider decent. The real problem is reading the file back: if I copy that same file from the RAID 5 array to the local disk on the server, it takes around 90 seconds. I have tried it with read caching for the LUN enabled and disabled, with the same outcome. I have also tried several other RAID LUN types to see whether it could be a RAID 5 issue, which it doesn't appear to be, as RAID 1 had the same performance (besides which, EMC claims that RAID 5 should be the fastest on a PV, a.k.a. CLARiiON 5600).

The drives in the PV are 18 GB 10K RPM Cheetahs, and I assume I should be getting far greater transfer rates than 2 MB/s from a 10K RPM Fibre Channel drive. If I can't get this thing to read at a decent speed it is basically garbage, so if anyone has any ideas about what could be causing such poor performance, here's your chance to shine.

Thank you
Have you tried disabling the Indexing Service on the drive(s)? Go to the C: drive, select Properties, and clear the "Indexing Service" option.

Perhaps some further testing is in order. For example, try the same test with a different target; perhaps the source system is slower on reads than on writes.

Also try a much larger data sample for comparison.

Since the problem is only in one direction, it doesn't seem likely to be the Fibre Channel network itself; the problem is more likely at one end or the other. Hopefully a few more tests will help to identify where it is.
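To make those direction tests repeatable, a small script that times a copy each way and reports MB/s can help. This is only a sketch; the drive letters and paths in the usage comment are hypothetical placeholders, not anything from the original setup:

```python
import os
import shutil
import time

def timed_copy(src, dst):
    """Copy src to dst and return the observed throughput in MB/s."""
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    size_mb = os.path.getsize(src) / (1024 * 1024)
    return size_mb / elapsed

# Hypothetical usage -- C: is the local disk, E: is the array LUN:
#   timed_copy(r"C:\test\200mb.bin", r"E:\test\200mb.bin")  # write to array
#   timed_copy(r"E:\test\200mb.bin", r"C:\test\copy.bin")   # read from array
```

Running both directions with the same file makes the read/write asymmetry easy to quantify, and swapping in a different source or target isolates which end is slow.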

This may be out of left field, but have you tried replacing the fibre cable? Typically it's a pair of fibres, one transmit and one receive, so if you're having a really bad receive issue, it's possible there's a break or microbend in the fibre. Or there could be goobers on the connectors.

At the very least, physically inspect the cable and clean the ends; better yet, try a new cable.
Just a thought.
dcnormanAuthor Commented:
I removed indexing from the C: drive last night, but it didn't have any effect. It did, however, give me an idea that I should have thought of much earlier. I tried copying the 200 MB file from the RAID 5 array on the PowerVault to another PC across our 100 Mb network, which took only 26 seconds. So it seems that the array is communicating with the server fine, but something is wrong with the C: drive on the server (I kicked myself for about 10 minutes after this one). It was late at that point, so I decided to go home and continue tackling this issue tomorrow, which is now today. So what I am going to do is put another drive in there for testing, on a completely separate controller, as I fear I may have a bad disk in the array that is the C: drive on the server. As soon as I finish testing the new drive, I'll post an update here.
dcnormanAuthor Commented:
OK, the PowerVault actually works fine; the problem lies in the performance of the drives on the server. So I am going to change this question: does anyone know why my SCSI drives might be performing so poorly? Here's the setup:
1 PowerEdge 2400 server with:
  1 PERC2/Si controller with 64 MB RAM
  6 9.1 GB Ultra160 Seagate Cheetah drives (ST39204LC)
  512 MB memory
  Windows 2000 Server
The 6 disks are configured as 2 stand-alone volumes (C: and F:) and 1 4-disk RAID 5 array (E:).
I can copy a 200 MB file between any of the different drives in a couple of seconds, which seems normal to me, but after doing this several times the system bogs down somehow and the copy takes around 40 seconds. During the slow copying period there is no drive activity either, which could indicate a large disk queue. While copying, Performance Monitor shows the physical-disk queue length at 1.8 (which admittedly means nothing to me), and processor utilization never goes over 10%. Resetting the server, via software or a hard power-off, doesn't seem to help the problem at all. Any ideas on this one?
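One way to capture that "fast at first, then bogs down" pattern in numbers is to repeat the same copy in a loop and log the throughput of each pass. A minimal sketch; the paths in the usage comment are hypothetical placeholders:

```python
import os
import shutil
import time

def copy_throughput_series(src, dst, passes=10):
    """Copy src to dst repeatedly; return a list of MB/s figures, one per pass."""
    size_mb = os.path.getsize(src) / (1024 * 1024)
    results = []
    for _ in range(passes):
        start = time.perf_counter()
        shutil.copyfile(src, dst)
        results.append(size_mb / (time.perf_counter() - start))
    return results

# Hypothetical usage:
#   copy_throughput_series(r"C:\test\200mb.bin", r"F:\test\200mb.bin")
# A slowdown like the one described (seconds per copy jumping to ~40 s)
# would show up as a sharp drop in the later entries of the list.
```

Logging per-pass figures alongside the Performance Monitor disk-queue counter would show whether the degradation correlates with queue length.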
Are the hard drives all using the same firmware version?
Perhaps one drive is hogging the bus, slowing the whole thing down. I saw this twice with a drive (same make and model) that had an older firmware version.
dcnormanAuthor Commented:
All are on the same firmware. At one point I actually had all the drives set up as separate volumes and used IOMeter to test throughput. With a 10 MB file I would get around 7-8 MB/s write performance on each drive. The I/O response time was always over 1 second, too (generally around 1200 ms).
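A rough way to reproduce that kind of per-drive test without IOMeter is a sequential-write loop that tracks overall throughput and the worst per-I/O latency. This is only a sketch under assumptions: the path and block size are hypothetical, and Python's overhead means it approximates rather than replaces a real tool like IOMeter:

```python
import os
import time

def write_benchmark(path, total_mb=10, block_kb=64):
    """Write total_mb of zeros in block_kb chunks (unbuffered).

    Returns (throughput in MB/s, worst single-write latency in ms).
    """
    block = b"\0" * (block_kb * 1024)
    n_blocks = total_mb * 1024 // block_kb
    latencies = []
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        for _ in range(n_blocks):
            t0 = time.perf_counter()
            f.write(block)
            latencies.append((time.perf_counter() - t0) * 1000.0)
    elapsed = time.perf_counter() - start
    return total_mb / elapsed, max(latencies)

# Hypothetical usage:
#   write_benchmark(r"F:\test\bench.bin", total_mb=10)
```

Per-write latencies in the hundreds of milliseconds, like the ~1200 ms response times mentioned above, would point at the controller or bus rather than the file being copied.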
dcnormanAuthor Commented:
OK, here's what I have determined: the PERC2/Si controller is a complete piece of crap, and that's about all there is to it. It's a RAID controller with 64 MB of cache that goes unused because there is no battery backing it up. I plugged a PERC2/QC, which does have a battery, into the SCSI backplane the Si used to be connected to, and saw triple the write performance and about double the read performance. So this is going to be my solution: I just won't use the built-in controller.

Thanks for all of the help, I am going to close this question now.
Question has a verified solution.