dcnorman

asked on

PowerVault 650F slow performance

Here's my situation: I have a Dell PowerVault 650F Fibre Channel array attached to a PowerEdge 2400 via one QLogic QL2200/66 HBA using HSSDC-to-DB9 cabling. The PV is currently configured with a single 5-disk RAID 5 LUN, and I am basically in the testing phase.

The problem is slow read performance from the PV. Copying a 200 MB file to the RAID 5 array takes about 10 seconds, which I consider decent. The real problem is reading the file back: if I copy that same file from the RAID 5 array to the local disk on the server, it takes around 90 seconds. I have tried it with read caching for the LUN enabled and disabled, with the same outcome. I have also tried several different RAID LUN types to see if it could be a RAID 5 issue, which it doesn't appear to be, since RAID 1 showed the same performance (besides which, EMC claims RAID 5 should be the fastest on a PV, aka CLARiiON 5600).

The drives in the PV are 18 GB 10K RPM Cheetahs, and I assume I should be getting far greater transfer rates than 2 MB/s from a 10K RPM Fibre Channel drive. If I can't get this thing to read at a decent speed it is basically garbage, so if anyone has any ideas on what could be causing such poor performance, here's your chance to shine.

Thank you
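For anyone wanting to reproduce the timing test described above, here is a minimal sketch (not from the original thread, written in modern Python) that copies a file and reports the effective throughput; the source and destination paths are hypothetical placeholders.

```python
import os
import shutil
import time

# Hypothetical paths -- substitute the real source and destination.
SRC = r"E:\testdata\200mb.bin"   # e.g. a file on the PowerVault LUN
DST = r"C:\temp\200mb.bin"       # e.g. the server's local disk

def timed_copy(src, dst):
    """Copy src to dst and report the effective transfer rate in MB/s."""
    size_mb = os.path.getsize(src) / (1024 * 1024)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    print(f"{size_mb:.0f} MB in {elapsed:.1f} s = {size_mb / elapsed:.1f} MB/s")

if __name__ == "__main__":
    timed_copy(SRC, DST)
```

One caveat: the original test was a plain file copy, and the Windows file cache can inflate a second run of the same file, so copying a fresh file each time gives a fairer comparison.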
infotrader

Have you tried disabling the Indexing Service on the drive(s)? Go to the C: drive, select Properties, and remove "Indexing Service".

- Info
Perhaps some further testing is in order. For example, try the same test with a different target; perhaps the source system is slower on reads than writes.

Also try a much larger data sample for comparison.

Since the problem is only in one direction, it does not seem likely that it is the Fibre Channel network. I'm just thinking the problem is more likely at one end or the other, so hopefully a few more tests will help identify where it is.
This may be out of left field, but have you tried replacing the fiber cable? Typically it is a pair of fibers, one transmit and one receive, so if you're having a really bad receive-side issue it's possible there's a break or microbend in that fiber, or there could be debris on the connectors.

At a minimum, do a physical inspection of the cable and clean the ends; better yet, try a new cable. Just a thought.
dcnorman

ASKER

I removed indexing from the C: drive last night, but it didn't have any effect. It did, however, give me an idea that I should have thought of much earlier. I tried copying the 200 MB file from the RAID 5 array on the PowerVault to another PC across our 100 Mb network, which took only 26 seconds. So it seems the array is communicating with the server fine, but something is wrong with the C: drive on the server (I kicked myself for about 10 minutes after this one). It was late at that point, so I decided to go home and continue tackling this issue tomorrow, which is now today. What I am going to do is put another drive in there for testing, on a completely separate controller, as I fear I may have a bad disk in the array that is the C: drive on the server. As soon as I finish testing the new drive I'll post again here.
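As a quick sanity check on those numbers (my arithmetic, not from the thread):

```python
# 200 MB read from the PowerVault and pushed over 100 Mb Ethernet in 26 s.
size_mb = 200
seconds = 26
mbytes_per_s = size_mb / seconds      # ~7.7 MB/s
mbits_per_s = mbytes_per_s * 8        # ~62 Mb/s
print(f"{mbytes_per_s:.1f} MB/s = {mbits_per_s:.0f} Mb/s of a 100 Mb link")
```

Roughly 60% utilization of a 100 Mb link is about as much as a single-stream network copy of that era could be expected to sustain, which supports the conclusion that the PowerVault read path is healthy and that the earlier ~2 MB/s figure reflects the server's local disk.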
OK, the PowerVault actually works fine; the problem lies in the performance of the drives on the server. So I am going to change this question: does anyone know why my SCSI drives might be performing so poorly? Here's the setup:
1 PowerEdge 2400 server with:
  1 PERC2/Si controller with 64 MB RAM
  6 9.1 GB Ultra 160 Seagate Cheetah drives (ST39204LC)
  512 MB Memory
  Win 2000 server
 
The 6 disks are configured as 2 stand-alone volumes (C: and F:) and 1 4-disk RAID 5 array (E:).
I can copy a 200 MB file between any of the different drives in a couple of seconds, which seems normal to me, but after doing this several times the system bogs down somehow and the copy takes around 40 seconds. During the slow copies there is no drive activity either, which could indicate a large disk queue. During the copy, Performance Monitor shows the physical disk queue at 1.8 (which admittedly means nothing to me), and processor usage never goes over 10 percent. Resetting the server, via software or a hard power-off, doesn't help the problem at all. Any ideas on this one?
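For what that counter means (my gloss, not from the thread): Avg. Disk Queue Length is roughly the number of I/O requests outstanding at the disk, and by Little's Law it equals the I/O rate times the average time each request spends at the disk. A small sketch with hypothetical round numbers, chosen only to show how a 1.8 reading can arise:

```python
# Little's Law: outstanding requests = arrival rate * time in system.
# Both figures below are assumptions for illustration, not measurements
# taken from the thread.

iops = 80            # I/O requests per second reaching the disk (assumed)
latency_s = 0.0225   # average service + wait time per request (assumed)

avg_queue_length = iops * latency_s
print(f"Avg. Disk Queue Length ~= {avg_queue_length:.1f}")   # ~1.8
```

A sustained value much above ~2 per physical spindle is the usual rule of thumb for a disk bottleneck, so 1.8 is borderline rather than obviously pathological; 40-second copies with idle drives point more toward the controller, which is where the thread ends up.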
Are the hard drives all using the same firmware version? Perhaps one drive is hogging the bus and slowing the whole thing down. I saw this twice with a drive (same make and model) that had an older firmware version.
All drives are on the same firmware. At one point I actually had all the drives set up as separate volumes and used IOMeter to test throughput. With a 10 MB file I would get around 7-8 MB/s write performance on each drive, and the I/O response time was always over 1 second (generally around 1200 ms).
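A quick cross-check of those two numbers (my arithmetic; it assumes each IOMeter request effectively moves the whole 10 MB test file, which the post does not state):

```python
observed_mbps = 7.5   # midpoint of the reported 7-8 MB/s
request_mb = 10       # assumption: each request moves the whole 10 MB file
response_s = 1.2      # reported ~1200 ms response time

implied_mbps = request_mb / response_s                  # ~8.3 MB/s
in_flight = (observed_mbps / request_mb) * response_s   # Little's Law, ~0.9
print(f"~{implied_mbps:.1f} MB/s with ~{in_flight:.1f} request in flight")
```

Under that assumption the latency and throughput figures agree with roughly one slow request in flight at a time, i.e. the per-request service time itself is the bottleneck rather than queueing, which fits the unusable write cache described in the next post.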
OK, here's what I have determined: the PERC2/Si controller is a complete piece of crap, and that's about all there is to it. It's a RAID controller with 64 MB of cache that is unusable because there is no battery backing it up. I plugged a PERC2/QC, which does have a battery, into the SCSI backplane the Si had been connected to and saw roughly triple the write performance and about double the read performance. So this is going to be my solution: I just won't use the built-in controller.

Thanks for all of the help; I am going to close this question now.
ASKER CERTIFIED SOLUTION
GhostMod
(Accepted solution available to Experts Exchange members only.)