HP E200i poor write performance

Hi all,

I am using ATTO Disk Benchmark to test my company's HP ML350 G5 with an E200i controller running RAID 10 (128K stripe size), with the cache set to 50%/50% read/write and the 128MB BBWC installed ...

The array uses four 7200rpm hot-plug SATA drives.

And the OS is Windows Server 2003 R2 x64 with 3GB RAM.

The client side runs Windows 7 RC, so I can see the speed while copying some big ISO files from the server to the client ...

The ATTO test result is here.

The graph looks odd at the 512, 8192, etc. block sizes ...

Write performance looks very poor. How can I improve it? Is it a SATA disk problem, or is the E200i simply too low-end? Would the latest 750GB 7200rpm SATA drives perform better?

The tested read speed, copying a 600MB ISO from the server to the client, is 25-40MB/s (the burst speed looks like it can exceed 60MB/s; I have done the copy several times).
I also have an HP DL360 G4 with a RAID 0 of 72GB 10K SCSI drives that can manage 69MB/s on the same copy ...
So it is not a problem with the 3Com gigabit switch.

The E200i has already been upgraded to the latest firmware and driver.

Any suggestions are welcome.

Best Regards,

Is NCQ enabled? If not, do so ... but in your situation it won't help until you eliminate the networking bottleneck.

It is highly likely your problem is network related.

Prove it to yourself by doing disk benchmarks on the local machine only. If the local results are much faster than what you get copying files over the network, that shows you the problem is the network.
explorer1979Author Commented:
Hi dlethe,

I can't enable NCQ, since the E200i doesn't support it ...

I have read on other forums that the E200i should do at least 40MB/s write speed.

But I am getting just 4-5MB/s .....

So I don't think it is a network problem, since I also have an HP DL360 G4 whose read and write speeds are totally different from the ML350 G5's ...

If it is a network problem, how do I check?

explorer1979Author Commented:
I also tried logging in to the server via RDP ...
and doing the same file copy within the same partition.

The speed looks the same as the network write speed (I estimated it by timing how many seconds the copy took).

So I don't think the problem is in the networking.

explorer1979Author Commented:
One more note.

This is a file server using DFS and DFS-R ....
explorer1979Author Commented:
Now, on the server itself (locally), copying the same file (about 400MB) within the same partition takes over 1 minute 30 seconds ...

So I really don't think it is a network problem ...
Fair enough, now we're getting somewhere ...
Let's see if anything else is going on.  Run perfmon and have it record I/O on the same volume before, during, and after the copy.   Is there a lot of other activity going on?  

I am concerned that your chunk size in the RAID is way too big and the file system is getting bogged down. With a 128KB stripe size and 4 disks, if an application needs to write just one byte, your disks are going to have to write 512KB.

Now imagine all of the other things that may be going on, like writing log files, NTFS journals, updating last-access-time on every file that has been touched.

Is your NTFS also set up for an allocation size of 512KB? If not, then it is inefficient. If NTFS is 4KB, which is probably what you have, then every time the OS wants anything at all, it is going to generate 32X more data I/O than is necessary, no matter what.
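A quick back-of-the-envelope sketch of that mismatch (the exact multiplier is debatable and gets revised later in the thread; this just takes the plain ratio of one stripe segment to one NTFS cluster):

```python
# Rough write-amplification estimate: how much larger one RAID stripe
# segment is than one NTFS allocation unit (cluster).
# Values come from the setup in the question; treat the result as an
# upper bound, since a smart controller may avoid touching a full segment.

def write_amplification(stripe_kb, cluster_kb):
    """Ratio of one stripe segment to one file-system cluster."""
    return stripe_kb // cluster_kb

stripe_kb = 128    # E200i stripe size configured on this array
cluster_kb = 4     # NTFS default allocation unit

print(write_amplification(stripe_kb, cluster_kb))  # prints 32
```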

explorer1979Author Commented:
And this RAID 10 (1+0)

has two logical disks:

C: at 18GB and E: at 458GB ..

I don't know whether this is relevant or not..
explorer1979Author Commented:
NTFS is 4KB by default ....

So if I change the RAID stripe size from 128KB to 64KB, will it be better?
explorer1979Author Commented:
And how did you calculate that 32X figure for 4KB NTFS??? Based on what standard?

Thank you for your help and valuable time.

My mistake, it is only 16x, sorry it is 12 AM here so cut me some slack :)

Also, it is relevant that you partitioned the 0+1; this makes it even worse.

First, the RAID stripe size is how big an I/O gets issued to each disk at a time. Your stripe size is 128KB, and since you have RAID0+1, each disk needs to read/write 128KB at a time to satisfy an I/O request (the controller will do the mirrored part for you, so it is not fair to penalize the RAID1 part of RAID10, just the RAID0 part). I do not know if the firmware in the RAID controller is smart enough to read less than the stripe size, but it certainly can never take that shortcut when the host wants a write.

If you are doing large block I/O, then you certainly need RAID10, but you went wrong by putting C: on the same RAID group. The OS partition gets pounded with small-block I/O, which is killing the performance of the E: drive.

Your disks are good for approx 100 I/Os per second. But due to your RAID0+1, you need 2 disks to satisfy any I/O, so you are only capable of satisfying 50 IOPS. Now, since you have 2 logical disks, let's do a rough estimate and divide by 2 (so C: and E: each get half of the available I/O).
This gives you 25 I/Os per second total out of all 4 disks ... best-case scenario. No wonder performance is so bad.
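That rough estimate can be written out as arithmetic (deliberately pessimistic; a real controller overlaps I/O across spindles, so treat it as a floor, not a measurement):

```python
# Pessimistic IOPS budget for 4 x 7200rpm SATA in RAID 1+0,
# with two logical disks carved out of the same physical array.

iops_per_disk = 100    # ballpark random IOPS for a 7200rpm SATA drive
mirror_penalty = 2     # every write must land on both halves of a mirror
logical_disks = 2      # C: and E: share the same spindles

effective_iops = iops_per_disk // mirror_penalty    # 50
per_volume = effective_iops // logical_disks        # 25 per volume, worst case

print(per_volume)  # prints 25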

At the very least, you should break it up into 2 x RAID1s. Make the C: stripe size 4KB (in both NTFS and the RAID). Make E:\ something larger, perhaps 32KB, and make the NTFS allocation unit match.

Then use C: for files that have smaller I/Os,  E:\ for files that are going to have lots of large block I/Os.

explorer1979Author Commented:

Thank you very much. My company didn't have money to buy more disks, which is why C: and E: ended up in the same group on the same card...

If I break it up, how can I avoid reinstalling the OS and DFS?
I have Symantec Backup and Recovery Server Edition; can I just image the C: and E: drives,

then buy two additional SATA drives for the OS only (2x 250GB in RAID 1),
keep the other 4 disks as RAID 10 (HP calls it 1+0),

and then restore the images to those arrays?

Thank you very much, and I wish you a nice dream :-)

explorer1979Author Commented:
Adding points for any other suggestions.
andyalderHaemorrhoids victimCommented:
Am I right in thinking your 4 * 7.2k SATA disks in RAID 10 perform at about the same write speed as 1 * 10K U320 disk? Well, that's to be expected; just compare the seek speeds. Don't forget write speed is divided by 2 for RAID 10, since both disks have to be updated, so you're really comparing 2 * 7.2k vs 1 * 10K.


15ms for an average I/O on a 7.2k SATA, 8ms for 10K U320.
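Those service times translate into a rough per-spindle random-IOPS ceiling (ignoring queuing and sequential access, so purely illustrative):

```python
# Convert an average I/O service time (ms) into a rough random-IOPS ceiling.
def iops_from_service_ms(ms):
    return 1000 / ms

sata_7200 = iops_from_service_ms(15)   # 7.2k SATA at ~15ms per I/O
u320_10k = iops_from_service_ms(8)     # 10K U320 at ~8ms per I/O

print(round(sata_7200), round(u320_10k))  # prints 67 125
```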
There's a problem with this response.  

Yes, with RAID1 (mirroring), both drives have to be updated, and there are only two drives, but there's nothing stopping the controller from updating both drives at once.

But this is RAID10, 4 drives, and it can spread writes across the striped drives so that it can actually write twice as fast. The total write bandwidth of 2 x RAID1 is actually the same as 1 x RAID10, and with the latter, every write is twice as fast.

The only real benefit to splitting a RAID10 into two RAID1s is when write activity is over double what one RAID1 set can handle; it then prevents I/O in one area from affecting the other. Although that is modulo the cache not being hogged by one of them anyway.
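That aggregate-bandwidth point can be sketched with a made-up per-disk figure (60MB/s is a placeholder, not a measurement of these drives):

```python
# Compare aggregate sequential write bandwidth:
# one 4-disk RAID 10 vs two separate 2-disk RAID 1s.

per_disk_mb_s = 60   # hypothetical sustained write speed of one drive

# RAID 10: a single stream stripes across 2 mirrored pairs.
raid10_one_stream = 2 * per_disk_mb_s     # 120 MB/s to one writer

# Two RAID 1s: each volume is limited to one pair, but they add up.
raid1_per_volume = per_disk_mb_s          # 60 MB/s per volume
raid1_aggregate = 2 * raid1_per_volume    # 120 MB/s total

print(raid10_one_stream, raid1_aggregate)  # prints 120 120
```

The totals match; the difference is that RAID 10 hands the whole budget to a single writer, while the split arrays isolate two workloads from each other.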

I have the same controller here with an ML350 G5, all firmware and drivers up to date, with similarly ugly performance curves in ATTO.  Mine's got the max 128M cache as well.  I'm using a brand-new pair of WD RE3 drives, and I don't think this is a very good RAID controller.  With the stock 3x146GB 15K RAID-5, sequential writes still bottlenecked at 20MB/sec.  Transfer rates are all over the map at different block sizes.  Mine keeps spewing event-log errors about bad blocks (it's a brand-new pair of drives), but the performance curve is identical, so I think this controller doesn't handle SATA drives well.  I set up RAID controllers all the time, and Promise software-based RAID-1s such as the TX2650 crush the performance of this piece of junk.
andyalderHaemorrhoids victimCommented:
They're really only intended to run drives that have HP firmware on them. You shouldn't have anything in the event log as the controller hides drive errors from the OS.
I've now resolved the problem: write performance won't go above 20MB/sec unless drive write caching is enabled.  This can't be enabled unless you upgrade the RAID managers to the newer versions, as older versions fail to enable it even when the drivers and firmware support it on the E200i.

With this resolved, under ATTO the write performance still drops off terribly at block sizes above 128k.  If this is important, you can set the array accelerator to 100-0 read-write priority, and the write performance progression will be natural and even faster than with the controller write cache enabled--110MB/sec in my case with WD 1TB RE3 drives.  With the priority set to 75-25, the controller tops out at around 80MB/sec.

In my case, the final configuration I used was a 75-25 R/W balance on the array accelerator with drive write caching enabled.  This provides most of my full sequential transfer rates at block sizes below 256k, and still provides the normal random-write performance gains on 4k write operations in Crystal DiskMark load tests.  All of our ML350 G5s (all with E200is) have redundant power supplies and are battery-backed with UPSes, of course.

The final piece of this equation is RAID-5 arrays, which are CPU-limited and won't provide write performance beyond around 30MB/sec, even with drive write caching enabled.  In my case with the RE3s, I converted down from a 15k RAID-5 to this RAID-1 to improve write performance on our Exchange/file server.  On our other servers, which run HP 15k and 10k RAID-5s, even with drive write caching enabled, writes would not go above ~30MB/sec, despite read speeds of up to 170M/sec on one machine (15k 3.5in).  These configurations would be limited by the controller's CPU performance (or lack thereof) due to the increased complexity of RAID-5.  With gains being at best 50% on writes in these configurations with this controller, there is less incentive to increase the risk of potential RAID sync problems in worst-case scenarios.
Also, regarding the bad-sector errors in the event logs: this is another buggy "feature" of this E200i RAID controller.  When you boot with the new array's logical drive to test it (having removed the drives from the original RAID-5 logical drive), the controller leaves the Device Manager entry for the first array's logical drive in the list of devices.  Every time Windows attempts to communicate with this device, it leaves a spew of bad-block entries in the Event Viewer's system log.  Once the old logical drive pointing to the missing array is removed using HP's RAID manager, these entries disappear.  This could definitely be more intuitive, as it makes the new array appear to have a DOA hard drive...
" ...These configurations would be limited by the controller's CPU performance (or lack thereof) due to the increased complexity of RAID-5."

No, it has nothing to do with the increased complexity. We're talking nanoseconds to do the math, and milliseconds to read/write the data.  CPU overhead is statistically insignificant.

The reason you have the bottleneck in RAID5 is due to the extra I/O required, as well as the bus and I/O limitations of SATA, which can't be dual-ported.

The RAID-5 performance bottlenecks at 30M/sec I mentioned are on SAS RAID-5s, not SATA RAID-5s.  The SATA array I mentioned was a RAID-1.  The 15k SAS RAID-5 topped out at around 30M/sec, and the 10k SAS RAID-5 topped out at under 25M/sec on Crystal DiskMark's sequential write tests.  There may be more I/O required, but it doesn't make sense that this I/O would make a RAID-5 array with 170M/sec sequential read performance top out at 31M/sec on writes.
Yes, it makes a lot of sense; you are not factoring in block sizes, the additional I/O, and even bus saturation.  300MB/sec isn't a heck of a lot of bandwidth.

Do the math as a starting point, and consider what has to be done to not only write the data, but to read the disk(s) to generate the parity (redundant data), and then to write that out as well.  With reads, you are just doing reads, and not only that, the load is spread out a bit, as the disk holding parity rotates and the controller never even reads it.
OK, let's do some math.  If your 300MB/sec reference is the SATA spec, this card is actually limited to SATA1's 150MB/sec per port for SATA drives.  Despite this low bandwidth ceiling vs SAS, the SATA mirror turns out efficient results in its RAID-1 setup, with >100M/sec on reads and writes from its array.  If SATA in fact isn't dual-ported, it would seem strange that I can get 130M/sec ATTO reads from the RAID mirror--reading the two drives at that effective speed should have saturated the 150M/sec SATA bus in a single-ported state?

Again, the remaining unresolved write performance bottleneck is on the SAS RAID-5's which have even higher bandwidth vs SATA, so I'm unsure what to make of your logic.  The overhead of the controller's CPU processing RAID-5 write operations would seem to be the limiting factor to me, as SAS RAID-5 sequential write transfer rates are bottlenecking at levels that a mirrored array (with lower performance drives on a lower bandwidth interface) can even beat on 4k random operations.  Am I missing something?
When you write on RAID5 vs RAID1, there is MUCH more to do.  In RAID1, the same data is being written in two places, and the controller can do this easily.

(ignore block sizes, and everything else .. just consider it is optimal for purposes of below)

But when you do RAID5, you write the XOR plus you write the data; these are not the same, so you can't take a shortcut.  Not only that, but you have to do a read on the parity drive.  The read has to come before the write, so this blocks.
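That read-modify-write cycle can be sketched with XOR parity on toy byte strings (the values are arbitrary; the point is the four I/Os per small write):

```python
# RAID 5 small-write penalty: updating one data block costs four I/Os,
# because new parity = old parity XOR old data XOR new data.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

old_data   = b"\x0f\x0f"   # block being overwritten
old_parity = b"\xf0\xf0"   # parity over the whole stripe, old_data included
new_data   = b"\xaa\xaa"

# I/O 1: read old data.  I/O 2: read old parity.  Both must finish first.
new_parity = xor(xor(old_parity, old_data), new_data)
# I/O 3: write new data.  I/O 4: write new parity.

# Sanity check: stripping the data block out of the parity leaves the same
# contribution from the untouched disks, before and after the update.
assert xor(new_parity, new_data) == xor(old_parity, old_data)
print(new_parity.hex())  # prints 5555
```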

andyalderHaemorrhoids victimCommented:
Use HP Smart Update Manager under Windows to update the drivers; apparently there are far superior drivers it can find on HP's FTP site that aren't listed on the web site.

I wouldn't leave the disk write cache on if I were you, unless you don't care about data corruption when there's a power outage.