Stress Testing a Windows Server 2008 R2 server with an IBM DS3000 bay attached.

alimoore used Ask the Experts™
I'm looking for some software to thoroughly stress test an IBM System x3650 M3 (2 x quad-core processors, 32GB RAM, 2 x 146GB drives mirrored) which has an IBM DS3000 bay attached to it (10 x 450GB in RAID 5, plus 2 x 450GB hot-swap spares).

Can anyone recommend something to put this lot through its paces, please?

Many Thanks for your help.

I've been doing well with Passmark for years.
Stress testing the CPU and memory can be done with Prime95, running one worker per core (10 hours minimum)

Stress testing the io subsystem may be done through IOMeter

Stress testing the network subsystem may use products from Mu, Ixia or Spirent

Regarding the DS3000 RAID 5 array of 10 drives:
-Will you use this array for backup, archive, or write-once-read-many (WORM) usage, where the RAID 5 write penalty is not a real problem?

-Do you use "enterprise class" drives with a very low nonrecoverable-read-error rate (UBE), like the 1 sector per 10^16 bits read of the Cheetah 15K.7 3.5" 450GB? Or can you live with roughly a 4% probability of a rebuild failure, using enterprise drives with a UBE of 1 unreadable sector per 10^15 bits read?


Thanks for the Passmark recommendation; however, I'm looking for something which will run read/write tests against the DS3000 bay continually over the course of a day or so. I ran Passmark and all the tests completed in less than 30 minutes.

Again, thanks for those recommendations. In response to your questions, the RAID 5 array will be used for archived docs, WORM storage, and also docs that are amended on a regular basis. It will be used as the main file server for the business.
The drives are IBM 15K 450GB SAS FRUs. I've been trying to find the info regarding the read errors per bits read but have drawn a blank on the IBM website.

Thanks for your help so far.


Ah, very sorry for the mis-recommendation; I apparently was not clear on what you were looking for at the time I posted. As mentioned, Prime95 is absolutely the best for stress testing the CPU, and I'm partial to Memtest86 for RAM.

As for the specifics regarding the drives: this is why I end up staying away from so many of the solution OEMs out there. When you buy into these packaged solutions, you're left with only what they have decided to give you, and you lose the option to pick and choose hardware/software based on capabilities and extras. The information you're looking for is not available in any standalone application that I am aware of, and I've looked in the past. This is one of several reasons the gear I work with has controllers that expose this information through the storage management software made to work with the controller. For example, Adaptec Storage Manager will give you a history of read errors etc. on a per-drive basis.

I know this isn't helping your situation, and I wish it were. I will be sending an email to one of my vendors to ping a guy about this; if there is something like what you need floating around, he should know about it. [pinkies crossed]
A 15K SAS drive usually has a maximum UBE of 1 sector per 10^15 bits read, which works out to about a 3.8% rebuild-failure probability for an array like this.
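That figure can be reproduced with a back-of-the-envelope calculation. This is only a sketch under my own assumptions (a rebuild reads the 9 surviving 450GB drives end to end, decimal gigabytes); counting all 10 drives or using binary gigabytes nudges the result toward the 3.8% quoted above.

```python
# Assumptions (mine, not from the thread): the rebuild reads the 9
# surviving 450GB drives end to end, with decimal gigabytes.
drives_read = 9
drive_bytes = 450e9              # 450 GB per drive
ube = 1e-15                      # 1 unreadable sector per 10^15 bits read

bits_read = drives_read * drive_bytes * 8
# Probability that at least one unrecoverable read error hits the rebuild
p_fail = 1 - (1 - ube) ** bits_read
print(f"rebuild failure probability: {p_fail:.1%}")   # roughly 3-4%
```

The exact value depends on how you count, but either way a 10-drive RAID 5 of these disks carries a non-trivial chance of hitting an unreadable sector during a rebuild.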

I confirm that stress testing memory can be done with Memtest86+ (which is different from Memtest86)

Regarding a file server and RAID 5, you can minimize the write penalty with some well-chosen parameters:
-Check your file server's file system io size: for example with NTFS, you define the cluster size at formatting time, and it can be any power-of-two size from 1KB to 64KB
-If you expect most of the files to be above 30KB, you should use a 32KB cluster size; that allocates a 32KB minimum per file, but it optimizes the NTFS side of your file server
-Setting the "stripe size" of your RAID 5 array to this "cluster size" will clearly lower the write penalty
-Otherwise, a 64KB stripe with the 4KB NTFS default cluster size will almost always force a read-modify-write: read the 64KB data stripe, read the 64KB parity stripe, compute the new parity, and write both 64KB stripes back

In summary:
-define the "NTFS cluster size" as a multiple of the array "stripe size" (usually equal)
-for sequential throughput optimization, use the highest "stripe size" possible
==> ONLY THEN does it make sense to start testing your io subsystem
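The effect of matching cluster size to stripe size can be illustrated with a simple IO-count model. This is a sketch of the read-modify-write behaviour described above, not a measurement of any particular controller:

```python
def overhead_ratio(write_kb, stripe_kb):
    """KB moved on disk per KB of useful data, for a sub-stripe RAID 5 write.

    Read-modify-write model from the answer above: read the old data
    stripe, read the old parity stripe, then write both back.
    """
    moved = 4 * stripe_kb
    return moved / write_kb

# Default 4KB NTFS cluster on a 64KB stripe: 256KB moved per 4KB written
print(overhead_ratio(4, 64))     # 64.0
# 64KB cluster aligned to a 64KB stripe: the classic RAID 5 write penalty of 4
print(overhead_ratio(64, 64))    # 4.0
```

Aligning the cluster to the stripe does not remove the penalty, but it stops the array from moving 64x more data than the application actually wrote.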

IOMeter can simulate many io usage patterns, including:
-high concurrency levels
-different %read/%write ratios
-different io sizes
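If you want a quick scriptable stand-in while setting IOMeter up, a minimal random read/write loop with the same knobs is easy to sketch in Python. The filename and parameter values below are placeholders, and note that plain Python io goes through the OS cache, so IOMeter remains the right tool for real numbers:

```python
import os
import random
import time

def io_stress(path, file_mb=64, io_kb=32, read_pct=70, seconds=10):
    """Issue random reads/writes against a test file for `seconds`."""
    size = file_mb * 1024 * 1024
    block = io_kb * 1024
    ops = 0
    with open(path, "wb") as f:          # pre-allocate the test file
        f.truncate(size)
    with open(path, "r+b", buffering=0) as f:
        end = time.monotonic() + seconds
        while time.monotonic() < end:
            f.seek(random.randrange(0, size - block))
            if random.randrange(100) < read_pct:
                f.read(block)
            else:
                f.write(os.urandom(block))
            ops += 1
    return ops

if __name__ == "__main__":
    done = io_stress("stresstest.dat", seconds=5)
    print(f"{done} IOs completed")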

Later, you may also find a use for the Windows Performance Toolkit:
==> its xperf tool lets you trace all io usage per process
How are we doing in here? Any more info needed?
Qlemo "Batchelor", Developer and EE Topic Advisor
Top Expert 2015

This question has been classified as abandoned and is being closed as part of the Cleanup Program.  See my comment at the end of the question for more details.
