dn0rm

asked on

Best read/write speed benchmarking tool?

I'm looking for an accurate drive speed benchmarking tool. The application would need to test read and write speeds in various ways (configurable would be nice). The tests will be on Windows machines, reading from and writing to SAN, NAS, local drives, and both RAID and non-RAID configurations.

Free would be great, but I'm willing to throw some money at this if necessary. I've struggled to find a tool that is reliable and highly accurate. Thanks!
Timothy McCartney

HD Tune is a great application for HDD performance monitoring and testing. There are also Disk Bench and HDD Speed Test Tool. I've used all three.
The previous experts recommended some basic consumer stuff, which is OK, but those are not *really* good disk benchmarking tools, and certainly not the kind of thing that will help you evaluate RAID.

For example, to properly evaluate RAID, you need to inject errors. That is, I want an ECC error on block #10000 on disk #3; then I want to change the block size and see how long a delay it costs me.

These tools also do not actually measure how much I/O goes to the disk drive; they measure how much I/O their particular programs told the midlayer to create. File system efficiency, or even device driver efficiency, can be a factor, and these low-end tools won't detect it.

What about unrecoverable and recoverable read errors, or backplane/expander saturation? Is the physical interface (if SAS or SAS-2) running at 3 Gbit or 6 Gbit? Is the I/O using one port or both?

Are you looking at configurable mode pages and setting baselines for configurable drive parameters? (SCSI, FC, and SAS drives can literally have hundreds, but only a dozen or so significantly impact performance. Sometimes you can see a 50% delta from a configurable disk parameter that varies between firmware revisions.)

Are the benchmarking programs skewing the results? (Well, always; but can they even be trusted?) Are you aware that all of these tools under Windows can continue to send I/O to the disks for a good 5, and in my experience up to 10, seconds after the "benchmark" program completes?
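
One way to sanity-check that last point yourself is to watch the OS's own disk counters while the benchmark runs, rather than trusting the tool's numbers. A minimal sketch, assuming a Windows box with the standard typeperf utility available; the sampling interval and count are just illustrative:

```python
import subprocess

# Sample what actually reaches the physical-disk layer, independently of
# the benchmark's own reporting. If bytes/sec stays high after the
# benchmark says it is finished, the tool stopped its clock too early.
subprocess.run([
    "typeperf",
    r"\PhysicalDisk(_Total)\Disk Bytes/sec",
    "-si", "1",   # sample every second
    "-sc", "60",  # for 60 seconds, spanning the benchmark run and after
])
```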

Do they flush I/O? If you copy 1 GB as a benchmark, do they make sure that the 1 GB actually gets saved to the disk BEFORE the clock stops? How is queue depth handled? Is the bench artificially reordering I/O requests, which can skew results?
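
To make the flushing point concrete, here is a minimal sketch of timing the same sequential write with and without forcing the data past the OS cache. The drive letter is a placeholder, and note that even fsync may not drain the drive's own onboard write cache unless that is disabled:

```python
import os
import time

def timed_write(path, size_mib=1024, flush=True):
    """Write size_mib of data and report throughput.

    With flush=False, the clock can stop while data still sits in the
    OS write cache, producing an inflated number. With flush=True,
    os.fsync() forces the data out of the OS cache before timing ends.
    """
    buf = os.urandom(1024 * 1024)  # 1 MiB of incompressible data
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mib):
            f.write(buf)
        if flush:
            f.flush()
            os.fsync(f.fileno())  # don't stop the clock until the OS cache drains
    elapsed = time.perf_counter() - start
    print(f"flush={flush}: {size_mib / elapsed:.1f} MiB/s")

timed_write(r"D:\bench.tmp", flush=False)  # cache-inflated figure
timed_write(r"D:\bench.tmp", flush=True)   # the honest figure
```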

So you see, if you want an ACCURATE bench, you won't get it with those products. So exactly what do you need, and what are you trying to measure? Those products may meet your needs just fine, but then again they may be useless for any serious tuning or analysis.

dn0rm

ASKER

Thanks to all who have responded. I have played with these tools, and they do measure performance and do a decent job, but I am looking for something a bit more in depth.

diethe, I'll assume you work in the enterprise storage space based on this reply; I tip my hat to you for the breadth and depth of your note. Thank you for taking the time to write all of this. I'll do my best to summarize what I'd love this tool to accomplish (such a tool may not even exist).

What I am looking for is a good way to evaluate disk performance up to the point of saturation, where that saturation causes application performance to degrade. This tool would be able to simulate writes from multiple different processes (sequential writes are fine). Example: worker1.exe, worker2.exe, and worker3.exe all writing large amounts of data simultaneously. How many of those workers can I run at once, writing to the same or multiple volumes, before I saturate the data flow pipeline somewhere (Ethernet, SATA, SAS, backplane, Fibre Channel, physical disk, file system, etc.)?

The same tool would be able to simulate reads with multiple worker processes, similar to the writing scenario. The data being written and read could be in many ~100 MB files, or it could be one large ~300 GB file.
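
For what it's worth, the write half of that scenario can be roughed out in a few lines. A minimal sketch, assuming Python on the test machine; the E:\bench path, file sizes, and worker counts are placeholders to adjust for the volume under test:

```python
import os
import time
from multiprocessing import Process

MIB = 1024 * 1024
BENCH_DIR = r"E:\bench"  # placeholder: point at the volume under test

def writer(worker_id, total_mib=1024):
    """One simulated worker: sequentially write total_mib of data."""
    buf = os.urandom(MIB)
    path = os.path.join(BENCH_DIR, f"worker{worker_id}.dat")
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mib):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # only count data that actually reached the volume
    rate = total_mib / (time.perf_counter() - start)
    print(f"worker {worker_id}: {rate:.1f} MiB/s")

if __name__ == "__main__":
    os.makedirs(BENCH_DIR, exist_ok=True)
    # Ramp up the worker count until aggregate throughput stops scaling;
    # that knee is wherever the pipeline (NIC, HBA, backplane, disks) saturates.
    for n_workers in (1, 2, 4, 8):
        procs = [Process(target=writer, args=(i,)) for i in range(n_workers)]
        t0 = time.perf_counter()
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        aggregate = n_workers * 1024 / (time.perf_counter() - t0)
        print(f"{n_workers} workers: {aggregate:.1f} MiB/s aggregate")
```

The read side is the mirror image: the same worker loop, but opening the files and reading sequentially instead of writing.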

The interface speed and other details you are asking about would, I believe, be irrelevant, because it is what it is. What I need to understand is when, and at how many workers reading/writing, the storage system becomes the performance degradation point. This tool would be for understanding optimal high-speed storage configurations and for troubleshooting performance issues.

Thanks again for the well-written, well-thought-out reply. :)
ASKER CERTIFIED SOLUTION

David
dn0rm

ASKER

If real-world load were easy to come by, I don't think I'd be looking for these utilities for some type of simulation :) Real-world load in my world requires lots of hardware and expensive infrastructure. Aside from a customer letting us use their system for weeks to gather this data, simulation is all I have to go on.

After reading through your posts, I believe I have a good (better, for certain) understanding of where I stand. I need to tool around with Iometer for some advanced throughput testing and figure out which of these consumer-ish applications I can trust the most for simple-to-intermediate analysis. The level of analysis you mention above typically isn't my responsibility; I'll hope someone like yourself is around (from the storage vendor) should I find myself in such a bind :)

NetApp, EMC/Isilon, HP?

One of those? :)
Nope, I'm independent, but 2 of the 3 are current customers, and if you're running any of those, chances are good you've run some of my code :)
Oh, and I, or just about any other expert, can be coerced into providing some one-on-one help for things that fall outside the scope of EE. Just look at an expert's profile; many have a way to contact them. (But this is not a solicitation; I'll answer questions for t-shirts like everybody else.)