
IOPS Calculator

Asked by lbotha:
I am looking for a calculator or method to calculate the IOPS for a given drive, e.g. a 300GB 15K 4Gb Fibre Channel drive or a 146GB 10K 4Gb Fibre Channel drive.

I would like to deploy Exchange 2007, and the database LUN requires 3000 IOPS to perform optimally.  Also, how do these per-drive IOPS differ between RAID 1+0 and RAID 5?

Knowing my required IOPS, I need to know what drive count and RAID configuration is best.

Any assistance will be gladly appreciated.

Brian Pierce, Photographer (Awarded 2007, Top Expert 2008)



Thank you, I am familiar with that specific calculator, but I am looking more for the actual IOPS that a given drive is capable of, i.e. a 15K drive is 33% faster than a 10K drive.  I need to know the IOPS for each drive type: FC, SATA, etc.  The calculator tells me that I need 3000 IOPS, but what it does not tell me is which drives, in what quantity and of what type, will achieve those 3000 IOPS.  I also know that RAID 1+0 = 1 write as opposed to RAID 5 = 4 writes.  I am looking for the most cost-effective option in terms of speed, size or drive type, using either RAID 1+0 or RAID 5, to achieve those IOPS.

I cannot seem to find the actual IOPS for a specific drive type on any drive vendor's website.  If I had those, one could easily do the calculation.  Logic says that RAID 5 will be slower than RAID 1+0, but all I am concerned about is achieving the IOPS, and if RAID 5 can achieve them with fewer drives than RAID 1+0, then to me that is cost-effective as far as the drives go.

Any suggestions anyone?
Top Expert 2014

I don't think there is such a calculator or spreadsheet with them all listed. I tend to use the manufacturer's figures for half a rotation plus the average seek time in ms to calculate an average time to do a single I/O, but it's a worst-case scenario as it doesn't allow for cache on the disk or controller. As far as RAID levels go, RAID 5 needs 4 I/Os and RAID 1+0 needs 2 for each write; disk reads are just 1 I/O. Reads are a bit faster than writes, but I won't go into that.
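The rule of thumb above (half a rotation plus the average seek time) can be sketched in a few lines of Python; the seek times used here are illustrative assumptions, not vendor figures, so substitute your drive's datasheet numbers:

```python
def worst_case_iops(rpm, avg_seek_ms):
    """Worst-case single-disk IOPS: half a rotation plus the average seek,
    ignoring any disk or controller cache."""
    half_rotation_ms = (60000.0 / rpm) / 2  # one full rotation takes 60000/rpm ms
    return 1000.0 / (half_rotation_ms + avg_seek_ms)

# Seek times below are assumed examples, not vendor figures:
print(round(worst_case_iops(15000, 3.5)))  # 15K drive -> ~182 IOPS
print(round(worst_case_iops(10000, 4.5)))  # 10K drive -> ~133 IOPS
```

Note the 15K result lands close to the 180 IOPS figure Microsoft uses in its Exchange storage calculator, which is a useful sanity check on the method.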
What brand of SAN are you looking at?  My vendor gave me pretty accurate information when I asked.


EMC Clariion CX3-20 with 300GB 15K 4Gb FC drives
Well, this is how it works.  The more drives you have, the more IOPS you get, regardless of the speed of the drive.  Yes, a 15K drive is faster as far as access goes, but if you only get 100 IOPS per spindle with the 300GB 15K drives and you have 10 spindles, that's 1,000 IOPS.  If you get 80 IOPS per spindle with the 146GB 10K drives but have 20 of them, you get 1,600 IOPS.  More drives = better performance.

I recently looked at NetApp, Xiotech and HP.  All performed well.  The one thing that all three vendors told me was that, aside from the architectural differences between the devices, IOPS depend on the number of drives.  So I opted for a larger number of 146GB FC drives (16) over a smaller number of 300GB FATA drives, due to the IOPS, for my DR site.  Even though my DR site was only going to be a replication partner, I wanted to have the same performance level if the primary site ever went down.

Now, I am not sure what brand of drive is in the EMC Clariion, but most drives (Hitachi, Seagate, etc.) have relatively similar performance and MTBF when it comes to FC drives.  So, based on what I know about the solution I chose, I would guess that any FC drive you purchase is going to deliver right around 100 IOPS.


Based on the above link, it does not look like EMC offers a 15K 300GB drive.  If that is true, you are better off purchasing the 15K 146GB drives with faster read and write times, or going with the 10K 146GB drives, which are the same speed as the 10K 300GB drives.  Just buy more drives.

Hope that helps.

Top Expert 2014

I think what he wants is a list of each make of disk with how many IOPS it does. The SAN vendors tend to list the IOPS-from-cache figure.


Hi All,

Thank you, we are getting closer.  John, you reckon 100 IOPS for a 15K drive? That is low, very low.  If you look at the figures from the Exchange 2007 storage calculator from Microsoft, they say 15K drives perform at 180 IOPS, 10K at 130 IOPS, and so on.  Who is correct, MS or the hardware vendors?  I am not looking for marketing calculations but real-life ones.  I understand that the more spindles, the better the performance, but how does a RAID 5 set with its parity overhead affect the total IOPS for the LUN, as opposed to a RAID 10 set with no parity overhead?  Does it mean that if I have to go with RAID 5, I will need even more drives to make up for the parity overhead in achieving my required total IOPS?
Top Expert 2014
MS's figures are fairly close. Do the sums that I gave you above for average seek time plus half rotation.


OK, thank you.  Any idea of the parity overhead percentage for RAID 5 then?
Top Expert 2014

4 physical I/Os for each logical write, although this is greatly offset by the cache; I'd guess at about 2.5 physical I/Os per logical write for RAID 5 with a reasonable cache algorithm.
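As a rough sketch of how that penalty plays out (the 10-spindle, 150-IOPS-per-spindle figures here are assumed for illustration), the write penalty divides the array's physical I/O budget to give logical write IOPS:

```python
def logical_write_iops(physical_iops, write_penalty):
    """Logical write IOPS an array can deliver, given its total physical
    IOPS budget and a RAID write penalty (physical I/Os per logical write)."""
    return physical_iops / write_penalty

# 10 spindles at 150 IOPS each = 1500 physical IOPS (assumed figures):
print(logical_write_iops(1500, 2))    # RAID 1+0 -> 750.0
print(logical_write_iops(1500, 4))    # RAID 5, no cache -> 375.0
print(logical_write_iops(1500, 2.5))  # RAID 5 with a good cache, the ~2.5 estimate -> 600.0
```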

Calculating IOPS

Single Disk IOPS = 1 / Average Latency, where
Average Latency = Rotational Latency + Average Seek Latency
Rotational Latency = half of one full rotation:
        7.1 ms @ 4200 RPM
        5.6 ms @ 5400 RPM
        4.2 ms @ 7200 RPM
        3.0 ms @ 10K RPM and
        2.0 ms @ 15K RPM
Average Seek Latency = Varies with Manufacturer, typically
        Notebook drives ~ 12+ ms
        7.2K SATA drives ~ 8 to 10 ms
        10K SATA drives ~ 4.5 to 5.0 ms
        10/15K SAS drives ~ 3 to 4 ms

RAID Read IOPS = Sum of all Single Disk IOPS
RAID Write IOPS:
        RAID0 = Sum of all Single Disk IOPS
        RAID1/10 = Half of the sum of all Single Disk IOPS
        RAID5* = One-quarter of the sum of all Single Disk IOPS
        RAID6* = One-sixth of the sum of all Single Disk IOPS

* Note while there is an XOR calculation involved with RAID5/6, it's usually inconsequential in modern hardware.

Real IOPS will be somewhere between your read and write IOPS, depending upon your read-write ratio. Transactional databases are generally considered to be 50:50, whereas operational databases are considered to be about 90:10.
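The blend is a simple weighted average, the same arithmetic the worked examples later in this answer use (the 4000/2000 figures correspond to the sixteen-spindle RAID1 example below):

```python
def blended_iops(read_iops, write_iops, read_fraction):
    """Weighted average of array read and write IOPS for a given mix."""
    return read_fraction * read_iops + (1.0 - read_fraction) * write_iops

print(blended_iops(4000, 2000, 0.5))  # 50:50 transactional mix -> 3000.0
print(blended_iops(4000, 2000, 0.9))  # 90:10 operational mix -> 3800.0
```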

This represents PEAK IOPS that can be sustained with no regard to cache. It also requires that you have as many outstanding IO operations as there are spindles to reach this. For example, with eight spindles, you would need eight outstanding operations (i.e., queued) to reach full potential.

Cache is harder to determine. For an estimate, you need to know your data sample size versus your cache size. For example, you have a 200GB database, of which about 10% is routinely accessed in a day. That's about a 20GB data sample size, so a 2GB cache would have approximately a 10% cache-hit ratio.

The IOPS of cache is HUGE, so the easiest way is to take the remaining percentage, i.e. the cache-miss ratio, and divide your IOPS by that. For example, if your array sustains 1000 IOPS and you estimate a 90% cache-miss ratio, you could bump up your IOPS estimate to 1,111 IOPS. Obviously the more cache the better, but it can become very expensive to have huge amounts of cache. However, as you'll see below, even 4GB of cache can mean very little on large transactional databases.
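This cache adjustment is easy to sketch; it's a rough estimate built on the assumption that the cache-hit ratio roughly equals cache size divided by working-set size:

```python
def cache_adjusted_iops(array_iops, cache_gb, working_set_gb):
    """Divide sustained array IOPS by the estimated cache-miss ratio
    (rough estimate: hit ratio ~= cache size / working-set size)."""
    hit_ratio = min(cache_gb / working_set_gb, 1.0)
    miss_ratio = 1.0 - hit_ratio
    return array_iops / miss_ratio

print(round(cache_adjusted_iops(1000, 2, 20)))  # 90% miss ratio -> 1111
print(round(cache_adjusted_iops(3000, 4, 60)))  # ~93.3% miss ratio -> ~3214
```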

Sun released a white paper a while back on the design of SANs and recommended 256MB for each 15K spindle, 512MB for each 10K spindle and 1GB for each 7.2K spindle. So an array of 32 SATA drives should have no less than 32GB of cache available to it.

Let's take a practical example, in reverse. You need 3000 IOPS. We'll assume RAID1 to begin with. This is a heavily transactional database type, so we'll assume a 50:50 read-write ratio.

A single 2.5" 15K SAS drive should be able to achieve about 250 IOPS. To achieve this without cache, you would then need sixteen spindles. That is, 16*250 = 4000 read IOPS and 2000 write IOPS; at 50:50 that's 3000 IOPS. So a single MD1120 enclosure would fit the bill nicely. This will, however, only give you about 584GB of space, which may or may not be enough (unless Dell has 15K 146GB drives now).

With RAID6 (I cannot recommend RAID5 for reliability reasons), you'd need a few more drives to hit 3000 IOPS - about 21. That is, 21*250 = 5250 read IOPS and 875 write IOPS; at 50:50 that's about 3060 IOPS.
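Working the sizing forward instead of in reverse, a small solver finds the smallest spindle count whose blended IOPS meets the target, using the same 250-IOPS-per-15K-SAS-spindle assumption as above:

```python
def spindles_needed(target_iops, disk_iops, write_penalty, read_fraction=0.5):
    """Smallest spindle count whose weighted read/write IOPS average
    meets the target (same averaging as the worked examples above)."""
    n = 1
    while True:
        read = n * disk_iops
        write = n * disk_iops / write_penalty
        if read_fraction * read + (1.0 - read_fraction) * write >= target_iops:
            return n
        n += 1

print(spindles_needed(3000, 250, 2))  # RAID1/10 -> 16 spindles
print(spindles_needed(3000, 250, 6))  # RAID6 -> 21 spindles
```

Both results match the hand calculations above: sixteen spindles for RAID1, about 21 for RAID6.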

Cache makes this more complex, however. Each drive has 8MB of cache, the controller usually has 256MB or more of cache, and if it's on a SAN, you'll have your SAN controller cache, usually in the gigabyte range. Using Sun's figures, for sixteen spindles we should have 4GB of cache (at 256MB per spindle). Your dataset size is a little tougher to estimate without empirical data, but assuming each user sends and receives about 30MB of email a day with 2000 users, you'd have a data sample size of about 60GB. With only 4GB of cache, your cache-miss ratio is about 93.3%. This only improves your IOPS to about 3,200 IOPS.

Alternatively, with SATA drives, you can get a 1000GB spindle running at about 116 IOPS. To have enough performance with RAID10, we'd need about 34 spindles. That is 34*116 = 3944 read IOPS, 1972 write IOPS. With caching, we could probably get that down. Again, using Sun's recommendation, we'd need 34GB of cache in this example. Assuming 32GB, we'd have a cache-miss ratio of about 46.7% (much better), raising our IOPS to nearly 6400 IOPS.

Anyway, the sweet spot here (I cheated and used MS Excel's Goal Seek function) is 22x 1TB SATA spindles with 24GB of cache, giving 3,190 peak IOPS (RAID1) and 11TB of space. That's probably two 3U trays and a hefty 2U controller. If you're not going SAN, then add the cache requirement to your Exchange/Windows requirement. For example, if you're doing this on a single box, get a system with 32GB of RAM; this gives 24GB for cache and 8GB for Windows and Exchange (more than enough - the Microsoft recommendation is 4GB).

Since you also need boot drives, etc. I would suggest 2x73GB 15K SAS drives (on the server) for the system volume, swap file, etc; 2x36GB 15K SAS drives (also on the server) for the Exchange binaries, temp files, etc; and 2x300GB 10/15K SAS drives for your logs (all RAID1).

Then, in two SAS enclosures, simply have 26x 1TB using RAID60 for the data partition (striped across the enclosures and/or controllers). This will give you 3,182 peak IOPS on the data array with about 24TB of available space. Alternatively, you could go with RAID10 by installing only 22 spindles, for 11TB of space and 3,080 IOPS. This also gives you room to grow.

If space is not a concern, go with 15K SAS drives. You'd need about 15 SAS drives to reach 3000 IOPS at RAID10, or about 20 SAS drives to reach 3000 IOPS with RAID6. In either case, get as big a drive as you need: if you're only giving your 2000 users 2GB of space each, then 4TB would be enough and could be done with 300GB drives using RAID6, or 400GB drives using RAID10. The big benefit here is that the system only needs about 6GB of cache to run optimally, saving on the RAM cost. While the drives might be more expensive, the cost savings on the numbers (15 versus 26 drives, 1 enclosure vs 2, 6GB cache vs 24GB) may actually make this the less expensive option. Price them out, then decide: do you need more than 4TB of space for an Exchange server?

Hope that helps.