# IOPS ~ FC, SAS, SATA @ 10K and 15K RPM

I am looking for a range (Low, Average, High) of the IOPS that can be expected from the following drives:

1. FC @ 10K
2. FC @ 15K
3. SAS @ 10K
4. SAS @ 15K
5. SATA @ 10K
6. SATA @ 15K

I am using this data to perform some calculations, and so far I have found either conflicting answers or no answers. Granted, IOPS depends on block size, but I am looking for average numbers I can plug in.

Commented:
1. FC @ 10K: Low = 0, Average (use for sizing) = 120 IOPS, High = 240 - 300
2. FC @ 15K: Low = 0, Average (use for sizing) = 180 IOPS, High = 360 - 450
3. SAS @ 10K: Low = 0, Average (use for sizing) = 140 IOPS, High = 280 - 350
4. SAS @ 15K: Low = 0, Average (use for sizing) = 200 IOPS, High = 400 - 500
5. SATA @ 10K: Low = 0, Average (use for sizing) = 80 IOPS, High = 160 - 200
6. SATA @ 15K: no such thing as a 15K SATA drive, I'm afraid

Note that the high figures assume a real-world true random workload. You would get better numbers if you used techniques such as short-stroking drives or tuned benchmarks purely to get the best numbers possible. I use the average numbers for sizing storage arrays as it gives you burst capacity.
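A rough way to sanity-check per-drive figures like these is the classic rotational model: one random I/O costs roughly one average seek plus half a rotation. A minimal sketch, where the seek times are typical vendor figures assumed for illustration, not numbers from this thread:

```python
# Rough per-drive IOPS estimate for a rotational disk: one random I/O
# costs roughly one average seek plus half a rotation.

def hdd_iops(rpm, avg_seek_ms):
    rotational_latency_ms = 0.5 * 60_000 / rpm  # half a rotation, in ms
    return 1000 / (avg_seek_ms + rotational_latency_ms)

# Assumed typical seek times: ~3.5 ms for 15K drives, ~4.5 ms for 10K drives
print(round(hdd_iops(15_000, 3.5)))  # ~182 IOPS, close to the 15K sizing figure above
print(round(hdd_iops(10_000, 4.5)))  # ~133 IOPS
```

This model ignores queuing and caching, which is one reason measured "high" numbers can exceed it under deep queues.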

Commented:
Intel based SSD : 8000 IOPS
Indilinx based SSD : 12000 IOPS
SandForce 1000 series based SSD : 20000 IOPS
SandForce 2000 series based SSD : 50000 IOPS

There is a fairly complete list of examples at http://en.wikipedia.org/wiki/IOPS

SSD rules the "IOPS usage" world...
The only IOPS-driven situation where an HDD may still be a good idea is when you NEED a multipath feature allowing TWO I/O controllers to be connected to each drive. Few SSDs offer this "dual-port" feature, and those that do are very expensive.

Commented:
I found an example of a dual-port SAS2 SSD.
Pliant Technology was acquired by SanDisk, and they sell an EFD (Enterprise Flash Drive) using SLC, quoting a stunning UBE rate of 1 per 10^17 bits, with dual-port SAS2:
http://sandisk.com/enterprise-storage-solutions/lightning-products/lightning-6gb-sas-efd
...of course, they are about \$4k each... but you would need 50x as many 15K SAS HDDs to reach the same IOPS level.

Author Commented:
Thanks for the responses.

As you may have figured out, this is for a VDI implementation.  I know that write IOPS are the big killer when it comes to a virtualization initiative.  The calculations I am performing are based on the write IOPS value of the drive, minus the RAID penalty.  From there I am using an average IOPS number per VD at normal usage (about 7-8 IOPS).  This will tell me how many VDs I can support on a single drive.

Am I on the right track here or should I be using the total Random IOPS value for this?

Another question I have: from everything I have seen and researched, the size of the drive has nothing to do with IOPS!?

No matter if you have a 300GB or a 750GB drive, if they are both 10K, you are only using the RPM speed, average seek time and R/W latency to calculate IOPS.

BigSchmuh:
I like the SSD drives and they are very appealing because of the IOPS they are able to support, however... If I had a 5000 IOPS requirement for a VDI, one (1) SSD could provide all the IOPS I need.

From my example above, VD = 8 IOPS, Total IOPS needed = 5000

5000 / 8 = 625 VD's

Each VD is 15GB + 3GB Persistent Disk = 18GB Total per VD.

18 * 625 = 11,250 GB, or 11.25 TB of data storage needed.

I would have to purchase 24 x 500GB SSDs or 12 x 1TB SSDs in order to accommodate the storage the VDs will need. This would be IOPS overkill... granted, there would never be a latency issue, but at that cost, it just isn't practical.
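The sizing arithmetic above can be sketched as follows (all figures are the thread's assumptions: 8 IOPS per virtual desktop, 5000 total IOPS, 15 GB base plus 3 GB persistent disk per VD):

```python
# VDI sizing sketch using the thread's assumed figures.

TOTAL_IOPS = 5000
IOPS_PER_VD = 8
GB_PER_VD = 15 + 3              # base image + persistent disk

vds = TOTAL_IOPS // IOPS_PER_VD     # virtual desktops supportable by IOPS
storage_tb = vds * GB_PER_VD / 1000 # capacity those desktops need

print(vds)        # 625
print(storage_tb) # 11.25 (TB)
```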

Author Commented:
Thank you for the information.

Commented:
The best approach for calculating the number of drives is to work out the number of write IOPS and number of read IOPS. You can then calculate total workload:

Total = Read IOPS + (Write IOPS x RAID penalty)
where RAID penalty = 2 for RAID 1/0, 4 for RAID 5, and 6 for RAID 6. NetApp is a bit harder because of their write-optimized file system.

Then divide total IOPS by IOPS per drive to get the total number of drives. Don't forget to round up to the nearest even number if you're using RAID 1/0. Also note that the total includes parity drives, as they participate in providing performance.
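A minimal sketch of that procedure. The RAID write penalties are from the comment above; the read/write workload numbers are hypothetical, and the 180 IOPS per-drive figure is the 15K FC sizing number quoted earlier in the thread:

```python
import math

# Drive-count calculation: total backend IOPS = reads + writes * RAID penalty,
# then divide by per-drive IOPS and round up.

RAID_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def drives_needed(read_iops, write_iops, raid_level, iops_per_drive):
    total = read_iops + write_iops * RAID_PENALTY[raid_level]
    drives = math.ceil(total / iops_per_drive)
    if raid_level == "raid10" and drives % 2:  # RAID 1/0 needs an even drive count
        drives += 1
    return drives

# Hypothetical workload: 3000 read IOPS, 2000 write IOPS
print(drives_needed(3000, 2000, "raid5", 180))   # 62 drives
print(drives_needed(3000, 2000, "raid10", 180))  # 40 drives (rounded up to even)
```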
Question has a verified solution.