Proper drive arm recommendation

I have a customer looking at iASP and our config, due to today's market, has just over half the number of arms as his current config.  He is concerned about performance on the new system due to the decrease in the number of arms.  The new controllers and new drives are substantially faster today so I don't THINK it will be an issue.

But, does anyone have any whitepapers or recommendations that I can use on our behalf (or not, if that is the case)?  I realize that we can do a performance analysis and submit it to the sizing tool, but that takes a lot of time and we are not there yet.

Another piece of info is that the customer would be moving from RAID5 to RAID6 which should also improve performance.

This is on an iSeries but I don't think the question would be that different on xSeries or pSeries.

Thanks in advance
Jon Snyderman Asked:
The change in processor was not my primary point.

The percent busy is very important in this decision even if you keep the same processor.

If your percent today is 12% and you cut the number of arms in half then it would be, generally, around 24%.  This should not be a problem.  Stay under 40% and you will be ok.
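That rule of thumb can be sketched in a few lines (a rough model, with a hypothetical helper name; it assumes the same total disk workload spread over fewer arms of the same speed):

```python
def projected_busy(current_busy_pct, current_arms, new_arms):
    """Rough projection: total disk work stays constant, so per-arm
    percent busy scales inversely with the number of arms."""
    return current_busy_pct * current_arms / new_arms

# 12% busy on 16 arms, cut to 8 arms -> about 24%, under the ~40% guideline
print(projected_busy(12, 16, 8))
```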

By arms do you mean disk spindles?

New disks aren't that much faster than older ones as far as random access goes, and controllers rely on the disks for their speed although admittedly you get more battery backed cache on them than was possible before. If we knew what disks they currently had we could look up their spec.

RAID 6 is slower than RAID 5, and both are slower than RAID 10.

N is number of disks, I is IOPS for a single disk:

RAID 10 read performance is N *I
RAID 5 read performance (N-1)*I
RAID 6 read performance (N-2) *I

RAID 1 write performance N/2*I
RAID 5 write performance N/4*I
RAID 6 write performance N/6*I

That's raw sums; the RAID 5 and RAID 6 write penalties are offset a bit by the controller's cache.
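The raw sums above can be written out as a short sketch (hypothetical helper names; theoretical figures only, ignoring controller cache):

```python
def raid_read_iops(n, iops, level):
    """Theoretical random-read IOPS: parity drives don't add read spindles."""
    spindles = {"raid10": n, "raid5": n - 1, "raid6": n - 2}
    return spindles[level] * iops

def raid_write_iops(n, iops, level):
    """Theoretical random-write IOPS after the RAID write penalty
    (2 I/Os per logical write for RAID 1, 4 for RAID 5, 6 for RAID 6)."""
    penalty = {"raid1": 2, "raid5": 4, "raid6": 6}
    return n * iops / penalty[level]
```

For example, 16 drives of 200 IOPS each in RAID 5 give (16-1)*200 = 3000 theoretical read IOPS and 16*200/4 = 800 write IOPS.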
Jon Snyderman (Author) Commented:
So do I have this ROUGH scenario right assuming all else is like and realizing that there are many other factors at play here?

16 x 139GB drives RAID5 vs. 8 x 139GB drives RAID6...

RAID5 read = (16-1) * 2000 IOPS = 30000 performance value vs.
RAID6 read = (8-2) * 2000 IOPS = 12000 performance value

RAID5 write = 16/4 * 2000 IOPS = 8000 performance value vs.
RAID6 write = 8/6 * 2000 IOPS ≈ 2667 performance value

That's a huge variance based on spindles.

These drives are both 15K.   Are you saying that, with today's and recent hardware, the controllers and drives are not different enough to make a dramatic difference in disk performance?
During a busy time, do a WRKDSKSTS.  Wait a few minutes and F5.  What do you have on the far right side for percent busy?  

In my opinion, if you are in the teens then you have nothing to worry about with your proposed change.  If you are in the 40s, look out.

A year ago, I moved a customer from a 9406 with many drives to an 8203 with 6 drives.  The drives are pretty busy but the overall performance went up.

Steve Bowdoin
Jon Snyderman (Author) Commented:
Thanks Steve,

We have done similar upgrades and I agree that the performance always increases.  But in this case, the other variables such as processor and memory will not change.  So the only noticeable factor will be disk.   Hence the question.

You've got one too many zeros in your IOPS value; you'll get about 200 random IOPS from the latest 15K drives, and 5 years ago that was more like 180 IOPS. Other than that the figures are about right: twice as many spindles (or arms if you prefer) = twice as many IOPS.
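Plugging the corrected ~200 IOPS figure back into the earlier rough comparison (same raw formulas, controller cache ignored):

```python
IOPS = 200  # realistic random IOPS for a current 15K drive

raid5_read  = (16 - 1) * IOPS   # 3000
raid6_read  = (8 - 2) * IOPS    # 1200
raid5_write = 16 * IOPS // 4    # 800
raid6_write = 8 * IOPS / 6      # about 267

print(raid5_read, raid6_read, raid5_write, round(raid6_write))
```

The ratios between the two configs are unchanged; only the absolute numbers shrink by a factor of ten.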

Compare (2003)
with (2010)
Average seek hasn't changed significantly, average latency can't change because it's defined by 15K RPM. What has changed is the areal density and hence capacity. This doesn't affect random IOPS but it does speed sequential access greatly since data passes under the head much quicker than before.
Jon Snyderman (Author) Commented:
All great information guys, both from a storage standpoint and an iSeries standpoint.   Thanks!  I am going to split points since both were very helpful.

Gary Patterson (VP Technology / Senior Consultant) Commented:
The math isn't that hard, but as you add factors, it gets complicated due to the number of variables, and the fact that a bottleneck anywhere in the chain limits overall subsystem performance.  

Since we don't know your bottlenecks or workload characteristics, it is hard to comment.

In general, unless there is a dramatic difference in drive speed, cutting arms in half is going to reduce performance by about 50%, plus or minus a bit for variations in other DASD subsystem components.  If you are having arm utilization issues, it is almost always a bad idea to cut arms, unless you're really moving up a few notches in disk technology.

If the controller is only 10% busy, but your average peak arm utilization with the current subsystem is 40%, then no, the controller doesn't matter much.  If on the other hand, the controller is 90% busy and current arm utilization is 4%, then halving arms won't matter, but increasing controller capacity might speed things up a lot.

If your new drives are 10% faster overall than the old drives, and your arm utilization is 38%, then halving arms would put you at about 38% x 0.9 x 2 = 68.4%.  That is definitely going from bad to worse.
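That projection can be sketched as follows (hypothetical function; it assumes a 10% faster drive cuts per-arm busy by a factor of 0.9 before halving the arm count doubles it):

```python
def projected_util(current_util_pct, arm_ratio, drive_speedup_pct=0.0):
    """Project arm utilization after changing arm count and drive speed.
    arm_ratio = old_arms / new_arms (2.0 when halving the arms)."""
    return current_util_pct * (1 - drive_speedup_pct / 100) * arm_ratio

# 38% busy today, arms halved, drives 10% faster -> about 68.4%
print(projected_util(38, 2.0, 10))
```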

I suggest you just use the IBM workload estimator (WLE).  It is great for stuff like this:

There's even a video tutorial for disk subsystem estimates.  If you have your numbers together, you can do this in about ten minutes.

- Gary Patterson
Jon Snyderman (Author) Commented:
Thanks for your input Gary.  I don't really have enough info to go through the WLE yet.  We have no arm utilization issues today, and we are trying to keep it that way :).  But I understand your points and agree with the caveats that you mentioned.   I appreciate you chiming in.

Gary Patterson (VP Technology / Senior Consultant) Commented:
You're making it too complicated.  Assuming you are logged onto your current system, you can do the minimum data gathering in less than a minute (suggest doing during peak times, though).

We're just talking about a disk subsystem estimate.  All you need is the following info about your current system:

Disk attachment (SCSI/IOS/DAS)
Drive speed (7.2K, 10K, 15K)
Storage protection level (Mirrored, RAID-5, RAID-6)
Total storage (from WRKSYSSTS)

Read ops/sec and Bytes per read op
Write ops/sec and Bytes per write op

If you don't have the read/write ops info, you can just plug in your current disk busy% and number of drives, and it'll estimate ops for you.  (WRKDSKSTS).

Once you put in the existing system info, you can start plugging in the target disk subsystem, and generating drive requirements.

Here's a 4-minute video tutorial that takes you through the entire process.  It literally is a video of a WLE disk conversion estimate session from start to finish:

Even with learning curve, this is a ten minute job.  One note, if you don't currently use WLE, you'll want to launch it from the Sizing Guide pages, so you can start with the Generic workload.  Search for "System i Generic Workload", and run WLE with that workload.

- Gary Patterson