richn (United States)

asked on

Drive Seek Time

If the specification for a hard drive shows average seek time of 4ms, is there a rule of thumb for what the minimum and maximum seek times would be?  Does the diameter of the drive make a difference?
SOLUTION
nobus (Belgium)
richn (ASKER)

That is an interesting article, but unfortunately it assumes you either know the maximum seek time from specifications or you are designing to meet a desired maximum seek time.  My problem is that most drive manufacturers seem to have eliminated maximum seek time from their specifications, and I was wondering if there is any way to "back into it" using average seek time and drive diameter.
You can calculate the maximum rotational delay from the disk's rotation speed, but there is still the positioning delay of the actuator.
I fear you have to look into it for each different model.
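The rotational component, at least, is easy to pin down from the spec sheet alone. A minimal sketch (Python), using spindle speeds mentioned in this thread as examples:

```python
def rotational_delay_ms(rpm):
    """Worst-case and average rotational delay for a given spindle speed.

    Worst case is one full rotation (the target sector just passed the
    head); on average the target sector is half a rotation away.
    """
    full_rotation_ms = 60_000 / rpm       # 60,000 ms per minute / rotations per minute
    return full_rotation_ms, full_rotation_ms / 2

# e.g. a 10,000 rpm drive: 6.0 ms worst case, 3.0 ms average
print(rotational_delay_ms(10_000))
```

The same function gives about 11.1 ms / 5.6 ms for a 5400 rpm drive and about 8.3 ms / 4.2 ms for 7200 rpm.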
richn (ASKER)

Yes, that figure is commonly called "rotational latency", or simply "latency".  It is the same for every track on the disk.  Most disk manufacturers are only listing the average seek times and not the maximum seek time if you had to seek all the way from the first track to the last track.  The reason I am asking is that I know you can decrease maximum seek time (which will indirectly reduce average seek time) by formatting drives to less than their total capacity.  I was just wondering if there was a rule of thumb that might let you estimate how much you would save.  Also, if you split the drive into two partitions and only used one of them, would you want to use the first partition or the second partition?  I would assume you would want to use the outer tracks of the drive, but do all drives allocate tracks in the same direction?  
Try contacting Garycase; I believe he can help you better:
https://www.experts-exchange.com/M_2048329.html
A rule of thumb is almost impossible. The minimum seek time could be as little as a fraction of a millisecond, while the maximum could be the time it takes to move from the outermost track of the disk to the innermost track, plus the time of one whole rotation of the disk, plus head settling time. The maximum seek time will be a function of the spin speed of the disk as well as the type of actuator used for the head assembly. If you refer to the article nobus found, add raw seek time + settle time + 1 rotation. In the example about two-thirds down the page, 4 ms + 3 ms + 6 ms = 13 ms total seek time for a 10K rpm disk, or about 21 ms for a 5400 rpm disk.
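That worst-case arithmetic can be written out directly. The 4 ms raw seek and 3 ms settle figures here are just the ones quoted in the example, not values from any particular drive's datasheet:

```python
def worst_case_access_ms(raw_seek_ms, settle_ms, rpm):
    """Worst-case access time: a full-stroke seek, head settling,
    and one complete missed rotation of the platter."""
    rotation_ms = 60_000 / rpm
    return raw_seek_ms + settle_ms + rotation_ms

# The 10K rpm example from the text: 4 + 3 + 6 = 13 ms
print(worst_case_access_ms(4, 3, 10_000))
```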

BTW - all disk drives work from the outside track in - CDs and DVDs work from inside out.

To cover off your second question, you can use that to your advantage when you carve up disks. In an all-out performance configuration, you'd configure smaller "slices" of each disk to limit head movement. Best performance is at the outside cylinders (greater linear velocity of the tracks under the heads), with the drive getting slower (relatively speaking) as you move toward the center of the drive. It should be said, however, that going to all this trouble would be worthwhile if you were trying to win a TPC-C benchmark, but in a real-world environment? I'm not sure you'd see the benefit after a few weeks of operation. If you were running SQL Server, Exchange, VMware or any other random workload, you get your performance by having loads of drives (spindles) available for the workload, and you spread that workload over as many spindles as possible. Unfortunately, the relationship between the number of drives and performance is not linear, but for the purposes of calculations, you can assume it is.

The reason I say that you'd most likely not see an advantage after a few weeks is that NTFS (assuming your workload is Windows; Linux file systems work differently) looks for empty areas to write to before it will use a chunk of space that has had data deleted from it. Once it uses up the available free space, it will start to use space freed up from deleted files. This has the effect of turning a sequential operation into a more random workload. A random workload means more seeks, more rotations of the disk, and settling time for every read or write operation. To get things back to a happy state, you'll need to defrag the file system, which moves all the free space to the end of the drive. In effect, you'd start out with a really nicely performing disk which will degrade over time until you do a defrag, at which point you'll get back to a nicely performing disk. Depending on the workload, you may be better off having bigger disk partitions so that you don't need to run the defrags as often.

Disk performance is a complicated subject full of more contradictions and competing requirements than you can possibly shake a stick at. What is the application and OS you are planning for?
richn (ASKER)

Thank you for the information on the order in which tracks are allocated.

Rotational latency has nothing to do with seek time.  Seek time refers only to the time required to position the head to the correct track and, as far as I can tell, in most disk drive specifications it includes settling time.  Minimum seek time could never be the time required for the head to move all the way from the innermost to the outermost track or vice versa; that would be the maximum seek time.  Minimum seek time would be the time it takes to move (and settle) the head one track.  You add the seek time, the rotational latency, and the data transfer time to find the total time for the operation.  Every drive specification I have seen gives me the rotational latency and the sustained transfer rate, but none of them list the minimum and maximum seek times, only the average.
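Putting those three components together, a single random I/O costs roughly the following. The seek, request-size, and transfer-rate figures in the example are hypothetical, chosen only to illustrate the breakdown:

```python
def io_time_ms(seek_ms, rpm, transfer_kib, sustained_mib_s):
    """Estimated time for one random I/O:
    seek + average rotational latency + data transfer at the sustained rate."""
    latency_ms = 60_000 / rpm / 2                        # half a rotation on average
    transfer_ms = transfer_kib / 1024 / sustained_mib_s * 1000
    return seek_ms + latency_ms + transfer_ms

# e.g. 3.5 ms seek, 10K rpm, 64 KiB at 100 MiB/s -> 3.5 + 3.0 + 0.625 = 7.125 ms
print(io_time_ms(3.5, 10_000, 64, 100))
```

Note how the seek and rotational terms dominate for small random transfers, which is why limiting head travel is attractive in the first place.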

As far as a specific application, I don't have a specific project at this point.  In the future I would probably only go to the trouble for database servers, and then only if there was a significant difference.  That is what I am trying to ascertain.  Let's say a drive has an average seek time of 3.5 ms.  If the minimum and maximum seek times are 2.5 and 4.5, then it is not worth my time to worry about it.  But if the minimum and maximum times are 0.5 and 20, then it is probably worth my time to get creative with partitions, finding ways to use only the outer tracks of the drive during heavy use periods and the inner tracks only during non-peak hours.

Going back to the reference from nobus, there was an assumption made that average seek time is approximately equal to the 1/3-stroke time.  Would it be safe to say that a full-stroke seek should never be more than 3 times the average seek?  If so, then it would actually be even less, given that the settling time should be the same regardless of how far the head moved before stopping.  Does anyone know if most modern drives have a similar settling time, and what that time might be?  If so, then the formula would be Maximum Time = (Average Time - Settling Time) * 3 + Settling Time.  This would lead you to conclude that Minimum Time = Settling Time is a fairly safe assumption, since the time required for the actual movement of the head in a one-track seek should be almost zero.
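The proposed rule of thumb can be written out directly. The settle time is an assumed input, since, as noted throughout the thread, manufacturers rarely publish it:

```python
def seek_bounds_ms(average_ms, settle_ms):
    """Estimate min/max seek from the average, assuming the average seek
    is roughly the 1/3-stroke time and settle time is independent of
    how far the head travels."""
    maximum = (average_ms - settle_ms) * 3 + settle_ms
    minimum = settle_ms               # one-track head movement is ~0
    return minimum, maximum

# e.g. 3.5 ms average with an assumed 1 ms settle -> (1.0, 8.5)
print(seek_bounds_ms(3.5, 1.0))
```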
SOLUTION
ASKER CERTIFIED SOLUTION
richn (ASKER)

Our current system storage unit of choice is the IBM DS3400.  If we set up a RAID 10 array and then split it into two partitions, is it reasonable to assume that the first partition allocated would use the outermost tracks of each drive in the array and the second partition would use the innermost tracks?
Yes, that's a reasonable assumption. But remember that if the 2nd partition is also actively in use, the heads will be "thrashing" between the two partitions during any concurrent accesses to the two.
richn (ASKER)

The plan would be to only use the 2nd partition for off-hours activity.  One example I can think of would be to have our database transaction logs partnered with a disk backup partition.  Since we don't run 24/7 there would be very little, if any, activity to our transaction logs during our backup window.  We could do the same thing with the array containing our data file partition.  If we were concerned about the backup itself causing contention, we could have the data files backup to the partition on the log files array and the log files backup to the partition on the data files array.

Something like this:
Array 1 - RAID 10 - 4 * 300 GB (600 GB Usable)
   Partition 1 - TrxLogs (200 GB)
   Partition 2 - DataBackup (400 GB)
Array 2 - RAID 10 - 4 * 300 GB (600 GB Usable)
   Partition 1 - DataFiles  (400 GB)
   Partition 2 - TrxBackup (200 GB)

Given that the drives do pack more sectors per track at the outer tracks, is 20% enough to "discard" and still get a noticeable improvement?  If so, then the simplest approach might be to simply say that "disk is cheap" and only allocate one partition per array at 80% capacity, especially if real world requirements don't fit as neatly as my artificial scenario above.
It's hard to say just how much you could "discard" (i.e. relegate to off-hours use) and still get a good improvement.   Clearly if you allocate half, you'll be using fewer than half the cylinders (since the outermost cylinders have more sectors/cylinder), so that's a good starting point.   And you'd likely also limit your maximum seek to something just over the specified average seek time, since you'd probably only be using a bit more than 1/3rd of the cylinders (perhaps 40% or so).  [Note, of course, that YOUR average seek time would be lower than the manufacturer's specification, since a 1/3rd stroke for YOUR "drive" would be much less travel than the "real" 1/3rd stroke is for the drive.]      My gut feel is you could allocate 2/3rds and still get a nice improvement, but I'd be reluctant to go beyond that if performance is the key objective.

Your concept above is exactly what I do with some of my drives => put backup partitions at the end of the drives; and use them to backup a DIFFERENT drive.    I'd think the allocations you show above would work very well.
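The "fewer than half the cylinders" point can be sanity-checked with a toy zoned-recording model. This assumes sectors per track grow linearly with radius and an assumed inner/outer radius ratio of 0.5; real zone layouts differ, so treat the result as a rough illustration:

```python
def cylinder_fraction_for_capacity(capacity_fraction, inner_over_outer=0.5):
    """Fraction of cylinders (allocated outermost-first) needed to hold
    a given fraction of total capacity, assuming sectors per track are
    proportional to radius. Solved by bisection."""
    ri = inner_over_outer             # inner radius, with outer radius = 1
    lo, hi = 0.0, 1.0
    for _ in range(60):
        f = (lo + hi) / 2
        r = 1 - f * (1 - ri)          # innermost radius actually used
        cap = (1 - r * r) / (1 - ri * ri)  # capacity held by the outer cylinders
        if cap < capacity_fraction:
            lo = f
        else:
            hi = f
    return (lo + hi) / 2

# Half the capacity fits in roughly 42% of the cylinders under these assumptions
print(cylinder_fraction_for_capacity(0.5))
```

That lines up with the intuition above: allocating half the capacity to the first partition confines the heads to well under half the stroke during normal use.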
Member_2_231077

If performance is the requirement, you chuck the disks and use SSDs, or even plug flash directly into the PCI bus like Fusion-io, but it still costs a packet.

Stop press: El Reg reporter finally realises disks aren't as big as the form factor or they would scrape the sides...
www.theregister.co.uk/2009/05/07/platter_size/
richn (ASKER)

Thanks for the help, everyone.  I had a hard time splitting the points.  In addition to the accuracy of the answers, I tried to factor in the time that each person appeared to have spent on it.  I hope I was fair.