Solved

# Drive Seek Time

Posted on 2009-05-06
If the specification for a hard drive shows average seek time of 4ms, is there a rule of thumb for what the minimum and maximum seek times would be?  Does the diameter of the drive make a difference?
Question by:richn

LVL 91

Assisted Solution

nobus earned 70 total points
Here's a lot on seek times - happy reading!
http://www.logicsmith.com/seektime.html

LVL 1

Author Comment

That is an interesting article, but unfortunately it assumes you either know the maximum seek time from specifications or you are designing to meet a desired maximum seek time.  My problem is that most drive manufacturers seem to have eliminated maximum seek time from their specifications, and I was wondering if there is any way to "back into it" using average seek time and drive diameter.

LVL 91

Expert Comment

You can calculate the maximum rotational delay from the disk rotation speed, but there is still the positioning delay of the actuator.
I fear you have to look it up for each different model.

LVL 1

Author Comment

Yes, that figure is commonly called "rotational latency", or simply "latency".  It is the same for every track on the disk.  Most disk manufacturers are only listing the average seek times and not the maximum seek time if you had to seek all the way from the first track to the last track.  The reason I am asking is that I know you can decrease maximum seek time (which will indirectly reduce average seek time) by formatting drives to less than their total capacity.  I was just wondering if there was a rule of thumb that might let you estimate how much you would save.  Also, if you split the drive into two partitions and only used one of them, would you want to use the first partition or the second partition?  I would assume you would want to use the outer tracks of the drive, but do all drives allocate tracks in the same direction?
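For reference, rotational latency follows directly from spindle speed, so it can always be computed even when the spec sheet omits it. A minimal sketch (the rpm values are just common examples, not figures from any particular drive):

```python
# Rotational latency from spindle speed: one full rotation takes 60000/rpm ms,
# and the average latency is half a rotation (on average the target sector is
# half a revolution away when the head arrives).

def full_rotation_ms(rpm):
    return 60_000 / rpm

def avg_latency_ms(rpm):
    return full_rotation_ms(rpm) / 2

print(avg_latency_ms(7_200))   # ~4.17 ms
print(avg_latency_ms(15_000))  # 2.0 ms
```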

LVL 91

Expert Comment

Try contacting Garycase; I believe he can help you better:
http://www.experts-exchange.com/M_2048329.html

LVL 30

Expert Comment

A rule of thumb is almost impossible - the minimum seek time could range from a fraction of a millisecond to the time it takes to move from the outside of the disk to the innermost track, plus the time of one whole rotation of the disk, plus head settling time. The maximum seek time will be a function of the spin speed of the disk as well as the type of actuator used for the head assembly. If you refer to the article nobus found, add raw seek time + settle time + 1 rotation. In the example about two thirds down the page, 4msec + 3msec + 6msec = 13msec seek time total for a 10K rpm disk, or about 21msec for a 5400rpm disk.
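That arithmetic is easy to sketch. The 4 ms and 3 ms figures below are taken from the linked example, and the rotation term is just 60000/rpm in ms (note this simple formula gives ~18 ms for a 5400 rpm disk; the article's 21 ms presumably assumes a longer seek for the slower drive):

```python
# Rough worst-case access time: raw seek + settle + one full platter rotation.
# Seek and settle values here come from the example in the linked article.

def rotation_ms(rpm):
    """Time for one full platter rotation in milliseconds."""
    return 60_000 / rpm

def worst_case_access_ms(seek_ms, settle_ms, rpm):
    return seek_ms + settle_ms + rotation_ms(rpm)

print(worst_case_access_ms(4, 3, 10_000))  # 13.0 ms for a 10K rpm disk
print(worst_case_access_ms(4, 3, 5_400))   # ~18.1 ms for a 5400 rpm disk
```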

BTW - all disk drives work from the outside track in - CDs and DVDs work from inside out.

To cover off your second question, you can use that to your advantage when you carve up disks. In an all-out performance configuration, you'd configure smaller "slices" of each disk to limit head movement. Best performance is at the outside cylinders (greater linear velocity of tracks under the heads) with the drive getting slower (relatively speaking) as you move toward the center of the drive. It should be said, however, that going to all this trouble would be worthwhile if you were trying to win a TPCC benchmark, but in a real-world environment? I'm not sure you'd see the benefit after a few weeks of operation. If you were running SQL Server, Exchange, VMware or any other random workload, you get your performance by having loads of drives (spindles) available for the workload, and you spread that workload over as many spindles as possible. Unfortunately, the relationship between the number of drives and performance is not linear, but for the purposes of calculations, you can assume it is.

The reason I say that you'd most likely not see an advantage after a few weeks is that NTFS (assuming your workload is Windows; Linux file systems work differently) looks for empty areas to write to before it will use a chunk of space that has had data deleted from it. Once it uses up the available free space, it will start to use space freed up from deleted files. This has the effect of turning a sequential operation into a more random workload. A random workload means more seeks, more rotations of the disk and settling time for every read or write operation. To get things back to a happy state, you'll need to defrag the file system, which moves all the free space to the end of the drive. In effect, you'd start out with a really nicely performing disk which will degrade over time until you do a defrag, at which point you'll get back to a nicely performing disk. Depending on the workload, you may be better off having bigger disk partitions so that you don't need to run the defrags as often.

Disk performance is a complicated subject full of more contradictions and competing requirements than you can possibly shake a stick at. What is the application and OS you are planning for?

LVL 1

Author Comment

Thank you for the information on the order in which tracks are allocated.

Rotational latency has nothing to do with seek time.  Seek time refers only to the time required to position the head to the correct track, and, as far as I can tell, in most disk drive specifications it includes settling time.  Minimum seek time could never be the time required for the head to move all the way from the innermost to the outermost track or vice versa; that would be the maximum seek time.  Minimum seek time would be the time it would take to move (and settle) the head one track.  You add the seek time, the rotational latency, and the data transfer time to find the total time for the operation.  Every drive specification I have seen gives me the rotational latency and the sustained transfer rate, but none of them list the minimum and maximum seek time, only the average.

As far as specific application, I don't have a specific project at this point.  In the future I would probably only go to the trouble for database servers, and then only if there was a significant difference.  That is what I am trying to ascertain.  Let's say a drive has an average seek time of 3.5 ms.  If the minimum and maximum seek times are 2.5 and 4.5, then it is not worth my time to worry about it.  But if the minimum and maximum times are 0.5 and 20, then it is probably worth my time to get creative with partitions, find ways to use only the outer tracks of the drive during heavy-use periods, and use the inner tracks only during non-peak hours.

Going back to the reference from Nobus, there was an assumption made that average seek time is approximately equal to a 1/3 stroke time.  Would it be safe to say that a full-stroke seek should never be more than 3 times the average seek?  If so, then it would actually be even less, given that the settling time should be the same regardless of how far the head moved before stopping.  Does anyone know if most modern drives have a similar settling time, and what that time might be?  If so, then the formula would be Maximum Time = (Average Time - Settling Time) * 3 + Settling Time.  This would lead you to conclude that Minimum Time = Settling Time would be a fairly safe assumption, since the time required for the actual movement of the head in a one-track seek should be almost zero.

LVL 55

Assisted Solution

andyalder earned 170 total points
I can't think of any rule of thumb; maximum being about 3 times average sounds reasonable from figures I've seen. There is of course a difference between average read seek and average write seek: the disk tries to read before the head has completely settled and may get lucky; it can't do the same when writing or it might corrupt an adjacent track.

LVL 70

Accepted Solution

garycase earned 260 total points
I'll toss in a few thoughts ...

"... is there a rule of thumb for what the minimum and maximum seek times would be? " ==> In general the maximum seek time is about 2.5 times the average.   As noted in the reference cited above, average seek is stated as the time for a 1/3rd stroke (statistically, this works very well).   But a full stroke doesn't take triple that time due to the acceleration/deceleration and settling times.   The minimum (track-to-track) time is typically between 1 & 2 ms for modern drives.   The settling time is typically about 0.1ms.

"... Does the diameter of the drive make a difference? " ==>  Yes ... the important measurement is the width of the data portion of the drive; but this is clearly related to the platter's diameter.

As you've already noted, the drive manufacturers make it very difficult to find detailed specifications on their drives these days.    But it's also true that there's little difference between drives of the same rotational speed => if you want to maximize performance you can basically do two things:  (1)  Get the highest rpm drive you can (typically 10,000 rpm for SATA, 15,000 rpm for SAS or SCSI); and (2)  partition the drive so you're using the much faster outer cylinders for your data.

As I assume you know, all modern drives use zoned sectoring, so the outermost cylinders have more sectors per track than the inner ones -- thus the sustained transfer rate is appreciably faster on those cylinders (nearly double).   Consequently, you can get much better performance from a partition that uses only the outer cylinders than you can from a partition on the inner cylinders (or one that spans the entire drive).   Both in transfer rate (as I just noted, it will be about double) and in the maximum seek time ... since the partition won't span the whole drive, the maximum seek time within that partition will be much lower.

But the seek time advantage is only true IF you don't have another partition on the same drive that's also actively in use at the same time (so the heads are always in the range of the 1st partition).

All modern drives use logical block addressing, and block numbering starts on the outermost cylinder (working inward) => so the first partition on the drive is the outermost/fastest.     However, modern drives also do automatic bad sector reallocation (S.M.A.R.T.) ... so if your drive has a lot of failed sectors any access to those will be notably longer, since the reallocated sectors may/may not be anywhere near the actual block number.   This is a very minor point, but just thought I'd note it for completeness.

Bottom line:   For any given drive, you'll get the best performance by creating a modest sized partition as the first partition on the drive and simply not using the rest of the drive (or use it for an archival partition that's not accessed often).   The smaller the partition, the better the performance -- as long as it's "big enough" for your purposes (e.g. don't make it so small that the OS is hampered by a nearly full "drive").

But for a SYSTEM (as opposed to a drive), don't forget the other point that's already been made above:  you can improve performance a lot by spreading accesses across multiple platters => either with a RAID array (this doesn't help access time, but can notably improve transfer rate); or with multiple drives (and the "drives" themselves can be RAID arrays) that spread activities to different physical drives (thus reducing thrashing and effectively reducing your overall average access times).

LVL 1

Author Comment

Our current system storage unit of choice is the IBM DS3400.  If we setup a RAID 10 Array, and then split that into two partitions, is it reasonable to assume that the first partition allocated would be using the outermost tracks of each drive in the array and the second partition would be using the innermost tracks?

LVL 70

Expert Comment

Yes, that's a reasonable assumption. But remember that if the 2nd partition is also actively in use, the heads will be "thrashing" between the two partitions during any concurrent accesses to the two.

LVL 1

Author Comment

The plan would be to only use the 2nd partition for off-hours activity.  One example I can think of would be to have our database transaction logs partnered with a disk backup partition.  Since we don't run 24/7 there would be very little, if any, activity to our transaction logs during our backup window.  We could do the same thing with the array containing our data file partition.  If we were concerned about the backup itself causing contention, we could have the data files backup to the partition on the log files array and the log files backup to the partition on the data files array.

Something like this:
Array 1 - RAID 10 - 4 * 300 GB (600 GB Usable)
Partition 1 - TrxLogs (200 GB)
Partition 2 - DataBackup (400 GB)
Array 2 - RAID 10 - 4 * 300 GB (600 GB Usable)
Partition 1 - DataFiles  (400 GB)
Partition 2 - TrxBackup (200 GB)

Given that the drives do pack more sectors per track at the outer tracks, is 20% enough to "discard" and still get a noticeable improvement?  If so, then the simplest approach might be to simply say that "disk is cheap" and only allocate one partition per array at 80% capacity, especially if real world requirements don't fit as neatly as my artificial scenario above.

LVL 70

Expert Comment

It's hard to say just how much you could "discard" (i.e. relegate to off-hours use) and still get a good improvement.   Clearly if you allocate half, you'll be using fewer than half the cylinders (since the outermost cylinders have more sectors/cylinder), so that's a good starting point.   And you'd likely also limit your maximum seek to something just over the specified average seek time, since you'd probably only be using a bit more than 1/3rd of the cylinders (perhaps 40% or so).  [Note, of course, that YOUR average seek time would be lower than the manufacturer's specification, since a 1/3rd stroke for YOUR "drive" would be much less travel than the "real" 1/3rd stroke is for the drive.]      My gut feel is you could allocate 2/3rds and still get a nice improvement, but I'd be reluctant to go beyond that if performance is the key objective.
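The "fewer than half the cylinders" point can be illustrated with a toy model. Assuming, purely for illustration, that sectors per track scale linearly with track radius and that the innermost track sits at half the outer radius (neither figure comes from a spec sheet), you can compute what fraction of the head stroke the outermost partition of a given capacity occupies:

```python
import math

# Toy zoned-recording model: track capacity is assumed proportional to radius,
# so cumulative capacity from radius r out to the edge r_o scales with
# (r_o^2 - r^2). Given a capacity fraction allocated to the outer edge,
# solve for the radius it reaches and convert that to a stroke fraction.

def stroke_fraction(capacity_fraction, inner_ratio=0.5):
    r_o, r_i = 1.0, inner_ratio
    r = math.sqrt(r_o**2 - capacity_fraction * (r_o**2 - r_i**2))
    return (r_o - r) / (r_o - r_i)

print(round(stroke_fraction(0.5), 2))  # ~0.42: half the capacity on ~42% of the stroke
print(round(stroke_fraction(0.8), 2))
```

Under these assumptions, half the capacity fits on roughly 42% of the stroke, which lines up with the "perhaps 40% or so" estimate above.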

Your concept above is exactly what I do with some of my drives => put backup partitions at the end of the drives; and use them to backup a DIFFERENT drive.    I'd think the allocations you show above would work very well.

LVL 55

Expert Comment

If performance is the requirement, you chuck the disks and use SSD, or even plug the flash directly into the PCI bus like Fusion-io, but it still costs a packet.

Stop press: El Reg reporter finally realises disks aren't as big as the form factor or they would scrape the sides...
www.theregister.co.uk/2009/05/07/platter_size/

LVL 1

Author Closing Comment

Thanks for the help, everyone.  I had a hard time splitting the points.  In addition to the accuracy of the answers, I tried to factor in the time that each person appeared to have spent on it.  I hope I was fair.
