• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 566

How do I improve disk I/O performance?

I'm looking for tunable ways to improve disk I/O performance on Windows NT. Are there registry settings that are safe to change and will improve performance? What other settings can I change to increase the file cache, etc.?

1 Solution
This might give you some ideas.
Actually, the best way to increase the file cache is to add more memory. The more memory, the better NT runs; it's as simple as that:

Tuning for "Disk" Performance

As you might have guessed, disk performance is the single most important aspect of I/O performance. It affects many other aspects of system performance. Good disk performance enhances virtual memory performance and reduces the elapsed time required to load programs that perform a great deal of I/O, and so on.
If you discover a disk bottleneck, the first thing you need to determine is whether it’s really more memory that you need. If you are short on memory, you will see the lost performance reflected as a disk bottleneck.
Gotcha. Because disk counters can increase disk access time by approximately 1.5% on a 386/20, Windows NT does not automatically activate these counters at system startup. To activate disk counters, type diskperf -y at the command prompt and restart the computer. On a 486 or better system, the hit is not apparent.

What to Watch

·      If the Physical Disk object’s “% Disk Time” counter consistently registers at or near 67%, the physical disk is the bottleneck. This counter is the percentage of elapsed time that the selected disk drive is busy servicing read or write requests, including time waiting in the disk driver queue.
·      If Physical Disk “Disk Queue Length” (pending disk I/O requests) is greater than 2, it generally indicates significant disk congestion. (Note: the same rule applies to almost all I/O devices.)
·      Determine the portion of the disk I/O used for paging with the following formula: % disk time used for paging = 100 * (Memory “Pages/sec” * PhysicalDisk “Avg. Disk sec/Transfer”). If this is more than 10% of the total disk activity, then paging is excessive. Avg. Disk sec/Transfer is the time in seconds of the average disk transfer. This formula does not include the case where you may be paging over the network.
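These rules of thumb can be sketched in code. A minimal illustration (the function name and sample counter values are hypothetical, assuming you have already read the counters out of Performance Monitor):

```python
# Rule-of-thumb thresholds from the "What to Watch" list above.
# The counter values would come from Performance Monitor; the
# numbers used below are made up for illustration.

def disk_bottleneck_report(pct_disk_time, disk_queue_length,
                           pages_per_sec, avg_disk_sec_per_transfer):
    """Return a list of the thresholds that were exceeded."""
    report = []
    if pct_disk_time >= 67.0:
        report.append("physical disk is the bottleneck (% Disk Time >= 67)")
    if disk_queue_length > 2:
        report.append("significant disk congestion (Disk Queue Length > 2)")
    # % disk time used for paging = 100 * Pages/sec * Avg. Disk sec/Transfer
    paging_pct = 100.0 * pages_per_sec * avg_disk_sec_per_transfer
    if paging_pct > 10.0:
        report.append("excessive paging (%.1f%% of disk time)" % paging_pct)
    return report

# 80 pages/sec at 2 ms per transfer -> 16% of disk time spent paging.
for line in disk_bottleneck_report(70.0, 3, 80, 0.002):
    print(line)
```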
What You Can Do
·      Install a faster disk and/or controller. Determine if the controller card does 8-bit, 16-bit, or 32-bit transfers. The more bits in the transfer operation, the faster the controller moves data. You may also want to choose a different drive technology. IDE (Integrated Drive Electronics) has a 2.5 MB/sec throughput, ESDI 3 MB/sec, SCSI-2 5 MB/sec, and Fast SCSI-2 10 MB/sec.
·      Create mirrored data sets. The I/O system can issue concurrent reads to the two partitions: the first portion of the read goes to partition A, while the next portion goes to partition B (assuming the disk driver and controller can handle asynchronous I/O).
·      Create striped data sets. Multiple disks (between 3 and 32) can process I/O requests concurrently (assuming the disk driver and controller can handle asynchronous I/O).
·      Add memory (RAM) to increase file cache size.
·      Change to a different I/O bus architecture. EISA, MCA, and local bus (VESA or PCI) buses transfer data at a much higher rate than ISA buses. PCI is fast because it transfers a double word at a time at 33 MHz (33 MHz * 4 bytes = 132 MB/sec), whereas ISA maxes out at about 5 MB/sec and EISA at about 32 MB/sec (EISA transfers 4 bytes at 8 MHz). There has been talk of raising the PCI clock rate to 66 MHz (for a 264 MB/sec transfer rate), but most manufacturers are resisting the idea (at about 50 MHz or so, getting past FCC Class B certification is a nightmare).
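The arithmetic in this bullet, written out (these are peak theoretical rates, assuming one transfer per clock):

```python
# Peak bus bandwidth = clock rate (MHz) * bytes moved per clock.
# MHz * bytes conveniently works out to megabytes per second.

def peak_mb_per_sec(clock_mhz, bytes_per_transfer):
    return clock_mhz * bytes_per_transfer

print(peak_mb_per_sec(33, 4))   # PCI:  33 MHz * 4 bytes = 132 MB/sec
print(peak_mb_per_sec(8, 4))    # EISA:  8 MHz * 4 bytes =  32 MB/sec
print(peak_mb_per_sec(66, 4))   # 66 MHz PCI proposal    = 264 MB/sec
```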
·      When choosing an I/O device such as a disk adapter, consider the architecture of the card. For example, here are some of the points to consider about each architecture:
·      PIO: PIO (programmed I/O) requires intervention by the CPU. For example, the Adaptec 1522 is a PIO device and can do either 16-bit or 32-bit PIO. However, CPU usage is quite high (30–40%), and it will slow down your system during a large transfer or a CD-ROM access. As such, most high-performance systems don’t use a PIO device, because PIO devices adversely impact system throughput. BYTE magazine compared the Adaptec 2940 (PCI) against a Future Domain adapter (PIO). While the two provide almost identical benchmark results, the Future Domain consumes a hefty 40% of CPU time, whereas the 2940 does not. However, PIO devices are much cheaper to manufacture; the Future Domain is about half the price of the 2940. Another thing to keep in mind is that the standard ATDISK driver (most IDE drives) does PIO.
·      DMA: ISA DMA has only 24 address lines, so it can physically address 16 MB. However, if you happen to have 32 MB of RAM, the OS can see all of that memory. Therefore, if the OS wants to transfer a block of memory located above 16 MB (which an ISA DMA card such as the Adaptec 1542C cannot physically see), it must first copy that block down into the 0–16 MB range (where the 1542C can see it) before the card can initiate the DMA transfer (double buffering). This copying down into the 0–16 MB range, and copying back up to 16 MB and above, takes quite a bit of time (using the Intel rep movsb/movsw/movsd instructions), which explains the slowdown. You don’t have that problem with VL, PCI, or EISA, as they all have 32 DMA address lines and can physically see up to 4 GB. PIO devices can see all of memory, including addresses above 16 MB; the only problem is that the processor itself must perform every data transfer. The last thing to keep in mind is that some devices do both PIO and DMA. Unless your system is an ISA computer with more than 16 MB of RAM, you should always run with the controller in DMA mode.
      Gotcha. The Adaptec 154x ISA busmaster has a hard-coded DMA transfer-rate limit of 5 MB/sec. This limit is hard-coded in the Windows NT driver.
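The 16 MB ceiling described above follows directly from the 24 address lines; a quick sketch:

```python
# An n-line address bus can name 2**n distinct byte addresses, which is
# why ISA DMA (24 lines) stops at 16 MB while EISA/VL/PCI (32 lines)
# reach 4 GB. Buffers above the card's reach must be double-buffered:
# copied into low memory before the DMA, and copied back out afterwards.

def addressable_bytes(address_lines):
    return 2 ** address_lines

print(addressable_bytes(24) // 2**20)  # 16 (MB) -- ISA DMA limit
print(addressable_bytes(32) // 2**30)  # 4  (GB) -- 32-bit DMA limit
```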
·      Bus Master: Bus master devices have their own intelligence and offload this work from the CPU. The CPU can resume doing its own thing while the bus-master device is doing all the I/O. When it’s done, it hands the result to the CPU. These cards are by far the best solution.
      Gotcha. Make sure that you check the Windows NT Hardware Compatibility List before you purchase a controller. This will tell you if the controller is supported by Microsoft and has a certified driver.
·      On a system with two daisy-chained SCSI disks, the SCSI controller has more impact on your total performance than the disk drives do. You would be better off buying a slower, cheaper disk and investing in a better SCSI controller.
·      Adding more physical drives in a RAID 5 configuration can result in significant performance improvements when the disk subsystem is the bottleneck. However, adding more controllers usually does not significantly improve performance. When using high-performance disk controllers, the physical drive access times are usually the performance limiting factor for the disk subsystem.
·      Choose a disk with a low seek time (the time required to move the disk drive’s heads from one track of data to another). The ratio of time spent seeking to time spent transferring data is usually 10 to 1, and often much higher.
·      Distribute the workload as evenly as possible among different disk drives. This will allow you to take full advantage of the system’s I/O bandwidth. For example, if you have one user population that does a great deal of reads and writes to directory \\server\ExcelData and another user population that does a great deal of reads and writes to a directory \\server\WordData then you may want to consider putting the ExcelData directory on a different disk and/or controller than the WordData directory. You can take advantage of the auditing facility of Windows NT and the NTFS file system to track how certain network files are being used. User Manager lets you enable file access auditing, and File Manager lets you specify the users and files whose access you want to record.
·      If you choose a FAT file system, it tends to become fragmented over time. As the file system fills, pieces of files are scattered over the disk; the system cannot find enough contiguous blocks to store a new file in one place, so it must fit the file into empty spaces between other files. As files are added, deleted, truncated, and expanded, the file system becomes increasingly disorderly. Performance suffers because the disk drive cannot read a file with a sequential group of operations; instead, it must constantly seek for different pieces of the file. To counter fragmentation, use a defrag utility, such as Executive Software’s Diskeeper, to rearrange files into contiguous sequences (for more information refer to their web site at http://www.earthlink.net/execsoft).
·      NTFS is best for use on volumes of about 400 MB or more, because performance does not degrade with larger volume sizes under NTFS as it does under FAT. As the size of the volume increases, FAT performance quickly decreases. The FAT file system also wastes more disk space than NTFS. FAT allocates disk space for files in clusters, the smallest allocation units the file system uses; even a 1-byte file is allocated a whole cluster, wasting all of the unused space. When a large number of small files are stored on a FAT partition, the cluster size tends to waste a large amount of disk space. The cluster size depends on the size of the logical drive: FAT can only track a maximum of 64K clusters, since there are 64K entries in the File Allocation Table, so the cluster size must increase for large drives in order to address the whole drive. The maximum cluster size is 64 KB, making the largest FAT logical drive 4 GB. NTFS also has a limit, but it is 2^64.
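The FAT numbers in this bullet can be checked with a little arithmetic (the helper below is illustrative; real FAT cluster-size tables vary slightly by implementation):

```python
# FAT arithmetic from the bullet above: the File Allocation Table has
# at most 64K entries, so the cluster size must grow with the volume.

MAX_CLUSTERS = 64 * 1024          # 64K FAT entries
MAX_CLUSTER_SIZE = 64 * 1024      # 64 KB largest cluster

def min_cluster_size(volume_bytes):
    """Smallest power-of-two cluster that lets 64K clusters span the volume."""
    size = 512  # start at one sector
    while size * MAX_CLUSTERS < volume_bytes:
        size *= 2
    return size

print(min_cluster_size(32 * 1024**2))   # 32 MB volume -> 512-byte clusters
print(min_cluster_size(2 * 1024**3))    # 2 GB volume  -> 32 KB clusters

# Largest FAT volume: 64K clusters * 64 KB per cluster = 4 GB.
print(MAX_CLUSTERS * MAX_CLUSTER_SIZE == 4 * 1024**3)
```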
·      Disabling short-name generation on an NTFS partition will greatly increase directory enumeration performance, especially where individual directories contain a large number of files or directories with non-8.3 filenames. To disable short-name generation, use REGEDT32.EXE to set a registry DWORD value of 1 in the following Registry location:
      Gotcha. This may cause compatibility problems with 16-bit MS-DOS– and Windows-based applications.
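The answer above does not show the Registry location itself. As a hedged sketch only: the key path and value name below are from my own recollection of NT, not from the original answer, so verify them against the NT Resource Kit before editing. A .reg file for this setting might look like:

```
REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
; 1 disables 8.3 short-name generation on NTFS volumes
"NtfsDisable8dot3NameCreation"=dword:00000001
```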

More info :

Windows NT does not allow much tuning of caching except for one registry entry.

   1. Start the registry editor (regedit.exe)
   2. Move to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
   3. Double-click LargeSystemCache and set it to 0 to reduce the amount of memory used for file caching.
   4. Click OK
   5. Close the registry editor
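The steps above can also be captured as a .reg file that regedit can import (this mirrors step 3; as always with the registry, back up first):

```
REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
; 0 = favor application working sets; 1 = favor the file system cache
"LargeSystemCache"=dword:00000000
```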

If you start the Network Control Panel applet and select the Services tab, you can select Server and click Properties. Select
"Maximize Throughput for Network Applications" to use less memory (this actually sets LargeSystemCache to 0).

Sysinternals has released CacheSet (http://www.sysinternals.com), which allows you to set the memory used
for caching more precisely.