NTFS block size in NT4.0

What are the pros and cons of using a non-default block size for NTFS partitions?  I am configuring NT as an application server to run Oracle and in Oracle I can control the database block size (2K, 4K, 8K, 16K, etc.) but it must be the same for the entire database, even if there are different sized drives.  I suspect that response times will be best if both NT and Oracle use the same size blocks.  I have read that the default block size in NT is 1/1,000,000 of the partition size (2K for 2gig, 4K for 4gig, etc.).  I would like to use 4K blocks on 9gig partitions.  Is this possible/advisable?

I am willing to give more points for a good answer, and/or to give credit to multiple responses.
Mark Geerlings (Database Administrator) asked:

rawatson commented:
Hello there,
   Generally speaking, you can boost performance and reduce overhead by choosing the right non-default block (cluster) size.  On the other hand, the wrong choice can just as easily decrease performance.
   What you really need to do is estimate the average size of the files in the database and pick a cluster size that divides it evenly.  For example, if you have many 28K files, you would want 4K clusters rather than 8K clusters, since 28K is an even multiple of 4K but not of 8K.  Similarly, Oracle and NT should ideally use the same block size, or failing that, sizes that are even multiples of each other.
   The 1/1M default block size is correct, but only up to 4K, I believe.  By default you will not get an 8K or 16K cluster because of file compression and disk defragmentation issues (apparently those utilities require clusters of 4K or smaller).  So your decision to go with 4K clusters sounds wise.  I would go with that size unless your average file size is something really odd (i.e. not close to a multiple of 4K).
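   To illustrate the arithmetic, here is a quick sketch (in Python, purely for illustration) of how much space is wasted in the last, partially filled cluster for a given file size and cluster size:

```python
import math

K = 1024

def slack(file_size, cluster_size):
    """Bytes wasted in the last, partially filled cluster of a file."""
    clusters = math.ceil(file_size / cluster_size)
    return clusters * cluster_size - file_size

# A 28K file is an even multiple of 4K, so nothing is wasted...
print(slack(28 * K, 4 * K))   # 0
# ...but with 8K clusters the same file wastes half a cluster.
print(slack(28 * K, 8 * K))   # 4096
```

   The same function also shows why larger clusters only hurt when files are small or oddly sized; for files in the multi-megabyte range the waste is a negligible fraction of the total.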
   Finally, there is a really good article on optimizing NTFS that you should check out.  It's on Microsoft's Technet site (which I believe is accessible to those without a Technet subscription) at the following URL:

http://technet.microsoft.com/cdonline/Content/Complete/windows/winnt/Winntas/Tips/winntmag/optntfs.htm

If you can't get to it, write back, and I'll try to see if I can find another copy of it somewhere.
  I hope that this has answered some of your questions.

rawatson
Mark Geerlings (Author) commented:
To rawatson:

Thank you for your comment and the link to the Technet article - that was informative.

For me the issue isn't file size relative to cluster size, or possible wasted space from choosing a larger-than-default cluster size.  I will have only a few (3-10) large files per drive.  A few of them will be in the range of 1-4MB, many will be 20-200MB, some will be 1-2gig.

I am just trying to optimize the combination of O/S disk I/O and Oracle disk I/O.  I am assuming (and am quite sure) that Oracle reads and writes individual blocks, not complete files, so NTFS must be able to support that random, block-sized access pattern efficiently.
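For what it's worth, the access pattern I mean can be sketched like this (illustrative Python, not how Oracle is actually implemented; the block size here is an assumption matching the proposed 4K cluster):

```python
BLOCK_SIZE = 4096  # assumed to match both the NTFS cluster and Oracle block size

def read_block(path, block_no):
    """Read one block by seeking straight to its offset; the rest of
    the (possibly multi-gigabyte) file is never touched."""
    with open(path, "rb") as f:
        f.seek(block_no * BLOCK_SIZE)
        return f.read(BLOCK_SIZE)
```

When the O/S cluster size matches BLOCK_SIZE, each such read maps onto whole clusters, with no split I/O at the filesystem level.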

If no one else responds, I'll give you the points, but I would like to leave this open yet to see if anyone else has additional information.
dsazama commented:
You may already know this, but you can adjust the cluster size of an NTFS partition when you format it.  I know this can be done from the command prompt (type format /? to see the syntax), and I am pretty sure it can be done through the GUI as well.  This may not be helpful, but I didn't see the procedure for actually changing the cluster size in rawatson's response (although it was an excellent one).
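For example, the /A switch of the NT format command sets the allocation unit (cluster) size; double-check format /? on your own system for the exact options it supports:

```bat
REM Reformat drive D: as NTFS with a 4K cluster size
REM (this destroys all data on D:)
format D: /FS:NTFS /A:4096
```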
 
dsazama commented:
Sorry for posting two comments in a row, but you may want to check Compaq's web site.  I have read a white paper there (about 20 pages long) discussing exactly this issue, and it was informative.

If you need the exact location let me know.
 
Mark Geerlings (Author) commented:
To dsazama:

I would appreciate the exact URL for the Compaq paper, since I didn't find it on the Compaq site.
 
akb commented:
I believe that if you have a few very large files then it is most efficient to select the largest cluster size available.  This will minimise file fragmentation, although I suspect Oracle will preallocate the files anyway, in which case the cluster size will have no impact upon performance.
 
Mark Geerlings (Author) commented:
To akb:  It would be most efficient to use the largest available cluster size IF Oracle always read each file in its entirety, but that is not the case.  Oracle supports random access, but in units of the Oracle block size, not the O/S block size, so I would like to get the two to the same size to maximize efficiency.  A size of 4K or possibly 8K would likely be best for Oracle.  But how does NT respond to 4K or 8K cluster sizes on 9-gig drives, each with one partition spanning the full drive and one logical drive per partition?

Note: I do not want RAID5 and larger or multiple logical drives.  Oracle offers the data protection I need through its archive logging, so I don't want the performance penalty on write operations of RAID5.  Also, Oracle does its own buffering, so the slightly faster read performance on RAID5 is usually not an advantage for Oracle either.
 
Mark Geerlings (Author) commented:
I was hoping for more comments from people who have tried non-default cluster sizes, but maybe not many people have.

I was also hoping for a reply from dsazama with a specific URL for the paper he mentioned, but that hasn't happened either.

I'll accept your comment and close out the question.