Monis Monther

asked on

Linux file system

I am trying to create a file system with a 16k or 64k block size. I created a GPT disk and formatted it with XFS. The problem is that when I mount it, I get

mount: Function not implemented

Googling this problem, I found that I cannot mount a filesystem with a block size larger than 4k, because the block size can't exceed the page size, which is 4k in my case.

How can I overcome this problem?

I am running 32-bit CentOS Linux with kernel 2.6.18.
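
For reference, the steps I followed were roughly as follows (the device name /dev/sdb and mount point /mnt/media are just examples, not my actual ones):

# parted /dev/sdb mklabel gpt
# parted /dev/sdb mkpart primary xfs 0% 100%
# mkfs.xfs -b size=16384 /dev/sdb1
# mount /dev/sdb1 /mnt/media
mount: Function not implemented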
wesly_chen

http://oss.sgi.COM/projects/xfs/
----Quote---
Filesystem Block Size

The maximum filesystem block size is the page size
of the kernel, which is 4K on x86 architecture
---------
As root:
# getconf PAGESIZE
4096

So this is not possible on Linux. The man page says the block size can go up to 64 kB, but it is limited by the page size; Linux on x86 is limited to a 4k page size. SGI IRIX systems might be able to set it larger than 4k.
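
If you want to double-check the block size mkfs.xfs actually wrote, you can read the superblock without mounting the filesystem, something like this (the device name is an example, and the output is what I would expect for a 16k filesystem):

# xfs_db -r -c "sb 0" -c "p blocksize" /dev/sdb1
blocksize = 16384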
Monis Monther

ASKER

1- What about increasing the page size? There is something called large pages and huge pages. Do you know of any options that might help increase the page size?

2- Does this mean that any Linux system is limited to a 4k filesystem block size? That is a shock if it's true. What do people do when they need filesystems with larger block sizes? And why does ext3 support 8K if the OS is limited to 4k?
> why does ext3 support 8K if the OS is limited to 4k?
No, ext3 is still limited by the page size.

> Do you know of any options that might help increase the page size?
Sorry, I don't know of one. It would likely require changes to the kernel code.

Databases like Oracle that require larger block sizes for performance usually run on raw disks (not formatted with any filesystem).
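
Regarding large/huge pages: as far as I know, huge pages are a separate mechanism used for application memory (for example the Oracle SGA); they do not change the base page size that the filesystem code is limited by. You can see whether they are configured with something like this (output is illustrative):

# grep Huge /proc/meminfo
HugePages_Total:     0
HugePages_Free:      0
Hugepagesize:     2048 kB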
Changing the page size in the kernel is target-dependent. It can be done (I've worked on versions of Linux with 16K pages), but it requires a significant amount of work, and there are many dependencies which need to be resolved. On those systems, the file system block size may be larger than 4k.

>>What do people do to have filesystems for larger block sizes?
On x86 systems, file systems have 4k blocks.  There is little or no benefit to larger page sizes.

Why do you want/need a file system with 16k or 64k blocks? The file system block size is a logical, not physical, block size. It is not the size of the records written on the disk.
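
To make the distinction concrete, you can compare the hardware sector size with the block size the kernel and the filesystem use; something like this (device names are examples, outputs are typical values):

# blockdev --getss /dev/sda       (hardware sector size)
512
# blockdev --getbsz /dev/sda1     (block size the kernel uses for the block device)
4096
# dumpe2fs -h /dev/sda1 | grep "Block size"     (logical block size of an ext3 filesystem)
Block size:               4096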
@eager

1- Can you please let me know the steps needed to change the page size, or better yet, post a link with details if possible?

2- I have large media files, and I read that having a larger block size would help performance.
The details are complex:   You would re-target Linux for a new variation of the x86 hardware.   This would take months for someone who is experienced with both x86 and Linux memory management.

You have been misinformed:  Larger file system block sizes will have minimal impact on performance.  Increasing block size (say from 256 to 512) can result in an improvement in performance.  There is a point of diminishing returns where the costs of increasing page size outweigh any improvement in performance from having 4K blocks.  This is somewhere around 4K page size.  
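
If you want to measure this yourself rather than take my word for it, a rough test is to format the same partition with different block sizes and compare sequential read throughput (names are examples; drop the page cache between runs so the second read is not served from memory):

# mkfs.ext3 -b 1024 /dev/sdb1 && mount /dev/sdb1 /mnt/test
# dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=1024
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/mnt/test/bigfile of=/dev/null bs=1M
(repeat with mkfs.ext3 -b 4096 and compare the throughput dd reports)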
ASKER CERTIFIED SOLUTION
wesly_chen
What I meant to say was: "There is a point of diminishing returns where the costs of increasing the page size outweigh any improvement in performance from having larger file system blocks."
I appreciate all the info. However, I read that Oracle advises an 8k block size. How can you achieve this given the 4k limitation you are referring to? The article mentioned using HugeMem.

Provide a reference, please.  
SOLUTION
@wesly_chen

You mentioned that Windows NTFS also has a 4k block size limit on the file system for the x86 architecture. I recently installed Win2008 R2 64-bit and was able to format a drive with a 64k block size; I tested it by creating a .txt file. If you right-click --> Properties, you can see that the file is a few bytes, while the size on disk is 64k (the block size).

How come Windows NTFS can have a 64k block size? I hope you can help explain.
Allocation units are not the same as block size.  I'm not familiar with the internals of NTFS, but every file system I am aware of allocates disk space in some multiple of the block size.  The minimum size of a file on disk is an allocation unit.  
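
On Linux you can see the same rounding with stat: a few-byte file on a 4k-block ext3 filesystem still occupies one full block, reported as eight 512-byte units in the Blocks field (output is illustrative):

# echo hello > tiny.txt
# stat tiny.txt
  Size: 6          Blocks: 8          IO Block: 4096   regular file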
@eager

I formatted a partition with ext3 using the default options, and it was created with a 4k block size. I created a small file and ran the command

du -sh file

It reported 4k (which is equal to the block size).

I reformatted the same partition with

mkfs.ext3 -b 1024

and it created a filesystem with a 1k block size. I repeated the same test on the file, and it reported 1k (which is the block size of the filesystem).

So I guess that the allocation unit here is the same as the block size of the filesystem.

In Win2003, the maximum allocation unit you can choose when formatting a drive/partition is 4K, which is consistent with what the experts above have said about the relation to page size.

Now in Win2008, new formatting options of 4, 8, 16, and 64K block sizes are available, and a small file, as in the test above, shows a size on disk of 64k, which is the block size of the filesystem.

My question: how come the block size can be so big in Win2008, while this is not supported on Linux?
Linux and Windows have significantly different designs and internal organizations.  The relationship between PAGESIZE and file system block size in Linux is a design decision, not a physical constraint. I'm not familiar with NTFS internals.

I would not take the results of du as offering a clear view of EXT3 allocation methods.
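
For example, du reports allocated blocks, not logical size, so a completely sparse file looks empty to du even though ls shows it as 100 MB (a quick illustration; the file name is an example):

# dd if=/dev/zero of=sparse.bin bs=1M seek=100 count=0
# ls -lh sparse.bin     (reports the 100M logical size)
# du -h sparse.bin      (reports 0, since no blocks are allocated)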
Thanks, Eager, for your comments.