Linux file system

I am trying to create a filesystem with a 16k or 64k block size. I created a GPT disk and formatted it with XFS. The problem is that when I mount it, I get:

mount: Function not implemented

Googling this problem, I found out that I cannot mount a filesystem with a block size larger than 4k, because the block size cannot exceed the page size, which is 4k in my case.

How can I overcome this problem?

I am running 32-bit CentOS Linux with kernel 2.6.18.
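For reference, this is roughly what I am doing (the device name /dev/sdb and mount point /mnt/data are just examples):

# parted /dev/sdb mklabel gpt             # GPT label on the disk
# parted /dev/sdb mkpart primary 0% 100%  # one partition spanning the disk
# mkfs.xfs -f -b size=16384 /dev/sdb1     # create the filesystem with 16k blocks
# mount /dev/sdb1 /mnt/data
mount: Function not implemented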
Asked by Monis Monther (System Architect)

2 Solutions
 
wesly_chen Commented:
http://oss.sgi.com/projects/xfs/
---- Quote ----
Filesystem Block Size

The maximum filesystem block size is the page size
of the kernel, which is 4K on x86 architecture
---------------
As root:
# getconf PAGESIZE
4096

So this is not possible on Linux x86. The man page says the block size can go up to 64kB, but it is limited by the page size, and on x86 Linux the page size is 4k. SGI's own systems might be set to a page size larger than 4k.
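As a quick sketch (placeholder device /dev/sdb1 and mount point /mnt/data), the same filesystem mounts fine once the block size is no larger than that 4k page size:

# mkfs.xfs -f -b size=4096 /dev/sdb1   # block size equal to the page size
# mount /dev/sdb1 /mnt/data            # mounts without error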
 
Monis Monther (System Architect, Author) Commented:
1- What about increasing the page size? There is something called large pages and huge pages. Do you know of any options that might help increase the page size?

2- Does this mean that any Linux system is limited to a 4k filesystem block size? This is a shock if it's true. What do people do to get filesystems with larger block sizes? And why does ext3 support 8k if the OS is limited to 4k?
 
wesly_chen Commented:
> why does ext3 support 8k if the OS is limited to 4k?
No, ext3 is still limited by the page size.

> Do you know of any options that might help increase the page size?
Sorry, I don't know. It would mean changing code in the kernel.

Databases such as Oracle that require a larger block size for performance usually use raw disks (not formatted with any filesystem).
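As a quick check (again, /dev/sdb1 and /mnt/data are placeholders), you can see the same limit with ext3: mke2fs will build an 8k-block filesystem if you tell it to proceed, but a kernel with 4k pages refuses to mount it:

# mkfs.ext3 -b 8192 /dev/sdb1   # mke2fs warns that 8192-byte blocks are too big for this system and asks whether to proceed
# mount /dev/sdb1 /mnt/data     # fails; the kernel cannot handle a block size above its page size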
 
eager Commented:
Changing the page size in the kernel is target dependent. It can be done (I've worked on versions of Linux with 16K pages) but it requires a significant amount of work and there are many dependencies which need to be resolved. On these systems, the file system block size may be larger than 4k.

>> What do people do to have filesystems for larger block sizes?
On x86 systems, file systems have 4k blocks. There is little or no benefit to larger page sizes.

Why do you want/need a file system with 16k or 64k blocks? The file system block size is a logical, not physical, block size. It is not the size of records written on the disk.
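For example (placeholder device /dev/sdb, with an XFS filesystem assumed to be mounted at /mnt/data; requires util-linux and xfsprogs):

# blockdev --getss /dev/sdb        # sector size of the physical device, typically 512 bytes
# xfs_info /mnt/data | grep bsize  # logical block size the filesystem was created with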
 
Monis Monther (System Architect, Author) Commented:
@eager

1- Can you please let me know the steps needed to change the page size, or better, post a link with details if possible?

2- I have large media files, and I read that having a larger block size would help performance.
 
eager Commented:
The details are complex:   You would re-target Linux for a new variation of the x86 hardware.   This would take months for someone who is experienced with both x86 and Linux memory management.

You have been misinformed:  Larger file system block sizes will have minimal impact on performance.  Increasing block size (say from 256 to 512) can result in an improvement in performance.  There is a point of diminishing returns where the costs of increasing page size outweigh any improvement in performance from having 4K blocks.  This is somewhere around 4K page size.  
 
wesly_chen Commented:
As far as I know, the x86 architecture is limited to a 4k page size, so the block size is limited to 4k as well.
NTFS on x86 hardware is also limited to a 4k block size.
You could try ia64 (Itanium), which might allow an 8k block size.
 
eager Commented:
What I meant to say was: "There is a point of diminishing returns where the costs of increasing page size outweigh any improvement in performance from having larger file system blocks."
 
Monis Monther (System Architect, Author) Commented:
I appreciate all the info. However, I read that Oracle advises an 8k block size. How can this be achieved given the 4k limitation you are referring to? The article mentioned using HugeMem.

 
eager Commented:
Provide a reference, please.  
 
Monis Monther (System Architect, Author) Commented:
This is the article that I read:

http://www.dba-oracle.com/t_linux_hugepages.htm

 
eager Commented:
The Linux Hugepages setting controls the size of page tables and how memory is allocated. Changing this value may improve performance by reducing memory fragmentation and reducing the frequency that page tables and TLB need to be updated.  The article you mention is about optimizing memory performance, not file access.  

Changing Hugepages (which the article describes)  doesn't change PAGESIZE within x86 Linux, which is 4k, and is not related to the size of blocks on a file system.  See http://lwn.net/Articles/374424/
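You can see that these are two separate settings (a sketch, run as root on the CentOS box):

# getconf PAGESIZE                 # base page size used by the page cache and filesystems; 4096 on x86
# grep -i hugepage /proc/meminfo   # HugePages pool and Hugepagesize; tuning these does not change PAGESIZE
# cat /proc/sys/vm/nr_hugepages    # number of reserved huge pages, the kind of knob such articles adjust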
 
Monis Monther (System Architect, Author) Commented:
@wesly_chen

You mentioned that Windows NTFS also has a 4k block size limit on the filesystem for the x86 architecture. I just installed a Win2008 R2 64-bit system and was able to format a drive with a 64k block size; I tested it by creating a .txt file. Right-clicking the file and selecting Properties shows that the file is a few bytes, while its size on disk is 64k (the block size).

How come Windows NTFS can have a 64k block size? Hope you can help explain.
 
eager Commented:
Allocation units are not the same as block size.  I'm not familiar with the internals of NTFS, but every file system I am aware of allocates disk space in some multiple of the block size.  The minimum size of a file on disk is an allocation unit.  
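On Windows you can check this directly; for example (the drive letter D: is a placeholder), running

C:\> fsutil fsinfo ntfsinfo D:

from an administrator command prompt prints, among other things, a "Bytes Per Cluster" line, which is the allocation unit chosen when the volume was formatted.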
 
Monis Monther (System Architect, Author) Commented:
@eager

I formatted a partition with ext3 using the default options, which gave it a 4k block size. I created a small file and ran

du -sh file

and it reported 4k (which is equal to the block size).

I then reformatted the same partition with

mkfs.ext3 -b 1024

which created a filesystem with a 1k block size. I repeated the same test on the file and it reported 1k (the block size of the filesystem).

So I guess that the allocation unit here is the same as the block size of the filesystem.

In Win2003 the maximum option you get when formatting a drive/partition is 4k, which is consistent with what the experts above have said about the relation to the page size.

In Win2008, however, the formatting options include 4, 8, 16, and 64k block sizes, and a small file, tested as above, shows a size on disk of 64k, which is the block size of the filesystem.

My question: how come the block size can be so big in Win2008 while this is not supported on Linux?
 
eager Commented:
Linux and Windows have significantly different designs and internal organizations.  The relationship between PAGESIZE and file system block size in Linux is a design decision, not a physical constraint. I'm not familiar with NTFS internals.

I would not take the results of du as offering a clear view of EXT3 allocation methods.
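If you want to check the block size directly rather than inferring it from du, something like this works (placeholder device and file names):

# tune2fs -l /dev/sdb1 | grep 'Block size'   # block size recorded in the ext3 superblock
# stat -c '%s bytes apparent, %b blocks of %B bytes allocated' somefile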
 
Monis Monther (System Architect, Author) Commented:
Thanks, Eager, for your comments.
