grantmiller1 asked:

Unable to create a 3.5TB filesystem

I'm running into a bit of a wall trying to create a 3.5TB filesystem.  I'm running RHEL AS 4u6 x86_64, using the largesmp kernel so it sees all of the memory and CPUs.

uname -a says:
Linux foobar.company.com 2.6.9-67.ELlargesmp #1 SMP Wed Nov 7 14:07:22 EST 2007 x86_64 x86_64 x86_64 GNU/Linux

The system hardware is a 4-CPU Intel Xeon 7350 (quad-core) server with 256GB of memory.  There's an internal drive for the OS and a QLogic QLE2462 card attached to a Promise VTrak E610f RAID.

I have a RAID set created through the management interface on the Promise box; it's 3.7TB in size.

The Linux box recognizes the QLogic card (no additional drivers needed) and sees that it's attached to the Promise RAID.

I ran parted, set the disk label to GPT, and created a single 3.5TB partition.  But when I ran mkfs.ext3, I ended up with only a 1.4TB filesystem.

Here's what I see when I run parted and mkfs.ext3:

root# /sbin/parted /dev/sdb

Using /dev/sdb
(parted) print
Disk geometry for /dev/sdb: 0.000-3576277.500 megabytes
Disk label type: gpt
Minor    Start       End     Filesystem  Name                  Flags
(parted) mklabel gpt
(parted) mkpart primary 0 3576277
(parted) print
Disk geometry for /dev/sdb: 0.000-3576277.500 megabytes
Disk label type: gpt
Minor    Start       End     Filesystem  Name                  Flags
1          0.017 3576277.483
(parted) quit
Information: Don't forget to update /etc/fstab, if necessary.


root# mkfs.ext3 /dev/sdb1
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
189333504 inodes, 378655357 blocks
18932767 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
11556 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
root#
root#
root# mount /dev/sdb1 /mnt
root# df -lh                
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5              47G  8.2G   37G  19% /
/dev/sda1             122M   18M   99M  15% /boot
none                  126G     0  126G   0% /dev/shm
/dev/sdb1             1.4T  102M  1.4T   1% /mnt



SysExpert replied:

Not sure, but the issue may be a 2TB rollover in some of the tools, or a 32- vs. 64-bit support issue somewhere.

Can you test with a 64-bit Linux live CD to compare?


I hope this helps !
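
A quicker check than booting a live CD, if it helps: confirm that the kernel and the mkfs binary are both 64-bit (a minimal sketch; the binary path is an assumption):

uname -m                 # should report x86_64
file /sbin/mkfs.ext3     # should report "ELF 64-bit"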
The size difference could be the result of using a small block size (4k).  Try a larger 16k or 32k block size to see whether that lets you address more of the 3.5TB space:

mkfs.ext3 -b 16384 -f 16384 /dev/sdb1
mkfs.ext3 -b 32768 -f 32768 /dev/sdb1
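
For what it's worth, ext2/ext3 block sizes are capped at the CPU page size at mount time, so anything above 4k is unlikely to be mountable on x86_64.  A quick way to check, assuming getconf is available:

getconf PAGE_SIZE        # 4096 on x86_64, which is the largest mountable ext3 block size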
grantmiller1 (ASKER) replied:

Responding to SysExpert's comment: I installed RHEL AS from Red Hat's x86_64 DVD image, so I'm pretty sure I'm running 64-bit binaries.
mkfs.ext3 didn't like 16k blocks.  It wasn't very happy with 8k blocks either, but it went ahead and did it:

root# mkfs.ext3 -b 8192 /dev/sdb1
Warning: blocksize 8192 not usable on most systems.
mke2fs 1.35 (28-Feb-2004)
mkfs.ext3: 8192-byte blocks too big for system (max 4096)
Proceed anyway? (y,n) y
Warning: 8192-byte blocks too big for system (max 4096), forced to continue
Filesystem label=
OS type: Linux
Block size=8192 (log=3)
Fragment size=8192 (log=3)
189214080 inodes, 189327678 blocks
9466383 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4311218176
2890 block groups
65528 blocks per group, 65528 fragments per group
65472 inodes per group
Superblock backups stored on blocks:
        65528, 196584, 327640, 458696, 589752, 1638200, 1769256, 3210872,
        5307768, 8191000, 15923304, 22476104, 40955000, 47769912, 143309736,
        157332728

Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
root#
root# mount /dev/sdb1 /mnt
mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
       or too many mounted file systems

Oops, sorry: 4096 is the largest block size.  I was thinking of a different OS.

You can try adjusting the bytes-per-inode ratio with the -i option:

mkfs.ext3 -i 32768 /dev/sdb1

Note that a larger bytes-per-inode value means fewer inodes in total, so you could run out of inodes quickly if your files are all small.

Hi,

I don't think the filesystem is the problem.

Try using LVM and then build a filesystem on top of that (see the sketch after this list):

1. pvcreate /dev/sdb (use the whole disk, no need for a partition table)
2. vgcreate BigVG /dev/sdb
3. vgdisplay -v BigVG (record the total number of extents)
4. lvcreate -n my_big_volume -l <no of extents> BigVG
5. mkfs.ext3 /dev/mapper/BigVG-my_big_volume

//jonas

(ext3 has a filesystem size limit of 8 to 16TB depending on block size, so that shouldn't be a problem)
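
A condensed run-through of the steps above (a sketch; it assumes the LVM2 tools live in /usr/sbin and that the whole of /dev/sdb goes to the volume group):

/usr/sbin/pvcreate /dev/sdb
/usr/sbin/vgcreate BigVG /dev/sdb
/usr/sbin/vgdisplay BigVG | grep "Total PE"                # note the extent count
/usr/sbin/lvcreate -n my_big_volume -l <total_PE> BigVG    # substitute the count from above
mkfs.ext3 /dev/mapper/BigVG-my_big_volume
mount /dev/mapper/BigVG-my_big_volume /mnt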
While it shouldn't be necessary, did you try rebooting?  It could be you've written out the GPT table, but the kernel is still reading an old partition table (or otherwise not seeing the full 3.5T).

While LVM may be a good idea in and of itself, it sounds like there's a core problem here, that still needs to be fixed.

The one other question I'd have, is did you try using parted to create the filesystem as well?

That you're getting roughly 1.4T, rather than a filesystem capped at 2T, suggests either a problem with the partition table being read (i.e. a reboot is required) or a bug (e.g. the "rollover" SysExpert suggested: 3.5T wrapped at 2T would leave about 1.5T).
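
One way to check what the kernel currently believes about the device, without a reboot (a sketch; the blockdev options are assumed to be present in this util-linux version):

cat /proc/partitions                  # sizes are shown in 1KiB blocks
/sbin/blockdev --getsize64 /dev/sdb   # device size in bytes, if supported
/sbin/blockdev --rereadpt /dev/sdb    # ask the kernel to re-read the partition table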
grantmiller1 (ASKER) replied:

I have rebooted the system after creating the partition; same result when running mkfs.ext3, a 1.4TB filesystem.

I was able to use mkfs.ext3 and specify the size of the filesystem:

mkfs.ext3 /dev/sdb1 3499759999999

df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1             3.4T  102M  3.4T   1% /mnt
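
For reference, mke2fs interprets a trailing argument as a count of filesystem blocks, so if you're specifying the size explicitly it may be safer to derive the count from the partition's byte size (a sketch; assumes blockdev supports --getsize64):

BYTES=$(/sbin/blockdev --getsize64 /dev/sdb1)
mkfs.ext3 -b 4096 /dev/sdb1 $((BYTES / 4096))    # block count = bytes / 4KiB block size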


Using mkpartfs in parted does the trick.  It doesn't support creating ext3, but I can just run tune2fs on the resulting ext2 filesystem to convert it (example below).

(parted) mkpartfs primary ext3 0 3576277                                  
No Implementation: Support for creating ext3 file systems is not implemented yet.

(parted) mkpartfs primary ext2 0 3576277

df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1             3.4T   52K  3.2T   1% /mnt
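
The ext2-to-ext3 conversion is just a matter of adding a journal, e.g.:

tune2fs -j /dev/sdb1    # adds a journal, turning the ext2 filesystem into ext3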

We've established that mkfs.ext3 can initialize the filesystem at the full available capacity, but isn't doing so by default.  That sounds like a bug to me.  If possible, I would test with the latest versions of the tools in the RHEL4 branch (i.e. make sure you're fully up to date; see the sketch below), and if the problem persists, open a bug with Red Hat.

In the meantime, two workarounds have been established.  Since ext2 and ext3 are essentially the same on-disk format, with ext3 just adding a journal, both should be acceptable.
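
Checking the installed e2fsprogs and pulling updates on RHEL4 might look like this (a sketch; RHEL4 uses up2date rather than yum, and the exact invocation is an assumption):

rpm -q e2fsprogs     # the logs above show mke2fs 1.35; look for something newer
up2date e2fsprogs    # fetch the latest package from RHN, if the system is entitled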
ASKER CERTIFIED SOLUTION: Computer101

This solution is only available to Experts Exchange members.