Unable to create a 3.5TB filesystem

Solved
Posted on 2008-11-18
Last Modified: 2013-12-01
I'm running into a bit of a wall trying to create a 3.5TB filesystem.  I'm running RHEL AS 4u6 x86_64, using the largesmp kernel to see all the memory and CPUs.

uname -a says:
Linux foobar.company.com 2.6.9-67.ELlargesmp #1 SMP Wed Nov 7 14:07:22 EST 2007 x86_64 x86_64 x86_64 GNU/Linux

The system hardware is a 4-CPU Intel Xeon 7350 (quad-core) server with 256GB of memory.  There's an internal drive for the OS and a QLogic QLE2462 card attached to a Promise VTrak E610f RAID.

I have a RAID set created through the management interface on the Promise box; it's 3.7TB in size.

The Linux box recognizes the QLogic card (I didn't need to load any additional drivers) and sees that it's attached to the Promise RAID.

I ran parted, set the disk label to GPT, and created a single 3.5TB partition.  But when I run mkfs.ext3, I end up with only a 1.4TB filesystem.

Here's what I see when I run parted and mkfs.ext3:

root# /sbin/parted /dev/sdb

Using /dev/sdb
(parted) print
Disk geometry for /dev/sdb: 0.000-3576277.500 megabytes
Disk label type: gpt
Minor    Start       End     Filesystem  Name                  Flags
(parted) mklabel gpt
(parted) mkpart primary 0 3576277
(parted) print
Disk geometry for /dev/sdb: 0.000-3576277.500 megabytes
Disk label type: gpt
Minor    Start       End     Filesystem  Name                  Flags
1          0.017 3576277.483
(parted) quit
Information: Don't forget to update /etc/fstab, if necessary.


root# mkfs.ext3 /dev/sdb1
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
189333504 inodes, 378655357 blocks
18932767 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
11556 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
root#
root#
root# mount /dev/sdb1 /mnt
root# df -lh                
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5              47G  8.2G   37G  19% /
/dev/sda1             122M   18M   99M  15% /boot
none                  126G     0  126G   0% /dev/shm
/dev/sdb1             1.4T  102M  1.4T   1% /mnt
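For reference, an independent way to check what size the kernel believes the partition is (standard util-linux commands, not part of the original post):

root# grep sdb /proc/partitions       # sizes are reported in 1K blocks
root# blockdev --getsize64 /dev/sdb1  # partition size in bytes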



Question by:grantmiller1

10 Comments
 
LVL 63

Expert Comment

by:SysExpert
ID: 22986583
Not sure, but the issue may be a 2TB rollover in some of the tools, or a 32- vs. 64-bit support issue somewhere.

Can you test with a 64-bit Linux live CD to compare?


I hope this helps !
 
LVL 81

Expert Comment

by:arnold
ID: 22986605
The size difference could be the result of using a small block size (4k). Try a larger 16k or 32k block size to see whether that lets you address more of the 3.5TB space.

mkfs.ext3 -b 16384  -f 16384 /dev/sdb1
mkfs.ext3 -b 32768  -f 32768 /dev/sdb1
 

Author Comment

by:grantmiller1
ID: 22987262
Responding to SysExpert's comment: I installed RHEL AS from Red Hat's x86_64 DVD image, so I'm pretty sure I'm running 64-bit binaries.
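A quick way to confirm that both the kernel and the userland tools are 64-bit (standard commands, not from the original comment):

root# uname -m              # should print x86_64
root# file /sbin/mkfs.ext3  # should report an ELF 64-bit executable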

 

Author Comment

by:grantmiller1
ID: 22987380
mkfs.ext3 didn't like 16k blocks.  It wasn't very happy with 8k blocks, but it went ahead and did it:

root# mkfs.ext3 -b 8192 /dev/sdb1
Warning: blocksize 8192 not usable on most systems.
mke2fs 1.35 (28-Feb-2004)
mkfs.ext3: 8192-byte blocks too big for system (max 4096)
Proceed anyway? (y,n) y
Warning: 8192-byte blocks too big for system (max 4096), forced to continue
Filesystem label=
OS type: Linux
Block size=8192 (log=3)
Fragment size=8192 (log=3)
189214080 inodes, 189327678 blocks
9466383 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4311218176
2890 block groups
65528 blocks per group, 65528 fragments per group
65472 inodes per group
Superblock backups stored on blocks:
        65528, 196584, 327640, 458696, 589752, 1638200, 1769256, 3210872,
        5307768, 8191000, 15923304, 22476104, 40955000, 47769912, 143309736,
        157332728

Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
root#
root# mount /dev/sdb1 /mnt
mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
       or too many mounted file systems
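(For background: ext2/ext3 cannot be mounted with a block size larger than the CPU's page size, which is 4096 bytes on x86 and x86_64; that's why the 8192-byte filesystem was created but refuses to mount. A quick check of the page size, using getconf:)

root# getconf PAGE_SIZE    # prints 4096 on x86/x86_64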

 
LVL 81

Expert Comment

by:arnold
ID: 22990568
Oops, sorry: 4096 is the largest usable block size here. I was thinking of a different OS.
You can try adjusting the bytes-per-inode ratio using the -i option:

mkfs.ext3 -i 32768 /dev/sdb1

Note that this reduces the total number of inodes, so if your files are all small you can run out of inodes long before you run out of space.
 
LVL 14

Expert Comment

by:cjl7
ID: 22992003
Hi,

I don't think the filesystem itself is the problem.

Try using LVM and then build a filesystem on top of that (a verification sketch follows the steps):

1. pvcreate /dev/sdb (use the whole disk, no need for a partition table)
2. vgcreate BigVG /dev/sdb
3. vgdisplay -v BigVG (note the number of free Logical Extents (LE))
4. lvcreate -n my_big_volume -l <no of LE> BigVG
5. mkfs.ext3 /dev/mapper/BigVG-my_big_volume

//jonas

(ext3 has a filesystem size limit of 8-16 TB, so that shouldn't be a problem)
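A quick verification sketch for the steps above (names are the ones from the steps; the numbers will differ on your array):

root# lvs BigVG                                             # LV size as LVM reports it
root# blockdev --getsize64 /dev/mapper/BigVG-my_big_volume  # size in bytes as the kernel sees it
root# df -h /mnt                                            # size as reported once mounted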
 
LVL 7

Expert Comment

by:macker-
ID: 23001061
While it shouldn't be necessary, did you try rebooting?  It could be that you've written out the GPT table, but the kernel is still reading an old partition table (or otherwise not seeing the full 3.5T).  A sketch of how to re-read the table without rebooting follows below.

While LVM may be a good idea in and of itself, it sounds like there's a core problem here that still needs to be fixed.

One other question: did you try using parted to create the filesystem as well?

That you're ending up with roughly 1.5T instead of hitting a clean 2T limit suggests either a problem with the partition table being read (i.e. a reboot is required) or a rollover bug (3.5T wrapping past 2T leaves about 1.5T, as SysExpert suggested).
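As referenced above, the kernel can usually be asked to re-read the partition table without a reboot (standard util-linux/parted tools, not from the original comment):

root# blockdev --rereadpt /dev/sdb   # have the kernel re-read the partition table
root# partprobe /dev/sdb             # the same, via parted's helper
root# grep sdb /proc/partitions      # confirm the sizes the kernel now sees (1K blocks)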
 

Author Comment

by:grantmiller1
ID: 23001564
I have rebooted the system after creating the partition; same result when running mkfs.ext3: a 1.5TB filesystem.

I was able to use mkfs.ext3 and specify the size of the filesystem:

mkfs.ext3 /dev/sdb1 3499759999999

df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1             3.4T  102M  3.4T   1% /mnt


Using mkpartfs in parted does the trick. It doesn't support creating ext3 directly, but I can make an ext2 filesystem and then run tune2fs to convert it (see the note after the df output below).

(parted) mkpartfs primary ext3 0 3576277                                  
No Implementation: Support for creating ext3 file systems is not implemented yet.

(parted) mkpartfs primary ext2 0 3576277

df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1             3.4T   52K  3.2T   1% /mnt
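The ext2-to-ext3 conversion mentioned above is just a matter of adding a journal with tune2fs (the -j flag; device name as in the thread):

root# tune2fs -j /dev/sdb1   # add an ext3 journal to the existing ext2 filesystem
(then mount it as ext3, or update /etc/fstab accordingly)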

 
LVL 7

Expert Comment

by:macker-
ID: 23047944
We've established that mkfs.ext3 is capable of initializing the filesystem at the full available capacity, but is not doing so by default.  This sounds like a bug.  If possible, I would test with the latest versions of the tools available in the RHEL4 branch (i.e. make sure you're fully up to date), and if the problem still exists, open a bug with Red Hat.

In the meantime, it sounds like two workarounds have been established.  Since ext2 and ext3 are "the same thing" apart from the journal, both should be acceptable.
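Assuming the stock RHEL4 updater, the relevant packages could be brought up to date like this (package names are the usual providers of mkfs.ext3 and parted; adjust as needed):

root# up2date e2fsprogs parted   # update from RHN
root# rpm -q e2fsprogs parted    # confirm the installed versions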
 
LVL 1

Accepted Solution

by: Computer101 (earned 0 total points)
ID: 23886735
PAQed with points refunded (250)

Computer101
EE Admin
