Solved

Another CentOS Linux partition / fs: fdisk, lvcreate, vgcreate, mkfs ext3, mount

Posted on 2011-09-23
1,483 Views
Last Modified: 2012-06-27

I have a CentOS Linux server into which I've just plugged three more 146 GB
HDDs (so it now houses six 146 GB HDDs in total), and I would like to mount
all the unused space.  Details of the partitions are further below.

I also can't mount the newly inserted 3x 146 GB RAID 5 (hardware RAID)
disks:

#  mount -t ext3 /dev/VolGroup01/nfs1 /nfs1
mount: wrong fs type, bad option, bad superblock on /dev/VolGroup01/nfs1,
       missing codepage or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

[root@nfs1 ~]#  mount  /dev/VolGroup01/nfs1 /nfs1
mount: you must specify the filesystem type


Current Filesystem/partition details:
============================

# lvm
lvm> lvdisplay
  --- Logical volume ---
  LV Name                /dev/VolGroup01/nfs1
  VG Name                VolGroup01
  LV UUID                59NMoq-pxE3-IIu0-jWAv-8aAM-0RL5-nedWty
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                273.45 GB
  Current LE             70003
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol00
  VG Name                VolGroup00
  LV UUID                dXoGq7-azl1-EQYH-2Htn-YrzY-Wljs-3KDGFq
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                30.00 GB
  Current LE             960
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol01
  VG Name                VolGroup00
  LV UUID                sSIq0I-35D0-lBkq-KzSf-T8aG-ZqlU-8ZD2dU
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                2.00 GB
  Current LE             64
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

lvm> exit
  Exiting.
[root@nfs1 ~]# fdisk -l

Disk /dev/cciss/c0d0: 293.6 GB, 293617820160 bytes
255 heads, 63 sectors/track, 35697 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
/dev/cciss/c0d0p1   *           1          13      104391   83  Linux
/dev/cciss/c0d0p2              14       35697   286631730   8e  Linux LVM

Disk /dev/cciss/c0d1: 293.6 GB, 293617820160 bytes
255 heads, 63 sectors/track, 35697 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
/dev/cciss/c0d1p1               1       35697   286736121   8e  Linux LVM
[root@nfs1 ~]#
[root@nfs1 ~]#
[root@nfs1 ~]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup01
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  8
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               273.45 GB
  PE Size               4.00 MB
  Total PE              70003
  Alloc PE / Size       70003 / 273.45 GB
  Free  PE / Size       0 / 0
  VG UUID               zJqZAn-fxMf-G6F0-9ird-1b1z-1tcn-ee8fMf

  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               273.34 GB
  PE Size               32.00 MB
  Total PE              8747
  Alloc PE / Size       1024 / 32.00 GB
  Free  PE / Size       7723 / 241.34 GB
  VG UUID               QCeebm-BhWL-ENfK-Jvd8-pZwS-yB7w-ms6w9j


Q1:
How can I fix the mount failure above?

Q2:
Kindly provide detailed step-by-step instructions for using fdisk, lvcreate, vgcreate,
mount, etc. to create/merge/initialize (mkfs ext3) the filesystems.  Also let me know
the /etc/fstab entry to add so the setup/mounting survives a reboot.

Q3:
What cluster (block) size setting should I use?

Q4:
How do I share out the partitions via NFS to another server (HP-UX B11.11) that will host a DB?
Question by:sunhux
11 Comments
 

Author Comment

by:sunhux
ID: 36585920

I suspect the reason I can't mount is that the "mkfs.ext3 /dev/VolGroup01/nfs1"
crashed earlier when my colleague ran it: it did not complete, and the CentOS
box hung.

I have now repeated the mkfs, and it has been stuck for about 5 minutes at the
stage below:

# mkfs.ext3 /dev/VolGroup01/nfs1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
35848192 inodes, 71683072 blocks
3584153 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
2188 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616

Writing inode tables:  796/2188   <== paused for 5 minutes & still waiting
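When mkfs appears stalled like this, one way to tell whether it is still making progress (rather than hung) is to watch the kernel's cumulative disk write counters from a second session; a minimal sketch, assuming a Linux /proc filesystem:

```shell
# Field 3 of /proc/diskstats is the device name, field 10 the cumulative
# count of sectors written.  Sample it twice: if the number for the cciss
# device keeps rising, mkfs is still writing inode tables, not hung.
awk '{print $3, $10}' /proc/diskstats
sleep 5
awk '{print $3, $10}' /proc/diskstats
```

If the counters do not move at all between samples, the hardware RAID 5 initialization (or a controller fault) is the more likely culprit than a slow mkfs.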

 
LVL 76

Accepted Solution

by:
arnold earned 500 total points
ID: 36590133
You need to run fsck on the partition /dev/VolGroup01/nfs1.
mkfs wipes the data and recreates the filesystem.

Your initial post deals with adding storage, which seems not to have been deployed yet.
Are the three new drives added to an existing RAID 5 volume, or are they a new set of RAID 5 disks?
/dev/cciss/c0d1 seems to be the newly added group of three, which backs VolGroup01.

Did the initialization of the newly created RAID 5 complete, or is it still in build mode?

You are creating a single ~273 GB volume.
You could try the -v flag to at least get some output while the filesystem creation is running:
mkfs -t ext3 -v /dev/VolGroup01/nfs1
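Before re-running mkfs, a couple of non-destructive checks can confirm whether a valid ext3 superblock ever made it onto the LV; a sketch using the device name from this thread:

```shell
# Print only the superblock/header.  This fails with "Bad magic number
# in super-block" if the earlier, interrupted mkfs never wrote one --
# which would match the mount error in the original post.
dumpe2fs -h /dev/VolGroup01/nfs1

# Read-only filesystem check: -n answers "no" to every repair prompt,
# so nothing on the volume is modified.
fsck.ext3 -n /dev/VolGroup01/nfs1
```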

 

Author Comment

by:sunhux
ID: 36598714

They are a new set of RAID 5 disks.

> Did the initialization of the newly created RAID 5 complete or is it still in build mode?
It did not complete.  From my ssh session, it just hung, but when I went to
the console I could see some console logs, something about "Bad magic number".
 

Author Comment

by:sunhux
ID: 36598719

I tried power cycling, but it would stop at "Enter root password to get to maintenance mode"
or "Ctrl-D to continue": Ctrl-D doesn't help.  Entering the root password brings it to a
sort of single-user mode.
 
LVL 76

Assisted Solution

by:arnold
arnold earned 500 total points
ID: 36599474
When you log into maintenance/single-user mode:

Presumably you have an HP server with hardware RAID, and that is where you configured the RAID 5 group.
What is the controller status?
You did not complete the formatting (mkfs) on the RAID 5, which will prevent it from mounting.

pvdisplay will list which physical volume/disk makes up which group.
Presumably you did:
pvcreate /dev/cciss/c0d1
vgcreate VolumeGroupName /dev/cciss/c0d1
lvcreate creates the logical volume based on the size specification; see
http://linux.die.net/man/8/lvcreate
After that you would need to run: mkfs -t ext3 /dev/VolumeGroupName/LogicalVolumeName

For the /etc/fstab entry, see
http://www.tuxfiles.org/linuxhelp/fstab.html
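Put together, the steps above might look like this end to end. A sketch only: the partition and volume names are taken from this thread, and the lvcreate size option (-l 100%FREE) is one possible choice, not necessarily what the original poster used:

```shell
# Assumes the new array has already been partitioned with fdisk
# (partition type 8e = Linux LVM) as /dev/cciss/c0d1p1.
pvcreate /dev/cciss/c0d1p1                # initialize the partition as an LVM PV
vgcreate VolGroup01 /dev/cciss/c0d1p1     # create the volume group on it
lvcreate -l 100%FREE -n nfs1 VolGroup01   # one LV spanning all free extents
mkfs -t ext3 -v /dev/VolGroup01/nfs1      # build the filesystem (verbose)
mkdir -p /nfs1
mount /dev/VolGroup01/nfs1 /nfs1          # should now mount cleanly
```

And a matching /etc/fstab line so the mount persists across reboots:

```shell
# /etc/fstab
/dev/VolGroup01/nfs1  /nfs1  ext3  defaults  1 2
```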
 

Author Comment

by:sunhux
ID: 36838989

Honestly, it was my colleague who issued those commands, so
I can't recall exactly.  I think he probably defined the partition right up
to the very last cylinder with fdisk, and this may have caused mkfs to fail
halfway through with "Bad magic number".

# pvdisplay -v      
    Scanning for physical volume names
  /dev/hda: open failed: No medium found
  --- Physical volume ---
  PV Name               /dev/cciss/c0d0p2
  VG Name               VolGroup00
  PV Size               273.35 GB / not usable 9.80 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              8747
  Free PE               5760
  Allocated PE          2987
  PV UUID               Itu1yv-Bp97-81pv-mAQ2-Yowz-AS31-3qYyIR


 

Author Comment

by:sunhux
ID: 36840385

After it crashed, I power cycled and brought it back up, but now I can't
even log in as root (or with my own ID) at the console.  So the next thing
I need help with is how to regain root login.
 
LVL 76

Assisted Solution

by:arnold
arnold earned 500 total points
ID: 36851514
You have to boot the system using a CentOS disc.
Then you need to know which logical volume holds /etc, and mount it under /mnt.
Then locate /etc/shadow (which will be at /mnt/etc/shadow) and update the root record there.
Then you should be able to log in.
Was this system tied into LDAP/AD etc.?  Check /mnt/etc/nsswitch.conf to see whether passwd, group, and hosts list "files" as the first source.

It is also possible that hda is disconnected or has failed (note the "/dev/hda: open failed: No medium found" in your pvdisplay output); recheck all connections.
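A sketch of the rescue procedure described above, assuming CentOS install media (exact prompts vary by release):

```shell
# Boot from the CentOS CD/DVD and at the boot prompt enter:
#   linux rescue
# The rescue environment offers to find the installed system and mount
# it under /mnt/sysimage.  If the LVM volumes were not activated:
lvm vgscan
lvm vgchange -ay VolGroup00

# Switch into the installed system and reset the root password there:
chroot /mnt/sysimage
passwd root
exit

# Reboot without the media; root login should now work.
```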
 

Author Comment

by:sunhux
ID: 36890236

I've managed to recover and log in as root.  This ProLiant server is not connected
to LDAP/AD; it's just standalone, connected to the LAN.

Attached is /var/log/messages, which may give some clue to the crash I
encountered while doing mkfs.

# pvdisplay -v
    Scanning for physical volume names
  --- Physical volume ---    (I think this is the RAID 5 volume that can't be mkfs'ed/mounted)
  PV Name               /dev/cciss/c0d1p1
  VG Name               VolGroup01
  PV Size               273.45 GB / not usable 3.74 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              70003
  Free PE               0
  Allocated PE          70003
  PV UUID               yfhJNR-VetF-CTYi-31FD-cGpg-VxNu-FX9Ngu

  --- Physical volume --- (2nd RAID5 volume)
  PV Name               /dev/cciss/c0d0p2
  VG Name               VolGroup00
  PV Size               273.35 GB / not usable 9.80 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              8747
  Free PE               7723
  Allocated PE          1024
  PV UUID               AoLINZ-KtbD-3TZg-lh8P-DONd-yoFS-nRpaL6


msg.txt
 
LVL 76

Assisted Solution

by:arnold
arnold earned 500 total points
ID: 36892342
Not sure whether the PE size you have for VolGroup01 limits the group to 256 GB:
http://www.walkernews.net/2007/07/02/maximum-size-of-a-logical-volume-in-lvm/
Try recreating the LV nfs1 at 256 GB and see if the mkfs error goes away.
Alternatively, you could alter the PE size on VolGroup01 from 4096 KB to 8192 KB, or better still to 16384 KB.

I saw the crash event, but because of your configuration it did not dump a core (kdump) that would show what the issue was.
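The 256 GB figure comes from the old LVM1 limit of 65536 extents per logical volume, so the ceiling is 65536 multiplied by the PE size. A quick check of the numbers for the PE sizes mentioned above:

```shell
# LVM1 allowed at most 65536 extents per LV, so max LV = 65536 * PE size.
# (LVM2 itself has no such limit, but the linked article's math uses it.)
for pe_mb in 4 8 16; do
    max_gb=$(( 65536 * pe_mb / 1024 ))
    echo "PE size ${pe_mb} MB -> max LV ${max_gb} GB"
done
```

With the thread's 4 MB PEs that ceiling is 256 GB, which is indeed smaller than the 273.45 GB (70003-extent) LV on VolGroup01.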
 

Author Closing Comment

by:sunhux
ID: 36938618
OK, thanks.
