Solved

RAID5 physical volume doesn't show up in /dev directory on reboot

Posted on 2011-03-11
457 Views
Last Modified: 2012-05-11
These are the steps I have taken so far to permanently mount a RAID5 array with 2 logical volumes.  I'm using Fedora 14 and I have 4 SATA drives that are partitioned as
/dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
 
created 2 directories for mount locations
/raid_mount/volume1
/raid_mount/volume2

created RAID5 array
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
 
created physical volume on new array
pvcreate /dev/md1

created a volume group
vgcreate VOLUME_GROUP /dev/md1

add volume to new volume group
lvcreate -L 1G -n volume1 VOLUME_GROUP
lvcreate -L 1G -n volume2 VOLUME_GROUP

formatted both logical drives
mkfs -t ext3 -L ext3 /dev/VOLUME_GROUP/volume1
mkfs -t ext3 -L ext3 /dev/VOLUME_GROUP/volume2
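After these steps, each layer can be checked before touching fstab. A sketch (needs root and the real devices, so treat it as illustrative rather than output I can vouch for):

```shell
cat /proc/mdstat          # md1 should show active raid5 with [4/4] [UUUU]
mdadm --detail /dev/md1   # member disks and array state
pvs; vgs; lvs             # PV on /dev/md1, VG VOLUME_GROUP, LVs volume1/volume2
```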

contents of my mdadm.conf file:

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
#MAILADDR root
# definitions of existing MD arrays

# This file was auto-generated on Tue, 18 May 2010 18:15:07 +1000
# by mkconf $Id$

DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
ARRAY /dev/md1 level=raid5 devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1

This is what I get when I run mdadm -D /dev/md1:
[root@localhost etc]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Tue Mar  8 21:19:06 2011
     Raid Level : raid5
  Used Dev Size : 1047040 (1022.67 MiB 1072.17 MB)
   Raid Devices : 4
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Mar 10 09:24:49 2011
          State : active, FAILED, Not Started
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:1  (local to host localhost.localdomain)
           UUID : 0d025ae9:be8494da:157c69c3:9b7a1089
         Events : 116

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed
       2       0        0        2      removed
       4       8       65        3      active sync   /dev/sde1


I'm able to manually mount these 2 logical volumes to /raid_mount/volume1 and /raid_mount/volume2:

mount -t ext3 /dev/VOLUME_GROUP/volume1 /raid_mount/volume1
mount -t ext3 /dev/VOLUME_GROUP/volume2 /raid_mount/volume2

When I reboot the system, I notice that VOLUME_GROUP is not in the /dev directory at all.

Now I try to mount it using fstab:

/dev/VOLUME_GROUP/volume1  /raid_mount/volume1  ext3  defaults  0 0
/dev/VOLUME_GROUP/volume2  /raid_mount/volume2  ext3  defaults  0 0

The system reboots fine with the /etc/fstab entries added to mount the RAID, but the RAID is not started.
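As a quick sanity check on those two entries: each fstab line should have six fields (device, mount point, type, options, dump, pass). A small sketch using the lines above:

```shell
# Verify the two added fstab lines have the six expected fields.
printf '%s\n' \
  '/dev/VOLUME_GROUP/volume1 /raid_mount/volume1 ext3 defaults 0 0' \
  '/dev/VOLUME_GROUP/volume2 /raid_mount/volume2 ext3 defaults 0 0' |
awk 'NF != 6 { print "bad line: " $0; bad = 1 } END { if (!bad) print "fstab lines OK" }'
```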

vgdisplay shows the VOLUME_GROUP, but it's not showing in /dev.
Any help would be appreciated...
0
Comment
Question by:dmalovich
  • 14
  • 10
25 Comments
 
LVL 3

Expert Comment

by:tearman
ID: 35112570
Did you make sure that the kernel is loading the RAID-5 module and the Softraid subsystem?  https://wiki.archlinux.org/index.php/Installing_with_Software_RAID_or_LVM
0
 
LVL 7

Expert Comment

by:droyden
ID: 35113098
You have created the RAID with 4 devices, but it is coming up with only two. A 4-disk RAID5 needs at least 3 members even to run degraded, since one disk's worth of capacity goes to parity, so you have a failed array.
You are missing devices /dev/sdc and /dev/sdd; you must provide at least one of these to be able to continue.
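The arithmetic behind that statement: RAID5 spreads one member's worth of parity across the set, so an n-disk array tolerates exactly one missing member. A trivial illustration:

```shell
n=4                 # members in this array
echo $(( n - 1 ))   # prints 3: minimum members needed for the array to run (degraded)
```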
0
 

Author Comment

by:dmalovich
ID: 35113133
droyden says:
You are missing devices /dev/sdc and /dev/sdd you must provide one of these to be able to continue

How do I provide them? They do exist; somehow they are not recognized at boot.
0
 
LVL 7

Expert Comment

by:droyden
ID: 35113176
Check your physical cabling. Either that, or you have two failed drives...
0
 

Author Comment

by:dmalovich
ID: 35113361
tearman says:

Did you make sure that the kernel is loading the RAID-5 module and the Softraid subsystem?
I can't see anywhere in those instructions where it says anything about loading raid5 with the kernel.
0
 

Author Comment

by:dmalovich
ID: 35113571
Here is some extra information; maybe this helps...

[root@localhost etc]# mdadm -Es
ARRAY /dev/md/1 metadata=1.2 UUID=7716a75e:085f1d1f:91ac9be0:5ecd4208 name=localhost.localdomain:1

contents of my /etc/mdadm.conf file, followed by my /etc/fstab:

DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

ARRAY /dev/md/1 metadata=1.2 UUID=7716a75e:085f1d1f:91ac9be0:5ecd4208 name=localhost.localdomain:1
#
# /etc/fstab
# Created by anaconda on Wed Dec  1 09:54:24 2010
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VolGroup-lv_root /                       ext4    defaults        1 1
UUID=757ebad9-2d6e-4296-88c1-8c70d6c5124e /boot                   ext4    defaults        1 2
/dev/mapper/VolGroup-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/VOLUME_GROUP/volume1  /raid_mountt/volume1 ext3 defaults 0 0
/dev/VOLUME_GROUP/volume2  /raid_mount/volume2  ext3 defaults 0 0


0
 

Author Comment

by:dmalovich
ID: 35113579
Typo in previous post. The last 2 lines of my /etc/fstab are:

/dev/VOLUME_GROUP/volume1  /raid_mount/volume1  ext3  defaults  0 0
/dev/VOLUME_GROUP/volume2  /raid_mount/volume2  ext3  defaults  0 0


0
 
LVL 7

Expert Comment

by:droyden
ID: 35115033
Do fdisk -l and paste the output; also paste the output of uname -r.
0
 

Author Comment

by:dmalovich
ID: 35115813
 [root@localhost etc]# fdisk -l

Disk /dev/sda: 17.2 GB, 17179869184 bytes
255 heads, 63 sectors/track, 2088 cylinders, total 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00046db3

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048    33554431    16264192   8e  Linux LVM

Disk /dev/sdb: 1073 MB, 1073741824 bytes
139 heads, 8 sectors/track, 1885 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf5da02f8

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     2097151     1047552   83  Linux

Disk /dev/sdc: 1073 MB, 1073741824 bytes
139 heads, 8 sectors/track, 1885 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x7ce94229

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048     2097151     1047552   83  Linux

Disk /dev/sdd: 1073 MB, 1073741824 bytes
139 heads, 8 sectors/track, 1885 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xbabe004f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048     2097151     1047552   83  Linux

Disk /dev/sde: 1073 MB, 1073741824 bytes
139 heads, 8 sectors/track, 1885 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x73db6940

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1            2048     2097151     1047552   83  Linux

Disk /dev/dm-0: 15.6 GB, 15569256448 bytes
255 heads, 63 sectors/track, 1892 cylinders, total 30408704 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-1: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-1 doesn't contain a valid partition table


[root@localhost etc]# uname -r
2.6.35.6-48.fc14.i686
0
 

Author Comment

by:dmalovich
ID: 35115989
I noticed that mdadm -Es and mdadm -Ds give me a different device path for ARRAY:
/dev/md/1 vs /dev/md1. Could this be a possible problem?

[root@localhost dev]# mdadm -Es
ARRAY /dev/md/1 metadata=1.2 UUID=7716a75e:085f1d1f:91ac9be0:5ecd4208 name=localhost.localdomain:1

[root@localhost dev]# mdadm -Ds
ARRAY /dev/md1 metadata=1.2 name=localhost.localdomain:1 UUID=7716a75e:085f1d1f:91ac9be0:5ecd4208
[root@localhost dev]#
0
 
LVL 7

Expert Comment

by:droyden
ID: 35116007
Try to force an assembly of the array with mdadm --assemble
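Spelling that suggestion out (a sketch, using the device names from this thread; these need root, so treat it as illustrative): assembling the array recreates /dev/md1, and reactivating the volume group should then bring the LV nodes back under /dev:

```shell
mdadm --assemble /dev/md1 /dev/sd[b-e]1   # add --force if the members disagree
vgchange -ay VOLUME_GROUP                 # activate LVs -> /dev/VOLUME_GROUP/volume1, volume2
```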
0
 

Author Comment

by:dmalovich
ID: 35116084

The RAID is running now, but it does not come up on reboot.

[root@localhost dev]# mdadm -A -f /dev/md1 /dev/sd[b-e]1
mdadm: /dev/md1 has been started with 4 drives.
[root@localhost dev]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Fri Mar 11 18:03:24 2011
     Raid Level : raid5
     Array Size : 3141120 (3.00 GiB 3.22 GB)
  Used Dev Size : 1047040 (1022.67 MiB 1072.17 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sat Mar 12 09:15:35 2011
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:1  (local to host localhost.localdomain)
           UUID : 7716a75e:085f1d1f:91ac9be0:5ecd4208
         Events : 39

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       4       8       65        3      active sync   /dev/sde1
[root@localhost dev]#
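The Array Size line is consistent with RAID5 capacity arithmetic, usable space = (members - 1) * per-member size, using the Used Dev Size of 1047040 blocks shown above:

```shell
echo $(( (4 - 1) * 1047040 ))   # prints 3141120, matching the reported "Array Size"
```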

0

 
LVL 7

Expert Comment

by:droyden
ID: 35116107
Ah good. OK, run this:

echo 'DEVICE partitions' > mdadm.conf
mdadm --examine --scan --config=mdadm.conf >> ./mdadm.conf

Back up your old /etc/mdadm.conf, copy this new one over it, and reboot.
0
 

Author Comment

by:dmalovich
ID: 35116241
It didn't assemble on reboot. This is the mdadm -D /dev/md1 output, and I have added the contents of mdadm.conf below.

[root@localhost dev]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Fri Mar 11 18:03:24 2011
     Raid Level : raid5
  Used Dev Size : 1047040 (1022.67 MiB 1072.17 MB)
   Raid Devices : 4
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Mar 12 09:38:06 2011
          State : active, FAILED, Not Started
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:1  (local to host localhost.localdomain)
           UUID : 7716a75e:085f1d1f:91ac9be0:5ecd4208
         Events : 39

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       0        0        1      removed
       2       8       49        2      active sync   /dev/sdd1
       4       8       65        3      active sync   /dev/sde1
[root@localhost dev]#


DEVICE partitions
ARRAY /dev/md/1 metadata=1.2 UUID=7716a75e:085f1d1f:91ac9be0:5ecd4208 name=localhost.localdomain:1

Does it matter that ARRAY is /dev/md/1 in my mdadm.conf? I just wondered why it came out like that instead of /dev/md1.
0
 
LVL 7

Expert Comment

by:droyden
ID: 35116297
One is the devfs-style name; they should be symlinked to each other anyway. Post the output of:

cat /proc/mdstat
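The naming relationship can be illustrated in a scratch directory (hypothetical stand-in paths, not the real /dev, where udev normally creates /dev/md/1 as a symlink to the kernel node /dev/md1):

```shell
mkdir -p demo_dev/md
touch demo_dev/md1            # stand-in for the kernel device node
ln -sf ../md1 demo_dev/md/1   # the udev-style symlink
readlink demo_dev/md/1        # prints ../md1
```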
0
 

Author Comment

by:dmalovich
ID: 35116358
After reboot I started the array again with: mdadm -A -f /dev/md1 /dev/sd[b-e]1

and the cat /proc/mdstat output is below

[root@localhost etc]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md1 : active raid5 sdb1[0] sde1[4] sdd1[2] sdc1[1]
      3141120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
     
unused devices: <none>
[root@localhost etc]#
0
 

Author Comment

by:dmalovich
ID: 35116380
Here is the output of fdisk -l. It shows that md1 doesn't have a valid partition table; does this matter?
Also, my logical volumes on volume group 426VOLUME are dm-2 and dm-3, and they show no valid partition table either...


[root@localhost 426VOLUME]# fdisk -l

Disk /dev/sda: 17.2 GB, 17179869184 bytes
255 heads, 63 sectors/track, 2088 cylinders, total 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00046db3

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048    33554431    16264192   8e  Linux LVM

Disk /dev/sdb: 1073 MB, 1073741824 bytes
139 heads, 8 sectors/track, 1885 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf5da02f8

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     2097151     1047552   83  Linux

Disk /dev/sdc: 1073 MB, 1073741824 bytes
139 heads, 8 sectors/track, 1885 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x7ce94229

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048     2097151     1047552   83  Linux

Disk /dev/sdd: 1073 MB, 1073741824 bytes
139 heads, 8 sectors/track, 1885 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xbabe004f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048     2097151     1047552   83  Linux

Disk /dev/sde: 1073 MB, 1073741824 bytes
139 heads, 8 sectors/track, 1885 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x73db6940

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1            2048     2097151     1047552   83  Linux

Disk /dev/dm-0: 15.6 GB, 15569256448 bytes
255 heads, 63 sectors/track, 1892 cylinders, total 30408704 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-1: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-1 doesn't contain a valid partition table

Disk /dev/md1: 3216 MB, 3216506880 bytes
2 heads, 4 sectors/track, 785280 cylinders, total 6282240 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/dm-2: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
Disk identifier: 0x00000000

Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/dm-3: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
Disk identifier: 0x00000000

Disk /dev/dm-3 doesn't contain a valid partition table
[root@localhost 426VOLUME]# ls -la
total 0
drwxr-xr-x  2 root root   80 Mar 12 10:55 .
drwxr-xr-x 22 root root 4020 Mar 12 10:55 ..
lrwxrwxrwx  1 root root    7 Mar 12 10:55 volumeDb -> ../dm-3
lrwxrwxrwx  1 root root    7 Mar 12 10:55 volumeWeb -> ../dm-2
[root@localhost 426VOLUME]#
0
 
LVL 7

Expert Comment

by:droyden
ID: 35117068
OK, use this as your /etc/mdadm/mdadm.conf

DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

ARRAY /dev/md0 level=raid5 num-devices=4 UUID=7716a75e:085f1d1f:91ac9be0:5ecd4208
0
 

Author Comment

by:dmalovich
ID: 35117179
That still didn't do it. I really don't know what else to try...
0
 
LVL 7

Expert Comment

by:droyden
ID: 35117293
Are your init scripts set to auto start the array? Also make sure that the location of mdadm.conf is correct
0
 

Author Comment

by:dmalovich
ID: 35117363
How/where do I set the init scripts to auto-start? Right now mdadm.conf is located in /etc.
0
 
LVL 7

Expert Comment

by:droyden
ID: 35117521
Is this Red Hat Enterprise? Check in /etc/init.d/mdadm?
0
 

Author Comment

by:dmalovich
ID: 35117764
It's Fedora 14. I do have an /etc/init.d folder, but no mdadm in /etc/init.d.
0
 
LVL 7

Accepted Solution

by:
droyden earned 500 total points
ID: 35117911



Edit /sbin/start_udev

and change:

/sbin/udevd -d

into:

/sbin/udevd -d --children-max=1
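The same one-line change, sketched with sed on a scratch copy (the real file is /sbin/start_udev and editing it needs root; the demo filename here is hypothetical):

```shell
printf '/sbin/udevd -d\n' > start_udev.demo                         # stand-in for the script
sed -i 's|^/sbin/udevd -d$|/sbin/udevd -d --children-max=1|' start_udev.demo
cat start_udev.demo   # prints /sbin/udevd -d --children-max=1
```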
0
 

Author Closing Comment

by:dmalovich
ID: 35118153
Awesome.  It worked.  Thank you so so much
0
