Troubleshooting Question

I'm having problems mounting a RAID5 array with 2 logical volumes in Fedora 14 using fstab

dmalovich asked on Linux / Linux Distributions
These are the steps I have taken so far to permanently mount a RAID5 array with 2 logical volumes. I'm using Fedora 14 and have 4 SATA drives that are partitioned as:
/dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
created 2 directories for mount locations (/raid_mount/volume1 and /raid_mount/volume2)

created the RAID5 array
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

created a physical volume on the new array
pvcreate /dev/md1

created a volume group
vgcreate VOLUME_GROUP /dev/md1

added 2 logical volumes to the new volume group
lvcreate -L 1G -n volume1 VOLUME_GROUP
lvcreate -L 1G -n volume2 VOLUME_GROUP

formatted both logical volumes
mkfs -t ext3 -L ext3 /dev/VOLUME_GROUP/volume1
mkfs -t ext3 -L ext3 /dev/VOLUME_GROUP/volume2

Contents of the mdadm.conf file:

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
# definitions of existing MD arrays

# This file was auto-generated on Tue, 18 May 2010 18:15:07 +1000
# by mkconf $Id$

DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
ARRAY /dev/md1 level=raid5 devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1
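For comparison, a sketch of a UUID-based mdadm.conf, using the array UUID reported by mdadm -D below. Identifying the array by UUID survives sdX device names shifting between boots, and on a live system `mdadm --detail --scan` emits exactly such a line:

```
# mdadm.conf sketch: identify the array by UUID rather than a fixed member list
DEVICE partitions
ARRAY /dev/md1 UUID=0d025ae9:be8494da:157c69c3:9b7a1089
```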

This is what I get when I run mdadm -D /dev/md1:
[root@localhost etc]# mdadm -D /dev/md1
        Version : 1.2
  Creation Time : Tue Mar  8 21:19:06 2011
     Raid Level : raid5
  Used Dev Size : 1047040 (1022.67 MiB 1072.17 MB)
   Raid Devices : 4
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Mar 10 09:24:49 2011
          State : active, FAILED, Not Started
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:1  (local to host localhost.localdomain)
           UUID : 0d025ae9:be8494da:157c69c3:9b7a1089
         Events : 116

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed
       2       0        0        2      removed
       4       8       65        3      active sync   /dev/sde1

It shows that only 2 of the 4 devices are working. I don't know if this matters.
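It matters: RAID5 across 4 devices tolerates the loss of only one member. With two members shown as removed, the kernel cannot start the array (hence "State : active, FAILED, Not Started"), so everything layered on top of it (the PV, the VG, and both LVs) becomes unavailable. A minimal sketch of that arithmetic, with the two counts from the output above embedded so it can be tried without the array present:

```shell
# Parse the Raid/Working device counts as reported by "mdadm -D" above.
# On the live system, replace the echo with: mdadm -D /dev/md1
detail='   Raid Devices : 4
Working Devices : 2'
echo "$detail" | awk -F: '
  /Raid Devices/    { raid = $2 + 0 }
  /Working Devices/ { work = $2 + 0 }
  END {
    if (work < raid - 1) print "degraded beyond RAID5 tolerance"
    else                 print "ok"
  }'
# prints "degraded beyond RAID5 tolerance"
```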

I'm able to manually mount these 2 logical volumes to /raid_mount/volume1 and /raid_mount/volume2:

mount -t ext3 /dev/VOLUME_GROUP/volume1 /raid_mount/volume1
mount -t ext3 /dev/VOLUME_GROUP/volume2 /raid_mount/volume2

When I reboot the system, I notice that /dev/VOLUME_GROUP is not present in /dev at all.
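The /dev/VOLUME_GROUP symlinks only exist while the volume group is active; if the array does not assemble at boot, LVM never activates it. On a live system the usual recovery is `mdadm --assemble /dev/md1` followed by `vgchange -ay VOLUME_GROUP`, which recreates those symlinks. Active volumes also always appear under /dev/mapper as VG-LV, with any hyphen inside a name escaped by doubling it; a small hypothetical helper illustrating that naming convention:

```shell
# Hypothetical helper: build the /dev/mapper path for a logical volume.
# device-mapper joins VG and LV with "-", doubling hyphens within each name.
dm_name() {
  vg=$(printf '%s' "$1" | sed 's/-/--/g')
  lv=$(printf '%s' "$2" | sed 's/-/--/g')
  printf '/dev/mapper/%s-%s\n' "$vg" "$lv"
}
dm_name VOLUME_GROUP volume1   # prints /dev/mapper/VOLUME_GROUP-volume1
```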

Now I try to mount them at boot using fstab:

/dev/VOLUME_GROUP/volume1  /raid_mount/volume1  ext3  defaults  0 0
/dev/VOLUME_GROUP/volume2  /raid_mount/volume2  ext3  defaults  0 0

On reboot the system then fails to come up cleanly and drops me to a prompt asking for the root password (maintenance mode).
Is there anything wrong in the steps and commands I have run? Any help would be appreciated.
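A hedged sketch of more forgiving fstab entries, assuming the same mount points as above: the /dev/mapper paths do not depend on the /dev/VOLUME_GROUP symlinks existing, and the nofail mount option lets the boot continue instead of stopping at the maintenance prompt when a volume is absent:

```
/dev/mapper/VOLUME_GROUP-volume1  /raid_mount/volume1  ext3  defaults,nofail  0 0
/dev/mapper/VOLUME_GROUP-volume2  /raid_mount/volume2  ext3  defaults,nofail  0 0
```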