I'm having problems mounting a RAID 5 array with 2 logical volumes in Fedora 14 using fstab

These are the steps I have taken so far to permanently mount a RAID 5 array with 2 logical volumes. I'm using Fedora 14 and I have 4 SATA drives that are partitioned as
/dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
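
(For reference, fdisk can confirm the partition layout; note that with the newer 1.x md superblocks, assembly at boot relies on mdadm.conf and the initramfs, not the old 0xfd "Linux raid autodetect" partition type.)

fdisk -l /dev/sdb /dev/sdc /dev/sdd /dev/sde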
 
created 2 directories for mount locations
/raid_mount/volume1
/raid_mount/volume2

created RAID5 array
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
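
While the array does its initial sync, progress can be checked at any point with:

cat /proc/mdstat
mdadm --detail /dev/md1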
 
created physical volume on new array
pvcreate /dev/md1

created a volume group
vgcreate VOLUME_GROUP /dev/md1

add volume to new volume group
lvcreate -L 1G -n volume1 VOLUME_GROUP
lvcreate -L 1G -n volume2 VOLUME_GROUP
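
The physical volume, volume group, and both logical volumes can be confirmed at this point with:

pvs
vgs
lvs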

formatted both logical volumes
mkfs -t ext3 -L ext3 /dev/VOLUME_GROUP/volume1
mkfs -t ext3 -L ext3 /dev/VOLUME_GROUP/volume2
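
(Note that -L ext3 gives both filesystems the identical label "ext3"; that is harmless here since the mounts use device paths, but it makes label-based mounting ambiguous. blkid shows the labels and UUIDs of both:)

blkid /dev/VOLUME_GROUP/volume1 /dev/VOLUME_GROUP/volume2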

contents of mdadm.conf file

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
#MAILADDR root
# definitions of existing MD arrays

# This file was auto-generated on Tue, 18 May 2010 18:15:07 +1000
# by mkconf $Id$

DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
ARRAY /dev/md1 level=raid5 devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1
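
(For reference, the ARRAY line can also be generated from the running array itself, which records the array UUID and avoids any mismatch with what is actually on the superblocks; the >> appends to the file:)

mdadm --detail --scan >> /etc/mdadm.conf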

this is what I get when I run mdadm -D /dev/md1:
[root@localhost etc]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Tue Mar  8 21:19:06 2011
     Raid Level : raid5
  Used Dev Size : 1047040 (1022.67 MiB 1072.17 MB)
   Raid Devices : 4
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Mar 10 09:24:49 2011
          State : active, FAILED, Not Started
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:1  (local to host localhost.localdomain)
           UUID : 0d025ae9:be8494da:157c69c3:9b7a1089
         Events : 116

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed
       2       0        0        2      removed
       4       8       65        3      active sync   /dev/sde1

It shows that only 2 of the 4 devices are working; I don't know if that matters.

I'm able to manually mount these 2 logical volumes to /raid_mount/volume1 and /raid_mount/volume2:

mount -t ext3 /dev/VOLUME_GROUP/volume1 /raid_mount/volume1
mount -t ext3 /dev/VOLUME_GROUP/volume2 /raid_mount/volume2
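
df would confirm that both are mounted with the expected filesystem type:

df -hT /raid_mount/volume1 /raid_mount/volume2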

when I reboot the system, I notice that VOLUME_GROUP is not in the /dev directory at all
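
(In that situation, assuming the array itself can be assembled from mdadm.conf, the volume group can be brought back by hand and then mounted:)

mdadm --assemble /dev/md1
vgchange -ay VOLUME_GROUP

(On Fedora it may also be worth regenerating the initramfs with dracut --force after editing mdadm.conf, so the array is known at boot time.)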

now I try to mount them at boot using fstab:

/dev/VOLUME_GROUP/volume1  /raid_mount/volume1  ext3  defaults  0 0
/dev/VOLUME_GROUP/volume2  /raid_mount/volume2  ext3  defaults  0 0
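
(The fstab syntax itself looks fine; if the volume group has not been activated by the time these entries are processed, the mounts fail and boot drops to a maintenance shell. A slightly more defensive variant references the filesystem UUIDs reported by blkid rather than device paths; the values below are placeholders, not real UUIDs:)

UUID=<uuid-of-volume1>  /raid_mount/volume1  ext3  defaults  0 0
UUID=<uuid-of-volume2>  /raid_mount/volume2  ext3  defaults  0 0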

and the boot fails, dropping me to a prompt asking for the root password.
Is there anything wrong in the steps and commands I have run?  Any help would be appreciated.
dmalovich asked:
arnold commented:
At this point your RAID 5 is down a drive, /dev/sdd1.

Try

mdadm --re-add /dev/md1 /dev/sdd1

to get the removed drive back into the array.
You should also double-check whether sdd1 is actually functional.
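
A quick way to check is to look at the md superblock on the partition, the array's rebuild progress, and the drive's own SMART health (smartctl is in the smartmontools package):

mdadm --examine /dev/sdd1
cat /proc/mdstat
smartctl -a /dev/sdd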
 
arnold commented:
The issue you have is that your 4-drive RAID 5 has two missing members.

Try forcing an assembly from the remaining superblocks:

mdadm -A -f /dev/md1 /dev/sd[b-e]1
 
dmalovich (author) commented:
This is what I get when I run the command

[root@localhost etc]# mdadm -A -f /dev/md1 /dev/sd[b-e]1
mdadm: cannot open device /dev/sdd1: Device or resource busy
mdadm: /dev/sdd1 has no superblock - assembly aborted

[root@localhost etc]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Tue Mar  8 21:19:06 2011
     Raid Level : raid5
     Array Size : 3141120 (3.00 GiB 3.22 GB)
  Used Dev Size : 1047040 (1022.67 MiB 1072.17 MB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Thu Mar 10 15:28:23 2011
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:1  (local to host localhost.localdomain)
           UUID : 0d025ae9:be8494da:157c69c3:9b7a1089
         Events : 118

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       0        0        2      removed
       4       8       65        3      active sync   /dev/sde1


It looks like /dev/sdd1 has some problem, but I don't know what it is.
 
dmalovich (author) commented:
Re-adding it worked. I was able to mount using fstab. Thanks a lot.