Solved

MDADM Raid 5 Issues

Posted on 2008-06-13
799 Views
Last Modified: 2016-12-08
I am running FC8 and have a DS1220 12-disk storage pack. The second set of five disks is assembled into a software RAID level 5 array. The array has automatically gone offline twice in the last two weeks, forcing me to re-create or re-assemble it. I have backed up my data, so I feel reasonably safe on that front; however, I am trying to diagnose the cause of the problem and have seen that one of the devices comprising the array is "removed".

mdadm --detail /dev/md4
/dev/md4:
        Version : 00.90.03
  Creation Time : Sat Jun  7 15:09:36 2008
     Raid Level : raid5
     Array Size : 1953535744 (1863.04 GiB 2000.42 GB)
  Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 4
    Persistence : Superblock is persistent

    Update Time : Fri Jun 13 20:42:57 2008
          State : clean, degraded
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 494bce9a:c596eeb1:1b66a7c1:828cf00a
         Events : 0.334

    Number   Major   Minor   RaidDevice State
       0       8      177        0      active sync   /dev/sdl1
       1       8      193        1      active sync   /dev/sdm1
       2       8      209        2      active sync   /dev/sdn1
       3       8      225        3      active sync   /dev/sdo1
       4       0        0        4      removed

The device that is removed is /dev/sdp1.  If I attempt to re-add it, the device gets added as a "faulty spare".

mdadm --detail /dev/md4
/dev/md4:
        Version : 00.90.03
  Creation Time : Sat Jun  7 15:09:36 2008
     Raid Level : raid5
     Array Size : 1953535744 (1863.04 GiB 2000.42 GB)
  Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
   Raid Devices : 5
  Total Devices : 5
Preferred Minor : 4
    Persistence : Superblock is persistent

    Update Time : Fri Jun 13 21:04:56 2008
          State : clean, degraded, recovering
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 0% complete

           UUID : 494bce9a:c596eeb1:1b66a7c1:828cf00a
         Events : 0.340

    Number   Major   Minor   RaidDevice State
       0       8      177        0      active sync   /dev/sdl1
       1       8      193        1      active sync   /dev/sdm1
       2       8      209        2      active sync   /dev/sdn1
       3       8      225        3      active sync   /dev/sdo1
       4       0        0        4      removed

       5       8      241        -      faulty spare   /dev/sdp1

I have read that this may be due to the devices being assembled out of order, but I am also not sure whether I truly have a failed drive.  How does one know for sure?  And how can I get my raid back to a non-degraded state?
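
One way to check whether the drive itself is actually dying would presumably be to query its SMART data and the kernel log; a rough sketch, assuming smartmontools is installed and the suspect drive is /dev/sdp:

smartctl -a /dev/sdp     # overall health plus reallocated/pending sector counts
dmesg | grep -i sdp      # kernel-level I/O or ata errors for that drive
# A FAILED overall-health assessment or a climbing Reallocated_Sector_Ct
# points at real hardware trouble; a clean report suggests cabling,
# controller, or timeout problems instead.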

Thanks!
Question by:daveokst
 

Author Comment

by:daveokst
ID: 21784058
Also, I recall that after each of the two times I got the raid back together, I was moving a 250 GB TC container to a backup drive, and the cp process failed both times at the exact same spot.  The raid was degraded, as usual, during these attempts.
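
If the copy really dies at the same offset every time, a bad sector on one of the remaining members seems plausible; a non-destructive read test of each member would be one way to check — a sketch, assuming the members are /dev/sdl through /dev/sdp:

badblocks -sv /dev/sdp               # surface scan; the default mode is read-only
dd if=/dev/sdp of=/dev/null bs=1M    # or just stream the whole disk
dmesg | tail                         # then look for fresh read errors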
 
LVL 43

Expert Comment

by:ravenpl
ID: 21784458
What "cat /proc/mdstat" says?
What command have You used to add this offed device?
 

Author Comment

by:daveokst
ID: 21785473
I used --re-add once and tried --add another time, like this...
mdadm --add /dev/md4 /dev/sdp1
mdadm: re-added /dev/sdp1

mdadm --detail /dev/md4 showed the same result either way.

cat /proc/mdstat shows...

md4 : active raid5 sdl1[0] sdp1[5](F) sdo1[3] sdn1[2] sdm1[1]
      1953535744 blocks level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]


I assume the (F) means 'faulty', but that doesn't necessarily mean the drive has failed, does it?
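
One thing that might help tell the cases apart is dumping the md superblock of the kicked member and comparing its event counter against an active one — a sketch, using the member partitions listed above:

mdadm --examine /dev/sdp1 | grep -i events
mdadm --examine /dev/sdl1 | grep -i events
# If sdp1's Events count lags well behind sdl1's, md kicked it out at
# some point and its data is stale -- which by itself doesn't prove the
# physical drive is bad.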

Thanks!
 
LVL 43

Expert Comment

by:ravenpl
ID: 21785518
F means faulty. Before adding it back, you have to remove it first:
mdadm /dev/md4 -r /dev/sdp1
mdadm /dev/md4 -a /dev/sdp1
#should do the trick.
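
If the add sticks this time, the resync can be watched while it runs, for example:

watch -n 5 cat /proc/mdstat                  # live rebuild progress
mdadm --detail /dev/md4 | grep -i rebuild    # percentage complete

If sdp1 immediately flips back to (F) during the resync, that is a fairly strong hint the drive (or its cabling) really is bad.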
 

Accepted Solution

by:
daveokst earned 0 total points
ID: 21785917
Thanks... I tried this, but it did not work.  I just decided to reformat the drives and rebuild from scratch.  Thanks for your assistance.
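
For the record, starting over after a full backup generally amounts to wiping the old superblocks and creating a fresh array — a sketch, assuming the same five members and that everything on them is expendable:

mdadm --stop /dev/md4
mdadm --zero-superblock /dev/sd[l-p]1    # destroys the old RAID metadata on each member
mdadm --create /dev/md4 --level=5 --raid-devices=5 /dev/sd[l-p]1
mkfs.ext3 /dev/md4                       # new filesystem once the array exists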