• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 209

How can I tell which drive in my software RAID is bad?

When I run cat /proc/mdstat, I get the following results.

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[4] sdc1[2] sdb1[1] sda1[0]
      4395407808 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
      [===========>.........]  recovery = 59.5% (871892528/1465135936) finish=949.8min speed=10408K/sec

unused devices: <none>

The question is: do the [UUU_] flags match the drives in the order displayed, i.e. sdd1[4] sdc1[2] sdb1[1] sda1[0]?

If so, then sda1 is the bad drive in this scenario. Or do I look at the number after each device name, in which case the [4] would point to sdd1 as the bad drive?

Thank you in advance.
Asked by: PowerToaster

1 Solution
 
Crunched commented:
Try running:
mdadm --query --detail /dev/md0

That should give you the info you are after. Although, from your screen dump, it appears the array is OK and is merely recovering from a disk failure?
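As a sketch (device names taken from the mdstat output above; the --examine field names assume a v1.x metadata superblock and differ slightly under v0.90), you can cross-check each member's role directly:

# Overall array state plus a per-device table showing each member's role and status
mdadm --query --detail /dev/md0

# Inspect each member's own superblock record
for d in /dev/sd[abcd]1; do
    echo "== $d =="
    mdadm --examine "$d" | grep -iE 'role|state'
done

Note that the bracketed numbers in mdstat are device slots, not display order, so the detail output is the reliable way to map a position in [UUU_] to a physical device.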
 
PowerToaster (Author) commented:
Exactly right, on all counts. The query detail gave me the info I needed.

And you are correct: the array is fine and is just rebuilding onto a spare.
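For anyone else following along, the rebuild can be watched until the recovery line disappears; a minimal sketch:

# Refresh /proc/mdstat every five seconds to follow recovery progress
watch -n 5 cat /proc/mdstat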

Thank you
