How can I tell which drive in my software RAID is bad?

When I run cat /proc/mdstat, I get the following results.

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[4] sdc1[2] sdb1[1] sda1[0]
      4395407808 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
      [===========>.........]  recovery = 59.5% (871892528/1465135936) finish=949.8min speed=10408K/sec

unused devices: <none>

The question is: do the [UUU_] flags match the drives in the order displayed, sdd1[4] sdc1[2] sdb1[1] sda1[0]?

If so, then sda1 is the bad drive in this scenario. Or do I go by the number after each device name, in which case the [4] position would indicate sdd1 as the bad drive?

Thank you in advance.
PowerToaster asked:
 
Crunched commented:
Try running:
mdadm --query --detail /dev/md0

That should give you the info you are after. Although, from your screen dump, it appears the array is OK and is merely recovering from a disk failure?
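If you want to cross-check each member directly, something along these lines should also work. It's just a sketch, using the device names from your mdstat output, and the Device Role / Array State lines assume v1.x superblocks (0.90 metadata prints a device table instead):

# Cross-check each member's slot and the array state as that member
# sees it (sketch; device names taken from the mdstat output above;
# assumes v1.x superblocks)
for d in /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1; do
    echo "== $d =="
    mdadm --examine "$d" | grep -E 'Device Role|Array State'
done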
 
PowerToaster (author) commented:
Exactly right, on all counts. The query detail gave me just the info I needed.

And you are correct: the array is fine and is just rebuilding a spare.
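For anyone else sitting through a rebuild later, re-running the status check periodically is all it takes; for example:

# Re-check rebuild progress every 60 seconds (Ctrl-C to stop)
watch -n 60 cat /proc/mdstat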

Thank you