How to rebuild an XFS volume from an LV

Brian S (asker):

The system is RHEL 7.

After a reboot, my home partition disappeared.

lvdisplay shows that the volume still exists, but /dev/mapper doesn't show it. How do I re-create the XFS mount points?

#  lvdisplay; ls -l /dev/mapper/
  WARNING: Device for PV 3Wr2SB-bJxf-BTbq-Wd4x-udX9-0ifq-XBWKcV not found or rejected by a filter.
  WARNING: Device for PV FX7n9D-0Vc2-kyB8-3rQ8-NRVq-McEy-myB33R not found or rejected by a filter.
  --- Logical volume ---
  LV Path                /dev/rhel_rhel7/swap
  LV Name                swap
  VG Name                rhel_rhel7
  LV UUID                a59WUg-Nvqs-ldyU-xlO9-h46P-d7Zf-M72Ocx
  LV Write Access        read/write
  LV Creation host, time rhel7.fios-router.home, 2016-12-11 20:19:17 -0500
  LV Status              available
  # open                 2
  LV Size                7.81 GiB
  Current LE             2000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/rhel_rhel7/home
  LV Name                home
  VG Name                rhel_rhel7
  LV UUID                lHbuyL-Nk0e-Ob0v-dHdc-ECRH-snIg-K5cEu8
  LV Write Access        read/write
  LV Creation host, time rhel7.fios-router.home, 2016-12-11 20:19:17 -0500
  LV Status              NOT available
  LV Size                19.94 TiB
  Current LE             5228442
  Segments               12
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/rhel_rhel7/root
  LV Name                root
  VG Name                rhel_rhel7
  LV UUID                FnRKah-dcK3-Wtow-zsEv-j4Bw-Rw10-3La7dd
  LV Write Access        read/write
  LV Creation host, time rhel7.fios-router.home, 2016-12-11 20:22:44 -0500
  LV Status              available
  # open                 1
  LV Size                50.00 GiB
  Current LE             12800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
   
total 0
crw-------. 1 root root 10, 236 Jan  6 20:22 control
lrwxrwxrwx. 1 root root       7 Jan  6 20:22 rhel_rhel7-root -> ../dm-0
lrwxrwxrwx. 1 root root       7 Jan  6 20:22 rhel_rhel7-swap -> ../dm-1


Brian S (asker):

After re-reading the screen output, I believe that the LV is gone. :(

And journalctl -xb -p3 gives some more details:

# journalctl -xb -p3
-- Logs begin at Fri 2017-01-06 20:21:26 EST, end at Fri 2017-01-06 23:10:01 EST. --
Jan 06 20:21:34 rhel7.fios-router.home kernel: ata14.00: revalidation failed (errno=-2)
Jan 06 20:21:36 rhel7.fios-router.home kernel: ata4: COMRESET failed (errno=-16)
Jan 06 20:21:39 rhel7.fios-router.home kernel: ata14.00: revalidation failed (errno=-2)
Jan 06 20:21:45 rhel7.fios-router.home kernel: ata14.00: revalidation failed (errno=-2)
Jan 06 20:21:46 rhel7.fios-router.home kernel: ata4: COMRESET failed (errno=-16)
Jan 06 20:22:21 rhel7.fios-router.home kernel: ata4: COMRESET failed (errno=-16)
Jan 06 20:22:26 rhel7.fios-router.home kernel: ata4: COMRESET failed (errno=-16)
Jan 06 20:22:26 rhel7.fios-router.home kernel: ata4: reset failed, giving up
Jan 06 20:22:29 rhel7.local smartd[985]: Device: /dev/sdb [SAT], WARNING: A firmware update for this drive may be available,
Jan 06 20:22:29 rhel7.local smartd[985]: see the following Seagate web pages:
Jan 06 20:22:29 rhel7.local smartd[985]: http://knowledge.seagate.com/articles/en_US/FAQ/207931en
Jan 06 20:22:29 rhel7.local smartd[985]: http://knowledge.seagate.com/articles/en_US/FAQ/223651en
Jan 06 20:22:29 rhel7.local smartd[985]: Device: /dev/sdc [SAT], WARNING: A firmware update for this drive may be available,
Jan 06 20:22:29 rhel7.local smartd[985]: see the following Seagate web pages:
Jan 06 20:22:29 rhel7.local smartd[985]: http://knowledge.seagate.com/articles/en_US/FAQ/207931en
Jan 06 20:22:29 rhel7.local smartd[985]: http://knowledge.seagate.com/articles/en_US/FAQ/213915en
Jan 06 20:22:29 rhel7.local smartd[985]: Device: /dev/sdd [SAT], WARNING: A firmware update for this drive may be available,
Jan 06 20:22:29 rhel7.local smartd[985]: see the following Seagate web pages:
Jan 06 20:22:29 rhel7.local smartd[985]: http://knowledge.seagate.com/articles/en_US/FAQ/207931en
Jan 06 20:22:29 rhel7.local smartd[985]: http://knowledge.seagate.com/articles/en_US/FAQ/213915en
Jan 06 20:22:29 rhel7.local smartd[985]: Device: /dev/sde [SAT], WARNING: A firmware update for this drive may be available,
Jan 06 20:22:29 rhel7.local smartd[985]: see the following Seagate web pages:
Jan 06 20:22:29 rhel7.local smartd[985]: http://knowledge.seagate.com/articles/en_US/FAQ/207931en
Jan 06 20:22:29 rhel7.local smartd[985]: http://knowledge.seagate.com/articles/en_US/FAQ/223651en
Jan 06 20:22:29 rhel7.local smartd[985]: Device: /dev/sdf [SAT], WARNING: A firmware update for this drive may be available,
Jan 06 20:22:29 rhel7.local smartd[985]: see the following Seagate web pages:
Jan 06 20:22:29 rhel7.local smartd[985]: http://knowledge.seagate.com/articles/en_US/FAQ/207931en
Jan 06 20:22:29 rhel7.local smartd[985]: http://knowledge.seagate.com/articles/en_US/FAQ/223651en
Jan 06 20:22:29 rhel7.local smartd[985]: Device: /dev/sdh [SAT], WARNING: A firmware update for this drive may be available,
Jan 06 20:22:29 rhel7.local smartd[985]: see the following Seagate web pages:
Jan 06 20:22:29 rhel7.local smartd[985]: http://knowledge.seagate.com/articles/en_US/FAQ/207931en
Jan 06 20:22:29 rhel7.local smartd[985]: http://knowledge.seagate.com/articles/en_US/FAQ/223651en
Jan 06 20:22:37 rhel7.local systemd[1]: Failed to start Remote desktop service (VNC) [2: brian - Display[:1] - 0 ].
-- Subject: Unit vncserver@:1.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit vncserver@:1.service has failed.
-- 
-- The result is failed.
Jan 06 20:22:54 rhel7.local spice-vdagent[6216]: Cannot access vdagent virtio channel /dev/virtio-ports/com.redhat.spice.0
Jan 06 20:22:55 rhel7.local setroubleshoot[6579]: SELinux is preventing /usr/libexec/colord from read access on the file /etc/udev/hwdb.bin. For complete SELinux messages. run sealert -l 78137353-c4af-445b-919b-699
Jan 06 20:22:55 rhel7.local pulseaudio[8393]: [pulseaudio] pid.c: Daemon already running.
Jan 06 20:22:55 rhel7.local setroubleshoot[6579]: SELinux is preventing /usr/libexec/colord from read access on the file /etc/udev/hwdb.bin. For complete SELinux messages. run sealert -l 78137353-c4af-445b-919b-699
Jan 06 20:24:11 rhel7.local systemd[1]: Timed out waiting for device dev-mapper-rhel_rhel7\x2dhome.device.
-- Subject: Unit dev-mapper-rhel_rhel7\x2dhome.device has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit dev-mapper-rhel_rhel7\x2dhome.device has failed.
-- 
-- The result is timeout.
Jan 06 20:24:11 rhel7.local systemd[1]: Dependency failed for /home.
-- Subject: Unit home.mount has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit home.mount has failed.
-- 
-- The result is dependency.


arnold:
Please post the output of:
cat /proc/mdstat
lvdiskscan
pvdisplay
vgdisplay
lvdisplay
Brian S (asker):

Here ya go - I changed "lvdiskscan" to "lvmdiskscan".


# cat /proc/mdstat
Personalities : 
unused devices: <none>
# lvmdiskscan
  /dev/rhel_rhel7/root [      50.00 GiB] 
  /dev/sda1            [     500.00 MiB] 
  /dev/rhel_rhel7/swap [       7.81 GiB] 
  /dev/sda2            [     223.08 GiB] LVM physical volume
  /dev/sdb1            [       1.82 TiB] LVM physical volume
  /dev/sdc1            [       1.36 TiB] LVM physical volume
  /dev/sdd1            [       1.36 TiB] LVM physical volume
  /dev/sde1            [       2.73 TiB] LVM physical volume
  /dev/sdf1            [       2.73 TiB] LVM physical volume
  /dev/sdg1            [       2.73 TiB] LVM physical volume
  /dev/sdh1            [       2.73 TiB] LVM physical volume
  /dev/sdi1            [     119.24 GiB] LVM physical volume
  /dev/sdj1            [     111.79 GiB] LVM physical volume
  2 disks
  1 partition
  0 LVM physical volume whole disks
  10 LVM physical volumes
# pvdisplay
  WARNING: Device for PV 3Wr2SB-bJxf-BTbq-Wd4x-udX9-0ifq-XBWKcV not found or rejected by a filter.
  WARNING: Device for PV FX7n9D-0Vc2-kyB8-3rQ8-NRVq-McEy-myB33R not found or rejected by a filter.
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               rhel_rhel7
  PV Size               223.08 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              57108
  Free PE               0
  Allocated PE          57108
  PV UUID               p9B7L5-srLK-eMFb-EyZ7-TDZM-UleK-IiANjP
   
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               rhel_rhel7
  PV Size               1.82 TiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              476931
  Free PE               0
  Allocated PE          476931
  PV UUID               qbjmqV-0gQU-hM0R-94E4-JEvQ-GSLo-Mrv4t9
   
  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               rhel_rhel7
  PV Size               1.36 TiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              357699
  Free PE               0
  Allocated PE          357699
  PV UUID               hkNfsD-Q9ZP-HDu7-y19Y-MXYA-7l5b-1RIDPD
   
  --- Physical volume ---
  PV Name               /dev/sdd1
  VG Name               rhel_rhel7
  PV Size               1.36 TiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              357699
  Free PE               0
  Allocated PE          357699
  PV UUID               884fk5-7RvJ-hXZT-Jv1e-1NRm-vQOZ-S9qRrx
   
  --- Physical volume ---
  PV Name               /dev/sde1
  VG Name               rhel_rhel7
  PV Size               2.73 TiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              715396
  Free PE               0
  Allocated PE          715396
  PV UUID               hifA42-Ps26-12LE-BgO7-gleE-VVm5-zxFsME
   
  --- Physical volume ---
  PV Name               /dev/sdf1
  VG Name               rhel_rhel7
  PV Size               2.73 TiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              715396
  Free PE               0
  Allocated PE          715396
  PV UUID               YHjda4-yi8S-I5ii-P2Tl-Amtz-JoAr-9dOao6
   
  --- Physical volume ---
  PV Name               [unknown]
  VG Name               rhel_rhel7
  PV Size               1.36 TiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              357699
  Free PE               0
  Allocated PE          357699
  PV UUID               3Wr2SB-bJxf-BTbq-Wd4x-udX9-0ifq-XBWKcV
   
  --- Physical volume ---
  PV Name               [unknown]
  VG Name               rhel_rhel7
  PV Size               2.73 TiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              715396
  Free PE               0
  Allocated PE          715396
  PV UUID               FX7n9D-0Vc2-kyB8-3rQ8-NRVq-McEy-myB33R
   
  --- Physical volume ---
  PV Name               /dev/sdg1
  VG Name               rhel_rhel7
  PV Size               2.73 TiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              715396
  Free PE               0
  Allocated PE          715396
  PV UUID               fLvu9R-8w1n-ylQC-CBxv-yej8-qdrH-5oJoWy
   
  --- Physical volume ---
  PV Name               /dev/sdh1
  VG Name               rhel_rhel7
  PV Size               2.73 TiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              715396
  Free PE               0
  Allocated PE          715396
  PV UUID               I5lUmA-kdmK-4pqA-kDx8-6ulr-vPXP-LxBRBl
   
  --- Physical volume ---
  PV Name               /dev/sdi1
  VG Name               rhel_rhel7
  PV Size               119.24 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              30525
  Free PE               0
  Allocated PE          30525
  PV UUID               oTW4Jv-wGRP-0YOi-nWzF-Q0Vq-PFpG-aVPsjg
   
  --- Physical volume ---
  PV Name               /dev/sdj1
  VG Name               rhel_rhel7
  PV Size               111.79 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              28617
  Free PE               16
  Allocated PE          28601
  PV UUID               JbnwMf-qaPe-rSd2-xfmC-g4zi-jwrn-3I6Ya9
   
# vgdisplay
  WARNING: Device for PV 3Wr2SB-bJxf-BTbq-Wd4x-udX9-0ifq-XBWKcV not found or rejected by a filter.
  WARNING: Device for PV FX7n9D-0Vc2-kyB8-3rQ8-NRVq-McEy-myB33R not found or rejected by a filter.
  --- Volume group ---
  VG Name               rhel_rhel7
  System ID             
  Format                lvm2
  Metadata Areas        10
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                12
  Act PV                10
  VG Size               20.00 TiB
  PE Size               4.00 MiB
  Total PE              5243258
  Alloc PE / Size       5243242 / 20.00 TiB
  Free  PE / Size       16 / 64.00 MiB
  VG UUID               Jigxo3-RpJk-ZUev-zVXL-5d3T-zaQo-5epqvJ
   
# lvdisplay
  WARNING: Device for PV 3Wr2SB-bJxf-BTbq-Wd4x-udX9-0ifq-XBWKcV not found or rejected by a filter.
  WARNING: Device for PV FX7n9D-0Vc2-kyB8-3rQ8-NRVq-McEy-myB33R not found or rejected by a filter.
  --- Logical volume ---
  LV Path                /dev/rhel_rhel7/swap
  LV Name                swap
  VG Name                rhel_rhel7
  LV UUID                a59WUg-Nvqs-ldyU-xlO9-h46P-d7Zf-M72Ocx
  LV Write Access        read/write
  LV Creation host, time rhel7.fios-router.home, 2016-12-11 20:19:17 -0500
  LV Status              available
  # open                 2
  LV Size                7.81 GiB
  Current LE             2000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/rhel_rhel7/home
  LV Name                home
  VG Name                rhel_rhel7
  LV UUID                lHbuyL-Nk0e-Ob0v-dHdc-ECRH-snIg-K5cEu8
  LV Write Access        read/write
  LV Creation host, time rhel7.fios-router.home, 2016-12-11 20:19:17 -0500
  LV Status              NOT available
  LV Size                19.94 TiB
  Current LE             5228442
  Segments               12
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/rhel_rhel7/root
  LV Name                root
  VG Name                rhel_rhel7
  LV UUID                FnRKah-dcK3-Wtow-zsEv-j4Bw-Rw10-3La7dd
  LV Write Access        read/write
  LV Creation host, time rhel7.fios-router.home, 2016-12-11 20:22:44 -0500
  LV Status              available
  # open                 1
  LV Size                50.00 GiB
  Current LE             12800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
#


arnold:
Try
lvchange -a /dev/mapper/rhel_rhel7/home
This activates the volume.

Your setup seems to span all the drives without providing for a possible drive failure.
You may have to run fsck.xfs /dev/mapper/rhel_rhel7/home if the activation errors out because the filesystem is seen as not clean.
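(Editor's note on the suggestion above: fsck.xfs is a no-op stub that exits successfully without checking anything; the real check/repair tool for XFS is xfs_repair, run against the activated but unmounted LV. A minimal sketch, using the VG/LV names from this thread:

xfs_repair -n /dev/rhel_rhel7/home    # dry run: report problems without modifying anything
xfs_repair /dev/rhel_rhel7/home       # actual repair; the LV must be active but not mounted
)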
Brian S (asker):

Sadly, I thought drive redundancy was built into the LV and XFS layers. I did want it mapped similar to a RAID 5 or RAID 6, to survive multiple disk failures. I am reviewing my logs, and it does seem that one or two of my drives are failing.

This is a test system, so if I have to rebuild, there is nothing earth-shattering there.

And part of the problem is that there is a missing entry in /dev/mapper:

# ls -l /dev/mapper/
total 0
crw-------. 1 root root 10, 236 Jan  6 20:22 control
lrwxrwxrwx. 1 root root       7 Jan  6 20:22 rhel_rhel7-root -> ../dm-0
lrwxrwxrwx. 1 root root       7 Jan  6 20:22 rhel_rhel7-swap -> ../dm-1
#


arnold:
The missing entry is because /dev/mapper/rhel_rhel7-home is marked as unavailable; try to force it online with lvchange -a /dev/mapper/rhel_rhel7-home.
When setting up LVM, you have to define redundancy yourself.
One option is to use RAID + LVM overlay.
You are using 2-3 TB drives; a RAID 5 rebuild will take a long time, with the potential for another drive failure during the attempt.
RAID 6 has its own deficiencies...

The other is to make sure to define the LVM PV or VG with the RAID redundancy.

The SMART events referenced in your post merely suggest that newer firmware is available for the drives. Look at updating the firmware on the drive(s) referenced.

What was the result of running lvchange --activate /dev/mapper/rhel_rhel7/home?
If all goes well, it should mark the volume active.
Look in /dev/rhel_rhel7; that might be the path against which the filesystem check can be run.
Two drives, one 1.5 TB and one 3 TB, failed?
Please post fdisk -l.
Brian S (asker):

# lvchange -a /dev/mapper/rhel_rhel7-home 
  Invalid argument for --activate: /dev/mapper/rhel_rhel7-home
  Error during parsing of command line.
#
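(Editor's note: the parse error occurs because -a/--activate consumes the next token as an activation state (y, n, ay, ...), so the device path is rejected rather than used. VG/LV notation avoids the ambiguity. Even with correct syntax, activation will likely be refused while two PVs are missing; lvm2 builds recent enough to support --activationmode offer a partial mode for salvage, where missing extents return I/O errors:

lvchange -ay rhel_rhel7/home                             # corrected syntax; may still refuse while PVs are missing
lvchange -ay --activationmode partial rhel_rhel7/home    # salvage mode: forces the LV online despite missing PVs
)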


Brian S (asker):

# fdisk -l

Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes, 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00089791

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048  2930276351  1465137152   8e  Linux LVM

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000b5ba7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048  3907028991  1953513472   8e  Linux LVM

Disk /dev/sda: 240.1 GB, 240057409536 bytes, 468862128 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0001a812

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048   468860927   233917440   8e  Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sde: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt


#         Start          End    Size  Type            Name
 1         2048   5860532223    2.7T  Linux LVM       

Disk /dev/sdj: 120.0 GB, 120034123776 bytes, 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0000b661

   Device Boot      Start         End      Blocks   Id  System
/dev/sdj1            2048   234440703   117219328   8e  Linux LVM

Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes, 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0006084c

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048  2930276351  1465137152   8e  Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdg: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt


#         Start          End    Size  Type            Name
 1         2048   5860532223    2.7T  Linux LVM       
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdh: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt


#         Start          End    Size  Type            Name
 1         2048   5860532223    2.7T  Linux LVM       

Disk /dev/sdi: 128.0 GB, 128035676160 bytes, 250069680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00090e6a

   Device Boot      Start         End      Blocks   Id  System
/dev/sdi1            2048   250068991   125033472   8e  Linux LVM

Disk /dev/mapper/rhel_rhel7-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/rhel_rhel7-swap: 8388 MB, 8388608000 bytes, 16384000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


#


arnold:
Look in /dev: do you have an rhel_rhel7 directory there, and within it, do you have home?

How many total drives do you have in the system?
10 with two in an unknown state, or 12 with two failed?

While the system is off, try pulling some and using vendor tools to test the drives. Do one at a time, and do not run destructive tests.
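(Editor's note: if no vendor utility is available, the drives' built-in SMART self-tests are non-destructive and can run in place. A sketch, with /dev/sdb as an example device:

smartctl -t short /dev/sdb      # queue the short self-test (roughly two minutes, non-destructive)
sleep 150
smartctl -l selftest /dev/sdb   # read back the self-test log
smartctl -H /dev/sdb            # overall health verdict
)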
Brian S (asker):

I believe that there are two banks of HDDs and 4 SSDs.

I am now looking for a mapping from the logical to the physical layout. The system doesn't seem to expose the drive serial number or anything traceable.
arnold:
Two banks meaning 8?
smartctl can be used to query the drives and trigger the identification light, or:
What is the make of the server? Some have a vendor-provided utility that could help manage the hardware.
cat /proc/scsi/scsi
Brian S (asker):

Sadly, this doesn't have the option to light the drives; it doesn't even mark the bad disks with a red LED. :(

I cannot find any kind of mapping between the logical and the physical. I can see that it is a 2 TB and a 3 TB disk, which does narrow the options. I still would have guessed that I could find the drive serial numbers someplace. I do have images of the disk labels from when they were installed.

# cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: TSSTcorp Model: CDDVDW SH-224BB  Rev: SB00
  Type:   CD-ROM                           ANSI  SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: OWC Mercury EXTR Rev: 13F0
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST2000DM001-1CH1 Rev: CC26
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi2 Channel: 00 Id: 01 Lun: 00
  Vendor: ATA      Model: ST31500541AS     Rev: CC34
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi2 Channel: 00 Id: 02 Lun: 00
  Vendor: ATA      Model: ST31500541AS     Rev: CC34
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi2 Channel: 00 Id: 03 Lun: 00
  Vendor: ATA      Model: ST3000DM001-1CH1 Rev: CC24
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi2 Channel: 00 Id: 04 Lun: 00
  Vendor: ATA      Model: ST3000DM001-1CH1 Rev: CC26
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi2 Channel: 00 Id: 05 Lun: 00
  Vendor: ATA      Model: ST_M13FQBL       Rev: 0957
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi2 Channel: 00 Id: 06 Lun: 00
  Vendor: ATA      Model: ST31500541AS     Rev: CC34
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi2 Channel: 00 Id: 07 Lun: 00
  Vendor: ATA      Model: ST3000DM001-1CH1 Rev: CC24
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi5 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST3000DM001-9YN1 Rev: CC4H
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi6 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST3000DM001-1CH1 Rev: CC24
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi7 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: TOSHIBA THNSNH12 Rev: N101
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi8 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: INTEL SSDSC2CW12 Rev: 400i
  Type:   Direct-Access                    ANSI  SCSI revision: 05
#


arnold:
Use smartctl -i on each of /dev/sda through /dev/sdj.
This will help you identify the ones in use; the remaining/missing ones will be the bad ones.
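(Editor's note: a sketch of that as a loop; device names as in this thread, and smartctl -i prints the identity block, from which we pick the model, serial, and capacity lines:

for d in /dev/sd{a..j}; do
  echo "== $d =="
  smartctl -i "$d" | grep -E 'Device Model|Serial Number|Capacity' || echo "no response"
done
)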
Brian S (asker):

I think I found it this way too:

#  grep "WWN:" /var/log/messages
Jan  7 19:57:06 rhel7 smartd[1051]: Device: /dev/sda [SAT], OWC Mercury EXTREME Pro SSD, S/N:MXE24011E47G3830, WWN:5-000000-000003830, FW:361A13F0, 240 GB
Jan  7 19:57:06 rhel7 smartd[1051]: Device: /dev/sdb [SAT], ST2000DM001-1CH164, S/N:Z1F34XPP, WWN:5-000c50-050b5ef9d, FW:CC26, 2.00 TB
Jan  7 19:57:06 rhel7 smartd[1051]: Device: /dev/sdc [SAT], ST31500541AS, S/N:9XW04B5C, WWN:5-000c50-01a71a09b, FW:CC34, 1.50 TB
Jan  7 19:57:06 rhel7 smartd[1051]: Device: /dev/sdd [SAT], ST31500541AS, S/N:6XW0H14R, WWN:5-000c50-01b52c0d6, FW:CC34, 1.50 TB
Jan  7 19:57:07 rhel7 smartd[1051]: Device: /dev/sde [SAT], ST3000DM001-1CH166, S/N:Z1F23YJM, WWN:5-000c50-04f7f3dfb, FW:CC24, 3.00 TB
Jan  7 19:57:07 rhel7 smartd[1051]: Device: /dev/sdf [SAT], ST3000DM001-1CH166, S/N:W1F2NFZQ, WWN:5-000c50-06118768d, FW:CC26, 3.00 TB
Jan  7 19:57:07 rhel7 smartd[1051]: Device: /dev/sdh [SAT], ST31500541AS, S/N:9XW00GJV, WWN:5-000c50-0158365a5, FW:CC34, 1.50 TB
Jan  7 19:57:07 rhel7 smartd[1051]: Device: /dev/sdi [SAT], ST3000DM001-1CH166, S/N:Z1F23ZHK, WWN:5-000c50-04f7ebaaf, FW:CC24, 3.00 TB
Jan  7 19:57:07 rhel7 smartd[1051]: Device: /dev/sdj [SAT], ST3000DM001-9YN166, S/N:Z1F17YAT, WWN:5-000c50-04e9c33fe, FW:CC4H, 3.00 TB
Jan  7 19:57:07 rhel7 smartd[1051]: Device: /dev/sdk [SAT], ST3000DM001-1CH166, S/N:Z1F1Y76W, WWN:5-000c50-04f4c700a, FW:CC24, 3.00 TB
Jan  7 19:57:07 rhel7 smartd[1051]: Device: /dev/sdl [SAT], TOSHIBA THNSNH128GBST, S/N:435S106HTE4Y, WWN:5-00080d-b0002b6ac, FW:HTRAN101, 128 GB
Jan  7 19:57:07 rhel7 smartd[1051]: Device: /dev/sdm [SAT], INTEL SSDSC2CW120A3, S/N:CVCV251600DB120BGN, WWN:5-001517-803d4fbe3, FW:400i, 120 GB
Jan  7 20:07:31 rhel7 smartd[1042]: Device: /dev/sda [SAT], OWC Mercury EXTREME Pro SSD, S/N:MXE24011E47G3830, WWN:5-000000-000003830, FW:361A13F0, 240 GB
Jan  7 20:07:31 rhel7 smartd[1042]: Device: /dev/sdb [SAT], ST2000DM001-1CH164, S/N:Z1F34XPP, WWN:5-000c50-050b5ef9d, FW:CC26, 2.00 TB
Jan  7 20:07:31 rhel7 smartd[1042]: Device: /dev/sdc [SAT], ST31500541AS, S/N:9XW04B5C, WWN:5-000c50-01a71a09b, FW:CC34, 1.50 TB
Jan  7 20:07:31 rhel7 smartd[1042]: Device: /dev/sdd [SAT], ST31500541AS, S/N:6XW0H14R, WWN:5-000c50-01b52c0d6, FW:CC34, 1.50 TB
Jan  7 20:07:31 rhel7 smartd[1042]: Device: /dev/sde [SAT], ST3000DM001-1CH166, S/N:Z1F23YJM, WWN:5-000c50-04f7f3dfb, FW:CC24, 3.00 TB
Jan  7 20:07:31 rhel7 smartd[1042]: Device: /dev/sdf [SAT], ST3000DM001-1CH166, S/N:W1F2NFZQ, WWN:5-000c50-06118768d, FW:CC26, 3.00 TB
Jan  7 20:07:31 rhel7 smartd[1042]: Device: /dev/sdh [SAT], ST31500541AS, S/N:9XW00GJV, WWN:5-000c50-0158365a5, FW:CC34, 1.50 TB
Jan  7 20:07:31 rhel7 smartd[1042]: Device: /dev/sdi [SAT], ST3000DM001-1CH166, S/N:Z1F23ZHK, WWN:5-000c50-04f7ebaaf, FW:CC24, 3.00 TB
Jan  7 20:07:31 rhel7 smartd[1042]: Device: /dev/sdj [SAT], ST3000DM001-9YN166, S/N:Z1F17YAT, WWN:5-000c50-04e9c33fe, FW:CC4H, 3.00 TB
Jan  7 20:07:32 rhel7 smartd[1042]: Device: /dev/sdk [SAT], ST3000DM001-1CH166, S/N:Z1F1Y76W, WWN:5-000c50-04f4c700a, FW:CC24, 3.00 TB
Jan  7 20:07:32 rhel7 smartd[1042]: Device: /dev/sdl [SAT], TOSHIBA THNSNH128GBST, S/N:435S106HTE4Y, WWN:5-00080d-b0002b6ac, FW:HTRAN101, 128 GB
Jan  7 20:07:32 rhel7 smartd[1042]: Device: /dev/sdm [SAT], INTEL SSDSC2CW120A3, S/N:CVCV251600DB120BGN, WWN:5-001517-803d4fbe3, FW:400i, 120 GB
#
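(Editor's note: an alternative that avoids scraping logs is the kernel's persistent-name symlinks, which encode model and serial in the link name, or lsblk's serial column, assuming a util-linux recent enough to know it, which RHEL 7's should be:

ls -l /dev/disk/by-id/ | grep -v part   # ata-<MODEL>_<SERIAL> -> ../../sdX
lsblk -d -o NAME,MODEL,SERIAL,SIZE      # one row per whole disk
)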


Brian S (asker):

I was able to bring the system up to networked GUI mode by adding "nofail" to the /home entry in /etc/fstab:
/dev/mapper/rhel_rhel7-home /home                   xfs     defaults,nofail 0 0



Then, using the GNOME disk utility, I am able to attempt to mount it. It seems to be running a check of the drives. I expect it to fail; since there was nothing on /home anyhow, this is not a major concern. But through the GNOME disk utility, /dev/mapper now has the mapping:
 ls -l /dev/mapper/
total 0
crw-------. 1 root root 10, 236 Jan  7 20:07 control
lrwxrwxrwx. 1 root root       7 Jan  7 20:07 rhel_rhel7-home -> ../dm-2
lrwxrwxrwx. 1 root root       7 Jan  7 20:07 rhel_rhel7-root -> ../dm-0
lrwxrwxrwx. 1 root root       7 Jan  7 20:07 rhel_rhel7-swap -> ../dm-1
#



Once this completes, I'll remove the dead disks and rebuild the XFS filesystem, but maybe into smaller drive groupings.
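(Editor's note: a minimal cleanup sketch for that step, using the VG/LV names from this thread; destructive, but the data on home is already lost:

lvremove rhel_rhel7/home               # remove the LV that spans the missing PVs
vgreduce --removemissing rhel_rhel7    # then drop the missing PVs from the volume group
)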

Going back to something you mentioned 18 hours ago:
make sure to define the LVM PV or VG with the RAID redundancy

How do I ensure these settings? When the system was created and the original XFS /home was built, I thought I had selected RAID 5.
ASKER CERTIFIED SOLUTION from arnold
Brian S (asker):

Thank you for all the help; I learned a lot along the way...
arnold:
Look at the LVM RAID options (lvm segtypes).

Look at vgcreate to build the volume group from your PVs, then create the LV with a RAID segment type (mirror, raid5, raid6, raid10, as available).
It might be that LVM-based RAID on your version is limited to mirroring (RAID 1); striping (RAID 0) provides no redundancy.
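(Editor's note: as an illustration, a minimal sketch of a RAID 5 LV under LVM on RHEL 7; the device names, VG name, and size are hypothetical, and the raid segment types must be available, which lvm segtypes will confirm:

pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1           # label the surviving partitions as PVs (example names)
vgcreate vg_data /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1   # group them into a fresh VG
lvcreate --type raid5 -i 3 -L 1T -n home vg_data           # -i 3 = data stripes; lvm adds the parity leg (needs 4 PVs)
mkfs.xfs /dev/vg_data/home
mount /dev/vg_data/home /home
)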