Linux LVM - LVM logical volume from group on raid1 partition not available (not active) at startup

Hi experts!
I have an LVM2 physical volume on top of a RAID1 partition. The logical volume in this group is inactive at startup, so its filesystem, /home, cannot be mounted, although I have previously activated the group with vgchange -a y. On startup I land in emergency mode, from where I can activate the group and continue normally. I use Scientific Linux 7. Why is the LVM logical volume not active at startup? Thanks for any help!
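For reference, this is roughly what I run from the emergency shell each time to get going again (using my volume group's name):

vgchange -ay mirrored-raid1-devices   # activate the inactive volume group
mount /home                           # the fstab entry now mounts fine
exit                                  # leave emergency mode and continue booting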
Lelio Michele Lattari (IT Manager) asked:
jools commented:
The volume may not be starting for a number of reasons: the physical disk associated with the VG may not be available, or something like the partition type may not be set correctly.

Can you post some output back here to help diagnose?

The following commands, run as root, might prove useful:

fdisk -l
vgs
lvs
(if using software raid)
mdadm --examine --scan
cat /proc/mdstat

Usually vgscan would find all the volumes and activate them. RHEL 7 uses systemd, though, so something might have gone awry there; check the boot logs for more information (this would be in /var/log/boot.log, but I don't know off the top of my head whether that has changed, as I've not had a chance to play with 7 yet).
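If boot.log turns up nothing, the systemd journal might; something like this should work on 7 (a sketch only, I've not been able to verify it there myself):

journalctl -b | grep -iE 'lvm|md[0-9]'   # LVM and raid messages from the current boot
systemctl --failed                       # any units that failed during startup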
Lelio Michele Lattari (Author) commented:
Hi!

There are absolutely no errors on boot until I try to mount the volume through /etc/fstab. Although I activate the group using vgchange, the volume is inactive again after every reboot.

[root@filemon1 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               mirrored-raid1-devices
  PV Size               869.08 GiB / not usable 2.81 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              222483
  Free PE               94483
  Allocated PE          128000
  PV UUID               0lo3jy-qixg-tWPl-nQqv-6dtA-BttZ-3v7nRv

[root@filemon1 ~]# vgdisplay
  --- Volume group ---
  VG Name               mirrored-raid1-devices
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               869.07 GiB
  PE Size               4.00 MiB
  Total PE              222483
  Alloc PE / Size       128000 / 500.00 GiB
  Free  PE / Size       94483 / 369.07 GiB
  VG UUID               gTdMqy-yfDt-dgQr-LqN1-mkpD-94QE-PWZlXB

[root@filemon1 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/mirrored-raid1-devices/lv_home
  LV Name                lv_home
  VG Name                mirrored-raid1-devices
  LV UUID                0cttq2-x5DN-I4I3-7sZK-h9zb-ZHsc-n9wGch
  LV Write Access        read/write
  LV Creation host, time filemon1, 2014-11-17 22:09:01 +0100
  LV Status              NOT available
  LV Size                500.00 GiB
  Current LE             128000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

[root@filemon1 ~]# vgchange -aay
  1 logical volume(s) in volume group "mirrored-raid1-devices" now active
[root@filemon1 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/mirrored-raid1-devices/lv_home
  LV Name                lv_home
  VG Name                mirrored-raid1-devices
  LV UUID                0cttq2-x5DN-I4I3-7sZK-h9zb-ZHsc-n9wGch
  LV Write Access        read/write
  LV Creation host, time filemon1, 2014-11-17 22:09:01 +0100
  LV Status              available
  # open                 0
  LV Size                500.00 GiB
  Current LE             128000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

[root@filemon1 ~]# fdisk -l

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes, 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048     1026047      512000   fd  Linux raid autodetect
/dev/sdb2         1026048    25815039    12394496   82  Linux swap / Solaris
/dev/sdb3        25815040   130672639    52428800   fd  Linux raid autodetect
/dev/sdb4       130672640  1953525167   911426264    5  Extended
/dev/sdb5       130674688  1953523711   911424512   fd  Linux raid autodetect

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes, 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000bb947

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048    25815039    12394496   82  Linux swap / Solaris
/dev/sda3        25815040   130672639    52428800   83  Linux
/dev/sda4       130672640  1953525167   911426264    5  Extended
/dev/sda5       130674688  1953523711   911424512   83  Linux

Disk /dev/md1: 53.7 GB, 53653405696 bytes, 104791808 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md0: 524 MB, 524222464 bytes, 1023872 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md2: 933.2 GB, 933164285952 bytes, 1822586496 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/mirrored--raid1--devices-lv_home: 536.9 GB, 536870912000 bytes, 1048576000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@filemon1 ~]# lsblk
NAME                                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                      8:0    0 931.5G  0 disk
├─sda1                                   8:1    0   500M  0 part
│ └─md0                                  9:0    0   500M  0 raid1 /boot
├─sda2                                   8:2    0  11.8G  0 part  [SWAP]
├─sda3                                   8:3    0    50G  0 part
│ └─md1                                  9:1    0    50G  0 raid1 /
├─sda4                                   8:4    0     1K  0 part
└─sda5                                   8:5    0 869.2G  0 part
  └─md2                                  9:2    0 869.1G  0 raid1
    └─mirrored--raid1--devices-lv_home 253:0    0   500G  0 lvm
sdb                                      8:16   0 931.5G  0 disk
├─sdb1                                   8:17   0   500M  0 part
│ └─md0                                  9:0    0   500M  0 raid1 /boot
├─sdb2                                   8:18   0  11.8G  0 part  [SWAP]
├─sdb3                                   8:19   0    50G  0 part
│ └─md1                                  9:1    0    50G  0 raid1 /
├─sdb4                                   8:20   0     1K  0 part
└─sdb5                                   8:21   0 869.2G  0 part
  └─md2                                  9:2    0 869.1G  0 raid1
    └─mirrored--raid1--devices-lv_home 253:0    0   500G  0 lvm
sr0                                     11:0    1  1024M  0 rom
[root@filemon1 ~]# blkid
/dev/sdb1: UUID="d94999a4-4d70-6676-ea0a-9c266e71ac0d" TYPE="linux_raid_member"
/dev/sdb2: UUID="86d0f300-bc51-4f5f-a1b1-0b65fa23b33d" TYPE="swap"
/dev/sdb3: UUID="a9189950-e3ce-0155-ca5e-dd38927ead3c" UUID_SUB="9196d1df-23b2-a7b4-1726-012eef788262" LABEL="filemon1:1" TYPE="linux_raid_member"
/dev/sdb5: UUID="a444c20b-a067-3616-f940-81dcf85e56f0" UUID_SUB="301d17ad-116c-7e11-358a-35137fb97920" LABEL="filemon1:2" TYPE="linux_raid_member"
/dev/sda1: UUID="d94999a4-4d70-6676-ea0a-9c266e71ac0d" TYPE="linux_raid_member"
/dev/sda2: UUID="262de8fc-67c1-4b29-a8fd-69ef92d9b231" TYPE="swap"
/dev/sda3: UUID="a9189950-e3ce-0155-ca5e-dd38927ead3c" UUID_SUB="43641251-4844-6a6c-66e5-defb2fd7e8a2" LABEL="filemon1:1" TYPE="linux_raid_member"
/dev/sda5: UUID="a444c20b-a067-3616-f940-81dcf85e56f0" UUID_SUB="6d9be8ca-29c8-162c-a684-65ae4c4f1bac" LABEL="filemon1:2" TYPE="linux_raid_member"
/dev/md1: UUID="a0c1f14f-f88c-4a95-91f7-4894377fab2d" TYPE="xfs"
/dev/md0: UUID="fa43cdb6-5bdd-40c3-be67-693b7155dcbf" TYPE="xfs"
/dev/md2: UUID="0lo3jy-qixg-tWPl-nQqv-6dtA-BttZ-3v7nRv" TYPE="LVM2_member"
/dev/mapper/mirrored--raid1--devices-lv_home: UUID="02d146f8-1bd2-468b-8074-eccf10692945" TYPE="xfs"
[root@filemon1 ~]# mdadm --examine --scan
ARRAY /dev/md0 UUID=d94999a4:4d706676:ea0a9c26:6e71ac0d
ARRAY /dev/md/1  metadata=1.2 UUID=a9189950:e3ce0155:ca5edd38:927ead3c name=filemon1:1
ARRAY /dev/md/2  metadata=1.2 UUID=a444c20b:a0673616:f94081dc:f85e56f0 name=filemon1:2
[root@filemon1 ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda5[2] sdb5[0]
      911293248 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[1] sdb1[0]
      511936 blocks [2/2] [UU]

md1 : active raid1 sda3[2] sdb3[0]
      52395904 blocks super 1.2 [2/2] [UU]

unused devices: <none>

The main problem is that the logical volume lv_home is not being ACTIVATED during the startup process.
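In case it is relevant, these are the auto-activation settings in /etc/lvm/lvm.conf that I understand can produce exactly this symptom (option names are from the stock RHEL 7 file; the values shown are only illustrative, not what I am currently running):

# global section: lvmetad drives event-based auto-activation on RHEL 7
use_lvmetad = 1
# activation section: if this list is set, ONLY the VGs named in it are
# auto-activated by vgchange -aay at boot; a missing entry would leave
# lv_home inactive
# auto_activation_volume_list = [ "mirrored-raid1-devices" ]

I have also read that after changing lvm.conf the initramfs should be rebuilt with dracut -f so the early-boot copy matches.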
Lelio Michele Lattari (Author) commented:
Thank you EXPERTS! I have found the solution.
It was necessary to change some of the options in the /etc/lvm/lvm.conf file. Now it works fine!

# Set to 1 to perform internal checks on the operations issued to
# libdevmapper.  Useful for debugging problems with activation.
# Some of the checks may be expensive, so it's best to use this
# only when there seems to be a problem.
checks = 1

# Set to 1 for LVM2 to verify operations performed by udev. This turns on
# additional checks (and if necessary, repairs) on entries in the device
# directory after udev has completed processing its events.
# Useful for diagnosing problems with LVM2/udev interactions.
verify_udev_operations = 1
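A sketch of how I verify that the change sticks (standard commands; the names are from my setup):

lvs -o vg_name,lv_name,lv_attr   # after a reboot, the 5th character of lv_attr should be 'a' (active)
mount | grep home                # /home should now be mounted automatically from fstab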
Lelio Michele Lattari (Author) commented:
The reason is that NOBODY could help me solve the problem; I found the solution by inspecting the system myself. My question is: WHERE ARE THE EXPERTS? :-)))