jedale007
asked on
Difficulties extending an XFS root partition on CentOS 7 on a Virtual Machine (Hyper V on Windows Server 2012)
I created a CentOS 7 Linux virtual machine using Hyper-V on Windows Server 2012. We have a Failover Cluster Manager with two nodes managing a number of VMs.
Initially, I allocated only one virtual disk to the machine and made it 50 GB. I configured the file system (CentOS uses XFS by default), set up the mount points and everything worked fine.
Shortly after realizing that 50 GB would not be enough space, I went into the Windows Server cluster manager and expanded the virtual hard disk to 150 GB. I verified the setting in Hyper-V / Cluster Manager. All good.
However, I am now battling to extend the size of my XFS root file system.
I installed both ssm (System Storage Manager) and xfsprogs to manage my disks, volumes and pools.
At this point, running df -h does not show the extra disk space. When I run "ssm list", however, I see the following (see also attached):
----------------------------------------------------------------------
Device           Free       Used       Total      Pool    Mount point
----------------------------------------------------------------------
/dev/sda                               150.00 GB          PARTITIONED
/dev/sda1                              200.00 MB          /boot/efi
/dev/sda2                              500.00 MB          /boot
/dev/sda3        44.00 MB   49.27 GB   49.31 GB   centos
----------------------------------------------------------------------
----------------------------------------------------------------------
Pool     Type   Devices    Free       Used       Total
----------------------------------------------------------------------
centos   lvm    1          44.00 MB   49.27 GB   49.31 GB
----------------------------------------------------------------------
-------------------------------------------------------------------------------------
Volume            Pool    Volume size  FS    FS size    Free       Type    Mount point
-------------------------------------------------------------------------------------
/dev/centos/root  centos  47.27 GB     xfs   47.25 GB   30.91 GB   linear  /
/dev/centos/swap  centos  2.00 GB                                  linear
/dev/sda1                 200.00 MB    vfat                        part    /boot/efi
/dev/sda2                 500.00 MB    xfs   493.73 MB  335.29 MB  part    /boot
-------------------------------------------------------------------------------------
Here is the xfs_info output for the root volume:
# xfs_info /dev/centos/root
meta-data=/dev/mapper/centos-root isize=256 agcount=4, agsize=3097856 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=0 finobt=0 spinodes=0
data = bsize=4096 blocks=12391424, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=6050, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Here is what happens when I try to increase the space:
# ssm resize -s+100G /dev/centos/root
SSM Error (2005): There is not enough space in the pool 'centos' to grow volume '/dev/centos/root' to size 154423296.0 KB!
# xfs_growfs /dev/centos/root -D 37174272
meta-data=/dev/mapper/centos-root isize=256 agcount=4, agsize=3097856 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=0 finobt=0 spinodes=0
data = bsize=4096 blocks=12391424, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=6050, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data size 37174272 too large, maximum is 12391424
As you can see, the xfs_growfs command tells me that the maximum size is 12391424, which is the filesystem's current data block count. That tells me that Linux is not picking up that there is extra space available on /dev/sda - the only virtual disk in use.
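For reference, those block counts can be converted to sizes by hand - xfs_growfs and xfs_info report sizes in filesystem blocks (bsize=4096 here). A quick sketch:

```shell
# xfs_growfs/xfs_info report sizes in filesystem blocks (bsize=4096 above)
bsize=4096
blocks_per_gib=$(( 1024 * 1024 * 1024 / bsize ))    # 262144 blocks per GiB

cur_blocks=12391424    # "maximum is 12391424" - the current size
new_blocks=37174272    # the rejected -D target

echo "current: $(( cur_blocks / blocks_per_gib )) GiB"   # 47 GiB
echo "target:  $(( new_blocks / blocks_per_gib )) GiB"   # 141 GiB
```

The current count works out to ~47 GiB, matching the 47.27 GB logical volume, and the target to ~141 GiB. xfs_growfs can only grow the filesystem into space the underlying logical volume already has, so the target is rejected until the LV itself is enlarged.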
Where am I going wrong?
SOLUTION
You must boot the system from recovery media to repartition the first disk, i.e. add an extra LVM partition in the new space.
Some systems will not pick up new disk sizes unless they are restarted. Not sure if you tried that to make sure.
Linux rescans disks on the fly; it's just that it will never extend the PV automatically.
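Putting the comments above together, the usual path after growing the virtual disk looks roughly like this. This is a sketch only - the new partition name (/dev/sda4) and type codes are assumptions for this layout, and rewriting the partition table of a live boot disk is exactly where things can go wrong, which is why recovery media was suggested:

```shell
# 1. Ask the kernel to rescan the already-grown disk
echo 1 > /sys/class/block/sda/device/rescan

# 2. Create a new partition (sda4 here) in the freed space.
#    The disk is GPT (it has an EFI partition), so use gdisk or parted
#    rather than fdisk - from recovery media if the tool refuses to
#    rewrite an in-use partition table.
gdisk /dev/sda        # n -> new partition, type 8e00 (Linux LVM), w -> write
partprobe /dev/sda

# 3. Turn the new partition into a PV and add it to the volume group
pvcreate /dev/sda4
vgextend centos /dev/sda4

# 4. Grow the LV; xfs_growfs then fills it (XFS grows while mounted)
lvextend -L +100G /dev/centos/root
xfs_growfs /
```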
ASKER CERTIFIED SOLUTION
ASKER
Thanks everyone, I essentially wiped that Linux VM and reconfigured it as follows:
1) Stopped the Linux server (# shutdown -h now)
2) Removed the virtual disk assigned to that Linux VM in Hyper-V
3) Created two virtual disks in Hyper-V - a small disk for the root/OS and a much larger disk for apps
4) Re-installed CentOS 7, choosing the smaller disk as the installation destination
5) After the install and yum updates, used fdisk and mkfs.xfs to partition and format the new disk, and created an fstab mount point using XFS defaults
6) Mounted the new disk
7) Put on my dictator's act and told the users we will not install anything on the root partition - only on the second disk, with its dedicated partition :-)
So it's all sorted - thanks Gheist. Ultimately, adding a second disk is the solution with the fewest headaches. You can't really extend the root partition in place, especially if there is only one disk assigned to the VM.
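Steps 5 and 6 above can be sketched as follows - the device name /dev/sdb and the mount point /data are assumptions, not what was actually used:

```shell
# Assuming the second virtual disk shows up as /dev/sdb
# and we want it mounted at /data (both names are assumptions):
fdisk /dev/sdb        # n -> new primary partition, w -> write
mkfs.xfs /dev/sdb1    # format the new partition as XFS
mkdir -p /data
echo '/dev/sdb1  /data  xfs  defaults  0 0' >> /etc/fstab
mount /data
```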
ASKER
Thanks for all the assistance, everyone.
You have one disk for /opt.
Now you can add another for /var,
another for /home,
etc., etc.
Actually you don't need LVM for a virtual machine; you can put a new disk behind each mount point and size it up if needed (XFS cannot be shrunk, unlike ext4).
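The one-disk-per-mount-point layout described above would end up as /etc/fstab entries along these lines (the device names are assumptions):

```
# /etc/fstab - one virtual disk per mount point (device names assumed)
/dev/sdb1   /opt    xfs   defaults   0 0
/dev/sdc1   /var    xfs   defaults   0 0
/dev/sdd1   /home   xfs   defaults   0 0
```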