VMware vSphere disk usage with thick provisioning
Posted on 2013-11-25
So this is my scenario:
- Virtual machine with CentOS 6 installed
- VMware ESX 5.0
- Dell EqualLogic SAN storage array
Here's my problem: when I create a new virtual machine using thick provisioning, the datastore backing the VM shows up as full as soon as I assign the virtual disk to the VM. Let me elaborate...
I first create a 200GB thick-provisioned volume on the SAN array. Then I connect the volume to the ESX host as a new datastore via iSCSI. All well and good so far. Next I create a new virtual machine, selecting the newly created datastore as the location for its virtual hard disk and assigning it, for example, 150GB. I use thick provisioning on the ESX side as well (I tried both Lazy Zeroed and Eager Zeroed, with the same result).
At this point, the 150GB I've assigned shows up as used on the ESX host. When I check from the SAN, or from the guest OS installed on the VM, the readings are correct, i.e. most of the space is free, as it should be; yet ESX shows the datastore as full.
What is the reason for this? I even tried the Thick Provision Eager Zeroed method, since it supposedly zeroes out all the space on the virtual hard disk at creation time, but the disk still shows up as full on ESX.
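As an aside, the gap between a disk's logical size and the blocks it actually consumes can be reproduced with sparse files on any Linux box. This is only a rough, non-VMware analogy (the 150 MB size below is arbitrary): a sparse file behaves a bit like a thin-provisioned disk, while a fully written file behaves like a thick one, reserving all its space the moment it is created.

```python
import os
import tempfile

MB = 1024 * 1024
stats = {}

# "Thin" analogue: seek past 150 MB and write a single byte.
# The file's logical size is 150 MB, but the filesystem
# allocates almost no real blocks.
with tempfile.NamedTemporaryFile(delete=False) as thin:
    thin.seek(150 * MB - 1)
    thin.write(b"\0")

# "Thick" analogue: write 150 MB of actual zeros, so every
# block is allocated up front.
with tempfile.NamedTemporaryFile(delete=False) as thick:
    thick.write(b"\0" * (150 * MB))

for name, path in (("thin", thin.name), ("thick", thick.name)):
    st = os.stat(path)
    # st_size is the logical size; st_blocks * 512 is the
    # space the filesystem has actually allocated.
    stats[name] = (st.st_size, st.st_blocks * 512)
    print(f"{name}: logical {st.st_size} bytes, "
          f"allocated {st.st_blocks * 512} bytes")
    os.unlink(path)
```

Both files report the same logical size, but only the "thick" one has consumed real space on the filesystem that holds it, much like a thick-provisioned VMDK consumes its full size on the datastore from day one.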