• Status: Solved

Over-provisioned in VMware ESXi 5.1

OK, to get started right: I built this server as a VMware and virtual-server newbie, so I know I've made mistakes. I need help RESOLVING the situation the best way possible, and will attempt to learn along the way.

Hypervisor: HP ML350 G6 with eight hot-swap drive bays; four 600 GB SAS drives in use as a hardware-controller-level RAID 5 array (~1.6 TB usable). VMware ESXi 5.1 standalone host.

Three guests: two Windows Server 2008 R2 servers and one Acronis vmProtect appliance.

Apparently, I allowed all of the RAID array's space to be used as VMware storage.

Each of the Windows boxes shows the same "1.25 TB free of 1.95 TB" on drive C: (these servers were converted from physical to virtual using the VMware Converter utility).

Total "actual" space consumed across both servers is about 1.4TB.

My VM datastore currently has 12.37 GB available (file system VMFS 5.58, 1 MB block size). My "provisioned" space, as viewed in the Datastore Browser, is HUGE, way beyond the physical size of the array (it shows almost 4 TB!).
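For perspective, the figures quoted above imply roughly a 2.5x overcommit. A quick back-of-the-envelope check (numbers taken from this post, in GB; exact values are approximations):

```shell
capacity=1600        # usable RAID 5 datastore (~1.6 TB)
provisioned=4000     # total provisioned across all VMDKs (~4 TB, per Datastore Browser)

# Overcommit ratio: provisioned space vs. physical capacity.
# Anything over 1.0x means thin disks can grow past what the array holds.
ratio=$(awk "BEGIN { printf \"%.1f\", $provisioned / $capacity }")
echo "overcommit ratio: ${ratio}x"
```

With only ~12 GB free, any thin disk growth (or a VM swap file at power-on) can exhaust the datastore.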

Earlier today, one Windows server stopped due to disk space (I was copying about 70 GB of new data to the machine).  VMware of course said this was due to no more space in the datastore.

I mistakenly thought I could resolve this by "moving out" a few hundred GB of data from the running server to a NAS, thus "freeing up" some space but soon realized this wasn't going to help.

I was able to get all three guests running again by reducing the allocated RAM by half on each server (I read somewhere this would get the swap file size down enough to start the VMs).
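For context on why halving the RAM helped: each powered-on VM gets a .vswp file on the datastore sized at configured memory minus memory reservation. A sketch with hypothetical figures (the post doesn't state the actual RAM sizes):

```shell
ram_gb=16          # configured memory before the change (assumption)
reservation_gb=0   # no memory reservation set (assumption, the default)

# .vswp size = configured memory - reservation
swap_before=$((ram_gb - reservation_gb))
swap_after=$(((ram_gb / 2) - reservation_gb))
echo "swap before: ${swap_before} GB, after halving RAM: ${swap_after} GB"
```

Setting a full memory reservation would shrink the swap file to zero without reducing the guest's RAM, at the cost of locking that memory on the host.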

My engineer is a bit more versed in manipulating the "guts" of VMware than I am; however, I'm at a loss for the BEST way to resolve this situation.

It would seem that the answer now is to purchase four more SAS disks and create a second hardware-level RAID 5 array, then use the "move to" function to move one server's VMDKs, etc., to the new array.

One of the servers is nothing more than an AD DC (for 40 people) and the "redirection" share for Desktops/Documents. I'm thinking I could just add lower-cost 2 TB SATA drives (two of them, in a RAID 1 array) and move that VM to the new array.

1 Solution
Three ways:

1. It's Win2k8, right? Use the Disk Management snap-in and shrink the NTFS volumes of the other VMs.

2. Purchase a NAS with iSCSI support (just to make sure). Add this NAS to your VM pool in ESXi.

3. Add more drives like you said. Add the new drives to the VMs, then use the Disk Management snap-in to extend each VM's C: onto the new virtual drive you just added (after you initialize the empty NTFS volumes you created).
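The shrink step in option 1 can be scripted from inside each Windows guest. A hypothetical diskpart script (sizes in MB are placeholders; "shrink querymax" first reports how far NTFS can actually shrink, which immovable files can limit):

```shell
# Build a diskpart script for the guest. This is a sketch; adjust the
# volume letter and sizes to your environment.
cat > shrink.txt <<'EOF'
select volume C
shrink querymax
shrink desired=204800 minimum=102400
EOF
# Inside the Windows guest, from an elevated prompt:
#   diskpart /s shrink.txt
```

Note that shrinking the guest partition does not, by itself, shrink the thin VMDK on the datastore; it only prepares for imaging/converting to a smaller disk.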

Hope this gets you going in right direction.

Jose Fdez
The problem is that you need breathing room to work.

Your thin-provisioned VMDKs will not easily shrink; even if you delete files from the guest operating system, the VMDKs will not get any smaller.

You need enough space to create new, thick-provisioned disks of a more appropriate size for your servers, and then image Windows over to the smaller disks using whatever recovery tools you like. You'll need to shrink the partition and then image it over.

But you need more space on your datastore to do this. Or rather, you need a second datastore.

You can either put in a NAS and connect it to ESXi as an NFS or iSCSI datastore, or you can put more hard disks into the server to give you more breathing room.
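The NAS route can be done from the ESXi 5.1 command line. A host-side sketch for an NFS mount (hostname, export path, and datastore label below are placeholders; substitute your NAS details). This is a configuration fragment to run over SSH on the host, not locally:

```shell
# Attach the NAS as a temporary NFS datastore on the ESXi host:
esxcli storage nfs add --host nas.example.local \
    --share /volume1/esx-temp --volume-name nas-temp

# Confirm it mounted:
esxcli storage nfs list
```

iSCSI works just as well; NFS is simply fewer steps for a temporary staging datastore.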

Your migration will happen with the servers offline, and the additional space is only going to be used temporarily while you move things around. By the time you bring the servers back up for production use they will be on your local datastore on the SAS disks, so I don't think it REALLY matters how high performance the disks are so long as it is reliable storage.

IMHO you can get away with a RAID 1 array of a couple SATA disks.

Once you have more breathing room you can do whatever is necessary to shrink the partitions of your server, create new, smaller, thick provisioned VMDKs, and image your server over to those smaller disks.
mlmslexAuthor Commented:

Thanks for your quick reply.

Couple of questions / comments:

Solution 1 - I didn't think you could reduce the partition size of a system volume. The physical servers had only a C: drive, and that's how they remain as VMs due to the import/conversion. Also, I didn't think reducing the amount of data in each partition actually helped, since VMware had already expanded the thin-provisioned disks.

Solution 2 - I actually already have a Synology NAS onsite which will do this; however, we use that as the vmProtect onsite backup target so probably not wise to put working data on there as well.

Solution 3 - You think it's better to extend the partitions of each VM "across" the arrays rather than just giving each server its own array? (Again, the C: drive of each Windows server thinks it has tons of free space.)
mlmslexAuthor Commented:
Frosty, would you recommend I:
- use the VMware Converter utility to convert the existing VMs to ones that use thick provisioning (storing the converted files on another device temporarily),
- then (after ensuring I have a good backup) kill the existing VMs along with their thin-provisioned disks, and
- restore/import the backed-up machines with their thick disks?

If so, is there anything I have to do to reclaim the disk space in the datastore after deleting the machines/virtual disks?

I use vmProtect for VMware backup, but I don't think you can change "thick or thin" provisioning when restoring a backup with that utility.
1. Yep. NTFS is great at this now, although most old-school admins remember it being very difficult to accomplish without third-party tools.


2. Probably the right thing to do. Don't host and backup in the same place.

3. Whichever way is easier and more affordable for you is the best way. Any of the three approaches works.

This post offers a quick and dirty explanation of my recommendation to you.


While you cannot re-provision existing VMDKs in place, you can convert them and "logically shrink" the disk size.

Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
If you have over-provisioned, you will need to copy out and back up the VM, and then restore it at the correct size. VMware vCenter Converter Standalone can do this for you by creating a V2V.

see my EE Articles

HOW TO: FAQ VMware P2V Troubleshooting

HOW TO:  P2V, V2V for FREE - VMware vCenter Converter Standalone 5.5
Shrinking the NTFS volume under Windows won't do any good on its own; VMware won't be able to grab the space back, and since the disks are thin provisioned you may not gain any space even by V2V migration. If a defrag (or a similar process that shuffles files all over the disk) was run under Windows, though, a V2V will reduce the size, because all those deleted files will stop taking up space, as long as you zero out the unused blocks first. Defrag is the enemy of thin provisioning and can convert thin to fat very quickly.
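The zero-out point is the key: storage that copies data can skip runs of zeros, the same way a filesystem stores an all-zero file sparsely. A local demonstration of the principle, plus the guest-side tool usually used before a V2V:

```shell
# Create a file with 100 MB of apparent size but (almost) no blocks
# allocated -- the sparse-file analogue of a freshly zeroed thin VMDK:
dd if=/dev/zero of=thin.img bs=1M count=0 seek=100 2>/dev/null

apparent=$(wc -c < thin.img)
actual_kb=$(du -k thin.img | cut -f1)
echo "apparent: ${apparent} bytes, actually allocated: ${actual_kb} KB"

# Inside the Windows guest, the equivalent prep step is Sysinternals'
# free-space wiper (writes zeros over unallocated NTFS space):
#   sdelete.exe -z C:
```

After zeroing, a Converter V2V (or a thin clone) can drop those zeroed blocks, which is why deleted-then-zeroed data stops taking space.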

I would add disks and expand the array, then extend the logical disk in hardware, and finally grow the current datastore. The trouble is that the controller's disk re-leveling (expansion) process takes a day or so before you can use the extra space.

These are thick, not thin, VMs, correct?

With 1.6 TB of physical space and two Windows servers each seeing ~2 TB, one would think not.
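The arithmetic behind that suspicion, using the figures from the post:

```shell
# Two guests each present a ~1.95 TB C: drive; the array only has
# ~1.6 TB usable. Thick disks of that total size could not exist.
apparent_tb=$(awk 'BEGIN { printf "%.1f", 2 * 1.95 }')
echo "guests present ${apparent_tb} TB total vs ~1.6 TB physical"
```

Since the apparent capacity exceeds the physical capacity, at least some of the disks must be thin provisioned.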
mlmslexAuthor Commented:
Boy, I wish I had known that all I had to do was add new hard disks and span the datastore across the new LUN.
Question has a verified solution.
