OK - to get started right - I built this server as a VMware and virtual server newbie - so I know I've made mistakes. I need help RESOLVING the situation the best way possible - and will attempt to learn along the way.
Hypervisor: HP ML350 G6 with 8 hot-swap drive bays; 4 bays populated with 600 GB SAS drives in a hardware-controller-level RAID 5 array (~1.6 TB usable). VMware ESXi 5.1 standalone host.
3 guests - 2x Windows Server 2008 R2 servers, 1 Acronis vmProtect appliance
Apparently, I allowed all RAID-array space to be used as VMware storage.
Each of the Windows boxes shows the same "1.25 TB free of 1.95 TB" on drive C: (these servers were converted from physical to virtual using the VMware Converter utility).
Total "actual" space consumed across both servers is about 1.4TB.
My VM datastore is currently at 12.37 GB available (file system VMFS 5.58, 1 MB block size).
My "provisioned" space seems to be HUGE (as viewed in the Datastore Browser) - WAY beyond the physical size of the array (it shows almost 4 TB!).
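To sanity-check that ~4 TB figure: the two guest C: drives alone were provisioned at 1.95 TB each, so their combined provisioned size already dwarfs the ~1.6 TB the array can actually hold. A quick back-of-envelope with the numbers from this post (illustrative arithmetic only, not anything VMware reports):

```python
# Back-of-envelope overcommit check using the figures quoted above.
# All numbers come from this post; working in GB (1 TB = 1024 GB).

TB = 1024

capacity_gb = 1.6 * TB               # ~1.6 TB usable on the RAID 5 array
guest_disk_gb = 1.95 * TB            # each Windows guest's provisioned C: drive
provisioned_gb = 2 * guest_disk_gb   # two guests alone: ~3.9 TB "provisioned"

overcommit = provisioned_gb / capacity_gb
print(f"Provisioned: {provisioned_gb:.0f} GB on {capacity_gb:.0f} GB capacity "
      f"({overcommit:.2f}x overcommitted)")
```

Swap files, snapshots, and logs on the datastore would push the displayed "provisioned" total even higher, which is consistent with the almost-4 TB number in the Datastore Browser.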
Earlier today, one Windows server halted for lack of disk space (I was copying about 70 GB of new data to it). VMware, of course, said this was due to no more space in the datastore.
I mistakenly thought I could resolve this by "moving out" a few hundred GB of data from the running server to a NAS, thus "freeing up" some space, but I soon realized this wasn't going to help (moving or deleting data inside the guest doesn't shrink the VMDK on the datastore).
I was able to get all 3 guests running again by doing the following:
reducing the allocated RAM by half on each server (I read somewhere this would shrink each VM's swap (.vswp) file enough to let the VMs power on).
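For anyone wondering why that worked (my understanding of ESXi behavior, worth verifying for your own setup): at power-on, ESXi creates a per-VM swap file on the datastore sized at configured RAM minus any memory reservation, so halving the RAM frees that much datastore space per guest. A sketch of the arithmetic, with a hypothetical 16 GB guest:

```python
# Why halving RAM freed datastore space (my understanding; verify for your setup):
# at power-on, ESXi creates a per-VM .vswp file on the datastore sized at
# configured RAM minus the memory reservation.

def vswp_size_gb(configured_ram_gb: float, reservation_gb: float = 0.0) -> float:
    """Size of the per-VM swap file ESXi creates at power-on."""
    return max(configured_ram_gb - reservation_gb, 0.0)

# Hypothetical example: a guest configured with 16 GB RAM and no reservation.
before = vswp_size_gb(16)   # .vswp size before the change
after = vswp_size_gb(8)     # .vswp size after halving the configured RAM
print(f"Datastore space freed for such a guest: {before - after:.0f} GB")
```

Setting a memory reservation equal to configured RAM would shrink the .vswp to zero, but at the cost of locking that much host memory to the VM.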
My engineer is a bit more versed in manipulating the "guts" of VMware than I am; however, I'm at a loss for the BEST way to resolve this situation.
It would seem that the answer now is to purchase 4 more SAS disks, create a second hardware-level RAID 5 array, and then use the "Move to" function to move one server's VMDKs and related files to the new array.
One of the servers is nothing more than an AD domain controller (for 40 people) plus the "redirection" share for Desktops/Documents. I'm thinking I could just add lower-cost 2 TB SATA drives (2x, in a RAID 1 array) and move that VM to the new array.