kjudd (United States of America) asked:
VMware 1.82TB partition limitation?

I am new, just yesterday, to VMware ESX 3i, so let's get that out there. Our server is a Dell 2950 with a Dell MD1000 direct-attached storage system filled with 15 1TB drives via a PERC 6 card. When I go into the VM Infrastructure client I see the 11TB array but can only use 1.82TB at any one time. I would like to use all of it at once. How do I do this? Thanks, everyone.
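One plausible explanation for the oddly specific 1.82TB figure (an assumption, not something confirmed in this thread): many RAID controllers of this era cap a single virtual disk at 2TB decimal, and the VI client reports sizes in binary terabytes, so a 2TB cap displays as roughly 1.82TB. A quick sanity check in Python:

```python
# Hypothetical: a 2 TB (decimal) virtual-disk cap, as many controllers
# of this era enforce, displayed in binary terabytes (TiB).
decimal_2tb = 2 * 10**12   # 2,000,000,000,000 bytes
tib = 2**40                # one binary terabyte (TiB) in bytes

print(round(decimal_2tb / tib, 2))   # -> 1.82
```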
[Attached screenshot: 4-18-2009-9-05-54-AM.jpg]
ASKER CERTIFIED SOLUTION by 65td (Canada)
kjudd (Asker):
Wasn't expecting that. I figured all these major companies used VMware and it would scale up to 64TB or beyond per VPS. No matter. If I run Windows Server 2003, I can convert the disk to GPT and use 12TB without issue, so my next question is: what's the best virtualization software that I can lay on top of Server 2003?
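For context on why GPT is needed here: an MBR partition table records partition sizes as 32-bit sector counts, so with 512-byte sectors a single MBR partition tops out at 2 TiB, while GPT uses 64-bit counts and can address a 12TB volume. A quick sketch of the arithmetic:

```python
# MBR stores partition sizes as 32-bit sector counts; with 512-byte
# sectors that caps a single partition at 2 TiB. GPT uses 64-bit
# counts, which is why it can address the full 12 TB volume.
sector_bytes = 512
mbr_limit = 2**32 * sector_bytes          # max MBR partition in bytes

print(mbr_limit / 2**40)                  # -> 2.0 (TiB)
print(round(mbr_limit / 10**12, 2))       # -> 2.2 (decimal TB)
```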
Depends on what one is trying to accomplish: production or dev/test environments.
32-bit and/or 64-bit OSes for the virtuals? The choice could affect the host OS.
How much RAM, and how many NICs on the server?

I prefer VMware Server for testing and ESX for production.
Maybe one should look at Microsoft's Hyper-V.

http://www.microsoft.com/windowsserver2008/en/us/hyperv-main.aspx

markzz:
The point to consider here is: do you have a single server which addresses more than 2TB of disk, or do any of your servers address more than a single 2TB disk?
For me, I don't address any one LUN larger than 1TB, but I have 20-odd LUNs. Therefore, if a virtual guest requires more than 1TB of disk, it's not considered suitable for the virtual environment.
Although I'm not very familiar with the Dell hardware, I expect you can create multiple logical volumes at the controller level.
This would, of course, require you to delete all existing logical volumes.
I would suggest you create a 1GB logical disk for ESXi to be installed to, and 15x 1TB logical volumes for your VMFS volumes.
This will also make your server much more manageable.
I must highlight that the controller may well be incapable of keeping up with the IO requirements, but hey, give it a go.
The other point raised here was looking at other virtualisation technologies.
Unfortunately, ESX is, in my opinion, your most scalable and enterprise-level solution.
Xen is getting there; MS's Hyper-V is still a toy, much like VMware's free "VMware Server". The advantage of using "VMware Server" is the ease with which you can migrate to ESX or ESXi.
If you are trying your hand and intend to move forward with virtualising your datacentre, VMware is the obvious choice, but it won't be ESXi for long, as ESXi doesn't offer the failover and HA features of ESX.
Back to your disk:
Break the disk up into logical volumes; it's good practice even in a Windows environment.
 
Agree with the comments posted by markzz.

We use multiple 2TB (1.95TB) LUNs; if a VM needs more disk than the standard OS volume (32GB) and small data drive (72GB), then we utilize RDMs.
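The sizing rule above can be sketched as a simple policy function (the 32GB/72GB thresholds are just the figures from this comment, not any official guideline):

```python
# Sketch of the sizing policy described above: VMs that fit within the
# standard 32 GB OS volume plus 72 GB data volume live on a VMFS LUN;
# anything needing more gets a raw device mapping (RDM) instead.
OS_GB = 32
DATA_GB = 72

def placement(required_gb: int) -> str:
    """Return 'VMFS' for standard-sized VMs, 'RDM' for larger ones."""
    return "VMFS" if required_gb <= OS_GB + DATA_GB else "RDM"

print(placement(100))   # -> VMFS
print(placement(500))   # -> RDM
```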
kjudd (Asker):
Great input, guys. This server will be a production server with multiple uses. Its primary role will be backing up our primary backup server, so that's why I need over 2TB. Since we have to colocate this, I wanted to virtualize it so I can get my money's worth. So if VMware has a 2TB limit, then I can't do the bare-metal version, and looking at their known-issues list I can't say I would want to risk it. Parallels Containers seems expensive, and MS Virtual Server R2 doesn't support multiple CPUs. Does that make sense?