Looking for some advice on building an ESXi server with about 140 VMs networked to one VM server for a testing lab.
VMs will consist of:
1 - Server 2008 (with MS SQL Enterprise edition; light server activity)
139 - mix of Win 7 and Win XP (584 MB RAM each, 640 x 480 video)
The greatest bottleneck will obviously be I/O (especially during boot-up), therefore boot-ups will be scheduled in a logical sequence.
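The staggered boot-up idea above can be sketched as a simple batching scheme. This is only a sketch: the VM names and batch size are made up, and the actual power-on on an ESXi host would be done separately (for example with `vim-cmd vmsvc/power.on <vmid>` over SSH), not by this code.

```python
def boot_batches(vm_names, batch_size=10):
    """Split a list of VMs into batches so they don't all boot at once,
    spreading the I/O load of a boot storm over time."""
    return [vm_names[i:i + batch_size]
            for i in range(0, len(vm_names), batch_size)]

if __name__ == "__main__":
    # 139 desktop VMs, hypothetical naming scheme
    vms = [f"win7-{n:03d}" for n in range(1, 140)]
    batches = boot_batches(vms, batch_size=10)
    print(len(batches))  # 14 batches of up to 10 VMs
    # In practice: power on one batch, wait a few minutes for disk I/O
    # to settle, then power on the next.
```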
Quad-core i7s with a max of 16 GB RAM each.
1. Since I don't believe it is practical to do this on one ESXi server, how many hosts do you think are necessary?
I am thinking 8 VMs per core, for a total of 32 VMs per host; fewer on the ESXi host
running the Win 2008 server VM.
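Running the numbers from the post, RAM may be the binding constraint before the 8-per-core ratio is. A rough capacity check (the hypervisor overhead figure is an assumption, not VMware guidance):

```python
import math

vm_count     = 139          # Win 7 / XP desktop VMs
vm_ram_mb    = 584          # per-VM RAM from the post
host_ram_mb  = 16 * 1024    # 16 GB per host
overhead_mb  = 2 * 1024     # assumed budget for ESXi + per-VM overhead
cores        = 4
vms_per_core = 8            # the proposed 8:1 consolidation ratio

cpu_bound = cores * vms_per_core                        # 32 VMs/host
ram_bound = (host_ram_mb - overhead_mb) // vm_ram_mb    # 24 VMs/host
per_host  = min(cpu_bound, ram_bound)
hosts     = math.ceil(vm_count / per_host)
print(per_host, hosts)  # 24 VMs per host -> 6 hosts
```

So with 16 GB hosts, the 584 MB VMs hit the RAM ceiling at roughly 24 per host, which would put the estimate nearer 6 hosts than the 4-5 implied by 32 VMs per host.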
2. I will need to network the ESXi servers together. Any special considerations?
3. What do I need to purchase from VMware to accomplish this? ESXi itself is free, but
the free vSphere Client can only manage one ESXi host at a time. Can I manage all the
ESXi servers from VMware Workstation 8? Will I need to purchase anything?
2. I/O is the killer; you may want to consider Fusion-io cards as SSD storage to mitigate boot storms.
3. If using the free version, you will need to set up access to the VMs using RDP and name the VMs accordingly. Otherwise you will need licensed VMware vSphere and VMware View.
You can connect to each server individually using the vSphere Client, but managing more than two servers that way is difficult, though possible. Otherwise you will need to purchase VMware vSphere licenses and vCenter Server.
The biggest issue with the free version is that it's limited to 32 GB of RAM per server.
At 2 GB per Windows 7 VM, you'll only get 16 VMs per server!
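A quick check of that 32 GB cap against both VM sizes mentioned in this thread (the 2 GB figure from this reply and the 584 MB figure from the original post):

```python
free_cap_mb  = 32 * 1024   # free ESXi RAM limit cited above
win7_full_mb = 2 * 1024    # 2 GB Windows 7 VM (this reply)
win7_slim_mb = 584         # 584 MB VM (original post)

print(free_cap_mb // win7_full_mb)  # 16 VMs at 2 GB each
print(free_cap_mb // win7_slim_mb)  # 56 VMs at 584 MB each
```

So the cap bites hard at 2 GB per VM, but at the original poster's 584 MB sizing the 16 GB physical RAM per host runs out well before the 32 GB license cap does.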
Your physical server memory is low; also, make sure the hardware is on the HCL.