
Build your own vSphere whitebox lab server

Hi guys,

Recently, I set up a vSphere lab at home and thought I would share my experience in this article, in the hope it will help other like-minded people.

Why build a whitebox (a whitebox is a server you build yourself)? Because buying a ready-made vSphere 4-capable server off eBay, such as an HP DL-series box, is expensive: sellers know it is vSphere capable and price it accordingly, whereas an ESX 3.5-capable server is much cheaper. The difference is that ESX 3.5 is essentially the 32-bit generation, while vSphere 4 is the 64-bit one and needs 64-bit hardware.

There are a few whitebox HCLs (hardware compatibility lists) that carry the required information on motherboards, network cards, and SATA/SAS RAID controllers. I use http://www.vm-help.com/esx40i/esx40_whitebox_HCL.php

The main problem with ESX in general, and ESXi in particular, is that it is very finicky about motherboards, NICs, and disk controllers, especially devices built into the motherboard. Even if your motherboard's onboard NICs and disk controllers are not supported, you can disable the built-in devices in the BIOS and buy specific NICs and disk controllers from the whitebox HCL instead.
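If the box will already boot an ESXi 4.x install, a quick way to check what was actually detected is from the console in tech support mode. These are standard ESXi 4.x commands; the output is host-specific:

    # List the physical NICs the VMkernel detected (device, driver, link state)
    esxcfg-nics -l

    # List the storage adapters (HBAs) and the drivers bound to them
    esxcfg-scsidevs -a

Any onboard NIC or controller missing from these lists has no working driver, which is exactly what checking the whitebox HCL first helps you avoid.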

For NICs, I prefer Intel PRO/1000 GbE PCIe x4 cards: you can get a dual-port Intel server adapter for £40 - £80, and a dual-port card allows NIC teaming (bonding two links for throughput and failover), for instance.
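As a minimal sketch of teaming the two ports (assuming the dual-port card shows up as vmnic1 and vmnic2; the names on your host may differ), you attach both as uplinks to the same vSwitch from the ESXi console:

    # Create a vSwitch and link both ports of the dual-port card as uplinks
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic1 vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1

    # Confirm both uplinks appear against the vSwitch
    esxcfg-vswitch -l

The load-balancing and failover policy for the team is then set on the vSwitch properties in the vSphere Client.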

It's also a good idea to give your vSphere server multiple GbE NICs, because you really need a SAN serving iSCSI for advanced vSphere features like vMotion and DRS, both of which work in the 60-day evaluation version of vSphere 4.1.
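Here is a sketch of what one of those extra NICs is for: a dedicated VMkernel port carrying only iSCSI traffic. The vSwitch name, uplink, and IP addresses below are hypothetical; substitute your own:

    # Separate vSwitch with its own uplink, dedicated to iSCSI
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic3 vSwitch2
    esxcfg-vswitch -A iSCSI vSwitch2

    # VMkernel interface on the iSCSI port group
    esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 iSCSI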

For these advanced tasks you need a SAN for iSCSI, and I use Openfiler, since it can serve NFS, CIFS, and iSCSI, and every member of a vSphere cluster needs a path to the same iSCSI storage. Cluster members also need to be identical, so what I did was run virtual vSphere servers inside a physical vSphere server: the nested hosts are identical because they are both VMs, which lets you build a vSphere cluster.
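To connect the hosts to the Openfiler target, the software iSCSI initiator has to be enabled on each one. A minimal sketch from the ESXi 4.x console:

    # Enable the software iSCSI initiator, then rescan for targets
    esxcfg-swiscsi -e
    esxcfg-swiscsi -s

The Openfiler box's IP address is then added as a dynamic discovery address on the iSCSI adapter in the vSphere Client, followed by another rescan. One caveat for the nested setup: for the virtual vSphere servers' network traffic to pass, the port group they sit on generally needs promiscuous mode enabled, a well-documented requirement for nested ESX.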

A vSphere server really needs a lot of RAM. VMware does have memory-overcommit technology (transparent page sharing), where identical pages of memory are shared between VMs: each VM sees its own virtual RAM without owning its own physical copy, which means you can allocate more vRAM than there is physical RAM. But more physical RAM will always help.
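As a rough worked example with made-up numbers: a host with 24 GB of physical RAM might run six VMs at 6 GB of vRAM each, 36 GB in total, a 1.5:1 overcommit. That holds up while the VMs share enough identical pages and don't all touch their full allocation at once; when they do, the host has to balloon or swap and performance falls off sharply, which is why more physical RAM always helps.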

iSCSI really needs a separate switch (a dedicated iSCSI switch) so that storage traffic is not degraded by other network activity, which is another reason vSphere servers need multiple NICs. You can use the same switch for vSphere Client management traffic as well, but this is not recommended.
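Once the iSCSI VMkernel port is up on the dedicated switch, the path to the SAN can be checked from the console with vmkping, which pings via the VMkernel interfaces rather than the management network (the Openfiler address here is hypothetical):

    # Verify the storage network path from the VMkernel
    vmkping 192.168.10.20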

For instance, I bought an Intel S5520 dual-socket LGA1366 motherboard with a quad-core Xeon for £400, at a time when the CPU alone cost £419 new. This server board can take a second Xeon and at least 48 GB of RAM. So definitely look around, as this can save you hundreds of pounds.

Having multiple physical vSphere servers also helps with learning advanced subjects like HA and vMotion, as both move VMs between vSphere servers over the shared iSCSI storage, so a VM is not affected if one vSphere server has to shut down.
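vMotion is enabled per VMkernel interface, normally via a checkbox in the vSphere Client; the rough console equivalent on ESXi 4.x, assuming your vMotion interface is vmk1 (hypothetical here), is:

    # Mark vmk1 as the vMotion-enabled VMkernel interface
    vim-cmd hostsvc/vmotion/vnic_set vmk1

Both hosts' vMotion interfaces need to reach each other on the same network for a migration to succeed.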

I hope this helps people who are trying to build a vSphere lab.
Author: IanTh