Build your own vSphere whitebox lab server

Hi Guys

Recently, I set up a vSphere lab at home and thought I would share my experience in this article, in the hope it will help other like-minded people.

Why build a whitebox? (A whitebox is a server you build yourself.) Because buying a ready-made vSphere 4 server off eBay, such as an HP DL server, costs more: sellers know it is vSphere capable, so it is priced higher than, say, an ESX 3.5-capable machine. ESX 3.5 is basically the 32-bit version, while vSphere 4 is the 64-bit version.

There are a few community whitebox HCLs for getting the information required on motherboards, network cards and SATA/SAS RAID controllers; I use these to check parts before buying.

The main problem with ESX in general is that ESXi is very finicky about motherboards, NICs and HDD controllers, especially the devices built into the motherboard. Even if your motherboard's NICs and HDD controllers aren't supported, you can disable the built-in devices and purchase specific NICs and HDD controllers from the whitebox HCL.

For instance, with NICs I prefer Intel PRO/1000 GbE PCIe x4 cards: you can get a dual-port Intel server adapter for £40 - £80, and a dual-port card allows NIC teaming, for example.
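To see what teaming across a dual-port card buys you, here is a toy sketch of how vSphere's default teaming policy ("route based on originating virtual port ID") spreads VMs over the uplinks. The port IDs and uplink names are made up for the illustration; real ESX assigns them itself.

```python
# Illustration only: port-ID teaming pins each VM's virtual switch port to one
# physical uplink, so traffic spreads across the team without per-packet
# load balancing. A deterministic mapping such as modulo captures the idea.

def choose_uplink(virtual_port_id: int, uplinks: list) -> str:
    """Pick an uplink for a virtual port, port-ID-teaming style."""
    return uplinks[virtual_port_id % len(uplinks)]

team = ["vmnic0", "vmnic1"]      # the two ports of a dual-port Intel adapter
for port in [16, 17, 18, 19]:    # hypothetical virtual port IDs of four VMs
    print(port, "->", choose_uplink(port, team))
```

With two uplinks, consecutive ports alternate between vmnic0 and vmnic1, so each VM keeps a stable path while the team as a whole shares the load.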

It's also a good idea to give your vSphere server multiple GbE NICs, because you really need a SAN doing iSCSI for advanced vSphere tasks like vMotion and DRS, and these do work in the 60-day evaluation version of vSphere 4.1.

Also, to do these advanced vSphere tasks you need a SAN for iSCSI. I use Openfiler for that, as it can do NFS/CIFS/iSCSI, and a vSphere cluster needs iSCSI paths shared by all the cluster members. In a vSphere cluster the members need to be identical, so what I did to achieve that was to run virtual vSphere servers inside a physical vSphere server: the virtual vSphere servers are identical because they are both VMs, which means you can build a vSphere cluster.
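The "members need to be identical" point is worth making concrete: for vMotion you want matching CPU models and feature sets on every host, and nested ESX hosts cloned from the same VM pass that trivially. A toy check might look like this (the field names are invented for the illustration):

```python
# Toy homogeneity check: real vMotion compatibility is about CPU family and
# feature flags; nested ESX hosts built from one template match by design.

def hosts_are_homogeneous(hosts: list) -> bool:
    """True if every host reports the same CPU model and feature set."""
    reference = (hosts[0]["cpu_model"], frozenset(hosts[0]["cpu_features"]))
    return all((h["cpu_model"], frozenset(h["cpu_features"])) == reference
               for h in hosts)

nested_a = {"cpu_model": "Xeon E5520", "cpu_features": ["vmx", "sse4_2"]}
nested_b = {"cpu_model": "Xeon E5520", "cpu_features": ["sse4_2", "vmx"]}
print(hosts_are_homogeneous([nested_a, nested_b]))  # True: safe to cluster
```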

A vSphere server really needs a lot of RAM. VMware does have memory-overcommit technology (transparent page sharing), where identical pages of physical RAM are stored once and shared between VMs while each VM still sees its own virtual RAM; this means you can allocate more vRAM than you have physical RAM. Even so, more physical RAM will always help.
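Some back-of-the-envelope arithmetic shows what overcommit means in a lab like this. The VM names and sizes below are hypothetical; real savings from page sharing depend on the workloads.

```python
# Rough overcommit arithmetic: total vRAM handed out vs physical RAM fitted.
# All figures are illustrative, not measurements.

physical_ram_gb = 24
vms = {"dc01": 4, "esx-nested1": 8, "esx-nested2": 8,
       "openfiler": 2, "vcenter": 4}

allocated = sum(vms.values())          # total vRAM granted to VMs
ratio = allocated / physical_ram_gb    # overcommit ratio
print(f"allocated {allocated} GB on {physical_ram_gb} GB physical "
      f"-> {ratio:.2f}:1 overcommit")
```

A mild ratio like this (about 1.08:1 here) is usually fine in a lab; push it much higher and the host starts ballooning and swapping, which is exactly why more physical RAM always helps.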

iSCSI really needs a separate switch (a dedicated iSCSI switch) so that storage traffic is not degraded by other network activity; that's another reason vSphere servers need multiple NICs. You can use the same switch for vSphere Client management traffic, but this is not recommended.
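A quick sum makes the case for the dedicated switch: on a shared GbE link, whatever the other traffic takes comes straight out of the storage budget. The traffic figures below are purely illustrative.

```python
# Why iSCSI wants its own link: shared vs dedicated bandwidth, in round
# numbers. 1000 Mbps is the nominal gigabit line rate; the management/VM
# traffic figure is a made-up example.

gbe_line_rate_mbps = 1000
other_traffic_mbps = 300   # hypothetical vSphere Client + VM network load

shared_leftover = gbe_line_rate_mbps - other_traffic_mbps
dedicated = gbe_line_rate_mbps
print(f"shared link leaves {shared_leftover} Mbps for iSCSI; "
      f"a dedicated link keeps the full {dedicated} Mbps")
```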

For instance, I bought an Intel S5520 dual-socket LGA 1366 motherboard with a quad-core Xeon for £400, when the CPU alone was £419 new. This server motherboard can take a second Xeon and at least 48 GB of RAM. So definitely look around, as this can save you hundreds of £.

Having multiple physical vSphere servers also helps when learning advanced subjects like HA and vMotion, as these two features move VMs between vSphere servers over the iSCSI path, so a VM is not affected if its vSphere server starts to shut down.

I hope this helps people who are trying to build a vsphere lab.

Comments (2)

I recently installed ESXi on my home PC using regular desktop PC components and had good results.

My system is as follows:

AMD Phenom II X4 955 BE
CoolerMaster CM690 Case
Intel PRO/1000 GT NIC
2 GB USB no-name flash drive (to install ESXi)
WD 120 GB Sata Drive (used for storage)

The ESXi software (4.1) would not install without the Intel NIC. The on-board NIC is not recognized by the ESXi installer, so the installation would crash with an error each time I tried. The Intel card is a must for this system - unless you go through a software workaround and find a patched file for the onboard NIC. The easiest way is to get the Intel card.

I had no need to turn off any devices in the BIOS, and was able to use my wireless Logitech keyboard and USB mouse.

Installation was quite easy: pop in the 4.1 installable CD and boot from the optical drive.
The ESXi installer did its thing, and when asked where I wanted to install the software, I chose the USB flash drive.

The rest of the installation went without hiccups, and I was able to run ESXi on the machine for a few days.

I did this just for testing purposes, and to make sure that the hardware was compatible. I plan to build a machine with similar specs to use as a dedicated ESXi home-lab server.


Yes, my first ESX server was a Core 2 Quad with 8 GB RAM, and its internal ICH9 SATA ports worked; it was just the Realtek NIC that didn't. You can get a Realtek NIC working if you trawl Google. ICH10 works in ESX too.
