VMware ESXi 3.5 iSCSI Deployment best practice

Posted on 2009-04-11
Medium Priority
Last Modified: 2013-11-14

Hi All,

I'm about to deploy VMware ESXi 3.5 on 2 servers which will share the SAN using iSCSI (2x Gigabit Ethernet, teamed).

Dell PowerVault MD3000
10x 300 GB SAS 15k rpm
2x Dual port Gigabit Ethernet NIC (4x in total)

Dell PowerEDGE 2950-III
2x Intel Quad Core E5410
32 GB DDR-II 667 MHz
internal 4x 500 GB SATA 7200 rpm HDD (RAID 5) - I know it is slow for hosting the VMDKs
Internal USB slot on the motherboard (but no USB flash disk?)

Here is the diagram: http://img25.imageshack.us/my.php?image=vmlan.jpg
Please let me know if this makes sense and follows best practice.

And the last thing: as I've got a spare 1 TB on the internal RAID10 SATA drives, any ideas on what I should do with it apart from installing the ~32 MB ESXi?

Question by:jjoz

Accepted Solution

markzz earned 1600 total points
ID: 24126271
So you have 2 Host servers and a SAN.
Using ESXi without VirtualCenter, and therefore without vMotion, VCB and HA, may be seen as selling your solution short.
I can understand there are always financial constraints, but ESX Foundation will give you VCB; I'm not sure if the Foundation license also gives you vMotion.
Anyway, let's not debate your solution. I'm really just prompting some thought about what a corporate virtualisation solution is: you have the capability to supply hardware-independent infrastructure.

Looking at your diagram, which I must say is very easy to read, you're really on the right track.
I must ask how many physical NIC ports do you have per server, and across how many NIC cards.

It does appear you are intending to run your iSCSI traffic over the same NICs as your guest traffic. If so, this is not ideal and may result in NIC contention.
I'd suggest you look at a minimum of 6 NIC ports, ideally spread over 3 NIC cards:
2 ports would be dedicated to Service Console and vMotion traffic (vSwitch0).
2 ports would be dedicated to IP storage traffic, i.e. iSCSI (vSwitch1).
2 ports would be dedicated to guest traffic (vSwitch2).
vSwitch0 will need to see vSwitch2; if you can't VLAN, they can share the same network segment.
vSwitch1 should be either VLANed or sit on its own switch (or pair); it does not need a gateway and is best without one.
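As a sketch, on a classic ESX service console that split could be wired up roughly as below (the vmnic numbers and the VMkernel IP are assumptions for illustration; on ESXi 3.5 you would use the remote CLI's vicfg- equivalents instead):

```shell
# Sketch only: assumed vmnic numbering; adjust to your hardware.
# vSwitch0 (Service Console + vMotion) usually already exists with vmnic0.
esxcfg-vswitch -L vmnic1 vSwitch0          # add a second uplink to vSwitch0

# vSwitch1: dedicated IP storage (iSCSI)
esxcfg-vswitch -a vSwitch1                 # create the vSwitch
esxcfg-vswitch -A "iSCSI" vSwitch1         # add a port group for storage
esxcfg-vswitch -L vmnic2 vSwitch1          # attach both storage uplinks
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 "iSCSI"   # VMkernel port (example IP)

# vSwitch2: guest traffic
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -A "VM Network 2" vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
```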
There are more details which should be covered off. For now, could you explain a little more about your hardware capabilities, and whether you intend to take this advice or whether it's just not within budget?
If you see what I mean, there's no point getting further into it if you can't do it.
Consider this.
If you don't spend now, you likely will later, and if you intend to utilise your hosts heavily, "later" will likely be sooner than you think. With 8x 2.3 GHz cores and 32 GB of RAM per host you will have some reasonable processing capability.
As for the additional disks in your PowerEdge: I'd suggest buying some 160 GB SATA drives to replace them and putting the 500 GB SATA disks into your Dell PowerVault MD3000 as second-tier storage.
Basically you can house images and templates with low performance requirements there.
Oh, a word on that: consider that your ESX hosts can only queue so many IO requests. If the performance of your SATA tier is very poor, the entire ESX host will suffer, and therefore all guests will suffer.
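The queue pressure described above can be sketched with Little's law: outstanding IOs = arrival rate x service time. The IOPS and latency figures below are illustrative assumptions, not measurements, but they show how a slow tier ties up many more queue slots for the same workload:

```python
# Sketch (Little's law): outstanding IOs = IOPS x per-IO latency.
# Illustrates why a slow SATA tier can exhaust a host's fixed IO queue depth.
def outstanding_ios(iops: float, latency_s: float) -> float:
    """Average number of in-flight IOs needed to sustain the given rate."""
    return iops * latency_s

fast = outstanding_ios(2000, 0.005)   # healthy 15k SAS-ish tier: ~5 ms latency
slow = outstanding_ios(2000, 0.050)   # overloaded SATA tier: ~50 ms latency
print(fast, slow)  # the slow tier needs ~10x the in-flight IOs for the same load
```

With a typical per-adapter queue depth in the low tens, the slow tier in this sketch would saturate the queue and stall IO for every guest on the host.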
Oh, another thought: if you can use NFS rather than iSCSI you'll likely get better storage performance, or so I've read.

Expert Comment

ID: 24126299
I don't know if I can get in trouble for saying this but I'm also in NSW, I could consult !

Assisted Solution

by:Paul Solovyovsky
Paul Solovyovsky earned 200 total points
ID: 24126369
markzz has provided a good summary of what you need.  I would like to add that you may want to rethink the unmanaged switches and get at least web-managed Gigabit switches.  This will allow you to aggregate bandwidth on the SAN and, depending on your IOPS, may also allow you to use LACP for outbound load balancing or Cisco EtherChannel (L3 switching needed) for inbound/outbound link aggregation.
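On the load-balancing point: hash-based aggregation (LACP/EtherChannel with an IP-hash style policy) pins each flow to one physical link, so it spreads many flows across links rather than speeding up any single stream. A rough sketch of the idea (the XOR-mod hash and the addresses are illustrative, not the exact algorithm any particular switch uses):

```python
# Sketch: how IP-hash link aggregation picks a physical uplink per flow.
import ipaddress

def uplink_for_flow(src_ip: str, dst_ip: str, n_links: int) -> int:
    """XOR the two addresses and take the result modulo the link count,
    roughly how IP-hash teaming spreads flows across uplinks."""
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return (s ^ d) % n_links

# One flow always maps to the same link (no single-stream speedup)...
assert uplink_for_flow("10.0.0.5", "10.0.0.20", 2) == uplink_for_flow("10.0.0.5", "10.0.0.20", 2)
# ...but different destination addresses can land on different links.
links = {uplink_for_flow("10.0.0.5", f"10.0.0.{i}", 2) for i in range(1, 20)}
print(sorted(links))  # prints [0, 1]: flows spread over both uplinks
```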

Since you don't have VirtualCenter, the extra 1 TB can be used as local storage for testing virtual machines, backup/restore scenarios, and virtual machines to be used as templates (since you don't have VC you can't deploy them as easily, but you can create VMs and use vConverter as a poor man's deployment tool), as well as using the space for ISOs.


Assisted Solution

kumarnirmal earned 200 total points
ID: 24126970
I suggest that you take a look at vCenter Foundation Edition, which allows you to manage up to 3 ESX hosts and also gives you 24x7x365 uptime features like vMotion, Storage vMotion, DRS and HA.

Author Comment

ID: 24127786
To Mark,

"I must ask how many physical NIC ports do you have per server, and across how many NIC cards."
 -- One add-in card per server: my Dell PowerEDGE 2950-III comes with 2x Broadcom integrated Gigabit Ethernet ports, plus I added an Intel dual-port Gigabit NIC for another 2 ports (total of 4 ports per server).

"It does appear you are intending on running your iSCSI Traffic over the same NIC's as your guest traffic. If so this is not ideal and may result in NIC contention."
 -- The answer is yes; I implement this solution using just an ordinary ProCurve 24-port unmanaged Gigabit switch.

as per your suggestion of having:
"2 port would be dedicated to the Service Console and vMotion traffic. vSwitch0
2 ports would be dedicated to IP Storage traffic (iSCSI) vSwitch1
2 ports would be dedicated to guest traffic. vSwitch2

I've colour-coded the SAN traffic blue and green, while the red lines are for management console access. In this case, perhaps I can just remove all of the red lines (no dedicated management console) and make another pair for guest traffic from the network into the servers?

Thanks for the reply Mark

And to paulsolov: you could be right. I'll try to get a managed Gigabit switch so that I can do NIC teaming and create VLANs to make my colour coding come true :-)

and to kumarnirmal,
Is vCenter Foundation Edition something that comes as freeware?


Assisted Solution

markzz earned 1600 total points
ID: 24129894
When I mentioned dedicating 2 NIC ports per vSwitch, I meant they should be separated into VLANs or, as a minimum, you should separate out your IP storage traffic.
Forgive me if I have misunderstood, but I thought the point of your diagram showing lines of differing colours was that each line represented a UTP cable and therefore a corresponding NIC port.

On the VLANing side: the ProCurve 1800-24G can VLAN and EtherChannel, but it can't route. You will need to buy the ProCurve 2800 or better to get routing, or use a separate router.
Maybe 2 of the ProCurves, so you will have redundancy in network paths too.

It's a pity you don't have another dual-port Gigabit NIC in each host; then you could implement both redundancy and separation of function.


Author Comment

ID: 24133771
Alright Mark,

Now it's all clear that I should have these connections as a minimum, with each configured as a separate VLAN:

1 Management
1 Production traffic

This way, I still have one free NIC on each server and on the SAN.

Author Comment

ID: 24134438

OK, in this case I'd like to simplify the diagram again so that I can use 2x direct patch-cable connections to the SAN from each server, and just leave the production access as one cable to the unmanaged switch, while those two SAN-to-server pairs run on their own subnet.
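The separate-subnet idea can be sanity-checked with a short sketch; the subnet values below are assumptions for illustration, not the actual addressing plan:

```python
# Sketch: verify the direct-attached SAN links sit on their own subnets,
# isolated from the production LAN. All ranges are example values.
import ipaddress

production = ipaddress.ip_network("192.168.1.0/24")   # assumed production LAN
san_a      = ipaddress.ip_network("10.10.10.0/30")    # server 1 <-> SAN link
san_b      = ipaddress.ip_network("10.10.20.0/30")    # server 2 <-> SAN link

for san in (san_a, san_b):
    # A /30 leaves exactly two usable addresses: one host port, one SAN port.
    assert not san.overlaps(production), f"{san} collides with production"
print("SAN subnets are isolated from the production LAN")
```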

Please find the following final diagram:

thanks for all of your comments guys.


Author Closing Comment

ID: 31569273
Thanks to all for your suggestions, it really helped me a lot.


Author Comment

ID: 24190636
To All,

This is my updated deployment plan, creating a separate subnet for the SAN without the use of a managed switch.

Please let me know if there are any issues with it.


