VMware ESXi 3.5 iSCSI Deployment Best Practice

Hi All,

I'm about to deploy VMware ESXi 3.5 on 2 servers which will share the SAN over iSCSI (2x teamed Gigabit Ethernet cables per server).

Dell PowerVault MD3000
10x 300 GB SAS 15k rpm
2x Dual port Gigabit Ethernet NIC (4x in total)

Dell PowerEDGE 2950-III
2x Intel Quad Core E5410
32 GB DDR-II 667 MHz
internal 4x 500 GB SATA 7200 rpm HDDs (RAID 5) - I know they are slow for hosting the VMDKs
Internal USB slot on the motherboard (but no USB flash disk?)

Here is the diagram: http://img25.imageshack.us/my.php?image=vmlan.jpg
Please let me know if this makes sense and follows best practice.

And the last thing: as I've got a spare 1 TB on the internal RAID 10 SATA drives, any ideas what I should do with it apart from installing the 32 MB ESXi image?
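As a quick sanity check on that capacity figure (a sketch only: note the hardware list above says RAID 5, while the "spare 1 TB" figure matches RAID 10 of 4x 500 GB):

```python
# Rough usable capacity of the internal 4x 500 GB SATA array,
# ignoring formatting overhead. RAID levels as discussed in the thread.
def usable_gb(disks: int, size_gb: int, level: str) -> int:
    if level == "raid5":
        return (disks - 1) * size_gb   # one disk's worth of parity
    if level == "raid10":
        return (disks // 2) * size_gb  # mirrored pairs, then striped
    raise ValueError(level)

print(usable_gb(4, 500, "raid5"))   # 1500 GB
print(usable_gb(4, 500, "raid10"))  # 1000 GB, i.e. the "spare 1 TB"
```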


So you have 2 host servers and a SAN.
Using ESXi without VirtualCenter, and therefore without vMotion, VCB and HA, may be seen as selling your solution short.
I can understand there are always financial constraints, but ESX Foundation will give you VCB; I'm not sure whether the Foundation license also gives you vMotion.
Anyway, let's not debate your solution; I'm really just prompting some thought about what a corporate virtualisation solution is. You have the capability to supply hardware-independent infrastructure.

Looking at your diagram, which I must say is very easy to read, you're really on the right track.
I must ask: how many physical NIC ports do you have per server, and across how many NIC cards?

It appears you are intending to run your iSCSI traffic over the same NICs as your guest traffic. If so, this is not ideal and may result in NIC contention.
I'd suggest you look at a minimum of 6 NIC ports, ideally across 3 NIC cards:
2 ports dedicated to Service Console and vMotion traffic (vSwitch0)
2 ports dedicated to IP storage traffic, i.e. iSCSI (vSwitch1)
2 ports dedicated to guest traffic (vSwitch2)
vSwitch0 will need to see vSwitch2, and if you can't VLAN, they can share the same network segment.
vSwitch1 should be either VLANed or sit on its own switch (or pair); it does not need a gateway and is best without one.
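As a sketch of what the dedicated iSCSI vSwitch looks like on the ESX command line (the vmnic numbers and the 192.168.10.0/24 subnet are assumptions, not from this thread; on ESXi 3.5 you would run the equivalent vicfg-* commands through the Remote CLI):

```sh
# Sketch only - check your actual NIC names with: esxcfg-nics -l
esxcfg-vswitch -a vSwitch1                # create the IP-storage vSwitch
esxcfg-vswitch -L vmnic2 vSwitch1         # link the first dedicated port
esxcfg-vswitch -L vmnic3 vSwitch1         # link the second dedicated port
esxcfg-vswitch -A iSCSI vSwitch1          # add a port group for iSCSI
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 iSCSI   # VMkernel port, example IP
```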
There are more details which should be covered off. For now, could you explain a little more about your hardware capabilities, and whether you intend to consider this advice or whether it's just not within budget? If you see what I mean, there's no point getting further into it if you can't do it.
Consider this: if you don't spend now, you likely will later, and if you intend to utilise your hosts heavily, later will likely be sooner than you think. With 8x 2.3 GHz cores and 32 GB of RAM per host you will have some reasonable processing capability.
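To put a rough number on that capability (the per-VM sizing and hypervisor overhead figures below are assumptions for illustration, not from this thread):

```python
# Back-of-envelope consolidation estimate for one PowerEdge 2950-III host.
host_ram_gb = 32
host_cores = 8
esx_overhead_gb = 2          # assumed reservation for the hypervisor itself
vm_ram_gb = 2                # assumed average guest size

vms_by_ram = (host_ram_gb - esx_overhead_gb) // vm_ram_gb
print(vms_by_ram)            # 15 guests before RAM runs out
# At that count, single-vCPU guests give roughly a 2:1 vCPU-to-core
# ratio, which is comfortable for mixed general-purpose workloads.
```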
As for the additional disks in your PowerEdge: I'd suggest buying some 160 GB SATA drives to replace them and putting the 500 GB SATA disks into your Dell PowerVault MD3000 as second-tier storage.
Basically, you can house images and templates there that don't require much performance.
Oh, a word on that: consider that your ESX hosts can only queue so many I/O requests. If the performance of your SATA tier is very poor, the entire ESX host will suffer, and therefore all guests will suffer.
Oh, another thought: if you can use NFS rather than iSCSI you'll likely get better storage performance, or so I've read.

I don't know if I can get in trouble for saying this, but I'm also in NSW; I could consult!
Paul Solovyovsky, Senior IT Advisor, commented:
markzz has provided a good summation of what you need. I would like to add that you may want to rethink the unmanaged switches in favour of at least web-managed Gigabit switches. This will allow you to aggregate bandwidth on the SAN and, depending on your IOPS, may also allow you to use LACP for outbound load balancing or Cisco EtherChannel (L3 switching needed) for inbound/outbound link aggregation.
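For reference, the switch side of such a team might look like the following Cisco sketch (port numbers and the VLAN ID are assumptions; note also that ESX 3.5's "route based on IP hash" teaming requires a static EtherChannel, not LACP):

```
! Two switch ports facing one ESX host's teamed uplinks (illustrative)
interface range GigabitEthernet0/1 - 2
 switchport mode access
 switchport access vlan 20      ! example iSCSI VLAN
 channel-group 1 mode on        ! static EtherChannel ("mode on", no LACP)
```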

Since you don't have VirtualCenter, the extra 1 TB can be used as local storage for test virtual machines, backup/restore scenarios, and template VMs (without VC you can't deploy them as easily, but you can create VMs and use vConverter as a poor man's deployment tool), as well as for storing ISOs.

kumarnirmal commented:
I suggest that you take a look at vCenter Foundation Edition, which allows you to manage up to 3 ESX hosts and also gives you 24x7x365 uptime features such as vMotion, Storage vMotion, DRS and HA.
jjoz (Author) commented:
To Mark,

"I must ask how many physical NIC ports do you have per server, and across how many NIC cards."
 -- One add-in card per server: the Dell PowerEdge 2950-III comes with 2x Broadcom integrated Gigabit Ethernet ports, and I added an Intel dual-port Gigabit NIC, for a total of 4 ports per server.

"It does appear you are intending on running your iSCSI Traffic over the same NIC's as your guest traffic. If so this is not ideal and may result in NIC contention."
 -- Yes, I currently implement this with just an ordinary 24-port unmanaged ProCurve Gigabit switch.

As per your suggestion of having:
"2 ports dedicated to the Service Console and vMotion traffic (vSwitch0)
2 ports dedicated to IP storage traffic, i.e. iSCSI (vSwitch1)
2 ports dedicated to guest traffic (vSwitch2)"

I've colour-coded the SAN traffic blue and green; the red lines are for management console access. Perhaps I can just remove all of the red lines (no dedicated management console) and make another pair for guest traffic from the network into the servers?

Thanks for the reply Mark

And to paulsolov: you could be right. I'll try to get a managed Gigabit switch so that I can do NIC teaming and create VLANs to make my colour coding come true :-)

And to kumarnirmal:
Is vCenter Foundation Edition something that comes as freeware?

When I mentioned dedicating 2 NIC ports per vSwitch, I meant they should be separated into VLANs, or as a minimum you should separate your IP storage traffic.
Forgive me if I have misunderstood, but I thought the point of your diagram showing lines of differing colours was that each line represented a UTP cable and therefore a corresponding NIC port.

On the VLANing side: the ProCurve 1800-24G can do VLANs and link aggregation, but it can't route. You will need the ProCurve 2800 series or better to get routing, or use a separate router.
Maybe two of the ProCurves, so you will have redundancy in your network paths too.

It's a pity you don't have another dual-port Gigabit NIC in each host; then you could implement both redundancy and separation of function.

jjoz (Author) commented:
Alright Mark,

Now it's all clear that I should have these connections as a minimum, each configured as a separate VLAN:

1x management
1x production traffic

This way, I still have one free NIC on each server and on the SAN.
jjoz (Author) commented:

OK, in that case I'd like to simplify the diagram again: I'll use 2x direct patch-cable connections from each server to the SAN, and just leave production access as one cable to the unmanaged switch, while the two server-to-SAN pairs run on their own subnet.
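An example of what that separate SAN subnet might look like (all addresses below are illustrative assumptions, not from the thread):

```python
# Illustrative addressing for the direct-attached iSCSI links:
# every SAN endpoint sits in one non-routed subnet, kept apart
# from the production LAN.
import ipaddress

san = ipaddress.ip_network("192.168.130.0/24")
plan = {
    "esx1-vmk-iscsi": san[11],    # VMkernel port on server 1
    "esx2-vmk-iscsi": san[12],    # VMkernel port on server 2
    "san-ctrl0":      san[101],   # storage controller port A
    "san-ctrl1":      san[102],   # storage controller port B
}
for name, ip in plan.items():
    assert ip in san              # everything stays on the SAN subnet
    print(f"{name}: {ip}")
```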

Please find the following final diagram:

thanks for all of your comments guys.

jjoz (Author) commented:
Thanks to all for your suggestions, it really helps me a lot.

jjoz (Author) commented:
To All,

This is my updated deployment plan, creating a separate subnet for the SAN without the use of a managed switch.

Please let me know if there is any issue with it.

