Solved

VMware ESXi 3.5 iSCSI Deployment best practice

Posted on 2009-04-11
10
5,007 Views
Last Modified: 2013-11-14

Hi All,

I'm about to deploy VMware ESXi 3.5 on 2 servers which will share the SAN over iSCSI (2x Gigabit Ethernet teamed cables).

Specs:
Dell PowerVault MD3000
10x 300 GB SAS 15k rpm
2x Dual port Gigabit Ethernet NIC (4x in total)

Dell PowerEDGE 2950-III
2x Intel Quad Core E5410
32 GB DDR-II 667 MHz
Internal 4x 500 GB SATA 7200 rpm HDDs (RAID 5) - I know they are slow for hosting the VMDKs
Internal USB slot on the motherboard (but no USB flash disk?)

Here is the diagram: http://img25.imageshack.us/my.php?image=vmlan.jpg
Please let me know whether this makes sense and follows best practice.

And the last thing: as I've got a spare 1 TB on the internal RAID 10 SATA drives, any idea what I should do with it apart from installing the 32 MB ESXi?

thanks.
VM-LAN.jpg
0
Comment
Question by:jjoz
10 Comments
 
LVL 8

Accepted Solution

by:
markzz earned 400 total points
ID: 24126271
So you have 2 Host servers and a SAN.
Using ESXi without VirtualCenter, and therefore without vMotion, VCB and HA, may be seen as selling your solution short.
I can understand there are always financial constraints, but ESX Foundation will give you VCB; I'm not sure if the Foundation license also gives you vMotion.
Anyway, let's not debate your solution; I'm really just prompting some thought about what a corporate virtualisation solution is. You have the capability to supply hardware-independent infrastructure.

Looking at your diagram, which I must say is very easy to read, you're really on the right track.
I must ask: how many physical NIC ports do you have per server, and across how many NIC cards?

It does appear you are intending to run your iSCSI traffic over the same NICs as your guest traffic. If so, this is not ideal and may result in NIC contention.
I'd suggest you should be looking at a minimum of 6 NIC ports, ideally over 3 NIC cards:
2 ports would be dedicated to the Service Console and vMotion traffic - vSwitch0
2 ports would be dedicated to IP storage traffic (iSCSI) - vSwitch1
2 ports would be dedicated to guest traffic - vSwitch2
vSwitch0 will need to see vSwitch2, and if you can't VLAN, they can share the same network segment.
vSwitch1 should either be VLANed or sit on its own switch (or pair of switches); it does not need a gateway and is best without one.
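A rough command-line sketch of that layout (the vmnic numbers, port group names and 10.0.x.x addresses are only placeholders; on ESXi 3.5 you'd use the equivalent vicfg-* remote CLI commands, and vSwitch0 normally already exists after install):

  # vSwitch0 - Service Console / VMotion (assumes vmnic0 and vmnic1)
  esxcfg-vswitch -L vmnic0 vSwitch0
  esxcfg-vswitch -L vmnic1 vSwitch0
  esxcfg-vswitch -A "VMotion" vSwitch0
  esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 "VMotion"

  # vSwitch1 - IP storage (iSCSI) only, on its own subnet with no gateway
  esxcfg-vswitch -a vSwitch1
  esxcfg-vswitch -L vmnic2 vSwitch1
  esxcfg-vswitch -L vmnic3 vSwitch1
  esxcfg-vswitch -A "iSCSI" vSwitch1
  esxcfg-vmknic -a -i 10.0.2.11 -n 255.255.255.0 "iSCSI"

  # vSwitch2 - guest / production traffic
  esxcfg-vswitch -a vSwitch2
  esxcfg-vswitch -L vmnic4 vSwitch2
  esxcfg-vswitch -L vmnic5 vSwitch2
  esxcfg-vswitch -A "Production" vSwitch2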
There are more details which should be covered off. For now, could you explain a little more about your hardware capabilities, and whether you intend to consider this advice or whether it's just not within budget?
If you see what I mean, there's no point getting further into it if you can't do it.
Consider this.
If you don't spend now, you likely will later. If you intend to utilise your hosts heavily, later will likely be sooner than you think. With 8x 2.3 GHz cores and 32 GB of RAM per host you will have some reasonable processing capability.
As for the additional disks you have in your PowerEdge, I'd suggest buying some 160 GB SATA drives to replace them and putting the 500 GB SATA disks into your Dell PowerVault MD3000 as second-tier storage.
Basically, you can house images and templates there that don't need much performance.
Oh, a word on that: remember your ESX hosts can only queue so many IO requests. If the performance of your SATA is very poor, the entire ESX host will suffer and therefore all guests will suffer.
Oh, another thought: if you can use NFS rather than iSCSI you'll likely get better storage performance, or so I've read.
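If the NFS route were taken, mounting an export as a datastore is a one-liner from the host (the server name and export path below are made up for illustration):

  # Mount an NFS export as a datastore, then list NFS mounts to confirm
  esxcfg-nas -a -o nfs-filer.local -s /vol/vmstore NFS-Datastore1
  esxcfg-nas -l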
 
0
 
LVL 8

Expert Comment

by:markzz
ID: 24126299
Hmm,
I don't know if I can get in trouble for saying this, but I'm also in NSW - I could consult!
0
 
LVL 42

Assisted Solution

by:paulsolov
paulsolov earned 50 total points
ID: 24126369
markzz has provided a good summation of what you need. I would like to add that you may want to rethink the unmanaged switches in favour of at least gigabit web-managed switches. This will allow you to aggregate bandwidth on the SAN and, depending on your IOPS, may also allow you to use LACP for outbound load balancing or Cisco EtherChannel (L3 switching needed) for inbound/outbound link aggregation.

Since you don't have VirtualCenter, the extra 1 TB can be used as local storage for testing virtual machines, for backup/restore scenarios, and for virtual machines to be used as templates (since you don't have VC you can't deploy them as easily, but you can create VMs and use vConverter as a poor man's deployment tool), as well as for storing ISOs.
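As a small illustration of the ISO idea (the datastore name, host name and ISO file are hypothetical; on ESXi 3.5 this assumes SSH access or the datastore browser instead):

  # Copy install media onto the local datastore so guests can mount it as a CD image
  scp win2003-sp2.iso root@esx01:/vmfs/volumes/datastore1/iso/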
0
 
LVL 7

Assisted Solution

by:kumarnirmal
kumarnirmal earned 50 total points
ID: 24126970
I suggest that you take a look at vCenter Foundation Edition, which allows you to manage up to 3 ESX hosts and also gives you 24x7x365 uptime features like vMotion, Storage vMotion, DRS and HA.
0
 
LVL 1

Author Comment

by:jjoz
ID: 24127786
To Mark,

"I must ask how many physical NIC ports do you have per server, and across how many NIC cards."
 -- The answer is 4 ports across 2 NIC cards per server: my Dell PowerEdge 2950-III comes with 2x Broadcom integrated Gigabit Ethernet ports, plus I added an Intel dual-port card for 2 more Gigabit ports (a total of 4 ports per server).

"It does appear you are intending on running your iSCSI Traffic over the same NIC's as your guest traffic. If so this is not ideal and may result in NIC contention."
 -- The answer is yes; I implement this using just an ordinary ProCurve 24-port unmanaged Gigabit switch.

as per your suggestion of having:
"2 port would be dedicated to the Service Console and vMotion traffic. vSwitch0
2 ports would be dedicated to IP Storage traffic (iSCSI) vSwitch1
2 ports would be dedicated to guest traffic. vSwitch2
"

I've colour-coded blue and green for the SAN traffic, while the red lines are for management console access. In this case perhaps I can just remove all of the red lines (no dedicated mgmt console) and make another pair for guest traffic from the network into the servers?

Thanks for the reply Mark

And to paulsolov: you could be right. I'll try to get a managed Gigabit switch so that I can do NIC teaming and create VLANs to make my colour coding come true :-)

And to kumarnirmal:
Is vCenter Foundation Edition something that comes as freeware?

0
 
LVL 8

Assisted Solution

by:markzz
markzz earned 400 total points
ID: 24129894
Hi,
When I mentioned dedicating 2 NIC ports per vSwitch, I meant they should be separated into VLANs, or as a minimum you should separate your IP storage traffic.
Forgive me if I have misunderstood, but I thought the point of your diagram showing lines of differing colours was that each line represented a UTP cable and therefore a corresponding NIC port.

On the VLANing side: the ProCurve 1800-24G can do VLANs and link aggregation, but it can't route. You will need to buy the ProCurve 2800 or better to get routing, or use a separate router.
Maybe 2 of the ProCurves, so you will have redundancy in network paths too.

It's a pity you don't have another dual Gb NIC in each host; then you could implement both redundancy and separation of function.
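On the host side, once the switch ports are trunked, the existing port groups can be tagged roughly like this (the VLAN IDs are examples only):

  # Tag each port group with its VLAN ID, then list the vSwitches to verify
  esxcfg-vswitch -v 10 -p "Service Console" vSwitch0
  esxcfg-vswitch -v 20 -p "iSCSI" vSwitch1
  esxcfg-vswitch -v 30 -p "Production" vSwitch2
  esxcfg-vswitch -l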

0
 
LVL 1

Author Comment

by:jjoz
ID: 24133771
Alright Mark,

Now it's all clear that I should have at least these connections, each configured as a separate VLAN:

1 Management
1 iSCSI
1 production traffic

This way, I still have one free NIC port on each server and on the SAN.
0
 
LVL 1

Author Comment

by:jjoz
ID: 24134438

OK, in this case I'd like to simplify the diagram again so that I can use 2x direct patch cable connections to the SAN from each server, and just leave the production access on one cable to the unmanaged switch, while those two pairs of SAN-server links run on their own subnet.
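A minimal sketch of how that direct-attached, separate-subnet storage path might be brought up on each host (all addresses and the vmhba name are assumptions, not taken from the diagram):

  # VMkernel port for iSCSI on the dedicated storage subnet (no gateway needed)
  esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "iSCSI"

  # Check the array's iSCSI port is reachable from the VMkernel stack
  vmkping 192.168.10.50

  # Enable the software iSCSI initiator, point it at the array, then rescan
  esxcfg-swiscsi -e
  vmkiscsi-tool -D -a 192.168.10.50 vmhba32    # adapter name varies per host
  esxcfg-swiscsi -s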

Please find the following final diagram:


thanks for all of your comments guys.

iSCSI-SAN.jpg
0
 
LVL 1

Author Closing Comment

by:jjoz
ID: 31569273
Thanks to all for your suggestions, it really helps me a lot.

Cheers.
0
 
LVL 1

Author Comment

by:jjoz
ID: 24190636
To All,

This is my updated deployment plan, creating a separate subnet for the SAN without the use of a managed switch.

Please let me know if there is any issue with it.

Thanks,

iSCSI-SAN.jpg
0
