jjoz
asked on
VMware ESXi 3.5 iSCSI Deployment Best Practice
Hi All,
I'm about to deploy VMware ESXi 3.5 on 2 servers which will be sharing the SAN using iSCSI (2x teamed Gigabit Ethernet links).
Specs:
Dell PowerVault MD3000
10x 300 GB SAS 15k rpm
2x Dual port Gigabit Ethernet NIC (4x in total)
Dell PowerEDGE 2950-III
2x Intel Quad Core E5410
32 GB DDR-II 667 MHz
Internal 4x 500 GB SATA 7200 rpm HDDs (RAID 5) - I know it's slow for hosting the VMDKs
Internal USB slot on the motherboard (but no USB flash disk?)
Here is the diagram: http://img25.imageshack.us/my.php?image=vmlan.jpg
Please let me know if this makes sense and follows best practice.
And the last thing: as I've got a spare 1 TB on the internal RAID 10 SATA drives, any idea what I should do with it apart from installing the 32 MB ESXi image?
thanks.
VM-LAN.jpg
ASKER
To Mark,
"I must ask how many physical NIC ports do you have per server, and across how many NIC cards."
-- I added one extra NIC per server: the Dell PowerEdge 2950-III comes with 2x integrated Broadcom Gigabit Ethernet ports, plus the Intel Gigabit Ethernet card adds another 2 ports (4 ports per server in total).
"It does appear you are intending on running your iSCSI Traffic over the same NIC's as your guest traffic. If so this is not ideal and may result in NIC contention."
-- Yes; I currently implement this using just an ordinary 24-port unmanaged ProCurve Gigabit switch.
as per your suggestion of having:
"2 ports would be dedicated to the Service Console and vMotion traffic. vSwitch0
2 ports would be dedicated to IP Storage traffic (iSCSI) vSwitch1
2 ports would be dedicated to guest traffic. vSwitch2"
I've colour-coded the SAN traffic in blue and green; the red lines are for management console access. In that case, perhaps I can just remove all of the red lines (no dedicated management console) and make another pair for guest traffic from the network into the servers?
Thanks for the reply, Mark.
And to paulsolov: you could be right. I'll try to get a managed Gigabit switch so that I can do NIC teaming and create VLANs to make my colour-coded design come true :-)
And to kumarnirmal:
Is vCenter Foundation Edition something that comes as freeware?
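Mark's suggested 2+2+2 split can be sanity-checked with a quick sketch. This is only an illustration of the counting, not anything from VMware's tools: the `vmnic` names and the layout dictionary are my own assumptions. It simply shows that the full three-vSwitch design consumes six physical ports, while each 2950 here has only four, so two of the roles would have to share a NIC pair (or more ports must be added).

```python
# Sketch of the suggested three-vSwitch layout (hypothetical vmnic names).
desired = {
    "vSwitch0": {"role": "Service Console + vMotion", "uplinks": ["vmnic0", "vmnic1"]},
    "vSwitch1": {"role": "iSCSI storage",             "uplinks": ["vmnic2", "vmnic3"]},
    "vSwitch2": {"role": "guest traffic",             "uplinks": ["vmnic4", "vmnic5"]},
}

def ports_needed(layout):
    """Count the physical NIC ports the layout consumes, rejecting reuse."""
    seen = []
    for sw in layout.values():
        for nic in sw["uplinks"]:
            if nic in seen:
                raise ValueError(f"{nic} is assigned to two vSwitches")
            seen.append(nic)
    return len(seen)

print(ports_needed(desired))  # -> 6, versus 4 ports available per 2950
```

With only four ports per host, the usual compromise on ESX(i) 3.5 is to fold the Service Console/vMotion pair in with one of the other vSwitches while keeping iSCSI on its own dedicated pair.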
ASKER
Alright Mark,
Now it's all clear that I should have these connections as a minimum, each configured as a separate VLAN:
1 Management
1 iSCSI
1 production traffic
This way, I still have one free NIC on each server and on the SAN.
ASKER
OK, in this case I'd like to simplify the diagram again: I can use 2x direct patch-cable connections from each server to the SAN, and leave production access as one cable to the unmanaged switch, while the two server-to-SAN pairs run on their own subnet.
Please find the following final diagram:
Thanks for all of your comments, guys.
iSCSI-SAN.jpg
ASKER
Thanks to all for your suggestions, they really helped me a lot.
Cheers.
ASKER
To All,
This is my updated deployment plan, creating a separate subnet for the SAN without using a managed switch.
Please let me know if there is any issue with it.
Thanks,
iSCSI-SAN.jpg
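The separate-subnet idea can be double-checked with Python's standard `ipaddress` module. The addresses below are purely hypothetical (the thread doesn't give actual IPs); the point is only that the direct-attached iSCSI links and the production LAN must use non-overlapping ranges so a misconfigured route can't put storage traffic onto the unmanaged switch.

```python
import ipaddress

# Hypothetical addressing: the real subnets are not given in the thread.
san_subnet  = ipaddress.ip_network("192.168.100.0/24")  # server <-> MD3000 patch cables
prod_subnet = ipaddress.ip_network("192.168.1.0/24")    # unmanaged switch / guest traffic

# Isolation only holds if the two ranges never overlap.
assert not san_subnet.overlaps(prod_subnet), "SAN and LAN subnets must not overlap"

# Each host's iSCSI VMkernel port and the MD3000 controller ports
# should all fall inside the SAN range (example addresses):
for ip in ("192.168.100.11", "192.168.100.12", "192.168.100.21"):
    assert ipaddress.ip_address(ip) in san_subnet

print("subnet plan is consistent")
```

The same check is worth repeating by hand whenever an address is added, since with no managed switch there are no VLANs to catch a host that ends up in the wrong range.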
I don't know if I can get in trouble for saying this, but I'm also in NSW. I could consult!