Windows Unicast NLB virtual machines unable to ping each other?

Hi People,

I am managing a new environment with a two-node Windows Server 2003 Unicast NLB cluster running as virtual machines (let's say VM1 and VM2).

Before I migrated them to ESXi 5.1u1 hosts, they were running on two different ESX 4.1 hosts (two different HP DL380 G6 servers) and working fine (both NLB nodes showed as converged).

But to my surprise, when I migrated them onto two different blade servers (two different HP BL460c G8), they can no longer ping each other.

So, based on this article: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1556

I have to put both VMs on the same ESXi host to get the NLB to converge.

Why has that behaviour changed on vSphere 5.1?
Previously the nodes ran on two different hosts and worked just fine, but now I have to keep them on the same host.
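
(As an aside, the same-host workaround can at least be enforced automatically with a DRS VM-VM affinity, i.e. "keep together", rule so that DRS/vMotion never separates the nodes again. Below is a minimal pyVmomi (Python) sketch of that idea; the vCenter address, credentials, cluster name, rule name and VM names are placeholders, and it assumes the hosts sit in a DRS-enabled cluster.)

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vim_type, names):
    """Return managed objects of the given type whose names are in 'names'."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim_type], True)
    return [obj for obj in view.view if obj.name in names]

ctx = ssl._create_unverified_context()  # lab only: skips certificate validation
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    cluster = find_by_name(content, vim.ClusterComputeResource, {"Cluster01"})[0]
    vms = find_by_name(content, vim.VirtualMachine, {"VM1", "VM2"})

    # VM-VM affinity rule: DRS keeps both NLB nodes on the same ESXi host.
    rule = vim.cluster.AffinityRuleSpec(name="keep-nlb-nodes-together",
                                        enabled=True, mandatory=True, vm=vms)
    rule_spec = vim.cluster.RuleSpec(operation="add", info=rule)
    cluster.ReconfigureComputeResource_Task(vim.cluster.ConfigSpecEx(rulesSpec=[rule_spec]),
                                            modify=True)
finally:
    Disconnect(si)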

Any help and suggestions would be greatly appreciated.

Thanks.
Asked by: Senior IT System Engineer
5 Solutions

Senior IT System Engineer (IT Professional, Author) commented:
Some more details to add:

Previously it was configured as follows and working just fine on two separate ESX hosts:

ESX 4.1 VMHost 1
VM1 - NLB Unicast Node 1
Local IP 10.1.100.12 (VLAN 100)
NLB IP 10.1.200.5 (VLAN 200)
NLB Cluster Virtual IP 10.1.200.200 (VLAN 200)

ESX 4.1 VMHost 2
VM2 - NLB Unicast Node 2
Local IP 10.1.100.13 (VLAN 100)
NLB IP 10.1.200.6 (VLAN 200)
NLB Cluster Virtual IP 10.1.200.200 (VLAN 200)

But now the following configuration is not working:

ESXi 5.1u1 VMHost 1
VM1 - NLB Unicast Node 1
Local IP 172.15.20.25 (VLAN 20)
NLB IP 172.15.20.27 (VLAN 20)
NLB Cluster Virtual IP 172.15.20.29 (VLAN 20)

ESXi 5.1u1 VMHost 2
VM2 - NLB Unicast Node 2
Local IP 172.15.20.26 (VLAN 20)
NLB IP 172.15.20.28 (VLAN 20)
NLB Cluster Virtual IP 172.15.20.29 (VLAN 20)

Why is the configuration above not working?

I had to force both VMs onto the same VMHost 1 to make it work as normal.
Does putting all of the IP addresses on the same VLAN cause it to stop working?
 
Rich Weissler (Professional Troublemaker^h^h^h^h^hshooter) commented:
I wasn't aware of any behaviour changes. NLB in unicast mode causes each of the servers to take on a common MAC address. When we first brought up NLB on VMware (in 3.5, I think), we discovered that rebooting one server caused the switch to suppress the address for five minutes. The solution there was either to configure static ARP entries on the switch, or to switch to using multicast rather than unicast.

One way to get two machines in an NLB cluster to see each other is to give them a separate NIC for that private communication. That way, the NIC used for that communication keeps its own separate MAC address. Looking at your new configuration, I can't tell whether the Local IP addresses are on the same NICs as the NLB traffic.

(I'm not certain I've answered the question... does that help, though? I guess the next logical question would be: do you have one or two NICs in the virtual machines?)
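
If it helps with that last question, here is a small pyVmomi (Python) sketch, assuming scripted access to vCenter, that lists every vNIC of the two VMs with its MAC address and port group, which makes it easy to see whether the local and the NLB traffic share a NIC. The vCenter address, credentials and VM names are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def list_vm_nics(si, vm_names):
    """Print label, MAC address and port group of every vNIC on the named VMs."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name not in vm_names:
            continue
        print(vm.name)
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualEthernetCard):
                # For a standard vSwitch backing, deviceName is the port group name.
                portgroup = getattr(dev.backing, "deviceName", "<distributed portgroup>")
                print(f"  {dev.deviceInfo.label}  MAC {dev.macAddress}  portgroup {portgroup}")

ctx = ssl._create_unverified_context()  # lab only: skips certificate validation
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    list_vm_nics(si, {"VM1", "VM2"})
finally:
    Disconnect(si)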
 
compdigit44 commented:
In the NIC teaming software inside the VMs, did you try to edit the MAC address as described in the following article?

http://jhmeier.de/2010/10/19/using-windows-server-2008-r2-network-load-balancing-with-teamed-network-nics-in-a-hp-server/

Also, were both your new and old hosts using the same upstream switch?
 
TimotiSt commented:
Random guess:
Maybe a promiscuous mode configuration mismatch on the ESXi?

http://petermolnar.eu/linux-tech-coding/vmware-esxi-and-promiscuous-mode/

Tamas
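
In case it is useful, the promiscuous-mode setting (together with the related MAC Address Changes and Forged Transmits policies) can be dumped for every standard vSwitch and port group with a pyVmomi (Python) sketch along these lines, so the old and new hosts can be compared side by side; the vCenter address and credentials are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def dump_security_policy(si):
    """Print the security policy of every standard vSwitch and port group per host."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for vsw in host.config.network.vswitch:
            sec = vsw.spec.policy.security
            print(f"  vSwitch {vsw.name}: promiscuous={sec.allowPromiscuous} "
                  f"macChanges={sec.macChanges} forgedTransmits={sec.forgedTransmits}")
        for pg in host.config.network.portgroup:
            sec = pg.spec.policy.security
            if sec:  # unset values inherit from the parent vSwitch
                print(f"  Portgroup {pg.spec.name}: promiscuous={sec.allowPromiscuous} "
                      f"macChanges={sec.macChanges} forgedTransmits={sec.forgedTransmits}")

ctx = ssl._create_unverified_context()  # lab only: skips certificate validation
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    dump_security_policy(si)
finally:
    Disconnect(si)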
 
compdigit44 commented:
Actually TimotiSt, you may be on to something. The vSwitch's "Forged Transmits" security policy may be the issue.
 
Senior IT System Engineer (IT Professional, Author) commented:
Each of the VMs has two vNICs.

The old configuration was connected to uplink switch A, and each vNIC had its own separate VLAN.

The new configuration is connected to uplink switch B, and both vNICs are on the same VLAN.

So it's a totally new core switch. I believe that unicast NLB doesn't require static ARP entries; only multicast needs them.
 
compdigit44 commented:
Are the vSwitch NIC teaming policies the same?

Some switches may block that unicast traffic as a safety measure, since it can flood the switch.
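
One way to compare them is to dump the teaming policy per vSwitch and port group with pyVmomi (Python), roughly as sketched below (the vCenter address and credentials are placeholders). As far as I recall, the VMware KB linked in the question also wants Notify Switches set to No on the port group used by unicast NLB, which shows up here as the notifySwitches field.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def dump_teaming_policy(si):
    """Print the NIC teaming policy of every standard vSwitch and port group per host."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for vsw in host.config.network.vswitch:
            team = vsw.spec.policy.nicTeaming
            active = team.nicOrder.activeNic if team.nicOrder else None
            print(f"  vSwitch {vsw.name}: policy={team.policy} "
                  f"notifySwitches={team.notifySwitches} activeNics={active}")
        for pg in host.config.network.portgroup:
            team = pg.spec.policy.nicTeaming
            if team:  # unset values inherit from the parent vSwitch
                print(f"  Portgroup {pg.spec.name}: policy={team.policy} "
                      f"notifySwitches={team.notifySwitches}")

ctx = ssl._create_unverified_context()  # lab only: skips certificate validation
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    dump_teaming_policy(si)
finally:
    Disconnect(si)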
 
Senior IT System Engineer (IT Professional, Author) commented:
comp,

Yes, the previous ESX hosts are two different HP DL380 G6 rack servers, connected to a core Cisco Catalyst 6000-series switch (hence I can see the Cisco Discovery Protocol info in the vSwitch balloon).

The newly migrated Terminal Servers are now hosted on two different HP BL460c G8 blade servers, connected to HP Virtual Connect Flex-10 modules in an HP c7000 blade enclosure.

But on the VMware side, both vSwitch policies are exactly the same.
