Member_2_6492660_1 asked:
Web Farm not accessible Externally using NLB VIP
2× Windows Server 2012 R2 Datacenter - NLB, IIS, DFS-R, URL Rewrite
2× Windows Server 2012 R2 Datacenter - web farm, IIS, DFS-R
VMware ESXi 6.5
I cannot access my web sites externally. Internally, using the VIP of the NLB servers, it works.
When I modify my Meraki router to point to the VIP of the NLB, no one can access my two sites.
If I change the Meraki router to point to the physical address of one of the web farm servers, that works as well.
I need to be able to access my web farms through the NLB servers.
Thank you
Tom
Is this a new implementation or has NLB failed?
ASKER
Andrew
New setup. How would I know if NLB failed?
Okay, there are specific requirements for Windows Network Load Balancing; if they are not met, it will not function.
Firstly did you use Multicast?
Need to make a few assumptions first....
On the Meraki are you using NAT or port forwarding?
Are you using split DNS? This would be required where you have an internal VIP but are using NAT or port forwarding from external to internal. The exception would be where you are hosting a load-balancing appliance/virtual in a DMZ - preferred - and traffic traverses the external firewall to the load balancer, which has an external VIP and then a "SNAT" to internal that would, hopefully, traverse another firewall... hence the DMZ.
If you are not using the load-balancing piece, then NAT or port forwarding?
Regardless, you would need external DNS entries that resolve only to your external NAT'd IP address.
Then, internal DNS entries that resolve to the internal VIP.
For the external DNS to work would require a registered domain; you can have internal and external IPs defined for the same registered domain using the "split DNS" I described above.
Using a 1:1 NAT example: the external FQDN resolves to the registered, public-facing NAT IP address > external traffic hits the NAT > the NAT is configured to hit the internal IP address.
A potential issue might come with the "rewrite," where you're using a different FQDN versus the directory structure on those two load-balanced servers (/newpath/newsite). Can you verify that the NAT or port forward hits the internal LB IP, and how the rewrite is "written"?
Let me know if you want to discuss further.
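To illustrate the split-DNS idea, here is a rough sketch using hypothetical names and documentation addresses (example.com and 203.0.113.10 are placeholders; 10.2.8.171 is the internal NLB VIP from this thread):

```
; External (public) zone - what Internet clients resolve
www.example.com.    IN  A   203.0.113.10   ; public IP NAT'd / port-forwarded on the Meraki

; Internal zone (same name, served by internal DNS) - what LAN clients resolve
www.example.com.    IN  A   10.2.8.171     ; NLB VIP on the inside
```

External clients hit the public IP and are forwarded to the VIP; internal clients resolve straight to the VIP and never touch the NAT.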
ASKER
Andrew
Yes, I set up the NLB to be multicast.
See the attached image: nlbmulticast.PNG
What else would you like to see?
ASKER
Brian
My Meraki is using port forwarding.
The web farm is behind the NLB (Network Load Balancing) servers.
Your last question, I'm not sure.
Load balancing is all new to me.
You will need to add static ARP entries for the multicast MAC address and IP address in your switches; otherwise NLB cannot converge.
Not all switches support static ARP entries, and hence it fails.
I agree with Andrew in that the first step is to verify resolution of your converged infrastructure's primary IP address (NLB) to MAC using ARP, assuming you used just one IP address for NLB and are using multicast.
One example might be where you can ping the NLB from the same subnet, but when you come from a different routed subnet you cannot. Since this is a virtual IP shared between multiple nodes, one of the requirements of NLB is turning on proxy ARP support on the router in question and adding the static ARP entry.
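On a classic Cisco IOS router, for example, those two pieces might look something like this; the multicast MAC (0300.5e01.0101) and interface name are placeholders, not your actual cluster values:

```
! Static ARP mapping the NLB VIP to the cluster multicast MAC
arp 10.2.8.171 0300.5e01.0101 arpa

! Ensure proxy ARP is enabled on the interface facing the cluster
! (it is on by default on most IOS interfaces)
interface Vlan1
 ip proxy-arp
```

Whether your router exposes equivalent knobs varies by vendor, which is exactly where NLB deployments tend to get stuck.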
ASKER
Andrew, I was on the phone with Meraki support.
First we changed from multicast to unicast. It still did not work.
We then did a capture; he saw that the HTTP traffic was making it to the load balancer virtual IP address, but no return traffic.
Thoughts?
ASKER
Brian
I can ping the virtual IP:
Pinging 10.2.8.171 with 32 bytes of data:
Reply from 10.2.8.171: bytes=32 time<1ms TTL=128
Reply from 10.2.8.171: bytes=32 time<1ms TTL=128
Reply from 10.2.8.171: bytes=32 time<1ms TTL=128
Reply from 10.2.8.171: bytes=32 time<1ms TTL=128
Ping statistics for 10.2.8.171:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
Was Wireshark used? I would need to see the traffic, but don't upload it... we can probably get through this without seeing it.
So, there is nothing else between the Meraki and the NLB server(s) except a Layer 2/3 switch?
ASKER
No, he did it on my Meraki dashboard.
My network is this:
One Meraki MX60
Two FiOS internet pipes, 100 Mbps up/down on both
One Meraki MS220 8-port PoE switch
One Cisco WS-C3750-48TS switch, 10/100/1000
All one subnet: 10.2.8.x, mask 255.255.252.0
He is going to send me the trace.
Tom
ASKER
Brian
He used Wireshark and sent me the captures.
How can we handle this?
ASKER CERTIFIED SOLUTION
Hold on to the captures for now. Seems like a Layer 2 issue. It just seems odd that traffic would traverse the switch from the Meraki and hit the NLB.
Let's walk through this for a moment.
What I would expect is that if you ping the NLB from the Meraki, it is sent to both nodes (unicast) and perhaps only looks like it is going to the NLB.
In Unicast mode, NLB replaces the actual Media Access Control (MAC) address of each server in the cluster with a common NLB MAC address. The MAC address is used in the Address Resolution Protocol (ARP) header, not the Ethernet header. The switch uses the MAC address in the Ethernet header, not the ARP header.
Without the IGMP option, NLB uses a locally administered multicast MAC address with a different prefix than the IANA-assigned one.
Multicast mode set in the GUI instructs the cluster members to respond to ARP for the virtual IP with a multicast MAC address, such as 0300.xxxx.xxxx.
The ARP process does not complete for multicast MAC addresses (this breaks RFC 1812). A static ARP entry is required in order to reach the cluster from outside the local subnet.
Without those static entries for multicast, you would expect to see a lot of flooding, unless you are using IGMP.
The recommendation for avoiding/containing this flooding is to configure static MAC entries for the multicast cluster MAC (binding it exclusively to the required ports) on your switch. Those static entries will then also be listed in the "show mac address-table" output.
Otherwise, add a static ARP entry, such as:
arp 10.2.8.171 0300.XXXX.XXXX
Now, since the inbound packets have a unicast destination IP address and a multicast destination MAC address, you would need to insert a static mac-address-table entry in order to switch the cluster-bound packets to the physical port. If there is more than one port, verify this first for that particular switch, but the command would look something like:
mac address-table static 0300.5e01.0101 vlan XXX interface GigabitEthernet1/5
And, on the Meraki, static ARP resolution to the multicast MAC address of the NLB IP.
With the IGMP option, you can make use of IGMP snooping in order to avoid the broadcast and the static MAC entries are not required in this case.
The multicast cluster MAC can be learned dynamically by IGMP snooping.
It should then be listed in the "show mac address-table multicast" output.
Make sense?
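Once those static entries are in, verification on the Catalyst might look something like this (0300 is the placeholder multicast MAC prefix from above; exact command syntax and output vary by platform and IOS version):

```
show mac address-table | include 0300
show arp | include 10.2.8.171
show mac address-table multicast
```

If the cluster MAC shows up bound to the expected ports and the ARP entry resolves, Layer 2 is doing its part and you can move the investigation up the stack.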
ASKER
Brian,
I made the change to IGMP multicast; no luck.
I was on the phone with Meraki and we tried to find the MAC address on the Meraki and on my Cisco switch; not found.
From my desktop I can ping 10.2.8.171.
From the Meraki I cannot ping the IP address.
cisco3750#show mac address-table multicast
Vlan Mac Address Type Ports
---- ----------- ---- -----
cisco3750#
Not listed
From the switch I cannot ping the virtual IP either.
So if the Meraki cannot ping the virtual IP address in any mode, then it will never work, correct?
Thanks
Tom
ASKER
Update
I switched the NLB cluster properties to unicast again; I'm going to let it sit for a few minutes and see if I can ping from the Meraki.
This is an arp -a from the NLB server:
C:\Windows\system32>arp -a
Interface: 10.2.8.171 --- 0xc
Internet Address Physical Address Type
10.2.8.1 00-18-0a-46-0d-c8 dynamic
10.2.8.24 00-19-b9-f8-aa-c6 dynamic
10.2.8.69 00-25-64-87-18-e3 dynamic
10.2.8.70 00-25-64-60-50-83 dynamic
10.10.10.255 ff-ff-ff-ff-ff-ff static
224.0.0.22 01-00-5e-00-00-16 static
224.0.0.252 01-00-5e-00-00-fc static
Interface: 10.2.8.169 --- 0xd
Internet Address Physical Address Type
10.2.8.1 00-18-0a-46-0d-c8 dynamic
10.2.8.4 e0-5f-b9-25-a4-c0 dynamic
10.2.8.16 00-50-56-b5-32-85 dynamic
10.2.8.19 00-22-6b-ec-55-c6 dynamic
10.2.8.22 00-0c-29-e9-e8-70 dynamic
10.2.8.24 00-19-b9-f8-aa-c6 dynamic
10.2.8.26 00-50-56-9e-5c-23 dynamic
10.2.8.27 00-50-56-bd-3c-04 dynamic
10.2.8.30 00-19-b9-f8-fc-e9 dynamic
10.2.8.69 00-25-64-87-18-e3 dynamic
10.2.8.70 00-25-64-60-50-83 dynamic
10.2.8.71 50-46-5d-3c-dd-e9 dynamic
10.2.8.85 00-50-56-9e-3e-65 dynamic
10.2.8.86 00-50-56-9e-35-8f dynamic
10.2.8.87 00-50-56-9e-05-91 dynamic
10.2.8.100 00-50-56-9e-3e-65 dynamic
10.2.8.123 00-50-56-89-b8-a2 dynamic
10.2.8.124 00-50-56-89-b9-c4 dynamic
10.2.8.151 00-50-56-89-8d-0a dynamic
10.2.8.170 00-50-56-89-b0-65 dynamic
10.2.11.255 ff-ff-ff-ff-ff-ff static
224.0.0.22 01-00-5e-00-00-16 static
224.0.0.251 01-00-5e-00-00-fb static
224.0.0.252 01-00-5e-00-00-fc static
You can see 10.2.8.1 (the Meraki gateway address) in the table for interface 10.2.8.171 (the VIP of the NLB).
Thoughts?
ASKER
Update
So after a short time I was able to ping the virtual IP address of the NLB server from my Meraki router.
I switched the port forwarding from the physical web server to the VIP of the NLB server, and I still cannot access my web sites.
This is with the unicast setup.
ASKER
Guys
Thanks for all the help on this.
I switched over to using my Kemp load balancer with IIS, and now my sites all work.
Meraki does not support static ARP entries; they cannot be configured on Meraki devices.
Kemp was a better solution.
I posted another load balancer question today if either of you are available.
Thank you
Tom
This is often the issue with Microsoft NLB on VMware vSphere: static ARP entries not being supported by the network equipment.
KEMP is a far better solution.
Microsoft NLB was always very poor... (and complicated with the Static ARP requirement)
Agreed. I would only add that my preference for load balancing is to host the LB in a true DMZ configuration, and I even prefer to have those isolated and forwarding to an additional HA pair internally. In this scenario, you port forward or NAT to the DMZ-hosted LB VIP, which SNATs and hits another LB pair internally. I've used Kemp but don't recall if it has a firewall module option like NetScaler and F5. This yields great rewards when dealing with web traffic: application load balancing, SSL acceleration, not to mention all the metrics you get with something like HDX.
As I had alluded to in initial comment:
"Are you using split DNS? This would be required where you have an internal VIP but are using NAT or port forwarding from external to internal. The exception would be where you are hosting a load-balancing appliance/virtual in a DMZ - preferred - and traffic traverses the external firewall to the load balancer, which has an external VIP and then a "SNAT" to internal that would, hopefully, traverse another firewall... hence the DMZ."