TMG Server Holds TCP Ports in TIME_WAIT Status Until Ports Are Exhausted

We have a two-node TMG implementation running SP1. The first node (10.0.1.9) runs like a champ. Node two (10.0.1.10) starts out fine, but performance then nose-dives, eventually leaving the server completely non-functional. Using netstat we found that thousands of ports on node two are tied up in TIME_WAIT to the cluster IP address (10.0.1.8), until eventually no ports are available to serve new connections. We have tried increasing the available port range and decreasing the TIME_WAIT timeout, but neither has helped. Any ideas? Any questions I can answer to move this along?
Asked by PHFrench
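
For reference, the pattern described above can be quantified with a quick script run on the affected node. The following is a minimal diagnostic sketch (Python, not part of TMG itself; it assumes the standard Windows netstat column layout) that tallies TIME_WAIT sockets per remote address, which should confirm whether the cluster VIP (10.0.1.8) really is the peer for the bulk of them:

```python
# Minimal diagnostic sketch: count TIME_WAIT sockets per remote address
# by parsing `netstat -an` output on the affected TMG node.
import subprocess
from collections import Counter

def count_time_wait():
    """Return a Counter of remote IPs holding sockets in TIME_WAIT."""
    output = subprocess.run(
        ["netstat", "-an", "-p", "tcp"],
        capture_output=True, text=True, check=True
    ).stdout
    counts = Counter()
    for line in output.splitlines():
        parts = line.split()
        # Expected netstat columns: Proto, Local Address, Foreign Address, State
        if len(parts) == 4 and parts[0] == "TCP" and parts[3] == "TIME_WAIT":
            remote_ip = parts[2].rsplit(":", 1)[0]  # strip the port
            counts[remote_ip] += 1
    return counts

if __name__ == "__main__":
    # Print the ten remote addresses holding the most TIME_WAIT sockets.
    for ip, n in count_time_wait().most_common(10):
        print(f"{n:7d}  {ip}")
```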
Keith Alabaster (Enterprise Architect) commented:
Run the Best Practices Analyzer for TMG against both nodes and compare the outputs.
Are both nodes running SP1 and the SP1 update for TMG?
Are you using Forefront TMG's integrated NLB, or did you set up Windows NLB first and then install FTMG? Are you operating with ISP load-balancing/failover?

Which addresses are being held in TIME_WAIT: those on the external NLB interface or the internal NLB interface?

What are the default gateway settings for internal systems: a specific FTMG node or the VIP address?
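
To complement those questions: since the poster has already tried raising the port limit and lowering the timeout, it is worth confirming those changes actually took effect. The sketch below (Python, run on the node; it assumes the Windows Server 2008-era registry layout that TMG SP1 runs on) reads the two classic tuning values, TcpTimedWaitDelay and MaxUserPort, and prints the effective dynamic port range. Note that on Server 2008 and later the ephemeral port range is managed with netsh rather than MaxUserPort, which may explain why "increasing the ports" appeared to have no effect.

```python
# Minimal sketch for verifying the TCP tuning already attempted, assuming
# Windows Server 2008/2008 R2 under TMG. The registry value names are the
# documented Tcpip parameters; their absence means the OS default applies.
import subprocess
import winreg

TCPIP_PARAMS = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

def read_dword(name):
    """Return the named DWORD under Tcpip\\Parameters, or None if unset."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, TCPIP_PARAMS) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except FileNotFoundError:
        return None

if __name__ == "__main__":
    # TcpTimedWaitDelay controls how long sockets linger in TIME_WAIT.
    print("TcpTimedWaitDelay:", read_dword("TcpTimedWaitDelay"))
    # MaxUserPort is the legacy (pre-Vista) upper bound on ephemeral ports;
    # on Server 2008+ the dynamic range is managed by netsh instead.
    print("MaxUserPort:", read_dword("MaxUserPort"))
    # Show the effective dynamic port range on Server 2008 and later.
    subprocess.run(["netsh", "int", "ipv4", "show", "dynamicport", "tcp"])
```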