RD Connection Broker Load Balancing results in users not being able to log in

I cannot get my users logged in when my new terminal servers are using RD Connection Broker Load Balancing.

So I'm trying to use Windows Server 2008 R2 SP1 to host our Remote Desktop Services (terminal services) Session Host service. I set up three physical servers with that OS and enabled the RD Session Host role service. All the apps are loaded, and all three servers are in the same OU with a group policy applied to them. In the past, we only used NLB (network load balancing) for the 2003-based terminal servers, which worked okay. Now I want to use session-based load balancing, a.k.a. RD Connection Broker Load Balancing. I enabled this feature, along with the other pertinent Connection Broker settings. I then added three static DNS entries for the cluster name, with each entry pointing to the main IP address of one host server (ex. - rdcluster1:, 62, 63). One of our DCs has the RD Connection Broker service running on it.
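In case it helps anyone reproduce this, here's roughly how the three round-robin A records can be created from the command line on the DNS server. The zone name and IP addresses below are placeholders, not my real ones, so substitute your own:

```shell
:: Run on the DNS server (or pass its name instead of ".").
:: Zone "contoso.local" and the 10.0.0.x addresses are placeholders.
:: Three A records under the same name = DNS round robin for the farm name.
dnscmd . /RecordAdd contoso.local rdcluster1 A 10.0.0.61
dnscmd . /RecordAdd contoso.local rdcluster1 A 10.0.0.62
dnscmd . /RecordAdd contoso.local rdcluster1 A 10.0.0.63
```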

When I go to log in as a user, it looks like it will log in, but the welcome circle spins for a while and then it just bounces back to the login screen. A few sessions get through, but mostly it just sits there and bounces back. At first I thought it was a problem with the user's roaming profile, but the same thing happens even when I log in as the local administrator.

After searching all over the Internet and finding nothing, I decided to revert to Network Load Balancing. I turned the RD Connection Broker Load Balancing feature off, deleted the round-robin DNS entries, and created a single cluster entry (ex. - rdcluster1: I then configured NLB on each of the host servers, and abracadabra, everything works fine.

Does anyone know why this would occur?

Join RD Connection Broker, Configure RD Connection Broker farm name, Use IP Address Redirection, and Configure RD Connection Broker server name are all enabled and set up correctly.
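To double-check that those four policies actually landed on each host, the policy registry key can be queried directly. The value names below are the ones I believe the 2008 R2 templates write under the Terminal Services policy key; verify with rsop.msc or gpresult if yours differ:

```shell
:: Check that the Connection Broker policies reached each RDSH.
:: Value names are my recollection of the 2008 R2 templates -- confirm with rsop.msc.
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v SessionDirectoryActive
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v SessionDirectoryLocation
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v SessionDirectoryClusterName
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v SessionDirectoryExposeServerIP
```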

One other thing that may be worth mentioning: when I initially set all this up, I didn't know whether NLB should be enabled alongside RD Connection Broker Load Balancing. There isn't much information out there about whether to use one or both load-balancing mechanisms for a cluster. So I had originally set everything up with NLB and RD Connection Broker Load Balancing running side by side, with one DNS entry for the cluster IP and that's it. When I discovered the login problem, I went into NLB, deleted all hosts from the cluster, and then created the round-robin DNS entries for each host. After changing all of this, I still had the same problem logging users in. I tried everything I knew, but it's still a problem.

When only NLB is used, logins work fine and the load balances relatively okay.

I actually have NLB and Connection Broker load balancing enabled for our farm. The idea is that NLB does the network load balancing, while CB load balancing does the session load balancing. When you had this set up before, were you trying to connect to the cluster with the NLB cluster IP?

I also made sure to match my RD farm name to my NLB cluster name, and there is a DNS entry mapping this name to the NLB cluster IP. I don't remember if this was a requirement, but it made logical sense to me, and it's easier to keep up with. Also, we're using multicast with dual NICs, which requires ARP entries in the router. How was yours set up? Is this a physical or virtual environment?
wootenj2001Author Commented:
Sorry for the delay. I've had a couple guys out so it's been one of those weeks.

I am connecting to the cluster name (not IP), which is the same name as the farm name reference in group policy. That's the way I've always done it.

The RDS environment is physical (for now). When we're running the NLB method, we use unicast, pointing to the primary NIC of each host server. We had some issues in the past with connections to the cluster, and we learned that it can be a problem on core switches. What we ended up doing a while ago is connecting all the primary NIC cables into a separate small hub, and that hub then plugs into the core switch. This is what is supposed to be done when using NLB in unicast mode. With these new servers, I have the same setup, and I'm still using NLB only. There is only one DNS entry for the cluster IP address (no round robin). So far it's working relatively okay; however, I would still like to use the session load balancing.


From what I've read from Microsoft, if you implement RD Connection Broker Load Balancing, you have to create the round-robin DNS entries. So when I initially tried this, I set up the individual DNS entries for each host, deleted the cluster DNS entry, turned on the load-balancing setting in group policy, disabled NLB on each host, and then attempted to connect using the cluster name, not the IP. DNS resolves that name to one of the primary IP addresses, and somewhere along the line the RDSH server talks with the RD Connection Broker service. The RD Connection Broker service then determines which server is the most suitable to join??

I was able to get to the login screen, but it just wouldn't let the login happen. The welcome circle just sits there and spins, then goes right back to the login screen. And just to remind anyone reading this, this symptom occurred whether I was logging in as the local administrator or as a user with a roaming profile. It also happened whether I connected to the cluster name or a specific host name (ex. - swts1). As soon as I revert to NLB and turn off RD Connection Broker Load Balancing, connections go through with no problem.

You'll want to keep the one DNS entry for your NLB cluster/farm name, and do not use DNS round robin. From there, you'll choose to 'Participate in Connection Broker Load-Balancing' and set each server's relative weight for the farm.

Be sure that your RDSHs are set to use their local IPs for reconnection under RD Session Host Configuration > RD Connection Broker properties. We have dual NICs in our servers. The primary is configured normally, and this is the 'local' IP that we have set for reconnection. The second NIC is configured normally plus NLB, but we are also running multicast in a virtual environment. Hopefully this will still translate well for you.

Now when users connect through NLB, they'll be directed to the RDSH server with the least load according to NLB, and then the RDSH server will contact the RD Connection Broker to determine whether, and where, the user has a disconnected or idle session to reconnect to. This is where the local IPs come into play.
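The broker's decision described above can be sketched in a few lines of Python. This is only an illustration of the idea, not Microsoft's actual algorithm (which isn't documented), and the host names and weights are made up: a user with an existing session goes back to the host holding it; otherwise the host with the lowest session count relative to its configured relative weight wins.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    weight: int  # the per-RDSH relative weight configured in the farm
    sessions: dict = field(default_factory=dict)  # user -> session state

def pick_host(hosts, user):
    # Reconnection: if the user already has a session on some host,
    # the broker redirects them back to that host.
    for h in hosts:
        if user in h.sessions:
            return h
    # New session: lowest load relative to configured weight wins.
    return min(hosts, key=lambda h: len(h.sessions) / h.weight)
```

With weights like 100/100/50, the weight-50 host would receive roughly half as many new sessions as the others, while reconnections always return to the original host.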

The nice thing about this is that it's network load-balanced as well as session/server balanced. If a server goes down, the cluster is still functional; but if you were using just RD Connection Broker Load Balancing with DNS round robin, that's not the case, since DNS would keep handing out the dead server's address.
wootenj2001Author Commented:
When you say use the local IP for reconnection, do you mean the setting "Use IP address redirection"? I have that enabled on each server, which is actually controlled by the group policy. Then I went to each server, opened Remote Desktop Session Host Configuration, opened the RDP-Tcp properties, went to the Network Adapter tab, and chose the primary NIC (the IP listed in bold above). The primary NIC currently has the IP, subnet mask, and default gateway configured; it also has NLB enabled on it. The second NIC in our servers only has the IP and subnet mask configured and doesn't really do anything at the moment. I wouldn't be able to have two NICs on the same subnet with a gateway address added to both.

If I were to move the NLB option to the second NIC, how do I handle that gateway address? Or does it not matter because of the multicast option?

Also, how are you registering the two NICs in DNS? In the past, I would only tell the primary NIC to register its address in DNS, while the second NIC would not. This way, there weren't multiple entries in DNS for the same name.

Thanks in advance. I appreciate you taking the time to explain your environment.    
Our 2003 cluster was set up similarly to your description, with two NICs and only one of them having NLB and a gateway. The one with NLB and a gateway was the only one registered in DNS, and this was a unicast cluster. Our new 2008 multicast cluster is set up with two NICs: both have gateways, and only one has NLB enabled, but both are registered in DNS. The other one is used for reconnection. Maybe a pic will help...

[Screenshot: RD Session Host Configuration > RD Connection Broker properties window]

wootenj2001Author Commented:
Thanks a lot for taking the time to help me with this. I think you have provided enough information for me to get this going. I wish I could mess with it now, but I'll have to do this over a weekend.

By the way, how is RDS in a virtual environment? How did you overcome potential Disk I/O issues with so many people logging into one virtual machine?
You'll want to research SAN options, disk types, and speed. There are also SANs with SAO or QoS, and fabric switches have this optional functionality as well. Another thing to look for is SSD in the data center: SANs, servers, and I'm sure other things to come are all starting to either offer SSD as an option or a hybrid of SSD and SAS. Also making its way into the marketplace is VM-aware storage designed specifically for VMware, which understands and communicates directly with vCenter's APIs.

I/O and LUNs/Volumes

Planning the size and separation of your volumes and LUNs is a very important factor. We use an NLB cluster with the servers spread across different LUNs. Personally, we try not to use thin provisioning unless we really have a need, and there is also a new PVSCSI controller that we use in some cases. One place we've used it is for page files.


Another thing to remember is that you'll want to set things up according to best practices and what makes sense, of course; but also, don't go crazy with SAO, QoS, I/O control, PVSCSI, etc., until it makes sense to do so. It'll take some monitoring, stat gathering, and comparison against internal SLAs to determine whether any of those additional/optional features need adjusting.