Internal DNS implementation

Posted on 2013-12-16
Medium Priority
Last Modified: 2014-01-01
What is the best practice for the internal DNS implementation?

My organization has 50 remote sites. Some are fairly large, with 30 users, and I am thinking of having one DNS server at headquarters and a backup DNS server at a backup site.

Now, all of the remote sites will go through the MPLS WAN to get DNS service. Will it affect my bandwidth? What is typical bandwidth consumption for DNS services?

I am also thinking that maybe the medium-sized sites should have their own DNS servers. But I am not sure what the best practice is as far as internal DNS server implementation goes.

PS. I have 4 ADs in my organization

Question by:leblanc
LVL 43

Accepted Solution

Steve Knight earned 400 total points
ID: 39724314
If you've got a DC running, you might as well have DNS on it, assuming you keep your DNS AD-integrated.  DNS traffic should be relatively negligible compared with normal application traffic between sites, IMO.

DNS at a bigger site is a good idea, but without a domain controller, local file server, etc. there, if you lose your MPLS connection (and the internet connection runs over it too) then you have bigger problems than DNS.

So what is your AD structure, do you mean 4 different Active Directory forests, domains, or domain controllers?

If the latter, then I would suggest running DNS on each DC and pointing users to their 'nearest' one, depending upon your comms configuration.

The subnets you allocate to each site in AD will still let AD work out which site a client is on, based on its IP, for dishing out any local resources for that site.

LVL 71

Assisted Solution

by:Chris Dent
Chris Dent earned 800 total points
ID: 39724390
DNS requests are normally measured in bytes and are likely to range from 45 to 150 bytes for the majority of traffic. A small number will sit above that, maybe 200 to 500 bytes, for things to do with your AD domain (for example).

DNS requests are UDP and, as such, are pretty light on the network in most cases. 1,500 users (30 users * 50 sites) are unlikely to scratch a connection with DNS traffic alone, even a poor one.
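To put numbers on that, here's a rough back-of-envelope estimate. The per-query size, response size, and per-client query rate below are illustrative assumptions, not measurements:

```python
# Back-of-envelope DNS bandwidth estimate for 50 sites of 30 users each.
# The sizes and query rate are pessimistic assumptions for illustration.

USERS = 30 * 50            # 1,500 clients across all sites
QUERY_BYTES = 150          # upper end of a typical request
RESPONSE_BYTES = 300       # responses run larger than requests
QUERIES_PER_SEC = 1        # a busy sustained average per client

total_bytes_per_sec = USERS * QUERIES_PER_SEC * (QUERY_BYTES + RESPONSE_BYTES)
mbit_per_sec = total_bytes_per_sec * 8 / 1_000_000

print(f"{mbit_per_sec:.2f} Mbit/s")  # 5.40 Mbit/s across ALL sites combined
```

Even that pessimistic total is spread across all 50 sites, so each site's share of the WAN is on the order of 0.1 Mbit/s.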

You may deploy a local server to a site for convenience. I'd explore the possibilities of read-only domain controllers (RODCs) and BranchCache if you're concerned about bandwidth usage. That will allow you to deploy a DNS service as well, minimising traffic over the WAN.

What kind of bandwidth are you talking about in most cases? If it's measured in tens of Mbits, you're exceptionally unlikely to run into problems caused by DNS.

Perhaps the bigger case for deploying remote services is reliability. How likely do you consider connection failure to be? Are sites multi-homed (more than one MPLS tail)?

LVL 28

Assisted Solution

mikebernhardt earned 800 total points
ID: 39724676
Although bandwidth utilization is low, as mentioned by others, do keep in mind that if DNS goes down, all sites that depend on it are out of luck for both internal and internet name resolution. My suggestion would be to have multiple DNS servers at at least 2 different locations: put DNS on every DC, have DCs at more than one location, and make sure all clients can reach those services at 2 or more locations. If some locations could theoretically continue to work without a link to your central office, then put a DC/DNS server there so that they can continue to do so. A few hours of lost productivity for 30 people will cost far more than a right-sized server.


Author Comment

ID: 39724728
Sorry, I am not a DNS guy. But we have 4 domain controllers with DNS & DHCP configured on them. Somebody configured them before I arrived, and I am not sure whether they are working correctly as I am new here.

2 DCs are at headquarters, and each of the remaining DCs is located at one of the remote sites.

LVL 71

Assisted Solution

by:Chris Dent
Chris Dent earned 800 total points
ID: 39724884
DCDiag will test much of your DNS configuration, so don't be afraid to use it to check your forest out.

As far as AD is concerned the DNS requirements are pretty simple:

1. All internal systems must be able to reliably resolve names under your AD domain name

Needing that reliability means you should, ideally, run a minimum of 2 DNS servers. Each should be able to answer questions about your AD domain name (e.g. nslookup -q=srv _ldap._tcp.yourdomain.example).

It is imperative that you do not distribute your ISPs, or Google's, or any other third party's DNS service to your network clients. Doing so breaks reliable name resolution.

2. Your DNS server must support Service Records (that'll be the srv used above). MS DNS does, as does ISC BIND if you were going down that path.
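For a feel of what such an SRV lookup actually puts on the wire (and why queries are so small), here is a sketch that builds the RFC 1035 query packet for the record above without sending it. The transaction ID is arbitrary:

```python
import struct

def build_srv_query(name: str, txn_id: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet for an SRV record (RFC 1035 wire format)."""
    # Header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack("!HHHHHH", txn_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each label length-prefixed, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    # QTYPE=33 (SRV), QCLASS=1 (IN)
    question = qname + struct.pack("!HH", 33, 1)
    return header + question

packet = build_srv_query("_ldap._tcp.yourdomain.example")
print(len(packet))  # 47 -- comfortably within the 45-150 byte range above
```

The whole question fits in 47 bytes, which is why per-query bandwidth is so modest.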

That's pretty much it for the hard requirements, everything else is optional.

1. Dynamic Updates are entirely optional, but support for them makes your life a lot easier. Using Secure Updates with MS DNS is nice and simple; no configuration is needed beyond setting the zone to Secure Only.

2. Aging and Scavenging. These may be slightly harder to get your head around, but they are well worth understanding before you enable them.

With Aging there's a minimum value to be aware of: never set the Refresh Interval lower than 1 day, and try to avoid going anywhere near that low. This is because systems with static IP addresses only refresh once every 24 hours by default.
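As a sketch of how the aging timers interact, assuming the MS DNS defaults of 7 days for both the No-refresh and Refresh intervals (the timestamp below is made up):

```python
from datetime import datetime, timedelta

# Illustrative aging/scavenging timeline for a dynamically registered record.
# The 7-day values are the MS DNS defaults; the timestamp is an example.

NO_REFRESH = timedelta(days=7)   # timestamp cannot be refreshed in this window
REFRESH = timedelta(days=7)      # window in which the client may refresh it

last_refresh = datetime(2013, 12, 1)

# A record only becomes eligible for scavenging after BOTH intervals elapse
# with no refresh from the client.
eligible_at = last_refresh + NO_REFRESH + REFRESH

def is_stale(now: datetime) -> bool:
    """True once the scavenger is allowed to delete the record."""
    return now >= eligible_at

print(is_stale(datetime(2013, 12, 10)))  # False -- still inside the windows
print(is_stale(datetime(2013, 12, 16)))  # True -- 14+ days with no refresh
```

This is why a too-short Refresh Interval is dangerous: a healthy static host that only re-registers every 24 hours can look stale and get scavenged.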

Finally, it's good to understand how your clients update DNS. If DHCP is doing it for them, the client won't. If you have more than one DHCP server, you may find duplicate records. Are you seeing problems like that?

LVL 28

Assisted Solution

mikebernhardt earned 800 total points
ID: 39725303
So I would just make sure that all 4 DCs are running DNS; if not, install it. Make sure your DNS clients point to all 4 (via DHCP or however they are configured) and you should be good.
