coolsport00 (United States of America)

asked on

Unable to connect to own website from within organization

Ok, so I have a problem within my organization that has me baffled to no end. And, to be honest, I don't even know where to start looking. So, this is the issue. At my organization, BEFORE I got there, when AD was implemented, for whatever reason, the internal domain name was created to be the same as the external domain name. To be honest, I know I should probably know why that is an issue, but I don't, outside of knowing it's best practice to make them different (i.e. internal = mycompany.local vs. a separate external name).

Ok, so on to the point. For whatever reason, and at no consistent time of day, we are unable to connect to our external domain website, but we can access any other website. We assume the issue may be DNS related, but we're not sure how to check. And if it isn't, what else could it be? And if it is DNS related, why are we able to connect to any other (allowed) website with no issues, but not to our own?

We have a "ping monitoring solution" (IPCheck, and now PRTG) that pings certain websites (besides, of course, our servers and network infrastructure), and when this issue happens, we get notified that our external website is not pingable. So, it's not pingable, nor is the website accessible, ONLY FROM WITHIN OUR ORGANIZATION. If I ping the site or try to access it from my smartphone or from home, I can. I work at a financial institution, and we have kiosk PCs for members to use; if they can't access their account from our website when they come in, that obviously has a negative impact on our member service.

Can someone please provide some direction on what to check? And, I have to be honest, this has been going on for 3-4 years (at least), and to this point we haven't been able to figure out the issue. For most of the day our site is accessible, but maybe 3-4 times a week, for anywhere from 30 minutes to 1-2 hours, it isn't. And, of course, it doesn't matter which browser is used.

Let me know if you need anything further.

Thanks much!
Bxoz (France)

I think you have to add a www record to your DNS.
If your Active Directory domain has the same name as your website, your DNS will send all queries for that name to your AD.

Check this website; it may help you:
Yes, this sounds like your website needs to be added to your DNS. If that does not fix it, you might check to see if your site is infected and whether your firewall is blocking access to it, but that's a long shot.
AD "owns" the domain name, so any lookup for the bare name will always point to a domain controller.
If the domain controllers all hosted the same website, it would work fine.
But most of the time, DCs don't host websites, so using the bare domain URL will fail.
Even if external-facing DNS records are created to point to the web server, you can't resolve the bare name correctly from the inside.
If your web server has a "www" alias in internal DNS (and people actually use it, instead of the shorthand URL), then it should work fine.
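One way to see this split-brain behavior directly is to ask each resolver for the record and compare answers: query one of your internal DCs and a public resolver for the same name. A rough sketch using only the Python standard library (the server addresses and hostname below are placeholders, and it assumes plain UDP/53 is reachable from your workstation):

```python
import socket
import struct

def build_query(name, qid=0x1234):
    """Build a minimal DNS A-record query packet (RFC 1035 wire format)."""
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)  # RD=1, 1 question
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)     # QTYPE=A, QCLASS=IN
    return header + question

def skip_name(data, offset):
    """Skip over a (possibly compressed) domain name in a DNS message."""
    while True:
        length = data[offset]
        if length == 0:
            return offset + 1
        if length & 0xC0 == 0xC0:      # compression pointer is 2 bytes total
            return offset + 2
        offset += 1 + length

def parse_a_records(data):
    """Extract IPv4 addresses from the answer section of a DNS response."""
    qdcount, ancount = struct.unpack(">HH", data[4:8])
    offset = 12
    for _ in range(qdcount):
        offset = skip_name(data, offset) + 4   # skip QTYPE/QCLASS
    addrs = []
    for _ in range(ancount):
        offset = skip_name(data, offset)
        rtype, rclass, ttl, rdlen = struct.unpack(">HHIH", data[offset:offset + 10])
        offset += 10
        if rtype == 1 and rdlen == 4:          # A record
            addrs.append(".".join(str(b) for b in data[offset:offset + 4]))
        offset += rdlen
    return addrs

def resolve_against(name, server, timeout=3.0):
    """Ask one specific DNS server for name's A records."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_query(name), (server, 53))
        data, _ = s.recvfrom(512)
    return parse_a_records(data)

# Placeholders: swap in your DC's IP and your real hostname, e.g.
# resolve_against("www.example.com", "10.0.0.1")   # internal DC
# resolve_against("www.example.com", "8.8.8.8")    # public resolver
```

If the public resolver returns your ISP-hosted web server's IP while the internal DC returns a DC address (or nothing), that's the split-brain problem in one screenful.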
footech (United States of America)

This solution is only available to members of Experts Exchange.
coolsport00


@millardjk @footech - yes, typing the shortened URL (no 'www') makes the request time out (IE Cannot Display This Page). And the org does know the full address they need to type in. The problem is, maybe 3-5 times/wk, the site 'goes down' from within our org (not actually down, just inaccessible).

Now, something I neglected to mention above, but you guys have thus far assumed correctly - we do not host our external website internally or within our DMZ. Our site is maintained at a local ISP and has a public IP.

Something I don't remember if I tried is typing the IP of our website into a browser to see if I could get there that way, although again, if I were to ping the IP (gosh, I think...maybe I just tried to ping the site name), it would fail. @footech, since I've forgotten whether I tried to ping, or to reach our site by IP in a browser, the next time I see it "down" I will try both. It was inaccessible for about 2 hrs today. I'm sure it'll go down again sometime later today :P

Thanks for all your comments thus far. Any further insights/suggestions are welcomed!

Regarding the offsite host: how do you know it's just your internal users who cannot access it?
Because I can connect on my smartphone
Ok, so I'm pretty convinced it's some kind of DNS issue because I can type in the IP address of our website and can get to it fine, but going to it by 'hostname' it bombs out on me. :/

And, I found out yesterday from my Director that we have ping disabled; the only ports open to our site are 80 and 443. Tracert using the IP of our site times out after a certain hop. nslookup fails when using the DNS name (of course) but works when using the IP.

Not sure what else to do at this point...
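Since "works by IP but not by name" is the key symptom, that test can be made repeatable with a small script that checks name resolution and raw TCP reachability separately. A sketch (hostname, IP, and port here are placeholders for your site's real values):

```python
import socket

def diagnose(hostname, known_ip, port=443, timeout=3.0):
    """Distinguish a DNS failure from a connectivity failure.

    Returns "dns" if the name won't resolve but the IP is reachable,
    "network" if the IP itself is unreachable, "dns-mismatch" if the
    name resolves to something other than the expected IP, else "ok".
    """
    try:
        resolved = socket.gethostbyname(hostname)
    except socket.gaierror:
        resolved = None
    try:
        with socket.create_connection((known_ip, port), timeout=timeout):
            reachable = True
    except OSError:
        reachable = False

    if reachable and resolved is None:
        return "dns"
    if not reachable:
        return "network"
    return "ok" if resolved == known_ip else "dns-mismatch"

# Placeholder usage during an outage:
# print(diagnose("www.example.com", "203.0.113.10", port=443))
```

Running this during an outage window should tell you in one shot whether it's DNS or the path to the ISP.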

@footech - I followed the 1st scenario you pointed out and, for the most part, we already had everything configured, except for finding the SOA for our external site and adding it as a delegation. As soon as I did that, I got a notification that our site was back "up". Now, I'm not sure if that's coincidence (it was down/inaccessible for over 3 hrs), or if that worked. So...again, will be monitoring....

Does sound like a DNS issue, but now I'm confused about your monitoring solution.  Earlier you had indicated that it was pinging to test, but if only 80 and 443 are allowed, how is it checking the site?

Adding the delegation could be bypassing any issue with your internal DNS, so if the problem doesn't reoccur that's good news.  It also has the advantage that if the IP of your external website changes you won't need to update the internal record.  However, we're left with no exact answer as to what is wrong.  If the delegation doesn't work then we could continue to try to determine what's going on, but if it continues to function correctly I'm not sure you would want to go back to the old config just for troubleshooting.

Besides the two possibilities I described earlier, the only other one I've thought of is that an internal DNS server stops responding for a while.  Just to lay out some ideas, which are mostly ways of testing the possibilities I mentioned:
 - During the next outage, nslookup the FQDN against all your DNS servers to see what record is returned and whether it is the same across servers.  Clear the DNS server cache (which might clear the problem) and perform the nslookup again.
 - What are the DNS servers?  I'm thinking it's likely you're using Microsoft DNS with AD-integrated zones; is this correct?  I've heard faint mentions of the possibility of corruption in the DNS records, but I've never personally encountered it.  If I recall correctly, one way to clear it out is to make the zone non-AD-integrated (standard primary), edit the DNS file if necessary, and after replication has completed make it AD-integrated again.
 - If zones are AD-integrated, run dcdiag /test:dns /v to check for issues.
 - Make sure replication or zone transfer is functioning properly.
 - Event logs.
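For that first bullet, once you've collected each server's answer (e.g., from nslookup output), comparing them can be automated. A minimal sketch; the server IPs and addresses shown in the usage comment are made-up examples:

```python
from collections import Counter

def find_outliers(answers):
    """answers maps DNS server IP -> frozenset of A records it returned.

    Returns the servers whose answer set differs from the most common
    answer, i.e., the servers you should look at first.
    """
    majority, _ = Counter(answers.values()).most_common(1)[0]
    return sorted(server for server, recs in answers.items() if recs != majority)

# Example: the third server returned nothing during the outage.
# find_outliers({
#     "10.0.0.1": frozenset({"198.51.100.7"}),
#     "10.0.0.2": frozenset({"198.51.100.7"}),
#     "10.0.0.3": frozenset(),
# })
```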
@footech - sorry for my mis-statement. Our monitoring solution does a "get" URL call to our website, not a ping. Most of our monitoring on internal devices is with a ping though.

So, hopefully adding a delegation of our external zone SOA works....we'll see.

Yes, our 2 DNS servers are AD-integrated and on W2K3 DCs and our domain is 2K3. I do notice at times 'stale' A records on DNS servers..not sure why non-active records don't get removed...thought they did. And, sometimes when a record gets removed on our 'main' DNS (primary) server, it sometimes, though rarely, doesn't get removed from our secondary DNS server.

For now, I'll just continue to monitor. If I notice our site being inaccessible again, I'll run that dcdiag command & report the results. This issue has been happening about every day at some point. I'm hoping this resolves it. If you don't hear back from me in a few days...that'll be a good thing :) If after 4-5 days I get no 'down' email, I'll close this out.

Thanks again all for the responses.

(((fingers crossed))) :)
To this point, it has been 5 days without any 'downage', and I can't recall the last time that happened. Obviously, that's a good thing. Will give this till Monday & if no 'downage' will close this out.

Thanks all!

That's good news!

I was going back through your previous post and one statement has me a little concerned:

"And, sometimes when a record gets removed on our 'main' DNS (primary) server, it sometimes, though rarely, doesn't get removed from our secondary DNS server."

If this is truly the case, it is indicative of a replication problem, which I would investigate further.
@footech - sorry for the delayed response. That has been happening ever since I started this position "wet behind the ears", as it were, almost 7 yrs ago. Since then, we had some other AD issue and hired a consultant (maybe 4-5 yrs ago?) to check things over. Except for a couple of minor modifications (and the fact that our internal/external domain names are the same...ha :) ), all was pretty much ok; replication was/is fine. Now, that's not to say something hasn't happened in the meantime. I'll still run your dcdiag cmd here in a bit to see if all comes back ok.

Oh yeah...and to this point, still no "down" msgs for our external domain! YAY! You all don't know how relieved I am to get this tackled. You know...the interesting thing is, I think we already had a SOA delegation config'd for our external namespace internally on our DNS server, but I think the network engineer removed it a few yrs ago without realizing the implications of doing so...nor did I, it seems :P

One more post to update and I'll close this out...

Happy New Year's everyone!
Oooops....forgot this was open :) So, absolutely NO recurrences of *internal* notifications that our external website is 'down' since making the change to my internal DNS - adding the external SOA DNS server as a delegation in our domain DNS, as noted in the article @footech provided.

Thanks everyone!