
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 937

How do I access a router behind a firewall for remote management?

I have a client site where we are setting up a VPN tunnel using a Linksys BEFVP41 router directly off the cable modem.  Behind that I have a Linksys WRT54G router set up to allow wireless connectivity.  I can perform remote management functions on the BEFVP41 router by entering the external IP address of the router and :8080 (such as http://x.x.x.x:8080).  I then get the web-based management screen where I enter my user name and password.  All works well.

But I also need to perform remote management on the WRT54G wireless router behind the BEFVP41 VPN router.  I assumed that if I changed the remote management port on the WRT54G router to 8081 I could access it by entering the external IP address of the BEFVP41 router (such as http://x.x.x.x:8081).  No success.  I even tried adding a forwarding entry on the BEFVP41 router to route any port 8081 activity to the internal LAN address of the WRT54G router.  Again, no success.

Can anyone tell me how to access the WRT54G router for remote management functions in this configuration?  Thanks.
1 Solution
Basically, you've put both routers in the same broadcast range, as the broadcast source for one gateway: broadcast is on, and you've given that to both routers by assigning them both 192.168.1.x addresses.

That's a mistake.

First, make the second router; note where the "2" is.

Now you can access the second router, provided you specify the proper gateway on one of your machines.  The first router's gateway is and the second router's gateway is

With two machines, preferably, assign two different IPs: give machine one 192.168.1.x, where x is something like 47 or 100, and give the second machine 192.168.2.x with the same kind of x.  The starting value depends on the initial settings of the router; some DHCP ranges start at 47 and others at 100.  There is also another range for wireless; you'll have to check the specific model to find out what it is.  When both routers can be logged into, the real work begins: getting the two gateways to communicate.  Most home routers have real problems routing between different subnets, that is, when the third octet of the IP differs, as in and  The "1" and "2" in that octet are the problem for home routers.

On our network, an XP Home machine is assigned a static address whose gateway is a Linux Server ethernet card.  Basically, the XP Home box sees the Linux box as the gateway; it is connected to Router2, which has its WAN address (gateway) on one subnet and its LAN IP on another, while the other Router3 holds the static Internet connection IP (gateway).  As you can see, it's not a simple setup to get XP Home to work.  The Linux Server is plugged into Router2 on one NIC and into the Internet on the other NIC.  The Linux Server acts as both the Server and the firewall between the Intranet and the Internet.

Some will suggest DMZ and other methods; they are not secure.

To route between two routers and the Internet/Intranet, you must make the routers unique in their broadcast range.

The second problem is getting everybody to talk to each other.  Currently we use Apache for this, as it works better for us than other methods such as Samba and various forms of login.  Users simply go to either the Internet Web Site or the Intranet Web Site.  We don't use port 8080 at all on the Internet server, as it is more of an interference than a safeguard; port 80 is fine for the Apache Web Server on the Linux Web Server.  For the internal Web Servers, we have one on the Linux Server's internal NIC and one on the XP Home box.  You can use port 8080 on the XP box, but there is really no point if you're behind a solid Server like Linux and routers.  Apache will run on any XP Home box.

There's really no need for all the VPN stuff.  It overcomplicates things and has no more security than normal network operations.

Your real problem is the gateway and broadcast addresses; solve that problem first.  You might also want to download and install Ethereal on at least two machines to troubleshoot TCP/IP connections between machines and routers.  It's free.  Install it on the two machines you're going to test first, then read and learn how to use it.

Most routers are accessible via a simple Web interface, which is to say, a simple Apache interface.  Most of the web is run by Apache.  You can download that too and install it to see why.

StevePimer (Author) commented:
Thanks for the quick comeback.  Our plan is to use the VPN router to set up a tunnel to two other sites.


PPS:  One last note:

In a professionally secured system, you don't need firewall software: the Server is the firewall.  The router should never be used as a firewall.

I have done everything to invite attacks on our servers, and they haven't failed yet.  We do run a good Antivirus on the XP machines and firewall software on each of them; that is the proper use of firewall software, if only because it is nearly always Microsoft that is vulnerable.

We set aside a separate Mail Server with Antivirus to filter email; email is forwarded directly to that machine, where virus detection takes place.

We do have what may be considered a Virtual Private Network, but not of the usual form.  We use Split Brain and Chrooted DNS to isolate the Internet from the Intranet.

This is much more effective than firewalls and firewall software.

Firewall software is for client machines, not a Server.

These programs have never failed us or given a false positive:


We think they are the best there is.

We've been around a long time and our Private Network is an assigned Enterprise Number Network.  You can get one of these if you look hard enough for it at IANA.  This too is used for security within the network.

To your question:
If you have not already done so, set up your BEFVP41 router with an internal address of and a netmask of  Set your DHCP range to a block within that subnet, leaving a few spare addresses for you to use statically.

Set up the WRT54G router with an EXTERNAL (WAN) IP on the 192.168.1.x network (a static address outside the BEFVP41's DHCP range), netmask  Set your INTERNAL address to, netmask, and set your DHCP range like above (with a 2 in the third octet instead of a 1).

This should stop the two routers from bleeding dynamic addresses across the two networks and allow you to keep most of your current configuration. Forwarding from the BEFVP41 shouldn't be a problem. If it proves to be, you're either not forwarding the port correctly from the BEFVP41 or you're blocking that port either on the BEFVP41 or the WRT54G (or both). This setup will also keep the network connected to the WRT54G segmented from the rest of the network.

GinEric FYI:
Private IP addresses are as follows: - (10/8 prefix) - (172.16/12 prefix) - (192.168/16 prefix)

Taken directly from RFC 1918. You do NOT have to make a request to use these addresses for private networks; they are never routed on the public internet. The only time you need to make a request is for PUBLIC IP address space.

As to VPN not having any more security than "normal network operations", I would suggest you have another look at RFC 2401. Anyone could set up a sniffer on your network and see all of your traffic. VPN tunnels encapsulate the data in an encrypted packet (generally speaking), making sniffers almost useless (not entirely, but you'd need a lot more than a sniffer to do anything with what you captured). Chrooted DNS is no substitute for VPN. Your clients can still get to the servers they really want by typing in the IP address. DNS only does resolution from names to numbers, and my bet would be that all of those servers could be "found" by querying the DNS servers in front of them.
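The point that DNS is only name-to-number resolution, and can be bypassed entirely by a client that already knows the number, is easy to demonstrate; a minimal sketch:

```python
import socket

# DNS (or the hosts file) maps a name to a number; nothing more.
print(socket.gethostbyname("localhost"))  # a 127.x.x.x loopback address

# A client that already knows the number needs no resolution at all, e.g.:
# socket.create_connection(("", 80))  # hypothetical internal server
```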

You then go on to say you use Apache to get everyone to talk to each other. Apache is a web server, used for serving web pages and scripts. If you're using Apache for configuration, you're actually using some other software running on Apache (like Webmin, perhaps). Apache can in no way, shape, or form substitute for Samba: you can't do file sharing with it, and you can't even upload files to it (again, unless you're using a script running on Apache) without an FTP or SSH server behind it as well, which is not part of Apache.

You also point out that a DMZ is insecure. A DMZ is not some protocol, hardware, or software you can go buy; a DMZ refers to a network configuration. The idea is to have a publicly accessible network for public access, in which the firewall is configured to allow services (such as WWW) out of a corporate network. You would then have your "real" firewall (completely separate from the DMZ) protecting your publicly inaccessible computers from the internet (computers like the client stations in the accounting office). DMZs are called that because usually, when a security incident occurs, it occurs there (or the SA wishes it did, because it contains no sensitive data).

As to TCP/IP troubleshooting, ping and traceroute (tracert on Windows) are packaged with your OS; anything more than that is just pretty windows (well, OK, nslookup for your DNS issues too). These can be run from a command shell (Start > Run > cmd) by typing, for example, ping x.x.x.x or tracert x.x.x.x.

nslookup takes a bit more, but then the question doesn't really pertain to DNS, does it?

Lastly, we come to firewalls. I spent six years doing nothing but firewalls. In my experience, the best defense is a layered one. You should have ZoneAlarm on every Windows machine; that should be part of your policy in a corporate environment. ZoneAlarm has been around for ages and they know what they're doing. I would NEVER get a firewall (hardware or software) that I had (a) never heard of, (b) never personally used, or (c) never seen in Information Security magazine (as a review, not as an ad). Your servers should all be running some form of firewall as well. If they're *nix servers, you have firewalling capabilities built in; learn how to use them. If they're Windows, as I said, get ZoneAlarm. This is in addition to the firewalls you have protecting your overall network.

Also, the server itself (be it www, mail, or what-have-you) should never be the only firewall for a mission-critical network. Your first line of defense should be a bridging firewall that connects your public servers (those with public addresses) to the internet. A bridging firewall cannot be hacked simply because it cannot be seen: it should have no IP addresses assigned to it, either internal or external. Some people do give the internal interface an address for administration; that all depends on how much they trust their internal network. That said, I'd rather have a router as my firewall than a server any day.

I'd hate to be considered a flamer, so just see all this as constructive criticism :)

I should point out, for your actual netmask, you would enter on those routers.
All taken as constructive.  No problem.  But I am allowed to disagree.  ZoneAlarm has a lot of false positives; depending on a name brand is often the pride that goeth before the fall.  ZoneAlarm itself, as I recall, has been virused.  A Split Brain chrooted jail sets permissions for the DNS such that it cannot be hacked without physical site access.  Speaking of which, it would take five minutes to hack the "bridging firewall", and it has been done already.  Never say "never."

Guardian is now part of Broderbund, and older than ZoneAlarm.  Some call it AVK.

There are basically two approaches to security, government and corporate.  The government one works better.  See the NSA site for guidelines on security, if you really want a secured system.

Oddly enough, your answer looks just like mine, using different octets in the third position, and the use of the same subnet range for DHCP questions the need for the second router.  The second router, in the same range, would mean the internal and external routers are not isolated, as they would be with two different DHCP blocks.  Most corporations I've worked for, like AT&T, Merrill Lynch, and others, use an internal Web Server for communication because it restricts access to only the Web Server of the Server.  They do this at over 1,000,000 offices, serving over 30,000,000 clients worldwide.  Works for them!

I don't think Cisco sees the job of their routers as firewalling; rather, it is to route packets on the network and Internet.

Encryption eats up both bandwidth and CPU time; therefore, it should be limited to proprietary data, and even then you don't have to encrypt every packet.  If you use encryption with overkill zealotry, eventually the network is going to crash very, very hard, and masses of encrypted data will be lost.  It only takes one instance, and there's no law that says it will come only from the outside.  Sometimes it's better to log and surveil your enemies than to bury yourself in the sand.

That would be fear of sniffers; as Franklin Delano Roosevelt said, "the only thing we have to fear is fear itself."  Too much fear paralyzes action when action is needed because, never having had any action, the victim does not know how to act or react.

Most of the security guys in the security magazines and in industry tend to ignore things like viruses once they have a patch.  I recently published a memo for Eric Howes arguing that this approach is flat out wrong and that detecting, educating, and eliminating the threat is the correct path, not burying heads in the sand.

I suggested that a popup be served to the offending client, telling them they had just sent a virus to the Web Server and who they could contact at the offended server to get help fixing their computer.  That's part of education; detection is done by the server, and elimination is up to the informed user.

A corporation's network does not operate in a vacuum, which basically means there really is no such thing as a truly Private Network; that's why it's "Virtual," or almost "true."  Corporations need both customers and employees to operate on the web; it IS where the business is.  It's no longer just at HQ or the office.  In fact, even employees won't let it be, because they want to use the web for its ease of use, its efficiency, and its functionality in all situations.

I cited Apache because it is the leading Web Server, and it will continue to be so.  With HTML, the entire concept of networking is changing radically, as httpd and HTML take over more and more of the tasks of the network and operating system.  The day may soon come when they are both incorporated into a new Operating System with all kinds of built-in database functionality, security, etc.  It's not "just a web server."

Firewall software on a server is self-defeating; once breached, the flood cannot be stopped.  Behind all security and firewalls there should be human intelligence, and that should be the real firewall: the human master of both the hardware and the software.  Machines can't think and they never will.  Some things just have to be accepted for what they are: the truth.  All of the software giants have been virused for the very reason that they depended on software rather than their own intelligence.  And you can't gather intelligence while walled off in a welded-shut box.

I did say every Windows box should have both AV and Firewall software installed, but I also said that the server should not be bogged down with AV and Firewall software clogging its CPU time.  Instead, a human brain that knows how to configure the server properly and then monitor it is still better than any AV or Firewall software on a server.  If you don't think that is true, you've abdicated to the superior intelligence of the machine, which it does not possess.

The DMZ is offered in most small routers these days as a checkbox item, and all the experts agree that it provides a false sense of security.  The term, derived from the De-Militarized Zones of various wars, is a good example, as those zones also provided a false sense of security; we had bases way over on the other side of the DMZ, hence it was a facade.

There are always at least two ways to do something.  You should never consider it an insult to read or hear the other way from someone else.  You shouldn't consider such sage advice flaming; I know I don't.  Then again, you don't know whether someone needs to "learn how to use" something, and it normally sounds a little abrasive to tell someone that; they may have a doctorate in the subject, in which case the remark would be quickly dismissed as "uninformed."

I came here to answer a few questions to test the waters: to see whether it would be worth my time to answer some questions, and to continue working on my book, which is filled with examples from these conversations, from a collegium of names you'd recognise if you've been in the computer industry and know a lot about it, by education, experience, or both.  As far as I'm concerned, I'm here to gather information myself.  I don't believe in the existence of either flamers or trolls; those are stories we tell to children in faerie tales, like Grimm's.  I hold that a conversation can be both intelligent and diverse, even in disagreement, without losing one's sense of manhood.  It is, after all, up to the men to teach the boys.

I must say, quite well put.

Split brain chrooted DNS simply separates external DNS queries from internal ones; being chrooted means the DNS daemon is confined to its own directory jail (and it should never run as root anyway). That doesn't say anything for the rest of the system and its security.

Also, a properly configured bridging firewall has no IP address (on either side) and passes everything it gets to somewhere else. Because of this, it's a completely transparent device on the network. The systems behind it can't see it, nor can the systems in front of it. If someone were to crack one of these, they'd first have to do it by the MAC address of the NIC (which would be a feat in itself to obtain) they were going to try to get in on, then they'd have to get through the firewall running on it to get it to actually process packets instead of passing or dumping them. In other words, one would have to be the cream of the crop to get into one of these, and then it would still take quite a bit longer than 5 minutes. Frankly, if you can show me documentation of where someone has done this, I'd really like to see it (I'm not being sarcastic, I really would like to see it). Note that I'm not talking about a bridge device here, I'm talking about a bridging firewall - a server (usually) running some flavor of *nix (usually - well, always that I've seen) that's configured as a bridge and a firewall.

I know of both Guardian and Broderbund. I also know that any firewall software running on Windows is vulnerable simply because it runs on Windows. But then, I see all firewalls as vulnerable; it's simply a question of how vulnerable and how easy to exploit. I mention ZoneAlarm because it's (very) user friendly (one doesn't need a CISSP to use it) and it works as well as or better than any other software firewall. Plus, every exploit I know of for it can be fixed by upgrading (or patching).

As to there being two different approaches to security: there are actually hundreds, if not thousands. The internet is still quite new and everyone is still figuring things out; this is why we have things like spam and hackers to worry about at all. The NSA's view on security is actually such that if you really want your data protected, you unplug your machine from the network. In fact, they have people who do nothing but manually enter data from one computer (network) to another. This is known in the industry as "air separation", though the term isn't as meaningful with wireless networking now available. Most corporations are now starting to follow the NSA guidelines for security, due mostly to Sarbanes-Oxley.

The isolation of networks was on purpose. By far the best approach to security, virus control, and incident control is, and always has been, to segment your network as much as possible. This gives you better control over what goes into and comes out of each network segment. Web servers are for one-way communication, used by (usually) division heads, PR departments, and so on. Most corporations I've worked for (I won't get into slinging names) demand some level of file sharing. Some of these do indeed use Samba; most use Active Directory services. With the exception of Unix NFS, that's about it for file sharing. File sharing is different from the one-way communication offered by web servers and is critical to the productivity of most corporate computing environments. I will also state here that if the second router were on the same network, you wouldn't need a router there, you'd need a switch. If the router is a requirement (for wireless access, say), then it should either be configured as an access point (not a gateway), or it should remain isolated from the wired network for security purposes (which is what I would do anyway).

I think Cisco does want their routers to be seen as firewalls; this is why they built the PIX line. Cisco has a powerful vision of network appliances that handle everything from firewalling, routing, and NAT to virus protection, all from one device, as do other network device manufacturers. However, that wasn't the point I was trying to make. What I was saying was that there should be *something* in front of the servers as a first line of defense, rather than having a server itself doing all of the firewalling for the entire network.

Encryption eats up CPU time on the encrypting and decrypting systems; the in-between systems don't do anything but pass packets. Encryption does eat up more bandwidth; that's why you have to decide whether the data you're protecting is worth it. If that data falling into the wrong hands could cost you a fortune, you'd be an idiot not to encrypt it. If it doesn't really matter whether that data escapes, why bother encrypting it? I've never had a network crash because it was passing encrypted data; the network as a whole simply sees packets and doesn't care what's in them. I never said to stick your head (or your computer) in the sand, but if you're running any sizable network, you'd better be aware of the threats that face it, and with new regulations on the horizon, you'd better use due and proper care in protecting your data. We don't know about every type of threat that exists for networks as a whole (as I mentioned, this is all fairly new), and we can't simply "not act": what would be the point of having a network you can't use? However, we can protect against the threats we do know about, and we can try to be as prepared as possible for those we don't know about yet.

There's quite a bit more to worry about on a corporate network than "fear itself".

Your popup idea is already in use; look at Symantec, McAfee, and Trend Micro for network-based virus scanners. "IT guys" (of which I am one) ignore a patched virus because it's patched and we don't have to worry about it anymore. Once it's patched, any computer scanned by the network scanner is protected, because the virus gets caught (by the patch). A patch (usually) applies to a virus scanner, enabling it to catch more viruses.

You're right about Apache being the leading web server, but again, Apache has nothing to do with anything but serving web pages. Apache does not do networking, and it doesn't run applications (those are done with C++, Perl, PHP, Java, and the client's browser). HTML stands for "HyperText Markup Language", and httpd is the "HyperText Transfer Protocol daemon". The key word here is "HyperText"; while it can do linking, image display, and embedded applications, that's pretty much all it does. After all, it's just text. It doesn't even do the databasing (usually handled by MySQL). While I agree that one day we may all use an internet-based operating system, it will have to be far more robust than just hypertext. As an aside, Google is working on just that: an internet-based operating system.

Firewall software on a server is the last line of defense. There is a reason servers have built-in firewalling capabilities. If your server is being hit so hard that the firewall is impacting its performance, then the reason it's being hit so hard should be dealt with (if it's an attack), or another server should be set up and load balancing applied (if it's not an attack; also with its own firewall), or, in rare cases (the served data needs to come from the same source for some strange reason), it should have a dedicated firewall in front of it. Note that before this "last line" firewall has to do anything, every firewall between it and the outside world would have to have been breached. It also means that if the rest of the network's protection goes down or is taken offline for any reason, the server isn't left unprotected.

You're right, machines can't think. They do what we tell them to do; they know what we tell them to know. All of the software giants have been virused for this reason: we're human, and as such we all make mistakes. In the early years of the internet, we found that we were way too trusting. If someone finds a hole in your network, they will exploit it; if someone finds a hole in your application, they will exploit it. Humans write all the software, and as such it's prone to error.

As to gathering intelligence, that's what honeypots are for. Systems designed specifically to attract crackers and log everything they do.

There's no way a human brain can sit at a terminal and scan 8 million packets per second, logging where each one came from and went to, how it made the request, whether it was a crafted packet, and so forth. If your servers aren't doing any logging, there's nothing a human can do except (perhaps) find out about it later. If your servers are logging, they can be dumping packets as well. A properly configured Linux box running iptables is firstly more secure than one without it, and secondly faster, because it processes only what it considers good packets and dumps everything else. On top of stateful inspection it also does traffic shaping, resulting in fewer collisions on the NICs; packets don't have to be resent, which means lower bandwidth use and faster response times. I'd like to see a human do that. You talk about a human knowing how to configure the machine properly; that's exactly what I'm saying. Configuring a server properly includes setting up the firewall on the server. Besides, as I said before, it was a human (or a group of them) who wrote the software to begin with. If you don't have that firewall set up, how will you know if the unspeakable happened and someone actually got in? Or are you going to wait until the home page of your dot-com has been replaced with something horrific?
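The drop-by-default behavior described here (accept only what matches an explicit rule, dump everything else) can be sketched in miniature; the rule set below is hypothetical, not an actual iptables configuration:

```python
# A toy drop-by-default packet filter in the spirit of iptables:
# every packet is dropped unless an explicit rule accepts it.
ALLOW = {
    ("tcp", 80),   # web
    ("tcp", 443),  # web (TLS)
    ("udp", 53),   # DNS
}

def filter_packet(proto: str, dst_port: int) -> str:
    """Return 'ACCEPT' for allowed traffic, 'DROP' for everything else."""
    return "ACCEPT" if (proto, dst_port) in ALLOW else "DROP"

print(filter_packet("tcp", 80))    # ACCEPT
print(filter_packet("tcp", 8081))  # DROP: not explicitly allowed
```

A real firewall adds connection state, interfaces, and source matching on top of this, but the default-deny principle is the same.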

Yes, I know about the little radio button in most small routers, and I agree: they provide a false sense of security. As I said, a DMZ is more than software, more than hardware. The DMZ, as I stated in my other post, is there as the place you want the fire directed to if one should break out.

With that I'll close. This thread is getting to be something better suited to a regular forum - and Lord knows there are plenty of them out there dealing with just these types of issues.

StevePimer (Author) commented:
Thanks for all the technical responses.  I didn't mean to start a firestorm with my simple question, but I have enjoyed the show nonetheless.  To make things easy at the sites, I am planning on using the Linksys BEFVP41 VPN router/firewall as my first entry point behind the cable modem.  That will allow me to set up VPN tunnels to the sites I want in the wide area network.

Then I am going to replace the Linksys WRT54G wired/wireless router with a simple Linksys access point to bring back the wireless aspect of the network.  I am doing this because the WRT54G has no option to set it up as a simple access point (only as a router or gateway).

I will also set up an 8-port 10/100 switch to accommodate the wired connections I need (5 connections) when I remove the 4-port WRT54G router.

That setup should give me all the functionality I want.

Once again, thanks.
