Suggestions for High-Security Internet Access

I am currently assisting a client, who is essentially starting from ground zero for both hardware and operating system, in evaluating several proposals and suggestions made by other industry (banking) peers.  My client is subject to extremely detailed annual IT audits as outlined by the Federal Financial Institutions Examination Council (FFIEC).  My client wants not only to meet the FFIEC-accepted standards but to go well above and beyond them wherever possible.

Most of the system architecture has been tentatively decided...with the exception of providing internet access to user workstations.  The client does not want any physical data path, other than keyboard input and screen image, from the internet-connected machine to any workstation, thereby eliminating the typical inline AV, firewall, etc. approach.  The thinking is that when a user needs internet access, via IE, they will make a connection to a central machine/device in a keyboard-input-and-screen-image-only form, thereby isolating the "exposed" machine/session.  Ideally the thought was to have a very basic standard machine install/profile that is reset when the user closes the connection or it times out.  My experience with Terminal Server and VMWare, which seem like possible solutions, is limited, which is why I am soliciting input from those of you who may have designed a similar solution or can point out drawbacks in this approach that I have not recognized.  I should add that the number of concurrent internet users is not expected to exceed 5-10 at any one time, and dedicating a workstation/server just for this function is totally acceptable.

While this approach may not be the most direct, it is my client's opinion, which I have to agree with given their goals, that totally removing "exposure pathways", or at the very least seriously restricting them, will pay off with much smoother and cleaner IT audits/reviews.  The FFIEC is understandably concerned primarily with preventing unauthorized access to non-public account information.
serraultAsked:

mellowmarquisCommented:
Microsoft Terminal Services (MSTS) would be perfect for this. I would use a dedicated server on its own DMZ to achieve this in the most secure way, as you can segregate the network traffic and apply firewall rules of least privilege.

Q: Why would I put the MSTS machine in its own DMZ?
A: Should this server be compromised (from an internal or external source), this minimises the risk exposure of your other servers (internal or internet facing). If anyone were to find a way to exploit the MSTS box and there were other servers on the same network, this machine could be used as a springboard for further penetration.

The only traffic you want to allow from the internal network to the MSTS-hosted network would be TCP 3389, as this is the only port required for an MSTS session. The only other rules required would be those permitting traffic from the MSTS box for outgoing connections. There are two options here:

1 - Permit outgoing traffic to the internet on essential ports, probably only tcp/80 (HTTP) and tcp/443 (HTTPS) and deny all other ports as well as unsolicited incoming traffic from the internet.

2 - Permit the same traffic from the MSTS segment of the network to go through your proxy server (which should be in the DMZ). Again, minimise the risk exposure by permitting only traffic on required ports and specified by IP addresses/ranges.
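To make the least-privilege idea concrete, below is a minimal sketch in Python of the rulebase described above: internal workstations may reach the MSTS box on TCP 3389, the MSTS box may reach the internet on TCP 80/443, and everything else is denied by default. This is purely an illustrative model; the zone names and rule format are assumptions for the example, not the syntax of any real firewall.

# Illustrative model of a least-privilege rulebase for the MSTS DMZ.
# Zone names and the rule format are invented for this example only.
RULES = [
    # (source zone, destination zone, protocol, destination port, action)
    ("internal", "msts_dmz", "tcp", 3389, "permit"),   # RDP from workstations to MSTS
    ("msts_dmz", "internet", "tcp", 80,   "permit"),   # HTTP out from MSTS
    ("msts_dmz", "internet", "tcp", 443,  "permit"),   # HTTPS out from MSTS
]

def evaluate(src_zone, dst_zone, proto, dst_port):
    """Return the action for a packet; anything not explicitly permitted is denied."""
    for rule_src, rule_dst, rule_proto, rule_port, action in RULES:
        if (src_zone, dst_zone, proto, dst_port) == (rule_src, rule_dst, rule_proto, rule_port):
            return action
    return "deny"  # default deny = least privilege

print(evaluate("internal", "msts_dmz", "tcp", 3389))  # permit
print(evaluate("internet", "msts_dmz", "tcp", 3389))  # deny: no unsolicited inbound RDP
print(evaluate("msts_dmz", "internal", "tcp", 445))   # deny: no path back into the LAN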

It is worth noting that this will prevent users from being able to use file transfer services such as FTP, TFTP, SCP etc., as the files will only be accessible from the MSTS server. These services should not be permitted or required anyway, as your organisation should have a formal process for transferring files into and out of the organisation.

Additionally, Citrix would do this as well, but it is a little more costly and requires internal resources with the skills to deploy and manage it. MSTS is essentially a cut-down version of Citrix MetaFrame.

The recent flurry of activity concerning the 0-day exploit of the MS WMF Execution vulnerability is a perfect example of why the kind of architecture you are considering is absolutely critical.

http://isc.sans.org/diary.php?storyid=972
http://www.microsoft.com/technet/security/advisory/912840.mspx

HTH
-Mark
serraultAuthor Commented:
Mark,

First, thanks for confirming my idea was not out of line.  A couple of people involved in this process were quite negative on the approach...comparing it to taking 2-3 steps backward as far as current/available technology goes.

Trying to keep the end solution basic, low-maintenance, and easy to review/understand, my thought was that the ONLY machine that needed to be attached to the internet, DMZ or behind a firewall, was the MSTS box.  Would I even really need a proxy server if I only allow ports 80, 443, and perhaps 20-21?  Your point about preventing users from using FTP, TFTP, etc. to transfer files to the workstation is exactly what we want.  While we might allow FTP on the MSTS box, the only way we would want a file to be transferable from the MSTS machine to the internal network is by copying it from the MSTS to removable media, which, given the planned physical access restrictions, would only be possible for selected senior management knowledgeable in the policies for doing so.

I will look further into it later, but I assume traffic on port 3389 consists only of keyboard and screen-image traffic.  Does MSTS do any kind of data-stream validation on the port 3389 traffic to protect against any other form of traffic?  I am just thinking of the ways that applications like GoToMyPC and LogMeIn are able to get around firewalls.

As far as your comment about other servers being vulnerable if on the same network as the MSTS...I was thinking something like:

  Internet Access Point -> Eth0 of a firewall, or router/firewall, device (Public IP as assigned from ISP, close all ports but 80,443, & 20-21)
  MSTS ->  Eth1 of above device (192.168.1.x network with PAT, close all ports but 80,443, 20-21, and 3389)
  Internal Switch with Internal Servers and W/S connected -> Eth2 (10.x.x.x addressing, only allowing port 3389 in and out of Eth2)

or perhaps splitting the firewall out into 2 separate devices.  Either way, would this not be the most basic, yet secure, means of allowing internet access based on your MSTS suggestion?  Just for some balance, have you ever worked with VMWare, and is it also a viable option?
mellowmarquisCommented:
1. Firstly, have the naysayers been so kind as to produce an alternative technology that can do this? Most people are still very paranoid about Windows and MSTS security after the experience of working with Windows 2000. This is very valid, as the technology at the time was both young and implemented without a great deal of concern for security. The technology has matured and is worthwhile, as long as you run it on a Windows Server 2003 platform.

2. An additional point worth mentioning is the need to ensure that the Terminal Server:

a> Has an operating system that has been hardened according to best practices from Microsoft, NSA, NIST or CIS. NIST usually reviews Microsoft's own recommendations for hardening servers before they go public, so the MS recommendations are usually quite good.

b> Has some form of OS patch management happening which is carefully managed, maintained and monitored. Again, this won't protect against 0-day exploits, but it will certainly reduce the attack surface of the server.

c> Has antivirus software running which is regularly updated, managed and monitored, and which scans regularly.

d> Has anti-spyware software running which is regularly updated, managed and monitored, and which scans regularly. I find the MS AntiSpyware beta tool quite effective, and it integrates better than most products with the Windows API (i.e. it does not hog resources, etc.).

e> Has a host based firewall installed, updated and maintained.

3. The danger of allowing users to FTP files from the internet directly onto the MSTS box is that users can download and execute anything off the net within the MSTS environment. Proper user permissions could minimise the risk of this installing malicious products/code, but it is not (in my opinion) a threat you can completely mitigate. Personally, I'd prefer some formal process for file transfers, but with the prevalence of such activity these days, this could have significant administrative overhead.

This is where a proxy server can be useful. All data requested from the internet will be filtered through the proxy which can have a rulebase allowing FTP services only to certain users or for certain file-types. Products you would be looking at here if you're interested are NetIQ WebMarshal or MS ISA 2004. The additional bonus of using a proxy server is the ability to log all user activity which may also be required for compliance purposes or incident analysis after an intrusion (touch wood).
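As a purely illustrative sketch of the sort of rulebase such a proxy might enforce, the Python below permits FTP downloads only for approved users and approved file types. The user names, extensions and function name are invented for the example; they are not taken from WebMarshal or ISA.

# Illustrative model of a proxy rulebase: FTP downloads are allowed only for
# approved users requesting approved file types. All names are examples only.
from urllib.parse import urlparse

APPROVED_FTP_USERS = {"jsmith", "mjones"}        # hypothetical user IDs
APPROVED_EXTENSIONS = {".pdf", ".txt", ".csv"}   # hypothetical permitted file types

def ftp_download_permitted(user, url):
    """Return True only if an approved user requests an approved file type over FTP."""
    parsed = urlparse(url)
    if parsed.scheme != "ftp" or user not in APPROVED_FTP_USERS:
        return False
    return any(parsed.path.lower().endswith(ext) for ext in APPROVED_EXTENSIONS)

print(ftp_download_permitted("jsmith", "ftp://ftp.example.com/reports/q4.pdf"))   # True
print(ftp_download_permitted("jsmith", "ftp://ftp.example.com/tools/setup.exe"))  # False
print(ftp_download_permitted("guest",  "ftp://ftp.example.com/reports/q4.pdf"))   # False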

4. Regarding data-stream validation etc., MSTS does not use any form of validation that I am aware of. This kind of validation may be possible on the firewall, depending on what you are using. I know that Check Point's SmartDefense is able to identify such traffic in most cases and will drop it automatically. I see this traffic being dropped in logs every week and can confirm that it works well. Again, this depends on your firewall.

5. I have used VMWare and Windows Virtual Server a lot in lab environments, but never in live environments, and never in an implementation like the one described above. Are they talking about running a farm of about 10 virtual machines running concurrently? Whilst VMWare does contain many threats and keep them from infecting the host operating system of the terminal they are running on, the virtual machines are still (for all intents and purposes) seen as just another machine on the local network segment. They communicate over IP, and any open ports, applications or local OS vulnerabilities can still propagate through the network. If they are all running a standard image, then if one virtual machine is exploited, they can all be exploited, making containment and incident analysis more complex.

If however your virtual machines are running a hardened flavour of unix with Firefox as a browser and are regularly patched and maintained, it could be argued that this will be more secure. Again, I have never had much to do with such an implementation, so hopefully someone else on EE can chime in with their suggestions. I would investigate this option if I were you because you don't want to rule anything out based on ignorance alone.

It's been a while since I've seen GoToMyPC etc. being used. Back when I had to deal with them, they could only tunnel through ports 80 and 443, though I believe nowadays they can use any port. Proxy filtering, smart firewalls (such as Check Point) or packetshapers are what you would need here. IMO, every network should have a packetshaper, not only to prioritise traffic for just such purposes, but because it allows you to watch your traffic flows and identify infected machines or rogue traffic at a glance. Firewall rules blocking access to these sites on all ports would work well, though this is difficult on some firewalls such as PIX, which will only allow rules based on IP address or subnet rather than domain or URL filtering.

6. Regarding your firewall rules:

I'm not sure what type of firewall you are using so take these suggestions with caution (or let me know what you've got to confirm). I am assuming it's a stateful firewall.

a> You should not need to open any ports on your external interface. Permitting traffic here is only needed for incoming requests, such as if you were hosting internet-facing services on ports 80, 443 and 20/21. Outgoing requests from your MSTS box establish a connection to the website, and this negotiation allows the firewall to permit traffic between the MSTS box and the external webserver until the connection timeout period has been reached or the connection is terminated (a rough sketch of this stateful behaviour is included below).

b> Again, since you're not hosting any services which need to be accessed by request from the internet, you should not need PAT. I will assume of course that you are using NAT for translation between your public IP and the 192.168.1.x network.

c> Permit rule for internal switch connection is correct. Somewhere in there I will assume you will be permitting SMTP traffic, but that you have excluded this for simplicity.
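To illustrate the stateful behaviour described in point a above (outbound connections from the MSTS box create state that permits the matching return traffic, while unsolicited inbound packets are dropped), here is a rough Python model. The connection-table structure and the 60-second timeout are assumptions for the example, not any particular firewall's implementation.

# Rough model of stateful connection tracking: an outbound connection from the
# MSTS box opens a state entry, and only matching return traffic is permitted
# until that entry expires. The timeout value is an arbitrary example.
import time

class StatefulFirewall:
    def __init__(self, timeout=60.0):
        self.timeout = timeout
        self.connections = {}  # (src_ip, src_port, dst_ip, dst_port) -> last-seen time

    def outbound(self, src_ip, src_port, dst_ip, dst_port):
        """MSTS box initiates a connection to an external webserver."""
        self.connections[(src_ip, src_port, dst_ip, dst_port)] = time.time()

    def inbound_allowed(self, src_ip, src_port, dst_ip, dst_port):
        """Permit inbound traffic only if it matches an existing, unexpired connection."""
        key = (dst_ip, dst_port, src_ip, src_port)  # reverse of the outbound tuple
        last_seen = self.connections.get(key)
        if last_seen is None or time.time() - last_seen > self.timeout:
            return False
        self.connections[key] = time.time()  # refresh the entry while traffic flows
        return True

fw = StatefulFirewall()
fw.outbound("192.168.1.10", 50123, "203.0.113.5", 443)                  # MSTS -> website
print(fw.inbound_allowed("203.0.113.5", 443, "192.168.1.10", 50123))    # True: reply traffic
print(fw.inbound_allowed("198.51.100.7", 443, "192.168.1.10", 50123))   # False: unsolicited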

For the moment, hanging the MSTS box off the DMZ interface of the firewall should be sufficient. As your network grows, you may consider adding additional firewall interfaces to separate your DMZ servers from the internal network. Additionally, a VLAN switch hanging off the DMZ port with dedicated VLANs for different parts of the DMZ (MSTS, SMTP Proxy, Servers, Remote Access etc.) would allow for future growth, though the complexity is more pain than gain at this point.

That was a mouthful. Hope your eyes haven't gone square :)

-Mark




serraultAuthor Commented:
Mark,

Yes indeed, that was a mouthful, but an appreciated one...many good points and options to add to the mix.  I fear, however, that I am not properly conveying just how basic I am trying to keep this.  Bear with me as I try a different approach and share some of my thinking:

Using Bruce Schneier's 5 step/question approach:
1) What assets am I trying to protect: Customer account information files.
2) What are the risks to these assets: unauthorized access from an insider (employee) attack, which has been addressed and is not the issue here, and unauthorized outside attacks, which are the focus here.
3) How well does the security solution mitigate those risks: the security solution I initially envisioned was that there be no connection whatsoever with any external network...period...none.  So, at this point I would assume you would agree I have totally, 100%, mitigated the risk of outside attack.
4) What other risks does the solution cause: None, that is the beauty of it.
5) What costs and trade-offs does the solution impose: No internet access of any kind.

Okay, so as to lessen the trade-off of #5, let's say I now take a workstation with a clean, up-to-date install of XP Pro and I make a Ghost image of that machine's drive.  This machine connects directly to the T1 smartjack and to no other network device or workstation; it is truly standalone, and all means of copying anything from this machine have been disabled.  Now anyone who needs to access the internet must physically walk to this machine.  Given the setup, I should not have to care what they download, what viruses they contract, or anything else, because when each user is done the machine's hard drive is returned to the clean Ghost image.  This reset process could also be delayed until the end of the day.  Just to emphasize again, this machine only needs to allow users to access and reference internet-based resources.

Now where I am looking for suggestions is how to 1) save users the walk to the central internet PC by allowing them to control the PC from their desk...just control it...no direct file downloads etc., and 2) allow multiple instances of this clean XP Pro install to run simultaneously on the same machine.  I used to have a client that used a very old networking solution called VM/386 that allowed a single P4 machine to handle 30+ users via separate serial links to a WYSE 150 terminal or a PC using terminal emulation.  While the PC could access and run programs on the P4, there was no physical means to copy files or data to the workstation using terminal emulation.  While in that situation this limitation was a frustration, here it is exactly what I am looking for.

Am I making this far too rudimentary and missing a gaping hole?  Again, to paraphrase Schneier: often the better security solution involves the least technology.  Is this scenario not a great example of his theory?
 
mellowmarquisCommented:
OK, now I see what you're getting at.

In this case, MSTS won't allow you to restore to a previous image so easily and will give you too much access to the host operating system. VMWare or Windows Virtual Server 2005 are your best options then, as they allow you to create undo disks which let you restore to a point in time such as your original image. You can configure the virtual disks to return to the original image after a user logs off, or run a scheduled task to replace the virtual disk files on a regular basis.
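As a minimal sketch of the scheduled-task approach, the Python below copies pristine virtual disk files from a read-only master directory over the working copies. The paths and file extension are hypothetical examples only; adjust them to wherever your VMWare or Virtual Server installation keeps its disk files, and make sure the virtual machine is powered off before the copy runs.

# Minimal sketch: restore the working virtual disk files from pristine masters.
# All paths are hypothetical; run only while the virtual machine is powered off.
import shutil
from pathlib import Path

MASTER_DIR = Path(r"D:\VM-Masters\InternetXP")    # pristine, read-only disk copies
WORKING_DIR = Path(r"D:\VM-Running\InternetXP")   # disks the VM actually boots from

def restore_clean_image():
    for master_disk in MASTER_DIR.glob("*.vhd"):  # use *.vmdk for VMWare disks
        target = WORKING_DIR / master_disk.name
        shutil.copy2(master_disk, target)
        print("Restored", target, "from", master_disk)

if __name__ == "__main__":
    restore_clean_image()

Scheduled nightly (or triggered at logoff), this gives much the same effect as re-imaging the standalone PC from Ghost.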

Running virtual machines like this is quite resource intensive. The client can access the virtual machine through a web browser using an ActiveX control, so you'd only need to open port 443 from the internal network to the hosting machine on your DMZ.

I would still keep the box behind the firewall and on its own network segment to prevent infection of the machine via simple things like worms and viruses targeting the host OS. No machine should be connected directly to the internet - the average corporate firewall drops between 5,000 and 10,000 packets a day from randomly (or occasionally intentionally) targeted scans of all sorts of services etc. No machine should be subjected directly to this traffic if possible.

Yes, you could permit FTP downloads to be initiated from the virtual network to the internet and then allow an admin with physical access to burn the data to CD. As you appear to agree, opening any data transfer channels between the virtual OS and the internal network is just too risky.

As I said earlier, I haven't used VMWare in years, so I can't really advise here, but it was a damn good product then and I hear it's still excellent. Hopefully someone else can advise on it.

You have a network connected to the internet, so there is no such thing as 100% risk mitigation from external attack, even if you are hosting no web content and allowing no email flow from the internet. There are still tools which can allow an attacker to traverse your firewall and get to the internal network, although it would have to be a concerted attack rather than a worm or automated script-kiddie attack.

I now agree that VMWare or Windows Virtual Server would be your best options. I'm not completely sure VMWare has all the functionality required, but I know for sure that Windows Virtual Server 2005 certainly does.

An additional issue to consider is the licensing requirements and costs of running such a setup.
VMWare is definitely more expensive, though I'm not sure if it's any better.

For more info on MS Virtual Server:
http://www.microsoft.com/technet/prodtechnol/virtualserver/default.mspx
http://news.zdnet.co.uk/software/windows/0,39020396,39237255,00.htm

Again, if anyone out there has much experience with VMWare, hopefully they can offer an opinion.

Rich RumbleSecurity SamuraiCommented:
VMWare is still a wonderful product; it allows you to take snapshots of the OS at any point, and even to boot up using the exact same point in time (the same snapshot). Terminal Services will still work for your situation: you can stop rdpclip.exe from running, which will keep copy-and-paste from working, so users cannot copy files or text and paste them. KISS is the way to go.
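As a rough sketch of turning off clipboard redirection on the Terminal Server itself, the Python below sets what I believe is the relevant registry value (fDisableClip under the RDP-Tcp WinStation key). Treat the key path and value name as assumptions to verify against Microsoft's documentation, and note it must be run with administrative rights on the server.

# Rough sketch: disable Terminal Services clipboard redirection via the registry.
# The key path and value name (fDisableClip) are from memory; verify them before
# relying on this. Must be run as an administrator on the Terminal Server.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp"

def disable_clipboard_redirection():
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "fDisableClip", 0, winreg.REG_DWORD, 1)  # 1 = disable clipboard mapping

if __name__ == "__main__":
    disable_clipboard_redirection()
    print("Clipboard redirection disabled; restart the Terminal Services service to apply.")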

You can allow the TS/RDP port to the DMZ'd PC, disable the clipboard, and allow users to "surf" over Terminal Services. You can also deny all ports from the DMZ'd PC to the LAN; an established connection will allow traffic to flow from the LAN to the DMZ'd PC (as long as you allow the proper destination port, by default 3389) when a connection is made. How you monitor that DMZ'd PC is then another question (use a proxy to filter out porn...what have you), but it's a "junker" and can be remade at any time via a reboot (if using VMWare) or using a Ghost image.
-rich
serraultAuthor Commented:
Mark & Rich,

Thanks for the input and comments. Sounds like, now that I've better explained my thinking on the approach, VMWare or Windows Virtual Server are the two options to research and consider...which I will do.
