Solved

Best practice: block at proxy, perimeter firewall, or IPS?

Posted on 2014-12-25
390 Views
Last Modified: 2014-12-28
If I've spotted a number of "suspicious" source IPs (i.e. IPs that have no legitimate reason to access our servers,
possibly from places like North Korea or Russia) in our firewall & IPS logs, what is usually the best practice?

Block at the proxy server facing the internet, block at the outermost (i.e. perimeter) firewall, block at
our network IPS, or block at the innermost endpoint IPS (i.e. IPS software running on our servers)?

Q1:
Taking an analogy from traditional warfare, I would try to block invaders at the outermost forts rather
than defend once they have come closer to us, i.e. I'm inclined to block at the outermost
defences: the proxy and perimeter firewalls. By blocking at the outer defences, we won't see as many logs
in the inner defences (i.e. the IPSes). Am I heading in the right direction?

Q2:
Is the proxy server (e.g. Bluecoat) usually placed at the outermost point, or is the perimeter firewall placed there?

Q3:
My feeling is that the inner defences are more for granular controls such as specific signatures. Is that right?

Q4:
Can CDN providers (i.e. those that provide "clean pipes") block blacklisted source IPs?
Question by:sunhux
 

Assisted Solution by: btan (earned 500 total points)
ID: 40518422
Security by default: if the IP has no business with the online application, then strictly speaking it can be denied, but do verify with the application owner whether such usage is in fact expected. If the application is public-facing, any client should be expected and not denied unless it shows non-legitimate activity or anomalies, such as port scanning or persistent spidering for no rational reason. There are many online services to check an IP further, such as Robtex and Domain Dossier, and typically your network security device should be able to check it against its vendor's online threat service (provided it is subscribed).
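For illustration, that kind of IP check can also be scripted. Below is a minimal sketch using a DNS-based blocklist (DNSBL) lookup; the zone name zen.spamhaus.org and the sample addresses are assumptions for the example, and whichever reputation feed your devices actually subscribe to would take its place.

```python
# Minimal sketch: check a source IP against a DNS-based blocklist (DNSBL).
# The zone (zen.spamhaus.org) and sample IPs are assumptions for illustration;
# substitute the reputation feed your organisation actually subscribes to.
import socket

def dnsbl_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """Return True if the IP has an A record in the DNSBL zone (i.e. is listed)."""
    reversed_ip = ".".join(reversed(ip.split(".")))
    query = f"{reversed_ip}.{zone}"
    try:
        socket.gethostbyname(query)   # any answer means "listed"
        return True
    except socket.gaierror:           # NXDOMAIN (or lookup failure) -> treat as not listed
        return False

if __name__ == "__main__":
    for suspect in ["203.0.113.10", "198.51.100.23"]:   # sample IPs from your logs
        print(suspect, "listed" if dnsbl_listed(suspect) else "not listed")
```

This is only a triage aid for the analyst; the authoritative blocking decision still belongs on the devices and their subscribed threat services.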

Typically, network security device placement follows defence in depth (e.g. perimeter FW (PFW) <> IPS <> proxy <> IPS), where each layer deters or alerts on malicious attempts and stops them at the earliest boundary that detects them. There is no right or wrong placement; most debates are really about the quantity of each device (besides having to maintain them in HA mode).

But we do need to understand that any device placed will surely be bypassed by some traffic. For example, the FW will certainly allow HTTP (80) and HTTPS (443), and it is blind to encrypted tunnels such as VPN or SSL if they are not terminated. So there is a need for the capability to terminate and perform deep packet inspection, and this is where you identify which device should be doing that. Anything that bypasses the ingress/egress checks at one tier boundary then has to be caught by the next tier's device, which hopefully (and eventually) detects the anomaly if it really is a penetration attempt.

Of course there are the "zero day" or non-signature attempts that evade everything, and it is then up to the endpoint systems to put up another defence check. It cannot be foolproof, as there is no perfect security. Use the right tool for the task, and avoid treating every target like a nail just because you only have a hammer at hand. Even then, every hammer is unique, varying in size, make and effectiveness. Likewise, don't use the wrong tool, such as a knife on that same nail.

Overall situational awareness of your security posture comes from correlating the alert logs of all these devices; it will be lacking if we depend on only one of them. Having said that, I am assuming that tuning for minimal false positives is constantly and duly done on all such devices, and that each placement has a clear goal for the segment it guards, e.g. the PFW is IP and port based (network service), the proxy is a content filter (application service policy) and the IPS does threat detection (known vulnerabilities and exploits). I know there are also all-in-one devices such as NGFW and UTM.

With the above context,

Q1 - Every tier and device should deny malicious traffic and alert on possibly malicious attempts and anomalies, based on a ruleset configured in the context of the boundary it guards. "Allow" mode is more for initial baselining, which should already have been done, or for major changes in the network/application at the back end. Situational awareness is not per device; it is collective oversight from the various sensors, which eventually helps us orient in context, triage threats, isolate damage and respond robustly to exposure.
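If you do end up pushing a deny ruleset for those suspicious sources at the outer tier, a small script can first consolidate them into a minimal set of CIDR blocks. The sketch below is illustrative only: the sample addresses are documentation ranges, and the printed rule syntax is a placeholder to adapt to your PFW.

```python
# Minimal sketch: consolidate suspicious source IPs/ranges into the smallest set of
# CIDR blocks before pushing a deny ruleset to the perimeter firewall.
# The input list is illustrative; feed it from your own FW/IPS log review.
import ipaddress

suspicious = [
    "203.0.113.5",
    "203.0.113.6",
    "203.0.113.7",
    "198.51.100.0/25",
    "198.51.100.128/25",
]

networks = [ipaddress.ip_network(entry if "/" in entry else f"{entry}/32")
            for entry in suspicious]

# collapse_addresses() merges adjacent/overlapping ranges into minimal CIDRs,
# keeping the deny ruleset short and cheaper for the device to evaluate.
for block in ipaddress.collapse_addresses(networks):
    print(f"deny ip from {block} to any")   # adapt the syntax to your firewall
```

A shorter, consolidated ruleset is also easier to housekeep when the blocked sources are reviewed later.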

Q2 - A proxy does not have the fortitude to front the ugly side of the network, nor can it front a DDoS. It does policy enforcement for content filtering, normally looking at egress traffic, or acts as an application delivery controller handling caching, load balancing or application-based interception checks. I know the PFW is not ideal either, but it still has the fundamentals of blocking at L2/L3 based on protocol anomalies. Resource exhaustion as a form of application-level DoS is better handled by an application-aware FW. The latter is more for higher-risk segments, with protocol-based filtering for the less risky ones, if we want to balance the security investment rather than making everything an application-aware FW.

You may want to check out my article on this question to help in triaging defences.


Q3 - Signatures are the baseline for every security device and application. There is no perfect security, so leverage each device to the best of its capability without impacting performance. Of course, signatures are a reactive means; they eat up storage and are operationally unfriendly, but most of the time prevention (constant monitoring and vulnerability management) and detection (via signatures, rulesets, heuristics and behaviour) need to work hand in hand. Tiered defence is more about the defined boundaries for the traffic you have, e.g. where possible you would not put management traffic and production traffic in the same segment, so as to reduce the window of exposure to attack or abusive attempts. It depends on your environment and risk appetite.

You can check out my article on action plans for the preventive, detective and response stages.


Q4 - CDNs can certainly do blacklisting, and whitelisting as well. Their edge servers also run those security devices internally. High performance, resiliency and availability of the content delivery service is one factor in their gaining traction, and security has become part of the supporting portfolio to achieve the goals committed to their subscribers. It is no longer just caching and fast delivery from their wide points of presence; WAF defences and DDoS protection are expected for the online asset (most of the leading CDNs will have them).
 

Author Comment by: sunhux
ID: 40519461
The reason I raised this is that governance asked me to analyse / go through the IPS logs.
I guess the purpose is to identify illegitimate access attempts, especially from countries
that have only the remotest reason to access the VMs we host.

Robtex, Trend Micro and McAfee's blacklisting sites fail to blacklist many of the source
IPs, but DShield (used by one provider, though I don't know its URL) lists many of
the source IPs that accessed us and triggered our IPS signatures (both network and
endpoint/server). So those IPs are actually blacklisted by one provider (which
I shall not name here).

However, we can't possibly activate all available signatures (not even in Detect mode),
as that would slow the network and our endpoint servers tremendously. So I'm thinking
that after we've blocked those illegitimate source IPs, we'll see fewer logs, and we can
hopefully replace the signatures that no longer trigger (after observing for a couple of
months) with new signatures, possibly rotating signatures around. There could be
malicious activity that to date has gone undetected, because we load only about
a quarter of all available IPS signatures (loading too many would hog the
network and endpoint servers).
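To make that log review and blacklist cross-check repeatable, something along the lines of the sketch below could help. The file names and the "source IP in the third CSV column" layout are assumptions, so the parsing would need to be adapted to whatever format the IPS actually exports.

```python
# Minimal sketch: tally IPS alerts per source IP and flag the ones that appear in a
# locally maintained blacklist (e.g. a DShield block list saved to disk).
# The file names and the CSV column layout are assumptions for illustration.
import csv
from collections import Counter

def load_blacklist(path: str) -> set:
    with open(path) as f:
        return {line.strip() for line in f if line.strip() and not line.startswith("#")}

def tally_sources(log_path: str, src_column: int = 2) -> Counter:
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) > src_column:
                hits[row[src_column].strip()] += 1
    return hits

if __name__ == "__main__":
    blacklist = load_blacklist("dshield_blocklist.txt")   # assumed local export of the feed
    hits = tally_sources("ips_alerts.csv")                # assumed CSV export of IPS alerts
    for ip, count in hits.most_common(20):
        flag = "BLACKLISTED" if ip in blacklist else ""
        print(f"{ip:15s} {count:6d} {flag}")
```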
 
Accepted Solution by: btan (earned 500 total points)
ID: 40519554
If geolocation fencing is required, such as rejecting specific countries for business reasons, you can certainly do that at the PFW or CDN level (the earliest point).
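As a rough illustration of what that geo-fence decision involves, the sketch below checks which logged source IPs fall inside a country's published CIDR ranges. The file name "kp_cidrs.txt" and the sample IPs are assumptions for the example; in practice most PFWs and CDNs offer native GeoIP blocking that replaces this step entirely.

```python
# Rough sketch: check which source IPs from the logs fall inside a country's published
# CIDR ranges, to judge whether a geo-fence rule at the PFW/CDN would cover them.
# "kp_cidrs.txt" (one CIDR per line) and the sample IPs are assumptions for illustration.
import ipaddress

def load_country_ranges(path: str):
    with open(path) as f:
        return [ipaddress.ip_network(line.strip())
                for line in f if line.strip() and not line.startswith("#")]

def in_country(ip: str, ranges) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ranges)

if __name__ == "__main__":
    ranges = load_country_ranges("kp_cidrs.txt")
    for suspect in ["203.0.113.10", "198.51.100.23"]:   # sample IPs from your IPS logs
        print(suspect, "inside geo-fence" if in_country(suspect, ranges) else "outside")
```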

Blacklisting is never a panacea; silver bullets do not exist in security. There are bound to be misses, and IPs are discarded and randomised so frequently that they easily bypass blacklists. It is a rat race, and it is not possible to keep growing the local blacklist store, which is why devices eventually turn to cloud services. Another way to augment blacklisting, and likely to reduce local inspection on the device, is the IP reputation built into such services, which adds further value in checking for malicious activity originating from a source IP address. You can check out ThreatStop.

Blacklisting is more commonly implemented in the PFW and proxy than in the IPS, which deals more with frequent signature updates. Indeed, IPS signature tuning is more rigorous and first requires building a fundamental baseline profile of the network and the business running on it. Once the baseline is firmed up after the learning phase (minimally 1-2 weeks with real traffic, depending on segment size), ongoing tuning under a review regime commences. I believe you can also see that network devices like the FW, IPS and proxy (in this case) are never a one-time, turn-on, plug-and-play-and-forget deployment. As you shared, users are affected when performance falls below expectation, which is already not great if everyone shares the same network pipe. Hence the regime and accuracy of tuning are pertinent to keeping these devices effective and efficient. Note that the boxes should be hardened from the start and not rely on just default settings.

Take the Heartbleed, POODLE and Shellshock vulnerabilities as examples: signature coverage is entirely dependent on the vendor having a customised signature ready. Prior to its availability, none of the boxes really stopped such threats; furthermore, the boxes themselves can be vulnerable and need patching.

Performance also needs to consider other parameters such as SNMP and network traffic shaping, so that health monitoring and the security settings stay in sync, reducing false positives while not becoming a single point of failure. It is best to understand what is not necessary and not allowed in the business. If application-aware boxes like the proxy, NGFW or UTM are to block unauthorised applications such as P2P, video streaming or Tor connections, then users need to understand that. Rate limiting for persistent sessions is critical, especially for users likely to stream online radio or download huge files on a daily basis...

If, after all the checks, placement and enforcement are in place, performance still falls short of expectation (while the security requirement is met), that calls for reviewing the network segmentation and risk level, rather than refining rules, policies and tuning for their own sake just because the security boxes are being called the culprit. This may include:

- reviewing the device at the outermost perimeter (most of the time the PFW), which faces the greatest traffic load and should have priority for a higher hardware spec.

- reviewing rule housekeeping frequency and the number of downstream devices that legitimate ingress/egress traffic has to flow through. These "many" hops can contribute to overall slower response; it becomes a chain effect if there are too many tiers.

- spreading the blocking of potential misses on malicious or sophisticated content across dedicated security services. Each should do what it performs best, reducing duplicated effort or rulesets unless the duplication adds value through a different technology, make or vendor. Spread the checks down to the endpoint using HIPS rather than being over-reliant on network checks alone, which would defeat the purpose of defence in depth.

- reviewing risk exposure such that the internal DMZ may have fewer boundaries than the external DMZ. Deterrence sits mostly on the external side, so that containment and isolation can be done in a timely manner for incident response.

- reviewing the design to avoid creating too many potential single points of failure, as mentioned previously. Take note of any remote connection and its purpose; it is worth alerting on, and should be only for privileged users administering servers, not the normal mobile user doing day-to-day work.

- reviewing detection capability (again): the PFW, proxy and IPS only cover known threats, as shared. If the goal is to handle persistent and advanced threats, then plan for that and blend it into the upgrade timeline. There is far more than just NGFW or UTM; check out SIEM-based tooling, CDN services, breach detection technology, etc. that drill into network analysis, oversight of multiple sources and network forensics to detect indicators of compromise beyond plain known signatures. This requires balancing cost effectiveness against the security assurance expected.

- Getting a bit more proactive (and adventurous), you may even want to explore a blackhole or honeypot that you host internally to detect access from sources that are prime candidates for the blacklist. You can check out Artillery and HoneyDrive (honeypot in a box); a bare-bones sketch of the idea follows below.
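For a flavour of what those tools do under the hood, here is a minimal sketch of a do-nothing listener that simply logs whoever connects to an unused port. The port number and log file name are assumptions, and real tools such as Artillery go much further (banner faking, automatic blocking, alerting).

```python
# Minimal sketch of a "blackhole" listener: bind an otherwise unused port and log every
# connection attempt. Nothing legitimate should ever touch this port, so the logged
# sources are prime candidates for the blacklist. Port 2222 and the log file name are
# assumptions; dedicated tools such as Artillery add banner faking and auto-blocking.
import socket
import datetime

LISTEN_PORT = 2222          # assumed decoy port; pick one your servers never use
LOG_FILE = "blackhole.log"  # assumed local log file

def main() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", LISTEN_PORT))
        srv.listen()
        while True:
            conn, (src_ip, src_port) = srv.accept()
            conn.close()  # drop the connection immediately; we only want the source
            with open(LOG_FILE, "a") as log:
                log.write(f"{datetime.datetime.now().isoformat()} {src_ip}:{src_port}\n")

if __name__ == "__main__":
    main()
```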
