Pau Lo
asked on
network boundary security basics (external scan)
- As part of an external penetration testing exercise, what are the common security tools the test team would utilise when assessing the security of your organisation's traditional on-prem network "boundary defences", i.e. the perimeter firewall/UTM? Is it still primarily the port-scanning range of tools, or something new in 2022?
- And for such external scans of a client's "boundary defence", what specific kinds of vulnerabilities do these tools scan/test for? I appreciate each type of device or application has its own common vulnerabilities, but I wasn't sure, specifically when doing an external check, whether it is looking for any particular weakness in the perimeter defence. I suppose the ultimate goal is to try and circumvent the perimeter controls to gain a degree of access to the client's private network environment.
- Out of interest, when configuring their external scans/tests against a client's network perimeter security controls, what information does the client under review typically provide to the test team? I assume they need to configure their scanners to check against a specific IP address/range that represents the network boundary. I was always interested in the initial setup - how the security testers configure their tools to scan a client's network boundary/perimeter defence in such an exercise. For example, do the test team specify something specific to the client's firewall in the scan's target config that then covers everything that can be checked (i.e. everything accessible to the outside world), or something more detailed than that? I've seen the process for vulnerability scans of servers, where you can enter hostnames and IP ranges, import devices from local AD, etc., but I have never seen the initial config of a scan when testing a network's boundary defence from the outside. (A rough sketch of the mechanics follows after this list.)
- And is it typically a single scan of the client's network "perimeter", or does the exercise require execution of multiple scans?
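For illustration only, here is a minimal sketch (Python, standard library only) of what the target configuration for an external boundary scan boils down to: the client supplies a public IP range representing the perimeter, and the tester's tooling probes each address in that range for reachable services. The address range and port list below are hypothetical placeholders, and a real engagement would use purpose-built tools such as Nmap rather than a hand-rolled script - and only against ranges the client has authorised in the scope agreement.

# Minimal sketch of an external "boundary" sweep: take a client-supplied
# public range and check which well-known ports answer from the outside.
# 203.0.113.0/28 is a documentation range used purely as a placeholder.
import ipaddress
import socket

TARGET_RANGE = "203.0.113.0/28"                   # hypothetical client-provided perimeter range
COMMON_PORTS = [21, 22, 23, 25, 80, 443, 3389]    # FTP, SSH, Telnet, SMTP, HTTP, HTTPS, RDP

def port_is_open(ip: str, port: int, timeout: float = 1.0) -> bool:
    """TCP connect check: True if the port accepts a connection from here."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

for addr in ipaddress.ip_network(TARGET_RANGE).hosts():
    open_ports = [p for p in COMMON_PORTS if port_is_open(str(addr), p)]
    if open_ports:
        print(f"{addr}: responds on {open_ports}")

In practice this is rarely a single pass: testers typically run several scans against the same range (different port sets, timing profiles, service/version detection), which speaks to the last bullet above.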
ASKER CERTIFIED SOLUTION
"Assuming in this context boundary equals perimeter Firewall/UTM device, are there 'common' vulnerabilities and exploits specific to those devices themselves that the external scan process will look for, as opposed to something they have found from an open port/protocol? Are these fairly uncommon, as I assume such issues would constitute a major oversight and misconfiguration - i.e. what is the root cause, what haven't the security/IT team done to create that risk - is it patching related or something else."
Yes, in addition to scanning for open ports/protocols they will look for classic misconfigurations and deviations from best practices/baselines that get missed a lot more than you may think:
- Default usernames and passwords in use (leads to device compromise and privilege escalation)
- Common vulnerabilities found in in-house apps/systems with poor development controls (think input validation and fuzz testing as examples). As a former pen tester, I've seen in-house developed apps that will accept SQL statements from an input form... (a short illustration follows after this list)
- Known vulnerabilities for hardware/software that has not been patched - again, fairly common and easily missed
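To make the in-house application bullet concrete (a hypothetical sketch, not taken from any specific engagement), the snippet below shows the same lookup written two ways: with string-built SQL, which happily executes an attacker-supplied clause typed into a form, and with a parameterised query, which treats the same input as plain data.

# Hypothetical illustration of the in-house-app bullet: the same lookup
# written unsafely (string concatenation) and safely (parameterised query).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"   # classic injection payload typed into a search form

# Vulnerable: the input is pasted straight into the SQL text, so the OR clause
# becomes part of the query and every row comes back.
vulnerable = conn.execute(
    "SELECT name, role FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("string-built query returned:", vulnerable)

# Safer: a parameterised query binds the input as a value, so the payload is
# just an odd-looking username that matches nothing.
safe = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterised query returned:", safe)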
I'd also be interested in whether they would prioritise specific ports/protocols, as there is an increased likelihood that those could be exploited, above others which may be more challenging to exploit. I seem to recall cyber insurance companies do a similar scan of external-facing infrastructure, and things like basic RDP exposed externally would be a red flag for providing coverage. I know Nmap and the like will report any open port and running service, but it would be interesting to know which specific open ports/protocols on your boundary are considered more risky than others (in the view of the pen-test community and the cyber insurers), or whose underlying services require extra security hardening, or which point blank should not be accessible to the outside world.
Correct again. Because time is a critical factor for them, they will focus on the "low-hanging fruit" first. They'll examine how each system or device is accessible, and whether any of those mechanisms are vulnerable or insecure. Examples might include outdated SSL/TLS versions (SSL 3.0, TLS 1.0/1.1), FTP, Telnet, SNMP v1/v2, known exploitable encryption standards, and POP, to name a few. Their scanners are going to pick up devices and systems that pass data in clear text... avoid this where possible.
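As a rough illustration of the kind of legacy-protocol check a scanner automates, the sketch below attempts to negotiate an outdated TLS version with an exposed service and flags the host if the handshake succeeds. The host and port are placeholders; only run checks like this against systems you are authorised to test, and note that dedicated scanners cover far more protocol and cipher combinations than this.

# Sketch of a legacy-TLS check: try to complete a handshake using only TLS 1.0.
# If the server accepts it, a scanner would raise an outdated-protocol finding.
# Caveat: very recent OpenSSL builds may refuse TLS 1.0 on the client side,
# which is one reason real scanners implement these probes themselves.
import socket
import ssl

HOST = "www.example.com"   # placeholder target
PORT = 443

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1   # force the legacy version only
ctx.maximum_version = ssl.TLSVersion.TLSv1
ctx.check_hostname = False                   # we only care about protocol support
ctx.verify_mode = ssl.CERT_NONE

try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print(f"{HOST}:{PORT} still accepts {tls.version()} - flag for review")
except (ssl.SSLError, OSError):
    print(f"{HOST}:{PORT} refused the TLS 1.0 handshake")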
Out of interest, is the number of external scans you do/purchase based on your internal risk-assessment policy, or on a requirement of a security standard your organisation has to comply with - often quarterly external and internal scans (e.g. PCI DSS)?
I believe PCI DSS requires a minimum of quarterly external scans to stay in compliance, which is a pretty good metric. My advice would be to run an external scan first, address the items that you and your team are comfortable addressing, and then move on to internal scans and do the same.
ASKER
Thanks Andrew, much appreciated. I have a fairly solid understanding of internal scanning and the common flaws the tools check for; I just wasn't sure how external scanning compared in terms of toolset and approach, and whether any major new tools have come to market compared to, say, five years ago. Nessus and Nmap have clearly stood the test of time, as they have been around since the late '90s/early '00s! I wasn't overly sure how much info it is common to share with the pen testers about your external-facing systems/IP range, or whether enumerating the public-facing landscape themselves is all part of the process. I suppose where you are paying good money for a thorough assessment you wouldn't want them to 'miss' any key systems, so giving some scoping detail about your public-facing landscape makes sense.
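On the 'enumerate the public-facing landscape themselves' point: at its simplest, that reconnaissance step is just working out which likely hostnames under the client's domain resolve to public addresses before any scanning starts. A very rough sketch is below - the domain and candidate names are hypothetical, and real engagements lean on dedicated recon tooling, DNS records, certificate transparency logs and other OSINT sources rather than a guessed wordlist.

# Very rough sketch of initial recon: resolve a handful of common subdomain
# names under a (hypothetical) client domain to see what is publicly advertised.
import socket

CLIENT_DOMAIN = "example.com"                              # placeholder domain
CANDIDATES = ["www", "mail", "vpn", "remote", "owa", "portal", "ftp"]

for label in CANDIDATES:
    fqdn = f"{label}.{CLIENT_DOMAIN}"
    try:
        print(f"{fqdn} -> {socket.gethostbyname(fqdn)}")
    except socket.gaierror:
        pass  # name does not resolve - nothing advertised under that label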
Can you just elaborate any more on this, please:
"remember, devices on the inside can be 'perimeter devices'..." - with some examples you have seen in practice, so I can research further. The wireless printer example as an entry point was very interesting.
Also, I am not sure if you still have contact with your former colleagues in the pen-test sector, but is it common now for the scope of their testing to also include cloud-based apps that are not hosted inside the organisation's boundary defences? I was thinking that most places have a Microsoft 365 tenancy that likely hosts sensitive data. A general observation I have made is that services like email and file servers/storage have often been migrated to Exchange and SharePoint Online within a 365 tenancy, whilst other line-of-business apps, such as ERP, are often still installed and operating on servers in the organisation's own data centre/on-prem environment.
I haven't seen a lot of pen-testing info about 365 tenancies. I appreciate that in some cases targeting the on-prem environment and user accounts may be an initial requirement, with synchronised accounts for authentication, and some places don't even allow 365 access outside of their traditional on-prem environment (conditional access). It just felt a bit of a grey area, but substantial nonetheless, if only on-prem infrastructure and perimeter defences are being tested, whereas a lot of key systems such as file storage and email are often on cloud-based platforms such as MS365 tenancies - so seeing how susceptible that data is to unauthorised access from outside the organisation (i.e. where conditional access plays a role) is also key.
From my perspective, if I (as a pen tester) am sitting outside your network perimeter with no granted internal access...
Any device that I can attack to gain internal access is essentially a perimeter device.
It's not uncommon for pen testers to run scans and such against cloud-based services, but that's usually done between your cloud vendor/service provider and their pen testers. Typically, as part of your vendor vetting process, you would be responsible for confirming that this happens during your review of their cybersecurity documentation. You might see pen testers send out a phishing E-mail to your user base to see if anyone will divulge their O365 credentials, but I wouldn't expect them to actually scan or attack your cloud-services provider, especially without prior notification between yourself, your vendor, and them.
ASKER
Assuming in this context boundary equals perimeter Firewall/UTM device, are there 'common' vulnerabilities and exploits specific to those devices themselves that the external scan process will look for, as opposed to something they have found from an open port/protocol? Are these fairly uncommon, as I assume such issues would constitute a major oversight and misconfiguration - i.e. what is the root cause, what haven't the security/IT team done to create that risk - is it patching related or something else?
I'd also be interested in whether they would prioritise specific ports/protocols, as there is an increased likelihood that those could be exploited, above others which may be more challenging to exploit. I seem to recall cyber insurance companies do a similar scan of external-facing infrastructure, and things like basic RDP exposed externally would be a red flag for providing coverage. I know Nmap and the like will report any open port and running service, but it would be interesting to know which specific open ports/protocols on your boundary are considered more risky than others (in the view of the pen-test community and the cyber insurers), or whose underlying services require extra security hardening, or which point blank should not be accessible to the outside world.
Out of interest, is the number of external scans you do/purchase based on your internal risk-assessment policy, or on a requirement of a security standard your organisation has to comply with - often quarterly external and internal scans (e.g. PCI DSS)?