• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 66

Security metrics from vulnerability assessments

Our corporate network is subject to six-monthly vulnerability scans by a certified third party, to meet a variety of security standards certifications. Our directors would like the security team responsible to produce some metrics for performance management purposes, to demonstrate that lessons are being learned from the findings, that root causes are being addressed, and that the number of issues raised declines each time the scans/assessments are completed.

I wondered if anyone else does this degree of analysis, and what metrics you use. Is it as simple as the number of risks logged in the assessor's report, or are they broken down by category, e.g. password-related issues, patch-related issues, configuration-related issues, etc.? I totally get the idea that, in theory, if the scans/assessments just find the same types of risk each audit, then the root causes probably aren't being addressed. It's just that, realistically, what metrics are you using to demonstrate each time that things have improved and lessons have been learned?
Asked by pma111
2 Solutions
 
masnrockCommented:
One criterion I would use is the age of vulnerabilities. One of the biggest things is showing that you attempt to fix vulnerabilities within a reasonable timeframe of learning about them. But you also want documentation for why you may have chosen not to fix particular ones (e.g. fixing one would break a critical system; you should also have records showing acceptance of the associated risk). Obviously there are also cases where the vendor hasn't released a fix yet, and you should at least be able to explain this where that's the case.

Maybe you can implement a policy requiring investigation of any vulnerability that has remained open past a specific threshold (e.g. 45 days).
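For illustration, here's a minimal Python sketch of both ideas together, assuming a hypothetical CSV export from your scanner named scan_findings.csv with finding_id, title, first_seen (YYYY-MM-DD) and status columns — your tool's real export format will differ:

import csv
from datetime import date, datetime

THRESHOLD_DAYS = 45  # investigation trigger from the policy above

def age_in_days(first_seen, today):
    # Days since the finding was first reported
    return (today - datetime.strptime(first_seen, "%Y-%m-%d").date()).days

today = date.today()
with open("scan_findings.csv", newline="") as f:  # hypothetical export file
    overdue = [
        (row["finding_id"], row["title"], age_in_days(row["first_seen"], today))
        for row in csv.DictReader(f)
        if row["status"] == "open"
        and age_in_days(row["first_seen"], today) > THRESHOLD_DAYS
    ]

# Oldest first, so the report leads with the worst offenders
for finding_id, title, age in sorted(overdue, key=lambda x: -x[2]):
    print(f"{finding_id}: {title} -- open {age} days, investigate per policy")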

Another metric would be a breakdown of the types of vulnerabilities identified. That could reveal areas you're not prioritizing highly enough.

Simply showing the total number of vulnerabilities doesn't tell you much, because new issues may just have been discovered since the last scan. The only time a raw count works is when you're counting vulnerabilities from a previous report that carried over (this is why I mentioned using age of vulnerabilities as a metric).
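The carry-over count itself is just set arithmetic. A rough sketch — the finding IDs here are made up; in practice you'd key on whatever stable identifier your assessor provides (e.g. CVE or plugin ID plus host):

# Findings from the previous and current reports, keyed by a stable ID
previous = {"CVE-2017-0144@srv01", "CVE-2016-2183@srv02", "weak-password@srv03"}
current = {"CVE-2016-2183@srv02", "weak-password@srv03", "CVE-2018-1111@srv04"}

carried_over = previous & current  # still open since the last scan
newly_found = current - previous   # discovered this cycle
closed = previous - current        # remediated since the last scan

print(f"Carried over: {len(carried_over)} -> {sorted(carried_over)}")
print(f"New this cycle: {len(newly_found)}")
print(f"Closed since last scan: {len(closed)}")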

While not a metric per se, I would also find a way to show steps that were taken to improve policies and procedures. For example, maybe you didn't have a patch management policy, but finally put one into place. Maybe you decided to start requiring firewall rule reviews on a regular basis, and can show issues that were corrected as a result.

Do you have any sort of internal patch or vulnerability management going on right now? You may want to look into some tools. That way you can stay ahead of the scans.
 
btan (Exec Consultant) Commented:
What management is calling for is essentially a scorecard report: a KPI dashboard as the executive summary, plus trend analysis across all surfaced vulnerabilities and the time taken to close them. You could consider three main categories (though not limited to these) with the suggested metrics below. The key is to build 5-6 relevant metrics and show the trend every 6 months as part of governance. For audits, the same is normally done to identify the top 3-5 systemic weaknesses, so you can prioritize what to improve. Rough sketches of how each category's numbers could be computed follow each list.

a) Vulnerability Management
- Total # of cyber hygiene non-compliance (NC) findings
- % of NC due to lack of access control & account review (over-privileged accounts, orphan accounts)
- % of NC due to lack of configuration review (baseline oversights, default unhardened settings)
- % of NC due to lack of timely patching (late signature updates, corrupted patches that failed)
- % of NC due to lack of audit logging (logging not enabled, logs deleted or overwritten)
- Time to remediate security test findings (from start to end, in days/weeks/months)
- Top 3-5 vulnerabilities, and the slowest to remediate
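As a sketch of category (a), here is how the percentages and the slowest-to-remediate list could be computed — the findings, category names, and day counts below are illustrative placeholders, not real data:

from collections import Counter

# (finding, root-cause category, days to remediate; None = still open)
findings = [
    ("orphan admin account", "access control", 30),
    ("default SNMP community string", "configuration", 75),
    ("missing MS17-010 patch", "patching", 12),
    ("syslog not enabled", "audit logging", None),
    ("over-privileged service account", "access control", None),
]

total = len(findings)
for category, n in Counter(cat for _, cat, _ in findings).most_common():
    print(f"{category}: {n} NC findings ({100 * n / total:.0f}% of total)")

# Slowest remediations among the closed findings
slowest = sorted((f for f in findings if f[2] is not None),
                 key=lambda f: f[2], reverse=True)[:3]
print("Slowest to remediate:", [(name, f"{days} days") for name, _, days in slowest])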

b) Incident Management
- Total number of incidents reported or made known by the public
(categorised into different types, e.g. phishing, malware, DDoS, data breach, intrusion, and into different severity levels: High, Medium, Low)
- Total number of incidents discovered or detected by internal controls and checks
- Total damage cost incurred per incident (including asset value, recovery effort, and loss of business earnings due to unavailability)
- Time to detect incident (with verbal update, subsequent report submitted)
- Time to contain incident (with mitigation completed)
- Time to close incident (with remediation completed)
- Average time to close incidents, by severity (High, Medium, Low)
- Top 3-5 incident types reported, and the slowest to close and detect
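The timing metrics in (b) fall out of four timestamps per incident. A minimal sketch, using made-up placeholder incidents:

from datetime import datetime
from statistics import mean

def ts(s):
    return datetime.strptime(s, "%Y-%m-%d %H:%M")

# (severity, occurred, detected, contained, closed) -- placeholder data
incidents = [
    ("High", ts("2018-01-03 09:00"), ts("2018-01-03 11:30"),
     ts("2018-01-03 18:00"), ts("2018-01-10 17:00")),
    ("Medium", ts("2018-02-14 08:00"), ts("2018-02-15 10:00"),
     ts("2018-02-16 12:00"), ts("2018-02-20 09:00")),
]

def hours(delta):
    return delta.total_seconds() / 3600

for severity in ("High", "Medium", "Low"):
    subset = [i for i in incidents if i[0] == severity]
    if not subset:
        continue  # no incidents at this severity in the sample
    detect = mean(hours(d - o) for _, o, d, _, _ in subset)
    contain = mean(hours(c - d) for _, _, d, c, _ in subset)
    close = mean(hours(x - o) for _, o, _, _, x in subset)
    print(f"{severity}: detect {detect:.1f}h, contain {contain:.1f}h, close {close:.1f}h")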

c) Security Awareness
- % of clean desk policy audits with no adverse findings
- % of employees scoring >80% on the awareness quiz (questions can also be derived from past incidents, or run as a phishing campaign)
- % of employees who have not attended regular security training (by department; a 6-month to 1-year refresher with hands-on exercises)
- Top 3-5 common mistakes
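For the quiz metric in (c), a per-department pass rate is straightforward to compute. A sketch with made-up scores:

from collections import defaultdict

# (department, quiz score) -- made-up sample data
scores = [("Finance", 92), ("Finance", 65), ("IT", 88),
          ("IT", 95), ("HR", 70), ("HR", 85)]

by_dept = defaultdict(list)
for dept, score in scores:
    by_dept[dept].append(score)

for dept, vals in sorted(by_dept.items()):
    passed = sum(1 for v in vals if v > 80)
    print(f"{dept}: {passed}/{len(vals)} scored >80% ({100 * passed / len(vals):.0f}%)")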