Type of DDoS data that is useful as IT Security metrics

One of the monthly IT Security metrics in my previous place was
to show the # of 'High' DDoS alerts for the month (leaving out the
Med & Low ones), extracted from the Arbor Peakflow of our clean-pipe service.

Attached is what one such extraction looks like: basically we
count the # of 'High' alerts.

In my new place, the question was raised as to how this data can
be useful as an IT Security metric.

My guess is Audit wants to see a trend (over 6-12 months) of the
# of 'High' DDoS alerts: if it stays about the same, no alarm;
but if, say, in a particular month it triples, that's a concern?
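The trend check I have in mind could be sketched roughly like this; the counts and the "spike = more than double the trailing average" rule are illustrative assumptions, not anything Arbor produces:

```python
# Flag months whose 'High' DDoS alert count spikes versus the trailing average.
# The counts below are made-up illustrative numbers, not real Arbor data.
def flag_spikes(monthly_counts, window=6, factor=2.0):
    """Return indices of months whose count exceeds `factor` times
    the average of the preceding `window` months."""
    flagged = []
    for i in range(window, len(monthly_counts)):
        baseline = sum(monthly_counts[i - window:i]) / window
        if baseline > 0 and monthly_counts[i] > factor * baseline:
            flagged.append(i)
    return flagged

counts = [4, 5, 3, 4, 6, 4, 13, 5, 4]   # hypothetical monthly 'High' alert counts
print(flag_spikes(counts))               # the 13-alert month stands out
```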

Does anyone have any clue how this data (or any other Peakflow
data) could be usefully presented to serve as an IT Security metric?

Does anyone have any Application DDoS security metrics that could
be useful as IT Security metrics?

David Favor (Linux/LXD/WordPress/Hosting Savant) commented:
Actually... this seems like a good idea.

Normally I just TARPIT all DDoS/DoS attacks at the firewall.

Based on your suggestion, it might be useful to log data + break it apart by base-level IP (associated with physical interfaces) + container-level IP (each LXD container I run has 1+ IPs associated).

Then graph this by site niche + time period, looking for things like...

1) Niches attacked consistently.

2) Niches attacked in a given time period, like Black Friday or by entire months, especially November + December.

A highly interesting idea.

Data yielded could be used to identify niches where attacks might become... more creative, i.e. IPs to watch more closely than others.

This data could also be used to partition niches from each other, to attempt pre-mitigation of attacks that might take down one niche... by partitioning niches, if one heavily attacked niche got taken out by some new attack, the other niches would survive.

First time I've thought about striping/partitioning niches into similar IP ranges.
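The logging + grouping idea above could look something like this; the `ip_to_niche` mapping, the sample events, and the IPs are all hypothetical placeholders:

```python
from collections import Counter
from datetime import datetime

# Hypothetical mapping from base/container IPs to site niches.
ip_to_niche = {
    "10.0.0.5": "retail",
    "10.0.0.6": "retail",
    "10.0.0.9": "finance",
}

# Hypothetical attack log: (ISO timestamp, attacked IP).
events = [
    ("2018-11-23T09:14:00", "10.0.0.5"),
    ("2018-11-23T09:15:00", "10.0.0.6"),
    ("2018-12-01T02:00:00", "10.0.0.9"),
]

# Tally attacks per (niche, year-month) so we can spot niches hit
# consistently, or hit hard in specific periods like November/December.
tally = Counter(
    (ip_to_niche.get(ip, "unknown"),
     datetime.fromisoformat(ts).strftime("%Y-%m"))
    for ts, ip in events
)
print(tally)
```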
David Favor (Linux/LXD/WordPress/Hosting Savant) commented:
BTW, if you'd like to completely mitigate DDoS attacks, TARPIT the attackers.

Likely you'll find, as I have, that once a DDoS network gets taken down because it was foolish enough to attack an IP range using TARPIT tech... well... they blacklist all your IPs themselves + share them with other bot networks.

Since switching to TARPIT tech, rather than just returning reject-with icmp-host-prohibited, I find that attacks drop off very quickly.

As an added bonus, you'll be taking down the attacking BotNets, rather than just blocking them.

This gives me warm fuzzies when I sleep at night.
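For reference, tarpitting at the firewall looks roughly like this with iptables plus the xtables-addons TARPIT target; the chain name and source IP here are purely illustrative:

```shell
# Requires xtables-addons for the TARPIT target.
# Hold attackers' TCP connections open indefinitely instead of rejecting them.
iptables -N DDOS_TARPIT
iptables -A DDOS_TARPIT -p tcp -j TARPIT

# Send traffic from a blocklisted source (illustrative IP) into the tarpit,
# instead of the usual: -j REJECT --reject-with icmp-host-prohibited
iptables -A INPUT -s 203.0.113.50 -p tcp -j DDOS_TARPIT
```

Note TARPIT only applies to TCP; UDP floods still need rate limiting or upstream scrubbing.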
btan (Exec Consultant) commented:
The security metrics depend on which audience you are targeting.

Management - wants to see return on investment vs cost of ownership for the managed security services. You need to contrast the damage cost when the website has an incident vs keeping it always available and soundly guarded.

Operations - want to see real-time event trending across weeks and months, top attackers and originating geo sources, as well as suspicious activity that signifies common attack points by unknown perpetrators, as key stats. These feed into the incident trend report on a weekly, monthly, quarterly, and annual basis.

Analysts - need to build new defences, put in new WAF rules, and blacklist sources accordingly. They need threat reports and reputation stats of known sources, and to identify potential perpetrators-to-be. They would also like to feed the logs into the central SOC portal to build further stats using the internally generated sensor reports.

bbao (IT Consultant) commented:
The security metrics also depend on the type of DDoS, the attack surface, infrastructure bottlenecks, and of course the mitigation actions against DDoS.

The types of DDoS can also be viewed from different perspectives. For example, from the point of view of OSI layers, DDoS attacks can be categorised into three types:

Volumetric attacks: the attack's goal is to flood the network layer with a substantial amount of seemingly legitimate traffic. This includes UDP floods, amplification floods, and other spoofed-packet floods. These potential multi-gigabyte attacks can be mitigated by absorbing and scrubbing them, by scaling up the network in the cloud.

Protocol attacks: these attacks render a target inaccessible by exploiting a weakness in the layer 3 and layer 4 protocol stack. This includes SYN flood attacks, reflection attacks, and other protocol attacks. These attacks can be mitigated by differentiating between malicious and legitimate traffic, by interacting with the client, and by blocking malicious traffic.

Resource (application) layer attacks: these attacks target web application packets, to disrupt the transmission of data between hosts. The attacks include HTTP protocol violations, SQL injection, cross-site scripting, and other layer 7 attacks. A web application firewall (e.g. Application Gateway's WAF) can provide defense against these attacks.
sunhux (Author) commented:
The audience is IT Senior management (steering committee) and IT auditors.

So can we say it's typical to include DDoS 'High' risk alerts in IT Security metrics?
Auditors generally want to see if there's a useful trend (from month to month) &
if open issues are being closed. In all 3 years of DDoS reports (from Arbor
Peakflow), I have not seen one that is a genuine attack, so it's uneventful if I show
"0" in the IT Security metrics; rather, I pick the 'High' cases to see if there are
sudden fluctuations. Does that make sense?
btan (Exec Consultant) commented:
Key is whether there is any incident per se, and whether the count is being reduced over time. If there are none, then state so.

For the mgmt team, focus on the high-risk detections and the trend of any increased exposure. Actually, if the asset is safe, my opinion is that they won't be concerned with the stats, but you can show diligence by summarising the situation picture: the risks are mitigated and the protection in place is adequate. In fact there is a DDoS cost calculator that you may want to explore (though not by Arbor). Supposedly the DDoS mitigation service also improves the performance of the website due to the CDN effect.

You can check out the Arbor paper on the Cost per Attack vs ROI of a DDoS Defense Solution. The indicator should be simple, such as the Average Number of DDoS Attacks per Month. An attack does not mean it was successful, only that it was attempted. The cost of outages due to DDoS attacks comprises operational costs and revenue impacts. In short, you are just reporting that the services are doing the job as invested.
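The cost-vs-ROI comparison comes down to simple arithmetic; every figure below is a placeholder assumption for illustration, not a number from the Arbor paper:

```python
# Back-of-envelope ROI of a DDoS defense solution.
# All figures are illustrative placeholders.
attacks_per_month = 2          # average attempted DDoS attacks per month
cost_per_outage = 40_000       # operational cost + revenue impact per outage
p_outage_unprotected = 0.5     # chance an attack causes an outage if unmitigated
mitigation_cost_monthly = 8_000

expected_loss = attacks_per_month * p_outage_unprotected * cost_per_outage
roi = (expected_loss - mitigation_cost_monthly) / mitigation_cost_monthly
print(f"expected monthly loss avoided: {expected_loss}, ROI: {roi:.1f}x")
```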

From the technical assessment aspect, you want to look into instances which may "bypass" the service, where errors are flagged (e.g. 50X codes) that are not due to the DDoS service but come from the web server itself. Such instances should be followed up for closure with the project team, to resolve any issues leading to the server not responding well.
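The 50X check above can be sketched as a quick pass over origin-server access logs; the sample lines are fabricated and the common-log-format assumption is mine:

```python
import re
from collections import Counter

# Count 5xx responses served by the origin web server itself; these were
# not stopped upstream by the DDoS service and warrant follow-up.
# Sample log lines are fabricated for illustration.
log_lines = [
    '198.51.100.7 - - [10/Jan/2019:10:00:01 +0000] "GET / HTTP/1.1" 200 512',
    '198.51.100.8 - - [10/Jan/2019:10:00:02 +0000] "GET /app HTTP/1.1" 503 87',
    '198.51.100.9 - - [10/Jan/2019:10:00:03 +0000] "POST /api HTTP/1.1" 502 0',
]

# Common log format: status code follows the quoted request line.
status_re = re.compile(r'" (\d{3}) \d+$')
codes = Counter(
    m.group(1) for line in log_lines if (m := status_re.search(line))
)
server_errors = {c: n for c, n in codes.items() if c.startswith("5")}
print(server_errors)
```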

sunhux (Author) commented:
One last question:
Do 'High Risk' detections (as per the attachment) indicate a genuine
attack (or a potential attack) has been mitigated?

I'll open another EE thread later to find out how Arbor Peakflow
works out those detections. I was queried on how they are classified as
High, Med, Low: I have seen very low traffic (as low as 50 kbit/s) being
classified as 'High Risk', so I guess it's the rate of change that Arbor uses?
btan (Exec Consultant) commented:
Yes, with a certain level of accuracy, based on the real-time intelligence from its end. Best to prioritise analysis around those activities; consultancy may be an option to dive deeper.