• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 405
  • Last Modified:

Trend/Anomaly Analysis

I am trying to put into words a concept that is not in my area of expertise.
What area of knowledge concerns itself with the following?
Specifics will be appreciated and the best, rewarded.

I am collecting daily data on many different metrics (numbers, measures).

For example:
I am looking at how much the current measure varies from the average, as an indicator of a deviation from the norm, but this is not satisfactory.

I am envisioning a computer screen with a slider control that ranges from showing all data up to showing only highly anomalous data.

I want to be alerted to anomalous trends.
How should anomalous be defined?
What about filtering data-errors?
What differentiates a data-anomaly and a data-error?

Thanks for your input
1 Solution
Depends on the data of course.

Say you are looking at heights and weights of men and women taken from
questionnaires filled in by hand at a local college and scanned in with OCR.

Most of the data will fit a bell curve.  There will be average values with
some spread around them.  You can decide that anomalous data is anything
more than three standard deviations from the mean.
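The three-standard-deviation rule is easy to sketch in code. A minimal example in plain Python (no libraries; the function name and sample data are illustrative, not from the thread):

```python
def three_sigma_outliers(values):
    """Return the values lying more than three standard deviations
    from the mean -- a common rule of thumb for bell-shaped data."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [v for v in values if abs(v - mean) > 3 * std]

# e.g. fifty typical heights (inches) plus one wildly out-of-range entry
heights = [70] * 50 + [200]
print(three_sigma_outliers(heights))   # -> [200]
```

One caveat: with very small samples a single extreme value can inflate the standard deviation enough to hide itself (so-called masking), so the rule works best once you have a reasonable amount of data.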

The anomalies you might see this way are basketball players, maybe some
double amputees, a few people with eating disorders.

You might also see some errors:  A person who entered their data in
centimeters and kilograms instead of inches and pounds would stand out.
These would be anomalies.  Real but unusual data.

People who feel like they're ten feet tall or carrying the weight of the world
on their shoulders and say so, would show up.  These would be bad data rather than anomalies.

And you might find some smudges or hanging chads in the scanning process.  These would be data errors.
AndyPandy (Author) commented:
Hi D-glitch,

thanks for the input.

To further clarify the context: I am collecting numbers from the web.
Article counts, web page counts, prices, indexes.
A variety of numbers, having nothing in common except that they generally move 'smoothly' up and down, as in a curve.
(If I later find correlations between seemingly dissimilar metrics, that would be very cool!)
Each metric, or source, is kept separate from the others; i.e., the page count from site A is identified and kept separate from price X on page Y.
Example normal trends:
Page count: 3,4,3,4,3,4,5,6,7,6,5,4,3,2,2,2,2,21,2,3
Price X: .22,.21,.23,21,.22,.22,.21,.22,.23,.24,.25,.26
Example anomalous trends:
Page count: 3,4,3,4,3,4,5,6,7,100223,6,5,4,3,2,2,2,2,21,2,3
Price X: .22,.21,.23,21,.22,.22,.00,.00,.00,.21,.22,.23,.24,.25,.26
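One simple way to flag spikes like the 100223 above is to compare each new point against the mean and standard deviation of a short trailing window. A sketch in plain Python (the window size, threshold, and function name are illustrative choices, not anything prescribed in the thread):

```python
def rolling_anomalies(series, window=5, threshold=3.0):
    """Flag points that deviate sharply from the mean of the
    preceding `window` observations. Returns (index, value) pairs."""
    flagged = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mean = sum(recent) / window
        std = (sum((v - mean) ** 2 for v in recent) / window) ** 0.5
        if std == 0:
            # Flat window (e.g. a run of identical prices):
            # any change at all counts as a deviation.
            if series[i] != mean:
                flagged.append((i, series[i]))
        elif abs(series[i] - mean) > threshold * std:
            flagged.append((i, series[i]))
    return flagged

page_count = [3,4,3,4,3,4,5,6,7,100223,6,5,4,3,2,2,2,2,21,2,3]
print(rolling_anomalies(page_count))   # -> [(9, 100223), (18, 21)]
```

Note it flags the obvious 100223 spike, and also the 21 near the end, which sits far above the surrounding 2s and 3s; whether that 21 is a real anomaly or a data-entry error is exactly the distinction asked about above.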

I am seeking the language to describe variations and anomalies in these movements.
Is this the study of statistics? If so, is there a sub-specialty that deals with trends?
Probably the best description of what you are doing is Data Mining.
The Wikipedia article should help bring you up to speed on the jargon:


Certainly statistics is part of it.  So is pattern recognition, among many other fields.

'Outlier detection' is another buzz phrase you might want to research. The whole thing (data mining, pattern recognition, outlier detection, etc.) is a vast and extensively researched field especially with internet traffic. Once you get searching, you'll find more info than you probably want!
If you really want to dive in, search Google Scholar for some of these terms and look for survey papers (papers that summarize a lot of ideas).
AndyPandy (Author) commented:
Ok...thinkin.....give me a couple days.. ;)
Yes, statistics is the main way to check your data.  Data mining is typically applied after data collection, to analyze the information.  It sounds like you are not analyzing information yet; it sounds more like you want to check for data integrity.  Your assumption is that your data are normal, but the data you are presenting are not normal.

If you assume normality, you should consider using confidence intervals.  As you receive new data, you check whether each point falls within the confidence interval.

1) If the data point is within the confidence interval, include it in the next average.
2) If the data point is outside the confidence interval, reject it (and alert you).

In the simplest form, you can average your "good" data as your mean.  Use the same data for your standard deviation.

Assuming a 95% confidence interval, with n = number of data points:
mean ± 1.96 × (standard deviation / sqrt(n))

You should have this update continuously: recomputing the mean and standard deviation as new data arrive lets the interval track gradual increases or shifts in your data.
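The accept/reject loop described above can be sketched as follows. One judgment call to flag: the formula above, mean ± 1.96·SD/√n, is the confidence interval for the *mean*; for screening individual incoming observations a band of mean ± 1.96·SD is the more usual choice, and that is what this sketch uses (plain Python; all names are illustrative):

```python
import math

def make_ci_checker(initial_good, z=1.96):
    """Return a checker that accepts points inside mean +/- z*std of the
    'good' data seen so far, folding accepted points into the next average."""
    good = list(initial_good)

    def check(x):
        n = len(good)
        mean = sum(good) / n
        std = math.sqrt(sum((v - mean) ** 2 for v in good) / n)
        if std == 0:
            accepted = (x == mean)        # no spread yet: only exact matches
        else:
            accepted = abs(x - mean) <= z * std
        if accepted:
            good.append(x)                # 1) include in the next average
        return accepted                   # 2) False -> reject and alert

    return check

check = make_ci_checker([10, 11, 9, 10, 10, 11, 9, 10])
print(check(10))    # True  -- within the band, added to the running data
print(check(100))   # False -- outside the band, flagged for review
```

Because accepted points are folded back into the running mean and standard deviation, the band drifts along with slow shifts in the data, which is the continuous-update behavior described above.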
AndyPandy (Author) commented:
Thanks everyone!