I've been trying to implement a low-frequency cutoff in C++ which essentially takes a stream of numbers and smooths the output (filtering out high-frequency movement/jitter). However, it's important that the most recent values are weighted heavily and reflected immediately, as the data is time-critical (it's used to control a motion simulator base using output from a piece of game software). I've got a working weighted moving average algorithm but could do with something a little more responsive at the front end, and I found this:-

if CurrentBar <= 1 then
begin
    ZLEMA = Price;
    factor = 2/(Period+1);
    lag = (Period-1)/2;
end
else
begin
    ZLEMA = factor*(2*Price-Price[lag]) + (1-factor)*ZLEMA[1];
end;

I've translated it into C and my code is as follows:

However, it doesn't seem to behave quite as I'd expect. It seems to be nearly there, but sometimes I get a value slightly lower than every item in the queue (when they are all higher).

My queue and the number of items in it are passed as parameters, with the most recent value always at the front. I also pass an incrementing counter starting at 0, as required by the function.

I'm not sure I've interpreted the meaning of 'ZLEMA[1]' correctly, as it's not clear in his pseudocode, so I've assumed it to be the previous call's ZLEMA; I'm also assuming 'Price' actually means 'Price[0]'. Perhaps I've got this wrong?

Am I supposed to copy the calculated ZLEMA values back into my original queue before the next call? I don't change the original queue at all, other than shifting all values one place towards the end and inserting the latest at the beginning. The code I use to do this is:

I'd be extremely grateful if someone with a better understanding of the maths could sanity-check this for me, to see if I've got anything slightly wrong.


A moving average will smooth out a curve but it will ALWAYS fail at the beginning until there are enough time points to average.

The only way (that I know) around this is to use a transfer function. Since that isn't easy to implement directly, another approach is to convert the transfer function into state-space form and implement that.

I don't have code, but I did do it in Excel. This shows a way to implement it practically. Let me know if it was any help. The equation is in the file: Low-Pass-Filter.xls

I have looked over your code snippets and they seem OK. Is it possible to send more of the code (i.e. so that we can run it), or can you make a small project that has the code and sample data showing the issue?

What you've posted in the Excel sheet is a Simple Moving Average (SMA), and it has significant time delay.

You can't get away from it, because it calculates the current position based on the current position plus two (or more) previous positions. You'll never lose the time shift.

Chrispaul:

Your ZLEMA will not work in "real time" like you hope it will. It is used to get rid of the "lag", yes, but it will only work if you already know the "future" values. The ZLEMA formula expects that you have the entire data set, because each calculation requires one or more future values, depending on the Period.

You cannot know the future values, obviously, in a real-time system.

You would have to use a numerical method to predict the future values; at the most basic, I believe this would be an algorithm that uses the second-order derivative in a weighted fashion.

Firstly thanks all for your input, much appreciated!

rwj04,

That makes sense, I guess. So I suppose the best I can hope for is simply an exponential moving average, accepting there will be a little lag, but that this will be minimised by heavier front weighting than in a typical weighted moving average?

I have this algorithm too, but a similar problem in that the values don't seem quite correct (unless this is the nature of the formula).

For instance, say my array contains 16 values, all 0.4775 - the output is 0.4983, but I'd expect it to be 0.4775?

This is a DSP problem. What I had may seem like a moving average, but a moving average (by definition) cannot produce output over the full length of the input data, because of the limitations of averaging across several points. What I did was a filter that takes out all data above a cutoff frequency set by the constant A. If you increased the number of poles of my filter, it would change its shape.

You are correct about the delay though, it is kind of unavoidable...

Hi jcouls29, that looks like another good option for what I'm looking for - could you please translate it into code, though, as I have no clue what it means? (Maths not too good here :( .) If someone can also verify my EMA code, then I can choose which one works best by experimentation.

The best way I know to show this is with a simple one-pole filter. A little explanation is needed beforehand so you can understand what this means.

There are 4 major parameters in what I'm about to show you:

u = input signal
y = output signal
dy/dt = change in output signal over time
A = filter constant

The formula I used was dy/dt = (u - y)/A

If you start with initial values of y = 0 and dy/dt = 0, then you can calculate a dy/dt at each point.

As for the code below, I'm rusty on c++. This might not work but the principle is there.

As rwj04 pointed out above though, this also has a delay to it. It's considered a lagging filter. I could show you a leading filter, but the problem is that you have a start up delay for leading filters as well. Sorry I can't help more than this

Note: I'm new to Experts-Exchange, how do you embed the code into a particular spot?

The excel spreadsheet I attached has that algorithm implemented. It may be wise to put some input values in there before implementing it in C++ to make sure it works.


The code you've written there doesn't work, sadly - it outputs no meaningful values, and the loop causes an invalid array index, y[cnt], to be used on the last iteration. Should that line be 'y[i-1] = y[i-2] + dy'? A couple of questions:-

1) What is the 'filter constant' for?
2) Which end of the input array needs to contain the most recent value?
3) Which end of the output array will contain the most up to date value following the call to this function?
4) What queue size (NUM_VALS) are you using in the spreadsheet?

Thanks again,
Chris.

p.s. Not sure how to inline code either, I think you need to use 'code' tags (opening and closing, like html).

As for the code I posted, you're right about the array size situation. That should be easily fixed. As for your questions:

1) The filter constant represents a frequency cutoff. I used a Digital Signal Processing (DSP) technique for this. http://en.wikipedia.org/wiki/Low-pass_filter gives a simple explanation; you want the Discrete-Time Realization section. In my case, A is the RC constant they talk about, so the frequencies it cuts out are those above 1/(2*pi*A). If you don't have an understanding of frequency-domain theory, this may get complicated.

In your case, the higher you make A, the lower the frequency this filter will allow, meaning it will smooth the curve out more and more. The lower you make it, the more noise is allowed into the system. Remember, A must be greater than or equal to 1 to be effective.

I reattached the XLS again, this time without the changing rand() numbers. Adjust the A constant and watch how it "smooths" (or filters) out the high frequency variations.

2) The last point of the input array has the most recent value.

3) The same is true for the output array. The last is the most recent value.

4) NUM_VALS is arbitrary. You can keep adding to the input and output arrays as many times as you'd like and it won't affect the filter. In particular, I used 49 points, but I could easily delete the last 20 and the first 29 outputs would remain the same. The function is not based on how many points are being used.

I would like to mention that I developed this function for a one-time conversion. If you wanted to do the conversion for the next value on the fly, you could try something simpler (as attached). Again, I'm rusty on C++ - I hope this is right. The only things you would need to supply are the input and the filter constant.

Let me know if this helps.

float NextOutput(float input, float A)
{
    static float y = 0.0f;            /* filter state, persists between calls */
    float dy = (input - y) / A;
    y = y + dy;
    return y;
}

Remember that this solution requires a start up time. You may try supplying some dummy average values up front to get the filter going. For example, if you know most values are going to be in the range 1 - 10, then call the function 5 or 10 times with a value of 5. It's worth a shot.

I don't know if this is the best solution for you. Good luck

That on-the-fly version is pretty much exactly what I'm looking for. A weighted mean gives slightly better results, but I end up copying loads of values around in the queue each time, which seems a bit sub-optimal. Can your on-the-fly one-pole filter be modified to be a little more front-weighted? Apologies if I've misunderstood something fundamental.

Yeah, the problem with a filter is that its main purpose is to filter out sharp changes in the data. So when you go from 0 to 1, you won't see a real change, but when you go from 1 to 12, it'll pull that 12 down. It's not like a weighted moving average.

The only way to use the function I provided and get a quick jump up front would be to use a small value of A (like 0.5 or 0.6) for the first few inputs (like 2 or 3 data points), then set A to a nominal value after that. In the attached spreadsheet, I used A1 for the first 3 points. You can see that the response is much sharper.

It would be wise to increase A gradually to the nominal value as you go through new points - e.g. use A = 0.6 for the first 3 points, then 0.7 for one, 0.8 for one, etc., until you reach your nominal value.

Remember that this is a filter, not an averaging technique. And with real-life filters there's always a small lag up front. Does that answer your question?

Here is another XLS file showing a more practical set of input values. See how if you change A2, it smooths out the values. It's a little easier to see with these inputs. Change A1 to equal A2 and you'll notice that there is still a delay. Lower it for the first few values and notice how quick it jumps. It won't be an average though.

No problem. Hope it works like you want it to. Feel free to contact me anytime with more questions.
