
# Sum of absolute differences

What would be the most efficient and fastest way for an algorithm to find the sum of absolute differences between two arrays of the same size? What I am doing now is a double for loop, iterating over each element row by row and taking the difference. This is the usual way; is there any other way?

Commented:
If you want the sum of the absolute values of the differences between corresponding elements of two 2-D arrays, then your method is the fastest:

``````sum = 0;
for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
        sum += Math.abs( A[i,j] - B[i,j] );
``````

Commented:
ar1: 2 4 6 1
ar2: 5 3 2 5
----------------
abs diff of corresponding elements in an array
3 1 4 4
sum =  3 + 1 + 4 + 4 = 12

So, a single loop works if this is what you want
sum=0
for ( i=0;i < N; i++)
sum += abs( a[i] - b[i] )


Commented:
By array, maybe you meant matrix or 2d array?
If you defined the matrix as Mat1[2][3], then there are 2*3 elements in the matrix

int * pM1 = (int *) Mat1;
int * pM2 = (int *) Mat2;
for (sum=0, i=0;i < row*col; i++)
sum += abs( *pM1++ - *pM2++ );

Author Commented:
I mean a 2d array

Commented:
The approach you propose appears to be as efficient as you can get.  You have to visit each element of both arrays.  Assuming you are discussing two-dimensional arrays of equal size and an algorithm like this, you're fine.
``````private int test13(int[,] a, int[,] b)
{
    int sum = 0;
    int m = a.GetLength(0);
    int n = a.GetLength(1);
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < n; ++j)
            sum += Math.Abs(a[i,j] - b[i,j]);
    return sum;
}
``````

Commented:
If the arrays are dynamically created, then two loops are required. If created like Mat1[2][3], then one loop is enough to pass over the numRow * numCol = 2*3 = 6 cells, since the cells are stored sequentially in this case.

Also, you can set a sentinel pointer to the end of the array:
int * pSentinel = pM1 + 6;
and instead of testing for i < 6 and incrementing i, just test for pM1 == pSentinel to end the loop. This, combined with pointer traversal (*pM1++) instead of indexing the 2-D array, is the most efficient approach.

Commented:
How big are these arrays?
How often will you run the algorithm?
Which language will you use?

How much time will the calculation take?  Will anyone notice the difference?

Whether you treat the arrays as one- or two-dimensional, you will perform the difference and the sum step the same number of times.  You may do some multiplications in the index calculations, but a good optimizing compiler may even reduce or eliminate that.

Commented:
>> How big are these arrays?
>> How often will you run the algorithm?
Since this Q is in the Algorithms Zone, assume M x N; but for this problem it is OK to simplify to M x M. Assume also that the algorithm will be run often, otherwise the "most efficient and fastest way" would likely not be an issue. M can easily range from 3 to 10000.

>> You may do some multiplications in the index calculations, but a good optimizing compiler may even reduce or eliminate that.
Which "good optimizing compiler" do you recommend? Produce assembly output from it and inspect the generated code to see whether it really is good enough.

If it falls short, then go with the single-array approach using *p++ and a sentinel, since that replaces two index increments and two index tests with one equality test.

Commented:
Why is MxM any easier than MxN?  You only need to take the product once.

I suppose you could write this algorithm in assembly language, in which case you could use a single index register for both arrays (just one increment to handle both arrays) and you could eliminate the need for calling the Abs function by testing condition codes and doing a three-way branch immediately after the subtraction.  You could also start the index at the end of the array and decrement the index which would eliminate the need to compare the index to the sentinel (by instead just doing a BGE after decrementing the index).

If you want to use a language like C++, you can treat a two-dimensional array as one-dimensional.  If you instead use C# or VB, you'll have to fight with the type restrictions, unless you resort to "unsafe" code.  Both the one- and two-dimensional approaches use the same number of iterations.  You can squeeze out a few cycles per iteration by using C++, and you can squeeze out even more by using assembly language, using essentially the same algorithm in all three cases.

Since most users can't tell the difference between 3 millisecond response time and 5 millisecond response time, one should consider other factors when deciding how much effort to put into such optimization.

Commented:
The difference between 3 ms response time and 5 ms response time is very significant in, say, DSP algorithms where the duty cycle is, say, 16 ms; so I agree that "one should consider other factors when deciding how much effort to put into such optimization."

Commented:
I'm sorry.  I should have been more clear.  In applications where the user presses a button and some answer is displayed in response, the difference between 3ms and 5ms is probably imperceptible.  In time-critical applications where response on that scale or better is required, then such a difference is significant.  Knowing the requirement is one of the factors to which I referred.  Sometimes it's worth squeezing cycles; sometimes not.  When it's worthwhile, then make the code run as fast as you can.

Commented:
>> Why is MxM any easier than MxN?
Didn't mean from a performance viewpoint - just a tad easier to make timing tests.

>> a good optimizing compiler may even reduce or eliminate that.
You are right. On two platforms (using VS C++ and gcc), my timing tests show that the differences are negligible when any optimization is turned on (O1, O2, O3). Without optimization, using gcc, the double-loop version took 20 seconds, whereas the single-loop version took only 18 seconds. (With optimization, we're talking 7-8 seconds depending on which switch is used.)

Commented:
Interesting results.

We're lucky to have such great hardware and compilers these days. Many years ago, I expended a lot of effort (writing asm code, intercepting and modifying system functions, etc) in order to maximize scrolling speeds.  Now, most programs scroll so fast it's easy to overshoot the mark.

Author Commented:
Just as an FYI, I will be writing this algorithm in MIPS assembly, so there is no compiler in the picture.

Commented:
Then you want to use phoffric's approach (calculate the total length of the array and then treat it as one-dimensional).  You can also use some of the assembly-level ideas I mentioned earlier to optimize the asm code.

Commented:
Here's a sketch.  I'm unfamiliar with MIPS, so please clean up any syntax problems.  There may be more efficient instructions also.
``````#   Sum the absolute values of the differences between corresponding elements
#   of two two-dimensional arrays of words (assuming total size < 4GB)

#   Arguments:
#       $a0:    First array
#       $a1:    Second array
#       $a2:    Number of rows
#       $a3:    Number of columns
#   Result:
#       $v0:    Sum of differences

        li      $v0,0           # sum = 0
        mult    $a2,$a3         # multiply array bounds to get element count
        mflo    $t0             # get result (ignore high-order word)
        beqz    $t0,done        # empty array; sum is 0
        sll     $t0,$t0,2       # scale element count to bytes (4 bytes/word)
        add     $t0,$a0,$t0     # create sentinel; stop when pointer reaches it
loop:   lw      $t1,0($a0)      # load corresponding elements
        lw      $t2,0($a1)
        sub     $t1,$t1,$t2     # difference
        bgez    $t1,pos         # difference is non-negative; add it
        sub     $v0,$v0,$t1     # difference is negative; subtract (adds |diff|)
        b       next
pos:    add     $v0,$v0,$t1     # difference is positive or zero; add
next:   addi    $a0,$a0,4       # advance both pointers one word
        addi    $a1,$a1,4
        blt     $a0,$t0,loop    # are we finished?
done:   jr      $ra
``````
Question has a verified solution.
