Complexity in algorithms and successive division

I have the algorithm of a function called sum:
function sum(a,i,j)
z=0
for k=i to j do
{z=z+a[k]}
return(z)
Now I have these algorithms, and I need to find what they do and what their complexity is [Big-O notation]. (I'm learning all this by reading.)

Here are the algorithms:

(a.)
m=0
for i=1 to n do
for j=i to n do
{x=sum(a,i,j)
if m<x then m=x}

(b.)
m=0
for i=1 to n do
for j=1 to n do
{x=sum(a,1,n)
if m<x then m=x}

(c.)
m=0
for i=1 to n/4 do
for j=3n/4 to n do
{x=sum(a,i,j)
if m<x then m=x}

(d.)
m=0
for i=1 to n/4 do
for j=3n/4 to n do
{x=sum(a,1,n/2)
if m<x then m=x}

I will appreciate your help in understanding this, and some info on where I can find more about complexity.


Author Commented:
Which one is most complex and which is least complex?
Commented:
This is a very unusual and somewhat complex example of efficiency analysis. I hope you are not starting out with this; it is not introductory material for the topic.

Just consider the sum function.

function sum(a,i,j)

The a doesn't matter for us, only the 2nd and 3rd parameters. Why? Because the number of operations that sum does depends on the 2nd and 3rd parameters, but not the first. To be exact, the number of operations it performs is j-i+1, so we can say that the sum procedure is of order O(j-i+1), which is approximately O(j-i) when j-i is reasonably large. (O() analysis is approximate, so that last approximation shouldn't scare you.)

Actually, there will be enough times that j-i is not large that we might not want that approximation. Let's stick with O(j-i+1) for now.
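To make the cost visible, here is the sum pseudocode transcribed into Python (my transcription, not the original's language; the operation counter is an addition of mine, purely for illustration):

```python
def sum_range(a, i, j):
    # Pseudocode sum(a,i,j): z=0; for k=i to j do z=z+a[k]; return z.
    # 'ops' counts the additions, so we can see the cost is j-i+1.
    z = 0
    ops = 0
    for k in range(i, j + 1):   # k = i to j, inclusive
        z += a[k]
        ops += 1
    return z, ops

a = list(range(100))            # a[k] = k for k = 0..99
z, ops = sum_range(a, 10, 30)
print(ops)                      # prints 21, i.e. 30 - 10 + 1
```

The counter confirms the claim above: the array contents never affect the count, only the two index parameters do.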

continues


Commented:
Now consider algorithm A

for i=1 to n do
for j=i to n do
{x=sum(a,i,j)

Consider what happens in one iteration of the outer loop. sum() is called in a series like
sum(a,i,i), sum(a,i,i+1), sum(a,i,i+2), ..., sum(a,i,n)

The order of these is

O(1), O(2), O(3), ..., O(n-i+1)

The total order of this single outer-loop iteration is the sum of these individual orders, that is

O(1) + O(2) + O(3) + ... + O(n-i+1)

which is also

O(1+2+3+...+(n-i+1))
Now the sum of this series can be simplified. Since the first and last items can be added together to get n-i+2, and the 2nd and the 2nd-from-last can be added together to get the same, and so on, the sum is just

O( (n-i+2) / 2)

continues.
Commented:
Oops, that last formula was wrong. Each pair of terms in the series is equal to (n-i+2), and the number of terms in the series is n-i+1, so the number of pairs in the series is (n-i+1)/2, and the sum is (n-i+2)(n-i+1)/2. That can be approximated as ((n-i)^2)/2, or (n^2 - 2in + i^2)/2.
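If the pairing trick feels slippery, it is easy to check the closed form against the literal sum for a range of values (a Python sketch; `literal` and `closed` are my own names, not from the original):

```python
# Cost of one outer iteration i: (j - i + 1), summed for j = i..n.
def literal(n, i):
    return sum(j - i + 1 for j in range(i, n + 1))

# Closed form from pairing first/last terms: (n-i+2)(n-i+1)/2.
def closed(n, i):
    return (n - i + 2) * (n - i + 1) // 2

for n in (5, 10, 50):
    for i in range(1, n + 1):
        assert literal(n, i) == closed(n, i)
print("closed form matches the literal sum")
```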

Now, that is the order of a single iteration; we need to add together the results for each iteration, that is

O(1/2 * (
n^2 - 2n + 1     +   (1st iteration)
n^2 - 4n + 4     +   (2nd iteration)
n^2 - 6n + 9     +   (3rd iteration)
...
n^2 - 2n^2 + n^2 ))  (nth iteration)

Now we can look at that as the sum of three series (three columns). We have n terms of n^2, so that is n^3. In the next series, the numbers can be paired to get 2n + 2n^2 in each pair, so that series sums to (n/2)*(2n + 2n^2). Unfortunately, I can't remember what that last series sums to. I'll have to find that, although it's not likely to even matter, as you will see.
Commented:
That last series is the sum of squares, 1^2 + 2^2 + ... + n^2, which is n(n+1)(2n+1)/6, roughly (n^3)/3. However, the exact value hardly matters; let's just bound it by n^3, as that is bigger, or "worse". So now we get that the whole loop is

O( n^3  +  n/2(2n+ 2n^2) + n^3 )

Now is when we start simplifying. If we multiply out that middle term, we get

O(n^3 + n^2 + n^3 + n^3)
O(3n^3 + n^2)

Now we start using approximations (this is why I could just say that last term is at worst n^3). We note that as n gets large, the n^3 terms totally dominate and the n^2 term contributes almost nothing, so we can drop it. Also, multiplying the total time by that constant 3 is meaningless, because O() notation discards constant factors, so the result is

O(n^3)

That is a very inefficient algorithm. It says that as n gets large, the algorithm gets very, very slow. This is far, far worse than an O(n^2) algorithm, and even those are considered bad.

continues
Commented:
Ouch. I think all of these algorithms are O(n^3), so they are all equally bad. (However, some are slightly better than others, but only due to the constant factors we dropped, which tend not to matter.)

There may be an easier way to prove this, though: a graphical way. The run-time of these loops is proportional to the volume of a shape that we can calculate.

consider again

for i=1 to n do
for j=i to n do
sum(a,i,j)

Picture an i-j Cartesian plane (like an x-y plane); each iteration of the loop occupies a point on the plane. Since the inner loop runs j from i to n, the points fill a right triangle with vertices (1,1), (n,n), (1,n), i.e. a triangle like

****
***0
**00
*000

where the *s are the points occupied by the triangle (i runs left to right from 1 to n; j runs from n at the top down to 1 at the bottom) and the 0s are there to help space things so they look good in proportional fonts.

Now, for each point, the run-time of that iteration is O(j-i+1). Picture this as a height at each point. So the heights at the three vertices are

(1,1) : O(1)
(n,n) : O(1)
(1,n) : O(n)

So this forms a triangular pyramid, and the volume of this pyramid is the total time required to do all the iterations. That would be (1/3)*A*H, where A is the area of the triangle and H is the height. The height is n. The area is (1/2)*base*height, or (1/2)*n*n = (1/2)*n^2. So the volume is (1/6)*n^3, and the algorithm is O((1/6)*n^3).
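You can confirm that 1/6 constant numerically by counting the additions algorithm (a) actually performs (a Python sketch; `ops_a` is my own name for the counter, not from the original):

```python
# Total additions in algorithm (a): each call sum(a,i,j) costs j - i + 1.
def ops_a(n):
    return sum(j - i + 1
               for i in range(1, n + 1)
               for j in range(i, n + 1))

for n in (10, 100, 400):
    print(n, ops_a(n) / n**3)   # the ratio approaches 1/6 ≈ 0.1667 as n grows
```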

We can continue this analysis for the others, but I have to go for now. I'll be back in about 10 hours.
Author Commented:
Those were supposed to be introductory algorithms on complexity. I still don't understand which of them is the most efficient and which is the least.
Commented:
This is not introductory analysis. Not by any means.

They are all of order O(n^3), which makes them all extremely bad. Some might be slightly better than others, but all of them are very bad.

Let's consider the 2nd one:

for i=1 to n do
for j=1 to n do
sum(a,1,n)

These points form a rectangle on the i-j plane; its corners are (1,1), (n,1), (n,n), (1,n). Since the sum() function is being called with the 2nd and 3rd parameters constant, the duration of each sum() call in the loop is O(n-1+1) = O(n). So picture the height (time) at each point as n, and you see that the shape formed is a rectangular prism; the volume is the area of the base (n^2) times the height (n), which yields a volume of n^3. So the 2nd algorithm's duration is on the order of O(n^3), which means that it is about 6 times worse than the first one.
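The exact n^3 for the 2nd algorithm can be checked the same way (again a Python sketch; `ops_b` is my own name):

```python
# Algorithm (b): n*n iterations, each calling sum(a,1,n), which costs n additions.
def ops_b(n):
    return sum(n
               for i in range(1, n + 1)
               for j in range(1, n + 1))

for n in (10, 50):
    assert ops_b(n) == n**3     # exactly n^3 additions, no fractional constant
print("algorithm (b) performs exactly n^3 additions")
```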

Let's consider the 3rd case:
for i=1 to n/4 do
for j=3n/4 to n do
sum(a,i,j)

again this forms a rectangle on the i-j plane, but its coordinates are (1,3n/4) (n/4,3n/4) (n/4,n) (1,n)

The height at each point is j-i+1, so this is a box with a slanted top. We can calculate its volume as the volume of the lower part, a rectangular prism, plus the volume of the upper part. The volume of the lower part is the area of the base times the height. The base is a square with sides of length n/4, so its area is (n^2)/16. The height is 3n/4 - n/4 + 1, which is n/2 + 1; we can drop the +1, as it contributes very little, and we get that the volume is (n^3)/32. So this is faster than the previous two.
Commented:
Finally

for i=1 to n/4 do
for j=3n/4 to n do
sum(a,1,n/2)

This forms the same rectangle as the 3rd, but this time the height is always n/2, since sum(a,1,n/2) costs n/2 - 1 + 1 = n/2 every time. So this is a rectangular prism; the volume is the area of the base, (n^2)/16, times the height, n/2, which gives (n^3)/32.

That's the same as the 3rd. That's not good. Oops, I only got halfway through the 3rd one: I figured out the volume of the lower portion, which is the (n^3)/32, but we need to add on the volume of the upper portion. The upper portion's height rises linearly from 0 at the corner (n/4, 3n/4) to about n/2 at the opposite corner (1, n), so its average height over the square base is n/4, and its volume is ((n^2)/16)(n/4) = (n^3)/64. So the final answer for the 3rd one is (n^3)/32 + (n^3)/64 = 3(n^3)/64.
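Fractions like these are easy to get wrong, so a direct count is a good sanity check (Python; `ops_c` is my own name for the counter, and n should be divisible by 4 so n/4 and 3n/4 are whole numbers):

```python
# Additions performed by algorithm (c): each call sum(a,i,j) costs j - i + 1.
def ops_c(n):
    return sum(j - i + 1
               for i in range(1, n // 4 + 1)
               for j in range(3 * n // 4, n + 1))

n = 400
print(ops_c(n) / n**3)          # close to 3/64 ≈ 0.0469
```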

so the results we get are

a = O((1/6)n^3)
b = O(n^3)
c = O((3/64)n^3)
d = O((1/32)n^3)

So the order from fastest to slowest is d,c,a,b.
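That ordering can be double-checked by counting the additions all four algorithms perform (a Python sketch; the `ops_*` counters are my own names, and n must be divisible by 4):

```python
# Count the additions each algorithm performs; sum(a,i,j) costs j - i + 1.
def ops_a(n):
    return sum(j - i + 1 for i in range(1, n + 1) for j in range(i, n + 1))

def ops_b(n):
    return sum(n for i in range(1, n + 1) for j in range(1, n + 1))

def ops_c(n):
    return sum(j - i + 1 for i in range(1, n // 4 + 1)
                         for j in range(3 * n // 4, n + 1))

def ops_d(n):
    return sum(n // 2 for i in range(1, n // 4 + 1)
                      for j in range(3 * n // 4, n + 1))

n = 200                                 # divisible by 4
counts = {"a": ops_a(n), "b": ops_b(n), "c": ops_c(n), "d": ops_d(n)}
print(sorted(counts, key=counts.get))   # prints ['d', 'c', 'a', 'b']
```

All four counts grow like n^3; only the constants in front differ, which is exactly what the volume argument predicts.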

Now, that is not a typical look at efficiency. If that is how your book introduces the topic, look it up in another book or two. This treatment is missing some of the most important (and easy) points of the topic in favor of less important (and hard) details.
Author Commented:
Thanks very much. Any book you recommend?
0
Commented:
The only one I can think of is Robert Sedgewick's "Algorithms". It focuses on basic algorithms, not on efficiency, but it carefully studies the efficiency of each algorithm, so it is full of good examples of this. There may be (actually, I'm sure there are) books that cover this topic a little more directly, but I don't know any names. If you go to a large bookstore, like Barnes & Noble, you should be able to find one; they tend to have hundreds of books on these sorts of topics.