
Hi,

Say d is the maximum amount of memory that can be used. So the big-O notation for the C program that uses the realloc function is O(log_2 d). Say d is freed. The reallocs up to d are not done in one step but in several steps. For example, first the memory used is 4, then it is doubled to 8, later to 16, then 32, and so on.

If d is freed in one step and memory is then allocated up to d again via realloc (in several calls), would the big-O notation still be O(log_2 d)? Thank you.

Could you please give me an example? I am trying to understand this. I thought it was log_2 d, as you would take the largest term. Any help in understanding this would be greatly appreciated.

But if you had more than d bytes available, and you find that you can realloc up to d bytes without an error, then according to your OP it will take (log_2(d) - 1) realloc calls.

If you free these d bytes and start over with a malloc of 4 bytes, realloc'ing up to d bytes, then it still takes (log_2(d) - 1) realloc calls.

But some of the realloc calls may require a new block to be formed, if the current region cannot be doubled in place because some of the memory required is already in use. In that case, the n bytes in the originally allocated region are copied over. The complexity of that single copy is O(n).

Worst-case would be that every realloc requires a copy. Consider this sequence:

8 + 16 + 32 + ... + 2^d ~ O(2^d)

How often, in the grand scheme of things, is memory being reallocated?

What I see here reminds me of the doubling in size of an array (say, to implement an ArrayList, ArrayQueue, etc.). In many cases the doubling operation happens so rarely that updating the array (or heap region) is considered to happen at the speed of the underlying structure. This can be as cheap as amortized constant time.

My point is that, depending on what is going on, this efficiency may or may not matter to the overall efficiency of the program. So, what is going on?

Now, if one of the implementations were to reserve double or quadruple the number of pages actually requested in the realloc, then a number of realloc calls would naturally be processed much faster, since the reservation has already been made in advance.

In the worst case, you would still have to do log_2(d) reallocs, but each would be O(n), for a worst case of O(d * log(d)).

Note that with big O notation, the base of the log doesn't matter.

Now, also remember that O(40000x) = O(x), so big O notation is really only useful in theoretical discussions or when extrapolating to very large values.

In practical use, things like caching speed up that kind of operation making it rather difficult to time tests and get meaningful results.

In worst-case, if every realloc required a copy, then the total number of bytes copied is:

8 + 16 + 32 + ... + d = 2^3 + 2^4 + 2^5 + ... + 2^log_2(d) ; Note: d = 2^log_2(d)

And this sum is just 2*(d-4) ~ O(d)

http://www.wolframalpha.com/input/?i=2%5E3+%2B+2%5E4+%2B...+%2B+2%5Ek

I agree that O(2^d) was too high. O( d * log_2(d) ) is a better estimate.

I did a program using reallocs and was told that realloc is bad and to find out the big-O analysis. I don't understand. How could realloc'ing up to 200 be bad? I had good timing results. So here I just did a simple program in order to evaluate the big-O notation/timings. (I am learning big-O notation. At times the big-O bound does not look good, but in practice the timings are good.)

I could be wrong, but this is what I understand: when we realloc, it is not necessarily the case that we get exactly the amount requested. The allocator might reserve more in case it is needed, similar to malloc.

This is my purpose: to evaluate realloc using big-O notation. Thanks very much for all the replies. I am going to study them.

For worst-case, O( d * log_2(d) ) is definitely a better estimate than either O(2^d) (which I got when I incorrectly made the last copy 2^d instead of just d) or your original estimate of O( log_2(d) ).

The reason that I said "better" and not the correct complexity was that I was still thinking that maybe I should be adding the two complexities rather than multiplying them. After seeing ozo's response, I think adding is correct. That is, the worst-case complexity is: O( log_2(d) ) + O( d ) => O( d ).

If each of the reallocs required a copy of d bytes, then multiplying the two would make more sense; i.e., complexity would be O( d * log_2(d) ) .

But realize that prior to copying d bytes in the last realloc, the sum of all the bytes copied in the previous ( log_2(d) - 1 ) reallocs was already almost d bytes. So the dominant transfer is just the last operation; after it, about 2d bytes in total have been transferred, so the complexity is just O(2d) = O(d).

As Tommy indicated, the complexity is primarily of value for very large inputs. The extra constants and terms do affect your run-time performance. So you have to decide whether the asymptotic complexity is of importance to you, or whether your run-time performance analysis matters more.