• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 485

big o notation for reallocs and frees

Hi,

Say d is the maximum amount of memory that can be used. So the big-O notation for the C program that uses the realloc function is O(log_2 d). Say d is freed. The reallocs up to d are not done in one step but in several steps: for example, first the memory used is 4, then it is doubled to 8, later to 16, then 32, and so on.

If d is freed in one step, and then memory is allocated to it via realloc up to d again (over several calls), would the big-O notation still be O(log_2 d)? Thank you.
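
For concreteness, here is a minimal sketch of the pattern being described, assuming a buffer that starts at 4 bytes and doubles via realloc until it reaches a cap d (the cap value and variable names are illustrative, not from the actual program):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t d = 1 << 20;   /* illustrative cap "d" (1 MiB) */
    size_t size = 4;            /* start at 4 bytes, as described above */
    char *buf = malloc(size);
    if (buf == NULL)
        return 1;

    int calls = 0;
    while (size < d) {
        size *= 2;              /* double: 4 -> 8 -> 16 -> ... -> d */
        char *tmp = realloc(buf, size);
        if (tmp == NULL) {
            free(buf);
            return 1;
        }
        buf = tmp;
        calls++;
    }
    printf("%d realloc calls to reach %zu bytes\n", calls, size);

    free(buf);                  /* "d is freed" in one step */
    return 0;
}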
Asked by zizi21
3 Solutions
 
ozo commented:
It would depend on the implementation.
 
zizi21 (Author) commented:
ozo,
could you please give me an example? I am trying to understand this... I thought it was O(log_2 d), as you would take the largest term... any help in understanding this would be greatly appreciated.
 
phoffric commented:
If d is the max amount of memory available that is used for the heap (i.e., not for code, stack, or global data variables), then due to general fragmentation it is unlikely that you will actually be able to realloc d bytes.

But if you had more than d bytes available, and if you find that you can realloc up to d bytes without an error, then according to your OP it will take (log_2(d) - 1) realloc calls.

If you free these d bytes and start over with a malloc of 4 bytes, realloc'ing up to d bytes again, then it still takes (log_2(d) - 1) realloc calls.

But some of the realloc calls may require a new block to be formed if the current region cannot be doubled in place because some of the memory required is already being used. In that case, the n bytes in the originally allocated region are copied over. The complexity of that single copy is O(n).

Worst-case would be that every realloc requires a copy. Consider this sequence:

  8 + 16 + 32 + ... + 2^d ~ O(2^d)
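
One way to observe the "new block" case in practice is to watch the address realloc returns: if it changes, the old contents were copied. A small sketch (the sizes here are arbitrary):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 4;
    char *p = malloc(n);
    if (p == NULL)
        return 1;

    while (n < (size_t)1 << 16) {
        uintptr_t before = (uintptr_t)p;   /* remember the old address */
        n *= 2;
        char *q = realloc(p, n);
        if (q == NULL) {
            free(p);
            return 1;
        }
        if ((uintptr_t)q != before)
            printf("moved at %zu bytes: old contents were copied\n", n);
        p = q;
    }
    free(p);
    return 0;
}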
 
zizi21 (Author) commented:
Thank you very much... I am studying it now...
 
zizi21 (Author) commented:
Thank you very much for looking into this. In the worst case, it is O(2^d) because each copy may require O(2^d), but in the best case, it is O(log_2 d). Is this right, then?
 
Hugh McCurdy commented:
After all that, I'm going with ozo, in that it depends.

How often, in the grand scheme of things, is memory being reallocated?

What I see here reminds me of the doubling in size of an array (say, to implement an ArrayList, ArrayQueue, etc.). In many cases the doubling happens so rarely that updating the array (or heap region) is considered to happen at the speed of the underlying structure. Amortized over all operations, this can be as cheap as constant time.

My point is that, depending on what is going on, this efficiency may not matter to the overall efficiency of the program (or it may). So, what is going on?
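
For reference, the usual doubling pattern looks like this in C (a sketch; the vec type and names are mine, not from the OP's program). A push that triggers the doubling costs O(current length) for the copy, but doubling happens so rarely that each push is amortized O(1):

#include <stdlib.h>

typedef struct {
    int    *data;
    size_t  len;
    size_t  cap;
} vec;

/* Append x, doubling the backing store on overflow.
   Returns 0 on success, -1 on allocation failure. */
int vec_push(vec *v, int x)
{
    if (v->len == v->cap) {
        size_t ncap = v->cap ? v->cap * 2 : 4;   /* start at 4, then double */
        int *tmp = realloc(v->data, ncap * sizeof *tmp);
        if (tmp == NULL)
            return -1;
        v->data = tmp;
        v->cap = ncap;
    }
    v->data[v->len++] = x;
    return 0;
}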
 
phoffric commented:
The OP gave the scenario that there would be a doubling in size of an array starting at 4 and then going to d. In that specific scenario there are about log_2(d) calls to realloc. There are implementation-dependent, under-the-hood optimizations that can make one realloc version faster than another (e.g., if a copy is required, copying 4-8 bytes at a time instead of one byte at a time), but for a given implementation, given the author's scenario, there are always about log_2(d) calls to realloc.

Now, if one of the implementations were to reserve double or quadruple the number of pages actually requested in the realloc, then a number of realloc calls would naturally be processed much faster, since the reservation has already been made in advance.
 
phoffric commented:
Off topic: @hmccurdy - I sent you a note via your website - did you get it?
 
TommySzalapski commented:
phoffric, didn't you mean to terminate at 8 + 16 + 32 + ... + 2^log_2(d)? Or ... + d?

In the worst case, you would still need to do log_2(d) reallocs, but each would be O(n), for a worst case of O(d*log(d)).

Note that with big-O notation, the base of the log doesn't matter: log_a(d) = log_2(d)/log_2(a), and constant factors are dropped.

Now, also remember that O(40000x) = O(x), so big O notation is really only useful in theoretical discussions or when extrapolating to very large values.

In practical use, things like caching speed up that kind of operation, making it rather difficult to time tests and get meaningful results.
 
phoffric commented:
Thanks, Tommy... I should have terminated with d, not 2^d, since the last operation would require copying d bytes. Rewrite:

In worst-case, if every realloc required a copy, then the total number of bytes copied is:
  8 + 16 + 32 + ... + d = 2^3 + 2^4 + 2^5 + ... + 2^log_2(d) ;     Note: d = 2^log_2(d)

And this sum is just 2*(d-4) ~ O(d)
    http://www.wolframalpha.com/input/?i=2%5E3+%2B+2%5E4+%2B...+%2B+2%5Ek

I agree that O(2^d) was too high. O( d * log_2(d) ) is a better estimate.
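
If you want to sanity-check that sum numerically, a quick loop does it (d = 2^20 here is arbitrary; any power of two works):

#include <stdio.h>

int main(void)
{
    unsigned long long d = 1ULL << 20;
    unsigned long long sum = 0;

    for (unsigned long long term = 8; term <= d; term *= 2)
        sum += term;                     /* 8 + 16 + ... + d */

    printf("sum = %llu, 2*(d-4) = %llu\n", sum, 2 * (d - 4));  /* equal */
    return 0;
}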
 
ozo commented:
I'd call that O(2^log_2(d)) = O(d)
 
phoffric commented:
>> I agree that O(2^d) was too high. O( d * log_2(d) ) is a better estimate.
For worst-case, O( d * log_2(d) ) is definitely a better estimate than either O(2^d) (which I got when I incorrectly made the last copy 2^d instead of just d) or your original estimate of O(log_2(d)).

The reason I said "better" rather than "the correct complexity" was that I was still wondering whether I should be adding the two complexities rather than multiplying them. After seeing ozo's response, I think adding is correct. That is, the worst-case complexity is:
   O( log_2(d) )  + O( d ) => O(d)

If each of the reallocs required a copy of d bytes, then multiplying the two would make more sense; i.e., the complexity would be O( d * log_2(d) ).

But realize that prior to copying d bytes in the last realloc, the sum of all the bytes copied in the previous ( log_2(d) - 1 ) reallocs was already almost d bytes. So the dominant transfer is just the last operation: after it, roughly 2*d bytes have been transferred in total, and the complexity is just O(2d) = O(d).

As Tommy indicated, complexity is primarily of value for very large inputs. The extra constants and lower-order terms do affect your run-time performance. So you have to decide whether the asymptotic complexity is what matters to you, or whether your run-time performance analysis is more important.
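
A small simulation of that conclusion (a sketch; every realloc is pessimistically assumed to copy): the call count grows like log_2(d) while the total bytes copied grow like 2*d, so O(d) dominates.

#include <stdio.h>

int main(void)
{
    for (unsigned long long d = 1ULL << 10; d <= 1ULL << 30; d <<= 10) {
        unsigned long long size = 4, calls = 0, bytes = 0;
        while (size < d) {
            size *= 2;
            calls++;
            bytes += size;   /* counting each step's new size, as in the sum above */
        }
        printf("d = %llu: %llu calls, %llu bytes copied\n", d, calls, bytes);
    }
    return 0;
}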
 
zizi21 (Author) commented:
Thanks very much to all for the replies.

I wrote a program using reallocs, and I was told that realloc is bad and that I should work out the big-O analysis. I don't understand: how could reallocing up to 200 be bad? I had good timing results. So here I just wrote a simple program in order to evaluate the big-O notation/timings. (I am learning big-O notation. At times the big-O bound does not look good, but in practice the timings are good.)

I could be wrong, but this is what I understand: when we realloc, the allocator does not necessarily give us the exact amount requested. It might allocate more, just in case it is needed, similar to malloc.

This is my purpose: to evaluate realloc using big-O notation. Thanks very much for all the replies. I am going to study them.
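
On the point about the allocator handing back more than requested: on glibc you can observe this with malloc_usable_size(), a glibc-specific (non-portable) extension. The 200-byte figure below just echoes the size mentioned above.

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>   /* malloc_usable_size() -- glibc extension */

int main(void)
{
    char *p = malloc(200);
    if (p == NULL)
        return 1;

    /* The usable size may be larger than what was asked for. */
    printf("asked for 200 bytes, usable size: %zu\n", malloc_usable_size(p));

    free(p);
    return 0;
}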
