Welcome to Experts Exchange


big o notation for reallocs and frees

Posted on 2013-01-06
Last Modified: 2013-01-07

Say d is the maximum amount of memory that can be used. So the big O notation for the C program that uses the realloc function is O(log_2 d). Say d is freed. The reallocs up to d are not done in one step but in several steps: for example, the memory used starts at 4, is then doubled to 8, later to 16, then 32, and so on.

If d is freed in one step and memory is then allocated via realloc up to d again (several times), would the big O notation still be O(log_2 d)? Thank you.
Question by:zizi21
LVL 84

Expert Comment

ID: 38749486
It would depend on the implementation.

Author Comment

ID: 38749603
Could you please give me an example? I am trying to understand this... I thought it was log_2 d as you would take the largest term... any help in understanding this would be greatly appreciated.
LVL 32

Expert Comment

ID: 38749636
If d is the max amount of memory available that is used for the heap (i.e., not for code, stack, or global data variables), then due to general fragmentation, it is unlikely that you will be able to actually realloc d bytes.

But if you had more than d bytes available, and if you find that you can realloc up to d bytes without an error, then according to your OP, it will take (log_2(d) - 1) realloc calls.

If you free these d bytes, and start over with a malloc of 4 bytes, realloc'ing up to d bytes again, then it still takes (log_2(d) - 1) realloc calls.

But some of the realloc calls may require a new block to be formed if the current region cannot be doubled because some of the memory required is already in use. In that case, the n bytes in the originally allocated region are copied over. The complexity of that single copy is O(n).

Worst-case would be that every realloc requires a copy. Consider this sequence:

  8 + 16 + 32 + ... + 2^d ~ O(2^d)

Author Comment

ID: 38749661
Thank you very much..I am studying it now...

Author Comment

ID: 38749708
Thank you very much for looking into this. In the worst case it is O(2^d) because each copy may require O(2^d), but in the best case it is O(log_2 d). Is this right then?
LVL 13

Expert Comment

by:Hugh McCurdy
ID: 38750681
After all that, I'm going with ozo, in that it depends.

How often, in the grand scheme of things, is memory being reallocated?

What I see here reminds me of the doubling in size of an Array (say to implement an ArrayList, ArrayQueue, etc.).  In many cases the doubling operation happens so rarely, that updating the array (or heap region) is considered to happen at the speed of the underlying structure.  This could be as short as constant time.

My point is that, depending on what is going on, the efficiency of realloc may not matter to the overall efficiency of the program (or it may). So, what is going on?
LVL 32

Expert Comment

ID: 38751120
The OP gave the scenario that there would be a doubling in size of an array starting at 4, and then going to d. In that specific scenario there are about log_2(d) calls to realloc. There are implementation dependent under-the-hood optimizations that can improve one realloc version over another (e.g., if a copy is required, then copying 4-8 bytes at a time instead of one byte at a time), but for a given implementation, given the author's scenario, there are always log_2(d) calls to realloc.

Now, if one of the implementations were to reserve double or quadruple the number of pages actually requested in the realloc, then a number of realloc calls would naturally be processed much faster, since the reservation has already been made in advance.
LVL 32

Expert Comment

ID: 38751121
off topic: @ hmccurdy - sent you a note via your website - did you get it?
LVL 37

Assisted Solution

TommySzalapski earned 100 total points
ID: 38751190
phoffric, didn't you mean to terminate at 8 + 16 + 32 + ... + 2^log_2(d)? or ... + d

In the worst case, you would still need to do log_2(d) reallocs, but each would be O(n), for a worst case of O(d*log(d)).

Note that with big O notation, the base of the log doesn't matter.

Now, also remember that O(40000x) = O(x), so big O notation is really only useful in theoretical discussions or when extrapolating to very large values.

In practical use, things like caching speed up that kind of operation, making it rather difficult to time tests and get meaningful results.
LVL 32

Expert Comment

ID: 38752003
Thanks Tommy ... I should have terminated with d, not 2^d, since the last operation would require copying d bytes. Rewrite:

In worst-case, if every realloc required a copy, then the total number of bytes copied is:
  8 + 16 + 32 + ... + d = 2^3 + 2^4 + 2^5 + ... + 2^log_2(d) ;     Note: d = 2^log_2(d)

And this sum is just 2*(d-4) ~ O(d)

I agree that O(2^d) was too high. O( d * log_2(d) ) is a better estimate.
LVL 84

Assisted Solution

ozo earned 100 total points
ID: 38752691
I'd call that O(2^log_2(d)) = O(d)
LVL 32

Accepted Solution

phoffric earned 300 total points
ID: 38753105
>> I agree that O(2^d) was too high. O( d * log_2(d) ) is a better estimate.
For worst-case, O( d * log_2(d) ) is a definitely a better estimate than either O(2^d) (which I got when I incorrectly made the last copy 2^d instead of just d), and your original estimate of

The reason that I said "better" and not the correct complexity was that I was still thinking that maybe I should be adding the two complexities rather than multiplying them. After seeing ozo's response, I think adding is correct. That is, the worst-case complexity is:
   O( log_2(d) )  + O( d ) => O(d)

If each of the reallocs required a copy of d bytes, then multiplying the two would make more sense; i.e., complexity would be O( d * log_2(d) ) .

But realize that prior to copying d bytes in the last realloc, the sum of all the bytes copied in the previous ( log_2(d) - 1 ) reallocs was almost d bytes. So, the dominant transfer is just the last operation, since after the last operation, there were ~ 2*d bytes transferred, so the complexity is just O(2d) = O(d).

As Tommy indicated, complexity is primarily of value for very large inputs. These extra constants and terms do affect your run-time performance, so you have to decide whether the asymptotic complexity is of importance to you or whether your run-time performance analysis is more important.

Author Comment

ID: 38753303
Thanks very much to all for the replies.

I wrote a program using reallocs and I was told that realloc is bad and to work out the big O analysis. I don't understand how reallocing up to 200 could be bad; I had good timing results. So here I just did a simple program in order to evaluate the big O notation/timings. (I am learning big O notation. At times the big O notation does not seem good, but in practice the timings are good.)

I could be wrong, but this is what I understand: when we realloc, it is not necessarily the case that we get exactly the amount requested. The allocator might reserve more in case it is needed, similar to malloc.

This is my purpose. To evaluate realloc using big o notations. Thanks very much for all the replies. I am going to study them..

Question has a verified solution.
