Are Hashtable and TreeSet heavy on resources (memory)?

I'm developing an application in which I plan to use two Hashtables, each with about 1000 entries. Each value will store a TreeSet containing about 1000 elements. What I'd like to know is whether a Hashtable/TreeSet setup at this size makes the design very resource-heavy.

As per my understanding, a Hashtable starts with a default initial capacity of 11 (a HashMap starts at 16). So 1000 entries in this case might mean a lot. Is that correct?
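For what it's worth, a small sketch (the key/value types here are just an illustration of the setup described in the question): passing an initial capacity to the `Hashtable` constructor avoids repeated rehashing while the 1000 entries are inserted, but it does not by itself make the table "heavy".

```java
import java.util.Hashtable;
import java.util.TreeSet;

public class PreSized {
    public static void main(String[] args) {
        // Default Hashtable capacity is 11; pre-sizing to 1000
        // avoids rehashing during the initial fill.
        Hashtable<Long, TreeSet<Integer>> table = new Hashtable<>(1000);
        for (long k = 0; k < 1000; k++) {
            table.put(k, new TreeSet<Integer>());
        }
        System.out.println(table.size()); // 1000
    }
}
```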


On most modern computers, where you can have several hundred megabytes of memory, and maybe even more than 1 GB, for your application, these requirements do not seem prohibitive. You'll probably have to increase the amount of memory allocated to the virtual machine in its options, say -Xmx512M or something like that.
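A quick way to confirm the flag took effect (a minimal sketch; run it with and without `-Xmx512M` and compare):

```java
public class HeapCheck {
    public static void main(String[] args) {
        // Run with: java -Xmx512M HeapCheck
        // maxMemory() reports the heap ceiling the JVM will attempt to use.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("max heap: " + maxMb + " MB");
    }
}
```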
orazen12 (Author) Commented:
Thanks ObJects.

My server has 1 GB of memory, of which 750 MB is already in use (at least that's what Webmin is showing me). Do you mean I'd need approximately 1 GB for this kind of operation, or should it be OK with my current system? It is doing a lot of other stuff too. Is there a way to check how much memory my system is using for this kind of operation?
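One way to get a rough answer from inside the JVM itself (a sketch; `System.gc()` is only a hint, so the figures are approximate, and the 1000x1000 sizes mirror the question):

```java
import java.util.Hashtable;
import java.util.TreeSet;

public class MemCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        rt.gc(); // hint only; numbers are approximate
        long before = rt.totalMemory() - rt.freeMemory();

        // Build the structure described in the question:
        // 1000 keys, each mapped to a TreeSet of 1000 ints.
        Hashtable<Long, TreeSet<Integer>> table = new Hashtable<>(1000);
        for (long k = 0; k < 1000; k++) {
            TreeSet<Integer> set = new TreeSet<>();
            for (int v = 0; v < 1000; v++) set.add(v);
            table.put(k, set);
        }

        rt.gc();
        long after = rt.totalMemory() - rt.freeMemory();
        System.out.printf("approx heap used: %d MB%n",
                (after - before) / (1024 * 1024));
        System.out.println(table.size()); // keep the table reachable
    }
}
```

For the system as a whole, tools like `top` (or Webmin's own memory page) show the Java process's resident size while this runs.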



Your computer would start using swap if RAM is full, and that would in turn slow down processing, since reads and writes from disk are slower. Apart from that, it should not be a problem.

for_yan Commented:
I think the best way is to first write it in a straightforward way, using all the hashtables you need, and make sure you are not getting Java out-of-memory errors (provide a sufficiently big number in -Xmx). Then observe: if performance is too slow for your business purposes, use your system tools to check whether your program is indeed causing a lot of memory swapping, and only then start thinking about how to redesign it to reduce its memory footprint. My guess is that you will not see much swapping.
gordon_vt02 Commented:
What kind of values are you storing? The overhead of the Hashtable and TreeSet structures themselves is fairly minimal, so you need to figure out how much storage each object in the TreeSet requires; then you can estimate the amount of memory required.

Sk = memory for hash key
Se = memory for each tree entry
n = number of hash entries
m = number of tree set entries

App Mem = n * (Sk + (m * Se))

So, if your hash key is a long (8 bytes) and your TreeSets contain ints (4 bytes), with 1000 sets of 1000 items you would use:

1000 * (8 + (1000 * 4)) = 4,008,000 bytes = ~3.8 MB

If your value objects contain more complex data structures, especially variable-length members like strings or arrays, you can come up with a reasonable range and evaluate a worst-case scenario to make sure you have enough memory. You aren't talking about a whole lot of data objects, though, so unless each object is pretty big, you probably won't even need to modify the default JVM memory settings; capping the heap at the 250 MB you have remaining should give plenty of room without swapping.
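The estimate above can be reproduced in code (a sketch; the variable names follow the definitions of Sk, Se, n, and m given earlier, and, like the formula, it ignores per-object JVM overhead):

```java
public class MemEstimate {
    public static void main(String[] args) {
        long sk = 8;    // Sk: bytes per hash key (long)
        long se = 4;    // Se: bytes per tree entry (int)
        long n = 1000;  // n: number of hash entries
        long m = 1000;  // m: tree set entries per value

        long bytes = n * (sk + m * se); // App Mem = n * (Sk + (m * Se))
        System.out.println(bytes);      // 4008000
        System.out.printf("~%.1f MB%n", bytes / (1024.0 * 1024.0)); // ~3.8 MB
    }
}
```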