Memory Performance Question - very technical
Posted on 2002-06-19
I am building a server application that uses business objects extensively. What started out as a "let's see if we can break it" test has turned into a serious performance tuning consideration.
Allocating new objects is very fast: until physical RAM is filled, I create somewhere between 100,000 and 500,000 objects per second. Once I start hitting virtual memory the rate flattens to a steady 33,000 objects per second. At just over 4,900,000 objects the allocation loop fails with an out-of-memory error. That is fine - finding the limits was the original point of the test.
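For reference, this is roughly the shape of the allocation loop, boiled down to a bare Delphi console test. TBusinessObject here is just a stand-in for my real business classes, and the timing uses GetTickCount from the Windows unit; the free pass is timed separately.

program AllocTest;
{$APPTYPE CONSOLE}
uses Windows, SysUtils;

type
  TBusinessObject = class(TObject)
  private
    FId: Integer;
    FName: string;
  end;

var
  Objects: array of TBusinessObject;
  I, Created: Integer;
  Start: Cardinal;
begin
  { Room for more objects than will actually fit in memory }
  SetLength(Objects, 5000000);
  Created := 0;
  Start := GetTickCount;
  try
    for I := 0 to High(Objects) do
    begin
      Objects[I] := TBusinessObject.Create;
      Inc(Created);
    end;
  except
    on EOutOfMemory do
      Writeln('Out of memory after ', Created, ' objects');
  end;
  Writeln(Created, ' objects created in ', GetTickCount - Start, ' ms');
end.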
When I go to free the objects, I get a fraction of that performance. At this high utilization I can free only about 130 objects per second, and even at lower utilizations releasing objects is far more expensive than creating them. Persisting this many objects takes 8 minutes; releasing them... after running overnight I still had over 4,000,000 objects allocated.
Further testing showed that the performance of "free" degrades linearly with the number of objects already resident in memory. I tested this by pre-allocating 10,000, 20,000, and 30,000 objects (see the sketch after this paragraph). With no objects resident, creating and freeing a single object 10,000 times took 110 ms. With 10,000 objects resident the same operations took 220 ms, with 20,000 resident 330 ms, and so on. Create timing stayed flat at 10 ms per 10,000 objects regardless of how many objects were already resident.
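In case it helps, here is a sketch of that timing test, again with TBusinessObject standing in for the real classes:

procedure TimeCreateFree(ResidentCount: Integer);
var
  Resident: array of TBusinessObject;
  Obj: TBusinessObject;
  I: Integer;
  Start: Cardinal;
begin
  { Pre-allocate the resident population }
  SetLength(Resident, ResidentCount);
  for I := 0 to ResidentCount - 1 do
    Resident[I] := TBusinessObject.Create;

  { Create and free a single object 10,000 times and time it }
  Start := GetTickCount;
  for I := 1 to 10000 do
  begin
    Obj := TBusinessObject.Create;
    Obj.Free;
  end;
  Writeln(ResidentCount, ' resident: ', GetTickCount - Start, ' ms');

  { Release the resident population }
  for I := 0 to ResidentCount - 1 do
    Resident[I].Free;
end;

Calling TimeCreateFree with 0, 10000, 20000 and 30000 gives the 110/220/330 ms progression described above.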
Given that this application is intended to service a mission-critical, enterprise-scale problem, the ability to shut down cleanly is essential. An application that starts up instantly is nice, but if it takes three days to shut down, maintenance becomes difficult.
1. Does anyone know why the performance of "Free" is so much poorer than that of "Create"?
2. Does anyone know why "Free" degrades in performance while "Create" remains flat?
3. Does anyone have any strategies for handling this performance differential?
To answer in advance some of the obvious questions:
1. I don't know for sure that end users will have 4,000,000-plus objects resident in memory at one time, but I also don't know for sure that they won't. The project is still at the early stage where I can make drastic design changes without incurring a lot of overhead, so I am testing the limits early. I'd rather deal with it now than have a dissatisfied customer.
2. An earlier concept of this project was built on a relational model. Although the relational model handled much of the core business well, it did not handle many of the common circumstances well at all. The polymorphism of the object model provides a much more elegant way to handle the problem.
3. For the persistence layer I am using Interbase, although the business objects are agnostic towards the persistence layer implementation.
Thanks in advance