Understanding database sizes
Posted on 2011-04-27
I have a database that is growing at an incredible rate, even though I am not increasing the number of records. Please can someone explain where the disk space is going?
The database has 40 tables, 38 of which are smaller than 2 MB.
There are two large tables.
The first of these, GlobalLive, has 800,000 records; when I look at the properties of this table, its size is about 2 GB.
The other is GlobalInput. This table starts off empty, then during an evening batch it gets populated with another 800,000 records. These records match ones in GlobalLive and are used to update the GlobalLive table. At the end of the batch the GlobalInput table is emptied again.
Looking at the GlobalInput table, even though it is empty, its size is about 1 GB.
Looking at the database as a whole, I would expect it to come to a size equal to the sum of the tables; however, it is almost double that, at about 5 GB.
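For reference, the figures above were read from the table properties dialog; assuming this is SQL Server (which the log-file behaviour below suggests), the same numbers can be checked from a query window with:

```sql
-- Reserved, data, and index space for the two large tables
EXEC sp_spaceused 'GlobalLive';
EXEC sp_spaceused 'GlobalInput';

-- Called with no arguments: total database size and unallocated space
EXEC sp_spaceused;
```

The gap between the table totals and the database size should show up here as unallocated or index space.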
The bigger issue comes when running this batch job: the total database size increases to over 10 GB, about 25% of which is the log file, and this has a big impact on performance.
One big impact occurs while the records are being deleted from the GlobalInput table. The idea was to clear space and start with a fresh table each night. While the stored procedure that deletes all the records is running, if I refresh the size of the database I can see it increase by gigabytes at a time. I therefore suspect that some storage is being allocated for a rollback. Could this be the issue, and if so, how would I free up the space? I understand that some space will be allocated to indexes, which I would like to keep for performance.
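For context, the nightly clear-down is roughly like the sketch below (the procedure name is my placeholder, and I'm assuming SQL Server syntax; the table name is as described above). A plain DELETE of every row is fully logged row by row, which would be consistent with the log growth I see while it runs:

```sql
-- Hypothetical name for the nightly clear-down procedure.
CREATE PROCEDURE usp_ClearGlobalInput
AS
BEGIN
    -- Each deleted row is written to the transaction log so the
    -- statement can be rolled back, hence the growth during the run.
    DELETE FROM GlobalInput;
END;
```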
I take a backup each night and would only ever restore from a full backup, if that helps with freeing up space.
Please can you indicate what may be taking up this space and how I can reduce it.