
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 672

SQL 2008 database has suddenly grown huge

We've had a package for our website that uses SQL Server for about 7 years.  The database was usually around 25 GB in size.  Suddenly it has grown to 90 GB and I can't figure out why or how to fix it.  It isn't the log that is huge - it's the actual DB.  I can see the largest file is "portal" but I don't know what that file is.  It isn't listed under the table view.  We haven't done anything differently with the site over the past few months - in fact we've deleted a lot of old images.  I thought about shrinking the DB (even though some don't recommend that), but it looks like that would only affect about 6% of the space.  I have no clue what to do but need to reduce the size of this DB quickly.
4 Solutions
I believe the sp_spaceused stored procedure would help you find the table that is using the unexpected space within the database - this is where I would start looking.  Shrinking the database would only be useful after you have ascertained that there is space to free up; however, it looks like you have already checked how much space that would save.
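
A minimal sketch of running it - once at the database level for the totals, then per table for the big consumers (the table name below is just a placeholder):

```sql
USE BBNC;  -- the database in question, per later comments in this thread
GO
-- database-level totals: size, unallocated, reserved, data, index, unused
EXEC sp_spaceused;
GO
-- per-table usage; repeat for each suspiciously large table
EXEC sp_spaceused @objname = N'dbo.YourLargeTableHere';
```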

Hope the above helps.
cindyfillerAuthor Commented:
Hmm - it did show me that 31 GB was used by the index.   I do reindex every week - but that must not affect the size of it.  It definitely shows that shrinking would not change the size much.
Thanks for the reply.  I am not a DBA, but have a little experience with SQL.
Personally, I would check the daily backups to see exactly when the DB changed dramatically in size.  Then I would attach the database from the night before and the night after the size change (preferably to a different server, to make sure I don't accidentally affect the current live database) and use the SP from above to see where the change originated.  Apart from this, I'm afraid I can't be of much further help with this question.
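
A sketch of restoring a nightly backup under a different name so the live database is never touched (the paths and logical file names below are hypothetical - check yours with RESTORE FILELISTONLY first):

```sql
-- see the logical file names stored inside the backup
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\BBNC_full.bak';

-- restore as a copy, relocating the files so nothing collides with the live DB
RESTORE DATABASE BBNC_Check
FROM DISK = N'D:\Backups\BBNC_full.bak'
WITH MOVE N'BBNC'     TO N'D:\Data\BBNC_Check.mdf',
     MOVE N'BBNC_log' TO N'D:\Data\BBNC_Check_log.ldf',
     RECOVERY;
```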

cindyfillerAuthor Commented:
I do appreciate your comments - it's pointed me in a different direction and that helps.
Thank you and I hope you resolve it successfully.
Please check your log file with the DBCC SQLPERF(LOGSPACE) command, and check the recovery model of the database.  If the recovery model is FULL and no transaction log backup is configured, the transaction log file will grow too much.  You would then change the recovery model of the database to SIMPLE and shrink the log file.

If the log file is not the concern, then look at each individual table's space to understand which table consumes too much space, so you can archive some old data.
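
A sketch of those checks (the shrink at the end is only appropriate if the log really is the problem and point-in-time recovery is not needed; the logical log file name is an assumption):

```sql
-- how full is each database's log?
DBCC SQLPERF(LOGSPACE);

-- recovery model per database
SELECT name, recovery_model_desc FROM sys.databases;

-- only if the log is the culprit:
ALTER DATABASE BBNC SET RECOVERY SIMPLE;
DBCC SHRINKFILE (BBNC_log, 1024);  -- target size in MB
```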
Scott PletcherSenior DBACommented:
This command will show you whether an immediate shrink has any chance of freeing a lot of space:

DBCC SHOWFILESTATS

If UsedExtents is not significantly below TotalExtents, then a shrink at this point won't help.
David ToddSenior DBACommented:

Checking the backup size is a good idea.
select
	top 100 -- don't want to drown in detail
	s.backup_start_date
	, s.database_name
	, s.recovery_model
	, s.backup_size
	, s.compressed_backup_size
	, round( s.backup_size / 1024.0 / 1024.0 / 1024.0, 2 ) as BackupSizeGB
	, round( s.compressed_backup_size / 1024.0 / 1024.0 / 1024.0, 2 ) as CompressedBackupSizeGB
	-- percentage compressed backup is of expected
	, round( 100.0 * s.compressed_backup_size / s.backup_size, 2 ) as PercentCompression
from msdb.dbo.backupset s
where
	s.database_name like 'model' --'YourDatabaseNameHere'
	and s.type = 'D' -- only interested in database backups
order by
	s.backup_start_date desc


I'm wondering (like the others) what changed. I'm wondering what you are looking at to see that the size has changed. I'm wondering if compression was on for the backups and now it's not. I've seen good compression - better than 30% of the expected size. SharePoint, with its images and documents, not so much.

Do remember that any index rebuild via DBCC DBREINDEX or similar will leave a significant amount of free space in the database afterwards. I can't find the reference again, but I read somewhere that it can be as much as 1.4 times the finished size of the index. Shrinking to remove that space is a waste of time because a) it will undo the good work you've just done, b) next time you rebuild the index it is just going to grow again, and c) it will fragment the file at the file-system level.
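
To see how much free space actually exists per file before even considering a shrink, something like this sketch (run inside the database in question) can help:

```sql
-- allocated size vs. actually used space, per database file
-- size is in 8 KB pages, so dividing by 128 gives MB
SELECT name,
       size / 128.0 AS AllocatedMB,
       CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS UsedMB,
       size / 128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS FreeMB
FROM sys.database_files;
```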

cindyfillerAuthor Commented:
I found an email where we had cleaned out images from our web site.  That was on Dec 4 and the total DB size was 18 GB.  When I look at the actual mdf file it now shows 90 GB - quite a huge growth for 5 or 6 weeks.  When looking more closely, I find the table itself is about 55 GB and the index is about 32 GB.  From the tools I've run, shrinking won't recover more than a couple of GB.
Scott PletcherSenior DBACommented:
Is the table a heap or does it have a clustered index?

Sometimes SQL Server gets "confused" with heaps and the storage goes bonkers.

If it is a heap, try adding a clustered index, then see what the new size is.
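
A quick way to check is a sketch like this (in sys.indexes, index_id 0 means the table is a heap):

```sql
-- tables without a clustered index (heaps)
SELECT SCHEMA_NAME(t.schema_id) AS schema_name, t.name AS table_name
FROM sys.tables t
JOIN sys.indexes i ON i.object_id = t.object_id
WHERE i.index_id = 0
ORDER BY schema_name, table_name;
```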
cindyfillerAuthor Commented:
Honestly, I don't know - but I will find out!  Our vendor installed this software and for the most part I've just ignored it on the back end - I haven't really had to do anything with it.
David ToddSenior DBACommented:

Referencing the email you mentioned, that means the database has grown fivefold in 6 weeks. Did anything change in the application, such as the size of the images doubling or worse?

Given Scott's point above about SQL getting confused - when was the last time you ran DBCC CHECKDB or a database integrity check?

Would a DBCC UPDATEUSAGE be appropriate here? It at least couldn't hurt anything, right?
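
For reference, a sketch of the two checks just mentioned (database name per this thread; COUNT_ROWS makes the correction thorough but slower on large tables):

```sql
-- verify logical and physical integrity
DBCC CHECKDB (BBNC) WITH NO_INFOMSGS;

-- correct any inaccurate page and row counts in the catalog views
DBCC UPDATEUSAGE (BBNC) WITH COUNT_ROWS;
```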

cindyfillerAuthor Commented:
No, nothing has changed.  That's why it's so odd.  No one has loaded videos to our website or uploaded any major changes.  I'm going to check my logs tomorrow and see when I last upgraded - I don't think it was around that time.  But I'll also try the suggestions above and see what comes out.
cindyfillerAuthor Commented:
I've run the items suggested and am including the results below.  I sure am learning a lot - but still not seeing anything that would be causing the problem.  

      Database size:      92108.50 MB
      Unallocated:        7572.66 MB
      Reserved:           86528992 KB
         Data:            55670240 KB
         Index:           29829720 KB
         Unused:          1029032 KB

DBCC SHOWFILESTATS
      Portal (mdf) – everything else looks really small
         Total extents      1335584
         Used extents       1235812

DBCC SQLPERF(LOGSPACE) – two databases show up with larger log sizes
         Log size           56.30469 MB
         Log space used     92%
      BBNC (the DB that I'm concerned with)
         Log size           34.86719 MB
         Log space used     10%
Note – recovery model is SIMPLE

Running the program listed by David: backup size is 82 GB

I checked whether the tables were heaps or had clustered indexes, and it appears there are both in the database.  

DBCC CHECKDB – the most important results are:
CHECKDB found 0 allocation errors and 0 consistency errors in database 'BBNC'.
DBCC execution completed.
Scott PletcherSenior DBACommented:
>> I checked if tables were heap or clustered index and it appears there are both in the database. <<

Any large heaps probably need to be reconfigured with a clustered index.

You can run this code to identify heaps (heaps will list with "Has Clus Index" as "No"):

DECLARE @has_clus_index_show_yes bit
SET @has_clus_index_show_yes = 0

SELECT
    SCHEMA_NAME(o.schema_id) AS Schema_Name, o.name AS Table_Name,
    Table_MB, Table_Rows,
    Table_Plus_Indexes_MB, Table_Plus_Indexes_Rows,
    CASE WHEN Has_Clus_Index = 0 THEN 'No' ELSE CASE WHEN @has_clus_index_show_yes = 1 THEN 'Yes' ELSE '' END END AS Has_Clus_Index,
    FILEGROUP_NAME(fg_min) + CASE WHEN fg_min = fg_max THEN '' ELSE '/' + FILEGROUP_NAME(fg_max) END AS Filegroup,
    CASE WHEN data_compression_max = 0 THEN '' ELSE
        CASE data_compression_min WHEN 0 THEN 'none' WHEN 1 THEN 'ROW' WHEN 2 THEN 'PAGE' ELSE '?' END +
        CASE WHEN data_compression_max = data_compression_min THEN '' ELSE '/' +
        CASE data_compression_max WHEN 0 THEN 'none' WHEN 1 THEN 'ROW' WHEN 2 THEN 'PAGE' ELSE '?' END END END AS Compression,
    o.create_date AS Table_Create_Date
FROM sys.objects o WITH (NOLOCK)
INNER JOIN (
    SELECT
        dps2.object_id,
        CAST(SUM(CASE WHEN dps2.index_id IN (0, 1) THEN dps2.reserved_page_count ELSE 0 END) / 128.0 AS decimal(9, 2)) AS Table_MB,
        SUM(CASE WHEN dps2.index_id IN (0, 1) THEN row_count ELSE 0 END) AS Table_Rows,
        CAST(SUM(dps2.reserved_page_count) / 128.0 AS decimal(9, 2)) AS Table_Plus_Indexes_MB,
        SUM(dps2.row_count) AS Table_Plus_Indexes_Rows,
        COUNT(DISTINCT CASE WHEN dps2.index_id > 0 THEN dps2.index_id END) AS Total_#_Of_Indexes,
        MAX(CASE WHEN dps2.index_id = 1 THEN 1 ELSE 0 END) AS [Has_Clus_Index],
        MAX(p2.data_compression_min) AS data_compression_min,
        MAX(p2.data_compression_max) AS data_compression_max,
        MAX(p2.fg_min) AS fg_min,
        MAX(p2.fg_max) AS fg_max
    FROM sys.dm_db_partition_stats dps2 WITH (NOLOCK)
    INNER JOIN (
        SELECT
            p3.object_id, MAX(p3.rows) AS rows_max,
            MIN(p3.data_compression) AS data_compression_min, MAX(p3.data_compression) AS data_compression_max,
            MAX(au3.fg_min) AS fg_min, MAX(au3.fg_max) AS fg_max
        FROM sys.partitions p3 WITH (NOLOCK)
        INNER JOIN (
            SELECT au4.container_id, au4.type, MIN(au4.data_space_id) AS fg_min, MAX(au4.data_space_id) AS fg_max
            FROM sys.allocation_units au4 WITH (NOLOCK)
            GROUP BY au4.container_id, au4.type
        ) AS au3 ON
            au3.container_id = CASE WHEN au3.type = 2 THEN p3.partition_id ELSE p3.hobt_id END
        GROUP BY p3.object_id
    ) AS p2 ON
        p2.object_id = dps2.object_id
    GROUP BY dps2.object_id
) AS dps ON
    o.object_id = dps.object_id
WHERE o.type = 'U'
ORDER BY
    Table_Plus_Indexes_MB DESC, --from largest to smallest total table plus indexes size
    Schema_Name, Table_Name
COMPUTE SUM(Table_MB), SUM(Table_Plus_Indexes_MB)
cindyfillerAuthor Commented:
I'd probably want to check with the vendor that provides this package before changing their indexes.   I don't see why it would matter to them - but I always exercise caution when changing something a vendor has set up.
Scott PletcherSenior DBACommented:
That makes sense.  But be aware: if you have heaps, especially large ones, SQL Server can have internal issues keeping track of the correct space used and allocated by those heaps.
cindyfillerAuthor Commented:
The vendor is now working on a script to reduce the size of the database - they said our error table has grown quite large, so they are targeting that initially.  I'll close this for now and will try to award points to everyone, because all of your input helped provide the information that convinced the vendor there was an issue.
Question has a verified solution.
