
Solved

DBCC SHRINK-DATABASE - how to

Posted on 2004-09-01
Medium Priority
4,980 Views
Last Modified: 2012-06-21
Hi, I got a very nice 'defrag my DB' script from here, and I believe I need to add a 'DBCC SHRINK-DATABASE' to it, but I haven't got a clue or the time to read up, so I'm hoping for some quick help.

Can someone show/tell/help me include 'SHRINK-DATABASE', or whatever the correct command is, in this script.

Thanks in advance

acmp<><




The script is:
/*Perform a 'USE <database name>' to select the database in which to run the script.*/
USE epo_pluto
-- Declare variables
SET NOCOUNT ON
DECLARE @tablename VARCHAR (128)
DECLARE @execstr   VARCHAR (255)
DECLARE @objectid  INT
DECLARE @indexid   INT
DECLARE @frag      DECIMAL
DECLARE @maxfrag   DECIMAL

-- Decide on the maximum fragmentation to allow
SELECT @maxfrag = 5.0

-- Declare cursor
DECLARE tables CURSOR FOR
   SELECT TABLE_NAME
   FROM INFORMATION_SCHEMA.TABLES
   WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_NAME NOT LIKE 't%'

-- Create a temporary table to hold the DBCC SHOWCONTIG output
CREATE TABLE #fraglist (
   ObjectName CHAR (255),
   ObjectId int null,
   IndexName CHAR (255),
   IndexId int null,
   Lvl int null,
   CountPages INT null,
   CountRows INT null,
   MinRecSize int null,
   MaxRecSize int null,
   AvgRecSize int null,
   ForRecCount INT null,
   Extents int null,
   ExtentSwitches int null,
   AvgFreeBytes INT null,
   AvgPageDensity INT null,
   ScanDensity DECIMAL,
   BestCount int null,
   ActualCount int null,
   LogicalFrag DECIMAL,
   ExtentFrag DECIMAL null)

-- Open the cursor
OPEN tables

-- Loop through all the tables in the database
FETCH NEXT
   FROM tables
   INTO @tablename

WHILE @@FETCH_STATUS = 0
BEGIN
-- Do the showcontig of all indexes of the table
   INSERT INTO #fraglist
   EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
      WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
   FETCH NEXT
      FROM tables
      INTO @tablename
END

-- Close and deallocate the cursor
CLOSE tables
DEALLOCATE tables

-- Declare cursor for list of indexes to be defragged
DECLARE indexes CURSOR FOR
   SELECT ObjectName, ObjectId, IndexId, LogicalFrag
   FROM #fraglist
   WHERE LogicalFrag >= @maxfrag
      AND INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0

-- Open the cursor
OPEN indexes

-- loop through the indexes
FETCH NEXT
   FROM indexes
   INTO @tablename, @objectid, @indexid, @frag

WHILE @@FETCH_STATUS = 0
BEGIN
   PRINT 'Executing DBCC INDEXDEFRAG (0, ' + RTRIM(@tablename) + ',
      ' + RTRIM(@indexid) + ') - fragmentation currently '
       + RTRIM(CONVERT(varchar(15),@frag)) + '%'
   SELECT @execstr = 'DBCC INDEXDEFRAG (0, ' + RTRIM(@objectid) + ',
       ' + RTRIM(@indexid) + ')'
   EXEC (@execstr)

   FETCH NEXT
      FROM indexes
      INTO @tablename, @objectid, @indexid, @frag
END

-- Close and deallocate the cursor
CLOSE indexes
DEALLOCATE indexes

-- Delete the temporary table
DROP TABLE #fraglist
GO

Question by:acmp
17 Comments
 
LVL 15

Assisted Solution

by:jdlambert1
jdlambert1 earned 800 total points
ID: 11954263
If you don't want any free space left over:

DBCC SHRINKDATABASE (database_name)

If you expect to add records:
DBCC SHRINKDATABASE (database_name, 5)

(or some other percentage of free space to leave)
 
LVL 15

Expert Comment

by:jdlambert1
ID: 11954281
Put it after you drop #fraglist
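
For example, the tail of the script would then read like this (a minimal sketch, assuming the epo_pluto database from the USE at the top of the script, and the 5% free space suggested above):

-- Delete the temporary table
DROP TABLE #fraglist

-- Shrink the database, leaving 5% free space for new records
DBCC SHRINKDATABASE (epo_pluto, 5)
GO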
 
LVL 18

Assisted Solution

by:ShogunWade
ShogunWade earned 400 total points
ID: 11954483
or sp_msforeachdb 'DBCC SHRINKDATABASE (?)'

will shrink the lot.
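
(For context: sp_MSforeachdb is an undocumented system procedure that runs the supplied command once per database, substituting each database name for the ? placeholder. As a sketch, a free-space percentage can be passed the same way:

EXEC sp_MSforeachdb 'DBCC SHRINKDATABASE (?, 5)'

Note this would also try to shrink the system databases, which you may not want.)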

 
LVL 10

Accepted Solution

by:AustinSeven
AustinSeven earned 800 total points
ID: 11955158

Depends on the nature of your database, but given that you're defragging it, it must be subject to some level of inserts and deletes. In most cases you'll want some free space, unless it's a database that's absolutely static or reducing in size. You might be better off having a procedure to handle database usage separately. For example, run a procedure every night to check and alert on database usage, e.g. an 80% threshold, or below 50% as well if you're concerned about a DB with reducing usage. Then manually follow up on the alerts to grow or shrink the file as necessary.

I prefer DBCC SHRINKFILE:-

DBCC SHRINKFILE
    ( { file_name | file_id }
        { [ , target_size ]
            | [ , { EMPTYFILE | NOTRUNCATE | TRUNCATEONLY } ]
        }
    )

However, you will probably need to make sure the tran log is checkpointed with a backup first. Folks often use BACKUP LOG WITH TRUNCATE_ONLY to remove the inactive entries in the log. Do a full backup afterwards.

By the way, I won't go off on a tangent too much, but your proc has some holes in it. One of those holes relates to your question, so I'll mention it here. The proc does not update usage. It should, because DBCC INDEXDEFRAG does not do this for you (DBCC UPDATEUSAGE corrects the stored page and row counts). This is another reason why it probably isn't the best idea to deal with DB file usage issues within an index maintenance proc - more complexity.

Oh... check whether Auto Shrink is already set on your database. It probably is. It might be an idea to turn it off and take control of that responsibility yourself, as per my initial comments.

AustinSeven
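
Putting those pieces together, a minimal sketch for this case might look like the following (the logical file name ePO_pluto_data is an assumption based on the physical .mdf name, so check it with sp_helpfile first; BACKUP LOG ... WITH TRUNCATE_ONLY was valid on SQL Server 2000/MSDE but was removed in later versions):

USE epo_pluto
GO
-- Correct the page/row counts that DBCC INDEXDEFRAG leaves stale
DBCC UPDATEUSAGE (epo_pluto)
-- Remove the inactive log entries; take a full backup afterwards
BACKUP LOG epo_pluto WITH TRUNCATE_ONLY
-- Shrink the data file to a target size in MB (500 here is only an example)
DBCC SHRINKFILE (ePO_pluto_data, 500)
GO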
 
LVL 6

Author Comment

by:acmp
ID: 11961107
The DB is part of McAfee's ePO antivirus management tools. It logs just about everything that my 400+ PCs do.

Over the last 2+ years the (MSSQL) DB has grown to around 2GB and now I'm told it will give me problems. Hence wanting to shrink the DB if possible.

I truly have no idea what I'm doing. The script I am using to defrag was obtained from here, and I naively thought it would shrink/pack/compress the DB as it defragged. It obviously doesn't.

I tend to run the script on an ad hoc basis when the reporting gets slow. I do need to shrink it now though, as it is misbehaving.

I am quite happy to have another script that gets run periodically if that is the best way.

At present the server is backed up nightly to tape. It wouldn't be the end of the world if we had to restore the DB from tape, even a week old; all we would lose is virus events, nothing business critical.

I run the script with a 'osql -U sa -P [password] -i c:\defrag~1\dbcompact.txt' command. Not sure if this helps or not.

acmp<><
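
(For what it's worth, the same mechanism would work for a periodic shrink: save the shrink command to its own text file and run it with osql the same way. A sketch, where the file name is made up for illustration:

osql -U sa -P [password] -i c:\defrag~1\dbshrink.txt
)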
 
LVL 6

Author Comment

by:acmp
ID: 11961124
jdlambert1  (and others),

Trying not to be too thick...

If I add 'DBCC SHRINKDATABASE (database_name, 5)' after I drop #fraglist, then what do I put in the database_name bit? Is it the 'epo_pluto' that I 'USE' at the top, or do I need to do something with the tables?

acmp<><

(I'm guessing it's the 'epo_pluto' but I'd rather not screw it up)
 
LVL 10

Expert Comment

by:AustinSeven
ID: 11961406
From what you've said, it's a growing database. Therefore, nothing will be gained by trying to shrink it UNLESS you implement an archiving process to delete records older than a certain age. 2GB is still relatively small; I don't see why it should give you problems. When it gets to 100GB, you might start to see some issues arising. However, there's no point in keeping that kind of data forever, is there? Remember, you can archive out old records to another table to keep them on-line for easy reporting, if that's a requirement. Archiving will keep your main table small and fast.

Your maintenance proc will do the job even though it might not be perfect. Stick with it for now.

I repeat... forget about shrinking a growing database. Think about archiving.

AustinSeven
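
As an illustration of the archiving idea only (the table and column names below are hypothetical, since the ePO schema isn't shown in this thread):

-- Hypothetical names: substitute the real ePO events table and its date column
DELETE FROM VirusEvents
WHERE EventDate < DATEADD(month, -3, GETDATE())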

 
LVL 6

Author Comment

by:acmp
ID: 11961554
We're using the 'free' MSSQL, which my AV reseller says has a 2GB limit; this is where my problem is coming from.

I will happily remove dead items from the DB, if it keeps it working.

I am trying to get a proper server to run this with a full MSSQL server licence so that I can have a 'real' DB attached to it.  The biggest filler of the DB has to be virus alerts, we get hundreds of them a day.

To be honest I'm not really interested in what happened more than a few weeks ago, but I think I should keep a few months of data.

acmp<><
 
LVL 6

Author Comment

by:acmp
ID: 11961633
I deleted a few months of data from the DB using the ePO tool and tried the defrag/compact again. The results are:

Before:
01/09/04  16:51          2,011,627,520 ePO_pluto_data.mdf
02/09/04  09:42              5,308,416 ePO_pluto_log.LDF
03/05/04  19:32             11,468,800 ePO_pluto_log.mdf
01/09/04  16:51             12,648,448 master.mdf
01/09/04  16:51                524,288 mastlog.ldf
01/09/04  16:51                655,360 model.mdf
01/09/04  16:51                524,288 modellog.ldf
08/04/04  17:04              3,145,728 msdbdata.mdf
08/04/04  17:04                524,288 msdblog.ldf
01/09/04  16:54              2,097,152 tempdb.mdf
01/09/04  16:59                786,432 templog.ldf
              13 File(s)  2,049,310,720 bytes


After:
01/09/04  16:51          2,011,627,520 ePO_pluto_data.mdf
02/09/04  10:18                786,432 ePO_pluto_log.LDF
03/05/04  19:32             11,468,800 ePO_pluto_log.mdf
01/09/04  16:51             12,648,448 master.mdf
01/09/04  16:51                524,288 mastlog.ldf
01/09/04  16:51                655,360 model.mdf
01/09/04  16:51                524,288 modellog.ldf
08/04/04  17:04              3,145,728 msdbdata.mdf
08/04/04  17:04                524,288 msdblog.ldf
01/09/04  16:54              2,097,152 tempdb.mdf
01/09/04  16:59                786,432 templog.ldf
              13 File(s)  2,044,788,736 bytes

The only difference I can see is in 'ePO_pluto_log.LDF', which I guess is the 'what happened' log. The main DB file 'ePO_pluto_data.mdf' just doesn't change.

Any suggestions?

acmp<><
 
LVL 18

Expert Comment

by:ShogunWade
ID: 11961647
Personally I would suggest that you get on to McAfee and tell them the predicament. If this is a commercial product they must be aware of the 2GB limit of MSDE.
 
LVL 10

Expert Comment

by:AustinSeven
ID: 11961750
Your issue is that, having removed some rows, the data file is the same size, and that is normal. If it was a 'normal' SQL Server installation, you would shrink the file/DB in the way described in the previous posts. I'm not sure what you can do with your database, but as you can run index maintenance scripts, why can't you run a DBCC SHRINKFILE/SHRINKDATABASE command?

Maybe ShogunWade is right. Give McAfee a call.

AustinSeven
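
A minimal sketch of that (the logical file name ePO_pluto_data is assumed from the physical .mdf name; note that TRUNCATEONLY only releases unused space at the end of the file, without moving pages first):

USE epo_pluto
GO
DBCC SHRINKFILE (ePO_pluto_data, TRUNCATEONLY)
GO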
 
LVL 6

Author Comment

by:acmp
ID: 11961883
The option I have is 'remove bits older than xx days'.

I think I'll just have to chase McAfee as suggested.

This really sucks. I don't think we have a big setup or a busy system, just 400 PCs in an engineering company. This should not be difficult.

I'll post back with the 'good news' from McAfee.

acmp<><
 
LVL 18

Expert Comment

by:ShogunWade
ID: 11961888
good luck.
 
LVL 6

Author Comment

by:acmp
ID: 12025586
Hi,

After much messing around and talking to a SQL consultant I have given up!

I believe that the cheap MSDE version just crapped out on me. I have had to restore the DB back to April to get it to work.

I will continue to use the 'DBCC SHRINKDATABASE' as it appeared to do some good, and I'll keep monitoring it and pressuring the boss for a proper setup.

Thanks for the help.

I'm splitting the points as I think you've all helped me to understand better that MSDE is not worth a penny and my vendor should have pointed this out.

acmp<><
 
LVL 18

Expert Comment

by:ShogunWade
ID: 12025639
Of course, if you wanted to go to the expense, you could upgrade MSDE to SQL Server Standard (a pretty straightforward upgrade); then the 2GB limit would be lifted. The trouble of course is that you would need to ensure that either your machine has Windows Server on it or you moved SQL to a server.
 
LVL 6

Author Comment

by:acmp
ID: 12041994
I've spoken to my boss today and he's 'moving forward' my new AV server with a full SQL license so I'm just trying to get by until it arrives, though I have a feeling that I'll see Father Christmas first.

acmp<><
 
LVL 18

Expert Comment

by:ShogunWade
ID: 12042436
:)
