Shrink Massive Database?  How to estimate how long it will take?

Posted on 2006-05-30
Last Modified: 2008-01-09
I have a pretty massive DB.  It's currently at 425,209 MB and growing at about 2G/day (36 million recs/day!).  

There is more space on the server, but I've got to share it with a couple of other depts.  Once upon a time I had to go in and fix a big db with a few tables that were about a billion recs each.  I decided I'd just do a quick shrink and reindex on those tables and then I'd be able to work more efficiently.  Ooops.  So, 3 days later when it finally finished.... I got the job done and was never hired at that firm again.  :)

This one makes that DB look like a kid's toy.  If I shut this thing down for a full day, well.... it could be bad.  Anyway, before I start shrinking this monster, is there a way to figure out how long it might take?
Question by: Danielcmorris
    LVL 28

    Expert Comment

    Regarding the 2G/day growth: in the file options, if the file growth for your database is set by percentage, change it to a fixed 100 MB increment.
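    Something like this (just a sketch - "YourDatabase" and the logical file name "YourDatabase_Data" are placeholders for your actual names):

        -- Switch autogrow from percentage-based to a fixed 100 MB increment
        ALTER DATABASE YourDatabase
        MODIFY FILE (NAME = YourDatabase_Data, FILEGROWTH = 100MB);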

    First you have to check how much free space is in your file: right-click the database --> All Tasks --> Shrink Database.

    If all the space inside the file is used, you cannot shrink it.
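    You can also check from Query Analyzer instead of the GUI (a sketch - substitute your db name for "YourDatabase"):

        -- Reports database size and unallocated (i.e. shrinkable) space
        USE YourDatabase;
        EXEC sp_spaceused;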

    You can shrink the log file for your database, but not the data file if all of it is in use.

    If you are taking full backups, then change the recovery model of your database to Simple.

    Then run this command in Query Analyzer:
    dump transaction yourdatabase with no_log
    Then shrink your log file; it won't take long, maybe minutes.
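    Putting those steps together, a minimal sketch (SQL 2000 syntax; "YourDatabase" and the logical log file name "YourDatabase_Log" are placeholders):

        -- Only if your full backups already cover you
        ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;

        USE YourDatabase;
        -- Empty the log (old syntax, still works in 2000)...
        DUMP TRANSACTION YourDatabase WITH NO_LOG;
        -- ...then release the freed space back to the OS, target ~100 MB
        DBCC SHRINKFILE (YourDatabase_Log, 100);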
    LVL 10

    Expert Comment

    Why shrink a DB that is growing at 2 GB/day?  You will shrink it and it will just have to autogrow again, and that will slow down your server .....

    I would look at archiving old data off the server into a separate db if possible .....

    As for reindexing ......  look at SQL 2005 Enterprise Edition - it allows online reindexing,

    or use SQL 2000 and reindex table by table where needed, CAREFULLY
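    For example (sketches only - "dbo.BigTable" is a placeholder table name):

        -- SQL 2005 Enterprise: rebuild indexes while the table stays online
        ALTER INDEX ALL ON dbo.BigTable REBUILD WITH (ONLINE = ON);

        -- SQL 2000: rebuild one table at a time during a quiet window
        DBCC DBREINDEX ('dbo.BigTable');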

    LVL 4

    Author Comment

    I'm actually not concerned about speed.  It's a data warehouse - kinda.

    Basically, I'm using this system to produce OLAP reports on inventory trends for a pretty big company.  I've been getting their sales/inventory data (35 million recs) each day and storing it.  Right now I've got about 6 months and I'm starting to "hit the wall" as far as space on the server.  If I had more funding, this would be a non-issue.  

    I think archiving the data is going to be my only solution.  I spent all night writing a script that would export, zip and archive tables so they can be dropped.  It's still a pain in the butt, as I've also got to program the system so that, if it doesn't have a table it needs, it pulls the archive, unzips it, and imports the data - followed by dropping the table once it is finished with the data.
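    The re-import step looks roughly like this (a sketch, not my actual script - the table name and path are made up):

        -- After unzipping the archive, bulk-load the flat file back in
        BULK INSERT dbo.Sales_20060529
        FROM 'D:\archive\Sales_20060529.txt'
        WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK);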

    Yuck.  Anything I can do to make the damn system smaller would be fantastic.

    To be honest, I've never actually worked on a system this huge.  I mean, it's almost a terabyte.  On my normal systems, if things got a little big, I'd spend a couple hundred bucks and get a fat hard drive.  Expanding this RAID array isn't quite as easy, especially since the company's IT is outsourced and it costs about 1000x as much as telling one of my guys to go to CompUSA and get a new set of HDs for the array.

    Any ideas other than the indexing (--already been there --)?
    LVL 28

    Accepted Solution

    If you can archive the data, I think you can DTS the unused tables to Access and then change the extension of the Access file to .txt; this will make a huge difference.

    LVL 4

    Author Comment

    Export to Access?
    Change the extension to .txt?

    I don't get it....  

    Will the Access DB, if zipped up, be smaller than a text file that is zipped up?
    Will the Access db, holding one massive table of 35,000,000 records, make an import faster than a tab-delimited text file?

    What does changing the extension to .txt do?

    This is a solution I never would have thought of....
    LVL 10

    Expert Comment

    Do you have another server you can use?  E.g., take one of your bigger tables and divide it in two by date, before/after a cutoff ...   move all the "before" data to another db on another server .. then create it as a linked server ...  then add an if (date > value) then select from linked server, else select from local server ...   this will decrease the size of your db but still allow you to query historical data easily ....  
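    Roughly like this (a sketch only - "ARCHIVESRV", "ArchiveDB" and the table/column names are placeholders):

        -- One-time setup: register the server holding the historical half
        EXEC sp_addlinkedserver @server = 'ARCHIVESRV';

        -- Stitch current and archived rows back together around the cutoff
        SELECT * FROM dbo.Sales WHERE SaleDate >= '20060101'
        UNION ALL
        SELECT * FROM ARCHIVESRV.ArchiveDB.dbo.Sales WHERE SaleDate < '20060101';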
    LVL 4

    Author Comment

    Sorry for the long delay, imran_fast.  Initially the solution didn't work, but it turned me on to a great idea.

    Since the backups were these massive flat files, the text or Access format didn't seem to make much of a difference.  However, using MS Access, I was able to save an enormous amount of space by exporting the archives in a relational format.  I really couldn't do this before, using text files, because I simply couldn't trust my employees to keep all the files together.  Even with zip files we had problems in the past.

    So, while it is a bit of a hit to the processor to break it up so much, the size issue is somewhat alleviated.

    Thanks for the push in the right direction.

