• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 2570

Shrink Massive Database? How to estimate how long it will take?

I have a pretty massive DB.  It's currently at 425,209 MB and growing at about 2 GB/day (36 million recs/day!).

There is more space on the server, but I've got to share it with a couple of other depts.  Once upon a time I had to go in and fix a big db with a few tables that were about a billion recs each.  I decided I'd just do a quick shrink and reindex those tables and then I'd be able to work more efficiently.  Oops.  So, 3 days later when it finally finished.... I managed to get the job done and was never hired at that firm again.  :)

This one makes that DB look like a kid's toy.  If I shut this thing down for a full day, well.... it could be bad.  Anyway, before I start shrinking this monster, is there a way to figure out how long it might take?
Asked by: Danielcmorris

1 Solution
 
imran_fast commented:
2 GB/day: in the file options, if the file growth for your database is set to grow by percentage, change it to a fixed 100 MB.

First, check how much free space is inside your file: right-click the database --> All Tasks --> Shrink Database.

If all the space inside the file is used, you cannot shrink it.

You can shrink the log file for your database, but not the data file if it is in use.

If you are taking full backups, then change the recovery model of your database to simple.

Then run this command in Query Analyzer:
dump transaction yourdatabase with no_log
Then shrink your log file. It won't take long; maybe minutes.
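Rolled together, the steps above look roughly like this in Query Analyzer. This is a sketch only: YourDB and YourDB_Log are placeholder names you would swap for your own, and DUMP TRANSACTION is SQL 2000-era syntax that was removed in later versions.

```sql
-- See how big the database is and how much of it is unallocated
USE YourDB
GO
EXEC sp_spaceused               -- database size vs. unallocated space
DBCC SQLPERF(LOGSPACE)          -- log size and percent used, per database
GO

-- Switch to the simple recovery model (only if full backups make that acceptable)
ALTER DATABASE YourDB SET RECOVERY SIMPLE
GO

-- SQL 2000 only: truncate the log without backing it up (removed in later versions)
-- DUMP TRANSACTION YourDB WITH NO_LOG

-- Then shrink just the log file, leaving the data file alone
-- (YourDB_Log is the logical file name; 500 is a target size in MB)
DBCC SHRINKFILE (YourDB_Log, 500)
GO
```

Shrinking only the log this way is usually fast because the log's free space is internal bookkeeping, not data pages that have to be moved.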
 
TAB8 commented:
Why shrink a DB that is growing at 2 GB/day?  You will shrink it and it will just have to autogrow again, and that will slow down your server .....

I would look at archiving old data off the server into a separate db if possible .....

As for the reindex ......  look at SQL 2005 Enterprise edition; it allows online reindexing

or use SQL 2000 and reindex table by table where needed, CAREFULLY
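A sketch of both variants; the table and index names here are made up for illustration:

```sql
-- SQL 2005 Enterprise: rebuild while the table stays readable and writable
ALTER INDEX IX_SalesHistory_Date ON dbo.SalesHistory
REBUILD WITH (ONLINE = ON);

-- SQL 2000: offline rebuild, one table at a time during a quiet window
-- ('' = all indexes on the table, 90 = fill factor)
DBCC DBREINDEX ('dbo.SalesHistory', '', 90)
```

The online rebuild takes more tempdb and log space in exchange for availability, which is the trade-off that matters on a system that can't be taken down for a day.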

 
Danielcmorris (Author) commented:
I'm actually not concerned about speed.  It's a data-warehouse - kinda.

Basically, I'm using this system to produce OLAP reports on inventory trends for a pretty big company.  I've been getting their sales/inventory data (35 million recs) each day and storing it.  Right now I've got about 6 months and I'm starting to "hit the wall" as far as space on the server.  If I had more funding, this would be a non-issue.

I think archiving the data is going to be my only solution.  I spent all night writing a script that would export, zip and archive tables so they can be dropped.  It's still a pain in the butt, as I've also got to program the system so that, if it doesn't have a table it needs, it pulls the archive, unzips it, and imports the data, then drops the table once it's finished with it.

yuck.  Anything I can do to make the damn system smaller would be fantastic.

To be honest, I've never actually worked on a system this huge.  I mean, it's nearly half a terabyte.  On my normal systems, if things got a little big, I'd spend a couple hundred bucks and get a fat hard drive.  Expanding this RAID array isn't quite as easy, especially since the company's IT is outsourced and it costs about 1000x as much as telling one of my guys to go to CompUSA and get a new set of HDs for the array.

Any ideas other than the indexing (--already been there --)?
 
imran_fast commented:
If you can archive the data, I think you can DTS the unused tables to Access and then change the extension of the Access file to .txt; this will make a huge difference.

 
Danielcmorris (Author) commented:
export to access?
change the extension to .txt?

I don't get it....  

will the access DB, if zipped up, be smaller than a text file that is zipped up?
Will the Access db, holding one massive table of 35,000,000 records, make an import faster than a tab-delimited text file?

What does changing the extension to .txt do?

This is a solution I never would have thought of....
 
TAB8 commented:
Do you have another server you can use?  E.g., take one of your bigger tables and divide it in two by date, before/after ...  move all the "before" data to another db on another server .. then create it as a linked server ...  then add an if (date > value) then select from the linked server else select from the local server ...  this will decrease the size of your db but still allow you to query historical data easily ....
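A sketch of that split, where ARCHIVESRV, archive-box, ArchiveDB, the SalesHistory table, and the cutoff date are all hypothetical names chosen for illustration:

```sql
-- Register the server that will hold the historical half of the data
EXEC sp_addlinkedserver
     @server     = 'ARCHIVESRV',
     @srvproduct = '',
     @provider   = 'SQLOLEDB',
     @datasrc    = 'archive-box';

-- One query spanning both halves; wrap it in a view to hide the split
SELECT *
FROM   dbo.SalesHistory                        -- recent rows, local server
WHERE  SaleDate >= '20060101'
UNION ALL
SELECT *
FROM   ARCHIVESRV.ArchiveDB.dbo.SalesHistory   -- older rows, linked server
WHERE  SaleDate <  '20060101';
```

Because each branch carries its own date predicate, queries that only want recent data never touch the linked server at all.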
 
Danielcmorris (Author) commented:
Sorry for the long delay imran_fast.  Initially the solution didn't work, but it turned me on to a great idea.

Since the backups were these massive flat files, the text or Access format didn't seem to make much of a difference.  However, using MS Access, I was able to save an enormous amount of space by exporting the archives in a relational format.  I really couldn't do this before, using text files, because I simply couldn't trust my employees to keep all the files together.  Even in zip files we had problems in the past.

So, while it is a bit of a hit to the processor to break it up so much, the size issue is somewhat alleviated.

Thanks for the push in the right direction.

-Dan
Question has a verified solution.
