stephenlecomptejr
asked on
SQL Server log file too large to manage - what's the fastest way to compact it?
I have an MS SQL Server 2000 log file that is 99,146,176 KB and takes up all of the C:\ drive. I've tried shrinking it, but I keep getting an error telling me to back up the log first. I seriously do not need a record of every transaction that has occurred. How can I just delete everything in there to reduce the file size?
I've just started a BACKUP LOG to DISK, not realizing how huge the file is. How can I stop it safely so I can follow the comments below?
I only need a log of what happened in the last week. I'm sorry, I don't know what size that works out to, but it is definitely not going to be this huge. How can I prevent this from happening in the future?
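For reference, a sketch of the SQL Server 2000 commands that address this situation. The database name MyDB and the logical log file name MyDB_Log are placeholders; check your actual logical file name with sp_helpfile first.

```sql
-- SQL Server 2000 syntax: discard the log contents without backing them up.
-- WARNING: this breaks the log backup chain; take a full backup afterwards.
BACKUP LOG MyDB WITH TRUNCATE_ONLY
GO

-- Find the logical name of the log file (shown in the "name" column).
USE MyDB
GO
EXEC sp_helpfile
GO

-- Shrink the physical log file down to roughly 100 MB.
DBCC SHRINKFILE (MyDB_Log, 100)
GO
```

TRUNCATE_ONLY throws away the unbacked-up log records, which is why a full backup immediately afterwards is important: without it, you cannot restore past the point where the chain was broken.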
SOLUTION
ASKER
I tried what you stated and now my log is back down to 1024 KB.
Thank you,
My database file is 3,199,168 KB after being shrunk.
I have it scheduled to back up every Sunday - what else am I doing wrong?
I'm not sure how to switch to simple recovery, or why doing log backups would prevent a repeat of the same problem.
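For reference, switching to simple recovery in SQL Server 2000 can be done through Enterprise Manager (database Properties > Options > Model: Simple) or with T-SQL. MyDB is a placeholder name:

```sql
-- SQL Server 2000 supports ALTER DATABASE ... SET RECOVERY.
ALTER DATABASE MyDB SET RECOVERY SIMPLE
GO

-- Equivalent older syntax via sp_dboption:
EXEC sp_dboption 'MyDB', 'trunc. log on chkpt.', 'true'
GO
```

In simple recovery the log is truncated automatically at each checkpoint, so it never accumulates between backups; the trade-off is that you can only restore to your last full (or differential) backup.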
ASKER
Oops - got my answer from the comments above.
Thanks guys - just freaking out over here in a panic.
I notice that every time I run a DTS package (I've just run one so far), my log file grows back to 517,184 KB.
I really don't need that much log information every time a DTS package runs.
Any clues on how to fix this? My log will balloon too large, and even a simple log backup every time would be useless.
ASKER CERTIFIED SOLUTION
Read my previous post ( 20412092 ).
ASKER
i2mental,
Shouldn't I also enable Auto shrink?
Depending on what the DTS package is doing, this file size may be necessary or even optimal. When the database is set to simple recovery, all transactions are still logged until a checkpoint is reached - the point at which the data is finally written to disk at the end of an insert/update/delete statement. If a statement is particularly large, the log file holds that information until it can be committed. Once it is written, the log is effectively cleaned out, but the physical file stays the same size. There is a cost to growing the log file to the size that is needed, so SQL Server keeps it at that size so that the next time the process runs, the file doesn't have to grow again. Since you're set to simple recovery now, I would recommend you leave the log at its current size.
You can set Auto shrink if you like, but like I said, if this same DTS package is going to run each time, the log is just going to grow back out again, and that's overhead you don't necessarily need.
Be sure you understand the difference between full and simple recovery modes. In simple mode, you can only recover to the last full backup. In full mode, you need to run full backups and periodic log backups and then you can recover to the last log backup.
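If you do go the full-recovery route, a sketch of the kind of backup routine described above (the database name and file paths are placeholders):

```sql
-- Weekly full backup: the baseline every restore starts from.
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_Full.bak'
GO

-- Frequent log backups (e.g. hourly): each one frees the inactive
-- portion of the log for reuse, which keeps the log file from growing
-- unbounded, and lets you restore to the time of the last log backup.
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_Log.trn'
GO
```

This is why regular log backups prevent a repeat of the original problem: in full recovery, log records are retained until a log backup runs, so without one the log simply accumulates forever.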
ASKER
The combination of everyone's comments helped me solve my problems. I hope no one is offended by my splitting the points.
The proper way to free up that space now, if you are using FULL recovery, is to let the log backup you're running finish. This creates a .trn file to use when you need to do a restore. The log backup is not required for the database to run, so you can delete it once the backup is complete. Note that this means you can only restore to the last time you made a full backup, so I would advise doing a full backup once this log backup completes so that you don't have potentially unrecoverable data.