SQL Server log file too large to manage - what's the fastest way to compact it?

I have an MS SQL Server 2000 log file that is 99,146,176 KB in size and takes up all of the C:\ drive.  Obviously I tried shrinking it, but I keep getting an error telling me to back up the log.  I seriously do not need a log of every transaction that has occurred.   How can I just delete everything in there to reduce the file size?

I just ran a BACKUP LOG ... TO DISK, not realizing how huge the file was. How can I stop what is taking place, without disruption, so I can follow the comments below?

I will only ever need a log of what happened in the last week. I'm sorry, I don't know what size that would work out to, but it is definitely not going to be that huge. How can I prevent this from happening in the future?


i2mentalConnect With a Mentor Commented:
Right-click on the database and select Properties. Go into the "Options" section and set the Recovery Model to Simple. The simple recovery model only allows you to restore to the last time you made a FULL backup, instead of allowing you to restore to a point in time up to your last log backup.
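The same change can be made in T-SQL with ALTER DATABASE (a sketch; MyDB is a placeholder for your database name):

ALTER DATABASE MyDB SET RECOVERY SIMPLE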
mastooConnect With a Mentor Commented:
BACKUP LOG dbname WITH TRUNCATE_ONLY

After doing so, you probably want to run a full database backup, and then either put the database in simple recovery mode or start doing regular log backups, to avoid a repeat of the problem.
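For example (MyDB and the paths are placeholders; adjust for your server):

BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak'
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn'

Scheduling the second command every 30 to 60 minutes keeps the log from growing unchecked if you stay in FULL recovery.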
MrNeticConnect With a Mentor Commented:

It seems that your database is in the full recovery model.

If the database you are talking about is a production DB, you should consider implementing daily backups of your transaction log; otherwise, put the database in the SIMPLE recovery model.

Hope it helps....

What recovery mode is this database in? If it's FULL, your log file has grown because you aren't making regular log backups. Every insert/update/delete is being logged into this file, and it will grow over time. To keep it from growing too large, you need a maintenance plan that backs up the log at a regular interval, perhaps every 30 minutes or every hour depending on usage.

The proper way to free up that space now, if you are using FULL recovery, is to let the log backup you're doing finish. This will create a .trn file that is there for when you need to do a restore. It is not required anymore for the database to run, so you can delete it once the backup is complete. Note that this will mean you can only restore to the last time you made a full backup. So I would advise you do a full backup once this log backup completes, so that you don't have potentially unrecoverable data.
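After the log backup completes, the log file itself will still be large on disk; to return the space to Windows you shrink the file (a sketch; MyDB and MyDB_Log are placeholders, and the logical log file name can be found with sp_helpfile):

USE MyDB
EXEC sp_helpfile                  -- lists the logical file names
DBCC SHRINKFILE (MyDB_Log, 100)   -- target size in MB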
stephenlecomptejrAuthor Commented:
I tried what you stated and now my log is back down to 1024 KB.
Thank you,

My database file is 3,199,168 KB size after being shrunk.
I do have it on schedule to back up every Sunday - what else am I doing wrong?

I'm not sure how to switch to simple recovery, or why doing log backups would prevent a repeat of the same problem?
stephenlecomptejrAuthor Commented:
oops got my answer from the comments above.

Thanks guys - just freakin' out over here in a panic.
I notice that every time I run a DTS package (I've just run one so far) my log file grew back to 517,184 KB.

I really don't need that much log information every time a DTS package is run.
Any clues on helping me fix this?  My log will balloon too large, and even doing a log backup every time would be useless for this.
Read my previous post ( 20412092 ).
stephenlecomptejrAuthor Commented:
Should I also turn on Auto Shrink?
Depending on what the DTS package is doing, this file size may be necessary/optimal. When the database is set to simple, all transactions are still logged until a checkpoint is reached. A checkpoint is roughly the point at which the data is finally written to disk at the end of an insert/update/delete statement. If that statement is particularly large, the log file will contain that information until it can be committed. Once it is written, the log file is effectively cleaned out, but the physical file stays the same size.

There is a cost to growing the log file to the size that is needed, so SQL Server keeps it at that size so that the next time the process runs, the file doesn't have to grow again. I would recommend you leave it at its current size if you're set to simple now.

You can set Auto Shrink if you like, but like I said, if this same DTS package is going to run each time, the log is just going to keep growing back out, and that's overhead you don't necessarily need.
Be sure you understand the difference between the full and simple recovery models.  In simple mode, you can only recover to the last full backup.  In full mode, you need to run full backups plus periodic log backups, and you can then recover to the last log backup.
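In full mode, a restore chains the full backup and the subsequent log backups, which is what makes point-in-time recovery possible (a sketch with placeholder names and a hypothetical STOPAT time):

RESTORE DATABASE MyDB FROM DISK = 'D:\Backups\MyDB_full.bak' WITH NORECOVERY
RESTORE LOG MyDB FROM DISK = 'D:\Backups\MyDB_log.trn' WITH RECOVERY, STOPAT = '2007-11-01 12:00'

In simple mode only the first command (run WITH RECOVERY) is available, so anything after that full backup is lost.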
stephenlecomptejrAuthor Commented:
The combination of everyone's comments helped me solve my problems.  I hope no offense occurs from splitting the points.