Log Files Fill Up - Can They Be Bypassed?

One of my users has a special application that can create very large files (300 GB) when performing stochastic runs. When he attempts to delete this very large file within his database, the log files fill up and the deletion stops. This forces us to truncate the log files and resume the deletion. We cannot delete in chunks because this is simply one large block of data within the database.

In my old mainframe days, we used to mount "scratch tapes" or assign "scratch drives" to take data that we really had no intention of saving.

In this SQL Server scenario, I do not care if we lose the data records that we are deleting, but SQL Server wants to "save the day" in case the deletions need to be reversed.

Is there a way to bypass the log files and simply delete large amounts of data in a database? Is there a way to set up a fictitious drive that will take the log file entries and have an unlimited size because it is not really a data file but just a named target?

Thanks!
Lenny Gray asked:
 
Ryan Lanham commented:
1) Convert the Recovery Model to Simple Recovery

If you are truncating the transaction logs, you are breaking the T-Log LSN (Log Sequence Number) chain. It follows that if disaster strikes, you would not be able to restore your T-Logs, and you would have no option for point-in-time recovery. If you are fine with this situation and there is nothing to worry about, I suggest that you change your recovery model to the Simple recovery model. This way, you will not have extraordinary growth of your log file.
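
For illustration, a minimal sketch of switching the recovery model (the database name StochasticDB is hypothetical; substitute your own):

    -- Hypothetical database name.
    ALTER DATABASE StochasticDB SET RECOVERY SIMPLE;

    -- Confirm the change:
    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = 'StochasticDB';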

2) Start Taking Transaction Log Backups

If your business does not tolerate loss of data or requires point-in-time recovery, you cannot afford anything less than the Full recovery model. In the Full recovery model, your transaction log will grow until you take a backup of it. You need to take T-Log backups at a regular interval.
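
A minimal sketch of such a backup, with a hypothetical database name and path (in practice you would schedule this, e.g. via SQL Server Agent, and use a unique file name per backup):

    -- Hypothetical name and path; run at a regular interval.
    BACKUP LOG StochasticDB
    TO DISK = N'D:\Backups\StochasticDB_log.trn';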
 
Guy Hengel [angelIII / a3], Billing Engineer, commented:
the fastest will be this:
* create a temporary table with ONLY the data you want to keep
* truncate the original table
* insert the data you wanted to keep back from the temporary table

this has some constraints: if you have foreign keys or the like on the table(s), you might first need to drop them (but note them down first) and recreate them afterwards (see the sketch below)...
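
A minimal sketch of that approach, with hypothetical table and column names (dbo.RunData as the large table, KeepFlag marking the rows to retain):

    -- Copy out only the rows to keep.
    SELECT *
    INTO #keep
    FROM dbo.RunData
    WHERE KeepFlag = 1;

    -- Minimally logged: only the page deallocations are recorded.
    TRUNCATE TABLE dbo.RunData;

    -- Put the kept rows back, then clean up.
    -- (If the table has an identity column, SET IDENTITY_INSERT would also be needed.)
    INSERT INTO dbo.RunData SELECT * FROM #keep;
    DROP TABLE #keep;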
 
Lenny Gray (Author) commented:
I am already in the Simple recovery model. The log files get maxed out even in that mode.

The entire table gets deleted. Nothing is saved, but the table is 300 GB in size - far larger than the log file's allocation.
 
Lenny Gray (Author) commented:
I am probably looking for the impossible. Thanks for the try!
0
 
Guy Hengel [angelIII / a3], Billing Engineer, commented:
Did you try TRUNCATE TABLE? That statement logs only the page deallocations, not the deleted data.
 
Lenny Gray (Author) commented:
After the log file fills up, the program abends. The log is then used to roll back the records that were deleted. I tried truncating the log file first, but the same error persists.
 
Guy Hengel [angelIII / a3], Billing Engineer, commented:
Sorry, but I am not referring to truncating the log; I mean TRUNCATE TABLE...
 
Lenny Gray (Author) commented:
The table is not flat; other tables are keyed to it relationally. I have to delete in a way that does not create orphan records. Thanks for the idea; it would work with a flat table.
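
If the related rows are also disposable, one possible sketch (all names hypothetical; script your real constraints, e.g. from sys.foreign_keys, before dropping anything) is to drop the foreign keys, truncate child and parent, and recreate the constraints:

    -- TRUNCATE TABLE is not allowed on a table referenced by a foreign key,
    -- so the constraint must be dropped first.
    ALTER TABLE dbo.RunDetail DROP CONSTRAINT FK_RunDetail_RunData;

    TRUNCATE TABLE dbo.RunDetail;  -- child first
    TRUNCATE TABLE dbo.RunData;    -- parent, now unreferenced

    ALTER TABLE dbo.RunDetail
        ADD CONSTRAINT FK_RunDetail_RunData
        FOREIGN KEY (RunID) REFERENCES dbo.RunData (RunID);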
Question has a verified solution.
