Transaction log full: ways to work around it instead of getting more disk space

I ran the attached script & it created transaction logs of 80GB+ & then stopped with the messages
below due to a shortage of disk space for the transaction logs:

The transaction log for database 'SSM' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
Msg 9002, Level 17, State 4, Server SPPAAH, Line 1
The transaction log for database 'tempdb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

I have another local drive with 110GB, plus the current drive with 80GB that houses the current
transaction log.  Can anyone suggest how I can go about working around this?

Is it possible to amend the attached SQL script so that it doesn't generate such a huge transaction log,
or have it write to both drives to split up the transaction logs, or ... ?
SanitizedUpdateSchemaSqlScript.txt
sunhux asked:
sunhux (Author) commented:
Forgot to say that obtaining more storage is not an option currently (at least for the next few days).
Vitor Montalvão (MSSQL Senior Engineer) commented:
You're working with plenty of temporary tables and performing a lot of changes and write operations, so the transaction log will grow fast.
Try to perform these actions with the database in the simple recovery model.
Are the tempdb database files on a dedicated drive?
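For reference, a quick way to see where the tempdb files currently live (standard catalog views only; nothing specific to your environment is assumed):

SELECT name, physical_name, type_desc, size / 128 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');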
sunhux (Author) commented:
Tempdb is on the same drive: does this affect speed/performance,
or does it just free up more space if it sits on another drive?

> plenty of temporary tables and performing a lot of changes and write operations
Feel free to enhance this script.  If we can put the temp tables on SSD, that may
help?  But we'll only have SSD around May next year.
sunhux (Author) commented:
> perform these actions with the database in simple recovery model.
You mean run the script while the database is in the simple recovery model?

I'm no DBA; can you elaborate on how to go about setting this MS SQL DB
to that recovery model?
Qlemo ("Batchelor", Developer and EE Topic Advisor) commented:
I don't see your script using explicit transactions, and that is reasonable, as all steps should be part of the same transaction.
However, that will always require a lot of transaction log space. Using a loop that does each column change individually will increase execution time but lower transaction log usage, as you can commit after having finished each column.
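To illustrate the per-column approach (the table and column names below are made up, not taken from the attached script), each ALTER runs in its own batch so log space can be reused between steps:

-- Hypothetical example: alter columns one at a time instead of in one large batch.
ALTER TABLE dbo.tb_SomeLargeTable ALTER COLUMN EventDescription NVARCHAR(4000) NULL;
GO
CHECKPOINT;  -- in the simple recovery model, lets log space from the previous step be reused
GO
ALTER TABLE dbo.tb_SomeLargeTable ALTER COLUMN SourceHost NVARCHAR(512) NULL;
GO
CHECKPOINT;
GO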
Anthony Perkins commented:
Just to clarify: the author is using the bulk_logged recovery model.  While that may not be exactly the same as Simple, it is certainly not the same as Full, as I suspect many of us were assuming.

It would be helpful if the author would post the output from the script they ran before it ran out of space.  

But to answer the specific questions asked:
I have another local drive with 110GB, plus the current drive with 80GB that houses the current
transaction log.  Can anyone suggest how I can go about working around this?

You can do that by adding an additional transaction log file.  This can be accomplished using SSMS or a SQL script.
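For example (the logical file name, path, and sizes below are placeholders; point the FILENAME at the 110GB drive):

-- Hypothetical example: add a second log file on the other local drive.
ALTER DATABASE SSM
ADD LOG FILE
(
    NAME = SSM_log2,                        -- placeholder logical name
    FILENAME = 'E:\SQLLogs\SSM_log2.ldf',   -- placeholder path on the 110GB drive
    SIZE = 10GB,
    FILEGROWTH = 1GB
);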

Is it possible to amend the attached SQL script so that it doesn't generate such a huge transaction log,
or have it write to both drives to split up the transaction logs, or ... ?

The questions you have to ask yourself before changing the script are: is the transaction log back to normal, and what is the current recovery model for the database(s) affected?  The results of this script should help:
SELECT  d.recovery_model_desc,
        d.log_reuse_wait_desc,
        mf.type_desc,
        mf.size / 128  -- size in MB
FROM    sys.databases d
        INNER JOIN sys.master_files mf ON d.database_id = mf.database_id
WHERE   d.name IN ('Your1DatabaseNameGoesHere', 'Your2DatabaseNameGoesHere');


sunhux (Author) commented:
http://esupport.trendmicro.com/solution/en-US/1112218.aspx
  The script I attached earlier is sanitized from the above.  Not getting any
  response from the author/developer of the script for the last 6 days.

I'm no DBA but will check with my DBA colleagues & get back.

> post the output from the script they ran before it ran out of space.
Refer to the attached: the messages are what we see in SQL Studio's
Messages tab.

> doing each column change individually will increase execution time but
> lower transaction log usage ...
Right now, the priority is to drastically speed up this script so that the runtime
is about 20% of what it currently takes, though it will be an added bonus if the
txn log is small as well (if run in simple recovery mode, the txn log should at
least stay much smaller).
TMsgsql2.zip
Qlemo ("Batchelor", Developer and EE Topic Advisor) commented:
Changing column definitions can be a very costly operation for large tables, and there is nothing you can do about that.
Anthony Perkins commented:
I see this as a problem, and it could be the root cause if the database is still in the Full recovery model (although the previous statement seems to contradict that):
set recovery mode to bulk_logged on : alter database "ssm" set recovery bulk_logged
Msg 102, Level 15, State 1, Server S10XXX-AAH, Line 1
Incorrect syntax near 'ssm'.
Msg 195, Level 15, State 7, Server S10XXX-AAH, Line 1
'recovery' is not a recognized SET option.

Please post the results of the query I posted earlier to answer this critical question.
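For reference, a form of the statement that should be accepted (assuming the database really is named SSM; the double quotes in the failing statement are the likely culprit if the session had QUOTED_IDENTIFIER off):

ALTER DATABASE [SSM] SET RECOVERY BULK_LOGGED;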
sunhux (Author) commented:
> Please post the results of the query I posted earlier to answer this critical question.

Below is the output:

recovery_model_desc   log_reuse_wait_desc   type_desc   (No column name)
===================   ===================   =========   ================
BULK_LOGGED           LOG_BACKUP            ROWS        253329
BULK_LOGGED           LOG_BACKUP            LOG         89996
BULK_LOGGED           LOG_BACKUP            LOG         101540
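Side note: log_reuse_wait_desc = LOG_BACKUP means the log space cannot be reused until a transaction log backup is taken. If the database stays in BULK_LOGGED, periodic log backups during the run would let that space be recycled, e.g. (the backup path is a placeholder):

BACKUP LOG SSM TO DISK = 'E:\Backups\SSM_log.trn';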
Vitor Montalvão (MSSQL Senior Engineer) commented:
Tempdb is on the same drive: does this affect speed/performance
 or does it just free up more space if it sits on another drive?
It can affect performance, but mostly it is consuming space that other databases could use.

If we can put the temp tables on SSD, that may help?  But we'll only have SSD around May next year
Don't you have a free disk available where you can move tempdb to? Preferably in a RAID 10 configuration.

You mean run the script while the database is in simple recovery mode?
 I'm no DBA, can you elaborate how to go about setting this MS SQL DB into that recovery model?
Yes. The Simple recovery model doesn't use the transaction log as much as the Full and Bulk Logged recovery models do. Read this MSDN article to learn more about Recovery Models.
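A minimal sketch of that switch, assuming the database is SSM and is currently in BULK_LOGGED (note that dropping to SIMPLE breaks the log backup chain, so take a full backup afterwards to restart log backups; the backup path is a placeholder):

-- before running the script
ALTER DATABASE [SSM] SET RECOVERY SIMPLE;

-- ... run the schema update script ...

-- afterwards, restore the original model and re-seed the backup chain
ALTER DATABASE [SSM] SET RECOVERY BULK_LOGGED;
BACKUP DATABASE [SSM] TO DISK = 'E:\Backups\SSM_full.bak';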

Vitor Montalvão (MSSQL Senior Engineer) commented:
I can see you're dropping indexes, backing up databases and recreating indexes, and all of these are heavy operations, so I can't see how you can tune the script.
What's the goal of all those operations? Maybe we can point you to a different way to achieve what you intend.
sunhux (Author) commented:
> perform these actions with the database in simple recovery model.
> Are the tempdb database files on a dedicated drive?
My DBA colleague just reran it in 'simple recovery mode' but he forgot
to put tempdb on another drive: so I guess there are no more txn
logs, but tempdb simply grew too big and the drive ran out of space
again: refer to the attached log.

> http://esupport.trendmicro.com/solution/en-US/1112218.aspx
Basically I'm trying to achieve the above, & our DB has already
been trimmed/housekept to less than half of its size, so there's not
much more we can trim.
ScriptSqlServer.zip
Vitor Montalvão (MSSQL Senior Engineer) commented:
There's not much that you can do. Those are all slow operations, and the bigger the database, the slower it will be.
You can do something to increase the performance, but in the end you won't gain a lot of time.
The things you can do are more related to tempdb best practices:
- tempdb data and log files should be stored on a fast, dedicated drive
- tempdb should have one data file per logical processor, up to a maximum of 8 data files for the case where your server has 8 or more processors.
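For illustration only (drive letter, paths, and sizes are placeholders; the default logical names tempdev/templog are assumed, and moving existing tempdb files only takes effect after a SQL Server restart):

-- move the existing tempdb files to a dedicated drive (effective after the next restart)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');

-- add a second data file (one per logical CPU, up to 8)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdb2.ndf', SIZE = 8GB, FILEGROWTH = 1GB);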
sunhux (Author) commented:
We are on RAID 10, but the tempdb is not on a separate drive.
Would it help if we set tempdb to 'simple recovery mode' as well prior
to running the script?

Is this script CPU intensive?  I think it's IO intensive, judging from Task
Manager while it's running, but not much CPU is used in the dual-vCPU
VM when it's running (about 25%).
sunhux (Author) commented:
> tempdb should have one data file per logical processor, up to a maximum of 8 data files
So how many data files do I have in my case?  Can you tell from the table that I provided
earlier, or is it from the script?
Vitor Montalvão (MSSQL Senior Engineer) commented:
Would it help if we set tempdb to 'simple recovery mode' as well prior to running the script?
tempdb is always in the Simple recovery model. You can't change that. I was talking about the user database that you're working on. That database you can change to the Simple recovery model.

Is this script CPU intensive?  I think it's IO intensive (...)
Yes, it's I/O intensive, but if you can give tempdb a data file for each CPU, then it will run faster.

So how many data files do I have in my case?  Can you tell from the table that I provided
 earlier, or is it from the script?
You can use SSMS and check the properties of tempdb, or run the following command to return the number of files: sp_helpdb 'tempdb'
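An alternative, if you prefer a query (standard catalog views only; nothing environment-specific is assumed):

SELECT type_desc, COUNT(*) AS file_count, SUM(size) / 128 AS total_mb
FROM sys.master_files
WHERE database_id = DB_ID('tempdb')
GROUP BY type_desc;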
Anthony Perkins commented:
>> So how many data files do I have in my case? <<
1 data file: 248GB
2 log files: 88GB + 99GB
sunhux (Author) commented:
>> So how many data files do I have in my case? <<
1 data file: 248GB
2 log files: 88GB + 99GB

Anthony, was the above what you noted from one of the logs I attached previously?
I don't see this in the table I posted.

The principal is asking that I trim the events in my DB to just 3 days (currently
I have trimmed to 2 weeks): the max I can trim is 1 week, as I have received requests to
check for events from about 1 week ago (& on rare occasions, 12 days ago).

If only I had 510GB of SSD at my disposal: put the entire DB there, run the script &
once done, move the DB back to FC disks. But it's only available around Mar-May next
year & even so, I can't justify it.
Anthony Perkins commented:
It was from here:
recovery_model_desc   log_reuse_wait_desc   type_desc   (No column name)
===================   ===================   =========   ================
BULK_LOGGED           LOG_BACKUP            ROWS        253329
BULK_LOGGED           LOG_BACKUP            LOG         89996
BULK_LOGGED           LOG_BACKUP            LOG         101540
sunhux (Author) commented:
In some huge databases of several TBs, I've heard of sites processing
them table by table.  Judging from the script, is this possible?

I have also seen cases of huge, fragmented databases that are split into
multiple data files so that they can defragment the DB (by exporting &
reimporting it back) one data file each weekend.
Anthony Perkins commented:
As suggested previously, try setting the Recovery Model to Simple in the script.
sunhux (Author) commented:
We did set it to simple recovery mode, plus truncated several tables
(suggested by the principal), & this resulted in a runtime of 11 hours.
However, the truncation practically made us lose all the data which I
would normally use to do checks to service SRs raised to me.

Simple recovery mode alone will result in 23+ hrs of runtime, which
is not acceptable.
Qlemo ("Batchelor", Developer and EE Topic Advisor) commented:
The time needed is directly related to the number of rows to process, so if you remove all content from some tables it is faster, of course ;-).
I'm certain there is nothing you can do to reduce the overall time needed. You can only split the execution, e.g. by table, to have shorter time slots during which the DB is unavailable. If possible, of course.
Anthony Perkins commented:
I would suggest you hire a competent and reliable DBA in your area to resolve this, as it is evidently beyond what we can suggest here.