sunhux asked:
Transaction log full: ways to work around it instead of getting more disk space

I ran the attached script; it created 80GB+ of transaction log and then stopped with the messages
below due to a shortage of disk space for the transaction logs:

The transaction log for database 'SSM' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
Msg 9002, Level 17, State 4, Server SPPAAH, Line 1
The transaction log for database 'tempdb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

I have another local drive with 110GB, plus the current 80GB drive that houses the current
transaction log.  Can anyone suggest how I can work around this?

Is it possible to amend the attached SQL script so that it doesn't generate such a huge transaction log,
or so that it writes to both drives to split up the transaction log, or ...?
SanitizedUpdateSchemaSqlScript.txt
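For reference, the error message points at the log_reuse_wait_desc column in sys.databases; a minimal check, assuming the database names from the error text, would be:

    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name IN ('SSM', 'tempdb');

A value of LOG_BACKUP in that column means the log is waiting for a log backup before its space can be reused.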
sunhux (ASKER):
Forgot to say that obtaining more storage is not an option currently (at least for the next few days).
SOLUTION by Vitor Montalvão (available to members only)
sunhux (ASKER):
Tempdb is on the same drive: does this affect speed/performance, or does moving it to another drive just free up more space?

> plenty of temporary tables and performing a lot of changes and write operations
Feel free to enhance this script.  If we can put the temp tables on SSD, that may
help?  But we'll only have SSD around May next year.
sunhux (ASKER):
> perform this actions with the database in simple recovery model.
You mean run the script while the database is in simple recovery mode?

I'm no DBA; can you elaborate on how to go about setting this MS SQL database
to simple recovery mode?
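For what it's worth, switching the recovery model is a one-line ALTER DATABASE; a minimal sketch, assuming the database name 'SSM' from the error message (it can also be done in SSMS under Database Properties > Options > Recovery model):

    ALTER DATABASE [SSM] SET RECOVERY SIMPLE;
    -- verify the change
    SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'SSM';

Note that switching away from FULL breaks the log backup chain, so a full backup is usually taken once the work is done.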
I don't see your script using explicit transactions, and that is reasonable, as all steps should be part of the same transaction.
However, that will always require a lot of transaction log space. Using a loop that does each column change individually will increase execution time but lower transaction log usage, as you can commit after finishing each column.
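A rough illustration of that per-column idea, with placeholder table and column names (not taken from the attached script), assuming the database is in simple recovery so the log space can be reused between steps:

    -- change one column at a time instead of one giant batch
    ALTER TABLE dbo.SampleTable ALTER COLUMN Description NVARCHAR(4000) NULL;
    CHECKPOINT;   -- under simple recovery this lets the log space be reused
    ALTER TABLE dbo.SampleTable ALTER COLUMN SourceHost NVARCHAR(512) NULL;
    CHECKPOINT;
    -- ...repeat for each remaining column...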
SOLUTION (available to members only)
sunhux (ASKER):
http://esupport.trendmicro.com/solution/en-US/1112218.aspx
  The script I attached earlier is sanitized from the above.  I have not gotten a
  response from the author/developer of the script for the last 6 days.

I'm no DBA but will check with my DBA colleagues & get back.

> post the output from the script they ran before it ran out of space.
Refer to the attachment: the messages are what we see in the Messages tab
of SQL Server Management Studio.

> doing each column change individually will increase execution time but
> lower transaction log usage ...
Right now the priority is to drastically speed up this script so that the runtime
is about 20% of what it currently takes, though it would be an added bonus if the
txn log stays small as well (and if run in simple recovery mode, the txn log should
not grow as much).
TMsgsql2.zip
Changing column definitions can be a very costly operation for large tables, and there is nothing you can do about that.
SOLUTION (available to members only)
sunhux (ASKER):
> Please post the results of the query I posted earlier to answer this critical question.

Below is the output:

recovery_model_desc   log_reuse_wait_desc   type_desc   (No column name)
===================   ===================   =========   ================
BULK_LOGGED           LOG_BACKUP            ROWS        253329
BULK_LOGGED           LOG_BACKUP            LOG         89996
BULK_LOGGED           LOG_BACKUP            LOG         101540
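The exact query is in the members-only post above, but a query of roughly this shape would produce that output, assuming the unnamed column is the file size in MB:

    SELECT d.recovery_model_desc, d.log_reuse_wait_desc, f.type_desc,
           f.size * 8 / 1024          -- sys.master_files stores size in 8 KB pages
    FROM sys.databases d
    JOIN sys.master_files f ON f.database_id = d.database_id
    WHERE d.name = 'SSM';

On that reading, the data file is roughly 247GB and the two log files roughly 88GB and 99GB, which lines up with the file sizes quoted further down the thread.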
ASKER CERTIFIED SOLUTION (available to members only)
I can see you're dropping indexes, backing up databases and recreating indexes, and these are all heavy operations, so I can't see how you can tune the script.
What's the goal of all those operations? Maybe we can lead you to a different way to achieve what you intend.
sunhux (ASKER):
> perform this actions with the database in simple recovery model.
> Tempdb database files are in a dedicated drive?
My DBA colleague just reran it in simple recovery mode, but he forgot to put
tempdb on another drive: so I guess there is no more txn log growth, but tempdb
simply grew so big that the drive ran out of space again: refer to the attached log.

> http://esupport.trendmicro.com/solution/en-US/1112218.aspx
Basically I'm trying to achieve the above, and our DB has already been
trimmed/housekept to less than half of its size, so there's not much more
we can trim.
ScriptSqlServer.zip
There's not much that you can do. Those are all slow operations, and the bigger the database the slower it will be.
You can do some things to increase performance, but in the end you won't gain a lot of time.
The things you can do are mostly tempdb best practices (a rough sketch follows below):
- tempdb data and log files should be stored on a fast, dedicated drive
- tempdb should have one data file per logical processor, up to a maximum of 8 data files if your server has 8 or more processors
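A minimal sketch of those two points, with placeholder paths and sizes (tempdev/templog are the default tempdb logical file names; file moves only take effect after a SQL Server restart):

    -- move the existing tempdb files to a dedicated drive
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'E:\TempDB\tempdb.mdf');
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'E:\TempDB\templog.ldf');
    -- one data file per logical CPU, up to 8; e.g. a second file for a 2-vCPU VM
    ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'E:\TempDB\tempdb2.ndf', SIZE = 8GB, FILEGROWTH = 1GB);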
sunhux (ASKER):
We are on RAID 10, but tempdb is not on a separate drive.
Would it help if we set tempdb to simple recovery mode as well prior
to running the script?

Is this script CPU intensive?  Judging from Task Manager while it's running,
I think it's IO intensive; not much CPU is used on the dual-vCPU VM
(about 25%).
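One way to sanity-check that IO-bound impression from inside SQL Server is the file-level IO stall DMV; a minimal sketch (read-only, needs the usual VIEW SERVER STATE permission):

    SELECT DB_NAME(vfs.database_id) AS db_name, mf.physical_name,
           vfs.num_of_reads, vfs.num_of_writes,
           vfs.io_stall_read_ms, vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
    JOIN sys.master_files mf
      ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
    ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;

High stall times on the SSM and tempdb files relative to the read/write counts would support the IO-intensive reading.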
sunhux (ASKER):
> tempdb should have one data file per logical processor, up to a maximum of 8 data files
So how many data files do I have in my case?  Can you tell from the table I provided
earlier, or is it from the script?
SOLUTION (available to members only)
>> So how many data files do I have in my case? <<
1 data file: 248GB
2 log files: 88GB + 99GB
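Those figures are for the SSM database; to see how many data files tempdb itself has (the number Vitor's recommendation is about), a simple count like this works:

    SELECT type_desc, COUNT(*) AS file_count
    FROM tempdb.sys.database_files
    GROUP BY type_desc;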
sunhux (ASKER):
>> So how many data files do I have in my case? <<
1 data file: 248GB
2 log files: 88GB + 99GB

Anthony, was the above what you noted from one of the logs I attached previously?
I don't see it in the table I posted.

The principal is asking that I trim the events in my DB to just 3 days (I have currently
trimmed it to 2 weeks): the most I can trim to is 1 week, as I have received requests to
check for events from about 1 week ago (and, on rare occasions, 12 days ago).

If only I had 510GB of SSD at my disposal: I'd put the entire DB there, run the script and,
once done, move the DB back to FC disks. But SSD is only available around Mar-May next
year and, even so, I can't justify it.
It was from here:
recovery_model_desc   log_reuse_wait_desc   type_desc   (No column name)
===================   ===================   =========   ================
BULK_LOGGED           LOG_BACKUP            ROWS        253329
BULK_LOGGED           LOG_BACKUP            LOG         89996
BULK_LOGGED           LOG_BACKUP            LOG         101540
sunhux (ASKER):
In some huge databases of several TB, I've heard of sites processing
them table by table.  Judging from the script, is this possible?

I have also seen cases of huge, fragmented databases being split into
multiple data files so that the DB can be defragmented (by exporting
and reimporting) one data file each weekend.
As suggested previously, try setting the Recovery Model to Simple in the script.
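One hedged way to build that into the run, with a placeholder logical log name and backup path, and assuming the BULK_LOGGED model shown in the output above is what should be restored afterwards:

    ALTER DATABASE [SSM] SET RECOVERY SIMPLE;
    -- ... existing schema-update steps from the script run here ...
    CHECKPOINT;
    DBCC SHRINKFILE (N'SSM_log', 10240);      -- reclaim the bloated log, target ~10GB; logical name assumed
    ALTER DATABASE [SSM] SET RECOVERY BULK_LOGGED;
    BACKUP DATABASE [SSM] TO DISK = N'D:\Backup\SSM_after_update.bak';   -- restart the backup chain; path hypothetical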
sunhux (ASKER):
We did set it to simple recovery mode and truncated several tables
(suggested by the principal), and this resulted in a runtime of 11 hours.
However, the truncation practically made us lose all the data which I
would normally use to do checks to service SRs raised to me.

Simple recovery mode alone will result in 23+ hours of runtime, which
is not acceptable.
SOLUTION (available to members only)
I would suggest you hire a competent and reliable DBA in your area to resolve this as it is evidently beyond what we can suggest here.