transaction log full while increasing column size


I am trying to increase the size of a few columns in a very large table, from numeric(7,2) to numeric(11,6). After the "alter table <my table>" command had run for 1 hour 30 minutes I got the following error message:

"The transaction log for database 'MyDB' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases"

What should I do to resolve the issue? Thanks.

David Todd, Senior DBA, commented:

I think that your table in this instance is simply too big. The issue is that the ALTER TABLE statement is atomic, so the whole change has to be logged and committed as a single unit.

Maybe my SELECT TOP 1000 was a bit conservative; it could probably be increased to a million or so and still execute within a minute.

What I suggest is to wrap it in a loop like this:
declare @isdone bit
set @isdone = 0
while @isdone = 0
begin
    <insert statement from above>
    if @@rowcount = 0 set @isdone = 1
    waitfor delay '00:05'  -- pause 5 minutes between batches
end

Okay, that might now take all day and night to complete, BUT if transaction log backups are scheduled correctly, it will complete in nice bite-sized chunks that shouldn't blow out the transaction log or tempdb. The 5-minute delay lets other processes finish and use the log, and gives the log a chance to move on to the next virtual log file occasionally. If the number in the SELECT TOP is increased to one hundred thousand, I'd expect each insert itself to take a couple of minutes.
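Putting the two pieces together, a minimal sketch of the whole batched copy might look like this; the table name OriginalTable, the copy NewTable, and the columns id and amount are placeholders for illustration, not names from the question:

```sql
-- Copy rows from the original table into the re-typed copy in batches,
-- so each batch is a small transaction the log can absorb.
declare @isdone bit = 0;

while @isdone = 0
begin
    insert into NewTable (id, amount)
    select top (100000) ot.id, ot.amount
    from OriginalTable ot
    left outer join NewTable nt
      on ot.id = nt.id
    where nt.id is null;       -- only rows not yet copied

    -- @@rowcount reflects the insert immediately above
    if @@rowcount = 0
        set @isdone = 1;       -- nothing left to copy
    else
        waitfor delay '00:05'; -- 5-minute pause so log backups can run
end
```

With the database in full recovery and log backups running on schedule, each batch's log records get backed up and the space reused, so the log never has to hold the whole copy at once.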

David Todd, Senior DBA, commented:

Does the transaction log have auto-grow?

Did the transaction log consume all available disk?

What size was the log before this alter table and after?

What recovery model is this database in? If full, how frequently are you doing transaction log backups? When are you doing a full database backup?

What is the status of this database? Is this a key production database that can't be taken off-line for anything, or on a test system, or somewhere in between?
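As a starting point for the first few questions, the recovery model and the reason log space can't be reused can be read straight out of sys.databases; a quick check might look like this ('MyDB' taken from the error message):

```sql
-- Why can't log space be reused, and what recovery model is in play?
select name, recovery_model_desc, log_reuse_wait_desc
from sys.databases
where name = 'MyDB';

-- Log file size and percent used, reported for every database
dbcc sqlperf(logspace);
```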

As a thought, can you do something like this (rough pseudocode):
select * into newTable from existingTable where 1 = 2
alter table newTable alter column ...
exec sp_rename 'existingTable', 'originalTable'
exec sp_rename 'newTable', 'existingTable'
insert existingTable
select top 1000 ot.*
from originalTable ot
left outer join existingTable et
  on ot.key = et.key
where et.key is null

The idea is to create a new table with the correct structure and batch-copy the rows across to it. That way there won't be such a big all-or-nothing hit on the database, its log, and the server.

sdc248 (Author) commented:
Hi David:

I talked to our DBA and found out they haven't backed up the transaction log for a while. Once the log is backed up I should be able to do what I wanted, but I am thinking of using a different approach from my original one, since that ran for more than an hour and filled up the transaction log.

Do you think the following would work without taking too long or filling up the transaction log?
- create a new table with exactly the same schema, except with larger sizes for the columns I want to change.
- insert everything from the original table.
- rename the original table and the new table so the new one can go into business.

Is "insert into <new table> select * from <original table>" efficient enough to avoid a long running time and a huge number of transaction log records?
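In other words, something like this; the table and column names here are placeholders for illustration only:

```sql
-- New table, same schema but with the wider numeric columns
create table NewTable (
    id int not null primary key,
    amount numeric(11,6) not null  -- was numeric(7,2)
);

-- Single-statement copy: fully logged, so the log must
-- have room for the entire copy in one transaction
insert into NewTable (id, amount)
select id, amount
from OriginalTable;

-- Swap the tables so the new one takes over the old name
exec sp_rename 'OriginalTable', 'OldTable';
exec sp_rename 'NewTable', 'OriginalTable';
```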

sdc248 (Author) commented:
Got it, David. Let me work on it and see how it goes...
sdc248 (Author) commented:
Problem solved. Thanks.