
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 203

Alter field in table with 135,000,000 records

I am trying to alter a field in tableA from char(2) to char(3), in a table with over 135,000,000 records.

When I run the script below, it runs for 35-40 minutes and then fails with a "no more room" error or simply disconnects my session.
I assume it fails because of the number of records it has to alter.
I have no control over servers. Just wondering if there is a better way.

alter table tablea alter column [rt_code] char(3)

The next thing I will try is:
copying tablea to tablea_bk
deleting all records in tablea
then alter tablea
then copy records back to tablea from tablea_bk
1 Solution
Scott PletcherSenior DBACommented:
Yeah, there's no easy way to do that.  You'll have some delays and/or down time for that table no matter what you do.

The single biggest thing you can do to speed up the entire process is to make sure you have sufficient pre-allocated, and thus pre-formatted, log space to handle the biggest transaction.  [I had a 45 minute task go down to ~30 secs just from doing that.]  I'm assuming you have IFI on, so that data file growth does not have the same issue.
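Scott's log pre-allocation advice can be sketched as below. The logical file name MyDb_log and the target size are assumptions; check sys.database_files for your actual log file name, and size the log to your largest expected transaction.

```sql
-- Sketch only: pre-grow the transaction log BEFORE the ALTER so SQL Server
-- does not have to autogrow (and zero-initialize) log space mid-transaction.
-- 'MyDb' and 'MyDb_log' are placeholder names; 64 GB is an assumed size.
ALTER DATABASE MyDb
MODIFY FILE (NAME = N'MyDb_log', SIZE = 65536MB);
```

Unlike data files with instant file initialization (IFI), log files must always be zero-initialized on growth, which is why pre-allocating can make such a dramatic difference.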

Naturally tablea_bk should define the column as char(3) already.  If you're already familiar with Change Tracking, you could use it to re-sync the tables, but it's not trivial.
Anthony PerkinsCommented:
1. Add a new column
2. Copy the data from the old column in batches.
3. Drop the old column
4. Rename the new column with the old name.
5. Fix any foreign keys and indexes if necessary.
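The steps above can be sketched as follows. The batch size of 100,000 and the interim column name rt_code_new are assumptions; this sketch also assumes rt_code is never NULL, since the loop uses rt_code_new IS NULL to find unconverted rows.

```sql
-- 1. Add the new column (nullable, so the add itself is cheap).
ALTER TABLE dbo.tablea ADD rt_code_new char(3) NULL;

-- 2. Copy the data in batches to keep each transaction (and the log) small.
--    Assumes rt_code is NOT NULL; otherwise the loop would never terminate.
WHILE 1 = 1
BEGIN
    UPDATE TOP (100000) dbo.tablea
    SET rt_code_new = CAST(rt_code AS char(3))
    WHERE rt_code_new IS NULL;
    IF @@ROWCOUNT = 0 BREAK;
END;

-- 3-4. Drop the old column and rename the new one into place.
ALTER TABLE dbo.tablea DROP COLUMN rt_code;
EXEC sp_rename 'dbo.tablea.rt_code_new', 'rt_code', 'COLUMN';

-- 5. Recreate any foreign keys and indexes that referenced the old column.
```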
Scott PletcherSenior DBACommented:
>> 1. Add a new column <<

That will take at least as long, and probably longer, than converting the existing column.

Anthony PerkinsCommented:
>> That will take at least as long, and probably longer, than converting the existing column. <<
Are you sure? I thought it (ALTER TABLE ADD ColumnName ...) would be just a metadata change and therefore practically instantaneous.
Scott PletcherSenior DBACommented:
Adding a fixed-length column will have to change the structure of all rows, just like increasing the length of a fixed-length column does.
Anthony PerkinsCommented:
Ah, I did not know that.  Thanks.
thayduckAuthor Commented:
I have a DBA looking at this issue.
He is trying some things in SQL Sand Box.

I think he will have to do a:

Copy A to BKA
Truncate A
Alter A
Copy BKA back to A
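This plan could be sketched as below (table and column names follow the thread). Note that the copy-back in the last step is fully logged by default and is the slow part; under the simple or bulk-logged recovery model, INSERT ... WITH (TABLOCK) can be minimally logged.

```sql
-- Sketch of the copy / truncate / alter / copy-back plan above.
SELECT * INTO dbo.tablea_bk FROM dbo.tablea;          -- minimally logged copy

TRUNCATE TABLE dbo.tablea;                            -- minimally logged, unlike DELETE

ALTER TABLE dbo.tablea ALTER COLUMN rt_code char(3);  -- fast on an empty table

INSERT INTO dbo.tablea WITH (TABLOCK)                 -- the expensive step; TABLOCK
SELECT * FROM dbo.tablea_bk;                          -- may allow minimal logging
```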

It's just the number of records that is causing this issue.
thayduckAuthor Commented:
The DBA came up with the approach below: create tableb with SELECT ... INTO, then drop tablea and rename tableb to tablea.
This only took around 5 minutes to complete.

Per DBA:
From the Activity Monitor, it seems most of wait time is on PAGEIOLATCH_EX.  So we are waiting on disk IO most of the time.

So the important element is that SELECT .. INTO new_table FROM source_table does very little logging.  In this select statement, I cast the column we want to change to the new format.  I do some renames of the tables and we are done.

      ,cast([RT_code] as char(3)) as RT_code
      ,[Repair Type]
  into [dbo].[tableb]
  FROM [dbo].[tablea]
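A fuller sketch of the DBA's approach is below. The full column list is elided in the thread, so [OtherCols] is a placeholder for the remaining columns; the drop/rename step follows the description above.

```sql
-- Sketch: rebuild the table with the widened column via minimally logged
-- SELECT ... INTO, then swap names. [OtherCols] is a placeholder.
SELECT [OtherCols]
      ,CAST([RT_code] AS char(3)) AS RT_code
      ,[Repair Type]
INTO  [dbo].[tableb]
FROM  [dbo].[tablea];

DROP TABLE [dbo].[tablea];
EXEC sp_rename 'dbo.tableb', 'tablea';

-- Note: SELECT ... INTO does not copy indexes, constraints, or permissions;
-- recreate those on the new table afterwards.
```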
