thayduck
asked on
Alter field in table with 135,000,000 records
I am trying to alter a column in tableA from char(2) to char(3), in a table with over 135,000,000 records.
When I run the script below, it runs for 35-40 minutes and then either fails with an out-of-space error or disconnects my session.
I assume it fails because of the number of records it has to alter.
I have no control over the servers. Just wondering if there is a better way.
alter table tablea alter column [rt_code] char(3)
The next thing I will try is:
copying tablea to tablea_bk
deleting all records in tablea
then alter tablea
then copy records back to tablea from tablea_bk
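On SQL Server, that copy/truncate/alter/copy-back plan might look like the sketch below. The table and column names come from the question; everything else is an assumption, and TRUNCATE will fail if tablea is referenced by any foreign keys.

```sql
-- Back up the data (SELECT INTO creates tablea_bk as a new heap)
SELECT * INTO tablea_bk FROM tablea;

-- TRUNCATE is minimally logged, unlike DELETE
TRUNCATE TABLE tablea;

-- With zero rows in the table, the ALTER is effectively a
-- metadata-only change and completes almost instantly
ALTER TABLE tablea ALTER COLUMN [rt_code] char(3);

-- Copy the rows back (in batches instead, if log space is
-- still a concern for a single 135M-row transaction)
INSERT INTO tablea
SELECT * FROM tablea_bk;
```

Note that any new rows written to tablea between the backup and the copy-back would be lost, so the table needs to be quiesced for the duration.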
1. Add a new column
2. Copy the data from the old column in batches.
3. Drop the old column
4. Rename the new column with the old name.
5. Re-create any foreign keys and indexes that referenced the old column, if necessary.
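The steps above can be sketched in T-SQL as follows. The new column name, the batch size of 50,000, and the assumption that rt_code is NOT NULL (otherwise the loop's IS NULL check never drains) are all mine, not from the thread.

```sql
-- 1. Add a new, wider column (NULLable, so no default has to be written)
ALTER TABLE tablea ADD rt_code_new char(3) NULL;

-- 2. Copy the data over in small batches so each transaction,
--    and therefore each log reservation, stays small
WHILE 1 = 1
BEGIN
    UPDATE TOP (50000) tablea
    SET rt_code_new = rt_code
    WHERE rt_code_new IS NULL;   -- assumes rt_code itself is NOT NULL

    IF @@ROWCOUNT = 0 BREAK;     -- no rows left to convert
END;

-- 3. Drop the old column
ALTER TABLE tablea DROP COLUMN rt_code;

-- 4. Rename the new column to the old name
EXEC sp_rename 'tablea.rt_code_new', 'rt_code', 'COLUMN';

-- 5. Re-create any foreign keys and indexes that were dropped
--    along with the old column.
```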
>> 1. Add a new column <<
That will take at least as long, and probably longer, than converting the existing column.
Are you sure? I thought it (ALTER TABLE ADD ColumnName ...) would be just a metadata change and therefore practically instantaneous.
Adding a fixed-length column will have to change the structure of all rows, just like increasing the length of a fixed-length column does.
Ah, I did not know that. Thanks.
ASKER
I have a DBA looking at this issue.
He is trying some things in a SQL sandbox.
I think he will have to do a:
Copy A to BKA
Truncate A
Alter A
Copy BKA back to A
It's just the number of records that is causing this issue.
ASKER CERTIFIED SOLUTION
The single biggest thing you can do to speed up the entire process is to make sure you have sufficient pre-allocated, and thus pre-formatted, log space to handle the biggest transaction. [I had a 45 minute task go down to ~30 secs just from doing that.] I'm assuming you have IFI on, so that data file growth does not have the same issue.
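A sketch of that log pre-allocation, assuming SQL Server; the database name, logical file name, and target size below are placeholders you would replace with your own (sys.database_files shows the actual logical names):

```sql
-- Pre-grow the log file in one operation so the big transaction
-- doesn't pay for repeated autogrowths. Log files cannot use
-- instant file initialization (IFI), so each growth must be
-- zero-initialized -- doing it once, up front, is much faster.
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log, SIZE = 50GB);
```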
Naturally tablea_bk should define the column as char(3) already. If you're already familiar with Change Tracking, you could use it to re-sync the tables, but it's not trivial.