updating one column on 18 million record table

There is an update on one column, with an inner join between two tables; about 18M records are affected. The query has been running for 1 hour and the log has grown from 1.5 GB to 16 GB. There is 20 GB more available on the disk, so it is OK for now.

The recovery model is SIMPLE.

Is it possible to have it commit each update individually rather than as one big transaction, so that the log is not impacted like this?
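
For reference, a quick way to confirm the recovery model and watch the log while the statement runs (a minimal sketch; run it from a separate query window against the database in question):

-- Confirm the recovery model (stated above as SIMPLE) and check current log usage.
SELECT name, recovery_model_desc
FROM   sys.databases
WHERE  name = DB_NAME();

DBCC SQLPERF(LOGSPACE);   -- log size (MB) and percent used, per database
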
Aneesh Retnakaran (Database Administrator) commented:
Space seems to be fine.
You can try shrinking the log file from another query window; this may slow down your update.

DBCC SHRINKFILE(logName)
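
If the logical name of the log file is not known, it can be looked up first; a minimal sketch (MyDb_log and the 1024 MB target below are placeholders):

-- The log file's logical name is what DBCC SHRINKFILE expects.
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM   sys.database_files;

DBCC SHRINKFILE (MyDb_log, 1024);  -- placeholder logical name; target size in MB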

> Is it possible to have it commit each update individually rather than as one big transaction, so that the log is not impacted like this?
No. You need to do these kinds of operations as small batches, like updating 10K records at a time.
 
25112 (Author) commented:
>> No. You need to do these kinds of operations as small batches, like updating 10K records at a time
10K records at a time - you mean manually? That will be troublesome with 18M records. How do we track which rows have been updated and which have not?

the join is as simple as
update a set col1 = b.col1 from tablea a join tableb b on a.id = b.id and a.col2='value' and b.col2='value'
 
dqmq commented:
This will update 10,000 rows that haven't been updated yet. Repeat until no rows are updated.

update a set col1 = b.col1
from tablea a
     join tableb b on a.id = b.id and a.col2 = 'value' and b.col2 = 'value'
     join (select top 10000 a.id
           from tablea a
                join tableb b on a.id = b.id and a.col2 = 'value' and b.col2 = 'value'
           -- rows not yet updated
           where a.col1 <> b.col1) as c on c.id = a.id

  where b.col1 <> b.cola
 
dqmq commented:
Please ignore the last line; it was left in by mistake.
 
Anthony Perkins commented:
>>the join is as simple as
update a set col1 = b.col1 from tablea a join tableb b on a.id = b.id and a.col2='value' and b.col2='value' <<
Are the appropriate columns indexed? If they are not, it could take several days to complete. Once you have it indexed correctly, consider rewriting it as follows:
UPDATE  a
SET     col1 = b.col1
FROM    tablea a
        JOIN tableb b ON a.id = b.id
WHERE   a.col2 = 'value'
        AND b.col2 = 'value'
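
For the indexing suggestion, something along these lines could work; the index names and included column are assumptions, not part of the original answer:

-- Hypothetical nonclustered indexes covering the filter and join columns.
CREATE NONCLUSTERED INDEX IX_tablea_col2_id ON tablea (col2, id) INCLUDE (col1);
CREATE NONCLUSTERED INDEX IX_tableb_col2_id ON tableb (col2, id) INCLUDE (col1);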
 
25112 (Author) commented:
Thanks, I followed that logic and also added the indexes. Here is another, similar query that updates 23M records. I have nonclustered indexes on the join columns, but there is no condition other than the join itself. How could this one be made to iterate 10K at a time?

UPDATE A
SET A.Col1 = B.Col1
FROM A JOIN B ON A.ID = B.ID
 
dqmq commented:
Update A
Set A.col1 = B.col1
from A
     join B on A.id = B.id
     join (Select top 10000 A.*
           from A
                join B on A.id = B.id
           where A.col1 <> B.col1) as c on c.id = A.id
 
25112 (Author) commented:
That is neat.

If a.col1 or b.col1 is NULL, will it need some tweaking?
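
One way to make the "not yet updated" test NULL-safe, since <> alone skips rows where exactly one side is NULL (a sketch against the same A/B tables as above):

Update A
Set A.col1 = B.col1
from A
     join B on A.id = B.id
     join (Select top 10000 A.id
           from A
                join B on A.id = B.id
           -- <> misses rows where exactly one side is NULL, so test those cases explicitly
           where A.col1 <> B.col1
              or (A.col1 is null and B.col1 is not null)
              or (A.col1 is not null and B.col1 is null)) as c on c.id = A.id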
 
25112 (Author) commented:
dqmq, would you then put the update within a while loop?
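
A minimal sketch of such a loop, assuming SIMPLE recovery (as stated in the question) so log space can be reused between batches:

declare @rows int = 1;

while @rows > 0
begin
    Update A
    Set A.col1 = B.col1
    from A
         join B on A.id = B.id
         join (Select top 10000 A.id
               from A
                    join B on A.id = B.id
               -- add the NULL-safe test from above if col1 can be NULL
               where A.col1 <> B.col1) as c on c.id = A.id;

    set @rows = @@ROWCOUNT;  -- 0 once no mismatched rows remain, which ends the loop
    checkpoint;              -- under SIMPLE recovery this lets the log be reused between batches
end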
