Solved

MERGE multiple tables

Posted on 2013-11-10
6
180 Views
Last Modified: 2013-11-12
Dear all,
I have more than 40 tables with the same structure. At the end of the day I synchronize those tables with a master table (same structure, but with one extra column, "ProjectID", to distinguish where the data came from).
I developed the following SP:
MERGE TargetTable AS T
USING SourceTable AS S
    ON (T.fld1 = S.fld1)
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProjectID, fld1, fld2, fld3, fld4)
    VALUES (@ProjectID, S.fld1, S.fld2, S.fld3, S.fld4)
WHEN MATCHED THEN
    UPDATE SET T.fld1 = S.fld1,
               T.fld2 = S.fld2,
               T.fld3 = S.fld3,
               T.fld4 = S.fld4
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
When I run it on the first source table it works fine,
but when I run it with the second source table, it deletes the old rows (which came from another table).
How can I stop that?
I tried changing the DELETE to:
DELETE T WHERE ProjectID = @ProjectID

I also tried changing the join condition to ON (T.fld1 = S.fld1 AND T.ProjectID = @ProjectID),
but that didn't work at all.

Please help.
Thanks
Question by:ethar1
6 Comments
 
LVL 9

Expert Comment

by:QuinnDex
ID: 39637211
WHEN NOT MATCHED BY SOURCE

This is your problem: the rows inserted from the first source table will not exist in the second, so they will be deleted.

You could combine your source tables first, or add another column that marks a row as new and exclude such rows from the delete clause; when all the updates are complete, update the new-row column back to null.
 
LVL 13

Expert Comment

by:magarity
ID: 39637224
You could always do this in one step by substituting a single view for the 40 tables, something like:

create view all_sources as
select '1' as projectid, fld1, etc from source1
union all
select '2' as projectid, fld1, etc from source2
union all
... from sourceN

Then use that in your merge statement. I think this will allow your delete to function properly, but I'm unable to test it myself just now. I don't know if this will be too big a transaction to use that view. You've said 40 source tables, but not how many rows are involved.
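As a rough sketch of what I mean (assuming the all_sources view above and a hypothetical key column called keyval; adjust names to the real schema), the merge against the combined view could look like:

```sql
-- Sketch only: all_sources and keyval are placeholders for the real names.
MERGE TargetTable AS T
USING all_sources AS S
    ON (T.ProjectID = S.projectid AND T.keyval = S.keyval)
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProjectID, keyval, fld2, fld3, fld4)
    VALUES (S.projectid, S.keyval, S.fld2, S.fld3, S.fld4)
WHEN MATCHED THEN
    UPDATE SET T.fld2 = S.fld2, T.fld3 = S.fld3, T.fld4 = S.fld4
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
```

Because every project's rows are present in the source view at once, WHEN NOT MATCHED BY SOURCE now only removes rows that are genuinely gone from all 40 sources, rather than everything belonging to the other projects.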

First alternative: you could do the deletes as a separate set of statements:

delete T from target T left outer join source1 S on T.key = S.key where S.key is null and T.projectid = 1
...
delete T from target T left outer join sourceN S on T.key = S.key where S.key is null and T.projectid = N

Second alternative:
Why not just truncate the target and insert only from the sources? Then you don't even have to do the deletes. Since there are 40 sources, this might actually be faster even if there is a modest ratio of updates.

truncate target;
insert target
select '1' as projectid, fld1, etc from source1
union all
select '2' as projectid, fld1, etc from source2
union all
... from sourceN;
 

Author Comment

by:ethar1
ID: 39637388
The nature of the source tables is that they are not all available at the same time; each table has its own connection policy.

For each table I make a separate SP to manage its policy.

Also, I can't truncate the target: it contains extra fields which are populated by a local application (Notes, Comments, Description, and Actions).
I hope I can find a way to change the MERGE statement.
Otherwise, I will drop the DELETE case and use:
delete T from target T left outer join source S on T.key = S.key where S.key is null and T.projectid = 1

I don't understand how that will delete rows that don't exist in the source!

 
LVL 9

Accepted Solution

by:
QuinnDex earned 500 total points
ID: 39637392
Add a new column of type bit, default 0, to the target table. When you update or insert a row, set this column to 1. When all the updates are complete, delete every row where this column = 0, then update the column back to 0 ready for the next run.
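A minimal sketch of this approach (the column name Touched and the field names are placeholders; the real schema will differ):

```sql
-- One-time: add the flag column (hypothetical name).
ALTER TABLE TargetTable ADD Touched bit NOT NULL DEFAULT 0;

-- Per source table: mark every row the MERGE inserts or updates.
MERGE TargetTable AS T
USING SourceTable AS S
    ON (T.fld1 = S.fld1)
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProjectID, fld1, fld2, Touched)
    VALUES (@ProjectID, S.fld1, S.fld2, 1)
WHEN MATCHED THEN
    UPDATE SET T.fld2 = S.fld2, T.Touched = 1;

-- After all 40 merges have run:
DELETE FROM TargetTable WHERE Touched = 0;
UPDATE TargetTable SET Touched = 0;  -- reset for the next day
```

Rows that no source table touched in the current run are exactly the ones still flagged 0, so the final delete replaces the per-merge WHEN NOT MATCHED BY SOURCE clause.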
 
LVL 13

Expert Comment

by:magarity
ID: 39637840
Here is an example I made to show that my idea of deleting afterwards works, at least with my understanding of what you're asking about. The delete statement syntax had to be a little different because I was just going from memory before:

use test;
drop table targettable;
drop table sourcetable1;
drop table sourcetable2;
create table targettable (projectid int, keyval int, data1 nvarchar(1));
create table sourcetable1 (keyval int, data1 nvarchar(1));

insert into sourcetable1 (keyval, data1) values (1001, 'A');
insert into sourcetable1 (keyval, data1) values (1002, 'B');

merge targettable t using sourcetable1 s on (t.keyval = s.keyval)
when not matched then insert (projectid, keyval, data1) values (1, s.keyval, s.data1)
when matched then update set t.data1 = s.data1;

create table sourcetable2 (keyval int, data1 nvarchar(1));
insert into sourcetable2 (keyval, data1) values (2001, 'P');
insert into sourcetable2 (keyval, data1) values (2002, 'Q');

merge targettable t using sourcetable2 s on (t.keyval = s.keyval)
when not matched then insert (projectid, keyval, data1) values (2, s.keyval, s.data1)
when matched then update set t.data1 = s.data1;

select * from targettable;

delete sourcetable1 where keyval = 1001;
update sourcetable1 set data1 = 'C' where keyval = 1002;

-- projectid 1 here, since we are re-merging sourcetable1
merge targettable t using sourcetable1 s on (t.keyval = s.keyval)
when not matched then insert (projectid, keyval, data1) values (1, s.keyval, s.data1)
when matched then update set t.data1 = s.data1;

select * from targettable;

delete a from targettable a where a.projectid = 1 and a.keyval in
(select t.keyval from targettable t left outer join sourcetable1 s
on t.keyval = s.keyval and t.projectid = 1
where s.keyval is null);

select * from targettable;
 
LVL 13

Expert Comment

by:magarity
ID: 39643188
You do realize that if the 'delete where 0' step is ever accidentally run out of order, it will delete everything? The approach will work; just be careful.
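One way to hedge against that (a sketch only, assuming the hypothetical Touched flag column from the accepted answer): skip the delete entirely when nothing has been marked yet, and run the delete and the reset inside one transaction so they cannot be separated:

```sql
-- Sketch: only meaningful after the day's merges have completed.
BEGIN TRANSACTION;
IF EXISTS (SELECT 1 FROM TargetTable WHERE Touched = 1)
BEGIN
    DELETE FROM TargetTable WHERE Touched = 0;
    UPDATE TargetTable SET Touched = 0;
END
COMMIT TRANSACTION;
```

If the step is run before any merge has flagged a row, the EXISTS guard makes it a no-op instead of wiping the table.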
