I'm looking for advice on the best way to achieve the following, with performance being the primary goal.
I have two tables called table_import and table_live.
table_import will have over 1 million rows of data imported into it each night. This is not appended to the existing data; each night's import is a completely new set of data.
table_live will be the data that the live application references.
Each night I want to run a job that compares the data in table_import against the data in table_live and does three things:

- removes rows from table_live that are no longer in table_import
- updates any rows in table_live that have been changed in table_import
- inserts rows from table_import that do not yet exist in table_live
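To make it concrete, something along these lines is what I have in mind using a single MERGE statement, though I don't know if it's the right approach. The column names here (phone_number, name, address) are just placeholders for illustration:

```sql
-- Sketch only: phone_number is the matching key, name/address stand in
-- for whatever data columns the real tables carry.
MERGE table_live AS live
USING table_import AS imp
    ON live.phone_number = imp.phone_number
WHEN MATCHED AND (live.name <> imp.name OR live.address <> imp.address) THEN
    -- Row exists in both tables but the data has changed: update it.
    UPDATE SET live.name    = imp.name,
               live.address = imp.address
WHEN NOT MATCHED BY TARGET THEN
    -- Row is new in the import: insert it into the live table.
    INSERT (phone_number, name, address)
    VALUES (imp.phone_number, imp.name, imp.address)
WHEN NOT MATCHED BY SOURCE THEN
    -- Row no longer exists in the import: remove it from the live table.
    DELETE;
```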
I have been told a function exists in DB2 called insert_update that achieves this using primary keys in both tables.
The issue I have is that I do not have primary keys available in both tables, but I do have a unique column, a telephone number, in both tables.
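If it helps, I assume I could create a unique index on the telephone number column in each table so the nightly compare can match rows efficiently even without primary keys; something like this, assuming the column is called phone_number:

```sql
-- Sketch: unique indexes on the matching column in both tables
-- (column name phone_number is assumed, not the real schema).
CREATE UNIQUE INDEX ux_table_live_phone   ON table_live (phone_number);
CREATE UNIQUE INDEX ux_table_import_phone ON table_import (phone_number);
```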
Also, table_live will be in constant use, so performance is a priority.
Can anyone please advise on the best method and provide example scripts to do this on SQL Server?