Transactional Replication Alter Table Best Practices, calling all replication experts

Posted on 2003-11-17
Last Modified: 2012-05-04
We have a database application that we distribute to all of our customers.  We are using Red Gate to create the database script changes from one version to the next (and it's worked great for years).  Now we are introducing transactional replication on this db and need some help from someone who has been there (preferably).

The problem is that the migration scripts use the ALTER TABLE statement, which will NOT work on replicated tables.  The tool doesn't know how to generate the correct statements (sp_msrepladdcolumn and sp_msrepldropcolumn).  The data is being replicated to a db that contains triggers and stored procedures that transform the data and send it on to a reporting db.  Dropping and recreating the publication seems like it will have two problems: one is that it takes too much time, the other is that the data will probably be duplicated (not on the subscriber side, but on the reporting db side, due to the triggers and stored procedures).

What we need is a workable solution, or the name of a tool that will create migration scripts for replicated databases.  It seems like knowledge of replication here and elsewhere is fairly limited.  Btw, if anyone has the name of a book on replication that they recommend, I would appreciate it.  I looked on Amazon and Barnes and Noble and only found one book that is dedicated to replication, and it's basically a copy of BOL (SQL Server 2000 Reference Library - Iseminger).

Question by:PaulBarbin
LVL 13

Expert Comment

ID: 10513048
You can just drop the replication articles involved without dropping the entire publication...
(An article is an object being replicated)

use: sp_droparticle
and sp_addarticle

As long as you add the articles back afterwards, you will be fine.
If you script out the replication scenario from the server (drop/create), you can copy that code into the migration script for both the drops and the creates of the items.

The drops go before your Red Gate script has run.
The creates go after your Red Gate script has completed (successfully).
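The sequence described above can be sketched roughly as follows. This is a hedged example, not a tested script: the publication, article, and column names (MyPub, dbo.MyTable, NewCol, the subscriber names) are all placeholders you would replace with your own.

```sql
-- 1. Drop the article from the publication (subscription stays up).
EXEC sp_droparticle
    @publication = N'MyPub',       -- placeholder publication name
    @article     = N'MyTable';     -- placeholder article name

-- 2. Run the Red Gate migration script, e.g. something containing:
ALTER TABLE dbo.MyTable ADD NewCol int NULL;

-- 3. Re-add the article once the schema change has completed.
EXEC sp_addarticle
    @publication   = N'MyPub',
    @article       = N'MyTable',
    @source_object = N'MyTable';
```

If multiple tables are affected, the same drop/alter/add pattern is repeated per article, with all drops batched before the migration script and all adds batched after it.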


Author Comment

ID: 10514453
Dan, thanks for the reply.  Comments and questions...

When dropping and readding articles only, would we need to re-run the snapshot agent?  

Besides applying schema changes, one of our other goals was to keep the update time to an absolute minimum.  But we're talking about replicating 3-4 tables with 500,000 rows or more in each, plus 40 other smaller tables.  The time to reinit and rerun the replication can be as much as 20-30 minutes for our largest clients (which is another problem - this isn't for my company, it's part of a software package that we sell to hundreds of clients nationwide).

I've done some additional research since then and realized that when we readd the publication and subscription, there is an option to tell the snapshot NOT to sync.  This would be a great fix, except that the option apparently doesn't work.  Microsoft has created a KB doc for it and the fix is available but not publicly yet.  
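For anyone following along: the "don't sync" option referred to here is, I believe, the @sync_type parameter of sp_addsubscription. This is an assumption on my part (the KB article isn't named), and the names below are placeholders:

```sql
-- Assumption: the no-initial-sync option is sp_addsubscription's @sync_type.
-- N'none' tells SQL Server the subscriber already has the schema and data,
-- so no initial snapshot is applied.
EXEC sp_addsubscription
    @publication    = N'MyPub',      -- placeholder publication name
    @subscriber     = N'SubServer',  -- placeholder subscriber server
    @destination_db = N'SubDb',      -- placeholder subscription database
    @sync_type      = N'none';
```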

I have a few other questions I'd like to ask you.  I'll add new questions when we get this one resolved.  Thanks
LVL 13

Accepted Solution

danblake earned 500 total points
ID: 10515050
With this mechanism, you do not need a snapshot re-initialisation.
The subscription stays 'up'; you just drop the articles within it.
It is possible to have a subscription with no replicated articles.
Then apply the schema modifications.
Then re-add the articles.
Transactional replication then sends data modifications on the re-added tables to all sites without the snapshots being reinitialised.

I did not need to re-apply the snapshot after adding an article to a publication; I maintained transactional replication on the subscription as a permanently-up option.

The default for adding an article is not to force a snapshot after applying the transactional replication at one site.
Looking within the sp_addarticle code:
[@force_invalidate_snapshot = ] force_invalidate_snapshot
Acknowledges that the action taken by this stored procedure may invalidate an existing snapshot. force_invalidate_snapshot is a bit, with a default of 0. 0 specifies that adding an article will not cause the snapshot to be invalid. If the stored procedure detects that the change does require a new snapshot, an error will occur and no changes will be made. 1 specifies that adding an article may cause the snapshot to be invalid, and if there are existing subscriptions that would require a new snapshot, gives permission for the existing snapshot to be marked as obsolete and a new snapshot to be generated.
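So, per the BOL text above, the default of 0 means the call simply errors out if a new snapshot would be required, rather than silently invalidating the existing one. A hedged sketch of the two settings (object names are placeholders):

```sql
-- Default behaviour: error if the addition would require a new snapshot,
-- leave the existing snapshot untouched otherwise.
EXEC sp_addarticle
    @publication   = N'MyPub',
    @article       = N'MyTable',
    @source_object = N'MyTable',
    @force_invalidate_snapshot = 0;

-- Explicit permission: if existing subscriptions would need a new snapshot,
-- mark the current snapshot obsolete and generate a new one.
EXEC sp_addarticle
    @publication   = N'MyPub',
    @article       = N'MyTable2',
    @source_object = N'MyTable2',
    @force_invalidate_snapshot = 1;
```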

So, whilst a single article is down to allow schema updates to be applied to each site in turn, you can prevent data updates at the distributed sites (especially if merge replication is involved) by routing all updates through views rather than directly against the replicated tables.  The 'trick' is ensuring that all articles are modified/changed and then re-added.  I have used this technique many times.

I also found that dropping/recreating the publication resulted in a snapshot being required, but I did manage to create several T-SQL transactional publications without requiring snapshots to be created.

This should allow changing the status of a particular subscription with sp_changesubstatus, by stating that the subscription is already synchronised - that is, by setting the [@status =] 'status' parameter to active:

Value    Description
active   Subscriber is synchronized and is receiving data.
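A hedged sketch of that call (publication, article, and subscriber names are placeholders, not anything from this thread):

```sql
-- Mark the subscription as already synchronized and actively receiving data,
-- so no snapshot is pushed to it.
EXEC sp_changesubstatus
    @publication = N'MyPub',       -- placeholder publication name
    @article     = N'MyTable',     -- placeholder article name
    @subscriber  = N'SubServer',   -- placeholder subscriber server
    @status      = N'active';
```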

LVL 13

Expert Comment

ID: 10515193
When dropping and readding articles only, would we need to re-run the snapshot agent?


Author Comment

ID: 10515231
Thanks again for the info.  I have learned quite a bit about replication in the past 6 months.  But I could still use your expertise.  I'll award you points for this one and open a new one for further discussion.

Thanks again.

LVL 13

Expert Comment

ID: 10515334
No worries, glad to have been of assistance.


Question has a verified solution.
