Can replication cache changes at the subscriber and commit them in one transaction?

I'm using SQL Server 2008 R2, replicating two databases over a VPN from one server to another (also 2008 R2). I use push replication, and the publisher is also the distributor. Transactional replication runs on a set schedule because ETL packages pull tables into the publisher from many locations beforehand (to be replicated afterwards). These ETL packages truncate the old tables and recreate them (updated) on every run, so replication has to run on a set schedule or empty tables may get replicated.

What I'm wondering is whether replication can first cache the replicated changes on the subscriber side and then commit everything at once (at the subscriber). Sometimes replication drags out due to poor VPN or internet bandwidth, and I'd like to be able to control the process interlocks to some extent.
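For context, the push subscriptions were set up along these lines, with the Distribution Agent on a fixed schedule rather than running continuously (the publication, server, and database names here are placeholders, not the real ones):

-- At the publisher/distributor, in the published database.
EXEC sp_addsubscription
    @publication       = N'Pub_SalesDb',
    @subscriber        = N'SUBSCRIBER-SRV',
    @destination_db    = N'SalesDb',
    @subscription_type = N'push',
    @sync_type         = N'automatic';

-- Distribution Agent job on a fixed daily schedule, every 30 minutes,
-- instead of running continuously (frequency_type 4 = daily, frequency_subday 4 = minutes).
EXEC sp_addpushsubscription_agent
    @publication               = N'Pub_SalesDb',
    @subscriber                = N'SUBSCRIBER-SRV',
    @subscriber_db             = N'SalesDb',
    @subscriber_security_mode  = 1,   -- Windows authentication at the subscriber
    @frequency_type            = 4,
    @frequency_interval        = 1,
    @frequency_subday          = 4,
    @frequency_subday_interval = 30;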

There are SQL Agent jobs and other processes that run after replication comes through on the subscriber side, and interlocking them becomes a problem when we push replication to be as frequent as possible.
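To make the interlock concrete, this is roughly the kind of guard I'd like those downstream jobs to run first. It's only a sketch: the publication and job names are placeholders, and I'm assuming the subscriber-side metadata table dbo.MSreplication_subscriptions reflects the last delivery time.

-- Subscriber side, in the subscribed database. Untested sketch:
-- only run the post-replication load if replication delivered within the last 10 minutes.
IF EXISTS (
    SELECT 1
    FROM dbo.MSreplication_subscriptions
    WHERE publication = N'Pub_SalesDb'                            -- placeholder name
      AND [time] >= DATEADD(MINUTE, -10, GETDATE())
)
BEGIN
    EXEC msdb.dbo.sp_start_job @job_name = N'PostReplication_Load';  -- placeholder job
END
ELSE
BEGIN
    RAISERROR('Replication has not delivered recently; holding downstream jobs.', 16, 1);
END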
Asked by menreeq

David Todd (Senior DBA) commented:
Hi,

You haven't said whether you are running transactional replication or snapshot replication. If you are completely rebuilding the tables anyway, then snapshot replication may be more efficient.
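Untested sketch, and the names are just placeholders, but roughly:

-- Snapshot publication regenerated after each ETL load, then pushed as a whole.
EXEC sp_addpublication
    @publication = N'Pub_SalesDb_Snapshot',
    @repl_freq   = N'snapshot',
    @status      = N'active',
    @allow_push  = N'true';

-- Snapshot Agent schedule; line this up to run after the ETL packages finish.
EXEC sp_addpublication_snapshot
    @publication    = N'Pub_SalesDb_Snapshot',
    @frequency_type = 4;   -- daily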

Maybe try staggering the replications, i.e. break the publication(s) into smaller units so each can be delivered (and retried) independently.
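For instance, something like this, with each smaller publication getting its own agent schedule (the table and publication names are placeholders):

-- Two smaller transactional publications instead of one big one.
EXEC sp_addpublication @publication = N'Pub_SalesDb_Part1', @repl_freq = N'continuous', @status = N'active';
EXEC sp_addarticle @publication = N'Pub_SalesDb_Part1', @article = N'Orders',    @source_object = N'Orders';
EXEC sp_addarticle @publication = N'Pub_SalesDb_Part1', @article = N'Customers', @source_object = N'Customers';

EXEC sp_addpublication @publication = N'Pub_SalesDb_Part2', @repl_freq = N'continuous', @status = N'active';
EXEC sp_addarticle @publication = N'Pub_SalesDb_Part2', @article = N'Products',  @source_object = N'Products';
-- ...and so on for the remaining tables, staggering each agent's schedule.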

Otherwise, look at improving the pipe between the two sites; that's really the only fix for a bandwidth problem.

HTH
  David
menreeq (Author) commented:
Hi David,
Transactional replication is implemented on a set schedule.

Currently I replicate a whole database of 10 tables or so per publication. When you mention staggering above, do you mean doing one or two tables per publication? Even then I'm still dependent on the publisher's VPN and internet connectivity, which is what I'm trying to work around. I'm afraid improving the pipe is beyond my control, since it's the client's bandwidth and I have no say over it.
David Todd (Senior DBA) commented:
Hi,

Fairly easy, really. If the client is concerned about this issue, show them the volume of data currently going down the pipe and point out that avoiding the timeouts requires a fatter pipe to carry x data in y time. If the client listens and improves the pipe, you've succeeded. If the client doesn't listen, put it in writing and set the issue aside until they do.

What is the business reason for the replication? What problem are you trying to solve?

Regards
  David

