Transactional replication, a couple of problems

Posted on 2013-01-30
Last Modified: 2013-02-12
SQL Server 2005, transactional replication. Fairly simple: one pub, one sub, running all day long, just to keep redundancy in sync. Came to my attention that many tables were not included, so we spent some time getting them in place over the weekend.

Now we're having recurring problems blowing out the transaction log on the subscriber, and I have an error I haven't encountered before.

Server Error Log
Time      Error
2013-01-29T19:45:51.743      The transaction log for database 'XX' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

2013-01-29T19:45:51.743      if COLUMNPROPERTY( OBJECT_ID('MSreplication_subscriptions'),'distribution_agent','AllowsNull') <> 1 BEGIN alter table MSreplication_subscriptions alter column distribution_agent sysname null end

2013-01-29T19:45:51.743      The transaction log for database  'XX' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

Aside from the above, there are no obvious errors.  And, the articles are being transferred just fine.
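The error text points at `log_reuse_wait_desc`, which you can check directly (a minimal sketch; the database name is a placeholder):

```sql
-- Shows why each database's log space cannot be reused;
-- REPLICATION here would mean the log reader agent hasn't caught up.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'XX';
```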

Any suggestions from EE replication experts would truly be appreciated.
Question by:dbaSQL
LVL 37

Expert Comment

ID: 38835833
Have you tried to truncate it manually?
The transaction log can't be truncated until all subscribers have received the updates from that log, so if replication is behind for some reason, you might get these messages.
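A quick sketch of checking for replication lag and taking a log backup so inactive VLFs can be reused (server and backup path are placeholders):

```sql
-- Run in the published database: shows replicated-transaction
-- counts and latency, i.e. how far behind the log reader is.
EXEC sp_replcounters;

-- Back up the log; once the log reader has processed the records,
-- this lets the inactive portion of the log be reused.
BACKUP LOG [XX] TO DISK = N'X:\Backups\XX_log.trn';
```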
LVL 69

Expert Comment

by:Scott Pletcher
ID: 38835979
FIRST, you MUST give the log more disk space or another file to use.  The database is DEAD until it can write to its log.
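A minimal sketch of that emergency fix, adding a second log file on another drive (file name, path, and sizes are placeholders):

```sql
-- Gives the database somewhere to write log records again;
-- remove or shrink this file later once the root cause is fixed.
ALTER DATABASE [XX]
ADD LOG FILE
(
    NAME = XX_log2,
    FILENAME = N'Y:\SQLLogs\XX_log2.ldf',
    SIZE = 4GB,
    FILEGROWTH = 512MB
);
```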
LVL 17

Author Comment

ID: 38836327
I gotcha, Scott. In fact, I already have the new disk getting plugged in at EOD. But that is not the source of the problem; it is only a band-aid until I can figure out what is causing this.

My gut says it is simply that we are using some default replication settings that are no longer adequate. Even with that, adding 1 new article or 50 new articles should not impede the integrity of my redundancy. I'm thinking it's default-configured at every end of the stick -- replication, transaction log, and otherwise. We were logging X before, we added maybe 45 new articles, and now we are logging XXXX. The box is already weak on disk. There's no way it is going to allow for this without a lot of pain.

LOG_BACKUP is the reason the last log was blown (log_reuse_wait_desc). When I looked at it the other day, we had 9K+ VLFs. Then, after clearing the log, we had 380. Pretty sure Miss Tripp would wrinkle her nose:

I was reading this earlier:

Specifically, the 'Transactional Replication and the Transaction Log' section.

All of it suggests that, yes, we've got some default config that isn't going to allow for the new articles without all kinds of pain. (You should see the SQL error log... I've got 3314's all over the place.)

So.  What to do?
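For reference, the VLF count mentioned above can be checked with (run in the affected database; name is a placeholder):

```sql
-- Returns one row per virtual log file; thousands of rows
-- usually means the log grew in many tiny autogrow increments.
USE [XX];
DBCC LOGINFO;
```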

LVL 69

Expert Comment

by:Scott Pletcher
ID: 38837025
Once the disk issue is temporarily overcome ...

If you're using immediate updating subscriptions, you could change them to use queued updating. That should reduce the overhead.
LVL 17

Author Comment

ID: 38839227
This replication was authored by someone else. I'm always a little hesitant about something I didn't define myself. In the GUI, in Replication Monitor, under Subscription Options, these are all of the subscription options:

Independent Distribution Agent            True
Snapshot always available                 False
Allow anonymous subscriptions             False
Attachable subscription database          False
Allow pull subscriptions                  True
Allow initialization from backup files    False
Allow non-SQL subscribers                 False
Allow data transformations                False
Allow peer-to-peer subscriptions          False
Replicate schema changes                  True
Allow immediate updating subscriptions    False
Allow queued updating subscriptions       False

Those bottom two are greyed out/untouchable. I would have thought at least one would be enabled. I'm not sure how to alter this, Scott. Also, if I run this on the subscriber, I get nothing back:

EXEC sp_helppullsubscription
    @publisher = 'xxxx',
    @publisher_db = 'xxx',
    @publication = 'xxx'
LVL 17

Author Comment

ID: 38839258
sp_helpsubscription on the publisher tells me this:

status               Active
sync type            Automatic
sub type             Push
full sub             1
update mode          read only
loopback detection   0
offload enabled      0
LVL 69

Expert Comment

by:Scott Pletcher
ID: 38840247
If the data to be replicated is at all large, you want to enable "Allow initialization from backup files" and do the initial sync using a db backup. You can do that in the existing publication. You might have to drop and re-add the subscription, "telling" SQL you'll do the initialization with a backup instead of SQL being forced to initially send *all* data to the subscriber to start the replication process.
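A sketch of that backup-initialization approach; all names and paths here are placeholders, not the poster's actual objects:

```sql
-- 1. Allow backup initialization on the existing publication.
EXEC sp_changepublication
    @publication = N'MyPub',
    @property = N'allow_initialize_from_backup',
    @value = N'true';

-- 2. Back up the publisher database, restore it on the subscriber,
--    then add the subscription pointing at that backup.
EXEC sp_addsubscription
    @publication = N'MyPub',
    @subscriber = N'SubServer',
    @destination_db = N'SubDB',
    @sync_type = N'initialize with backup',
    @backupdevicetype = N'disk',
    @backupdevicename = N'X:\Backups\PubDB.bak';
```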

Sorry; I should have explicitly stated that to change to queued sync from automatic that you would need to create a new publication (i.e., probably drop the old one and recreate it).
LVL 17

Author Comment

ID: 38843774
Scott, can you elaborate for me? Let me know why queued sync is the more optimal mode, or even just why you think this may help to resolve the problem.
LVL 69

Accepted Solution

Scott Pletcher earned 500 total points
ID: 38843975
When you do "immediate update", SQL does exactly that: it immediately connects to the subscriber(s) and involves them in what's called a two-phase commit process for all modifications (DELETEs, INSERTs, UPDATEs).  

Basically this means the entire transaction integrity process ("all or none") must occur in tandem on both machines involved, and SQL literally waits until both processes complete before applying any mods to the underlying table.  [Naturally Google "two phase commit" for more details.]
LVL 17

Author Closing Comment

ID: 38880780
Hey Scott, I apologize for the delayed response. If you could give me the scoop on the correct steps to take in order to make this change, I would be hugely grateful. Otherwise, I am grateful for the input, and I can see now why this would be to our gain.

Thank you very much for the insight.
LVL 69

Expert Comment

by:Scott Pletcher
ID: 38880854
Unfortunately, IIRC, I think the type of publication -- immediate or delayed -- is set at the time you create the pub.  Thus, I think you would have to drop the old publication and start a new one to switch to deferred updates.  But, in your situation, I think deferred updates would be preferable, so it's probably worth the one-time hassle to do the switch.
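A rough outline of that drop-and-recreate switch, assuming a simple single-publication setup; every name below is a placeholder, and the article list and snapshot agent would also need to be re-added afterward:

```sql
-- Drop the existing subscription and publication.
EXEC sp_dropsubscription
    @publication = N'MyPub',
    @article = N'all',
    @subscriber = N'all';
EXEC sp_droppublication @publication = N'MyPub';

-- Recreate the publication with queued updating enabled.
EXEC sp_addpublication
    @publication = N'MyPub',
    @status = N'active',
    @allow_queued_tran = N'true',
    @conflict_policy = N'pub wins';
-- Then re-add articles (sp_addarticle), the snapshot agent
-- (sp_addpublication_snapshot), and the subscription.
```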

