Solved

Transactional replication, couple problems

Posted on 2013-01-30
772 Views
Last Modified: 2013-02-12
v2005, transactional replication. Fairly simple: one pub, one sub, running all day long, just to keep redundancy in sync. It came to my attention that many tables were not included, so we spent some time getting them in place over the weekend.

Now we're having recurring problems blowing out the tran log on the subscriber, and I have an error I haven't encountered before.


Server Error Log
Time      Error
2013-01-29T19:45:51.743      The transaction log for database 'XX' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

2013-01-29T19:45:51.743      if COLUMNPROPERTY( OBJECT_ID('MSreplication_subscriptions'),'distribution_agent','AllowsNull') <> 1 BEGIN alter table MSreplication_subscriptions alter column distribution_agent sysname null end

2013-01-29T19:45:51.743      The transaction log for database  'XX' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases


Aside from the above, there are no obvious errors.  And, the articles are being transferred just fine.
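As the error text itself suggests, the first diagnostic step is to check what is holding the log. A quick query against sys.databases (a sketch, using the masked database name 'XX' from the messages above as a placeholder):

```sql
-- Shows why log space can't be reused: REPLICATION here would mean the
-- log reader / distribution is behind; LOG_BACKUP means the log simply
-- needs a log backup before it can be truncated.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'XX';
```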

Any suggestions from EE replication experts would truly be appreciated.
Question by:dbaSQL
11 Comments
 
LVL 37

Expert Comment

by:momi_sabag
ID: 38835833
Have you tried to truncate it manually?
The transaction log can't be truncated until all subscribers have received the updates from that log, so if replication is behind for some reason, you might get these messages.
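One way to check whether replication is what's pinning the log (a sketch; 'XX' is the masked database name, and these are run in the published database):

```sql
-- Reports the oldest active transaction and, for a published database,
-- the oldest replicated transaction not yet delivered:
DBCC OPENTRAN ('XX');

-- Lists transactions in the log that are marked for replication
-- but have not yet been moved to the distribution database:
EXEC sp_repltrans;
```

If sp_repltrans returns a long list, the Log Reader Agent is behind and the log can't be truncated past those records, regardless of log backups.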
 
LVL 69

Expert Comment

by:ScottPletcher
ID: 38835979
FIRST, you MUST give the log more disk space or another file to use.  The database is DEAD until it can write to its log.
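A stopgap along those lines (a sketch; the file path and sizes are hypothetical, so point it at whatever drive actually has room):

```sql
-- Add a second log file on a drive with free space so the database
-- can write again while the root cause is investigated.
ALTER DATABASE [XX]
ADD LOG FILE
(
    NAME = XX_log2,
    FILENAME = 'E:\SQLLogs\XX_log2.ldf',
    SIZE = 4GB,
    FILEGROWTH = 512MB
);
```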
 
LVL 17

Author Comment

by:dbaSQL
ID: 38836327
I gotcha, Scott. In fact, I already have the new disk getting plugged in at EOD. But that is not the source of the problem; it is only a bandaid until I can figure out what is causing this.

My gut says it is simply that we are using some default replication settings that are no longer adequate. Even so, adding 1 new article or 50 new articles should not impede the integrity of my redundancy. I'm thinking it's default-configured at every end of the stick -- replication, transaction log, and otherwise. We were logging X before, we added maybe 45 new articles, and now we are logging XXXX. The box is already weak on disk. There's no way it is going to allow for this without a lot of pain.

LOG_BACKUP is the reason the last log filled (log_reuse_wait_desc). When I looked at it the other day, we had 9K+ VLFs. Then, after clearing the log, we had 380. Pretty sure Miss Tripp would wrinkle her nose:
http://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/
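For reference, the VLF count mentioned above can be checked with DBCC LOGINFO, which returns one row per virtual log file (a sketch; run it in the affected database):

```sql
USE XX;        -- the masked database name from the error log
DBCC LOGINFO;  -- one row per VLF; the row count is the VLF total
```

9K+ rows there is exactly the over-fragmented-log symptom that blog post describes, usually caused by many small autogrow increments.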

I was reading this earlier:  http://msdn.microsoft.com/en-us/library/ms345414.aspx

Specifically, the 'Transactional Replication and the Transaction Log' section.

All of it suggests that, yes, we've got some default config that isn't going to allow for the new articles without all kinds of pain. (You should see the SQL error log; I've got 3314s all over the place.)

So.  What to do?
 
LVL 69

Expert Comment

by:ScottPletcher
ID: 38837025
Once the disk issue is temporarily overcome ...

If you're using immediate updating subscriptions, you could change them to use queued.  That should reduce the overhead.
 
LVL 17

Author Comment

by:dbaSQL
ID: 38839227
This replication was authored by someone else, and I'm always a little hesitant about something I did not define.  In the GUI, in Replication Monitor under Subscription Options, these are all of the sub options:

Independent Distribution Agent         True
Snapshot always available                   False
Allow anonymous Subscriptions         False
Attachable Subscription database      False
Allow pull  subscriptions                      True
Allow initialization from backup files    False
Allow non-SQL subscribers                    False
Allow data transformations                   False
Allow peer to peer subscriptions          False
Replicate schema changes                     True
Allow immediate updating subscriptions    False
Allow queued updating subscriptions          False


Those bottom two are grey/untouchable; I would have thought at least one would be enabled.  I'm not sure how to alter this, Scott.  Also, if I run this on the subscriber, I get nothing back:

sp_helppullsubscription @publisher = 'xxxx',
    @publisher_db = 'xxx',
    @publication = 'xxx'
 
LVL 17

Author Comment

by:dbaSQL
ID: 38839258
sp_helpsubscription on the publisher tells me this:

status                       Active
synch type              Automatic
sub type                       Push
full sub                            1
update mode           read only
loopback  detection        0
offload enabled               0
 
LVL 69

Expert Comment

by:ScottPletcher
ID: 38840247
If the data to be replicated is at all large, you want to enable "allow initialization from backup files" and do the initial "sync" using a db backup.  You can do that in the existing publication.  You might have to drop and re-add the subscriber, "telling" SQL you'll do the initialization with a backup instead of SQL being forced to initially send *all* data to the subscriber to start the replication process.
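The 2005-era procedure calls for that approach look roughly like this (a sketch; the publication, subscriber, and backup path names are placeholders):

```sql
-- On the publisher: allow the publication to be initialized from a backup.
EXEC sp_changepublication
    @publication = N'MyPub',
    @property    = N'allow_initialize_from_backup',
    @value       = N'true';

-- Take a full backup of the published database, restore it on the
-- subscriber, then add the subscription pointing at that backup:
EXEC sp_addsubscription
    @publication      = N'MyPub',
    @subscriber       = N'SubServer',
    @destination_db   = N'XX',
    @sync_type        = N'initialize with backup',
    @backupdevicetype = N'disk',
    @backupdevicename = N'\\share\backups\XX.bak';
```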

Sorry; I should have explicitly stated that to change to queued sync from automatic that you would need to create a new publication (i.e., probably drop the old one and recreate it).
 
LVL 17

Author Comment

by:dbaSQL
ID: 38843774
Scott, can you elaborate for me why queued sync is the better mode?  Or even just why you think this may help resolve the problem?
 
LVL 69

Accepted Solution

by:
ScottPletcher earned 500 total points
ID: 38843975
When you do "immediate update", SQL does exactly that: it immediately connects to the subscriber(s) and involves them in what's called a two-phase commit process for all modifications (DELETEs, INSERTs, UPDATEs).  

Basically this means the entire transaction integrity process ("all or none") must occur in tandem on both machines involved, and SQL literally waits until both processes complete before applying any mods to the underlying table.  [Naturally Google "two phase commit" for more details.]
 
LVL 17

Author Closing Comment

by:dbaSQL
ID: 38880780
Hey Scott, I apologize for the delayed response.  If you could give me the scoop on the correct steps to take, in order to make this change, I would be hugely grateful.  Otherwise, I am grateful for the input, and I can see now, why this would be to our gain.

Thank you very much for the insight.
 
LVL 69

Expert Comment

by:ScottPletcher
ID: 38880854
Unfortunately, IIRC, I think the type of publication -- immediate or delayed -- is set at the time you create the pub.  Thus, I think you would have to drop the old publication and start a new one to switch to deferred updates.  But, in your situation, I think deferred updates would be preferable, so it's probably worth the one-time hassle to do the switch.
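The drop-and-recreate Scott describes would look roughly like this (a sketch with placeholder names; the articles and the subscription have to be re-added to the new publication afterward):

```sql
-- Remove the existing subscription and publication:
EXEC sp_dropsubscription
    @publication = N'MyPub',
    @article     = N'all',
    @subscriber  = N'SubServer';
EXEC sp_droppublication @publication = N'MyPub';

-- Recreate the publication with queued updating enabled:
EXEC sp_addpublication
    @publication       = N'MyPub',
    @allow_queued_tran = N'true',   -- enables queued updating subscriptions
    @repl_freq         = N'continuous',
    @status            = N'active';
```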
