Solved

DFS-R Backlog file list

Posted on 2013-12-17
2,444 Views
Last Modified: 2013-12-23
I have a problem with DFS Replication. I can see that our users have dumped a whole heap of files (approx. 12,000) into a couple of replicated folders and the system is backlogging these files. The backlog is not growing very quickly, which suggests that DFS is still working to a degree. However, when I run a DFS Backlog report it only gives me the first 100 files. Ideally I want to get a list of *all* the backlogged files, as I want to move them to a non-DFS-replicated folder and then re-introduce them to replication slowly.

Anybody know how to get a complete list of backlogged files?
Question by:BluecubeTechnology
5 Comments
 
LVL 8

Assisted Solution

by:MarkieS
MarkieS earned 500 total points
ID: 39725953
Hi

Take a look at DFSRMon.exe.  
http://certcollection.org/forum/topic/106034-dfsrmon-v110/

It's kinda clunky and slow on the GUI, but it is gathering a lot of info in the background, so don't expect it to be fast.
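If DFSRMon still only surfaces part of the list, you can also pull the backlog straight out of the DFSR WMI provider. Rough PowerShell sketch below - the group, folder and server names are placeholders, and the method/property names (GetVersionVector, GetOutboundBacklogFileIdRecords, FullPathName) are from memory of the root\MicrosoftDFS provider, so treat it as a starting point rather than a finished script:

# Placeholders - substitute your replication group, folder and member servers.
$group  = "Contoso-RG"
$folder = "Projects"
$sendingMember   = "FILESERVER01"   # upstream member that holds the backlog
$receivingMember = "FILESERVER02"   # downstream member waiting for the files

$filter = "ReplicationGroupName='$group' AND ReplicatedFolderName='$folder'"

# Version vector of the receiving member, from the DFSR WMI provider.
$rcv = Get-WmiObject -ComputerName $receivingMember -Namespace "root\MicrosoftDFS" `
           -Class DfsrReplicatedFolderInfo -Filter $filter
$vv  = $rcv.GetVersionVector().VersionVector

# Ask the sending member which ID records are still outbound against that vector.
$snd = Get-WmiObject -ComputerName $sendingMember -Namespace "root\MicrosoftDFS" `
           -Class DfsrReplicatedFolderInfo -Filter $filter
$backlog = $snd.GetOutboundBacklogFileIdRecords($vv).BacklogIdRecords

$backlog.Count                                   # total backlog, not capped at 100
$backlog | Select-Object FileName, FullPathName  # the list of backlogged files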

Also use TRACE32.EXE to monitor the live log files in %WINDIR%\debug\DFSRxxxxx.log.gz and DFSRxxxxx.log.

DFSRxxxxx.log is the current file being used, and TRACE32.EXE will show you live replication activity as it happens. For full instructions on how to view this, look at http://support.microsoft.com/kb/958893
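If you don't have TRACE32 to hand, you can also pull the recent error lines straight out of the current (uncompressed) log with PowerShell - quick sketch only, the rolled-over logs are the .gz files:

# Grab the most recently written DFSR debug log.
$log = Get-ChildItem "$env:WINDIR\debug\Dfsr*.log" |
       Sort-Object LastWriteTime -Descending |
       Select-Object -First 1

# Show the latest [Error:...] entries - the same lines TRACE32 highlights.
Select-String -Path $log.FullName -Pattern '\[Error:' | Select-Object -Last 50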

You may find that staging areas are your problem.   A temporary increase in staging area size to get you through your "time of adversity" may result in faster replication.  You can reduce the staging area size afterwards.
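If you're on Server 2012 R2 or later the DFSR PowerShell module makes the quota change easy to script; on older servers the same setting is on the Staging tab of the membership properties in DFS Management. Sketch only, with placeholder names:

# Bump the staging quota for one member of one replicated folder.
Import-Module DFSR
Set-DfsrMembership -GroupName "Contoso-RG" -FolderName "Projects" `
    -ComputerName "FILESERVER01" -StagingPathQuotaInMB 10240 -Force

# Confirm the new quota afterwards.
Get-DfsrMembership -GroupName "Contoso-RG" -ComputerName "FILESERVER01" |
    Select-Object FolderName, StagingPathQuotaInMB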

I would have reservations about removing the files from replication now only to add them back in slowly. The act of removing the files has to be replicated itself, so this effectively triples the amount of replication going on.

Kind regards,
Mark S.
 

Author Comment

by:BluecubeTechnology
ID: 39726346
Thanks for the reply; TRACE32.exe has been helpful in getting at the logs.

I've increased the staging area size to 10GB for the affected folders, but this doesn't appear to have got things moving. I've also checked the bandwidth between the office and the datacentre, and I'm not seeing anything like the amount of traffic that would suggest the line is saturated (it's a 40Mbps EFM circuit).

This is an extract from the live log:

+	present                         1
+	nameConflict                    0
+	attributes                      0x20
+	ghostedHeader                   0
+	data                            0
+	gvsn                            {2909EAD2-DA4D-4F28-9CB2-7028EB48070D}-v2014664
+	uid                             {2909EAD2-DA4D-4F28-9CB2-7028EB48070D}-v2014664
+	parent                          {2909EAD2-DA4D-4F28-9CB2-7028EB48070D}-v2003432
+	fence                           Default (3)
+	clockDecrementedInDirtyShutdown 0
+	clock                           20131121 16:02:37.218 GMT (0x1cee6d318ff22d9)
+	createTime                      20131121 11:06:57.230 GMT
+	csId                            {4D219D40-1196-4B26-9CD6-41DFC0B14D54}
+	hash                            00000000-00000000-00000000-00000000
+	similarity                      00000000-00000000-00000000-00000000
+	name                            3109001 - <File_Name>.xlsx
+	 Error:
+	[Error:9027(0x2343) RpcFinalizeContext downstreamtransport.cpp:1117 25016 C A failure was reported by the remote partner]
+	[Error:9027(0x2343) DownstreamTransport::RdcGet downstreamtransport.cpp:5269 25016 C A failure was reported by the remote partner]
+	[Error:9024(0x2340) DownstreamTransport::RdcGet downstreamtransport.cpp:5269 25016 C The file meta data is not synchronized with the file system]
20131218 11:28:39.602 25016 INCO  6582 InConnection::LogTransferActivity Failed to receive RAWGET uid:{2909EAD2-DA4D-4F28-9CB2-7028EB48070D}-v2014664 gvsn:{2909EAD2-DA4D-4F28-9CB2-7028EB48070D}-v2014664 fileName:3109001 - Coca-cola - Supreme Control.xlsx connId:{B8966922-E91B-4B56-807F-CC46EA43CA52} csId:{4D219D40-1196-4B26-9CD6-41DFC0B14D54} stagedSize:0 Error:
+	[Error:9027(0x2343) DownstreamTransport::RdcGet downstreamtransport.cpp:5346 25016 C A failure was reported by the remote partner]
+	[Error:9027(0x2343) RpcFinalizeContext downstreamtransport.cpp:1117 25016 C A failure was reported by the remote partner]
+	[Error:9027(0x2343) DownstreamTransport::RdcGet downstreamtransport.cpp:5269 25016 C A failure was reported by the remote partner]
+	[Error:9024(0x2340) DownstreamTransport::RdcGet downstreamtransport.cpp:5269 25016 C The file meta data is not synchronized with the file system]
20131218 11:28:39.602 25016 INCO  2831 InConnection::ProcessErrorStatus (Ignored) Remote error connId:{B8966922-E91B-4B56-807F-CC46EA43CA52} state:CONNECTED Error:
+	[Error:9027(0x2343) DownstreamTransport::RdcGet downstreamtransport.cpp:5346 25016 C A failure was reported by the remote partner]
+	[Error:9027(0x2343) RpcFinalizeContext downstreamtransport.cpp:1117 25016 C A failure was reported by the remote partner]
+	[Error:9027(0x2343) DownstreamTransport::RdcGet downstreamtransport.cpp:5269 25016 C A failure was reported by the remote partner]
+	[Error:9024(0x2340) DownstreamTransport::RdcGet downstreamtransport.cpp:5269 25016 C The file meta data is not synchronized with the file system]




Unfortunately I'm not sure how to interpret these errors. Some searching has suggested that this is due to temp files not being replicated, but the file attributes for this file (0x20) and others I have looked at do not indicate that the temporary attribute is set, so I think that line of investigation is a dead end.
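For anyone wanting to check the attribute bits themselves, a quick PowerShell test (the path below is just an example; 0x20 is FILE_ATTRIBUTE_ARCHIVE and Temporary would be 0x100):

# Inspect the attributes of a suspect file and test the Temporary bit.
$file = Get-Item 'D:\Data\Projects\Example.xlsx'
"{0:x}" -f [int]$file.Attributes                                       # raw mask, e.g. 20
($file.Attributes -band [System.IO.FileAttributes]::Temporary) -ne 0   # $True = Temporary set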
 

Accepted Solution

by:BluecubeTechnology
BluecubeTechnology earned 0 total points
ID: 39726725
I have a solution:

The problem was with the TCP Chimney Offload feature. Once this was disabled (as per below) the cork was removed and DFS-R began to flow once again. Thank you MarkieS; your suggestion of using TRACE32.exe surfaced the errors that led me to the solution.

1. Click Start, click Run, type Regedit, and then click OK. 
 
2. Locate the following registry subkey: 
 
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters 
 
*If any of the registry values below are not present, create them.
 
3. Double-click the EnableTCPChimney registry entry. 
 
4. In the Edit DWORD Value dialog box, type 0 in the Value data box, and then click OK. 
 
5. Double-click the EnableRSS registry entry. 
 
6. In the Edit DWORD Value dialog box, type 0 in the Value data box, and then click OK. 
 
7. Double-click the EnableTCPA registry entry. 
 
8. In the Edit DWORD Value dialog box, type 0 in the Value data box, and then click OK. 
 
9. Restart the server.
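If you'd rather script the registry change, something along these lines should do the same as steps 2-8 (run from an elevated PowerShell prompt and reboot afterwards; sketch only, so test it on your platform first):

# Create (or overwrite) the three DWORD values from the steps above.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters'
foreach ($name in 'EnableTCPChimney', 'EnableRSS', 'EnableTCPA') {
    New-ItemProperty -Path $key -Name $name -PropertyType DWord -Value 0 -Force | Out-Null
}

Restart-Computer   # step 9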


 
LVL 8

Expert Comment

by:MarkieS
ID: 39726749
Good to hear you're running again.
 

Author Closing Comment

by:BluecubeTechnology
ID: 39735677
Because connections are buffered and processed on the TOE (TCP/IP Offload Engine) chip, resource limitations happen more often than they would if processed by the ample CPU and memory resources that are available to the operating system. This limitation of resources on the TOE chip can cause communication issues.
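If anyone wants to confirm the values are in place after the reboot, a quick check:

# Read back the three values set in the accepted solution.
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters' |
    Select-Object EnableTCPChimney, EnableRSS, EnableTCPA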
