Solved

What is the fastest way to replicate DFS data to new member server?

Posted on 2010-08-24
Medium Priority
1,149 Views
Last Modified: 2013-12-02
I am going to be adding a new member server to our existing Windows Server 2003 R2 x86 DFSR infrastructure to replace a different member.  The primary server will have a one-way connection with the new member server. The new server is just meant to be an off-site failover/backup file server in case our main server goes down.

Is there a faster way to copy around 1TB of data to the new DFS member other than simply adding it to the Replication group and waiting? This new server will be moved to one of our remote sites across town, and I would like to replicate the 1TB of data over to it while it is still at my site locally hooked up on our gigabit LAN. I would like to have this data copied within 2 days if possible because I need to deliver this remote server ASAP.

I have seen that various people have used robocopy or similar software to accomplish this, but I do not want to change or overwrite any of the permissions while copying the files over with a third-party program.

Once this data is copied over, I am assuming that I can just add the new member server to the existing replication group, and it will make sure the files are the same and then keep them updated...

If anyone has any recommendations for the best process to copy all of this data over safely and quickly, it would be greatly appreciated!
Question by:RavenInd
14 Comments
 
LVL 20

Expert Comment

by:woolnoir
ID: 33518371
You could add the server to the replication set while it's configured locally, wait for replication to occur, and then move it offsite. The change in its AD logical location shouldn't affect its ability to replicate, so the bulk of the work should be able to happen locally.
0
 
LVL 4

Expert Comment

by:TechnoButt
ID: 33524677
Be forewarned: prior to 2008 R2, DFS was not able to do read-only replicated folders. You'll have to use a convoluted NTFS/share-permissions configuration to achieve one-way replication. In 2008 it could be done for SYSVOL only; in 2008 R2 they opened it up to member servers.

The above suggestion of setting up replication with the server locally (temporarily on a local IP), allow replication to occur, and then move offsite is a pretty good one.

I'd use robocopy with appropriate switches (/copyall, perhaps) to preserve your permissions if you want to do a faster manual copy. If your target volume is in a SAS external enclosure or similar, this might be the fastest way to accomplish your goals (without saturating your network for the long copy): move the enclosure to the source server, copy the files with robocopy, then move the enclosure to the destination server and join the replication group.

It should use hash values to determine if replication is necessary and discover the files already on the server and not replicate again (although it may true up metadata, like ntfs permissions/etc as part of the replication).
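For reference, the pre-seed copy described above might look something like this. The paths are hypothetical (E: standing in for the external enclosure), and note that on Server 2003 robocopy ships in the Resource Kit Tools rather than in the box:

```bat
:: Pre-seed the DFS data while preserving NTFS security (hypothetical paths).
:: /E copies subfolders, including empty ones.
:: /COPYALL copies data, attributes, timestamps, security (ACLs), owner, and audit info.
:: /R and /W keep a few failed files from stalling the whole run; /LOG records what happened.
robocopy D:\DFSRoots\Shares E:\DFSRoots\Shares /E /COPYALL /R:2 /W:5 /LOG:C:\preseed.log
```

/COPYALL needs to run from an elevated/administrative account that holds backup and security privileges, otherwise the ACL copy will fail on files you don't own.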
 

Author Comment

by:RavenInd
ID: 33527230
I could possibly set up two-way replication to our "redundant" site if needed. Would you recommend that? I have heard nothing but bad things about one-way replication. I currently have most of our folders set up like this, and am now noticing that the folder sizes on the primary and secondary differ. I am thinking I might need to redo most of the replication groups as two-way. I also had an issue in the past when I had a one-way replication group and then changed it to two-way = BAD IDEA. The secondary folder seemed to have taken precedence and folders were removed from the primary.

Do you guys see any issue in doing a two-way replication for our "backup" site?

Thx
0
 
LVL 20

Expert Comment

by:woolnoir
ID: 33528463
Well, don't forget that by default DFS doesn't replicate certain types of files, temp files being the primary culprit. Although tiny, they do add up when you have shares with several hundred thousand files. I wouldn't worry too much about small size differences, or you can remove all file-type filters from the DFS properties.
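If you want to inspect or clear those filters from the command line, the 2003 R2 dfsradmin tool can do it; the replication group and folder names below are placeholders, and it's worth checking `dfsradmin rf set /?` for the exact switch names on your build:

```bat
:: Show the replicated folders and their current file filters (placeholder group name).
dfsradmin rf list /rgname:"FileShares" /attr:rfname,filefilter

:: Clear the default filter (~*, *.bak, *.tmp) so temp files replicate too.
dfsradmin rf set /rgname:"FileShares" /rfname:"Shares" /filefilter:""
```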

In general I hate one-way replication... I always go for two-way unless the business need specifically dictates otherwise.

Also keep in mind that DFS has issues with the file-locking mechanism that Office apps employ, meaning you could run into conflicts in a multi-person file-editing situation.
 

Author Comment

by:RavenInd
ID: 33531081
That makes sense - I believe our problem is the one-way replication, then. I changed one of our one-way replication groups to two-way and we had files/folders getting removed from our primary server (not good!). So I think I will recreate the replication groups again. The server has now been brought out to our remote site, so the replication will not be too fast (10Mb pipe between).

Will I have any issues with deleting replication groups during the day? Most of them are disconnected or disabled by now. I just want to make sure that our namespaces will remain intact, because all of our employees have mapped network drives that point to the namespaces.

I will only have 2 servers to replicate to - does anyone know of any issues I should expect from removing the existing replication groups and recreating them during the day? I just want to make sure we don't have folders/files getting deleted again.

Thanks!
 
LVL 20

Expert Comment

by:woolnoir
ID: 33531171
The problem with DFS is that if you remove the replication entries and recreate them for folders that already exist and contain content, that content is often removed and placed in a conflict folder. Ideally, make sure the bulk of the work is done out of hours.

Start from fresh: create the DFS root -> targets etc. Add the files into the local one (the primary, if you will) and make sure that referrals to the offsite one are disabled until the files have replicated. It does mean that access will be slower, as remote people access the local office copy. Once replication is complete and the logs indicate that you're in sync, you can enable referrals on both.
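One way to confirm that "in sync" state before re-enabling referrals is dfsrdiag's backlog report; server, group, and folder names below are placeholders, and it's worth running it in both directions:

```bat
:: Files still queued from the primary to the offsite member (placeholder names).
dfsrdiag backlog /rgname:"FileShares" /rfname:"Shares" /smem:SERVER1 /rmem:SERVER2

:: And the reverse direction.
dfsrdiag backlog /rgname:"FileShares" /rfname:"Shares" /smem:SERVER2 /rmem:SERVER1
```

"No Backlog" reported in both directions means the members have nothing left to exchange.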

Just to check: you are using, or will be using, a DOMAIN DFS root, yes? Rather than a standalone one, so it's \\fulldomainname\root\folder.

Just wanted to check, as the experience is much better ;) If so, then you can re-create using the same names, and drive mappings will be maintained.
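A quick way to check which kind of root you have is to dump the namespace with dfsutil (the root path here is a placeholder):

```bat
:: Dump the namespace configuration; a domain-based root is addressed as
:: \\domain.local\Files rather than \\servername\Files.
dfsutil /root:\\domain.local\Files /view
```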

 

Author Comment

by:RavenInd
ID: 33531582
If I want to use the secondary site for failover in case the primary site is down, it would not make sense to use one-way replication. If, for instance, our primary server went down, people would then automatically be routed to the secondary site server. They could then make changes to their same files and everything would be fine. The issue, I understand now, is that after we get the primary server running again, the changes made on the secondary server would not be replicated back.

Is this correct?
 
LVL 20

Expert Comment

by:woolnoir
ID: 33531675
I've never really seen people use one-way replication. The issue is that if you have 2 servers, say server1 (HQ) and server2 (Remote), people could make changes on either, and server2 would never send its updates to server1 - resulting in two non-identical mirrors. You're correct in your assumption about one-way mirrors.

The best way (least hassle) is to have a two-way mirror, meaning HQ and Branch always stay in sync. If you want to be doubly sure of consistency for DR reasons, you could disable referrals to server2 and people would NEVER save files onto it... but the downside is the WAN would be used a lot, as everyone would go to server1.

At my place of work we have 3 sites (SITE 1, 2, 3), each with a file server, and we have a full-mesh replication system (we use an MPLS WAN with a virtual full mesh), meaning if any server goes down nobody notices; we can repair and bring it back, or slot in a new server (turning referrals off so nobody sees the empty share), wait for the rebuild, and then switch referrals back on.

To summarise: with one-way replication you're right, it's a nightmare for ensuring consistency.
 
LVL 20

Expert Comment

by:woolnoir
ID: 33531706
You could have a two way replication system, and setup DFS so that the referral for server1 (HQ) is always the primary one given, clients would only go to server2 if server1 is down. That way, when the server or WAN returns, the servers will sync and everything is fine.

Couple that with Shadow Copies on the DFS volumes and you can be sure of getting back any deleted data too, if people get silly.
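Shadow Copies are normally scheduled per volume from the volume's Properties dialog, but you can also take an ad-hoc snapshot from the command line on a server OS; the drive letter here is a placeholder:

```bat
:: Take an immediate snapshot of the DFS data volume (D: is a placeholder).
vssadmin create shadow /for=D:

:: Confirm the snapshot exists.
vssadmin list shadows /for=D:
```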
 

Author Comment

by:RavenInd
ID: 33532979
Thanks for the great explanation - will I be safe to remove the existing replication groups without deleting data? I was going to remove the replication memberships for the one-way connections, and then remove the members completely. Once that is done, I am thinking I will need to remove all of the one-way replicated data from the current secondary member server.

I then will recreate the target shares on the secondary server with the correct permissions, and create a new replication group with the primary and secondary servers with the settings below:
*Multipurpose Replication Group - Full Mesh (only 2 servers)
*Set the primary server - to the primary server

Does this seem correct? Also, I am thinking this would be safe to do now as well. By removing the replication groups I won't remove any namespaces or anything, will I?
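That plan can also be scripted with dfsradmin instead of the wizard; everything below (group name, server names, paths) is a placeholder, and the exact switch names can vary by build, so check each subcommand with `/?` first:

```bat
:: Create the group, add both members, and define the replicated folder (placeholder names).
dfsradmin rg new /rgname:"FileShares"
dfsradmin mem new /rgname:"FileShares" /memname:SERVER1
dfsradmin mem new /rgname:"FileShares" /memname:SERVER2
dfsradmin rf new /rgname:"FileShares" /rfname:"Shares"

:: Two one-way connections between two members = full mesh.
dfsradmin conn new /rgname:"FileShares" /sendmem:SERVER1 /recvmem:SERVER2
dfsradmin conn new /rgname:"FileShares" /sendmem:SERVER2 /recvmem:SERVER1

:: Point each member at its local path; /isprimary:true makes SERVER1 authoritative
:: for the initial replication, so its content wins over the (empty) secondary.
dfsradmin membership set /rgname:"FileShares" /rfname:"Shares" /memname:SERVER1 /localpath:D:\Shares /isprimary:true /membershipenabled:true
dfsradmin membership set /rgname:"FileShares" /rfname:"Shares" /memname:SERVER2 /localpath:D:\Shares /membershipenabled:true
```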
 
LVL 20

Accepted Solution

by:woolnoir (earned 2000 total points)
ID: 33533077
Removing replication groups should just remove the replication; it will in effect make the namespace point at the member targets, but they will be out of sync whenever any data is stored. The issue is that once you re-establish replication, it will probably move the data into a conflict-resolution folder and start again. That's why I said it's always best to remove the data before dropping and re-adding replication - not always needed, but recommended.

------------


 I was going to remove the replication memberships for the one-way connections - and then remove the members completely. Once that is done I am thinking i will need to remove all of the one-way replicated data from the current 2ndary member server.

I then will recreate the target shares on the 2ndary server with the correct permissions
Create a new replication group with the primary and 2ndary server with the below settings:
*Multipurpose Replication Group - Full Mesh (only 2 servers)
*Set the primary server - to the primary server

-------------

That looks right - at least that's how we have done it in the past. I will add that when I've done re-establishment in the past it's been with 2003 FRS DFS... 2008 should act the same, but the normal "I'm not responsible" disclaimer applies :) Ensure you have a backup of the data before doing anything :)
 

Author Comment

by:RavenInd
ID: 33533278
Do you recommend removing the data from the primary server as well? I don't think I can afford to do that. If the secondary server is completely empty, shouldn't the replication just occur one-way from primary to secondary for the initial replication?

I just want to make sure that, because DFS sees the secondary member as empty, it won't remove all of the files from the primary.
 
LVL 20

Assisted Solution

by:woolnoir (earned 2000 total points)
ID: 33533343
Nah - as long as you start the process with all but one of the targets being empty, i.e.:

SERVER1 = all the data
SERVER2 = empty target
etc.

then the replication will be fine, as it will replicate the data to all the other targets. The issues start when you have multiple shares with data on them... that's a mess.

Featured Post

VIDEO: THE CONCERTO CLOUD FOR HEALTHCARE

Modern healthcare requires a modern cloud. View this brief video to understand how the Concerto Cloud for Healthcare can help your organization.

Question has a verified solution.

If you are experiencing a similar issue, please ask a related question

If, like me, you have a lot of Dell servers in the estate you manage this article should save you a little time. When attempting to login to iDrac on any server I would be presented with two errors. The first reads "Do you want to run this applicati…
While rebooting windows server 2003 server , it's showing "active directory rebuilding indices please wait" at startup. It took a little while for this process to complete and once we logged on not all the services were started so another reboot is …
Michael from AdRem Software outlines event notifications and Automatic Corrective Actions in network monitoring. Automatic Corrective Actions are scripts, which can automatically run upon discovery of a certain undesirable condition in your network.…
In this video, Percona Director of Solution Engineering Jon Tobin discusses the function and features of Percona Server for MongoDB. How Percona can help Percona can help you determine if Percona Server for MongoDB is the right solution for …
Suggested Courses
Course of the Month12 days, 19 hours left to enroll

777 members asked questions and received personalized solutions in the past 7 days.

Join the community of 500,000 technology professionals and ask your questions.

Join & Ask a Question