DFS alternatives from MS

We're currently using DFS to replicate files between two Windows 2008 servers. We're doing this to: (1) give us server redundancy, so that we can have 100% uptime despite having to patch and reboot either of the servers; and (2) keep a second copy on-site, so that we have a little "buffer" against hardware issues (despite these both being VMs).

MS has never heavily marketed/pushed DFS, but it has worked fine for us for about 10 years (initially with Windows 2003). We are getting some new systems that will be Windows 2012 (also VMs) and want to know if MS now prefers something else for equivalent functionality. We already understand these are VMs with some resilience to hardware issues, but we want a solution for item #1: file replication along with multiple servers for OS availability.

Does MS still recommend DFS for this purpose, or is there some new and improved clustering available? We know that with our newer SQL Server setups (on Windows 2012), SQL Server redundancy is now being provided at the OS level, not within SQL Server (at least that's the MS preference).

We're not looking to introduce new products if we can avoid it.
Gene Klamerus (Technical Architect) asked:
 
Adam Brown (Sr. Solutions Architect) commented:
DFS is still recommended for file-level replication of data in a Windows environment. MS hasn't actively marketed DFS, but it's what Windows now uses for SYSVOL replication on domain controllers, so they are absolutely sticking with it for file replication. MS doesn't have a block-level replication feature similar to what you'd find in a replicating SAN solution, but if you just want to get files copied between servers, DFS is the way to go.

Database replication like SQL's Availability Groups and Exchange's DAGs is accomplished through the clustering features in Windows Server. Clustering a file share requires a Cluster Shared Volume of some type, so it is significantly different from DFS. The CSV for a Windows failover file share cluster would be an iSCSI SAN or similar solution, and many of those have their own proprietary replication solutions that provide redundancy.
 
Lee W, MVP (Technology and Business Process Advisor) commented:
Microsoft does a POOR job marketing anything. They haven't actively marketed VSS, and yet that's a HUGE thing everyone should be using, and I find a shocking number of people don't. Microsoft relies on professionals who are trained in their products to implement and support them. They'd like you to think they're easy enough for anyone to set up (SBS, Essentials), but in reality these are hugely complex systems and generally shouldn't be administered by people who don't know what they are doing - if they are, people often fail to exploit the full potential of the product.

Rant over. As for DFS, you shouldn't be using it - you should be using DFS-R. DFS uses the File Replication Service (FRS) to replicate files, which requires the ENTIRE file to be replicated. DFS-R replicates block changes, so you don't need to replicate the entire file.

It's also important to understand the use cases for the technology. For example, it's NOT a good solution for any kind of database. Put a file-based database on a DFS share and you could have user A connect to server1 and update the database while user B connects to server2 and does the same - now you have a conflict! Plain files are generally easier, since the odds that you and John Doe have the same file open at the same time are pretty low. It could happen, but especially in smaller environments it doesn't happen often.

Clustering shares a SINGLE volume between two servers. This is potentially usable with databases, though during failovers you risk corruption. Of course, if you failed over, something bad probably happened anyway that would risk corruption.

In 2016 DATACENTER, you have Storage Replica - but that's not meant as a DFS replacement. It's meant to provide high availability and redundancy. One server writes the data, and the other writes it too, to an offline volume. In fact, the writes MUST occur simultaneously - so simultaneously that it won't work if the network latency is more than 5 ms, and ideally you'll have a 10 gig link or better between the servers.

VM replication isn't necessarily meant to replace DFS, but it could if you can tolerate 15-30 minutes of downtime (assuming the admin is on site during the failure). That can work better with databases, and the potential for lost data is minimal - limited to 30 seconds at the fastest replication interval.
 
Gene Klamerus (Technical Architect, Author) commented:
Actually, for us DFS is better than DFS-R. These files are never updated (it's an archive for SAP documents), so for us there is no file-transfer benefit to be gained (as there is with files that do get updated). We'd end up with the extra overhead of searching for changed portions of files to no benefit. At least so far as I'm aware.

Clustering might actually work well for us, except that the reason I'm using servers instead of a device (like a Filer) is that we have to replicate to an off-site disaster recovery center, and we can't do that with a Filer - only with Windows or Linux images.

So in summary, it sounds like DFS is still what I have to use, unless there "is" a third-party product that does this better. The issue I've got is volume: I have to host about 50,000 new files daily. I've been doing that for roughly 10 years, but recently we had an issue with DFS getting behind (mainly due to some sort of failure). I was just hoping we had other options that were quicker and intended for larger volumes of new documents (in the 50,000-80,000/day range).
 
arnold commented:
DFS is the file sharing/access portion.

In Windows Server 2003 and older, data replication relied on NTFRS (the NT File Replication Service).

Since Server 2003 R2, DFS has had an improved replication engine: DFS Replication (DFS-R).

So both Lee's and Gene's comments are true, but it seems Gene slightly misinterpreted Lee's suggestion: DFS-R was intended not as a replacement for DFS but as an addition to it (a reinforced replication feature).
 
Gene Klamerus (Technical Architect, Author) commented:
If DFS-R only adds improvements for replicating portions of files, rather than improvements for file replication overall, then I did misinterpret. I'll go read up on whether it benefits my case, where the files are static once written.
 
arnold commented:
DFS-R is the technology that replicates content between and among replication group members.
It includes features to reduce bandwidth consumption, and it includes conflict detection: if two users in separate locations open and edit the same file and save at the same time, DFS-R will attempt to manage the conflict, but one of the files will get kicked out, and the event will be recorded in the DFS event log.
I.e., if you add a chapter to the novel you are writing, then instead of transmitting the entire novel, DFS-R transmits only the changes, which the other side incorporates into its local copy.
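The "transmit only the changes" idea above can be sketched in a few lines of Python. This is a deliberately simplified illustration using fixed-size chunks and hash comparison - the real DFS-R mechanism (Remote Differential Compression) uses content-defined cut points and recursive signatures, so treat this as a conceptual sketch only:

```python
import hashlib

CHUNK = 64 * 1024  # illustrative fixed chunk size; real RDC chunks vary

def chunk_hashes(data, chunk=CHUNK):
    """Hash each fixed-size chunk of the data."""
    return [hashlib.sha256(data[i:i + chunk]).hexdigest()
            for i in range(0, len(data), chunk)]

def changed_chunks(old, new, chunk=CHUNK):
    """Indices of chunks in `new` that differ from the same position in
    `old` - in a differential scheme, only these cross the wire."""
    old_h, new_h = chunk_hashes(old, chunk), chunk_hashes(new, chunk)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]
```

For an appended "chapter," only the final (new) chunk indices show up as changed, which is why DFS-R helps most when large files are edited in place - and why it buys little for write-once archives like Gene's.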


What currently synchronizes your content across systems - are you using something like SyncToy or robocopy on a schedule?
 
Gene Klamerus (Technical Architect, Author) commented:
Yes, I understand what DFS-R brings to the table - but let me explain what I'm doing, to help explain why it doesn't add anything toward solving my problem.

I'm using DFS to replicate content across shares for a document management system. In this system, the documents are created and written but never updated (nor should they be - they're kept for legal and regulatory purposes). The reason for DFS is so that when I'm required to patch servers, I can remove one of the pair of servers sharing files without an outage to the application that's working with them. The documents are all SAP documents, generated at a rate of perhaps 50,000 daily; read rates are perhaps 10,000 daily. So I'm faced with fairly high volumes of static documents.

I would have liked to use a Filer for this (our Filers have redundant storage and aren't running an "OS" that requires outages per se).  I can't though because we need to replicate the solution to a secondary site and we have no solution for doing so with Filers.

We are having some issues with this recently and are therefore going to split our content up. The servers that are struggling each host 4 different folders, which are combined via DFS. Two of these are higher volume and two are lower, so we're going to move one higher-volume and one lower-volume folder to another server and hopefully cut the load roughly in half.
 
Adam Brown (Sr. Solutions Architect) commented:
DFS Replication (AKA DFS-R; there are two pieces to DFS, DFS Namespaces and DFS Replication) is actually designed for a scenario like you describe, where files are replicated to an off-site or backup area in which they are not regularly accessed and changed. The problem you're having, where DFS gets "behind," is likely due to the staging settings for DFS-R being set too low for the volume of files being replicated. DFS-R works by first staging files for replication, then replicating them. If the staging area isn't given sufficient space to hold all of the files being replicated at a time, there will be a significant lag between file creation/modification and replication. https://blogs.technet.microsoft.com/askds/2011/07/13/how-to-determine-the-minimum-staging-area-dfsr-needs-for-a-replicated-folder/ covers the staging quota in detail and should give you some ideas on how to improve DFS-R performance for your environment.
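The linked article's rule of thumb is that the minimum staging quota for a replicated folder should be at least the combined size of the 32 largest files in it. As a quick way to measure that number for a given folder, here is a small Python sketch (a hypothetical helper for sizing only - it is not part of DFS-R, and the 32-file figure follows the article's guidance):

```python
import os

def min_staging_quota_bytes(folder, top_n=32):
    """Sum the sizes of the `top_n` largest files under `folder`.

    Per the staging-quota guidance, this sum is the minimum staging
    quota a DFS-R replicated folder should be given.
    """
    sizes = []
    for root, _dirs, files in os.walk(folder):
        for name in files:
            try:
                sizes.append(os.path.getsize(os.path.join(root, name)))
            except OSError:
                pass  # file vanished or is inaccessible; skip it
    return sum(sorted(sizes, reverse=True)[:top_n])
```

Running this against the replicated folder and comparing the result with the configured staging quota would show whether the quota is undersized for the file mix being replicated.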
 
arnold commented:
As Adam addressed: check the replication group's settings - the configured bandwidth allocation, and the available bandwidth between locations.
The size of the files also governs what the staging area should be.

Make sure on the DFS-R side you have differential replication (RDC) enabled.
Question has a verified solution.

Are you are experiencing a similar issue? Get a personalized answer when you ask a related question.

Have a better answer? Share it in a comment.

All Courses

From novice to tech pro — start learning today.