AlphaLolz asked:

DFS alternatives from MS

We're currently using DFS to replicate files between 2 Windows 2008 servers. We're doing this to: (1) give us server redundancy so that we can have 100% uptime despite having to patch and reboot either of the servers; and (2) keep a second copy on-site so that we have a little buffer against hardware issues (despite these both being VMs).

MS has never heavily marketed or pushed DFS; however, it's worked fine for us for about 10 years (initially with Windows 2003). We are getting some new systems that will be Windows 2012 (also VMs) and want to know if there's something MS now prefers for equivalent functionality. We already understand these are VMs and have some resilience to H/W issues, but we want a solution for item #1: we need file replication along with multiple servers for OS availability.

Does MS still recommend DFS for this purpose, or is there some new and improved clustering available? We know that with our newer SQL Server setups (on Windows 2012), SQL Server redundancy is now being provided at the OS level rather than within SQL Server (at least that's the MS preference).

We're not looking to introduce new products if we can avoid it.
Adam Brown:

DFS is still recommended for file-level replication of data in a Windows environment. MS hasn't actively marketed DFS, but it is now what Windows uses for SYSVOL replication on DCs, so they are absolutely sticking with it for file replication. MS doesn't have a block-level replication feature similar to what you can find in a replicating SAN solution, but if you just want to get files copied between servers, DFS is the way to go.

Database replication like SQL's Availability Groups and Exchange's DAGs is accomplished through the clustering features in Windows Server. Clustering a file share requires a Cluster Shared Volume (CSV) of some type, so it is significantly different from DFS. The CSV for a Windows failover file share cluster would be an iSCSI SAN or a similar solution, and many of those have their own proprietary replication solutions that provide redundancy.
Microsoft does a POOR job marketing anything. They haven't actively marketed VSS either, and yet that's a HUGE thing everyone should be using; I find a shocking number of people don't. Microsoft relies on professionals who are trained in their products to implement and support them. They'd like you to think the products are easy enough for anyone to set up (SBS, Essentials), but in reality these are hugely complex systems and generally shouldn't be administered by people who don't know what they are doing; when they are, people often fail to exploit the full potential of the product.

Rant over. As for DFS, you shouldn't be using plain DFS replication; you should be using DFS-R. The old mechanism uses the File Replication Service (FRS), which requires the ENTIRE file to be replicated on any change. DFS-R replicates only the changed blocks, so you don't need to retransmit the whole file.
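To illustrate the difference, here's a minimal Python sketch (this is not the actual RDC algorithm DFS-R uses, just fixed-size block hashing): whole-file replication re-sends everything, while block-level comparison only flags the pieces that changed. Block size and file contents are made up for the example.

```python
# Toy illustration: whole-file (FRS-style) replication re-sends the entire
# file on any change, while block-level comparison only flags the blocks
# that changed. NOT the real RDC algorithm -- just a sketch of the idea.
import hashlib

BLOCK_SIZE = 64 * 1024  # 64 KiB blocks, an arbitrary choice for the sketch


def block_hashes(data: bytes) -> list:
    """Hash each fixed-size block of the file contents."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]


def changed_blocks(old: bytes, new: bytes) -> list:
    """Indices of blocks that differ between the old and new contents."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    length = max(len(old_h), len(new_h))
    return [i for i in range(length)
            if i >= len(old_h) or i >= len(new_h) or old_h[i] != new_h[i]]


if __name__ == "__main__":
    old = b"A" * (10 * BLOCK_SIZE)                                         # "previous" copy
    new = old[:3 * BLOCK_SIZE] + b"B" * BLOCK_SIZE + old[4 * BLOCK_SIZE:]  # one block edited
    dirty = changed_blocks(old, new)
    print(f"Whole-file copy would send {len(new)} bytes;")
    print(f"block-level diff sends {len(dirty) * BLOCK_SIZE} bytes (blocks {dirty}).")
```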

It's also important to understand the use cases for the technology. For example, it's NOT a good solution for any kind of database. Put a file-based database on a DFS share and you could have user A connect to server1 and update the database while user B connects to server2 and does the same; now you have a conflict! Plain files are generally easier, since the odds that you and John Doe have the same file open at the same time are pretty low. It could happen, but especially in smaller environments it doesn't happen often.

Clustering shares a SINGLE volume between two servers. This is potentially usable with databases, though during failovers you risk corruption. Of course, if you failed over, something bad probably happened anyway, which would risk corruption regardless.

In Windows Server 2016 Datacenter you have Storage Replica, but that's not meant as a DFS replacement; it's meant to provide high availability and redundancy. One server writes the data, and the same writes go to an offline volume on the other server. In fact, the writes must occur essentially simultaneously: it won't work if the network latency is more than 5 ms, and ideally you'll have a 10 Gb link or better between the servers.
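To see why the latency requirement matters, here's a rough back-of-the-envelope model in Python (a sketch, not how Storage Replica is actually implemented): with synchronous replication, a write isn't acknowledged until the partner confirms it, so the per-stream write rate is bounded by the round-trip time. The 0.5 ms local write time below is an assumed example value.

```python
# Rough model of synchronous replication: a write is not acknowledged to the
# application until the partner confirms it, so the per-stream write rate is
# bounded by local write time plus network round-trip time (RTT).

def max_sync_writes_per_sec(local_write_ms: float, network_rtt_ms: float,
                            outstanding_ios: int = 1) -> float:
    """Upper bound on acknowledged writes/sec for a synchronous mirror."""
    per_write_ms = local_write_ms + network_rtt_ms
    return outstanding_ios * 1000.0 / per_write_ms


if __name__ == "__main__":
    # 5 ms is the latency ceiling mentioned above; lower RTT means more headroom
    for rtt in (0.5, 1.0, 5.0, 10.0):
        rate = max_sync_writes_per_sec(0.5, rtt)
        print(f"RTT {rtt:4.1f} ms -> at most {rate:6.0f} writes/sec per outstanding I/O")
```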

VM replication isn't necessarily meant to replace DFS either, but it could if you can tolerate 15-30 minutes of downtime (assuming the admin is on site during the failure). It can work better with databases, and the potential for lost data is minimal, limited to about 30 seconds of changes at the fastest replication interval.
AlphaLolz (Asker):

Actually, for us DFS is better than DFS-R. These files are never updated (it's an archive for SAP documents), so for us there is no file-transfer benefit to be gained (as there would be for files that do get updated). We'd just end up with the extra overhead of searching for portions of files to update, to no benefit. At least so far as I'm aware.

The clustering might actually work well for us, except that the reason I'm using servers instead of a device (like a filer) is that we have to replicate to an off-site disaster recovery center, and we can't do that with a filer, only with Windows or Linux images.

So in summary, it sounds like DFS is still what I have to use, or perhaps there is a third-party product that does this better. The issue I've got is volume. I have to host about 50,000 new files daily. I've been doing that for roughly 10 years, but recently we had an issue with DFS getting behind (mainly due to some sort of failure). I was just hoping we had other options that were quicker and intended for larger volumes of new documents (in the 50,000-80,000/day range).
DFS is the file sharing and access portion.

In Windows Server 2003 and older, data replication relied on NTFRS (the NT File Replication Service).

Since Windows Server 2003 R2, DFS has had an improved tool to provide the data replication: DFS Replication (DFS-R).

So both Lee's and Gene's comments are true, but it seems Gene slightly misinterpreted Lee's suggestion, reading DFS-R as a replacement for DFS rather than, as intended, an addition to it (the reinforcing replication feature of DFS).
If DFS-R only adds improvements for replicating portions of files, as opposed to improvements for file replication overall, then I did misinterpret. I'll go read up on whether it benefits my case, where the files are static once written.
DFS-R is the technology that replicates content between and among replication group members.
It includes features to reduce bandwidth consumption, and it includes a logic engine for conflicts: if two users in separate locations open and edit the same file and then save at the same time, it will attempt to manage the situation using conflict detection. In such a case one copy gets kicked out because of the conflict, and the event is recorded in the DFS event log.
I.e., if you add a chapter to the novel you are writing, then instead of transmitting the entire novel, DFS-R would transmit only the changes, which the other side would incorporate into its local copy.
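Here's a minimal sketch of that "one copy gets kicked out" behavior, assuming last-writer-wins resolution (DFS-R sets the losing copy aside in a ConflictAndDeleted folder and logs an event). The class and values below are purely illustrative, not the real on-disk format.

```python
# Minimal sketch of last-writer-wins conflict handling between two replicas
# that edited the same file independently. Illustrative only.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Version:
    server: str
    modified: datetime
    content: bytes


def resolve_conflict(a: Version, b: Version) -> tuple:
    """Return (winner, loser); the most recent writer wins."""
    return (a, b) if a.modified >= b.modified else (b, a)


if __name__ == "__main__":
    v1 = Version("server1", datetime(2017, 5, 1, 9, 0), b"edit from user A")
    v2 = Version("server2", datetime(2017, 5, 1, 9, 2), b"edit from user B")
    winner, loser = resolve_conflict(v1, v2)
    print(f"{winner.server}'s copy replicates everywhere;")
    print(f"{loser.server}'s copy is moved aside (ConflictAndDeleted) and an event is logged.")
```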


What currently synchronizes your content across systems? Are you using SyncToy or robocopy on a schedule?
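For reference, the scheduled-copy approach usually boils down to something like the following sketch: a script run by Task Scheduler that shells out to robocopy. The share paths, log location, and retry settings are placeholders; note that /MIR mirrors deletions too, so check the source and destination before running anything like this.

```python
# Sketch of the scheduled-copy approach: a script (run by Task Scheduler or
# similar) that shells out to robocopy to mirror one share to another.
# The share paths, log path, and retry settings below are placeholders.
import subprocess

SOURCE = r"\\server1\SAPArchive"   # hypothetical source share
DEST = r"\\server2\SAPArchive"     # hypothetical destination share
LOG = r"C:\Logs\archive-sync.log"  # hypothetical log file


def mirror_once() -> int:
    """Run one robocopy mirror pass and return its exit code."""
    cmd = [
        "robocopy", SOURCE, DEST,
        "/MIR",          # mirror the tree (also removes files deleted at the source)
        "/FFT",          # tolerate 2-second timestamp granularity differences
        "/R:2", "/W:5",  # retry failed copies twice, waiting 5 seconds between tries
        "/LOG+:" + LOG,  # append to a log file
    ]
    return subprocess.run(cmd).returncode


if __name__ == "__main__":
    rc = mirror_once()
    # robocopy exit codes 0-7 indicate success or partial success; 8+ means errors
    print("robocopy exit code:", rc, "(8 or higher means errors)")
```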
SOLUTION (posted by AlphaLolz; available to members only)

ASKER CERTIFIED SOLUTION (available to members only)
As Adam addressed: what are the replication group settings, the configured bandwidth allocation, and the available bandwidth between the locations?
The size of files also governs what the staging area should be...
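If it helps, here's a small sketch for estimating that, assuming Microsoft's usual rule of thumb for Server 2008 and later: the staging quota for a replicated folder should be at least the combined size of its 32 largest files. The folder path below is a placeholder.

```python
# Sketch for sizing the DFS-R staging quota using the "32 largest files"
# rule of thumb. Run it against the replicated folder root.
import heapq
import os


def staging_quota_mb(root: str, count: int = 32) -> float:
    """Sum of the `count` largest files under `root`, in megabytes."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                sizes.append(os.path.getsize(os.path.join(dirpath, name)))
            except OSError:
                pass  # skip files we cannot stat
    return sum(heapq.nlargest(count, sizes)) / (1024 * 1024)


if __name__ == "__main__":
    folder = r"D:\SAPArchive"  # hypothetical replicated folder root
    print(f"Suggested minimum staging quota: {staging_quota_mb(folder):.0f} MB")
```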

Make sure on the DFS-R side you have remote differential compression (RDC) enabled.