Solved

File server to store home drives and user profiles that sync across two different data centers?

Posted on 2014-04-15
816 Views
Last Modified: 2014-05-20
Hi People,

With the latest Windows Server 2012 R2 Standard edition, I'd like to implement an active/active file server to host user profiles across two different data centers within the same AD domain.

What technologies or architecture do I need to deploy to make sure that when a user is redirected to one data center, he/she can still access the current data after logging off and logging back on to the other data center's file server?

I'm thinking of using the Work Folders feature on top of a geo-clustered file server (stretched cluster).

Thanks
29 Comments
 
LVL 78

Assisted Solution

by:David Johnson, CD, MVP
David Johnson, CD, MVP earned 46 total points
ID: 40003906
Use DFS-R, and point the user shares at the DFS namespace (\\example.com\dfs), not at a specific server (\\server.domain.com).
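For instance (a hedged sketch: the user name, drive letter, and namespace path below are placeholders, not taken from the thread), pointing a user's home drive at the namespace path with the ActiveDirectory PowerShell module might look like:

```powershell
# Map the home drive to the domain-based DFS namespace path,
# not to an individual file server (placeholder names throughout).
Import-Module ActiveDirectory

Set-ADUser -Identity jdoe `
    -HomeDrive 'H:' `
    -HomeDirectory '\\example.com\dfs\users\jdoe'
```

Clients then resolve the namespace path to whichever server the DFS referral hands them, so the share keeps working if one server is down.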
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 40003916
So does this mean that the replication can go two ways between DC1 and DC2?
 
LVL 47

Assisted Solution

by:dlethe
dlethe earned 137 total points
ID: 40003966
The correct technique is usually to sync the two remote sites with each other. This is much faster, maintains data integrity, and even gives you the opportunity for a grandfather copy if you delay the sync.
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 40004020
OK, thanks for the clarification; I was under the impression that it could only be one-way synchronization from the production site to the DR site.

Do I have to implement Work Folders, or just a normal file server with DFS-R configured between standalone servers rather than an MSCS failover cluster?
 
LVL 42

Assisted Solution

by:kevinhsieh
kevinhsieh earned 182 total points
ID: 40004349
FYI Microsoft specifically doesn't support this because you can corrupt a user profile, especially if a user is logged onto multiple computers. That said, I do this but only for redirected folders, not roaming profiles or application data.

You need two technologies. The first is DFS Replication (DFS-R), to replicate the files between two or more servers; you have the option of making the files read-only or writeable, and you want writeable in this situation. The second is DFS Namespaces (DFS-N), to allow your clients to connect to the "closest" file server.

DFS Replication requires that all servers involved are joined to the domain. They can be member servers or domain controllers. You don't need failover clustering unless you want a local failover cluster that is also replicating to another server or cluster. I have never done DFS Replication with failover clustering. Remember that a failover cluster is active/passive.

For DFS namespace servers I have always used domain controllers, but I believe that member servers can work too. You will create a domain based namespace like \\domain.local\DFS\users. Host the namespace on multiple servers. \DFS\users can then point to \\DC1fileserver\users and \\DC2fileserver\users,  and the client will try to connect to the closest server.
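A rough sketch of those two pieces using the DFSN and DFSR PowerShell modules that ship with Server 2012 R2 (server names, content paths, and the namespace are placeholders based on the example above, not a tested build):

```powershell
# --- DFS Namespace: one domain-based root, one folder, two targets ---
# Clients connecting to \\domain.local\DFS\users get referred to the
# "closest" target based on AD site costs.
New-DfsnRoot -Path '\\domain.local\DFS' -TargetPath '\\DC1fileserver\DFS' -Type DomainV2
New-DfsnFolder -Path '\\domain.local\DFS\users' -TargetPath '\\DC1fileserver\users'
New-DfsnFolderTarget -Path '\\domain.local\DFS\users' -TargetPath '\\DC2fileserver\users'

# --- DFS Replication: two-way replication between the two members ---
New-DfsReplicationGroup -GroupName 'Users'
New-DfsReplicatedFolder -GroupName 'Users' -FolderName 'users'
Add-DfsrMember -GroupName 'Users' -ComputerName 'DC1fileserver','DC2fileserver'
# Add-DfsrConnection creates connections in both directions by default.
Add-DfsrConnection -GroupName 'Users' `
    -SourceComputerName 'DC1fileserver' -DestinationComputerName 'DC2fileserver'
Set-DfsrMembership -GroupName 'Users' -FolderName 'users' `
    -ComputerName 'DC1fileserver' -ContentPath 'D:\users' -PrimaryMember $true -Force
Set-DfsrMembership -GroupName 'Users' -FolderName 'users' `
    -ComputerName 'DC2fileserver' -ContentPath 'D:\users' -Force
```

The primary member seeds the initial replication; after that, both content paths are writeable and replicate in both directions.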
 
LVL 42

Assisted Solution

by:kevinhsieh
kevinhsieh earned 182 total points
ID: 40005004
Here's Microsoft's support statement. Be aware of the issues.
http://support.microsoft.com/kb/2533009/en-US
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 40005382
Ah, I see the challenge here. So I guess there is no way to prevent a user from logging into two different data centers at the same time?
 
LVL 47

Assisted Solution

by:dlethe
dlethe earned 137 total points
ID: 40005395
You could write a program or policy that checks whether the user is currently logged on elsewhere, but latency and practicality probably make this unworkable unless you wrote a C program that used a shared pipe for interprocess communication.

But even then, if somebody fails to log off, you'll have to deal with some housekeeping to handle the corner cases.

Maybe there is a native way; I don't know that there isn't. I'm just saying that if there is no native utility, it would be difficult to get right without timeouts.

I'd just have them log in so half are on one server and half on the other, let the O/S do replication, and be prepared for the performance hit and delays if either system dies.
 
LVL 42

Assisted Solution

by:kevinhsieh
kevinhsieh earned 182 total points
ID: 40005498
Does each user need active/active access? If not, you can have half of your users in one datacenter and half in the other. Each user's directories would be on each server, but the DFS namespace link would have only one target active at a time. I do this for departmental shares, where file locking is obviously important.
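As an illustration (a sketch with placeholder paths, assuming the DFSN PowerShell module), keeping both targets configured but leaving only one in the referral list might look like:

```powershell
# Both targets stay configured on the namespace link, but only the
# DC1 copy is handed out in referrals; flip the states to fail over.
Set-DfsnFolderTarget -Path '\\domain.local\DFS\users' `
    -TargetPath '\\DC1fileserver\users' -State Online
Set-DfsnFolderTarget -Path '\\domain.local\DFS\users' `
    -TargetPath '\\DC2fileserver\users' -State Offline
```

DFS-R keeps replicating to the offline target in the background, so the standby copy stays current while clients are steered away from it.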
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 40005556
hi All,

Well, the idea is to allow users to work without any issues or file-locking problems when logging into either data center.

@Kevin: in Windows AD terms, how do I allocate one set of users to one DC and the rest to the other?
 
LVL 47

Expert Comment

by:dlethe
ID: 40005579
Then forget it, unless this is something like a SQL database, OR you have a NAS appliance.
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 40005585
Why is a NAS appliance needed in this case?
Or is an NFS share sufficient for this type of scenario?
 
LVL 42

Expert Comment

by:kevinhsieh
ID: 40005590
You would need to specify different UNC paths to different sets of users.
 
LVL 47

Assisted Solution

by:dlethe
dlethe earned 137 total points
ID: 40005602
Let's get this straight, just in case we're not on the same page. A system that uses NFS shares IS its own file server, and the mount is on the system that is sharing the data.

CIFS/SMB, Windows' native file sharing, is NOT NFS. But you can set up an NFS service on W2K12.

When developers write code that is "network aware", they have to deal with things like file locking, including byte-range locking with starting/ending offsets. The code differs depending on the networking and file system type. Many developers don't even bother to deal with this stuff.

So really you need to look at it from the perspective of the apps that use the data. Contact the developer(s), tell them what you want to do, and ask whether their application code does locking properly to prevent corruption when data is cached on one host and written on another.

This isn't an O/S thing; the O/S provides system calls to the developer for locking, semaphores, recovery methodology, races, and concurrent access. IF the developers who wrote the code did so with your topology in mind, then it will work.

If they didn't, you'll have data loss guaranteed, without even knowing it.

Two file servers CAN be configured to replicate data back and forth, but if two people read/write the same file(s) at the same time outside the control of an executable designed to support such a thing, it will destroy your data.
 
LVL 47

Expert Comment

by:dlethe
ID: 40005613
P.S. If you have a specific application or need in mind, and it is something you have source code for, then it can be done. But it may not be cheap. That is why people use SQL Server and the like in a cluster, so they don't have to write that code themselves.

Or buy a nice NetApp appliance and get lots of redundancy.

We can save a lot of time if you describe the bare minimum that you must have (be specific about applications and data), and your BUDGET. This is an exercise in futility if you have no budget and want people to suggest whatever they like.
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 40005748
Many thanks for the sharing and explanations, guys.

@DLethe: what I'm trying to implement here is a normal user home directory and file server for a Citrix VDI deployment. The NetScaler load balancer redirects people randomly to the two different data centers, which is why I need to know the best way to achieve this without causing too many issues.
 
LVL 11

Expert Comment

by:gmbaxter
ID: 40012660
Do you have shared storage which is accessible in both data centres?
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 40013673
Baxter,

Yes I do. Do you mean a LUN that is accessible from both sites?
I'm using an EMC VNX 5300 in DC1 and a VNX 5500 in DC2, so how can they be leveraged in this case for my file server / home drive NTFS LUNs?
 
LVL 11

Expert Comment

by:gmbaxter
ID: 40015082
Hi,

Yes. Can the VNX 5300 provide a LUN which can be seen in both DC1 and DC2, and the same for the VNX 5500?
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 40015160
GMBaxter,

It can only be done using replication, with EMC Replication Manager as the orchestrator and SAN Copy as the underlying technology.
 
LVL 11

Assisted Solution

by:gmbaxter
gmbaxter earned 135 total points
ID: 40023541
OK, would that pass Windows cluster validation? The quorum volume would have to be accessible by both nodes.

You could have a two-node file server cluster with two cluster roles: home folders and profiles. This would get around the issues with homes and profiles hosted on a DFS volume.
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 40024443
Well, theoretically that should be possible.

Do you mean creating a stretched MSCS / geo cluster between the two DC sites,
with one server active in production and one passive failover at the DR site?
 
LVL 11

Assisted Solution

by:gmbaxter
gmbaxter earned 135 total points
ID: 40025024
Yes, stretch the cluster between the two sites, with each server a member of the cluster.

Then add two roles to the cluster: one a file share for homes, the other a file share for profiles. Set the preferred owner of each cluster resource to be its primary node.
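A possible sketch of those two roles with the FailoverClusters PowerShell module (node, role, and disk names are placeholders; this assumes the stretched cluster and its shared storage already exist and validate):

```powershell
# Placeholder names; assumes a validated stretched failover cluster
# with a cluster disk available for each role.
Import-Module FailoverClusters

# One clustered file server role per share.
Add-ClusterFileServerRole -Name 'FS-Homes'    -Storage 'Cluster Disk 1'
Add-ClusterFileServerRole -Name 'FS-Profiles' -Storage 'Cluster Disk 2'

# Preferred-owner order pins each role to its primary node;
# the second node in the list is the failover target.
Set-ClusterOwnerNode -Group 'FS-Homes'    -Owners 'DC1Node','DC2Node'
Set-ClusterOwnerNode -Group 'FS-Profiles' -Owners 'DC2Node','DC1Node'
```

Splitting the roles this way keeps each site actively serving one workload while remaining the standby for the other.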

What bandwidth do you have between the two sites?
 
LVL 42

Assisted Solution

by:kevinhsieh
kevinhsieh earned 182 total points
ID: 40025171
Fundamentally, Windows isn't designed to give local access to multiple locations, because there is no global file locking. Any native Windows clustering solution requires one site to go over the WAN to access files. This can be mitigated with high-bandwidth, low-latency links; WAN optimization and caching such as Riverbed or Silver Peak; or possibly BranchCache.

An active/passive failover cluster buys you nothing in terms of performance, because only one node can be active and the other site still needs to go over the WAN, so it is basically no better than DFS-R and DFS-N with only one namespace target active.

I think that the best design is to either make one datacenter active and the other standby, or to just accept that half of your sessions will pull files over the WAN and just mitigate that.
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 40025220
Hi All,

The bandwidth is plentiful, with a dark fibre connection between the sites.

@Kevin: so what happens to the other half of the user profiles and home directories when the link between the DCs is cut or unavailable due to a disaster or other issue?
 
LVL 11

Expert Comment

by:gmbaxter
ID: 40025634
If bandwidth isn't an issue, then modify your SAN fabric so that DC1 and DC2 can each have a LUN mapped from both VNX arrays.

Then, in the cluster, set the homes share's preferred owner to the DC1 node and the profiles share's preferred owner to the DC2 node.
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 40025722
And then after that, set up DFS-R to sync them against each other?
 
LVL 11

Accepted Solution

by:gmbaxter
gmbaxter earned 135 total points
ID: 40026161
If you have a common SAN fabric, there will be no need to use DFS-R:

Homes LUN on the VNX in DC1, replicated to a Homes_Replica LUN on the DC2 VNX
Profiles LUN on the VNX in DC2, replicated to a Profiles_Replica LUN on the DC1 VNX
Homes preferred owner in DC1, on the DC1 VNX
Profiles preferred owner in DC2, on the DC2 VNX
Homes and Profiles cluster resources available to both nodes via the SAN fabric

In case of node failure, the other node takes over the failed cluster role.

In case of VNX failure, revert to the appropriate replica LUN.

In case of DC failure, revert to the appropriate replica LUN.

You'd have to lab this up, though, as EMC sell VPLEX for this exact scenario; the base systems may not have the built-in functionality. But this is basically what I do with two Dell Compellent SANs.
 
LVL 7

Author Closing Comment

by:Senior IT System Engineer
ID: 40079515
Thanks guys !
