File server to store home drives and user profiles, synced across two different data centers?

Hi People,

With the latest Windows Server 2012 R2 Standard edition, I'd like to implement an active/active file server to host user profiles across two different data centers within the same AD domain.

What sort of technologies or architecture do I need to deploy so that when a user is redirected to one data center, he/she can still access the same data after logging off and logging back on to the other data center's file server?

I'm thinking of using the Work Folders feature on top of a geo-clustered file server (stretched cluster).

Thanks
— Senior IT System Engineer (IT Professional)

gmbaxter commented:
If you have a common SAN fabric, there will be no need to use DFS-R:

- Homes LUN on the VNX in DC1, replicated to a Homes_Replica LUN on the DC2 VNX
- Profiles LUN on the VNX in DC2, replicated to a Profiles_Replica LUN on the DC1 VNX
- Homes preferred owner in DC1, on the DC1 VNX
- Profiles preferred owner in DC2, on the DC2 VNX
- Homes and Profiles cluster resources available to both nodes via the SAN fabric

If a node fails, the other node takes over the failed cluster role.

If a VNX fails, revert to the appropriate replica LUN.

If a DC fails, revert to the appropriate replica LUN.

You'd have to lab this up, though, as EMC sells VPLEX for this exact scenario - the base systems may not have the built-in functionality, but this is basically what I do with two Dell Compellent SANs.
David Johnson, CD, MVP (Owner) commented:
Use DFS-R and point the users' share to \\dfrs.example.com rather than \\server.domain.com.
Senior IT System Engineer (author) commented:
So does this mean that the replication can go both ways between DC1 and DC2?
David (President) commented:
The correct technique is usually to sync the two remote sites with each other. This is much faster, maintains data integrity, and even gives you the opportunity for a grandfather copy if you delay the sync.
Senior IT System Engineer (author) commented:
OK, thanks for the clarification - I was under the impression that it could only be one-way synchronization from the production site to the DR site.

Do I have to implement Work Folders, or just a normal file server with DFS-R on standalone servers rather than an MSCS failover cluster?
kevinhsieh commented:
FYI, Microsoft specifically doesn't support this, because you can corrupt a user profile, especially if a user is logged on to multiple computers. That said, I do this, but only for redirected folders, not roaming profiles or application data.

You need two technologies. The first is DFS Replication, to replicate the files between two or more servers; you have the option of making the files read-only or writeable, and you want writeable in this situation. The second is DFS Namespaces, to allow your clients to connect to the "closest" file server.

DFS Replication requires that all servers involved are joined to the domain. They can be member servers or domain controllers. You don't need failover clustering unless you want a local failover cluster that is also replicating to another server or cluster. I have never done DFS Replication with failover clustering. Remember that a failover cluster is active-passive.

For DFS namespace servers I have always used domain controllers, but I believe member servers work too. You create a domain-based namespace like \\domain.local\DFS\users and host the namespace on multiple servers. \DFS\users can then point to \\DC1fileserver\users and \\DC2fileserver\users, and the client will try to connect to the closest server.
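As a rough sketch, those two pieces could be set up with the DFSR and DFSN PowerShell modules that ship with Server 2012 R2. The group name, content paths, and OU-style details below are illustrative assumptions; the server and namespace names are taken from the examples above:

```powershell
# --- 1. DFS Replication between the two file servers (two-way) ---
New-DfsReplicationGroup -GroupName "Users"
New-DfsReplicatedFolder -GroupName "Users" -FolderName "users"
Add-DfsrMember -GroupName "Users" -ComputerName "DC1fileserver","DC2fileserver"
Add-DfsrConnection -GroupName "Users" `
    -SourceComputerName "DC1fileserver" -DestinationComputerName "DC2fileserver"
# Local content path "D:\users" is a placeholder for your data volume
Set-DfsrMembership -GroupName "Users" -FolderName "users" `
    -ComputerName "DC1fileserver" -ContentPath "D:\users" -PrimaryMember $true
Set-DfsrMembership -GroupName "Users" -FolderName "users" `
    -ComputerName "DC2fileserver" -ContentPath "D:\users"

# --- 2. Domain-based DFS Namespace with a target on each server ---
New-DfsnRoot -Path "\\domain.local\DFS" -TargetPath "\\DC1fileserver\DFS" -Type DomainV2
New-DfsnFolder -Path "\\domain.local\DFS\users" -TargetPath "\\DC1fileserver\users"
New-DfsnFolderTarget -Path "\\domain.local\DFS\users" -TargetPath "\\DC2fileserver\users"
```

Clients then map \\domain.local\DFS\users, and site costing decides which target they hit.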
kevinhsieh commented:
Here's Microsoft's support statement. Be aware of the issues.
http://support.microsoft.com/kb/2533009/en-US
Senior IT System Engineer (author) commented:
Ah, I see the challenge here. So I guess there is no way to prevent a user from logging in to two different data centers at the same time?
David (President) commented:
You could write a program / policy that checks whether the user is currently logged on elsewhere, but latency and practicality probably make this unworkable unless you wrote a C program that used a shared pipe for interprocess communication.

But even then, if somebody fails to log off, you'll have some housekeeping to do to deal with corner cases.

Maybe there is a native way - I don't know that there isn't. I'm just saying that if there is no native utility, it would be difficult to get this right without timeouts.

I'd just have them both log in so half are on one and half on the other, let the O/S do replication, and be prepared for the performance hit and delays if either system dies.
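Just to illustrate the "check whether the user is already logged on elsewhere" idea (not a robust solution, for the timing reasons above), a sketch using quser.exe against both servers - the server names and username are placeholders:

```powershell
# quser.exe queries interactive sessions on a remote host
# (requires appropriate rights on the target). Hypothetical names:
$servers = "FS-DC1","FS-DC2"
$user    = "jsmith"

foreach ($s in $servers) {
    # 2>$null suppresses the error quser raises when no one is logged on
    $sessions = quser /server:$s 2>$null
    if ($sessions -match "\b$user\b") {
        Write-Output "$user already has a session on $s"
    }
}
```

Even if this check passes, a user can still race it by logging on to the second site before replication catches up, which is the corner case described above.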
kevinhsieh commented:
Does each user need active/active access? If not, you can have half of your users on one datacenter and half on the other. Each user's directories would be on each server, but the DFS namespace link would have only one target active at a time. I do this for departmental shares, where file locking is obviously important.
Senior IT System Engineer (author) commented:
Hi all,

Well, the idea is to allow users to work without any file-locking issues when logged in to either data center.

@Kevin: in Windows AD terms, how do I allocate one set of users to one DC and the rest to the other?
David (President) commented:
Then forget it, unless this is something like a SQL database, or you have a NAS appliance.
Senior IT System Engineer (author) commented:
Why is a NAS appliance needed in this case?
Or is an NFS share sufficient for this type of scenario?
kevinhsieh commented:
You would need to specify different UNC paths for different sets of users.
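One hedged way to do that split in AD terms - the OU paths, drive letter, and UNC paths below are placeholders, not from the thread - is to stamp each group of users with a different home directory:

```powershell
# Point each half of the user population at a different file server
# by setting their AD home directory. Requires RSAT AD tools.
Import-Module ActiveDirectory

# Users homed in DC1 (hypothetical OU)
Get-ADUser -SearchBase "OU=DC1Users,DC=domain,DC=local" -Filter * |
    ForEach-Object {
        Set-ADUser $_ -HomeDrive "H:" `
            -HomeDirectory "\\DC1fileserver\users\$($_.SamAccountName)"
    }

# Users homed in DC2 (hypothetical OU)
Get-ADUser -SearchBase "OU=DC2Users,DC=domain,DC=local" -Filter * |
    ForEach-Object {
        Set-ADUser $_ -HomeDrive "H:" `
            -HomeDirectory "\\DC2fileserver\users\$($_.SamAccountName)"
    }
```

The same split can be done with per-site Group Policy for folder redirection instead of per-user attributes.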
David (President) commented:
Let's get this straight, just in case we're not on the same page. A system that uses NFS shares IS its own file server, and the mount is on the system that is sharing the data.

CIFS / SMB, Windows' native file sharing, is NOT NFS. But you can set up an NFS service on W2K12.

When developers write code that is "network aware", they have to deal with things like file locking, including byte-range locks with starting/ending offsets. The code differs depending on the network and file system type. Many developers don't even bother to deal with this stuff.

So really you need to look at it from the perspective of the apps that use the data. Contact the developer(s), tell them what you want to do, and ask whether their application code does locking properly to prevent corruption when data is cached on one host and written on another.

This isn't an O/S thing; the O/S gives the developer system calls for locking, semaphores, recovery methodology, races, and concurrent access. IF the developers who wrote the code did so with your topology in mind, then it will work.

If they didn't, you'll have data loss guaranteed without even knowing it.

Two file servers CAN be configured to replicate data back and forth, but two people reading/writing the same file(s) at the same time, outside the control of an executable designed to support such a thing, will destroy your data.
David (President) commented:
P.S. If you have a specific application or need in mind, and it's something you have the source code to, then it can be done. But it may not be cheap. That is why people use SQL Server and the like in a cluster, so they don't have to write that code themselves.

Or buy a nice NetApp appliance and get lots of redundancy.

We can save a lot of time if you describe the bare minimum that you must have (be specific about applications and data) and your BUDGET. This is an exercise in futility if there is no budget and people can propose whatever they want.
Senior IT System Engineer (author) commented:
Many thanks for the sharing and explanations, guys.

@DLethe: what I'm trying to implement here is a normal user home directory and file server for a Citrix VDI deployment. The NetScaler / load balancer redirects people randomly to two different data centers, which is why I need to know the best way to achieve this without causing too many issues.
gmbaxter commented:
Do you have shared storage which is accessible in both data centres?
Senior IT System Engineer (author) commented:
Baxter,

Yes I do - do you mean a LUN that is accessible from both sites?
I'm using an EMC VNX 5300 in DC1 and a VNX 5500 in DC2, so how can they be leveraged in this case for my file server / home drive NTFS LUNs?
gmbaxter commented:
Hi,

Yes - can the VNX 5300 provide a LUN which can be seen in both DC1 and DC2, and the same for the VNX 5500?
Senior IT System Engineer (author) commented:
GMBaxter,

It can only be done using replication, with EMC Replication Manager as the orchestrator and SAN Copy as the underlying technology.
gmbaxter commented:
OK - would that pass Windows cluster validation? The quorum volume would have to be accessible by both nodes.

You could have a two-node file server cluster with two cluster roles, home folders and profiles. This would get around the issues of hosting homes and profiles on a DFS volume.
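As a rough sketch, that validation question can be answered empirically by running the cluster validation wizard from PowerShell before building the cluster - the node names below are placeholders:

```powershell
# Validate a prospective two-node stretched cluster, including the
# storage tests that check whether both nodes can see the shared LUNs.
# Run from a node with the Failover Clustering tools installed.
Import-Module FailoverClusters
Test-Cluster -Node "FS-DC1","FS-DC2" `
    -Include "Storage","Inventory","Network","System Configuration"
```

If the storage tests fail because the replicated LUNs aren't simultaneously visible to both nodes, a stretched cluster over that storage layout won't be supported.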
Senior IT System Engineer (author) commented:
Well, theoretically that should be possible.

Do you mean creating a stretched MSCS / geo cluster between the two DC sites,
with one server active in production and one passive failover at the DR site?
gmbaxter commented:
Yes, stretch the cluster between the two sites, with each server a member of the cluster.

Then add two roles to the cluster: one a file share for homes, the other a file share for profiles. Set the preferred owner of each cluster resource to be its primary node.
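A minimal sketch of that layout with the FailoverClusters PowerShell module - the role, disk, and node names are placeholders for your environment:

```powershell
Import-Module FailoverClusters

# One clustered file server role per share, each backed by its own disk
Add-ClusterFileServerRole -Name "Homes"    -Storage "Cluster Disk 1"
Add-ClusterFileServerRole -Name "Profiles" -Storage "Cluster Disk 2"

# Prefer the DC1 node for Homes and the DC2 node for Profiles;
# the other node is listed second as the failover target.
Set-ClusterOwnerNode -Group "Homes"    -Owners "FS-DC1","FS-DC2"
Set-ClusterOwnerNode -Group "Profiles" -Owners "FS-DC2","FS-DC1"
```

This gives each site one locally-owned share in normal operation, with failover to the other node if its host or site goes down.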

What bandwidth do you have between the two sites?
kevinhsieh commented:
Fundamentally, Windows isn't designed to give local access in multiple locations, because there is no global file locking. Any native Windows clustering solution requires one site to go over the WAN to access files. This can be mitigated with high-bandwidth, low-latency links; WAN optimization and caching such as Riverbed or Silver Peak; or possibly BranchCache.

An active/passive failover cluster buys you nothing in terms of performance, because only one node can be active and the other site will need to go over the WAN - so it is basically no better than DFS-R and DFS-N with only one namespace target active.

I think the best design is either to make one datacenter active and the other standby, or to just accept that half of your sessions will pull files over the WAN and mitigate that.
Senior IT System Engineer (author) commented:
Hi all,

The bandwidth is plentiful, with a dark fibre connection between the sites.

@Kevin: so what happens to the other half of the user profiles and home directories when the link between the DCs is cut or unavailable due to a disaster or other issue?
gmbaxter commented:
If bandwidth isn't an issue, then modify your SAN fabric so that DC1 and DC2 can each have a LUN mapped from both VNXs.

Then, in the cluster, set the homes share's preferred owner to the DC1 node and the profiles share's preferred owner to the DC2 node.
Senior IT System Engineer (author) commented:
And then, after that, set up DFS-R to sync them against each other?
Senior IT System Engineer (author) commented:
Thanks guys !
Question has a verified solution.