• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 509

DFS default target server within one site

I have a DFS setup between two Windows Server 2008 R2 servers in the same site. Everything is running smoothly except for one issue...

DFS randomly selects which of the two servers each client PC uses as a target, since both servers are in the same site. If a document is opened simultaneously by two users, one resolving to the FS1 target and the other to FS2, there is no read-only lock message stating that the file is already open by another user. Both users can edit the document at the same time, so one user's changes are lost. I also cannot simply retrieve the discarded version from the ConflictAndDeleted folder, as it either doesn't appear there or not all of the changes are present (and it doesn't make sense to have to manually recover a file every time this happens).

I've decided to point certain users to FS1 by default, and to fall back to FS2 only if the first server is unreachable. That way they will all use the same target path by default and receive the lock message. I know this may be possible from the "Properties" > "Advanced" setting on the namespace, but that setting applies to ALL users. I only want this behavior for one or two specific OUs. Is there any way to set this up for a specific OU only?
Asked by: Sleezed

2 Solutions
 
arnoldCommented:
The DFS target referral ordering is all-or-nothing per namespace folder (the referral policy); it cannot be scoped to an OU.
If the OU in question has a dedicated share, e.g. \\domain\root\oushare, you can configure the referral policy on that folder to achieve your goal.

The issue you are running into is that the lock state marking a file as in use does not replicate, because DFS Replication cannot replicate a file while it is open.
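For reference, on a management machine with the DFSN PowerShell module (Windows Server 2012 or later, or RSAT; on 2008 R2 the same settings live in the DFS Management console), the per-folder referral priority could be set along these lines. The namespace path and share names below are placeholders, not taken from the question:

```shell
# Prefer FS1 for the OU-specific folder by raising its target's priority class
Set-DfsnFolderTarget -Path "\\domain.local\root\oushare" `
    -TargetPath "\\FS1\oushare" -ReferralPriorityClass GlobalHigh

# Demote FS2 so it is only handed out when FS1 is unavailable
Set-DfsnFolderTarget -Path "\\domain.local\root\oushare" `
    -TargetPath "\\FS2\oushare" -ReferralPriorityClass GlobalLow
```

Clients that already have a cached referral keep using it until the referral TTL expires, so the change is not instantaneous.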
 
SommerblinkCommented:
The problem you're running into stems from a larger misconception about what DFS can do, and I disagree with your assessment that DFS is running smoothly.

DFS should never be configured so that the same data can be read and written through more than one target at the same time.

You don't mention how large your environment is, but in all my single-site DFS deployments I keep only one target per share active at any given time. If that server fails, I manually switch the target to the other server.
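That manual switch can be sketched as follows, assuming the DFSN PowerShell module is available and using placeholder namespace and share names: take the failed server's target offline and bring the standby online, so new referrals only ever hand out one path:

```shell
# Stop referring clients to the failed server's target
Set-DfsnFolderTarget -Path "\\domain.local\root\share" `
    -TargetPath "\\FS1\share" -State Offline

# Activate the standby target on the surviving server
Set-DfsnFolderTarget -Path "\\domain.local\root\share" `
    -TargetPath "\\FS2\share" -State Online
```

On 2008 R2 itself the equivalent is enabling/disabling the folder targets in the DFS Management console.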

While my failover time is about 15-20 minutes, that's pretty good given that it requires no specialized hardware or software.

If a client needs shorter downtime, I look to Failover Clustering instead, but that requires a specialized hardware configuration; the software is still free.
