• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 267

HA File Server on Server 2008

Looking at ways to implement an HA solution on Server 2008.

I had a look at DFS Replication, but the problem is that it only seems to allow a single namespace server, with shared folders that have multiple target locations.

Failure of the namespace server would mean folders could no longer be accessed through the namespace. Additionally, from what I've read, there is no mechanism to lock a file when it is opened by one user, so there will be issues with data being incorrectly opened/saved.

Alternatively, I can set up a failover cluster as described in the following guide:

However, this arrangement does require shared clustered storage, and I'm not looking to invest in an expensive SAN solution. I'm thinking of possibly using some other means of storage virtualisation.

Basically, I'd like to set up an HA file server with just two servers, in a cost-effective way. If there are any suggestions on achieving this, please let me know. Thank you.
1 Solution
dave558 (Author) Commented:
Regarding the DFS namespace and replication, it does seem to place locks on files. Is anyone able to clarify how this is achieved?

I noticed that Word is able to lock a document against editing by another user from within the Word application. However, that doesn't prevent Notepad from accessing the same file and modifying its content.

So it seems certain applications like Word do prevent editing/renaming/deleting of locked files, whereas others like Notepad neither respect an existing lock nor place one themselves, and can modify/delete such files at any time.

I've just been reading about byte-range locks for file locking, but would like to know more about the mechanism. Are the locks managed by the OS, with some sort of database tracking which files are open and which portions of each file are locked?
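
For what it's worth, on Windows the kernel tracks locked byte ranges on each open file handle itself, so there is no separate user-visible lock database; another handle only hits the lock when it requests a conflicting lock or touches a locked range. Here is a minimal sketch (the file path and lock size are made up) using Python's msvcrt module, which wraps the Win32 byte-range locking API:

# Minimal sketch of Windows byte-range locking via msvcrt (assumed
# path/sizes). The kernel keeps the locked ranges with the open file
# handle; another handle writing into a locked range gets an error.
import msvcrt

LOCK_LEN = 1024  # lock the first 1 KB of the file

with open(r"\\server\share\ledger.dat", "r+b") as f:  # hypothetical UNC path
    f.seek(0)  # locking() applies from the current file position
    # Non-blocking exclusive lock; raises OSError if another handle
    # already holds a conflicting lock on an overlapping range.
    msvcrt.locking(f.fileno(), msvcrt.LK_NBLCK, LOCK_LEN)
    try:
        data = f.read(LOCK_LEN)  # no other writer can touch this range
        # ... modify data and write it back ...
    finally:
        f.seek(0)  # must unlock the same range that was locked
        msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, LOCK_LEN)

Crucially, that lock state lives on the server holding the open handle, which is why a lock on one DFS replica says nothing about the copy of the file on another member.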
dave558 (Author) Commented:
Actually, there does appear to be an option to add multiple namespace servers.

However, after disconnecting one of the file servers, it doesn't seem to use the second namespace server for file access; I can't access the network drive.
You cannot use links for this; use folder targets on the servers where the shares exist. Then double-check your DFS namespace distribution.

What do you mean by disconnecting? Unplugging from the network, or removing the server as a target?

A disconnect from the network recovers faster than the propagation of a target's removal from the referral list: clients cache the namespace referral and fail over to the next cached target when one stops responding, whereas a changed target list only reaches them once the cached referral expires.

Not sure what locks you mean. Client applications that access a file and lock it will prevent replication of that file until the client application exits. For example, Office products create a hidden ~$ owner file (e.g. ~$filename.doc) for each file they open. Open files are not replicated; when you save/close the file, it will be replicated.
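
To illustrate the application-level side of that, here is a small sketch (the folder path is made up) that lists the hidden ~$ owner files Office leaves next to open documents; their presence, rather than an OS lock, is what warns the next Office user:

# Sketch: list Office "owner files" (~$...) in a folder to see which
# documents are currently open somewhere. The folder path is invented.
from pathlib import Path

share = Path(r"\\server\dfsroot\docs")  # hypothetical DFS folder

for owner_file in share.glob("~$*"):
    # Office creates a ~$ owner file alongside a document while it is
    # open, and normally deletes it again when the document is closed.
    print(f"in use: {owner_file.name}")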

Cliff Galiher Commented:
DFS-R really isn't meant to be used for HA, and yes, file locks will be an issue. It isn't that files can't be locked; it's that only the referenced file is locked, not all of its replicas. Here is a scenario where that can be a problem:

HA, by definition, means you probably don't want to have to manually fail over or fail back; otherwise it wouldn't technically be "highly" available. It'd be more of a DR setup. Since, with HA, you'd want failover to be automatic, there are plenty of network conditions that could cause one workstation (we'll call this workstation A) to get a referral to Server A from the namespace server, and workstation B to later get a referral to Server B from the same namespace server.

Now, when workstation A *opens* a file, it places a lock on that file. But that lock exists *only* on the copy on Server A; DFS-R does *not* lock the copy on Server B. So workstation B can open the copy that exists on Server B without ever "seeing" the lock, and then you are in a situation where a version conflict will exist.

That is where DFS-R can really bite you, and it isn't even that rare. It isn't an edge case. When sysadmins try to use DFS-R for HA, it almost always happens within a week or two, and the user experience creates a bad perception, even if the actual technical issue may seem minor.
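
To make the failure mode concrete, here is a toy simulation (all server, workstation, and file names invented) of two replicas that each keep their own lock state, which is essentially what DFS-R gives you:

# Toy model of the DFS-R locking problem: each replica keeps its own
# lock table, so a lock taken on one member is invisible on the other.

class Replica:
    def __init__(self, name):
        self.name = name
        self.content = "v1"
        self.locks = set()  # lock state is strictly local to this member

    def open_exclusive(self, filename, workstation):
        if filename in self.locks:
            raise PermissionError(f"{filename} is locked on {self.name}")
        self.locks.add(filename)
        print(f"{workstation} locked {filename} on {self.name}")

    def save(self, filename, new_content, workstation):
        self.content = new_content
        print(f"{workstation} wrote {new_content!r} on {self.name}")

server_a, server_b = Replica("ServerA"), Replica("ServerB")

# The namespace refers each workstation to a different member:
server_a.open_exclusive("budget.xlsx", "WorkstationA")
server_b.open_exclusive("budget.xlsx", "WorkstationB")  # succeeds: no shared lock

server_a.save("budget.xlsx", "v2 from A", "WorkstationA")
server_b.save("budget.xlsx", "v2 from B", "WorkstationB")
# When replication catches up, DFS-R picks a winner and moves the losing
# version to the ConflictAndDeleted folder: a version conflict.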

As far as HA goes, you do need shared storage. There is no truly "cheap" way to do this, but for just a two-server setup, shared SAS would be quite a bit less expensive than a SAN, and it can get you into a truly HA setup *relatively* inexpensively. If shared SAS still breaks the budget, I'd honestly have to argue that the company probably doesn't really need HA. The point of HA is that *any* downtime costs enough money that the cost of shared storage is less than the revenue that would otherwise be lost, and is therefore justifiable. It's all a matter of calculating the ROI.
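
To put rough numbers on that calculation, here is a back-of-the-envelope sketch; every figure is an invented placeholder, so substitute your own estimates:

# Back-of-the-envelope ROI check for shared storage. All numbers are
# invented placeholders; plug in your own estimates.
downtime_cost_per_hour = 500.0   # revenue/productivity lost per hour of downtime
avoided_hours_per_year = 8       # outage hours per year that HA would prevent
shared_storage_premium = 6000.0  # extra cost of shared SAS over two plain servers
lifespan_years = 4

avoided_loss = downtime_cost_per_hour * avoided_hours_per_year * lifespan_years
print(f"avoided loss over lifespan: ${avoided_loss:,.0f}")
print(f"shared storage premium:     ${shared_storage_premium:,.0f}")
print("HA justified" if avoided_loss > shared_storage_premium else "HA not justified")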

dave558 (Author) Commented:
In terms of ROI, it's not worthwhile to invest in SAN/shared SAS.

Thought I'd see if there are ways of setting up something similar, possibly virtualising normal SATA hard drives as shared storage. Performance isn't a criterion here, as there aren't many people accessing the file server at the same time.

Thanks for your comments, greatly appreciated.
dave558 (Author) Commented:
Thanks, very insightful.