Ways to improve file read/write over WAN

Hi everyone,

Problem: file access is slow over the WAN

Situation:

We currently host all files at our central office, which has a 10 Mbit connection to our WAN (an ATM network). All remote sites are on local 3 Mbit ADSL (which really only delivers 1 to 1.5 Mbit). The PVCs point towards the main site (10 Mbit). We use EIGRP, but the network is very static, so changes rarely occur. The routers (Cisco all the way) are not running out of CPU or memory, so the problem is not there.

We have one main file server. It's fast enough (dual-processor Xeon with 1 GB of memory), so this is really not a server hardware issue: file access is fast at the main site. Only at remote sites with 10+ users do things get a little too slow at times.

The problem is the ADSL loops, which are too slow when network traffic peaks. We do not want to invest in faster ADSL (we can hardly get faster anyway). What I will accept as an answer is a way to cache files at the remote sites (even if that means setting up local servers). My main constraint is backups: they are all done at the main office, and I don't want to back up files remotely over the WAN at night.

I would like a solution that provides some kind of caching at the remote sites, with synchronisation back to the main site.

Limiting the bandwidth used is almost essential (I don't want the sync to take 50% of the bandwidth for half the day). It has to be reliable, and it should run on Windows 2000/2003 or on Linux.
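For a rough sense of what a bandwidth-capped sync would cost, here is a back-of-the-envelope calculation in Python. The 500 MB of changed data per day is purely an illustrative assumption, not a figure from the question; the link speed and cap come from the numbers above.

```python
# Back-of-the-envelope: how long does a capped sync take?
# Assumptions (illustrative): 500 MB of changed files per day,
# a 1.5 Mbit/s ADSL line, and the sync capped at 50% of the link.

changed_mb = 500                     # MB changed per day (assumption)
link_mbit = 1.5                      # ADSL line rate in Mbit/s
cap_fraction = 0.5                   # sync allowed to use 50% of the link

effective_mbit = link_mbit * cap_fraction          # 0.75 Mbit/s
seconds = changed_mb * 8 / effective_mbit          # MB -> Mbit, divide by rate
hours = seconds / 3600

print(f"{hours:.1f} hours")          # roughly 1.5 hours
```

So even a fairly large nightly delta fits comfortably in an off-hours window at half the line rate, which is why a scheduled sync is attractive here.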

I'll answer any questions or give more information on request.
Cheers !
lussier Asked:
 
chicagoan Commented:
http://www.newarchitectmag.com/documents/s=2451/na1002a/

I offer up the document management solution only because it gives users the appearance of speed, and on slow WAN links they tend to keep copies of stuff everywhere. It gets them organized and lets them find things quickly. They stop using mail as a filing cabinet and WAN transport.

You're not going to be able to "cache" Office documents in the sense that a web cache works.
Synchronisation is sort of the ultimate pre-fetch and write cache.

JFrederick29's comments shouldn't be dismissed; background traffic and misuse can kill a WAN link.
As utilization gets higher the link becomes less efficient: people hit reload, apps retry, and it spirals out of hand.
A couple of goofballs watching NFL clips or a couple of Kazaa installations can bring your link to its knees.

 
JFrederick29 Commented:
I would suggest downloading Network Performance Monitor from www.solarwinds.net (30-day evaluation).  You can set up SNMP on your router and monitor the ADSL line's utilization.
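Whatever tool does the polling, the utilization number comes from the same arithmetic: sample the interface byte counter (ifInOctets in the standard interfaces MIB) twice and divide the bits moved by the poll interval times the line rate. A small sketch, with hypothetical counter values:

```python
# Link utilization from two SNMP ifInOctets samples (hypothetical values).
# ifInOctets is a 32-bit counter of bytes received on an interface;
# utilization = bits moved between polls / (interval * line rate).

def utilization_pct(octets_t1, octets_t2, interval_s, line_bps):
    """Percent utilization between two counter samples."""
    delta = (octets_t2 - octets_t1) % 2**32   # handle 32-bit counter wrap
    bits = delta * 8
    return 100.0 * bits / (interval_s * line_bps)

# Example: 5 MB received during a 60 s poll interval on a 1.5 Mbit/s line.
pct = utilization_pct(1_000_000, 6_000_000, 60, 1_500_000)
print(f"{pct:.0f}%")   # 44%
```

The modulo handles the case where the 32-bit counter rolls over between polls, which happens quickly on busy links.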

Are your servers local? Are you running Active Directory? Are your DNS, WINS, and DCs local? What traffic is crossing the WAN? Please provide more information...
 
chicagoan Commented:
One problem is that when you browse a share, even if it's NetWare or CIFS on NAS or Linux, Explorer wants to peek at every friggin' file and fetch icons and associations instead of just reading the FAT like a good little redirector.
Redirection operations in general over a WAN link are going to be pokey.
You can set up synchronisation, web-based document management at the main site, or terminal services; it depends on the size of the files, the types of applications, etc.
 
lussier (Author) Commented:
As I've explained, the file server is at the main site (10 Mbit to the WAN) and the users are at remote sites accessing this server (over 1.5 to 3.0 Mbit ADSL loops to the WAN).

DNS and the DCs are remote from the users' point of view (at the same main site).

As for what traffic crosses the WAN, it doesn't really matter. What I want is a caching solution. I.e.: a user opens a file from the main server, and it gets cached on a server at the remote site (one that I could add; there are none currently). He works on the file and saves it, which updates the cached copy on the remote site server. Then, at a set time, there is a synchronisation between the remote server and the main server where the file resides.
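The workflow just described is essentially a read-through cache with write-back and deferred sync. A toy sketch of that logic in Python; all names here are hypothetical, and a real deployment would use something like rsync, DFS, or offline folders rather than this in-memory model:

```python
# Toy model of the read-through / write-back workflow described above.
# The "main server" and "remote cache" are plain dicts; this only
# illustrates the control flow, not a real file-serving protocol.

class RemoteCache:
    def __init__(self, main_server):
        self.main = main_server       # dict standing in for the main file server
        self.cache = {}               # local copies at the remote site
        self.dirty = set()            # files changed locally, pending sync

    def open(self, name):
        # Read-through: first open pulls the file over the WAN,
        # subsequent opens are served locally.
        if name not in self.cache:
            self.cache[name] = self.main[name]
        return self.cache[name]

    def save(self, name, data):
        # Write-back: saves hit only the local cache and are marked dirty.
        self.cache[name] = data
        self.dirty.add(name)

    def nightly_sync(self):
        # At a set time, push only the dirty files back to the main server.
        for name in self.dirty:
            self.main[name] = self.cache[name]
        self.dirty.clear()

main = {"budget.xls": "v1"}
site = RemoteCache(main)
site.open("budget.xls")        # cached on first open
site.save("budget.xls", "v2")  # fast local save, no WAN traffic
site.nightly_sync()            # main server now holds "v2"
print(main["budget.xls"])      # v2
```

The hard part a real product has to solve, which this sketch ignores, is conflict handling when two sites edit the same file between syncs.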

That's what I want..

 
lussier (Author) Commented:
chicagoan :

What do you mean by web-based document management? We do use TS as well, but that's not related, as the users open their files on their workstations and not in a TS session.

Most documents are MS Office related (Word, Excel) or PDF.

Do you have any product (MS or other) that you would propose?

Thanks !
 
Robing66066 Commented:
This looks like a situation where you could combine a good document management system with a Terminal Services or Citrix-type solution.

Put the documents that the users work with on the central TS server.  Install a document management system such as Microsoft SharePoint and make it, and Office, available in the users' TS sessions.  The users can work within the TS session as though the document had been downloaded to their machine.

Trying to manage data across a WAN link can cause all sorts of havoc.  Imagine a user with a 60 MB Excel file.  He updates it four times an hour, and his work habits are such that he opens and closes the document every time he makes a change.  Your network will die a horrible death under those conditions.  Now picture that document shared amongst 10 other people and you have the situation I faced six months ago.
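The arithmetic behind that horror story is easy to check. A rough calculation, under the assumption that each open downloads and each save uploads the full 60 MB file:

```python
# Rough WAN load from the 60 MB Excel scenario above.
# Assumption: every open and every save moves the whole file over the WAN.

file_mb = 60
updates_per_hour = 4            # the user opens and saves 4 times an hour
users = 10                      # the document is shared amongst 10 people

mb_per_hour = file_mb * updates_per_hour * 2 * users   # *2: download + upload
mbit_per_s = mb_per_hour * 8 / 3600                    # average sustained rate

print(f"{mb_per_hour} MB/h, avg {mbit_per_s:.1f} Mbit/s")
# 4800 MB/h, avg 10.7 Mbit/s
```

An average of over 10 Mbit/s from one shared spreadsheet dwarfs a 1.5 Mbit ADSL loop, which is exactly why the link died.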

We moved him (and his friends) onto a Citrix solution with SharePoint and never looked back.  Traffic on the WAN became reasonable again, and they just love the speed at which the file 'transfers' to them.

Good luck!
 
lussier (Author) Commented:
Robing,

Thanks for your idea. I already run multiple Citrix servers here, but I would rather have users work on their workstations. The reason is simple: if I move everything onto the Citrix servers, it'll require a hardware upgrade, as I'll run out of memory and my CPU will reach new peaks.

If I take your situation, I would require 600 MB on my Citrix server for 10 users. Now imagine that I have 60-80 users on each of them. Whoa.

It's a great offer, but I still believe there must exist some kind of caching/sync solution for files.
 
lussier (Author) Commented:
chicagoan,

I'm not dismissing JFrederick's comment for sure, but I doubt that it's the case. The users have really restricted rights and cannot install software on their local PCs. Also, only a limited set of users have access to the Internet. I highly doubt that DNS/DHCP would overflow the WAN link.

The main traffic comes from Citrix connections, files downloaded locally from the file server, and email.

If no one gives me a better solution, I'll give you the points, as you provided the closest match to what I was looking for. And I might be able to work something out with the open source project discussed in the article you linked to.

But somehow I still believe in this marvelous caching/sync solution; if it doesn't exist, it should be developed.

Cheers.
 
chicagoan Commented:
It's the nature of the applications and workflow.
Certainly in theory one could cache everything you worked on and just send the delta changes, but Redmond never did understand networking. Geez, I remember in the NT 3.5 days their whole building was flat, running NetBIOS, and the net-Doom players would bring the whole place down.

 
lussier (Author) Commented:
Thanks Guys !
 
cooledit Commented:
Just a simple question: have you created a secondary zone for all your remote networks to look up in the DNS database?

Meaning something like this:
Main site DNS server: x.x.x.x (IP address range)
Add the servers within the secondary zone, e.g. the file server at 10.40.0.50 (it must be included in the zone)

It's just a matter of centralizing the offered services to the clients.
 
cooledit Commented:
Oh, I forgot to add something:
there must also be an entry for a server on each of the different subnets in the secondary DNS zone at the main DNS.

If you have, let's say, 5 subnets (10.40.5.1/10.40.6.1/10.40.7.1/10.40.8.1/10.40.9.1),
then there must be an entry for each subnet in the DNS.
 
Robing66066 Commented:
That's cool, lussier.  Just an idea...  :)

(BTW, even if you did have 80 users and each of them required 600 MB, you would only need a 48 GB drive.  I just bought an 80 GB drive for $100 for my home PC...)

Later!  
 
lussier (Author) Commented:
Robing,

Weren't we speaking about memory? 80 users × 60 MB each = 4.8 GB of RAM. You really don't want to fall back to disk on a TS. Luckily enough, average memory per user rarely goes over 20 to 30 MB.

Cheers.
Question has a verified solution.
