AlphaLolz asked:
Tolerance of JSP-based applications to latency/bandwidth constraints when doing file reads/writes to shares

We have a JSP web app (running on Tomcat on Windows) that needs to do file reads and writes. We want to distribute the web servers but need a central file server, so we know we're going to introduce latency as the app reads and writes to the shared folder (\\server\folder-style sharing via SMB).

Is there some sort of document, or any good articles, on the latency ceiling for this? I expect it's not a hard limit, but that at some point the latency/bandwidth makes the process take longer, and hence more web server resources get consumed by the requests that trigger these reads/writes, probably mainly memory for the threads/code while they run. What I'm really trying to find is anything about failures. For instance, if reading a 50 KB file involves calls with 1,000 ms or 2,000 ms of latency, what is the effect? Is there a point (say, 3,000 ms of latency) at which the read/write will start generating errors, in other words timeouts and the like?
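
One thing I assume we could do on our side is impose our own timeout at the application layer, so a slow read fails fast instead of tying up a request thread indefinitely. A rough sketch of the pattern in plain Java (the pool size, the UNC path, and the 3,000 ms cutoff are made-up values, not anything we've settled on):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class SmbReadWithTimeout {
    private static final ExecutorService pool = Executors.newFixedThreadPool(8);

    // Fail a slow read after timeoutMs instead of letting the request thread hang.
    static byte[] readWithTimeout(String uncPath, long timeoutMs) throws Exception {
        Future<byte[]> f = pool.submit(() -> Files.readAllBytes(Paths.get(uncPath)));
        try {
            return f.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // Files.readAllBytes blocks inside the OS call, so the worker thread
            // is only freed once the SMB operation itself returns or fails.
            f.cancel(true);
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] data = readWithTimeout("\\\\server\\folder\\doc.dat", 3000); // placeholder path
        System.out.println("read " + data.length + " bytes");
        pool.shutdown();
    }
}

That at least turns "when does it error?" into a number we choose, rather than whatever the OS-level SMB timeout happens to be.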
ASKER CERTIFIED SOLUTION (Travis Martinez)

AlphaLolz (ASKER):

Actually, I'm not asking any of that.

I'm going to be deploying a web application (JSP on Tomcat) in an IaaS Azure environment. The application will need to store perhaps 100,000 files daily and return up to 25,000 of them daily. The files themselves are 50 KB to 100 KB as a rule.
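
For scale, assuming the load were spread evenly (it won't be; peaks will run higher): 100,000 writes over 24 hours is about 1.2 writes per second, or roughly 3.5 per second across an 8-hour business day, and at 100 KB per file that's on the order of 10 GB written and 2.5 GB read back per day.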

My need is to deploy my application with resilience to physical site disaster events. My company's cloud delivery team is asking me to provide my tolerances so they can procure the underlying capabilities I need.

There are other parameters I'll be providing, but right now I'm trying to establish the latency ceilings that will still let me hit the service levels I have to deliver to my users.

I'm currently storing or returning documents in about 0.8-1.5 s within the company data center. I'm willing to tolerate 2-3 s at Azure. My testing with a simple proof of concept in Azure is actually faster than that, but I don't yet have a distributed architecture at Azure. My users won't tolerate 5 s; that's my ceiling.

So I have to break my SLA down into time spent reaching Azure from the data center, time spent within my web app, and time spent on reads/writes.
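
To get those per-leg numbers, my plan is to instrument the store path so each leg shows up separately in the logs. A minimal sketch of what I mean (the process() body and the share path are placeholders for the real app logic and the real file server):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class StoreTimer {
    // Stand-in for the real application work (validation, transformation, etc.).
    static byte[] process(byte[] in) { return in; }

    // Store one document and report where the time went: app work vs. SMB write.
    static void storeAndTime(byte[] doc, Path target) throws IOException {
        long t0 = System.nanoTime();
        byte[] processed = process(doc);
        long t1 = System.nanoTime();
        Files.write(target, processed);
        long t2 = System.nanoTime();
        System.out.printf("app=%dms write=%dms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }

    public static void main(String[] args) throws IOException {
        byte[] doc = new byte[75 * 1024]; // a typical document in the 50-100 KB range
        storeAndTime(doc, Paths.get("\\\\server\\folder\\test.dat")); // placeholder share path
    }
}

Run against both the data center share and the Azure share, that should tell me how much of my 5 s budget the read/write leg is actually eating.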