Solved

How to prioritize Web Access between threads

Posted on 2013-01-12
459 Views
Last Modified: 2013-01-13
I have a Windows C++ application that needs to access the web for a few kbytes of data every few minutes.  Latency for those bytes must be reasonably low--less than a second.

The application creates many files, each around 1 MB. If I upload those files with FileZilla while my application is running, my application suffers from occasional time-outs, even with the time-out set to as long as five seconds.

Can I add a thread to my application to upload files in such a way that it would avoid impacting the latency of those 1 Kbyte web accesses? Would it be sufficient to just run the uploading thread at a lower priority, or are there shared resources (WiFi, and everything else between my PC and the cloud) that could be consumed by the file upload even if its thread is lower priority?
Question by:DaveThomasPilot

4 Comments
 
LVL 31

Expert Comment by:Frosty555
I don't think this is a thread priority issue, it's a quality-of-service issue on your network.

FileZilla is uploading a file. It is most likely saturating your available Internet upload bandwidth. When this happens there is a long lag for any other application accessing the Internet, in particular anything that requires a back-and-forth handshake like an HTTP request. Your application's HTTP request is queued behind a mile of FileZilla packets all waiting to be transmitted.

The simplest thing to do is to throttle FileZilla's upload rate to something less than your ISP's upload bandwidth limit. Check your speeds at www.speedtest.net, and give FileZilla a maximum of something like 50-60% of the measured upload speed.

A better solution is to have a router which is capable of QoS / bandwidth priority. Configure it so that HTTP traffic has high priority and FTP traffic has low priority. But this needs to be done at the Router level, because the lag is not being caused by your computer - it's being caused by your network.

Realistically, because the Internet is a best-effort service and because lags on the network DO happen (it may not necessarily be your computer causing the problem, somebody else on the network could be saturating the bandwidth), you really have to have a 30-60 second timeout for web requests.
Author Comment by:DaveThomasPilot
I've tried using the "speed limits" in FileZilla and then checking whether the ping response to unrelated sites (like google.com) changed. It didn't seem to have any impact.
Maybe FileZilla's speed throttling mechanism reduces the data transfer rate on average, but the "time constant" over which it enforces that slower average is too long for a tiny HTTP request to get a crack at the bandwidth with low latency.

Generally, I'll have no control over the router. The application is used at many different venues, on public WiFi networks. Are you suggesting I use a dedicated router for my PC that connects to the public WiFi? Is there no way to do something on my PC to eliminate the need for the external router? And how would that work anyway, since there's only one MAC address for my laptop, from which both the FTP transfers and the HTTP requests are coming?


Instead, I'm thinking along the lines of doing something like FTP from a lower-priority thread (C++ code), but transferring partial files instead of whole files. I'd target the fragment size so that a fragment rarely takes more than about a second to upload. After each fragment finishes (or times out), the thread would call SwitchToThread(), or an equivalent, to relinquish its timeslice to the application's latency-critical thread.

So, something like FTP, but with extra parameters to limit the number of bytes transferred, a time-out, and the ability to keep track of how much of the file got uploaded, so the next transfer attempt could start from that place in the file.

That would mean something would have to run on the host side, to "reassemble" the file fragments and create/save the file when all the data had arrived.

I was hoping that I was just ignorant of some common file transfer technique that was like ftp, but had this type of additional functionality.
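The fragment-and-resume scheme described above can be sketched in a few lines of C++. This is only an illustration of the bookkeeping, not a real transfer: `send_fragment` is a hypothetical stand-in for whatever transport would actually move the bytes (FTP APPE, an HTTP PUT with an offset, etc.), and here it just records what would have been sent.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Records each (offset, length) pair "sent", standing in for a real transport.
struct FragmentLog {
    std::vector<std::pair<std::size_t, std::size_t>> sent;
};

// Hypothetical transport hook; a real one would report success or failure.
bool send_fragment(FragmentLog& log, const std::string& data,
                   std::size_t offset, std::size_t length) {
    (void)data;
    log.sent.emplace_back(offset, length);
    return true;
}

// Upload `data` in fragments of at most `chunk` bytes, resuming from
// `resume_offset`. Returns the offset reached (== data.size() on success),
// so a failed or timed-out run can restart from where it stopped.
std::size_t upload_in_fragments(FragmentLog& log, const std::string& data,
                                std::size_t chunk, std::size_t resume_offset) {
    std::size_t offset = resume_offset;
    while (offset < data.size()) {
        std::size_t len = std::min(chunk, data.size() - offset);
        if (!send_fragment(log, data, offset, len))
            return offset;  // caller retries later from this offset
        offset += len;
        // Between fragments, yield so a latency-critical thread can run:
        // SwitchToThread() on Windows, or std::this_thread::yield() portably.
    }
    return offset;
}
```

Note that, as the expert points out below, yielding between fragments only shares CPU time; the pause between fragments helps only because it leaves gaps on the wire for the small HTTP request.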

 
LVL 31

Accepted Solution by:Frosty555 (earned 500 total points)
Hi Dave,

Check this out:

http://www.seriousbit.com/netbalancer/

You might be able to use this to control what processes on your computer get higher priority on the network, and schedule your HTTP packets to go ahead of your FTP packets. That will fix the problem.

A few things regarding what you mentioned in your post:

- I'm not suggesting you need a dedicated router. If you don't have control over the network (e.g. you're on public WiFi), then it won't be feasible for you to implement QoS on the network... but that is what is needed here. You need some kind of arbitrator to decide which packets get to go first. Normally the QoS on a router does this, but NetBalancer might be able to do it for you too.

- What you are describing for file transfer - "ftp, but with extra parameters to limit the number of bytes transferred" - is basically the exact definition of FileZilla's speed limit feature. Re-implementing it yourself isn't going to help.

- I think that the throttling in FileZilla may not be working because FileZilla uses "bursting" - it allows faster-than-the-limit transfers for a short period of time and then gradually brings the speed down to the set limit. That might explain why you are still experiencing problems even after throttling your upload bandwidth. Try capping the upload rate even more aggressively.
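FileZilla's limiter internals aren't documented here, but bursting is easy to illustrate with a generic token-bucket sketch (an assumption, not FileZilla's actual algorithm): a full bucket lets `capacity` bytes go out at once, and only the long-run average obeys `rate`.

```cpp
#include <algorithm>
#include <cassert>

// Generic token-bucket rate limiter. Tokens (bytes) refill at `rate`
// per second up to `capacity`; a full bucket permits a burst of
// `capacity` bytes instantly, regardless of the average rate.
struct TokenBucket {
    double rate;      // bytes added per second
    double capacity;  // maximum stored bytes: the burst size
    double tokens;    // current balance, starts full

    TokenBucket(double r, double cap) : rate(r), capacity(cap), tokens(cap) {}

    // Advance time by `elapsed_sec`, then try to spend `bytes`.
    // Returns true if the transfer is allowed right now.
    bool allow(double elapsed_sec, double bytes) {
        tokens = std::min(capacity, tokens + rate * elapsed_sec);
        if (bytes <= tokens) {
            tokens -= bytes;
            return true;
        }
        return false;
    }
};
```

With a small capacity the limiter smooths traffic evenly; with a large one, a latency-sensitive request can still get stuck behind a multi-second burst even though the configured average looks safe, which would match the symptom described.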

- I want to emphasize that thread priorities will do NOTHING to help you here. Thread scheduling only governs CPU time; it will not prioritize network traffic. Your network card has 100 or 1000 megabits of bandwidth available, so the bottleneck only happens when the packets reach the router and it can't send them across the Internet fast enough. The bottleneck is at the network layer, not the application layer.
Author Closing Comment by:DaveThomasPilot
Thanks, I'll try setting the speed limits even lower on FileZilla.

Thanks for the NetBalancer link. That looks really interesting! I'll probably give it a try if lower speed limits in FileZilla don't help.

Dave Thomas