
  • Status: Solved

Multithreaded Design Question


We’re in the process of designing a server-side application.

The first "filter" component of the server should receive requests from multiple concurrent clients over TCP/IP, each request containing an entity ID number. The filter should then process each ID and decide whether to pass it on to the next component (database) according to a given set of rules. The first version of the system should deal with around 20 concurrent clients producing ~1,000 requests per second, but future versions should be much more scalable (up to hundreds of concurrent clients and ~100,000 requests per second). The development environment is Microsoft .NET on Windows 2000 servers.

We came up with two possible architectures for the filter component:

1. Maintain a separate communication thread for every client. Each communication thread receives IDs from its corresponding client and writes them into a common queue. A separate worker thread reads data from the queue and processes the requests.

2. Maintain one communication thread for all clients. The communication thread reads IDs and writes them into a queue. Several worker threads, managed in a thread pool, read data from the queue and process the requests.
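The second design boils down to a classic single-producer / multi-consumer queue. A minimal sketch follows, written in Python for brevity since the thread discusses the pattern rather than specific APIs; the same shape maps onto .NET's System.Threading types. The queue, worker count, and the `passes_rules` function are placeholder assumptions, not part of the original design.

```python
import queue
import threading

request_queue = queue.Queue()   # shared between the communication thread and workers
passed = []                     # IDs the filter would forward to the database
passed_lock = threading.Lock()

def passes_rules(entity_id):
    # Placeholder rule: forward even IDs only (the real rule set is app-specific).
    return entity_id % 2 == 0

def worker():
    while True:
        entity_id = request_queue.get()
        if entity_id is None:   # sentinel: shut this worker down
            break
        if passes_rules(entity_id):
            with passed_lock:
                passed.append(entity_id)
        request_queue.task_done()

# The single communication thread would call request_queue.put(id) per request;
# here a batch of IDs stands in for incoming TCP traffic.
workers = [threading.Thread(target=worker) for _ in range(4)]
for t in workers:
    t.start()
for entity_id in range(10):
    request_queue.put(entity_id)
request_queue.join()            # wait until every request has been processed
for _ in workers:
    request_queue.put(None)
for t in workers:
    t.join()

print(sorted(passed))           # -> [0, 2, 4, 6, 8]
```

Note that the workers block on the queue when idle, so no CPU is burned polling; the pool size is the only tuning knob.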

The first solution is obviously less scalable, since when the number of clients increases the overhead of a large number of threads starts affecting performance. Still, it is easier to code and it fits the first version/versions of the server.

Our question regarding the second solution, which seems more "server-oriented", is this: given the large number of requests per second and the fact that the processing time of each request is relatively short, doesn't the overhead of managing a thread pool (allocating a worker thread for every new request, returning it to the pool when finished, and so on) become too expensive? Won't this hurt overall performance?

In addition:

1. Roughly speaking, what is the limit on the number of concurrent communication threads we can expect to run on a standard Windows 2000 machine?
2. Is there any other alternative for the design of the filter component? Are we missing something basic?

Thanks in advance,

4 Solutions
I like the second solution much more, but as you say, a large number of threads is bad because of the memory usage and the time it takes to create each thread.
I would use variables that limit how many threads can run at once and how many threads stay in memory (but not running). That way you avoid creating threads all the time and simply wake one when it's needed, which will boost performance.

Since Windows 2000 imposes no hard thread limit, you are limited only by memory and processor speed. It's hard to give an exact number because I don't know what the app is doing and how much processor time each request needs, so the best approach is to experiment.
An alternative would be to use both designs. Start with design #1 using a tunable number of communication threads. If the number of concurrent users hits a configured limit, then kick in the design #2 aspect: beyond that connection limit, begin using thread pooling and share connections among clients.

See  for comments on SQL Server approach.

You can also use round-robin DNS or some other load-balancing technique to mitigate single-server performance issues when you expect high numbers of connections.

Unless you're running on a multi-CPU box, you want to minimize the number of threads.

3: The main chunk of code looks at a queue for requests to perform, while another chunk of code (triggered by packet arrival) adds requests to the queue. Some form of semaphore is used to protect queue integrity.
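That semaphore-guarded handoff can be sketched as follows. This is a Python illustration of the general pattern, not code from the thread; the function and variable names are invented for the example. A counting semaphore tracks how many requests are queued, and a separate lock guards the queue structure itself.

```python
import threading
from collections import deque

pending = deque()
items_available = threading.Semaphore(0)   # counts queued requests
queue_lock = threading.Lock()              # protects the deque itself
processed = []

def on_packet_arrival(entity_id):
    # Triggered by packet arrival: enqueue the request and signal the main loop.
    with queue_lock:
        pending.append(entity_id)
    items_available.release()

def main_loop(n_requests):
    # Main chunk of code: block until a request is available, then handle it.
    for _ in range(n_requests):
        items_available.acquire()
        with queue_lock:
            entity_id = pending.popleft()
        processed.append(entity_id)

consumer = threading.Thread(target=main_loop, args=(3,))
consumer.start()
for eid in (7, 8, 9):
    on_packet_arrival(eid)
consumer.join()
print(processed)   # -> [7, 8, 9]
```

Acquiring the semaphore before taking the lock means the consumer sleeps when the queue is empty instead of spinning.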

Hi Amir,

There is limited information about the nature of your application. The scale you have mentioned for the next version is large, and running a thread for each request is not advisable unless you are running on very high-performance hardware. A small modification to the first and second approaches will serve you better than either raw design.

On startup, the server creates a fixed number of threads which wait for client requests. As requests arrive, they are handed over to waiting threads. You can impose limits on:
- Minimum number of free threads waiting for requests
- Maximum number of free threads waiting for requests
- Maximum total number of threads

The key difference is that threads are not destroyed after completing a request. Spawning threads and killing them is an expensive operation, and you will end up saving a substantial amount of time given the scale of your application.
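A toy version of such a pool, which grows on demand up to a cap and never destroys a worker after it finishes a request, might look like this. It is a Python sketch of the idea described above; the class name, limits, and `handle` function are illustrative assumptions.

```python
import queue
import threading

class GrowingPool:
    """Toy pool sketch: workers are created on demand up to a cap and are
    never destroyed after finishing a request (names are illustrative)."""

    def __init__(self, min_threads, max_total):
        self.tasks = queue.Queue()
        self.lock = threading.Lock()
        self.threads = []
        self.max_total = max_total
        for _ in range(min_threads):
            self._spawn()

    def _spawn(self):
        t = threading.Thread(target=self._worker, daemon=True)
        self.threads.append(t)
        t.start()

    def _worker(self):
        while True:                       # workers live for the server's lifetime
            fn = self.tasks.get()
            fn()
            self.tasks.task_done()

    def submit(self, fn):
        with self.lock:
            # Grow only when work is piling up and we are still under the cap.
            if not self.tasks.empty() and len(self.threads) < self.max_total:
                self._spawn()
        self.tasks.put(fn)

results = []
res_lock = threading.Lock()

def handle(i):
    with res_lock:
        results.append(i * i)

pool = GrowingPool(min_threads=2, max_total=4)
for i in range(8):
    pool.submit(lambda i=i: handle(i))
pool.tasks.join()                         # wait for all submitted work
print(sorted(results))                    # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

The cost of spawning is paid at most `max_total` times over the server's lifetime, instead of once per request.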

A similar architecture is used by the Apache web server; a peek into its documentation and source code should provide some valuable insights.

This approach does have scalability limits. You would not want 1,000 threads running concurrently on your system unless every thread has only a small amount of processing to do, the hardware is fast, and no other processes are competing for the CPU.

What is the expected behaviour of the application when the server is overloaded? Can you put a limit in place, say only 200 clients processed at a time, with the others having to wait or try again?

If yes, then it might be a good idea to have an accepting thread which accepts requests and hands them over to worker threads. If all workers are busy, requests are enqueued; if the queue is full, they are declined service. If the system is dedicated to this application, you can keep spawning threads until the system chokes :) ... That should be rare, so it will work most of the time unless you have very stringent reliability requirements.
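The accept-or-decline behaviour is just a bounded queue with a non-blocking put. Sketched in Python (the queue size and ID values are arbitrary for the example; no workers are draining the queue here, so the overflow case triggers immediately):

```python
import queue

MAX_QUEUE = 2
tasks = queue.Queue(maxsize=MAX_QUEUE)
accepted, declined = [], []

def accept(entity_id):
    # Accepting thread: enqueue if there is room, otherwise decline service.
    try:
        tasks.put_nowait(entity_id)
        accepted.append(entity_id)
    except queue.Full:
        declined.append(entity_id)

for eid in (1, 2, 3):
    accept(eid)

print(accepted, declined)   # -> [1, 2] [3]
```

The important property is that an overloaded server fails fast and predictably instead of accumulating unbounded backlog.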

In either case, keeping a thread pool and avoiding creating and destroying threads is likely to give you substantial performance and scalability benefits.

good luck
When dealing with that many connections, it is imperative that you not use a thread for every connection. Your other option was to use just one thread. These concerns can be kept separate, and in doing so you get almost unlimited scalability. Asynchronous socket connections let the CPU handle socket requests and responses only when data actually arrives, without requiring a thread per connection to avoid blocking. Then you have any number of threads (you're now in control of that) doing the processing. You don't have to have just one thread.
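The event-driven I/O plus worker-pool split can be sketched with Python's `selectors` module; in .NET the equivalent would be the asynchronous socket pattern the answer refers to. Everything here (two simulated clients via socket pairs, the `* 10` processing step) is invented for the illustration:

```python
import selectors
import socket
import queue
import threading

sel = selectors.DefaultSelector()
work = queue.Queue()
results = []

def reader(conn):
    # Event callback: runs on the single I/O thread only when data is ready,
    # so no thread ever blocks waiting on a quiet connection.
    data = conn.recv(1024)
    if data:
        work.put(int(data))
    else:
        sel.unregister(conn)
        conn.close()

def worker():
    while True:
        entity_id = work.get()
        if entity_id is None:
            break
        results.append(entity_id * 10)   # stand-in for the filtering rules
        work.task_done()

# Simulate two clients with socket pairs.
pairs = [socket.socketpair() for _ in range(2)]
for server_side, _ in pairs:
    server_side.setblocking(False)
    sel.register(server_side, selectors.EVENT_READ, reader)

pool = [threading.Thread(target=worker) for _ in range(2)]
for t in pool:
    t.start()

pairs[0][1].send(b"1")
pairs[1][1].send(b"2")

handled = 0
while handled < 2:                        # drain exactly the two test events
    for key, _ in sel.select(timeout=1):
        key.data(key.fileobj)
        handled += 1

work.join()
for _ in pool:
    work.put(None)
for t in pool:
    t.join()
for server_side, client_side in pairs:
    server_side.close()
    client_side.close()
sel.close()
print(sorted(results))   # -> [10, 20]
```

One I/O thread multiplexes every connection; the worker count is tuned to the CPU, independently of how many clients are connected.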

The thread pool is ideal for this sort of thing. Thread creation and destruction is much more expensive than pool management, especially with a static pool size; otherwise nobody would use pools. This is how my colleague and I are writing our SMTP server. The server can handle thousands of concurrent connections with only about 15 threads doing the processing, which makes optimal use of the CPU when those connections, all sharing one link, are the bottleneck.

Thread pooling can be done in many ways. If the .NET thread pool is not efficient enough, you can write your own: a class holding an array of thread objects and an array of flags, or one that uses the thread state to indicate availability (available = (state == Suspended)).
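A hand-rolled pool of that shape, with a fixed array of long-lived threads and a parallel array of busy flags, might be sketched like this. This is a Python illustration of the flag-array idea, not the .NET implementation the answer describes; class and variable names are assumptions.

```python
import threading
import queue

class FlagPool:
    """Hand-rolled pool sketch: a fixed array of threads plus an array of
    flags marking which slots are busy (names are illustrative)."""

    def __init__(self, size):
        self.busy = [False] * size          # the "array of flags"
        self.inboxes = [queue.Queue() for _ in range(size)]
        self.lock = threading.Lock()
        self.threads = [threading.Thread(target=self._run, args=(i,), daemon=True)
                        for i in range(size)]
        for t in self.threads:
            t.start()

    def _run(self, slot):
        while True:
            fn = self.inboxes[slot].get()
            fn()
            with self.lock:
                self.busy[slot] = False     # return the slot to the pool
            self.inboxes[slot].task_done()

    def dispatch(self, fn):
        # Claim the first idle slot; report saturation instead of blocking.
        with self.lock:
            for slot, taken in enumerate(self.busy):
                if not taken:
                    self.busy[slot] = True
                    self.inboxes[slot].put(fn)
                    return True
        return False

out = []
out_lock = threading.Lock()

def job(i):
    with out_lock:
        out.append(i)

pool = FlagPool(2)
pool.dispatch(lambda: job(1))
pool.dispatch(lambda: job(2))
for inbox in pool.inboxes:
    inbox.join()
print(sorted(out))   # -> [1, 2]
```

The flags give the dispatcher an O(n) but lock-cheap way to find a free worker, which is the trade-off the answer is hinting at.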
tulip123 (Author) commented:
Sorry about the delay, I was on vacation and had a few other things on my mind too...

Thanks for all the replies, they helped me design what I hope is the correct architecture.

Cheers!
Question has a verified solution.
