Server/Client Notification over HTTP using Thread.wait

I am working on a Java server program that needs to notify a Windows C++ client when a specific event occurs (for example, another user logs in).
The solution needs to work across firewalls and proxies, so I have so far been using HTTP, but the request/response nature of HTTP gives me some problems.
So far I have been looking at a solution where the server does not respond to the client's HTTP request until the notification is ready to be sent, i.e. the server holds the thread created by the client's request, blocking it with wait() (which is actually an Object method, not specific to Thread).

1) I am afraid that this solution could give me problems when many users are logged in, i.e. I might have to hold maybe 2000 threads. Does anyone know if this could be a problem?
2) Can anyone suggest another technique that would achieve what I want?
3) It seems to me that ASP.NET has something called IAsyncResult, which allows a thread to be returned to the process pool while the request is blocked (sleeps). Does Java have something similar?

I would like to use a standard HTTP server like Tomcat so that I can avoid implementing my own HTTP server.
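The hold-the-request idea described above can be sketched with a plain Java monitor. This is a minimal, servlet-free sketch (the class and method names are hypothetical, not from any framework): the request-handling thread parks in awaitEvent() until another thread publishes an event or a timeout expires.

```java
public class EventMailbox {
    private String pendingEvent; // null until an event arrives
    private final Object lock = new Object();

    // Called from the thread handling the client's HTTP request:
    // blocks until an event is available or the timeout expires.
    public String awaitEvent(long timeoutMs) throws InterruptedException {
        synchronized (lock) {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (pendingEvent == null) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) return null; // timed out; let the client re-poll
                lock.wait(remaining);
            }
            String event = pendingEvent;
            pendingEvent = null;
            return event;
        }
    }

    // Called when the interesting event happens (e.g. another user logs in).
    public void publish(String event) {
        synchronized (lock) {
            pendingEvent = event;
            lock.notifyAll();
        }
    }

    public static void main(String[] args) throws Exception {
        final EventMailbox box = new EventMailbox();
        new Thread(new Runnable() {
            public void run() {
                try { Thread.sleep(100); } catch (InterruptedException e) { }
                box.publish("user-logged-in");
            }
        }).start();
        System.out.println("got: " + box.awaitEvent(5000));
    }
}
```

Note that each blocked request still ties up one thread for the duration of the wait, which is exactly the scaling concern raised in question 1.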
girionis commented:
I can see two possible solutions to your problem.

1) Buy more memory for the machine (assuming it can address a few gigabytes of memory, which depends on whether it is 32- or 64-bit) so you can create more threads and have enough memory for them.

2) Clustering. You gain speed and processing power by grouping several computers together, and alongside clustering come load-balancing, failover and fault-tolerance. You overcome limitations such as memory and CPU, and you get a more stable, decentralized system. Adding CPU and memory then becomes simply a matter of adding a computer to the network and replicating the application there. The drawback is that you will need to buy or acquire a few more machines.

> I mean a request every couple of seconds from several thousand subscribers would put a huge
> strain on the CP wouldn't it!

Not necessarily. Do not forget that HTTP is a stateless protocol: it does not maintain state, and hence does not remember anything between requests. The server therefore does not have to keep the threads alive. It can kill a thread, or return it to the pool of available threads, as soon as it finishes processing a request, and spawn or obtain a new one for the next request.
1) You can always use a thread pool to manage several threads.
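To illustrate the thread-pool idea: a minimal sketch using java.util.concurrent (which arrived in JDK 1.5, after this thread was written; earlier JDKs needed a hand-rolled pool). The fixed pool caps concurrency at a known number of worker threads regardless of how many requests are submitted. The submitRequest helper and its "reply-to-" response are made up for the demo.

```java
import java.util.concurrent.*;

public class PoolDemo {
    // Four workers service all submitted requests; excess requests queue.
    static ExecutorService pool = Executors.newFixedThreadPool(4);

    static Future<String> submitRequest(final String request) {
        return pool.submit(new Callable<String>() {
            public String call() {
                // a real handler would parse the request and build a response here
                return "reply-to-" + request;
            }
        });
    }

    public static void main(String[] args) throws Exception {
        System.out.println(submitRequest("req1").get());
        pool.shutdown();
    }
}
```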

2) Using a separate thread for each request is how Java handles HTTP requests. Servlets and JSP pages use this model (i.e. they spawn a new thread for each incoming request, but there is only one instance of the servlet running), so I'd say this is more or less the de facto way to do it.

3) AFAIK JDK 1.4 does not provide thread pooling out of the box; things might have changed in JDK 1.5, but I can't comment on that. You would have to write your own thread pool to get it. If you take a look at the link I posted, you will get ideas on how to do it. Also, if you google "java thread pooling" you will find loads of information on the subject.
TeeRoles (Author) commented:
Hi Girionis,

Thanks for your answer, but I'm not sure it addresses the main problem I am worried about. I guess I was not clear the first time, so let me try to clarify.

The primary concern with having so many simultaneous threads (a few thousand plus) is that I am not really sure it is possible. I have read in several forums (and articles) that the major restriction on the maximum number of simultaneous threads is the thread stack size, which by default is 2 MB. As a result, due to the virtual-memory limitations of various OSs, one can only hope to have between 300 and 500 simultaneous threads.

Thread pooling will definitely help with managing the threads and queuing requests, but if there are only 300 threads and over 2000 subscribers logged on, all waiting for events, then I guess no amount of queuing will be useful??

Of course this limit changes drastically if you have a 64-bit system (10,000+ threads) or if you reduce the stack size from the 2 MB default, but is this still the best/recommended/only way to design an HTTP event-driven server? I guess a mechanism such as this puts the limitation on the operating system rather than on the CPU (although CPU limitations can probably be overcome by clustering several machines together... I think)!!
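On the stack-size point above: Java's four-argument Thread constructor accepts a per-thread stack-size hint in bytes, which can let many more threads fit in the address space (though the VM is free to ignore or round the hint, so this is platform-dependent). A minimal sketch with made-up thread names:

```java
public class SmallStackDemo {
    // Start n short-lived threads, each with a 64 KB stack-size hint,
    // and return how many actually ran.
    static int startWaiters(int n) throws InterruptedException {
        final int[] ran = new int[1];
        Runnable body = new Runnable() {
            public void run() { synchronized (ran) { ran[0]++; } }
        };
        Thread[] ts = new Thread[n];
        for (int i = 0; i < n; i++) {
            // 4-arg constructor: the last argument is a stack-size hint in bytes;
            // the VM may ignore or round it.
            ts[i] = new Thread(null, body, "waiter-" + i, 64 * 1024);
            ts[i].start();
        }
        for (int i = 0; i < n; i++) ts[i].join();
        synchronized (ran) { return ran[0]; }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("ran " + startWaiters(100) + " threads");
    }
}
```

A thread that actually blocks holding a deep call stack may need more than such a small hint, so this is a tuning knob to test on the target OS, not a guaranteed fix.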

I have also been doing a bit of reading on the way MSN maintains communication between client and server, and it appears that their clients actually send a new request once every 2 seconds (on receipt of the reply, of course)!! Is this a better way to go forward?? If so, I find it hard to believe that there is no other way. I mean, a request every couple of seconds from several thousand subscribers would put a huge strain on the CPU, wouldn't it!

Anyway, I hope I have managed to clarify what my main problem is, although I'm not really sure that there is a black-and-white answer to it!!
I look forward to hearing your answer.
Thank you for accepting, glad I was of help :)