On an internal intranet, the C/C++ client sends REST-like queries over TCP/IP and receives XML responses from the Java server. If the query loop fetches 3 records at a time, the total time to accumulate 90,000 records is about 10x longer than when fetching 100 records at a time. We will be running a number of timing tests to isolate the cause. Anticipating that the problem may be TCP slow start due to the small initial window, what settings are there to tell TCP/IP to start off with the largest (or a larger) window size possible?
We are on 64-bit RHEL servers, and I assume that since the client and server run on an intranet self-contained within the company, we do not have to be concerned about congestion.
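For concreteness, the only per-socket knob we are aware of is SO_RCVBUF, which (as I understand it) must be set before connect() because the TCP window scale factor is negotiated during the handshake. Beyond that we would look at the net.ipv4.tcp_rmem / net.ipv4.tcp_wmem and net.ipv4.tcp_window_scaling sysctls, and possibly the per-route initcwnd option of ip route. A minimal sketch of what we plan to test on the client (host, port, and buffer size are placeholders):

/* Ask for a large receive buffer before connect() so the kernel can
 * advertise a larger window and negotiate window scaling during the
 * SYN/SYN-ACK exchange. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Must happen before connect(): window scaling is only
     * negotiated while the connection is being set up. */
    int rcvbuf = 1 << 20;                           /* request 1 MiB */
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof rcvbuf) < 0)
        perror("setsockopt(SO_RCVBUF)");

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(8080);                  /* placeholder port */
    inet_pton(AF_INET, "10.0.0.1", &addr.sin_addr); /* placeholder host */

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0)
        perror("connect");

    /* See what the kernel actually granted; Linux doubles the
     * requested value to cover bookkeeping overhead. */
    int actual = 0;
    socklen_t len = sizeof actual;
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &actual, &len);
    printf("SO_RCVBUF in effect: %d bytes\n", actual);

    close(fd);
    return 0;
}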
Thanks,
Paul
Fetching 3 at a time means running the query that yields 90,000 records, returning records 1-3, then running it again and returning records 4-6, then 7-9, and so on,
thus running 30,000 queries against your database.
Returning 100 per batch will 'only' run 900 queries: 90,000 / 3 = 30,000 round trips versus 90,000 / 100 = 900, a 33x difference (the observed gap might be smaller thanks to caching inside the DB).
The better approach might be to run the query yielding the 90K records once, keep the result in a cache (memcached, for example), and pull from the cache until either the data is missing there or x seconds have passed... and only if the cache has no data query the DB again.
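A rough sketch of that pattern, with a minimal in-process stand-in where memcached would sit in a real deployment (fetch_all_records() is a placeholder for the one big DB query, and the 60-second TTL stands in for "x seconds"):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define CACHE_TTL_SECONDS 60    /* the "x seconds" from above */

typedef struct {
    char  *records;     /* whole result set, serialized */
    size_t count;       /* number of records held */
    time_t fetched_at;  /* when the DB was last queried */
} result_cache;

/* Placeholder for the single query returning all 90,000 records. */
static char *fetch_all_records(size_t *count)
{
    *count = 90000;
    return strdup("...90000 serialized records...");
}

/* Return the cached result set, re-querying the DB only when the
 * cache is empty or older than the TTL. */
static const char *get_records(result_cache *c, size_t *count)
{
    time_t now = time(NULL);
    if (c->records == NULL || now - c->fetched_at > CACHE_TTL_SECONDS) {
        free(c->records);
        c->records    = fetch_all_records(&c->count);
        c->fetched_at = now;
    }
    *count = c->count;
    return c->records;
}

int main(void)
{
    result_cache cache = {0};
    size_t n = 0;

    /* The 900 batch requests now hit the DB once per TTL window
     * instead of 900 times. */
    for (int batch = 0; batch < 900; batch++)
        get_records(&cache, &n);

    printf("served 900 batches from one DB query (%zu records)\n", n);
    free(cache.records);
    return 0;
}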