This is a special-purpose commercial system: a central server (written in C++) manages a Microsoft SQL database and serves a number of diskless touchscreen terminals over a network. The terminals run a Blackdown 1.1 JVM on Linux, and the terminals and the Host (the central server) communicate via TCP/IP.
Recently a piece was added to allow a terminal to interact with a 3rd-party application, also over TCP/IP. The code that sends the message, once it has been constructed, is as follows:
// (uses java.net.Socket and java.io.DataOutputStream)
try
{
    // Open a TCP connection to the 3rd-party device; kip and port come from the config file
    Socket sock = new Socket(kip, port);
    DataOutputStream dos = new DataOutputStream(sock.getOutputStream());
    dos.write(msg);   // msg is the already-constructed message (a byte array)
    dos.flush();
    sock.close();
}
catch (Exception e)
{
    Util.sysError("Com exception " + e, false);
}
The IP address and port of the 3rd-party device are configured in a file, and the values are loaded into kip and port.
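To illustrate, the loading could look something like this (a sketch only; the actual file name, format, and key names on the terminal are not shown here):

    import java.io.FileInputStream;
    import java.util.Properties;

    // Sketch only -- the real config file name and key names differ.
    Properties cfg = new Properties();
    FileInputStream in = new FileInputStream("terminal.properties"); // hypothetical file name
    cfg.load(in);
    in.close();

    String kip = cfg.getProperty("thirdparty.ip");                   // e.g. "192.168.1.50"
    int port   = Integer.parseInt(cfg.getProperty("thirdparty.port")); // e.g. "9100"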
Now all of this works just fine, until the terminal is rebooted.
If the terminal is rebooted, there is an approximately 5-second delay at the line where the socket is created (Socket sock = new Socket(kip, port);).
This condition persists for 3 to 5 minutes of use. If the terminal is used during that window (so lots of messages are sent, each with the lag), the lag suddenly clears up, and from that point forward all is well again until the terminal is once again rebooted.
If the terminal is *not* used (e.g. it sits idle for 12 hours and is then used), the delay is still there when use begins, again for approximately 3 to 5 minutes. So it appears that a certain number of packets have to go through before the problem disappears.
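For what it's worth, a crude way to pin the time to the constructor call itself would be something like this (hypothetical instrumentation, not in the shipped code):

    // Hypothetical timing check around the constructor, using only JDK 1.1 calls.
    long before = System.currentTimeMillis();
    Socket sock = new Socket(kip, port);
    long after = System.currentTimeMillis();
    System.out.println("new Socket(kip, port) took " + (after - before) + " ms");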
I am but a humble application programmer, so I haven't the faintest idea where to begin figuring out why Socket creation would take so long.
Does anyone have a suggestion on how to track down what is causing the delay, or (even better) know what could be causing it?