  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1083


Socket Delay

We are experiencing a 15-second delay each time we receive an HTTP transmission. How can the delay be reduced?

We are using a 'C' iSeries ILE program to communicate with a remote system.  In each cycle of the program, we perform 4 API calls:

Acquire      sock_number = socket(AF_INET, SOCK_STREAM, 0);
Connect      connect(sock_number, (struct sockaddr *)&serveraddr, sizeof(serveraddr));
Send         send(sock_number, send_buffer, req_len, 0);
Receive      recv(sock_number, recv_buffer, 24000, MSG_WAITALL);

The first three calls execute in less than a second. However, the receive call takes 15 seconds regardless of the length of the response.  Furthermore, testing has demonstrated that the delay is beyond program control.  Examples follow:

Since we do not know the length of the response in advance, we supply a 24000-byte buffer which we judge to be adequate for any and all responses. The MSG_WAITALL flag specifies that the call should not return until the full requested length has been received (or the connection closes). We can remove this flag by passing zero:
      recv(sock_number, recv_buffer, 24000, 0);
In this case, the call completes within a second. However, if the response is longer than a single packet, the last packet of the response is not returned. That is to say, the response buffer is truncated at the end of the next-to-last packet.

To further evaluate the response, we used a technique which allowed us to better view response timing:
      while ((xx = recv(sock_number, recv_buffer + n_read, 1, 0)) > 0)
       {  fputc(recv_buffer[n_read], stderr);
           ++n_read;
       }
In this set of statements, the response is retrieved byte by byte.  Each byte is printed on the screen as it is received.  As in the prior example, the results are strongly influenced by the length of the response.  For responses of one packet or less, the screen sits quiet for 15 seconds, then the response streams across.

For responses of n packets, n greater than one, n-1 packets immediately stream across the screen, a 15-second pause occurs, then the final packet streams.

An additional consideration: the HTTP header received as part of the response contains a keep-alive timeout of 15 seconds:
      HTTP/1.1 200 OK
      Date: Mon, 11 Apr 2005 22:12:56 GMT
      Server: IBM_HTTP_Server
      Keep-Alive: timeout=15, max=100
      Connection: Keep-Alive
      Transfer-Encoding: chunked
      Content-Type: text/plain
Asked by jwguest

1 Solution
 
Dave Ford (Software Developer / Database Administrator) commented:
If it's possible, what happens if you reduce the timeout parameter from 15 seconds to ... say ... 2 seconds?  (and I don't claim to even know how to do that, but it seems like a logical thing to try).

-- DaveSlash
 
jwguest (Author) commented:
It would be easy to change, but unfortunately that parameter is being sent to us by the other company. They're not being very cooperative about changing it for a test.
However, it seems odd to me that someone else's parameter would affect how quickly we receive the data.

But thanks for the comment.
 
Helixir commented:
I don't know if you fixed it, but since you are the client, just close the socket yourself
once you are sure you've got everything...

By the way, I think a loop on recv() with a 4000-byte buffer would be much more appropriate than one call with 24000 bytes or many calls of 1 byte!
 
PAQ_Man commented:
PAQed with points refunded (500)

PAQ_Man
Community Support Moderator
