We are using the .NET System.Net.Sockets.Socket class for some FTP functionality.
After sending a command on an open socket we call Socket.Receive(buffer). This fills the buffer with some number of bytes and returns how many bytes were read.
The problem is that the call sometimes returns 0 even though we have not received the entire response. The next call to Receive then returns the remainder of that response PLUS the response generated by the subsequent command.
We suspect this happens due to network latency, varying FTP server load, etc.
My question is: how can we reliably know when we have read the entire outstanding response from the socket?
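For context on the approach I have been considering: TCP is a byte stream with no message boundaries, so a single Receive can legitimately return any prefix of the server's reply. The usual fix is to keep reading and parse the data against the FTP reply format from RFC 959: a single-line reply is `<3-digit code><space>...<CRLF>`, and a multiline reply opens with `<code>-` and ends with a line starting `<code><space>`. A sketch of that completeness check (shown in Python for brevity; a .NET version would loop over Socket.Receive the same way, and `recv_reply` / `ftp_reply_complete` are just illustrative names):

```python
def ftp_reply_complete(data: bytes) -> bool:
    """True once `data` holds a full FTP reply per RFC 959."""
    if not data.endswith(b"\r\n"):
        return False
    first = data.split(b"\r\n", 1)[0]
    # Every reply starts with a 3-digit code.
    if len(first) < 4 or not first[:3].isdigit():
        return False
    code = first[:3]
    if first[3:4] != b"-":
        return True  # single-line reply, already CRLF-terminated
    # Multiline reply: complete only when a '<code><SP>' line has arrived.
    return any(line.startswith(code + b" ") for line in data.split(b"\r\n"))

def recv_reply(sock) -> bytes:
    """Accumulate recv() results until a complete reply is buffered."""
    buf = b""
    while not ftp_reply_complete(buf):
        chunk = sock.recv(4096)
        if not chunk:  # 0 bytes from recv means the peer closed the socket
            raise ConnectionError("connection closed mid-reply")
        buf += chunk
    return buf
```

With this, a short read simply fails the completeness check and triggers another Receive, and any bytes after the terminator would belong to the next reply.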