I run a client program which writes to a socket whose destination is the same PC, port 4321. On the same VMware VM, I run a simulated server using netcat:
nc -lk 127.0.0.1 4321 > /dev/null
so that the program can connect to the listening nc and write to it. This all works fine. The program reports the average size of its internal output buffer, and it is nearly 0, since nc reads much faster than the program can produce data (which would be great in real life). But I would like to see some simulated congestion, so that the program reports a larger amount of buffering.
I would like to know if there is an easy way to slow down nc's high-speed reads to simulate congestion.
Using a debugger is not in the cards for this problem. I'm just looking for a standard way to throttle nc's reads. If not nc directly, is there another set of commands that can do this? (If there is no standard solution, I will have to write a simple server program that simulates periodic delays.)
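For reference, this is the kind of fallback server program I have in mind, a minimal Python sketch (the port, chunk size, and delay values are arbitrary placeholders):

```python
import socket
import time

def slow_server(host="127.0.0.1", port=4321, chunk=1024, delay=0.05):
    """Accept one connection and drain it slowly.

    Reading only `chunk` bytes every `delay` seconds keeps the kernel
    socket buffers full, so the writing program sees back-pressure and
    its internal output buffer grows -- the congestion I want to simulate.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    # Shrink the receive buffer so back-pressure kicks in sooner.
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
    total = 0
    while True:
        data = conn.recv(chunk)
        if not data:  # client closed the connection
            break
        total += len(data)
        time.sleep(delay)  # the artificial congestion
    conn.close()
    srv.close()
    return total
```

The client program would then connect to this instead of nc; tuning `chunk` and `delay` controls how much buffering builds up on the sending side. But I'd prefer a ready-made command over maintaining this.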
I did come across this:
but I am not sure how it might be useful with the nc command. Also, that link talks about "all packets going out of the local Ethernet", and in this case the socket writes are not going out on the Ethernet.