• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 3679

unix C++ create process with redirecting stdin and stdout

hi - I'm new to unix dev. I have this working on Windows, since you can specify handles for stdin and stdout when calling the CreateProcess API. On unix I see there is popen(), but it's only one-way. I want to be able to both read from and write to this process. The project is automated telnet, so I want to create a telnet process, read the stdout from that process, and respond to certain events.
Asked: bowser17
2 Solutions
 
jkrCommented:
See http://www.gmonline.demon.co.uk/cscene/CS4/CS4-06.html ("Pipes in Unix") which addresses exactly this issue and comes with full code. The scoop is to

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
 
int main (int argc, char **argv)
{
  int filedes1[2], filedes2[2];
  int pid;
  
  /* Argument checking. */
 
  if (argc < 4)
    {
      fputs("Insufficient number of arguments given\n", stderr);
      exit(1);
    }
 
  /* Make our first pipe (parent -> child). */
 
  if (pipe(filedes1) == -1)
    {
      perror("pipe");
      exit(1);
    }
 
  /* Make our second pipe (child -> parent). */
  
  if (pipe(filedes2) == -1)
    {
      perror("pipe");
      exit(1);
    }
 
  /* Fork a child */
 
  if ((pid = fork()) == 0)
    {
      dup2(filedes1[0], fileno(stdin)); /* Copy the reading end of the pipe. */
      dup2(filedes2[1], fileno(stdout)); /* Copy the writing end of the pipe */
      
      /* Uncomment this if you want stderr sent too.
 
      dup2(filedes2[1], fileno(stderr));
 
      */
 
      /* Close the originals; the child only needs the duplicates. */
 
      close(filedes1[0]); close(filedes1[1]);
      close(filedes2[0]); close(filedes2[1]);
 
      /* If execl() returns at all, there was an error. */
      
      execl(argv[1], argv[1], (char *)NULL); /* Bye bye! */
      perror("execl");
      exit(128);
    }
  else if (pid == -1)
    {
      perror("fork");
      exit(128);
    }
  else
    {
      FILE *program_input, *program_output, *output_file;
      int c;
 
      /* Close the ends the child uses, or we'll never see EOF. */
 
      close(filedes1[0]);
      close(filedes2[1]);
 
      if ((program_input = fdopen(filedes1[1], "w")) == NULL)
	{
	  perror("fdopen");
	  exit(1);
	}
 
      if ((program_output = fdopen(filedes2[0], "r")) == NULL)
	{
	  perror("fdopen");
	  exit(1);
	}
 
      if ((output_file = fopen(argv[3], "w")) == NULL)
	{
	  perror("fopen");
	  exit(1);
	}
 
      fputs(argv[2], program_input); /* Write the string */
      fclose(program_input);         /* Send EOF so the child can finish. */
 
      while ((c = fgetc(program_output)) != EOF)
	  fputc(c, output_file);
 
      fclose(output_file);
      exit(0);
    }
}


 
bowser17Author Commented:
Can't seem to get this working. The dup2() with fileno() definitely resembles what I did on Windows.
I changed it to execlp(), and the call won't exit. I tried it with the "ls" command, and I can read the output of ls and write it to a file, but then I imagine it just gets stuck in my read loop (fgets) or in the original fputc.
 
Infinity08Commented:
At the risk of being accused of "sneaking in" and "stealing points", I'll reply, since it's been well over a day now. I hope I've given you enough time to clarify yourself this time, jkr.

bowser17,

the problem with read-write access to a process is that the pipes are buffered, and deadlocks can occur quite easily. I.e. while you're writing data to the other process, it might already be writing data back to you, filling up the pipe buffer. But on your end, you're not reading yet, so the buffer is never emptied, and the other process blocks when the buffer fills up. Meanwhile, you might still be writing data, but since the other process isn't reading it any more, your buffer fills up too, and when that's full, you have a nice deadlock. This is just one example of how this can occur, and believe me, deadlocks like these are quite common.

A reliable solution to this problem is to do the reading and writing in two separate threads that are independent of each other (i.e. they operate independently and don't block each other). One thread will open a write pipe to the other process and write whatever data needs to be sent to it, and the other thread will read data from the read pipe to that same process.

 
bowser17Author Commented:
Correct, isn't that the purpose of the fork?
 
Infinity08Commented:
>> isn't that the purpose of the fork?

The fork as used above is for the other process. The calling process still consists of only one thread, and that one thread has both the read and write pipes to the other process.
So the way out is to have two threads in the calling process: one thread that does the writing, and one that does the reading.
 
bowser17Author Commented:
I've got it partially working now. Last Friday afternoon my brain went dormant, so I can see this a little better now. I have to think about the deadlock and buffering some more, because I'm not totally on board with that yet. Are you suggesting that the sample given will break due to deadlock, since there is no alternate thread for writing?
 
Infinity08Commented:
>> Are you suggesting that the sample given will break due to deadlock since there is no alternate thread for writing?

I'm saying that that's the likely reason, yes. It is a very common problem with read/write access to a process from one thread.

If you want, you can show the code you have now, so we can take a look at it ...
 
bowser17Author Commented:
Is this deadlock thing only when the opposite-end program is also non-multithreaded? I believe in a telnet session, the RFC says the server should always be ready to accept new input, so wouldn't that avoid the deadlock in that specific case?
 
Infinity08Commented:
>> Is this deadlock thing only when the opposite end prog is also non-multithreaded?

No, because the main reason for the deadlock is the fact that the I/O is buffered.


>> so wouldn't that avoid the deadlock in that specific case?

Not really. Imagine 4 buffers, two on each side. So :

    1) one write buffer for your process (at your end of the write pipe)
    2) one read buffer for the other process (at the other end of that same write pipe)
    3) one write buffer for the other process (at the other end of the read pipe)
    4) one read buffer for your process (at your end of that same read pipe)

Data conceptually flows from 1 to 2, and then back through 3 and 4.

Now, say that you are happily writing data into buffer 1, which then flows through the pipe to buffer 2, where it's read by the other process, and the corresponding output data is written in buffer 3. You still keep writing, and as a result, buffer 3 is filling up. When it's full, the other process can't continue any more, so it blocks. You don't know that, so you keep writing data into buffer 1, until that buffer too is completely filled up. Then your process blocks too, because it can't continue writing any more.
That's a deadlock. And whether the other process is multi-threaded or not doesn't change a thing. Both processes are now blocked ...

This can never occur if buffer 3 doesn't have the chance to fill up. And you make sure of that by having a separate reader thread in your application that continually empties buffer 3, so that the other process never becomes blocked, thus avoiding the deadlock.

This is just one example of how a deadlock can occur in a setup like this. But all are resolved by having separate read and write threads.

Does that make more sense ?
 
bowser17Author Commented:
Yes, it does make sense. If our 2nd process were multi-threaded, with one side constantly reading, the buffers would never stay full, thus no deadlock? In any case, I would agree that it makes sense to make it multi-threaded; my statement is purely argumentative for the sake of understanding. I just learned I no longer need to do this unix port.
 
Infinity08Commented:
>> I just learned i no longer need to do this unix port.

Heh :) The ever-changing requirements lol.
