Solved

unix C++ create process with redirecting stdin and stdout

Posted on 2008-06-13
11
3,616 Views
Last Modified: 2013-12-27
hi - new to unix dev, I have this working in windows since you can specify handles to stdin and stdout when calling the CreateProcess API.  In unix I see there is popen, but it's only one way.  I want to be able to both read and write to this process.  The project is automated telnet, so I want to create a telnet process, read the stdout from that process, and respond to certain events.
Question by:bowser17
11 Comments
 
LVL 86

Accepted Solution

by:
jkr earned 250 total points
ID: 21783000
See http://www.gmonline.demon.co.uk/cscene/CS4/CS4-06.html ("Pipes in Unix") which addresses exactly this issue and comes with full code. The gist is:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

int main (int argc, char **argv)
{
  int filedes1[2], filedes2[2];
  int pid;

  /* Argument checking. */
  if (argc < 4)
    {
      fputs("Insufficient number of arguments given\n", stderr);
      exit(1);
    }

  /* Make our first pipe (parent -> child). */
  if (pipe(filedes1) == -1)
    {
      perror("pipe");
      exit(1);
    }

  /* Make our second pipe (child -> parent). */
  if (pipe(filedes2) == -1)
    {
      perror("pipe");
      exit(1);
    }

  /* Fork a child. */
  if ((pid = fork()) == 0)
    {
      dup2(filedes1[0], fileno(stdin));  /* Copy the reading end of the pipe. */
      dup2(filedes2[1], fileno(stdout)); /* Copy the writing end of the pipe. */

      /* Uncomment this if you want stderr sent too.

      dup2(filedes2[1], fileno(stderr));

      */

      /* Close the original descriptors; the dup2'd copies remain. */
      close(filedes1[0]); close(filedes1[1]);
      close(filedes2[0]); close(filedes2[1]);

      /* If execl() returns at all, there was an error. */
      if (execl(argv[1], argv[1], (char *)NULL) == -1) /* Bye bye! */
	{
	  perror("execl");
	  exit(128);
	}
    }
  else if (pid == -1)
    {
      perror("fork");
      exit(128);
    }
  else
    {
      FILE *program_input, *program_output, *output_file;
      int c;

      /* Close the ends used only by the child -- otherwise the parent
         holds filedes2[1] open and never sees EOF on program_output. */
      close(filedes1[0]);
      close(filedes2[1]);

      if ((program_input = fdopen(filedes1[1], "w")) == NULL)
	{
	  perror("fdopen");
	  exit(1);
	}

      if ((program_output = fdopen(filedes2[0], "r")) == NULL)
	{
	  perror("fdopen");
	  exit(1);
	}

      if ((output_file = fopen(argv[3], "w")) == NULL)
	{
	  perror("fopen");
	  exit(1);
	}

      fputs(argv[2], program_input); /* Write the string. */
      fclose(program_input);         /* Send EOF so the child can finish. */

      while ((c = fgetc(program_output)) != EOF)
	  fputc(c, output_file);

      exit(0);
    }
}

 
LVL 1

Author Comment

by:bowser17
ID: 21783275
Can't seem to get this working.  The dup w/ fileno definitely resembles what I did in windows.
I changed it to execlp, and this call won't exit.  I tried it with the "ls" command, and I can read the output of ls and write it to a file, but then I imagine it just sticks in my read loop (fgets) or with the original fputc.
 
LVL 53

Expert Comment

by:Infinity08
ID: 21788346
At the risk of being accused of "sneaking in" and "stealing points", I'll reply, since it's been well over a day now. I hope I've given you enough time to clarify yourself this time, jkr.

bowser17,

the problem with read-write access to a process is that the pipes are buffered, and deadlocks can occur quite easily. I.e. while you're writing data to the other process, it might already be writing data back to you, filling up the pipe buffer. But on your end, you're not reading yet, so that buffer is never emptied, and the other process blocks when the buffer fills up. Meanwhile, you might still be writing data, but since the other process doesn't read it any more, your buffer fills up too, and when that's full, you have a nice deadlock. This is just one example of how this can occur, and believe me, deadlocks like these are quite common.

A reliable solution to this problem is to do the reading and writing in two separate threads which are independent of each other (i.e. they operate independently and don't block each other). One thread will open a write pipe to the other process and write whatever data needs to be sent to it, and the other thread will read data from the read pipe to that same process.
 
LVL 1

Author Comment

by:bowser17
ID: 21794251
Correct, isn't that the purpose of the fork?
 
LVL 53

Expert Comment

by:Infinity08
ID: 21794401
>> isn't that the purpose of the fork?

The fork as used above is for the other process. The calling process still consists of only one thread, and that one thread holds both the read and write pipes to the other process.
So the way out is to have two threads in the calling process: one that does the writing, and one that does the reading.
 
LVL 1

Author Comment

by:bowser17
ID: 21795658
I've got it partially working now; last Fri afternoon my brain went dormant, so I can see this a little better.  I have to think about the deadlock and buffering some more, because I am not totally on board with that yet.  Are you suggesting that the sample given will break due to deadlock since there is no alternate thread for writing?
 
LVL 53

Expert Comment

by:Infinity08
ID: 21795864
>> Are you suggesting that the sample given will break due to deadlock since there is no alternate thread for writing?

I'm saying that that's the likely reason, yes. It is a very common problem with read/write access to a process from one thread.

If you want, you can show the code you have now, so we can take a look at it ...
 
LVL 1

Author Comment

by:bowser17
ID: 21796133
Is this deadlock thing only an issue when the program at the other end is also non-multithreaded?  I believe in a telnet session the RFC requires the server to always be ready to accept new input, so wouldn't that avoid the deadlock in that specific case?
 
LVL 53

Assisted Solution

by:Infinity08
Infinity08 earned 250 total points
ID: 21796422
>> Is this deadlock thing only when the opposite end prog is also non-multithreaded?

No, because the main reason for the deadlock is the fact that the I/O is buffered.


>> so wouldn't that avoid the deadlock in that specific case?

Not really. Imagine 4 buffers, two on each side. So:

    1) one write buffer for your process (at your end of the write pipe)
    2) one read buffer for the other process (at the other end of that same write pipe)
    3) one write buffer for the other process (at the other end of the read pipe)
    4) one read buffer for your process (at your end of that same read pipe)

Data conceptually flows from 1 to 2, and then back through 3 and 4.

Now, say that you are happily writing data into buffer 1, which then flows through the pipe to buffer 2, where it's read by the other process, and the corresponding output data is written in buffer 3. You still keep writing, and as a result, buffer 3 is filling up. When it's full, the other process can't continue any more, so it blocks. You don't know that, so you keep writing data into buffer 1, until that buffer too is completely filled up. Then your process blocks too, because it can't continue writing any more.
That's a deadlock. And whether the other process is multi-threaded or not doesn't change a thing. Both processes are now blocked ...

This can never occur if buffer 3 doesn't have the chance to fill up. And you make sure of that by having a separate reader thread in your application that continually empties buffer 3, so that the other process never becomes blocked, thus avoiding the deadlock.

This is just one example of how a deadlock can occur in a setup like this. But all are resolved by having separate read and write threads.

Does that make more sense?
 
LVL 1

Author Comment

by:bowser17
ID: 21796652
Yes, it does make sense.  If our 2nd process were multi-threaded, with one side constantly reading, the buffers would never stay full, thus no deadlock?  In any case, I would agree that it makes sense to make it multi-threaded; my statement is purely argumentative for the sake of understanding.  I just learned I no longer need to do this unix port.
 
LVL 53

Expert Comment

by:Infinity08
ID: 21796683
>> I just learned i no longer need to do this unix port.

Heh :) The ever-changing requirements lol.
0
