tikiliainen

asked on

Why do script children refuse to die?

First of all, this is a Solaris question. I have a (bash) shell script abc.sh, which looks as follows:

#!/bin/bash
/some/perl/script.pl

The perl script hangs by design (not mine): it tries to pull data from a bad URL that never returns anything, and there is no timeout.

Now, all I want to do is to be able to kill abc.sh with a single kill -9 without any residual processes hanging about afterwards. The problem is that the perl script refuses to die upon a kill -9 issued to the process of abc.sh.

Before issuing a SIGKILL (kill -9) to the process of abc.sh, a ps -ef shows the parent of the hung perl process to be abc.sh. After issuing a SIGKILL, the parent becomes 1 (/etc/init, as far as I remember). A subsequent kill -9 to the perl process kills it without any problems.

Can anyone think why the perl script would ignore the SIGKILL, which, AFAIK, gets passed through to it? If I am wrong in thinking that it gets passed to it, which is quite likely, then is there a way to pass it through nicely?

My main problem is that killing the script abc.sh is the only thing I can do (I have to use Java's Process.destroy() on it), as normally, I would not have the pid of any of its children (again, as the result of using Java).

Thanks.
sunnycoder

Hi tikiliainen,

The perl script is a separate process launched by your shell script. Your signal is received by the shell script, which exits. The perl script, however, is an independent process that never received any signal, so it continues to run.

kill sends the signal to a single process, not to a process and all of its children.

you are right in remembering that init is the new parent (1)

Sunnycoder
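You can see this orphaning behaviour directly. A minimal sketch (sleep stands in for the hung perl script; all pids are obtained at run time):

```shell
#!/bin/bash
# A parent shell that spawns a child and waits, like abc.sh does.
bash -c 'sleep 30 & wait' &
PARENT=$!
sleep 0.2
# Find the child of that parent.
CHILD=$(pgrep -P "$PARENT")
# SIGKILL the parent only — the child is untouched.
kill -9 "$PARENT"
sleep 0.2
if kill -0 "$CHILD" 2>/dev/null; then
  echo "child survived"        # child is now reparented (classically to init)
fi
kill -9 "$CHILD" 2>/dev/null   # clean up
```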
One way around this is to put the group of processes you wish to kill into a separate process group and send the signal to -n, where n is the process group number.

man kill for more information on this
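The process-group approach can be sketched like this (sleep stands in for the hung perl script; enabling job control with set -m is one way to give a background job its own group):

```shell
#!/bin/bash
# With job control enabled, each background job is placed in its own
# process group, whose id equals the job leader's pid.
set -m
sleep 1000 &
PGID=$!                 # group leader's pid == process group id
# A negative pid tells kill(1) to signal every process in the group.
kill -9 -- "-$PGID"
```

From outside the script, you could discover the group with ps -o pgid= -p <pid> and signal it the same way.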
Alf666

You simply want to send the shell a SIGHUP rather than a SIGKILL. SIGHUP instructs the shell to exit, but the shell passes the signal on to its child processes before doing so.

Try to avoid SIGKILL: it kills processes outright, without giving them a chance to clean up, and it cannot be caught.

kill -HUP <processid>

The only problem you will encounter is if your perl script blocks SIGHUP. Then, you'll have the same problem.

In this case, you can do the following :

#!/bin/bash
mySignalHandler() {
  kill -9 "$MYPROC" 2>/dev/null   # force-kill the child if it is still alive
  echo "done" > /tmp/fic
  exit
}

trap mySignalHandler SIGHUP

/some/perl/script.pl &            # run the perl script in the background

MYPROC=$!                         # remember its pid

wait "$MYPROC"                    # blocks until the child exits or a trapped signal arrives


Of course, kill -9 on the perl script is not the best option either, but if it resists the standard HUP signal....
Hi Alf666,
> kill -HUP <processid>

There is no "hang up" (i.e., losing connection) happening, so -TERM might be more appropriate, especially when the script is started with nohup (or via cron).

Cheers,
Stefan
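Stefan's suggestion can be folded into the wrapper above by trapping both signals. A minimal sketch (sleep stands in for /some/perl/script.pl):

```shell
#!/bin/bash
# Tear down the child on either HUP or TERM.
cleanup() {
  kill -9 "$MYPROC" 2>/dev/null   # force-kill the child if still alive
  exit 0
}
trap cleanup HUP TERM

sleep 1000 &                      # stand-in for /some/perl/script.pl
MYPROC=$!
wait "$MYPROC"                    # interrupted when a trapped signal arrives
```

Either kill -HUP or kill -TERM on the wrapper's pid now cleans up the child as well.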
ASKER CERTIFIED SOLUTION
Alf666