Path: sparky!uunet!mcsun!uknet!warwick!nott-cs!mips.nott.ac.uk!pczip
From: pczip@mips.nott.ac.uk (Ivan Powis)
Newsgroups: comp.os.os9
Subject: Deallocation of process descriptors.
Message-ID: <1992Sep15.081851.10686@cs.nott.ac.uk>
Date: 15 Sep 92 08:18:51 GMT
Sender: news@cs.nott.ac.uk
Reply-To: pczip@mips.nott.ac.uk (Ivan Powis)
Organization: Nottingham University
Lines: 20

Can someone advise me on the following? I have an application which consists
of a long-term process which spawns many short-lived child processes. The
children all exit before the parent and are intended to execute concurrently
with it, so the parent does not do a 'wait' for each child. This all works,
but it leaves behind a trail of 'dead', non-deallocated process descriptors
until some indefinite time in the future when the parent process is killed
off. How can I force these unwanted descriptors to be deallocated when a
child terminates?

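For concreteness, the pattern is roughly the sketch below. It is written
against the Microware C library's os9fork() and wait() bindings; the exact
os9fork() argument list differs between the 6809 and 68000 libraries, and
the module name "child" is a made-up example, so treat this as an
illustration rather than working code.

    #include <stdio.h>
    #include <errno.h>

    #define NCHILDREN 10            /* made-up figure */

    main()
    {
        int i, pid;

        for (i = 0; i < NCHILDREN; i++) {
            /* os9fork(name, paramsize, paramptr, type, lang, datasize):
               check your C library reference for the exact list */
            pid = os9fork("child", 1, "\n", 0, 0, 0);
            if (pid == -1)
                exit(errno);
            /* no wait() here: the child runs concurrently, and after
               it exits its process descriptor stays allocated until
               this parent wait()s for it or itself dies */
        }

        /* ... long-lived main work continues here ... */
    }
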
This situation seems analogous to a shell creating background processes
with the 'command &' facility. The shell doesn't leave a trail of dead
descriptors in its wake, so there must be some way to achieve the desired
effect.

On a related point, does anyone know whether a child process can be made to
signal its parent automatically when it dies, without the parent having to
'wait' and without explicitly programming a send_signal operation into the
child?

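For comparison, the explicit arrangement the question hopes to avoid could
look roughly like this. It assumes the Microware C bindings intercept(),
kill(), wait() and getpid(); the signal code 100 and the passing of the
parent's ID to the child through its parameter string are arbitrary
illustrative choices.

    #define SIG_CHILD_DONE 100      /* arbitrary user signal code */

    static int reap_pending = 0;

    sighand(sig)                    /* intercept routine */
    int sig;
    {
        if (sig == SIG_CHILD_DONE)
            reap_pending++;
    }

    main()
    {
        int status;

        intercept(sighand);
        /* ... spawn the children as before, passing getpid() to
           each child in its parameter string ... */

        for (;;) {
            /* ... normal long-term work ... */
            while (reap_pending > 0) {
                wait(&status);      /* returns at once, since a child
                                       is already dead; its descriptor
                                       is then deallocated */
                reap_pending--;
            }
        }
    }

    /* and in the child, just before it exits:
           kill(parent_pid, SIG_CHILD_DONE);
       where parent_pid was recovered from the parameter string */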

Ivan Powis
