I am going to write a Python script which will read a Python command from
a socket, run it, and return some values back to the socket.
My problem is that I need some timeout. I need to say, for example:
os.system("someapplication.exe")
and kill it if it runs longer than, let's say, 100 seconds.
I want to call the command on a separate thread, then after the given
timeout kill the thread, but I realized (after reading the Usenet
archive) that there is no way to kill a thread in Python.
How can I implement my script then?
PS: it should be portable - Linux, Windows, QNX, etc.
Probably the easiest way is to use select with a timeout (see the
docs for the library module select), e.g.

a, b, c = select.select([mySocket], [], [], timeout)
if len(a) == 0:
    print 'We timed out'
else:
    print 'the socket has something for us'
Steve
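The select-with-timeout outline above can be exercised end-to-end; the
sketch below is modern Python 3 (the thread's snippets are Python 2
era), using a socketpair as a stand-in for the real connection:

```python
import select
import socket

# A connected pair with nothing to read: select() should time out.
a_sock, b_sock = socket.socketpair()
readable, _, _ = select.select([a_sock], [], [], 0.2)
timed_out = (len(readable) == 0)
print(timed_out)  # True

# Put a byte on the wire: now the socket selects as readable at once.
b_sock.send(b"x")
readable, _, _ = select.select([a_sock], [], [], 0.2)
got_data = (len(readable) == 1)
print(got_data)  # True
a_sock.close(); b_sock.close()
```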
(Sounds like a huge security risk, unless you have tight control over
who can connect to that socket.)
> My problem is that I need some timeout. I need to say, for example:
>
> os.system("someapplication.exe")
>
> and kill it if it runs longer than, let's say, 100 seconds.
>
> I want to call the command on a separate thread, then after the given
> timeout kill the thread, but I realized (after reading the Usenet
> archive) that there is no way to kill a thread in Python.
The issue isn't killing a thread in Python, it's killing the *new
process* which that thread has started. To do that you have to rely on
OS-specific (i.e. non-portable) techniques. Googling for "python kill
process" would probably get you off to a good start.
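A minimal sketch of the kill-the-process approach, using the subprocess
module (which postdates this thread and now offers a kill() that works
on both Unix and Windows); the sleeping child stands in for
someapplication.exe:

```python
import subprocess
import sys

def run_with_timeout(args, timeout):
    """Run a child process; kill it if it exceeds `timeout` seconds."""
    proc = subprocess.Popen(args)
    try:
        return proc.wait(timeout=timeout)   # normal exit: return code
    except subprocess.TimeoutExpired:
        proc.kill()                         # forcefully terminate the child
        proc.wait()                         # reap it to avoid a zombie
        return None                         # None signals a timeout

# A child that would run for 100 seconds is killed after 1 second.
result = run_with_timeout(
    [sys.executable, "-c", "import time; time.sleep(100)"], timeout=1)
print(result)  # None
```

Popen.kill() maps to TerminateProcess on Windows and SIGKILL on Unix,
which is what makes this portable today.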
-Peter
Jacek Poplawski wrote:
> After reading more of the archive I think that the solution may be to
> raise an exception after the timeout, but how to do it portably?
Python allows any thread to raise a KeyboardInterrupt in the
main thread (see thread.interrupt_main), but I don't think there
is any standard facility to raise an exception in any other
thread. I also believe, and hope, there is no support for lower-
level killing of threads; doing so is almost always a bad idea.
At arbitrary kill-times, threads may have important business
left to do, such as releasing locks, closing files, and other
kinds of clean-up.
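A small illustration of thread.interrupt_main (spelled
_thread.interrupt_main in modern Python): a daemon thread raises
KeyboardInterrupt in the main thread after a delay. The busy loop is a
stand-in for interruptible work:

```python
import _thread
import threading
import time

def interrupt_after(seconds):
    time.sleep(seconds)
    _thread.interrupt_main()    # deliver KeyboardInterrupt to main thread

threading.Thread(target=interrupt_after, args=(0.2,), daemon=True).start()

timed_out = False
try:
    while True:                 # stand-in for the long-running work
        pass
except KeyboardInterrupt:
    timed_out = True
print(timed_out)  # True
```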
Processes look like a better choice than threads here. Any
decent operating system will put a deceased process's affairs
in order.
Anticipating the next issues: we need to spawn and connect to
the various worker processes, and we need to time-out those
processes.
First, a portable worker-process timeout: In the child process,
create a worker daemon thread, and let the main thread wait
until either the worker signals that it is done, or the timeout
duration expires. As the Python Library Reference states in
section 7.5.6:
A thread can be flagged as a "daemon thread". The
significance of this flag is that the entire Python program
exits when only daemon threads are left.
The following code outlines the technique:
import threading

work_is_done = threading.Event()

def work_to_do(*args):
    # ... Do the work.
    work_is_done.set()

if __name__ == '__main__':
    # ... Set stuff up.
    worker_thread = threading.Thread(
        target=work_to_do,
        args=whatever_params)
    worker_thread.setDaemon(True)
    worker_thread.start()
    work_is_done.wait(timeout_duration)
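To see the daemon-thread timeout in action, the runnable sketch below
(modern Python 3; the filled-in worker and half-second timeout are
illustrative) launches the pattern in a child process and shows that
the child exits as soon as the Event wait times out, despite the
worker's 100-second sleep:

```python
import subprocess
import sys
import time

# The child runs the outline with the blanks filled in: its worker
# would sleep 100 seconds, but the main thread gives up after 0.5
# seconds and exits, taking the daemon worker down with it.
child_code = """
import threading, time

work_is_done = threading.Event()

def work_to_do():
    time.sleep(100)                 # work that never finishes in time
    work_is_done.set()

worker = threading.Thread(target=work_to_do, daemon=True)
worker.start()
finished = work_is_done.wait(0.5)   # the timeout duration
print('finished' if finished else 'timed out')
"""

start = time.time()
out = subprocess.run([sys.executable, "-c", child_code],
                     capture_output=True, text=True)
elapsed = time.time() - start
print(out.stdout.strip())  # timed out
print(elapsed < 30)        # True: exited long before the 100 s sleep
```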
Next, how do we connect the clients to the worker processes?
If Unix-only is acceptable, we can set up the accepting socket,
and then fork(). Each child process can accept() incoming
connections on its copy of the socket. Be aware that select() on
the process-shared socket is tricky, in that the socket can
select as readable, but the accept() can still block because some
other process took the connection.
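A Unix-only sketch of that fork()-then-accept() pattern, with a single
worker child and the parent playing the client's role (port 0 asks the
OS for any free port):

```python
import os
import socket

# Bind and listen before forking, so the child inherits a ready socket.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
server.listen(5)
port = server.getsockname()[1]

pid = os.fork()
if pid == 0:                        # child: one worker process
    conn, _ = server.accept()       # accept on the inherited socket
    conn.sendall(b"hello from worker")
    conn.close()
    os._exit(0)

# Parent connects as a client would.
client = socket.create_connection(("127.0.0.1", port))
chunks = []
while True:                         # read until the worker closes
    chunk = client.recv(64)
    if not chunk:
        break
    chunks.append(chunk)
data = b"".join(chunks)
print(data)  # b'hello from worker'
client.close()
os.waitpid(pid, 0)                  # reap the worker
server.close()
```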
If we need to run on Windows (and Unix), we can have one main
process handle the socket connections, and pipe the data to and
from worker processes. See the popen2 module in the Python
Standard Library.
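popen2 is long gone, but the same main-process-plus-pipes design
survives in the modern subprocess module; a minimal sketch with one
worker whose job (upper-casing a line) is invented for illustration:

```python
import subprocess
import sys

# One worker process; the main process writes a request down the
# worker's stdin pipe and reads the reply from its stdout pipe.
worker = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; line = sys.stdin.readline(); "
     "sys.stdout.write(line.upper())"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

reply, _ = worker.communicate("hello from the main process\n")
print(reply.strip())  # HELLO FROM THE MAIN PROCESS
```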
--
--Bryan
Definitely look into Peter Hansen's answer.
Olson's answer was about timing out one's own Python code.
Bryan Olson has heretofore avoided referring to himself in the
third person, and will henceforth endeavor to return to his
previous ways.
--
--Bryan
It works on QNX, thanks a lot, your reply was very helpful!
> If we need to run on Windows (and Unix), we can have one main
> process handle the socket connections, and pipe the data to and
> from worker processes. See the popen2 module in the Python
> Standard Library.
popen will not work in a thread on QNX/Windows; the same problem applies to spawnl.
Currently I am using:
os.system(command+">file 2>file2")
It works; I just need to finish implementing everything and check how it
may fail...
One more time - thanks for great idea!
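For the record, the modern subprocess module can capture output and
enforce the timeout in one call, avoiding the shell redirection to
temp files; a sketch, with an inline child standing in for the real
command:

```python
import subprocess
import sys

# Captures stdout/stderr directly and raises TimeoutExpired if the
# child outlives the deadline (here a generous 100 seconds).
result = subprocess.run(
    [sys.executable, "-c",
     "import sys; print('out'); print('err', file=sys.stderr)"],
    capture_output=True, text=True, timeout=100)
print(result.stdout.strip())  # out
print(result.stderr.strip())  # err
```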
Maybe the child process can just use SIGALRM instead of a separate
thread to implement the timeout.
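A sketch of the SIGALRM idea, assuming Unix (signal.alarm does not
exist on Windows) and, as noted downthread, signal handling only works
in the main thread; the busy loop stands in for the blocking work:

```python
import signal

class Timeout(Exception):
    pass

def on_alarm(signum, frame):
    raise Timeout

signal.signal(signal.SIGALRM, on_alarm)   # Unix-only
signal.alarm(1)                           # whole seconds only
timed_out = False
try:
    while True:
        pass                              # stand-in for the blocking work
except Timeout:
    timed_out = True
finally:
    signal.alarm(0)                       # cancel any pending alarm
print(timed_out)  # True
```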
> If Unix-only is acceptable, we can set up the accepting socket,
> and then fork(). Each child process can accept() incoming
> connections on its copy of the socket. Be aware that select() on
> the process-shared socket is tricky, in that the socket can
> select as readable, but the accept() can still block because some
> other process took the connection.
To get even more OS-specific, AF_UNIX sockets (at least on Linux) have
a feature called ancillary messages that allow passing file
descriptors between processes. It's currently not supported by the
Python socket lib, but one of these days... . But I don't think
Windows has anything like it. No idea about QNX.
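"One of these days" did eventually arrive: later Python versions added
socket.sendmsg()/recvmsg() with SCM_RIGHTS support. A Unix-only sketch
passing a pipe's read end between the two halves of a socketpair (a
stand-in for two real processes):

```python
import array
import os
import socket

# Two ends of an AF_UNIX socketpair stand in for two processes.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Something worth passing: the read end of a pipe with data in it.
r, w = os.pipe()
os.write(w, b"payload")
os.close(w)

# Send the descriptor across as a SCM_RIGHTS ancillary message.
fds = array.array("i", [r])
parent.sendmsg([b"x"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])
os.close(r)                      # sender's copy is no longer needed

# Receive it; the kernel hands over a duplicated descriptor.
msg, ancdata, flags, addr = child.recvmsg(1, socket.CMSG_SPACE(fds.itemsize))
level, ctype, cmsg_data = ancdata[0]
received = array.array("i")
received.frombytes(cmsg_data[:received.itemsize])
payload = os.read(received[0], 7)
print(payload)  # b'payload'
os.close(received[0])
parent.close(); child.close()
```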
Already tried that; signals work only in the main thread.
> To get even more OS-specific, AF_UNIX sockets (at least on Linux) have
> a feature called ancillary messages that allow passing file
> descriptors between processes. It's currently not supported by the
> Python socket lib, but one of these days... . But I don't think
> Windows has anything like it. No idea about QNX.
I have solved the problem with an additional process, just like Bryan
Olson proposed. Looks like all the features I wanted are working... :)
It can be done on Windows.
http://tangentsoft.net/wskfaq/articles/passing-sockets.html
--
--Bryan