Using module ‘subprocess’ with timeout

Problem :

Here’s the Python code to run an arbitrary command, returning its stdout data, or raising an exception on non-zero exit codes:

proc = subprocess.Popen(
    cmd,
    stderr=subprocess.STDOUT,  # Merge stdout and stderr
    stdout=subprocess.PIPE,
    shell=True)

communicate is used to wait for the process to exit:

stdoutdata, stderrdata = proc.communicate()

The subprocess module does not support timeouts (the ability to kill a process running for more than X seconds), so communicate may take forever to run.

What is the simplest way to implement timeouts in a Python program meant to run on Windows and Linux?

Solution :

I don’t know much about the low-level details; but, given that in Python 2.6 the API offers the ability to wait for threads and terminate processes, what about running the process in a separate thread?

import subprocess, threading

class Command(object):
    def __init__(self, cmd):
        self.cmd = cmd
        self.process = None

    def run(self, timeout):
        def target():
            print 'Thread started'
            self.process = subprocess.Popen(self.cmd, shell=True)
            self.process.communicate()
            print 'Thread finished'

        thread = threading.Thread(target=target)
        thread.start()

        thread.join(timeout)
        if thread.is_alive():
            print 'Terminating process'
            self.process.terminate()
            thread.join()
        print self.process.returncode

command = Command("echo 'Process started'; sleep 2; echo 'Process finished'")
command.run(timeout=3)
command.run(timeout=1)

The output of this snippet on my machine is:

Thread started
Process started
Process finished
Thread finished
Thread started
Process started
Terminating process
Thread finished

where it can be seen that, in the first execution, the process finished correctly (return code 0), while in the second one the process was terminated (return code -15).

I haven’t tested on Windows; but, aside from updating the example command, I think it should work, since I haven’t found anything in the documentation that says thread.join or process.terminate is not supported.

If you’re on Unix, you can use signals:

import signal

class Alarm(Exception):
    pass

def alarm_handler(signum, frame):
    raise Alarm

signal.signal(signal.SIGALRM, alarm_handler)
signal.alarm(5*60)  # 5 minutes
try:
    stdoutdata, stderrdata = proc.communicate()
    signal.alarm(0)  # reset the alarm
except Alarm:
    print "Oops, taking too long!"
    # whatever else

Here is Alex Martelli’s solution as a module with proper process killing. The other approaches do not work because they do not use proc.communicate(). So if you have a process that produces lots of output, it will fill its output buffer and then block until you read something from it.

from os import kill
from signal import alarm, signal, SIGALRM, SIGKILL
from subprocess import PIPE, Popen

def run(args, cwd = None, shell = False, kill_tree = True, timeout = -1, env = None):
    '''
    Run a command with a timeout after which it will be forcibly
    killed.
    '''
    class Alarm(Exception):
        pass
    def alarm_handler(signum, frame):
        raise Alarm
    p = Popen(args, shell = shell, cwd = cwd, stdout = PIPE, stderr = PIPE, env = env)
    if timeout != -1:
        signal(SIGALRM, alarm_handler)
        alarm(timeout)
    try:
        stdout, stderr = p.communicate()
        if timeout != -1:
            alarm(0)
    except Alarm:
        pids = [p.pid]
        if kill_tree:
            pids.extend(get_process_children(p.pid))
        for pid in pids:
            # process might have died before getting to this line
            # so wrap to avoid OSError: no such process
            try:
                kill(pid, SIGKILL)
            except OSError:
                pass
        return -9, '', ''
    return p.returncode, stdout, stderr

def get_process_children(pid):
    p = Popen('ps --no-headers -o pid --ppid %d' % pid, shell = True,
              stdout = PIPE, stderr = PIPE)
    stdout, stderr = p.communicate()
    return [int(p) for p in stdout.split()]

if __name__ == '__main__':
    print run('find /', shell = True, timeout = 3)
    print run('find', shell = True)
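As an aside, the buffering deadlock described above is easy to demonstrate. The following sketch (my own, not part of the answer) spawns a child that writes about 1 MB to stdout; a plain wait() could deadlock once the OS pipe buffer fills, while communicate() drains the pipe concurrently:

```python
import subprocess
import sys

# The child writes ~1 MB to stdout. communicate() reads the pipe while
# waiting, so it completes; wait() alone could deadlock because the child
# blocks on write() once the pipe buffer is full.
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.write('x' * 1000000)"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
print(len(out))  # 1000000
```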

timeout is now supported by call() and communicate() in the subprocess module (as of Python 3.3):

import subprocess

subprocess.call("command", timeout=20, shell=True)

This will call the command and raise the exception

subprocess.TimeoutExpired

if the command doesn’t finish after 20 seconds.

You can then handle the exception to continue your code, something like:

try:
    subprocess.call("command", timeout=20, shell=True)
except subprocess.TimeoutExpired:
    # insert code here

Hope this helps.

Since Python 3.5, there’s a new subprocess.run() universal command (that is meant to replace check_call, check_output, …) which has the timeout= parameter as well:

subprocess.run(args, *, stdin=None, input=None, stdout=None, stderr=None, shell=False, cwd=None, timeout=None, check=False, encoding=None, errors=None)

Run the command described by args. Wait for command to complete, then return a CompletedProcess instance.

It raises a subprocess.TimeoutExpired exception when the timeout expires.

Surprised nobody mentioned using timeout:

timeout 5 ping -c 3 somehost

This won’t work for every use case obviously, but if you’re dealing with a simple script, this is hard to beat.

Also available as gtimeout in coreutils via Homebrew for Mac users.

I’ve modified sussudio’s answer. Now the function returns (returncode, stdout, stderr, timeout) – stdout and stderr are decoded to utf-8 strings:

import subprocess
import shlex
from threading import Timer

def kill_proc(proc, timeout):
  timeout["value"] = True
  proc.kill()

def run(cmd, timeout_sec):
  proc = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
  timeout = {"value": False}
  timer = Timer(timeout_sec, kill_proc, [proc, timeout])
  timer.start()
  stdout, stderr = proc.communicate()
  timer.cancel()
  return proc.returncode, stdout.decode("utf-8"), stderr.decode("utf-8"), timeout["value"]

Another approach is to poll the process while collecting its output in temporary files:

import subprocess
import tempfile
import time

def run_with_timeout(args, timeout):
    outFile = tempfile.SpooledTemporaryFile()
    errFile = tempfile.SpooledTemporaryFile()
    proc = subprocess.Popen(args, stderr=errFile, stdout=outFile, universal_newlines=False)
    wait_remaining_sec = timeout

    while proc.poll() is None and wait_remaining_sec > 0:
        time.sleep(1)
        wait_remaining_sec -= 1

    if wait_remaining_sec <= 0:
        proc.kill()
        raise ProcessIncompleteError(proc, timeout)

    # read temp streams from start
    out =
    err =
    return out, err

Prepending the Linux command timeout isn’t a bad workaround and it worked for me:

cmd = "timeout 20 " + cmd
p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(output, err) = p.communicate()

I added the solution with threading from jcollado to my Python module easyprocess.


pip install easyprocess


from easyprocess import Proc

# shell is not supported!
stdout=Proc('ping localhost').call(timeout=1.5).stdout
print stdout

Here is my solution, I was using Thread and Event:

import subprocess
from threading import Thread, Event

def kill_on_timeout(done, timeout, proc):
    if not done.wait(timeout):
        proc.kill()

def exec_command(command, timeout):

    done = Event()
    proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    watcher = Thread(target=kill_on_timeout, args=(done, timeout, proc))
    watcher.daemon = True
    watcher.start()

    data, stderr = proc.communicate()
    done.set()

    return data, stderr, proc.returncode

In action:

In [2]: exec_command(['sleep', '10'], 5)
Out[2]: ('', '', -9)

In [3]: exec_command(['sleep', '10'], 11)
Out[3]: ('', '', 0)

The solution I use is to prefix the shell command with timelimit. If the command takes too long, timelimit will stop it and Popen will have a returncode set by timelimit. If it is > 128, it means timelimit killed the process.

See also python subprocess with timeout and large output (>64K)

if you are using python 2, give it a try

import subprocess32

try:
    output = subprocess32.check_output(command, shell=True, timeout=3)
except subprocess32.TimeoutExpired as e:
    print e

I’ve implemented what I could gather from a few of these. This works in Windows, and since this is a community wiki, I figure I would share my code as well:

import subprocess
import threading
import time

class Command(threading.Thread):
    def __init__(self, cmd, outFile, errFile, timeout):
        threading.Thread.__init__(self)
        self.cmd = cmd
        self.process = None
        self.outFile = outFile
        self.errFile = errFile
        self.timed_out = False
        self.timeout = timeout

    def run(self):
        self.process = subprocess.Popen(self.cmd, stdout = self.outFile,
            stderr = self.errFile)

        while (self.process.poll() is None and self.timeout > 0):
            time.sleep(1)
            self.timeout -= 1

        if not self.timeout > 0:
            self.process.terminate()
            self.timed_out = True
        else:
            self.timed_out = False

Then from another class or file:

        outFile = tempfile.SpooledTemporaryFile()
        errFile = tempfile.SpooledTemporaryFile()

        executor = command.Command(c, outFile, errFile, timeout)
        executor.daemon = True
        executor.start()

        executor.join()
        if executor.timed_out:
            out = 'timed out'
        else:
            out =
            err =


Once you understand the full process-running machinery in *nix, you will easily find a simpler solution:

Consider this simple example of how to make a timeoutable communicate() method using select.select() (available almost everywhere on *nix nowadays). This can also be written with epoll/poll/kqueue, but the select.select() variant could be a good example for you. And the major limitations of select (speed and the 1024 max fds) are not applicable to your task.

This works under *nix, does not create threads, does not use signals, can be launched from any thread (not only the main one), and is fast enough to read 250 MB/s of data from stdout on my machine (i5 2.3 GHz).

There is a problem in joining stdout/stderr at the end of communicate. If you have huge program output this could lead to big memory usage. But you can call communicate() several times with smaller timeouts.

import errno
import os
import select
import subprocess
import time

class Popen(subprocess.Popen):
    def communicate(self, input=None, timeout=None):
        if timeout is None:
            return subprocess.Popen.communicate(self, input)

        if self.stdin:
            # Flush stdio buffer, this might block if user
            # has been writing to .stdin in an uncontrolled
            # fashion.
            self.stdin.flush()
            if not input:
                self.stdin.close()

        read_set, write_set = [], []
        stdout = stderr = None

        if self.stdin and input:
            write_set.append(self.stdin)
        if self.stdout:
            read_set.append(self.stdout)
            stdout = []
        if self.stderr:
            read_set.append(self.stderr)
            stderr = []

        input_offset = 0
        deadline = time.time() + timeout

        while read_set or write_set:
            try:
                rlist, wlist, xlist =, write_set, [], max(0, deadline - time.time()))
            except select.error as ex:
                if ex.args[0] == errno.EINTR:
                    continue
                raise

            if not (rlist or wlist):
                # Just break if timeout
                # Since we do not close stdout/stderr/stdin, we can call
                # communicate() several times reading data by smaller pieces.
                break

            if self.stdin in wlist:
                chunk = input[input_offset:input_offset + subprocess._PIPE_BUF]
                try:
                    bytes_written = os.write(self.stdin.fileno(), chunk)
                except OSError as ex:
                    if ex.errno == errno.EPIPE:
                        self.stdin.close()
                        write_set.remove(self.stdin)
                    else:
                        raise
                else:
                    input_offset += bytes_written
                    if input_offset >= len(input):
                        self.stdin.close()
                        write_set.remove(self.stdin)

            # Read stdout / stderr by 1024 bytes
            for fn, tgt in (
                (self.stdout, stdout),
                (self.stderr, stderr),
            ):
                if fn in rlist:
                    data =, 1024)
                    if data == '':
                        fn.close()
                        read_set.remove(fn)
                    tgt.append(data)

        if stdout is not None:
            stdout = ''.join(stdout)
        if stderr is not None:
            stderr = ''.join(stderr)

        return (stdout, stderr)

You can do this using select

import subprocess
from datetime import datetime
from select import select

def call_with_timeout(cmd, timeout):
    started =
    sp = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    while True:
        p = select([sp.stdout], [], [], timeout)
        if p[0]:
        ret = sp.poll()
        if ret is not None:
            return ret
        if ( - started).total_seconds() > timeout:
            sp.kill()
            return None

Python 2.7:

import time
import subprocess

def run_command(cmd, timeout=0):
    start_time = time.time()
    df = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    while timeout and df.poll() is None:
        if time.time() - start_time >= timeout:
            df.kill()
            return -1, ""
    output = '\n'.join(df.communicate()).strip()
    return df.returncode, output

Example of captured output after timeout tested in Python 3.7.8:

try:
    return, shell=True, capture_output=True, timeout=20, cwd=cwd, universal_newlines=True)
except subprocess.TimeoutExpired as e:
    print(e.output.decode(encoding="utf-8", errors="ignore"))
    assert False

The exception subprocess.TimeoutExpired has the output and other members:

cmd – Command that was used to spawn the child process.

timeout – Timeout in seconds.

output – Output of the child process if it was captured by run() or
check_output(). Otherwise, None.

stdout – Alias for output, for symmetry with stderr.

stderr – Stderr output of the child process if it was captured by
run(). Otherwise, None.


I’ve used killableprocess successfully on Windows, Linux and Mac. If you are using Cygwin Python, you’ll need OSAF’s version of killableprocess because otherwise native Windows processes won’t get killed.

Although I haven’t looked at it extensively, this decorator I found at ActiveState seems to be quite useful for this sort of thing. Along with subprocess.Popen(..., close_fds=True), at least I’m ready for shell-scripting in Python.
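I haven’t reproduced the ActiveState recipe here, but a minimal thread-based timeout decorator in the same spirit looks roughly like this (my own sketch, not the recipe itself):

```python
import functools
import threading

def timeout(seconds):
    """Run the decorated function in a daemon thread and raise if it overruns."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result, error = [], []

            def target():
                try:
                    result.append(func(*args, **kwargs))
                except Exception as exc:
                    error.append(exc)

            worker = threading.Thread(target=target)
            worker.daemon = True  # don't keep the interpreter alive for it
            worker.start()
            worker.join(seconds)
            if worker.is_alive():
                raise RuntimeError('timed out after %s seconds' % seconds)
            if error:
                raise error[0]
            return result[0]
        return wrapper
    return decorator

@timeout(2)
def quick():
    return 42

print(quick())  # 42
```

Note the usual caveat: unlike the process-based answers above, the worker thread cannot actually be killed; after a timeout it keeps running in the background until the function returns on its own.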

This solution kills the process tree in case of shell=True, passes parameters to the process (or not), has a timeout, and gets the stdout, stderr, and process output of the call back (it uses psutil for the kill_proc_tree). This was based on several solutions posted in SO, including jcollado’s. Posting in response to comments by Anson and jradice in jcollado’s answer. Tested in Windows Server 2012 and Ubuntu 14.04. Please note that for Ubuntu you need to change the parent.children(…) call to parent.get_children(…).

import os
import subprocess
import psutil
from threading import Thread

def kill_proc_tree(pid, including_parent=True):
  parent = psutil.Process(pid)
  children = parent.children(recursive=True)
  for child in children:
    child.kill()
  psutil.wait_procs(children, timeout=5)
  if including_parent:
    parent.kill()
    parent.wait(5)

def run_with_timeout(cmd, current_dir, cmd_parms, timeout):
  def target():
    process = subprocess.Popen(cmd, cwd=current_dir, shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)

    # wait for the process to terminate
    if (cmd_parms == ""):
      out, err = process.communicate()
    else:
      out, err = process.communicate(cmd_parms)
    errcode = process.returncode

  thread = Thread(target=target)
  thread.start()

  thread.join(timeout)
  if thread.is_alive():
    me = os.getpid()
    kill_proc_tree(me, including_parent=False)
    thread.join()

There’s an idea to subclass the Popen class and extend it with some simple method decorators. Let’s call it ExpirablePopen.

from logging import error
from subprocess import Popen
from threading import Event
from threading import Thread

class ExpirablePopen(Popen):

    def __init__(self, *args, **kwargs):
        self.timeout = kwargs.pop('timeout', 0)
        self.timer = None
        self.done = Event()

        Popen.__init__(self, *args, **kwargs)

    def __tkill(self):
        timeout = self.timeout
        if not self.done.wait(timeout):
            error('Terminating process {} by timeout of {} secs.'.format(, timeout))
            self.kill()

    def expirable(func):
        def wrapper(self, *args, **kwargs):
            # zero timeout means call of parent method
            if self.timeout == 0:
                return func(self, *args, **kwargs)

            # if timer is None, need to start it
            if self.timer is None:
                self.timer = thr = Thread(target=self.__tkill)
                thr.daemon = True
                thr.start()

            result = func(self, *args, **kwargs)
            self.done.set()

            return result
        return wrapper

    wait = expirable(Popen.wait)
    communicate = expirable(Popen.communicate)

if __name__ == '__main__':
    from subprocess import PIPE

    print ExpirablePopen('ssh -T', stdout=PIPE, timeout=1).communicate()

I had the problem that I wanted to terminate a multithreading subprocess if it took longer than a given timeout length. I wanted to set a timeout in Popen(), but it did not work. Then I realized that Popen().wait() is equal to call(), and so I had the idea to set a timeout within the .wait(timeout=xxx) method, which finally worked. I solved it this way:

import os
import sys
import signal
import subprocess
from multiprocessing import Pool

cores_for_parallelization = 4
timeout_time = 15  # seconds

def main():
    jobs = [...YOUR_JOB_LIST...]
    with Pool(cores_for_parallelization) as p:, jobs)

def run_parallel_jobs(args):
    # Define the arguments including the paths
    initial_terminal_command = 'C:\\Python34\\python.exe'  # Python executable
    function_to_start = 'C:\\temp\\'  # The multithreading script (placeholder path)
    final_list = [initial_terminal_command, function_to_start]

    # Start the subprocess and determine the process PID
    subp = subprocess.Popen(final_list)  # starts the process
    pid =

    # Wait until the return code returns from the function by considering the timeout.
    # If not, terminate the process.
    try:
        returncode = subp.wait(timeout=timeout_time)  # should be zero if accomplished
    except subprocess.TimeoutExpired:
        # Distinguish between Linux and Windows and terminate the process if
        # the timeout has been expired
        if sys.platform == 'linux2':
            os.kill(pid, signal.SIGTERM)
        elif sys.platform == 'win32':
            subp.terminate()
if __name__ == '__main__':
    main()
Late answer for Linux only, but in case someone wants to use subprocess.getstatusoutput(), where the timeout argument isn’t available, you can use the built-in Linux timeout on the beginning of the command, i.e.:

import subprocess

timeout = 25  # seconds
cmd = f"timeout --preserve-status --foreground {timeout} ping somehost"
exit_c, out = subprocess.getstatusoutput(cmd)

if exit_c != 0:
    print("Error: ", out)

timeout arguments:

--preserve-status: make timeout exit with the same status as the command, even when the command times out
--foreground: run the command in the foreground, so it can read from the terminal and receive signals

Unfortunately, I’m bound by very strict policies on the disclosure of source code by my employer, so I can’t provide actual code. But for my taste the best solution is to create a subclass overriding Popen.wait() to poll instead of waiting indefinitely, and Popen.__init__ to accept a timeout parameter. Once you do that, all the other Popen methods (which call wait) will work as expected, including communicate.

The subprocess2 module provides extensions to the subprocess module which allow you to wait up to a certain period of time, otherwise terminate.

So, to wait up to 10 seconds for the process to terminate, otherwise kill:

pipe  = subprocess.Popen('...')

timeout =  10

results = pipe.waitOrTerminate(timeout)

This is compatible with both Windows and Unix. "results" is a dictionary; it contains "returnCode", which is the return code of the app (or None if it had to be killed), as well as "actionTaken", which will be "SUBPROCESS2_PROCESS_COMPLETED" if the process completed normally, or a mask of "SUBPROCESS2_PROCESS_TERMINATED" and "SUBPROCESS2_PROCESS_KILLED" depending on the action taken (see the documentation for full details).

For Python 2.6+, use gevent:

 from gevent.subprocess import Popen, PIPE, STDOUT

 def call_sys(cmd, timeout):
      p = Popen(cmd, shell=True, stdout=PIPE)
      output, _ = p.communicate(timeout=timeout)
      assert p.returncode == 0, p.returncode
      return output

 call_sys('./', 2)

 # example contents of
 sleep 5
 echo done
 exit 1
