Non blocking subprocess.call


Question:


I’m trying to make a non-blocking subprocess call to run a slave script from my main program. I need to pass args from the main program to the slave script once, when it is first started; the slave then runs for a period of time and exits.
import subprocess

for i, arg in enumerate(arg_list, start=1):
    subprocess.call(["python", "", str(arg)], shell=True)

{loop through program and do more stuff..}

And my slave script:

print(sys.argv)
while True:
    {do stuff with args in loop till finished}

Currently, the call blocks the main program from running the rest of its tasks; I simply want the slave script to run independently of the main program once I’ve passed args to it. The two scripts no longer need to communicate.

I’ve found a few posts on the net about non-blocking calls, but most of them are centered on requiring communication with the subprocess at some point, which I currently do not need. Would anyone know how to implement this in a simple fashion?

Asked By: DavidJB


Answer #1:

You should use subprocess.Popen instead of subprocess.call.

Something like:

subprocess.Popen(["python", ""] + sys.argv[1:])

From the docs on subprocess.call:

Run the command described by args. Wait for command to complete, then return the returncode attribute.

(Also, don’t use a list to pass in the arguments if you’re going to use shell=True.)
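To illustrate that point: with the default shell=False a list maps one element per argv entry, while with shell=True the command should normally be a single string for the shell to parse. A minimal sketch (not from the original answer) using the POSIX echo command:

```python
import subprocess

# shell=False (the default): each list element becomes one argv entry.
rc_list = subprocess.call(["echo", "hello"])

# shell=True: pass a single string and let the shell parse it.
# Passing a list here would make only the first element the command.
rc_shell = subprocess.call("echo hello", shell=True)
```

Both calls print "hello" and return 0 on success.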

Here’s an MCVE¹ that demonstrates a non-blocking subprocess call:

import subprocess
import time

p = subprocess.Popen(['sleep', '5'])

while p.poll() is None:
    print('Still sleeping')
    time.sleep(1)

print('Not sleeping any longer.  Exited with returncode %d' % p.returncode)

An alternative approach, which relies on more recent additions to the Python language that allow for coroutine-based parallelism, is:

# python3.5 required but could be modified to work with python3.4.
import asyncio

async def do_subprocess():
    print('Subprocess sleeping')
    proc = await asyncio.create_subprocess_exec('sleep', '5')
    returncode = await proc.wait()
    print('Subprocess done sleeping.  Return code = %d' % returncode)

async def sleep_report(number):
    for i in range(number + 1):
        print('Slept for %d seconds' % i)
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()

tasks = [
    asyncio.ensure_future(do_subprocess()),
    asyncio.ensure_future(sleep_report(5)),
]
loop.run_until_complete(asyncio.gather(*tasks))
loop.close()

¹ Tested on OS X using python2.7 & python3.6

Answered By: mgilson

Answer #2:

There are three levels of thoroughness here.

As mgilson says, if you just swap out subprocess.call for subprocess.Popen, keeping everything else the same, then the master will not wait for the slave script to finish before it continues. That may be enough by itself.

If you care about zombie processes hanging around, you should save the object returned from subprocess.Popen and at some later point call its wait method. (The zombies will automatically go away when the master exits, so this is only a serious problem if the master runs for a very long time and/or might create many subprocesses.)

And finally, if you don’t want a zombie but you also don’t want to decide where to do the waiting (this might be appropriate if both processes run for a long and unpredictable time afterward), use the python-daemon library to have the slave disassociate itself from the master; in that case you can continue using subprocess.call in the master.
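The second level can be sketched as follows; sys.executable and the inline -c one-liner are stand-ins for the real slave script:

```python
import subprocess
import sys

# Start the slave without blocking; keep the Popen handle around.
# (sys.executable and the -c one-liner are placeholders for the real slave.)
p = subprocess.Popen([sys.executable, "-c", "print('slave running')"])

# ... the master continues with its own work here ...

# Later, reap the child so it does not remain a zombie:
returncode = p.wait()
print("slave exited with", returncode)
```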

Answered By: zwol

Answer #3:

For Python 3.8.x

import shlex
import subprocess

cmd = "<full filepath plus arguments of child process>"
cmds = shlex.split(cmd)
p = subprocess.Popen(cmds, start_new_session=True)

This will allow the parent process to exit while the child process continues to run. Not sure about zombies.
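On the zombie question: if the parent is long-lived, one option (not from the original answer) is to reap the child non-blockingly with poll() at a convenient point, since poll() returns None while the child is alive and its returncode once it has exited. A sketch with a hypothetical short-lived child:

```python
import subprocess
import sys
import time

# Hypothetical short-lived child; start_new_session=True detaches it
# from the parent's session as in the answer above.
p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(1)"],
                     start_new_session=True)

# poll() is non-blocking: None while running, the returncode afterwards.
while p.poll() is None:
    time.sleep(0.2)  # the parent can do other work instead of sleeping

print("child reaped with returncode", p.returncode)
```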

Tested on Python 3.8.1 on macOS 10.15.5

Answered By: JS.
