I’m debugging a Python script that must run on my virtual machine, but I prefer to edit the script locally (outside of the virtual machine). I find it tedious to scp the modified script to the virtual machine every time. Can anyone suggest a more effective way?
In particular, I’m wondering if it’s possible to execute Python scripts on the remote VM directly. Something like this:
python --remote email@example.com hello.py //**FAKED**, serves to explain ONLY
It is possible using ssh. Python accepts a hyphen (-) as an argument, which tells it to execute its standard input:
cat hello.py | ssh firstname.lastname@example.org python -
Run python --help for more info.
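The hyphen behaviour is easy to verify locally before involving ssh at all (a quick sanity check, assuming a python3 on your PATH):

```shell
# 'python -' tells the interpreter to read the program from stdin,
# which is exactly what the ssh pipeline above relies on
echo 'print("hello from stdin")' | python3 -
```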
Although this question isn’t quite new and an answer was already chosen, I would like to share another nice approach.
Using the paramiko library – a pure Python implementation of SSH2 – your Python script can connect to a remote host via SSH, copy itself (!) to that host, and then execute that copy on the remote host. Stdin, stdout and stderr of the remote process will be available to your locally running script, so this solution is pretty much independent of any IDE.
On my local machine, I run the script with the command-line parameter ‘deploy’, which triggers the remote execution. Without such a parameter, the actual code intended for the remote host is run.
import sys
import os

def main():
    print os.name

if __name__ == '__main__':
    try:
        if sys.argv[1] == 'deploy':
            import paramiko

            # Connect to remote host
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect('remote_hostname_or_IP', username='john', password='secret')

            # Setup sftp connection and transmit this script
            sftp = client.open_sftp()
            sftp.put(__file__, '/tmp/myscript.py')
            sftp.close()

            # Run the transmitted script remotely without args and show its output.
            # SSHClient.exec_command() returns the tuple (stdin, stdout, stderr)
            stdin, stdout, stderr = client.exec_command('python /tmp/myscript.py')
            for line in stdout:
                # Process each line in the remote output
                print line,

            client.close()
            sys.exit(0)
    except IndexError:
        pass

    # No cmd-line args provided, run script normally
    main()
Exception handling is left out to simplify this example. In projects with multiple script files, you will probably have to put all of those files (and other dependencies) on the remote host.
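The deploy-or-run dispatch at the top can be exercised without any network access; it is just argument inspection that falls through to main() when no argument is given. A hypothetical, paramiko-free sketch of the same pattern:

```python
def main():
    # stands in for the real workload that runs on the remote host
    return 'ran locally'

def dispatch(argv):
    # argv[1] raises IndexError when no command-line argument was given,
    # mirroring the try/except IndexError in the script above
    try:
        if argv[1] == 'deploy':
            return 'would deploy via paramiko'
    except IndexError:
        pass
    return main()

print(dispatch(['myscript.py']))            # → ran locally
print(dispatch(['myscript.py', 'deploy']))  # → would deploy via paramiko
```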
ssh user@machine python < script.py - arg1 arg2
A cat … | pipeline is usually not necessary; the shell’s input redirection does the same job.
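When the program comes from standard input, sys.argv[0] becomes '-' and the words after the hyphen land in sys.argv[1:], exactly as they would for a script file. A quick local check (assuming python3 on your PATH):

```shell
# the remote interpreter would see exactly the same argv
printf 'import sys\nprint(sys.argv)\n' | python3 - arg1 arg2
```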
You can do it via ssh.
ssh email@example.com "python ./hello.py"
You can also edit the script over ssh, using a terminal text editor or X11 forwarding.
I’ve had to do this before using Paramiko, in a case where I wanted to run a dynamic, local PyQt4 script on a host running an ssh server that had connected to my OpenVPN server, and ask for its routing preference (split tunneling).
So long as the ssh server you are connecting to has all of the required dependencies of your script (PyQt4 in my case), you can easily encapsulate the script’s source by encoding it in base64 and calling the exec() built-in function on the decoded message. If I remember correctly, my one-liner for this was:
stdout = client.exec_command('python -c "exec(\"' + open('hello.py', 'r').read().encode('base64').replace('\n', '') + '\".decode(\"base64\"))"')
It is hard to read, and you have to escape the escape sequences because they are interpreted twice (once by the sender and then again by the receiver). It also may need some debugging; I’ve packed up my server to PCS, or I’d just reference my OpenVPN routing script.
The difference between doing it this way and sending a file is that the script never touches the disk on the server and runs straight from memory (unless, of course, they log the command). You’ll find that encapsulating information this way (although inefficient) can help you package data into a single file.
For instance, you can use this method to include raw data from external dependencies (e.g. an image) in your main script.
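Note that str.encode('base64') only exists on Python 2; on Python 3 the same encode-then-exec round trip goes through the base64 module. A local sketch of the idea, leaving out the ssh hop:

```python
import base64

# the script we would embed in the remote one-liner
source = 'result = 2 + 2'

# encode: the resulting string has no quotes or newlines, so it can be
# pasted safely into a quoted command line
blob = base64.b64encode(source.encode()).decode()

# decode and exec straight from memory, as the remote interpreter would
namespace = {}
exec(base64.b64decode(blob).decode(), namespace)
print(namespace['result'])  # → 4
```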