
How do I launch background jobs w/ paramiko?

Source: https://www.devze.com, 2022-12-18 23:16 (via the web)

Here is my scenario: I am trying to automate some tasks using Paramiko. The tasks need to be started in this order (using the notation (host, task)): (A, 1), (B, 2), (C, 2), (A, 3), (B, 3) -- essentially starting servers and clients for some testing in the correct order. Further, because in the tests networking may get mucked up, and because I need some of the output from the tests, I would like to just redirect output to a file.

In similar scenarios the common response is to use 'screen -m -d' or 'nohup'. However, with Paramiko's exec_command, nohup doesn't actually let the call return. Using:

bash -c -l nohup test_cmd & 

doesn't work either; exec_command still blocks until the process ends.

In the screen case, output redirection doesn't work very well (actually, it doesn't work at all, as best I can figure out).

So, after all that explanation, my question is: is there an easy, elegant way to detach processes and capture their output so that exec_command does not block?

Update

The dtach command works nicely for this!
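Combining dtach with exec_command might look like the sketch below; the socket and log paths are hypothetical, and it assumes dtach is installed on the remote host. `dtach -n` creates the session and exits immediately, so the remote shell returns and exec_command does not block.

```python
def dtach_command(cmd, socket="/tmp/task.sock", logfile="task.log"):
    # Wrap cmd so that dtach -n starts it detached from the calling
    # shell, while the redirection captures its output in logfile.
    return "dtach -n {0} sh -c '{1} > {2} 2>&1'".format(socket, cmd, logfile)

# Usage with an open paramiko channel:
# channel.exec_command(dtach_command("./test_cmd"))
```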


You can do this without using nohup or screen:

def command(channel, cmd):
    channel.exec_command(cmd + ' > /dev/null 2>&1 &')

This says "redirect stdout from cmd into /dev/null, then redirect stderr into stdout, which also goes to /dev/null. Then push it into the background."

exec_command won't get hung up waiting on output (there is none coming), so it returns.
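A fuller sketch of this approach, with the redirection pulled into a helper; the hostname, username, and command in `run_detached` are placeholders:

```python
def background(cmd):
    # Discard stdout, fold stderr into it, and background the process,
    # so the remote shell exits and exec_command returns at once.
    return cmd + " > /dev/null 2>&1 &"

def run_detached(host, user, cmd):
    # Requires paramiko (third-party) and a reachable host.
    import paramiko
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user)
    client.exec_command(background(cmd))
    client.close()

# e.g. run_detached("hostA", "user", "./start_server")
```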


I don't know anything about Paramiko and its exec_command, but maybe bash's disown could help.

#!/bin/bash -l
test_cmd &
disown %1  # disown takes a job spec, not a command name
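Translated into a Paramiko call, the disown approach might look like the following sketch; it assumes bash is available on the remote side, and the log file name is a placeholder:

```python
def disown_command(cmd, logfile="task.log"):
    # Background cmd, capture its output, then disown the job so the
    # login shell can exit without waiting on it.
    return "bash -lc '{0} > {1} 2>&1 & disown'".format(cmd, logfile)

# Usage: channel.exec_command(disown_command("./test_cmd"))
```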


For this purpose I wrote a small shell script which I execute on the remote side:

#!/bin/bash

# check for command line arguments
if [ $# -lt 2 ]; then
        echo "usage: runcommand.sh EXECUTIONSTRING TASKNAME"
        exit 1
fi

taskname=$2
execstr=$1
logfile=$taskname.log

echo START $taskname > $logfile
echo OWNPID $BASHPID >> $logfile
stime=`date -u +"%Y-%m-%d_%H-%M-%S"`
stimes=`date -u +"%s"`
echo STARTTIME $stime >> $logfile
echo STARTTIMES $stimes >> $logfile
# execute program
$execstr 1>$taskname.stdout 2>$taskname.stderr 
echo RETVAL $? >> $logfile

stime=`date -u +"%Y-%m-%d_%H-%M-%S"`
stimes=`date -u +"%s"`
echo STOPTIME $stime >> $logfile
echo STOPTIMES $stimes >> $logfile
echo STOP $taskname >> $logfile

What it does: it executes the given task, redirects its stdout and stderr to two separate files, and writes a logfile recording when the task started, when it finished, and its return value.

Then I first copy the script to the remote host and execute it there with exec_command:

command = './runcommand.sh "{execpath}" "{taskname}" > /dev/null 2>&1 &'
ssh.exec_command(command.format(execpath=anexecpath, taskname=ataskname))
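The copy-then-execute step could be sketched as below; the paths are placeholders, and `client` is assumed to be an already-connected paramiko.SSHClient:

```python
import posixpath

def build_run_command(execpath, taskname, script="./runcommand.sh"):
    # Launch the wrapper script detached and silence the wrapper itself;
    # the wrapper writes its own log and output files.
    return '{0} "{1}" "{2}" > /dev/null 2>&1 &'.format(script, execpath, taskname)

def deploy_and_run(client, local_script, execpath, taskname, remote_dir="."):
    # Copy runcommand.sh to the remote host over SFTP, mark it
    # executable, then start it without blocking.
    remote_script = posixpath.join(remote_dir, "runcommand.sh")
    sftp = client.open_sftp()
    sftp.put(local_script, remote_script)
    sftp.chmod(remote_script, 0o755)
    sftp.close()
    client.exec_command(build_run_command(execpath, taskname, remote_script))
```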
