Is it possible to make a bash shell script interact with another command line program?

https://www.devze.com 2023-01-11 02:48 (source: web)
I am using an interactive command line program in a Linux terminal running the bash shell. I have a definite sequence of commands that I input to the program. The program writes its output to standard output. One of these commands is a 'save' command, which writes the output of the previously run command to a file on disk.

A typical cycle is:

$prog
$$cmdx
$$<some output>
$$save <filename>
$$cmdy
$$<again, some output>
$$save <filename>
$$q
$<back to bash shell>
  • $ is the bash prompt
  • $$ is the program's prompt
  • q is the quit command for prog
  • prog is such that it appends the output of the previous command to filename

How can I automate this process? I would like to write a shell script that starts this program, cycles through the steps, feeding it the commands one by one, and then quits. I hope the save command still works correctly.


If your command doesn't care how fast you give it input, and you don't really need to interact with it, then you can use a heredoc.

Example:

#!/bin/bash
prog <<EOD
cmdx
save filex
cmdy
save filey
q
EOD
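As a quick sanity check of the pattern, here is the same heredoc idea driving sh, which stands in for prog (a stand-in chosen only for illustration; any line-oriented program works the same way):

```shell
#!/bin/bash
# sh consumes the scripted "session" from the heredoc, one line
# at a time, exactly as prog would.
sh <<EOD
echo hello from the scripted session
exit
EOD
```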

If you need branching based on the output of the program, or if your program is at all sensitive to the timing of your commands, then Expect is what you want.


I recommend you use Expect. This tool is designed to automate interactive shell applications.


Where there's a need, there's a way! I think it's a good bash lesson to see how process management and IPC work. The best solution is, of course, Expect. The real reason is that pipes can be tricky, and many commands are designed to wait for data, meaning that a process can hang for reasons that may be difficult to predict. But learning how and why reminds us of what is going on under the hood.

When two processes engage in a conversation, the danger is that one or both will try to read data that will never arrive. The rules of engagement have to be crystal clear. Things like CRLF and character encoding can kill the party. Luckily, two close partners like a bash script and its child process are relatively easy to keep in line. The easiest thing to miss is that bash launches a child process for just about everything it does. If you can make it work with bash, you thoroughly know what you're doing.

The point is that we want to talk to another process. Here's a server:

# a really bad SMTP server

# a hint at courtesy to the client
shopt -s nocasematch

echo "220 $HOSTNAME SMTP [$$]"

while true
do
    read
    [[ "$REPLY" =~ ^helo\ [^\ ] ]] && break
    [[ "$REPLY" =~ ^quit ]] && echo "Later" && exit
    echo 503 5.5.1 Nice guys say hello.
done

NAME=`echo "$REPLY" | sed -r -e 's/^helo //i'`
echo 250 Hello there, $NAME 

while read
do
    [[ "$REPLY" =~ ^mail\ from: ]] && { echo 250 2.1.0 Good guess...; continue; }
    [[ "$REPLY" =~ ^rcpt\ to: ]] && { echo 250 2.1.0 Keep trying...; continue; }
    [[ "$REPLY" =~ ^quit ]] && { echo Later, $NAME; exit; }
    echo 502 5.5.2 Please just QUIT
done

echo Pipe closed, exiting

Now, the script that hopefully does the magic.

# Talk to a subprocess using named pipes

rm -fr A B      # don't use old pipes
mkfifo A B

# server will listen to A and send to B
./smtp.sh < A > B &

# If we simply wrote to A with 'echo ... > A', each write would
# open and close the pipe, and the close would look like EOF to
# the server. Holding a file handle open keeps the pipe alive.
exec 3>A

read < B
echo "$REPLY"

# send an email, so long as response codes look good
while read -r L
do
    echo "> $L"
    echo "$L" > A
    read -r < B
    echo "$REPLY"
    [[ "$REPLY" =~ ^2 ]] || break

done <<EOF
HELO me
MAIL FROM: me
RCPT TO: you
DATA
Subject: Nothing

Message
.
EOF

# This is tricky, and the reason sane people use Expect.  If we
# send QUIT and then wait on B (ie. cat B) we may have trouble.
# If the server exits, the "Later" response in the pipe might
# disappear, leaving the cat command (and us) waiting for data.
# So, let cat have our STDOUT and move on.
cat B &

# Now, we should wait for the cat process to get going before we
# send the QUIT command. If we don't, the server will exit, the
# pipe will empty and cat will miss its chance to show the
# server's final words.
echo -n > B     # also, 'sleep 1' will probably work.

echo "> quit"
echo "quit" > A

# close the file handle
exec 3>&-

rm A B

Notice that we are not simply dumping the SMTP commands on the server. We check each response code to make sure things are OK. In this case, things will not be OK and the script will bail.
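Stripped to its core, the fifo-plus-exec pattern above looks like this; the while/read loop here is an arbitrary stand-in for the interactive program, chosen only for illustration:

```shell
#!/bin/bash
# Minimal sketch of the named-pipe pattern: the child answers each
# line it receives on 'in' by writing a reply to 'out'.
rm -f in out
mkfifo in out

# the "program": a trivial line-for-line responder
while read -r line; do echo "you said: $line"; done < in > out &

exec 3>in            # hold a write end open so the child never sees EOF early
echo hello >&3
read -r REPLY < out
echo "$REPLY"        # prints: you said: hello
exec 3>&-            # closing fd 3 sends EOF, letting the child finish
wait
rm in out
```

Closing descriptor 3 is what ends the conversation: the child's read returns EOF only when the last writer on the pipe goes away.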


I use Expect to interact with the shell for switch and router backups. A bash script calls the expect script with the correct variables.

for i in <list of machines> ; do expect_script.sh "$i" ; done

This will ssh to each box, run the backup commands, copy out the appropriate files, and then move on to the next box.


For simple use cases you may use a combination of a subshell, echo, and sleep:

# in Terminal.app
telnet localhost 25
helo localhost
ehlo localhost
quit

(sleep 5; echo "helo localhost"; sleep 5; echo "ehlo localhost"; sleep 5; echo quit ) | 
   telnet localhost 25 
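The timing-based approach is easy to try locally, with sh standing in for the telnet session (again, a stand-in chosen for illustration):

```shell
# sleep paces the input so a slow server could keep up;
# sh plays the part of the remote service here.
( sleep 1; echo "echo step one"; sleep 1; echo "exit" ) | sh
```

The obvious drawback is that the delays are guesses: too short and the server misses input it wasn't ready for, too long and the script crawls.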


printf 'cmdx\nsave\n...etc...\n' | prog

..?
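Note that bash's builtin echo does not expand \n unless given -e, so printf is the safer way to feed a canned command list. A runnable sketch of the one-liner idea, with sh standing in for prog (a stand-in for illustration):

```shell
# printf expands \n portably; sh stands in for the interactive prog.
printf 'echo cmdx output\nexit\n' | sh
```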
