I have the following functions.
hello () {
    echo "Hello"
}
func () {
    hello
    echo "world"
}
If I don't want the output of the hello function to be printed, but instead want to capture it in a variable and do something with it, is forking a subshell like below the only way? Isn't that an unnecessary creation of a new child process? Can this be optimized?
func () {
    local Var=$(hello)
    echo "${Var/e/E} world"
}
An ugly solution is to temporarily replace echo so that it sets a global variable, which you can access from your function:
func () {
    echo () {
        result="$@"
    }
    result=
    hello
    unset -f echo
    echo "Result is $result"
}
I agree it's nasty, but it avoids the subshell.
How about using a file descriptor and a Bash here string?
hello () {
    exec 3<<<"Hello"          # open fd 3 for reading from a here string holding the result
}
func () {
    local Var
    exec 3>&-                 # make sure fd 3 starts out closed
    hello && read Var <&3     # read hello's result back from fd 3
    echo "${Var/e/E} world"
    exec 3>&-                 # close fd 3 again
}
func
You can make the caller pass in a variable name to hold the output value and then create a global variable with that name inside the function, like this:
myfunc() { declare -g $1="hello"; }
Then call it as:
myfunc mystring
echo "$mystring world" # gives "hello world"
So, your functions can be re-written as:
hello() {
    declare -g $1="Hello"
}
func() {
    hello Var
    echo "${Var/e/E} world"
}
The only limitation is that variables used for holding the output values can't be local.
Related post which talks about using namerefs:
- How to return an array in bash without using globals?
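If your bash is 4.3 or newer, a nameref removes that limitation: the callee declares a local nameref to whatever name the caller passed in, so assignments land in the caller's (possibly local) variable, with no eval and no global. A minimal sketch, assuming bash 4.3+ (the names ref and Var are only illustrative):
hello() {
    local -n ref=$1   # nameref: assignments to ref go to the caller's variable
    ref="Hello"
}
func() {
    local Var         # a local works here, unlike with declare -g
    hello Var
    echo "${Var/e/E} world"
}
func   # prints "HEllo world"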
Not a bash answer: at least one shell, ksh, optimises command substitution $( ... ) so that it does not spawn a subshell for builtin commands. This can be useful when your script performs a lot of them.
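Along the same lines, ksh93 also has a dedicated non-forking command substitution form, ${ command; }, which runs the command in the current shell environment. A minimal sketch under that assumption (check your shell's documentation, since support for this form varies by shell and version):
#!/bin/ksh
hello () {
    echo "Hello"
}
func () {
    typeset Var=${ hello; }   # no subshell: hello runs in the current shell
    echo "${Var/e/E} world"
}
func   # prints "HEllo world"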
Do you have the option of modifying the hello() function? If so, then give it an option to store the result in a variable:
#!/bin/bash
hello() {
    local text="hello"
    if [ ${#1} -ne 0 ]; then
        eval "${1}='${text}'"
    else
        echo "${text}"
    fi
}
func () {
    local var # Scope extends to called functions.
    hello var
    echo "${var} world"
}
And a more compact version of hello():
hello() {
    local text="hello"
    [ ${#1} -ne 0 ] && eval "${1}='${text}'" || echo "${text}"
}
This doesn't literally answer the question, but it is a viable alternate approach for some use cases...
This is sort of a spin-off from @Andrew Vickers, in that you can lean on eval.
Rather than define a function, define what I'll call a "macro" (the C equivalent):
MACRO="local \$var=\"\$val world\""
func()
{
    local var="result"; local val="hello"; eval $MACRO;
    echo $result;
}
- Redirect the function's stdout to the FD of the write end of an "automatic" pipe. Then, after the (non-forking) call, ...
- Read from the FD of the read end of the same pipe.
#!/usr/bin/env bash
# This code prints 'var=2, out=hello', meaning var was set and the stdout got captured
# See: https://stackoverflow.com/questions/7502981/how-to-get-the-output-of-a-shell-function-without-forking-a-sub-shell

main(){
    local -i var=1                     # Set value
    local -i pipe_write=0 pipe_read=0  # Just defensive programming
    create_pipe                        # Get 2 automatic pipe fds, see function below

    # HERE IS THE CALL
    callee >&"$pipe_write"             # Run function, see below

    exec {pipe_write}>&-               # Close fd of the pipe's write end (to make cat return)
    local out=$(cat <&"$pipe_read")    # Grab stdout of callee
    exec {pipe_read}>&-                # Just defensive programming
    echo "var=$var, out=$out"          # Show result
}

callee(){
    var=2       # Set an outer-scope value
    echo hello  # Print some output
}

create_pipe(){
    : 'From: https://superuser.com/questions/184307/bash-create-anonymous-fifo
    Return: pipe_write and pipe_read fds => to outer scope
    '
    exec 2> /dev/null                  # Avoid job control prints like [1] 1030612
    tail -f /dev/null | tail -f /dev/null &
    exec 2>&1

    # Save the process ids
    local -i pid2=$!
    local -i pid1=$(jobs -p %+)

    # Hijack the pipe's file descriptors using procfs
    exec {pipe_write}>/proc/"$pid1"/fd/1
    # -- Read
    exec {pipe_read}</proc/"$pid2"/fd/0

    disown "$pid2"; kill "$pid1" "$pid2"
}
main
Note that the code would be much shorter using an automatic normal fd, as follows:
exec {fd}<> <(:)
instead of the create_pipe function used here (copying this answer). But then the FD-reading line used above, namely:
local out=$(cat <&"$fd")
would block, and it would be necessary to read with a timeout, like the following:
local out=''
while read -r -t 0.001 -u "${fd}" line; do
    out+="$line"$'\n'
done
But I try to avoid arbitrary sleeps or timeouts if possible. Here, closing the FD of the write end of the pipe makes the reading cat line return once all the content has been consumed (a reader sees EOF when every write end of the pipe has been closed).
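Putting those two fragments together, the shorter (but timeout-based) variant would look roughly like this sketch; it relies on read -t, and out keeps a trailing newline here:
#!/usr/bin/env bash
main(){
    local -i var=1
    exec {fd}<> <(:)        # automatic fd, open read-write on an anonymous pipe
    callee >&"$fd"          # run the function in the current shell, stdout goes into the pipe
    local out='' line
    while read -r -t 0.001 -u "$fd" line; do   # timed reads, because EOF never arrives
        out+="$line"$'\n'
    done
    exec {fd}>&-
    echo "var=$var, out=$out"
}
callee(){
    var=2
    echo hello
}
main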