Tha Brad · 3 years ago
Bash Question

STDOUT & STDERR from previous Command as Arguments for next Command

Somehow I can't find a sufficient answer to my problem, only partial workarounds.

I'm calling a single "chained" shell command (from a Node app) that starts a long-running update process, whose stdout and stderr should be handed over, as arguments, to the second part of the shell command (another Node app that logs into a DB).

I'd like to do something like this:

updateCommand 2>$err 1>$out ; logDBCommand --log arg err out

  • Can't use
    as it only works with files or file descriptors.

  • Also if I use shell variables (like
    error=$( { updateCommand | sed 's/Output/tmp/'; } 2>&1 ); logDBCommand --log arg "${error}"
    ), I can only capture stdout, or both streams mixed into one argument.

  • And I don't want to pipe, as the second command (logDBCommand) should run regardless of whether the first one succeeded or failed.

  • And I don't want to cache to a file on disk, because honestly that misses the point and introduces another asynchronous error vector.

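To illustrate the second point above: plain command substitution with 2>&1 merges both streams into a single variable, so they can no longer be passed as separate arguments. A minimal sketch (the updateCommand function here is a stand-in, not the real updater):

```shell
# Stand-in for the real updateCommand (assumption): writes to both streams.
updateCommand() {
  echo "progress: ok"          # goes to stdout
  echo "warning: disk" >&2     # goes to stderr
}

# 2>&1 inside command substitution folds stderr into stdout:
mixed=$( updateCommand 2>&1 )
echo "$mixed"                  # both lines, indistinguishable by origin
```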

Answer Source

After a little chat in #!/bin/bash someone suggested simply making use of tmpfs (a file system held in RAM), which is the second most elegant (but the only workable) way to do this. That way I can use the > operator and end up with stdout and stderr in separate variables in memory.

command1 >/dev/shm/c1stdout 2>/dev/shm/c1stderr
A=$(cat /dev/shm/c1stdout)
B=$(cat /dev/shm/c1stderr)
command2 "$A" "$B"
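A slightly more robust version of the same idea, assuming a Linux system where /dev/shm is tmpfs; the command1/command2 functions below are illustrative stand-ins for the real updater and logger, and mktemp plus a trap avoid fixed filenames and leftover files:

```shell
# Stand-ins (assumptions) for the real long-running updater and the DB logger.
command1() { echo "updated 3 records"; echo "one warning" >&2; }
command2() { printf 'stdout=%s stderr=%s\n' "$1" "$2"; }

# Unique temp files on tmpfs (RAM-backed on Linux), cleaned up on exit.
out=$(mktemp /dev/shm/c1stdout.XXXXXX)
err=$(mktemp /dev/shm/c1stderr.XXXXXX)
trap 'rm -f "$out" "$err"' EXIT

command1 >"$out" 2>"$err"   # runs to completion whether it succeeds or fails
A=$(cat "$out")
B=$(cat "$err")
command2 "$A" "$B"          # quoting keeps each stream a single argument
```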

(or shorter):

A=$(command1 2>/dev/shm/c1stderr)
B=$(cat /dev/shm/c1stderr)
command2 "$A" "$B"
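The shorter variant, runnable end to end with a stand-in for command1 (an assumption, since the real command isn't shown): stdout is captured directly by command substitution while only stderr takes the tmpfs detour.

```shell
# Stand-in (assumption) for the real command1: writes to both streams.
command1() { echo "out line"; echo "err line" >&2; }

A=$(command1 2>/dev/shm/c1stderr)   # capture stdout; stderr goes to tmpfs
B=$(cat /dev/shm/c1stderr)
rm -f /dev/shm/c1stderr             # tidy up the RAM-backed file
echo "A=$A B=$B"
```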