
I see here that testing whether $? is zero (success) or something else (failure) is an anti-pattern, but I have not been able to find this anywhere else.

Sticking to Wikipedia's definition of an anti-pattern: "An anti-pattern (or antipattern) is a common response to a recurring problem that is usually ineffective and risks being highly counterproductive." Why would this be an anti-pattern?



This is an antipattern because it introduces complexity that wouldn't exist if you didn't require the exit status to be recorded at all.

if your_command; then ...

has much less to go wrong than

if [ "$?" -eq 0 ]; then ...

For examples of things that can go wrong, think about traps, or even new echo statements added for debugging, modifying $?. It's not visually obvious to a reader that a separate line running your_command can't have anything added below it without changing the logical flow.

That is:

echo "Finished running your_command" >&2
if [ "$?" -eq 0 ]; then ...

...is checking the exit status of the echo, not of the actual command.
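A minimal sketch that makes the clobbering concrete (the commands here are stand-ins, not from the original question):

false                  # exits with status 1
echo "Finished" >&2    # the echo succeeds, so $? is now 0
status=$?
echo "status=$status"  # prints status=0 -- the failure of false is lost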

Thus, in cases where you really do need to deal with exit status in a manner more granular than immediately branching on whether its value is zero, you should collect it on the same line:

# whitelisting a nonzero value for an example of when "if your_command" won't do.
your_command; your_command_retval=$?
echo "Finished running your_command" >&2 ## now, adding more logging won't break the logic.
case $your_command_retval in
  0|2) echo "your_command exited in an acceptable way" >&2;;
  *)   echo "your_command exited in an unacceptable way" >&2;;
esac

Finally: If you enclose your_command inside of an if statement, this marks it as tested, such that your shell won't consider a nonzero exit status for purposes of set -e or an ERR trap.


set -e
your_command
if [ "$?" -eq 0 ]; then ...

...will never (barring a number of corner cases and caveats which plague set -e's behavior) reach the if statement with any value of $? other than 0, as the set -e will force an exit in that case. By contrast:

set -e
if your_command; then ...

...marks the exit status of your_command as tested, and so does not consider it cause to force the script to exit per set -e.
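A short, runnable sketch of that difference under set -e (using false as a stand-in for a failing your_command):

#!/usr/bin/env bash
set -e
if false; then            # false is "tested" here, so set -e ignores its failure
    echo "won't run"
fi
echo "still alive"        # reached: the script was not forced to exit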

Tuesday, June 1, 2021
answered 7 Months ago

The code can be re-factored as follows:

app.controller('tokenCtrl', function($scope, tokenService) {
    tokenService.getTokens().then(function(tokens) {
        $scope.tokens = tokens;
    });
});

app.factory('tokenService', function($http) {
    var getTokens = function() {
        // return promise
        return $http.get('/api/tokens').then(function onFulfilled(response) {
            // return tokens
            return response.data;
        });
    };

    return {
        getTokens: getTokens
    };
});
By having the service return a promise, and using the .then method of the promise, the same functionality is achieved with the following benefits:

  • The promise can be saved and used for chaining.

  • The promise can be saved and used to avoid repeating the same $http call.

  • Error information is retained and can be retrieved with the .catch method.

  • The promise can be forwarded to other clients.

Tuesday, June 1, 2021

You can use the typeset command to make your functions available on a remote machine via ssh. There are several options depending on how you want to run your remote script.

# Define your function
myfn () {  ls -l; }

To use the function on the remote hosts:

typeset -f myfn | ssh user@host "$(cat); myfn"
typeset -f myfn | ssh user@host2 "$(cat); myfn"

Better yet, why bother with pipe:

ssh user@host "$(typeset -f myfn); myfn"

Or you can use a HEREDOC:

ssh user@host << EOF
    $(typeset -f myfn)
    myfn
EOF

If you want to send all the functions defined within the script, not just myfn, just use typeset -f like so:

ssh user@host "$(typeset -f); myfn"


typeset -f myfn will display the definition of myfn.

In the piped version, cat reads the function definition from its stdin, and the $(...) command substitution splices that text into the remote command line; the remote shell then evaluates it, defining the function, and finally the function is executed.

The last form expands the function definitions inline, on the local side, before ssh runs.
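Without a remote host handy, the same mechanism can be sketched locally by piping into a fresh bash process, which plays the role of the remote shell (the function body here is a placeholder):

myfn () { echo "hello from myfn"; }

# A fresh bash stands in for the remote shell: it receives the function's
# text via command substitution, evaluates it, and can then call it.
bash -c "$(typeset -f myfn); myfn"    # prints: hello from myfn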

Wednesday, June 2, 2021


One reason is that singletons aren't easy to handle in unit tests: you can't control the instantiation, and singletons by their very nature may retain state across invocations.

For that reason the principle of dependency injection is popular. Each class is injected (configured) with the objects it needs to function (rather than deriving them via singleton accessors), so tests can control which dependent class instances to use (and provide mocks if required).

Frameworks such as Spring will control the lifecycle of their objects and often create singletons, but these objects are injected into their dependent objects by the framework. Thus the codebase itself doesn't treat the objects as singletons.

e.g. rather than this (for example)

public class Portfolio {
   private Calculator calc = Calculator.getCalculator();
}

you would inject the calculator:

public class Portfolio {
   private Calculator calc;

   public Portfolio(Calculator c) {
      this.calc = c;
   }
}
Thus the Portfolio object doesn't know or care how many instances of the Calculator exist. Tests can inject a dummy Calculator, which makes testing easy.


By limiting yourself to one instance of an object, the options for threading are limited. Access to the singleton object may have to be guarded (e.g. via synchronisation). If you can maintain multiple instances of those objects, you can tailor the number of instances to the threads you have running, and increase the concurrent capabilities of your codebase.

Wednesday, June 2, 2021

You want

./script 2>&1 1>/dev/null | ./other-script

The order here is important. Let's assume stdin (fd 0), stdout (fd 1) and stderr (fd 2) are all connected to a tty initially, so

0: /dev/tty, 1: /dev/tty, 2: /dev/tty

The first thing that gets set up is the pipe. other-script's stdin gets connected to the pipe, and script's stdout gets connected to the pipe, so script's file descriptors so far look like:

0: /dev/tty, 1: pipe, 2: /dev/tty

Next, the redirections occur, from left to right. 2>&1 makes fd 2 go wherever fd 1 is currently going, which is the pipe.

0: /dev/tty, 1: pipe, 2: pipe

Lastly, 1>/dev/null redirects fd 1 to /dev/null:

0: /dev/tty, 1: /dev/null, 2: pipe

End result, script's stdout is silenced, and its stderr is sent through the pipe, which ends up in other-script's stdin.
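The whole sequence can be verified with a small self-contained demo, where the emit function is a stand-in for ./script and tr plays the role of ./other-script:

emit () {
    echo "to stdout"         # ends up in /dev/null
    echo "to stderr" >&2     # ends up in the pipe
}
emit 2>&1 1>/dev/null | tr '[:lower:]' '[:upper:]'   # prints: TO STDERR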


Also note that 1>/dev/null is synonymous with, but more explicit than, >/dev/null.

Tuesday, July 27, 2021