YenForYang - 3 years ago
Perl Question

Limiting the number of simultaneous instances of a program executed within a Perl script (to >1)

I'm using a resource-intensive program, rclone, in a Perl script [specifically, to transfer files to Google Drive].

I have yet to figure out how I want to call rclone, as I need to limit the number of instances of rclone based on some condition (anything relevant to preventing server overload, freezing, crashing, etc.). I would like the script to wait for appropriate system "conditions" (possibly for a long or indefinite amount of time) before it executes rclone.

Some details:

  • The script itself is essentially passed a file or directory path containing (possibly numerous) files by another program (this program is written in Python; call it A for reference).

  • A only returns a value to the script and thus knows nothing about the script or rclone, other than that the script accepts input.

  • A cannot be altered (i.e. changing A is beyond my ken).

  • A fires at varying intervals [i.e. sometimes it will execute the script many times in rapid succession (creating multiple instances); other times it might only fire once every few hours or minutes].

  • Assume that rclone can't be altered directly either (i.e. again, beyond my ken).

  • If absolutely necessary, the number of instances of the script can be limited instead of rclone (though I'd prefer that only rclone be limited, as the processing done by the script itself is rather light and needs no limitation).

  • Modules are fine to use.

  • I would like to avoid Unix-specific operating system commands like pgrep (unless absolutely necessary).

Currently, I'm using a rather poorly written bash script in place of the Perl script. The bash script implements a rudimentary (poorly designed) "check/sleep loop" using pgrep -wc and sleep statements in a loop. (To be honest, I don't even think the bash script really works/helps at the moment.)

Answer Source

I'll assume for a moment that your script is the only thing running rclone. If you wanted only 1 copy running, you would just use a lockfile.
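For the single-copy case, a minimal sketch in Perl using flock — the lock path /tmp/rclone.lock is an assumption, as is invoking rclone via the list form of system:

```perl
use strict;
use warnings;
use Fcntl qw(LOCK_EX);

my $lockfile = '/tmp/rclone.lock';                 # assumed lock path

open my $fh, '>', $lockfile or die "Cannot open $lockfile: $!";
flock $fh, LOCK_EX or die "Cannot lock $lockfile: $!";  # blocks until the lock is free

system 'rclone', @ARGV;                            # only one instance runs at a time

close $fh;                                         # closing the handle releases the lock
```

The lock is also released automatically if the script dies while holding it, which is the main advantage of flock over hand-rolled PID files.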

For N instances (for small N), I would just have N lockfiles: have the program try each lock in turn; if all the locks are already held, pause and retry a second later, in a loop. Once it has a lock, run rclone, then release the lock when it is done.
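The N-lockfile loop described above might be sketched as follows — the limit of 4 and the /tmp lock paths are assumptions:

```perl
use strict;
use warnings;
use Fcntl qw(LOCK_EX LOCK_NB);

my $max   = 4;                                      # assumed limit N
my @paths = map { "/tmp/rclone.$_.lock" } 1 .. $max;

my $fh;
OUTER: while (1) {
    for my $path (@paths) {
        open my $try, '>', $path or die "Cannot open $path: $!";
        if (flock $try, LOCK_EX | LOCK_NB) {        # non-blocking attempt
            $fh = $try;                             # keep the handle open to hold the lock
            last OUTER;
        }
        close $try;                                 # this slot is taken; try the next
    }
    sleep 1;                                        # all N locks held; retry later
}

system 'rclone', @ARGV;
close $fh;                                          # releasing the handle frees the slot
```

Each concurrent caller ends up holding a different lockfile, so at most N copies of rclone run at once.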

A sounder approach would be to use SysV semaphores, but unless you want a large N, really care about response times, or are worried about fairness between callers, it is probably not worth the time learning them.
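A hedged sketch of the semaphore approach, using the core IPC::SysV and IPC::Semaphore modules — the fixed key and the limit of 4 are assumptions, and the initialization check is racy (illustrative only):

```perl
use strict;
use warnings;
use IPC::SysV qw(IPC_CREAT S_IRUSR S_IWUSR SEM_UNDO);
use IPC::Semaphore;

my $max = 4;                                        # assumed limit N
my $key = 0x52434C4E;                               # arbitrary fixed key shared by all callers

my $sem = IPC::Semaphore->new($key, 1, S_IRUSR | S_IWUSR | IPC_CREAT)
    or die "Cannot get semaphore: $!";
$sem->setval(0, $max) unless $sem->getval(0);       # initialize once (racy; fine for a sketch)

$sem->op(0, -1, SEM_UNDO);                          # blocks until a slot is free
system 'rclone', @ARGV;
$sem->op(0, 1, SEM_UNDO);                           # release the slot
```

SEM_UNDO makes the kernel return the slot if the process dies while holding it, which is the fairness and robustness win over lockfiles.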

If your script is not the only program calling rclone, then you would need to intercept all calls: instead of putting this code in your program, you could replace rclone with a wrapper that implements the parallelism constraint as above and then calls the real program.
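Such a wrapper might look like the following, assuming the real binary has been renamed to a hypothetical /usr/local/bin/rclone.real and this script is installed as rclone earlier in PATH:

```perl
#!/usr/bin/perl
# Hypothetical wrapper installed as "rclone" ahead of the real binary in PATH.
use strict;
use warnings;
use Fcntl qw(LOCK_EX);

my $real = '/usr/local/bin/rclone.real';            # assumed location of the renamed binary

open my $fh, '>', '/tmp/rclone.lock' or die "Cannot open lock: $!";
flock $fh, LOCK_EX or die "Cannot lock: $!";        # wait here until no other copy runs

my $status = system $real, @ARGV;                   # hold the lock while the real rclone runs
close $fh;                                          # release the lock
# To propagate the child's exit code to the caller: exit $status >> 8;
```

Every caller, whatever language it is written in, then goes through the same lock without knowing the wrapper exists.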
