I recently had a Linux process that "leaked" file descriptors: it opened them and didn't properly close some of them.
If I had monitored this, I could have told, in advance, that the process was reaching its limit.
Is there a nice Bash/Python way to check the FD usage ratio for a given process on an Ubuntu Linux system?
I now know how to check how many open file descriptors there are; I only need to know how many file descriptors a process is allowed. Some systems (like Amazon EC2) don't have the /proc/<pid>/limits file.
Count the entries in /proc/<pid>/fd/. The hard and soft limits that apply to the process can be found in /proc/<pid>/limits, in the "Max open files" row.
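Putting the two together, here is a minimal Python sketch of the ratio check. It assumes a Linux /proc filesystem where both /proc/&lt;pid&gt;/fd/ and /proc/&lt;pid&gt;/limits exist (as noted above, some older systems lack the limits file); the function name `fd_usage_ratio` is just an illustration:

```python
#!/usr/bin/env python3
"""Report the file-descriptor usage ratio for a given PID."""
import os
import sys


def fd_usage_ratio(pid):
    # Each open descriptor appears as one symlink in /proc/<pid>/fd/.
    open_fds = len(os.listdir("/proc/%d/fd" % pid))

    # Parse the soft limit from the "Max open files" row of
    # /proc/<pid>/limits (columns: name, soft limit, hard limit, units).
    soft_limit = None
    with open("/proc/%d/limits" % pid) as f:
        for line in f:
            if line.startswith("Max open files"):
                soft_limit = int(line.split()[3])
                break
    if soft_limit is None:
        raise RuntimeError("'Max open files' row not found for PID %d" % pid)

    return open_fds, soft_limit, open_fds / float(soft_limit)


if __name__ == "__main__":
    # Default to our own PID so the script can be tried without arguments.
    pid = int(sys.argv[1]) if len(sys.argv) > 1 else os.getpid()
    used, limit, ratio = fd_usage_ratio(pid)
    print("%d/%d file descriptors in use (%.1f%%)" % (used, limit, ratio * 100))
```

Run periodically (e.g. from cron), this lets you alert when the ratio crosses a threshold, well before the process hits EMFILE. Note that reading /proc/&lt;pid&gt;/fd/ for a process you don't own requires appropriate privileges.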