Is it possible to set the maximum number of open files to some "infinite" value or must it be a number?
I had a requirement to set the descriptor limit for a daemon user to "unlimited" and I'm trying to determine whether that's possible, and if so, how to do it. I've seen some mailing lists refer to a "max" value that can be used (as in "myuser hard nofile max"), but so far the man pages and references I've consulted don't back that up.
If I can't use 'max' or similar, I'd like to know how to determine what the max number of files is (theoretically) so I have some basis for whatever number I pick. I don't want to use 100000000 or something if there's a more reasonable way to get an upper bound.
I'm using RHEL 5, if it matters.
Update: I'm an idiot when it comes to writing questions. Ideally I'd like to do this in the limits.conf file (which is where "max" would come from). Does that change any answers?
POSIX allows you to set the RLIMIT_NOFILE resource limit to RLIM_INFINITY using setrlimit(). What this means is that the system will not enforce this resource limit. Of course, you will still be limited by the implementation (e.g. MAXINT) and by any other resource limitations (e.g. available memory).
Update: RHEL 5 has a maximum value of 1048576 (2^20) for this limit (see /usr/include/linux/fs.h), and it will not accept any larger value, including infinity, even for root. So on RHEL 5 you can use this value in /etc/security/limits.conf, and that is as close as you are going to get to infinity.
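For illustration, the corresponding limits.conf entries might look like this (the user name "myuser" is a placeholder taken from the question):

```
# /etc/security/limits.conf
# Raise the open-file limit for the daemon user to the RHEL 5 maximum.
# Values above 1048576 (and keywords like "unlimited") are rejected
# for the nofile item on this kernel.
myuser  soft  nofile  1048576
myuser  hard  nofile  1048576
```

The change takes effect at the user's next login session, not for already-running processes.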
Not long ago, a Linux kernel patch was applied to allow this limit to be set to infinity; however, it has since been reverted because of unintended consequences.