On Linux, you can open a lot of files (thousands). You can limit the number of opened file descriptors in a single process with the `setrlimit(2)` syscall and the `ulimit` shell builtin, and query them with the `getrlimit(2)` syscall or through `/proc/self/limits` (or `/proc/1234/limits` for the process of pid 1234). The system-wide maximum number of opened files is given by `/proc/sys/fs/file-max` (on my system, it is 1623114).
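For instance, a minimal sketch in C (assuming Linux; error handling kept short) that queries `RLIMIT_NOFILE` with `getrlimit(2)` and raises the soft limit up to the hard one with `setrlimit(2)` could look like:

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft limit: %llu, hard limit: %llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);
    rl.rlim_cur = rl.rlim_max;   /* raise the soft limit up to the hard one */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");
    return 0;
}
```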
So on Linux you often don't need to bother, and can open many files at once.
Still, I would suggest maintaining a memoized cache of opened files, and reusing them when possible (with an MRU-style policy). Don't open and close each file too often; close one only when some limit has been reached (e.g. when an `open` did fail).
In other words, you could have your own file abstraction (or just a `struct`) which knows the file name, may have an opened `FILE*` (or a null pointer), and keeps the current offset, maybe also the last time of opening or writing; then manage a collection of such things in a FIFO discipline (for those having an opened `FILE*`). You certainly want to avoid `close`-ing (and later re-`open`-ing) a file descriptor too often.
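Here is a rough sketch of such an abstraction in C. The names (`my_file`, `my_add_file`, `my_get_file`) and both limits are hypothetical; a real implementation would add error handling and perhaps a hash table to find files by name:

```c
#include <stdio.h>
#include <string.h>
#include <time.h>

#define MAXFILES  1024  /* hypothetical: total number of files we track */
#define MAXOPENED   64  /* hypothetical: how many FILE*s we keep open */

struct my_file {
    char name[256];      /* file name */
    FILE *fil;           /* opened stream, or NULL while closed */
    long offset;         /* current offset, restored on re-open */
    time_t lastuse;      /* last time of opening or writing */
};

static struct my_file files[MAXFILES];
static int nfiles;

/* FIFO queue of indexes into files[] whose stream is currently open. */
static int opened[MAXOPENED];
static int nopened;

/* Register a file name once; returns its index into files[]. */
static int my_add_file(const char *name)
{
    strncpy(files[nfiles].name, name, sizeof files[nfiles].name - 1);
    return nfiles++;
}

/* Ensure files[i] has an opened FILE*, closing the oldest one if needed. */
static FILE *my_get_file(int i)
{
    struct my_file *f = &files[i];
    time_t now = time(NULL);
    if (f->fil) {                        /* already opened: reuse it */
        f->lastuse = now;
        return f->fil;
    }
    if (nopened == MAXOPENED) {          /* too many opened: evict the oldest */
        struct my_file *old = &files[opened[0]];
        old->offset = ftell(old->fil);   /* remember where we were */
        fclose(old->fil);
        old->fil = NULL;
        nopened--;
        memmove(opened, opened + 1, nopened * sizeof opened[0]);
    }
    f->fil = fopen(f->name, "r+");       /* assumes the file already exists */
    if (!f->fil)
        return NULL;                     /* caller might evict more and retry */
    fseek(f->fil, f->offset, SEEK_SET);  /* restore the saved offset */
    f->lastuse = now;
    opened[nopened++] = i;
    return f->fil;
}
```

With this, callers always go through `my_get_file` and never keep a `FILE*` across calls, so an eviction can never invalidate a stream someone is still using.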
You might occasionally (e.g. once every few minutes) call `sync(2)`, but don't call it too often (certainly not more than once per 10 seconds). If using buffered `FILE`-s, don't forget to sometimes `fflush` them. Again, don't do that very often.
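A simple way to rate-limit that is a helper called from your main loop, sketched below (the one-minute period is an arbitrary choice; `fflush(NULL)` flushes every opened output stream, which is standard C behavior):

```c
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Flush all stdio output buffers and ask the kernel to commit its
   buffers to disk, but at most once per minute. */
static void maybe_flush_all(void)
{
    static time_t last;
    time_t now = time(NULL);
    if (now - last < 60)    /* rate limit: no more than once per minute */
        return;
    last = now;
    fflush(NULL);           /* flush every opened output stream */
    sync();                 /* sync(2): commit kernel buffers to disk */
}
```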