This is a UNIX-focused follow-up to my previous question here.
I was wondering whether a file descriptor opened by a process could safely be used in forked processes.
I've run a few tests with several hundred processes running at the same time, all writing continuously to the same file descriptor (a simplified sketch of the test is shown after the list below). I found out that:
- when fwrite() calls are up to 8192 bytes, all calls are perfectly serialized and the file is OK.
- when fwrite() calls are more than 8192 bytes, the string is split into 8192-byte chunks that get written to the file in random order, and the file ends up corrupted.
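For reference, the test looks roughly like this (a simplified sketch, not my exact code; the file name, process count, and write size are placeholders):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define NPROC   200      /* number of writer processes (placeholder) */
#define CHUNK   16384    /* bytes per fwrite() call (placeholder)    */
#define NWRITES 100      /* writes per process (placeholder)         */

int main(void)
{
    /* The FILE* (and its underlying descriptor) is opened once in the
     * parent and inherited by every child through fork(). */
    FILE *out = fopen("shared.log", "w");
    if (out == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }

    for (int i = 0; i < NPROC; i++) {
        pid_t pid = fork();
        if (pid == 0) {                 /* child: write continuously */
            char buf[CHUNK];
            memset(buf, 'A' + (i % 26), sizeof buf);
            for (int j = 0; j < NWRITES; j++) {
                fwrite(buf, 1, sizeof buf, out);
                fflush(out);            /* push the stdio buffer to the fd */
            }
            _exit(EXIT_SUCCESS);
        }
    }

    while (wait(NULL) > 0)              /* parent: reap all children */
        ;
    fclose(out);
    return EXIT_SUCCESS;
}
```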
I tried to use flock(), without success, as every process tries to lock/unlock the same file descriptor, which does not make sense. The outcome is the same.
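The flock() attempt amounted to something like this (again a simplified sketch; locked_write() is a hypothetical helper standing in for my actual write loop):

```c
#include <stdio.h>
#include <sys/file.h>

/* Hypothetical helper: take an exclusive lock around each write.
 * Note that flock() locks are attached to the open file description,
 * and after fork() every child shares that same open file description,
 * so they all effectively hold the same lock. */
void locked_write(FILE *out, const char *buf, size_t len)
{
    int fd = fileno(out);       /* underlying descriptor of the FILE* */

    flock(fd, LOCK_EX);         /* acquire exclusive lock */
    fwrite(buf, 1, len, out);
    fflush(out);
    flock(fd, LOCK_UN);         /* release */
}
```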
Is there a way to safely share the file descriptor between all the processes and get all fwrite() calls properly serialized?