flock locks don't care about threads; in fact, they don't care about processes either. If you use the same file descriptor in two processes (inherited through fork), then whichever process locks the file through that FD acquires the lock on behalf of both. In other words, in the code below, both flock calls return successfully: the child process locks the file, and then the parent acquires the same lock instead of blocking, because it's all the same FD.
```python
import fcntl, os, time

f = open("testfile", "w+")
print("Locking...")
fcntl.flock(f.fileno(), fcntl.LOCK_EX)
print("locked")
fcntl.flock(f.fileno(), fcntl.LOCK_UN)

if os.fork() == 0:
    # We're in the child process, and we have an inherited copy of the fd.
    # Lock the file.
    print("Child process locking...")
    fcntl.flock(f.fileno(), fcntl.LOCK_EX)
    print("Child process locked...")
    time.sleep(1000)
else:
    # We're in the parent. Give the child process a moment to lock the file.
    time.sleep(0.5)
    print("Parent process locking...")
    fcntl.flock(f.fileno(), fcntl.LOCK_EX)
    print("Parent process locked")
    time.sleep(1000)
```
Conversely, if you lock the same file twice through two different file descriptors, the locks block each other, regardless of whether you do it from the same process or the same thread. See flock(2):

> If a process uses open(2) (or similar) to obtain more than one descriptor for the same file, these descriptors are treated independently by flock(). An attempt to lock the file using one of these file descriptors may be denied by a lock that the calling process has already placed via another descriptor.
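A minimal sketch of that flock(2) behavior, assuming a Unix system: the file is opened twice, producing two independent descriptors, and the second exclusive lock is refused even though both descriptors belong to the same process. LOCK_NB is used so the conflicting call fails immediately instead of deadlocking.

```python
import fcntl
import tempfile

tmp = tempfile.NamedTemporaryFile()   # any regular file works
f1 = open(tmp.name, "w")
f2 = open(tmp.name, "w")              # second, independent descriptor

fcntl.flock(f1.fileno(), fcntl.LOCK_EX)   # lock via the first descriptor

try:
    # LOCK_NB: fail immediately rather than block forever in-process.
    fcntl.flock(f2.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
    blocked = False
except BlockingIOError:
    blocked = True

print("second descriptor blocked:", blocked)

fcntl.flock(f1.fileno(), fcntl.LOCK_UN)                   # release first lock...
fcntl.flock(f2.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)   # ...now this succeeds

f1.close()
f2.close()
tmp.close()
```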
It's useful to remember that, to the Linux kernel, processes and threads are essentially the same thing, and kernel-level APIs generally treat them the same way. In most cases, if a syscall documents some parent/child inter-process behavior, the same behavior applies to threads.

Of course, you can (and probably should) test this behavior yourself.
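As a sketch of such a test (my own, not from the original), here is the fork example restated with threads: both threads share one descriptor, so the second flock succeeds instead of blocking, just as the parent and child did above. LOCK_NB guarantees the test fails fast rather than hanging if the kernel did refuse the lock.

```python
import fcntl
import tempfile
import threading

tmp = tempfile.NamedTemporaryFile()
results = []

def grab():
    # Same fd in every caller; LOCK_NB would raise BlockingIOError
    # if the kernel treated the callers as conflicting lock holders.
    fcntl.flock(tmp.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
    results.append("locked")

t = threading.Thread(target=grab)
t.start()
t.join()

grab()  # main thread locks via the same fd: also succeeds

print(results)
tmp.close()
```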