I'm on Linux with NFS, and multiple machines are involved.
I'm trying to use fcntl to implement file locking. I had been using flock until I discovered that it only works between processes on the same machine.
Now, when I call fcntl with F_SETLKW, Perl alarms (used to add a timeout) no longer work as they did before. That would normally be fine, but Ctrl-C doesn't work either.
What I believe is happening is that fcntl is only checking for signals every 30 seconds or so. The alarm does eventually fire. Ctrl-C is caught... eventually.
Is there anything I can do to adjust how often fcntl checks for these signals?
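Here's a stripped-down version of what I'm doing; the lock-file path is made up, and the hand-packed struct flock layout is an assumption for Linux x86_64 (it isn't portable):

use strict;
use warnings;
use Fcntl qw(F_WRLCK F_SETLKW SEEK_SET);

open my $fh, '>>', '/mnt/nfs/app.lock' or die "open: $!";

# struct flock packed by hand: l_type, l_whence, l_start, l_len, l_pid.
# Field layout/padding assumed for Linux x86_64; not portable.
my $flock = pack('s s x4 q q l x4', F_WRLCK, SEEK_SET, 0, 0, 0);

my $got_lock = eval {
    local $SIG{ALRM} = sub { die "timeout\n" };
    alarm(10);    # expect F_SETLKW to be interrupted after 10 seconds...
    my $ok = fcntl($fh, F_SETLKW, $flock);
    alarm(0);
    $ok;
};
alarm(0);
# ...but in practice the alarm (and Ctrl-C) only takes effect ~30 seconds later.
die $@ if $@ && $@ ne "timeout\n";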
I'm definitely no expert on the matter, but my understanding is that fcntl, as you also stated, won't work in your case: fcntl advisory locks only make sense within the same machine.
So forgive me if this is off-topic. I used File::NFSLock to solve a cache storm/dogpile/stampede problem. There were multiple application servers reading and writing cache files on an NFS volume (not a very good idea, but that's what we had to start with).
I subclassed/wrapped File::NFSLock to modify its behavior. In particular, I needed the lock identifier to be machine:pid instead of just pid. This has worked wonderfully for a couple of years.
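For reference, basic usage of the stock module looks roughly like this (the path and timeouts are made up for illustration; the machine:pid change lives in our subclass and isn't shown):

use File::NFSLock;
use Fcntl qw(LOCK_EX O_WRONLY O_CREAT);

my $file = '/mnt/nfs/cache/page.html';    # illustrative path

if (my $lock = File::NFSLock->new({
        file               => $file,
        lock_type          => LOCK_EX,
        blocking_timeout   => 10,         # give up after 10 seconds
        stale_lock_timeout => 30 * 60,    # steal locks older than 30 minutes
    })) {
    # The lock file sits next to $file on the NFS volume; the lock is
    # released when $lock goes out of scope.
    sysopen(my $fh, $file, O_WRONLY | O_CREAT) or die "open: $!";
    print $fh "... cache file contents ...\n";
    close $fh;
} else {
    warn "could not lock $file: $File::NFSLock::errstr\n";
}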
That is, until request volume increased 10x. Last month I started to experience the first problems: a really busy cache file being written to by two backends at the same time, leaving dead locks behind. This happened when we reached around 9-10M overall pageviews per day, just to give you an idea.
The final broken cache file looked like:
<!-- START OF CACHE FILE BY BACKEND b1 -->
... cache file contents ...
<!-- END OF CACHE FILE BY BACKEND b1 -->
... more cache file contents ... wtf ...
<!-- END OF CACHE FILE BY BACKEND b2 -->
This can only happen if two backends write to the same file at the same time... It's not yet clear whether the problem is caused by File::NFSLock plus our modifications, or by some bug in the application.
In conclusion, if your app is not terribly busy or heavily trafficked, then go for File::NFSLock; I think it's your best bet. Are you sure you still want to use NFS?