Python: Lock a file

I have a Python app running on Linux. It is called every minute from cron. It checks a directory for files and, if it finds one, processes it - this can take several minutes. I don't want the next cron job to pick up the file currently being processed, so I lock it using the code below, which calls portalocker. The problem is that it doesn't seem to work: the next cron job manages to get a file handle returned for the file already being processed.

import sys
import portalocker

def open_and_lock(full_filename):
    file_handle = open(full_filename, 'r')
    try:
        # LOCK_NB: fail immediately with IOError instead of blocking
        portalocker.lock(file_handle, portalocker.LOCK_EX
                            | portalocker.LOCK_NB)
        return file_handle
    except IOError:
        sys.exit(-1)

Any ideas what I can do to lock the file so no other process can get it?

UPDATE

Thanks to @Winston Ewert, I checked through the code and found the file handle was being closed well before the processing had finished. It seems to be working now, except the second process blocks on portalocker.lock rather than throwing an exception.


After fumbling with many schemes, this works in my case. I have a script that may be executed multiple times simultaneously. I need these instances to wait their turn to read/write to some files. The lock file never needs to be deleted, so a script that dies before cleaning up cannot block all further access (the lock is released when its process exits).

import fcntl

def acquireLock():
    ''' acquire exclusive lock file access '''
    locked_file_descriptor = open('lockfile.LOCK', 'w+')
    fcntl.lockf(locked_file_descriptor, fcntl.LOCK_EX)
    return locked_file_descriptor

def releaseLock(locked_file_descriptor):
    ''' release exclusive lock file access '''
    locked_file_descriptor.close()  # closing the file releases the lock

lock_fd = acquireLock()

# ... do stuff with exclusive access to your file(s)

releaseLock(lock_fd)
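
To guarantee the release happens even if the work in between raises, the same pattern fits naturally into a context manager. A minimal sketch along those lines:

import fcntl
from contextlib import contextmanager

@contextmanager
def file_lock(lock_path='lockfile.LOCK'):
    ''' hold an exclusive lock for the duration of the with-block '''
    fd = open(lock_path, 'w+')
    fcntl.lockf(fd, fcntl.LOCK_EX)  # blocks until the lock is free
    try:
        yield fd
    finally:
        fd.close()  # closing the file releases the lock

with file_lock():
    pass  # ... do stuff with exclusive access to your file(s)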


You're using the LOCK_NB flag which means that the call is non-blocking and will just return immediately on failure. That is presumably happening in the second process. The reason why it is still able to read the file is that portalocker ultimately uses flock(2) locks, and, as mentioned in the flock(2) man page:

flock(2) places advisory locks only; given suitable permissions on a file, a process is free to ignore the use of flock(2) and perform I/O on the file.

To fix it you could use the fcntl.flock function directly (portalocker is just a thin wrapper around it on Linux) and check whether the call raises to see if the lock succeeded.
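
In Python a failed non-blocking flock surfaces as an OSError rather than a return code, so the check looks roughly like this (a minimal sketch):

import fcntl

def try_lock(file_handle):
    ''' return True if we got the exclusive lock, False if another
        process already holds it '''
    try:
        fcntl.flock(file_handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return True
    except OSError:  # a held lock raises here, no return value to check
        return False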


Don't use cron for this. Linux has inotify, which can notify applications when a filesystem event occurs. There is a Python binding for inotify called pyinotify.

Thus, you don't need to lock the file -- you just need to react to IN_CLOSE_WRITE events (i.e. when a file opened for writing was closed). (You also won't need to spawn a new process every minute.)
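
A minimal pyinotify sketch along those lines (the watched directory is a placeholder; substitute your own processing call):

import pyinotify

class Handler(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        # fires once a file opened for writing has been closed,
        # i.e. the writer is finished and the file is safe to process
        print('processing', event.pathname)  # call your processing code here

wm = pyinotify.WatchManager()
wm.add_watch('/path/to/incoming', pyinotify.IN_CLOSE_WRITE)
pyinotify.Notifier(wm, Handler()).loop()  # blocks, dispatching events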

An alternative to using pyinotify is incron which allows you to write an incrontab (very much in the same style as a crontab), to interact with the inotify system.
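
For instance, an incrontab entry along these lines runs a script whenever a file in the watched directory is closed after writing; the directory and script paths are placeholders ($@ expands to the watched path and $# to the file name):

/path/to/incoming IN_CLOSE_WRITE /usr/local/bin/process.py $@/$#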


What about manually creating an old-fashioned .lock file next to the file you want to lock?

Just check whether it's there: if not, create it; if it is, exit prematurely. After finishing, delete it.
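
One caveat: a separate check-then-create is racy, since two processes can both see the file as missing and both create it. Creating the lock file atomically with O_CREAT | O_EXCL closes that window; a minimal sketch (the lock file name is a placeholder):

import os
import sys

LOCK_PATH = 'processing.lock'  # hypothetical lock file name

try:
    # O_CREAT | O_EXCL creates the file atomically and fails if it
    # already exists, so two processes can never both pass the check
    fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
except FileExistsError:
    sys.exit(0)  # another instance is already running; exit prematurely

try:
    pass  # ... do the actual work ...
finally:
    os.close(fd)
    os.remove(LOCK_PATH)  # after finishing, delete it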


I think fcntl.lockf is what you are looking for.
