
How to lock file in a multi-user file management system

https://www.devze.com 2023-03-06 14:27 Source: Web
I have a program (a copy is deployed to each user's computer) that lets users store files on a centralized file server with compression (CAB files).

When adding a file, the user needs to extract the archive onto his own disk, add the file, and compress it back onto the server. So if two users process the same compressed file at the same time, the later upload will replace the earlier one and cause data loss.

My strategy to prevent this: before the user extracts the compressed file, the program checks whether a specific temp file exists on the server. If not, the program creates that temp file to ward off interference from other users, and deletes it after uploading; if the temp file already exists, the program waits until it is deleted.

Is there a better way of doing this? And will frequently creating and deleting empty files damage the disk?


And will frequently creating and deleting empty files damage the disk?

No. If you're using a solid-state disk, there's a theoretical limit on the number of writes that can be performed (an inherent limitation of flash memory). However, you're incredibly unlikely to ever reach that limit.

Is there a better way of doing this?

Well, I would go about this differently:

Write a Windows Service that handles all disk access, and have your client apps talk to the service. When a client needs to retrieve a file, it opens a socket connection to the service, requests the file, and either keeps it in memory or saves it to the local disk. The client performs any modifications on its local copy (decompress, add/remove/update files, recompress, etc.), and, when the operation is complete and it is ready to save (or commit, in source-control lingo) the changes, it opens another socket connection to the service app (running on the server) and sends it the new file contents as a binary stream.

The service app would then handle loading and saving the files to disk. This gives you a lot of additional capabilities, as well - the server can keep track of past versions (perhaps even committing each version to svn or another source control system), provide metadata such as what the latest version is, etc.

Now that I'm thinking about it, you may be better off just integrating an svn interface into your app. SharpSVN is a good library for this.


Creating temporary files to flag the lock is a viable and widely used option (and no, this won't damage the disk). Another option is to open the compressed file exclusively (or allow other processes to read, but not write, the file) and keep the file open while the user works with its contents.


Is there better way of doing this?

Yes. From what you've written here, it sounds like you are well on your way towards re-inventing revision control. Perhaps you could use some off-the-shelf version control system? Or perhaps at least re-use some code from such systems? Or perhaps you could at least learn a little about the problems those systems faced, how fixing the obvious problems led to non-obvious problems, and attempt to make a system that works at least as well?

My understanding is that version control systems went through several stages (see "Edit Conflict Resolution" on the original wiki, the Portland Pattern Repository). In roughly chronological order:

  1. The master version is stored on the server. Last-to-save wins, leading to mysterious data loss with no warning.
  2. The master version is stored on the server. When I pull a copy to my machine, the system creates a lock file on the server. When I push my changes to the server (or cancel), the system deletes that lock file. No one can change those files on the server, so we've fixed the "mysterious data loss" problem, but we have endless frustration when I need to edit some file that someone else checked out just before leaving on a long vacation.
  3. The master version is stored on the server. First-to-save wins ("optimistic locking"). When I pull the latest version from the server, it includes some kind of version-number. When I later push my edits to the server, if the version-number I pulled doesn't match the current version on the server, someone else has cut in first and changed things ahead of me, and the system gives some sort of polite message telling me about it. Ideally I pull the latest version from the server and carefully merge it with my version, and then push the merged version to the server, and everything is wonderful. Alas, all too often, an impatient person pulls the latest version, overwrites it with "his" version, and pushes "his" version, leading to data loss.
  4. Every version is stored on the server, in an unbroken chain. (Centralized version control like TortoiseSVN is like this).
  5. Every version is stored in every local working directory; sometimes the chain forks into 2 chains; sometimes two chains merge back into one chain. (Distributed version control tools like TortoiseHg are like this).

So it sounds like you're doing what everyone else did when they moved from stage 1 to stage 2. I suppose you could slowly work your way through every stage. Or maybe you could jump to stage 4 or 5 and save everyone time?
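Stage 3 ("optimistic locking") is simple enough to sketch in a few lines. The server tags each copy it hands out with a version number and rejects any push whose base version is stale. This is a hypothetical Python illustration; `SERVER`, `pull`, and `push` are made-up stand-ins for a real server API:

```python
# In-memory stand-in for the master copy on the server.
SERVER = {"version": 0, "data": b"initial archive contents"}

def pull():
    """Fetch the current contents together with their version number."""
    return SERVER["version"], SERVER["data"]

def push(base_version, new_data):
    """Accept the write only if nobody has committed since our pull."""
    if base_version != SERVER["version"]:
        return False               # stale: caller must pull, merge, retry
    SERVER["version"] += 1
    SERVER["data"] = new_data
    return True
```

The second writer gets a rejection instead of silently destroying the first writer's work, which is exactly the failure mode the question describes; the polite-message-and-merge step is then up to the client.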


Take a look at the FileStream.Lock method. Quoting from MSDN:

Prevents other processes from reading from or writing to the FileStream.
...

Locking a range of a file stream gives the threads of the locking process exclusive access to that range of the file stream.
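`FileStream.Lock` is .NET-specific, but the same byte-range idea exists elsewhere: POSIX record locks via `fcntl`, and `msvcrt.locking` on Windows. As a rough POSIX sketch in Python (the filename and `update_header` are placeholders, and the exact range you lock would depend on your file format):

```python
import fcntl

def update_header(path, header: bytes):
    """Hold an exclusive lock on the first 100 bytes while rewriting them."""
    with open(path, "r+b") as f:
        fcntl.lockf(f, fcntl.LOCK_EX, 100, 0)      # lock bytes 0..99
        try:
            f.seek(0)
            f.write(header)
            f.flush()
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN, 100, 0)  # release the range
```

Note that, like `FileStream.Lock`, these are advisory between cooperating processes on POSIX: every writer must take the lock for it to protect anything.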

