
Java: how to handle two process trying to modify the same file [duplicate]

https://www.devze.com 2023-03-12 07:36 Source: web
This question already has answers here: Closed 11 years ago.

Possible Duplicate:

How can I lock a file using java (if possible)

I have two processes that invoke two Java programs, both modifying the same text file. I noticed the text file is missing data. I suspect that when one Java program obtains the write stream to the file, it blocks the other program from modifying it (much like you can't delete a file while it's open). Is there a way to work around this other than a database? (Not to say a DB solution isn't clean or elegant; we've just written a lot of code that manipulates this text file.)

EDIT

It turns out that I misdiagnosed the problem. The reason data is missing from my text file is:

ProcessA: keeps appending rows of data to the text file.

ProcessB: at startup, loads all the rows of the text file into a List, then manipulates the contents of that list. At the end, ProcessB writes the list back out, replacing the contents of the text file.

This works great sequentially. But when they run together, if ProcessA adds data to the file while ProcessB is manipulating the List, then when ProcessB writes the List back out, whatever ProcessA just added gets overwritten. So my initial thought was: before ProcessB writes the List back out, sync the data between the text file and the List, so that when I write the List back out it contains everything. Here is my attempt:

public void synchronizeFile(){
    File file = new File("path/to/file/that/both/A/and/B/write/to");
    try (FileChannel channel = new RandomAccessFile(file, "rw").getChannel();
         FileLock lock = channel.lock()) { // Lock the file; blocks until the lock is acquired
        List<PackageLog> tempList = readAllLogs(file);
        if(tempList.size() > logList.size()){
            // Data is in a corrupted state; synchronize the lists.
            for(PackageLog pl : tempList){
                if(!pl.equals(lookUp(pl.getPackageLabel().getPackageId(),
                                     pl.getPackageLabel().getTransactionId()))){
                    logList.add(pl);
                }
            }
        }
    } catch (IOException e) { // lock and channel are released by try-with-resources
        logger.error("IOException: ", e);
    }
}

So logList is the current List that ProcessB wants to write out. Before the write-out, I read the file and store its data into tempList; if tempList and logList are not the same, I sync them. The problem is that at this point both ProcessA and ProcessB are accessing the file, so when I try to lock the file and read from it (List<PackageLog> tempList = readAllLogs(file);), I get either OverlappingFileLockException or java.io.IOException: The process cannot access the file because another process has locked a portion of the file. Please help me fix this problem :(

EDIT2: My understanding of locks

public static void main(String[] args){
    File file = new File("C:\\dev\\harry\\data.txt");

    FileReader fileReader = null;
    BufferedReader bufferedReader = null;
    FileChannel channel = null;
    FileLock lock = null;
    try{
        channel = new RandomAccessFile(file, "rw").getChannel();
        lock = channel.lock();
        fileReader = new FileReader(file); // opens a second descriptor to the locked file
        bufferedReader = new BufferedReader(fileReader);
        String data;
        while((data = bufferedReader.readLine()) != null){
            System.out.println(data);
        }
    }catch(IOException e){
        e.printStackTrace();
    }finally{
        try {
            if(lock != null) lock.release();
            if(channel != null) channel.close();
            if(bufferedReader != null) bufferedReader.close();
            if(fileReader != null) fileReader.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

and I got this error: IOException: The process cannot access the file because another process has locked a portion of the file


So, you could use the method that Vineet Reynolds suggests in his comment.

If the two processes are actually just separate Threads within the same application, then you could set a flag somewhere to indicate that the file is open.

If it's two separate applications/processes altogether, the underlying filesystem should lock the files. When you get an I/O error from your output stream, you should be able to wrap a try/catch block around that, and then set your app up to retry later, or whatever the desired behavior is for your particular application.

Files aren't really designed to be written to simultaneously by multiple applications. If you can describe why you want to write to one file simultaneously from multiple processes, there may be other solutions that could be suggested.
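For the threads-in-one-JVM case mentioned above, a shared lock object is safer than a bare boolean flag (which is racy). This is a minimal sketch; the class and method names are just illustrative:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

public class SharedFileWriter {
    // One lock per JVM guards the file when the "processes" are really threads.
    private static final ReentrantLock FILE_LOCK = new ReentrantLock();

    static void appendLine(Path file, String line) throws IOException {
        FILE_LOCK.lock(); // blocks until no other thread is writing
        try {
            Files.write(file, List.of(line),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } finally {
            FILE_LOCK.unlock(); // always release, even on IOException
        }
    }
}
```

Note this only coordinates threads inside one JVM; it does nothing for two separate processes.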


Updates after your recent edits: Ok, so you need at least 3 files to do what you're talking about. You definitely cannot try to read/write data to a single file concurrently. Your three files are:

  1. the file that ProcessA dumps new/incoming data to
  2. the file that ProcessB is currently working on
  3. a final "output" file that holds the output from ProcessB.

ProcessB's loop:

  • Take any data in file#2, process it, and write the output to file#3
  • Delete file#2
  • Repeat

ProcessA's loop:

  • Write all new, incoming data to file#1
  • Periodically check to see if file#2 exists
  • When file#2 is deleted by ProcessB, then ProcessA should stop writing to file#1, rename file#1 to be file#2
  • Start a new file#1
  • Repeat
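The handoff between the two loops above can be sketched with an atomic rename, so ProcessB only ever sees a complete file#2. The file names and method names here are hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class FileHandoff {
    // ProcessA side: rename file#1 to file#2 once ProcessB has deleted the
    // previous file#2. The atomic move means ProcessB never sees a half file.
    static boolean handOff(Path incoming, Path working) throws IOException {
        if (Files.notExists(working) && Files.exists(incoming)) {
            Files.move(incoming, working, StandardCopyOption.ATOMIC_MOVE);
            return true;
        }
        return false; // ProcessB is still working; keep writing to file#1
    }

    // ProcessB side: consume file#2, append results to file#3, then delete
    // file#2 -- the deletion is the signal that ProcessA may hand off again.
    static void drain(Path working, Path output) throws IOException {
        if (Files.exists(working)) {
            List<String> rows = Files.readAllLines(working);
            Files.write(output, rows,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            Files.delete(working);
        }
    }
}
```

The existence of file#2 acts as the coordination signal, so neither process ever has the same file open for writing at the same time.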


If this is two separate applications trying to access the file, one of them will throw an IOException because it can't access it. If that occurs, add code in the catch (IOException err) {} block to pause the current thread for a few milliseconds and then recursively try to write again, until it gains access.

public boolean writeFile()
{
    try
    {
        // write to file here
        return true;
    }
    catch (IOException err) // Can't access the file yet
    {
        try
        {
            Thread.sleep(200);      // Sleep a bit
            return writeFile();     // Try again
        }
        catch (InterruptedException err2)
        {
            Thread.currentThread().interrupt(); // Preserve the interrupt status
            return writeFile();     // Could not sleep; try again anyway
        }
    }
}

This will keep trying until you get a StackOverflowError, meaning the recursion went too deep; but the chance of that happening in this situation is very small. It would only occur if the other application kept the file open for a really long time.
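A loop-based variant of the same retry idea avoids the recursion-depth problem entirely and bounds the number of attempts. This is a sketch; the WriteAction interface is a hypothetical stand-in for the actual file write:

```java
import java.io.IOException;

public class RetryingWriter {
    // Hypothetical write action; in real code this would open and write the file.
    interface WriteAction { void run() throws IOException; }

    // Loop-based retry: bounded and iterative, so no StackOverflowError.
    static boolean writeWithRetry(WriteAction write, int maxAttempts, long backoffMillis) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                write.run();
                return true;
            } catch (IOException locked) {
                try {
                    Thread.sleep(backoffMillis); // wait for the other process
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt(); // preserve interrupt status
                    return false;
                }
            }
        }
        return false; // gave up after maxAttempts
    }
}
```

Bounding the attempts also means the caller gets a definite failure instead of spinning forever against a file that never becomes available.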

Hope this helps!


The code in the updated question is most likely that of process B, and not of process A. I'll assume that this is the case.

Considering that an OverlappingFileLockException is thrown, it appears that another thread in the same process is attempting to lock the same file. This is not a conflict between A and B, but rather a conflict within B, if one goes by the API documentation on the lock() method and the condition under which it throws OverlappingFileLockException:

If a lock that overlaps the requested region is already held by this Java virtual machine, or if another thread is already blocked in this method and is attempting to lock an overlapping region of the same file

The only way to prevent this is to keep any other thread in B from acquiring a lock on the same file, or on an overlapping region of the file.

The IOException being thrown has a more interesting message. It probably confirms the above theory, but without looking at the entire source code, I cannot confirm anything. The lock method is expected to block until the exclusive lock is acquired. If it was acquired, then there ought to be no problem in writing to the file, except for one condition: if the file has already been opened (and locked) by the same JVM in a different thread using a second File object (in other words, a second, different file descriptor), then the attempted write on the first file descriptor will fail even though the lock was acquired (after all, the lock does not lock out other threads).

An improved design, would be to have a single thread in each process that acquires an exclusive lock on the file (while using a single File object, or a single file descriptor) for only a certain amount of time, perform the required activity in the file, and then release the lock.
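The single-descriptor design described above might look like the following sketch: one FileChannel is opened, locked for the whole critical section, and used for both the read and the write, instead of opening a separate FileReader on the already-locked file. Names are illustrative:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LockedFileAccess {
    // Read and replace a file's contents through ONE channel/descriptor,
    // holding the exclusive lock for the entire read-modify-write.
    static String rewrite(Path path, String newContent) throws IOException {
        try (FileChannel channel = FileChannel.open(path,
                 StandardOpenOption.READ, StandardOpenOption.WRITE,
                 StandardOpenOption.CREATE);
             FileLock lock = channel.lock()) { // blocks until the lock is acquired
            ByteBuffer buf = ByteBuffer.allocate((int) channel.size());
            while (buf.hasRemaining() && channel.read(buf) != -1) { }
            String old = new String(buf.array(), StandardCharsets.UTF_8);
            channel.truncate(0); // discard the old contents
            channel.write(ByteBuffer.wrap(
                    newContent.getBytes(StandardCharsets.UTF_8)), 0);
            return old; // lock and channel released by try-with-resources
        }
    }
}
```

Because the same channel that holds the lock does the reading and writing, there is no second descriptor for the operating system to reject.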


Think about this with a MapReduce mentality. Let's assume each program writes its output without reading the other's output. I would write two separate files and then have a 'reduce' phase. Your reduction might be a simple chronologically ordered merge.

If, however, your programs require one another's output, you have a very different problem and need to rethink how you are partitioning the work.

Finally, if the two programs' outputs are similar but independent, and you are writing them into one file so a third program can read it all, consider changing the third program to read both files.
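The 'reduce' phase could be as simple as the sketch below, assuming each row begins with a sortable timestamp so a plain sort yields chronological order. Class and method names are hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LogMerge {
    // Merge two independently written files into one chronologically
    // ordered list, assuming rows start with sortable timestamps.
    static List<String> merge(Path a, Path b) throws IOException {
        try (Stream<String> sa = Files.lines(a);
             Stream<String> sb = Files.lines(b)) {
            return Stream.concat(sa, sb).sorted().collect(Collectors.toList());
        }
    }
}
```

Since each program only ever writes its own file, the concurrency problem disappears entirely; the merge runs whenever the combined view is needed.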

