Linux & Windows: Using Large Files to save physical memory

Source: https://www.devze.com 2023-02-16 08:02
Good afternoon. We have implemented a C++ cKeyArray class to test whether we can use the Large File API to save physical memory. During CentOS Linux testing, we found that the Linux File API was just as fast as using the heap for random access processing. Here are the numbers for a 2,700,000-row SQL database where the KeySize for each row is 62 bytes:

cKeyArray class using Linux File API: BruteForceComparisons = 197,275; BruteForceTimeElapsed = 1,763,504,445 microsecs. Each BruteForce comparison requires two random accesses, therefore the mean time required for each random access = 1,763,504,445 microsecs / (2 * 197,275) = 4,470 microsecs.

Heap, no cKeyArray class:

BruteForceComparisons = 197,275; BruteForceTimeElapsed = 1,708,442,690 microsecs; the mean time required for each random access = 1,708,442,690 microsecs / (2 * 197,275) ≈ 4,330 microsecs.

On 32-bit Windows, the numbers are:

cKeyArray class using Windows File API: BruteForceComparisons = 197,275; BruteForceTimeElapsed = 9,243,787 millisecs; the mean time for each random access is 23.4 millisecs.

Heap, no cKeyArray class: BruteForceComparisons = 197,275; BruteForceTimeElapsed = 2,141,941 millisecs; the mean time required for each random access is 5.4 millisecs.

We are wondering why the Linux cKeyArray numbers are just as good as the Linux heap numbers, while on 32-bit Windows the mean heap random access time is four times as fast as the cKeyArray Windows File API. Is there some way we can speed up the Windows cKeyArray File API?

Previously, we received a lot of good suggestions from Stack Overflow on using the Windows Memory Mapped File API. Based on those suggestions we have implemented a Memory Mapped File MRU caching class, which functions properly.

Because we want to develop a cross-platform solution, we want to do due diligence to see why the Linux File API is so fast. Thank you. A portion of the cKeyArray class implementation is posted below.

#define KEYARRAY_THRESHOLD 100000000
// Use a file instead of memory if the requirement is above this number

cKeyArray::cKeyArray(long RecCount_, int KeySize_, int MatchCodeSize_,
                     char* TmpFileName_) {
    RecCount = RecCount_;
    KeySize = KeySize_;
    MatchCodeSize = MatchCodeSize_;
    MemBuffer = 0;
    KeyBuffer = 0;
    MemFile = 0;
    MemFileName[0] = '\x0';
    ReturnBuffer = new char[MatchCodeSize + 1];
    // Widen before multiplying so the byte count cannot overflow a
    // 32-bit long; casting the finished product is too late.
    if ((int64_t)RecCount * KeySize <= KEYARRAY_THRESHOLD) {
        InMemory = true;
        MemBuffer = new char[RecCount * KeySize];
        memset(MemBuffer, 0, RecCount * KeySize);
    } else {
        InMemory = false;
        strcpy(MemFileName, TmpFileName_);
        // Let cException propagate; catching by value and rethrowing
        // the copy only sliced the exception.
        MemFile = new cFile(MemFileName, cFile::CreateAlways, cFile::ReadWrite);
        // Extend the file to its final size up front.
        MemFile->SetFilePointer((int64_t)RecCount * KeySize, cFile::FileBegin);
        if (!(MemFile->SetEndOfFile()))
            throw cException(ERR_FILEOPEN, MemFileName);

        KeyBuffer = new char[KeySize];
    }
}

char *cKeyArray::GetKey(long Record_) {
    memset(ReturnBuffer, 0, MatchCodeSize + 1);
    if (InMemory) {
        memcpy(ReturnBuffer, MemBuffer + (int64_t)Record_ * KeySize,
               MatchCodeSize);
    } else {
        // Two system calls per access: seek, then read.
        MemFile->SetFilePointer((int64_t)Record_ * KeySize, cFile::FileBegin);
        MemFile->ReadFile(KeyBuffer, KeySize);
        memcpy(ReturnBuffer, KeyBuffer, MatchCodeSize);
    }
    return ReturnBuffer;
}


uint32_t cKeyArray::GetDupeGroup(long Record_) {
    uint32_t DupeGroup(0);
    if (InMemory) {
        memcpy((char*)&DupeGroup,
               MemBuffer + (int64_t)Record_ * KeySize + MatchCodeSize,
               sizeof(uint32_t));
    } else {
        MemFile->SetFilePointer((int64_t)Record_ * KeySize + MatchCodeSize,
                                cFile::FileBegin);
        MemFile->ReadFile((char*)&DupeGroup, sizeof(uint32_t));
    }
    return DupeGroup;
}


On Linux, the OS aggressively caches file data in main memory, so although you haven't explicitly allocated memory for the file contents, they are nevertheless stored in RAM. The page cache is worth reading up on; one thing most descriptions leave out is that most Linux filesystems actually implement the standard I/O interfaces as thin wrappers around the page cache. That means that even though you haven't explicitly memory-mapped the file, the system still treats it as though it were memory-mapped under the covers. That's why you see roughly equivalent performance with either approach.

I second the suggestion to factor the platform-specific code out and use whichever approach is fastest for each platform. Be sure to benchmark; never make assumptions about performance.


Your memory-mapped solution should be as much as 10x faster than the file solution, even on Linux. That is the speedup I see in my test cases.

Each file-access system call takes hundreds of CPU cycles to complete, time which your program could be using to do real work.

One explanation for why the speeds are similar could be that your memory map has not been touched before. When a memory-mapped page is accessed for the first time, it must be assigned a physical page of RAM and zeroed out, or, if it backs a disk file, loaded from disk into RAM. All of that takes a considerable amount of time.

If you touch (read or write a value in) each 4K page of the mapping before using it, you should see a significant speed increase with the memory map.

