Java MemoryMapping big files

https://www.devze.com 2023-02-25 05:48 Source: web
The Java limitation of MappedByteBuffer to 2 GiB makes it tricky to use for mapping big files. The usual recommended approach is to use an array of MappedByteBuffer and index into it:

long PAGE_SIZE = Integer.MAX_VALUE;
MappedByteBuffer[] buffers;

private int getPage(long offset) {
    return (int) (offset / PAGE_SIZE);
}

private int getIndex(long offset) {
    return (int) (offset % PAGE_SIZE);
}

public byte get(long offset) {
    return buffers[getPage(offset)].get(getIndex(offset));
}
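
The question leaves out how the buffers array is populated in the first place. A minimal sketch of that step, assuming the file is opened through a FileChannel (class and method names here are illustrative, not from the original post):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class PagedMapper {
    static final long PAGE_SIZE = Integer.MAX_VALUE;

    // Map an entire file as an array of MappedByteBuffer "pages",
    // each at most Integer.MAX_VALUE bytes long.
    static MappedByteBuffer[] mapFile(FileChannel channel, FileChannel.MapMode mode)
            throws IOException {
        long size = channel.size();
        int pages = (int) ((size + PAGE_SIZE - 1) / PAGE_SIZE);  // ceiling division
        MappedByteBuffer[] buffers = new MappedByteBuffer[pages];
        for (int i = 0; i < pages; i++) {
            long start = i * PAGE_SIZE;
            long length = Math.min(PAGE_SIZE, size - start);  // last page may be short
            buffers[i] = channel.map(mode, start, length);
        }
        return buffers;
    }
}
```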

This works for single-byte access, but it requires rewriting a lot of code if you want to handle reads/writes that are larger and need to cross page boundaries (getLong() or get(byte[])).
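To make the boundary problem concrete, here is one possible sketch of boundary-safe multi-byte reads over an array of pages (MappedByteBuffer extends ByteBuffer, so the same code applies to mapped files). The class name and the small page size are only for illustration and testing; in real use pageSize would be Integer.MAX_VALUE as in the question:

```java
import java.nio.ByteBuffer;

public class PagedReader {
    final ByteBuffer[] buffers;
    final long pageSize;

    public PagedReader(ByteBuffer[] buffers, long pageSize) {
        this.buffers = buffers;
        this.pageSize = pageSize;
    }

    public byte get(long offset) {
        return buffers[(int) (offset / pageSize)].get((int) (offset % pageSize));
    }

    // Fast path when all 8 bytes sit inside one page; otherwise assemble the
    // value byte by byte across the boundary (big-endian, matching
    // ByteBuffer's default byte order).
    public long getLong(long offset) {
        if (offset / pageSize == (offset + 7) / pageSize) {
            return buffers[(int) (offset / pageSize)].getLong((int) (offset % pageSize));
        }
        long value = 0;
        for (int i = 0; i < 8; i++) {
            value = (value << 8) | (get(offset + i) & 0xFFL);
        }
        return value;
    }

    // Bulk read that may span several pages.
    public void get(long offset, byte[] dst) {
        for (int i = 0; i < dst.length; i++) {
            dst[i] = get(offset + i);  // simple but correct; could be chunked for speed
        }
    }
}
```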

The question: what is your best practice for this kind of scenario? Do you know of any working solution/code that can be reused without reinventing the wheel?


Have you checked out dsiutil's ByteBufferInputStream?

Javadoc

The main usefulness of this class is that it makes it possible to create input streams that are really based on a MappedByteBuffer.

In particular, the factory method map(FileChannel, FileChannel.MapMode) will memory-map an entire file into an array of ByteBuffer and expose the array as a ByteBufferInputStream. This makes it possible to access easily mapped files larger than 2GiB.

  • long length()
  • long position()
  • void position(long newPosition)

Is that something you were thinking of? It's LGPL too.
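
To illustrate the idea (this is not dsiutils' actual code), a minimal JDK-only sketch of an InputStream over an array of ByteBuffer pages with a long position, along the lines of the methods listed above:

```java
import java.io.InputStream;
import java.nio.ByteBuffer;

// JDK-only illustration of the concept behind dsiutils' ByteBufferInputStream:
// an InputStream backed by an array of ByteBuffer pages, addressed by a long.
public class PagedInputStream extends InputStream {
    private final ByteBuffer[] buffers;
    private final long pageSize;
    private final long length;
    private long position;

    public PagedInputStream(ByteBuffer[] buffers, long pageSize, long length) {
        this.buffers = buffers;
        this.pageSize = pageSize;
        this.length = length;
    }

    public long length() { return length; }
    public long position() { return position; }
    public void position(long newPosition) { position = newPosition; }

    @Override
    public int read() {
        if (position >= length) return -1;  // end of stream
        int page = (int) (position / pageSize);
        int index = (int) (position % pageSize);
        position++;
        return buffers[page].get(index) & 0xFF;  // return the byte as 0..255
    }
}
```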
