I'm currently writing something that needs to handle very large text files (a few GiB at least). What's needed here (and this is fixed) is:
- CSV-based, following RFC 4180 with the exception of embedded line breaks
- random read access to lines, though mostly line by line and near the end
- appending lines at the end
- (changing lines). Obviously that calls for the rest of the file to be rewritten; it's also rare, so it's not particularly important at the moment
The size of the file forbids keeping it completely in memory (which is also not desirable, since when appending the changes should be persisted as soon as possible).
I have thought of using a memory-mapped region as a window into the file which gets moved around if a line outside its range is requested. Of course, at that stage I still have no abstraction above the byte level. To actually work with the contents I have a CharsetDecoder giving me a CharBuffer. Now the problem is, I can deal with lines of text probably just fine in the CharBuffer, but I also need to know the byte offset of that line within the file (to keep a cache of line indexes and offsets so I don't have to scan the file from the beginning again to find a specific line).
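For illustration, a rough sketch of that sliding window, assuming UTF-8; FileWindow, WINDOW_SIZE and charsAround are just made-up names:

    import java.io.IOException;
    import java.nio.CharBuffer;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    final class FileWindow {
        private static final int WINDOW_SIZE = 16 * 1024 * 1024; // 16 MiB, arbitrary

        private final FileChannel channel;
        private long windowStart = -1;
        private MappedByteBuffer window;

        FileWindow(Path file) throws IOException {
            this.channel = FileChannel.open(file, StandardOpenOption.READ);
        }

        /** Re-maps the window if necessary so that it covers the given byte offset. */
        CharBuffer charsAround(long byteOffset) throws IOException {
            if (window == null
                    || byteOffset < windowStart
                    || byteOffset >= windowStart + window.capacity()) {
                windowStart = Math.max(0, byteOffset - WINDOW_SIZE / 2);
                long length = Math.min(WINDOW_SIZE, channel.size() - windowStart);
                window = channel.map(FileChannel.MapMode.READ_ONLY, windowStart, length);
            }
            // Decode the whole window in one go.  A real implementation would have
            // to handle a multi-byte sequence being cut off at the window edges.
            return StandardCharsets.UTF_8.decode(window.duplicate());
        }
    }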
Is there a way to map the offsets in a CharBuffer to offsets in the matching ByteBuffer at all? It's obviously trivial with ASCII or ISO-8859-*, less so with UTF-8, and with ISO 2022 or BOCU-1 things would get downright ugly (not that I actually expect the latter two, but UTF-8 should be the default here – and it still poses problems).
I guess I could just convert a portion of the CharBuffer to bytes again and use the length. Either it works or I get problems with diacritics, in which case I could probably mandate the use of NFC or NFD to ensure that the text is always unambiguously encoded.
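Something like this minimal helper is what I mean by re-encoding a portion (byteLengthOf is a made-up name; the charset would be whatever the file uses):

    import java.nio.ByteBuffer;
    import java.nio.CharBuffer;
    import java.nio.charset.Charset;

    class ByteLength {
        // Number of bytes that the chars [from, to) of the buffer occupy in the
        // file, determined by simply re-encoding them with the file's charset.
        static int byteLengthOf(CharBuffer chars, int from, int to, Charset charset) {
            CharBuffer slice = chars.duplicate();
            slice.limit(to);
            slice.position(from);
            ByteBuffer encoded = charset.encode(slice);
            return encoded.remaining();
        }
    }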
Still, I wonder if that is even the way to go here. Are there better options?
ETA: Some replies to common questions and suggestions here:
This is a data storage for simulation runs, intended to be a small-ish local alternative to a full-blown database. We do have database backends as well and they are used, but for cases where they are unavailable or not applicable we do want this.
I'm also only supporting a subset of CSV (without embedded line breaks), but that's ok for now. The problematic points here are pretty much that I cannot predict how long the lines are and thus need to create a rough map of the file.
As for what I outlined above: The problem I was pondering was that I can easily determine the end of a line on the character level (U+000D + U+000A), but I didn't want to assume that this looks like 0D 0A on the byte level (which already fails for UTF-16, for example, where it's either 0D 00 0A 00 or 00 0D 00 0A, depending on endianness). My thought was that I could make the character encoding changeable by not hard-coding details of the encoding I currently use. But I guess I could just stick to UTF-8 and ignore everything else. It feels wrong, somehow, though.
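For reference, a tiny sketch that prints what "\r\n" looks like on the byte level in a few encodings – exactly the detail I don't want to hard-code:

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class LineEndingBytes {
        public static void main(String[] args) {
            Charset[] charsets = {
                StandardCharsets.UTF_8,    // expect 0D 0A
                StandardCharsets.UTF_16BE, // expect 00 0D 00 0A
                StandardCharsets.UTF_16LE  // expect 0D 00 0A 00
            };
            for (Charset cs : charsets) {
                StringBuilder hex = new StringBuilder();
                for (byte b : "\r\n".getBytes(cs)) {
                    hex.append(String.format("%02X ", b));
                }
                System.out.println(cs + ": " + hex.toString().trim());
            }
        }
    }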
It's very difficult to maintain a 1:1 mapping between a sequence of Java chars (which are effectively UTF-16 code units) and bytes, which could be anything depending on your file encoding. Even with UTF-8, the "obvious" mapping of 1 byte to 1 char only works for ASCII. Neither UTF-16 nor UTF-8 guarantees that a Unicode character can be stored in a single machine char or byte.
I would maintain my window into the file as a byte buffer, not a char buffer. Then, to find line endings in the byte buffer, I'd encode the Java string "\r\n" (or possibly just "\n") as a byte sequence using the same encoding as the file, and use that byte sequence to search for line endings in the byte buffer. The position of a line ending in the buffer plus the offset of the buffer from the start of the file maps exactly to the byte position of the line ending in the file.
Appending lines is just a case of seeking to the end of the file and adding your new lines. Changing lines is trickier. I think I would maintain a list or map of byte positions of changed lines and what the change is. When ready to write the changes (see the sketch after this list):
1. Sort the list of changes by byte position.
2. Read the original file up to the next change and write it to a temporary file.
3. Write the changed line to the temporary file.
4. Skip the changed line in the original file.
5. Go back to step 2 unless you have reached the end of the original file.
6. Move the temp file over the original file.
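A rough sketch of those steps, assuming UTF-8 and CRLF line endings (applyChanges is a made-up helper; a SortedMap keyed by byte offset covers the sorting in step 1):

    import java.io.EOFException;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.io.RandomAccessFile;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;
    import java.util.Map;
    import java.util.SortedMap;

    class CsvRewriter {
        // `changes` maps the byte offset of each changed line to its replacement text.
        static void applyChanges(Path file, SortedMap<Long, String> changes) throws IOException {
            Path temp = Files.createTempFile(file.toAbsolutePath().getParent(), "csv", ".tmp");
            try (RandomAccessFile in = new RandomAccessFile(file.toFile(), "r");
                 OutputStream out = Files.newOutputStream(temp)) {
                byte[] buf = new byte[64 * 1024];
                long pos = 0;
                for (Map.Entry<Long, String> change : changes.entrySet()) {
                    long toCopy = change.getKey() - pos;              // step 2: copy the unchanged part
                    while (toCopy > 0) {
                        int n = in.read(buf, 0, (int) Math.min(buf.length, toCopy));
                        if (n < 0) throw new EOFException();
                        out.write(buf, 0, n);
                        toCopy -= n;
                    }
                    out.write((change.getValue() + "\r\n").getBytes(StandardCharsets.UTF_8)); // step 3
                    in.readLine();                                    // step 4: skip the original line
                    pos = in.getFilePointer();
                }
                int n;                                                // copy the rest of the file
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
            Files.move(temp, file, StandardCopyOption.REPLACE_EXISTING); // step 6
        }
    }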
Would it be possible to split the file into "subfiles" (of course you must not split it within one UTF-8 character)? Then you need some metadata for each of the subfiles (total number of characters and total number of lines).
If you have this and the "subfiles" are relatively small, so that you can always load one completely, then the handling becomes easy.
Even editing becomes easy, because you only need to update the "subfile" and its metadata.
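A sketch of the kind of metadata meant here (SubFileInfo and chunkFor are made-up names, and the naive scan could of course be a binary search):

    import java.nio.file.Path;
    import java.util.List;
    import java.util.NoSuchElementException;

    final class SubFileInfo {
        final Path path;       // e.g. one chunk file of the big CSV
        final long firstLine;  // global index of the first line stored in this chunk
        final int lineCount;   // number of lines in this chunk
        final long charCount;  // total number of characters in this chunk

        SubFileInfo(Path path, long firstLine, int lineCount, long charCount) {
            this.path = path;
            this.firstLine = firstLine;
            this.lineCount = lineCount;
            this.charCount = charCount;
        }

        // The chunk containing a given global line number.
        static SubFileInfo chunkFor(List<SubFileInfo> chunks, long line) {
            for (SubFileInfo c : chunks) {
                if (line >= c.firstLine && line < c.firstLine + c.lineCount) {
                    return c;
                }
            }
            throw new NoSuchElementException("no chunk contains line " + line);
        }
    }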
Taking it to the extreme: you could use a database and store one line per database row. Whether this is a good idea depends strongly on your use case.
CharBuffer assumes all characters are UTF-16 code units (UCS-2 is the older fixed-width form, which cannot represent code points above U+FFFF; UTF-16 extends it with surrogate pairs).
The problem with using a proper text format is that you need to read every byte to know where the n-th character or the n-th line is. I use multi-GB text files, but assume ASCII-7 data and only read/write sequentially.
If you want random access on an unindexed text file, you can't expect it to be performant.
If you are willing to buy a new server, you can get one with 24 GB for around £1,800 and 64 GB for around £4,200. These would allow you to load even multi-GB files into memory.
If you had fixed-width lines then using a RandomAccessFile might solve a lot of your problems. I realise that your lines are probably not fixed width, but you could artificially impose this by adding an end-of-line indicator and then padding lines (e.g. with spaces).
This obviously works best if your file currently has a fairly uniform distribution of line lengths and doesn't have some lines that are very, very long. The downside is that this will artificially increase the size of your file.
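A sketch of what that could look like, assuming UTF-8, CRLF and an arbitrary record length of 256 bytes (FixedWidthCsv is a made-up name):

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;

    public class FixedWidthCsv {
        // Bytes per line, including the trailing \r\n -- an assumption for this sketch.
        private static final int RECORD_LENGTH = 256;

        private final RandomAccessFile file;

        public FixedWidthCsv(RandomAccessFile file) {
            this.file = file;
        }

        public String readLine(long lineNumber) throws IOException {
            byte[] record = new byte[RECORD_LENGTH];
            file.seek(lineNumber * RECORD_LENGTH);   // line n starts at n * RECORD_LENGTH
            file.readFully(record);
            // strip the space padding and the line terminator again
            return new String(record, StandardCharsets.UTF_8).trim();
        }

        public void writeLine(long lineNumber, String line) throws IOException {
            byte[] payload = line.getBytes(StandardCharsets.UTF_8);
            if (payload.length > RECORD_LENGTH - 2) {
                throw new IllegalArgumentException("line too long for fixed-width record");
            }
            byte[] record = new byte[RECORD_LENGTH];
            Arrays.fill(record, (byte) ' ');         // pad with spaces
            System.arraycopy(payload, 0, record, 0, payload.length);
            record[RECORD_LENGTH - 2] = (byte) '\r';
            record[RECORD_LENGTH - 1] = (byte) '\n';
            file.seek(lineNumber * RECORD_LENGTH);
            file.write(record);
        }
    }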
- Finding the start of a line: sticking with UTF-8 and \n denoting the end of the line should not be a problem. Alternatively you can allow UTF-16 and recognize the data: each line has to be quoted (for instance), has N commas (or semicolons) and ends with another line terminator. You can read the header to know how many columns the structure has.
- Inserting into the middle of the file can be achieved by reserving some space at the end/beginning of each line.
- Appending lines at the end is trivial as long as the file is locked (as are any other modifications); a sketch follows below.
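A minimal sketch of such a locked append, assuming UTF-8 and CRLF (CsvAppender is a made-up name):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    class CsvAppender {
        // Appends one CSV line while holding an exclusive lock on the whole file.
        static void appendLine(Path file, String line) throws IOException {
            try (FileChannel ch = FileChannel.open(file,
                     StandardOpenOption.WRITE, StandardOpenOption.APPEND);
                 FileLock lock = ch.lock()) {
                ch.write(ByteBuffer.wrap((line + "\r\n").getBytes(StandardCharsets.UTF_8)));
                ch.force(false); // persist the appended line as soon as possible
            }
        }
    }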
In the case of a fixed column count, I'd split the file logically and/or physically into columns and implement some wrappers/adapters for the IO tasks and for managing the file as a whole.
How about a table of offsets at somewhat regular intervals in the file, so you can restart parsing somewhere near the spot you are looking for?
The idea would be that these are byte offsets where the encoding is in its initial state (i.e. if the data was ISO-2022 encoded, the spot would be in the ASCII-compatible mode). Any index into the data would then consist of a pointer into this table plus whatever is required to find the actual row. If you place the restart points such that everything between two adjacent points fits into the mmap window, then you can omit the check/remap/restart code from the parsing layer and use a parser that assumes the data is sequentially mapped.
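A minimal sketch of such a table, using a TreeMap keyed by line number (LineIndex, addRestartPoint and restartPointFor are made-up names):

    import java.util.Map;
    import java.util.TreeMap;

    // Sparse index: every Nth line's number mapped to a byte offset at which the
    // parser (and the charset decoder) can restart from its initial state.
    final class LineIndex {
        private final TreeMap<Long, Long> lineToByteOffset = new TreeMap<>();

        void addRestartPoint(long lineNumber, long byteOffset) {
            lineToByteOffset.put(lineNumber, byteOffset);
        }

        /** Nearest known restart point at or before the requested line, or null. */
        Map.Entry<Long, Long> restartPointFor(long lineNumber) {
            return lineToByteOffset.floorEntry(lineNumber);
        }
    }

To reach line n, you would look up restartPointFor(n), map a window starting at the returned byte offset, and parse forward the remaining lines.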