python precautions to economize on size of text file of purely numerical characters

I am tabulating a lot of output from some network analysis, listing one edge per line, which results in dozens of gigabytes, stretching the limits of my resources (understatement). As I only deal with numerical values, it occurred to me that I might do better than the Py3k defaults: some other character encoding might save me quite some space, given that I only have digits (plus spaces and the occasional decimal dot). Constrained as I am, I might even save on the line endings (i.e. not waste a byte on the Windows-standard CRLF). What is the best practice on this?

An example line would read like this:

62233 242344 0.42442423

(The last number is actually pointlessly precise; I will cut it back to three significant digits.)

As I will need to read the text file into other software (Stata, actually), I cannot keep the data in an arbitrary binary format, though I see no reason why Stata should be limited to UTF-8 text. Or would you simply say that avoiding UTF-8 barely saves me anything?

I think compression would not work for me, as I write the text line by line and it would be great to limit the output size even during generation. I might easily be mistaken about how compression works: I thought it could only save me space after the file is generated, but my problem is that my code already crashes while I am tabulating the text file (line by line).

Thanks for all the ideas and clarifying questions!


You can use zlib or gzip to compress the data as you generate it. You won't need to change your format at all; the compression adapts to the characters and sequences you use most often, yielding a near-optimal file size.
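
For example, a minimal sketch of streaming compression with the standard-library gzip module (the edges iterable and the file name are placeholders, not part of the question):

```python
import gzip

# Stand-in for the real network-analysis output.
edges = [(62233, 242344, 0.42442423), (15, 99821, 0.00173)]

# gzip.open in text mode ("wt") compresses transparently as each line is
# written, so the on-disk size stays small even while the file is still
# being generated; nothing is held in memory or recompressed afterwards.
with gzip.open("edges.txt.gz", "wt", encoding="ascii", newline="\n") as f:
    for src, dst, weight in edges:
        f.write(f"{src} {dst} {weight:.3g}\n")
```

If Stata cannot read the compressed file directly, decompressing once (e.g. with gunzip) right before import still avoids the size problem during generation.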


Avoid character encodings entirely and save your data in a binary format; see Python's struct module. ASCII-encoded, a value like 4 billion takes 10 bytes but fits in a 4-byte unsigned integer. There are downsides to a custom binary format (it's hard to debug manually or inspect with other tools, etc.).
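
A minimal sketch of such a fixed-width binary record with struct, assuming the node ids fit in 32 bits (the field layout is an assumption, not something from the question):

```python
import struct

# One edge as two 4-byte unsigned ints plus a 4-byte float: 12 bytes,
# versus 24 bytes for the ASCII line "62233 242344 0.42442423\n".
# "<" fixes a little-endian layout so the file is portable.
record = struct.Struct("<IIf")

edges = [(62233, 242344, 0.42442423)]   # stand-in for the real output
with open("edges.bin", "wb") as f:
    for src, dst, weight in edges:
        f.write(record.pack(src, dst, weight))
```

Reading it back is the mirror image via record.iter_unpack over the file's bytes.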


I have done some study on this. Clever encoding does not matter much once you apply compression: even if you use some binary encoding, the data seems to carry the same entropy and ends up at a similar size after compression.

The Power of Gzip
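
A quick way to sanity-check that claim on representative data; the random edges below are stand-ins (and random data compresses poorly, so real, structured output will shrink much more), but they put the two encodings on an equal footing:

```python
import gzip
import random
import struct

# Stand-in data: 100k random edges shaped like the question's lines.
random.seed(0)
edges = [(random.randrange(10**6), random.randrange(10**6), random.random())
         for _ in range(100_000)]

# Same records, two encodings: ASCII text lines vs. packed 12-byte structs.
ascii_blob = "".join(f"{a} {b} {w:.8f}\n" for a, b, w in edges).encode("ascii")
binary_blob = b"".join(struct.pack("<IIf", a, b, w) for a, b, w in edges)

for name, blob in (("ascii", ascii_blob), ("binary", binary_blob)):
    print(name, len(blob), "->", len(gzip.compress(blob)))
```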

Yes, there are Python libraries that let you stream output and compress it automatically.

Lossy encoding does save space. Cutting down the precision helps.
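
For instance, Python's format specifications already handle that truncation (a trivial illustration using the question's example values):

```python
# ".3g" keeps three significant digits, shortening each line
# before it is ever written to disk.
print(f"{0.42442423:.3g}")                   # 0.424
print(f"{62233} {242344} {0.42442423:.3g}")  # 62233 242344 0.424
```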


I don't know the capabilities of data input in Stata, and a quick search reveals that those capabilities are described in the User's Guide, which seems to be available only in dead-tree form. So I don't know whether my suggestion is feasible.

An instant saving of half the size would come from using 4 bits per character. Your alphabet is 0 to 9, the period, (possibly) the minus sign, the space, and the newline: 14 characters, which fit comfortably into 2**4 == 16 slots.
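
A minimal sketch of that nibble packing (the codebook order is arbitrary, and whatever reads the file, Stata included, would need a matching decoder):

```python
# 4-bit codebook: 14 symbols fit in 16 slots; code 15 pads odd-length input.
ALPHABET = "0123456789.- \n"
ENCODE = {ch: i for i, ch in enumerate(ALPHABET)}
PAD = 15

def pack_nibbles(text: str) -> bytes:
    """Pack two 4-bit symbol codes into each output byte."""
    codes = [ENCODE[ch] for ch in text]
    if len(codes) % 2:
        codes.append(PAD)
    return bytes((codes[i] << 4) | codes[i + 1]
                 for i in range(0, len(codes), 2))

def unpack_nibbles(blob: bytes) -> str:
    """Recover the original text, dropping any trailing pad nibble."""
    chars = []
    for byte in blob:
        for code in (byte >> 4, byte & 0x0F):
            if code != PAD:
                chars.append(ALPHABET[code])
    return "".join(chars)

line = "62233 242344 0.424\n"
packed = pack_nibbles(line)
assert unpack_nibbles(packed) == line
print(len(line), "->", len(packed))   # 19 characters -> 10 bytes
```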

If this can be used in Stata, I can help more with suggestions for quick conversions.
