I am currently using struct.unpack to read a binary file. I frequently need to read values of different types: a few longs, then 8 floats, then 2 shorts, a couple of bytes, and so on.
They are generally grouped nicely, though, so I might get a run of longs, then a run of floats, then a run of shorts, etc.
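For illustration, the reading pattern looks roughly like this (the field counts and the little-endian format strings here are made up):

import struct

# Minimal sketch of the grouped reads described above; counts and
# formats are illustrative only.
with open("data.bin", "rb") as f:
    longs = struct.unpack("<4l", f.read(4 * 4))   # four 32-bit longs
    floats = struct.unpack("<8f", f.read(8 * 4))  # eight 32-bit floats
    shorts = struct.unpack("<2h", f.read(2 * 2))  # two 16-bit shorts
    flags = f.read(2)                             # a couple of raw bytes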
I've read a couple of posts saying that arrays perform much faster than unpack, but I am not sure whether that advantage holds if I am constantly calling fromfile with different array objects (one for each type I might come across).
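The array-based alternative I have in mind would be something like this sketch. Note that the array module always uses the machine's native byte order and item sizes (on some platforms 'l' is 8 bytes rather than 4), so the type codes here are an assumption; real code might need array.byteswap() or different codes to match the file.

from array import array

# Hypothetical array.fromfile equivalent of the struct reads above.
with open("data.bin", "rb") as f:
    longs = array("l")
    longs.fromfile(f, 4)
    floats = array("f")
    floats.fromfile(f, 8)
    shorts = array("h")
    shorts.fromfile(f, 2)
    flags = f.read(2)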
Has anyone done any performance tests to compare the two in this situation?
Sounds like you are in the best position to do the time trials. You already have the struct.unpack version, so make an array.fromfile version and then use the timeit module to do some benchmarks. Something like this:
python -m timeit -s "import struct_version" "struct_version.main()"
python -m timeit -s "import array_version" "array_version.main()"
where struct_version and array_version are your two different versions, and main is the function that does all the processing.
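If you would rather run the comparison from inside Python than from the shell, a sketch using the timeit API directly (assuming the same two hypothetical modules) could look like:

import timeit

# Time each hypothetical module's main() over 100 runs.
t_struct = timeit.timeit("struct_version.main()",
                         setup="import struct_version", number=100)
t_array = timeit.timeit("array_version.main()",
                        setup="import array_version", number=100)
print(f"struct: {t_struct:.3f}s  array: {t_array:.3f}s")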