I've written a hands-on recursive pure Python parser for a file format (ARFF) we use in one lecture. Now running my exercise submission is awfully slow. It turns out that by far the most time is spent in my parser. It's consuming a lot of CPU time; the hard disk is not the bottleneck.
I wonder what performant ways there are to write a parser in Python. I'd rather not rewrite it in C. I tried using Jython, but that decreased performance a lot! Some of the files I parse are huge (> 150 MB), with very long lines.
My current parser only needs a look-ahead of one character. I'd post the source here, but I don't know if that's such a good idea, since the submission deadline hasn't passed yet. But then, the focus of this exercise is not the parser: you can choose whatever language you want to use, and a parser for Java already exists.
Note: I have an x86_64 system, so Psyco (and, it seems, also PyPy) is not an option.
Update: I have now uploaded my parser/writer to Bitbucket.
The most general tip I'd give without further information is to read the entire file, or at least a substantial chunk of it, into memory at once. You don't want to be reading one character at a time and seeking back and forth; regardless of the buffering going on under the hood, it's probably a good idea to have the whole thing in memory so you can operate on it however you want.
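A minimal sketch of that approach (the file name is just a placeholder):

    # Sketch: slurp the whole file once, then scan it purely in memory.
    # "data.arff" stands in for the real input file.
    with open("data.arff") as f:
        master = f.read()

    pos = 0
    while pos < len(master):
        c = master[pos]        # one-character look-ahead, no further I/O
        # ... dispatch on c and advance pos past the token ...
        pos += 1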
I have written parsers in Python, and there's no particular reason for them to be much slower than a parser written in any other language. As is usual with these sorts of things, it's more likely that you're doing work you don't need to do: creating, destroying, and recreating the same object is more costly than just storing it somewhere; recomputing a value over and over again is more costly than computing it once and storing it. Etc., etc.
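As an illustration (this is not from the asker's parser), here is the kind of invariant work worth hoisting out of a hot loop:

    import re

    # Wasteful: the same pattern object is rebuilt on every iteration.
    def split_lines_slow(lines):
        out = []
        for line in lines:
            pattern = re.compile(r",\s*")
            out.append(pattern.split(line))
        return out

    # Better: build it once and reuse it.
    COMMA = re.compile(r",\s*")

    def split_lines_fast(lines):
        return [COMMA.split(line) for line in lines]

(The re module caches compiled patterns internally, so the gain here is smaller than it looks, but the same principle applies to any object you build repeatedly.)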
In Python specifically, one trap people fall into is doing a lot of unnecessary string manipulation. Don't append to strings one character at a time; when you're building up your tokens, do your work on the "master" string and strip out the token in one fell swoop. In other words, index into the "master" string, figure out the start and end points, and then grab it with token = master[start:end]. Doing string concatenation one character at a time is a short path to performance misery. Even if you want or need, for some reason, to do for c in master: newstr += c, you'll have better luck stuffing the characters into a list and then doing newstr = ''.join(newstr_charlist).
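A rough sketch of the slicing approach, with made-up tokenizer logic:

    # Sketch only: find the end of the token, then slice it out in one step.
    def next_token(master, start):
        end = start
        while end < len(master) and not master[end].isspace():
            end += 1
        return master[start:end], end   # one slice instead of many concatenations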
You could use ANTLR or pyparsing; they might speed up your parsing process.
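For instance, a toy pyparsing grammar for an ARFF header might look like this (illustrative only, not a complete ARFF implementation):

    from pyparsing import CaselessKeyword, Word, alphanums, restOfLine, Group, ZeroOrMore

    # Toy grammar: real ARFF also needs quoted names, @data handling, etc.
    identifier = Word(alphanums + "_-.")
    attribute  = Group(CaselessKeyword("@attribute") + identifier + restOfLine)
    header     = CaselessKeyword("@relation") + identifier + ZeroOrMore(attribute)

    sample = "@relation weather\n@attribute outlook {sunny, overcast, rainy}\n@attribute temperature numeric\n"
    print(header.parseString(sample))

Generated parsers trade convenience for raw speed, though, so benchmark on one of the large files before committing to this.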
And if you want to keep your current code, you might want to look at Cython/PyPy, which can increase your performance (sometimes up to 4x).
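With Cython, one low-effort route is pyximport, which compiles a .pyx module transparently at import time (the module name below is hypothetical):

    # Hypothetical: assumes the parser was renamed to myparser.pyx.
    import pyximport
    pyximport.install()

    import myparser   # compiled to a C extension behind the scenes

Since the asker ruled out PyPy on x86_64 above, Cython is the more likely fit here.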