I'm decoding a large (about a gigabyte) flat file database, which mixes character encodings willy-nilly. The Python module chardet
is doing a good job of identifying the encodings so far, but I've hit a stumbling block...
In [428]: badish[-3]
Out[428]: '\t\t\t"Kuzey r\xfczgari" (2007) {(#1.2)} [Kaz\xc4\xb1m]\n'
In [429]: chardet.detect(badish[-3])
Out[429]: {'confidence': 0.98999999999999999, 'encoding': 'Big5'}
In [430]: unicode(badish[-3], 'Big5')
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
~/src/imdb/<ipython console> in <module>()
UnicodeDecodeError: 'big5' codec can't decode bytes in position 11-12: illegal multibyte sequence
chardet reports very high confidence in its choice of encoding, but the string doesn't decode with it... Are there any other sensible approaches?
A point that can't be stressed too strongly: You should not expect any reasonable encoding guess from a piece of text that is so short and has such a high percentage of plain old ASCII characters in it.
About big5: chardet casts a very wide net when checking CJK encodings. There are lots of unused slots in big5, and chardet doesn't exclude them. That string is not valid big5, as you have found out. It is in fact valid (but meaningless) big5_hkscs (which used a lot of the holes in big5).
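A quick way to see that (Python 2, to match the session above; the sample string is copied from the question):

s = '\t\t\t"Kuzey r\xfczgari" (2007) {(#1.2)} [Kaz\xc4\xb1m]\n'

try:
    s.decode('big5')
except UnicodeDecodeError as e:
    print 'big5:', e                    # fails, as in the traceback above

print repr(s.decode('big5hkscs'))       # decodes, but the non-ASCII parts come out as meaningless CJK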
There are an enormous number of single-byte encodings that fit the string.
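To illustrate, here is an arbitrary sample of single-byte codecs; every one of them decodes the string without an error, so the bytes alone can't settle the choice:

s = '\t\t\t"Kuzey r\xfczgari" (2007) {(#1.2)} [Kaz\xc4\xb1m]\n'
for enc in ('cp1252', 'cp1254', 'iso-8859-1', 'iso-8859-2',
            'iso-8859-9', 'koi8-r', 'mac_turkish'):
    print enc, repr(s.decode(enc))      # no errors from any of them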
At this stage it's necessary to seek out-of-band help. Googling "Kuzey etc" drags up a Turkish TV series "Kuzey rüzgari" so we now have the language.
That means that if it was entered by a person familiar with Turkish, it could be in cp1254, or iso_8859_3 (or _9), or mac_turkish. All of those produce gibberish for the [Kaz??m] word near the end. According to the imdb website, that's the name of a character, and it's the same gibberish as obtained by decoding with cp1254 and iso-8859-9 (KazÄ±m). Decoding with your suggested iso-8859-2 gives KazÄąm, which doesn't look very plausible either.
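Checking just the suspect word (printed with repr to avoid terminal-encoding surprises):

name = 'Kaz\xc4\xb1m'
for enc in ('cp1254', 'iso8859_9', 'iso8859_3', 'mac_turkish', 'iso8859_2'):
    print enc, repr(name.decode(enc))
# cp1254 and iso8859_9 give u'KazÄ±m', iso8859_2 gives u'KazÄąm' --
# none of them looks like a Turkish name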
Can you generalise this? I don't think so :-)
I would strongly suggest that in such a case you decode it using latin1 (so that no bytes are mangled) and flag the record as having an unknown encoding. You should use a minimum length cutoff as well.
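A minimal sketch of that policy (Python 2; MIN_LEN, decode_record and the cutoff value are hypothetical names and numbers, not anything from your code):

import chardet

MIN_LEN = 40   # assumed cutoff -- tune it for your data

def decode_record(raw):
    """Return (unicode_text, encoding_label) for one record."""
    if len(raw) >= MIN_LEN:
        guess = chardet.detect(raw)['encoding']
        if guess:
            try:
                return raw.decode(guess), guess
            except (UnicodeDecodeError, LookupError):
                pass
    # fallback: latin1 maps every byte to a code point, so nothing is lost
    return raw.decode('latin1'), 'unknown'

text, label = decode_record(badish[-3])   # the sample from the question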
Update: For what it's worth, the_two_bytes_in_the_character_name.decode('utf8') produces U+0131 LATIN SMALL LETTER DOTLESS I, which is used in Turkish and Azerbaijani. Further googling indicates that Kazım is a common-enough Turkish given name.
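You can confirm that from the console:

import unicodedata
ch = '\xc4\xb1'.decode('utf8')
print repr(ch), unicodedata.name(ch)   # u'\u0131' LATIN SMALL LETTER DOTLESS I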