I'm loading a web page using urllib. There are Russian characters on it, but the page encoding is 'utf-8'
1
pageData = unicode(requestHandler.read()).decode('utf-8')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 262: ordinal not in range(128)
2
pageData = requestHandler.read()
soupHandler = BeautifulSoup(pageData)
print soupHandler.findAll(...)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 340-345: ordinal not in range(128)
In your first snippet, the call unicode(requestHandler.read())
tells Python to convert the bytestring returned by read
into unicode
: since no codec is specified for the conversion, ascii
gets tried (and fails). It never gets to the point where you call .decode
(which would make no sense to call on that unicode object anyway).
Either use unicode(requestHandler.read(), 'utf-8')
, or requestHandler.read().decode('utf-8')
: either of these should produce a correct unicode object if the encoding is indeed utf-8
(the presence of that D0
byte is consistent with UTF-8-encoded Cyrillic, but it's impossible to be sure from being shown a single non-ascii byte out of context).
Printing Unicode data is a different issue and requires a well-configured and cooperative terminal emulator -- one that lets Python set sys.stdout.encoding
on startup. For example, on a Mac, using Apple's Terminal.app:
>>> sys.stdout.encoding
'UTF-8'
so the printing of Unicode objects works fine here:
>>> print u'\xabutf8\xbb'
«utf8»
as does the printing of utf8-encoded byte strings:
>>> print u'\xabutf8\xbb'.encode('utf8')
«utf8»
but on other machines only the latter will work (using the terminal emulator's own encoding, which you need to discover on your own because the terminal emulator isn't telling Python;-).
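A sketch of that fallback, assuming the terminal in question expects UTF-8 (substitute whatever encoding your emulator actually uses):

```python
# -*- coding: utf-8 -*-
text = u'\xabutf8\xbb'   # the unicode string «utf8»

# When sys.stdout.encoding is set correctly, printing the unicode
# object directly works. When it isn't, encode explicitly with the
# terminal emulator's own encoding before printing:
encoded = text.encode('utf-8')   # b'\xc2\xabutf8\xc2\xbb'
```

Encoding by hand sidesteps Python's guess about the terminal, at the cost of you having to know the right encoding yourself.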
If requestHandler.read()
delivers a UTF-8 encoded stream, then
pageData = requestHandler.read().decode('utf-8')
will decode this into a Unicode string (at which point, as Dietrich Epp correctly noted, the unicode()
call is no longer necessary).
If it throws an exception, then the input is obviously not UTF-8-encoded.
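One way to make that check explicit is a try/except around the decode. This is only a sketch: the windows-1251 fallback is an assumption on my part (a common legacy encoding for Russian pages), not something established by the question.

```python
def to_unicode(raw):
    """Decode a bytestring, trying UTF-8 first.

    Falls back to windows-1251 (a guess -- a widespread legacy
    encoding for Russian text) if the bytes are not valid UTF-8.
    """
    try:
        return raw.decode('utf-8')
    except UnicodeDecodeError:
        return raw.decode('windows-1251')
```

If the fallback also raises, you genuinely don't know the page's encoding and should inspect its HTTP headers or meta tags instead of guessing further.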