PYODBC corrupts utf8 data (reading from MYSQL information_schema DB)

EDIT: I completely reworked this question to reflect my better understanding of the problem.

The pyodbc + MySQL command used to fetch all table names in my DB:

cursor.execute("select table_name from information_schema.tables where
             table_schema='mydbname'")

The result is a list of Unicode strings, with every second character omitted in each string.

The information_schema DB is utf8, although my table names are pure ASCII. Reading from my own DB, which is latin1, works fine. Executing SET character_set_* = 'utf8' does not help (see the sketch below).
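
For reference, the workaround attempts look roughly like this; whether the MySQL Connector/ODBC driver honours a CHARSET keyword in the connection string depends on the driver version, so treat this as a sketch rather than a confirmed fix:

    import pyodbc

    # Attempt 1: request utf8 from the driver in the connection string.
    # The CHARSET keyword is driver-specific; check that your
    # Connector/ODBC version actually supports it.
    conn = pyodbc.connect(
        "DRIVER={MySQL ODBC 3.51 Driver};SERVER=localhost;"
        "DATABASE=mydbname;UID=user;PWD=secret;CHARSET=utf8"
    )

    # Attempt 2: set the session character-set variables after connecting;
    # this is the 'set character_set_* = utf8' step from the question,
    # which did not help in the reported case.
    cursor = conn.cursor()
    cursor.execute("SET NAMES 'utf8'")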

Executing the same query from a C++/ODBC test program works fine.

Do you know how pyodbc works with respect to character encoding? What encoding does it assume when working with a utf8 DB?

I work on Linux with unixODBC, Python 2.6.4, and pyodbc 2.1.7.


The ODBC specification only allows two encodings: ASCII and UCS-2. It is the job of the ODBC driver to convert whatever encoding the database uses into one of those two, but I find most ODBC driver authors don't understand how it is supposed to work.

When a query is executed, pyodbc does not ask for any encoding. It executes the query and then asks the driver for the data type of each column. If the data type is Unicode, it will read the buffer and treat it as UCS-2. If the data type is ASCII, it will read the buffer and treat it as ASCII.

The storage format is supposed to be irrelevant.
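
One way to see which branch pyodbc takes is to inspect the column type codes it reports and the raw repr of the returned values; a small diagnostic sketch (connection details are placeholders):

    import pyodbc

    conn = pyodbc.connect("DSN=mysql_dsn;UID=user;PWD=secret")  # placeholder DSN
    cursor = conn.cursor()
    cursor.execute("select table_name from information_schema.tables "
                   "where table_schema='mydbname'")

    # On Python 2, the type_code in cursor.description is str for ANSI
    # (SQL_C_CHAR) columns and unicode for wide (SQL_C_WCHAR) columns,
    # which shows how pyodbc decided to read the buffer.
    for column in cursor.description:
        print column[0], column[1]

    # repr() reveals whether characters are genuinely dropped or whether
    # the bytes are being misread (e.g. stray NULs from UCS-2 data).
    for row in cursor.fetchall():
        print repr(row[0])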
