Why is executemany slow in Python MySQLdb?

I am developing a program in Python that accesses a MySQL database using MySQLdb. In certain situations, I have to run an INSERT or REPLACE command on many rows. I am currently doing it like this:

db.execute("REPLACE INTO " + table + " (" + ",".join(cols) + ") VALUES" +
    ",".join(["(" + ",".join(["%s"] * len(cols)) + ")"] * len(data)),
    [row[col] for row in data for col in cols])

It works fine, but it is kind of awkward. I was wondering if I could make it easier to read, and I found out about the executemany method. I changed my code to look like this:

db.executemany("REPLACE INTO " + table + " (" + ",".join(cols) + ") " + 
    "VALUES(" + ",".join(["%s"] * len(cols)) + ")",
    [tuple(row[col] for col in cols) for row in data])

It still worked, but it ran a lot slower. In my tests, for relatively small data sets (about 100-200 rows), it ran about 6 times slower. For big data sets (about 13,000 rows, the biggest I am expecting to handle), it ran about 50 times slower. Why is it doing this?

I would really like to simplify my code, but I don't want the big drop in performance. Does anyone know of any way to make it faster?

I am using Python 2.7 and MySQLdb 1.2.3. I tried tinkering with the setinputsizes function, but that didn't seem to do anything. Looking at the MySQLdb source code, setinputsizes appears to be a no-op.


Try lowercasing the word 'values' in your query - this appears to be a bug/regression in MySQL-python 1.2.3.

MySQL-python's implementation of executemany() matches the VALUES clause with a regular expression and then just clones the list of values for each row of data, so you end up executing exactly the same query as with your first approach.

Unfortunately the regular expression lost its case-insensitive flag in that release (subsequently fixed in trunk r622 but never backported to the 1.2 branch) so it degrades to iterating over the data and firing off a query per row.
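
If that's the bug you're hitting, the workaround is a one-word change: write values in lowercase so the now case-sensitive regex still matches and the rows get batched into a single multi-row statement. A minimal sketch, reusing the table/cols/data variables from your question:

# Workaround sketch for MySQL-python 1.2.3: lowercase "values" so the
# driver's case-sensitive regex matches and executemany batches the rows.
db.executemany("REPLACE INTO " + table + " (" + ",".join(cols) + ") " +
    "values(" + ",".join(["%s"] * len(cols)) + ")",
    [tuple(row[col] for col in cols) for row in data])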


Your first example is a single (large) statement that is generated and then sent to the database.

The second example is a much simpler statement that inserts/replaces a single row but is executed multiple times. Each command is sent to the database separately so you have to pay the turnaround time from client to server and back for every row inserted. I would think that this extra latency introduced between the commands is the main reason for the decreased performance of the second example.
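
If you'd rather not depend on the driver batching for you, you can keep the single multi-row statement but split it into chunks, which amortizes the round trips while keeping each statement a manageable size. A minimal sketch, assuming the same db cursor and table/cols/data variables as the question (the chunk size of 1000 is an arbitrary choice):

# Batch rows into multi-row REPLACE statements, one round trip per chunk.
CHUNK = 1000
row_placeholder = "(" + ",".join(["%s"] * len(cols)) + ")"
for start in range(0, len(data), CHUNK):
    chunk = data[start:start + CHUNK]
    db.execute("REPLACE INTO " + table + " (" + ",".join(cols) + ") VALUES " +
        ",".join([row_placeholder] * len(chunk)),
        [row[col] for row in chunk for col in cols])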


I strongly recommend against using executemany in pyodbc as well as in ceODBC; both are slow and contain a lot of bugs.

Instead, consider using execute and constructing the SQL query manually with simple string formatting:

# One round trip: wrap the generated INSERTs in a single transaction.
# (T-SQL style shown; on MySQL use "START TRANSACTION {0} COMMIT".)
transaction = "BEGIN TRANSACTION {0} COMMIT TRANSACTION"

bulkRequest = ""
for i in range(0, 100):
    # column placeholders elided here, as in the original answer
    bulkRequest = bulkRequest + "INSERT INTO ...... {0} {1} {2}; "

cursor.execute(transaction.format(bulkRequest))  # cursor from a ceODBC connection

This implementation is very simple, fast, and reliable.


If you're using mysqlclient-python (a fork of MySQLdb1, and the driver Django itself recommends), there's a case you need to know about:

cursor.executemany silently falls back to calling cursor.execute once per row when your query is of the form:

INSERT INTO testdb.test (type, some_field, status, some_char_field) VALUES (%s, hex(%s), %s, md5(%s));

The driver matches the statement with a Python regex that doesn't support MySQL function calls inside the VALUES clause.

RE_INSERT_VALUES = re.compile(
    r"\s*((?:INSERT|REPLACE)\b.+\bVALUES?\s*)" +
    r"(\(\s*(?:%s|%\(.+\)s)\s*(?:,\s*(?:%s|%\(.+\)s)\s*)*\))" +
    r"(\s*(?:ON DUPLICATE.*)?);?\s*\Z",
    re.IGNORECASE | re.DOTALL)

Link to the relevant GitHub issue: https://github.com/PyMySQL/mysqlclient-python/issues/334
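
A possible workaround (my own sketch, not taken from the issue): reduce the VALUES clause to bare %s placeholders so RE_INSERT_VALUES matches, and compute the function results client-side before calling executemany. The cursor and rows names here are assumptions:

import hashlib

# Sketch: apply HEX() and MD5() equivalents in Python so the VALUES clause
# contains only %s placeholders and the driver can batch the rows.
# Assumes `rows` holds (type, some_field, status, some_char_field) tuples,
# with some_field as bytes and some_char_field as str.
prepared = [
    (type_,
     some_field.hex().upper(),  # MySQL's HEX() returns uppercase digits
     status,
     hashlib.md5(some_char_field.encode()).hexdigest())
    for (type_, some_field, status, some_char_field) in rows
]
cursor.executemany(
    "INSERT INTO testdb.test (type, some_field, status, some_char_field) "
    "VALUES (%s, %s, %s, %s)",
    prepared)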
