Lost connection to MySQL server during query

I have a huge table and I need to process all rows in it. I'm always getting this Lost connection message and I'm not able to reconnect and restore the cursor to the last position it was at. This is basically the code I have here:

import MySQLdb

class DB:
    conn = None

    def connect(self):
        self.conn = MySQLdb.connect('hostname', 'user', '*****', 'some_table',
                                    cursorclass=MySQLdb.cursors.SSCursor)

    def query(self, sql):
        try:
            cursor = self.conn.cursor()
            cursor.execute(sql)
        except (AttributeError, MySQLdb.OperationalError):
            self.connect()
            cursor = self.conn.cursor()
            cursor.execute(sql)
        return cursor

db = DB()
sql = "SELECT bla FROM foo"
data = db.query(sql)

for row in data:
    do_something(row)

But I'm always getting this:

Traceback (most recent call last):
  File "teste.py", line 124, in <module>
    run()
  File "teste.py", line 109, in run
    for row in data:
  File "/usr/lib64/python2.5/site-packages/MySQLdb/cursors.py", line 417, in next
    row = self.fetchone()
  File "/usr/lib64/python2.5/site-packages/MySQLdb/cursors.py", line 388, in fetchone
    r = self._fetch_row(1)
  File "/usr/lib64/python2.5/site-packages/MySQLdb/cursors.py", line 285, in _fetch_row
    return self._result.fetch_row(size, self._fetch_type)
_mysql_exceptions.OperationalError: (2013, 'Lost connection to MySQL server during query')
Exception _mysql_exceptions.OperationalError: (2013, 'Lost connection to MySQL server during query') in <bound method SSCursor.__del__ of <MySQLdb.cursors.SSCursor object at 0x7f7e3c8da410>> ignored

Do you have any idea?


The MySQL docs have a whole page dedicated to this error: http://dev.mysql.com/doc/refman/5.0/en/gone-away.html

Of note:

  • You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section B.5.2.10, “Packet too large”.

  • You can get more information about the lost connections by starting mysqld with the --log-warnings=2 option. This logs some of the disconnected errors in the hostname.err file.


There are three ways to enlarge the max_allowed_packet of the MySQL server:

  1. Change max_allowed_packet=64M in the file /etc/mysql/my.cnf on the MySQL server machine and restart the server.
  2. Execute this SQL on the MySQL server: set global max_allowed_packet=67108864;
  3. Execute the SQL from Python after connecting to MySQL (see the sketch after this list):
connection.execute('set max_allowed_packet=67108864')
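
As a concrete sketch of option 3 using MySQLdb (hostname, credentials and database name are placeholders; note that max_allowed_packet is read-only at the session level on recent servers, so the GLOBAL form, which requires the SUPER privilege, may be needed, as a later answer here also observes):

import MySQLdb

conn = MySQLdb.connect('hostname', 'user', '*****', 'some_db')
cur = conn.cursor()
cur.execute('SET GLOBAL max_allowed_packet=67108864')  # 64MB
cur.close()
conn.close()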


You can also encounter this error with applications that fork child processes, all of which try to use the same connection to the MySQL server. This can be avoided by using a separate connection for each child process.

Forks might hit you here, though beware: that was not the case for this question. A minimal sketch of the separate-connection-per-child pattern follows.
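
For example (host, credentials and the foo table are placeholders), each worker opens and closes its own connection instead of inheriting the parent's:

import multiprocessing

import MySQLdb

def worker(job_id):
    # Open a fresh connection inside the child; never reuse the parent's.
    conn = MySQLdb.connect('hostname', 'user', '*****', 'some_db')
    try:
        cur = conn.cursor()
        cur.execute("SELECT bla FROM foo WHERE id = %s", (job_id,))
        return cur.fetchone()
    finally:
        conn.close()

if __name__ == '__main__':
    with multiprocessing.Pool(4) as pool:
        print(pool.map(worker, [1, 2, 3]))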


Make sure you close the cursor before the connection. I resolved my problem with this:

if cur and con:                        
    cur.close() 
    con.close() 
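
One way to get that ordering automatically is contextlib.closing: nested context managers exit innermost-first, so the cursor is always closed before the connection. A minimal sketch, where con is any DB-API connection:

from contextlib import closing

with closing(con) as connection:
    with closing(connection.cursor()) as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())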


You need to increase the timeout on your connection. If you can't or don't want to do that for some reason, you could try fetching the whole result set up front:

data = db.query(sql).fetchall()

This fetches all the results immediately, so your connection won't time out halfway through iterating over them. The trade-off is that the entire result set is held in memory, which may not be viable for a huge table.
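
If the table is too large to hold in memory, an alternative sketch (assuming foo has an integer primary key id, which the question doesn't show) is to page through it in keyset chunks, so a dropped connection only costs the current chunk and the scan can resume from last_id:

def process_all(db, chunk_size=10000):
    last_id = 0
    while True:
        # query() reconnects automatically on OperationalError
        cursor = db.query("SELECT id, bla FROM foo WHERE id > %d "
                          "ORDER BY id LIMIT %d" % (last_id, chunk_size))
        rows = cursor.fetchall()
        if not rows:
            break
        for row in rows:
            do_something(row)
            last_id = row[0]  # resume point survives a reconnect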


In my case, the reason for the

ERROR 2013 (HY000): Lost connection to MySQL server during query

error was that parts of my table were corrupted. I was also not able to mysqldump my table, because some rows broke it. The error was not related to any of the memory issues etc. mentioned above.

The nice thing was that MySQL returned me the row number which was the first what failed. It was something like

mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table mytable at row: 12723

The solution was to copy the data into a new table. In my case I lost 10 rows of data, because I had to skip the corrupted rows. First I created a "tmp" table with the schema of the old one. SHOW CREATE TABLE is your friend here. E.g.

SHOW CREATE TABLE mydatabase.mytable;

With that I created the new table. Let's call it mytabletmp. Then copy the rows you are able to copy via e.g.

insert into mytabletmp select * from mytable where id < 12723;
insert into mytabletmp select * from mytable where id > 12733;

After that, drop the old table and rename the tmp table to the old table name (see the sketch below).
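
A sketch of those last two steps, using the names above:

drop table mytable;
rename table mytabletmp to mytable;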

There is also some nice information from Peter regarding this problem.


This was happening to me with MariaDB because I made a varchar(255) column a unique key. I guess that's too heavy for a unique index, as the insert was timing out.


Multiprocessing and Django DB don't play well together.

I ended up closing the Django DB connection as the first thing in the new process, so that the child holds no reference to the connection used by the parent.

from multiprocessing import Pool

def _etl_(x):
    # Close the connection inherited from the parent; Django will
    # transparently open a fresh one on the next query in this process.
    from django.db import connection
    connection.close()
    print(x)

if __name__ == '__main__':
    multi_core_arg = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    n_cpu = 4
    pool = Pool(n_cpu)
    pool.map(_etl_, multi_core_arg)
    pool.close()
    pool.join()
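
If the parent holds connections to more than one database, django.db.connections.close_all() (available in modern Django releases; an assumption about your Django version) closes them all in one call:

from django.db import connections

connections.close_all()  # close every inherited connection in this process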

OR

have Process.start() call a target function that begins with the same connection.close() call.

Others suggest using

from multiprocessing.dummy import Pool as ThreadPool

It solved my (2013, Lost connection) problem, but threads share the GIL; for I/O-bound work that's fine, since the GIL is released while waiting on I/O and reacquired when the I/O finishes.

Comparatively, Process spawns a group of workers that must communicate with each other, which may be slower.

I recommend you time it. A side tip is to use joblib, which is backed by the scikit-learn project; some performance results show it outperforming the native Pool(), although it leaves the responsibility on the coder to verify the true runtime cost. A sketch follows.
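
A minimal joblib sketch of that tip (reusing _etl_ and multi_core_arg from the snippet above; joblib defaults to a process-based backend):

from joblib import Parallel, delayed

results = Parallel(n_jobs=4)(delayed(_etl_)(x) for x in multi_core_arg)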


Set the max_allowed_packet setting to 64M and restart your MySQL server. If that did not fix your issue, the problem may lie elsewhere.

I have a multi-threaded PHP CLI application that runs simultaneous queries, and I recently noticed this issue. It's now obvious to me that the MySQL server considers all connections from the same IP as a 'single' connection and therefore drops all of them whenever a single query finishes.

I wonder, though, whether there is a way to make MySQL allow, say, 100 connections from the same IP and consider each connection as an individual one.


I encountered similar problems too. In my case it was solved by getting the cursor in this way:

cursor = self.conn.cursor(buffered=True)


The same as @imxylz, but I had to use mycursor.execute('set GLOBAL max_allowed_packet=67108864'), as I got a read-only error without the GLOBAL parameter. This was with mysql.connector.__version__ 8.0.16.


This can also happen if someone or something kills your connection using the KILL command.


This happened to me when I tried to update a table whose size on disk was bigger than the available disk space. The solution for me was simply to increase the available disk space.


In my case, I ran into this problem when sourcing an SQL dump which had placed the tables in the wrong order. The CREATE in question included a CONSTRAINT ... REFERENCES that referenced a table that had not been created yet.

I located the table in question, and moved its CREATE statement to above the offending one, and the error disappeared.

The other error I encountered relating to this faulty dump was ERROR 1005 / errno: 150 ("Can't create table"), again a matter of tables being created out of order.


This happened to me when one of my CONSTRAINT names was identical to another CONSTRAINT name.

Changing my CONSTRAINT name solved this.


I was running into the same problem. Because of some other issues I had tried adding a cnx.close() line to my other functions. Instead, I removed all these extraneous closes and set up my class like this:

class DBase:

    config = {
        'user': 'root',
        'password': '',
        'host': '127.0.0.1',
        'database': 'bio',
        'raise_on_warnings': True,
        'use_pure': False,
    }

    def __init__(self):
        import mysql.connector
        self.cnx = mysql.connector.connect(**self.config)
        self.cur = self.cnx.cursor(buffered=True)
        print(self.cnx)

    def __enter__(self):
        # Return this instance (returning a new DBase() here would leak
        # a second connection that __exit__ never closes).
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.cnx.commit()
        if self.cnx:
            self.cnx.close()

Any function that is called within this class connects, commits, and closes. For example:
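
A hedged usage sketch (the table and column names are made up; the 'bio' database comes from the config above):

with DBase() as db:
    db.cur.execute("SELECT name FROM organism LIMIT 1")
    print(db.cur.fetchone())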


I was getting this error, along with a "broken pipe", when I tried to do bulk inserts with millions of records. I ended up solving it by chunking my data into smaller batches and then running an executemany command with the MySQL cursor for each of the inserts I needed to do. This solved the problem and didn't seem to affect the performance in any noticeable way.

eg.

CHUNK_SIZE = 10000  # rows per batch; size it to stay under max_allowed_packet

def chunks(data):
    for i in range(0, len(data), CHUNK_SIZE):
        yield data[i:i + CHUNK_SIZE]


def bulk_import(update_list):
    for batch in chunks(update_list):
        cursor.executemany(sql, batch)  # your INSERT ... VALUES (%s, ...) here


You can see my answer to a similar problem here:

https://stackoverflow.com/a/69610550/16647254

It uses a lock to solve the problem:

lock.acquire()
mysqlhelper.getconn()
result_db_num = mysqlhelper.update(sql, [businessid, md5_id])
mysqlhelper.end()
mysqlhelper.dispose()
lock.release()
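
For context, mysqlhelper is the answerer's own helper object, and the lock is presumably a standard threading.Lock created once and shared by all threads, e.g.:

import threading

lock = threading.Lock()  # shared, so only one thread uses the connection at a time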


I had the same problem and wrestled with it for many hours, experimenting with LOTS of different solutions. What finally worked for me was this: the code initially opens a pool of 10 connections, then mysql.connector serves up connections from the pool with get_connection().

class DB:
    connection = None

    def __init__(self):
        self.conn()

    def conn(self):
        try:
            if not self.connection:
                self.connection = mysql.connector.pooling.MySQLConnectionPool(user='web', password='mypasswd',
                                                                              host='prod', database='myelection',
                                                                              autocommit=True, pool_size=10,
                                                                              buffered=True)
            return self.connection.get_connection()

        except mysql.connector.errors.InterfaceError as err:
            print("can't connect to mysql ", err)

        except mysql.connector.DatabaseError as err:
            print("database error: ", err)

        except Exception as err:
            print("unknown db exception: ", err)

        print("exiting from conn() with error.")
        exit()

    # Make sure your class methods open, then automatically 
    # close the connections and the cursors.
    def new_polling_place(self, pp_name):
        #  cur = self.conn().cursor(dictionary=True)
        with self.conn() as con:
            with con.cursor() as cur:
                cur.execute("INSERT INTO pollingplace (pp_name) VALUES (%s)", [pp_name])
                return cur.lastrowid
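
A usage sketch under the definitions above (the polling-place name is a made-up value):

db = DB()
row_id = db.new_polling_place('Main Street Library')
print(row_id)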


This very same situation happened to me while working with MariaDB, SQLAlchemy and pandas, and just like @iamapotatoe above I also created a function to break the dataframe up into chunks and port them over to the SQL database bit by bit. This can be used especially if changing max_allowed_packet in the MySQL config doesn't work for you.

def load_large_df(table_name, df_to_load, batch_size, engine):
    df_to_load = df_to_load.dropna(how='all')
    with engine.connect() as conn:
        conn.execute(f"DROP TABLE IF EXISTS {table_name}")
        rows = df_to_load.shape[0]
        # batch_size is the number of batches; batch is rows per batch.
        # max(1, ...) avoids an infinite loop when rows < batch_size.
        batch = max(1, rows // batch_size)

        strt = 0
        while strt < rows:
            df = df_to_load[strt:].head(batch)
            df.to_sql(table_name, con=conn, if_exists='append')
            strt += batch
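
A hedged usage example (the engine URL, dataframe and table name are all assumptions; note that batch_size is the number of batches, not rows per batch):

from sqlalchemy import create_engine
import pandas as pd

engine = create_engine('mysql+pymysql://user:*****@hostname/some_db')
df = pd.DataFrame({'bla': range(100000)})
load_large_df('foo_copy', df, 20, engine)  # writes in roughly 20 chunks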


Very simple to solve: go to the control panel of your phpMyAdmin and edit the .ini config file you see there. Look for port 3306; if that's not the port you are using for your connection, change 3306 to the port you are using. On your login screen, put localhost for your server, plus your port if it's not the default (if you did not change the file name my.ini in the SQL configuration, leave it as is). Then enter your username (root, or the one you created) and your password (1234, or the one you assigned). If you are connecting locally, do not check the URL option. Finally, type the name of the database you want to edit. Note: once you are connected, you will see the list of databases you have on your server or the server you are connecting to.
