
Loading one table in MySQL is ridiculously slow

https://www.devze.com 2023-03-02 12:59 Source: web
For clarity: all other tables in the DB work as expected and load ~2 million rows in a fraction of a second. The one table of just ~600 rows takes 10+ minutes to load in Navicat.

I can't think of any possible reason for this. There are just 4 columns. One of them is a large text field, but I've worked with large text fields before and they've never been this slow.

Running EXPLAIN SELECT * FROM parser_queue I get:

 id  select_type  table         type  possible_keys  key   key_len  ref   rows  Extra
 1   SIMPLE       parser_queue  ALL   NULL           NULL  NULL     NULL  658

The profile tells me that 453 seconds are spent in 'Sending data'. I've also got this in the 'Status' tab. I don't understand most of it, but these numbers are much higher than for my other tables.

Bytes_received            31
Bytes_sent                32265951
Com_select                1
Created_tmp_files         16
Handler_read_rnd_next     659
Key_read_requests         9018487
Key_reads                 3928
Key_write_requests        310431
Key_writes                4290
Qcache_hits               135077
Qcache_inserts            14289
Qcache_lowmem_prunes      4133
Qcache_queries_in_cache   983
Questions                 1
Select_scan               1
Table_locks_immediate     31514

The data stored in the text field is about 12,000 characters on average. There is a primary-key auto-increment INT id field, a TINYINT status field, a TEXT field, and a TIMESTAMP field with ON UPDATE CURRENT_TIMESTAMP.


OK, I will try out both answers, but I can answer the questions quickly first:

Primary key on the ID field is the only key. This table is used for queuing, with ~50 records added/deleted per hour, but I only created it yesterday. Could it become corrupted in such a short time?

It is MyISAM


More work trying to isolate the problem:

REPAIR TABLE did nothing. OPTIMIZE TABLE did nothing. I created a temp table; queries were about 50% slower on the temp table.

Deleted the table and rebuilt it. SELECT * takes 18 seconds with just 4 rows.

Here is the SQL I used to create the table:

CREATE TABLE IF NOT EXISTS `parser_queue` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `status` tinyint(4) NOT NULL DEFAULT '1',
  `data` text NOT NULL,
  `last_updated` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM;

Stranger still, everything seems fine on my local box. The slowness only happens on the dev site.

For clarity: there are more than 100 tables on the dev site and this is the only one that is funky.


OK I have disabled all cron jobs which use this table. SHOW PROCESSLIST does not reveal any locks on the table.

Changing the engine to InnoDB did not produce any significant improvement (86 seconds vs 94 for MyISAM).

Any other ideas? . . .

Running SHOW PROCESSLIST during the query reveals it spends most of its time in the 'Writing to net' state.


If you suspect corruption somewhere, you can try either (or both) of the following:

CREATE TABLE temp SELECT * FROM parser_queue;

This will create a new table "identical" to the previous one, except it will be rebuilt from scratch. Alternatively (or maybe after you've made a copy), you can try:

REPAIR TABLE parser_queue;

You may also want to try optimizing the table; it might have gotten fragmented since you are using it as a queue.

OPTIMIZE TABLE parser_queue;

You can determine whether the table is fragmented by running SHOW TABLE STATUS LIKE 'parser_queue' and checking whether the Data_free column shows a high number.
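As a concrete sketch (note that LIKE here matches the table name, and Data_free is a column in the output rather than something you pass to LIKE):

```sql
-- Match on the table name; Data_free in the result is the number of
-- allocated-but-unused bytes, so a large value suggests fragmentation.
SHOW TABLE STATUS LIKE 'parser_queue';

-- The same figure is also exposed via information_schema:
SELECT DATA_FREE
FROM information_schema.TABLES
WHERE TABLE_NAME = 'parser_queue';
```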

Update

You say you are storing gzcompressed data in the TEXT column. Try changing the TEXT column to BLOB instead, which is meant to hold binary data such as compressed text.
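A sketch of that change, using the table from the question (ALTER rewrites the whole table, so back it up first; MEDIUMBLOB is only a suggestion if payloads might outgrow BLOB's 64 KB cap):

```sql
-- BLOB stores raw bytes with no character-set conversion, which suits
-- gzcompress output; a BLOB value holds up to 64 KB.
ALTER TABLE parser_queue MODIFY `data` BLOB NOT NULL;

-- If compressed payloads could exceed 64 KB, use MEDIUMBLOB (up to 16 MB):
-- ALTER TABLE parser_queue MODIFY `data` MEDIUMBLOB NOT NULL;
```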


The name gives away that you are using the table for queueing (lots of inserts and deletes, maybe?). Maybe you have had the table a while and it's heavily fragmented. If my assumptions are correct, try OPTIMIZE TABLE parser_queue;

You can read more about this in the manual: http://dev.mysql.com/doc/refman/5.1/en/optimize-table.html


Right, the problem seems to have been only this: the text fields were too huge.

Running

 SELECT id, status, last_updated FROM parser_queue

takes less time than

 SELECT data FROM parser_queue WHERE id = 6

Since all the queries I will actually run return only one row, the slowdown will not affect me much. I'm already using gzcompress on the stored data, so I don't think there is much more I could do anyway.
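If you want to confirm that payload size is the culprit without pulling the blobs over the wire, LENGTH() is computed server-side (a sketch against the schema above):

```sql
-- Only a few integers cross the network; data_bytes shows how large
-- each row's stored (compressed) payload actually is.
SELECT id, status, LENGTH(`data`) AS data_bytes
FROM parser_queue
ORDER BY data_bytes DESC;
```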
