MySQL: improve SELECT speed

https://www.devze.com · 2023-02-06 12:43 · Source: web
I'm currently trying to improve the speed of SELECTS for a MySQL table and would appreciate any suggestions on ways to improve it.

We have over 300 million records in the table, which has the structure tag, date, value. The primary key is a composite key of tag and date. The table contains about 600 unique tags; most tags have an average of around 400,000 rows, but counts range from 2,000 to over 11 million rows.

The queries run against the table are:

  SELECT date,
         value 
    FROM table 
   WHERE tag = "a" 
     AND date BETWEEN 'x' and 'y' 
ORDER BY date

.... and there are very few INSERTs, if any.

I have tried partitioning the data by tag into various numbers of partitions, but this seems to have had little effect on speed.
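For reference, a hash-partitioning sketch of the kind described (illustrative only: the question doesn't state the exact scheme tried, and the table name, column types, and partition count are assumptions):

```sql
-- Illustrative only: one way to partition by tag, as the question describes.
-- With the composite PK (tag, date), a tag-equality range scan already touches
-- one contiguous index range, which may explain why partitioning adds little.
CREATE TABLE tag_date_value_part (
  tag   VARCHAR(32)  NOT NULL,
  date  DATETIME     NOT NULL,
  value INT UNSIGNED NOT NULL,
  PRIMARY KEY (tag, date)   -- must include the partitioning column (tag)
)
PARTITION BY KEY (tag)
PARTITIONS 16;
```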


take time to read my answer here (it has similar volumes to yours):

500 million rows, 15 million row range scan in 0.02 seconds.

MySQL and NoSQL: Help me to choose the right one

then amend your table engine to InnoDB as follows:

create table tag_date_value
(
tag_id smallint unsigned not null, -- i prefer ints to chars
tag_date datetime not null, -- can we make this date vs datetime ?
value int unsigned not null default 0, -- or whatever datatype you require
primary key (tag_id, tag_date) -- clustered composite PK
)
engine=innodb;

you might consider the following as the primary key instead:

primary key (tag_id, tag_date, value) -- adding value saves some I/O

but only if value isn't some LARGE varchar type!

query as before:

select
 tag_date, 
 value
from
 tag_date_value
where
 tag_id = 1 and
 tag_date between 'x' and 'y'
order by
 tag_date;

hope this helps :)

EDIT

oh, forgot to mention - don't use ALTER TABLE to change the engine type from MyISAM to InnoDB; rather, dump the data out into CSV files and re-import it into a newly created, empty InnoDB table.

note I'm ordering the data during the export process - clustered indexes are the KEY!

Export

select * into outfile 'tag_dat_value_001.dat' 
fields terminated by '|' optionally enclosed by '"'
lines terminated by '\r\n'
from
 tag_date_value
where
 tag_id between 1 and 50
order by
 tag_id, tag_date;

select * into outfile 'tag_dat_value_002.dat' 
fields terminated by '|' optionally enclosed by '"'
lines terminated by '\r\n'
from
 tag_date_value
where
 tag_id between 51 and 100
order by
 tag_id, tag_date;

-- etc...

Import

import back into the table in the correct order!

start transaction;

load data infile 'tag_dat_value_001.dat' 
into table tag_date_value
fields terminated by '|' optionally enclosed by '"'
lines terminated by '\r\n'
(
tag_id,
tag_date,
value
);

commit;

-- etc...


What is the cardinality of the date field (that is, how many different values appear in that field)? If the date BETWEEN 'x' AND 'y' is more limiting than the tag = 'a' part of the WHERE clause, try making your primary key (date, tag) instead of (tag, date), allowing date to be used as an indexed value.

Also, be careful how you specify 'x' and 'y' in your WHERE clause. There are some circumstances in which MySQL will cast each date field to match the non-date implied type of the values you compare to.
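To sidestep that implicit casting, one option is to pass unambiguous DATETIME literals (a sketch; the bounds shown here are made-up stand-ins for the question's 'x' and 'y'):

```sql
-- Unambiguous 'YYYY-MM-DD HH:MM:SS' literals keep the comparison in the
-- DATETIME domain, so MySQL compares values directly instead of casting
-- the indexed column and defeating the index.
SELECT date, value
FROM   table
WHERE  tag = 'a'
  AND  date BETWEEN '2023-01-01 00:00:00' AND '2023-01-31 23:59:59'
ORDER  BY date;
```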


I would do two things - first, throw an index on there around tag and date as suggested above:

alter table table add index (tag, date);

Next, break your query into a main query and a sub-select, narrowing your results down before you get into the main query:

SELECT date, value
FROM table
WHERE date BETWEEN 'x' and 'y'
AND tag IN ( SELECT tag FROM table WHERE tag = 'a' )
ORDER BY date


Your query is asking for a few things - and with that high number of rows, the shape of the data can change what the best approach is.

   SELECT date, value 
   FROM table 
   WHERE tag = "a" 
     AND date BETWEEN 'x' and 'y' 
   ORDER BY date

There are a few things that can slow down this select query.

  1. A very large result set that has to be sorted (ORDER BY).
  2. A very large result set. If tag and date are in the index (and let's assume that's as good as it gets), every result row has to leave the index to look up the value field. Think of this like needing the first sentence of each chapter of a book. If you only needed to know the chapter names, easy - you can get them from the table of contents, but since you need the first sentence you have to go to the actual chapter. In certain cases, the optimizer may choose just to flip through the entire book (a table scan, in query-plan lingo) to get those first sentences.
  3. Filtering on the wrong WHERE clause first. If the index is in the order tag, date, then tag should (for the majority of your queries) be the more selective of the two columns. So basically, unless you have more tags than dates (or maybe than dates in a typical date range), dates should be the first of the two columns in your index.

A couple of recommendations:

  1. Consider whether it's possible to truncate some of that data if it's too old to care about most of the time.
  2. Try playing with your current index - i.e. change the order of the items in it.
  3. Do away with your current index and replace it with a covering index (one that has all 3 fields in it).
  4. Run some EXPLAINs and make sure it's using your index at all.
  5. Switch to some other data store (MongoDB?) or otherwise ensure this monster table is kept as much in memory as possible.
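For point 4, a minimal sketch of what to run and what to look at (table and column names are the question's placeholders; the columns mentioned in the comments are standard EXPLAIN output fields):

```sql
-- Check which index (if any) the optimizer picks for the real query.
EXPLAIN
SELECT date, value
FROM   table
WHERE  tag = 'a'
  AND  date BETWEEN 'x' AND 'y'
ORDER  BY date;
-- In the output, look at: `key` (should name your composite index),
-- `type` (ideally `range`, not `ALL`), and `rows` (estimated rows scanned).
```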


I'd say your only chance to further improve it is a covering index with all three columns (tag, date, value). That avoids the table access.

I don't think that partitioning can help with that.
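A sketch of that covering index (the index name is made up, and this assumes value is a reasonably small column, since wide columns inflate the index):

```sql
-- All three columns in one secondary index: the query can then be answered
-- from the index alone (look for "Using index" in EXPLAIN's Extra column).
ALTER TABLE table ADD INDEX idx_tag_date_value (tag, date, value);
```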


I would guess that adding an index on (tag, date) would help:

alter table table add index (tag, date);

Please post the result of an explain on this query (EXPLAIN SELECT date, value FROM ......)


I think that the value column is at the bottom of your performance issues. It is not part of the index, so every row requires a table access. Further, I think the ORDER BY is unlikely to impact performance so severely, since it is part of your index and the rows should already come back ordered.

I will support my suspicion about the value column with the fact that partitioning does not really reduce the execution time of the query. Could you execute the query without value and share some results, as well as the EXPLAIN? Do you really need it for each row, and what kind of column is it?

Cheers!


Try inserting just the needed dates into a temporary table, then finishing with a select on the temporary table for the tag and ordering.

CREATE temporary table foo
SELECT tag, date, value 
FROM table 
WHERE date BETWEEN 'x' and 'y';

ALTER TABLE foo ADD INDEX idx_tag ( tag );

SELECT date, value 
FROM foo 
WHERE tag = "a" 
ORDER BY date;

If that doesn't work, try creating foo off the tag selection instead.

CREATE temporary table foo
SELECT date, value 
FROM table 
WHERE tag = "a";    

ALTER TABLE foo ADD INDEX idx_date ( date );

SELECT date, value 
FROM foo 
WHERE date BETWEEN 'x' and 'y' 
ORDER BY date;
