I have a query, used in one of our reporting systems, that sometimes runs in under a second and other times takes 1 to 10 minutes.
Here's the entry from the slow query log:
# Query_time: 543 Lock_time: 0 Rows_sent: 0 Rows_examined: 124948974
use statsdb;
SELECT count(distinct Visits.visitorid) as 'uniques'
FROM Visits,Visitors
WHERE Visits.visitorid=Visitors.visitorid
and candidateid in (32)
and visittime>=1275721200 and visittime<=1275807599
and (omit=0 or omit>=1275807599)
AND Visitors.segmentid=9
AND Visits.visitorid NOT IN
(SELECT Visits.visitorid
FROM Visits,Visitors
WHERE Visits.visitorid=Visitors.visitorid
and candidateid in (32)
and visittime<1275721200
and (omit=0 or omit>=1275807599)
AND Visitors.segmentid=9);
It's basically counting unique visitors: it counts the visitors for today and then subtracts those that have been here before. If you know of a better way to do this, let me know.
I just don't understand why it can sometimes be so quick and other times take so long, even with the exact same query under the same server load.
Here's the EXPLAIN on this query. As you can see it's using the indexes I've set up:
+----+--------------------+----------+--------+-------------------------------+---------------------+---------+--------------------------+-------+--------------------------+
| id | select_type        | table    | type   | possible_keys                 | key                 | key_len | ref                      | rows  | Extra                    |
+----+--------------------+----------+--------+-------------------------------+---------------------+---------+--------------------------+-------+--------------------------+
|  1 | PRIMARY            | Visits   | range  | visittime_visitorid,visitorid | visittime_visitorid |       4 | NULL                     | 82500 | Using where; Using index |
|  1 | PRIMARY            | Visitors | eq_ref | PRIMARY,cand_visitor_omit     | PRIMARY             |       8 | statsdb.Visits.visitorid |     1 | Using where              |
|  2 | DEPENDENT SUBQUERY | Visits   | ref    | visittime_visitorid,visitorid | visitorid           |       8 | func                     |     1 | Using where              |
|  2 | DEPENDENT SUBQUERY | Visitors | eq_ref | PRIMARY,cand_visitor_omit     | PRIMARY             |       8 | statsdb.Visits.visitorid |     1 | Using where              |
+----+--------------------+----------+--------+-------------------------------+---------------------+---------+--------------------------+-------+--------------------------+
I tried to optimize the query a few weeks ago and came up with a variation that consistently took about 2 seconds, but in practice it ended up being slower overall, since 90% of the time the old query returned much more quickly. Two seconds per query is too long because we are calling the query up to 50 times per page load, with different time periods.
Could the quick behavior be due to the result being saved in the query cache? I tried running 'RESET QUERY CACHE' and 'FLUSH TABLES' between my benchmark tests and I was still getting quick results most of the time.
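One way to keep the query cache out of a benchmark entirely (supported in MySQL 4.1) is to add SQL_NO_CACHE to the SELECT. A trimmed-down sketch against the tables above, leaving out the NOT IN part for brevity:
-- With SQL_NO_CACHE the result is not stored in the query cache, and the
-- altered query text won't match an existing cache entry, so each run is
-- effectively cold (sketch only; the NOT IN subquery is omitted here).
SELECT SQL_NO_CACHE COUNT(DISTINCT Visits.visitorid) AS uniques
FROM Visits
JOIN Visitors ON Visitors.visitorid = Visits.visitorid
WHERE Visitors.candidateid = 32
AND Visitors.segmentid = 9
AND Visits.visittime BETWEEN 1275721200 AND 1275807599
AND (Visitors.omit = 0 OR Visitors.omit >= 1275807599);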
Note: last night while running the query I got an error: Unable to save result set. My initial research shows that may be due to a corrupt table that needs repair. Could this be the reason for the behavior I'm seeing?
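If corruption is the suspicion, CHECK TABLE works on InnoDB tables and will flag problems; note that REPAIR TABLE does not support InnoDB, so a corrupted InnoDB table is normally fixed by dumping and reloading it. For example:
-- Ask the server to verify the two tables used by the query.
CHECK TABLE Visits, Visitors;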
In case you want server info:
- Accessing via PHP 4.4.4 / MySQL 4.1.22
- All tables are InnoDB
- We run optimize table on all tables weekly
- The sum of both the tables used in the query is 500 MB
MySQL config:
key_buffer = 350M
max_allowed_packet = 16M
thread_stack = 128K
sort_buffer = 14M
read_buffer = 1M
bulk_insert_buffer_size = 400M
set-variable = max_connections=150
query_cache_limit = 1048576
query_cache_size = 50777216
query_cache_type = 1
tmp_table_size = 203554432
table_cache = 120
thread_cache_size = 4
wait_timeout = 28800
skip-external-locking
innodb_file_per_table
innodb_buffer_pool_size = 3512M
innodb_log_file_size=100M
innodb_log_buffer_size=4M
Here's the structure, Bill:
CREATE TABLE `Visitors` (
`visitorid` bigint(20) unsigned NOT NULL auto_increment,
`ip` int(11) unsigned default '0',
`candidateid` int(11) unsigned NOT NULL default '0',
`omit` int(11) unsigned NOT NULL default '0',
`segmentid` int(10) unsigned NOT NULL default '0',
PRIMARY KEY (`visitorid`),
KEY `cand_visitor_omit` (`candidateid`,`visitorid`,`omit`),
KEY `ip_omit` (`ip`,`omit`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2837988 ;
CREATE TABLE `Visits` (
`visitid` bigint(20) unsigned NOT NULL auto_increment,
`visitorid` bigint(20) unsigned NOT NULL default '0',
`visittime` int(11) unsigned NOT NULL default '0',
`converted` tinyint(4) NOT NULL default '0',
`superconverted` tinyint(4) NOT NULL default '0',
`clickedotheroffer` tinyint(4) NOT NULL default '0',
PRIMARY KEY (`visitid`),
KEY `visittime_visitorid` (`visittime`,`visitorid`),
KEY `visitorid` (`visitorid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3912081 ;
The answer from @OMG Ponies is close to what I was thinking when I asked for your table definitions. Basically, you only need one instance of Visitors in this query.
A given Visitor who has some matching visits in the time period and no matching visits earlier than the time period should be counted:
SELECT COUNT(DISTINCT v.visitorid) AS unique_visitor_count
FROM Visitors v
JOIN Visits current ON v.visitorid = current.visitorid
AND current.visittime BETWEEN 1275721200 AND 1275807599
LEFT JOIN Visits earlier ON v.visitorid = earlier.visitorid
AND earlier.visittime < 1275721200
WHERE v.candidateid IN (32)
AND v.segmentid = 9
AND v.omit NOT BETWEEN 1 AND 1275807598
AND earlier.visitorid IS NULL;
You might benefit from an index on Visitors(candidateid, segmentid, omit), since those columns are used in your WHERE clause. You could also try an index on Visitors(visitorid, candidateid, segmentid, omit).
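For example (the index names here are just placeholders, pick whatever fits your conventions):
-- Candidate indexes on Visitors matching the columns in the WHERE clause.
CREATE INDEX cand_segment_omit ON Visitors (candidateid, segmentid, omit);
CREATE INDEX visitorid_cand_segment_omit ON Visitors (visitorid, candidateid, segmentid, omit);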
Basically, if you can get EXPLAIN to say "Using index", it means the query is getting all the data it needs from the index data structure and won't have to read the table rows at all.
I tried out the query above with a few different indexes. The indexes I suggested above didn't help; it still wants to use the cand_visitor_omit index for Visitors. But I changed the visittime_visitorid index on Visits by reversing the columns:
CREATE INDEX visitorid_visittime ON Visits(visitorid, visittime);
This got EXPLAIN to show it using this as a covering index for both joins to Visits (see "Using index" in the Extra column on the right):
+----+-------------+---------+------+---------------------------+---------------------+---------+------------------+------+--------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------+------+---------------------------+---------------------+---------+------------------+------+--------------------------------------+
| 1 | SIMPLE | v | ref | PRIMARY,cand_visitor_omit | cand_visitor_omit | 4 | const | 1 | Using where |
| 1 | SIMPLE | current | ref | visitorid_visittime | visitorid_visittime | 8 | test.v.visitorid | 2 | Using where; Using index |
| 1 | SIMPLE | earlier | ref | visitorid_visittime | visitorid_visittime | 8 | test.v.visitorid | 2 | Using where; Using index; Not exists |
+----+-------------+---------+------+---------------------------+---------------------+---------+------------------+------+--------------------------------------+
Adding the index in this way also makes your single-column index on Visits(visitorid) redundant, so you can drop that one.
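For example:
-- The new (visitorid, visittime) index covers lookups on visitorid alone,
-- so the old single-column index adds nothing.
ALTER TABLE Visits DROP INDEX visitorid;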
Here's my re-write of your query:
SELECT COUNT(DISTINCT v.visitorid) AS uniques
FROM Visits v
JOIN Visitors vv ON vv.visitorid = v.visitorid
AND vv.segmentid = 9
LEFT JOIN Visits pv ON pv.visitorid = v.visitorid
AND pv.visittime < 1275721200
WHERE pv.visitorid IS NULL
AND vv.candidateid = 32
AND v.visittime BETWEEN 1275721200 AND 1275807599
AND vv.omit NOT BETWEEN 1 AND 1275807598
Two seconds per query is too long because we are calling the query up to 50 times per page load, with different time periods.
Why on earth would you run the same query that many times per page? It should be run once; you need to define a GROUP BY clause appropriate for the data so that it returns a count for each of those periods. My assumption is that the grouping should be by candidateid
...
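A hedged sketch of that single grouped query, assuming (as above) that the calls differ by candidateid rather than by time period; the time range is just the one from the question, and the anti-join shape follows the rewrites elsewhere in this thread:
-- One pass that returns a unique-visitor count per candidateid
-- instead of running the query once per candidate (sketch only).
SELECT vv.candidateid, COUNT(DISTINCT v.visitorid) AS uniques
FROM Visits v
JOIN Visitors vv ON vv.visitorid = v.visitorid
AND vv.segmentid = 9
LEFT JOIN Visits pv ON pv.visitorid = v.visitorid
AND pv.visittime < 1275721200
WHERE pv.visitorid IS NULL
AND v.visittime BETWEEN 1275721200 AND 1275807599
AND vv.omit NOT BETWEEN 1 AND 1275807598
GROUP BY vv.candidateid;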
A query like this may perform better:
SELECT count(distinct v.visitorid) as 'uniques'
FROM Visits v
inner join Visitors vr on v.visitorid = vr.visitorid
left outer join (
SELECT v1.visitorid
FROM Visits v1
inner join Visitors v2 on v1.visitorid = v2.visitorid
WHERE candidateid = 32
and visittime < 1275721200
and (omit = 0 or omit >= 1275807599)
and v2.segmentid = 9
) vo on v.visitorid = vo.visitorid
where candidateid = 32
and visittime between 1275721200 and 1275807599
and (omit = 0 or omit >= 1275807599)
and vr.segmentid = 9
and vo.visitorid is null
If your question is "why is it fast sometimes?" then I'm pretty sure the answer is the query cache. The first time you run this query it takes longer, but the result is then stored in the cache, and subsequent runs simply return the cached result until the underlying data changes or the entry is pruned from the cache. Did you consider this option at all?
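One way to check that theory is to look at the query cache counters before and after a fast run; if Qcache_hits increases, the result came from the cache:
-- Qcache_hits counts results served from the query cache;
-- Qcache_inserts counts results newly stored in it.
SHOW STATUS LIKE 'Qcache%';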