Is it possible to speed up a sum() in MySQL?


I'm doing a "select sum(foo) from bar" query on a MySQL database that's summing up 7.3mm records and taking about 22 seconds per run. Is there a trick to speeding up sums in MySQL?


No, you can't speed up the function itself. The problem here is really that you're selecting 7.3 million records. MySQL has to scan the entire table, and 7.3 million is a pretty big number. I'm impressed that it finishes that fast, actually.

A strategy you could employ would be to break your data into smaller subsets (perhaps by date or month?) and maintain a precomputed sum for the old data that's not going to change. You could refresh that stored sum periodically, and calculate the overall value by adding it to a sum over only the rows inserted since the last refresh, which will be a much smaller number of rows.
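
A minimal sketch of that approach, assuming a hypothetical one-row bar_totals helper table and an assumed created_at timestamp column on bar (neither is part of the original question):

-- assumed helper table: one row holding the precomputed sum and its cutoff
CREATE TABLE bar_totals (
    cutoff    DATETIME NOT NULL,
    total_foo BIGINT   NOT NULL
);

-- refresh the stored sum periodically (e.g. from a nightly job)
TRUNCATE TABLE bar_totals;
INSERT INTO bar_totals (cutoff, total_foo)
SELECT NOW(), COALESCE(SUM(foo), 0) FROM bar;

-- the "current" total is the stored sum plus only the rows added since the cutoff
SELECT (SELECT total_foo FROM bar_totals)
     + COALESCE((SELECT SUM(foo) FROM bar
                 WHERE created_at > (SELECT cutoff FROM bar_totals)), 0) AS total;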


Turn on the QUERY CACHE in MySQL. Caching is OFF by default; you need to enable it in the MySQL configuration (ini) file.
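
For example, in my.cnf / my.ini (values are illustrative; note the query cache only exists in MySQL 5.x and was removed in MySQL 8.0):

[mysqld]
# enable the query cache (1 = ON)
query_cache_type = 1
# memory reserved for cached result sets -- size to taste
query_cache_size = 64M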

-- hint mysql server about caching
SELECT SQL_CACHE sum(foo) FROM bar;

MySQL may then be able to return the cached result if no changes have been made to the table since it was cached.

Read more here: http://www.mysqlperformanceblog.com/2006/07/27/mysql-query-cache/


Two things here:

1) You should not be summing 7.3m records on a regular basis. Introduce summary (staging) tables that serve the business need (by day, month, year, department, etc.), fill them on a schedule, and query those tables instead of the original 'raw' table; for example, select the pre-summarized value for each day when you need a few days' interval (see the sketch after this list).

2) Check your transaction isolation settings:

http://dev.mysql.com/doc/refman/5.0/en/set-transaction.html#isolevel_repeatable-read
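
A rough sketch of the staging-table idea from point 1); the table name, column names, and the created_at timestamp column on bar are assumptions, not part of the original question:

-- hypothetical per-day summary table
CREATE TABLE bar_daily_sum (
    sum_date DATE   NOT NULL PRIMARY KEY,
    foo_sum  BIGINT NOT NULL
);

-- filled on a schedule (e.g. nightly) for the previous day
INSERT INTO bar_daily_sum (sum_date, foo_sum)
SELECT DATE(created_at), SUM(foo)
FROM bar
WHERE created_at >= CURDATE() - INTERVAL 1 DAY
  AND created_at <  CURDATE()
GROUP BY DATE(created_at)
ON DUPLICATE KEY UPDATE foo_sum = VALUES(foo_sum);

-- a few days' total now scans a handful of rows instead of millions
SELECT SUM(foo_sum) FROM bar_daily_sum
WHERE sum_date BETWEEN '2022-12-01' AND '2022-12-07';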


No, not really. It will always have to enumerate all the rows in the table.

You could create an additional table and keep a running sum there, updated on every insert, update, and delete (for example with triggers, as sketched below)?
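
A minimal sketch of that idea, using triggers to maintain a one-row running-total table (bar_running_total and its layout are assumptions, not part of the original schema):

CREATE TABLE bar_running_total (
    id        TINYINT PRIMARY KEY,
    total_foo BIGINT NOT NULL
);
-- seed it once with the current total
INSERT INTO bar_running_total VALUES (1, (SELECT COALESCE(SUM(foo), 0) FROM bar));

-- keep it in sync on every change to bar
CREATE TRIGGER bar_sum_ins AFTER INSERT ON bar FOR EACH ROW
    UPDATE bar_running_total SET total_foo = total_foo + NEW.foo WHERE id = 1;
CREATE TRIGGER bar_sum_del AFTER DELETE ON bar FOR EACH ROW
    UPDATE bar_running_total SET total_foo = total_foo - OLD.foo WHERE id = 1;
CREATE TRIGGER bar_sum_upd AFTER UPDATE ON bar FOR EACH ROW
    UPDATE bar_running_total SET total_foo = total_foo - OLD.foo + NEW.foo WHERE id = 1;

-- the sum is now a single-row read
SELECT total_foo FROM bar_running_total WHERE id = 1;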


If your query is really that simple, no... but if you are using a more complex query (and have abbreviated it here), you probably could, for example by using better joins...


You could try adding an index on the bar.foo column. The index contains all the values of foo but is smaller, and therefore quicker to scan, than the full bar table, especially if bar has a lot of other columns.
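
For example (the index name is arbitrary):

-- a covering index: the SUM can then be computed from the index alone
ALTER TABLE bar ADD INDEX idx_bar_foo (foo);

-- EXPLAIN should now show "Using index" for the query
EXPLAIN SELECT SUM(foo) FROM bar;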
