Averaging large amounts of data in SQL Server

We want to perform averaging calculations on a large set of data. Data is captured from the devices quite often, and we want the last day's average, the last week's average, the last month's average, and the last year's average.

Unfortunately, taking the average of the last year's data takes several minutes to complete. I only have a basic knowledge of SQL and am hoping that there is some good information here to speed things up.

The table has a timestamp, an ID that identifies which device the data belongs to, and a floating-point data value.

The query I've been using follows this general example:

 select avg(value)
 from table
 where id in (1, 2, 3, 4)
   and timestamp > last_year

Edit: I should also clarify that they are requesting that these averages be calculated on a rolling basis, as in "year to date" averages. I do realize that, simply due to the sheer volume of data, we may have to compromise.
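For concreteness, a rough sketch of a "year to date" version of the query above (the readings table and its column names are assumed for illustration; the dateadd/datediff pair computes January 1 of the current year):

 -- Hypothetical schema: readings(device_id int, reading_time datetime, value float)
 select device_id, avg(value) as ytd_avg
 from readings
 where device_id in (1, 2, 3, 4)
   and reading_time >= dateadd(year, datediff(year, 0, getdate()), 0)  -- Jan 1 of this year
 group by device_id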


For this kind of problem you can try the following solutions:

1. Optimize the query: look at the query plan, create some indexes, defragment the existing ones, run the query when the server is free, etc.

2. Create a cache table. To populate the cache table, choose one of the following strategies:

   1. Use triggers on the tables that affect the result, and refresh the cache on insert, update, and delete. The trigger should run very, very, very fast. It must also not block any records (otherwise you will end up in a deadlock if the server is busy).
   2. Populate the cache table with a job once per day/hour/etc.
   3. One solution that I like is to populate the cache via a stored procedure when the result is needed (e.g. when the user requests the report), with some logic to serialize the process (only one user at a time can generate the cache), plus some optimization to avoid recomputing the same rows next time (e.g. if no row was added for yesterday and the cache already holds yesterday's result, don't recalculate that value; calculate only the new values since the last run). A sketch of this incremental caching appears below.
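A minimal sketch of such a cache, assuming SQL Server 2008 or later for the date type; the readings table and all cache names here are illustrative, not from the original post. Storing per-day sums and counts (rather than per-day averages) keeps the yearly average exact:

 -- Hypothetical per-device daily aggregate cache.
 create table daily_stats (
     device_id   int    not null,
     stat_date   date   not null,
     value_sum   float  not null,
     value_count bigint not null,
     primary key (device_id, stat_date)
 )

 -- Nightly job: aggregate only completed days that are not cached yet.
 insert into daily_stats (device_id, stat_date, value_sum, value_count)
 select r.device_id, cast(r.reading_time as date), sum(r.value), count(*)
 from readings r
 where cast(r.reading_time as date) < cast(getdate() as date)
   and not exists (select 1 from daily_stats d
                   where d.device_id = r.device_id
                     and d.stat_date = cast(r.reading_time as date))
 group by r.device_id, cast(r.reading_time as date)

 -- The yearly average now reads roughly 365 rows per device instead of
 -- every raw reading; dividing summed sums by summed counts gives the
 -- true average, not an average of daily averages.
 select sum(value_sum) / sum(value_count) as yearly_avg
 from daily_stats
 where device_id in (1, 2, 3, 4)
   and stat_date >= dateadd(year, -1, cast(getdate() as date))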


You might want to consider making the clustered index on the timestamp; typically a clustered index on the ID is wasted here. One caution: the sort order of the output of other SQL statements may change if they have no explicit ORDER BY.
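A sketch of what that might look like (table and column names assumed, as above); on SQL Server 2005+, a covering nonclustered index is another option worth comparing:

 -- Cluster on the timestamp so a date-range scan reads contiguous pages.
 create clustered index ix_readings_time
     on readings (reading_time)

 -- Alternatively, a covering nonclustered index lets the average be
 -- computed from the index alone, without touching the base table.
 create nonclustered index ix_readings_device_time
     on readings (device_id, reading_time)
     include (value)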


You can make a caching table for statistics; it could have a structure similar to this:

year | reads_sum  | total_reads | avg
=====|============|=============|=====
2009 | 6817896234 | 564345      |

At the end of the year, you fill the avg (average) field with the value, which is by then quick to calculate.
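A minimal sketch of that table and the year-end step (avg is a reserved word in T-SQL, so the column is renamed avg_value here; all names are illustrative):

 create table stats_cache (
     stat_year   int    primary key,
     reads_sum   float  not null,
     total_reads bigint not null,
     avg_value   float  null
 )

 -- Year-end: the average becomes a single cheap division over one row.
 update stats_cache
 set avg_value = reads_sum / total_reads
 where stat_year = 2009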
