I'm running a stored procedure that takes around 1 minute the first time it executes, but on subsequent executions it drops to around 9-10 seconds. After some time it goes back to taking around 1 minute.
The procedure works with a single table that has 6 non-clustered indexes and 1 clustered index; its unique id column is of the uniqueidentifier data type, and the table contains 1,218,833 rows.
Can you guide me to where the problem is, or to possible performance improvements?
Thanks in advance.
Here is the procedure.
PROCEDURE [dbo].[Proc] (
    @HLevel      NVARCHAR(100),
    @HLevelValue INT,
    @Date        DATE,
    @Numbers     NVARCHAR(MAX) = NULL
)
AS
    DECLARE @LoopCount INT, @DateLastYear DATE
    DECLARE @Table1 TABLE ( list of columns )
    DECLARE @Table2 TABLE ( list of columns )

    -- LOOP FOR 12 MONTH DATA
    SET @LoopCount = 12
    WHILE (@LoopCount > 0)
    BEGIN
        SET @LoopCount = @LoopCount - 1

        -- LAST YEAR DATA
        DECLARE @LastDate DATE;
        SET @LastDate = DATEADD(D, -1, DATEADD(yy, -1, DATEADD(D, 1, @Date)))

        INSERT INTO @Table1
        SELECT list of columns
        FROM Table3
        WHERE Date = @Date
          AND CASE
                  WHEN @HLevel = 'crieteria1' THEN col1
                  WHEN @HLevel = 'crieteria2' THEN col2
                  WHEN @HLevel = 'crieteria3' THEN col3
              END = @HLevelValue

        INSERT INTO @Table2
        SELECT list of columns
        FROM Table4
        WHERE Date = @LastDate
          AND ( @Numbers IS NULL OR columnNumber IN ( SELECT * FROM dbo.ConvertNumbersToTable(@Numbers) ) )

        INSERT INTO @Table1
        SELECT list of columns
        FROM @Table2 Prf2
        WHERE Prf2.col1 IN (SELECT col2 FROM @Table1)
          AND YEAR(Date) = YEAR(@Date)

        SET @Date = DATEADD(D, -1, DATEADD(m, -1, DATEADD(D, 1, @Date)));
    END

    SELECT list of columns FROM @Table1
The first time the query runs, the data is not in the data cache, so it has to be read from disk, and SQL Server also has to compile an execution plan. On subsequent runs the data is already in the cache, so no disk reads are needed, and the execution plan generated originally can be reused. That is why execution can be so much quicker, and why the ideal situation is to have plenty of RAM so that as much data as possible can be cached in memory (it's the data cache that offers the biggest performance improvement).
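If you want to confirm that a cold cache plus plan compilation is what makes the first run slow, a minimal sketch along these lines reproduces the "first run" conditions on demand. Run it on a test server only, and note that the parameter values below are purely illustrative:

    -- Test environment only: clearing the caches on a production server slows everything down.
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    CHECKPOINT;              -- flush dirty pages so they can be dropped
    DBCC DROPCLEANBUFFERS;   -- empty the data cache (forces reads from disk)
    DBCC FREEPROCCACHE;      -- empty the plan cache (forces a fresh compile)

    EXEC [dbo].[Proc] @HLevel = N'crieteria1', @HLevelValue = 1, @Date = '20110101';  -- cold run
    EXEC [dbo].[Proc] @HLevel = N'crieteria1', @HLevelValue = 1, @Date = '20110101';  -- warm run for comparison

The elapsed time and physical vs. logical reads reported by STATISTICS TIME/IO for the two runs should show the same gap you are seeing.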
If execution times subsequently increase again, it's possible that the data is being evicted from the cache (execution plans can be evicted too), depending on how much pressure there is on RAM. If SQL Server needs to free memory, it will remove items from the cache; the data and execution plans that are used most often, or have the highest value, remain cached the longest.
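You can get a rough idea of whether that is happening from the DMVs. A sketch, assuming the procedure really is named dbo.Proc as in your post: the first query shows whether a cached plan for the procedure still exists and how often it has been reused, and the second shows Page Life Expectancy, which falls when the data cache is under memory pressure:

    -- Is there still a cached plan for the procedure, and how often has it been reused?
    SELECT cp.usecounts, cp.objtype, cp.size_in_bytes, st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.objectid = OBJECT_ID('dbo.Proc');

    -- Page Life Expectancy: low or falling values suggest the buffer pool is under pressure.
    SELECT counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page life expectancy';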
There are of course other factors as well, such as the load on the server at the time and whether your query is being blocked by other processes.
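If you suspect blocking, something like the following, run from a second session while the procedure is executing, will show whether your request is waiting on another session. Nothing here is specific to your procedure, it just lists currently blocked requests:

    -- Any requests currently blocked, and what they are waiting on
    SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, t.text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;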
It seems that the stored procedure is being recompiled repeatedly after some time. To reduce recompilation, please check this article:
http://blog.sqlauthority.com/2010/02/18/sql-server-plan-recompilation-and-reduce-recompilation-performance-tuning/
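As a quick way to see whether recompilation is actually happening, you can watch plan_generation_num for the procedure's statements; it increments each time a cached statement plan is recompiled. A sketch, again assuming the placeholder name dbo.Proc:

    -- plan_generation_num climbing quickly between executions indicates frequent recompiles
    SELECT qs.plan_generation_num, qs.execution_count, qs.last_execution_time,
           SUBSTRING(st.text, qs.statement_start_offset / 2 + 1, 100) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE st.objectid = OBJECT_ID('dbo.Proc')
    ORDER BY qs.plan_generation_num DESC;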