After reading Kimberly Tripp's article on transaction log throughput and discovering that I have gazillions of virtual log files (VLFs), I'm planning to restructure the logs as she outlined. I want to measure the resulting change in log throughput to see whether the fragmentation actually makes a difference on my servers, but I'm at a loss as to how to do so. I couldn't find anything in BOL or via Google on measuring log throughput, and the best strategy I've been able to cobble together is to see whether the average wait time per task for LOGBUFFER and WRITELOG waits decreases:
SELECT wait_type,
       (wait_time_ms - signal_wait_time_ms) * 1. / waiting_tasks_count
       AS [Wait (ms) per Task]
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('LOGBUFFER', 'WRITELOG')
  AND waiting_tasks_count > 0;   -- avoid divide-by-zero when no waits recorded
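Since sys.dm_os_wait_stats accumulates since the last restart (or stats clear), one way to make the numbers comparable across a before/after test is to snapshot the view around a fixed workload and diff the two samples. A rough sketch (the temp table name is my own):

```sql
-- Snapshot the log-related wait stats before the test workload
SELECT wait_type, wait_time_ms, signal_wait_time_ms, waiting_tasks_count
INTO #wait_before
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('LOGBUFFER', 'WRITELOG');

-- ... run the test workload here ...

-- Diff a second snapshot against the first to get waits
-- attributable to the test workload alone
SELECT w.wait_type,
       (w.wait_time_ms - b.wait_time_ms
        - (w.signal_wait_time_ms - b.signal_wait_time_ms)) * 1.
       / NULLIF(w.waiting_tasks_count - b.waiting_tasks_count, 0)
       AS [Wait (ms) per Task]
FROM sys.dm_os_wait_stats w
JOIN #wait_before b ON b.wait_type = w.wait_type;
```

NULLIF guards against a zero task-count delta when the workload produced no new waits of that type.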
Is there something more definitive, perhaps akin to the Perfmon database throughput counters (http://technet.microsoft.com/en-us/library/ms189883.aspx)?
select * from sys.dm_os_performance_counters
where counter_name in ('Log Flushes/sec'
,'Log Bytes Flushed/sec'
,'Log Flush Waits/sec'
,'Log Flush Wait Time')
and instance_name = '<dbname>';
These being performance counters, you need to compute the actual value from the raw value. For the 'Log Flush Wait Time' counter, which is of type 65792 (i.e. NumberOfItems64), this is easy: the raw value is the value. The other ones are of type 272696576 (i.e. RateOfCountsPerSecond64), for which the value is computed by dividing the delta between two consecutive raw values by the number of seconds elapsed between the taking of the two samples.
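As a sketch of that calculation for one of the rate counters (the 10-second interval is arbitrary, and '&lt;dbname&gt;' remains a placeholder for your database name):

```sql
-- Sample the raw RateOfCountsPerSecond64 value twice, a known interval apart
DECLARE @v1 bigint, @v2 bigint;

SELECT @v1 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Log Bytes Flushed/sec'
  AND instance_name = '<dbname>';

WAITFOR DELAY '00:00:10';          -- 10-second sampling interval

SELECT @v2 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Log Bytes Flushed/sec'
  AND instance_name = '<dbname>';

-- Actual value = delta of raw values / seconds elapsed
SELECT (@v2 - @v1) / 10.0 AS [Log Bytes Flushed per sec];
```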
The easier alternative is to fire up Perfmon.exe and look at the corresponding performance counters.