As part of my talk at the Nottingham SQL Server User group last night, I offered some tips on how to track SQL Server index usage over time.

sys.dm_db_index_usage_stats is a very handy DMV for seeing how many seeks, scans and updates your indexes get. But there are a few issues with it. Firstly, because it’s a DMV, its contents are flushed by a number of events:

  • SQL Server Instance Restart
  • Database Detach
  • Database Close
  • SQL Server 2012/2014 – Alter Index rebuild
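
Because of those resets, it’s worth knowing how long the counters have actually been accumulating before you trust them. A quick way to check the instance start time (assuming SQL Server 2008 or later, where this column exists):

 select sqlserver_start_time from sys.dm_os_sys_info;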

For a thorough check you normally need a full month’s worth of stats, as that covers most business processes. It may even be useful to compare stats across months so you also capture events such as financial year end.

A better breakdown of when the scans and seeks happened can be very useful. It’s all well and good knowing your index was scanned 5,000 times and seeked 15,000 times, but what processes were doing that? Was it the OLTP workload during 9-5, or was it the overnight processing? This could make a big difference to any planned index changes.

So here’s a technique I use. It’s pretty simple and allows you to tune it to your own needs.

Here be SQL

First off, we need somewhere to store the index usage stats, so let’s create a table:

create table index_stats_archive (
	datestored datetime not null,
	database_id smallint not null, 
	object_id int not null,
	index_id int not null, 
	user_seeks bigint not null, 
	user_scans bigint not null, 
	user_lookups bigint not null, 
	user_updates bigint not null, 
	last_user_seek datetime null, 
	last_user_scan datetime null, 
	last_user_lookup datetime null, 
	last_user_update datetime null, 
	system_seeks bigint not null, 
	system_scans bigint not null, 
	system_lookups bigint not null, 
	system_updates bigint not null, 
	last_system_seek datetime null, 
	last_system_scan datetime null, 
	last_system_lookup datetime null, 
	last_system_update datetime null
)

Now we need to populate this. This is simply done by running this query as frequently as you’d like:

insert into index_stats_archive select getdate(), * from sys.dm_db_index_usage_stats

This will capture all index usage across your SQL Server instance. If you want to filter it to just a specific database, then you can use:

insert into index_stats_archive select getdate(), * from sys.dm_db_index_usage_stats where database_id=DB_ID('MyProdDb')
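
In practice you’d schedule this rather than run it by hand. Here’s a sketch of a SQL Agent job that runs the insert every 10 minutes — the job and schedule names are illustrative, and I’m assuming the archive table lives in a database called DBA, so adjust to suit:

 use msdb;
 -- assumes index_stats_archive lives in a database called DBA
 exec dbo.sp_add_job @job_name = N'Archive index usage stats';
 exec dbo.sp_add_jobstep @job_name = N'Archive index usage stats',
 	@step_name = N'Insert snapshot',
 	@subsystem = N'TSQL',
 	@database_name = N'DBA',
 	@command = N'insert into index_stats_archive select getdate(), * from sys.dm_db_index_usage_stats;';
 exec dbo.sp_add_schedule @schedule_name = N'Every 10 minutes',
 	@freq_type = 4,			-- daily
 	@freq_interval = 1,
 	@freq_subday_type = 4,		-- units of minutes
 	@freq_subday_interval = 10;
 exec dbo.sp_attach_schedule @job_name = N'Archive index usage stats',
 	@schedule_name = N'Every 10 minutes';
 exec dbo.sp_add_jobserver @job_name = N'Archive index usage stats';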

For the purposes of getting some data to play with, here’s a quick T-SQL snippet that gathers a snapshot every 10 minutes over an hour:

 declare @cnt integer;
 select @cnt = 0;
 while @cnt < 6
 begin
	insert into index_stats_archive select getdate(), * from sys.dm_db_index_usage_stats;
	waitfor delay '00:10';	-- hh:mm, so 10 minutes
	select @cnt = @cnt + 1;
 end

Once that’s run, or you’ve gathered enough data another way, we want to start looking at what’s in the table. This is my starting query. It’s fairly simple, but has the benefit of working on all versions of SQL Server since 2008, so you don’t have to maintain different copies for different versions. There is certainly room for more in-depth analysis, and if you come up with any other interesting ways, please drop me a comment below, as I’d love to know.

 ;with StatsCount as (
select *, 
	row_number() over (partition by database_id, object_id, index_id order by datestored) as rn
	from index_stats_archive 
)
select r1.database_id,
	r1.object_id,
	r1.index_id,
	r1.datestored as 'dtstart',
	datediff(mi, r1.datestored, r2.datestored) as 'TimePeriodMin',
	-- if the DMV was flushed between two snapshots these increases will go
	-- negative, so treat any negative rows with suspicion
	r2.user_seeks - r1.user_seeks as 'UserSeeksIncrease',
	r2.user_scans - r1.user_scans as 'UserScansIncrease',
	r2.user_lookups - r1.user_lookups as 'UserLookupsIncrease',
	r2.user_updates - r1.user_updates as 'UserUpdatesIncrease',
	r2.system_seeks - r1.system_seeks as 'SystemSeeksIncrease',
	r2.system_scans - r1.system_scans as 'SystemScansIncrease',
	r2.system_lookups - r1.system_lookups as 'SystemLookupsIncrease',
	r2.system_updates - r1.system_updates as 'SystemUpdatesIncrease',
	r2.last_system_scan
	from StatsCount r1 inner join StatsCount r2 on
		r1.rn = r2.rn - 1
		and r1.database_id = r2.database_id
		and r1.object_id = r2.object_id
		and r1.index_id = r2.index_id

This will return a resultset like this:
[Image: results from the script to track SQL Server index usage]

This shows nicely how the clustered index on object 748842030 (clustered indexes are always index_id 1) is being used to seek ~3,500 times every 10 minutes, so that’s quite a heavily used index over that time frame.
Its friend on object 752825844, however, is barely managing one seek per minute on average, so that index isn’t seeing anywhere near the same usage. But it might be very different at another time of day, and at least now you can find out.
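
The raw object_id values in the output aren’t very friendly. To turn them into names, something like this can be layered over the archive table (a sketch — the two-argument form of OBJECT_NAME needs SQL Server 2005 SP2 or later):

 select db_name(database_id) as database_name,
	object_name(object_id, database_id) as object_name,	-- two-argument form resolves across databases
	index_id,
	max(user_seeks) as total_user_seeks	-- counters are cumulative, so max = latest
 from index_stats_archive
 group by database_id, object_id, index_id
 order by total_user_seeks desc;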

The query can easily be filtered down to just one table:

where r1.Object_id=Object_id('MyProdTable')

A certain time range:

where r1.datestored between '2017-01-13 11:13' and '2017-01-13 11:34'

Or only indexes that have seen some seeks:

where r1.user_seeks!=0
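
And to answer the earlier question — was it the 9-5 workload or the overnight processing? — the per-interval increases can be rolled up by hour of day. This sketch repeats the CTE from above so it runs standalone:

 ;with StatsCount as (
	select *,
		row_number() over (partition by database_id, object_id, index_id order by datestored) as rn
	from index_stats_archive
 )
 select datepart(hour, r1.datestored) as [hour],
	sum(r2.user_seeks - r1.user_seeks) as user_seek_increase,
	sum(r2.user_scans - r1.user_scans) as user_scan_increase
 from StatsCount r1
 inner join StatsCount r2 on r1.rn = r2.rn - 1
	and r1.database_id = r2.database_id
	and r1.object_id = r2.object_id
	and r1.index_id = r2.index_id
 group by datepart(hour, r1.datestored)
 order by [hour];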

If you find other ways to use this to track SQL Server index usage, then please let me know in the comments below.