
Adding indexes to production databases

It is good to have a plan of attack. What's even better is when that plan goes smashingly. Over the last couple of weeks I have been able to find very effective indexes to add to my SQL environment in quite swift fashion. Adding these indexes helps not only that one database but the environment as a whole, since my shop is heavily virtualized: all resources are shared, so databases must play nice together. It's a big leap from a plateau I had reached in performance tuning. I hadn't quite gotten to the confidence level to really suggest changes to databases, but now I feel I can (in certain scenarios).

In prod, DBAs can get very good data on how the server is operating from the sys.dm_ objects. I like to observe my environment as a whole and pick the server with the highest CPU utilization. The list of VMs is sortable by host CPU utilization. I've looked at this list enough that I can guess what the top five servers are going to be, but occasionally there is a surprise. If I have time for tuning I will grab one of the servers at the top of my list and open up SSMS (hah, like I ever close that). Before diving in, I like to make sure there is no CPU ready time or memory ballooning on the VM that would throw off anything later in my plan of attack.

Once I've got a server that I know has some decent activity, I will make sure it has been online for a while, because the sys.dm_ data gets cleared out with an instance restart. Once that is confirmed I'll take a look at statistics. If they are not getting updated, the current indexes might be good enough, but the optimizer doesn't know it should use them. Updating stats with a sample might be good enough, but if you want to be really sure, run UPDATE STATISTICS table_or_indexed_view WITH FULLSCAN on all your tables.
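A quick way to see how stale statistics are is the STATS_DATE function against sys.stats. This is a sketch, not a script from the post, and dbo.MyTable is a placeholder name:

```sql
-- Sketch: find statistics that have not been updated recently.
SELECT OBJECT_NAME(s.object_id)             AS table_name,
       s.name                               AS stat_name,
       STATS_DATE(s.object_id, s.stats_id)  AS last_updated
FROM sys.stats AS s
WHERE OBJECT_SCHEMA_NAME(s.object_id) = 'dbo'
ORDER BY last_updated;

-- Rebuild the statistics on one table with a full scan:
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
```

A FULLSCAN reads every row, so on big tables save it for a maintenance window; the sampled default is much cheaper.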


Now I am ready to take a look at the plan cache. I attended Brent Ozar's session at #sqlconnections to pick up these scripts, and they have proved very helpful. What you get in the result set is your heavy-hitting statements or procedures. He did a nice job of joining sys.dm_exec_query_plan and sys.dm_exec_query_stats to get things like execution count, average reads and worker time.
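The general shape of that kind of plan cache query looks like this. This is a condensed sketch in the spirit of those scripts, not Brent's actual code:

```sql
-- Top statements in the plan cache by total logical reads.
SELECT TOP (10)
       qs.execution_count,
       qs.total_logical_reads,
       qs.total_logical_reads / qs.execution_count AS avg_reads,
       qs.total_worker_time  / qs.execution_count AS avg_worker_time,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)   AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_logical_reads DESC;
```

Sorting by total_worker_time instead of total_logical_reads surfaces CPU burners rather than read-heavy queries; both views are worth a look.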

From here, attacking the query scientifically requires a baseline. At this point, if I am looking at a stored proc, I will run it with the actual execution plan enabled and pull out the heaviest statement. With that statement I will turn on Statistics IO and Statistics Time in the connection properties. I'll run the query and take a look at the plan to find any obvious index candidates, like foreign keys or columns in the where clause. I'll add the best index I can come up with and run the query again. If it's a good index, the new plan will use it, and so far I have seen great improvements in the number of reads (100x to 1000x fewer reads).
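The baseline-then-compare loop can also be driven from a query window. The query, table and index names below are hypothetical placeholders, not from the post:

```sql
-- Capture reads and CPU for the statement before and after the index.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Hypothetical heavy statement pulled out of a proc:
SELECT o.OrderId, o.OrderDate
FROM dbo.Orders AS o
WHERE o.CustomerId = 42;

-- Candidate index covering the where clause and the selected column:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate);

-- Re-run the SELECT and compare the logical reads in the Messages tab.
```

The logical reads figure from SET STATISTICS IO is the number to watch; it is stable across runs, unlike elapsed time, which moves with caching and server load.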

What I like to do now is check the missing index queries. This is really just confirming what I already know; it helps when the optimizer agrees with you on what is needed. Back in 2007 Bart Duncan posted this gem, which actually includes the statement to create the index. The plans that pop up in Brent's plan cache queries might benefit from an index on the tables that Bart's query will find. I have found that in the top five there are often three recommended indexes that could be combined into one.
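The missing index DMVs those queries are built on look roughly like this. A condensed sketch in the spirit of Bart Duncan's query, not his exact script:

```sql
-- Top missing-index recommendations, weighted by estimated impact.
SELECT TOP (5)
       migs.avg_total_user_cost * migs.avg_user_impact
         * (migs.user_seeks + migs.user_scans) AS improvement_measure,
       mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns
FROM sys.dm_db_missing_index_groups      AS mig
JOIN sys.dm_db_missing_index_group_stats AS migs
  ON migs.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details     AS mid
  ON mid.index_handle = mig.index_handle
ORDER BY improvement_measure DESC;
```

The equality, inequality and included column lists are the raw material for combining several overlapping recommendations into one index.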

On some of my servers, I see a table repeated several times in the top 10 index recommendations. This makes me take a really close look at that table and the queries from the cache that hit it. I check the current indexes and try to find an unindexed column that might be in a join or a frequent where clause.
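To compare the recommendations against what is already there, the existing indexes on a table can be listed from the catalog views. A sketch; dbo.Orders is a placeholder table name:

```sql
-- Existing indexes and their key/included columns for one table.
SELECT i.name              AS index_name,
       ic.key_ordinal,
       c.name              AS column_name,
       ic.is_included_column
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.Orders')
ORDER BY i.name, ic.key_ordinal;
```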


Is adding indexes to a production database something to fear? I think not. Almost every post I have seen on indexing comes with heavy doses of disclaimers, and those disclaimers caused me to plateau in my performance tuning: I was so concerned I would cause a major performance hit that I didn't add indexes and just let the application suffer. Cowboy up and start trying some indexes. Don't be afraid to add them.

If you add an index and it causes performance to tank, there is something inherently wrong with your database design, or the table is written to far more than it is read. Normally, if you add a bad index, the optimizer will simply ignore it and you will only be wasting some space in your database. If you want your queries to be thousands of times faster, adding a much needed index is the easiest way to go.
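After the index has been in place for a while, sys.dm_db_index_usage_stats shows whether it is being read or only maintained. A sketch; IX_Orders_CustomerId is a hypothetical index name:

```sql
-- Is the new index earning its keep, or just being written to?
SELECT i.name,
       us.user_seeks,
       us.user_scans,
       us.user_lookups,
       us.user_updates   -- writes spent maintaining the index
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.indexes AS i
  ON i.object_id = us.object_id AND i.index_id = us.index_id
WHERE us.database_id = DB_ID()
  AND i.name = 'IX_Orders_CustomerId';
```

If user_updates dwarfs the seeks, scans and lookups combined, the index is costing more than it saves and is a candidate to drop. Like the other DMV data, these counters reset on an instance restart.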


Posted by on June 24, 2012 in SQL Admin