
SQL Server Statistics

The statistics I am writing about are the ones that belong to indexes and columns. They flew under my radar for quite some time. In my previous post I wrote about improving a 10 minute query down to just over 1 second simply by updating statistics. Just today someone let me know they improved a 1 hour+ query down to a 1 minute query, again simply by updating statistics. The procedure is transparent to the application; unlike a schema change, it won’t cause any functional problems.

“Wait stats” are popular, so when you hit your search engine of choice you will get a mix of results about “wait stats”, which are NOT column or index stats. That makes it a real PITA to find solid information on column and index statistics. Notice I keep repeating column AND index stats: sp_helpstats only shows the auto-generated column stats, and I think ALL statistics are important.

These little buggers are wildly important to the optimizer. The optimizer is the special sauce inside SQL Server that polishes your turd queries so the data can slide smoothly back into your crapplication. So what does the optimizer think about this?


That’s right, it doesn’t like it. Some of my servers take on a grumpy personality, and it’s usually the ones with bad stats. Bad stats will cause the optimizer to pick poor plans to retrieve your data. For example, choosing the wrong join operator to combine two tables because it thinks one has 1 row when it really has a zillion rows. Or simply choosing the wrong order in which to join tables.

The optimizer is all-knowing and assumes you don’t know how to pick the proper join order. It will re-write your query based on what it thinks is most efficient. If it doesn’t have good numbers to make those decisions, you are in trouble: it will pick the wrong indexes, the wrong join order, and all sorts of badness.

How do I fix it? (in order of expertise)

If you don’t do anything else with stats, leave the database’s auto create statistics and auto update statistics settings alone. They are on by default for good reason.

Executing sp_updatestats is the easy way out. Run it in the context of your database and SQL Server will grind away until it has updated everything it thinks needs it.
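A minimal sketch of that (YourDatabase is a placeholder for your own database name):

```sql
-- Runs UPDATE STATISTICS on every user table that needs it,
-- using the default sample rate
USE YourDatabase;
GO
EXEC sp_updatestats;
```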

Run the UPDATE STATISTICS command on your tables and views with FULLSCAN. FULLSCAN means SQL Server reads every row, rather than a sample, when building the stats. You can decrease the sample rate for zillion-row tables to cut down the time the command takes to run.
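For example, with hypothetical table names:

```sql
-- FULLSCAN reads every row, giving the most accurate histogram
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;

-- On a zillion-row table, trade some accuracy for runtime with a sample
UPDATE STATISTICS dbo.YourHugeTable WITH SAMPLE 10 PERCENT;
```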

You can use the database tuning adviser to recommend advanced statistics such as multi-column stats and filtered stats that will improve the queries you fed into DTA.
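These are the kinds of statistics DTA might suggest; the names, tables, and predicate below are hypothetical:

```sql
-- Multi-column statistic: gives the optimizer density info across two columns
CREATE STATISTICS st_Orders_Customer_Date
    ON dbo.Orders (CustomerID, OrderDate);

-- Filtered statistic: a histogram over just a slice of the table
CREATE STATISTICS st_Orders_Open
    ON dbo.Orders (OrderDate)
    WHERE Status = 'Open';
```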

You can identify overlapping stats using the DMVs so the optimizer doesn’t pick stale auto-created column stats over the good index stats.
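One way to sketch that with the DMVs. This only flags auto-created stats whose leading column is also the leading key column of an index, so treat it as a starting point, not a drop list:

```sql
-- Auto-created column stats (_WA_...) that overlap an index's leading key column
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS auto_created_stat,
       i.name                   AS overlapping_index,
       c.name                   AS column_name
FROM sys.stats s
JOIN sys.stats_columns sc
  ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
 AND sc.stats_column_id = 1            -- leading column of the statistic
JOIN sys.index_columns ic
  ON ic.object_id = s.object_id AND ic.column_id = sc.column_id
 AND ic.key_ordinal = 1                -- leading key column of the index
JOIN sys.indexes i
  ON i.object_id = ic.object_id AND i.index_id = ic.index_id
JOIN sys.columns c
  ON c.object_id = sc.object_id AND c.column_id = sc.column_id
WHERE s.auto_created = 1;
```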

You can run this query to identify stats by when they were last updated. It took me quite a bit of digging to figure out how to find the last update time. You can get most of the information you need without running DBCC SHOW_STATISTICS: sys.stats and the handy system function STATS_DATE hold enough to get you by. Before I knew about that function I started down the wrong path of looking at sys.objects. Its modified date isn’t the last time stats were updated, but the last time the schema for that object changed. This query will help you spot stale stats without looking at a query plan.

USE [msdb] -- because everyone has one of those
GO
SELECT OBJECT_NAME(object_id) AS object_name,
       name AS stats_name,
       STATS_DATE(object_id, stats_id) AS LastStatsUpdate
FROM sys.stats
ORDER BY LastStatsUpdate

Why does it break?

Out of sight, out of mind. You automatically get a statistic when you create an index. Also, by default, the optimizer will create single-column stats on the fly as queries run. Who knew you had to take care of these things?

Lots of data changing is fairly normal for SQL Server, but auto-updating statistics with its built-in algorithm doesn’t always catch stale stats; on a huge table, the default threshold of roughly 20% of rows modified can take a long time to trip. Using a higher sample rate can feed your precious optimizer better numbers, and then it will give you better query plans.
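A sketch for watching how close stats are to that threshold. Note the DMF here, sys.dm_db_stats_properties, requires SQL Server 2008 R2 SP2 or 2012 SP1 and later:

```sql
-- Modification counts since each statistic was last updated
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter   -- changes since last update
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE sp.modification_counter > 0
ORDER BY sp.modification_counter DESC;
```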


The mother of all posts I have found so far. Must read multiple times.

As far as stats queries go, this is certified #awesomesauce:

I haven’t run this yet but this definitely has great potential and helped me identify some ways of looking at statistics:

Some more info:

and three….. deeper and deeper….

With the correct parameters and schedule, you won’t ever have stats problems again:

trace flag for better statistics


Posted on July 24, 2012 in SQL Admin, SQL Dev


A gap in SQL maintenance

An accidental DBA can go quite some time before learning about database maintenance. Users become numb to performance problems that slowly creep in over time. It’s like watching a child grow up: if you see them every day the change isn’t that noticeable, but if you see them once every 5 years the shock factor is there.

With experience I’ve gained a spidey sense for approximately how long something should take, without needing many details or prior experience with the database in question. So when searching the email archive took 10 minutes with only a date selected, I jumped right on the issue.

A 10 minute query is easy to catch with the DMVs. The system is pretty lightly used but has all the emails archived, so there is plenty of data. I had the user recreate the issue and pulled out the SQL text that was running. I ran it in Management Studio with the actual plan on. Sure enough, it seemed to hit some particularly gooey part of the database and would crawl every time it was run.
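Pulling the running SQL text looks something like this sketch:

```sql
-- Text of currently executing requests, longest-running first
SELECT r.session_id,
       r.total_elapsed_time,   -- milliseconds
       t.text                  -- the SQL being run
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.session_id <> @@SPID   -- exclude this query itself
ORDER BY r.total_elapsed_time DESC;
```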

I jumped to a premature conclusion that there was some missing index. But after checking the tables, all filters and joins had indexes and the query plan was using them.

Then I checked my agent job that runs an index maintenance script. The particular script I ran on this server was by Michelle Ufford. I have been using it since I discovered maintenance was a necessity a few years ago.

Back in time

I have been a big fan of this script for maintenance. When I inherited about 30 servers 3 years ago, little to no maintenance was being done. I’m now up to about 100 servers. One server did have a maintenance plan, but it was running almost all of the GUI-based maintenance plan tasks in succession and taking nearly all night when it completed successfully. It looked something like this:

After some research I concluded that some of those steps were unnecessary. The first was a rebuild of all indexes right after a reorg, which makes the reorg completely redundant maintenance. The next thing I investigated was statistics. I didn’t really know what they were at the time, but I had read somewhere that stats get updated when you rebuild an index. This is true for the index’s own statistics, but a rebuild does not update ALL stats. I eventually decided to scrap the whole plan, run Michelle’s stored procedure weekly, and run CHECKDB less frequently.

Michelle’s script builds some intelligence around index maintenance. Rebuilding all indexes isn’t necessary and will heavily tax the system, especially with VLDBs. In the script, lightly fragmented indexes only get a reorg while heavily fragmented indexes get rebuilt. Tiny little indexes don’t get any maintenance because they don’t cause performance problems.
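The threshold logic can be sketched like this; the cutoffs below are common rules of thumb, not necessarily the script’s exact values:

```sql
-- Classify each index by fragmentation: skip tiny, reorg light, rebuild heavy
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.index_id,
       ips.avg_fragmentation_in_percent,
       ips.page_count,
       CASE
         WHEN ips.page_count < 8 THEN 'skip'          -- too small to matter
         WHEN ips.avg_fragmentation_in_percent < 30 THEN 'REORGANIZE'
         ELSE 'REBUILD'
       END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
WHERE ips.index_id > 0;  -- ignore heaps
```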

Back to the problem

There are a couple of problems with only running an index maintenance script like this one. Some indexes can get quite large without getting fragmented, so their stats go stale because they never hit the rebuild threshold. SQL Server can auto-update stats when they are stale, but large tables sometimes need more frequent attention. Also, this script pays no attention to the column-level stats that get added when the database is set to auto-create stats (the ones prefixed with _WA_), or the column-level stats added during database design.

In the case of my 10 minute query, I ran the query in SSMS and checked the actual query plan. One of the pipes in the plan showed something like 50,000 actual rows where the optimizer had estimated only 1. I’ve read that means a stats problem, so I decided to do a complete update of statistics on all tables with FULLSCAN. I should have checked the statistics properties to see when they were last updated, but I wasn’t too concerned; it just means I won’t know how stale they were.
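One blunt way to do a FULLSCAN update across every table is the undocumented sp_MSforeachtable procedure, so use it with care:

```sql
-- Fullscan-update stats on every user table in the current database.
-- sp_MSforeachtable is undocumented; ? is replaced with each table name.
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN';
```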

After that was done, I ran the query once more: it took 4 minutes, and on the second run it finished in just over 1 second. I would call that a success, but one thing baffles me to this day: the estimated and actual rows are still as far apart as before. I did get a very slight difference in the query plan. The optimizer changed a nested loop join to hit the clustered index instead of a non-clustered index, and it changed the join order slightly. But I anticipated the new plan would estimate rows better on that pipe, and it was still 50,000 actual to 1 estimated.

I switched the maintenance to Ola Hallengren’s scripts, which can include FULLSCAN stats maintenance if needed.

Yes, I improved query performance by 99%. Yes, that is awesome, but I was also the root cause of the problem because I neglected to do proper maintenance. I hope one day to understand the internals of query plans and stats a bit better. More specifically, why auto-update stats and index maintenance combined can still let stats get out of whack. I also want a better way to alert when stats are stale in my environment.


Posted on July 12, 2012 in SQL Admin