How I Get By Without sysadmin

The people who follow me on Twitter have probably noticed that when I’m not tweeting absolute nonsense, I’m tweeting about performance tuning SQL. What most people don’t know is that I don’t have sysadmin access on our servers, nor do I have any of the other permissions that would let me collect trace information or query the dynamic management views. I’ll be honest: at first this seemed insurmountable – how could I possibly tune queries without access to the tools I had always used to tune queries? Eventually, I realized that I still had the knowledge those tools gave me, so I got to work putting that knowledge to work for me.

Write Queries Organically

What the hell? Queries aren’t fruit salad! You’re absolutely right, but it’s still possible to do a bit of genetic programming when working on a query. Here’s how this process works for me:

  1. A query request comes in with a query attached from the previous system.
  2. I take a look at the old report and read the SQL so I can understand what’s going on.
  3. I slowly build the core query in SSMS before I create any dynamic logic.
  4. I add all of the dynamic logic, test it a few times, and then throw it over the wall for the development team to play with.

Building the core query is where genetic programming comes into play. As I write a query, I’ll start with a core table and slowly add in additional tables and predicates one at a time until I’ve built my query. The whole time I’m doing this, I keep telling SSMS to return the actual execution plan. Normally, if everything runs quickly, I give the execution plan a quick once-over at each step to look for glaring trouble spots that need attention before I move on. As I move through the query, I don’t add any aggregation; I just keep adding tables, predicates, and output columns. This lets me see where I need to group, verify the data I’m getting, and actually look at the raw rows before I start aggregating values. That last part is extremely helpful when you’re dealing with CTEs and aggregates.

How does this help me tune the query? Whenever there is a slowdown because of a bad join or missing index, it’s immediately visible and I can take action. Remember how I said I always return the actual execution plan? There’s a reason for that: I can open up the execution plan and quickly determine what is causing the slowdown. If a query goes from taking about a second to 30 seconds, I know that I’ve either done something incredibly wrong or else I’ve missed something in a join or WHERE clause.
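A minimal sketch of what this incremental build looks like in practice – every table and column name here is invented for illustration:

```sql
-- All names below are hypothetical. The actual execution plan stays
-- turned on (Ctrl+M in SSMS) for every run.

-- Step 1: core table plus its main predicate; check the plan.
SELECT o.OrderId, o.OrderDate
FROM dbo.Orders AS o
WHERE o.OrderDate >= '20090101';

-- Step 2: one more table; re-run and re-check the plan before moving on.
SELECT o.OrderId, o.OrderDate, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c
    ON c.CustomerId = o.CustomerId
WHERE o.OrderDate >= '20090101';

-- Step 3: detail rows and output columns, still no aggregation, so the
-- raw data can be eyeballed before any GROUP BY is written. If this
-- step suddenly jumps from a second to 30 seconds, the plan will point
-- at the bad join or the missing index.
SELECT o.OrderId, o.OrderDate, c.CustomerName, d.Quantity, d.UnitPrice
FROM dbo.Orders AS o
JOIN dbo.Customers AS c
    ON c.CustomerId = o.CustomerId
JOIN dbo.OrderDetails AS d
    ON d.OrderId = o.OrderId
WHERE o.OrderDate >= '20090101';
```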

Start Small

Whenever I start writing a query, I start with the table and conditions that will filter down results the fastest. This comes from knowing how SQL Server builds query results – I suggest taking a look at Itzik Ben-Gan’s fantastic book Inside Microsoft SQL Server 2005: T-SQL Querying for an introduction to this topic. Basically, I find the conditions that will be the most restrictive and build my query out from there. I work with a range of data that covers 7 years and the detail tables contain 40,000,000+ rows. This isn’t a lot in the grand scheme of things, but when seconds matter, it’s important to have the smallest possible number of records to join on before you get to the monster table.
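Sketching the idea with a hypothetical schema where dbo.Clients is small and dbo.Detail is the 40,000,000-row monster table – the optimizer is free to reorder the joins, but building the query out from the most restrictive condition keeps the selective path obvious to both the reader and, usually, the plan:

```sql
-- All names are hypothetical. @ClientId narrows dbo.Clients to a single
-- row, so by the time the query reaches dbo.Detail there are only a
-- handful of AccountId values left to probe against the big table.
DECLARE @ClientId INT;
SET @ClientId = 42;

SELECT d.DetailId, d.Amount
FROM dbo.Clients AS cl
JOIN dbo.Accounts AS a
    ON a.ClientId = cl.ClientId
JOIN dbo.Detail AS d
    ON d.AccountId = a.AccountId
WHERE cl.ClientId = @ClientId          -- most restrictive condition
  AND d.PostDate >= '20080101';
```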

Cheat and Use Hints

When all else fails, or when the execution plan is just plain bad, I look at the indexes being used and I compare them to the possible indexes on the table. There are covering indexes in place for most common queries on any single table. When it makes sense, I force the engine to use specific indexes through judicious application of index hints. What I don’t do is feed my queries into the Database Tuning Advisor and apply the indexes it suggests. I do use the DTA tools to look for indexes that I might have missed, but I create indexes myself and occasionally, carefully, combine its suggestions with my own to create better covering indexes.
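An index hint looks something like this – the table and the covering index name are invented for the example:

```sql
-- IX_Detail_PostDate is a hypothetical covering index on (PostDate)
-- INCLUDE (AccountId, Amount). The WITH (INDEX (...)) table hint forces
-- the engine to use it when the plan it would otherwise pick is worse.
SELECT d.AccountId, d.Amount
FROM dbo.Detail AS d WITH (INDEX (IX_Detail_PostDate))
WHERE d.PostDate >= '20080101';
```

The usual caveat applies: a hint pins the plan even after the data distribution changes, which is why it stays a last resort.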

When working almost exclusively with dynamic SQL, it’s important to provide index hints since the number of possible execution plans is staggering and SQL Server will not be able to hold them all in memory. It’s better (and cheaper) to front load the brainpower during query development than to find out that you have poorly performing queries and have to rewrite them on the fly while customers are calling up and complaining about poor performance.

Avoid Parameter Sniffing

Parameter sniffing is when SQL Server builds and caches an execution plan based on the parameter values supplied the first time a query runs, then reuses that plan for later executions. The problem is that those first values might be infrequently used, or the values might be different every time. When you have multiple clients accessing the database and @ClientId is a parameter for every query, you don’t want a plan built around one client’s value reused for all the others. Instead, I immediately copy stored procedure parameters to a second set of internal variables. Unfortunately, I lose the benefit of a sniffed plan when common queries always use the same or similar parameters, so I’m careful about how I do this. When it makes sense, I don’t copy parameter values.
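The parameter copy looks something like this – the procedure, table, and column names are invented:

```sql
CREATE PROCEDURE dbo.GetClientOrders
    @ClientId INT
AS
BEGIN
    -- Copy the parameter into a local variable. The optimizer can't
    -- sniff a local variable's value at compile time, so the plan is
    -- built from the column's overall statistics rather than from one
    -- client's skewed value.
    DECLARE @LocalClientId INT;
    SET @LocalClientId = @ClientId;

    SELECT o.OrderId, o.OrderDate
    FROM dbo.Orders AS o
    WHERE o.ClientId = @LocalClientId;
END;
```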

Careful Conditions

One of the easiest places to lose performance is in poorly written WHERE clauses. Essentially, make sure the columns in your WHERE conditions are indexed and that you use those columns in a way that lets SQL Server actually use the index – WHERE DATEPART(yyyy, my_col) > 2006 isn’t going to cut it. Denis Gobo has a great write-up on this topic over at Less Than Dot.
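The fix for the DATEPART example is to rewrite the predicate as a range, so an index on the column can still seek (table name hypothetical):

```sql
-- Wrapping the column in a function hides it from the index and
-- forces a scan of every row:
SELECT my_col
FROM dbo.MyTable
WHERE DATEPART(yyyy, my_col) > 2006;

-- The equivalent range predicate is sargable, so an index on my_col
-- can seek straight to the qualifying rows:
SELECT my_col
FROM dbo.MyTable
WHERE my_col >= '20070101';
```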


There are a lot of great tools out there, both within SQL Server and from third party vendors, that make this process easier. It’s also important to realize that these are skills we all have and can put to use before resorting to tools for the last 10 or 20% of performance tuning.

Related Topics

Here are two related articles about security within SQL Server.

SQL Server Security – Database Roles
Flexible Database-Level Roles


Comments
  1. great article, I’ll be sharing this with some developers on my team. It isn’t as complex as so many people fear and make it out to be. You explain how simple it can be if people apply logic and a methodology to their actions.

  2. disclaimer: So I like to save index hints for dead last and you do convey that but even with dynamic SQL sometimes index hints are not the only solution. There are some cases where you need to always recompile a proc or a statement within a proc when using dynamic SQL.

    Also, for parameter sniffing, the recompile hint is another good option. If the parameters can be vastly different or refer to data with big cardinality differences, then the recompile hint can be your friend here as well. The local parameters option sort of has a similar effect and can also work.

    Great post again because it shows those two important pieces of writing well performing code: 1.) Using common sense and a careful methodology and 2.) You are actually looking at your code and looking at query plans and using empirical evidence with changes!

    Sometimes just getting folks to look at query plans can be a big battle :)

  3. Mike – Good point about using OPTION(RECOMPILE). I typically refrain from recommending that people use query level options simply because some of the others options available (FORCE ORDER, for example) can have unintended side effects. However, when tuning queries for selectivity OPTION(RECOMPILE) can be a great resource.

    Of course, with OPTION (RECOMPILE), you also have to weigh the CPU cost of continuously recompiling execution plans against the cost of a poor, but not awful, execution plan.
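    For reference, the hint goes at the end of the statement – the query itself is hypothetical:

    ```sql
    -- A fresh plan is compiled for every execution, so the current
    -- @ClientId value is always sniffed, at the cost of extra CPU.
    SELECT o.OrderId, o.OrderDate
    FROM dbo.Orders AS o
    WHERE o.ClientId = @ClientId
    OPTION (RECOMPILE);
    ```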

    Thanks for adding the link to Itzik’s article about recompiling select queries, it makes for great background info to add to anyone’s performance tuning toolkit. It’s not as much of a black art as people make it out to be, it just requires some careful thought, a solid methodology, and (on occasion) digging into the nasty guts of the execution plan.

    As far as index hinting goes, it’s usually something I save for the end of query tuning, but I always consider it an option. In the current application I’m working on, all database activity is controlled through stored procedures and some of the procedures have up to 12 parameters. Certain combinations of parameters produce different join paths. In cases like that, providing the query engine with as much information as possible up front is key to getting the queries to perform adequately. In one scenario, I was able to tune the execution of a query from 4 minutes and 28 seconds down to 6 seconds by carefully applying indexes (I avoided cached plans during testing by constantly generating new GUIDs and putting random spaces in my ad hoc SQL).

  4. smart workaround for the ad hoc. Yes sometimes index hints can definitely help to force a certain order or use a certain plan. I have also seen them cause issues and often I see them used because someone doesn’t understand indexes. In that case, they don’t understand bookmark lookups (You found the index key in your index but that index doesn’t have other columns found in your select, for instance.. so it has to do a lookup… expensive with a lot of rows) or they don’t understand that sometimes a scan is a better option. Like you say though, they have their place. Even join hints can have their place :)


