One of the most effective ways to improve query performance is to create efficient indexes. A well-built index lets a query avoid scanning the entire table for results. When you create an index, SQL Server automatically measures and stores statistical information about the distribution of values in the indexed column. The query optimizer, which determines the optimal strategy for evaluating a query, uses this statistical information.
There are two types of indexes, clustered and nonclustered, each with advantages that depend on the data set.
A clustered index dictates the storage order of the data in a table. Because the data is physically sorted, a clustered index is efficient when a query searches for a range of values in the indexed column. Clustered indexes also excel at finding a specific row when the indexed value is unique.
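As a sketch, a clustered index on a date column might be created like this in T-SQL (the table and column names here are hypothetical, not from the original text):

```sql
-- Hypothetical Orders table; the clustered index physically sorts the rows
-- by OrderDate, so a range predicate can be satisfied by reading the sorted
-- range sequentially instead of scanning the whole table.
CREATE CLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate);

-- A range query that benefits from the sorted storage order:
SELECT OrderID, CustomerID, OrderDate
FROM dbo.Orders
WHERE OrderDate BETWEEN '2024-01-01' AND '2024-03-31';
```

Because the clustered index defines the physical order of the rows, a table can have only one.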
In a nonclustered index, the data rows are stored in one place and the index keys in another. When a query searches for a value, it first looks up the key in the nonclustered index to find the row's location, and then retrieves the data from that location. Nonclustered indexes are most often used for queries that return exact matches.
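A minimal sketch of the exact-match case, again with hypothetical table and column names:

```sql
-- Hypothetical Customers table; the nonclustered index stores the Email keys
-- separately from the data rows, with each key pointing back to its row.
CREATE NONCLUSTERED INDEX IX_Customers_Email
    ON dbo.Customers (Email);

-- An exact-match query that can be resolved by a seek on the nonclustered
-- index followed by a lookup of the matching row:
SELECT CustomerID, Name
FROM dbo.Customers
WHERE Email = 'alice@example.com';
```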
In general, indexes should be kept as small as possible to reduce the amount of processing associated with each query. Measuring and collecting statistical data is essential for analyzing and optimizing query performance. SQL Server maintains statistics about the distribution of values in the index keys. If configured to do so, it can also maintain statistics on non-indexed columns.
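As a hedged sketch, statistics on a non-indexed column can be created explicitly with CREATE STATISTICS and inspected with DBCC SHOW_STATISTICS (the table and column names are hypothetical):

```sql
-- Create statistics on a non-indexed column so the optimizer can estimate
-- how many rows a predicate on that column will match.
CREATE STATISTICS ST_Orders_ShipCity
    ON dbo.Orders (ShipCity);

-- Inspect the stored distribution: header, density vector, and histogram.
DBCC SHOW_STATISTICS ('dbo.Orders', ST_Orders_ShipCity);
```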
The basic statistical concepts that a database administrator needs to understand for performance optimization are:
• Cardinality: measures how many unique values exist in the data set.
• Density: measures the uniqueness of values within a data set. Density is calculated by dividing the number of rows that match a given key by the total number of rows in the table. The optimizer is likely to ignore indexes with high density, because they filter out few rows.
• Selectivity: measures the number of rows that a particular query will return. Selectivity is calculated by dividing the number of keys requested by the number of rows they access. The optimizer needs a valid selectivity measure to calculate the relative cost of a query plan.
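The density and selectivity measures above can be sketched as plain queries; this uses a commonly seen form of the calculations and a hypothetical dbo.Orders table with a ShipCity column:

```sql
-- Average density: 1 / (number of distinct values) approximates the fraction
-- of rows matching a typical key. Values near 1 mean low uniqueness, so an
-- index on the column is unlikely to help the optimizer.
SELECT 1.0 / COUNT(DISTINCT ShipCity) AS avg_density
FROM dbo.Orders;

-- Selectivity of a specific predicate: rows matching the predicate divided
-- by total rows. Values near 0 mean the predicate is highly selective and
-- an index seek is likely to pay off.
SELECT
    1.0 * SUM(CASE WHEN ShipCity = 'London' THEN 1 ELSE 0 END) / COUNT(*)
        AS selectivity
FROM dbo.Orders;
```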
SQL Server automatically updates statistical information periodically as the data in the tables changes. Sampling the data, rather than analyzing all of it, minimizes the cost of this automatic update.
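When the automatic update is not frequent enough, statistics can also be refreshed manually; a brief sketch, with a hypothetical table name:

```sql
-- Refresh all statistics on the table using a 25 percent sample of its rows;
-- WITH FULLSCAN would instead read every row for exact histograms at a
-- higher cost.
UPDATE STATISTICS dbo.Orders WITH SAMPLE 25 PERCENT;
```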
Designing and choosing indexes for a complex database is normally a tedious task. Fortunately, SQL Server includes a built-in Index Tuning Wizard that can establish an optimal set of statistics and indexes. Running the wizard against a scripted workload produces a list of suggestions for improving the query performance of your database. Implement only the recommendations that apply to your particular situation. As with any dynamic system, the analysis of optimal database performance needs to be repeated periodically.