SQL Accesses Test
This test monitors various critical metrics regarding accesses to the MS SQL database.
Target of the test: A Microsoft SQL server
Agent deploying the test: An internal agent
Outputs of the test: One set of results for every Microsoft SQL server monitored
Measurement | Description | Measurement Unit | Interpretation |
---|---|---|---|
Full table scans | This metric represents the number of unrestricted full scans being handled by the server. The scans can be either base-table scans or full index scans, and the value is the sum of all scans across all the database instances handled by the server. | Scans/Sec | Generally, it is better to have fewer full table scans on a database. In many cases, the SQL server itself will perform a few full table scans on a regular basis for internal use. To detect anomalies, check for random full table scans that reflect the behavior of applications using the database server. If an unusually high number of full table scans is noticed, use the Profiler and the Index Tuning Wizard to determine which queries are causing the scans and whether adding indexes can avoid them (a sketch for spotting scan-heavy indexes appears after this table). |
Page splits | This value represents the rate of page splits occurring in the database server as a result of index pages overflowing. | Splits/Sec | If the rate of page splits is high, consider increasing the fill factor of the indexes (see the fill-factor sketch after this table). |
SQL compilations | This value is the rate of SQL compilations happening in the database. | Reqs/Sec | SQL compilations are a normal part of an SQL Server's operation. But because compilations consume CPU and other resources, the SQL server attempts to reuse as many cached execution plans as possible (execution plans are created when compilations occur). The more execution plans are reused, the less overhead there is on the server, and the faster overall performance is. A high number of SQL compilations indicates that the server is very busy. Compilations can be forced if object schemas change, if previously parallelized execution plans have to run serially, if statistics are recomputed, or if a number of other conditions occur. In some cases, it is possible to reduce the number of unnecessary compilations (see the parameterization sketch after this table). |
SQL recompilations | This value is the rate of SQL recompilations happening in the database server. | Reqs/Sec | The lower the rate of SQL recompilations, the better the database server performs. |
Batch requests | This value is the rate of batch requests being handled by the database server. | Reqs/Sec | This metric is a good indicator of how busy an SQL Server is, and it generally moves in step with the server's CPU usage. If the number of batch requests processed is very high (1000 requests/sec or more), check for CPU and network bottlenecks that the high activity rate can cause. The capacity of an MS SQL database server to handle batch requests depends on its hardware (CPU, memory, network interface speeds, etc.). The per-second measures reported by this test can also be cross-checked against the server's own performance counters (see the counter-sampling sketch after this table). |
Work files rate | Indicates the number of workfiles that are created per second. | Workfiles/Sec | Workfiles are used for temporary record storage during hash operations. They are created in memory but can overflow or spill to disk (to tempdb, not to separate OS-level files). A value greater than 20 for this measure indicates thrashing in tempdb or poorly coded queries. |
Work tables rate | Indicates the number of worktables created per second. | Worktables/Sec | The relational engine may need to build a worktable to perform a logical operation specified in an SQL statement. Worktables are typically generated for certain GROUP BY, ORDER BY, or UNION queries. For example, if an ORDER BY clause references columns not covered by any indexes, the relational engine may need to generate a worktable to sort the result set into the order requested (see the worktable sketch after this table). Worktables are built in tempdb and are dropped automatically at the end of the statement. |
Cache requests for work table creation | Indicates the percentage of work tables created whose initial two pages were not allocated afresh but were immediately available from the work table cache. | Percent | When a query execution plan is cached, the tempdb work tables required by the plan, if any, are often cached too. When a work table is cached, the table is truncated (from the previous execution of the code) and up to nine pages remain in the cache for reuse. This improves the performance of the next execution of the query. A value below 90% for this measure may indicate insufficient memory: when memory is low, the database server engine drops execution plans and their corresponding work tables from the cache, which lowers the cache hit ratio for work tables. On 32-bit systems, a low value for this measure may also hint at the need to upgrade to 64-bit. |
Free space scans | Indicates the rate at which scans were initiated to search for free space in which to insert a new record fragment. | Scans/Sec | This measure reflects inserts into tables with no physical ordering of the rows. A table with no such ordering, i.e., without a clustered index, is known as a heap table. Inserts into heaps require SQL Server to perform free space scans to identify pages with room for the new rows; rows in a heap are tracked internally by row identifiers (RIDs) rather than by the clustered index that SQL Server normally uses as the storage mechanism for table data. Free space scans add I/O expense to inserts and can cause contention on the GAM, SGAM, and PFS pages when many connections are inserting. It is usually recommended that you physically order the table rows by creating a clustered index on the table (see the heap sketch after this table). A low value is hence desired for this measure. If the value of this measure is very high, investigate the cause quickly and determine what needs to be done to control the free space scans. Towards this end, first identify the I/O-intensive queries executing on the SQL database using the detailed diagnosis of the Avg physical reads measure of the SQL Cached Queries test, which reveals which queries are serviced by direct database accesses (and not by the cache); since such queries are bound to increase processing overheads, optimizing them will not only conserve resources but also reduce free space scans. Secondly, use the detailed diagnosis of the SQL Missing Indexes test to pinpoint which columns on which tables are not indexed; unindexed columns can cause queries to take more time and consume more I/O resources, so identifying and indexing such columns can help reduce I/O and free space scans. Thirdly, use the detailed diagnosis of the SQL Unused Indexes test to know which indexes are not used effectively and are hence increasing I/O overheads during query execution; ensuring that such indexes are used can likewise reduce I/O and free space scans. |
SQL cancelled rate | Indicates the number of SQL queries that were cancelled per second. | Cancelled/Sec | Users may cancel the currently executing query/batch request at any time. When such a cancellation occurs, an attention event is raised. An attention is a request by the client to end the currently running request; it typically occurs on a cancel request, a client-interrupt request, or a broken client connection. A high value for this measure therefore translates into a large number of attention events. If the query or batch that was cancelled was inside an explicit user transaction (BEGIN TRAN … COMMIT TRAN), these attention events can leave transactions open and cause severe blocking problems (see the XACT_ABORT sketch after this table). |
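To spot what is driving the Full table scans measure, one option is to query `sys.dm_db_index_usage_stats`, which accumulates per-index scan counts since the last service restart. A minimal sketch; the ordering and filtering choices here are illustrative, not part of the test itself:

```sql
-- Indexes and heaps with the most full scans in the current database.
-- Counts are cumulative since the last SQL Server restart.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,   -- NULL for a heap
       s.user_scans,
       s.user_seeks
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY s.user_scans DESC;
```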
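To act on a high Page splits rate, rebuilding an index with a lower fill factor leaves free space on each leaf page, so subsequent inserts are less likely to force splits. A minimal sketch; the index and table names are hypothetical:

```sql
-- Hypothetical index and table; FILLFACTOR = 80 leaves 20% free space
-- on each leaf page for future inserts and updates.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (FILLFACTOR = 80);
```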
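One common way to cut unnecessary SQL compilations is to replace ad hoc statements built by string concatenation, which tend to compile a fresh plan per distinct literal, with parameterized execution so that one cached plan is reused. A sketch with hypothetical table and column names:

```sql
DECLARE @id INT = 42;  -- hypothetical parameter value

-- Each distinct literal in a concatenated string usually compiles a new plan;
-- a parameterized statement lets one cached plan serve every value of @CustomerID.
EXEC sp_executesql
     N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID',
     N'@CustomerID INT',
     @CustomerID = @id;
```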
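The raw values behind most of these measures are exposed by SQL Server as performance counters and can be cross-checked from T-SQL via `sys.dm_os_performance_counters`. A sketch; counter names may vary slightly across versions, and the per-second counters are cumulative, so two samples divided by the interval are needed to obtain an actual rate:

```sql
-- Raw (cumulative) counters behind the per-second measures of this test.
SELECT RTRIM([object_name]) AS counter_object,
       RTRIM(counter_name)  AS counter_name,
       cntr_value                           -- cumulative since server start
FROM sys.dm_os_performance_counters
WHERE RTRIM(counter_name) IN (
      N'Full Scans/sec', N'Page Splits/sec',
      N'SQL Compilations/sec', N'SQL Re-Compilations/sec',
      N'Batch Requests/sec',
      N'Workfiles Created/sec', N'Worktables Created/sec',
      N'FreeSpace Scans/sec');
```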
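To see a worktable being built for the ORDER BY case described above, `SET STATISTICS IO ON` reports a 'Worktable' line whenever a statement materializes intermediate results in tempdb. A sketch with hypothetical names:

```sql
SET STATISTICS IO ON;

-- With no index covering ShipCity, the sort may be performed through a
-- tempdb worktable, which then shows up as 'Worktable' in the IO output.
SELECT OrderID, ShipCity
FROM dbo.Orders
ORDER BY ShipCity;

SET STATISTICS IO OFF;
```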
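The first step in controlling Free space scans is finding the heaps: in `sys.indexes`, a heap is the row with `index_id = 0`. A sketch that lists user-table heaps and then, for a hypothetical table, imposes a physical order with a clustered index:

```sql
-- User tables with no clustered index (heaps).
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables  AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.indexes AS i ON i.object_id = t.object_id
WHERE i.index_id = 0;

-- Hypothetical heap: a clustered index gives the rows a physical order,
-- so inserts no longer need free space scans to find a page.
CREATE CLUSTERED INDEX CIX_OrderDetails_OrderID
ON dbo.OrderDetails (OrderID);
```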
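When cancellations (attention events) hit explicit transactions, one defensive option is `SET XACT_ABORT ON`, which makes SQL Server roll back the open transaction automatically when the attention arrives instead of leaving it open and holding locks. A sketch with hypothetical statements:

```sql
SET XACT_ABORT ON;  -- an attention event now rolls the transaction back

BEGIN TRAN;
    UPDATE dbo.Orders
    SET    ShipCity = N'London'
    WHERE  OrderID  = 1;       -- hypothetical work inside the transaction
COMMIT TRAN;
```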