What DBCC SHOW_STATISTICS tells me

This is one of the blog posts in a series about optimizing SQL Server queries. A list of posts about query tuning can be found here – https://jbswiki.com/2021/04/15/sql-server-query-optimization-introduction/

DBCC SHOW_STATISTICS displays the current query optimization statistics for a table or indexed view. In short, it shows the statistics, a summary of the data distribution, that SQL Server uses to help generate an execution plan.

=> Create the database and table below.

Create database Test
go
USE Test
GO
CREATE TABLE [dbo].[Table1](
[Col1] [int] NULL,
[Col2] [int] NULL,
[Col3] [int] NULL
) ON [PRIMARY]
GO

=> The table is created as a heap; since there is no primary key, no clustered index is created automatically. Add the below nonclustered index, which also creates a statistics object named IX_Table1 on its key columns.

USE Test
GO
CREATE NONCLUSTERED INDEX [IX_Table1] ON [dbo].[Table1]
(
[Col1] ASC,
[Col2] ASC,
[Col3] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
GO

=> Insert some sample rows.

insert into Table1 values (1,2,3)
go 10
set nocount on
declare @i int
set @i =2
while (@i<20)
begin
insert into Table1 values(@i,12,13)
set @i=@i+1
end
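This script leaves Table1 with 10 copies of (1,2,3) from the GO 10 batch plus the single values 2 through 19 from the loop, 28 rows in total, which can be confirmed with:

```sql
-- Expect 28 rows with the insert script above: 10 from GO 10 + 18 from the loop
select COUNT(*) as total_rows from Table1
```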

=> Let’s run DBCC SHOW_STATISTICS and look at the output. We have inserted some rows, but we don’t see any output yet: the statistics object was created on an empty table and has not been updated since.

dbcc show_statistics ('Table1', 'IX_Table1')

[Screenshot blog5_1: DBCC SHOW_STATISTICS returns no data]

=> Run UPDATE STATISTICS with FULLSCAN followed by DBCC SHOW_STATISTICS, and we get the below output, which now reflects what we have inserted.

update statistics Table1 with fullscan
dbcc show_statistics ('Table1', 'IX_Table1')

[Screenshot blog5_2: DBCC SHOW_STATISTICS output after the fullscan update]

Tab 1 from the above screenshot (the statistics header):

[Screenshot blog5_3: Tab 1, the statistics header]

[$] Name : Name of the index.

[$] Updated : Date and time the statistics for this index were last updated or rebuilt.

[$] Rows : Number of rows in the underlying table.

[$] Rows Sampled : Number of rows sampled. In our case we performed UPDATE STATISTICS WITH FULLSCAN, hence all rows were sampled. In most cases you will see a smaller number in “Rows Sampled” than in “Rows”, indicating a sampled update.

[$] Steps : Number of steps in the histogram, up to a maximum of 200. In our case it is 10, matching the 10 histogram rows shown in Tab 3 of the above screenshot.

[$] Density : Calculated as 1 / distinct values for all values in the first key column of the statistics object, excluding the histogram boundary values. This Density value is not used by the query optimizer and is displayed for backward compatibility with versions before SQL Server 2008.

[$] Average Key Length : Average number of bytes per value for all of the key columns in the statistics object. In our case all 3 key columns are int, so 4 + 4 + 4 = 12.

[$] String Index : Yes indicates the statistics object contains string summary statistics to improve the cardinality estimates for query predicates that use the LIKE operator; for example, WHERE ProductName LIKE '%Bike'. String summary statistics are stored separately from the histogram and are created on the first key column of the statistics object when it is of type char, varchar, nchar, nvarchar, varchar(max), nvarchar(max), text, or ntext.

[$] Unfiltered Rows : Total number of rows in the table before applying the filter expression. If Filter Expression is NULL, Unfiltered Rows is equal to Rows.
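The same header values can also be cross-checked without DBCC, through the sys.dm_db_stats_properties DMV (available from SQL Server 2008 R2 SP2 / 2012 SP1; a sketch, not part of the original walkthrough):

```sql
-- Rows, Rows Sampled, Steps and Updated for every statistics object on Table1
SELECT s.name,
       sp.last_updated,
       sp.rows,
       sp.rows_sampled,
       sp.steps,
       sp.unfiltered_rows
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.Table1');
```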

Tab 2 from the above screenshot (the density vector):

[Screenshot blog5_4: Tab 2, the density vector]

[$] All Density : Calculated as 1 / number of distinct values for each column prefix.
select distinct col1 from Table1
=> Returns 19 rows => 1/19 = 0.05263158

select distinct col1,col2 from Table1
=> Returns 19 rows => 1/19 = 0.05263158

select distinct col1,col2,col3 from Table1
=> Returns 19 rows => 1/19 =0.05263158

[$] Average length : Average length, in bytes, to store a list of the column values for the column prefix.

Col1 => 4
Col1, Col2 => 4+4 = 8
Col1, Col2, col3 => 4+4+4 = 12

[$] Columns : Names of the columns in the prefix for which All density and Average length are displayed.
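The whole density vector can be computed manually in one query. With the 28-row sample data all three prefixes have 19 distinct combinations, so every density is 1/19 = 0.05263158 (a verification sketch):

```sql
SELECT 1.0 / COUNT(DISTINCT col1) AS [density (col1)],
       1.0 / (SELECT COUNT(*) FROM (SELECT DISTINCT col1, col2 FROM Table1) d12) AS [density (col1, col2)],
       1.0 / (SELECT COUNT(*) FROM (SELECT DISTINCT col1, col2, col3 FROM Table1) d123) AS [density (col1, col2, col3)]
FROM Table1;
```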

Tab 3 from the above screenshot (the histogram):

[Screenshot blog5_5: Tab 3, the histogram]

SQL Server builds the histogram from the sorted set of column values in three steps, as described at https://technet.microsoft.com/en-us/library/dd535534(v=SQL.100).aspx.

=> Histogram initialization: In the first step, a sequence of values starting at the beginning of the sorted set is processed, and up to 200 values of RANGE_HI_KEY, EQ_ROWS, RANGE_ROWS, and DISTINCT_RANGE_ROWS are collected (RANGE_ROWS and DISTINCT_RANGE_ROWS are always zero during this step). The first step ends either when all input has been exhausted, or when 200 values have been found.

=> Scan with bucket merge: Each additional value from the leading column of the statistics key is processed in the second step, in sorted order; for each successive value, either the value is added to the last range, or a new range at the end is created (this is possible because the input values are sorted). If a new range is created, one pair of existing, neighboring ranges is collapsed into a single range. This pair of ranges is selected in order to minimize information loss. The number of histogram steps after the ranges are collapsed stays at 200 throughout this step. This method is based on a variation of the maxdiff histogram.

=> Histogram consolidation: In the third step, more ranges can be collapsed if a significant amount of information is not lost. Therefore, even if the column has more than 200 unique values, the histogram might have less than 200 steps.
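If you prefer a relational result over the DBCC output, the histogram can also be read through sys.dm_db_stats_histogram (SQL Server 2016 SP1 and later; a sketch, not used in the original post):

```sql
SELECT sh.step_number, sh.range_high_key, sh.range_rows,
       sh.equal_rows, sh.distinct_range_rows, sh.average_range_rows
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_histogram(s.object_id, s.stats_id) AS sh
WHERE s.object_id = OBJECT_ID('dbo.Table1')
  AND s.name = 'IX_Table1'
ORDER BY sh.step_number;
```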

[$] RANGE_HI_KEY : The upper-bound column value of a histogram step. Each value comes from the output of:
select distinct col1 from Table1

[$] RANGE_ROWS : Number of rows whose column value falls strictly between the step boundaries, as marked in the below screenshot. If we take the second row, we have 1 value between RANGE_HI_KEY 1 and 3, which is the value 2.

[Screenshot blog5_6: histogram rows with RANGE_ROWS highlighted]

To understand this further, insert close to 100,000 more rows,

set nocount on
declare @i int
set @i =29
while (@i<100000)
begin
insert into Table1 values(@i,12,13)
set @i=@i+1
end

Now check the output of TAB3 again,

[Screenshot blog5_7: histogram after inserting ~100,000 rows]

It is pretty evident that between RANGE_HI_KEY 1 and 99998 we have 99987 rows.
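This RANGE_ROWS value can be verified directly, since a step's rows are exactly those strictly between the two boundary keys:

```sql
-- Rows strictly between RANGE_HI_KEY 1 and 99998; expect 99987 with this data
select COUNT(*) as range_rows from Table1 where col1 > 1 and col1 < 99998
```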

[$] EQ_ROWS : Total number of rows equal to corresponding RANGE_HI_KEY.
From the above screenshot,

Row 1 => RANGE_HI_KEY is 1 => select COUNT(*) from table1 where col1=1 => 10, which is equal to EQ_ROWS.

Row 2 => RANGE_HI_KEY is 99998 => select COUNT(*) from table1 where col1=99998 => 1, which is equal to EQ_ROWS.

Row 3 => RANGE_HI_KEY is 99999 => select COUNT(*) from table1 where col1=99999 => 1, which is equal to EQ_ROWS.

[$] DISTINCT_RANGE_ROWS : Number of distinct rows between each of the values in RANGE_HI_KEY. From the above screenshot,

Row 1 => RANGE_HI_KEY is 1 => This is the first row and it will always be 0.

Row 2 => RANGE_HI_KEY is 99998 => select count(distinct col1) from table1 where col1>1 and col1<99998 => 99987, which is equal to DISTINCT_RANGE_ROWS.

Row 3 => RANGE_HI_KEY is 99999 => select count(distinct col1) from table1 where col1>99998 and col1<99999 => 0, which is equal to DISTINCT_RANGE_ROWS.

[$] AVG_RANGE_ROWS : Average number of rows with duplicate column values within a histogram step, excluding the upper bound (RANGE_ROWS / DISTINCT_RANGE_ROWS for DISTINCT_RANGE_ROWS > 0).

Row 1 => RANGE_HI_KEY is 1 => DISTINCT_RANGE_ROWS = 0, hence AVG_RANGE_ROWS is 1.

Row 2 => RANGE_HI_KEY is 99998 => DISTINCT_RANGE_ROWS > 0, (RANGE_ROWS / DISTINCT_RANGE_ROWS) => (99987 / 99987) => 1, which is the same as AVG_RANGE_ROWS.

Row 3 => RANGE_HI_KEY is 99999 => DISTINCT_RANGE_ROWS = 0, hence AVG_RANGE_ROWS is 1.
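The formula can be checked in one query for the middle step (a verification sketch; with this data every value in the range appears exactly once, so the average is 1):

```sql
-- RANGE_ROWS / DISTINCT_RANGE_ROWS for the step ending at 99998
select COUNT(*) * 1.0 / COUNT(DISTINCT col1) as avg_range_rows
from Table1 where col1 > 1 and col1 < 99998
```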

Thank You,
Vivek Janakiraman

Disclaimer:
The views expressed on this blog are mine alone and do not reflect the views of my company or anyone else. All postings on this blog are provided “AS IS” with no warranties, and confer no rights.

@@FETCH_STATUS -9

This is one of the blog posts in a series about optimizing SQL Server queries. A list of posts about query tuning can be found here – https://jbswiki.com/2021/04/15/sql-server-query-optimization-introduction/

I was working on a performance issue with my application team. They reported that everything works fine after the SQL Server service is recycled. It stays fine for 3 or 4 days, and then they notice slowness. Once they see this behavior, they restart the SQL Server service to resolve the issue.

I looked at Perfmon the next time the issue occurred. Disk and CPU counters were normal, but the memory-related counters were not. After the SQL Server restart, Page Life Expectancy (PLE) kept decreasing, eventually dropping below 50, with high values for Free list stalls/sec and Lazy writes/sec. This indicated that something inside SQL Server was leaking memory.

I checked the SQL server memory usage further using the below query,

select
(select cast(cntr_value/1024.0 as decimal(28,2))
 from sys.dm_os_performance_counters
 where object_name like '%Memory Manager%' and counter_name like 'Total Server Memory (KB)%') as [Total Server Memory (MB)],
(select sum(cast(size_in_bytes as decimal(18,2)))/1024/1024
 from sys.dm_exec_cached_plans) as [Plan Cache (MB)],
(select cast(cntr_value/1024.0 as decimal(28,2))
 from sys.dm_os_performance_counters
 where object_name like '%Memory Manager%' and counter_name like 'Connection Memory (KB)%') as [Connection Memory (MB)],
(select cast(cntr_value/1024.0 as decimal(28,2))
 from sys.dm_os_performance_counters
 where object_name like '%Memory Manager%' and counter_name like 'Lock Memory (KB)%') as [Lock Memory (MB)],
(select cast(cntr_value/1024.0 as decimal(28,2))
 from sys.dm_os_performance_counters
 where object_name like '%Memory Manager%' and counter_name like 'SQL Cache Memory (KB)%') as [SQL Cache Memory (MB)],
(select cast(cntr_value/1024.0 as decimal(28,2))
 from sys.dm_os_performance_counters
 where object_name like '%Memory Manager%' and counter_name like 'Optimizer Memory (KB)%') as [Optimizer Memory (MB)],
(select cast(cntr_value/1024.0 as decimal(28,2))
 from sys.dm_os_performance_counters
 where object_name like '%Cursor Manager b%' and counter_name like 'Cursor memory usage%' and instance_name = '_Total') as [Cursor memory usage (MB)]
go

From the output, I could see that the memory used by cursors was close to 50 GB. Max server memory for this instance was set to 65 GB, so it was pretty clear that cursors were leaking memory. I checked how many cursors were active and how much memory they occupied using the below query,

select count(*) as total_open_cursors,
convert (numeric(5,2), (select cntr_value/1024.0 from sys.dm_os_performance_counters WHERE counter_name = 'Cursor memory usage' and instance_name = '_Total')) as MemUsed_by_Cursors_MB
from sys.dm_exec_cursors (0) c
where is_open=1

It showed that close to 63,548 cursors were active, with 50 GB of memory used and growing steadily. I wanted to check what these cursors were doing, and hence checked the FETCH_STATUS of the active cursors in this instance.

select [fetch_status],count(*) from  sys.dm_exec_cursors (0)
group by [fetch_status]

There were close to 63546 cursors with FETCH_STATUS -9 and 2 with FETCH_STATUS 0. As per https://msdn.microsoft.com/en-us/library/ms187308.aspx, below are the return values for FETCH_STATUS.

Return value | Description
0  | The FETCH statement was successful.
-1 | The FETCH statement failed or the row was beyond the result set.
-2 | The row fetched is missing.
-9 | The cursor is not performing a fetch operation.

FETCH_STATUS -9 indicates that the cursor is active but not performing any fetch operation. I checked the text for these cursors using the below query,

select t.text, [fetch_status], dormant_duration, session_id, cursor_id, name, properties, creation_time, is_open
from sys.dm_exec_cursors (0) a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS t

The text column of the above query’s output revealed that these cursors were never deallocated, and this was bloating the memory usage. We found an issue in the application code where cursors were not being deallocated, which is what causes the -9 status where no fetch operation is performed. Once the application code was modified to deallocate the cursors properly, the issue was resolved permanently.
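For reference, this is the cursor hygiene the code fix enforced, sketched on a throwaway cursor (names are illustrative): the cursor is always closed and deallocated, even when the loop body fails.

```sql
DECLARE @name sysname
DECLARE db_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases
OPEN db_cursor
BEGIN TRY
    FETCH NEXT FROM db_cursor INTO @name
    WHILE @@FETCH_STATUS = 0
    BEGIN
        PRINT @name                          -- work with the row here
        FETCH NEXT FROM db_cursor INTO @name
    END
END TRY
BEGIN CATCH
    -- swallow here only so cleanup always runs; rethrow afterwards if needed
END CATCH
IF CURSOR_STATUS('local', 'db_cursor') >= 0  -- still open
    CLOSE db_cursor
DEALLOCATE db_cursor
```

Declaring the cursor LOCAL is itself a defense against this kind of leak: local cursors are implicitly deallocated when the batch or procedure scope ends, unlike the global cursors in the repro below.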

I tried to reproduce this issue on my lab machine with Max Server Memory set to 500 MB.

create database JB_Cursor
go

use JB_Cursor
go

create table JB_count(
sno int primary key identity (1,1),
sname varchar(255))

Executed the below query,

DECLARE db_cursor CURSOR  FOR
SELECT name  FROM sys.databases
OPEN db_cursor

DECLARE DatabaseBackup_cursor CURSOR FOR
select name from master..sysobjects
OPEN DatabaseBackup_cursor
FETCH NEXT FROM DatabaseBackup_cursor
WHILE @@FETCH_STATUS = 0
BEGIN
FETCH NEXT FROM DatabaseBackup_cursor
END

CLOSE DatabaseBackup_cursor
DEALLOCATE DatabaseBackup_cursor

CLOSE db_cursor
DEALLOCATE db_cursor

Executed the below query in a different query window,

select getdate() as time_checked,
max(c.creation_time) as maxc,
min(c.creation_time) as minc,
count(*) as total_open_cursors,
--convert(numeric(5,2), (count(*) * 1.0/datediff(mi,min(c.creation_time),getdate()))) as open_cursors_created_per_minute,
convert (numeric(5,2), (select cntr_value/1024.0 from sys.dm_os_performance_counters
WHERE counter_name = 'Cursor memory usage' and instance_name = '_Total')) as MemUsed_by_Cursors_MB
from sys.dm_exec_cursors (0) c
where is_open=1 and [fetch_status] = -9
go
select fetch_status,dormant_duration,* from  sys.dm_exec_cursors (0)

From the output you will see that db_cursor has a fetch_status of -9. This is because db_cursor never performs a fetch; it just stays open while DatabaseBackup_cursor runs.

The below query will reproduce the issue we are looking for,

set nocount on
declare @i int
Declare @cursorname VARCHAR(1000)
Declare @cursorname1 VARCHAR(1000)
Declare @cursorname2 VARCHAR(1000)
set @i=1

while(@i<20000)
begin
set @cursorname = 'db_cursor' + cast(@i as varchar)
set @cursorname2 = 'DatabaseBackup_cursor' + cast(@i as varchar)

set @cursorname1='Declare @name varchar(255)
DECLARE '+@cursorname+' CURSOR FOR
SELECT name FROM sys.databases
OPEN '+@cursorname+'

DECLARE '+@cursorname2+' CURSOR FOR
select name from master..sysobjects
OPEN '+@cursorname2+'
FETCH NEXT FROM '+@cursorname2+' into @name
WHILE @@FETCH_STATUS = 0
BEGIN
insert into JB_count values (@name)
FETCH NEXT FROM '+@cursorname2+' into @name
END

CLOSE '+@cursorname2+'
DEALLOCATE '+@cursorname2+'

--CLOSE '+@cursorname+'
--DEALLOCATE '+@cursorname+'   -- Not closing or deallocating the cursor, just to reproduce the issue
'
exec (@cursorname1)
set @i=@i+1
end

Open a new query window and execute the below,

select count(*) as total_open_cursors,
convert (numeric(5,2), (select cntr_value/1024.0 from sys.dm_os_performance_counters WHERE counter_name = 'Cursor memory usage' and instance_name = '_Total')) as MemUsed_by_Cursors_MB
from sys.dm_exec_cursors (0) c
where is_open=1 and [fetch_status] = -9
go
select [fetch_status],count(*) as [Cursor_Count] from sys.dm_exec_cursors (0)
group by [fetch_status]
go
select t.text, [fetch_status], dormant_duration, session_id, cursor_id, name, properties, creation_time, is_open
from sys.dm_exec_cursors (0) a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS t
go

This is what I saw in my lab machine,

[Screenshot blog3_1: cursor memory usage on the lab machine]

Out of 500 MB, cursors alone are using close to 350 MB of memory. The resolution for this issue in our example would be to uncomment the CLOSE and DEALLOCATE statements for the outer cursor.
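When the application cannot be fixed immediately, the same DMVs can at least point to the offender. This sketch groups the leaked cursors by session with a sample of the statement text:

```sql
SELECT c.session_id,
       COUNT(*) AS leaked_cursors,
       MIN(c.creation_time) AS oldest_cursor,
       MAX(CONVERT(nvarchar(200), t.text)) AS sample_statement
FROM sys.dm_exec_cursors(0) AS c
CROSS APPLY sys.dm_exec_sql_text(c.sql_handle) AS t
WHERE c.is_open = 1 AND c.fetch_status = -9
GROUP BY c.session_id
ORDER BY leaked_cursors DESC;
```

Killing the offending sessions releases their cursor memory, but only the DEALLOCATE fix in the application removes the leak.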

Thank You,
Vivek Janakiraman

Disclaimer:
The views expressed on this blog are mine alone and do not reflect the views of my company or anyone else. All postings on this blog are provided “AS IS” with no warranties, and confer no rights.

SQL Server Parameter Sniffing

This is one of the blog posts in a series about optimizing SQL Server queries. A list of posts about query tuning can be found here – https://jbswiki.com/2021/04/15/sql-server-query-optimization-introduction/

SQL Server creates an execution plan for a stored procedure whenever it is compiled or recompiled. Repeated executions of the same stored procedure reuse the plan from the plan cache instead of creating a new one each time. The plan created at compile time is based on the parameter value being passed; this value is "sniffed" and used for cardinality estimation. Parameter sniffing is good most of the time, but it can cause serious performance issues when the estimates go bad because the stored procedure is executed with a very different parameter from the one used when the plan was created.

Let’s say we start a new job earning a salary of 1000 dollars. We estimate our monthly expenses at 1000 dollars and create a budget plan. Every time we change jobs, or whenever there is a hike in salary, we revise the budget plan. So in this case we (SQL Server) create a budget plan (execution plan) using the salary (parameter) of $1000. After a month, comparing estimated and actual spending, we are happy that they match and all looks good. But some months later, the money in hand is spent within 10 days and we land in serious trouble, having to borrow money to cover the rest of the month. Comparing estimated and actual spending again, there are serious differences: actual spending is way higher than estimated. Analyzing further, we find additional expenses that ballooned the actual spending.

Let’s look at an example using database JB_Parameter_Sniffing. You can download the database backup here,

  • Execute the queries:
    select COUNT(1) from tab1
    select city,COUNT(*) from tab1 group by city

  • Stored procedure proc1 definition,
    create proc [dbo].[proc1] (@city varchar(100))
    as
    begin
    select sno,sname,city
    from tab1
    where city=@city
    end
    GO
  • exec proc1 'Bangalore'
    The execution completes in 201 ms.
    The statistics look like this:

    [Screenshot 3]

  • The execution plan for the highlighted query looks like this: [Screenshot 4]

    Let’s check how the estimates and actuals look for the Index Seek and the Key lookup:

  • exec proc1 'Chennai'
    The query completes in 3232 ms.

    [Screenshot 6]
    [Screenshot 7]

    Let’s now check how the estimates and actuals look for the Index Seek and the Key lookup:

  • The execution of the stored procedure with parameter 'Chennai' uses the cached plan compiled for parameter 'Bangalore', and that plan proves bad for the second execution because the estimates are off. We estimate 100 rows, but close to 100,000 rows are returned.
  • In this case, we are underestimating the number of rows during the second execution.
  • Let’s clear the plan from the plan cache, run the queries again in a different order, and check how things look now: [Screenshot 10]
  • exec proc1 'Chennai'
    [Screenshot 18]
    [Screenshot 13]
  • exec proc1 'Bangalore'
    [Screenshot 14]
    [Screenshot 15]
    [Screenshot 16]
  • In this case, the execution of the stored procedure with parameter 'Bangalore' uses the cached plan compiled for parameter 'Chennai', and that plan proves bad for the second execution because the estimates are off. We estimate 100,000 rows, but only about 100 rows are returned. The same query, when run first, executes in just 201 ms.
  • In this case we are overestimating rows during the second execution.
  • From the above example, the parameter value used while creating the execution plan is used by all subsequent executions, no matter what parameter is passed. But how can we identify this? How do we know which parameter was used when the plan was created? The answer lies in the XML plan. Right-click the graphical execution plan, click “Show Execution Plan XML…”, and search for “ParameterList”. In my case I see the below: [Screenshot 17]
  • In this case the execution plan was created using the parameter 'Chennai', as indicated under ParameterCompiledValue, and the current execution using the parameter 'Bangalore' is indicated by ParameterRuntimeValue. How do we solve this?

    1) RECOMPILE Hint

    -> The easiest way is to add a recompile hint to the stored procedure.

    alter proc [dbo].[proc1] (@city varchar(100))
    with recompile
    as
    begin
    select sno,sname,city
    from tab1
    where city=@city
    end
    GO

    -> Executing the stored procedure now will create a different execution plan for each parameter. Make sure you clear the plan cache before attempting this.

    [Screenshot 18]
    [Screenshot 19]

    Are there any issues using the recompile hint?

    -> We will use stored procedure Proc2 and Proc2_Recompile to test this.

    -> Stored procedures Proc2 and Proc2_Recompile are the same; the only difference is that Proc2_Recompile contains a recompile hint.

    -> Stored procedures Proc1 and Proc2 are the same, but Proc2 contains an IN clause just to make sure that creating the execution plan takes a little more time. The stored procedure definitions can be seen in the database backup.

    -> Execute the below query and check the performance. It executes stored procedure Proc2 100 times: exec proc2 'Chennai' 10 times and exec proc2 'Bangalore' 90 times.

    -> Procedure Proc2 takes close to 3 minutes 54 seconds.

    [Screenshot 20]

    -> Procedure Proc2_Recompile takes close to 2 minutes 45 seconds.

    [Screenshot 21]

    -> In this case we have an improvement while using the recompile hint.

    -> Let’s execute the procedures the other way around. Procedure Proc2 takes close to 5 seconds.

    [Screenshot 22]

    -> Procedure Proc2_Recompile takes close to 2 minutes 44 seconds.

    [Screenshot 23]

    -> In this case we don’t have an improvement with the recompile hint; the performance has actually gotten worse.

    -> The time difference comes from the fact that creating the execution plan takes much longer than executing it. With the recompile hint, the plan had to be created on every execution of the procedure. In this case, parameter sniffing is better than recompiling the procedure every time.
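    -> A middle ground worth noting (a sketch, not part of the original test): OPTION (RECOMPILE) at the statement level recompiles only the parameter-sensitive statement, so the rest of the procedure still reuses its cached plan:

```sql
alter proc [dbo].[proc1] (@city varchar(100))
as
begin
select sno,sname,city
from tab1
where city=@city
option (recompile)   -- only this statement is compiled on each execution
end
GO
```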

    2) Local variables

    -> Let us implement Proc3 with a local variable as below,

    Create proc [dbo].[proc3] (@city varchar(100))
    as
    begin
    declare @L_city varchar(100)
    set @L_city = @city
    declare @s_sno int, @s_sname varchar(255), @s_city varchar(255)
    select @s_sno=sno,@s_sname=sname,@s_city=city
    from tab1
    where city=@L_city
    end

    -> Executing procedure,

    24

    25

    -> Executing procedure,

    26
    27

    -> The estimates are not that great, but this is how it works with local variables: the optimizer cannot sniff a local variable’s value at compile time, so it uses the density vector (average selectivity) instead of the histogram.

    3) OPTIMIZE FOR (@VARIABLE = VALUE)

    This hint tells the query optimizer to use a particular value for a parameter when the query is compiled and optimized. The value is used only during optimization, that is, to generate the execution plan with the estimates we want, and not during query execution.

    -> Let us create procedure Proc4 as below,

    create proc [dbo].[proc4] (@city varchar(100))
    as
    begin
    declare @s_sno int, @s_sname varchar(255), @s_city varchar(255)
    select @s_sno=sno,@s_sname=sname,@s_city=city
    from tab1
    where city=@city
    option (optimize for (@city='Chennai'))
    end

    -> Executing procedure,

    [Screenshot 28]
    [Screenshot 29]

    -> Executing procedure,

    [Screenshot 30]
    [Screenshot 31]

    -> The problem arises when new data is inserted and a new city with more than 100,000 rows comes in, or when the data for city 'Chennai' is deleted.

    4) OPTIMIZE FOR (@VARIABLE = UNKNOWN)

    This hint advises the query optimizer to use statistical data (the density vector) instead of a specific value for the parameter during query optimization. It basically makes the optimizer behave the same way it would if no parameter value were known.

    -> We can use this option to work around issues arising due to previous option.

    -> Let us create procedure Proc5 as below,

    create proc [dbo].[proc5] (@city varchar(100))
    as
    begin
    declare @s_sno int, @s_sname varchar(255), @s_city varchar(255)
    select @s_sno=sno,@s_sname=sname,@s_city=city
    from tab1
    where city=@city
    option (optimize for (@city UNKNOWN))
    end

    -> Executing procedure,

    [Screenshot 32]
    [Screenshot 33]

    -> Executing procedure,

    [Screenshot 34]
    [Screenshot 35]

    -> This is similar to using a local variable.

    5) Trace flag 4136

    -> Enabling trace flag 4136 at the instance level:

    [Screenshot 36]

    -> Executing procedure,

    [Screenshot 37]

    -> Executing procedure,

    [Screenshot 38]

    -> Enabling trace flag 4136 at the instance level disables parameter sniffing for all queries, including queries that benefit from parameter sniffing.

    -> The trace flag can also be used within a query as below (note that OPTION (QUERYTRACEON) requires sysadmin permission),

    create proc [dbo].[proc6] (@city varchar(100))
    as
    begin
    declare @s_sno int, @s_sname varchar(255), @s_city varchar(255)
    select @s_sno=sno,@s_sname=sname,@s_city=city
    from tab1
    where city=@city
    option (querytraceon 4136)
    end

    -> Executing procedure,

    [Screenshot 39]
    -> Executing procedure,

    [Screenshot 40]

    6) Creating a different stored procedure for each city

    -> We can create separate stored procedures for different cities. This requires us to check the table data periodically and create procedures as appropriate. Not a great idea, but it can serve as a workaround.

    7) Disabling parameter sniffing at the database level, starting with SQL Server 2016

    -> Starting with SQL Server 2016, parameter sniffing is available as a database scoped configuration.

    -> “Select * from sys.database_scoped_configurations” provides the details.

    -> The database properties dialog shows it as enabled by default.

    [Screenshot 41]

    -> We can disable it in the GUI or use “ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = OFF;”
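    -> From SQL Server 2016 SP1 onwards there is also a per-query form of the same switch (a sketch, not shown above): the DISABLE_PARAMETER_SNIFFING hint turns sniffing off for a single statement instead of the whole database:

```sql
select sno,sname,city
from tab1
where city=@city
option (use hint ('DISABLE_PARAMETER_SNIFFING'))
```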

    We have several options to resolve parameter sniffing issues. We should test these options and decide based on the results.

    Thank You,
    Vivek Janakiraman

    Disclaimer:
    The views expressed on this blog are mine alone and do not reflect the views of my company or anyone else. All postings on this blog are provided “AS IS” with no warranties, and confer no rights.