SQL Server 2022 Performance Tuning Tips: Optimizing for Peak Efficiency

SQL Server 2022 introduces numerous enhancements aimed at improving performance and efficiency. Whether you’re dealing with query optimization, index management, or memory allocation, these new features and best practices can help you achieve significant performance gains. In this blog, we’ll explore specific tuning tips and tricks for SQL Server 2022, highlighting changes that enhance query performance without requiring any code changes. We’ll also address how these improvements solve longstanding issues from previous versions. Practical T-SQL examples will be provided to help you implement these tips. Let’s dive in! 🎉

Key SQL Server 2022 Enhancements for Performance Tuning ⚙️

  1. Intelligent Query Processing (IQP) Enhancements: SQL Server 2022 extends the IQP family (Adaptive Joins, Batch Mode on Rowstore) with additions such as Parameter Sensitive Plan Optimization and Degree of Parallelism (DOP) Feedback.
  2. Automatic Plan Correction: Detects plan-choice regressions via Query Store and automatically forces the last known good plan.
  3. Finer Control over Parallelism: MAXDOP can be tuned at the server, database, and query level, and DOP Feedback adjusts over-parallelized queries, improving the performance of complex workloads.
  4. Optimized TempDB Usage: Improvements in TempDB management, including memory-optimized metadata, reduce contention and improve query performance.

Specific Tuning Tips and Tricks 🔧

1. Leverage Intelligent Query Processing (IQP) 🧠

SQL Server 2022 builds on the IQP feature set, which adapts to your workload to optimize performance. Here are some specific IQP features to take advantage of:

  • Batch Mode on Rowstore: This feature allows batch mode processing on traditional rowstore tables, providing significant performance improvements for analytical workloads.

Example Query:

-- Without Batch Mode on Rowstore (disabled for this statement)
SELECT SUM(LineTotal)
FROM Sales.SalesOrderDetail
WHERE ProductID = 707
OPTION (USE HINT ('DISALLOW_BATCH_MODE'));

-- With Batch Mode on Rowstore (explicitly requested; USE HINT belongs in the OPTION clause)
SELECT SUM(LineTotal)
FROM Sales.SalesOrderDetail
WHERE ProductID = 707
OPTION (USE HINT ('ALLOW_BATCH_MODE'));
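Batch Mode on Rowstore is governed at the database level by the BATCH_MODE_ON_ROWSTORE scoped configuration, which is on by default at compatibility level 150 and higher; the query hint above is only needed for one-off overrides. A minimal sketch of toggling it database-wide:

-- On by default at compatibility level 150+; the USE HINT above overrides this per query
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = OFF;
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = ON;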
  • Adaptive Joins: SQL Server dynamically chooses the best join strategy (nested loop, hash join, etc.) during query execution, optimizing performance based on actual data distribution.

Example Query:

-- Without Adaptive Joins (compatibility level below 140, or batch mode unavailable)
SELECT p.ProductID, p.Name, SUM(s.OrderQty) AS TotalSold
FROM Production.Product p
JOIN Sales.SalesOrderDetail s ON p.ProductID = s.ProductID
GROUP BY p.ProductID, p.Name;

-- With Adaptive Joins: the query text is identical. Under compatibility level 140+
-- and a batch mode plan, the optimizer defers the hash-versus-nested-loop decision
-- until run time, based on the actual row count.
SELECT p.ProductID, p.Name, SUM(s.OrderQty) AS TotalSold
FROM Production.Product p
JOIN Sales.SalesOrderDetail s ON p.ProductID = s.ProductID
GROUP BY p.ProductID, p.Name;
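If you ever need to rule adaptive joins in or out while troubleshooting a regression, the documented DISABLE_BATCH_MODE_ADAPTIVE_JOINS hint forces a non-adaptive plan for a single statement, making it easy to compare the two plans:

-- Force a non-adaptive plan for this statement only, for comparison
SELECT p.ProductID, p.Name, SUM(s.OrderQty) AS TotalSold
FROM Production.Product p
JOIN Sales.SalesOrderDetail s ON p.ProductID = s.ProductID
GROUP BY p.ProductID, p.Name
OPTION (USE HINT ('DISABLE_BATCH_MODE_ADAPTIVE_JOINS'));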

2. Utilize Automatic Plan Correction 🛠️

Automatic Plan Correction builds on Query Store: it tracks query performance over time, detects plan-choice regressions, and automatically forces the last known good plan when a regression is identified.

Enabling Automatic Plan Correction:

-- Automatic Plan Correction requires Query Store to be enabled on the database
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);
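Once automatic tuning is on, SQL Server records what it found and what it did in the sys.dm_db_tuning_recommendations DMV. A quick way to review recommendations and their current state:

-- Review plan-correction recommendations, their confidence score, and the corrective script
SELECT name,
       reason,
       score,
       JSON_VALUE(state, '$.currentValue') AS current_state,
       JSON_VALUE(details, '$.implementationDetails.script') AS corrective_script
FROM sys.dm_db_tuning_recommendations;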

3. Optimize TempDB Usage 🗄️

TempDB can often become a bottleneck in SQL Server. SQL Server 2022 introduces several enhancements to manage TempDB more efficiently:

  • Memory-Optimized TempDB Metadata: Reduces contention on system tables in TempDB, particularly beneficial for workloads with heavy use of temporary tables.

Enabling Memory-Optimized TempDB Metadata:

-- Takes effect after the instance is restarted
ALTER SERVER CONFIGURATION
SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;
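The setting only takes effect after the instance is restarted; once it is back up, you can confirm it is active:

-- Returns 1 when memory-optimized tempdb metadata is in effect
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized') AS IsTempdbMetadataMemoryOptimized;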

4. Fine-Tune Parallelism Settings 🏃‍♂️

SQL Server 2022 offers more granular control over parallelism, which can improve the performance of complex queries by better utilizing CPU resources.

Setting MAXDOP (Maximum Degree of Parallelism):

-- Setting MAXDOP for the server ('max degree of parallelism' is an advanced option)
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max degree of parallelism', 8;
RECONFIGURE;

-- Setting MAXDOP for a specific query
SELECT * 
FROM LargeTable 
OPTION (MAXDOP 4);
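MAXDOP can also be scoped to a single database, which is often a better fit than the server-wide value on instances hosting mixed workloads. A minimal sketch:

-- Overrides the server-wide MAXDOP for the current database only
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;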

Solving Previous Issues with SQL Server 2022 🔄

1. Resolving Parameter Sniffing Issues 🎯

Parameter sniffing can cause a plan compiled for one parameter value to be reused for values with very different row counts, leading to performance issues. SQL Server 2022's Parameter Sensitive Plan Optimization addresses this by caching multiple plans for a single parameterized query, each optimized for a different range of parameter values.

Example T-SQL Query:

-- Enabling Parameter Sensitive Plan Optimization
ALTER DATABASE SCOPED CONFIGURATION 
SET PARAMETER_SENSITIVE_PLAN_OPTIMIZATION = ON;
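Parameter Sensitive Plan Optimization is only considered when the database runs at compatibility level 160, where the scoped configuration above is already ON by default; the explicit SET simply documents the intent. A sketch, with the database name as a placeholder:

-- PSP optimization requires compatibility level 160 (SQL Server 2022)
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 160;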

2. Handling Query Store Performance Overhead 📈

The Query Store feature in SQL Server 2022 has been enhanced to minimize performance overhead while still capturing valuable query performance data.

Best Practices:

  • Limit Data Capture: Configure Query Store to capture only significant queries (for example, QUERY_CAPTURE_MODE = AUTO) to reduce overhead; a configuration sketch follows this list.
  • Query Store on Secondary Replicas: In SQL Server 2022, Query Store can also capture query performance data for read-only workloads running on Always On Availability Group secondary replicas, so offloaded reporting queries are no longer invisible to tuning.
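A minimal sketch of a tightened Query Store configuration (the database name, sizes, and retention values are placeholders to adjust for your workload):

ALTER DATABASE [YourDatabase]
SET QUERY_STORE = ON
(
    OPERATION_MODE = READ_WRITE,
    QUERY_CAPTURE_MODE = AUTO,                        -- skip ad hoc queries with insignificant cost
    MAX_STORAGE_SIZE_MB = 1024,
    DATA_FLUSH_INTERVAL_SECONDS = 900,
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30)
);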

Business Use Case: E-Commerce Platform 🛒

Consider an e-commerce platform experiencing slow query performance during peak shopping seasons. By implementing SQL Server 2022’s performance tuning features, the platform can:

  • Improve Checkout Process Speed: Use IQP features like Batch Mode on Rowstore to optimize complex analytical queries that calculate discounts and shipping costs.
  • Enhance Product Search Efficiency: Utilize Adaptive Joins to dynamically optimize search queries based on the data distribution of products.
  • Reduce Database Contention: Apply TempDB optimization techniques to handle the high volume of temporary data generated during transactions.

Conclusion 🎉

SQL Server 2022 offers a wealth of new features and enhancements designed to optimize performance and solve long-standing issues. By leveraging Intelligent Query Processing, Automatic Plan Correction, and other tuning tips, you can achieve significant performance gains without extensive code changes. Whether you’re running a high-traffic e-commerce platform or a complex analytical workload, these tuning tips can help you get the most out of your SQL Server 2022 environment.

For more tutorials and tips on SQL Server, including performance tuning and database management, be sure to check out our JBSWiki YouTube channel.

Thank You,
Vivek Janakiraman

Disclaimer:
The views expressed on this blog are mine alone and do not reflect the views of my company or anyone else. All postings on this blog are provided “AS IS” with no warranties and confer no rights.

SQL Server Unused Indexes: Identification, Monitoring, and Management

Indexes are crucial for optimizing query performance in SQL Server. However, not all indexes are used effectively; some might remain unused, consuming space and resources unnecessarily. In this comprehensive blog, we’ll delve into the concept of unused indexes, how to identify them, the potential risks of deleting them, and best practices for managing them. We’ll also explore real-world scenarios and provide the necessary T-SQL scripts for monitoring and handling unused indexes.


🔍 What is an Unused Index?

An unused index is an index that exists in the database but is not used by the SQL Server query optimizer. This could be due to several reasons:

  1. Outdated Query Patterns: The index may have been useful for queries that are no longer executed.
  2. Changes in Data Distribution: Alterations in data patterns may render the index less effective or redundant.
  3. Incorrect Index Design: The index might not align with the current workload or data structure.

Unused indexes can lead to unnecessary resource consumption, such as additional storage space and increased overhead during data modification operations (INSERT, UPDATE, DELETE).

Risks of Removing Unused Indexes ⚠️

While removing unused indexes can free up resources, it can also lead to unexpected performance issues if not done carefully. Here are some potential risks:

  1. Impact on Rarely Used Queries: An index might appear unused but could be critical for infrequent queries, such as quarterly reports.
  2. Incorrect Monitoring Period: A short monitoring period might not capture all usage patterns, leading to incorrect conclusions.

Best Practices for Monitoring Unused Indexes 📊

  1. Extended Monitoring Period: Monitor index usage over an extended period (e.g., several months) to capture all usage patterns.
  2. Analyze Workload Patterns: Understand your workload and identify critical periods (e.g., end-of-month processing).
  3. Test Before Removing: Always test the impact of removing an index in a non-production environment.

Advantages of Managing Unused Indexes 🌟

  1. Improved Performance: Reducing the number of unused indexes can improve performance for data modification operations.
  2. Reduced Storage Costs: Freeing up storage space by removing unused indexes.
  3. Simplified Maintenance: Fewer indexes to maintain and monitor.

🔧 How to Identify Unused Indexes

Identifying unused indexes involves monitoring the usage statistics provided by SQL Server. The sys.dm_db_index_usage_stats dynamic management view (DMV) is a valuable resource for this purpose.

📋 T-SQL Script to Identify Unused Indexes

The following script retrieves information about user-created nonclustered indexes that have had no read activity (seeks, scans, or lookups) since the last server restart:

SELECT 
    s.name AS SchemaName,
    o.name AS TableName,
    i.name AS IndexName,
    i.index_id,
    ISNULL(u.user_seeks, 0)   AS user_seeks,
    ISNULL(u.user_scans, 0)   AS user_scans,
    ISNULL(u.user_lookups, 0) AS user_lookups,
    ISNULL(u.user_updates, 0) AS user_updates
FROM 
    sys.indexes AS i
JOIN 
    sys.objects AS o ON i.object_id = o.object_id
JOIN 
    sys.schemas AS s ON o.schema_id = s.schema_id
LEFT JOIN 
    sys.dm_db_index_usage_stats AS u 
    ON i.object_id = u.object_id 
    AND i.index_id = u.index_id
    AND u.database_id = DB_ID()            -- usage stats are recorded per database
WHERE 
    i.is_primary_key = 0
    AND i.is_unique_constraint = 0
    AND i.type_desc = 'NONCLUSTERED'       -- ignore heaps and clustered indexes
    AND o.type = 'U'                       -- user tables only
    AND ISNULL(u.user_seeks, 0) + ISNULL(u.user_scans, 0) + ISNULL(u.user_lookups, 0) = 0
ORDER BY 
    s.name, o.name, i.name;

The script filters out primary key and unique-constraint indexes and focuses on user-created nonclustered indexes with no recorded reads since the last server restart; the user_updates column shows the write overhead you are still paying to maintain each one.
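Keep in mind that sys.dm_db_index_usage_stats is cleared whenever the instance restarts, so it is worth checking how long the counters have actually been accumulating before drawing conclusions:

-- How long have the usage statistics been collecting?
SELECT sqlserver_start_time,
       DATEDIFF(DAY, sqlserver_start_time, GETDATE()) AS days_of_usage_data
FROM sys.dm_os_sys_info;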


⚠️ Potential Issues with Deleting Unused Indexes

While removing unused indexes can free up resources, it also carries potential risks:

  1. Hidden Usage: Some indexes may not show usage in the DMV statistics if they are used infrequently or during specific maintenance operations.
  2. Future Requirements: An index deemed unused might be needed for future queries or batch jobs, especially if they run infrequently (e.g., quarterly reports).
  3. Inaccurate Assessment: Short monitoring periods can lead to incorrect conclusions about an index’s utility.

⏲️ Best Time Frame for Monitoring

It’s advisable to monitor index usage over a prolonged period, ideally encompassing a full business cycle (e.g., monthly, quarterly). This ensures that all potential usage patterns, including infrequent but critical operations, are accounted for.
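Because the DMV counters reset on every restart, a practical way to cover a full business cycle is to snapshot them into a permanent table on a schedule (for example, from a daily SQL Server Agent job) and analyze the accumulated history. A minimal sketch; the table name is illustrative:

-- One-time setup: a table to accumulate usage snapshots
CREATE TABLE dbo.IndexUsageSnapshot
(
    capture_date  DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    database_id   INT       NOT NULL,
    object_id     INT       NOT NULL,
    index_id      INT       NOT NULL,
    user_seeks    BIGINT    NOT NULL,
    user_scans    BIGINT    NOT NULL,
    user_lookups  BIGINT    NOT NULL,
    user_updates  BIGINT    NOT NULL
);

-- Scheduled step: record the current counters for this database
INSERT INTO dbo.IndexUsageSnapshot (database_id, object_id, index_id, user_seeks, user_scans, user_lookups, user_updates)
SELECT database_id, object_id, index_id, user_seeks, user_scans, user_lookups, user_updates
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID();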


🛠️ Handling Unused Indexes

Best Practices for Managing Unused Indexes

  1. Prolonged Monitoring: As mentioned, extend the monitoring period to capture all usage patterns.
  2. Review Before Deletion: Before removing an index, consult with application developers and database administrators to understand its purpose.
  3. Testing and Staging: Always test the impact of removing an index in a staging environment before applying changes to production.
  4. Documentation: Maintain documentation of all indexes and their intended purpose to avoid unintentional removal.

📜 Example Scenarios

1. Beneficial Removal of an Unused Index

Scenario: A retail company finds an unused index on a transactional table that has not been utilized for over a year. The index occupies significant disk space and slows down data modification operations.

Action: After thorough analysis and consultation, the company decides to remove the index, resulting in improved performance and reduced storage costs.

T-SQL for Removing the Index:

DROP INDEX IndexName ON SchemaName.TableName;
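A lower-risk alternative when you are not completely certain is to disable the index first: a disabled nonclustered index is ignored by the optimizer and is no longer maintained during writes, but its definition stays in metadata, so it can be brought back with a rebuild instead of being recreated from scratch:

-- Disable the index (reversible; do not do this to a clustered index, which would make the table inaccessible)
ALTER INDEX IndexName ON SchemaName.TableName DISABLE;

-- If it turns out to be needed after all, re-enable it by rebuilding
ALTER INDEX IndexName ON SchemaName.TableName REBUILD;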

2. Problematic Removal of a Used Index

Scenario: A financial services company removes an index that appears unused based on a short monitoring period. The index was actually used for a quarterly reconciliation job, leading to significantly slower performance and extended processing times during the next quarter.

Lesson Learned: The company learned the importance of comprehensive monitoring and consultation before making changes.


🏢 Business Use Cases

Cost Optimization

Removing unused indexes can free up valuable disk space and reduce maintenance overhead, leading to cost savings. This is particularly beneficial for organizations with large databases where storage costs are a significant concern.

Performance Enhancement

By eliminating unnecessary indexes, the performance of data modification operations can be improved, leading to faster transaction processing and more efficient database operations.


🏁 Conclusion

Managing unused indexes in SQL Server requires careful analysis and a comprehensive approach. While removing unused indexes can provide benefits like reduced storage costs and improved performance, it is crucial to ensure that the indexes are genuinely unused and not required for infrequent operations. By following best practices and leveraging the right tools, you can optimize your SQL Server environment effectively.

For any questions or further guidance, feel free to reach out or leave a comment! Happy optimizing! 🚀

For more tutorials and tips on SQL Server, including performance tuning and database management, be sure to check out our JBSWiki YouTube channel.

Thank You,
Vivek Janakiraman

Disclaimer:
The views expressed on this blog are mine alone and do not reflect the views of my company or anyone else. All postings on this blog are provided “AS IS” with no warranties and confer no rights.

Proactively Managing Transactional Replication Latency with SQL Server

Transactional replication is a critical component of many SQL Server environments, providing high availability, load balancing, and other essential benefits. However, managing replication latency (the delay between a change committing on the publisher and it being reflected on the subscriber) is vital for ensuring system performance and data integrity. In this blog post, we’ll explore a proactive approach to monitor and alert on replication latency, helping database administrators (DBAs) maintain optimal system health.

The Issue:

Replication latency can go unnoticed until it impacts system performance or data accuracy, leading to stale data at the subscriber or business disruptions. Traditional monitoring techniques may not provide real-time alerts, or may require significant manual intervention, making them less effective for immediate latency identification and resolution.

The Script:

To address this challenge, we introduce a SQL script by Vivek Janakiraman from JBSWiki that monitors transactional replication latency in SQL Server environments. It posts tracer tokens to specified publications and measures the time taken for those tokens to move through the replication components, providing a clear picture of any latency present in the system.

/*
Author: Vivek Janakiraman
Company: JBSWiki
Description: This script is used to alert in case there is Transactional replication Log reader or distribution agent latency.
It posts tracer tokens to specified publications and measures the latency to the distributor and subscriber.
*/

-- Switch to the publisher database to insert tracer tokens.
USE [Publisher_Database_Here] -- Use your publisher database name here.
-- Insert tracer tokens into the specified publications.
EXEC sys.sp_posttracertoken @publication = 'Publication_Name'  -- Replace with the publication to be monitored.
EXEC sys.sp_posttracertoken @publication = 'Publication_Name1' -- Replace with the publication to be monitored.
-- Wait for 5 minutes to allow the tokens to propagate.
WAITFOR DELAY '00:05:00'

-- Switch to the distribution database to query latency information.
USE distribution
;WITH LatestEntries AS (
-- Select the latest entries for each publication and agent.
SELECT publication_id, agent_id, MAX(publisher_commit) AS MaxDate
FROM MStracer_tokens t
JOIN MStracer_history h ON t.tracer_id = h.parent_tracer_id
GROUP BY publication_id, agent_id
)
-- Select latency information for the latest tokens.
SELECT c.name, t.publication_id, h.agent_id, t.publisher_commit,
ISNULL(DATEDIFF(s, t.publisher_commit, t.distributor_commit), 299) AS [Time To Dist (sec)], -- NULL means the token has not reached the distributor yet; 299 forces it over the alert threshold
ISNULL(DATEDIFF(s, t.distributor_commit, h.subscriber_commit), 299) AS [Time To Sub (sec)]  -- NULL means the token has not reached the subscriber yet
INTO #REPL_LATENCY
FROM MStracer_tokens t
JOIN MStracer_history h ON t.tracer_id = h.parent_tracer_id
JOIN distribution.dbo.MSdistribution_agents c ON h.agent_id = c.id
JOIN LatestEntries le ON t.publication_id = le.publication_id AND h.agent_id = le.agent_id AND t.publisher_commit = le.MaxDate
ORDER BY t.publisher_commit DESC

-- Capture any rows that exceed the acceptable latency threshold (30 seconds here).
-- The table is created unconditionally so the statements below never reference a missing object.
SELECT name, publication_id, agent_id, publisher_commit, [Time To Dist (sec)], [Time To Sub (sec)]
INTO #REPL_LATENCY_Email
FROM #REPL_LATENCY
WHERE ([Time To Dist (sec)] > 30 OR [Time To Sub (sec)] > 30)

-- Prepare the HTML body content for the email alert.
DECLARE @body_content NVARCHAR(MAX);
SET @body_content = N'
<style>
table.GeneratedTable {
width: 100%;
background-color: #D3D3D3;
border-collapse: collapse;
border-width: 2px;
border-color: #A9A9A9;
border-style: solid;
color: #000000;
}
table.GeneratedTable td, table.GeneratedTable th {
border-width: 2px;
border-color: #A9A9A9;
border-style: solid;
padding: 3px;
}
table.GeneratedTable thead {
background-color: #A9A9A9;
}
</style>
<table class="GeneratedTable">
<thead>
<tr>
<th>name</th>
<th>publication_id</th>
<th>agent_id</th>
<th>publisher_commit</th>
<th>[Time To Dist (sec)]</th>
<th>[Time To Sub (sec)]</th>
</tr>
</thead>
<tbody>' +
CAST(
(SELECT td = name, '',
td = publication_id, '',
td = agent_id, '',
td = publisher_commit, '',
td = [Time To Dist (sec)], '',
td = [Time To Sub (sec)], ''
FROM #REPL_LATENCY_Email
FOR XML PATH('tr'), TYPE
) AS NVARCHAR(MAX)
) +
N'</tbody>
</table>';

-- Send an email alert if there is any latency issue found.
IF EXISTS (SELECT 1 FROM #REPL_LATENCY_Email)
BEGIN
EXEC msdb.dbo.sp_send_dbmail @profile_name = 'JBSWIKI',
@body = @body_content,
@body_format = 'HTML',
@recipients = 'jvivek2k1@yahoo.com',
@subject = 'ALERT: Transactional Replication Latency Alert';
END

-- Cleanup temporary tables.
DROP TABLE #REPL_LATENCY
DROP TABLE #REPL_LATENCY_Email

The Solution:

The script works by first posting tracer tokens to the specified publications within the publisher database. It then waits for a predetermined amount of time (defaulted to 5 minutes in the script) to allow the tokens to propagate through the system. Following this, the script measures the latency to the distributor and subscriber, providing a detailed report of the time taken in each stage of the replication process.

This information is then used to generate an HTML-formatted email alert if the latency exceeds predefined thresholds (30 seconds in the provided script), allowing for immediate action to be taken. The use of HTML formatting in the email ensures that the information is presented in an easily digestible format, facilitating quick understanding and response by the DBA.
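To make the check truly proactive, it can be wrapped in a stored procedure and run on a schedule with SQL Server Agent. The sketch below assumes a hypothetical wrapper procedure named dbo.usp_CheckReplicationLatency containing the script above; the job name and schedule are illustrative and should be adapted to your environment:

USE msdb;
GO

EXEC dbo.sp_add_job
    @job_name = N'JBSWiki - Replication Latency Check';

EXEC dbo.sp_add_jobstep
    @job_name = N'JBSWiki - Replication Latency Check',
    @step_name = N'Post tracer tokens and alert',
    @subsystem = N'TSQL',
    @database_name = N'Publisher_Database_Here',
    @command = N'EXEC dbo.usp_CheckReplicationLatency;';  -- hypothetical wrapper around the script above

EXEC dbo.sp_add_jobschedule
    @job_name = N'JBSWiki - Replication Latency Check',
    @name = N'Every 30 minutes',
    @freq_type = 4,            -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,     -- minutes
    @freq_subday_interval = 30;

EXEC dbo.sp_add_jobserver
    @job_name = N'JBSWiki - Replication Latency Check',
    @server_name = N'(LOCAL)';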

Conclusion:

Proactive monitoring and management of transactional replication latency are paramount for maintaining the health and performance of SQL Server environments. The script provided offers a straightforward and effective solution for DBAs to stay ahead of potential replication issues. By automating the process of latency detection and alerting, this approach not only saves valuable time but also helps in preventing the negative impact of replication latency on business operations.

Remember, while this script serves as a valuable tool in your monitoring arsenal, it’s also important to tailor the solution to your specific environment and requirements. Regularly reviewing and adjusting the latency thresholds and monitoring frequency will ensure you continue to get the most out of your replication setup.

For more tutorials and tips on SQL Server, including performance tuning and database management, be sure to check out our JBSWiki YouTube channel.

Thank You,
Vivek Janakiraman

Disclaimer:
The views expressed on this blog are mine alone and do not reflect the views of my company or anyone else. All postings on this blog are provided “AS IS” with no warranties and confer no rights.