SQL Server Always On Series: Availability group DDL operations are permitted only when you are using the master database


Introduction

During a recent attempt to perform a manual failover of an Always On Availability Group from JBSAG1 to JBSAG2 using Microsoft SQL Server Management Studio (SSMS), an unexpected error disrupted the process: Microsoft SQL Server Error 35208. In this detailed account, we’ll explore the encountered issue, outline the steps taken to address it, and ultimately achieve a successful manual failover.

Issue
Upon initiating the Always On Availability Group manual failover, the process encountered a hurdle with the following error:

TITLE: Microsoft SQL Server Management Studio
Manual Failover failed (Microsoft.SqlServer.Management.HadrTasks)

ADDITIONAL INFORMATION:
Failed to perform a manual failover of the availability group ‘JBSWiki’ to server instance ‘JBSAG2’. (Microsoft.SqlServer.Management.HadrModel)

For help, click: https://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=16.100.47021.0&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&LinkId=20476

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)

Availability group DDL operations are permitted only when you are using the master database. Run the ‘USE master’ statement, and retry your availability group DDL statement. (Microsoft SQL Server, Error: 35208)
For help, click: https://docs.microsoft.com/sql/relational-databases/errors-events/mssqlserver-35208-database-engine-error

Navigating through this error was the initial challenge, particularly when attempting the failover using the SSMS graphical user interface (GUI). Executing the ‘USE master’ statement in this context presented uncertainties, prompting a reevaluation of the failover approach.

Solution
To address the dilemma, the following steps were taken:

GUI Failover Attempt:

  • Initially, the failover was attempted through the SSMS GUI, raising questions about how to execute the ‘USE master’ statement within the graphical interface.

Scripted Failover Action:

  • The failover wizard was restarted, and this time, instead of clicking Finish at the end, the failover action was scripted out for manual execution.

Manual Execution of Failover Command:

  • The failover command was manually executed from the SSMS SQLCMD query window, successfully completing the failover process.
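The scripted failover boils down to statements along these lines (a sketch of what the wizard generates; the availability group name JBSWiki comes from the error message, and the script is run on the target replica JBSAG2):

USE master;
GO
-- Error 35208 is raised if this DDL is run from any other database context.
ALTER AVAILABILITY GROUP [JBSWiki] FAILOVER;
GO

Executing this from a query window whose context is master completes the failover.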

Database Context Discovery:

  • Opening a new query window revealed that its database context was set to a user database (JBDB) instead of master, leading to a pivotal realization.

Connection Options Adjustment:

  • This is when I realized that I might have connected to the SQL Server instance with a user database specified in the “Connect to database” field under “Options <<” when making the connection from SSMS. Please check the screenshot below.

After realizing this, I disconnected the existing sessions in SSMS, changed “Connect to database” under “Options <<” to “master”, and reconnected to SQL Server instances JBSAG1 and JBSAG2.

Success After Correction:

  • Following these corrective actions, subsequent manual failover and failback attempts via SSMS were executed seamlessly.

Summary
In summary, this journey through troubleshooting manual failover with error 35208 underscored the importance of the database context, especially when initiating DDL operations. The solution involved a meticulous adjustment of connection options within SSMS, ensuring a connection to the ‘master’ database before attempting manual failover. This article provides a detailed account of the encountered challenge, the thought process behind the solution, and the successful resolution achieved through careful steps and insights.

Thank You,
Vivek Janakiraman

Disclaimer:
The views expressed on this blog are mine alone and do not reflect the views of my company or anyone else. All postings on this blog are provided “AS IS” with no warranties and confer no rights.

MODIFY FILE failed for database ‘tempdb’, file id 1. Size of file (41435136 KB) is greater than MAXSIZE (26214400 KB). (Microsoft SQL Server, Error: 5040)


Introduction:

In the dynamic realm of Azure SQL Managed Instances, configuring database file sizes is a routine task. However, challenges can arise, as I recently encountered while attempting to modify the Tempdb database file size. This blog post details the steps taken to address a perplexing error message that surfaced during this process.

Problem Statement:
The objective was straightforward: adjust the maximum file size of the tempdb database on an Azure SQL Managed Instance to 25 GB.
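In T-SQL, the attempted change looked roughly like this (the logical file name tempdev is the default for the tempdb data file):

USE master;
GO
-- Attempt to cap the tempdb data file at 25 GB (25,600 MB).
-- This is the statement that failed with error 5040.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, MAXSIZE = 25600MB);
GO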

However, both through the graphical user interface (GUI) and Transact-SQL (T-SQL) commands, an error persisted:

TITLE: Microsoft SQL Server Management Studio
——————————
Alter failed for Database ‘tempdb’.  (Microsoft.SqlServer.Smo)
For help, click: https://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=16.100.47008.0&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Alter+Database&LinkId=20476
——————————
ADDITIONAL INFORMATION:
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
——————————
MODIFY FILE failed for database ‘tempdb’, file id 1. Size of file (41435136 KB) is greater than MAXSIZE (26214400 KB). (Microsoft SQL Server, Error: 5040)
For help, click: https://docs.microsoft.com/sql/relational-databases/errors-events/mssqlserver-5040-database-engine-error
——————————


The error message suggests that the existing size of the tempdb data file (tempdev) exceeds the maximum size I intended to set. Contrary to this indication, the actual size of tempdev is only 16 MB, and my intention was to set the maximum file size to 25 GB.

The error message reports the file size as 41,435,136 KB, which is roughly 39.5 GB. Attempting to set the maximum file size of tempdev below 39.5 GB kept producing the error, while setting it to 40 GB succeeded.

This particular Azure SQL Managed Instance is of the Business Critical tier with 80 vCores and includes 4 replicas. To dig deeper into the issue, I connected to the read replica of this managed instance from a query window, using the ApplicationIntent=ReadOnly connection parameter.


I connected to the user database JBSWiki to confirm that I was indeed connected to the read replica, as verified by executing the following query:

SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability');

Upon execution, the result is READ_ONLY, confirming the connection to the read replica. It’s crucial to note that running the same query in the context of tempdb returns READ_WRITE even on a replica. To avoid confusion, run the query against a user database when validating a read-replica connection.

I executed the following query to check the tempdb file sizes on the read replica (note that size and max_size are reported in 8-KB pages):

USE tempdb;
GO
SELECT file_id, type, type_desc, name, physical_name,
       size, max_size, growth, is_percent_growth
FROM sys.database_files;

The output showed that tempdev on the read replica was sized at 39.5 GB, which is what blocked the attempt to lower the maximum file size on the managed instance. To address this, I reduced the size of tempdb on the read replica to 15 GB using DBCC SHRINKFILE. Following this adjustment, I successfully set the maximum file size of the tempdb data file to 25 GB.
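While connected to the read replica (ApplicationIntent=ReadOnly), the shrink can be expressed along these lines; the target size for DBCC SHRINKFILE is given in MB:

USE tempdb;
GO
-- Shrink the tempdev data file on the read replica to 15 GB (15,360 MB).
DBCC SHRINKFILE (tempdev, 15360);
GO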

Summary:
Navigating the nuances of database file sizes in an Azure SQL Managed Instance, particularly when dealing with read replicas, demands a comprehensive approach. Verifying the reported sizes, understanding the intricacies of database contexts, and proactive measures such as shrinking the Tempdb on read replicas are pivotal for resolving challenges like the ‘MODIFY FILE failed’ error. This journey underscores the significance of a meticulous troubleshooting process, ensuring a smooth configuration experience within the Azure SQL environment.

Thank You,
Vivek Janakiraman


Unveiling Query Performance Insights: Leveraging T-SQL with SQL Server Query Store

Introduction

In the dynamic realm of database management, monitoring and optimizing query performance is paramount. SQL Server offers the Query Store feature, a powerful tool that captures and stores detailed information about query execution plans and runtime statistics. In this blog post, we’ll delve into leveraging T-SQL to extract valuable insights from the Query Store, focusing on queries executed within the last 7 days.

Query Store Essentials

Before we dive into T-SQL scripts, let’s briefly revisit the Query Store. It acts as a repository for query-related data, allowing database administrators to analyze historical performance metrics and troubleshoot issues efficiently.
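Before querying the Query Store views, it is worth confirming that the feature is enabled on the target database. A quick check might look like this (the database name JBSWiki is illustrative):

-- Check the Query Store state for the current database.
SELECT actual_state_desc, desired_state_desc
FROM sys.database_query_store_options;

-- Enable Query Store if it is off.
ALTER DATABASE [JBSWiki] SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);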

Comprehensive Query Analysis in the Last 7 Days

To get a comprehensive view of queries executed in the last 7 days, we can use the following T-SQL script:

-- T-SQL script to analyze queries executed in the last 7 days

SELECT
p.query_id query_id,
q.object_id object_id,
ISNULL(OBJECT_NAME(q.object_id),'') object_name,
qt.query_sql_text query_sql_text,
ROUND(CONVERT(float, SUM(rs.avg_duration*rs.count_executions))*0.001,2) total_duration,
ROUND(CONVERT(float, SUM(rs.avg_cpu_time*rs.count_executions))*0.001,2) total_cpu_time,
ROUND(CONVERT(float, SUM(rs.avg_logical_io_reads*rs.count_executions))*8,2) total_logical_io_reads,
ROUND(CONVERT(float, SUM(rs.avg_physical_io_reads*rs.count_executions))*8,2) total_physical_io_reads,
ROUND(CONVERT(float, SUM(rs.avg_logical_io_writes*rs.count_executions))*8,2) total_logical_io_writes,
SUM(rs.count_executions) count_executions,
COUNT(distinct p.plan_id) num_plans
FROM sys.query_store_runtime_stats rs
JOIN sys.query_store_plan p ON p.plan_id = rs.plan_id
JOIN sys.query_store_query q ON q.query_id = p.query_id
JOIN sys.query_store_query_text qt ON q.query_text_id = qt.query_text_id
WHERE rs.first_execution_time BETWEEN (getdate()-7) AND (getdate())
GROUP BY p.query_id, qt.query_sql_text, q.object_id
HAVING COUNT(distinct p.plan_id) >= 1
ORDER BY count_executions DESC

This script provides a detailed breakdown of query performance for the last 7 days, including execution counts, total duration and CPU time (converted from microseconds to milliseconds), and logical/physical I/O (converted from 8-KB pages to KB).

Specific Query Search Example

Suppose you want to focus on queries whose text begins with a specific phrase within the last 7 days. The following T-SQL script demonstrates how to achieve this:

-- T-SQL script to search for a specific query in the last 7 days

SELECT
p.query_id query_id,
q.object_id object_id,
ISNULL(OBJECT_NAME(q.object_id),'') object_name,
qt.query_sql_text query_sql_text,
ROUND(CONVERT(float, SUM(rs.avg_duration*rs.count_executions))*0.001,2) total_duration,
ROUND(CONVERT(float, SUM(rs.avg_cpu_time*rs.count_executions))*0.001,2) total_cpu_time,
ROUND(CONVERT(float, SUM(rs.avg_logical_io_reads*rs.count_executions))*8,2) total_logical_io_reads,
ROUND(CONVERT(float, SUM(rs.avg_physical_io_reads*rs.count_executions))*8,2) total_physical_io_reads,
ROUND(CONVERT(float, SUM(rs.avg_logical_io_writes*rs.count_executions))*8,2) total_logical_io_writes,
SUM(rs.count_executions) count_executions,
COUNT(distinct p.plan_id) num_plans
FROM sys.query_store_runtime_stats rs
JOIN sys.query_store_plan p ON p.plan_id = rs.plan_id
JOIN sys.query_store_query q ON q.query_id = p.query_id
JOIN sys.query_store_query_text qt ON q.query_text_id = qt.query_text_id
WHERE rs.first_execution_time BETWEEN (getdate()-7) AND (getdate())
and qt.query_sql_text LIKE 'DELETE FROM%'
GROUP BY p.query_id, qt.query_sql_text, q.object_id
HAVING COUNT(distinct p.plan_id) >= 1
ORDER BY count_executions DESC

This example narrows the search to queries whose text starts with “DELETE FROM” executed in the last 7 days; to match the phrase anywhere in the query text, use LIKE '%DELETE FROM%' instead.

Conclusion

In conclusion, harnessing the power of T-SQL with the SQL Server Query Store provides a robust way to analyze and optimize query performance. The provided scripts offer a starting point for gaining valuable insights into the queries executed in the last 7 days, enabling proactive database management and performance tuning.

Stay tuned for more in-depth explorations of SQL Server features and best practices!

Thank You,
Vivek Janakiraman
