Resizing a SQL Server Database from a Single Data File to Multiple Data Files

-> The requirement is to spread the data of a database that currently has 1 data file across 4 data files.

-> Existing setup,

SQL Server : SQL Server 2017
Database Size : 2 TB
Number of Data file(s) : 1
Data file size : 1.8 TB
Log file size : 200 GB

-> Solution requirement,

Number of Data files : 4
Data File 1 Size : 650 GB
Data File 2 Size : 650 GB
Data File 3 Size : 650 GB
Data File 4 Size : 650 GB
Log file size : 200 GB

-> The tasks below were first carried out on a test server.

-> The production database was restored on a test server, and 3 additional data drives of 700 GB each were added.

-> The database recovery model was changed from Full to Simple.

-> Added 3 additional data files of 650 GB each, one on each of the 3 new drives.
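The T-SQL for this step was not shown above; a minimal sketch follows. The logical file names, drive letters, and autogrowth settings are assumptions — adjust them to your environment. The new files must be in the same filegroup as the primary data file (PRIMARY, the default) for the later EMPTYFILE step to be able to move data into them.

```sql
USE [master]
GO
-- Logical names, paths, and growth values below are illustrative.
ALTER DATABASE [DATABASE_NAME]
ADD FILE (NAME = N'DATA_FILE_2', FILENAME = N'E:\Data\DATA_FILE_2.ndf', SIZE = 650GB, FILEGROWTH = 1GB);
ALTER DATABASE [DATABASE_NAME]
ADD FILE (NAME = N'DATA_FILE_3', FILENAME = N'F:\Data\DATA_FILE_3.ndf', SIZE = 650GB, FILEGROWTH = 1GB);
ALTER DATABASE [DATABASE_NAME]
ADD FILE (NAME = N'DATA_FILE_4', FILENAME = N'G:\Data\DATA_FILE_4.ndf', SIZE = 650GB, FILEGROWTH = 1GB);
GO
```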

-> Executed the command below against the primary data file. DBCC SHRINKFILE with the EMPTYFILE option moves all user-object data out of the primary data file into the other data files in the same filegroup.

USE [DATABASE_NAME]
GO
DBCC SHRINKFILE (N'PRIMARY_DATA_FILE' , EMPTYFILE)
GO

-> The above command is very slow; in my case it took close to 13 hours to complete. While it was running I used the code below to check progress,


IF CONVERT(varchar(20), SERVERPROPERTY('productversion')) LIKE '8%'
    SELECT [name], fileid, filename,
        [size]/128.0 AS 'Total Size in MB',
        [size]/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS 'Available Space in MB',
        CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS 'Used Space in MB',
        100 - ((([size]/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0)/([size]/128.0))*100.0) AS 'Percentage Used'
    FROM sysfiles
ELSE
    SELECT @@SERVERNAME AS 'ServerName', DB_NAME() AS 'DBName', [name], file_id, physical_name,
        [size]/128.0 AS 'Total Size in MB',
        [size]/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS 'Available Space in MB',
        CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS 'Used Space in MB',
        100 - ((([size]/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0)/([size]/128.0))*100.0) AS 'Percentage Used'
    FROM sys.database_files
GO

-> Reviewing the output of the above query, specifically “Used Space in MB” and “Percentage Used”, shows whether the process is making progress.

-> I stopped the EMPTYFILE command when the primary data file’s “Used Space in MB” reached 471,860 MB.

-> I stopped it midway deliberately: emptying the primary data file completely would have left it with far more free space than the other files, and SQL Server’s proportional fill would then direct most new writes back to the primary file.

-> Shrank the primary data file from 1.8 TB to 650 GB.

-> There are instances where the shrink can take several hours if the data movement was not run to completion. In my case it completed in 3 minutes using the command below,


USE [DATABASE_NAME]
GO
DBCC SHRINKFILE (N'PRIMARY_DATA_FILE' , 665600, TRUNCATEONLY)
GO

-> In case shrinking the primary data file is very slow, you should instead allow the EMPTYFILE operation to run to completion. You will get the error message below when it finishes,

Msg 1119, Level 16, State 1, Line 20
Removing IAM page (3:5940460) failed because someone else is using the object that this IAM page belongs to.

-> You can get more details about the above error message from this article.

-> Reissue the shrink command and it will complete soon.

-> The drawback of emptying the primary data file completely is that you will then see more writes landing on the primary data file than on the other 3 data files, which can result in sub-optimal performance.

-> Perform a reindex on the database to remove any fragmentation introduced by the data movement.
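The reindex can be scripted; below is a minimal sketch that rebuilds every index on every user table. In practice you may prefer to target only fragmented indexes using sys.dm_db_index_physical_stats, or use an established maintenance solution.

```sql
USE [DATABASE_NAME]
GO
-- Illustrative only: rebuild all indexes on all user tables.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name)
             + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
             + N' REBUILD;' + CHAR(13)
FROM sys.indexes i
JOIN sys.tables t  ON t.object_id = i.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE i.index_id > 0        -- skip heaps
  AND t.is_ms_shipped = 0;  -- skip system objects

EXEC sys.sp_executesql @sql;
GO
```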

-> Changed the recovery model for the database from Simple back to Full and performed a full backup.
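Both steps can be done in T-SQL; the backup path below is an assumption, point it at your own backup drive or share.

```sql
USE [master]
GO
ALTER DATABASE [DATABASE_NAME] SET RECOVERY FULL;
GO
-- Backup location is illustrative.
BACKUP DATABASE [DATABASE_NAME]
TO DISK = N'H:\Backup\DATABASE_NAME_Full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
GO
```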

-> This method worked out well for me in the test environment.

-> The method was replicated in our production environment 6 months later. It ran into issues there for the reasons below,

  1. The scripts used to add the data files in production were a straight copy of the scripts used in the test environment.
  2. The 6 months of data growth in production was not taken into account, so the additional data files were sized too small for the extra data.

-> Due to the above issues, the behavior on the production database server was as below,

  1. When the data movement was started and we checked progress using the query above, we found that the additional data files’ “Used Space in MB” was increasing and “Available Space in MB” was decreasing.
  2. But on the primary data file, “Used Space in MB” and “Available Space in MB” did not change; they were static. The expected behavior was that “Available Space in MB” should increase and “Used Space in MB” should decrease.
  3. The command was stopped after 10 hours. We then realized that only after the EMPTYFILE command was terminated did the primary data file’s “Used Space in MB” start coming down and its “Available Space in MB” start increasing. This took 1 more hour, after which we could see some data had moved from the primary data file to the secondary data files.
  4. We then increased the additional data files to an appropriate size and restarted the command. It moved the required amount of data and worked as expected. I stopped the data movement once the data files held roughly the same amount of data, then performed an index optimize.
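Growing an existing data file is done with ALTER DATABASE ... MODIFY FILE. The logical file name and target size below are placeholders; note that SIZE specifies the new total size, not an increment.

```sql
USE [master]
GO
-- Logical name and size are illustrative; repeat per file as needed.
ALTER DATABASE [DATABASE_NAME]
MODIFY FILE (NAME = N'DATA_FILE_2', SIZE = 800GB);
GO
```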

-> In my case I was lucky that we had taken downtime for a whole weekend.

-> The whole process will not be feasible in a production environment if no downtime is allowed.

Thank You,
Vivek Janakiraman

Disclaimer:
The views expressed on this blog are mine alone and do not reflect the views of my company or anyone else. All postings on this blog are provided “AS IS” with no warranties, and confer no rights.

SQL Server 2019 on Linux Containers

-> Connect to a Red Hat Enterprise Linux 8 Azure virtual machine using PuTTY,


-> Enable the Docker CE repository,

sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo


-> Install Docker CE using the dnf command,

sudo dnf list docker-ce
sudo dnf install docker-ce --nobest -y


-> After installation, start and enable the Docker service,

sudo systemctl start docker
sudo systemctl enable docker
docker --version


-> Pull the SQL Server 2019 image,

sudo docker pull mcr.microsoft.com/mssql/server:2019-latest


-> Run the container. The SA_PASSWORD value is wrapped in single quotes so the shell does not expand each $$ to its process ID,

sudo docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Pa$$w0rd1*=Pa$$w0rd1*' \
-p 1433:1433 --name jb-prd-sql1 \
-d mcr.microsoft.com/mssql/server:2019-latest


-> Verify the container,

sudo docker ps -a


-> Enter the container,

sudo docker exec -it jb-prd-sql1 "bash"


-> Connect to the SQL Server instance from inside the container. Single quotes again keep the shell from expanding $$,

/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Pa$$w0rd1*=Pa$$w0rd1*'
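Once connected, a quick query confirms the instance is up (the exact output varies with the build):

```sql
SELECT @@VERSION;
GO
```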

Thank You,
Vivek Janakiraman


SQL Server 2019 on Red Hat Linux

-> SQL Server 2019 will be installed on a Red Hat Azure virtual machine.

-> Provision a Red Hat Enterprise Linux 8 Azure virtual machine.


-> The NIC network security group is set to None as this is a test instance. Appropriate configuration will be required for a production server.


-> Install PuTTY from https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html to log in to the provisioned virtual machine and start the SQL Server 2019 installation.

-> From the Azure portal, open the provisioned virtual machine and click Connect. Make sure the SSH tab is selected. The public IP address shown in the SSH tab will be used in the PuTTY session to connect to the virtual machine,


-> The command below downloads the Microsoft SQL Server 2019 Red Hat repository configuration file,

sudo curl -o /etc/yum.repos.d/mssql-server.repo https://packages.microsoft.com/config/rhel/8/mssql-server-2019.repo


-> The command below installs SQL Server,

sudo yum install -y mssql-server


-> After the package installation, execute the command below to set the SA password and select the appropriate edition,

sudo /opt/mssql/bin/mssql-conf setup


-> The command below checks the status of the installed SQL Server,

systemctl status mssql-server


-> Use the commands below to allow remote connections by opening TCP port 1433,

sudo firewall-cmd --zone=public --add-port=1433/tcp --permanent
sudo firewall-cmd --reload


-> Open SQL Server Management Studio and connect to the installed SQL Server using the public IP address,


-> All SQL Server configuration can be performed from SQL Server Management Studio.


Thank You,
Vivek Janakiraman
