-> The requirement was to move data in a database from 1 data file to 4 data files.
-> Existing setup,
SQL Server : SQL Server 2017
Database Size : 2 TB
Number of Data file(s) : 1
Data file size : 1.8 TB
Log file size : 200 GB
-> Solution requirement,
Number of Data files : 4
Data File 1 Size : 650 GB
Data File 2 Size : 650 GB
Data File 3 Size : 650 GB
Data File 4 Size : 650 GB
Log file size : 200 GB
-> The tasks below were first carried out on a test server.
-> The production database was restored on a test server, and 3 additional data drives of 700 GB each were added.
-> The database recovery model was changed from Full to Simple.
-> Added 3 additional data files of 650 GB each, one on each of the additional drives.
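The secondary files are added with ALTER DATABASE ... ADD FILE. A minimal sketch is below; the logical names, drive letters, and paths are assumptions, so adjust them to your own layout,

```sql
-- Sketch: add 3 secondary data files of 650 GB each to the PRIMARY
-- filegroup. Logical names and file paths below are assumptions.
USE [master]
GO
ALTER DATABASE [DATABASE_NAME]
ADD FILE ( NAME = N'DATA_FILE_2',
           FILENAME = N'E:\Data\DATABASE_NAME_2.ndf',
           SIZE = 650GB, FILEGROWTH = 1GB )
TO FILEGROUP [PRIMARY]
GO
ALTER DATABASE [DATABASE_NAME]
ADD FILE ( NAME = N'DATA_FILE_3',
           FILENAME = N'F:\Data\DATABASE_NAME_3.ndf',
           SIZE = 650GB, FILEGROWTH = 1GB )
TO FILEGROUP [PRIMARY]
GO
ALTER DATABASE [DATABASE_NAME]
ADD FILE ( NAME = N'DATA_FILE_4',
           FILENAME = N'G:\Data\DATABASE_NAME_4.ndf',
           SIZE = 650GB, FILEGROWTH = 1GB )
TO FILEGROUP [PRIMARY]
GO
```

Pre-sizing the files to their final size up front avoids autogrowth pauses during the data move.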
-> Executed the command below against the primary data file. It moves the pages of all user objects out of the primary data file and into the other data files in the same filegroup.
USE [DATABASE_NAME]
GO
DBCC SHRINKFILE (N'PRIMARY_DATA_FILE', EMPTYFILE)
GO
-> The above command will be very slow; in my case it took close to 13 hours to complete. While it was running, I used the query below to check progress (the first branch handles SQL Server 2000, which only has sysfiles),
if convert(varchar(20), SERVERPROPERTY('productversion')) like '8%'
    SELECT [name], fileid, filename,
           [size]/128.0 AS 'Total Size in MB',
           [size]/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS 'Available Space In MB',
           CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS 'Used Space In MB',
           (100 - ((([size]/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0) / ([size]/128.0)) * 100.0)) AS 'Percentage Used'
    FROM sysfiles
else
    SELECT @@servername AS 'ServerName', db_name() AS 'DBName',
           [name], file_id, physical_name,
           [size]/128.0 AS 'Total Size in MB',
           [size]/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS 'Available Space In MB',
           CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS 'Used Space In MB',
           (100 - ((([size]/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0) / ([size]/128.0)) * 100.0)) AS 'Percentage Used'
    FROM sys.database_files
go
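As an additional progress check, DBCC SHRINKFILE is one of the commands for which sys.dm_exec_requests populates percent_complete, so a sketch like the one below can be run from a separate session,

```sql
-- Progress of a running DBCC SHRINKFILE, viewed from another session.
SELECT r.session_id,
       r.command,
       r.percent_complete,
       r.total_elapsed_time / 60000.0 AS elapsed_minutes
FROM sys.dm_exec_requests AS r
WHERE r.command LIKE 'Dbcc%';
```

For a 13-hour move, percent_complete gives a rough ETA that the file-space query alone does not.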
-> Reviewing “Used Space In MB” and “Percentage Used” in the output of the above query shows whether the process is progressing.
-> I stopped the EMPTYFILE command when the primary data file’s “Used Space In MB” reached 471,860 MB.
-> I stopped it part-way deliberately: if all data were moved out of the primary data file, too much new data would be inserted into it later.
-> Shrank the primary data file from 1.8 TB to 650 GB.
-> There are instances where the shrink can take several hours if the EMPTYFILE move was not run to completion. In my case it completed in 3 minutes using the command below (665,600 MB = 650 GB),
USE [DATABASE_NAME]
GO
DBCC SHRINKFILE (N'PRIMARY_DATA_FILE', 665600, TRUNCATEONLY)
GO
-> If shrinking the primary data file is very slow, you should instead allow the EMPTYFILE move to run to completion. You will get the error message below when it completes,
Msg 1119, Level 16, State 1, Line 20
Removing IAM page (3:5940460) failed because someone else is using the object that this IAM page belongs to.
-> You can get more details about the above error message from this article.
-> Reissue the shrink command and it will complete quickly.
-> The problem with emptying the primary data file completely is that it will then receive more writes than the other 3 data files (SQL Server's proportional fill directs writes to the file with the most free space), which can result in sub-optimal performance.
-> Perform a reindex on the database to remove any fragmentation caused by the page movement.
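A minimal sketch of the reindex step is below, using sp_MSforeachtable (undocumented but widely used); a proper maintenance solution with fragmentation thresholds is a more robust choice for a 2 TB database,

```sql
-- Rebuild every index on every user table to remove the fragmentation
-- introduced by the EMPTYFILE page moves. '?' is replaced by each table name.
USE [DATABASE_NAME]
GO
EXEC sp_MSforeachtable N'ALTER INDEX ALL ON ? REBUILD';
GO
```

Note that rebuilds in Simple recovery are minimally logged, so running this step before switching back to Full recovery keeps the log smaller.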
-> Changed the recovery model for the database from Simple to Full and performed a full backup.
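The recovery model switch and full backup can be sketched as below; the backup path and options are assumptions,

```sql
-- Return the database to FULL recovery and take a full backup so that
-- log backups can resume from a fresh base.
USE [master]
GO
ALTER DATABASE [DATABASE_NAME] SET RECOVERY FULL
GO
BACKUP DATABASE [DATABASE_NAME]
TO DISK = N'H:\Backup\DATABASE_NAME_full.bak'  -- assumed path
WITH COMPRESSION, CHECKSUM, STATS = 10
GO
```

The full backup is essential: switching from Simple to Full breaks the log chain, and log backups cannot be taken until a new full (or differential) backup exists.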
-> This method worked out well for me in the test environment.
-> This method was replicated in our production environment 6 months later. It ran into issues there for the reasons below,
- The scripts to grow the data files in production were a copy of the scripts used in the test environment.
- 6 months of data growth in production was not taken into account, so the additional data files were sized too small for the extra data.
-> Due to the above, the production database server behaved as follows,
- When the EMPTYFILE move was started and we checked progress with the query above, we found the additional data files’ “Used Space In MB” increasing and “Available Space In MB” decreasing.
- But in the primary data file, “Used Space In MB” and “Available Space In MB” did not change; they stayed static. The expected behaviour is “Available Space In MB” increasing and “Used Space In MB” decreasing.
- It was stopped after 10 hours. We then realized that only after the EMPTYFILE command was terminated did the primary data file’s “Used Space In MB” start coming down and “Available Space In MB” start increasing. This took 1 more hour, after which we could see that some data had moved from the primary data file to the secondary data files.
- We then increased the additional data files to an appropriate size and restarted the command. It moved the required amount of data and worked as expected. I stopped the move once all data files held roughly the same amount of data and performed an index optimize.
-> In my case I was lucky that we had taken downtime for a whole weekend.
-> The whole process will not be feasible in a production environment if no downtime is allowed.
The views expressed on this blog are mine alone and do not reflect the views of my company or anyone else. All postings on this blog are provided “AS IS” with no warranties and confer no rights.