Table Variable or Temp Table

-> Create the objects below. Initially we start with a small number of rows.

create database JB_TableVariable
go
use JB_TableVariable
go
create table Table1(
Col1 int primary key identity (1,1)
, Col2 char(2000)
, Col3 int
, Col4 int
)
create table Table2(
Col1 int primary key identity (1,1)
, Col2 char(2000)
, Col3 int
, Col4 int
)
create table Table3(
Col1 int primary key identity (1,1)
, Col2 char(2000)
, Col3 int
, Col4 int
)
set nocount on
go
insert into Table1 values (SUBSTRING(CONVERT(varchar(255), NEWID()), 0, 8),CONVERT(INT,rand()*1000000),1)
GO 500
insert into Table1 values (SUBSTRING(CONVERT(varchar(255), NEWID()), 0, 8),CONVERT(INT,rand()*1000000),0)
GO 500
insert into Table2 values (SUBSTRING(CONVERT(varchar(255), NEWID()), 0, 8),CONVERT(INT,rand()*1000000),1)
GO 500
insert into Table2 values (SUBSTRING(CONVERT(varchar(255), NEWID()), 0, 8),CONVERT(INT,rand()*1000000),0)
GO 500
insert into Table3 values (SUBSTRING(CONVERT(varchar(255), NEWID()), 0, 8),CONVERT(INT,rand()*1000000),1)
GO 500
insert into Table3 values (SUBSTRING(CONVERT(varchar(255), NEWID()), 0, 8),CONVERT(INT,rand()*1000000),0)
GO 500

-> Create the stored procedures below. Stored procedure sp_tableVariable uses a table variable and stored procedure sp_temptable uses a temporary table.

-- Stored procedure using a table variable
create proc sp_tableVariable
WITH RECOMPILE
as
begin
DECLARE @Table4 TABLE(
Col1 int primary key
, Col2 char(2000)
, Col3 int
, Col4 int
)
insert into @Table4
select a.col1, b.col2, a.col3,b.col4
from Table1 a INNER JOIN Table2 b
ON a.col1 = b.col1
select a.col1, b.col2, a.col3
from Table1 a INNER JOIN @Table4 b
ON a.col1 = b.col1
where b.col4 = 1
order by b.col3 desc
end
-- Stored procedure using a temporary table
create proc sp_temptable
WITH RECOMPILE
as
begin
create TABLE #table5(
Col1 int primary key
, Col2 char(2000)
, Col3 int
, Col4 int
)
insert into #table5
select a.col1, b.col2, a.col3,b.col4
from Table1 a INNER JOIN Table2 b
ON a.col1 = b.col1
select a.col1, b.col2, a.col3
from Table1 a INNER JOIN #table5 b
ON a.col1 = b.col1
where b.col4 = 1
order by b.col3 desc
drop table #table5
end

-> Executing stored procedure sp_tableVariable, which uses the table variable. The query completed within 1 second.

Blog17_1.PNG

-> Executing stored procedure sp_temptable, which uses the temporary table. The query completed within 1 second.

Blog17_2.PNG

-> In this case, starting with a small number of rows, the stored procedure using the temporary table and the one using the table variable perform about the same.

-> Now insert 5,000 more rows into each table and update statistics.

use JB_TableVariable
go
set nocount on
go

insert into Table1 values (SUBSTRING(CONVERT(varchar(255), NEWID()), 0, 8),CONVERT(INT,rand()*1000000),1)
GO 5000
insert into Table2 values (SUBSTRING(CONVERT(varchar(255), NEWID()), 0, 8),CONVERT(INT,rand()*1000000),1)
GO 5000
insert into Table3 values (SUBSTRING(CONVERT(varchar(255), NEWID()), 0, 8),CONVERT(INT,rand()*1000000),1)
GO 5000

update statistics Table1 with fullscan
update statistics Table2 with fullscan
update statistics Table3 with fullscan

-> Executing stored procedure sp_tableVariable, which uses the table variable. The query completed in 3 seconds.

Blog17_3.PNG

-> Executing stored procedure sp_temptable, which uses the temporary table. The query completed in 1 second.

Blog17_4.PNG

-> Now insert 50,000 more rows into each table and update statistics.

use JB_TableVariable
go
set nocount on
go
insert into Table1 values (SUBSTRING(CONVERT(varchar(255), NEWID()), 0, 8),CONVERT(INT,rand()*1000000),1)
GO 50000
insert into Table2 values (SUBSTRING(CONVERT(varchar(255), NEWID()), 0, 8),CONVERT(INT,rand()*1000000),1)
GO 50000
insert into Table3 values (SUBSTRING(CONVERT(varchar(255), NEWID()), 0, 8),CONVERT(INT,rand()*1000000),1)
GO 50000
update statistics Table1 with fullscan
update statistics Table2 with fullscan
update statistics Table3 with fullscan

-> Executing stored procedure sp_tableVariable, which uses the table variable. The query completed in 1 minute 37 seconds.

Blog17_5.PNG

-> Not a good sign: this time the query took close to 97 seconds. From the execution plan, it is clear that most of the time is spent sorting the data, and the sort is spilling to tempdb. Let us look at the sort operator and see why it is spilling to tempdb.

Blog17_5_1.PNG

-> The sort operator shows that the optimizer estimated 1 row but actually processed 55,500 rows. Because of the bad estimate, the sort spilled to tempdb heavily. The warnings section shows that the optimizer granted only 1024 KB of memory, so close to 33,713 pages had to be written to disk (tempdb) because there was not enough memory to sort 55,500 rows in memory.
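Besides the warning icon in the graphical plan, sort spills can be confirmed with an Extended Events session. The sketch below is illustrative (the session name is made up); it uses the sqlserver.sort_warning event, which fires when a sort operation spills to tempdb:

```sql
-- Hypothetical session to capture sort spills server-wide.
-- The sort_warning event fires whenever a sort spills to tempdb.
CREATE EVENT SESSION SortSpillCheck ON SERVER
ADD EVENT sqlserver.sort_warning
ADD TARGET package0.ring_buffer;
GO
-- Start the session, re-run the stored procedure, then inspect the
-- ring buffer target for captured sort_warning events.
ALTER EVENT SESSION SortSpillCheck ON SERVER STATE = START;
```

Remember to stop and drop the session once you are done investigating.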

-> Further, we are seeing a nested loop join because the estimated row count for the clustered index scan on @Table4, and for the subsequent sort operator, is 1. The problem with table variables is that the optimizer typically estimates a single row for a table variable, no matter how many rows it actually contains.
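One common mitigation, sketched below as a hypothetical variant of the second query inside sp_tableVariable, is a statement-level OPTION (RECOMPILE) hint. With a statement-level recompile, the optimizer can see the table variable's actual row count at compile time (though it still has no column-level statistics on it):

```sql
-- Hypothetical rewrite of the final SELECT in sp_tableVariable.
-- OPTION (RECOMPILE) lets the optimizer use the actual number of
-- rows in @Table4 instead of the default estimate of 1 row.
select a.col1, b.col2, a.col3
from Table1 a INNER JOIN @Table4 b
ON a.col1 = b.col1
where b.col4 = 1
order by b.col3 desc
OPTION (RECOMPILE)
```

The trade-off is a compilation on every execution of that statement, so test whether the better plan outweighs the recompile cost for your workload.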

Blog17_5_2.PNG

-> This resulted in the stored procedure performing very badly. Let's look at the other stored procedure, which uses a temporary table.

-> Executing stored procedure sp_temptable, which uses the temporary table. The query completed in 11 seconds.

Blog17_6.PNG

-> Stored procedure sp_temptable is considerably faster: the query took close to 11 seconds. From the execution plan, most of the time is spent scanning data from temporary table #table5 and from Table1. We are seeing a merge join followed by a sort. Let's look at the sort operator. A warning symbol on it indicates that this sort is also spilling to tempdb, but let's see how serious that is.

Blog17_6_1.PNG

-> The sort operator shows that the optimizer estimated 55,546.3 rows and actually processed 55,500 rows. The estimate is pretty decent. The sort still spilled to tempdb: the warnings section shows that the optimizer granted 139,040 KB of memory, and close to 13,875 pages were written to disk (tempdb). A sort spilling to tempdb is definitely an issue and should be resolved where possible, but in our case the spill is comparatively small and the query completed fine. Removing the spill could make this query even faster.

-> We are seeing a merge join because the estimates for the clustered index scan on #table5 and the subsequent sort operator are all fine.

Blog17_6_2.PNG

-> This resulted in stored procedure sp_temptable performing comparatively faster.

-> So does this tell us that we should always use temporary tables over table variables? No, not at all. Each query should be tested with both options, and we should come to a conclusion based on the results.

-> Table variables and temporary tables both work efficiently when used appropriately. Temporary tables allow you to add indexes after the table is created, in addition to the implicit indexes created by primary key or unique constraints. Table variables only allow the implicit indexes created by primary key or unique constraints in the declaration (SQL Server 2014 and later also support inline index definitions there).
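As a quick sketch of that difference (table and index names here are made up for illustration):

```sql
-- Temporary table: an index can be added after the table exists.
create table #t (Col1 int primary key, Col3 int)
create nonclustered index IX_t_Col3 on #t (Col3)

-- Table variable: indexes come only from constraints declared inline,
-- such as the primary key and unique constraint below.
-- (SQL Server 2014+ also accepts inline INDEX definitions here.)
declare @t table (Col1 int primary key, Col3 int unique)
```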

-> Table variables and temporary tables also have their own disadvantages. For example, heavy use of temporary tables can cause metadata contention in tempdb, which table variables avoid.

Thank You,
Vivek Janakiraman

Disclaimer:
The views expressed on this blog are mine alone and do not reflect the views of my company or anyone else. All postings on this blog are provided “AS IS” with no warranties and confer no rights.

 
