There are some major differences between temp tables, table variables, and common table expressions (CTEs). Some of the biggest differences are:
Temp Tables vs. Table Variables
- SQL Server does not place locks on table variables when the table variables are used.
- Temp tables allow for multiple indexes to be created
- Table variables allow only a single index, the primary key, to be created, and only when the table variable is declared. The exception is that you can create an index inline, for example by declaring a unique constraint inline as shown in the comments. However, these indexes (and the table variable in general) will always be assumed to contain a single row, no matter how much data is within the table variable.
- Temp tables can be created locally (#TableName) or globally (##TableName)
- Table variables are destroyed as the batch is completed.
- Temp tables can be used throughout multiple batches.
- Temp tables can be used to hold the output of a stored procedure via INSERT ... EXEC (table variables gain similar functionality in SQL Server 2008).
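To illustrate the indexing difference, here is a minimal sketch (the table and index names are hypothetical): a temp table can take additional nonclustered indexes after it is created, while a table variable is limited to the constraints declared inline.

CREATE TABLE #Orders
(OrderID int PRIMARY KEY
,CustomerID int
,OrderDate datetime)

-- Additional indexes can be added to a temp table at any time.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON #Orders (CustomerID)
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate ON #Orders (OrderDate)

DECLARE @Orders table
(OrderID int PRIMARY KEY   -- index only via constraints at declaration
,CustomerID int UNIQUE     -- inline constraint also builds an index
,OrderDate datetime)
-- CREATE INDEX ... ON @Orders is not allowed.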
Table variables and Temp Tables vs. CTEs
- A CTE can only be referenced by the single statement that immediately follows its definition.
- CTEs can be recursive within a single statement (be careful, because a recursive CTE without a termination condition can cause an infinite loop).
- Table variables and Temp Tables can be used throughout the batch.
- The command before the CTE must end with a semi-colon (;).
- As temp tables and table variables are real tables, you can insert, update, and delete the data within them.
- CTEs cannot have any indexes created on them; any indexes must exist on the underlying source tables.
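A minimal sketch of the recursive CTE and semicolon points above (the CTE name is hypothetical): the statement before the WITH must be terminated with a semicolon, and MAXRECURSION guards against runaway recursion.

-- The statement before the CTE must end with a semicolon.
DECLARE @MaxLevel int;
SET @MaxLevel = 5;

WITH Numbers (n) AS
(SELECT 1
 UNION ALL
 SELECT n + 1 FROM Numbers WHERE n < @MaxLevel)
SELECT n
FROM Numbers
OPTION (MAXRECURSION 100);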
If you can think of anything that I’ve missed, feel free to post them in the comments.
Denny
How do you use a CTE with an UPDATE statement?
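A minimal sketch (the table and column names here are hypothetical): an UPDATE issued against a CTE modifies the underlying base table the CTE selects from, as long as the CTE is updatable.

WITH RecentOrders AS
(SELECT OrderID, Status
 FROM dbo.Orders
 WHERE OrderDate >= '20090101')
UPDATE RecentOrders
SET Status = 'Archived';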
“Table variables allow a single index the Primary Key to be created when the table variable is declared only.”
You can build one or more constraints on a table variable as well, creating underlying indexes on the table var:
DECLARE @tbl table
(Col0 bigint
,Col1 varchar (30)
,UNIQUE (Col0, Col1) -- comment/uncomment this line
)
INSERT @tbl
SELECT ROW_NUMBER() OVER (ORDER BY c.id) as Col0
,CAST(NULL AS varchar (30)) as Col1
FROM
(SELECT TOP 1000000 1 as id
FROM sys.syscolumns a
, sys.syscolumns b
, sys.syscolumns) c
SET STATISTICS TIME ON
BEGIN
SELECT Col0, Col1
FROM @tbl
WHERE Col0 = 343469
END
SET STATISTICS TIME OFF
GO
http://www.texastoo.com/post/2009/12/10/Table-Variables-indexes-and-total-cost-of-query.aspx
Didn’t know that a unique index could be created; however, the optimizer will still assume that only a single row will be returned. If the select statement is changed to:
SELECT TOP 100 Col0, Col1
FROM @tbl
WHERE Col0 BETWEEN 343469 AND 500000
and you look at the execution plan, it will still show an estimated row count of one, even though far more rows than that are being returned.
I’ll update the post to account for the unique index possibility.
Odd behavior using a Table Variable…
We have a process that involves stored procedures that are called in a nested fashion (usually less than 4 deep).
Some of the stored procedures insert into and update table variables. When these are run directly from SSMS, they execute very quickly. When the same stored procedure is called during the nested process above, it appears to freeze when it reaches the insert/update statements against the table variables.
We changed the code to use temp tables instead and it ran without a problem, as fast as it had originally.
I have not run into this behaviour before. Any suggestions?