Solved

Global Temporary Tables vs Local Temporary Tables

Posted on 2008-10-23
Medium Priority
2,079 Views
Last Modified: 2010-04-21
We are developing an application that (very simply put) will copy data from many local databases to one central database. In this process, lots of rows get inserted into a temporary table. This temporary table is later used to check which rows are up to date, which rows should be deleted, and which rows need to be updated.
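
(A minimal sketch of that comparison step, using hypothetical names -- #Staging for the temporary table and CentralData for the central table -- rather than the real schema; only the columns shown in the question are assumed.)

-- Hypothetical illustration only: table names are placeholders
-- Rows that exist centrally but whose incoming copy is newer -> need updating
SELECT s.[GUID]
FROM #Staging s
JOIN CentralData c ON c.[GUID] = s.[GUID]
WHERE c.[DateChanged] < s.[DateChanged]

-- Rows in the staging table with no central counterpart -> need inserting
SELECT s.[GUID]
FROM #Staging s
LEFT JOIN CentralData c ON c.[GUID] = s.[GUID]
WHERE c.[GUID] IS NULL

-- Central rows not present in this sync batch -> candidates for deletion
SELECT c.[GUID]
FROM CentralData c
LEFT JOIN #Staging s ON s.[GUID] = c.[GUID]
WHERE s.[GUID] IS NULL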

In a current test we noticed a huge performance difference when populating the temporary table. If we had a global temporary table it took about 3 minutes to populate the table with 700,000 rows. If we had a local temporary table, the same task took 9 minutes.

The only difference between the two queries is that # gets switched to ##.

We've tested on two different SQL Server 2005 instances running on different hardware, with the same result.

Server 1:
Microsoft SQL Server 2005 - 9.00.3159.00 (Intel X86)   Mar 23 2007 16:15:11   Copyright (c) 1988-2005 Microsoft Corporation  Developer Edition on Windows NT 5.2 (Build 3790: Service Pack 2)

Server 2:
Microsoft SQL Server 2005 - 9.00.3233.00 (X64)   Mar  6 2008 21:58:47   Copyright (c) 1988-2005 Microsoft Corporation  Standard Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2)

Can anyone explain the difference in performance?

If the performance gap is expected, would you recommend converting to global temp tables, considering that at peak we will have 30-40 sessions doing synchronizations at the same time? Each temporary table is created with a unique GUID as its name.
CREATE TABLE #3ILRRRF920M64UVI (
	[Table] VarChar(16),
	[GUID] Char(16),
	[DateChanged] DateTime,
	[ExistingObjectDateChanged] DateTime
)
 
INSERT INTO #3ILRRRF920M64UVI VALUES( 'CFil', '2TKJQLNEV1PKK1P1', '2000-08-28 16:22:18.000', NULL )
INSERT INTO #3ILRRRF920M64UVI VALUES( 'CFil', '34DLCD7S0E0JFGDB', '2005-01-22 21:36:53.000', NULL )
... another 700.000 times...
 
CREATE INDEX Tmp_IGUID ON #3ILRRRF920M64UVI ([GUID])
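
(For reference, a rough sketch of how such a uniquely named global temp table could be created via dynamic SQL; this is illustrative only, not the application's actual code, and the GUID encoding shown differs from the 16-character names above.)

-- Sketch only: build and create a uniquely named ## table via dynamic SQL
DECLARE @name SYSNAME
DECLARE @sql  NVARCHAR(MAX)

SET @name = '##' + REPLACE(CONVERT(CHAR(36), NEWID()), '-', '')
SET @sql  = N'CREATE TABLE ' + QUOTENAME(@name) + N' (
    [Table] VARCHAR(16),
    [GUID] CHAR(16),
    [DateChanged] DATETIME,
    [ExistingObjectDateChanged] DATETIME
)'

EXEC sp_executesql @sql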


Question by:TheMegaLoser
8 Comments
 
LVL 9

Expert Comment

by:Sander Stad
ID: 22786311
There is no inherent performance difference between the two types of temporary tables, but there are a few functional differences.
This article explains them: http://decipherinfosys.wordpress.com/2007/05/04/temporary-tables-ms-sql-server/
 
LVL 9

Expert Comment

by:Sander Stad
ID: 22786342
The only thing that comes to mind is that a global temp table stays accessible to multiple sessions until the last session referencing it has ended, whereas a local temporary table is private to the session that created it.
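
A quick illustration of that scoping difference (the session labels are just placeholders):

-- Session 1
CREATE TABLE #local (id INT)     -- visible only to the creating session
CREATE TABLE ##global (id INT)   -- visible to every session until the creating
                                 -- session ends and no other session still uses it

-- Session 2 (a separate connection)
SELECT * FROM ##global   -- succeeds
SELECT * FROM #local     -- fails: Invalid object name '#local'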
 
LVL 12

Author Comment

by:TheMegaLoser
ID: 22786381
Well, hard facts show us that there is a performance difference. As stated above, we've tried the exact same code on different SQL Servers running on different hardware and seen the same gap.

We've repeated the scenario a couple of times with the same result.
 
LVL 39

Accepted Solution

by:
BrandonGalderisi earned 1500 total points
ID: 22786424
One thing that you can do to significantly improve performance is to wrap the inserts inside an explicit transaction.  While this may seem like a weird concept, without it each single-row INSERT runs as its own autocommit transaction, with its own begin/commit overhead and log records; with one surrounding transaction, all 700,000 inserts share a single commit.  It's basically the batch approach to single-row inserts.

CREATE TABLE #3ILRRRF920M64UVI (
        [Table] VarChar(16),
        [GUID] Char(16),
        [DateChanged] DateTime,
        [ExistingObjectDateChanged] DateTime
)
begin tran -- all of the single-row inserts below now share one commit
INSERT INTO #3ILRRRF920M64UVI VALUES( 'CFil', '2TKJQLNEV1PKK1P1', '2000-08-28 16:22:18.000', NULL )
INSERT INTO #3ILRRRF920M64UVI VALUES( 'CFil', '34DLCD7S0E0JFGDB', '2005-01-22 21:36:53.000', NULL )
... another 700.000 times...
 commit
CREATE INDEX Tmp_IGUID ON #3ILRRRF920M64UVI ([GUID])


Take the following two scripts.

I ran them SEVERAL times; each line below is one run, showing elapsed milliseconds for the first script (no transaction) and then the second (inserts wrapped in a transaction):
22076, 13670
21546, 10780
20076, 10263
-- Local temp table, each INSERT autocommits on its own
create table #t (t varchar(36))
declare @i int,@dt datetime
set @i=1
set @dt=getdate()
while @i<700000
begin
insert into #t (t) select newid()
set @i=@i+1
end
select datediff(ms,@dt,getdate())
drop table #t
go
 
-- Local temp table, all INSERTs wrapped in one explicit transaction
create table #t (t varchar(36))
declare @i int,@dt datetime
set @i=1
set @dt=getdate()
begin tran
while @i<700000
begin
insert into #t (t) select newid()
set @i=@i+1
end
commit
select datediff(ms,@dt,getdate())
drop table #t


 
LVL 39

Expert Comment

by:BrandonGalderisi
ID: 22786486
Changing to a global temp table yielded these results (same format: elapsed ms without a transaction, then with the inserts wrapped in one).

global temp table:
18876, 11580
19000, 10873

table variable:
24683, 18030
22873, 16296

-- Global temp table, each INSERT autocommits on its own
create table ##t (t varchar(36))
declare @i int,@dt datetime
set @i=1
set @dt=getdate()
while @i<700000
begin
insert into ##t (t) select newid()
set @i=@i+1
end
select datediff(ms,@dt,getdate())
drop table ##t
go
 
-- Global temp table, all INSERTs wrapped in one explicit transaction
create table ##t (t varchar(36))
declare @i int,@dt datetime
set @i=1
set @dt=getdate()
begin tran
while @i<700000
begin
insert into ##t (t) select newid()
set @i=@i+1
end
commit
select datediff(ms,@dt,getdate())
drop table ##t
 
go
 
 
-- Table variable, each INSERT autocommits on its own
declare @t table  (t varchar(36))
declare @i int,@dt datetime
set @i=1
set @dt=getdate()
while @i<700000
begin
insert into @t (t) select newid()
set @i=@i+1
end
select datediff(ms,@dt,getdate())
 
go
 
-- Table variable, all INSERTs wrapped in one explicit transaction
declare @t table  (t varchar(36))
declare @i int,@dt datetime
set @i=1
set @dt=getdate()
begin tran
while @i<700000
begin
insert into @t (t) select newid()
set @i=@i+1
end
commit
select datediff(ms,@dt,getdate())


 
LVL 12

Author Comment

by:TheMegaLoser
ID: 22787635
BrandonGalderisi, thanks for the idea. I'll try to implement and see how it works out.

I'm still curious about the performance difference though. If I change your first script to use a global temporary table:

create table ##t (t varchar(36))
declare @i int,@dt datetime
set @i=1
set @dt=getdate()
while @i<700000
begin
insert into ##t (t) select newid()
set @i=@i+1
end
select datediff(ms,@dt,getdate())
drop table ##t
go

I see a 10% performance gain on the SQL Servers I test on.

Is this true for you also? If so, do you have any idea why (which is the original question)?
 
LVL 39

Expert Comment

by:BrandonGalderisi
ID: 22787802
Look at post http:#22786486. Yes, I did try that, and I am aware of what the original question is. I was just tackling it from the angle of "it is what it is, so how can we improve it regardless".

The decision to use # or ## is yours. Just realize that the ## tables (even though you are generating a unique name) will be visible across sessions. I can only imagine that the entire thing is inside dynamic SQL, or that you are building a SQL string in your app and executing that string.
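
If it is dynamic SQL, that would also help explain the choice of ## with a generated name, because the scope of a # table created inside dynamic SQL is very narrow. A small illustration (not your code, just the general behavior):

-- A local temp table created inside dynamic SQL is dropped when that inner batch ends
EXEC ('CREATE TABLE #t (id INT) INSERT INTO #t VALUES (1)')
SELECT * FROM #t      -- fails: Invalid object name '#t'

-- A global temp table created the same way survives the EXEC and stays visible
EXEC ('CREATE TABLE ##t (id INT) INSERT INTO ##t VALUES (1)')
SELECT * FROM ##t     -- succeeds
DROP TABLE ##t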

I'm curious about what is actually going on here, though, and whether a temp table is even the right way to do this at all. :)

Care to give a little more insight into what it is you're doing?
 
LVL 12

Author Closing Comment

by:TheMegaLoser
ID: 31509201
Although I still don't understand why there's a performance difference between global and local temporary tables, the answer did give a performance increase.
