Solved

Global Temporary Tables vs Local Temporary Tables

Posted on 2008-10-23
2,066 Views
Last Modified: 2010-04-21
We are developing an application that (very simply put) will copy data from many local databases to one central one. In this process, lots of rows get inserted into a temporary table. This temporary table is later used to check which rows are up to date, which rows should be deleted and which rows need to be updated.
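To illustrate, the up-to-date/update/delete check is conceptually a join between the temporary table and the existing central data, along these lines (the table names and join logic here are a simplified, hypothetical sketch, not our actual schema):

-- Hypothetical sketch: [ExistingObjects] stands in for the real central table
SELECT t.[GUID],
       CASE
           WHEN o.[GUID] IS NULL THEN 'missing'           -- not in the central db yet
           WHEN t.[DateChanged] > o.[DateChanged] THEN 'update'
           ELSE 'up to date'
       END AS [Action]
FROM #TempRows t
LEFT JOIN [ExistingObjects] o ON o.[GUID] = t.[GUID]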

In a current test we noticed a huge performance difference when populating the temporary table. If we had a global temporary table it took about 3 minutes to populate the table with 700,000 rows. If we had a local temporary table, the same task took 9 minutes.

The only difference between the queries is that # gets switched to ##.

We've tested on two different SQL Server 2005 instances running on different hardware, with the same result.

Server 1:
Microsoft SQL Server 2005 - 9.00.3159.00 (Intel X86)   Mar 23 2007 16:15:11   Copyright (c) 1988-2005 Microsoft Corporation  Developer Edition on Windows NT 5.2 (Build 3790: Service Pack 2)

Server 2:
Microsoft SQL Server 2005 - 9.00.3233.00 (X64)   Mar  6 2008 21:58:47   Copyright (c) 1988-2005 Microsoft Corporation  Standard Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2)

Can anyone explain the difference in performance?

If the performance gap is expected, would you recommend converting to global temp tables, considering that at peak we will have 30-40 sessions doing synchronizations at the same time? Each temporary table is created with a unique GUID as its name.
CREATE TABLE #3ILRRRF920M64UVI (
	[Table] VarChar(16),
	[GUID] Char(16),
	[DateChanged] DateTime,
	[ExistingObjectDateChanged] DateTime
)
 
INSERT INTO #3ILRRRF920M64UVI VALUES( 'CFil', '2TKJQLNEV1PKK1P1', '2000-08-28 16:22:18.000', NULL )
INSERT INTO #3ILRRRF920M64UVI VALUES( 'CFil', '34DLCD7S0E0JFGDB', '2005-01-22 21:36:53.000', NULL )
... another 700,000 times...
 
CREATE INDEX Tmp_IGUID ON #3ILRRRF920M64UVI ([GUID])


Question by:TheMegaLoser
8 Comments
 
LVL 9

Expert Comment

by:Sander Stad
ID: 22786311
There is no performance difference between the two types of temporary tables, though there are a few functional differences.
This article explains them: http://decipherinfosys.wordpress.com/2007/05/04/temporary-tables-ms-sql-server/
 
LVL 9

Expert Comment

by:Sander Stad
ID: 22786342
The only thing that comes to mind is that a global temporary table remains accessible to multiple sessions until the last session referencing it has ended, while a local temporary table is created separately for every session.
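A quick sketch of the scope difference (run in two separate sessions):

-- Session 1
CREATE TABLE #local (id int)    -- visible only to this session
CREATE TABLE ##global (id int)  -- visible to every session

-- Session 2 (while session 1 is still open)
SELECT * FROM ##global   -- works
SELECT * FROM #local     -- error: Invalid object name '#local'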
 
LVL 12

Author Comment

by:TheMegaLoser
ID: 22786381
Well, hard facts show us that there is a performance difference. As stated above, we've tried the exact same code on different SQL Servers on different hardware, and we've repeated the scenario a couple of times, always with the same result.
 
LVL 39

Accepted Solution

by:
BrandonGalderisi earned 500 total points
ID: 22786424
One thing that you can do to significantly improve performance is wrapping the inserts inside a single explicit transaction. While this may seem like a weird concept, in autocommit mode each single-row INSERT is its own transaction, so SQL Server has to harden the log for every statement; wrapped in one transaction, the log is flushed once at COMMIT. It's basically the batch approach to single-row inserts.

CREATE TABLE #3ILRRRF920M64UVI (
        [Table] VarChar(16),
        [GUID] Char(16),
        [DateChanged] DateTime,
        [ExistingObjectDateChanged] DateTime
)
begin tran
INSERT INTO #3ILRRRF920M64UVI VALUES( 'CFil', '2TKJQLNEV1PKK1P1', '2000-08-28 16:22:18.000', NULL )
INSERT INTO #3ILRRRF920M64UVI VALUES( 'CFil', '34DLCD7S0E0JFGDB', '2005-01-22 21:36:53.000', NULL )
... another 700,000 times...
 commit
CREATE INDEX Tmp_IGUID ON #3ILRRRF920M64UVI ([GUID])


Take the following two scripts. I ran them SEVERAL times; each pair below is the elapsed time in ms for the first script versus the second:
22076, 13670
21546, 10780
20076, 10263
-- single-row inserts in autocommit mode (one commit per statement)
create table #t (t varchar(36))
declare @i int,@dt datetime
set @i=1
set @dt=getdate()
while @i<700000
begin
insert into #t (t) select newid()
set @i=@i+1
end
select datediff(ms,@dt,getdate())
drop table #t
go
 
-- same loop, but inside one explicit transaction
create table #t (t varchar(36))
declare @i int,@dt datetime
set @i=1
set @dt=getdate()
begin tran
while @i<700000
begin
insert into #t (t) select newid()
set @i=@i+1
end
commit
select datediff(ms,@dt,getdate())
drop table #t


 
LVL 39

Expert Comment

by:BrandonGalderisi
ID: 22786486
Changing to a global temp table yielded these results.

global temp table
18876, 11580
19000, 10873


table variable:
24683, 18030
22873, 16296

create table ##t (t varchar(36))
declare @i int,@dt datetime
set @i=1
set @dt=getdate()
while @i<700000
begin
insert into ##t (t) select newid()
set @i=@i+1
end
select datediff(ms,@dt,getdate())
drop table ##t
go
 
create table ##t (t varchar(36))
declare @i int,@dt datetime
set @i=1
set @dt=getdate()
begin tran
while @i<700000
begin
insert into ##t (t) select newid()
set @i=@i+1
end
commit
select datediff(ms,@dt,getdate())
drop table ##t
 
go
 
 
declare @t table  (t varchar(36))
declare @i int,@dt datetime
set @i=1
set @dt=getdate()
while @i<700000
begin
insert into @t (t) select newid()
set @i=@i+1
end
select datediff(ms,@dt,getdate())
 
go
 
declare @t table  (t varchar(36))
declare @i int,@dt datetime
set @i=1
set @dt=getdate()
begin tran
while @i<700000
begin
insert into @t (t) select newid()
set @i=@i+1
end
commit
select datediff(ms,@dt,getdate())


 
LVL 12

Author Comment

by:TheMegaLoser
ID: 22787635
BrandonGalderisi, thanks for the idea. I'll try to implement and see how it works out.

I'm still curious about the performance difference though. If I change your first script to use global temporary tables:

create table ##t (t varchar(36))
declare @i int,@dt datetime
set @i=1
set @dt=getdate()
while @i<700000
begin
insert into ##t (t) select newid()
set @i=@i+1
end
select datediff(ms,@dt,getdate())
drop table ##t
go

I see a 10% performance gain on the SQL Servers I test on.

Is this true for you also? If so, do you have any idea why (which is the original question)?
 
LVL 39

Expert Comment

by:BrandonGalderisi
ID: 22787802
Look at post http:#22786486. Yes, I did try that. And I am aware of what the original question is, but I guess I was tackling it from the approach of "it is what it is, and how can we improve regardless".

The decision to use # or ## is yours. Realize that ## tables (even though you are generating a unique name) will be visible across sessions. I can only imagine that the entire thing is inside dynamic SQL, or that you are building a SQL string in your app and executing that string.
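For example, a uniquely named temp table would typically have to be created through a string, something like this (a hypothetical sketch, not necessarily how your app does it):

-- Build and execute a CREATE TABLE with a GUID-derived name
DECLARE @name varchar(50), @sql nvarchar(4000)
SET @name = '##' + REPLACE(CONVERT(varchar(36), NEWID()), '-', '')
SET @sql = 'CREATE TABLE [' + @name + '] ([GUID] char(16), [DateChanged] datetime)'
EXEC (@sql)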

I'm curious about what is actually going on here though and if using a temp table may not be the right way to do this at all. :)  

care to give a little more insight about what it is you're doing?
 
LVL 12

Author Closing Comment

by:TheMegaLoser
ID: 31509201
Although I still don't understand why there's a performance difference between global and local temporary tables, the answer did give a performance increase.
