• Status: Solved

Global Temporary Tables vs Local Temporary Tables

We are developing an application that (very simply put) will copy data from many local databases to one central database. In this process, many rows get inserted into a temporary table. This temporary table is later used to check which rows are up to date, which rows should be deleted, and which rows need to be updated.

In a recent test we noticed a huge performance difference when populating the temporary table. With a global temporary table it took about 3 minutes to populate the table with 700,000 rows. With a local temporary table, the same task took 9 minutes.

The only difference between the queries is that # is switched to ##.

We've tested on two different SQL Server 2005 instances running on different hardware, with the same result.

Server 1:
Microsoft SQL Server 2005 - 9.00.3159.00 (Intel X86)   Mar 23 2007 16:15:11   Copyright (c) 1988-2005 Microsoft Corporation  Developer Edition on Windows NT 5.2 (Build 3790: Service Pack 2)

Server 2:
Microsoft SQL Server 2005 - 9.00.3233.00 (X64)   Mar  6 2008 21:58:47   Copyright (c) 1988-2005 Microsoft Corporation  Standard Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2)

Can anyone explain the difference in performance?

If the performance gap is expected, would you recommend converting to global temp tables, considering that under peak load we will have 30-40 sessions doing synchronizations at the same time? Each temporary table is created with a unique GUID as its name.
CREATE TABLE #3ILRRRF920M64UVI (
	[Table] VarChar(16),
	[GUID] Char(16),
	[DateChanged] DateTime,
	[ExistingObjectDateChanged] DateTime
)
 
INSERT INTO #3ILRRRF920M64UVI VALUES( 'CFil', '2TKJQLNEV1PKK1P1', '2000-08-28 16:22:18.000', NULL )
INSERT INTO #3ILRRRF920M64UVI VALUES( 'CFil', '34DLCD7S0E0JFGDB', '2005-01-22 21:36:53.000', NULL )
... another 700,000 times...
 
CREATE INDEX Tmp_IGUID ON #3ILRRRF920M64UVI ([GUID])


TheMegaLoser Asked:
 
Sander Stad (System Developer, Database Administrator) Commented:
There should be no performance difference between the two types of temporary tables, although there are a few functional differences.
Read this article that will explain it: http://decipherinfosys.wordpress.com/2007/05/04/temporary-tables-ms-sql-server/
 
Sander Stad (System Developer, Database Administrator) Commented:
The only thing that comes to mind is that a global temp table remains accessible to multiple sessions until the last session referencing it has ended, whereas a local temporary table is created separately for each session.
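To illustrate the scope difference described above, here is a minimal sketch (the table and column names are made up for illustration; the comments describe what happens when two sessions run these statements):

```sql
-- Session 1: a local temp table is visible only to the session that created it
CREATE TABLE #LocalScratch (Id INT);

-- Session 2: this would fail -- #LocalScratch does not exist here
-- SELECT * FROM #LocalScratch;

-- Session 1: a global temp table is visible to every session
CREATE TABLE ##GlobalScratch (Id INT);

-- Session 2: this works, and the table survives until the creating
-- session ends and no other session is still referencing it
-- SELECT * FROM ##GlobalScratch;
```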
 
TheMegaLoser (Author) Commented:
Well, hard facts show us that there is a performance difference. As stated above, we've tried the exact same code on different SQL Servers on different hardware with the same performance difference.

We've repeated the scenario a couple of times with the same performance difference.

 
BrandonGalderisi Commented:
One thing you can do to significantly improve performance is to wrap the inserts inside an explicit transaction. While this may seem like a strange change, each standalone INSERT is its own autocommitted transaction, which forces a log flush per statement; wrapping them all in one transaction lets SQL Server defer that work until the single commit at the end. It's basically the batch approach to single-row inserts.

CREATE TABLE #3ILRRRF920M64UVI (
        [Table] VarChar(16),
        [GUID] Char(16),
        [DateChanged] DateTime,
        [ExistingObjectDateChanged] DateTime
)
begin tran
INSERT INTO #3ILRRRF920M64UVI VALUES( 'CFil', '2TKJQLNEV1PKK1P1', '2000-08-28 16:22:18.000', NULL )
INSERT INTO #3ILRRRF920M64UVI VALUES( 'CFil', '34DLCD7S0E0JFGDB', '2005-01-22 21:36:53.000', NULL )
... another 700,000 times...
 commit
CREATE INDEX Tmp_IGUID ON #3ILRRRF920M64UVI ([GUID])


Take the following two scripts.

I ran them several times; these were the results in milliseconds (first script without a transaction, second script with one):
22076, 13670
21546, 10780
20076, 10263
-- Script 1: plain loop; every INSERT is its own autocommit transaction
create table #t (t varchar(36))
declare @i int, @dt datetime
set @i = 1
set @dt = getdate()
while @i < 700000
begin
    insert into #t (t) select newid()
    set @i = @i + 1
end
select datediff(ms, @dt, getdate()) -- elapsed time in ms
drop table #t
go

-- Script 2: same loop wrapped in one explicit transaction
create table #t (t varchar(36))
declare @i int, @dt datetime
set @i = 1
set @dt = getdate()
begin tran
while @i < 700000
begin
    insert into #t (t) select newid()
    set @i = @i + 1
end
commit
select datediff(ms, @dt, getdate()) -- elapsed time in ms
drop table #t


 
BrandonGalderisi Commented:
Changing to a global temp table yielded these results.

global temp table
18876, 11580
19000, 10873


table variable:
24683, 18030
22873, 16296

-- Global temp table, no transaction
create table ##t (t varchar(36))
declare @i int, @dt datetime
set @i = 1
set @dt = getdate()
while @i < 700000
begin
    insert into ##t (t) select newid()
    set @i = @i + 1
end
select datediff(ms, @dt, getdate()) -- elapsed time in ms
drop table ##t
go

-- Global temp table, one explicit transaction
create table ##t (t varchar(36))
declare @i int, @dt datetime
set @i = 1
set @dt = getdate()
begin tran
while @i < 700000
begin
    insert into ##t (t) select newid()
    set @i = @i + 1
end
commit
select datediff(ms, @dt, getdate())
drop table ##t

go

-- Table variable, no transaction
declare @t table (t varchar(36))
declare @i int, @dt datetime
set @i = 1
set @dt = getdate()
while @i < 700000
begin
    insert into @t (t) select newid()
    set @i = @i + 1
end
select datediff(ms, @dt, getdate())

go

-- Table variable, one explicit transaction
declare @t table (t varchar(36))
declare @i int, @dt datetime
set @i = 1
set @dt = getdate()
begin tran
while @i < 700000
begin
    insert into @t (t) select newid()
    set @i = @i + 1
end
commit
select datediff(ms, @dt, getdate())


 
TheMegaLoser (Author) Commented:
BrandonGalderisi, thanks for the idea. I'll try to implement it and see how it works out.

I'm still curious about the performance difference, though. If I change your first script to use global temporary tables:

create table ##t (t varchar(36))
declare @i int, @dt datetime
set @i = 1
set @dt = getdate()
while @i < 700000
begin
    insert into ##t (t) select newid()
    set @i = @i + 1
end
select datediff(ms, @dt, getdate())
drop table ##t
go

I see a 10% performance gain on the SQL Servers I test on.

Is this true for you also? If so, do you have any idea why (which is the original question)?
 
BrandonGalderisi Commented:
Look at post http:#22786486. Yes, I did try that, and I am aware of what the original question is. But I was tackling it from the angle of "it is what it is; how can we improve regardless?"

The decision to use # or ## is yours. Realize that ## tables (even though you are generating a unique name) will be visible across sessions. I can only imagine that the entire thing is inside dynamic SQL, or that you are building a SQL string in your app and executing that string.
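As a sketch of the kind of dynamic SQL being guessed at here (the name-generation scheme is hypothetical, not from the original post):

```sql
-- Hypothetical sketch: build and execute a CREATE TABLE statement
-- with a per-session unique name, as the asker describes doing.
DECLARE @tableName sysname, @sql nvarchar(max);

-- NEWID() with the dashes stripped makes a legal identifier; purely illustrative
SET @tableName = N'##' + REPLACE(CONVERT(nvarchar(36), NEWID()), N'-', N'');

SET @sql = N'CREATE TABLE ' + QUOTENAME(@tableName) + N' (
    [Table] varchar(16),
    [GUID] char(16),
    [DateChanged] datetime,
    [ExistingObjectDateChanged] datetime
)';

EXEC sp_executesql @sql;
```

Note that a local # table created inside sp_executesql is dropped as soon as that inner batch ends, while a ## table survives for the rest of the creating session; that scoping difference is one practical reason dynamic-SQL scenarios like this one end up using ## tables.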

I'm curious about what is actually going on here, though, and whether a temp table is even the right way to do this at all. :)

Care to give a little more insight into what it is you're doing?
 
TheMegaLoser (Author) Commented:
Although I still don't understand why there's a performance difference between global and local temporary tables, the answer did give a performance increase.