Solved

Urgent

Posted on 2007-11-27
870 Views
Last Modified: 2008-09-08
We are executing a SQL job that has a step loading data into a # table (in tempdb) with some 10 million records using a big query. Whenever we are in this step we get the following errors.

SQL Error Log Details
2007-11-27 12:33:22.27 spid4     Process ID 55:281 owns resources that are blocking processes on Scheduler 0.
Job Log Details
Msg 1205, Sev 13: Transaction (Process ID 59) was deadlocked on thread | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction. [SQLSTATE 40001]
Comment
Question by:Kalirajan
14 Comments
 
LVL 31

Expert Comment

by:James Murrell
ID: 20360131
 

Author Comment

by:Kalirajan
ID: 20360169
We have already enabled Auto Update Statistics on all the production databases. I am getting this issue when executing a SELECT INTO statement to insert the data into a # table using a big query.

Moreover, this does not happen consistently.
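
(For reference, the per-database setting being described looks like this; the database name is a placeholder:)

-- enable automatic statistics updates for one database
ALTER DATABASE YourProductionDb SET AUTO_UPDATE_STATISTICS ON

-- confirm the current value (1 = on)
SELECT DATABASEPROPERTYEX('YourProductionDb', 'IsAutoUpdateStatistics')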
 
LVL 31

Expert Comment

by:James Murrell
ID: 20360554
You can also turn on the deadlock trace flag (see Books Online) to get more info about the deadlocks.

Or it could be that your tempdb is out of space.
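
For reference, the deadlock trace flags on SQL Server 2000/2005 are 1204 (and, on 2005 only, 1222). A minimal sketch of turning them on server-wide:

-- write deadlock participants and resources to the SQL Server error log;
-- the -1 argument applies the flag to all sessions
DBCC TRACEON (1204, -1)

-- SQL Server 2005 only: richer, XML-style deadlock output
-- DBCC TRACEON (1222, -1)

-- check which trace flags are currently active
DBCC TRACESTATUS (-1)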
 
LVL 31

Accepted Solution

by:
James Murrell earned 168 total points
ID: 20360561
There are several methods to clear the tempdb space: http://www.support.microsoft.com/?id=307487
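
For example, one of the methods in that article is shrinking tempdb directly with DBCC (a sketch; 'tempdev' and 'templog' are the default logical file names, so check sysfiles if yours differ):

USE tempdb

-- shrink the whole database, leaving 10 percent free space
DBCC SHRINKDATABASE (tempdb, 10)

-- or shrink individual files to a target size in MB
DBCC SHRINKFILE (tempdev, 100)
DBCC SHRINKFILE (templog, 100)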
 
LVL 25

Expert Comment

by:imitchie
ID: 20361079
It is possible the problem is due to parallelism and contention between threads; try using MAXDOP 1 for the query. Can you also check that there are no other tasks that may interfere with your SELECT INTO query? If it's an option at all, consider rewriting a 10M-row SELECT INTO as smaller batches (for many good reasons; log/temp space is one), partitioning the inserts by primary key.
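
A sketch of what the hint looks like on the load statement (table, columns, and filter are placeholders):

SELECT col1, col2, col3
INTO #staging
FROM dbo.SourceTable
WHERE <your filter>
OPTION (MAXDOP 1)   -- force a serial plan for this statement only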
 
LVL 35

Assisted Solution

by:David Todd
David Todd earned 166 total points
ID: 20361260
Hi,

I came across another question yesterday where another expert was recommending to NOT use select into #SomeTempTable for large recordsets, but instead to create the table, then do an insert.

The reason was that creating a temp table puts a schema lock on TempDB, and for a large select into the lock persists for the duration of the insert, thus blocking other temp table creation.

http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/Q_22984077.html
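
A sketch of the create-then-insert pattern being described (table and column names are illustrative):

-- create the temp table explicitly first...
CREATE TABLE #staging (
    id   int          NOT NULL,
    col1 varchar(100) NULL
)

-- ...then load it with a plain INSERT instead of SELECT INTO
INSERT INTO #staging (id, col1)
SELECT id, col1
FROM dbo.SourceTable
WHERE <your filter>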

HTH
  David
 
LVL 25

Expert Comment

by:imitchie
ID: 20361353
that expert is probably from the 70s (okay..SQL 7.0). SQL 2000 solved that problem long ago.
 
LVL 23

Assisted Solution

by:Racim BOUDJAKDJI
Racim BOUDJAKDJI earned 166 total points
ID: 20362038
Shorten your rollback segments... Depending on the configuration, use batches of 5,000 to 10,000 rows... That way, if the system chooses a deadlock victim, it won't roll back the entire operation. You can then program an on-fail step to resume operations... This should help... Also make sure that the tables in your SELECT are adequately indexed... I suggest *not* using temp tables in this case...
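
For the on-fail/resume part, one way is the retry settings on the job step itself (a sketch; the job name and step number are hypothetical):

-- rerun the load step after a deadlock instead of failing the job
EXEC msdb.dbo.sp_update_jobstep
    @job_name       = N'NightlyDataLoad',
    @step_id        = 1,
    @retry_attempts = 3,    -- rerun the step up to 3 times
    @retry_interval = 5     -- minutes to wait between attempts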
 

Author Comment

by:Kalirajan
ID: 20364392
We received a "tempdb log file full" error at one point, even though we have enough space on the drive. We changed the default value for the log file size to 1 GB, so the tempdb issue was solved. Even then, the deadlock appears again.

We also reindex all the production tables daily before doing this data load.
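
(For reference, resizing the tempdb log as described looks something like this; 'templog' is the default logical name, so check sysfiles if yours differs:)

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 1024MB)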

 
LVL 23

Expert Comment

by:Racim BOUDJAKDJI
ID: 20364840
As I suggested, shorten your rollback segments... The following code breaks the operation into chunks of 10,000 rows.

Instead of

<<delete from yourtable where <condition matches 10 million records>>>

use…

-- set rowcount to delete sets of 10 000 records (use a smaller subset size if you think it's better)
set rowcount 10000

-- start delete
delete from yourtable where <condition matches 10 million records>
while @@rowcount > 0 -- loop until there's no record left to delete
begin
    -- after each delete there will be an implicit commit
    delete from yourtable where <condition matches 10 million records>
end

-- reset, so later statements in the session aren't limited to 10 000 rows
set rowcount 0


Hope this helps...
 
LVL 25

Expert Comment

by:imitchie
ID: 20364908
instead of

select f1, f2, f3..
into #abc
from abc where ...

change it to

-- get structure created (zero rows, just the column definitions)
select f1, f2, f3..
into #abc
from abc where 1=0

-- temp index to help keep track of where we are up to
create index #keyid on #abc(f1)

-- variable for where we are up to
declare @f1 int set @f1 = 0

-- start inserting in batches of 50000
-- (the order by f1 is needed so that "f1 > @f1" resumes exactly where
-- the previous batch left off and no rows are skipped)
insert into #abc (f1, f2, f3..)
select top 50000 f1, f2, f3..
from abc where ...
AND f1 > @f1
order by f1

-- continuing from previous position
while @@rowcount > 0 begin
    select @f1 = max(f1) from #abc   -- update progress
    insert into #abc (f1, f2, f3..)
    select top 50000 f1, f2, f3..
    from abc where ...
    AND f1 > @f1
    order by f1
end
 
LVL 35

Expert Comment

by:David Todd
ID: 20367906
Hi,

imitchie:
>> that expert is probably from the 70s (okay..SQL 7.0). SQL 2000 solved that problem long ago. <<

I've opened a question to discuss this here:
http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/Q_22988582.html

Regards
  David