Urgent

We are executing a SQL job that has a step loading data into a # table (in tempdb) with some 10 million records using a big query. Whenever we reach this step we get the following errors.

SQL Error Log Details
2007-11-27 12:33:22.27 spid4     Process ID 55:281 owns resources that are blocking processes on Scheduler 0.
Job Log Details
Msg 1205, Sev 13: Transaction (Process ID 59) was deadlocked on thread | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction. [SQLSTATE 40001]
Kalirajan asked:
James Murrell (Product Specialist) commented:
Clear the tempdb space; there are several methods: http://www.support.microsoft.com/?id=307487
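
One of the methods in that KB article is shrinking the files directly; a minimal sketch, assuming tempdb's default logical data file name tempdev (check sysfiles if yours differs) and an illustrative 500 MB target:

-- list tempdb's logical file names and current sizes (in 8 KB pages)
USE tempdb
SELECT name, size FROM sysfiles

-- shrink the primary data file to roughly 500 MB
DBCC SHRINKFILE (tempdev, 500)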
 
Kalirajan (Author) commented:
We have already enabled Auto Update Statistics on all the production databases. I am getting this issue when executing a SELECT ... INTO statement to insert the data into a # table using a big query.

Moreover, this does not happen consistently.
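
For reference, a one-line check that the setting really is on (database name hypothetical); it returns 1 when Auto Update Statistics is enabled:

SELECT DATABASEPROPERTYEX('YourProductionDb', 'IsAutoUpdateStatistics')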
 
James Murrell (Product Specialist) commented:
You can also turn on the trace flag (see Books Online) to get more info about the deadlocks.

Or it could be that your tempdb is out of space.
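
A minimal sketch of enabling the trace flag mentioned above (SQL Server 2000/2005 era): 1204 reports the deadlock participants, 3605 routes the output to the error log, and -1 applies the flags server-wide:

-- write deadlock participant details to the SQL Server error log
DBCC TRACEON (1204, 3605, -1)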
 
imitchie commented:
It is possible the problem is due to parallelism and contention between threads; try using MAXDOP 1 for the query (a sketch follows). Can you also check that there are no other tasks that may interfere with your SELECT ... INTO query? If it's an option at all, consider rewriting a 10-million-row SELECT ... INTO as smaller batches (for many good reasons; log/temp space is one), partitioning the inserts by primary key.
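
A minimal sketch of the MAXDOP suggestion; the table, columns, and WHERE clause are hypothetical stand-ins for the real big query:

-- force a serial plan to rule out parallelism-related deadlocks
SELECT f1, f2, f3
INTO #abc
FROM abc
WHERE f1 > 0
OPTION (MAXDOP 1)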
 
David Todd (Senior DBA) commented:
Hi,

I came across another question yesterday where another expert recommended NOT using SELECT ... INTO #SomeTempTable for large recordsets, but instead creating the table first and then doing an INSERT.

The reason was that creating a temp table takes a schema lock on tempdb, and for a large SELECT ... INTO the lock persists for the duration of the insert, thus blocking other temp table creation.
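
A sketch of that create-first pattern, with hypothetical column types; the explicit CREATE TABLE completes immediately, so any locks taken during table creation are held only briefly rather than for the duration of a 10-million-row insert:

-- create the temp table up front instead of SELECT ... INTO
CREATE TABLE #abc (
    f1 int NOT NULL,
    f2 varchar(50) NULL,
    f3 datetime NULL
)

-- the long-running insert no longer holds the table-creation locks
INSERT INTO #abc (f1, f2, f3)
SELECT f1, f2, f3
FROM abc
WHERE f1 > 0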

http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/Q_22984077.html

HTH
  David
 
imitchie commented:
that expert is probably from the 70s (okay..SQL 7.0). SQL 2000 solved that problem long ago.
 
Racim BOUDJAKDJI (Database Architect - DBA - Data Scientist) commented:
Shorten your rollback segments... Depending on the configuration, use batches of 5,000 to 10,000 rows... That way, if the system chooses a deadlock victim, it won't roll back the entire operation. You can then program an on-fail step to resume operations... This should help... Also make sure that the tables your SELECT reads are adequately indexed... I suggest *not* using temp tables in this case...
 
Kalirajan (Author) commented:
We received a "tempdb log file full" error at one point, even though we have enough space on the drive. We changed the default size for the log file to 1 GB, so the tempdb issue was solved. Even then, the deadlock appeared again.

We also reindex all the production tables daily before doing this data load.

 
Racim BOUDJAKDJI (Database Architect - DBA - Data Scientist) commented:
As I suggested, shorten your rollback segments... The following code breaks the delete into chunks of 10,000 rows.

 Instead of

<<delete from yourtable where <condition matches 10 million records>>>

use…

-- set rowcount so each delete affects at most 10,000 rows
-- (use a smaller batch size if you think it's better)
set rowcount 10000

-- start delete
delete from yourtable where <condition matches 10 million records>
while @@rowcount > 0 -- loop until there's no record left to delete
begin
    -- after each delete there is an implicit commit, so a deadlock
    -- victim loses only one small batch instead of the whole operation
    delete from yourtable where <condition matches 10 million records>
end

-- reset rowcount so later statements in the session are not limited
set rowcount 0


Hope this helps...
 
imitchie commented:
instead of

select f1, f2, f3..
into #abc
from abc where ...

change it to

-- get structure created
select f1, f2, f3..
into #abc
from abc where 1=0

-- temp index to help keep track of where we are up to
create index #keyid on #abc(f1)

-- variable for where we are up to; assumes f1 is a unique, increasing key
declare @f1 int
set @f1 = 0

-- start inserting in batches of 50000, walking f1 in order
insert into #abc (f1, f2, f3..)
select top 50000 f1, f2, f3..
from abc where ...
AND f1 > @f1
order by f1 -- without this, TOP returns arbitrary rows and batches can skip records

-- continuing from previous position
while @@rowcount > 0 begin
    select @f1 = max(f1) from #abc   -- update progress
    insert into #abc (f1, f2, f3..)
    select top 50000 f1, f2, f3..
    from abc where ...
    AND f1 > @f1
    order by f1
end
 
David Todd (Senior DBA) commented:
Hi,

imitchie:
>> that expert is probably from the 70s (okay..SQL 7.0). SQL 2000 solved that problem long ago. <<

I've opened a question to discuss this here:
http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/Q_22988582.html

Regards
  David