  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 2281

The SQL Server cannot obtain a LOCK resource at this time

Hi All,

I have a series of jobs in SQL Agent which are performing BULK INSERTs. Each BULK INSERT uses WITH (TABLOCK) and batch sets. If I run more than 4 jobs simultaneously I get:

The SQL Server cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users or ask the system administrator to check the SQL Server lock and memory configuration.



SET LOCK_TIMEOUT -1   is being used to allow an infinite lock timeout period.


Memory config is at default settings (no tweaks) and there is 4 GB on the server in total (allowing SQL to have 2 GB).
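For reference, each job runs a statement of roughly this shape (a sketch only; the database, table, file path, and batch size here are illustrative, not taken from the actual jobs):

-- Illustrative shape of one archiving job (names and figures are made up)
SET LOCK_TIMEOUT -1        -- wait indefinitely for locks

BULK INSERT Archive.dbo.SalesDetail
FROM 'D:\ArchiveFeed\SalesDetail_001.bcp'
WITH (
    TABLOCK,                  -- request a table-level lock for the load
    DATAFILETYPE = 'native',  -- native-format BCP output
    BATCHSIZE = 50000         -- commit every 50,000 rows
)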

Any ideas?


Cheers
Asked by: ShogunWade

1 Solution
 
Duane Lawrence Commented:
SET LOCK_TIMEOUT -1

When you set this to infinite, you're defeating the lock escalation mechanism built into MS SQL Server (and other databases). Set the lock timeout back to its default and try to cause the error again. I doubt you will get the error, but if you do, post the exact text of the error here.

Duane
 
ShogunWade (Author) Commented:
Thanks for the thought Duane, but the SET LOCK_TIMEOUT was added in an initial attempt to resolve the problem.

I am stipulating WITH (TABLOCK) in the bulk insert to circumvent lock escalation, as this is a high-volume archiving process and allowing SQL to start RID locking is very costly in this scenario.


The observed effect of setting

SET LOCK_TIMEOUT -1

is that there are marginally fewer occurrences of the error.
 
ShogunWade (Author) Commented:
PS: this is the exact message.

 
Duane Lawrence Commented:
I hope I am not saying something you are already doing.

But according to

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_ba-bz_4fec.asp
TABLOCK

Specifies that a table-level lock is acquired for the duration of the bulk copy operation. A table can be loaded concurrently by multiple clients if the table has no indexes and TABLOCK is specified. By default, locking behavior is determined by the table option table lock on bulk load. Holding a lock only for the duration of the bulk copy operation reduces lock contention on the table, significantly improving performance.

The table you are inserting into cannot have indexes. So does it? If it does, try inserting into an import data table first; then a trigger, or a stored proc run by a job, can do a batch insert into the true destination table.
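A sketch of that staging-table pattern (all names here are illustrative; the staging table is a heap with no indexes, so concurrent TABLOCK loads remain possible):

-- Load into an indexless heap first, then move rows in one set-based insert
CREATE TABLE dbo.ImportStaging (     -- heap: no primary key, no indexes
    date_key    datetime NOT NULL,
    product_key int      NOT NULL
    -- ...remaining columns to match the destination...
)

BULK INSERT dbo.ImportStaging
FROM 'D:\ArchiveFeed\batch_001.bcp'
WITH (DATAFILETYPE = 'native', TABLOCK)

-- Run by a job or trigger: push the staged rows to the true destination
INSERT INTO dbo.DestinationTable (date_key, product_key /* , ... */)
SELECT date_key, product_key /* , ... */
FROM dbo.ImportStaging

TRUNCATE TABLE dbo.ImportStaging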

Duane
 
ShogunWade (Author) Commented:
"I hope I am not saying something you are already doing."  That's OK; I'm employing as many optimisations as feasibly possible, so many that it would take a lifetime to document them all :) so I appreciate any thoughts even if I've already been there.

Perhaps I should give a little more info on the project.

Basically there is tons of SQL for transformation, aggregation etc., and when it's complete, the data that is not "pending" needs to be shoved to the archive. So there is an ACTIVE database and an ARCHIVE database, both on the same SQL instance. Now, under normal circumstances the logical thing to do would be to insert the data from the active to the archive. However, the ARCHIVE is so big that we can't use SQL backup techniques. So, to kill two birds with one stone, I BCP out the "complete" data into a series of files and then BULK INSERT them into the archive. This gives me substantially better archive times, plus the ability to ZIP up the native BCP output and dump it to tape. All archive data in the database can therefore be reconstituted from the zip files. This means I cut the backup times from multiple hours per day to minutes.

Just for the record, the "complete" records to be moved are typically in the order of 20 million rows.

The archive database has a series of partitioned views, but because we have Standard Edition of SQL we can't BULK INSERT into the view, so I have a task that splits the data table by table in the BCP out process. These are then BULK INSERTed into the underlying tables. The tables have primary keys, constraints etc. to comply with the requirements of partitioned views.
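As a sketch, the per-table split might look like this (server name, paths, and table names are illustrative; -n keeps BCP's native format):

-- Step 1: BCP each underlying "complete" table out in native format.
-- From a command prompt or xp_cmdshell, e.g.:
--   bcp "Active.dbo.sales_fact_1999" out D:\Arc\sales_fact_1999.bcp -n -S MYSERVER -T

-- Step 2: BULK INSERT the file into the matching archive member table
-- (the view itself cannot be the target on Standard Edition)
BULK INSERT Archive.dbo.sales_fact_1999
FROM 'D:\Arc\sales_fact_1999.bcp'
WITH (DATAFILETYPE = 'native', TABLOCK)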

I'm curious about when you say "The table you are inserting into cannot have indexes"?
 
Duane Lawrence Commented:
I was quoting the document in the link on the Microsoft website. I believe a primary key automatically has an index, so you are running into the "but not if you have an index" clause.

Try inserting into an import data table as a workaround, then have a batch query load the data into the correct tables. I would assume the archive table is date-partitioned like the example below. I have a data warehouse that uses tables with the check constraint on them as well.

-- Create the fact table for 1999
CREATE TABLE [dbo].[sales_fact_1999] (
   [date_key] datetime NOT NULL
CHECK ([date_key] BETWEEN '1999-01-01' AND '1999-12-31'),
   [product_key] [int] NOT NULL ,
   [customer_key] [int] NOT NULL ,
   [promotion_key] [int] NOT NULL ,
   [store_key] [int] NOT NULL ,
   [store_sales] [money] NULL ,
   [store_cost] [money] NULL ,
   [unit_sales] [float] NULL
)
go

-- Create the fact table for 2000
CREATE TABLE [dbo].[sales_fact_2000] (
   [date_key] datetime NOT NULL
CHECK ([date_key] BETWEEN '2000-01-01' AND '2000-12-31'),
   [product_key] [int] NOT NULL ,
   [customer_key] [int] NOT NULL ,
   [promotion_key] [int] NOT NULL ,
   [store_key] [int] NOT NULL ,
   [store_sales] [money] NULL ,
   [store_cost] [money] NULL ,
   [unit_sales] [float] NULL
)
go

--Create the UNION ALL view.
CREATE VIEW [dbo].[sales_fact]
AS
SELECT * FROM [dbo].[sales_fact_1999]
UNION ALL
SELECT * FROM [dbo].[sales_fact_2000]
go

--Now insert a few rows of data, for example:
INSERT INTO [sales_fact]
VALUES ('1999-01-25', 347, 8901, 0, 13, 5.3100, 1.8585, 3.0)

INSERT INTO [sales_fact]
VALUES ('1999-03-24', 576, 7203, 0, 13, 2.1000, 0.9450, 3.0)

INSERT INTO [sales_fact]
VALUES ('1999-06-04', 139, 7203, 0, 13, 5.3700, 2.2017, 3.0)

INSERT INTO [sales_fact]
VALUES ('2000-09-14', 396, 8814, 0, 13, 6.4800, 2.0736, 2.0)

INSERT INTO [sales_fact]
VALUES ('2000-11-13', 260, 8269, 0, 13, 5.5200, 2.4840, 3.0)
 
ShogunWade (Author) Commented:
I've tried performing standard insert statements, but the performance is not good enough for what we require.

These are the two thoughts I have presently:

1) Tweak the LOCKS option (sp_configure 'locks')
This is currently 0, which means dynamically allocated. The only thing with manually setting this is that I don't want to sacrifice too much memory.

2) At the back of my mind I wonder whether tweaking the BATCHSIZE on the BULK INSERT will help?

3)  Implement some sequencing control to allow only 3 or 4 of the bulk-inserts to operate concurrently

Option 3 is the easiest for me, but I would prefer a more elegant solution.
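If it helps, option 3 could also be sketched inside T-SQL itself with an application lock, so each job queues on a named resource instead of relying on scheduler logic. This is only a sketch assuming sp_getapplock (SQL Server 2000 and later) is acceptable; the resource name, table, and path are illustrative:

-- Serialise the loads per target table via an application lock
DECLARE @rc int
EXEC @rc = sp_getapplock
        @Resource    = 'ArchiveLoad_SalesDetail',  -- illustrative name
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Session',
        @LockTimeout = -1                          -- queue indefinitely

IF @rc >= 0   -- 0 = granted, 1 = granted after waiting
BEGIN
    BULK INSERT Archive.dbo.SalesDetail
    FROM 'D:\ArchiveFeed\SalesDetail_001.bcp'
    WITH (DATAFILETYPE = 'native', TABLOCK)

    EXEC sp_releaseapplock
            @Resource  = 'ArchiveLoad_SalesDetail',
            @LockOwner = 'Session'
END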

What do you think about options 1 & 2 ?
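For reference, a sketch of what 1 and 2 would look like (the lock count and batch size are illustrative figures, not recommendations; in SQL Server 2000 each lock consumes roughly 96 bytes, so a fixed 'locks' value trades memory for lock headroom):

-- Option 1: pin lock memory instead of letting it grow dynamically
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'locks', 100000    -- illustrative figure (~9.6 MB)
RECONFIGURE

-- Option 2: smaller batches commit (and release locks) more often
BULK INSERT Archive.dbo.SalesDetail
FROM 'D:\ArchiveFeed\SalesDetail_001.bcp'
WITH (DATAFILETYPE = 'native', TABLOCK, BATCHSIZE = 10000)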
 
ShogunWade (Author) Commented:
:) I can't count: "These are the two thoughts ... 1) ... 2) ... 3) ..."  LOL :) I mean 3 thoughts.
 
Duane Lawrence Commented:
You're talking about advanced, advanced tweaking; we both know that for this kind of stuff, what works on server A won't work on server B, nor will it work for others. I'm just guessing, but you probably have a SAN for a hard drive as well as multi-gig network cards in your system. There will not be a solution that is right or elegant; you have to ask what your bandwidth is. Do you have other fires to put out? Can you afford to let them go while you research this more? I would opt for number 3 for now; if you can find a better solution in the future (three or six months from now) then do it then, but for now the system has to run.

Then there is another option. Try asking this question of Lowfatspread when he comes on shortly after noon. I looked at your profile; it is quite impressive. You may honestly know more than I do. Nice talking to you though.

Duane
 
ShogunWade (Author) Commented:
"we both know that for this kind of stuff what works on server A won't work on server B nor will it work for others"  Absolutely. However, I think in this case it shouldn't be an issue because the system is a "closed black box service" which is farmed out, hardware & all, so each box running this database has exactly the same hw/sw + config. Presently there are no SANs, just whopping local RAIDs, so it will all be on the same box for now. Having said that, one or two customers have much more significant volumes of data, so this may be a possibility in the future.

" do you have other fires to put out? "  Well, yes & no.   The reality of the situation is that this senario only occurs in 2 situations (1) we get a new customer and "backload" substantial amounts of data (pos 0.25Tb) or (2) a system failure occurs and there is a backlog.   So whilst its not a critical thing, our saleman keeps bringing in orders. & is means i can sleep @ night ;)

"you may honestly know more than I do"  Im sure you are just being modest :)

Your feedback has been really valuable anyway. No doubt you spotted my proportion of deleted to PAQ'd answers :O  Unfortunately, when I ask a question it's guaranteed to be a tricky one. (I just wish I could reflect the difficulty level in the # of points; 500 is never enough.)

Once again, thanks for the assistance so far.
 
ShogunWade (Author) Commented:
Having just spent some time monitoring the lock requests etc., it would appear that although I'm specifying WITH (TABLOCK) in my BULK INSERT, the indexes are being page locked.


I've just tested adding the BATCHSIZE, but the performance drop is too much to bear.

So I'm thinking now of using sp_indexoption to prevent page or row locking on the indexes. This won't be an issue for us in terms of concurrency. Has anyone used these options before (I haven't)? Any pitfalls aside from concurrency?
 
arbert Commented:
"Having just spent some time monitoring the lock requests etc., it would appear that although I'm specifying WITH (TABLOCK) in my BULK INSERT, the indexes are being page locked."


I've noticed this many times as well, ShogunWade. Haven't quite figured out what the hint is for if it's not honored.

"3)  Implement some sequencing control to allow only 3 or 4 of the bulk-inserts to operate concurrently"

How many are you currently running at the same time?

 
ShogunWade (Author) Commented:
Hi arbert,

"How many are you currently running at the same time?"   -  Its variable  but I can always generate to problem with 5 or more   (but ive been testing this with more than 20 jobs).  

The reality of the situation is that many of the bulk inserts will attempt to insert into the same table at the same time. I was hoping that by imposing table locks I could create a first come, first served type of sequence whilst eliminating the huge cost of row and page locks.

I've just tested this morning using

EXEC sp_indexoption 'table1', 'disallowpagelocks', TRUE
EXEC sp_indexoption 'table1', 'disallowrowlocks', TRUE


This seems to have improved matters, but I am still getting 400-500 extent locks/sec. (I have yet to check whether this mod is sufficient to prevent the error; I just need to rig up the test.)
 
ShogunWade (Author) Commented:
Hi All,

Further update on tests...

Whilst sp_indexoption reduced the locks substantially, it wasn't enough to prevent the problem.

I'm just looking at  sp_tableoption 'mytable', 'table lock on bulk load', 'ON'

Has anyone used this option before? I haven't. I'm gonna give it a try as soon as my test DB is repaired (damned torn page). So I'd be interested in hearing if anyone has any experience of sp_tableoption.

Cheers

Mike
 
ShogunWade (Author) Commented:
According to BOL:

"If TABLOCK is not specified, the default uses row-level locks, unless the table lock on bulk load option is set to on. Setting the table lock on bulk load option using sp_tableoption sets the locking behavior for a table during a bulk load.   .....      .....If the TABLOCK hint is specified, the default setting for the table set with sp_tableoption is overridden for the duration of the bulk load."

So as far as I can see, theoretically my WITH (TABLOCK) should have worked but didn't (as you have noticed, arbert). Given that this overrides the sp_tableoption setting, I figure that if it's a bug in BULK INSERT, there is a chance (if remote) that removing the WITH (TABLOCK) and using this option instead might work.



 
arbert Commented:
So you don't want to use TABLOCKX to actually lock the whole table and make the other BCP jobs wait for their turn?

 
ShogunWade (Author) Commented:
Hmm, sounds plausible, but as far as BOL says, BCP & BULK INSERT only support the TABLOCK hint, not TABLOCKX.

I did think about doing


SELECT * FROM Table1 WITH(TABLOCKX,HOLDLOCK) WHERE 1=0     -- Force exclusive lock first
BULK INSERT ....  etc....


but it feels a bit kludgy so I haven't tried it yet.
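One caveat: HOLDLOCK keeps the lock only until the end of the transaction, so outside an explicit transaction the exclusive lock taken by that SELECT would be released as soon as the statement completes. The trick would presumably need to look more like this (file path illustrative):

BEGIN TRAN
-- Take and hold an exclusive table lock without reading any rows
SELECT * FROM Table1 WITH (TABLOCKX, HOLDLOCK) WHERE 1 = 0
BULK INSERT Table1
FROM 'D:\ArchiveFeed\Table1_001.bcp'
WITH (DATAFILETYPE = 'native')
COMMIT TRAN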
 
ShogunWade (Author) Commented:
Grrrrrrrrr!

The sp_tableoption has no effect.  Good old Microsoft.

One thing that I think may be having an effect is the fact that I increased the "max worker threads" setting for the SQL instance to 1024, as the default was too few for this environment, causing numerous SQL Agent jobs to fail with the error "Can't get a worker thread" (paraphrased). Changing that setting eliminated that problem. I guess that more worker threads running = more locks, so I may just get the pocket calculator out and try increasing the lock memory just enough to solve the prob.
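For reference, the worker-thread change described here would be applied as follows (it is an advanced option, so it has to be exposed first):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max worker threads', 1024
RECONFIGURE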

My simplistic thinking is that, given no other tasks are running (which is guaranteed in this case), 4 jobs work without problems.

Having boosted max worker threads, SQL Agent doesn't start waiting for a worker thread until I have 8 of these jobs executing, so I'm thinking of doubling the lock memory. I already have stuff in place which monitors throughput (records/sec/sp), so if increasing the lock memory doesn't impact this too much I'm OK.

If not, then I'm open to any and all suggestions :)
 
ShogunWade (Author) Commented:
<----- don't try this @ home ------>

If I have sp_tableoption set as mentioned, don't use WITH (TABLOCK) on the bulk insert, and run multiple jobs on an empty database containing only empty tables, the database blows up and goes into suspect mode. I have repeated this 3 times now.
 
arbert Commented:
"but it feels a bit kludgy so I haven't tried it yet."

no more kludgy than this :)  :

"the database blows up and goes into suspect mode."


I assume none of these bulk inserts generate parallel plans, right?
 
ShogunWade (Author) Commented:
Correct. Plans are not parallel.
 
ShogunWade (Author) Commented:
It seems that I'm not going to solve this in a graceful way.

If it were you, arbert, would you implement sequence control and prevent too many running at once, or try to patch it by doing the TABLOCKX, HOLDLOCK trick, or something else?
 
arbert Commented:
Sad, but yeah, that's probably what I would do. I've had to do it in several instances. It sucks that you can't "load full throttle", but getting everything to run to completion by staggering is sometimes better than not getting a run at all....

Just out of curiosity, have you tried DTS using the "fast load" option (which actually generates BULK INSERT statements)?  At least that way, you could have everything in one package and you can tell DTS the max number of tasks to execute at a time.

Brett
 
ShogunWade (Author) Commented:
It wouldn't quite work so easily in DTS.

Basically there are source data files coming in at random intervals which cause job sequences to be initiated. These "intertwine" at points and invoke other jobs and/or merge at times. Most of the job execution logic is based on guesses about what is optimal, e.g. a certain task runs most optimally when n records are in the job, so under certain circumstances the job management code will hold back work in order to improve performance by "joining it with another job".

Gulp. I conceived and wrote it and it even confuses me :)  But essentially I am doing stuff that DTS would be unable to cope with. It won't be too difficult to throttle back my existing code.

Thanks for all your input arbert  & duane.    

I suggest that I split the points 50/50 between you both, as you have both helped quite a bit. Is this acceptable?
 
arbert Commented:
Nah, give them to duane; he gave much more valuable information than me!!  I just wanted to follow the thread; I like to see the problems that a man of your experience comes across!!!!!
 
ShogunWade (Author) Commented:
OK, your call.

"I like to see the problems that a man of your experience comes across!!!!!"   -- the sort that deserve more than 500pts generally ;)

I'll keep a note of this question and if I ever find a grateful answer I'll post it. (I might even, in a cruel moment, have to add this to my "Tricky Questions for Interviewing People" list ;) )

Cheers, both of you.
 
ShogunWade (Author) Commented:
Oops, I meant "graceful answer", not "grateful answer". Sorry.
 
arbert Commented:
Remind me never to interview with you lol
 
ShogunWade (Author) Commented:
lol.
