Simon Cripps (United Kingdom) asked:

0xe0434352 failure on scheduled job

Hi,
I have been running some scheduled batch jobs overnight against some large files for a while now.
I have a SQL job that updates a table with about 700,000 rows.
This had been running fine until recently, when I started getting the error 0xE0434352, which stops the job on the CommandTimeout line.
Thinking this was due to the growth of the database, I have increased the timeout from 1 to 2 to 3 hours (Command2a.CommandTimeout = 10800).
However, it still fails each night on the server.
The next day, on finding the issue, I run the job manually and it completes fine, within 1 hour and with no errors.
I am using SQL Server 2008 R2.
Any suggestions as to the issue?
lozzamoore (United Kingdom):

What are the scheduled batch jobs? VBS scripts?
Anthony Perkins:
Unrelated, but have you considered that 3 hours is a very long time to update only 700K rows?  Perhaps you can post your code and we can suggest an alternative.
There must be something else running at the same time as your job that's creating IO contention, locking and/or blocking in your database around that table. Check for any other competing jobs like backups, purges, archiving, etc. running around the same time.
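A quick way to check for that kind of blocking is to query the request DMVs while the job is stuck. This is only a sketch (sys.dm_exec_requests and sys.dm_exec_sql_text are standard DMVs on SQL Server 2008 R2); run it in a second session at 4:00am when the update is hanging:

-- List requests that are currently blocked, who is blocking them, and what they are running.
SELECT  r.session_id,
        r.blocking_session_id,   -- non-zero: this request is waiting on that session
        r.wait_type,
        r.wait_time,             -- milliseconds spent waiting so far
        r.command,
        t.text AS running_sql
FROM    sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE   r.blocking_session_id <> 0;

If this returns nothing while the job is timing out, blocking is probably not the cause.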
Simon Cripps (Asker):
Thanks for the advice. I've reduced the job so that it only updates 10 instead of 30 fields in the table.
The job ran OK last night; however, I would like to see a few more nights' runs before I close this off, as I also suspect that Icohan may well have found the reason, so I will report back in a few days.
Cheers so far.
I have run this 6 times now and it has still failed twice.
My two thoughts are that either it is a space/memory issue or conflicting programs.
With regards to memory, of the 4GB installed there is 680MB on standby and just 14MB free. Could the lack of free memory slow the program to a halt, or would the system just use the standby memory? I don't understand the difference between the two.
I was also thinking that this may be conflicting resources; however, this job runs at 4:00am with no other jobs running. SQL Server and Visual Studio were open but not accessing the database, so I'm not sure they are at fault.
Any further thoughts as to what causes the error 0xE0434352 would be really appreciated.
Is the SQL Agent job type a TSQL job, or operating system command?

In either case, could you post the body of the job step that is failing?

Can you take a look at the SQL Server Error log and SQL Server Agent Log to see if there is any additional information present at the time of the job failures?

Thanks,
Not too sure of the terminology that you mention. What I have is a .exe job that I have created, and Task Scheduler starts this program at a specified time.

The part of the job that fails is attached, and the line that fails is:
Dim reader2 As SqlDataReader = command2a.ExecuteReader()
Note this only fails in about 1 in 3 runs.

The SQL in the job is a simple update akin to:
UPDATE Table1
SET Field2 = Table2.Field2, ........
FROM Table1 INNER JOIN Table2 ON Table1.Field1 = Table2.Field1

I can't see anything at the moment in those logs that stands out as an issue. However, I noticed a record in the program error log from a few days ago, which was:

<?xml version="1.0"?><Result State="FAILED"><Error><HResult>0x80004005I32</HResult><Description><![CDATA[Error connecting to the data source.]]></Description><Source>XML BulkLoad for SQL Server</Source><Type>FATAL</Type></Error></Resu

The message was incomplete; it could be a red herring though.

Still puzzled, so thanks for your persistence.

command1a.Connection.Close()

time = DateTime.Now
Console.WriteLine("Complete 2) SP_GlobalFeedMerchantReview at  : " + (DateTime.Now.ToString("F")))

'Update all existing merchant records on Global file from Input file
Dim command2a As New SqlCommand("SP_GlobalFeedMerchantUpdate_all", myNewConnection2)
command2a.CommandTimeout = 10800                            '3 hours
command2a.CommandType = CommandType.StoredProcedure
command2a.Connection.Open()
Dim reader2 As SqlDataReader = command2a.ExecuteReader()    'This is the line that fails
command2a.Connection.Close()


Ok so you are using an operating system command.

Can you explain exactly what the EXE is doing, in high level terms?
(I presume the section you posted is only a small subset of what the EXE does?)

For example, it seems like it is using an input file to update some records in your SQL server?
Does the EXE interact with any other elements besides the input file and your SQL Server?

It's possible an SSIS package would be a more standard/streamlined way of doing this...

Regards,
Why do you need to run the DataReader, and do you have any idea of the volume of data you read from SQL and for what purpose?
Besides that, and not directly related to the failure of your data reader, I would add a ROWLOCK hint to the update as below to avoid locking/blocking in SQL:

UPDATE Table1 WITH (ROWLOCK)
SET Field2 = Table2.Field2, ........
FROM Table1 INNER JOIN Table2 ON Table1.Field1 = Table2.Field1
I think I'm getting close to the issue now.
I run the job to insert and update records on my database. All existing records get updated, with a few new inserts. I have around 1,000,000 records in the table, with a compressed size of about 3GB. As the records get updated the size of the database grows and can get up to 8GB over the week.
I then manually shrink the database back to the 3GB mark.
I do run a scheduled shrink of the database at the end of each nightly batch, but this does not appear to recover the space in the way the manual shrink does.

I have a concluding part to my job
Dim BckUpBat As String = (GetCurrentDLLPath() + "\Data\" + "MySiteDBShrink.bat")
        Process.Start(BckUpBat)

This calls the following SQL:

USE [MySiteDB]
GO
DBCC SHRINKDATABASE(N'MySiteDB' )
GO

Could it be that the program closes before the shrink completes? If so, how do I keep it open until the shrink completes? If not, is there something else I need to do to shrink the database automatically?
ASKER CERTIFIED SOLUTION
Anthony Perkins (United States of America)
[Solution text available to Experts Exchange members only.]
Acperkins: I am interested to understand why this is a bad idea. The scheduled shrink takes place in the middle of the night when there is low use. Also, if I do a manual shrink the performance improves, so why would a scheduled one be any different? Any thoughts would be useful. Cheers.
>>I interested to understand why this is a bad idea. The scheduled shrink takes place in the middle of the night when there is low use. <<
The short answer: SHRINKDATABASE causes massive fragmentation.  For a better answer see here:
http://www.sqlskills.com/blogs/paul/post/Auto-shrink-e28093-turn-it-OFF!.aspx

I suspect this one was written by Paul with you in mind:
http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/06/13/629059.aspx
So this now seems to leave me in a bit of a catch-22 situation. If I don't shrink my database, its size increases significantly from its actual 3.5GB to around 8GB and maybe a bit more. I believe this happens as I update each of the records every night. I suspect that the extra space is used as some type of swap storage or the like, but I'm not sure. When the database gets to this size, the job runs out of memory and crashes with the exception mentioned above.
If I do shrink the database, it effectively ruins the indexing.

hmmmmm
Does this make sense? If it does, then how do I reduce the size of the database whilst keeping the integrity of the indexes?
Any clues?
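One hedged middle ground, for readers weighing the same trade-off: if a shrink really is unavoidable, the fragmentation it causes can be repaired afterwards by rebuilding the indexes, at the cost of the data file growing again. This is a sketch only; it assumes the big table is dbo.GlobalLive (a name that appears later in the thread), and ONLINE rebuilds are not available on Express, so the rebuild takes locks:

-- Find indexes in the current database that are heavily fragmented.
SELECT  OBJECT_NAME(ips.object_id)  AS table_name,
        i.name                      AS index_name,
        ips.avg_fragmentation_in_percent
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        INNER JOIN sys.indexes AS i
            ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE   ips.avg_fragmentation_in_percent > 30;

-- If a shrink has just been run, rebuild the indexes on the large table afterwards.
ALTER INDEX ALL ON dbo.GlobalLive REBUILD;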
>>If I don't shrink my database, its size increases significantly from its actual 3.5GB to around 8GB and maybe a bit more. I believe this happens as I update each of the records every night.<<
It sounds to me like it is the transaction log and not the data file that is increasing in size dramatically. First you need to confirm this is the case; next you need to identify why this is happening. There are a number of reasons, from poorly written queries to failing to back up the transaction log or not doing it frequently enough. A good way to visualize the database files (data and transaction log files) is to right-click on the database and select Reports | Standard Reports | Disk Usage. It will clearly show you how much is allocated for each file and how much is used.

>>When the database gets to this size, the job runs out of memory and crashes with the exception mentioned above.<<
The database size has no effect on the memory. What has an effect on memory are inefficient queries. You need to do some homework (see the first paragraph) and tell us the exact error message you are getting.
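If the Disk Usage report mentioned above is awkward to get to, the same allocated-versus-used numbers can be pulled with a short query. A sketch, assuming the database name MySiteDB from the earlier post:

-- Allocated vs. used space per file for the current database.
USE MySiteDB;
GO
SELECT  name                                    AS logical_file,
        type_desc                               AS file_type,    -- ROWS = data file, LOG = transaction log
        size / 128.0                            AS allocated_mb, -- size is stored in 8KB pages
        FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb
FROM    sys.database_files;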
Still monitoring and modifying, so I will close this off when I can safely run the job without shrinking the database.
The main error is that the job fails with "timeout expired": the timeout period has expired or the server has stopped responding.
I have increased the timeout to 180 minutes, which sometimes is way more than enough.
I have removed the database shrink, so now the database size increases from 3.2GB once refreshed to 6.3GB after one day's (light) use.
Looking at the report mentioned above, 5.6GB is due to data and 0.7GB due to the transaction log. These are still within the limits (10GB) for this database.
Looking at system resources, out of 4GB of memory, 3.2GB is used and 0.8GB is on standby, with 0MB free. SQLServr.exe is using 1.5GB. I know it was mentioned that this should not have an effect on memory; however, I feel it may be a factor in performance and the time taken to complete the query. I know that adding 1GB more memory is the wrong way to go about resolving this, but I may have to try it, as I cannot see a way around it.
The query that the timeout is failing on has been refined so that only a small number of the fields are being updated (about 10), but still all 1,000,000 rows are being updated.
Any more suggestions most welcome.
 
>>These are still within the limits (10GB) for this database.<<
You are not by any chance using SQL Server Express Edition, are you?

>>Any more suggestions most welcome.<<
Unfortunately, without seeing the actual query it is impossible to say much. One thing you should consider is updating in batches of, say, 100K rows each. This will be far more efficient than doing it all in one batch.

Also, I don't believe you ever told us what your Recovery Model is. I suspect it is Full.
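The recovery model can be confirmed with a one-line query; a sketch, again assuming the database name MySiteDB from the earlier post:

-- Show the recovery model (SIMPLE, FULL or BULK_LOGGED) for the database.
SELECT name, recovery_model_desc
FROM   sys.databases
WHERE  name = N'MySiteDB';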
Yes, I am using SQL Server Express Edition.

The recovery model is currently set to Simple.

With regards to batching 100K rows, any examples of how to do that? i.e. setting the row count, or other solutions.

The SQL is attached below.

(Many thanks for your patience and persistence, by the way.)

UPDATE       Feed
SET  ProductName = Feed1.F1ProductName, ProductPrice = Feed1.F1ProductPrice, ActiveProduct = N'Y', ProductDescription = Feed1.F1ProductDescription, AffiliateURL = Feed1.F1AffiliateURL, ProductUpdatedDate = Feed1. F1ProductUpdatedDate, Warranty, InStock = Feed1.F1InStock, Delivery = Feed1.F1Delivery, DeliveryTime = GlobalFeed1.F1DeliveryTime, AdvertiserProductUrl = Feed1.F1AdvertiserProductUrl, ImageURL = Feed1.F1ImageURL
FROM  GlobalFeed INNER JOIN Feed1 ON Feed.MTMerchantID = Feed1.MTMerchantID AND  Feed. MerchantProductId = Feed1. F1MerchantProductId


The query looks pretty clean, no nasty scalar functions, or subqueries. Assuming you have good indexes to use (say on MTMerchantID, MerchantProductId), this is pretty well optimised.

There are plenty of techniques to do batch updates. Probably the simplest using this procedure model, and if you are permitted to make schema changes, is to add a surrogate datetime key called "last_updated" or similar.

You then reference this column in a loop like so:

declare @myolddatetime datetime
declare @mynewdatetime datetime
set @myolddatetime = (select top 1 last_updated from Feed)
set @mynewdatetime = getdate()

set rowcount 100000
while exists (select 1 from Feed where last_updated = @myolddatetime)
begin
      --update statement here, filtering on last_updated = @myolddatetime
      --and including update of column: last_updated = @mynewdatetime
end
set rowcount 0

Cheers,
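As a side note for later readers: SET ROWCOUNT is deprecated for INSERT/UPDATE/DELETE, and on SQL Server 2005 and later the same batching can be done with UPDATE TOP. The sketch below follows the loop above, reusing the assumed last_updated column and the Feed/Feed1 names from the posted query:

-- Batched update: touch at most 100,000 rows per pass until nothing is left.
DECLARE @batch INT = 100000;
DECLARE @run_started DATETIME = GETDATE();

WHILE 1 = 1
BEGIN
    UPDATE TOP (@batch) Feed
    SET    ProductName  = Feed1.F1ProductName,
           -- ...remaining SET clauses as in the full statement...
           last_updated = GETDATE()
    FROM   Feed
           INNER JOIN Feed1 ON Feed.MTMerchantID = Feed1.MTMerchantID
                           AND Feed.MerchantProductId = Feed1.F1MerchantProductId
    WHERE  Feed.last_updated < @run_started;   -- skip rows already touched in this run

    IF @@ROWCOUNT = 0 BREAK;                   -- nothing left to update
END

Smaller passes keep each transaction, and the log records it generates, small, which also helps with the log growth discussed earlier.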
"I know that adding 1gb more memory is the wrong way about trying to resolve the solution, but I may have to try this. as cannot see a way arround.
...
Yes I am using SQL Server express edition."

The Express Edition of SQL Server only supports 1 GB of memory.


You have a typo in your UPDATE statement which prevents it from executing: the column Warranty is not set to anything. Even after removing that one column, I am confused about what you are actually doing (and this may explain the time it is taking). This is what it looks like:

UPDATE  Feed
SET     ProductName = Feed1.F1ProductName,
        ProductPrice = Feed1.F1ProductPrice,
        ActiveProduct = N'Y',
        ProductDescription = Feed1.F1ProductDescription,
        AffiliateURL = Feed1.F1AffiliateURL,
        ProductUpdatedDate = Feed1.F1ProductUpdatedDate,
        InStock = Feed1.F1InStock,
        Delivery = Feed1.F1Delivery,
        DeliveryTime = GlobalFeed1.F1DeliveryTime,
        AdvertiserProductUrl = Feed1.F1AdvertiserProductUrl,
        ImageURL = Feed1.F1ImageURL
FROM    GlobalFeed
        INNER JOIN Feed1 ON Feed.MTMerchantID = Feed1.MTMerchantID
                            AND Feed.MerchantProductId = Feed1.F1MerchantProductId


How is Feed related to GlobalFeed?  There is no relationship between the tables.  This could be causing a CROSS JOIN that is making the whole process explode in time and disk space.

What is GlobalFeed1 ?

I think we are pretty close to getting to the bottom of this, we just need the actual valid query you are using.
Also, after you fix the actual query that does the update, there may be another thing to consider: why not run the stored procedure in a separate T-SQL command step of the job, just like:

exec SP_GlobalFeedMerchantUpdate_all;


instead of
"....command2a.CommandType = CommandType.StoredProcedure..." and using SqlDataReader

This way it should not time out in SQL if your server timeout is set to unlimited.
OK, a couple of things that I have done that I think have made a positive difference:
a) For the individual database I have changed the autogrowth from 1MB to 10%.
b) In SSMS I have changed the server memory setting from around 2000000KB to just 1024KB.

This was because I saw a lot of transactions increasing the database by 1MB at a time and thought doing so many growths would not be helpful. Also, decreasing the memory setting has ensured that the machine's memory is not completely taken up. I feel this is a key issue: it appears the machine's memory is being used up, which leaves SQL Server unable to perform the big queries. I suspect this may have some impact on performance, but it has worked OK for a couple of days.
Are these 2 changes a good idea, or could I be storing up issues for the future? I will keep monitoring to see if this has fixed the issue in the short term.
No joy, the issue is still happening. More info on the issue below.
The database has 40 tables, 38 of which are smaller than 2MB.
There are 2 large tables.
The first of these tables, GlobalLive, has 800,000 records; when I look at the properties of this table the size is about 2GB.
The other table is GlobalInput. This table starts off empty, then during the evening batch gets populated with another 800,000 records. These records match ones in GlobalLive and are used to update the GlobalLive table. At the end of the batch the GlobalInput table is emptied again.
Looking at the GlobalInput table, even though it is empty, the size is about 1GB.
Looking at the database with all the tables, I would expect it to come to a size equal to the sum of the tables; however, it is almost double that, at about 5GB.
When running this batch job the total database increases to over 10GB, about 25% of which is the log file; this often causes the job to abend.
I have noticed that when the GlobalInput table is having its records deleted (the idea was to clear space and start with a fresh table each night), the size of the database increases by several GB. Therefore I am thinking that there is some storage allocated to a rollback. Might this be the issue, and if so, how would I free up the space? I understand that some space will be allocated to indexes, which I would like to keep for performance.
I take a backup each night and would only restore from a full backup, if that helps with freeing up space.
Any ideas what may be taking up this space and how I can reduce it?
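One hedged observation on the nightly emptying of the staging table: deleting 800,000 rows is fully logged row by row, which matches the multi-GB growth described above, whereas TRUNCATE TABLE deallocates whole pages with minimal logging and also releases the space the "empty" table is still holding. A sketch, assuming the table is dbo.GlobalInput as described and that no foreign keys reference it (TRUNCATE is not allowed in that case):

-- Empty the nightly staging table without writing a log record for every row.
-- Fails if any foreign key references GlobalInput; fall back to batched DELETEs then.
TRUNCATE TABLE dbo.GlobalInput;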
SOLUTION
[Solution text available to Experts Exchange members only.]
Pretty sure that stopping the database shrink has fixed much of the issue. The extended timeout was not set correctly, so I think that has also resolved the matter. Cheers.