Solved

0xe0434352 failure on scheduled job

Posted on 2011-03-24
27
12,527 Views
Last Modified: 2012-05-11
Hi,
I have been running some scheduled batch jobs overnight for a while now against some large files.
I have a SQL job that updates a table with about 700,000 rows.
This has been running fine until recently, when I started getting the error 0xE0434352, which stops the job on the CommandTimeout line.
Thinking this was due to the growth of the database, I have increased the timeout from 1 to 2 to 3 hours (Command2a.CommandTimeout = 10800).
However, it still fails each night on the server.
The next day, on finding the failure, I run the job manually and it completes fine within an hour, with no errors.
I am using SQL Server 2008 R2.
Any suggestions as to the cause?

Question by:Simon Cripps
27 Comments
 
LVL 7

Expert Comment

by:lozzamoore
What are the scheduled batch jobs? VBS scripts?

LVL 75

Expert Comment

by:Anthony Perkins
Unrelated, but have you considered that 3 hours is a very long time to update only 700K rows?  Perhaps you can post your code and we can suggest an alternative.

LVL 39

Expert Comment

by:lcohan
There must be something else running at the same time as your job that's creating I/O contention, locking and/or blocking in your database around that table. Check for any other competing jobs such as backups, purges, archiving, etc. running at around the same time.
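One way to check, while the job is running, is to query the request DMVs from a second connection. This is just a rough sketch using the standard SQL 2008 DMVs, nothing specific to your database:

SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.command,
       t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;   -- everything except this connection

Any request with a non-zero blocking_session_id points at the blocker.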


Author Comment

by:Simon Cripps
Thanks for the advice. I've reduced the job so that it only updates 10 instead of 30 fields in the table.
The job ran OK last night; however, I would like to see a few nights' runs before I close this off, as I also suspect that lcohan may well have found the reason, so I will report back in a few days.
Cheers so far.


Author Comment

by:Simon Cripps
I have run this 6 times now and it has still failed twice.
My two thoughts are that it is either a memory issue or conflicting programs.
With regards to memory, of the 4 GB of RAM there is 680 MB on standby and just 14 MB free. Could the lack of free memory slow the program to a halt, or would the system just use the standby memory? I don't understand the difference between the two.
I was thinking this may be conflicting resources; however, this job runs at 4:00 am with no other jobs running. SQL Server and Visual Studio were open but not accessing the database, so I am not too sure they are at fault.
Any further thoughts as to what causes the error 0xE0434352 would be really appreciated.

LVL 7

Expert Comment

by:lozzamoore
Is the SQL Agent job type a T-SQL job, or an operating system command?

In either case, could you post the body of the job step that is failing?

Can you take a look at the SQL Server Error log and SQL Server Agent Log to see if there is any additional information present at the time of the job failures?
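If it is quicker than the Log File Viewer, you can also dump both logs from a query window; xp_readerrorlog is undocumented, so treat this as a convenience sketch only:

EXEC master.dbo.xp_readerrorlog 0, 1;   -- current SQL Server error log
EXEC master.dbo.xp_readerrorlog 0, 2;   -- current SQL Server Agent log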

Thanks,


Author Comment

by:Simon Cripps
I'm not too sure of the terminology you mention. What I have is a .exe job that I created; Task Scheduler starts this program at a specified time.

The part of the job that fails is attached, and the line that fails is:
Dim reader2 As SqlDataReader = command2a.ExecuteReader()
Note this only fails in about 1 in 3 runs.

The SQL in the job is a simple update akin to:
UPDATE Table1
SET Field2 = Table2.Field2, ...
FROM Table1 INNER JOIN Table2 ON Table1.Field1 = Table2.Field1

I can't see anything at the moment in those logs that stands out as an issue. However, I noticed a record in the program error log from a few days ago, which was:

<?xml version="1.0"?><Result State="FAILED"><Error><HResult>0x80004005I32</HResult><Description><![CDATA[Error connecting to the data source.]]></Description><Source>XML BulkLoad for SQL Server</Source><Type>FATAL</Type></Error></Resu

The message was incomplete; it could be a red herring though.

Still puzzled, so thanks for your persistence.

command1a.Connection.Close()

time = DateTime.Now
Console.WriteLine("Complete 2) SP_GlobalFeedMerchantReview at  : " + (DateTime.Now.ToString("F")))

'Update all existing merchant records on the Global file from the Input file
Dim command2a As New SqlCommand("SP_GlobalFeedMerchantUpdate_all", myNewConnection2)
command2a.CommandTimeout = 10800               '3 hours
command2a.CommandType = CommandType.StoredProcedure
command2a.Connection.Open()
Dim reader2 As SqlDataReader = command2a.ExecuteReader()   'the line that throws 0xE0434352
command2a.Connection.Close()


LVL 7

Expert Comment

by:lozzamoore
Ok so you are using an operating system command.

Can you explain exactly what the EXE is doing, in high level terms?
(I presume the section you posted is only a small subset of what the EXE does?)

For example, it seems like it is using an input file to update some records in your SQL server?
Does the EXE interact with any other elements besides the input file and your SQL Server?

It's possible an SSIS package would be a more standard/streamlined way of doing this...

Regards,

LVL 39

Expert Comment

by:lcohan
Why do you need to run the DataReader, and do you have any idea of the volume of data you read from SQL and for what purpose?
Besides that, and not directly related to the failure of your data reader, I would add a ROWLOCK hint to the update as below to avoid locking/blocking in SQL:

UPDATE Table1 WITH (ROWLOCK)
SET Field2 = Table2.Field2, ...
FROM Table1 INNER JOIN Table2 ON Table1.Field1 = Table2.Field1


Author Comment

by:Simon Cripps
I think I'm getting close to the issue now.
I run the job to insert and update records in my database. All existing records get updated, with a few new inserts. I have around 1,000,000 records in the table, with a compressed size of about 3 GB. As the records get updated, the size of the database grows and can get up to 8 GB over the week.
I then manually shrink the database back to the 3 GB mark.
I do run a scheduled shrink of the database at the end of each nightly batch, but this does not appear to recover the space the way the manual shrink does.

I have a concluding part to my job:
Dim BckUpBat As String = (GetCurrentDLLPath() + "\Data\" + "MySiteDBShrink.bat")
Process.Start(BckUpBat)

That calls a SQL script:

USE [MySiteDB]
GO
DBCC SHRINKDATABASE(N'MySiteDB')
GO

Could it be that the program closes before the shrink completes? If so, how do I keep it open until it completes? If not, is there something else I need to do to get the automatic shrink to finish properly?

LVL 75

Accepted Solution

by:
Anthony Perkins earned 500 total points
>>I do run a scheduled shrink of the database at the end of each nightly batch <<
In case you are not aware, this is a very bad idea and leads to precisely what you are trying to avoid: bad performance.


Author Comment

by:Simon Cripps
Acperkins: I'm interested to understand why this is a bad idea. The scheduled shrink takes place in the middle of the night when there is low use. Also, if I do a manual shrink the performance improves, so why would a scheduled one be any different? Any thoughts would be useful, cheers.

LVL 75

Expert Comment

by:Anthony Perkins
>>I interested to understand why this is a bad idea. The scheduled shrink takes place in the middle of the night when there is low use. <<
The short answer: SHRINKDATABASE causes massive fragmentation.  For a better answer see here:
http://www.sqlskills.com/blogs/paul/post/Auto-shrink-e28093-turn-it-OFF!.aspx

I suspect this one was written by Paul with you in mind:
http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/06/13/629059.aspx
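To see what the shrink has done, something along these lines lists fragmentation per index. This is only a sketch: run it in your database, and the commented rebuild at the end is just a template with a placeholder table name:

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- ALTER INDEX ALL ON dbo.YourLargeTable REBUILD;   -- rebuild (or REORGANIZE) rather than shrink again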


Author Comment

by:Simon Cripps
So this now seems to leave me in a bit of a catch-22 situation. If I don't shrink my database, its size increases significantly from its actual 3.5 GB to around 8 GB and maybe a bit more. I believe this happens as I update each of the records every night. I suspect that the extra space is used as some type of swap storage or the like, but I'm not sure. When the database gets to this size, the job runs out of memory and crashes with the exception mentioned above.
If I do shrink the database, it effectively ruins the indexing.

Hmmmm.
Does this make sense? If it does, then how do I reduce the size of the database whilst keeping the integrity of the indexes?
Any clues?

LVL 75

Expert Comment

by:Anthony Perkins
>>If I don't shrink my database this size increases significantly from its actual 3.5 gb to arround 8gb and maybe a bit more. I believe this happens as I update each of the records ever night.<<
It sounds to me like it is the transaction log, and not the data file, that is increasing in size dramatically. First you need to confirm this is the case; next you need to identify why it is happening. There are a number of reasons, from poorly written queries to failing to back up the transaction log or not doing it frequently enough. A good way to visualize the database files (data and transaction log files) is to right-click the database and select Reports | Standard Reports | Disk Usage. This will clearly show you how much is allocated for each file and how much is used.

>>When the database gets to this size, the job runs out of memory and crashes with the exception mentioned above.<<
The database size has no effect on memory. What has an effect on memory are inefficient queries. You need to do some homework (see the first paragraph) and tell us the exact error message you are getting.
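If it is easier, the same numbers are available from a query window; a quick sketch (run it in your database):

SELECT name,
       type_desc,
       size * 8 / 1024 AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
FROM sys.database_files;

DBCC SQLPERF (LOGSPACE);   -- log size and percent used, for every database on the instance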


Author Comment

by:Simon Cripps
Still monitoring and modifying, so I will close this off when I can safely run the job without shrinking the database.


Author Comment

by:Simon Cripps
The main error is that the job fails on "timeout expired": the timeout period has expired or the server has stopped responding.
I have increased the timeout to 180 minutes, which sometimes is way more than enough.
I have removed the database shrink, so now the database size increases from 3.2 GB once refreshed to 6.3 GB after one day's (light) use.
Looking at the report mentioned above, 5.6 GB is due to data and 0.7 GB due to the transaction log. These are still within the limits (10 GB) for this database.
Looking at system resources, out of 4 GB of memory 3.2 GB is used and 0.8 GB is on standby, with 0 MB free. SQLServr.exe is using 1.5 GB. I know it was mentioned that this should not have an effect on memory; however, I feel it may be a factor in performance and the time to complete the query. I know that adding 1 GB more memory is the wrong way to go about resolving this, but I may have to try it, as I cannot see a way around it.
The query that the timeout is failing on has been refined so only a small number of the fields are being updated (about 10), but still all 1,000,000 rows are being updated.
Any more suggestions, most welcome.
 

LVL 75

Expert Comment

by:Anthony Perkins
>>These are still within the limits (10gb) for this database.<<
You are not by any chance using SQL Server Express Edition, are you?

>>Any more suggestions, most welcome.<<
Unfortunately without seeing the actual query, it is impossible to say much.  One thing you should consider is updating in batches of say 100K each.  This will be far more efficient than doing it in one batch.

Also, I don't believe you ever told us what your Recovery Model is. I suspect it is Full.


Author Comment

by:Simon Cripps
Yes, I am using SQL Server Express Edition.

The recovery model is currently set to Simple.

With regards to batching 100K, are there any examples of how to do that, i.e. setting the count or other solutions?

The SQL is attached below.

(Many thanks for your patience and persistence, by the way.)

UPDATE       Feed
SET  ProductName = Feed1.F1ProductName, ProductPrice = Feed1.F1ProductPrice, ActiveProduct = N'Y', ProductDescription = Feed1.F1ProductDescription, AffiliateURL = Feed1.F1AffiliateURL, ProductUpdatedDate = Feed1. F1ProductUpdatedDate, Warranty, InStock = Feed1.F1InStock, Delivery = Feed1.F1Delivery, DeliveryTime = GlobalFeed1.F1DeliveryTime, AdvertiserProductUrl = Feed1.F1AdvertiserProductUrl, ImageURL = Feed1.F1ImageURL
FROM  GlobalFeed INNER JOIN Feed1 ON Feed.MTMerchantID = Feed1.MTMerchantID AND  Feed. MerchantProductId = Feed1. F1MerchantProductId


LVL 7

Expert Comment

by:lozzamoore
The query looks pretty clean, no nasty scalar functions, or subqueries. Assuming you have good indexes to use (say on MTMerchantID, MerchantProductId), this is pretty well optimised.

There are plenty of techniques to do batch updates. Probably the simplest using this procedure model, and if you are permitted to make schema changes, is to add a surrogate datetime key called "last_updated" or similar.

You then reference this column in a loop like so:

declare @myolddatetime datetime
declare @mynewdatetime datetime
set @myolddatetime = (select top 1 last_updated from Feed)
set @mynewdatetime = getdate()

set rowcount 100000   -- limit each UPDATE to 100,000 rows
while exists (select 1 from Feed where last_updated = @myolddatetime)
begin
      --update statement here, restricted to rows where last_updated = @myolddatetime,
      --including update of column: last_updated = @mynewdatetime
end
set rowcount 0
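
Alternatively, on 2005 and later you can get the same effect with UPDATE TOP, which avoids SET ROWCOUNT (flagged for deprecation with INSERT/UPDATE/DELETE). This is only a rough sketch using your column names, with just one SET column shown for brevity:

declare @batch int = 100000
declare @newstamp datetime = getdate()

while 1 = 1
begin
    update top (@batch) f
    set    f.ProductName = f1.F1ProductName,   -- ...plus the other columns from your UPDATE
           f.last_updated = @newstamp
    from   Feed as f
           inner join Feed1 as f1
               on f.MTMerchantID = f1.MTMerchantID
              and f.MerchantProductId = f1.F1MerchantProductId
    where  f.last_updated is null or f.last_updated < @newstamp

    if @@rowcount = 0 break   -- nothing left to update
end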

Cheers,

LVL 75

Expert Comment

by:Anthony Perkins
"I know that adding 1gb more memory is the wrong way about trying to resolve the solution, but I may have to try this. as cannot see a way arround.
...
Yes I am using SQL Server express edition."

The Express Edition of SQL Server only supports 1 GB of memory.


You have a typo in your UPDATE statement which prevents it from executing: the column Warranty is not set to anything. Even after removing that one column, I am confused about what you are actually doing (and this may explain the time it is taking). This is what it looks like:

UPDATE  Feed
SET     ProductName = Feed1.F1ProductName,
        ProductPrice = Feed1.F1ProductPrice,
        ActiveProduct = N'Y',
        ProductDescription = Feed1.F1ProductDescription,
        AffiliateURL = Feed1.F1AffiliateURL,
        ProductUpdatedDate = Feed1.F1ProductUpdatedDate,
        InStock = Feed1.F1InStock,
        Delivery = Feed1.F1Delivery,
        DeliveryTime = GlobalFeed1.F1DeliveryTime,
        AdvertiserProductUrl = Feed1.F1AdvertiserProductUrl,
        ImageURL = Feed1.F1ImageURL
FROM    GlobalFeed
        INNER JOIN Feed1 ON Feed.MTMerchantID = Feed1.MTMerchantID
                            AND Feed.MerchantProductId = Feed1.F1MerchantProductId


LVL 75

Expert Comment

by:Anthony Perkins
How is Feed related to GlobalFeed?  There is no relationship between the tables.  This could be causing a CROSS JOIN that is making the whole process explode in time and disk space.

What is GlobalFeed1 ?

I think we are pretty close to getting to the bottom of this; we just need the actual, valid query you are using.

LVL 39

Expert Comment

by:lcohan
Also, after you fix the actual query that does the update, there may be another thing to consider: why not run the stored procedure in a separate T-SQL command step of the job, like:

exec SP_GlobalFeedMerchantUpdate_all;

instead of
"....command2a.CommandType = CommandType.StoredProcedure..." and using a SqlDataReader?

That way it should not time out in SQL if your server timeout is set to unlimited.


Author Comment

by:Simon Cripps
OK, a couple of things that I have done that I think have made a positive difference:
a) For the individual database I have changed the autogrowth from 1 MB to 10%.
b) For SSMS I have changed the memory setting from around 2,000,000 KB to just 1,024 KB.

This was because I saw a lot of transactions increasing the database by 1 MB at a time and thought that many small growths would not be helpful. Also, decreasing the memory to 1,024 KB has ensured that the machine's memory is not completely taken up. I feel this is a key issue: it appears the machine's memory is being used up, which leaves SQL Server unable to perform the big queries. I suspect this may have some impact on performance, but it has worked OK for a couple of days.
Are these 2 changes a good idea, or could I be storing up issues for the future? I will keep monitoring this to see if it has fixed the issue in the short term.


Author Comment

by:Simon Cripps
No joy, the issue is still happening. More info on the issue below.
The database has 40 tables, 38 of which are smaller than 2 MB.
There are 2 large tables.
The first of these, GlobalLive, has 800,000 records; when I look at the properties of this table its size is about 2 GB.
The other table is GlobalInput. This table starts off empty, then during the evening batch gets populated with another 800,000 records. These records match ones in GlobalLive and are used to update the GlobalLive table. At the end of the batch GlobalInput is emptied again.
Looking at the GlobalInput table, even though it is empty, its size is about 1 GB.
Looking at the database with all the tables, I would expect it to come to a size equal to the sum of the tables; however, it is almost double that, at about 5 GB.
When running this batch job the total database increases to over 10 GB, about 25% of which is the log file; this often causes the job to abend.
I have noticed that when the GlobalInput table is having its records deleted (the idea was to clear space and start with a fresh table each night), the size of the database increases by several GB. Therefore I am thinking that there is some storage allocated to a rollback. Might this be the issue, and if so how would I free up the space? I understand that some space will be allocated to indexes, which I would like to keep for performance.
I take a backup each night and would only restore from a full backup, if that helps with freeing up space.
Any ideas what may be taking up this space and how I can reduce it?

LVL 75

Assisted Solution

by:Anthony Perkins
Anthony Perkins earned 500 total points
>>For the individual database I have changed the auto growth from 1mb to 10%<<
In general that is a bad idea. It should be a fixed size, something like 500 MB or 1 GB.
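For example, something along these lines; the logical file names here are only guesses, so check them in sys.database_files first:

ALTER DATABASE MySiteDB MODIFY FILE (NAME = N'MySiteDB', FILEGROWTH = 512MB);       -- data file
ALTER DATABASE MySiteDB MODIFY FILE (NAME = N'MySiteDB_log', FILEGROWTH = 256MB);   -- log file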
>>for SSMS I have changed the memory setting from around 2000000kb to just 1024 kb<<
I suspect you are talking about SQL Server memory and not the SSMS client app. But again, unless the server is shared with other applications (not recommended), then all you have achieved is to cripple SQL Server.

As soon as you have the answers to my question here (http:a35377236), we may be able to help you more.


Author Closing Comment

by:Simon Cripps
Pretty sure that stopping the database shrink has fixed much of the issue. The extended timeout was not set correctly, so I think fixing that has also resolved the matter. Cheers.