Solved

DTS/File Transfer Protocol task - inconsistent in transferring files

Posted on 2004-09-13
609 Views
Last Modified: 2013-11-30
From SQL Server/DTS, using the File Transfer Protocol task, I'm transferring .txt data files from a unix box to the local box.  There are about 40 jobs that transfer 40 files, executing between 10 PM and 5 AM every night.
It has become rare for all these jobs to transfer their files successfully; it is inconsistent, and 1 or 2 jobs fail each and every day.  I'm not able to figure out the problem.
When I check with the unix box admin, he says that he can't see any error on the other server.  He is asking me to give more details on the error.
The only error I can see in the log file is:
*************************************************************
Step Error Source: File Transfer Protocol Task
Step Error Description:Failed to connect to the internet site for file transfer.
Step Error code: 800403F1
Step Error Help File:
Step Error Help Context ID:0

Step Error Source: Microsoft Data Transformation Services Flat File Rowset Provider
Step Error Description:Error opening datafile: The system cannot find the file specified.
**************************************************************

Could someone please help me?  How do I figure out why it fails on FTP?

Appreciate your help.

Thanks in advance.
 



 
Question by:divyashruthi
12 Comments
 
LVL 10

Expert Comment

by:Jay Toops
ID: 12044148
The only thing that comes to mind would be too many simultaneous connections to the FTP server.

Best just to retry the step until it's successful.

Also, if you can, try manually FTPing the files down during that time window.

Create a job that just tests an FTP download every 10 minutes and logs the activity to a db.

See what happens.

Jay
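
A rough sketch of the probe job Jay describes, run every 10 minutes from a SQL Agent CmdExec step.  The host "unixbox", the ftpuser/secret login, and the FtpMonitor database with its FtpProbeLog table are all placeholders, not names from the original setup:

@echo off
rem probe_ftp.cmd - test whether the FTP server will accept a login, and log the result

rem build a throwaway ftp command script (placeholder host and credentials)
> "%TEMP%\probe.ftp" echo open unixbox
>> "%TEMP%\probe.ftp" echo user ftpuser secret
>> "%TEMP%\probe.ftp" echo bye

rem run the session and capture everything ftp.exe prints
ftp -n -s:"%TEMP%\probe.ftp" > "%TEMP%\probe.out" 2>&1

rem a "230" reply in the captured output means the login succeeded; treat anything else as a failure
findstr "230" "%TEMP%\probe.out" > nul
if errorlevel 1 (set RESULT=FAILED) else (set RESULT=OK)

rem write the outcome to the (hypothetical) monitoring table
osql -E -d FtpMonitor -Q "INSERT INTO FtpProbeLog (RunTime, Result) VALUES (GETDATE(), '%RESULT%')"

Looking at when the FAILED rows land should show whether the refusals cluster at the times the 40 jobs pile onto the server.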
 
LVL 18

Expert Comment

by:ShogunWade
ID: 12044591
Agree with Jay, but out of interest, why are there "40 jobs that transfer 40 files" as opposed to 1 job transferring 40 files?
 
LVL 1

Expert Comment

by:ratb3rt
ID: 12044799
A good idea might be to change the DTS-FTP part of the job into a batch script. You can then call this from the DTS package and record stdout/stderr to your own log files.
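
As a sketch of what that batch script could look like - the host "unixbox", the ftpuser/secret login, and the C:\data and C:\ftp_logs paths are placeholders - it takes the remote file name as an argument and, since ftp.exe returns 0 even when a transfer fails, decides success by checking that the file actually arrived:

@echo off
rem get_file.cmd <filename> - fetch one file and log the whole ftp session

set REMOTEFILE=%1
set LOGFILE=C:\ftp_logs\ftp_transfer.log

rem build the ftp command script (placeholder host, credentials and paths)
> "%TEMP%\get.ftp" echo open unixbox
>> "%TEMP%\get.ftp" echo user ftpuser secret
>> "%TEMP%\get.ftp" echo ascii
>> "%TEMP%\get.ftp" echo get %REMOTEFILE% C:\data\%REMOTEFILE%
>> "%TEMP%\get.ftp" echo bye

echo ===== %DATE% %TIME%  %REMOTEFILE% ===== >> "%LOGFILE%"
ftp -n -i -s:"%TEMP%\get.ftp" >> "%LOGFILE%" 2>&1

rem ftp.exe does not set a useful exit code, so test for the downloaded file instead
if not exist "C:\data\%REMOTEFILE%" (
    echo DOWNLOAD FAILED: %REMOTEFILE% >> "%LOGFILE%"
    exit 1
)
exit 0

An Execute Process task in the DTS package can then treat a non-zero return code as a step failure, and the log keeps the server's actual responses to show the unix admin.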
 
LVL 10

Expert Comment

by:Jay Toops
ID: 12045326
Also, if it's trying to FTP the 40 files at the SAME time, your poor FTP server would get very upset.

Jay
 
LVL 18

Expert Comment

by:ShogunWade
ID: 12045351
Yes, that's what I was thinking.  And I would think that you'd get the files quicker if you do them in series rather than in parallel.
 

Author Comment

by:divyashruthi
ID: 12045853
So, is the problem solely on the local server that's getting the files, and not on the unix server where the files reside?

I need to get the files in each DTS package (40 packages running at various times) because I do the data scrubbing.

Is it possible to fix the problem while keeping my current setup, or is it compulsory that I write a batch script or build one package that FTPs all the files in series?

I was hoping to do this quickly.

Please suggest.

Thank you.
 
LVL 1

Expert Comment

by:ratb3rt
ID: 12045946
A quick batch script would only help identify where the problem is, not necessarily solve it.
 
LVL 18

Expert Comment

by:ShogunWade
ID: 12045953
Do the FTP logs on the UNIX box have any indication of your failed attempts?

FTP is a pretty simple protocol, and there is not normally much that goes wrong with it aside from the obvious like connection failures, permissions, etc.

However, as Jay mentions, there are options in FTP to restrict the number of concurrent sessions, etc.  These are often set to prevent excessive strain on the server.  Is your application the one thing that accesses this FTP site?  Are you sure the files are available when you try to get them?  These are the sorts of questions you need to address, I think, because I suspect the problem lies in that area.  Of course, serializing the FTP means that you will only have 1 connection, which should eliminate the possibility of exceeding the max connections, but from a technical point of view there is nothing wrong with your strategy.
 
LVL 18

Expert Comment

by:ShogunWade
ID: 12045964
Oops - where I wrote "Is your application the one thing that accesses this ftp site?", I meant "Is your application the only thing that accesses this ftp site?"
 
LVL 10

Accepted Solution

by:
Jay Toops earned 500 total points
ID: 12046699
OK, this is the quick fix.

Go to your job scheduler:
1) EM -> Management -> SQL Server Agent -> Jobs
2) pick the job that is failing
3) right-click -> Properties
4) click the Steps tab
5) click on your download step (there is probably only one)
6) click Edit
7) click Advanced
8) enter Retry attempts (like 5 or something; use your judgement)
9) enter a retry interval of 4-5 minutes

Done.

Jay
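
For what it's worth, the same retry settings can also be applied without the GUI.  A minimal sketch using osql against msdb, where the job name "Load file 01" and step id 1 are placeholders for whichever job and step are failing:

rem set 5 retries, 4 minutes apart, on step 1 of the (placeholder) job "Load file 01"
osql -E -d msdb -Q "EXEC dbo.sp_update_jobstep @job_name = N'Load file 01', @step_id = 1, @retry_attempts = 5, @retry_interval = 4"

@retry_interval is in minutes, so this matches steps 8 and 9 above.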
 

Author Comment

by:divyashruthi
ID: 12056013
The FTP logs on the UNIX box have no indication of the failed attempts.  And no, my application is not the only thing that accesses this ftp site.  The files are available when I try to get them.


To change the retry attempts in the SQL Server Agent job: in the Steps tab I have only 1 step, and it executes all the steps in the package.  By design I have about 8 tasks in the package, and task 2 is where I FTP the files.
If I set the retry property to 5 or more, will that cause the package to execute completely (i.e. run all the steps/tasks in the package)?  And if it fails at any point, say at "task 2", will it continue with all the steps 3, 4, 5, etc. and then try executing another 4 times until successful?  I'm kind of lost here.  Please help.
 
LVL 10

Expert Comment

by:Jay Toops
ID: 12056112
Job RETRY only comes into play if the entire package fails.
The scheduler job step does not relate to the package task steps.

So if the package fails for ANY reason, the retry will execute it again after the wait interval, up to the maximum number of retries.

So if your package runs successfully the first time, retry will not come into play at all - your job step will be done.

So for settings like
8) enter RETRY ATTEMPTS 5
9) enter retry interval of 4 minutes
if your package fails on the first attempt, in 4 minutes the scheduler will launch your package again.

If it is then successful, no further processing will take place.

OK?

Jay
