Solved

Question regarding designing ETL process with DTS

Posted on 2008-10-08
490 Views
Last Modified: 2013-11-30
    We have two ideas about how a DTS package should be built for an ETL process. One methodology is to separate the process into multiple packages and run them via jobs. The other methodology is to build the process in one master package and run that single package as a job.
    Based on your experience, which is the more logical design? If you answer, please do not simply choose A or B; identify which approach is best and give supporting reasons (pros and cons) for why one is more efficient than the other.
    Once again, thank you for your ongoing support!
Question by:Kevin_Maillet

7 Comments
 
LVL 4

Accepted Solution

by:
randy_knight earned 167 total points
ID: 22675533
With DTS, I always did it like your option A, for the following reasons:

1. The DTS ExecutePackage task is a pain to work with.
2. It is much easier to restart a failed job than to go into your master package, disable the steps that have already completed, etc.
3. If you build some metadata tables to handle prerequisites, completion of batches, etc., you can go massively parallel this way (see the sketch below).

I could go on and on.  SSIS is a different world, but with DTS this is the only way to do it IMO.
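
A minimal sketch of the kind of batch-control metadata described in point 3. The table and all the names in it are hypothetical, for illustration only; DTS provides nothing like this out of the box. Each package claims its own row for the batch, and a prerequisite check decides whether the package has anything to do, so independent branches can run in parallel:

    -- Hypothetical batch-control table (names are illustrative).
    CREATE TABLE dbo.EtlBatchControl (
        BatchID     int         NOT NULL,
        PackageName sysname     NOT NULL,
        Status      varchar(20) NOT NULL DEFAULT 'Pending',  -- Pending/Running/Complete/Failed
        StartedAt   datetime    NULL,
        FinishedAt  datetime    NULL,
        CONSTRAINT PK_EtlBatchControl PRIMARY KEY (BatchID, PackageName)
    );
    GO

    -- First step inside each package: claim the work only if all
    -- prerequisite packages have completed this batch.
    DECLARE @BatchID int;
    SET @BatchID = 42;  -- illustrative batch number

    IF NOT EXISTS (SELECT 1
                   FROM dbo.EtlBatchControl
                   WHERE BatchID = @BatchID
                     AND PackageName = 'LoadStaging'      -- prerequisite package
                     AND Status = 'Complete')
        RAISERROR ('Prerequisite not complete; nothing to do.', 11, 1);
    ELSE
        UPDATE dbo.EtlBatchControl
        SET Status = 'Running', StartedAt = GETDATE()
        WHERE BatchID = @BatchID
          AND PackageName = 'ValidateStaging';            -- this package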

 
LVL 75

Expert Comment

by:Anthony Perkins
ID: 22675597
It really depends on whether you are passing global variables from the parent to the child and back.  If the answer is yes, then you have no choice.
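
One nuance behind that point: from a job you can only push values in. The DTSRun utility's /A switch sets a global variable on the way into a package (the trailing number is an OLE type id; 3 should be the 4-byte integer, if memory serves), but nothing comes back out, which is why two-way traffic forces the master/child design. The server, package, and variable names below are made up for illustration:

    -- Pushing a value INTO a package from outside (one-way only).
    -- /E = trusted connection, /N = package name, /A = set a global variable.
    EXEC master.dbo.xp_cmdshell
        'DTSRun /S "MYSERVER" /E /N "ValidateStaging" /A "BatchID":"3"="42"';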
 

Author Comment

by:Kevin_Maillet
ID: 22689823
Is there any performance reasons why one is more effective than the other?
 
LVL 75

Assisted Solution

by:Anthony Perkins
Anthony Perkins earned 166 total points
ID: 22691865
You should not be focusing on performance as the key reason.  If there are differences, they would have to be negligible.  Instead, focus on maintenance and on whether the steps are independent of each other.  If they are, then go with using jobs; if not, then use a master/child DTS package approach.  That is the correct litmus test to apply.
 
LVL 51

Assisted Solution

by:Mark Wills
Mark Wills earned 167 total points
ID: 22696759
Definitely look towards operability and scalability. I do like a modular approach, with each module being its own package, and also self-contained. For that reason you often see a few packages being brought into an ETL process, but in such a way that they can largely run independently of the previous package.

For example: load staging, validate staging, report exceptions, put away staging. That can be four distinct packages; each has a specific job to do, and each works with whatever data exists (or does not exist, because of an earlier failure). The scheduler is essentially the glue that coordinates the packages, and that is where the sometimes mistaken belief creeps in that they all need to know about each other. Certainly there is no point validating data that didn't load, but then the validation simply has nothing to do.
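
A rough sketch of that glue, assuming the four packages are chained as consecutive steps of a single SQL Server Agent job; the job, step, server, and package names are hypothetical, and each step shells out to DTSRun as was typical in DTS days:

    -- Hypothetical Agent job chaining the four packages as CmdExec steps.
    -- The scheduler, not a master package, sequences the work.
    EXEC msdb.dbo.sp_add_job @job_name = N'Nightly ETL';

    EXEC msdb.dbo.sp_add_jobstep
        @job_name = N'Nightly ETL', @step_name = N'Load staging',
        @subsystem = N'CmdExec',
        @command = N'DTSRun /S "MYSERVER" /E /N "LoadStaging"',
        @on_success_action = 3;   -- 3 = go to the next step

    EXEC msdb.dbo.sp_add_jobstep
        @job_name = N'Nightly ETL', @step_name = N'Validate staging',
        @subsystem = N'CmdExec',
        @command = N'DTSRun /S "MYSERVER" /E /N "ValidateStaging"',
        @on_success_action = 3;

    EXEC msdb.dbo.sp_add_jobstep
        @job_name = N'Nightly ETL', @step_name = N'Report exceptions',
        @subsystem = N'CmdExec',
        @command = N'DTSRun /S "MYSERVER" /E /N "ReportExceptions"',
        @on_success_action = 3;

    EXEC msdb.dbo.sp_add_jobstep
        @job_name = N'Nightly ETL', @step_name = N'Put away staging',
        @subsystem = N'CmdExec',
        @command = N'DTSRun /S "MYSERVER" /E /N "PutAwayStaging"',
        @on_success_action = 1;   -- 1 = quit reporting success

    EXEC msdb.dbo.sp_add_jobserver @job_name = N'Nightly ETL';  -- local server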

Some purists will totally disagree, but the basis for my reasoning is that there is very little performance overhead in running a task that has nothing to do, and there is maximum flexibility within a discrete module. I can change the validation package as much or as little as needed without impact on anything else. More so, I can farm out different jobs to different developers based on skills, experience, and knowledge of the data.

So take an object-oriented perspective of the ETL process and manage each object (package). It kind of flows on from acperkins' statement "focus on maintenance and on whether the steps are independent of each other", except I take it one step further and suggest making the steps independent. It also kind of fits in with randy_knight's suggestions, in so much as each individual step can be isolated, worked on, etc.

You do have to design it pretty well, and in the validation process flag errors and/or approved data so that any ensuing steps can take appropriate action. ETL is data driven, so let the data drive the process.
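
A minimal sketch of that data-driven handoff, assuming a staging table with a row-level status column (all names here are hypothetical): the validation package stamps each row, and downstream packages key off the stamp, so every package still has well-defined work even when an earlier step loaded nothing:

    -- Hypothetical staging table with a row-level status flag.
    CREATE TABLE dbo.StagingOrders (
        OrderID    int          NOT NULL PRIMARY KEY,
        CustomerID int          NULL,
        Amount     money        NULL,
        RowStatus  char(1)      NOT NULL DEFAULT 'N',  -- N=new, A=approved, E=error
        ErrorText  varchar(255) NULL
    );
    GO

    -- Validate-staging package: stamp each row instead of failing the run.
    UPDATE dbo.StagingOrders
    SET RowStatus = 'E', ErrorText = 'Missing customer'
    WHERE RowStatus = 'N' AND CustomerID IS NULL;

    UPDATE dbo.StagingOrders
    SET RowStatus = 'A'
    WHERE RowStatus = 'N';

    -- Report-exceptions reads only 'E' rows; put-away reads only 'A' rows.
    -- If nothing loaded, every statement simply touches zero rows.
    SELECT OrderID, ErrorText FROM dbo.StagingOrders WHERE RowStatus = 'E';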
 

Author Comment

by:Kevin_Maillet
ID: 22713570
Thank you for your help, gentlemen.
 
LVL 51

Expert Comment

by:Mark Wills
ID: 22713603
A pleasure. What did you decide to do?