SSIS Newbie: how can I get two data streams from an OLE DB data source?

Hi All,

I've just created my first SSIS package.  Right now it consists of a Data Flow task containing an OLE DB Source with my SQL query and a Flat File Destination that writes one column of the data to a flat file.  Back up at the Control Flow level, I also have a File System task that moves the flat file to another location.

What has me stumped is how to achieve my desired next steps.  Basically, I need to build in the ability to remove duplicates (items that have already been put into the flat file) before the flat file is created.  I have a plan; I just don't know how to get there.  Here is what I'm thinking: initially, just on the first run, use two separate but identical OLE DB sources, one to create my flat file and one to populate a second flat file that I'll use as my "archive" and as input for the duplicate check.  As for the actual duplicate check itself, I have an example I can follow, so I'm not too concerned there.  Once the duplicate check is complete, I'm hoping to append any new rows to the end of this "archive" flat file.

So, I guess my question is: does this approach make sense?  And will it be possible to append rows to the archive file as I'm hoping?
brl8 (Author) commented:
vdr1620 - thanks for your replies.  I think your method would work, but my preference is not to create a database table.  Also, I need the data to persist until the next time I generate the file (I need to know which records are new).  I don't need to worry about the existing data changing; I just need to pick up new records (I don't think I mentioned this in my original post).

I figured out a way to do this.  I create the import file, then put the results of the SQL query into a text file.  That text file represents the state of the database table at the moment the import file is created.  The next time I run the import, I join the text file to the new results from the query, filtering out any row that already has a match.  Then I recreate the text file.  This is working perfectly for me.
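
In T-SQL terms, the filtering step amounts to something like the sketch below (object and column names are illustrative placeholders, not the real ones; in the package itself the text file comes in through a Flat File Source rather than raw SQL):

-- Sketch only: load the previous run's text file into a work table
-- for the comparison (BULK INSERT shown purely for illustration).
BULK INSERT dbo.ArchiveSnapshot
FROM 'C:\exports\archive.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- New records = current query results with no match in the snapshot.
SELECT q.ItemID, q.ItemValue
FROM dbo.QueryResults AS q            -- stands in for the real SQL query
LEFT JOIN dbo.ArchiveSnapshot AS a
       ON a.ItemID = q.ItemID
WHERE a.ItemID IS NULL;               -- keep only rows not seen last time

The rows that select returns go into the import file; the text file is then recreated from the full query results so it reflects the current state of the table.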
vdr1620 commented:
This is what you can do to check for duplicates:

--Create a staging table (a sketch is below).
--Load the data from the archive file into the staging table.
--In the Data Flow task you already have, add a Lookup transformation and look up the incoming rows against the staging table.
--Send the non-matching rows from the Lookup to your archive file.
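
A minimal sketch of that staging table and the Lookup's reference query (table and column names are placeholders):

-- Staging table loaded from the archive file at the start of the run.
CREATE TABLE dbo.StagingArchive
(
    ItemID INT NOT NULL PRIMARY KEY   -- the key the Lookup matches on
);

-- Point the Lookup transformation at this reference query and redirect
-- its "no match" output; those rows are the new ones to append.
SELECT ItemID
FROM dbo.StagingArchive;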

Hope this helps!
brl8 (Author) commented:
Thanks, vdr1620.

I read through what you sent.  Maybe this is a stupid question, but is the temp table deleted after the package completes?  If so, this won't work, as I need to keep a running record of every record I've already output to the flat file and only bring over new records.

vdr1620 commented:
Yes, it will be deleted when the package completes.  I'm not sure why you think it won't work, though: you reload all of the data into the temp table each time the package executes, and that way you can check against the existing records.  You will need to use a date column, if one exists, to get the incremental data.
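
For example, something like this (column and variable names are assumed, not from your post):

-- Incremental pull using a date column; @LastRunDate would be persisted
-- between runs (package variable, configuration, or a one-row table).
DECLARE @LastRunDate datetime = '20100101';   -- placeholder value

SELECT ItemID, ItemValue, CreatedDate
FROM dbo.SourceTable
WHERE CreatedDate > @LastRunDate;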

If you still think the above approach won't work, then I'm afraid the only other solutions are to create a permanent table in the database and update its values on each run, or to store the values in a file and update the file on each run.
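
For the permanent-table option, the per-run update could be as simple as this sketch (again, placeholder names):

-- Record the keys exported this run so later runs can skip them.
INSERT INTO dbo.ExportedKeys (ItemID)
SELECT q.ItemID
FROM dbo.QueryResults AS q
WHERE NOT EXISTS
      (SELECT 1 FROM dbo.ExportedKeys AS e WHERE e.ItemID = q.ItemID);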

Will the existing data in the table change after it has been inserted into the flat file?
brl8 (Author) commented:
I figured out a way to do this that is working for me.