I am developing a package to import a flat CSV file into a SQL table. It works, but I am unhappy with the performance: over an hour to import an average of 50,000 records. I first tried a Data Flow Task, mapping the columns and defining the output column datatypes/lengths to match the SQL destination columns. Some output columns are to be ignored and are marked as such in the transformation. Initially the CSV source file was on my local drive, being imported over to the server. Given the poor performance, I then moved the CSV source file onto the same server as the destination, but performance was still poor.
So I then tried a Bulk Insert Task, which I'm not very familiar with. This does not seem to work, since the source columns are larger than the destination columns and/or some source columns need to be ignored.
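For context, my understanding is that the Bulk Insert Task is essentially a wrapper around the T-SQL BULK INSERT statement, so the rough equivalent of what I attempted looks like the sketch below (the table name, file paths, and format file are placeholders for my actual setup):

    BULK INSERT dbo.TargetTable              -- placeholder destination table
    FROM 'D:\Import\source.csv'              -- placeholder CSV path on the server
    WITH (
        FORMATFILE = 'D:\Import\source.fmt', -- format file mapping source fields to table columns
        FIRSTROW = 2,                        -- skip the CSV header row
        TABLOCK,                             -- table lock to allow a minimally logged load
        BATCHSIZE = 10000                    -- commit every 10,000 rows
    );

From what I've read, a non-XML format file can skip a source field entirely by setting its server column number to 0, which sounds like it should cover the columns I need to ignore, but I haven't been able to get this working.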
Any ideas on how I can improve the performance of this import?