Can I make this import go faster in SSIS, moving data between two databases on the same server?

We recently upgraded from SQL Server 2012 to 2016 and have noticed it takes longer than expected to insert from one database into the other. See below for details and the specific question.
Hardware Specs
•      SQL Server 2016 (SP1-CU6)
•      Virtual server (32 processors and 32GB RAM)
•      Windows Server 2012 R2
•      1 instance of SQL server
•      Database A (compatibility =130)      
•      Database B (compatibility =130)      
•      Table Statistics in database A up-to-date
The SSIS package runs a stored procedure on DatabaseA (estimated cost = 2750), which returns 35 million rows.
It loads into a table in DatabaseB. The table has two indexes, one clustered and one non-clustered.
The stored procedure returns 25 columns. Most are integers, money, or small varchars; varchar(255) is the widest.
When deployed to the server, the package takes over 30 minutes to run.
In SSIS we have tried AutoAdjustBufferSize = True
and DefaultBufferMaxRows = 10,000.
AutoAdjustBufferSize seems to be slower than using DefaultBufferMaxRows = 10,000.

Is there any way to make this run faster?
What are some possible things to check which might cause it to run slowly?
Mark Wills (Topic Advisor) commented:
Welcome to the world of Asking a Question - good to see :)

Need to know more about the process in terms of Tables.

Are you populating an already existing table ? Does it have existing content ?

Are logs on different (multiple) spindles ?

If same instance, different DB, why are you using SSIS - why not populate directly ?

If transferring that much data - have you tried setting the receiving DB to Bulk Logged ?

What does the SP do - is there a lot of manipulation ? Would like to see it if possible :)
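
For reference, populating directly on the same instance can skip the SSIS data flow entirely, since DatabaseB can read DatabaseA with three-part names. A minimal sketch, with placeholder table and procedure names (not from this thread):

```sql
-- Sketch only: dbo.DestTable, dbo.SourceTable and dbo.usp_GetRows are placeholders.
-- TABLOCK helps the insert qualify for minimal logging under bulk-logged/simple recovery.
USE DatabaseB;

INSERT INTO dbo.DestTable WITH (TABLOCK)
SELECT col1, col2 /* , ... remaining columns ... */
FROM DatabaseA.dbo.SourceTable;

-- Or, if the logic must stay in the stored procedure:
-- INSERT INTO dbo.DestTable WITH (TABLOCK)
-- EXEC DatabaseA.dbo.usp_GetRows;
```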
BostonMA (Author) commented:
The destination table is empty when this process runs.

I'm not sure about the logs whether they are on different disks or not. I don't have full permissions on this box so it may take a while to figure out.

When you say populate directly, do you mean why not use INSERT INTO ....?  I thought SSIS was faster in this situation, no?

I'll try switching to bulk logged and let you know

I'll share the query once I test out Bulk logged.

Thank you!
Mark Wills (Topic Advisor) commented:
Well, considering it's a new empty table, do the indexing afterwards. There are a few things that can affect transaction log activity, and a clustered index is one of them.
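
One way to apply the load-then-index idea, assuming a hypothetical non-clustered index name (the clustered index can't be disabled without making the table inaccessible, so only the non-clustered one is touched here):

```sql
-- Disable the non-clustered index before the load so it isn't maintained row-by-row.
ALTER INDEX IX_DestTable_NC ON dbo.DestTable DISABLE;

-- ... run the 35M-row load here ...

-- Rebuild it once, in bulk, after the load completes.
ALTER INDEX IX_DestTable_NC ON dbo.DestTable REBUILD;
```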

Order By is another...

We could also investigate using "BULK" import methods

Do you make use of Partitioning ?

Anyway, check out your environment and we can then strategise the next steps...

Oh, and if changing the recovery model, do a transaction log backup before and after. Start by issuing a CHECKPOINT command, do the log backup, change the recovery model and do another full (plus logs) backup, then import.
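
That sequence sketched in T-SQL; backup paths are placeholders, and this assumes DatabaseB is currently in full recovery:

```sql
USE DatabaseB;
CHECKPOINT;                                                     -- flush dirty pages first

BACKUP LOG DatabaseB TO DISK = N'X:\Backups\DatabaseB_pre.trn'; -- log backup before the switch

ALTER DATABASE DatabaseB SET RECOVERY BULK_LOGGED;
BACKUP DATABASE DatabaseB TO DISK = N'X:\Backups\DatabaseB.bak';

-- ... run the import here ...

ALTER DATABASE DatabaseB SET RECOVERY FULL;                     -- restore the original model
BACKUP LOG DatabaseB TO DISK = N'X:\Backups\DatabaseB_post.trn';
```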
Scott Pletcher (Senior DBA) commented:
General ideas, in no particular order:

Bulk logging would definitely help.  Make sure you check the box that allows/causes SSIS to use bulk logging (I think it's a check box; I'm not looking at it now).

If you can, set the batch size to 0.  If you use a different batch size, say 50K, then only the first 50K rows will be bulk-logged; the rest will get full logging.

Also, make sure that DatabaseB has enough log space pre-allocated to handle the entire load.  Dynamic log growth kills performance.  If you do decide to load in batches, that won't be all that much log space.  But if you do everything in one batch, to get fully bulk loaded, you'll need roughly 6GB+ of log space available (assuming each row is ~150 bytes long).
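
Pre-growing the log can be scripted up front. A sketch, where the logical file name and target size are assumptions (check sys.database_files for the real name):

```sql
-- Grow DatabaseB's log in one step so autogrowth never fires during the load.
-- 'DatabaseB_log' is a placeholder logical file name; 8GB leaves headroom over the ~6GB estimate.
ALTER DATABASE DatabaseB
    MODIFY FILE (NAME = DatabaseB_log, SIZE = 8GB);
```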

If you can, use compression on the table.

Make sure the fillfactor on the receiving table is set high enough, typically 98+%.  The default is (sadly) often much lower than that, which just wastes pages.
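
The compression and fillfactor suggestions could be applied before the load along these lines (table name is a placeholder):

```sql
-- Page-compress the destination table (rebuilds the heap or clustered index).
ALTER TABLE dbo.DestTable
    REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Rebuild all indexes with a high fillfactor so pages are packed tightly.
ALTER INDEX ALL ON dbo.DestTable
    REBUILD WITH (FILLFACTOR = 98);
```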

If the collations are compatible, "tell" SQL that / make sure SQL "knows" that.
