  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1056
How to truncate and reload a large table in SQL Server without generating transaction logs

I have a large table (1,000,000+ rows) in a SQL Server 2005 database that I truncate and reload from scratch 3-5 times per week.  This table is for reporting purposes only, and the data in it does not need to be restored in the event of a system failure (I can easily restore the data by rerunning the program that normally truncates and reloads it).

Whenever I do this, there are a lot of transactions generated and my transaction log grows very large.

Is there a way to load this table without generating all of the transaction logs?

In Oracle, I would create the table as "UNRECOVERABLE" or with "NOLOGGING".

Is there something similar for SQL Server?
jbaird123 Asked:

1 Solution
 
tim_csCommented:
How are you loading the table?  Have you looked into switching to bulk-logged recovery during the load and then switching back to full?

http://msdn.microsoft.com/en-us/library/ms189275.aspx
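The switch tim_cs describes can be sketched roughly like this (database, table, and backup path are placeholders; under the full model you still need a log backup afterwards to keep the log chain intact):

```sql
-- Switch to bulk-logged so bulk operations are minimally logged
ALTER DATABASE ReportingDB SET RECOVERY BULK_LOGGED;

-- ... run the truncate-and-reload here ...
TRUNCATE TABLE dbo.BigReportTable;
-- bulk load (BULK INSERT / bcp / ETL tool) goes here

-- Switch back and back up the log so the log chain stays usable
ALTER DATABASE ReportingDB SET RECOVERY FULL;
BACKUP LOG ReportingDB TO DISK = N'C:\Backups\ReportingDB_log.bak';
```

Note that a bulk-logged log backup contains the bulk-changed extents, so the backup itself can still be large even though the live log stays small.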
 
jbaird123Author Commented:
tim_cs,

I am loading the table using an ETL process created in Talend.  There is a "bulk insert" option for loading data in bulk, but I am not sure what it does.  Unfortunately, there isn't any documentation explaining that option, so I wouldn't know how to tell whether using it was having the desired effect.

Your suggestion to switch to bulk-logged mode would affect the entire database - correct?  I was hoping for a setting that could be applied on a table-by-table basis (creating certain tables without logging and others with logging).

Thanks.
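For what it's worth, a tool's "bulk insert" option usually maps to something like the T-SQL BULK INSERT statement or the bulk-copy API, though I can't say what Talend does specifically. Whether such a load is minimally logged depends on more than the recovery model: the target generally needs a table-level lock and, ideally, to be empty or a heap. A hand-rolled equivalent might look like this (table name and file path are invented for illustration):

```sql
-- Minimal logging generally requires bulk-logged or simple recovery
-- plus a table-level lock on the target (TABLOCK).
BULK INSERT dbo.BigReportTable
FROM 'C:\loads\big_report.dat'
WITH (TABLOCK, FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');
```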
 
Scott PletcherSenior DBACommented:
SQL Server does not have any equivalent to "UNRECOVERABLE".

You could put that table in tempdb (under a non-temp name) or another database to move the activity away from the main db's log.  

For tempdb, if the SQL service went down and came back up, of course you'd have to reload the table.
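One way to set up the separate database Scott mentions (names are illustrative):

```sql
-- A dedicated staging database whose log can be re-used freely
CREATE DATABASE ReportingStage;
ALTER DATABASE ReportingStage SET RECOVERY SIMPLE;

-- Keep the volatile table there; the main database's log is untouched.
-- Queries in the main database can still reach it by three-part name:
SELECT COUNT(*) FROM ReportingStage.dbo.BigReportTable;
```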

 
liijaCommented:
For DW purposes, you could change the recovery model permanently; SIMPLE is enough for DW.
(Management Studio/Database properties/Options)
 
Scott PletcherSenior DBACommented:
The log will grow the same amount regardless of the recovery model.

SIMPLE just allows the resulting log space to be re-used without backing up the log first.
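A quick way to see this for yourself is to check log usage before and after a load:

```sql
-- Reports the log file size and percentage in use for every database
DBCC SQLPERF(LOGSPACE);
```

Under SIMPLE, the "in use" percentage should drop back on its own after the load commits; under FULL it keeps climbing until the log is backed up.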
 
liijaCommented:
Scott is right, here we are talking about one table - so recovery mode might not help at all in this case.

Just to be sure, even though you are talking about truncating the table, you are of course using TRUNCATE table instead of DELETE?
DELETE creates logs, TRUNCATE doesn't.
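The two forms being compared, for a placeholder table name (as Scott notes in this thread, TRUNCATE is not log-free, but it logs far less):

```sql
-- Fully logged, row by row; log volume grows with the row count
DELETE FROM dbo.BigReportTable;

-- Logs page deallocations rather than individual rows; much less log
TRUNCATE TABLE dbo.BigReportTable;
```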
 
Scott PletcherSenior DBACommented:
TRUNCATE does log: it logs at the page level instead of the row level, so it's less overhead.
 
Scott PletcherSenior DBACommented:
Only the tempdb database inherently has reduced logging compared to other dbs, since SQL "knows" it will never have to recover tempdb later, such as when SQL restarts (tempdb is always rebuilt from scratch when SQL starts).
 
liijaCommented:
TRUNCATE doesn't log data. It logs page deallocations. That means less log.
 
jbaird123Author Commented:
All:

Thanks for the feedback.  

Based on the information above, it seems like the best option is to move the table to a different database altogether and set the recovery model to simple.  I don't want to use tempdb, because I don't want to have to rebuild the table whenever the server restarts.

Is there any better solution?

Thanks.
 
liijaCommented:
Is your ETL process just copying the data into SQL Server, or is there some other logic? Is it doing any updates to the table afterwards that could be moved into the ETL process itself, or into the source query?

If not, it would be interesting to know what Talend does in the background. You could use SQL Server Profiler to find out what is really going on while the data is written into the SQL Server table. If you are just copying data from one table to another, though, it might be that you cannot do anything about Talend's internal logic.
 
jonnidipCommented:
If your need is to completely reload the table and TRUNCATE TABLE logs too much for you (though I would say it doesn't), you can consider dropping and re-creating the table itself.
Just script it in SSMS by right-clicking the table, "Script Table as", "DROP And CREATE To".
And remember to include the non-PK indexes.

This is the most brutal way to reload a table, but I think it is the least-logging option (even if, I repeat, I would use TRUNCATE TABLE).

Regards.
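A minimal version of the drop-and-recreate approach might look like this (the columns and index are invented for illustration; a version scripted from SSMS would carry the real definitions):

```sql
-- SQL Server 2005-compatible existence check
IF OBJECT_ID('dbo.BigReportTable', 'U') IS NOT NULL
    DROP TABLE dbo.BigReportTable;

CREATE TABLE dbo.BigReportTable (
    Id       INT           NOT NULL,
    Metric   DECIMAL(18,2) NULL,
    LoadDate DATETIME      NOT NULL
);

-- Don't forget the non-PK indexes mentioned above:
CREATE NONCLUSTERED INDEX IX_BigReportTable_LoadDate
    ON dbo.BigReportTable (LoadDate);
```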
 
liijaCommented:
True - and the indexes, of course. If you are rebuilding indexes after the load, that might also cause log growth.

Dropping the indexes before the load and recreating them after the load is finished may generate less log. In many cases the load process is also faster, even taking into account the time that recreating the indexes takes.
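The drop-before/recreate-after pattern liija describes, sketched with an invented index name:

```sql
-- Before the load: drop secondary indexes so rows insert into a bare table
DROP INDEX IX_BigReportTable_LoadDate ON dbo.BigReportTable;

-- ... truncate and reload the table here ...

-- After the load: rebuild the index in a single bulk operation
CREATE NONCLUSTERED INDEX IX_BigReportTable_LoadDate
    ON dbo.BigReportTable (LoadDate);
```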
 
jonnidipCommented:
Yes, in that case I would script the index re-creation just after the table creation.
Creating indexes on an empty table should log only a small amount of data...
Obviously the load process will be slowed down by the index updates, but that is a disadvantage that is usually worth the cost.

Regards.
 
jbaird123Author Commented:
Thank you.  I will put the table in another database with a simple recovery model.
