

Using SQL Server 2012 for a database with a lot of inserts, methods to keep the log file size small

Posted on 2016-09-15
Medium Priority
Last Modified: 2016-10-19
Hello experts,

We have a SQL Server 2012 instance, and since 2015 we have been running a support system that keeps growing. It collects information from a lot of computers, and those computers keep sending.

This results in a lot of INSERTs into the database: about 1.5 GB of data per day, 62,363 submissions per day, processed in about 250 ms per submission, each containing about 30-60 rows that have to be inserted into a table.

As you might imagine, this has quite an impact on our transaction log file, which has now grown to 17 GB.
Our recovery model is set to "simple", which is not "perfect", but it keeps the size of the log file down.

We know that every backup truncates the transaction log, which does not free up disk space but does free space inside the log file. So we believe we have a chance to reduce the necessary transaction log size by executing the backup more often than only every 24 hours.

Does anyone know what else we can do to keep the transaction log file small?
Do you have any Pluralsight course, Udemy course, book, or web recommendation that could help?
Are we maybe doing something wrong here? Why should 1.5 GB of data transfer cause 17 GB of transaction log file?
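
For reference, this is how the current log size and usage can be inspected (a minimal sketch; `sys.dm_db_log_space_usage` is available from SQL Server 2012 on, and must be run in the context of the database in question):

```sql
-- Size and percent-in-use of every database's transaction log
DBCC SQLPERF(LOGSPACE);

-- The same figures for the current database only
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_bytes / 1048576.0 AS used_log_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```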

Thank you very much for any idea you might have!
Question by:Stefan Hoffmann
LVL 52

Assisted Solution

by:Vitor Montalvão
Vitor Montalvão earned 400 total points (awarded by participants)
ID: 41799367
Our recovery model is set to "simple"
We know that every backup resets the transaction log, which is not equal to freeing up disk space but space in the log file.
That's only true for the Bulk-Logged and Full recovery models, because those are the ones where you can perform a transaction log backup. Under Simple, the transaction log is truncated automatically every time a CHECKPOINT occurs.

Are we doing maybe something wrong here, because why should 1.5 GB of data transfer cause 17 gigs of transaction log file?
How big is the database? Are you performing regular reindexes on the database?
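
A quick way to see what is holding the log back is to check `log_reuse_wait_desc`, which names the reason the log cannot be truncated (e.g. NOTHING, ACTIVE_TRANSACTION, CHECKPOINT). The database name below is a placeholder:

```sql
-- Why is the log not being truncated?
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';  -- hypothetical name, replace with yours
```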
LVL 35

Assisted Solution

ste5an earned 400 total points (awarded by participants)
ID: 41799435
So we believe we have a chance to reduce the necessary transaction log size by executing the backup more often than only every 24 hours.

This belief is wrong.

The size of your transaction log reflects the log space required by the actual load. If you shrink it, the next loading process will simply request that space from the OS again, and the repeated growth can be a serious performance bottleneck.

because why should 1.5 GB of data transfer cause 17 gigs of transaction log file?
This depends on how the input is processed.
LVL 16

Assisted Solution

by:Megan Brooks
Megan Brooks earned 800 total points (awarded by participants)
ID: 41799782
When the simple recovery model is selected for a database, SQL Server does indeed truncate the log as it is able. There is no need to dump it periodically -- that is why the model is called "simple." Your application appears to be similar to one that I have been supporting in recent years, and we used simple recovery and did not have this issue. So there would seem to be something "interesting" happening behind the scenes.

The first thing that comes to mind is that there might be transactions that are being held open for extended periods of time. Log space for an operation can only be released after it commits. It's hard to know where to start looking because I haven't seen your application and how it is structured. Does it do massive inserts within transactions? This can result either from a single huge insert, which runs in an implicit transaction, or from a series of possibly smaller inserts running inside a single BEGIN TRAN / COMMIT TRAN block. And COMMIT TRAN doesn't actually commit if it is nested, so some outer transaction block may be holding the transaction open even when there appear to be frequent COMMITs.
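
Long-held transactions like those described above can be spotted from the transaction DMVs (a sketch; run in the context of the affected database):

```sql
-- Oldest active transaction in the current database, if any
DBCC OPENTRAN;

-- List active transactions with their start times, oldest first
SELECT st.session_id, at.transaction_id, at.name, at.transaction_begin_time
FROM sys.dm_tran_active_transactions AS at
JOIN sys.dm_tran_session_transactions AS st
  ON st.transaction_id = at.transaction_id
ORDER BY at.transaction_begin_time;
```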

You should also, before you spend too much time looking at your application, make sure that the log growth occurs as a result of your regular transactional processing, and not due to some sort of periodic maintenance operation that is carried out inside a transaction. If 3rd-party software and support is involved, anything is possible. You can review auto-growth events to see when the growth is happening.
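
The auto-growth events mentioned above can be pulled from the default trace, which is enabled out of the box on most installations (event classes 92 and 93 are data-file and log-file auto-grow):

```sql
-- Read recent auto-growth events from the default trace
DECLARE @trace nvarchar(260);
SELECT @trace = path FROM sys.traces WHERE is_default = 1;

SELECT te.name AS event_name, tg.DatabaseName, tg.FileName,
       tg.StartTime, tg.Duration
FROM sys.fn_trace_gettable(@trace, DEFAULT) AS tg
JOIN sys.trace_events AS te ON tg.EventClass = te.trace_event_id
WHERE tg.EventClass IN (92, 93)   -- Data/Log File Auto Grow
ORDER BY tg.StartTime DESC;
```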

Does any of this so far sound like it might be a reason for the size of your log?
LVL 27

Assisted Solution

Zberteoc earned 400 total points (awarded by participants)
ID: 41799906
If your recovery model is set to Simple, the frequency of FULL database backups has no effect whatsoever on the log file. The log file is truncated only once the data in it has been committed and made permanent; until then, while a transaction is open, the space remains in use. In Simple recovery mode you can NOT take transaction log backups.

On the other hand, if your inserts are this big and this frequent, keeping the database in Simple recovery is not recommended for exactly that reason: the SQL engine truncates the log file from those big sizes, only to ask the OS for that space again when the next data is inserted, which results in performance issues and resource pressure.

I would recommend changing the database to FULL recovery mode, backing up the log file every 10 or 15 minutes (or even more frequently), and not worrying about the log size. It will grow to the maximum the transactions need, but that space is released (not shrunk) after every log backup and then reused. This way the server does not waste time truncating and re-growing the log file. At the same time, make sure the inserts are not executed as overly large transactions and that the data is committed after every batch.
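
Sketched as T-SQL, with hypothetical database name and backup paths (the full backup after the switch starts the log chain; the log backup would then be scheduled via SQL Agent):

```sql
-- Switch to FULL recovery; a full backup afterwards starts the log chain
ALTER DATABASE YourDatabase SET RECOVERY FULL;
BACKUP DATABASE YourDatabase TO DISK = N'D:\Backup\YourDatabase.bak';

-- Then back up the log frequently (e.g. every 10-15 minutes)
BACKUP LOG YourDatabase TO DISK = N'D:\Backup\YourDatabase_log.trn';
```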
LVL 16

Accepted Solution

Megan Brooks earned 800 total points (awarded by participants)
ID: 41800014
If the cause for the large log files can be traced to a vendor application, you may just need to live with it (assuming that they reach a certain huge size and stop growing). The same applies if it is due to an in-house application that is considered too costly to fix. But if it is something that can readily be fixed, or is going to lead to more problems than just disk space, figure out what it is and fix it.

By the way, with either simple or full recovery, open transactions will prevent the log from being truncated. With full recovery you control the timing through scheduled log backups (barring open transactions), while with simple you don't control when truncation happens. Personally, I haven't had a database where the timing of truncation mattered either way.

Simple recovery has a number of uses, one of which is to eliminate maintenance overhead for databases that capture data that has already been archived in some other way before entering the database, especially when it is streaming in at high volume, as in this system. If there is a fast, straightforward way to replay the incoming data from the archive, then full recovery may not provide a significant benefit.

Author Comment

by:Stefan Hoffmann
ID: 41801332
Thanks for your comments so far.
You have mentioned a lot of things that I now need to research.
It will take a while, at least a few days.
LVL 52

Expert Comment

by:Vitor Montalvão
ID: 41823318
Stefan, any news about this issue?
LVL 52

Expert Comment

by:Vitor Montalvão
ID: 41849709
Recommending closing this question by accepting the above comments.
